Extracting editorials #2

As I explained in the first of this series, I’m documenting my efforts to extract every editorial published in the Sydney Morning Herald in 1913 from the Trove newspaper database. It’s an experiment both in text mining and historical writing — an attempt to put the method up front.

While I didn’t think there was anything very thrilling in the first instalment, recording my thoughts and assumptions in this way has already proved useful. In a comment, Owen Stephens noted that his attempt to reproduce my search query produced fewer results. After a little bit of poking around I realised that the fulltext modifier, which I often use to switch off fuzzy matching, counteracts the ‘search headings only’ flag. So my query was returning results that had the string ‘The Sydney Morning Herald’ anywhere in the article.

Try it for yourself.

Here’s my original query — searching for fulltext:”The Sydney Morning Herald” in headings only (supposedly). You’ll notice that it returns 335 results and it’s clear from a quick scan that a number are false positives (they don’t follow the pattern for editorials).

Here’s Owen’s query — searching for “The Sydney Morning Herald” in headings only. It returns 294 results, without any obvious false positives.

So my attempt to disable fuzzy matching actually produced a less accurate result! Weird.

Actually, I think one important benefit of this sort of text mining is that it helps you understand how the search engines you’re using actually work. Once you start poking and prodding, the idiosyncrasies start to emerge.

Anyway, I harvested Owen’s cleaner result set and opened up the resulting csv file. As it had seemed in Trove, there were very few false positives. Indeed there were only two articles that didn’t follow the standard editorial format, and these were notes added to the editorial page. On the other hand, about 20 editorials were obviously missing. I could have worked through the csv file manually to identify the missing dates, but I thought I’d try to create some tools that would do the work for me.

What I wanted was the details of the first editorial in every edition of the newspaper in 1913 — so there should be one, and only one, article for each day on which the newspaper was published. I needed a tool that would analyse the csv file and do two things:

  • identify dates that occur multiple times (false positive alert!)
  • identify dates that are absent from the result set (missing in action!)

The resulting code is all on GitHub if you want to follow along. I wrote a Python script that opens up the csv file, extracts all the date strings, converts them to datetime objects and saves them to a list. Once that’s done it’s pretty easy to loop through and find duplicates:

def find_duplicates(values):
    '''
    Check a list for duplicate values.
    Returns a list of the duplicates.
    '''
    seen = set()
    duplicates = []
    for item in values:
        if item in seen:
            duplicates.append(item)
        seen.add(item)
    return duplicates
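
Just to fill in the earlier step, the date extraction looks something like the sketch below. It’s not the exact code from the script, and the column name (‘date’) and date format are assumptions on my part, so check them against your own harvested csv.

import csv
import datetime

def get_article_dates(csv_file):
    '''
    Read the harvested csv and return the article dates
    as a list of datetime.date objects.
    '''
    article_dates = []
    with open(csv_file, 'rb') as csv_data:
        reader = csv.DictReader(csv_data)
        for row in reader:
            # Assumes a 'date' column holding strings like '1913-01-02'.
            article_date = datetime.datetime.strptime(row['date'], '%Y-%m-%d').date()
            article_dates.append(article_date)
    return article_dates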

Finding missing dates was a little more complicated, but Google came to the rescue with some handy code samples. All I had to do was set a start and end date (in this case 1 January 1913 and 31 December 1913) and create a timedelta object equal to a day. Then it’s just a matter of adding the timedelta to the start date, comparing the new date to the dates extracted from the csv file, and continuing on until you hit the end. If the new date isn’t in the csv file, then it gets added to the missing list.

# article_dates, missing_dates and exclude (weekdays to skip)
# are set up earlier in the script.
if year:
    start_date = datetime.date(year, 1, 1)
    end_date = datetime.date(year, 12, 31)
else:
    start_date = article_dates[0]
    end_date = article_dates[-1]
one_day = datetime.timedelta(days=1)
this_day = start_date
# Loop through each day in the specified period to see if there's an article.
# If not, add the date to the missing_dates list.
while this_day <= end_date:
    if this_day.weekday() not in exclude:  # exclude Sundays
        if this_day not in article_dates:
            missing_dates.append(this_day)
    this_day += one_day

I’ve tried to make the code as reusable as possible, so you can either supply a year, or the script will read start and end dates from the csv file itself.

All this left me with two more lists of dates: ‘duplicates’ and ‘missing’. At first I just wrote these out to a text file, but then I decided it would be more useful to write the results to an html page. That way I could add links that would take me to the actual issue within Trove, helping me to quickly find each missing editorial.
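
In rough terms, the html-writing step looks like the sketch below (the function name and layout are mine, not the exact code from the script). The issue links come from a get_issue_url() function that I’ll get to in a moment.

def write_results_page(duplicates, missing_dates, title_id, filename='results.html'):
    '''
    Write the duplicate and missing dates out as a simple html page,
    with each missing date linked to its issue in Trove.
    '''
    with open(filename, 'w') as html_file:
        html_file.write('<html><body>\n<h2>Duplicate dates</h2>\n<ul>\n')
        for article_date in duplicates:
            html_file.write('<li>%s</li>\n' % article_date.isoformat())
        html_file.write('</ul>\n<h2>Missing dates</h2>\n<ul>\n')
        for article_date in missing_dates:
            issue_url = get_issue_url(article_date, title_id)
            html_file.write('<li><a href="%s">%s</a></li>\n' % (issue_url, article_date.isoformat()))
        html_file.write('</ul>\n</body></html>')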

Unfortunately there’s no direct way to go from a date to an issue — you first need to find the issue identifier. How do you do this? If you dig around in the code beneath the page for each newspaper title, you’ll find that the ajax interface pulls in a json file with issue information. You can access this through a url like: http://trove.nla.gov.au/ndp/del/titlesOverDates/[year]/[month]. Here’s an example for January 1913.

The json includes all issues for all titles in the specified month. So you then have to loop through to find a specific title and day. Once you have the issue identifier you can just attach it to a url:

import json
import urllib2

def get_issue_url(date, title_id):
    '''
    Gets the issue url given a title and date.
    '''
    year, month, day = date.timetuple()[:3]
    url = 'http://trove.nla.gov.au/ndp/del/titlesOverDates/%s/%02d' % (year, month)
    issues = json.load(urllib2.urlopen(url))
    for issue in issues:
        # 't' is the title identifier, 'p' the day of the month, 'iss' the issue id.
        if issue['t'] == title_id and int(issue['p']) == day:
            issue_id = issue['iss']
            break
    return 'http://trove.nla.gov.au/ndp/del/issue/%s' % issue_id

[Screenshot: My results file with links to Trove]
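
To use it, you just pass in a date and the newspaper’s numeric title identifier. Something like the line below (I’m using 35 for the Herald, but treat that as an assumption and check it against the title’s page in Trove; depending on the json it may need to be passed as a string to match issue['t']):

# Assumes the imports and get_issue_url() function above.
print get_issue_url(datetime.date(1913, 7, 14), '35')
# http://trove.nla.gov.au/ndp/del/issue/[issue id]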

Finally, to save myself having to cut and paste the missing dates back into the csv file, I added a few lines to write them in automatically.
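
Those few lines amount to something like this sketch (the column layout is an assumption; the real csv has more fields):

def add_missing_to_csv(csv_file, missing_dates):
    '''
    Append the missing dates to the harvested csv so the editorial
    and page urls can be filled in by hand.
    '''
    with open(csv_file, 'ab') as csv_data:
        writer = csv.writer(csv_data)
        for article_date in missing_dates:
            # Leave the other columns blank to be filled in manually.
            writer.writerow([article_date.isoformat(), '', ''])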

So now I have a handy little html page, complete with dates and links, that I’m working through to find all the missing editorials. All I need for the next stage are the urls for each editorial and the page on which it’s published. I’m just cutting and pasting these from the citation box in Trove into the csv file. Once this is done I can start trying to find all the editorials.

PS: I noted in my first post that one benefit in finding the editorials was that the main news articles usually appeared on the page after the editorials. I’ve been thinking some more about ways to identify ‘major’ news stories. Word length perhaps? But not always. Hmmm, but major stories do seem to be published at the top of the page. After a bit more poking around in the code I found that there’s a ‘y value’ assigned to each article that indicates its position on the page. So if I harvest all the articles on the page after the editorials and then rank them by their y values? Interesting…
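
If that idea pans out, the ranking itself should be trivial. Something like this sketch, assuming each harvested article comes with a 'y' value scraped from the page markup:

def rank_articles_by_position(articles):
    '''
    Rank the articles on a page by their vertical position,
    topmost first.
    '''
    return sorted(articles, key=lambda article: int(article['y']))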
