2012 — The Making

I obviously did a lot of talking in 2012, but I also made a few things…

The evolution of QueryPic


Try QueryPic

At the start of 2012 QueryPic was a fairly messy Python script that scraped data from the Trove newspaper database and generated a local html file. It worked well enough and was generously reviewed in the Journal of Digital Humanities. But QueryPic’s ability to generate a quick visualisation of a newspaper search was undermined by the work necessary to get the script running in the first place. I wanted it to be easy and accessible for everyone.

Fortunately the folks at the National Library of Australia had already started work on an API. Once it became available for beta testing, I started rebuilding QueryPic — replacing the Python and screen-scraping with Javascript and JSON.
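The shape of the new approach is simple: ask the API for yearly totals of matching articles, then plot them. Here's a rough Python sketch of that pattern. The endpoint and parameter names are assumptions from memory, not a faithful record of the Trove API, and the sample response fragment is hand-made:

```python
from urllib.parse import urlencode

# Trove API v1 search endpoint (an assumption, check the API docs)
API_BASE = "http://api.trove.nla.gov.au/result"

def build_query(term, api_key):
    """Build a newspaper search URL asking for facet totals only (n=0)."""
    params = {
        "key": api_key,
        "zone": "newspaper",
        "q": term,
        "facet": "year",
        "n": 0,
        "encoding": "json",
    }
    return API_BASE + "?" + urlencode(params)

def facet_to_points(facet_terms):
    """Turn a list of facet terms from the JSON response into sorted (year, count) pairs."""
    return sorted((int(t["display"]), int(t["count"])) for t in facet_terms)

# A hand-made fragment standing in for the real JSON response:
sample = [
    {"display": "1914", "count": "120"},
    {"display": "1913", "count": "85"},
]
print(facet_to_points(sample))  # [(1913, 85), (1914, 120)]
```

Feed the resulting points to any charting library and you have the bones of a QueryPic-style graph.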

In the meantime, I headed over to New Zealand for a digital history workshop and began to wonder about building a NZ version of QueryPic based on the content of Papers Past, available through the DigitalNZ API. The work I’d already done with the Trove API made this remarkably easy and QueryPic NZ was born.

Once the Trove API was publicly released I finished off the new version of QueryPic. Instead of a Python script that had to be downloaded and run from the command line, QueryPic was now a simple web form that generated visualisations on demand.

The new version also included a ‘shareable’ link, but all this really did was regenerate the query. There was no way of citing a visualisation as it existed at a certain point in time. If QueryPic was going to be of scholarly use, it needed to be properly citable. I also wanted to make it possible to visualise more complex queries.

And so the next step in QueryPic’s evolution was to hook the web form to a backend database that would store queries and make them available through persistent urls. With the addition of various other bells and whistles, QueryPic became a fully-fledged web application — a place for people to play, to share and to explore.
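The url scheme itself is no great mystery. A common way to mint short persistent urls from a backend database is to encode each record’s integer id in base 36. Here’s a minimal Python sketch of that approach — an illustration of the general trick, not necessarily QueryPic’s actual code:

```python
import string

ALPHABET = string.digits + string.ascii_lowercase  # base 36: 0-9 then a-z

def encode_id(n):
    """Turn a database primary key into a short, url-safe code."""
    if n == 0:
        return ALPHABET[0]
    chars = []
    while n:
        n, rem = divmod(n, 36)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def decode_id(code):
    """Reverse the encoding to recover the primary key."""
    n = 0
    for ch in code:
        n = n * 36 + ALPHABET.index(ch)
    return n

print(encode_id(12345))             # '9ix'
print(decode_id(encode_id(12345)))  # 12345
```

As long as the database rows survive, so do the urls — which is exactly why persistence is a commitment rather than a technical problem.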

Headlines and history

Explore The Front Page


Back in 2011 I started examining ways of finding and extracting editorials from digitised newspapers.  Because the location of editorials is often tied up with the main news stories, this started me thinking about when the news moved to the front page. And of course this meant that I ended up downloading the metadata for four million newspaper articles and building a public web application — The Front Page — to explore the results. ;-)

The Front Page was also the first resource published on my new dhistory site (since joined by the Archives Viewer and QueryPic). dhistory — ‘your digital history workbench’ — is where I hope to collect tools and resources that have graduated from WraggeLabs.

Viewing archives

Try Archives Viewer


In 2012 I also revisited some older projects. After much hair-pulling and head-scratching, I finally managed to get the Zotero translator for the National Archives of Australia’s RecordSearch database working nicely again. I also updated it to work with the latest versions of Zotero, including the new bookmarklet.

My various userscripts for RecordSearch also needed some maintenance. This prompted me to reconsider my hacked together alternative interface for viewing digitised files in RecordSearch. While the userscript worked pretty well, there were limits to what I could do. The alternative was to build a separate web interface… and so the Archives Viewer was born.

Stories and data

Expect bugs ye who enter here...


 

In the ‘work-in-progress’ category is the demo I put together for my NDF2012 talk, Small stories in a big data world. Expect to see more of this…

My favourite things

Two things I made in 2012 are rather special (to me at least). Instead of responding to particular needs or frustrations, these projects emerged from late night flashes of inspiration — ‘what if…?’ moments. They’re not particularly useful, but both have encouraged me to think about what I do in different ways.

Play!


The Future of the Past is a way of exploring a set of newspaper articles from Trove. I’ve told the story of its creation elsewhere — I simply fell in love with the evocative combinations of words that were being generated by text analysis and wanted to share them. It’s playful, surprising and frustrating. And you can make your own tweetable fridge poetry!


The People Inside

One night I was thinking about The Real Face of White Australia and the work I’d done extracting photos of people from the records of the National Archives of Australia’s database. I wondered what would happen if we went the other way — if we put the people back into RecordSearch. The result was The People Inside – an experiment in rethinking archival interfaces.

 

2012 — The Talking

In an attempt to try and figure out where this year went I’ve pulled together a list of my talks, presentations and workshops for 2012…

7 January 2012 — ‘Invisible Australians: Living under the White Australia Policy’, contribution to the Crowdsourcing History: Collaborative Online Transcription and Archives panel, American Historical Association annual conference, Chicago. [slides]

8 January 2012 — ‘Making friends with text mining’, contribution to the A Conversation about Text Mining as a Research Method panel, American Historical Association annual conference, Chicago.

10 January 2012 — ‘Collections, interfaces, power and people’, McGill University.

12 January 2012 — ‘Collections, interfaces, power and people’, University of Western Ontario.

7 February 2012 — Mining the treasures of Trove: new approaches and new tools, VALA2012.

23 March 2012 — ‘Mining Trove’, Digital History Workshop, Victoria University of Wellington.

29 March 2012 — ‘Inside the bureaucracy of White Australia’, Digital Humanities 2012, Canberra. [slides]

8 May 2012 — Mining for meanings, Harold White Fellowship Lecture, National Library of Australia, Canberra.

27 June 2012 — ‘Beyond the front page’, combined meeting of the Canberra Society of Editors and the Australian and New Zealand Society of Indexers, Canberra. [slides]

19 July 2012 — ‘The responsibilities of data’, Framing Lives: The 8th Biennial Conference of the International Auto/Biography Association, Canberra. [slides]

11 August 2012 — Doing Our Bit Build-a-thon, Mosman Library.

12 October 2012 — Digital disruptions: Finding new ways to break things, Faculty of Arts eResearch Forum, University of Melbourne.

19 October 2012 — Too important not to try, Dipping a toe into Digital Humanities, Deakin University.

25 October 2012 — Digital disruptions: Finding new ways to break things, Australian National University.

1 November 2012 — Digital disruptions: Finding new ways to break things, Digital Humanities Symposium, University of Queensland.

13-15 November 2012 — Digital dimensions: A hands-on workshop for the DH curious, University of Queensland.

20 November 2012 — Small stories in a big data world, National Digital Forum, New Zealand.

22 November 2012 — Learning how to break things, workshop at THATCamp Wellington. [outline]

29 November 2012 — Archives of emotion, Reinventing Archival Methods workshop, Sydney.

12 December 2012 — ‘Introducing Digital Humanities’, State Library of New South Wales.

Archives of emotion

Presented at the Reinventing Archival Methods workshop, 29 November 2012, in Sydney.

One weekend, a bit over a year ago, I built this — a wall of faces of people forced to live within the restrictions of the White Australia Policy, drawn from records held by the National Archives of Australia. It created a lot of interest, both here and overseas, particularly after I talked about it at the 2011 National Digital Forum in New Zealand.

My original post was republished in South Africa, and my NDF talk made it into the inaugural edition of the Journal of Digital Humanities. The wall is being studied as part of a digital history course in the US, and was cited by two papers at the Museums and the Web conference this year. It’s also been referenced in discussions on visualisation, serendipity and race.

But perhaps most important was the email we received in which the sender described scrolling through the wall with tears rolling down their face.

It’s also important to note that the project of which the wall forms part — Invisible Australians — is completely unfunded and has no institutional home. It’s a project driven by passion. It’s a project born out of the sense of obligation and responsibility that my partner, Kate Bagnall, and I feel towards the people whose lives are documented in the archives.

Last week I was at NDF 2012, where Courtney Johnston called on us to consider the emotional landscapes in and around our collections. It started me wondering: what is the role of emotion in the archives?

There is clearly no neutral position. In Archival Methods David Bearman rightly criticises the idea that the value of archivists lies in their political disengagement — as faithful guardians of the accumulated past. And of course archival writers like Verne Harris and Terry Cook have developed this critique in some detail.

Bearman suggests that archives can instead be seen as ‘marshaling centers’ that enable people not to observe some distant past, but to mobilise the past within their own lives — to find connections and meanings.

Recently I was talking to an academic researching the role of historical thinking in education. He argued that an emotional connection had to come first. Only then could rational arguments take root — only then could opinions, ideas and lives be changed.

And yet emotion still seems like something best avoided in public. We try not to ‘inflame’ it, we rarely seek to nurture it. Exposing the rawness of emotion is often seen as cheap or manipulative. And yet it happens, always, in and around our cultural collections.

What user or worker in archives has not been moved? By the voices and stories contained within the records, by the sheer excitement of discovery, or perhaps by the overwhelming burden of responsibility. If, as Bearman argues, ‘the pasts we construct are all discussions with the present’, then these discussions are infused with joy and anger, with fear and longing, with sadness and gratitude.

Why are we so reluctant to acknowledge that archives are repositories of feeling? Is emotion meaningless because it can’t be quantified, dangerous because it can’t be controlled, or does it simply not fit with the professional discourse of evidence, authority and reliability?

As our experience of archives moves further into the online realm, the possibilities for making emotional connections increase — simply because it’s so much easier to share. From the like button or the retweet, through to a lovingly-tended personal collection in something like Pinterest — we have new opportunities to explore what’s important to us and why.

This is happening now. Voices from the past are finding their way into online conversations. But what voices and whose conversations? Even as we welcome this sort of engagement, we have to remember what is not online, what is not accessible, and all the social, technical and political barriers that can prevent someone from joining the discussion.1

It worries me too that our emotional connections may be too small, too fragile to survive in the world of big data. We live in an age where our online preferences are monitored and our sentiments analysed — our feelings are harvested and tallied in order to sell us more stuff. The line between expression and consumption is increasingly blurred.

Back in the pre-web era, Bearman imagined access to archives through ‘intelligent artifices’ that would bridge databases and connect vocabularies — responding to, and learning from the activities of users. Twenty-five years later we’re exploring these possibilities at a global scale, through Linked Open Data.

While Linked Open Data is often described like a giant plumbing project, it’s really about making a whole lot of very small connections. To me it offers an opportunity to fight back against the homogenisation of data. We can use it to express complex relationships with the past. But we need to know how, and we need to find the points at which we can plug ourselves in.

Perhaps these are Bearman’s ‘marshaling centers’, short-circuiting our online connections to jack us into the past. Not a fixed or nostalgic past, but a challenging and contested past, both real and yet unknowable. As feeling becomes commodified and neutered through a variety of online filters, perhaps archives can hack us directly into powerful conduits of meaning and emotion.

How might this happen? There’s the technical stuff — persistent identifiers, blah, blah, blah — vitally important of course. But then there’s the relationship stuff. We have to stop talking about users and start talking about collaborators. We need to stop building services to be consumed, and start opening opportunities to create, to play, to break and to hack. We are all making connections.

Most importantly we need to find and support the people, both inside and outside our organisations, who are driven by passion. The people who care. The people who simply give a shit.2

  1. See, for example, Tim Hitchcock’s 5 minute rant []
  2. ‘Give a shit’ from Alex Madrigal via Courtney Johnston’s opening remarks for NDF 2011 []

Small stories in a big data world

Presented at the National Digital Forum, Wellington, 20 November 2012. You can also watch the video.

Previously at NDF:

As we return to the action, Tim is wondering what happens when we bring stories and data together…

As historians, as cultural heritage professionals, as people — we make connections, we make meanings. That’s just what we do.

What really excites me about Linked Open Data is not the promise of smarter searches, but the possibilities for making connections and meanings in ways that are easier to traverse — to explore, to wander, to linger, or even to stumble.

What really frustrates me about Linked Open Data is that we still tend to talk about it as if it’s all engineering — an international plumbing project to pump data around the globe. Linked Open Data doesn’t have to be an industrial undertaking, it can be a craft, a mode of expression. It can be created with love or in anger.

And anyone can do it.

I’m currently working on a project with the Mosman Library in Sydney to collect information about the World War I experiences of local service people. The web resource we’re building will provide Linked Data all the way down. Every time someone adds a story about a person, uploads a photograph, identifies a place, or includes a link to another resource, they will be minting identifiers, creating relationships, documenting properties — sharing their knowledge as Linked Open Data.

It seems to me that Linked Open Data will be a success not when we’ve standardised on a few vocabularies, or linked everything we possibly can to DBpedia, but when we have thriving online communities creating and sharing structured data about the things that are important to them. Not just the known and notable, but the local, the contested, the endangered, the ephemeral and the oppressed.

Many of us live within a Western tradition which equates knowledge with accumulation. Linked Open Data promises new means of aggregation, new powers of discovery — lots and lots more stuff! But it would be a tragedy if all we ended up with was a bigger database or a better search engine. I want more. I want new ways of using that data, of playing with structures and scales. I want to build rich contexts around my stories.

Last year I talked about this in a keynote I gave to the Australian and New Zealand Society of Indexers. To try and demonstrate some of the possibilities, I created a fancy presentation and added a whole lot of linked data to the text of my talk. But it was a bit of a cheat. The text, the triples and the presentation were still pretty much separate. What I really wanted to do was use the linked data to generate alternative views of the text, to take my story and look at it through a variety of linked data powered filters.

So for NDF this year I thought I’d have another go. I set myself a few ground rules:

  • Simple tools — should be possible for anyone with a text editor.
  • No platforms — no sneaky server-side stuff, it all had to happen in the browser, on the fly.
  • No markup madness — I wanted there to be a close relationship between the text and the data, but I wanted the markup process to be practical — something like creating a footnote.

So I hacked together a whole lot of existing Javascript libraries. I used them to extract all the triples from my text and follow external identifiers to get extra information. Then I queried the little databank I’d made to generate four different views of my talk…
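The extraction step itself isn’t magic. This sketch uses Python rather than the Javascript libraries I actually strung together, and it’s a toy — a real RDFa parser handles prefixes, nesting and literals properly — but it shows the basic idea of pulling triples out of marked-up text. The markup and vocabulary terms below are invented for illustration:

```python
from html.parser import HTMLParser

class TripleExtractor(HTMLParser):
    """Toy extractor: collects (subject, property, value) triples from
    RDFa-style 'about', 'property' and 'content' attributes."""
    def __init__(self):
        super().__init__()
        self.subject = None
        self.triples = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a:
            # A new subject: triples below this point describe it
            self.subject = a["about"]
        if "property" in a and "content" in a and self.subject:
            self.triples.append((self.subject, a["property"], a["content"]))

html = ('<p about="http://example.org/person/1">'
        '<span property="foaf:name" content="Jane Smith"></span></p>')
extractor = TripleExtractor()
extractor.feed(html)
print(extractor.triples)
# [('http://example.org/person/1', 'foaf:name', 'Jane Smith')]
```

Once the triples are sitting in a little in-browser (or in-memory) databank, each ‘view’ of the talk is just a different query over them.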

WARNING WARNING! Very early demo! Expect bugs and general stupidity!

Now, none of this looks terribly exciting. Visually the various components look pretty familiar — and that’s part of the point, I’m showing how you can re-use existing tools and code libraries.

What’s interesting, I think, is the dialogue that’s evolving between text and data — a dialogue that’s taking place within one, just one, html document.

Expect bugs ye who enter here…

So here’s the text of my talk to the indexers last year. As you scroll through the document, each paragraph on the screen is examined and information about related entities — people, places, events, objects — is displayed in a sidebar. The text and the sidebar are linked, so if you click on a link in the text more information about the related entity opens in the sidebar.

If you want to look at the resources separately you can. You can re-order, and filter by type.

Then there’s the fairly traditional timeline and map views.

Most of the data that’s being displayed is coming from RDFa within the document, but not all. There are links to GeoNames and DBpedia that are drawing in data on the fly. As more Linked Open Data becomes available these links can become deeper and richer.

It’s a very rough demo and I have a long to-do list — for example better links between the data views and the text, showing their context within the narrative. But hopefully you can get an idea of how it might be possible to build data-rich stories — with layers and views that enrich, inform and engage with the narrative.

And all just with one html page, a bit of RDFa and a few Javascript libraries.

There’s no magic.

You might be wondering about my ground rules — why did I constrain myself? Well, it has to do with this thing we call ‘access’. Oftentimes when we talk about access we mean the power to consume — the power for people to take what they’re given.

But to really have access, for something to be truly open, people also have to have the power to create. To take what they’re given and build something new — to challenge, to criticise, to offer alternatives.

That means allowing people the space to have ideas, giving them the confidence to experiment, providing useful tools and the knowledge to use them. That’s not a job for any particular institution, or sector, it’s a challenge for all of us who build things to strip away the magic and invite others to join in.

And I think it’s pretty important. I don’t really want to live in a world where data is just something that other people collect about and for us. I want slow data, as Chris described last year. I want us to enjoy the textures and tastes and not get addicted to the processed product. I want to create, enrich, wield and wonder.

So my vision of the future of Linked Open Data is not of the Giant Global Graph linking all knowledge, but of a revolutionary army of data-artisans, hand-crafting their richly contextualised stories into a glorious, messy, confusing, infuriating, WONDERFUL tapestry.

Now I know you’re all just waiting for me to press the BOOM! button.

So let’s blow some shit up!

Teaching by example?

There’s been plenty of discussion within the digital humanities community about the difficulty of getting academic recognition for digital projects. But what about being recognised for alternative forms of teaching? I don’t mean online courses, I mean the sort of peer-to-peer teaching that takes place through blogs, or Twitter, or the comments in our code. We all learn from each other.

I’ve been thinking about this while working on a few job applications recently. My opportunities for formal teaching or supervision have been limited, but over the last few years I’ve worked hard to introduce the digital humanities to a broad range of audiences. I’ve given talks to all sorts of professional and community groups, including librarians, museum curators, archivists and family historians. I’ve organised a couple of THATCamps. I’ve given papers at disciplinary conferences. I’ve blogged about my experiments and my frustrations. I’ve created a series of digital tools and made them available for all to use. Most recently I’ve been visiting universities giving talks and workshops to help staff and students make use of digital tools and resources in their own research. But I don’t ‘teach’ — or do I?

Most of this work is unpaid of course. I do it because I love it, and because I think it’s important. I do it because I want DH to live up to its promise of being open and engaging — I want others to share the excitement, the possibilities and the power. Sometimes it’s hard to know if it really makes any difference — usually I only hear anecdotally about the way my tools are used. But when I do receive feedback from people it’s often to say how I’ve ‘inspired’ them.

It seems to me that the ability to teach by example, to broaden horizons, and offer inspiration, is something that should find a place in a job application, but where? As I was pondering this the other night I fired off an idle tweet that brought a couple of encouraging responses:

So I’ve adopted @ProfessMoravec’s suggestion and created a Testimonials page. If I’ve managed to inspire or assist you in some way, feel free to leave a comment. Maybe next time I put together a job application I’ll have something to point to to demonstrate my ‘teaching’ credentials.

Too important not to try

On Friday 19 October I joined an enthusiastic group of digital humanities explorers at a Deakin University event entitled Dipping a Toe into the Digital Humanities and Creative Arts. @catspyjamasnz has assembled an excellent summary of the day in Storify.

In the morning I told the story of Invisible Australians. You can view the slides of Too important not to try and listen to my dodgy audio recording via SoundCloud.

In the afternoon I gave a whirlwind workshop which included a headline roulette smackdown and an introduction to the wonders of Zotero.

Digital disruptions: Finding new ways to break things

Recently I gave a presentation at the University of Melbourne’s Faculty of Arts eResearch Forum. The slides for my talk, ‘Digital Disruptions: Finding New Ways to Break Things’, are available online (thanks to reveal.js). I also managed to make a fairly basic recording — I’m intending to create a transcript, but for now you’re welcome to listen via SoundCloud.

Basically I was arguing that as well as making stuff, digital humanities can involve a lot of stretching, twisting, pushing and breaking stuff. The web is not fixed or static — there are many points at which we can intervene and change the way information is presented. What we need is the confidence to pull things apart, the ability to critically examine why things work the way they do (or don’t), and the imagination to see alternatives.

After my talk there were a number of interesting reports from people around the university. Brett Holman has provided a great summary on his Airminded blog, as well as doing his best to find me a job!

For you, with all best wishes…

Yep, there’s a new version of QueryPic.

About 18 months ago I created a little Python script to visualise search results in Trove’s collection of digitised newspapers. After a bit more tweaking, I christened it QueryPic. People started to use it. It was even reviewed in the Journal of Digital Humanities. With the release of the Trove API earlier this year I rewrote the whole thing in Javascript and let it loose on the web. People could make graphs without having to download any code or fire up the command line. Anyone could play.

And now?

The latest version lets you save your QueryPics. As new features go it’s not very revolutionary. But it meant another significant shift. From Python script, to web page, to web app. The Javascript-enabled interface now connects to a Django-powered backend. Save a graph and you can access it via a lovely, short, persistent url (like this). It’s as much a platform as a tool. But to be persistent, the urls need to work for ummm… a long time. Is this even possible for a project that has no funding and a support team of one?

I don’t know.

My enthusiasm for making tools is punctuated by regular bouts of doubt and disillusionment. With millions of dollars being spent on industrial-strength digital research infrastructure why should I devote my evenings to hand-crafting pretty little widgets like QueryPic?

My grandfather made this brass dish. He owned an engineering workshop and forge. My dad was a draftsman, engineer and builder. My mum made fine dresses in the fashion houses of Melbourne. I make things too. It’s what I do. It took me quite a few years to work this out. Years spent wondering why I felt out of place in academia. I’m also a historian, so I research and I write, but without some time to tinker, well… I’m just not happy. Making things is not separate — for me it’s all part of being a historian. I make things that let people connect to the past in different ways. And along the way I learn.

And by people I mean people. Just last week I took part in an online question and answer session organised by Inside History Magazine. It was a lot of fun. Amidst the questioning I unveiled the latest version of QueryPic. Considerable excitement ensued. QueryPic graphs are starting to be included in research publications, but anyone can make and understand them. Local and family historians are enthusiastic users of digital technologies and I’m excited to see them playing around with tools that I’ve made. I want to create things that other people use. Things that help them, and sometimes surprise them.

QueryPic has graduated from WraggeLabs to dhistory — my platform for digital history research. There it joins The Front Page and Archives Viewer. As usual, I have big plans. Are they practical? Probably not. Are they sustainable? I doubt it. Will I keep making things anyway? Of course.

So please accept this gift. I made it for you. I hope you find it useful.

QueryPic — explore digitised newspapers from Australia & New Zealand.

http://dhistory.org/querypic/

Features include:

  • Save and date-stamp your graphs with persistent urls — perfect for citing and sharing
  • Copy and paste query urls from Trove or Digital NZ, or connect automatically with a handy bookmarklet
  • Easily regenerate saved graphs to draw in updated data
  • Explore QueryPics created by others — use them as the starting point for your own visualisations
  • Combine any number of queries, either from Australia or New Zealand
  • Click on the graphs to preview matching articles

All this and more documented on QueryPic’s extensive help page. Code on Github.

Old loves, new views…

I’m deeply in love with the collections of the National Archives of Australia. They move me, they inspire me, they make me want to do something. How do I express my love? I’ve written stories about things like atomic bombs, progress, astronomy and weather forecasting — pursuing lives and events documented in the Archives’ rich holdings. I work on projects like Invisible Australians, hoping to bring the compelling remnants of the White Australia Policy to broader public attention. And I build things. I make tools that help other people explore, understand and use the Archives. I do this because these riches need to be used. They need to be shared. They need to be part of the fabric of our lives.

A few years ago I created a little script for Firefox that put a fresh face on the display of digitised records in the National Archives’ RecordSearch database. It’s publicly available and has been installed more than 500 times. Demonstrating this script at the ‘Doing our bit’ Build-a-thon a few weeks ago made me realise again both how useful it was and how much work it still needed.

One of the most exciting features when I first created the script was the ability to display the records on a ‘3D wall’, courtesy of a Firefox plugin called CoolIris. But CoolIris uses Flash and is no longer supported. Time for a new approach.

Say hello to the Archives Viewer (naming things isn’t really one of my strengths). Instead of rewriting my existing script I decided to create a completely new web application. Why? Mainly because it gave me a lot more flexibility. I could also make use of a variety of existing tools and frameworks like Django, Bootstrap, Isotope and FancyBox. Standing upon the code of giants, I had the whole thing up and running in a single weekend. The code is available on GitHub.

What does it do? Simply put, just feed the Archives Viewer the barcode of a digitised file in RecordSearch and it grabs the metadata and images and displays them in a variety of useful ways. It’s really pretty simple, both in execution and design.

Yep, there’s a wall. It’s not quite as spacey and zoom-y as the CoolIris version, but perhaps that’s a good thing. It’s just a flat wall of page image thumbnails with a bit of lightbox-style magic thrown in. But when I say just, well… look for yourself. There’s something a bit magical about seeing all the pages of a file at once, taking in their shapes and colours as well as their content. This digital wall provides a strangely powerful reminder of the physical object.

National Archives of Australia: ST84/1, 1908/471-480

Of course you can also view the file page by page if you want. Printing is a snap — just type in any combination of pages or page ranges and hit the button. The images and metadata are assembled ready to print. No more wondering ‘which file did this printout come from?’.
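Expanding a specification like ‘1-3,7,10-12’ into a list of pages is a small, satisfying problem in itself. Here’s a Python sketch of one way to do it — the general technique, not necessarily the Archives Viewer’s actual code:

```python
def parse_pages(spec):
    """Expand a print specification like '1-3,7,10-12' into a sorted
    list of page numbers, dropping duplicates and blank entries."""
    pages = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-", 1)
            pages.update(range(int(start), int(end) + 1))
        elif part:
            pages.add(int(part))
    return sorted(pages)

print(parse_pages("1-3,7,10-12"))  # [1, 2, 3, 7, 10, 11, 12]
```

With the page numbers in hand, the viewer just has to gather the matching thumbnails and metadata into a print-ready layout.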

But perhaps the most important feature is that each page has its own unique, persistent url. Basic stuff, but oh, so important. With a good url you can share and cite. Find something exciting? Tell the world about it! I’ve included your typical social media share buttons to help you along.

One disadvantage compared to the original userscript is that the viewer isn’t directly linked to RecordSearch. You probably don’t want to have to cut and paste the barcode every time you view a file. So I’ve also created a couple of connectors that ummm… connect things up.

The first connector is a bookmarklet — a little piece of javascript code disguised as a browser bookmark. Drag this link — Archives Viewer — to your browser’s bookmark toolbar. Then, when you’re on the item page of a digitised file in RecordSearch, click the bookmarklet and you’ll be instantly transported to the wall.

The second connector is a bit smarter. It’s an enhanced version of another userscript I wrote to display the number of pages in a digitised file. It still does that, but now it also rewrites the links to the digitised files so that they automatically open in the Archives Viewer. It’s a bit harder to install. You need Chrome or Firefox and the add-ons Greasemonkey (for Firefox) or Tampermonkey (for Chrome). Then just go to the userscript page and hit the big ‘Install’ button.

You might be wondering about Zotero (at least I hope you are). My Zotero-RecordSearch translator lets you capture page images and metadata direct to your own research database, so what happens when you’re transported across to the Archives Viewer? Never fear, I’ve written a new translator that lets you save pages as you could in RecordSearch. Even better, you get a persistent, context-enriched url, and the ability to capture multiple pages at once. Yippee!

But that’s not quite all. Buried within the pages is some lovely Linked Open Data. To be truthful, it’s not really very ‘linked’ yet, but it does expose the basic metadata in a machine-readable form, borrowing from the vocabularies of projects like Locah and the Archival Ontology. It’s an experiment, as is the Archives Viewer itself. We can learn by doing.

I’ve given quite a few talks over recent times encouraging people to take up their tools and start hacking away at the digital collections of our cultural institutions. Yes, I admit it, I’m an impatient historian (and a grumpy one at that). But it’s also because I think it’s important that we recognise that access is never just something you’re given. It’s something that we make through our stories, our projects, and our tools. It’s something that’s grounded in respect and powered by love.

‘Doing our bit’ Build-a-thon

BUILD-A-THON

Last Saturday I was amongst a group of enthusiastic and knowledgeable volunteers getting stuck in to the ‘Doing our bit’ project at the Mosman Library. The Build-a-thon was the first stage in creating a new online resource documenting the experiences of World War I service people related to the Mosman area. We’re trying to make the whole process as open as possible, so the Build-a-thon was a way of exploring resources, issues, interfaces and ideas before we lay down too much code. You can read more on the project blog.

To provide some context for our labours, I gave a series of short talks:

  • ‘Small stories in a big data world’ [video] [links]
  • ‘A digital history toolkit’ [video] [links]
  • ‘Telling stories and building interfaces’ [video] [links]
  • ‘Connections and contexts through Linked Open Data’ [links]

You can see how the day unfolded on Storify, and view the participants hard at work on Flickr.