‘The Machine That Changed the World’ online

I think I’m posting this as much to tell you about it as to remind myself to watch it!

The Machine That Changed the World is the longest, most comprehensive documentary about the history of computing ever produced, but since its release in 1992, it’s become virtually extinct. …

It’s a whirlwind tour of computing before the Web, with brilliant archival footage and interviews with key players, several of whom have passed away since filming.

In other news, the mashed museum day is tomorrow, and the UK Museums on the Web conference is the day afterwards – see you there, maybe! I’ve been flat out so I’ve no idea what I’ll work on tomorrow – I have lots of ideas but haven’t had a chance to do any preparation.

Some ideas for location-linked cultural heritage projects

I loved the Fire Eagle presentation I saw at the WSG Findability event [my write-up] because it got me all excited again about ideas for projects that take cultural heritage outside the walls of the museum, and more importantly, it made some of those projects seem feasible.

There’s also been a lot of talk about APIs into museum data recently and hopefully the time has come for this idea. It’d be ace if it was possible to bring museum data into the everyday experience of people who would be interested in the things we know about but would never think to have ‘a museum experience’.

For example, you could be on your way to the pub in Stoke Newington, and your phone could let you know that you were passing one of Daniel Defoe’s hangouts, or the school where Mary Wollstonecraft taught, or that you were passing a ‘Neolithic working area for axe-making’ and that you could see examples of the Neolithic axes in the Museum of London or Defoe’s headstone in Hackney Museum.

That’s a personal example, and those are some of my interests – Defoe wrote one of my favourite books (A Journal of the Plague Year), and I’ve been thinking about a project about ‘modern bluestockings’ that will collate information about early feminists like Wollstonecraft (contact me for more information) – but ideally you could tailor the information you receive to your interests, whether it’s football, music, fashion, history, literature or soap stars in Melbourne, Mumbai or Malmö. If I can get some content sources with good geo-data I might play with this at the museum hack day.

I’m still thinking about functionality, but a notification might look something like “did you know that [person/event blah] [lived/did blah/happened] around here? Find out more now/later [email me a link]; add this to your map for sharing/viewing later”.
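To make that notification format concrete, here’s a minimal sketch of how such a string might be assembled. The function, field names and example content are all hypothetical – just an illustration of the template above:

```python
def format_notification(subject, fact, actions):
    """Render a location-triggered notification in the style sketched above.

    `subject`, `fact` and `actions` are hypothetical fields a content
    source might supply alongside its geo-data.
    """
    action_text = " / ".join(actions)
    return f"Did you know that {subject} {fact} around here? {action_text}"

message = format_notification(
    "Daniel Defoe",
    "drank",
    ["Find out more now", "Email me a link", "Add this to my map"],
)
```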

I’ve always been fascinated with the idea of making the invisible and intangible layers of history linked to any one location visible again. Millions of lives, ordinary or notable, have been lived in London (and in your city); imagine waiting at your local bus stop and having access to the countless stories and events that happened around you over the centuries. Wikinear is a great example, but it’s currently limited to content on Wikipedia, and this content has to pass a ‘notability’ test that doesn’t reflect local concepts of notability or ‘interestingness’. Wikipedia isn’t interested in the finds associated with an archaeological dig that happened at the end of your road in the 1970s, but with a bit of tinkering (or a nudge to me to find the time to make a better programmatic interface) you could get that information from the LAARC catalogue.

The nice thing about local data is that there are lots of people making content; the not-so-nice thing about local data is that it’s scattered all over the web, in all kinds of formats and with all kinds of ‘trustability’, from museums, libraries and archives to local councils, local enthusiasts and the occasional raving lunatic. If an application developer or content editor can’t find information from trusted sources that fits the format required for their application, they’ll use whatever they can find in other encyclopaedic repositories, hack federated searches, or screen-scrape our data and generate their own set of entities (authority records) and object records. But what happens if a museum updates and republishes an incorrect record – will that change be reflected in these various ad hoc data solutions? Surely it’s better to acknowledge and play with this new information environment – better for our data and better for our audiences.

Preparing the data and/or the interface is not necessarily a project that should be specific to any one museum – it’s the kind of project that would work well if it drew on resources from across the cultural heritage sector (assuming we all made our geo-located object data and authority records available and easily queryable; whether with a commonly agreed core schema or our own schemas that others could map between).
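As a sketch of what mapping between a museum’s own schema and a commonly agreed core might look like – every field name here is invented for illustration, not a real standard:

```python
# A hypothetical shared core schema for geo-located object records,
# and a mapping from one museum's local field names into it.
def to_core(record, field_map, source):
    """Map a local record into the shared core schema.

    `field_map` says which local field supplies each core field;
    all names here are illustrative, not an agreed standard.
    """
    core = {core_field: record.get(local_field)
            for core_field, local_field in field_map.items()}
    core["source"] = source  # provenance, so consumers can judge trust
    return core

local = {"ObjectID": "A123", "ObjectName": "Neolithic axe",
         "Latitude": 51.5155, "Longitude": -0.0922}
mapping = {"id": "ObjectID", "title": "ObjectName",
           "lat": "Latitude", "lon": "Longitude"}
core_record = to_core(local, mapping, source="Museum of London")
```

Each institution would only need to publish its own field mapping, rather than agree on everything up front.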

Location-linked data isn’t only about official cultural heritage data; it could be used to display, preserve and commemorate histories that aren’t ‘notable’ or ‘historic’ enough for recording officially, whether that’s grime pirate radio stations in East London high-rise roofs or the sites of Turkish social clubs that are now new apartment buildings. Museums might not generate that data, but we could look at how it fits with user-generated content and with our collecting policies.

Or getting away from traditional cultural heritage, I’d love to know when I’m passing over the site of one of London’s lost rivers, or a location that’s mentioned in a film, novel or song.

[Updated December 2008 to add – as QR tags get more mainstream, they could provide a versatile and cheap way to provide links to online content, or 250 characters of information. That’s more information than the average Blue Plaque.]

Google release AJAX loader

From the Google page, AJAX Libraries API:

The AJAX Libraries API is a content distribution network and loading architecture for the most popular open source JavaScript libraries. By using the Google AJAX API Loader’s google.load() method, your application has high speed, globally available access to a growing list of the most popular JavaScript open source libraries including:

Google works directly with the key stakeholders for each library effort and accepts the latest stable versions as they are released. Once we host a release of a given library, we are committed to hosting that release indefinitely.

The AJAX Libraries API takes the pain out of developing mashups in JavaScript while using a collection of libraries. We take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

There’s also more information at Speed up access to your favorite frameworks via the AJAX Libraries API.
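For reference, usage looked something like the snippet below – quoted from memory of the documentation, so treat the exact version string and callback as illustrative:

```html
<!-- Load the Google loader, then request a hosted library.
     The library name and version string here are illustrative. -->
<script src="http://www.google.com/jsapi"></script>
<script>
  google.load("jquery", "1.2.6");
  google.setOnLoadCallback(function () {
    // jQuery is now available, served from Google's CDN
  });
</script>
```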

To play devil’s avocado briefly, the question is – can we trust Google enough to build functionality around them? It might be a moot point if you’re already using their APIs, and you could always use the libraries directly, but it’s worth considering.

Notes from ‘Maritime Memorials, visualised’ at MCG’s Spring Conference

These are my notes from the data burst ‘Maritime Memorials, visualised’ by Fiona Romeo at the MCG Spring meeting. There’s some background to my notes about the conference in a previous post. Any of my comments are in [square brackets] below.

Fiona’s slides for ‘Maritime Memorials, visualised’ are online.

This was a quick case study: could they use information visualisation to make more of collections datasets? [The site discussed isn’t live yet, but should be soon]

A common visualisation method is maps. Maps are a more visual way for people to look at the data; they bring in new stories, and help people get a sense of the terrain in e.g. expeditions. They exported data directly from MultiMimsy XG and put it into KML templates.
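A minimal sketch of that export-into-KML-templates step might look like the following; the field names and sample record are invented, the real MultiMimsy export would differ, and real descriptions would need proper XML-escaping:

```python
# Template collection records into KML placemarks for map display.
# Field names and the sample record are illustrative only.
KML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>{placemarks}
  </Document>
</kml>"""

PLACEMARK = """
    <Placemark>
      <name>{name}</name>
      <description>{description}</description>
      <Point><coordinates>{lon},{lat},0</coordinates></Point>
    </Placemark>"""

def records_to_kml(records):
    # Real data would need XML-escaping of names and descriptions.
    placemarks = "".join(PLACEMARK.format(**r) for r in records)
    return KML_DOC.format(placemarks=placemarks)

kml = records_to_kml([
    {"name": "Memorial to a lost ship", "description": "Illustrative record",
     "lat": 51.4826, "lon": -0.0077},
])
```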

Another common method is timelines. If you have well-structured data you could combine the approaches e.g. plotting stuff on map and on a timeline.

Onto the case study: they had a set of data about memorials around the UK/world. It was quite rich content and they felt that a catalogue was probably not the best way to display it.

They commissioned Stamen Design. They sent CSV files for each table in the database, and no further documentation. [Though since it’s MultiMimsy XG I assume they might have sent the views Willo provide rather than the underlying tables which are a little more opaque.]

Slide 4 lists some arguments for trying visualisations, including the ability to be beautiful and engaging, to be provocative rather than conclusive, to appeal to different learning styles and to be more user-centric (more relevant).

Some useful websites were listed, including the free batchgeocode.com, geonames and getlatlong.

‘Mine the implicit data’ to find meaningful patterns and representations – play with the transcripts of memorial texts to discover which words or phrases occur frequently.

‘Find the primary objects and link them’ – in this case it was the text of the memorials, then you could connect the memorials through the words they share.
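Both ideas can be sketched in a few lines; the transcripts below are invented stand-ins for the real memorial texts:

```python
from collections import Counter, defaultdict

# Invented stand-ins for memorial transcripts.
transcripts = {
    "mem1": "sacred to the memory of a master mariner drowned at sea",
    "mem2": "in memory of a mariner lost at sea",
    "mem3": "erected by his shipmates",
}

# 'Mine the implicit data': which words occur frequently overall?
freq = Counter(w for text in transcripts.values() for w in text.split())

# 'Find the primary objects and link them': invert to word -> memorials,
# so memorials sharing a word are connected through it.
index = defaultdict(set)
for mem_id, text in transcripts.items():
    for word in set(text.split()):
        index[word].add(mem_id)

linked_by_sea = sorted(index["sea"])  # memorials connected via 'sea'
```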

The ‘maritime explorer’ will let you start with a word or phrase and follow it through different memorials.

Most interesting thing about the project is the outcome – not only new outputs (the explorer, KML, API), but also a better understanding of their data (geocoded, popular phrases, new connections between transcripts), and the idea that CSV files are probably good enough if you want to release your data for creative re-use.

Approaches to metadata enhancement might include curation, the application of standards, machine markup (e.g. OpenCalais), social tagging or the treatment of data by artisans. This was only a short (2–3 week) project but the results are worth it.

[I can’t wait to try the finished ‘explorer’, and I loved the basic message – throw your data out there and see what comes back – you will almost definitely learn more about your data as well as opening up new ways in for new audiences.]

Notes from Museums Computer Group (MCG) Spring Conference, Swansea

These are my notes from the Museums Computer Group (MCG) Spring meeting, held at the National Waterfront Museum, Swansea, Wales, on April 23, 2008.

Nearly all the slides are online and I also have some photos and video from the National Waterfront Museum. If you put any content about the event online please also tag it with ‘MCGSpring2008’ so all the content about this conference can be found.

The introduction by Debbie Richards mentioned the MCG evaluation project, of which more later in ‘MCG Futures’.

I have tried to cover points that would be of general interest and not just the things that I’m interested in, but it’s still probably not entirely representative of the presentations.

Debbie did a great job of saying people’s names as they asked questions and I hope I’ve managed to get them right, but I haven’t used full names in case my notes on the questions were incorrect. Please let me know if you have any clarifications or corrections.

If I have any personal comments, they’ll be in [square brackets] below. Finally, I’ve used CMS for ‘content management systems’ and CollMS for ‘collections management systems’.

I’ve made a separate post for each paper, and will update and link to them all here as I make them live. The individual posts include links to the specific slides.

‘New Media Interpretation in the National Waterfront Museum’

‘Catch the Wind: Digital Preservation and the Real World’

‘The Welsh Dimension’

‘Museums and Europeana – the European Digital Library’

‘MCG Futures’

‘Building a bilingual CMS’

‘Extending the CMS to Galleries’

‘Rhagor – the collections based website from Amgueddfa Cymru’

‘Maritime Memorials, visualised’

‘Unheard Stories – Improving access for Deaf visitors’

‘National Collections Online Feasibility Study’

Some random links…

Two very handy resources when choosing forum software: opensourcecms.com lets you try out various installations – you can create test forums and play with the settings – and forummatrix.org helps you compare applications on a variety of facets, with a wizard to help you narrow the choices.

Andy Powell makes the excellent point that social software-style tags function as virtual venues:

if you are holding an event, or thinking about holding an event… decide what tag you are going to use as soon as possible. … In fact, in a sense, the tag becomes the virtual venue for the event’s digital legacy.

In other news, Intel get into Mashups for the Masses – “an extension to your existing web browser that allows you to easily augment the page that you are currently browsing with information from other websites. As you browse the web, the Mash Maker toolbar suggests Mashups that it can apply to the current page in order to make it more useful for you” and the BBC reports on Metaplace, a “free tool that allows anyone to create a [3D] virtual world” and incorporates lots of social web tools.

In a post titled, What is Web 3.0?, Nicholas Carr said:

“Web 3.0 involves the disintegration of digital data and software into modular components that, through the use of simple tools, can be reintegrated into new applications or functions on the fly by either machines or people.”

And recently I went to a London Geek Girl Dinner, where Paul Amery from Skype (who hosted the event) said:
“the next big step forward in software is going to be providing the plumbing, to provide people what they want, where they want …start thinking about plumbing all this software together, joining solutions together… mashups are just the tip of the iceberg”.

So why does that matter to us in the cultural heritage sector? Without stretching the analogy too far, we have two possible roles: one is to provide the content that flows through the pipes, ensuring we use plumbing-compatible tubes so that other people can plumb our content into new applications; the other is to build applications ourselves, using our own data and others’. I think we’re brilliant content producers, and we’re getting better at providing re-usable data sources – but we often don’t have the resources to do cool things with them ourselves.

Maybe what I’m advocating is giving geeks in the cultural heritage sector the time to spend playing with technology and supplying the tools for agile development. Or maybe it’s just the perennial cry of the backend geek who never gets to play with the shiny pretty things. I’m still thinking about this one.

Exposing the layers of history in cityscapes

I really liked this talk on “Time, History and the Internet” because it touches on lots of things I’m interested in.

I have an ongoing fascination with the idea of exposing the layers of history present in any cityscape.

I’d like to see content linked to and through particular places, creating a sense of four dimensional space/time anchored specifically in a given location. Discovering and displaying historical content marked-up with the right context (see below) gives us a chance to ‘move’ through the fourth dimension while we move through the other three; the content of each layer of time changing as the landscape changes (and as information is available).

Context for content: when was it written? Was it written/created at the time we’re viewing, or afterwards, or possibly even before it about the future time? Who wrote/created it, and who were they writing/drawing/creating it for? If this context is machine-readable and content is linked to a geo-reference, can we generate a representation of these layers on-the-fly?
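As a sketch of that idea, assuming invented records whose ‘depicts’ range is machine-readable and kept separate from their creation date:

```python
# Filter geo-referenced content by the 'layer of time' being viewed.
# All records and field names here are invented for illustration.
records = [
    {"title": "Plague pit record", "created": 1665, "depicts": (1665, 1666),
     "lat": 51.5165, "lon": -0.1300},
    {"title": "Victorian street view", "created": 1890, "depicts": (1890, 1890),
     "lat": 51.5164, "lon": -0.1299},
]

def layer(records, year, lat, lon, radius_deg=0.01):
    """Return titles of records depicting `year` near a point.

    `depicts` distinguishes the time shown from the time of creation,
    so a later engraving of an earlier scene still lands in the right
    layer. A crude bounding box stands in for real spatial search.
    """
    hits = []
    for r in records:
        start, end = r["depicts"]
        near = (abs(r["lat"] - lat) < radius_deg
                and abs(r["lon"] - lon) < radius_deg)
        if start <= year <= end and near:
            hits.append(r["title"])
    return hits

view_1665 = layer(records, 1665, 51.5160, -0.1300)
```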

Imagine standing at the base of Centre Point at London’s Tottenham Court Road and being able to ask: what would I have seen here ten years ago? Fifty? Two hundred? Two thousand? Or imagine sitting at home, navigating through layers of historic mapping and tilting down from a bird’s-eye view to a street-level reconstructed scene. It’s a long way off, but as more resources are born or made discoverable and interoperable, it becomes more possible.