From piles of material to patchwork: How do we embed the production of usable collections data into library work?

These notes were prepared for a panel discussion at the 'Always Already Computational: Collections as Data' (#AACdata) workshop, held in Santa Barbara in March 2017. While my latest thinking on the gap between the scale of collections and the quality of data about them is informed by my role in the Digital Scholarship team at the British Library, I've also drawn on work with catalogues and open cultural data at Melbourne Museum, the Museum of London, the Science Museum and various fellowships. My thanks to the organisers and the Institute of Museum and Library Services for the opportunity to attend. My position paper was called 'From libraries as patchwork to datasets as assemblages?' but in hindsight, piles and patchwork of material seemed a better analogy.

The invitation to this panel asked us to share our experience and perspective on various themes. I'm focusing on the challenges in making collections available as data, based on years of working towards open cultural data from within various museums and libraries. I've condensed my thoughts about the challenges down into the question on the slide: How do we embed the production of usable collections data into library work?

It has to be usable, because if it's not then why are we doing it? It has to be embedded because data in one-off projects gets isolated and stale. 'Production' is there because infrastructure and workflow are unsexy but necessary for access to the material that makes digital scholarship possible.

One of the biggest issues the British Library (BL) faces is scale. The BL's collections are vast – maybe 200 million items – and extremely varied. My experience shows that publishing datasets (or sharing them with aggregators) exposes the shortcomings of past cataloguing practices, making the size of the backlog all too apparent.

Good collections data (or metadata, depending on how you look at it) is necessary to avoid the overwhelmed, jumble sale feeling of using a huge aggregator like Europeana, Trove, or the DPLA, where you feel there's treasure within reach, if only you could find it. Publishing collections online often increases the number of enquiries about them – how can institutions deal with enquiries at scale when they already have a cataloguing backlog? Computational methods like entity identification and extraction could complement the 'gold standard' cataloguing already in progress. If they're made widely available, these other methods might help bridge the resourcing gaps that mean it's easier to find items from richer institutions and countries than from poorer ones.

Photo of piles of material

You probably already all know this, but it's worth remembering: our collections aren't even (yet) a patchwork of materials. The collections we hold, and the subset we can digitise and make available for re-use, are only a tiny proportion of what once existed. Each piece was once part of something bigger, and what we have now has been shaped by cumulative practical and intellectual decisions made over decades or centuries. Digitisation projects range from tiny specialist databases to huge commercial genealogy deals, while some areas of the collections don't yet have digital catalogue records. Some items can't be digitised because they're too big, small or fragile for scanning or photography; others can't be shared because of copyright, data protection or cultural sensitivities. We need to be careful in how we label datasets so that the absences are evident.

(Here, 'data' may include various types of metadata, automatically generated OCR or handwritten text recognition transcripts, digital images, audio or video files, crowdsourced enhancements or any combination of these and more.)

Image credit: https://www.flickr.com/photos/teen_s/6251107713/

In addition to the incompleteness or fuzziness of catalogue data, when collections appear as data, it's often as great big lumps of things. It's hard for normal scholars to process (or even just unzip) 4GB of data. And when they do manage to unzip it, the records rely on internal references – name authorities for people, places, etc – that can only be seen as strings rather than things until extra work is undertaken.

Currently, datasets are often created outside normal processes, and over time they become 'stale' as they're not updated when the source collections records change.

The BL's metadata team have experimented with 'researcher format' CSV exports around specific themes (e.g. an exhibition), and CSV is undoubtedly the most accessible format – but what we really need is the ability for people to create their own queries across catalogues, and create their own datasets from the results. (And by queries I don't mean SPARQL but rather faceted browsing or structured search forms.)

Image credit: screenshot from http://data.bl.uk/
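As a rough illustration of that 'create your own dataset' step, the sketch below filters a catalogue CSV export down to a themed subset. It's a minimal example only: the file name and column names are hypothetical stand-ins, and a real service would sit behind a search form rather than a script.

```python
import pandas as pd

# The file name and column names ('subjects' etc) are hypothetical stand-ins
# for whatever a real catalogue export contains.
catalogue = pd.read_csv("catalogue_export.csv", dtype=str)

# Keep records whose subject headings mention a theme, e.g. 'astronomy'.
themed = catalogue[
    catalogue["subjects"].str.contains("astronomy", case=False, na=False)
]

# Save the subset as a new 'researcher format' file.
themed.to_csv("astronomy_subset.csv", index=False)
print(f"{len(themed)} of {len(catalogue)} records match the theme")
```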

Collections are huge (and resources relatively small) so we need to supplement manual cataloguing with other methods. Sometimes the work of crafting links from catalogues to external authorities and identifiers will be a machine job, with pieces sewn together at industrial speed via entity recognition tools that can pull categories out of text and images. Sometimes it's a technologist who runs records through OpenRefine to find links to name authorities or Wikidata records. Sometimes it's a labour of scholarly love, with links painstakingly researched, hand-tacked together to make sure they fit before they're finally recorded in a bespoke database.
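For the 'technologist with OpenRefine' case, the underlying step is a lookup against an external authority. Here's a minimal sketch of that kind of lookup using Wikidata's public search API; a real reconciliation workflow adds scoring, type constraints and human review.

```python
import requests

def wikidata_candidates(name, limit=5):
    """Ask Wikidata's public search API for candidate items matching a name."""
    response = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": name,
            "language": "en",
            "type": "item",
            "limit": limit,
            "format": "json",
        },
        timeout=30,
    )
    response.raise_for_status()
    return [
        (item["id"], item.get("label", ""), item.get("description", ""))
        for item in response.json().get("search", [])
    ]

# A name string from a catalogue record; a person would still need to review
# the candidates before recording a link.
for qid, label, description in wikidata_candidates("Ada Lovelace"):
    print(qid, label, "-", description)
```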

This linking work often happens outside the institution, so how can we ingest and re-use it appropriately? And if we're to take advantage of computational methods and external enhancements, then we need ways to signal which categories were applied by cataloguers, which by software, which by external groups, etc.

The workflow and interface adjustments required would be significant, but even more challenging would be the internal conversations and changes required before a consensus on the best way to combine the work of cataloguers and computers could emerge.

The trick is to move from a collection of pieces to pieces of a collection. Every collection item was created in and about places, and produced by and about people. They have creative, cultural, scientific and intellectual properties. There's a web of connections from each item that should be represented when they appear in datasets. These connections help make datasets more usable, turning strings of text into references to things and concepts to aid discoverability and the application of computational methods by scholars. This enables structured search across datasets – potentially linking an oral history interview with a scientist in the BL sound archive, their scientific publications in journals, annotated transcriptions of their field notebooks from a crowdsourcing project, and published biography in the legal deposit library.

A lot of this work has already been done as authority files like AAT, ULAN etc are applied in cataloguing, so attention can now turn to converting those local references into URIs and making the most of that investment.

Applying identifiers is hard – it takes expert care to disambiguate personal names, places and concepts, even with all the hinting that context-aware systems might be able to provide as machine learning and related techniques get better. Catalogues can't easily record possible attributions, and there's understandable reluctance to publish an imperfect record, so progress on the backlog is slow. If we're not to be held back by the need for records to be perfectly complete before they're published, then we need to design systems capable of capturing the ambiguity, fuzziness and inherent messiness of historical collections, and of allowing qualified descriptors for possible links to people, places etc. Then we need to explain those differences to users, so that they don't rely too heavily on our descriptions or make unwarranted assumptions about the presence or absence of information.

Image credit: http://europeana.eu/portal/record/2021648/0180_N_31601.html
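One possible shape for that kind of qualified, provenance-aware link – which would also help signal whether a category came from a cataloguer, a piece of software or an external group – is sketched below. The field names are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class QualifiedLink:
    field: str          # e.g. 'creator' or 'place_of_publication'
    value: str          # the string as transcribed from the item
    candidate_uri: str  # a possible authority, Wikidata or Geonames URI
    source: str         # 'cataloguer', 'ner-model-v2', 'crowdsourcing', ...
    confidence: float   # 0.0-1.0, or however certainty is expressed

record_links = [
    QualifiedLink("creator", "J. Smith", "https://example.org/authority/12345",
                  source="ner-model-v2", confidence=0.6),
    QualifiedLink("creator", "J. Smith", "https://example.org/authority/67890",
                  source="cataloguer", confidence=0.95),
]

# Downstream users can filter or weight assertions by who made them and how
# confident they were, rather than taking every link at face value.
trusted = [link for link in record_links if link.confidence >= 0.9]
print(len(trusted), "high-confidence links")
```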

Photo of pipes over a building

A lot of what we need relies on more responsive infrastructure for workflows and cataloguing systems. For example, the BL's systems are designed around the 'deliverable unit' – the printed or bound volume, the archive box – because for centuries the reading room was where you accessed items. We now need infrastructure that makes items addressable at the manuscript, page and image level in order to make the most of the annotations and links created to shared identifiers.

(I'd love to see absorbent workflows, soaking up any related data or digital surrogates that pass through an organisation, no matter which system they reside in or originate from. We aren't yet making the most of OCRd text, let alone enhanced data from other processes, to aid discoverability or produce datasets from collections.)

Image credit: https://www.flickr.com/photos/snorski/34543357
My final thought – we can start small and iterate, which is just as well, because we need to work on understanding what users of collections data need and how they want to use it. We're making a start and there's a lot of thoughtful work behind the scenes, but perhaps a little more investment is needed if research libraries are to become as comfortable with data users as they are with the readers who pass through their physical doors.

The good, the bad, and the unstructured… Open data in cultural heritage

I was in London this week for the Linked Pasts event, where I presented on trends and practices for open data in cultural heritage. Linked Pasts was a colloquium on linked open data in cultural heritage organised by the Pelagios project (Leif Isaksen, Elton Barker and Rainer Simon with Pau de Soto). I really enjoyed the other papers, which included thoughtful, grounded approaches to structured data for historical periods, places and people, recognition of the importance of designing projects around audience needs (including user research), the relationship between digital tools and scholarly inquiry, visualisations as research tools, and the importance of good infrastructure for digital history.

My talk notes are below the embedded slides.

Warning: generalisations ahead.

My discussion points are based on years of conversations with other cultural heritage technologists in museums, libraries, and archives, but inevitably I'll have blind spots. For example, I'm focusing on the English-speaking world, which means I'm not discussing the great work that Dutch and Japanese organisations are doing. I've undoubtedly left out brilliant specific examples in the interests of focusing on broader trends. The point is to start conversations, to bring issues out into the open so we can collectively decide how to move forward.

The good

The good news is that more and more open cultural data is being published. Organisations have figured out that a) nothing bad is likely to happen and that b) they might get some kudos for releasing open data.

Generally, organisations are publishing the data that they have to hand – this means it's mostly collections data. This data is often as messy, incomplete and fuzzy as you'd expect from records created by many different people using many different systems over a hundred or more years.

…the bad…

Copyright restrictions mean that images mightn't be included. Furthermore, because it's often collections data, it's not necessarily rich in interpretative information. It's metadata rather than data. It doesn't capture the scholarly debates, the uncertain attributions, the biases in collecting… It certainly doesn't capture the experience of viewing the original object.

Licensing issues are still a concern. Until cultural organisations are rewarded by their funders for releasing open data, and until funders free organisations from the expectation that data will be monetised, there will be damaging uncertainty about the opportunity cost of open data.

Non-commercial licences are also an issue – organisations and scholars might feel exploited if others who have not contributed to creating their work can publish it commercially. Finally, attribution is an important currency for organisations and scholars, but most open licences aren't designed with that in mind.

…and the unstructured

The data that's released is often pretty unstructured. CSV files are very easy to use, so they help more people get access to information (assuming they can figure out GitHub), but a giant dump like this doesn't provide stable URIs for each object. Records in data dumps rarely link to external identifiers like the Getty's Thesaurus of Geographic Names, Art & Architecture Thesaurus (AAT) or Union List of Artist Names, or vernacular sources for place and people names such as Geonames or DBPedia. And that's fair enough, because people using a CSV file probably don't want all the hassle of dereferencing each URI to grab the place name so they can visualise data on a map (or whatever they're doing with the data). But it also means that it's hard for someone to reliably look for matching artists in their database, and link these records with data from other organisations.
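To make the dereferencing point concrete, here's roughly what 'grabbing the place name (or coordinates) behind a URI' involves, using DBpedia's JSON output as an example. The property URIs and response structure are from memory, so treat this as illustrative rather than definitive.

```python
import requests

# Dereference a DBpedia resource URI by fetching its JSON representation,
# then pull out the coordinates. Property URIs and response structure are
# as I recall them; a real pipeline would handle missing values properly.
resource = "London"
data = requests.get(f"https://dbpedia.org/data/{resource}.json", timeout=30).json()

entity = data.get(f"http://dbpedia.org/resource/{resource}", {})
lat = entity.get("http://www.w3.org/2003/01/geo/wgs84_pos#lat", [{}])[0].get("value")
lon = entity.get("http://www.w3.org/2003/01/geo/wgs84_pos#long", [{}])[0].get("value")
print(resource, lat, lon)
```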

So it's open, but it's often not very linked. If we're after a 'digital ecosystem of online open materials', this open data is only a baby step. But it's often where cultural organisations finish their work.

Classics > Cultural Heritage?

But many others, particularly in the classical and ancient world, have managed to overcome these issues to publish and use linked open data. So why do museums, libraries and archives seem to struggle? I'll suggest some possible reasons as conversation starters…

Not enough time

Organisations are often busy enough keeping their internal systems up and running, dealing with the needs of visitors in their physical venues, working on ecommerce and picture library systems…

Not enough skills

Cultural heritage technologists are often generalists, and apart from being too time-stretched to learn new technologies for the fun of it, they might not have the computational or information science skills necessary to implement the full linked data stack.

Some cultural heritage technologists argue that they don't know of any developers who can negotiate the complexities of SPARQL endpoints, so why publish it? The complexity is multiplied when complex data models are used with complex (or at least, unfamiliar) technologies. For some, SPARQL puts the 'end' in 'endpoint', and 'RDF triples' can seem like an abstraction too far. In these circumstances, the instruction to provide linked open data as RDF is a barrier they won't cross.
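It's worth noting that consuming a SPARQL endpoint doesn't require the full linked data stack – it can be an ordinary HTTP request that returns JSON. A minimal sketch against DBpedia's public endpoint (the query is deliberately trivial):

```python
import requests

# A deliberately trivial query: five things typed as museums.
query = "SELECT ?museum WHERE { ?museum a <http://dbpedia.org/ontology/Museum> } LIMIT 5"

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
for row in response.json()["results"]["bindings"]:
    print(row["museum"]["value"])
```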

But sometimes it feels as if some heritage technologists are unnecessarily allergic to complexity. Avoiding unnecessary complexity is useful, but progress can stall if they demand that everything remains simple enough for them to feel comfortable. Some technologists might benefit from working with people more used to thinking about structured data, such as cataloguers, registrars etc. Unfortunately, linked open data falls in the gap between the technical and the informatics silos that often exist in cultural organisations.

And organisations are also not yet using triples or structured data provided by other organisations [with the exception of identifiers for e.g. people, places and specific vocabularies]. They're publishing data in broadcast mode; it's not yet a dialogue with other collections.

Not enough data

In a way, this is the collections documentation version of the technical barriers. If the data doesn't already exist, it's hard to publish. If it needs work to pull it out of different departments, or different individuals, who's going to resource that work? Similarly, collections staff are unlikely to have time to map their data to CIDOC-CRM unless there's a compelling reason to do so. (And some of the examples given might use cultural heritage collections but are a better fit with the work of researchers outside the institution than the institution's own work).

It may be easier for some types of collections than others – art collections tend to be smaller and better described; natural history collections can link into international projects for structured data, and libraries can share cataloguing data. Classicists have also been able to get a critical mass of data together. Your local records office or small museum may have more heterogeneous collections, and there are fewer widely used ontologies or vocabularies for historical collections. The nature of historical collections means that 'small ontologies, loosely joined', may be more effective, but creating these, or mapping collections to them, is still a large piece of work. While there are tools for mapping to data structures like Europeana's data model, it seems the reasons for doing so haven't been convincing enough, so far. Which brings me to…

Not enough benefits

This is an important point, and an area the community hasn't paid enough attention to in the past. Too many conversations have jumped straight to discussion about the specific standards to use, and not enough have been about the benefits for heritage audiences, scholars and organisations.

Many technologists – who are the ones making decisions about digital standards, alongside the collections people working on digitisation – are too far removed from the consumers of linked open data to see the benefits of it unless we show them real world needs.

There's a cost in producing data for others, so it needs to be linked to the mission and goals of an organisation. Organisations are not generally able to prioritise the potential, future audiences who might benefit from tools someone else creates with linked open data when they have so many immediate problems to solve first.

While some cultural and historical organisations have done good work with linked open data, the purpose can sometimes seem rather academic. Linked data is not always explained in a way that convinces the average, over-worked collections or digital team that the benefits outweigh the financial and intellectual investment.

No-one's drinking their own champagne

You don't often hear of people beating on the door of a museum, library or archive asking for linked open data, and most organisations are yet to map their data to specific, widely-used vocabularies because they need to use them in their own work. If technologists in the cultural sector are isolated from people working with collections data and/or research questions, then it's hard for them to appreciate the value of linked data for research projects.

The classical world has benefited from small communities of scholar-technologists – so they're not only drinking their own champagne, they're throwing parties. Smaller, more contained collections of sources and research questions help create stronger connections and give people a reason to link their sources. And as we're learning throughout the day, community really helps motivate action.

(I know it's normally called 'eating your own dog food' or 'dogfooding' but I'm vegetarian, so there.)

Linked open data isn't built into collections management systems

Getting linked open data into collections management systems should mean that publishing linked data is an automatic part of sharing data online.

Chicken or the egg?

So it's all a bit 'chicken or the egg' – will it stay that way? Until there's a critical mass, probably. These conversations about linked open data in cultural heritage have been going on for years, but looking back also shows how far we've come.

[And if you've published open data from cultural heritage collections, linked open data on the classical or ancient world, or any other form of structured data about the past, please add it to the wiki page for museum, gallery, library and archive APIs and machine-readable data sources for open cultural data.]

Drink your own champagne! (Nasjonalbiblioteket image)


Creating simple graphs with Excel's Pivot Tables and Tate's artist data

I've been playing with Tate's collections data while preparing for a workshop on data visualisation. On the day I'll probably use Google Fusion Tables as an example, but I always like to have a backup plan, so I've put together a short exercise for creating simple graphs in Excel as an alternative.

The advantage of Excel is that you don't need to be online, your data isn't shared, and for many people, gaining additional skills in Excel might be more useful than learning the latest shiny web tool. PivotTables are incredibly useful for summarising data, so it's worth trying them even if you're not interested in visualisations. Pivot tables let you run basic functions – summing, averaging, grouping, etc – on spreadsheet data. If you've ever wanted spreadsheets to be as powerful as databases, pivot tables can help. I could create a pivot table then create a chart from it, but Excel has an option to create a pivot chart directly that'll also create a pivot table for you to see how it works.

For this exercise, you will need Excel and a copy of the sample data: tate_artist_data_cleaned_v1_groupedbybirthyearandgender.xlsx
(A plain text CSV version is also available for broader compatibility: tate_artist_data_cleaned_v1_groupedbybirthyearandgender.csv.)

Work out what data you're interested in

In this example, I'm interested in when the artists in Tate's collection were born, and the overall gender mix of the artists represented. To make it easier to see what's going on, I've copied those two columns of data from the original 'artists' file into a new spreadsheet. As a row by row list of births, these columns aren't ideal for charting as they are, so I want a count of artists per year, broken down by gender.
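(If you'd rather script the same summary than use Excel, a rough pandas equivalent is below; it assumes the CSV version of the sample file with the 'yearofBirth' and 'gender' columns used in this exercise. The Excel steps that follow don't depend on it.)

```python
import pandas as pd

# Assumes the CSV version of the sample file, with 'yearofBirth' and
# 'gender' columns as used in this exercise.
artists = pd.read_csv("tate_artist_data_cleaned_v1_groupedbybirthyearandgender.csv")

# Count artists per birth year, split by gender - the same summary the
# pivot table will produce.
counts = (
    artists.groupby(["yearofBirth", "gender"])
    .size()
    .unstack(fill_value=0)
)
print(counts.head())

# A quick stacked bar chart, roughly equivalent to the pivot chart
# (requires matplotlib to be installed).
counts.plot(kind="bar", stacked=True)
```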

Insert PivotChart

On the 'Insert' menu, click on PivotTable to open the menu and display the option for PivotCharts.
Excel pivot table Insert PivotChart detail

Excel will select our columns as being the most likely thing we want to chart. That all looks fine to me so click 'OK'.

Excel pivot table OK detail

Configure the PivotChart

This screen asking you to 'choose fields from the PivotTable Field List' might look scary, but we've only got two columns of data so you can't really go wrong.

Excel pivot table Choose fields

The columns have already been added to the PivotTable Field List on the right, so go ahead and tick the boxes next to 'gender' and 'yearofBirth'. Excel will probably put them straight into the 'Axis Fields' box.

Leave yearofBirth under Axis Fields and drag 'gender' over to the 'Values' box next to it. Excel automatically turns it into 'count of gender', assuming that we want to count the number of births per year.

The final task is to drag 'gender' down from the PivotTable Field List to 'Legend Fields' to create a key for which colours represent which gender. You should now see the pivot table representing the calculated values on the left and a graph in the middle.

Close-up of the pivot fields

 

When you click off the graph, the PivotTable options disappear – just click on the graph or the data again to bring them up.

Excel pivot table Results

You've made your first pivot chart!

You might want to drag it out a bit so the values aren't so squished. Tate's data covers about 500 years so there's a lot to fit in.

Now you've made a pivot chart, have a play – if you get into a mess you can always start again!

Colophon: the screenshots are from Excel 2010 for Windows because that's what I have.

About the data: this data was originally supplied by Tate. The full version on Tate's website includes name, date of birth, place of birth, year of death, place of death and URL on Tate's website. The latest versions of their data can be downloaded from http://www.tate.org.uk/about/our-work/digital/collection-data and the source data for this file from https://github.com/tategallery/collection/blob/master/artist_data.csv. This version was simplified so it only contains a list of years of birth and the gender of the artist. Some blank values for gender were filled in based on the artist's name or a quick web search; groups of artists or artists of unknown gender were removed, as were rows without a birth year. This data was prepared in March 2015 for a British Library course on 'Data Visualisation for Analysis in Scholarly Research' by Mia Ridge.

I'd love to hear if you found this useful or have any suggestions for tweaks.

Notes from Culture Hack Day (#chd11)

Culture Hack Day (#chd11) was organised by the Royal Opera House (the team being @rachelcoldicutt, @katybeale, @beyongolia, @mildlydiverting, @dracos – and congratulations to them all on an excellent event). As well as a hack event running over two days, they had a session of five minute 'lightning talks' on Saturday, with generous time for discussion between sessions. This worked quite well for providing an entry point to the event for the non-technical, and some interesting discussion resulted from it. My notes are particularly rough this time as I have one arm in a sling and typing my hand-written notes is slow.

Lightning Talks
Tom Uglow @tomux “What if the Web is a Fad?”
'We're good at managing data but not yet good at turning it into things that are more than points of data.' The future is about physical world, making things real and touchable.

Clare Reddington, @clarered, “What if We Forget about Screens and Make Real Things?”
Some ace examples of real things: Dream Director; Nuage Vert (Helsinki power station projected power consumption of city onto smoke from station – changed people's behaviour through ambient augmentation of the city); Tweeture (a conch, 'permission object' designed to get people looking up from their screens, start conversations); National Vending Machine from Dutch museum.

Leila Johnston, @finalbullet talked about why the world is already fun, and looking at the world with fresh eyes. Chromaroma made Oyster cards into toys, playing with our digital footprint.

Discussion kicked off by Simon Jenkins about helping people get it (benefits of open data etc) – CR: it's about organisational change, fears about transparency, directors don't come to events like this. Understand what's meant by value – cultural and social as well as economic. Don't forget audiences; it has to be meaningful for the people we're making it (cultural products) for.

Comment from @fidothe: Cultural heritage orgs have been screwed over by software companies. There's a disconnect between beautiful hacks around the edges and things that make people's lives easier. [Yes! People who work in cultural heritage orgs often have to deal with clunky tools, difficult or vendor-dependent data export processes, and agencies that over-promise and under-deliver. In my experience, cultural orgs don't usually have internal skills for scoping and procuring software or selecting agencies, so of course they get screwed over.]

TU: desire to be tangible is becoming more prevalent, data to enhance human experience, the relationship between culture and the way we live our lives.

CR: don't spend the rest of the afternoon reinforcing silos; there shouldn't be a dichotomy between cultural heritage people and technologists. [Quick plug for http://museum30.ning.com/, http://groups.google.com/group/antiquist, http://museum-api.pbwiki.com/ and http://museumscomputergroup.org.uk/email-list/ as places where people interested in the intersection between cultural heritage and technology can mingle – please let me know of any others!] Mutual respect is required.

Tom Armitage, @infovore “Sod big data and mashups: why not hack on making art?”
Making culture is more important than using it. 3 trends: 1) collection – tools to slice and dice across time or themes; 2) magic materials; 3) mechanical art, displays the shape of the original content; 3a) satire – @kanyejordan 'a joke so good a machine could make it'.

Tom Dunbar, @willyouhelp – story-telling possibilities of metadata embedded in media e.g. video [check out Waisda? for a game designed to get metadata added to audio-visual archives]. Metadata could be actors, characters, props, action…

Discussion [?]: remixing in itself isn't always interesting. Skillful appropriation across formats… Universe of editors, filterers, not only creators. 'in editing you end up making new things'.

Matthew Somerville, @dracos, Theatricalia, “What if You Never Needed to Miss a Show?”
'Quite selfish', makes things he needs. Wants not to miss theatre productions with people he likes in/working on them. Theatricalia also collects stories about productions. [But in discussion it came up that the National Theatre asked him to remove data – why?! A recommendation system would definitely get me seeing more theatre, and I say that as a fairly regular but uninformed theatre-goer who relies on word-of-mouth to decide where to spend ticket money.]

Nick Harkaway, @Harkaway on IP and privacy
IP as a way of ringfencing intangible ideas, requiring consent to use. Privacy is the same. Not exciting, kind of annoying, but we need to find ways to make it work more smoothly while still providing protection. 'Buying is voting' – if you buy from Tesco, you are endorsing their policies. 'Code for the change you want to see in the world': build the tools you want cultural orgs to have so they can do better. [Update: Nick has posted his own notes at Notes from Culture Hack Day. I really liked the way he brought ethical considerations to hack enthusiasm for pushing the boundaries of what's possible – the ability to say 'no' is important even if it's a pain for others.]

Chris Thorpe, @jaggeree. ArtFinder, “What if you could see through the walls of every museum and something could tell you if you’d like it?”

Culture for people who don't know much about culture. Cultural buildings obscure the content inside, stop people being surprised by what's available. It's hard if you don't know where to start. Go for user-centric information. Government Art Collection Explorer – ace! Wants an angel for art galleries to whisper information about the art in his ear. Wants people to look at the art, not the screen of their device [museums also have this concern]. SAP – situated audio platform. Wants a 'flight data recorder' for trips around cultural places.

Discussion around causes of fear and resistance to open data – what do cultural orgs fear and how can they learn more and relax? Fear of loss of provenance – response was that for developers displaying provenance alongside the data gives it credibility; counter-response was that organisations don't realise that's possible. [My view is that the easiest way to get this to change is to change the metrics by which cultural heritage organisations are judged, and resolve the tension between demands to commercialise content to supplement government grants and demands for open access to that same data. Many museums have developed hybrid 'free tombstone, low-res, paid-for high-res' models to deal with this, but it's taken years of negotiation in each institution.] I also ranted about some of these issues at OpenTech 2010, notes at 'Museums meet the 21st century'.

Other discussion and notes from twitter – re soap/drama characters tweeting – I managed to out myself as a Neighbours watcher, but it was worth it to share that Neighbours characters tweet and use Facebook. Facebook relationship status updates and events have been included as plot points, and references are made to twitter but not to the accounts of the characters active on the service. I wonder if it's script writers or marketing people who write the characters' tweets? They also tweet in sync with the Australian showings, which raises issues around spoilers and international viewers.

Someone said 'people don't want to interact with cultural institutions online. They want to interact with their content', but I think that's really dependent on the definition of content – as pointed out, points of data have limited utility without further context. There's a catch-22 between cultural orgs not yet making really engaging data and audiences not yet demanding it; hopefully hack days like CHD11 help bridge the gap and turn data into stories and other meaningful content. We're coming up against the limits of what can be done programmatically, especially given variation in quality and extent of cultural heritage data (and most of it is data rather than content).

[Update: after writing this I found a post The lightning talks at Culture Hack Day about the day, which happily picks up on lots of bits I missed. Oh, and another, by Roo Reynolds.]

After the lightning talks I popped over the road to check out the hacking and ended up getting sucked in (the lure of free pizza had a powerful effect!).  I worked on a WordPress plugin with Ian Ibbotson @ianibbo that lets you search for a term on the Culture Grid repository and imports the resulting objects into my museum metadata games so that you can play with objects based on your favourite topic.  I've put the code on github [https://github.com/mialondon/mmg-import] and will move it from my staging server to live over the next few days so people can play with the objects.  It's such a pain only having one hand, and I'm very grateful to Ian for the chance to work together and actually get some code written.  This work means that any organisation that's contributed records to the Culture Grid can start to get back tags or facts to enhance their collections, based on data generated by people playing the games.  The current 300-ish objects have about 4400 tags and 30 facts, so that's not bad for a freebie. OTOH, I don't know of many museums with the ability to display content created by others on their collections pages or store it in their collections management systems – something for another hack day?

Something I think I'll play around with a bit more is the idea of giving cultural heritage data a quality rating as it's ingested. We discussed whether the ratings would be local to an app (as they could be based on the particular requirements of that application) or generalised and recorded in the CultureGrid service. You could record the provenance of a rating, which might combine the benefits of both approaches. At the moment, my requirements for a 'high quality' record would be: title (e.g. 'The Ashes trophy', if the object has one), name or type of object (e.g. cup), date, place, decent sized image, description.
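A toy sketch of what that rating might look like in code, scoring each incoming record by how many of the desired fields are present – the field names follow my wish list above and would need mapping to whatever a real feed actually uses:

```python
# The fields follow the wish list above; a real feed would need mapping
# to whatever field names CultureGrid (or another aggregator) provides.
DESIRED_FIELDS = ["title", "object_type", "date", "place", "image_url", "description"]

def quality_score(record):
    """Fraction of the desired fields that are present and non-empty."""
    present = sum(1 for field in DESIRED_FIELDS if record.get(field))
    return present / len(DESIRED_FIELDS)

record = {
    "title": "The Ashes trophy",
    "object_type": "cup",
    "date": "1883",
    "place": "",
    "image_url": "https://example.org/images/ashes.jpg",
    "description": None,
}
print(round(quality_score(record), 2))  # 0.67 - four of the six desired fields are present
```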

Finally, if you're interested in hacking around cultural heritage data, there's also historyhackday next weekend. I'm hoping to pop in (dependent on fracture and MSc dissertation), not least because in March I'm starting a PhD in digital humanities, looking at participatory digitisation of geo-located historical material (i.e. getting people to share the transcriptions and other snippets of ad hoc digitisation they do as part of their research) and it's all hugely relevant.

Slides and talk from 'Cosmic Collections' paper

This is a lazy post, a straight copy and paste of my presentation notes (my excuse is that I'm eight days behind on everything at work and uni after being grounded in the US by volcanic ash). Anyway, I hope you enjoy it or that it's useful in some way.

Cosmic Collections: creating a big bang?


Slide 1 (solar rays – Cosmic Collections):

The Cosmic Collections project was based on a simple idea – what if we gave people the ability to make their own collection website? The Science Museum was planning an exhibition on astronomy and culture, to be called ‘Cosmos & Culture’. We had limited time and resources to produce a site to support the exhibition and we risked creating ‘just another exhibition microsite’. So what if we provided access to the machine-readable exhibition content that was already being gathered internally, and threw it open to the public to make websites with it?  And what if we motivated them to enter by offering competition prizes?  Competition participants could win a prize and kudos, and museum audiences might get a much more interesting, innovative site.
The idea was a good match for museum mission, exhibition content, technical context, hopefully audience – but was that enough?
Slide 2 (satellite dish):
Questions…
If we built an API, would anyone use it?
Can you really crowdsource the creation of collections interfaces?
The project gave me a chance to investigate some specific questions.  At the time, there were lots of calls from some quarters for museums to produce APIs for each project, but would anyone actually use a museum API?  The competition might help us understand whether or how we should invest in APIs and machine-readable data.
We can never build interfaces to meet the needs of every type of audience.  One of the promises of machine-readable data is that anyone can make something with your data, allowing people with particular needs to create something that supports their own requirements or combines their data with ours – but would anyone actually do it?
Slide 3 (map mashup):
Mashups combine data from one or more sources and/or data and visualisation tools such as maps or timelines.
I'm going to get the geek stuff out of the way and quickly define mashups and APIs…
Mashups are computer applications that take existing information from known sources and present it to the viewer in a new way. Here’s a mashup of content edits from Wikipedia with a map showing the location of the edit.
Slide 4 (APIs)
APIs (Application Programming Interfaces) are a way for one machine to talk to another: ‘Hi Bob, I’d like a list of objects from you, and hey, Alice, could you draw me a timeline to put the objects on?’
APIs tell a computer, 'if you go here, you will get that information, presented like this, and you can do that with it'.
A way of providing re-usable content to the public, other museums and other departments within our museum – we created a shared backend for web and gallery interactives.
I think of APIs as user interfaces for developers and wanted to design a good experience for developers with the same care you would for end users*. I hoped that feedback from the competition could be used to improve the beta API.
* we didn’t succeed in the first go but it’s something to aim for post-beta
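(For the non-geeks: the 'go here, get that information' idea can be as simple as the request sketched below. The endpoint and parameters are made up for illustration – this is not the actual Cosmic Collections API.)

```python
import requests

# Hypothetical endpoint and parameters - purely to illustrate the idea of
# 'ask a URL, get structured data back'; not the real Cosmic Collections API.
response = requests.get(
    "https://api.example-museum.org/objects",
    params={"exhibition": "cosmos-and-culture", "format": "json"},
    timeout=30,
)
objects = response.json()  # assume the (made-up) API returns a JSON list of objects
print(len(objects), "objects returned")
```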
Slide 5: (what if nobody came?)
AKA 'the fears and how to deal with them'
Acknowledge those fears
Plan for the worst case scenario
Take a deep breath and do it anyway
And on the next slides, the results.  If I was replicating the real experience, you’d have several nail-biting months while you waited for the museum to lumber into gear, planned the launch event, publicised the project in the participant communities… Then waited for results to come in. But let’s skip that bit…
Slide 6: (Ryan Ludwig's http://www.serostar.com/cosmic/)
The results – our judges declared a winner and a runner-up, these are screenshots – this is the second prize winning entry.
People came to the party. Yay! I'd like to thank all the participants, whether they submitted a final entry or not. It wouldn't have worked without them.
Slide 7: (Natalie and Simon's http://cosmos.natimon.com/)
This is a screenshot from the winning site – it made the best use of the API and was designed to lure the visitor in and keep drawing them through the site.
(We didn’t get subject specialists scratching their own itch – maybe they don’t need to share their work, maybe we didn’t reach them. Would like to reach researchers, let them know we have resources to be used, also that they can help us/our audiences by sharing their work)
Slide 8: (astrolabe – what did we learn?)
People need (more) help to participate in a geektastic project like this
The dynamics of a competition are tricky
Mashups are shaped by the data provided – you get out what you put in
Can we help people bring their own content to a future mashup?
Slide 9: (evaluation)
I did a small survey to evaluate the project… Turns out the project was excellent outreach into the developer community. People were really excited about being invited to play with our data.  My favourite quote: "The very idea of the competition was awesome"
Slide 10: (paper sheet)
Also positive coverage in technical press. So in conclusion?
Slide 11: (Tim Berners-Lee):
“The thing people are amazed about with the web is that, when you put something online, you don’t know who is going to use it—but it does get used.”
There are a lot of opportunities and excitement around putting machine-readable data online…
Slide 12: Tim Berners-Lee 2:
But: it doesn’t happen automatically; it’s not a magic bullet
But people won't find and use your APIs without some encouragement. You need to support your API users. People outside the museum bring new ideas but there's still a big role for people who really understand the data and audiences to help make it a quality experience…
Slide 13 (space):
What next?
Using the feedback to focus and improve collection-wide API
Adding other forms of machine-readable data
Connecting with data from your collections?
I've been thinking about how to improve APIs – offer subject authorities with links to collections, embed markup in the collections pages to help search engines understand our data…
I want more! The more of us with machine-readable data available for re-use, the better the cross-collections searches, the region or specialism-wide mashups… I'd love to be able to put together a mashup showing all the cultural heritage content about my suburb; all the Boucher self-portraits; all the inventions that helped make the Space Shuttle work…
Slide 14: (thank you)
If you're interested in possibilities of machine-readable data and access to your collections, join in the conversation on the museum API wiki or follow along on twitter or on blogs.  Join in at http://museum-api.pbworks.com/
More at https://openobjects.org.uk/ or @mia_out

Image credits include:
http://antwrp.gsfc.nasa.gov/apod/ap100415.html
http://antwrp.gsfc.nasa.gov/apod/ap100414.html
http://antwrp.gsfc.nasa.gov/apod/ap100409.html
http://antwrp.gsfc.nasa.gov/apod/ap100209.html
http://antwrp.gsfc.nasa.gov/apod/ap100315.html
http://www.sciencemuseum.org.uk/Centenary/Home/Icons/Pilot_ACE_Computer.aspx


'Cosmic Collections' – my MW2010 paper online

My Museums and the Web 2010 paper is up at Cosmic Collections: Creating a Big Bang. I'm working on the slides now, and I'm curious – what would you like to see more of in a presentation? It's only short (6 minutes) so I'm currently thinking setup (including lots of definitions for non-geeks), outcomes (did the project succeed?), and a bit on what I think the next steps are (basically a call to get your data online in re-usable formats).

I'm thinking of leading with this Tim Berners-Lee quote from an article in Prospect, Mash the state:

"The thing people are amazed about with the web is that, when you put something online, you don't know who is going to use it—but it does get used."

Some thoughts on linked data and the Science Museum – comments?

I've been meaning to finish this for ages so I could post it, but then I realised it's more use in public in imperfect form than in private, so here goes – my thoughts on linked data, APIs and the Science Museum on the 'Museums and the machine-processable web' wiki. I'm still trying to find time to finish documenting my thoughts, and I've already had several useful comments that mean I'll need to update it, but I'd love to hear your thoughts, comments, etc.

'Cosmic Collections' launches at the Science Museum this weekend

I think I've already said pretty much everything I can about the museum website mashup competition we're launching around the 'Cosmos and Culture' exhibition, but it'd be a bit silly of me not to mention it here, since the existence and design of the project reflect a lot of the issues I've written about on this blog.

If you make it along to the launch at the Science Museum on Saturday, make sure you say hello – I should be easy to find cos I'm giving a quick talk at some point.
Right now the laziest thing I could do is to give you a list of places where you can find out more:
Finally, you can talk to us @coscultcom on twitter, or tag content with #coscultcom.
Btw – if you want an idea of how slowly museums move, I think I first came up with the idea in January (certainly before dev8D because it was one of the reasons I wanted to go) and first blogged about it (I think) on the museum developers blog in March. The timing was affected by other issues, but still – it's a different pace of life!

Tom Morris, SPARQL and semweb stuff – tech talk at Open Hack London

Tom Morris gave a lightning talk on 'How to use Semantic Web data in your hack' (aka SPARQL and semantic web stuff).

He's since posted his links and queries – excellent links to endpoints you can test queries in.

Semantic web often thought of as long-promised magical elixir, he's here to say it can be used now by showing examples of queries that can be run against semantic web services. He'll demonstrate two different online datasets and one database that can be installed on your own machine.

First – dbpedia – scraped lots of wikipedia, put it into a database. dbpedia isn't like your average database, you can't draw a UML diagram of wikipedia. It's done in RDF and Linked Data. Can be queried in a language that looks like SQL but isn't. SPARQL is a W3C standard; they're currently working on SPARQL 2.

Go to dbpedia.org/sparql – submit query as post. [Really nice – I have a thing about APIs and platforms needing a really easy way to get you to 'hello world' and this does it pretty well.]

[Line by line comments on the syntax of the queries might be useful, though they're pretty readable as it is.]

'select thingy, wotsit where [the slightly more complicated stuff]'

Can get back results in xml, also HTML, 'spreadsheet', JSON. Ugly but readable. Typed.
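In the spirit of the bracketed note above about line-by-line comments, here's a commented example of the general shape of such a query (not one of Tom's actual queries), along with one way to fetch the JSON results over HTTP:

```python
import requests

# Not one of Tom's actual queries - just an illustration of the shape.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>   # shorthand prefix for long ontology URIs
SELECT ?person ?birthDate                     # the 'thingy, wotsit' you want back
WHERE {
  ?person a dbo:Person ;                      # anything typed as a Person...
          dbo:birthDate ?birthDate .          # ...that has a birth date
}
LIMIT 10                                      # keep the result set small
"""

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
for row in response.json()["results"]["bindings"]:
    print(row["person"]["value"], row["birthDate"]["value"])
```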

[Trying a query challenge set by others could be fun way to get started learning it.]

One problem – fictional places are in Wikipedia e.g. Liberty City in Grand Theft Auto.

Libris – how library websites should be
[I never used to appreciate how much most library websites suck until I started back at uni and had to use one for more than one query every few years]

Has a query interface through SPARQL

Comment from the audience BBC – now have SPARQL endpoint [as of the day before? Go BBC guy!].

Playing with mulgara, open source java triple store. [mulgara looks like a kinda faceted search/browse thing] Has own query language called TQL which can do more interesting things than SPARQL. Why use it? Schemaless data storage. Is to SQL what dynamic typing is to static typing. [did he mean 'is to sparql'?]

Question from audience: how do you discover what you can query against?
Answer: dbpedia website should list the concepts they have in there. Also some documentation of categories you can look at. [Examples and documentation are so damn important for the uptake of your API/web service.]

Coming soon [?] SPARUL – update language, SPARQL2: new features

The end!

[These are more (very) rough notes from the weekend's Open Hack London event – please let me know of clarifications, questions, links or comments. My other notes from the event are tagged openhacklondon.

Quick plug: if you're a developer interested in using cultural heritage (museums, libraries, archives, galleries, archaeology, history, science, whatever) data – a bunch of cultural heritage geeks would like to know what's useful for you (more background here). You can comment on the #chAPI wiki, or tweet @miaridge (or @mia_out). Or if you work for a company that works with cultural heritage organisations, you can help us work better with you for better results for our users.]

There were other lightning talks on Pachube (pronounced 'patchbay', about trying to build the internet of things, making an API for gadgets because e.g. connecting hardware to the web is hard for small makers) and Homera (an open source 3d game engine).

Tim Berners-Lee at TED on 'database hugging' and linked data

This TED talk by Tim Berners-Lee: The next Web of open, linked data is worth watching if you've been wondering 'whatever happened to the semantic web?', or what all this 'linked data' is about.

I've put some notes below – I was transcribing it for myself and thought I might as well share it. It's only a selection of the talk and I haven't tidied it because they're not my words to edit.

Why is linked data important?

Making the world run better by making this data available. If you know about some data in some government department you often find that, these people, they're very tempted to keep it, to hug your database, you don't want to let it go until you've made a beautiful website for it. … Who am I to say "don't make a website…" make a beautiful website, but first, give us the unadulterated data. Give us the raw data now.

You have no idea, the number of excuses people come up with to hang onto their data and not give it to you, even though you've paid for it.

Communicating science over the web… the people who are going to solve those are scientists, they have half-formed ideas in their head, but a lot of the state of knowledge of the human race at the moment is in database, currently not sharing. Alzheimer's scientists … the power of being able ask questions which bridge across different disciplines is really a complete sea-change, it's very, very important. Scientists are totally stymied at the moment, the power of the data that other scientists have collected is locked up, and we need to get it unlocked so we can tackle those huge problems. if I go on like this you'll think [all data from] huge institutions but it's not. [Social networking is data.]

Linked data is about people doing their bit to produce their bit, and it all connecting. That's how linked data works. … You do your bit, everybody else does theirs. You may not have much data yourself, to put on there, but you know to demand it.

It's not just about the number of places where data comes. It's about connecting it together. When you connect it together you get this power… out of it. It'll only really pay off when everybody else has done it. It's called Linked Data, I want you to make it, I want you to demand it.