From piles of material to patchwork: How do we embed the production of usable collections data into library work?

These notes were prepared for a panel discussion at the ‘Always Already Computational: Collections as Data’ (#AACdata) workshop, held in Santa Barbara in March 2017. While my latest thinking on the gap between the scale of collections and the quality of data about them is informed by my role in the Digital Scholarship team at the British Library, I’ve also drawn on work with catalogues and open cultural data at Melbourne Museum, the Museum of London, the Science Museum and various fellowships. My thanks to the organisers and the Institute of Museum and Library Services for the opportunity to attend. My position paper was called ‘From libraries as patchwork to datasets as assemblages?’ but in hindsight, piles and patchwork of material seemed a better analogy.

The invitation to this panel asked us to share our experience and perspective on various themes. I’m focusing on the challenges in making collections available as data, based on years of working towards open cultural data from within various museums and libraries. I’ve condensed my thoughts about the challenges down into the question on the slide: How do we embed the production of usable collections data into library work?

It has to be usable, because if it’s not then why are we doing it? It has to be embedded because data in one-off projects gets isolated and stale. ‘Production’ is there because infrastructure and workflow is unsexy but necessary for access to the material that makes digital scholarship possible.

One of the biggest issues the British Library (BL) faces is scale. The BL’s collections are vast – maybe 200 million items – and extremely varied. My experience shows that publishing datasets (or sharing them with aggregators) exposes the shortcomings of past cataloguing practices, making the size of the backlog all too apparent.

Good collections data (or metadata, depending on how you look at it) is necessary to avoid the overwhelmed, jumble sale feeling of using a huge aggregator like Europeana, Trove, or the DPLA, where you feel there’s treasure within reach, if only you could find it. Publishing collections online often increases the number of enquiries about them – how can institutions deal with enquiries at scale when they already have a cataloguing backlog? Computational methods like entity identification and extraction could complement the ‘gold standard’ cataloguing already in progress. If they’re made widely available, these other methods might help bridge the resourcing gaps that mean it’s easier to find items from richer institutions and countries than from poorer ones.

Photo of piles of material

You probably already all know this, but it’s worth remembering: our collections aren’t even (yet) a patchwork of materials. The collections we hold, and the subset we can digitise and make available for re-use, are only a tiny proportion of what once existed. Each piece was once part of something bigger, and what we have now has been shaped by cumulative practical and intellectual decisions made over decades or centuries. Digitisation projects range from tiny specialist databases to huge commercial genealogy deals, while some areas of the collections don’t yet have digital catalogue records. Some items can’t be digitised because they’re too big, small or fragile for scanning or photography; others can’t be shared because of copyright, data protection or cultural sensitivities. We need to be careful in how we label datasets so that the absences are evident.

(Here, ‘data’ may include various types of metadata, automatically generated OCR or handwritten text recognition transcripts, digital images, audio or video files, crowdsourced enhancements or any combination of these and more.)

Image credit: https://www.flickr.com/photos/teen_s/6251107713/

In addition to the incompleteness or fuzziness of catalogue data, when collections appear as data, it’s often as great big lumps of things. It’s hard for most scholars to process (or just unzip) 4GB of data.

Currently, datasets are often created outside normal processes, and over time they become ‘stale’ as they’re not updated when source collection records change. And when scholars do manage to unzip them, the records rely on internal references – name authorities for people, places, etc – that can only be seen as strings rather than things until extra work is undertaken.

The BL’s metadata team have experimented with ‘researcher format’ CSV exports around specific themes (eg an exhibition), and CSV is undoubtedly the most accessible format – but what we really need is the ability for people to create their own queries across catalogues, and create their own datasets from the results. (And by queries I don’t mean SPARQL but rather faceted browsing or structured search forms).
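A themed ‘researcher format’ export of this kind can be sketched in a few lines of Python. Everything here is illustrative – the column names, theme values and records are invented, not the BL’s actual schema:

```python
import csv
import io

# A tiny stand-in for a catalogue export; real exports would be far larger
# and use the institution's own column names (these are hypothetical).
catalogue_csv = """record_id,title,theme,date
001,Maps of London,cartography,1850
002,Botanical plates,natural history,1790
003,Thames survey charts,cartography,1820
"""

def themed_dataset(csv_text, theme):
    """Filter a catalogue export down to a researcher-friendly themed subset."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["theme"] == theme]

subset = themed_dataset(catalogue_csv, "cartography")
print([row["record_id"] for row in subset])  # → ['001', '003']
```

The point of faceted browsing or structured search forms is essentially to let users compose filters like this one themselves, without anyone having to anticipate the theme in advance.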

Image credit: screenshot from http://data.bl.uk/

Collections are huge (and resources relatively small) so we need to supplement manual cataloguing with other methods. Sometimes the work of crafting links from catalogues to external authorities and identifiers will be a machine job, with pieces sewn together at industrial speed via entity recognition tools that can pull categories out of text and images. Sometimes it’s operated by a technologist who runs records through OpenRefine to find links to name authorities or Wikidata records. Sometimes it’s a labour of scholarly love, with links painstakingly researched, hand-tacked together to make sure they fit before they’re finally recorded in a bespoke database.
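The core of that reconciliation step – matching a messy catalogue string to an authority record – can be sketched with fuzzy string matching from the Python standard library. This is a toy stand-in for what OpenRefine or a Wikidata reconciliation service does; the authority entries and identifiers below are used purely for illustration:

```python
from difflib import get_close_matches

# A toy authority file: name string -> identifier. Real workflows would
# reconcile against VIAF, Wikidata or a national name authority; these
# entries and IDs are illustrative only.
authority = {
    "Dickens, Charles": "Q5686",
    "Darwin, Charles": "Q1035",
    "Eliot, George": "Q131333",
}

def reconcile(name, cutoff=0.8):
    """Suggest an authority match for a catalogue name string, or None."""
    matches = get_close_matches(name, authority.keys(), n=1, cutoff=cutoff)
    return (matches[0], authority[matches[0]]) if matches else None

print(reconcile("Dickens, Chas."))  # close enough to suggest a match
print(reconcile("Unknown, A."))     # no confident match
```

In practice a human would review the suggestions – the value of the tool is in narrowing millions of strings down to a shortlist of plausible things.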

This linking work often happens outside the institution, so how can we ingest and re-use it appropriately? And if we’re to take advantage of computational methods and external enhancements, then we need ways to signal which categories were applied by cataloguers, which by software, which by external groups, etc.

The workflow and interface adjustments required would be significant, but even more challenging would be the internal conversations and changes required before a consensus on the best way to combine the work of cataloguers and computers could emerge.

The trick is to move from a collection of pieces to pieces of a collection. Every collection item was created in and about places, and produced by and about people. They have creative, cultural, scientific and intellectual properties. There’s a web of connections from each item that should be represented when they appear in datasets. These connections help make datasets more usable, turning strings of text into references to things and concepts to aid discoverability and the application of computational methods by scholars. This enables structured search across datasets – potentially linking an oral history interview with a scientist in the BL sound archive, their scientific publications in journals, annotated transcriptions of their field notebooks from a crowdsourcing project, and published biography in the legal deposit library.

A lot of this work has already been done as authority files like AAT, ULAN etc are applied in cataloguing, so attention should turn to converting those local references into URIs and making the most of that investment.

Applying identifiers is hard – it takes expert care to disambiguate personal names, places, concepts, even with all the hinting that context-aware systems might be able to provide as machine learning etc techniques get better. Catalogues can’t easily record possible attributions, and there’s understandable reluctance to publish an imperfect record, so progress on the backlog is slow. If we’re not to be held back by the need for records to be perfectly complete before they’re published, then we need to design systems capable of capturing the ambiguity, fuzziness and inherent messiness of historical collections and allowing qualified descriptors for possible links to people, places etc. Then we need to explain the difference to users, so that they don’t overly rely on our descriptions, making assumptions about the presence or absence of information when it’s not appropriate.
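One way to capture that ambiguity is sketched below as a plain data structure: each candidate link carries who or what asserted it and how confident the assertion is, so ‘possible’ attributions can be published without being mistaken for settled facts. The field names and identifiers are invented for illustration, not a real cataloguing standard:

```python
# A hypothetical record structure for qualified, provenance-aware links.
# Field names are invented; real systems might express this in MARC,
# RDF or a bespoke schema.
record = {
    "item_id": "item-0042",
    "title": "Field notebook, unidentified naturalist",
    "links": [
        {
            "target": "person:example/Q00000",  # placeholder identifier
            "relation": "possible_author",
            "confidence": 0.6,
            "asserted_by": "entity-recognition-software",
        },
        {
            "target": "place:example/1234567",  # placeholder identifier
            "relation": "place_of_creation",
            "confidence": 1.0,
            "asserted_by": "cataloguer",
        },
    ],
}

def publishable_links(rec, threshold=0.5):
    """Links confident enough to publish, each still carrying its provenance."""
    return [l for l in rec["links"] if l["confidence"] >= threshold]

for link in publishable_links(record):
    print(link["relation"], link["asserted_by"])
```

Keeping the `asserted_by` and `confidence` fields attached to each link, rather than flattening everything into one field, is what lets an interface later explain to users which statements are firm and which are machine suggestions.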

Image credit: http://europeana.eu/portal/record/2021648/0180_N_31601.html

Photo of pipes over a building

A lot of what we need relies on more responsive infrastructure for workflows and cataloguing systems. For example, the BL’s systems are designed around the ‘deliverable unit’ – the printed or bound volume, the archive box – because for centuries the reading room was where you accessed items. We now need infrastructure that makes items addressable at the manuscript, page and image level in order to make the most of the annotations and links created to shared identifiers.

(I’d love to see absorbent workflows, soaking up any related data or digital surrogates that pass through an organisation, no matter which system they reside in or originate from. We aren’t yet making the most of OCRd text, let alone enhanced data from other processes, to aid discoverability or produce datasets from collections.)

Image credit: https://www.flickr.com/photos/snorski/34543357

My final thought – we can start small and iterate, which is just as well, because we need to work on understanding what users of collections data need and how they want to use them. We’re making a start and there’s a lot of thoughtful work behind the scenes, but maybe a bit more investment is needed for research libraries to become as comfortable with data users as they are with the readers who pass through their physical doors.

Keynote online: ‘Reaching out: museums, crowdsourcing and participatory heritage’

In September I was invited to give a keynote at the Museum Theme Days 2016 in Helsinki. I spoke on ‘Reaching out: museums, crowdsourcing and participatory heritage’. In lieu of my notes or slides, the video is below. (Great image, thanks YouTube!)

Crowdsourcing in cultural heritage, citizen science – September 2016

More new projects and project updates I’ve noticed over September 2016.

Gillian Lattimore @Irl_HeritageDig has posted some of her dissertation research on Crowdsourcing Motivations in a GLAM Context: A Research Survey of Transcriber Motivations of the Meitheal Dúchas.ie Crowdsourcing Project. dúchas.ie is ‘a project to digitize the National Folklore Collection of Ireland, one of the largest folklore collections in the world’.

A long read on Brighton Pavilion and Museums’ Map The Museum, ‘#HeritageEveryware Map The Museum: connecting collections to the street‘ includes some great insights from Kevin Bacon.

Meghan Ferriter and Christine Rosenfeld have produced a special edition of a journal, ‘Exploring the Smithsonian Institution Transcription Center‘ with articles on ‘Crowdsourcing as Practice and Method in the Smithsonian Transcription Center’ and more.

Two YouGov posts on American and British people’s knowledge of their recent family history provide some useful figures on how many people in each region have researched family history.

Richard Light’s posted some interesting questions and feedback for crowdsourcing projects at The GB1900.org project – first look.

‘Archiving the Civil War’s Text Messages’ provides more information about the Decoding the Civil War project.

Zooniverse blog post ‘Why Cyclone Center is the CrockPot of citizen science projects‘ gives some insight into why some projects appear ‘slower’ than others.

A December 2015 post, ‘How a citizen science app with over 70,000 users is creating local community’ (HT Jill Nugent ‏@ntxscied) and an interesting contrast to ‘Volunteer field technicians are bad for wildlife ecology‘. A nice quote from the first piece: ‘Young says that the number one thing that keeps iNaturalist users involved is the community that they create: “meeting other people who are into the same thing I am”’.

iNaturalist Bioblitzes are also more evidence for the value of time-limited challenges, or as they describe them, ‘a communal citizen-science effort to record as many species within a designated location and time period as possible’.

Micropasts continue to add historical and archaeological projects.

Survey of London and CASA launched the Histories of Whitechapel website, providing ‘a new interactive map for exploring the Survey’s ongoing research into Whitechapel’ and ‘inviting people to submit their own memories, research, photographs, and videos of the area to help us uncover Whitechapel’s long and rich history’.

New Zooniverse project Mapping Change: ‘Help us use over a century’s worth of specimens to map the distribution of animals, plants, and fungi. Your data will let us know where species have been and predict where they may end up in the future!’

New Europeana project Europeana Transcribe: ‘a crowdsourcing initiative for the transcription of digital material from the First World War, compiled by Europeana 1914-1918. With your help, we can create a vast and fully digital record of personal documents from the collection.’

‘Holiday pictures help preserve the memory of world heritage sites’ introduces Curious Travellers, a ‘data-mining and crowd sourced infrastructure to help with digital documentation of archaeological sites, monuments and heritage at risk’. Or in non-academese, send them your photos and videos of threatened historic sites, particularly those in ‘North Africa, including Cyrene in Libya, as well as those in Syria and the Middle East’.

I’ve added two new international projects, Les herbonautes, a French herbarium transcription project led by the Paris Natural History Museum, and Loki, a Finnish project on maritime and coastal history, to my post on Crowdsourcing the world’s heritage – as always, let me know of other projects that should be included.

 

Survey of London site

April news in crowdsourcing, citizen science, citizen history

Another quick post with news on crowdsourcing in cultural heritage, citizen science and citizen history in April(ish) 2016…

Acceptances for our DH2016 Expert Workshop: Beyond The Basics: What Next For Crowdsourcing? have been sent out. If you missed the boat, don’t panic! We’re taking a few more applications on a rolling basis to allow for people with late travel approval for the DH2016 conference in July.

Probably the biggest news is the launch of citizenscience.gov, as it signals the importance of citizen science and crowdsourcing to the US government.

From the press release: ‘the White House announced that the U.S. General Services Administration (GSA) has partnered with the Woodrow Wilson International Center for Scholars (WWICS), a Trust instrumentality of the U.S. Government, to launch CitizenScience.gov as the new hub for citizen science and crowdsourcing initiatives in the public sector.

CitizenScience.gov provides information, resources, and tools for government personnel and citizens actively engaged in or looking to participate in citizen science and crowdsourcing projects. … Citizen science and crowdsourcing are powerful approaches that engage the public and provide multiple benefits to the Federal government, volunteer participants, and society as a whole.’

There’s also work to ‘standardize data and metadata related to citizen science, allowing for greater information exchange and collaboration both within individual projects and across different projects’.

Other news:

Chart: responses to questions about whether the volunteers agreed that the Zooniverse… – from ‘Science Learning via Participation in Online Citizen Science’

Have I missed something important? Let me know in the comments or @mia_out.

SXSW, project anniversaries and more – news on heritage crowdsourcing

Photo of programme
Our panel listing at SXSW

I’ve just spent two weeks in Texas, enjoying the wonderful hospitality and probing questions after giving various talks at universities in Houston and Austin before heading to SXSW. I was there for a panel on ‘Build the Crowdsourcing Community of Your Dreams’ (link to our slides and collected resources) with Ben Brumfield, Siobhan Leachman, and Meghan Ferriter. Siobhan, a ‘super-volunteer’ in more ways than one, posted her talk notes on ‘How cultural institutions encouraged me to participate in crowdsourcing & the factors I consider before donating my time‘.

In other news, we (me, Ben, Meghan and Christy Henshaw from the Wellcome Library) have had a workshop accepted for the Digital Humanities 2016 conference, to be held in Kraków in July. We’re looking for people with different kinds of expertise for our DH2016 Expert Workshop: Beyond The Basics: What Next For Crowdsourcing? You can apply via this form.

One of the questions at our SXSW panel was about crowdsourcing in teaching, which reminded me of this recent post on ‘The War Department in the Classroom‘ in which Zayna Bizri ‘describes her approach to using the Papers of the War Department in the classroom and offers suggestions for those who wish to do the same’. In related news, the PWD project is now five years old! There’s also this post on Primary School Zooniverse Volunteers.

The Science Gossip project is one year old, and they’re asking their contributors to decide which periodicals they’ll work on next and to start new discussions about the documents and images they find interesting.

The History Harvest project have released their Handbook (PDF).

The Danish Nationalmuseet is having a ‘Crowdsource4dk’ crowdsourcing event on April 9. You can also transcribe Churchill’s WWII daily appointments, 1939–1945, or take part in Old Weather: Whaling (and there’s a great Hyperallergic post with lots of images about the whaling log books).

I’ve seen a few interesting studentships and jobs posted lately, hinting at research and projects to come. There’s a funded PhD in HCI and online civic engagement and a (now closed) studentship on Co-creating Citizen Science for Innovation.

And in old news, this 1996 post on FamilySearch’s collaborative indexing is a good reminder that very little is entirely new in crowdsourcing.

From grey dots to trenches to field books – news in heritage crowdsourcing

Apparently you can finish a thesis but you can’t stop scanning for articles and blog posts on your topic. Sharing them here is a good way to shake the ‘I should be doing something with this’ feeling.* This is a fairly random sample of recent material, but if people find it useful I can go back and pull out other things I’ve collected.

Victoria Van Hyning, ‘What’s up with those grey dots?’ you ask – brief blog post on using software rather than manual processes to review multiple text transcriptions, and on the interface challenges that brings.

Melissa Terras, ‘Crowdsourcing in the Digital Humanities‘ – pre-print PDF for a chapter in A New Companion to Digital Humanities.

Richard Grayson, ‘A Life in the Trenches? The Use of Operation War Diary and Crowdsourcing Methods to Provide an Understanding of the British Army’s Day-to-Day Life on the Western Front‘ – a peer-reviewed article based on data created through Operation War Diary.

The Impact of Coordinated Social Media Campaigns on Online Citizen Science Engagement – a poster by Lesley Parilla and Meghan Ferriter reported on the Biodiversity Heritage Library blog.


Ben Brumfield, Crowdsourcing Transcription Failures – a response to a mailing list post asking ‘where are the failures?’

And finally, something related to my interest in participatory history commons: Martin Luther King Jr. Memorial Library – Central Library has launched Memory Lab, a ‘DIY space where you can digitize your home movies, scan photographs and slides, and learn how to care for your physical and digital family heirlooms’. I was so excited when I heard about this project – it’s addressing such important issues. Jaime Mears is blogging about the project.

 

* How long after a PhD does it take for that feeling to go? Asking for a friend.

The state of museum technology?

On Friday I was invited to Nesta‘s Digital Culture Panel event to respond to their 2015 Digital Culture survey on ‘How arts and cultural organisations in England use technology’ (produced with Arts Council England (ACE) and the Arts and Humanities Research Council (AHRC)). As Chair of the Museums Computer Group (MCG) (a practitioner-led group of over 1500 museum technology professionals), I’ve been chatting to other groups about the gap between the digital skills available and those needed in the museum sector, so it’s a subject close to my heart. In previous years I’d noted that the results didn’t seem to represent what I knew of museums and digital from events and working in the sector, so I was curious to see the results.

Some of their key findings for museums (PDF) are below, interspersed with my comments. I read this section before the event, and found I didn’t really recognise the picture of museums it presented. ‘Museums’ mightn’t be the most useful grouping for a survey like this – the material that MTM London’s Ed Corn presented on the day broke the results down differently, and that made more sense. The c2,500 museums in the UK are too varied in their collections (from dinosaurs to net art), their audiences, and their local and organisational context (from tiny village museums open one afternoon a week, to historic houses, to university museums, to city museums with exhibitions that were built in the 70s, to white cube art galleries, to giants like the British Museum and Tate) to be squished together in one category. Museums tend to be quite siloed, so I’d love to know who fills out the survey, and whether they ask the whole organisation to give them data beforehand.

According to the survey, museums are significantly less likely to engage in:

  • email marketing (67 per cent vs. 83 per cent for the sector as a whole) – museums are missing out! Email marketing is relatively cheap, and it’s easy to write newsletters. It’s also easy to ask people to sign up when they’re visiting online sites or physical venues, and they can unsubscribe anytime they want to. Social media figures can look seductively huge, but Facebook is a frenemy for organisations as you never know how many people will actually see a post.
  • publish content to their own website (55 per cent vs. 72 per cent) – I wasn’t sure how to interpret this – does this mean museums don’t have their own websites? Or that they can’t update them? Or is ‘content’ a confusing term? At the event it was said that 10% of orgs have no email marketing, website or Facebook, so there are clearly some big gaps to fill still.
  • sell event tickets online (31 per cent vs. 45 per cent) – fair enough; how many museums sell tickets to anything that really needs to be booked in advance?
  • post video or audio content (31 per cent vs. 43 per cent) – for most museums, this would require an investment to create as many don’t already have filmable material or archived films to hand. Concerns about ‘polish’ might also be holding some museums back – they could try periscoping tours or sharing low-fi videos created by front of house staff or educators. Like questions about offering ‘online interactive tours of real-world spaces’ and ‘artistic projects’, this might reflect initial assumptions based on ACE’s experience with the performing arts. A question about image sharing would make more sense for museums. Similarly, the kinds of storytelling that blog posts allow can sometimes work particularly well for history and science museums (who don’t have gorgeous images of art that tell their own story).
  • make use of social media video advertising (18 per cent vs. 32 per cent) – again, video is a more natural format for performing arts than for museums
  • use crowdfunding (8 per cent vs. 19 per cent) – crowdfunding requires a significant investment of time and is often limited to specific projects rather than core business expenses, so it might be seen as too risky, but is this why museums are less likely to try it?
  • livestream performances (2 per cent vs. 12 per cent) – again, this is less likely to apply to museums than performing arts organisations

One of the key messages in Ed Corn’s talk was that organisations are experimenting less, evaluating the impact of digital work less, and not using data in digital decision-making. They’re also scaling back on non-core work; some are focusing on consolidation – fixing the basics like websites (and mobile-friendly sites). Barriers include lack of funding, lack of in-house time, lack of senior digital managers, slow/limited IT systems, and lack of digital suppliers. (Many of those barriers were also listed in a small-scale survey on ‘issues facing museum technologists’ I ran in 2010.)

When you consider the impact of the cuts year on year since 2010, and that ‘one in five regional museums at least part closed in 2015’, some of those continued barriers are less surprising. At one point everyone I know still in museums seemed to be doing at least one job on top of theirs, as people left and weren’t replaced. The cuts might have affected some departments more deeply than others – have many museums lost learning teams? I suspect we’ve also lost two generations of museum technologists – the retiring generation who first set up mainframe computers in basements, and the first generation of web-ish developers who moved on to other industries as conditions in the sector got grimmer and good pay became more important. Fellow panelist Ros Lawler also made the point that museums have to deal with legacy systems while also trying to look to the future, and that museum projects tend to be slow when they could be more agile.

Like many in the audience, I really wanted to know who the ‘digital leaders’ – the 10% of organisations who thought digital was important, did more digital activities and reaped the most benefits from their investment – were, and what made them so successful. What can other organisations learn from them?

It seems that we still need to find ways to share lessons learnt, and to help everyone in the arts and cultural sectors learn how to make the most of digital technologies and social media.  Training that meets the right need at the right time is really hard to organise and fund, and there are already lots of pockets of expertise within organisations – we need to get people talking to each other more! As I said at the event, most technology projects are really about people. Front of house staff, social media staff, collections staff – everyone can contribute something.

If you were there, have read the report or explored the data, I’d love to know what you think. And I’ll close with a blatant plug: the MCG has two open calls for papers a year, so please keep an eye out for those calls and suggest talks or volunteer to help out!

Exercises for ‘The basics of crowdsourcing in cultural heritage’

I’m running a workshop (at a Knowledge Exchange event organised by the Scottish Network on Digital Cultural Resources Evaluation and the Museums Galleries Scotland Digital Transformation Network) to help people get started with crowdsourcing in cultural heritage. These exercises are designed to give participants some hands-on experience with existing projects while developing their ability to discuss the elements of successful crowdsourcing projects. They are also an opportunity to appreciate the importance of design and text in marketing a project, and the role of user experience design in creating projects that attract and retain contributors.

Exercise: compare front pages

Choose two of the sites below to review.

The most important question to keep in mind is: how effective is the front page at making you want to participate in a project? How does it achieve that?

Exercise: try some crowdsourcing projects

Try one of the sites listed above; others are listed in this post; non-English language sites are listed here. You can also ask for suggestions!

Attributes to discuss include:

The overall ‘call to action’

  • Is the first step toward participating obvious?
  • Is the type of task, source material and output obvious?

Probable audience

  • Can you tell who the project wants to reach?
  • Does text relate to their motivations for starting, continuing?
  • How are they rewarded?
  • Are there any barriers to their participation?

Data input and data produced

  • What kinds of tasks create that data?
  • How are contributions validated?

How productive, successful does the site seem overall?

Exercise: lessons from game design

  • Go to http://git.io/2048
  • Spend 2 minutes trying it out
  • Did you understand what to do?
  • Did you want to keep playing?

Exercise: your plans

Some questions to help make ideas into reality:

  • Who already loves and/or uses your collections?
  • Which material needs what kind of work?
  • Do any existing platforms meet most of your needs?
  • What potential barriers could you turn into tasks?
  • How will you resource community interaction?
  • How would a project support your mission, engagement strategy and digitisation goals?

Digital curator at the British Library?!

Kings Library Tower, British Library

I have a new job! I’m the newest Digital Curator at the British Library. That link takes you to a post on the BL blog for a bit more about what my job involves. If you’ve read any of my posts over the past couple of years, you’ll know that working to encourage digital scholarship is a pretty good fit for my research and teaching interests.

In other news, I passed my PhD viva! I’ve got a couple of minor corrections to fit in around work and various papers, and then my PhD is over! (Unless I decide to publish from my thesis, of course…)

My ‘Welcome’ notes for UKMW15 ‘Bridging Gaps, Making Connections’

I’m at the British Museum today for the Museums Computer Group’s annual UK ‘Museums on the Web’ conference. UKMW15 has a packed line-up full of interesting presentations. As Chair of the MCG, I briefly introduced the event. My notes are below, in part to make sure that everyone who should be thanked is thanked! You can read a more polished version of this, written with my Programme Committee Co-Chair Danny Birchall, in a Guardian Culture Professionals article, ‘How digital tech can bridge gaps between museums and audiences’.

UK Museums on the Web 2015: ‘Bridging Gaps, Making Connections’ #UKMW15

I’d like to start by thanking everyone who helped make today happen, and by asking the MCG Committee Members who are here today to stand up, so that you can chat to them, ideally even thank them, during the day. For those who don’t know us, the Museums Computer Group is a practitioner-led group who work to connect, support and inspire anyone working in museum technology. (There are lots of ways to get involved – we’re electing new committee members at our AGM at lunchtime, and we will also be asking for people to host next year’s event at their museum or help organise a regional event.)

I’d particularly like to thank Ina Pruegel and Jennifer Ross, who coordinated the event, the MCG Committee members who did lots of work on the event (Andrew, Dafydd, Danny, Ivan, Jess, Kath, Mia, Rebecca, Rosie), and the Programme Committee members who reviewed presentation proposals sent in. They were: co-chairs: Danny Birchall and Mia Ridge, with Chris Michaels (British Museum), Claire Bailey Ross (Durham University), Gill Greaves (Arts Council England), Jenny Kidd (Cardiff University), Jessica Suess (Oxford University Museums), John Stack (Science Museum Group), Kim Plowright (Mildly Diverting), Matthew Cock (Vocal Eyes), Rachel Coldicutt (Friday), Sara Wajid (National Maritime Museum), Sharna Jackson (Hopster), Suse Cairns (Baltimore Museum of Art), Zak Mensah (Bristol Museums, Galleries & Archives).

And of course I’d like to thank the speakers and session chairs, the British Museum, Matt Caines at the Guardian, and in advance I’d like to thank all the tweeters, bloggers and photographers who’ll help spread this event beyond the walls of this room.

Which brings me to the theme of the event, ‘Bridging Gaps, Making Connections’. We’ve been running UK Museums on the Web since 2001; last year our theme was ‘museums beyond the web’ in recognition that barriers between ‘web teams’ and ‘web projects’ and the rest of the organisation were breaking down. But it’s also apparent that the gap between tiny, small, and even medium-sized museums and the largest, best-funded museums meant that digital expertise and knowledge had not reached the entire sector. The government’s funding cuts and burnout mean that old museum hands have left, and some who replace them need time to translate their experience in other sectors into museums. Our critics and audiences are confused about what to expect, and museums are simultaneously criticised for investing too much in technologies that disrupt the traditional gallery and for being ‘dull and dusty’. Work is duplicated across museums, libraries, archives and other cultural organisations; academic and commercial projects sometimes seem to ignore the wealth of experience in the sector.

So today is about bridging those gaps, and about making new connections. (I’ve made my own steps in bridging gaps by joining the British Library as a Digital Curator.) We have a fabulous line-up representing the wealth and diversity of experience in museum technologies.

So take lots of notes to share with your colleagues. Use your time here to find people to collaborate with. Tweet widely. Ask MCG Committee members to introduce you to other people here. Let people with questions know they can post them on the MCG discussion list and connect with thousands of people working with museums and technology. Now, more than ever, an event like this isn’t about technology; it’s about connecting and inspiring people.