Updates from Digital Scholarship at the British Library

I’ve been posting on the work blog far more frequently than I have here. Launching and running In the Spotlight, crowdsourcing the transcription of the British Library’s historic playbills collection, was a focus in 2017-18. Some blog posts:

And a press release and newsletters:

Other updates from work, including a new project, information about the Digital Scholarship Reading Group I started, student projects, and an open data project I shepherded:

Cross-post: Seeking researchers to work on an ambitious data science and digital humanities project

I rarely post here at the moment, in part because I post on the work blog. Here’s a cross-post to help spread the word about some exciting opportunities currently available: Seeking researchers to work on an ambitious data science and digital humanities project at the British Library and Alan Turing Institute (London)

‘If you follow @BL_DigiSchol or #DigitalHumanities hashtags on twitter, you might have seen a burst of data science, history and digital humanities jobs being advertised. In this post, Dr Mia Ridge of the Library’s Digital Scholarship team provides some background to contextualise the jobs advertised with the ‘Living with Machines’ project.

We are seeking to appoint people to several new roles; they will collaborate on an exciting new project developed by the British Library and The Alan Turing Institute, the national centre for data science and artificial intelligence.

Jobs currently advertised:

The British Library jobs are now advertised, closing September 21:

You may have noticed that the British Library is also currently advertising for a Curator, Newspaper Data (closes Sept 9). This isn’t related to Living with Machines, but with its approach of applying data-driven journalism and visualisation techniques to historical collections, it should offer some lovely synergies and opportunities to share work in progress with the project team. There’s also a Research Software Engineer advertised who will work closely with many of the same British Library teams.

If you’re applying for these posts, you may want to check out the Library’s visions and values on the refreshed ‘Careers’ website.’

My opening remarks for MCG’s Museums+Tech 2017

My notes introducing the theme of the Museums Computer Group’s 2017 conference, and a call to action for people working in cultural heritage technology, are below.

A divided world

2016 was the year that deep fractures came to the surface, but they’d been building for some time. We might live in the same country as each other, but we can experience it very differently. What we know about the state of the world is affected by where we live, our education, and by how (if?) we get our news.

Life in 2017

Cartoon of a dog surrounded by fire drinking coffee

    ‘This is fine’ (KC Green)

We can’t pretend that it’ll all go away and that society will heal itself. Divisions over Brexit, the role of propaganda in elections, climate change, the role of education, what we value as a society – they’re all awkward to address, but if we don’t it’s hard to see how we can move forward. And since we’re here to talk about museums – what role do museums have in divided societies? How much do they need to reflect voices they mightn’t agree with? Do we need to make ourselves a bit uncomfortable in order to make spaces for sharing experiences and creating empathy? Can (digital) experiences, collections and exhibitions in cultural heritage help create a shared understanding of the world?

‘arts and cultural engagement [helps] shape reflective individuals, facilitating greater understanding of themselves and their lives, increasing empathy with respect to others, and an appreciation of the diversity of human experience and cultures.’ From Understanding the value of arts & culture: The AHRC Cultural Value Project by Geoffrey Crossick & Patrycja Kaszynska

I’ve been struck lately by the observation that empathy can bridge divides, and give people the power to understand others. The arts and culture provide opportunities to ‘understand and share in another person’s feelings and experiences’ and connect the past to the present. How can museums – in all their different forms – contribute to a more empathic (and maybe eventually less divided) society?

‘The greatest benefit we owe to the artist, whether painter, poet, or novelist, is the extension of our sympathies. … Art is the nearest thing to life; it is a mode of amplifying experience and extending our contact with our fellow-men beyond the bounds of our personal lot.’ George Eliot, as quoted in Peter Bazalgette’s The Empathy Instinct

Digital experiences aren’t shared in the same way as physical ones, and ‘social’ media isn’t the same as being in the same space as someone experiencing the same thing, but they have other advantages – I hope we’ll learn about some today.

We need to tell better stories about museums and computers

Woman with buckets of computer cables
Engineer Karen Leadlay in Analog Computer Lab

Shifting from the public to staff in museums… Museums have been using technology to serve audiences and manage collections for decades. But still it feels like museums are criticised for simultaneously having too much and too little technology. Shiny apps make the news, but they’re built on decades of digitisation and care from heritage organisations. There’s a lot museums could do better, and digital expertise is not evenly distributed or recognised, but there’s a lot that’s done well, too. My challenge to you is to find and share better stories about cultural heritage technologies connecting collections, people and knowledge. If we don’t tell those stories, they’ll be told about us. Too many articles and puff pieces ignore the thoughtful, quotidian and/or experimental work of experts across the digital cultural heritage sector.

[Later in the day I mentioned that the conference had an excellent response to the call for papers – we learnt about more interesting projects than we had room to fit in, so perhaps we should encourage more people to post case studies to the MCG’s discussion list and website.]

The Museums+Tech 2017 programme

  • Keynote: ‘What makes a Museum?’
  • Museums in a post-truth world of fake news
  • Challenging Expectations
  • Dealing with distance; bringing the museum to the people
  • How can museums use sound and chatbots?
  • Looking (back to look) forward

Speaking of better stories – I’m looking forward to hearing from all our speakers today. They’re covering an incredible range of topics, approaches and technologies, so hopefully each of you will leave full of ideas. Join us for drinks afterwards to keep the conversation going. And to set the tone for the day, it’s a great time to hear Hannah Fox on the topic of ‘what makes a museum’.

Speaking of the conference – a lot of people helped out in different ways, so thanks to them all!


Do people want access to digitised collections?

manuscript drawing
Drawing of the Battle of Lincoln from Henry of Huntingdon’s Historia Anglorum, British Library, Arundel 48. Viewed 33 million times on the front page of Italian Wikipedia in Feb 2017.

Someone asked me recently if there’s any evidence that people really want access to digitised collections, so I popped onto twitter and asked, ‘Does anyone have a good example of a digitised image on Wikimedia or similar that reached a huge audience compared to the GLAM’s own site?’. Here are the responses I received:

Michael Gasser @M_Gasser mentioned a photo from Zurich’s ETH Library that by mid-September had 160,000 views on the Wikipedia page about Sagrada Familia, dwarfing views on their own site. He also shared a blog post about their project, Reaching out to new users. ETH Library’s archives in the world of Wikimedia.

Jason Evans, (@WIKI_NLW), Wikimedian at the National Library of Wales said, ‘We shared around 15,000 images from @NLWales about 2 years ago and they have been viewed over 300 million times on Wiki’, and ‘This image by Magnum Photographer Philip Jones Griffiths is our most viewed with around half a mil views each month [link to stats on BaGLAMa]‘.

Pat Hadley (@PatHadley) said ‘Coins from @YorkshireMuseum get loads of traffic [link to stats on BaGLAMa] thanks to @YMT_Coins work long after my residency!’. Andrew Woods @YMT_Coins expanded that the project wasn’t just about getting big numbers: ‘My aims were more associated w proof of concept. Can we do this? How long does it take? Possible with volunteers with no previous exp? Etc’. It’s fantastic to see this sort of experiment with specialist collections.

Helge David (@helge_david) shared a link to a YouTube video of The Roentgens’ Berlin Secretary Cabinet, saying ‘14.1 million views of an 18th century cabinet suggests the right object can catch people’s imagination when some care is taken to make it intellectually accessible and freely available online.’ The video proves that perfectly, I think.

Sara Devine (@SaraDevine) replied to say ‘Yes! We have several @brooklynmuseum examples from past project[s]’, linking to “Africanizing” Wikipedia, one of Brooklyn Museum’s experiments with sharing images and improving content on Wikipedia.

Merete Sanderhoff (@MSanderhoff) said ‘This painting @smkmuseum is not on display but widely used on Wikipedia i.e. in entry on Lions [Christian VIII og Caroline Amalie i salvingsdragt.jpg] (thx @LizzyJongma :)’ and that ‘Some of the most popular @rijksmuseum images on Wikimedia are hidden treasures like Het kanonschot, Willem van de Velde (II), ca. 1680 and Het kasteel van Batavia, Andries Beeckman, ca. 1661‘.

Aron Ambrosiani‏ (@AronAmbrosiani) said ‘this one, on the “walrus” wikipedia page, had 280 000 views last month 🙂 Photo from @Skansen in 1908: [a man in a top hat feeding a walrus]’.

Illtud Daniel‏ (@illtud) simply linked to a tweet saying that a National Library of Wales image was used on Europeana’s 404 page, asking ‘Is this cheating?’.

Discussing images from the British Library, my colleague Ben O’Steen (@benosteen) noted that a manuscript image of Stephen of England had 735,324,085 views when it was on the front page of the English-language Wikipedia in October 2016.

Maarten Brinkerink and Johan Oomen provided an update on a 2011 post on usage of the Dutch Open Images platform for audiovisual material via email:

As of May 2017, ‘On average we get 19 million page views a month on articles that feature material from our archive. This exposure is generated by the 9,000 articles that reuse our material (spread over more than 100 languages versions of Wikipedia).

Since we’ve been available for reuse on Wikimedia Commons, in total, pages that reuse our content have generated 668 million page views.

To date we have donated about 10,000 digital objects to Wikimedia Commons, of which 35% are actually being reused in one article or more.’

As you can tell by the number of links to stats on BaGLAMa, this tool is key for organisations who want to understand where their images are being viewed across Wikimedia. The huge spike in the image shows the month mentioned by Ben when Stephen of England hit the front page of Wikipedia. (A few years ago I posted tips on Who loves your stuff? How to collect links to your site.)

British Library stats on BaGLAMa.

 

Thanks to the examples shared in response to a single tweet, it seems clear that even if people don’t say to themselves, ‘what I really want is an image from a museum, archive or library’, when they want the answer to a question, content from cultural institutions helps make that answer a good one. Views of images on an institution’s own site might be relatively low, but making those images reusable by Wikimedia and other sites like Retronaut clearly has an impact. It’s not just that someone has done the work to put items in context and make them intellectually (or emotionally) accessible, it’s also that they’re placed on sites and platforms that people are already used to visiting. Access to digitised collections provides a useful public service, provoking curiosity and wonder, and teaching us about the past.


From piles of material to patchwork: How do we embed the production of usable collections data into library work?

These notes were prepared for a panel discussion at the ‘Always Already Computational: Collections as Data’ (#AACdata) workshop, held in Santa Barbara in March 2017. While my latest thinking on the gap between the scale of collections and the quality of data about them is informed by my role in the Digital Scholarship team at the British Library, I’ve also drawn on work with catalogues and open cultural data at Melbourne Museum, the Museum of London, the Science Museum and various fellowships. My thanks to the organisers and the Institute of Museum and Library Services for the opportunity to attend. My position paper was called ‘From libraries as patchwork to datasets as assemblages?’ but in hindsight, piles and patchwork of material seemed a better analogy.

The invitation to this panel asked us to share our experience and perspective on various themes. I’m focusing on the challenges in making collections available as data, based on years of working towards open cultural data from within various museums and libraries. I’ve condensed my thoughts about the challenges down into the question on the slide: How do we embed the production of usable collections data into library work?

It has to be usable, because if it’s not, why are we doing it? It has to be embedded, because data in one-off projects gets isolated and stale. ‘Production’ is there because infrastructure and workflows are unsexy but necessary for access to the material that makes digital scholarship possible.

One of the biggest issues the British Library (BL) faces is scale. The BL’s collections are vast – maybe 200 million items – and extremely varied. My experience shows that publishing datasets (or sharing them with aggregators) exposes the shortcomings of past cataloguing practices, making the size of the backlog all too apparent.

Good collections data (or metadata, depending on how you look at it) is necessary to avoid the overwhelmed, jumble-sale feeling of using a huge aggregator like Europeana, Trove, or the DPLA, where you feel there’s treasure within reach, if only you could find it. Publishing collections online often increases the number of enquiries about them – how can institutions deal with enquiries at scale when they already have a cataloguing backlog? Computational methods like entity identification and extraction could complement the ‘gold standard’ cataloguing already in progress. If they’re made widely available, these other methods might help bridge the resourcing gaps that mean it’s easier to find items from richer institutions and countries than from poorer ones.
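To make ‘entity identification and extraction’ a little more concrete, here’s a deliberately tiny sketch of the gazetteer-lookup end of that spectrum. Everything in it is invented for illustration – the names, the authority identifiers, the record text – and a real project would use trained NLP models rather than exact string matching:

```python
import re

# Toy gazetteer mapping known name strings to authority identifiers.
# Both the names and the identifiers are placeholders, not real records.
GAZETTEER = {
    "Stephen of England": "viaf:hypothetical-1",
    "Whitechapel": "geonames:hypothetical-2",
}

def extract_entities(text):
    """Return (surface string, authority id, position) for each gazetteer hit."""
    hits = []
    for name, authority_id in GAZETTEER.items():
        for match in re.finditer(re.escape(name), text):
            hits.append((name, authority_id, match.start()))
    return sorted(hits, key=lambda h: h[2])

record = "A manuscript image of Stephen of England, later reused widely."
print(extract_entities(record))
```

Even a crude pass like this turns free text into candidate links that a cataloguer could confirm or reject, which is the complement to manual cataloguing I have in mind.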

Photo of piles of material

You probably already all know this, but it’s worth remembering: our collections aren’t even (yet) a patchwork of materials. The collections we hold, and the subset we can digitise and make available for re-use, are only a tiny proportion of what once existed. Each piece was once part of something bigger, and what we have now has been shaped by cumulative practical and intellectual decisions made over decades or centuries. Digitisation projects range from tiny specialist databases to huge commercial genealogy deals, while some areas of the collections don’t yet have digital catalogue records. Some items can’t be digitised because they’re too big, small or fragile for scanning or photography; others can’t be shared because of copyright, data protection or cultural sensitivities. We need to be careful in how we label datasets so that the absences are evident.

(Here, ‘data’ may include various types of metadata, automatically generated OCR or handwritten text recognition transcripts, digital images, audio or video files, crowdsourced enhancements, or any combination of these and more.)

Image credit: https://www.flickr.com/photos/teen_s/6251107713/

In addition to the incompleteness or fuzziness of catalogue data, when collections appear as data, it’s often as great big lumps of things. It’s hard for most scholars to process (or even just unzip) 4GB of data.

Currently, datasets are often created outside normal processes, and over time they become ‘stale’ as they’re not updated when source collection records change. And once scholars manage to unzip them, the records rely on internal references – name authorities for people, places, etc. – that can only be seen as strings rather than things until extra work is undertaken.

The BL’s metadata team have experimented with ‘researcher format’ CSV exports around specific themes (e.g. an exhibition), and CSV is undoubtedly the most accessible format – but what we really need is the ability for people to create their own queries across catalogues, and create their own datasets from the results. (And by queries I don’t mean SPARQL but rather faceted browsing or structured search forms.)
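The kind of ‘create your own dataset’ step I mean can be sketched in a few lines – the user picks facet values, the system returns the matching subset. The catalogue fields and rows below are invented for illustration, not a real BL export:

```python
import csv
import io

# A tiny in-memory stand-in for a catalogue export; field names are illustrative.
CATALOGUE_CSV = """id,title,date,theme
1,Playbill for Drury Lane,1817,theatre
2,Map of Whitechapel,1873,london
3,Playbill for Sadler's Wells,1842,theatre
"""

def build_dataset(csv_text, **filters):
    """Select rows matching every field=value filter - a stand-in for faceted search."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows if all(row.get(k) == v for k, v in filters.items())]

theatre = build_dataset(CATALOGUE_CSV, theme="theatre")
print([row["id"] for row in theatre])  # the two theatre rows
```

The point of the sketch is that the hard part isn’t the filtering – it’s having consistent, structured fields across catalogues for the facets to operate on in the first place.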

Image credit: screenshot from http://data.bl.uk/

Collections are huge (and resources relatively small) so we need to supplement manual cataloguing with other methods. Sometimes the work of crafting links from catalogues to external authorities and identifiers will be a machine job, with pieces sewn together at industrial speed via entity recognition tools that can pull categories out of text and images. Sometimes it’s operated by a technologist who runs records through OpenRefine to find links to name authorities or Wikidata records. Sometimes it’s a labour of scholarly love, with links painstakingly researched, hand-tacked together to make sure they fit before they’re finally recorded in a bespoke database.
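The middle, OpenRefine-style case – matching catalogue strings against an authority list, tolerating small variations – might look like this in miniature. The names and identifiers are placeholders, and a real reconciliation would query a live service such as Wikidata rather than a local dictionary:

```python
from difflib import get_close_matches

# Toy authority list; the identifiers are placeholders, not real Wikidata items.
AUTHORITIES = {
    "Eliot, George": "Q-hypothetical-A",
    "Velde, Willem van de (II)": "Q-hypothetical-B",
}

def reconcile(name, cutoff=0.8):
    """Suggest an authority identifier for a catalogue string, or None if no close match."""
    candidates = get_close_matches(name, list(AUTHORITIES), n=1, cutoff=cutoff)
    return AUTHORITIES[candidates[0]] if candidates else None

print(reconcile("Eliot, George"))   # exact hit
print(reconcile("Eliot, Georg"))    # near-miss (typo) still matches
print(reconcile("Unknown Person"))  # no match, left for human review
```

Note the deliberate third outcome: anything below the similarity cutoff stays unlinked rather than being guessed at, which is exactly where the technologist or scholar steps back in.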

This linking work often happens outside the institution, so how can we ingest and re-use it appropriately? And if we’re to take advantage of computational methods and external enhancements, then we need ways to signal which categories were applied by cataloguers, which by software, which by external groups, and so on.
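Signalling who applied what needn’t be complicated – per-assertion provenance can be as simple as a source field on each link. The record shape and source labels below are invented for illustration, not a real BL schema:

```python
# A minimal sketch of per-assertion provenance: every subject link records
# who or what applied it, so the assertions can be filtered or weighted later.
record = {
    "id": "item-001",
    "subjects": [
        {"label": "theatre", "source": "cataloguer"},
        {"label": "London", "source": "entity-recognition-software"},
        {"label": "playbills", "source": "external-volunteer-project"},
    ],
}

def subjects_by_source(rec, source):
    """Return only the subject labels asserted by a given source."""
    return [s["label"] for s in rec["subjects"] if s["source"] == source]

print(subjects_by_source(record, "cataloguer"))
```

With something like this in place, an interface could display machine-generated links differently from curatorial ones, or exclude them from ‘gold standard’ exports entirely.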

The workflow and interface adjustments required would be significant, but even more challenging would be the internal conversations and changes required before a consensus on the best way to combine the work of cataloguers and computers could emerge.

The trick is to move from a collection of pieces to pieces of a collection. Every collection item was created in and about places, and produced by and about people. They have creative, cultural, scientific and intellectual properties. There’s a web of connections from each item that should be represented when they appear in datasets. These connections help make datasets more usable, turning strings of text into references to things and concepts to aid discoverability and the application of computational methods by scholars. This enables structured search across datasets – potentially linking an oral history interview with a scientist in the BL sound archive, their scientific publications in journals, annotated transcriptions of their field notebooks from a crowdsourcing project, and published biography in the legal deposit library.

A lot of this work has already been done as authority files like AAT, ULAN etc. are applied in cataloguing, so attention should now turn to converting those local references into URIs and making the most of that investment.

Applying identifiers is hard – it takes expert care to disambiguate personal names, places and concepts, even with all the hinting that context-aware systems might provide as machine learning and related techniques improve. Catalogues can’t easily record possible attributions, and there’s understandable reluctance to publish an imperfect record, so progress on the backlog is slow. If we’re not to be held back by the need for records to be perfectly complete before they’re published, then we need to design systems capable of capturing the ambiguity, fuzziness and inherent messiness of historical collections, and of allowing qualified descriptors for possible links to people, places and so on. Then we need to explain the difference to users, so that they don’t rely too heavily on our descriptions, making assumptions about the presence or absence of information when it’s not appropriate.

Image credit: http://europeana.eu/portal/record/2021648/0180_N_31601.html

Photo of pipes over a building

A lot of what we need relies on more responsive infrastructure for workflows and cataloguing systems. For example, the BL’s systems are designed around the ‘deliverable unit’ – the printed or bound volume, the archive box – because for centuries the reading room was where you accessed items. We now need infrastructure that makes items addressable at the manuscript, page and image level in order to make the most of the annotations and links created to shared identifiers.
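Page- and region-level addressability is essentially a matter of systematic URIs. As one hedged illustration, here’s a sketch in the style of the IIIF Image API’s URI pattern ({identifier}/{region}/{size}/{rotation}/{quality}.{format}); the base URL and identifier are made up, and this isn’t the BL’s actual image service:

```python
# Sketch of addressable image URIs in the style of the IIIF Image API.
# The server and identifier below are hypothetical.
BASE = "https://example.org/iiif"

def image_url(identifier, region="full", size="max", rotation=0,
              quality="default", fmt="jpg"):
    """Build a URI for a whole page or any rectangular detail of it."""
    return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# The whole digitised page, then a detail region of the same page:
print(image_url("ms-example-48_f168v"))
print(image_url("ms-example-48_f168v", region="100,200,800,600"))
```

Once every page and region has a stable address like this, annotations and crowdsourced links have something precise to attach to, rather than pointing vaguely at a whole ‘deliverable unit’.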

(I’d love to see absorbent workflows, soaking up any related data or digital surrogates that pass through an organisation, no matter which system they reside in or originate from. We aren’t yet making the most of OCRd text, let alone enhanced data from other processes, to aid discoverability or produce datasets from collections.)

Image credit: https://www.flickr.com/photos/snorski/34543357

My final thought – we can start small and iterate, which is just as well, because we need to work on understanding what users of collections data need and how they want to use them. We’re making a start and there’s a lot of thoughtful work behind the scenes, but maybe a bit more investment is needed if research libraries are to become as comfortable with data users as they are with the readers who pass through their physical doors.

Keynote online: ‘Reaching out: museums, crowdsourcing and participatory heritage’

In September I was invited to give a keynote at the Museum Theme Days 2016 in Helsinki. I spoke on ‘Reaching out: museums, crowdsourcing and participatory heritage’. In lieu of my notes or slides, the video is below. (Great image, thanks YouTube!)

Crowdsourcing in cultural heritage, citizen science – September 2016

More new projects and project updates I’ve noticed over September 2016.

Gillian Lattimore @Irl_HeritageDig has posted some of her dissertation research on Crowdsourcing Motivations in a GLAM Context: A Research Survey of Transcriber Motivations of the Meitheal Dúchas.ie Crowdsourcing Project. dúchas.ie is ‘a project to digitize the National Folklore Collection of Ireland, one of the largest folklore collections in the world’.

A long read on Brighton Pavilion and Museums’ Map The Museum, ‘#HeritageEveryware Map The Museum: connecting collections to the street‘ includes some great insights from Kevin Bacon.

Meghan Ferriter and Christine Rosenfeld have produced a special edition of a journal, ‘Exploring the Smithsonian Institution Transcription Center‘ with articles on ‘Crowdsourcing as Practice and Method in the Smithsonian Transcription Center’ and more.

Two YouGov posts on American and British people’s knowledge of their recent family history provide some useful figures on how many people in each region have researched family history.

Richard Light’s posted some interesting questions and feedback for crowdsourcing projects at The GB1900.org project – first look.

‘Archiving the Civil War’s Text Messages’ provides more information about the Decoding the Civil War project.

Zooniverse blog post ‘Why Cyclone Center is the CrockPot of citizen science projects‘ gives some insight into why some projects appear ‘slower’ than others.

A December 2015 post, ‘How a citizen science app with over 70,000 users is creating local community’ (HT Jill Nugent ‏@ntxscied) and an interesting contrast to ‘Volunteer field technicians are bad for wildlife ecology‘. A nice quote from the first piece: ‘Young says that the number one thing that keeps iNaturalist users involved is the community that they create: “meeting other people who are into the same thing I am”’.

iNaturalist Bioblitzes are also more evidence for the value of time-limited challenges, or as they describe them, ‘a communal citizen-science effort to record as many species within a designated location and time period as possible’.

Micropasts continue to add historical and archaeological projects.

Survey of London and CASA launched the Histories of Whitechapel website, providing ‘a new interactive map for exploring the Survey’s ongoing research into Whitechapel’ and ‘inviting people to submit their own memories, research, photographs, and videos of the area to help us uncover Whitechapel’s long and rich history’.

New Zooniverse project Mapping Change: ‘Help us use over a century’s worth of specimens to map the distribution of animals, plants, and fungi. Your data will let us know where species have been and predict where they may end up in the future!’

New Europeana project Europeana Transcribe: ‘a crowdsourcing initiative for the transcription of digital material from the First World War, compiled by Europeana 1914-1918. With your help, we can create a vast and fully digital record of personal documents from the collection.’

‘Holiday pictures help preserve the memory of world heritage sites’ introduces Curious Travellers, a ‘data-mining and crowd sourced infrastructure to help with digital documentation of archaeological sites, monuments and heritage at risk’. Or in non-academese, send them your photos and videos of threatened historic sites, particularly those in ‘North Africa, including Cyrene in Libya, as well as those in Syria and the Middle East’.

I’ve added two new international projects – Les herbonautes, a French herbarium transcription project led by the Paris Natural History Museum, and Loki, a Finnish project on maritime and coastal history – to my post on Crowdsourcing the world’s heritage. As always, let me know of other projects that should be included.

 

Survey of London
Survey of London site

April news in crowdsourcing, citizen science, citizen history

Another quick post with news on crowdsourcing in cultural heritage, citizen science and citizen history in April(ish) 2016…

Acceptances for our DH2016 Expert Workshop: Beyond The Basics: What Next For Crowdsourcing? have been sent out. If you missed the boat, don’t panic! We’re taking a few more applications on a rolling basis to allow for people with late travel approval for the DH2016 conference in July.

Probably the biggest news is the launch of citizenscience.gov, as it signals the importance of citizen science and crowdsourcing to the US government.

From the press release: ‘the White House announced that the U.S. General Services Administration (GSA) has partnered with the Woodrow Wilson International Center for Scholars (WWICS), a Trust instrumentality of the U.S. Government, to launch CitizenScience.gov as the new hub for citizen science and crowdsourcing initiatives in the public sector.

CitizenScience.gov provides information, resources, and tools for government personnel and citizens actively engaged in or looking to participate in citizen science and crowdsourcing projects. … Citizen science and crowdsourcing are powerful approaches that engage the public and provide multiple benefits to the Federal government, volunteer participants, and society as a whole.’

There’s also work to ‘standardize data and metadata related to citizen science, allowing for greater information exchange and collaboration both within individual projects and across different projects’.

Other news:

Chart: responses to questions about whether the volunteers agreed with statements about the Zooniverse, from ‘Science Learning via Participation in Online Citizen Science’

Have I missed something important? Let me know in the comments or @mia_out.

SXSW, project anniversaries and more – news on heritage crowdsourcing

Photo of programme
Our panel listing at SXSW

I’ve just spent two weeks in Texas, enjoying the wonderful hospitality and probing questions after giving various talks at universities in Houston and Austin before heading to SXSW. I was there for a panel on ‘Build the Crowdsourcing Community of Your Dreams’ (link to our slides and collected resources) with Ben Brumfield, Siobhan Leachman, and Meghan Ferriter. Siobhan, a ‘super-volunteer’ in more ways than one, posted her talk notes on ‘How cultural institutions encouraged me to participate in crowdsourcing & the factors I consider before donating my time‘.

In other news, we (me, Ben, Meghan and Christy Henshaw from the Wellcome Library) have had a workshop accepted for the Digital Humanities 2016 conference, to be held in Kraków in July. We’re looking for people with different kinds of expertise for our DH2016 Expert Workshop: Beyond The Basics: What Next For Crowdsourcing? You can apply via this form.

One of the questions at our SXSW panel was about crowdsourcing in teaching, which reminded me of this recent post on ‘The War Department in the Classroom‘ in which Zayna Bizri ‘describes her approach to using the Papers of the War Department in the classroom and offers suggestions for those who wish to do the same’. In related news, the PWD project is now five years old! There’s also this post on Primary School Zooniverse Volunteers.

The Science Gossip project is one year old, and they’re asking their contributors to decide which periodicals they’ll work on next and to start new discussions about the documents and images they find interesting.

The History Harvest project have released their Handbook (PDF).

The Danish Nationalmuseet is having a ‘Crowdsource4dk’ crowdsourcing event on April 9. You can also transcribe Churchill’s WWII daily appointments, 1939–1945, or take part in Old Weather: Whaling (and there’s a great Hyperallergic post with lots of images about the whaling log books).

I’ve seen a few interesting studentships and jobs posted lately, hinting at research and projects to come. There’s a funded PhD in HCI and online civic engagement and a (now closed) studentship on Co-creating Citizen Science for Innovation.

And in old news, this 1996 post on FamilySearch’s collaborative indexing is a good reminder that very little is entirely new in crowdsourcing.

From grey dots to trenches to field books – news in heritage crowdsourcing

Apparently you can finish a thesis but you can’t stop scanning for articles and blog posts on your topic. Sharing them here is a good way to shake the ‘I should be doing something with this’ feeling.* This is a fairly random sample of recent material, but if people find it useful I can go back and pull out other things I’ve collected.

Victoria Van Hyning, ‘What’s up with those grey dots?’ you ask – brief blog post on using software rather than manual processes to review multiple text transcriptions, and on the interface challenges that brings.

Melissa Terras, ‘Crowdsourcing in the Digital Humanities‘ – pre-print PDF for a chapter in A New Companion to Digital Humanities.

Richard Grayson, ‘A Life in the Trenches? The Use of Operation War Diary and Crowdsourcing Methods to Provide an Understanding of the British Army’s Day-to-Day Life on the Western Front‘ – a peer-reviewed article based on data created through Operation War Diary.

The Impact of Coordinated Social Media Campaigns on Online Citizen Science Engagement – a poster by Lesley Parilla and Meghan Ferriter reported on the Biodiversity Heritage Library blog.


Ben Brumfield, Crowdsourcing Transcription Failures – a response to a mailing list post asking ‘where are the failures?’

And finally, something related to my interest in participatory history commons: Martin Luther King Jr. Memorial Library – Central Library launches Memory Lab, a ‘DIY space where you can digitize your home movies, scan photographs and slides, and learn how to care for your physical and digital family heirlooms’. I was so excited when I heard about this project – it’s addressing such important issues. Jaime Mears is blogging about the project.

 

* How long after a PhD does it take for that feeling to go? Asking for a friend.