Experimenting with Mastodon

I'd signed up to mastodon.cloud during an earlier twitter kerfuffle in 2017, then to ausglam.space in January last year, and glammr.us on a whim. [Edit to add, I've taken the plunge and migrated to hcommons.social/@mia as my main account].

2008-era Nokia phone with a tweet on the screen: @miaridge 'those twitters on screen are really distracting me at #mw2008'
Tweet from Museums and the Web 2008 'complaining' about being distracted by a twitterfall (remember that?) screen

This week I've gone back and taken another look. (So that's me, me and me). The energy that's poured in must be quite disconcerting for long-term users, but making new connections and thinking differently about how I want to post on social media has been quite exhilarating. It's also been a chance to think about what twitter's meant for me in the nearly 15 years I've been posting.

I've realised how constrained my tweeting has become over time, and in particular how a sense of surveillance has sucked the joy out of posting. The idea that an employer's HR, a tabloid journalist, or someone on the lookout to take offence could seize on something and blow it up – the uncertainty about how things could be taken out of context and take on a life of their own – had a chilling effect.

[Edited to add, also I've never stopped being annoyed about the way Twitter turned 'stars' into 'likes' or hearts, then shared them into timelines, as described well in this guide to Mastodon. I also acted defensively against the worst changes in twitter – my location is set to Jordan so that trending topics are in Arabic and therefore unreadable to me (except when BTS fans take over); I use the 'latest' view if I have to use Twitter's own client; and I normally use clients that only show things that people I follow have consciously tweeted, not random 'likes'.]

15 years is a long time, and I've also had to be more thoughtful about what I post as my job titles and institutions have changed. Lots of us have grown up while on the site, and benefited hugely from the conversations, friendships, provocations and more we've found there.

Twitter completely transformed events for me – you could find like-minded folk in a crowd as talks were live tweeted. Some of those conversations have continued for years. I have fond memories of making good trouble at events like Museums and the Web (and of course the Museums Computer Group's events) with people I met via their tweets.

I'll also miss the sheer size of Twitter that made random searches so interesting. You could search on any word you liked and get so many glimpses into other lives and ways of being in the world. I've never understood the 'town square' thing but it was a brilliant coffee shop. [Edit to add: that ability to search out very specific terms is also part of the surveillance vibe – it's easy to search for terms to get upset about, or to find a tweet posted to a few hundred people and pull it out of context. Mastodon apparently only allows searches on hashtagged terms, as explained in this post, so the original poster has to consciously make a word publicly searchable]

Over time, we've lost many voices as some people found twitter too toxic, or too time-consuming. Post-2016, it's been much harder to love a platform so full of harmful misinformation. At the moment this definitely feels like the last days of twitter, though I'm sure lots of us will keep our accounts, even if we don't go there as much.

If twitter doesn't last, thanks to everyone who's kept me entertained, changed how I think about things, commiserated, cheered me up, and shared wins and losses over the years.

My IFPH panel notes, 'shared authority as work in progress'

I'm in Berlin for the International Federation for Public History's #IFPH2022 conference, where I'm on a panel on 'Revisiting A Shared Authority in the Age of Digital Public History'. It's part of a working group with Thomas Cauvin (Luxembourg), Michael Frisch (United States), Serge Noiret (Italy), Mark Tebeau (United States), Mia Ridge (United Kingdom), Sharon Leon (United States), Rebecca Wingo (United States), Dominique Santana (Luxembourg), Violeta Tsenova (Luxembourg). My panel notes will make more sense in that wider context, but I'm sharing them here for reference.

Shared authority as work in progress

What does 'shared authority' mean to cultural heritage institutions? (Or GLAMs – galleries, libraries, archives and museums). The view will really depend on many factors, possibly including whether GLAM staff feel the need to do any professional gatekeeping, reserving 'library' or 'archive' professional status for themselves, much as some historians do more gatekeeping than others around who's allowed to say they're 'doing history'.

Thinking about ephemerality and what’s left of the processes of sharing authority a few years after it happens…

[Visual metaphor – think of the layers around the core of an onion. At the heart are collections, then catalogue metadata about those collections, often an additional layer of related metadata that doesn’t fit into the catalogue but is required for GLAM business, then public programmes including outreach and education, then there’s the unmediated access to collections and knowledge via social media and galleries]

I think GLAMs are getting comfortable with sharing, and shared authority. Crowdsourcing, in its many forms, is relatively common in GLAMs. Collaboration with Wikipedians of various sorts is widespread. There's a body of knowledge about co-curating exhibitions, community collecting and more, shared over conferences and publications and praxis. Texts and metadata and AV of all sorts have been created – usually *by* the public, *for* institutions.

Collaboration with other GLAMs on information standards and shared cataloguing has a long history, and those practices have moved online. [And now we’re sharing authority by putting records on wikidata, where they can be updated by anyone]
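To make that concrete, here's a minimal sketch (not any institution's actual pipeline) of pulling back the records Wikidata holds about a collection, via its public SPARQL endpoint. The 'collection' (P195) and 'inventory number' (P217) properties are standard Wikidata properties; the institution QID below is an assumption you'd replace with your own.

```python
# A hedged sketch: query Wikidata's public SPARQL endpoint for items recorded
# as belonging to a particular collection. The QID is an assumption – replace
# it with your institution's own Wikidata identifier.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
INSTITUTION_QID = "Q23308"  # assumed institution QID; swap in your own

query = f"""
SELECT ?item ?itemLabel ?inventory WHERE {{
  ?item wdt:P195 wd:{INSTITUTION_QID} .        # item is in this collection
  OPTIONAL {{ ?item wdt:P217 ?inventory . }}   # inventory number, if recorded
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
LIMIT 20
"""

response = requests.get(
    ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "glam-wikidata-sketch/0.1 (example)"},
)
response.raise_for_status()

for row in response.json()["results"]["bindings"]:
    label = row["itemLabel"]["value"]
    inventory = row.get("inventory", {}).get("value", "–")
    print(f"{label}\t{inventory}")
```

Anything that comes back may have been edited by anyone – which is exactly the point about shared authority.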

There's something interesting in the idea of the 'catalogue' as a source of authority. GLAM cataloguing practices are shaped by the needs of organisations – keeping track of their collections, adding information from structured vocabularies, perhaps adding extensive notes and bibliographies – for internal use and for their readers (particularly for libraries and archives), and by the commercial vendors that produce the cataloguing platforms. 

Cataloguing platforms often lag behind the needs of GLAMs, and have been slow to respond to requests to include sources of information outside the organisation. That may be because some of this work in sharing authority happens outside cataloguing and registrar teams, or because there's not one single, clear way in which cataloguing systems should change to include information from the community about collection items.

Some GLAMs are more challenged than others by thinking generously about where 'authority' resides. Researchers in reading rooms or open collection stores are visibly engaged in specialist research. Their discussions with reference staff will often reveal the depths of their knowledge about specific parts of a collection. Authority is already shared between readers and staff. However, the expertise (or authority) of the same readers is not visible when they use online collections – all online visits and searches look the same in Google Analytics unless you really delve into the reports. Similarly, a crowdsourcing participant transcribing text or tagging images might be entirely new to the source materials, or have a deep familiarity with them. Their questions and comments might reveal something of this, but the data recorded by a crowdsourcing platform lacks the social cues that might be present in an in-person conversation.

In the UK, generations of funding cuts have reduced the number of specialist curators in GLAMs. These days, curators are more likely to be generalists, selected for their ability to speak eloquently about collections and grasp the shape, significance and history of a collection quickly. Looking externally for authoritative information – whether the lived experience of communities who used or still care for similar items, or specialist academic and other researchers – is common.

It's important to remember that 'crowdsourcing' is a broad term that includes 'type what you see' tasks such as transcription or correction, tasks such as free-text tagging or adding information that rely on knowledge and experience, and more involved co-creative tasks such as organising projects or analysing results. But an important part of my definition is that each task contributes towards a shared, significant goal – if data isn't recorded somewhere, it's just 'user generated content'.

For me, the value of crowdsourcing in cultural heritage is the intimate access it gives members of the public to collection items they would otherwise never encounter. As long as a project offers some way for participants to share things they've noticed, ask questions and mark items for their own use – in short, a way of reflecting on historical items – I consider that even 'simple' transcription tasks have the potential to be citizen history (or citizen science). 

The questions participants ask on my projects shape my own practice, and influence the development of new tasks and features – and in the last year helped shape an exhibition I co-curated with another museum curator. The same exhibition featured 'community comments', responses from people I or the museum have worked with over some time. Some of these comments were reflections from crowdsourcing volunteers on how their participation in the project changed how they thought about mechanisation in the 1800s (the subject of the exhibition).

Attitudes have shifted; data hasn't

However, years after folksonomies and web 2.0 were big news, the data the public creates through crowdsourcing is still difficult to integrate with existing catalogues. Flickr Commons, Omeka, Wikidata, Zooniverse and other platforms might hold information that would make collections more discoverable online, but it’s not easy to link data from those platforms to internal systems. That is in part because GLAM catalogues struggle with the granularity of digitised items – catalogues can help you order a book or archive box to a reading room, but they can't as easily store tags or research notes about what's on a particular page of that item. It's also in part because data nearly always needs reviewing and transforming before ingest. 
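As an illustration of that 'review and transform' step, here's a hedged sketch of the kind of script that sits between a crowdsourcing platform's export and a catalogue ingest. The column names, the field mapping and the 'unmatched identifiers' rule are all hypothetical – real exports and catalogues vary.

```python
# A hedged sketch of reviewing and transforming crowdsourced data before
# catalogue ingest: rename a platform export's columns to catalogue fields,
# normalise values, and flag rows whose identifiers don't match the catalogue.
# All file and column names are hypothetical.
import pandas as pd

export = pd.read_csv("platform_export.csv")           # crowdsourcing platform export
catalogue = pd.read_csv("catalogue_identifiers.csv")  # known shelfmarks / system numbers

transformed = (
    export.rename(columns={
        "subject_id": "shelfmark",
        "T0": "transcribed_title",
        "T1": "transcribed_date",
    })
    .assign(
        transcribed_title=lambda df: df["transcribed_title"].str.strip(),
        transcribed_date=lambda df: pd.to_datetime(
            df["transcribed_date"], errors="coerce", dayfirst=True
        ),
    )
)

# Rows whose shelfmark isn't in the catalogue need human review before ingest.
known = transformed["shelfmark"].isin(catalogue["shelfmark"])
transformed[known].to_csv("ready_for_ingest.csv", index=False)
transformed[~known].to_csv("unmatched_for_review.csv", index=False)
```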

But is it also because GLAMs don't take shared authority seriously enough to advocate and pay for changes to their cataloguing systems to support them recording material from the public alongside internal data? Data that isn't in 'strategic' systems is more easily left behind when platforms migrate and staff move on.

This lack of flexibility in recording information from the public also plays out in ‘traditional’ volunteering, where spreadsheets and mini-databases might be used to supplement the main catalogue. The need for import and export processes to manage volunteer data can intentionally or unintentionally create a barrier to more closely integrating different sources of authoritative information.

So authority might be shared – but when it counts, whose information is regarded as vital, as 'core', and integrated into long-term systems, and whose is left out?

I realised that for me, at heart it's about digital preservation. If content isn't in an organisation's digital preservation plans, or is held by an organisation that isn't supported in having a digital preservation plan, is it really valued? And if content isn't valued, is authority really shared?

Diagram showing an 'onion' of data from 'core metadata' at the centre, to 'additional metadata' (with arrows marked 'community content' and 'algorithmic content' pointing to it), to 'public programmes', to 'unmediated public access'

Talk notes for #AIUK on the British Library and crowdsourcing

I had a strict five minute slot for my talk in the panel on 'Reimagining the past with AI' at Turing's AI UK event today, so wrote out my notes and thought I might as well share them…

The panel blurb was 'The past shapes the present and influences the future, but the historical record isn’t straightforward, and neither are its digital representations. Join the AHRC project Living with Machines and friends on their journey to reimagine the past through AI and data science and the challenges and opportunities within.' It was a delight to chat with Dave Beavan, Mariona Coll Ardanuy, Melodee Wood and Tim Hitchcock.

My prepared talk: A bit about the British Library for those who aren't familiar with it. It's one of the two biggest libraries in the world, and it’s the national library for the UK. 
 
Its collections are vast – somewhere between 180 and 200 million collection items, including 14 million books; hundreds of terabytes of archived websites; over 600,000 bound volumes of historical newspapers, of which about 60 million pages have been digitised with partner FindMyPast so far…
 
We've been working with crowdsourcing – which we defined as working with the public on tasks that contribute to a shared, significant goal related to cultural heritage collections or knowledge – for about a decade now. We've collected local sounds and accents around Britain, georeferenced gorgeous historical maps, matched card catalogue records in Urdu and Chinese to digital catalogue records, and brought the history of theatre across the UK to life via old playbills. 
 
Some of our crowdsourcing work is designed to help improve the discoverability of cultural heritage collections, and some, like our work with Living with Machines, is designed to build datasets to help answer wider research questions. 
 
In all cases, our work with crowdsourcing is closely aligned with the BL's mission: it helps make our shared intellectual heritage available for research, inspiration and enjoyment. 
 
We think of crowdsourcing activities as a form of digital volunteering, where participation in the task is rewarding in its own right. Our crowdsourcing projects are a platform for privileged access and deeper engagement with our digitised collections. They're an avenue for people who wouldn't normally encounter historical records close up to work with them, while helping make those items easier for others to access.
 
Through Living with Machines, we've worked out how to design tasks that fit into computational linguistic research questions and timelines… 
 
So that's all great – but… the scale of our collections is hard to ignore. Individual crowdsourcing tasks that make items more accessible through transcription or classification are beyond the capacity of even the keenest crowd. Enter machine learning, human computation, human in the loop…
 
While we're keen to start building systems that combine machine learning and human input to help scale up our work, we don't want to buy into terms like 'crowdworkers' or ‘gig work’ that we see in some academic and commercial work. If crowdsourcing is a form of public engagement, as well as a productive platform for tasks, we can't think of our volunteers as 'cogs' in a system. 
 
We think that it's important to help shape the future of 'human computation' systems; to ensure that work on machine learning / AI is in alignment with Library values. We look to work that peers at the Library of Congress are doing to create human-in-the-loop systems that 'cultivate responsible practices'. 
 
We want to retain the opportunities for the public to get started with simpler tasks based on historical collections, while also being careful not to 'waste clicks' by having people do tasks that computers can do faster. 
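One way to avoid wasting clicks – sketched here as an assumption about how such a system might work, not a description of the Library's actual systems – is to let a model auto-accept the classifications it's confident about and route only uncertain items to volunteers.

```python
# A minimal human-in-the-loop sketch: confident model predictions are accepted
# automatically, uncertain items go to a volunteer queue. The 0.9 threshold and
# the scikit-learn-style predict_proba() interface are assumptions.
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoopRouter:
    model: object                 # any classifier exposing predict_proba()
    threshold: float = 0.9
    auto_accepted: list = field(default_factory=list)
    volunteer_queue: list = field(default_factory=list)

    def route(self, item_id, features):
        probabilities = self.model.predict_proba([features])[0]
        confidence = probabilities.max()
        label = probabilities.argmax()
        if confidence >= self.threshold:
            self.auto_accepted.append((item_id, int(label), float(confidence)))
        else:
            # Uncertain items become crowdsourcing tasks; volunteer answers can
            # later be fed back in as new training data.
            self.volunteer_queue.append(item_id)
```

The threshold is the design decision that matters: too low and volunteers only ever see the leftovers; too high and the model never earns its keep.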
 
With Living with Machines, we've built tasks that provide opportunities for participants to think about how their classifications form training datasets for machine learning. 
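For example, here's a hedged sketch of how individual classifications might become a training dataset – items are 'retired' after a set number of classifications, and the majority label is kept along with an agreement score that participants could also see. The retirement limit and field names are illustrative, not Living with Machines' actual rules.

```python
# A hedged illustration of turning volunteer classifications into a training
# set: retire an item once it has enough classifications, keep the majority
# label and the level of agreement. Retirement limit and fields are assumptions.
from collections import Counter, defaultdict

RETIREMENT_LIMIT = 5  # classifications needed before an item is retired

def build_training_set(classifications):
    """classifications: iterable of (item_id, volunteer_label) pairs."""
    votes = defaultdict(list)
    for item_id, label in classifications:
        votes[item_id].append(label)

    training_rows = []
    for item_id, labels in votes.items():
        if len(labels) < RETIREMENT_LIMIT:
            continue  # not yet retired – keep showing it to volunteers
        label, count = Counter(labels).most_common(1)[0]
        training_rows.append({
            "item_id": item_id,
            "label": label,
            "agreement": count / len(labels),
        })
    return training_rows

# Example: five volunteers classified a (hypothetical) advert 'ad_0001'
print(build_training_set([
    ("ad_0001", "machine"), ("ad_0001", "machine"), ("ad_0001", "machine"),
    ("ad_0001", "not_machine"), ("ad_0001", "machine"),
]))
```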
 
So my question for the next year is: how can we design human computation systems that help participants acquire new literacies and skills, while scaling up and amplifying their work?

Screenshot of Zoom view from the conference stage with a large green clock and red countdown timer
The conference 'backstage' view on Zoom

Introducing… The Collective Wisdom Handbook

I'm delighted to share my latest publication, a collaboration with 15 co-authors written in March and April 2021. It's the major output of my Collective Wisdom project, an AHRC-funded project I lead with Meghan Ferriter and Sam Blickhan.

Until August 9, 2021, you can provide feedback or comment on The Collective Wisdom Handbook: perspectives on crowdsourcing in cultural heritage:

We have published this first version of our collaborative text to provide early access to our work, and to invite comment and discussion from anyone interested in crowdsourcing, citizen science, citizen history, digital / online volunteer projects, programmes, tools or platforms with cultural heritage collections.

I wrote two posts to provide further context:

Our book is now open for 'community review'. What does that mean for you?

Announcing an 'early access' version of our Collective Wisdom Handbook

I'm curious to see how much of a difference this period of open comment makes. The comments so far have been quite specific and useful, but I'd like to know where we *really* got it right, and where we could include other examples. You need a pubpub account to comment but after that it's pretty straightforward – select text, and add a comment, or comment on an entire chapter.

Having some distance from the original writing period has been useful for me – not least, the realisation that the title should have been 'perspectives on crowdsourcing in cultural heritage and digital humanities'.

About 'a practical guide to crowdsourcing in cultural heritage'

book cover

Some time ago I wrote a chapter on 'Crowdsourcing in cultural heritage: a practical guide to designing and running successful projects' for the Routledge International Handbook of Research Methods in Digital Humanities, edited by Kristen Schuster and Stuart Dunn. As their blurb says, the volume 'draws on both traditional and emerging fields of study to consider what a grounded definition of quantitative and qualitative research in the Digital Humanities (DH) might mean; which areas DH can fruitfully draw on in order to foster and develop that understanding; where we can see those methods applied; and what the future directions of research methods in Digital Humanities might look like'.

Inspired by a post from the authors of a chapter in the same volume (Opening the ‘black box’ of digital cultural heritage processes: feminist digital humanities and critical heritage studies by Hannah Smyth, Julianne Nyhan & Andrew Flinn), I'm sharing something about what I wanted to do in my chapter.

As the title suggests, I wanted to provide practical insights for cultural heritage and digital humanities practitioners. Writing for a Handbook of Research Methods in Digital Humanities was an opportunity to help researchers understand both how to apply the 'method' and how the 'behind the scenes' work affects the outcomes. As a method, crowdsourcing in cultural heritage touches on many more methods and disciplines. The chapter built on my doctoral research, and my ideas were road-tested at many workshops, classes and conferences.

Rather than crib from my introduction (which you can read in a pre-edited version online), I've included the headings from the chapter as a guide to the contents:

  • An introduction to crowdsourcing in cultural heritage
  • Key conceptual and research frameworks
  • Fundamental concepts in cultural heritage crowdsourcing
  • Why do cultural heritage institutions support crowdsourcing projects?
  • Why do people contribute to crowdsourcing projects?
  • Turning crowdsourcing ideas into reality
  • Planning crowdsourcing projects
  • Defining 'success' for your project
  • Managing organisational impact
  • Choosing source collections
  • Planning workflows and data re-use
  • Planning communications and participant recruitment
  • Final considerations: practical and ethical ‘reality checks’
  • Developing and testing crowdsourcing projects
  • Designing the ‘onboarding’ experience
  • Task design
  • Documentation and tutorials
  • Quality control: validation and verification systems
  • Rewards and recognition
  • Running crowdsourcing projects
  • Launching a project
  • The role of participant discussion
  • Ongoing community engagement
  • Planning a graceful exit
  • The future of crowdsourcing in cultural heritage
  • Thanks and acknowledgements

I wrote in the open on this Google Doc: 'Crowdsourcing in cultural heritage: a practical guide to designing and running successful projects', and benefited from the feedback I got during that process, so this post is also an opportunity to highlight and reiterate my 'Thanks and acknowledgements' section:

I would like to thank participants and supporters of crowdsourcing projects I’ve created, including Museum Metadata Games, In their own words: collecting experiences of the First World War, and In the Spotlight. I would also like to thank my co-organisers and attendees at the Digital Humanities 2016 Expert Workshop on the future of crowdsourcing. Especial thanks to the participants in courses and workshops on ‘crowdsourcing in cultural heritage’, including the British Library’s Digital Scholarship training programme, the HILT Digital Humanities summer school (once with Ben Brumfield) and scholars at other events where the course was held, whose insights, cynicism and questions have informed my thinking over the years. Finally, thanks to Meghan Ferriter and Victoria Van Hyning for their comments on this manuscript.


References for Crowdsourcing in cultural heritage: a practical guide to designing and running successful projects

Alam, S. L., & Campbell, J. (2017). Temporal Motivations of Volunteers to Participate in Cultural Crowdsourcing Work. Information Systems Research. https://doi.org/10.1287/isre.2017.0719

Bedford, A. (2014, February 16). Instructional Overlays and Coach Marks for Mobile Apps. Retrieved 12 September 2014, from Nielsen Norman Group website: http://www.nngroup.com/articles/mobile-instructional-overlay/

Berglund Prytz, Y. (2013, June 24). The Oxford Community Collection Model. Retrieved 22 October 2018, from RunCoCo website: http://blogs.it.ox.ac.uk/runcoco/2013/06/24/the-oxford-community-collection-model/

Bernstein, S. (2014). Crowdsourcing in Brooklyn. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Bitgood, S. (2010). An attention-value model of museum visitors (pp. 1–29). Retrieved from Center for the Advancement of Informal Science Education website: http://caise.insci.org/uploads/docs/VSA_Bitgood.pdf

Bonney, R., Ballard, H., Jordan, R., McCallie, E., Phillips, T., Shirk, J., & Wilderman, C. C. (2009). Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report (pp. 1–58). Retrieved from Center for Advancement of Informal Science Education (CAISE) website: http://caise.insci.org/uploads/docs/PPSR%20report%20FINAL.pdf

Brohan, P. (2012, July 23). One million, six hundred thousand new observations. Retrieved 30 October 2012, from Old Weather Blog website: http://blog.oldweather.org/2012/07/23/one-million-six-hundred-thousand-new-observations/

Brohan, P. (2014, August 18). In search of lost weather. Retrieved 5 September 2014, from Old Weather Blog website: http://blog.oldweather.org/2014/08/18/in-search-of-lost-weather/

Brumfield, B. W. (2012a, March 5). Quality Control for Crowdsourced Transcription. Retrieved 9 October 2013, from Collaborative Manuscript Transcription website: http://manuscripttranscription.blogspot.co.uk/2012/03/quality-control-for-crowdsourced.html

Brumfield, B. W. (2012b, March 17). Crowdsourcing at IMLS WebWise 2012. Retrieved 8 September 2014, from Collaborative Manuscript Transcription website: http://manuscripttranscription.blogspot.com.au/2012/03/crowdsourcing-at-imls-webwise-2012.html

Budiu, R. (2014, March 2). Login Walls Stop Users in Their Tracks. Retrieved 7 March 2014, from Nielsen Norman Group website: http://www.nngroup.com/articles/login-walls/

Causer, T., & Terras, M. (2014). ‘Many Hands Make Light Work. Many Hands Together Make Merry Work’: Transcribe Bentham and Crowdsourcing Manuscript Collections. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Causer, T., & Wallace, V. (2012). Building A Volunteer Community: Results and Findings from Transcribe Bentham. Digital Humanities Quarterly, 6(2). Retrieved from http://www.digitalhumanities.org/dhq/vol/6/2/000125/000125.html

Cheng, J., Teevan, J., Iqbal, S. T., & Bernstein, M. S. (2015, April). Break It Down: A Comparison of Macro- and Microtasks. 4061–4064. https://doi.org/10.1145/2702123.2702146

Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A., Haugen, J., & Miene, P. (1998). Understanding and assessing the motivations of volunteers: A functional approach. Journal of Personality and Social Psychology, 74(6), 1516–30.

Collings, R. (2014, May 5). The art of computer image recognition. Retrieved 25 May 2014, from The Public Catalogue Foundation website: http://www.thepcf.org.uk/what_we_do/48/reference/862

Collings, R. (2015, February 1). The art of computer recognition. Retrieved 22 October 2018, from Art UK website: https://artuk.org/about/blog/the-art-of-computer-recognition

Crowdsourcing Consortium. (2015). Engaging the Public: Best Practices for Crowdsourcing Across the Disciplines. Retrieved from http://crowdconsortium.org/

Crowley, E. J., & Zisserman, A. (2016). The Art of Detection. Presented at the Workshop on Computer Vision for Art Analysis, ECCV. Retrieved from https://www.robots.ox.ac.uk/~vgg/publications/2016/Crowley16/crowley16.pdf

Csikszentmihalyi, M., & Hermanson, K. (1995). Intrinsic Motivation in Museums: Why Does One Want to Learn? In J. Falk & L. D. Dierking (Eds.), Public institutions for personal learning: Establishing a research agenda (pp. 66–77). Washington D.C.: American Association of Museums.

Dafis, L. L., Hughes, L. M., & James, R. (2014). What’s Welsh for ‘Crowdsourcing’? Citizen Science and Community Engagement at the National Library of Wales. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Das Gupta, V., Rooney, N., & Schreibman, S. (n.d.). Notes from the Transcription Desk: Modes of engagement between the community and the resource of the Letters of 1916. Digital Humanities 2016: Conference Abstracts. Presented at the Digital Humanities 2016, Kraków. Retrieved from http://dh2016.adho.org/abstracts/228

De Benetti, T. (2011, June 16). The secrets of Digitalkoot: Lessons learned crowdsourcing data entry to 50,000 people (for free). Retrieved 9 January 2012, from Microtask website: http://blog.microtask.com/2011/06/the-secrets-of-digitalkoot-lessons-learned-crowdsourcing-data-entry-to-50000-people-for-free/

de Boer, V., Hildebrand, M., Aroyo, L., De Leenheer, P., Dijkshoorn, C., Tesfa, B., & Schreiber, G. (2012). Nichesourcing: Harnessing the power of crowds of experts. Proceedings of the 18th International Conference on Knowledge Engineering and Knowledge Management, EKAW 2012, 16–20. Retrieved from http://dx.doi.org/10.1007/978-3-642-33876-2_3

DH2016 Expert Workshop. (2016, July 12). DH2016 Crowdsourcing workshop session overview. Retrieved 5 October 2018, from DH2016 Expert Workshop: Beyond The Basics: What Next For Crowdsourcing? website: https://docs.google.com/document/d/1sTII8P67mOFKWxCaAKd8SeF56PzKcklxG7KDfCRUF-8/edit?usp=drive_open&ouid=0&usp=embed_facebook

Dillon-Scott, P. (2011, March 31). How Europeana, crowdsourcing & wiki principles are preserving European history. Retrieved 15 February 2015, from The Sociable website: http://sociable.co/business/how-europeana-crowdsourcing-wiki-principles-are-preserving-european-history/

DiMeo, M. (2014, February 3). First Monday Library Chat: University of Iowa’s DIY History. Retrieved 7 September 2014, from The Recipes Project website: http://recipes.hypotheses.org/3216

Dunn, S., & Hedges, M. (2012). Crowd-Sourcing Scoping Study: Engaging the Crowd with Humanities Research (p. 56). Retrieved from King’s College website: http://www.humanitiescrowds.org

Dunn, S., & Hedges, M. (2013). Crowd-sourcing as a Component of Humanities Research Infrastructures. International Journal of Humanities and Arts Computing, 7(1–2), 147–169. https://doi.org/10.3366/ijhac.2013.0086

Durkin, P. (2017, September 28). Release notes: A big antedating for white lie – and introducing Shakespeare’s world. Retrieved 29 September 2017, from Oxford English Dictionary website: http://public.oed.com/the-oed-today/recent-updates-to-the-oed/september-2017-update/release-notes-white-lie-and-shakespeares-world/

Eccles, K., & Greg, A. (2014). Your Paintings Tagger: Crowdsourcing Descriptive Metadata for a National Virtual Collection. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Edwards, D., & Graham, M. (2006). Museum volunteers and heritage sectors. Australian Journal on Volunteering, 11(1), 19–27.

European Citizen Science Association. (2015). 10 Principles of Citizen Science. Retrieved from https://ecsa.citizen-science.net/sites/default/files/ecsa_ten_principles_of_citizen_science.pdf

Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., & Cox, A. L. (2014). Designing for dabblers and deterring drop-outs in citizen science. 2985–2994. https://doi.org/10.1145/2556288.2557262

Eveleigh, A., Jennett, C., Lynn, S., & Cox, A. L. (2013). I want to be a captain! I want to be a captain!: Gamification in the old weather citizen science project. Proceedings of the First International Conference on Gameful Design, Research, and Applications, 79–82. Retrieved from http://dl.acm.org/citation.cfm?id=2583019

Ferriter, M., Rosenfeld, C., Boomer, D., Burgess, C., Leachman, S., Leachman, V., … Shuler, M. E. (2016). We learn together: Crowdsourcing as practice and method in the Smithsonian Transcription Center. Collections, 12(2), 207–225. https://doi.org/10.1177/155019061601200213

Fleet, C., Kowal, K., & Přidal, P. (2012). Georeferencer: Crowdsourced Georeferencing for Map Library Collections. D-Lib Magazine, 18(11/12). https://doi.org/10.1045/november2012-fleet

Forum posters. (2010–present). Signs of OW addiction … Retrieved 11 April 2014, from Old Weather Forum » Shore Leave » Dockside Cafe website: http://forum.oldweather.org/index.php?topic=1432.0

Fugelstad, P., Dwyer, P., Filson Moses, J., Kim, J. S., Mannino, C. A., Terveen, L., & Snyder, M. (2012). What Makes Users Rate (Share, Tag, Edit…)? Predicting Patterns of Participation in Online Communities. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, 969–978. Retrieved from http://dl.acm.org/citation.cfm?id=2145349

Gilliver, P. (2012, October 4). ‘Your dictionary needs you’: A brief history of the OED’s appeals to the public. Retrieved from Oxford English Dictionary website: https://public.oed.com/history/history-of-the-appeals/

Goldstein, D. (1994). ‘Yours for Science’: The Smithsonian Institution’s Correspondents and the Shape of Scientific Community in Nineteenth-Century America. Isis, 85(4), 573–599.

Grayson, R. (2016). A Life in the Trenches? The Use of Operation War Diary and Crowdsourcing Methods to Provide an Understanding of the British Army’s Day-to-Day Life on the Western Front. British Journal for Military History, 2(2). Retrieved from http://bjmh.org.uk/index.php/bjmh/article/view/96

Hess, W. (2010, February 16). Onboarding: Designing Welcoming First Experiences. Retrieved 29 July 2014, from UX Magazine website: http://uxmag.com/articles/onboarding-designing-welcoming-first-experiences

Holley, R. (2009). Many Hands Make Light Work: Public Collaborative OCR Text Correction in Australian Historic Newspapers (No. March). Canberra: National Library of Australia.

Holley, R. (2010). Crowdsourcing: How and Why Should Libraries Do It? D-Lib Magazine, 16(3/4). https://doi.org/10.1045/march2010-holley

Holmes, K. (2003). Volunteers in the heritage sector: A neglected audience? International Journal of Heritage Studies, 9(4), 341–355. https://doi.org/10.1080/1352725022000155072

Kittur, A., Nickerson, J. V., Bernstein, M., Gerber, E., Shaw, A., Zimmerman, J., … Horton, J. (2013). The future of crowd work. Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 1301–1318. Retrieved from http://dl.acm.org/citation.cfm?id=2441923

Lambert, S., Winter, M., & Blume, P. (2014, March 26). Getting to where we are now. Retrieved 4 March 2015, from 10most.org.uk website: http://10most.org.uk/content/getting-where-we-are-now

Lascarides, M., & Vershbow, B. (2014). What’s on the menu?: Crowdsourcing at the New York Public Library. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Latimer, J. (2009, February 25). Letter in the Attic: Lessons learnt from the project. Retrieved 17 April 2014, from My Brighton and Hove website: http://www.mybrightonandhove.org.uk/page/letterintheatticlessons?path=0p116p1543p

Lazy Registration design pattern. (n.d.). Retrieved 9 December 2018, from UI Patterns website: http://ui-patterns.com/patterns/LazyRegistration

Leon, S. M. (2014). Build, Analyse and Generalise: Community Transcription of the Papers of the War Department and the Development of Scripto. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52.

McGonigal, J. (n.d.). Gaming the Future of Museums. Retrieved from http://www.slideshare.net/avantgame/gaming-the-future-of-museums-a-lecture-by-jane-mcgonigal-presentation#text-version

Mills, E. (2017, December). The Flitch of Bacon: An Unexpected Journey Through the Collections of the British Library. Retrieved 17 August 2018, from British Library Digital Scholarship blog website: http://blogs.bl.uk/digital-scholarship/2017/12/the-flitch-of-bacon-an-unexpected-journey-through-the-collections-of-the-british-library.html

Mitra, T., & Gilbert, E. (2014). The Language that Gets People to Give: Phrases that Predict Success on Kickstarter. Retrieved from http://comp.social.gatech.edu/papers/cscw14.crowdfunding.mitra.pdf

Mugar, G., Østerlund, C., Hassman, K. D., Crowston, K., & Jackson, C. B. (2014). Planet Hunters and Seafloor Explorers: Legitimate Peripheral Participation Through Practice Proxies in Online Citizen Science. Retrieved from http://crowston.syr.edu/sites/crowston.syr.edu/files/paper_revised%20copy%20to%20post.pdf

Mugar, G., Østerlund, C., Jackson, C. B., & Crowston, K. (2015). Being Present in Online Communities: Learning in Citizen Science. Proceedings of the 7th International Conference on Communities and Technologies, 129–138. https://doi.org/10.1145/2768545.2768555

Museums, Libraries and Archives Council. (2008). Generic Learning Outcomes. Retrieved 8 September 2014, from Inspiring Learning website: http://www.inspiringlearningforall.gov.uk/toolstemplates/genericlearning/

National Archives of Australia. (n.d.). ArcHIVE – homepage. Retrieved 18 June 2014, from ArcHIVE website: http://transcribe.naa.gov.au/

Nielsen, J. (1995). 10 Usability Heuristics for User Interface Design. Retrieved 29 April 2014, from http://www.nngroup.com/articles/ten-usability-heuristics/

Nov, O., Arazy, O., & Anderson, D. (2011). Technology-Mediated Citizen Science Participation: A Motivational Model. Proceedings of the AAAI International Conference on Weblogs and Social Media, Barcelona, Spain.

Oomen, J., Gligorov, R., & Hildebrand, M. (2014). Waisda?: Making Videos Findable through Crowdsourced Annotations. In M. Ridge (Ed.), Crowdsourcing Our Cultural Heritage. Retrieved from http://www.ashgate.com/isbn/9781472410221

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive Load Theory and Instructional Design: Recent Developments. Educational Psychologist, 38(1), 1–4. https://doi.org/10.1207/S15326985EP3801_1

Part I: Building a Great Project. (n.d.). Retrieved 9 December 2018, from Zooniverse Help website: https://help.zooniverse.org/best-practices/1-great-project/

Preist, C., Massung, E., & Coyle, D. (2014). Competing or aiming to be average?: Normification as a means of engaging digital volunteers. Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, 1222–1233. https://doi.org/10.1145/2531602.2531615

Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., … Vandenberg, J. (2010). Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers. Astronomy Education Review, 9(1), 18.

Raimond, Y., Smethurst, M., & Ferne, T. (2014, September 15). What we learnt by crowdsourcing the World Service archive. Retrieved 15 September 2014, from BBC R&D website: http://www.bbc.co.uk/rd/blog/2014/08/data-generated-by-the-world-service-archive-experiment-draft

Reside, D. (2014). Crowdsourcing Performing Arts History with NYPL’s ENSEMBLE. Presented at the Digital Humanities 2014. Retrieved from http://dharchive.org/paper/DH2014/Paper-131.xml

Ridge, M. (2011a). Playing with Difficult Objects – Game Designs to Improve Museum Collections. In J. Trant & D. Bearman (Eds.), Museums and the Web 2011: Proceedings. Retrieved from http://www.museumsandtheweb.com/mw2011/papers/playing_with_difficult_objects_game_designs_to

Ridge, M. (2011b). Playing with difficult objects: Game designs for crowdsourcing museum metadata (MSc Dissertation, City University London). Retrieved from http://www.miaridge.com/my-msc-dissertation-crowdsourcing-games-for-museums/

Ridge, M. (2013). From Tagging to Theorizing: Deepening Engagement with Cultural Heritage through Crowdsourcing. Curator: The Museum Journal, 56(4).

Ridge, M. (2014, November). Citizen History and its discontents. Presented at the IHR Digital History Seminar, Institute for Historical Research, London. Retrieved from https://hcommons.org/deposits/item/hc:17907/

Ridge, M. (2015). Making digital history: The impact of digitality on public participation and scholarly practices in historical research (Ph.D., Open University). Retrieved from http://oro.open.ac.uk/45519/

Ridge, M. (2018). British Library Digital Scholarship course 105: Exercises for Crowdsourcing in Libraries, Museums and Cultural Heritage Institutions. Retrieved from https://docs.google.com/document/d/1tx-qULCDhNdH0JyURqXERoPFzWuCreXAsiwHlUKVa9w/

Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., … Jacobs, D. (2012). Dynamic changes in motivation in collaborative citizen-science projects. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, 217–226. https://doi.org/10.1145/2145204.2145238

Sample Ward, A. (2011, May 18). Crowdsourcing vs Community-sourcing: What’s the difference and the opportunity? Retrieved 6 January 2013, from Amy Sample Ward’s Version of NPTech website: http://amysampleward.org/2011/05/18/crowdsourcing-vs-community-sourcing-whats-the-difference-and-the-opportunity/

Schmitt, J. R., Wang, J., Fischer, D. A., Jek, K. J., Moriarty, J. C., Boyajian, T. S., … Socolovsky, M. (2014). Planet Hunters. VI. An Independent Characterization of KOI-351 and Several Long Period Planet Candidates from the Kepler Archival Data. The Astronomical Journal, 148(2), 28. https://doi.org/10.1088/0004-6256/148/2/28

Secord, A. (1994). Corresponding interests: Artisans and gentlemen in nineteenth-century natural history. The British Journal for the History of Science, 27(04), 383–408. https://doi.org/10.1017/S0007087400032416

Shakespeare’s World Talk #OED. (Ongoing). Retrieved 21 April 2019, from https://www.zooniverse.org/projects/zooniverse/shakespeares-world/talk/239

Sharma, P., & Hannafin, M. J. (2007). Scaffolding in technology-enhanced learning environments. Interactive Learning Environments, 15(1), 27–46. https://doi.org/10.1080/10494820600996972

Shirky, C. (2011). Cognitive surplus: Creativity and generosity in a connected age. London, U.K.: Penguin.

Silvertown, J. (2009). A new dawn for citizen science. Trends in Ecology & Evolution, 24(9), 467–71. https://doi.org/10.1016/j.tree.2009.03.017

Simmons, B. (2015, August 24). Measuring Success in Citizen Science Projects, Part 2: Results. Retrieved 28 August 2015, from Zooniverse website: https://blog.zooniverse.org/2015/08/24/measuring-success-in-citizen-science-projects-part-2-results/

Simon, N. K. (2010). The Participatory Museum. Retrieved from http://www.participatorymuseum.org/chapter4/

Smart, P. R., Simperl, E., & Shadbolt, N. (2014). A Taxonomic Framework for Social Machines. In D. Miorandi, V. Maltese, M. Rovatsos, A. Nijholt, & J. Stewart (Eds.), Social Collective Intelligence: Combining the Powers of Humans and Machines to Build a Smarter Society. Retrieved from http://eprints.soton.ac.uk/362359/

Smithsonian Institution Archives. (2012, March 21). Meteorology. Retrieved 25 November 2017, from Smithsonian Institution Archives website: https://siarchives.si.edu/history/featured-topics/henry/meteorology

Springer, M., Dulabahn, B., Michel, P., Natanson, B., Reser, D., Woodward, D., & Zinkham, H. (2008). For the Common Good: The Library of Congress Flickr Pilot Project (pp. 1–55). Retrieved from Library of Congress website: http://www.loc.gov/rr/print/flickr_report_final.pdf

Stebbins, R. A. (1997). Casual leisure: A conceptual statement. Leisure Studies, 16(1), 17–25. https://doi.org/10.1080/026143697375485

The Culture and Sport Evidence (CASE) programme. (2011). Evidence of what works: Evaluated projects to drive up engagement (No. January; p. 19). Retrieved from Culture and Sport Evidence (CASE) programme website: http://www.culture.gov.uk/images/research/evidence_of_what_works.pdf

Trant, J. (2009). Tagging, Folksonomy and Art Museums: Results of steve.museum’s research (p. 197). Retrieved from Archives & Museum Informatics website: https://web.archive.org/web/20100210192354/http://conference.archimuse.com/files/trantSteveResearchReport2008.pdf

United States Government. (n.d.). Federal Crowdsourcing and Citizen Science Toolkit. Retrieved 9 December 2018, from CitizenScience.gov website: https://www.citizenscience.gov/toolkit/

Van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a learner’s mind: Instructional design for complex learning. Educational Psychologist, 38(1), 5–13.

Vander Wal, T. (2007, February 2). Folksonomy. Retrieved 8 December 2018, from Vanderwal.net website: http://vanderwal.net/folksonomy.html

Veldhuizen, B., & Keinan-Schoonbaert, A. (2015, February 11). MicroPasts: Crowdsourcing Cultural Heritage Research. Retrieved 8 December 2018, from Sketchfab Blog website: https://blog.sketchfab.com/micropasts-crowdsourcing-cultural-heritage-research/

Verwayen, H., Fallon, J., Schellenberg, J., & Kyrou, P. (2017). Impact Playbook for museums, libraries and archives. Europeana Foundation.

Vetter, J. (2011). Introduction: Lay Participation in the History of Scientific Observation. Science in Context, 24(02), 127–141. https://doi.org/10.1017/S0269889711000032

von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51(8), 57. https://doi.org/10.1145/1378704.1378719

Wenger, E. (2010). Communities of practice and social learning systems: The career of a concept. In Social Learning Systems and communities of practice. Springer Verlag and the Open University.

Whitenton, K. (2013, December 22). Minimize Cognitive Load to Maximize Usability. Retrieved 12 September 2014, from Nielsen Norman Group website: http://www.nngroup.com/articles/minimize-cognitive-load/

WieWasWie Project informatie. (n.d.). Retrieved 1 August 2014, from VeleHanden website: http://velehanden.nl/projecten/bekijk/details/project/wiewaswie_bvr

Willett, K. (n.d.). New paper: Galaxy Zoo and machine learning. Retrieved 31 March 2015, from Galaxy Zoo website: http://blog.galaxyzoo.org/2015/03/31/new-paper-galaxy-zoo-and-machine-learning/

Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 17(2), 89–100.

What big topics in Digital Humanities should a reading group discuss in 2021?

This is a thrown-together post to capture responses to a question I asked on twitter last week. The Digital Scholarship Reading Group I run at the British Library will spend the first meeting of 2021 collaboratively planning topics to discuss in the rest of the year, so to broaden my understanding of what might be discussed, I posted, 'A question for people interested / working in Digital Humanities – what do you think are the big topics for 2021? Or what's not, but should be a focus? … New publications or conference papers welcome!'.

And since I was asking people for suggestions, it seemed like the right time to share something we'd been thinking about for a while: 'we've decided to open our discussions to people outside the British Library / Turing Institute! We'll alternate between 11am-12pm and 3-4pm meeting times on the first Tuesday of each month'. I haven't sorted the logistics for signing up – should it be on a session by session basis, or should we just add people's email address to the generic meeting request so they get the updates? (Will they get the updates, given how defensive and awful email is for collaboration these days?)

I also posted links: 'For context, here's what we read up to early 2018 What do deep learning, community archives, Livy and the politics of artefacts have in common? and a themed summary, Readings at the intersection of digital scholarship and anti-racism'.

Responses to date are below. I didn't want to faff about with embedded tweets because they're more likely to break over time, so I've just indented replies with the username at the start.

Claire Boardman @boardman_claire The environmental impact of DH? Conversational AI and collections?

Jajwalya Karajgikar @JajRK Large language models, and computational text analysis overall?

                @mia_out As in models that use very large amounts of training data? And yes, we should do more on CTA, I think we could probably get broader coverage of methods, thanks for the prompt!

                Jajwalya Karajgikar @JajRK Models that use deep learning for language prediction; GPT-3 I think someone mentioned on the thread already?

Thomas Padilla @thomasgpadilla Social justice and DH – though all work that frames current strife as a new thing vs. a longstanding pervasive reality should be tossed into an abyss to make way for others

                @mia_out I won't ask you to name and shame bad pieces, but let me know if you have any favs that do it well!

Thomas Padilla @thomasgpadilla Ha! On the collections side @dorothyjberry  has a piece or two brewing.  @ess_ell_zee  work here is good too I think https://journal.code4lib.org/articles/14667

                @artepublico peeps like @gbaezaventura and @rayenchil and the @MellonFdn supported Latinx DH program are good places to look

                Same goes for @profgabrielle and all the @CCP_org  work is fantastic

Wilhelmina Randtke @randtke Long term sustained funding. Acknowledging, and even compiling a list of, projects that have had resources eliminated or been completely discontinued since March.

                Jenny Fewster @Fewster Absolutely! This is a problem internationally. Dig hums projects set up with one off funding that then aren’t sustained. Unfortunately digital projects are not a “set and forget” prospect. It’s a colossal waste of time, effort, knowledge and money

Matthew Hannah @TinkeringHuman I think we need/will see more work about the limits of neoliberal capitalism, the academy, and DH, applications of critical university studies and Marxist theory. Esp as higher ed continues to implode.

                @mia_out Sounds very timely! I don't suppose you have any papers or presentations in mind?

                Matthew Hannah @TinkeringHuman Claire Potter’s piece in Radical Teacher is also an inspiration: https://pdfs.semanticscholar.org/c3c0/b0f853710a56b13b0d232b3b435a19bf59a7.pdf

                But we need more engagement I think around the question of precarity and economics imo

                See also: https://jimmcgrath.us/blog/new-publication-precarious-labor-in-the-digital-humanities-american-quarterly-70-3/

Johan Oomen @johanoomen Detecting polyvocality in heritage collections and navigating this underexplored dimension to investigate shifting viewpoints over time. Could also be a great opportinity for crowdsourcing projects, to encourage contemporary users to voice their opinions on contentious topics.

                @mia_out Ooh, that's a really juicy one – lots of potential and lots of pitfalls

Erik Champion @nzerik The influence of social media on politics? The failure of social media apps, webchat etc to compensate for lockdown distancing? Govt and corp control on personal data? Big companies controlling VR devices and personal +physiological data?

                @mia_out As seen recently when people were annoyed they had to do a Google Recaptcha on a COVID test site

                Erik Champion @nzerik Bots need vaccines too! (Equality for bots trojans and spam machines #101)

Alexander Doria @Dorialexander On the technical side, optical manuscript recognition and layout analysis (especially for newspaper archives): mature tools are just emerging and that can change a lot in terms of corpus availability, research directions and digitization choices.

                @mia_out There is so much interesting work on newspapers right now! It feels like scholarship is going to have a quite different starting point in just a few years. Periodicals less so, maybe because they're more specialised and less (family history) name rich?

                Alexander Doria @Dorialexander Yes that's true. Perhaps also because they are less challenging both technically and intellectually (it's not that much of a stretch to go from book studies to the periodicals).

Alexander Doria @Dorialexander (On the social side I would say there is a long overdue uncomfortable discussion about the reliance of the field to diverse forms of digital labors: from the production of digitized archives in developing countries to the large use of students as a cheap/unpaid labor force)

                @mia_out That ties in with ideas from @TinkeringHuman

Max Kemman @MaxKemman I think we'll be seeing more about Computational Humanities and how it relates to Digital Humanities, for which a good starting point will be the @CompHumResearch conference proceedings http://ceur-ws.org/Vol-2723/

                @mia_out Ooh, we could have a debate or discussion about the difference!

                Lauren Tilton @nolauren And intersection/ difference from Data Science

                @mia_out Good call, the lines are becoming increasingly blurred, hopefully in more good ways than bad

Gabriel Hankins @GabrielHankins GPT-3 and algorithmic composition. Interested in the conversation if you open it!

And finally, one reason I collected these responses was:

Michael Lascarides @mlascarides A feature I wish Twitter had: When I see someone influential in a domain I'm interested in ask a really great question, I want to bookmark that question to return to in a couple of days once the responses have come in. It's a use case a bit more specific than a "like".

                Michael Lascarides @mlascarides Inspired most recently by [my] Q, but it comes up about once a week for me

Useful distractions: help cultural heritage and scientific projects from home

Today I came across the term 'terror-scrolling', a good phrase to describe the act of glancing from one COVID-19 update to another. While you can check out galleries, libraries, archives and museums content online or explore the ebooks, magazines and other digital items available from your local library, you might also want to help online projects from scientific and cultural heritage organisations. You can call it 'online volunteering' or 'crowdsourcing', but the key point is that these projects offer a break from the everyday while contributing to a bigger goal.

Not commuting at the moment? Need to channel some energy into something positive? You can help transcribe historical text that computers can't read, or sort scientific images. And don't worry – these sites will let you know what skills are required, you can often try a task before registering, and they have built-in methods for dealing with any mistakes you might make at the start.

Here's a list of sites that have a variety of different kinds of tasks / content to work on:

Some of these sites offer projects in languages other than English, and I've collected additional multi-lingual / international sites at Crowdsourcing the world’s heritage – I'm working on an update that'll make it easy to find current, live projects but (ironically, for someone who loves taking part in projects) I can't spend much time at my desk right now so it's not ready just yet.

Stuck at home? View cultural heritage collections online

With people self-isolating to slow the spread of the COVID-19 pandemic, parents and educators (as well as people looking for an art or history fix) may be looking to replace in-person trips to galleries, libraries, archives and museums* with online access to images of artefacts and information about them. GLAMs have spent decades getting some of the collections digitised and online so that you can view items and information from home.

* Collectively known as 'GLAMs' because it's a mouthful to say each time

Search a bunch of GLAM portals at once

I've made a quick 'custom search engine' so you can search most of the sites above with one Google search box. Search a range of portals that collect digitised objects, texts and media from galleries, libraries, archives and museums internationally:

The direct link is https://cse.google.com/cse?cx=006190492493219194770:xw0b7dfwb6b (it's just a search box, without any context, but it means you can do a search without loading this whole post)

Collections, deep zoom and virtual tour portals

Various platforms have large collections of objects from different institutions, in formats ranging from 'virtual exhibitions' or 'tours' to 'deep zooms' to catalogue-style pages about objects. I've focused on sites that include collections from multiple institutions, but this also means some of them are huge and you'll have to explore a bit to find relevant content. Try:

Other links

Various articles have collected institution-specific links to different forms of virtual tours. Try:

Things are moving fast, so let me know about other sets of links to collections, stories and tours online that'll help people staying home get their fix of history and culture and I'll update this post. Comment below, email me or @mia_out on twitter.

Screenshot from https://www.europeana.eu/portal/en
Europeana is just one of many online portals to images, stories, deep zooms and virtual tours / exhibitions from galleries, libraries, archives and museums internationally

Festival of Maintenance talk: Apps, microsites and collections online: innovation and maintenance in digital cultural heritage

I came to Liverpool for the 'Festival of Maintenance', a celebration of maintainers. I'm blogging my talk notes so that I'm not just preaching to the converted in the room. As they say:

'Maintenance and repair are just as important as innovation, but sometimes these ideas seem left behind. Amidst the rapid pace of innovation, have we missed opportunities to design things so that they can be fixed?'.

Liverpool 2019: Maintenance in Complex and Changing Times

Apps, microsites and collections online: innovation and maintenance in digital cultural heritage

My talk was about different narratives about 'digital' in cultural heritage organisations and how they can make maintenance harder or easier to support and resource. If last year's innovation is this year's maintenance task, how do we innovate to meet changing needs while making good decisions about what to maintain? At one museum job I calculated that c.85% of my time was spent on legacy systems, leaving less than a day a week for new work, so it's a subject close to my heart.

I began with an introduction to 'What does a cultural heritage technologist do?'. I might be a digital curator now but my roots lie in creating and maintaining systems for managing and sharing collections information and interpretative knowledge. This includes making digitised items available as individual items or computationally-ready datasets. There was also a gratuitous reference to Abba to illustrate the GLAM (galleries, libraries, archives and museums) acronym.

What do galleries, libraries, archives and museums have to maintain?

Exhibition apps and audio guides. Research software. Microsites by departments including marketing, education, fundraising. Catalogues. More catalogues. Secret spreadsheets. Digital asset management systems. Collections online pulled from the catalogue. Collections online from a random database. Student projects. Glueware. Ticketing. Ecommerce. APIs. Content on social media sites, other 3rd party sites and aggregators. CMS. CRM. DRM. VR, AR, MR.

Stories considered harmful

These stories mean GLAMs aren't making the best decisions about maintaining digital resources:

  • It's fine for social media content to be ephemeral
  • 'Digital' is just marketing, no-one expects it to be kept
  • We have limited resources, and if we spend them all maintaining things then how will we build the new cool things the Director wants?
  • We're a museum / gallery / library / archive, not a software development company, what do you mean we have to maintain things?
  • What do you mean, software decays over time? People don't necessarily know that digital products are embedded in a network of software dependencies. User expectations about performance and design also change over time.
  • 'Digital' is just like an exhibition; once it's launched you're done. You work really hard in the lead-up to the opening, but after the opening night you're free to move onto the next thing
  • That person left, it doesn't matter anymore. But people outside won't know that – you can't just let things drop.

Why do these stories matter?

If you don't make conscious choices about what to maintain, you're leaving it to fate.

Today's ephemera is tomorrow's history. Organisations need to be able to tell their own history. They also need to collect digital ephemera so that we can tell the history of wider society. (Social media companies aren't archives for your photos, events and stories.)

Better stories for the future

  • You can't save everything: make the hard choices. Make conscious decisions about what to maintain and how you'll close the things you can't maintain. Assess the likely lifetime of a digital product before you start work and build it into the roadmap.
  • Plan for a graceful exit – for all stakeholders. What lessons need to be documented and shared? Do you need to let any collaborators, funders, users or fans know? Can you make it web archive ready? How can you export and document the data? How can you document the interfaces and contextual reasons for algorithmic logic?
  • Refresh little and often, where possible. It's a pain, but it means projects stay in institutional memory.
  • Build on standards, work with communities. Every collection is a special butterfly, but if you work on shared software and standards, someone else might help you maintain it. IIIF is a great example of this (see the sketch after this list).
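
To make the IIIF point concrete, here's a minimal sketch of why a shared Image API helps: the same few lines of client code work against any institution's IIIF image server. The base URL and identifier below are hypothetical placeholders, not a real service.

```python
# Minimal sketch: any IIIF Image API server answers the same URL pattern,
# so one small client works across institutions.
# The base URL and identifier are hypothetical placeholders.
import requests

IIIF_BASE = "https://iiif.example.org/image"   # hypothetical image service
IDENTIFIER = "manuscript-001"                  # hypothetical image identifier

def iiif_url(identifier, region="full", size="600,", rotation=0,
             quality="default", fmt="jpg"):
    """Build a IIIF Image API request: {region}/{size}/{rotation}/{quality}.{format}."""
    return f"{IIIF_BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# info.json describes the sizes and tiles the server offers
info = requests.get(f"{IIIF_BASE}/{IDENTIFIER}/info.json").json()
print(info.get("width"), info.get("height"))

# Request a 600px-wide rendering of the whole image
print(iiif_url(IDENTIFIER))
```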

Also:

  • Check whether your websites are archive-ready with archiveready.com (and nominate UK websites for the UK Web Archive) – there's a rough sketch of the idea after this list
  • Look to expert advice on digital preservation
  • Support GLAMs with the legislative, rights and technical challenges of collecting digital ephemera. It's hard to collect social media, websites, podcasts, games, emerging formats, but if we don't, how will we tell the story of 'now' in the future?
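
As a rough illustration of what 'archive ready' means in practice – this isn't the archiveready.com service, just a sketch of two of the basics any web-archivability review looks at – you can check whether a site exposes a robots.txt and a sitemap:

```python
# Rough, illustrative check of two basics that affect web-archivability:
# is robots.txt reachable, and is a sitemap advertised or present?
# Not the archiveready.com service – just a sketch of the idea.
import requests
from urllib.parse import urljoin

def quick_archivability_check(site):
    results = {}
    robots = requests.get(urljoin(site, "/robots.txt"), timeout=10)
    results["robots.txt present"] = robots.status_code == 200
    results["sitemap referenced in robots.txt"] = (
        "sitemap" in robots.text.lower() if robots.ok else False
    )
    sitemap = requests.get(urljoin(site, "/sitemap.xml"), timeout=10)
    results["sitemap.xml present"] = sitemap.status_code == 200
    return results

print(quick_archivability_check("https://example.org"))
```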

And it's been on my mind a lot lately, but I didn't include it: consider the carbon footprint of cloud computing and machine learning, because we also need to maintain the planet.

In closing, I'd slightly adapt the Festival's line: 'design things so that they can be fixed or shut down when their job is done'. I'm sure I've missed some better stories that cultural institutions could tell themselves – let me know what you think!

Two of the organisers introducing the Festival of Maintenance event

Museums + AI, New York workshop notes

I’ve just spent Monday and Tuesday in New York for a workshop on ‘Museums + AI’. Funded by the AHRC and led by Oonagh Murphy and Elena Villaespesa, this was the second workshop in the year-long project.

Photo of workshop participants
Workshop participants

As there’s so much interest in artificial intelligence / machine learning / data science right now, I thought I’d revive the lost art of event blogging and share my notes. These notes are inevitably patchy, so keep an eye out for more formal reports from the team. I’ve used ‘museum’ throughout, as in the title of the event, but many of these issues are relevant to other collecting institutions (libraries, archives) and public venues. I’m writing this on the Amtrak to DC so I’ve been lazy about embedding links in text – sorry!

After a welcome from Pratt (check out their student blog https://museumsdigitalculture.prattsi.org/), Elena’s opening remarks introduced the two themes of the workshop: AI + visitor data and AI + Collections data. Questions about visitor data include whether museums have the necessary data governance and processes in place; whether current ethical codes and regulations are adequate for AI; and what skills staff might need to gain visitor insights with AI. Questions about collections data include how museums can minimise algorithmic biases when interpreting collections; whether the lack of diversity in both museum and AI staff would be reflected in the results; and the implications of museums engaging with big tech companies.

Achim Koh’s talk raised many questions I’ve had as we’ve thought about AI / machine learning in the library, including how staff traditionally invested with the authority to talk about collections (curators, cataloguers) would feel about machines taking on some of that work. I think we’ve broadly moved past that at the library, if we can assume that we’d work within systems that can distinguish between ‘gold standard’ records created by trained staff and those created by software (with crowdsourced data somewhere in between, depending on the project).
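
As a purely hypothetical sketch of what that distinction might look like in a record schema (the field names are mine, not anyone's actual system), each description could carry its provenance and, for machine output, a confidence score and model version:

```python
# Hypothetical sketch: tag each description with its provenance so downstream
# systems can treat curator-created, crowdsourced and machine-generated
# descriptions differently. Field names and sample data are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DescriptionRecord:
    item_id: str
    text: str
    provenance: str                       # "curator", "crowdsourced" or "machine"
    confidence: Optional[float] = None    # only meaningful for machine output
    model_version: Optional[str] = None   # which software produced it

records = [
    DescriptionRecord("obj-1", "Hand-coloured map, 18th century", "curator"),
    DescriptionRecord("obj-1", "map, paper, handwriting", "machine",
                      confidence=0.82, model_version="tagger-v1"),
]

# e.g. only surface machine output above a confidence threshold
trusted = [r for r in records
           if r.provenance != "machine" or (r.confidence or 0) > 0.9]
print(trusted)
```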

John Stack and Jamie Unwin from the (UK) Science Museum shared some of the challenges of using pre-built commercial models (AWS Rekognition and Comprehend) on museum collections – anything long and thin is marked as a 'weapon' – and demonstrated a nice tool for seeing 'what the machine saw': https://johnstack.github.io/what-the-machine-saw/. They don’t currently show machine-generated tags to users, but they’re used behind the scenes for discoverability. Do we need more transparency about how search results are generated? And will machine tags ever be completely safe to show people without vetting, even if confidence scores and software versions are included with the tags?
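
For the curious, this is roughly the shape of a Rekognition call – a hedged sketch, not the Science Museum's actual pipeline – keeping the confidence score and model version alongside each machine tag so they can be vetted before anyone sees them:

```python
# Sketch of generating machine tags with AWS Rekognition while recording the
# confidence score and model version for each tag. Assumes AWS credentials are
# configured; the local image filename is a hypothetical example.
import boto3

rekognition = boto3.client("rekognition")

def machine_tags(image_bytes, min_confidence=60):
    response = rekognition.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=10,
        MinConfidence=min_confidence,
    )
    model_version = response.get("LabelModelVersion")
    return [
        {"tag": label["Name"],
         "confidence": label["Confidence"],
         "model_version": model_version}
        for label in response["Labels"]
    ]

with open("collection-image.jpg", "rb") as f:   # hypothetical local image
    for tag in machine_tags(f.read()):
        print(tag)
```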

(If you’d like to see what all the tagging fuss is about, I have an older hands-on work sheet for trying text and images with machine classification software at https://www.openobjects.org.uk/2017/02/trying-computational-data-generation-and-entity-extraction/ )

Andrew Lih talked about image classification work with the Metropolitan Museum and Wikidata, which picked up on the issue of questionable tags. Wikidata has a game-based workflow for tagging items which, in addition to tools for managing vandalism or miscreants, allows them to trust the ‘crowd’ and make edits live immediately. Being able to sift incorrect from correct tags is vital – but this in turn raises questions of ‘round tripping’: should a cultural institution ingest the corrections? (I noticed this issue coming up a few times because it’s something we’ve been thinking about as we work with a volunteer creating Wikidata that will later be editable by anyone.) Andrew said that the Met project put AI more firmly into the Wikimedia ecosystem, and that more is likely to come. He closed by demonstrating how the data created could put collections in the centre of networks of information: http://w.wiki/6Bf. Keep an eye out for the Wiki Art Depiction Explorer: https://docs.google.com/presentation/d/1H87K5yjlNNivv44vHedk9xAWwyp9CF9-s0lojta5Us4/edit#slide=id.g34b27a5b18_0_435
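
To give a flavour of what that Wikidata work enables – a small sketch, with Q146 ('house cat') used purely as an example subject – anyone can query the public SPARQL endpoint for items that depict (P180) a given thing:

```python
# Small sketch: query the public Wikidata SPARQL endpoint for items that
# depict (P180) a given subject. Q146 ('house cat') is just an example.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P180 wd:Q146 .          # items that depict a house cat
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "glam-example/0.1"},  # polite, identifiable client
)
for row in response.json()["results"]["bindings"]:
    print(row["itemLabel"]["value"], row["item"]["value"])
```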

Jeff Steward from Harvard Art Museums gave a thoughtful talk about how different image tagging and captioning tools (Google Vision, Imagga, Clarifai, Microsoft Cognitive Services) saw the collections – e.g. Imagga might talk about how fruit depicted in a painting tastes (sweet, juicy) or how a bowl is used (breakfast, celebration). Microsoft’s tagging and captioning tools take different views of the same image and don’t draw on each other.

Chris Alen Sula led a great session on ‘Ethical Considerations for AI’.

That evening, we went to an event at the Cooper Hewitt for more discussion of #MuseumsAI (https://twitter.com/hashtag/MuseumsAI) and the launch of their Interaction Lab (https://www.cooperhewitt.org/interaction-lab/). Andrea Lipps and Harrison Pim’s talks reminded me of earlier discussion about holding cultural institutions to account for the decisions they make about AI, surveillance capitalism and more. Workshops like this (and the resulting frameworks) can provide the questions, but senior staff must actually ask them, and pay attention to the answers. Karen Palmer’s talk got me thinking about what ‘democratising AI’ really means, and whether it’s possible to democratise something that relies on training data and access to computing power. Democratising knowledge about AI is a definite good, but should we also think about alternatives to AI that don’t involve classifications, and aren’t so closely linked to surveillance capitalism and ad tech?

The next day began with an inspiring talk from Effie Kapsalis on the Smithsonian Institution’s American Women’s History Initiative (https://womenshistory.si.edu/). They’re thinking about machine learning and collections as data to develop ethical guidelines for AI and gender, analysing representations of women in multidisciplinary collections, enhancing data at scale and infusing the web with semantic data on historical women.

Shannon Darrough, MoMA, talked about a machine learning project with Google Arts and Culture to identify artworks in 30,000 installation photos, based on 70,000 collection images: https://moma.org/calendar/exhibitions/history/identifying-art. It was great at identifying 2D works, less so 3D, installation, moving image or performance works. The project worked because they identified a clear problem that machine learning could solve. His talk led to discussion about sharing training models (i.e. once software is trained to specialise in particular subjects, others can re-use the ‘models’ that are created), and the alignment between tech companies’ goals (generally shorter-term and self-contained) and museums’ (longer-term, feeding into core systems).

I have fewer notes from talks by Lawrence Swiader (American Battlefield Trust) with good advice on human-centred processes, Juhee Park (V&A) on frameworks for thinking about AI and museums, Matthew Cock (VocalEyes) on chat bots for venue accessibility information, and Carolyn Royston and Rachel Ginsberg (on the Cooper Hewitt’s Interaction Lab), but they added to the richness of the day. My talk was on ‘operationalising AI at a national library’; my slides are online at https://www.slideshare.net/miaridge/operationalising-ai-at-a-national-library. The final activity was on ‘managing AI’, a subject that’s become close to my heart.