'Entrepreneurship and Social Media' and 'Collaborating to Compete'

[Update: I hope the speakers' presentations are posted, as they were all inspiring in their different ways. Bristol City Council's civic crowdsourcing projects had impressive participation rates, and Phil Higgins identified the critical success factors as choosing the right platform, using it at the right stage, and presenting the issue clearly. Joanne Orr talked about museum contexts for encapsulating the intangible, including language and practices (and recording intangible cultural heritage in a wiki), and I could sense the audience's excitement about Andrew Ellis' presentation on 'Your Paintings' and the crowdsourcing tagger developed for the Public Catalogue Foundation.]

I'm in Edinburgh for the Museums Galleries Scotland conference 'Collaborating to Compete'. I'm chairing a session on 'Entrepreneurship and Social Media'. In this context, the organisers defined entrepreneurship as 'doing things innovatively and differently', including new and effective ways of working. This session is all about working in partnerships and collaborating with the public. The organisers asked me to talk about my own research as well as introducing the session. I'm posting my notes in advance to save people having to scribble them down, and I'll try to post back with notes from the session presentations.

Anyway, on with my notes…

Welcome to this session on entrepreneurship and social media. Our speakers are going to share their exciting work with museum collections and cultural heritage.  Their projects demonstrate the benefits of community participation, of opening up to encourage external experts to share their knowledge, and of engaging the general public with the task of improving access to cultural heritage for all.  The speakers have explored innovative ways of working, including organisational partnerships and low-cost digital platforms like social media.  Our speakers will discuss the opportunities and challenges of collaborating with audiences, the issues around authority, identity and trust in user-generated content, and they'll reflect on the challenges of negotiating partnerships with other organisations or with 'the crowd'.

You'll hear about two different approaches to crowdsourcing from Phil Higgins and Andy Ellis, and about how the 'Intangible Cultural Heritage' project helps a diverse range of people collaborate to create knowledge for all.

I'll also briefly discuss my own research into crowdsourcing through games as an example of innovative forms of participation and engagement.

If you're not familiar with the term, crowdsourcing generally means asking the public to help with tasks that are traditionally performed in-house.

Until I left to start my PhD, I worked at the Science Museum in London, where I spent a lot of time thinking about how to make the history of science and technology more engaging, and the objects related to it more accessible. This inspired me when I was looking for a dissertation project for my MSc, so I researched and developed 'Museum Metadata Games' to explore how crowdsourcing games could get people to have fun while improving the content around 'difficult' museum objects.

Unfortunately (most) collections sites are not that interesting to the general public. There's a 'semantic gap' between the everyday language of the public and the language of catalogues.

Projects like steve.museum showed that crowdsourcing helps, but it can be difficult to get people to participate in large numbers or over a long period of time. Museums can be intimidating, and marketing your project to audiences can be expensive. But what if you made a crowdsourcing interface that made people want to use it, and to tell their friends to use it? Something like… a game?

A lot of people play games… 20 million people in the UK play casual games. And a lot of people play museum games. Games like the Science Museum's Launchball and the Wellcome Collection's High Tea have had millions of plays.

Crowdsourcing games are great at creating engaging experiences. They support low barriers to participation, and the ability to keep people playing. As an example, within one month of launching, DigitalKoot, a game for the National Library of Finland, had 25,000 visitors complete over 2 million individual tasks.

Casual game genres include puzzles, card games and trivia games. You've probably heard of Angry Birds and Solitaire, even if you don't think of yourself as a 'gamer'.

Casual games are perfect for public participation because they're designed for instant gameplay, and can be enjoyed in a few minutes or played for hours.

Easy, feel-good tasks will help people get started. Strong game mechanics, tested throughout development with your target audience, will motivate on-going play and keep people coming back.

Here’s a screenshot of the games I made.

In the tagging game 'Dora's lost data', the player meets Dora, a junior curator who needs their help replacing some lost data. Dora asks the player to add words that would help someone find the object shown in Google.
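For anyone curious about what a turn in a tagging game boils down to technically, here's a minimal JavaScript sketch (not the actual game code; the function and field names are hypothetical) of cleaning up a player's words and recording them against the object they were shown:

```javascript
// Illustrative sketch only, not the actual 'Dora's lost data' code.
// All names here (normaliseTag, saveTurn, objectId) are hypothetical.

// Clean up a player-submitted word before storing it as a tag.
function normaliseTag(rawTag) {
  return rawTag
    .trim()
    .toLowerCase()
    .replace(/[^\p{L}\p{N}\s-]/gu, ''); // strip punctuation, keep letters, numbers, spaces, hyphens
}

// Record one turn: the object shown, the player, and the tags they added.
// In the real project the turn would be persisted server-side.
function saveTurn(objectId, playerId, rawTags) {
  const tags = rawTags.map(normaliseTag).filter(Boolean); // drop empty strings
  return {
    objectId,                            // museum object the player was shown
    playerId,                            // logged-in player (e.g. a WordPress user ID)
    tags,                                // cleaned tags, to be aggregated or validated later
    createdAt: new Date().toISOString(), // when the turn happened
  };
}

console.log(saveTurn('obj-1884-12', 'player-7', ['Telescope!', '  brass ', 'ASTRONOMY']));
// → tags: ['telescope', 'brass', 'astronomy']
```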

When audiences can immediately identify an activity as a game – in this case the use of characters and a minimal narrative really helped – their usual reservations about contributing content to a museum site disappear.

The brilliant thing about game design is that you can tailor tasks and rewards to your data needs, and build tutorials into gameplay to match the player's skills to the game's challenges.

Fun is personal – design for the skills, abilities and motivations of your audience.

People like helping out – show them how their data is used so they can feel good about playing for a few minutes over a cup of tea.

You can make a virtue of the randomness of your content – if people can have fun with 100 historical astronomy objects, they can have fun with anything.

To conclude, crowdsourcing games can be fun and useful for the public and for museums. And now we're going to hear more about working with the public… [the end!]

Rockets, Lockets and Sprockets – towards audience models about collections?

This is something I wrote for my MSc dissertation, 'Playing with difficult objects: game designs for crowdsourcing museum metadata' (you can view the games I built for it at http://museumgam.es/ or read the paper, 'Playing with Difficult Objects – Game Designs to Improve Museum Collections', that I wrote for Museums and the Web 2011), about the role of 'distinctiveness' in mental models about collections. It's potentially relevant to discussions around telling stories with, and collecting metadata about, museum collections. I'm posting it here for reference in the conversation about instances vs classes of objects that arose on the UKMCG list after the release of NMSI (Science Museum, National Media Museum, National Railway Museum) data as CSV. One reason I've been thinking about 'distinctiveness' is that I'm wondering how we help people find the interesting records – the iconic objects, the intriguing stories – in a collection of 240,000 objects.

I'm interested in audiences' mental models about when a record refers to the type of object vs the individual object – my sense is that 'rockets', in the model below, are generally thought of as the individual object, and that 'sprockets' are thought of as the type of object, but that it varies for 'lockets', depending on how distinctive they are in relation to the person.

I'm also generally curious about the utility of the model, and would love to know of references that might relate to it (whether supporting or otherwise) – if you can think of any, let me know in the comments.

Not all objects are created equal

Both museum objects and the records about them vary in quality. Just as the physical characteristics of one object – its condition, rarity, etc – differ from another, the strength of its associations with important people, events or concepts will also vary. To complicate things further, as the Collections Council of Australia (2009) states, this 'significance' is 'relative, contingent and dynamic'.

When faced with hundreds of thousands of objects, a museum will digitise and describe objects prioritised by 'technical criteria (physical condition of the original material), content criteria (representativeness, uniqueness), and use criteria (demand)' (Karvonen, 2010). In theory, all objects are registered by the collecting institution, so a basic record exists for each. Hopefully, each has been catalogued and the information transcribed or digitised to some extent, but this is often not the case. Records are often missing descriptions, and most lack the contextual histories that would help the general visitor understand their significance. Some objects may only have an accession number and a one-word label, while those on display in a museum generally have well-researched metadata, detailed descriptions and related narratives or contextualised histories. Variable image quality (or lack of images) is an issue in collections in general. This project excludes object records without images but does include many poor-quality images as a result of importing records from a bulk catalogue.

This project posits that objects can be placed on a scale of 'distinctiveness' based on their visual attributes and the amount and quality of information about them. Within this project, bulk collections with minimal metadata and distinctiveness have been labelled 'sprockets', the smaller set of catalogued objects with some distinctiveness have been labelled 'lockets', and the unique, iconic objects with a full contextual history have been labelled 'rockets'. This concept also references the English Heritage 'building grades' model (DCMS, 2010). During the project, the labels 'heroic', 'semi-heroic' and 'bulk' objects were also used.

These labels are not concerned with actual 'significance' or other valuation or priority placed on the object, but relate only to the potential mental models around them and data related to them – the potential for players to discover something interesting about them as objects, or whether they can just tag them based on visual characteristics.

In theory there is a correlation between the significance of an object and the amount of information available about it; there may be particular opportunities for games where this is not the case.

Project label | Information type | Amount of information | Proportion of collection
Rockets | Subjective | Contextual history ('background, events, processes and influences') | Tiny minority
Lockets | Mostly objective, may be contextual to collection purpose | Catalogued (some description) | Minority
Sprockets | Objective | Registered (minimal) | Majority

Table 1 Objects grouped by distinctiveness

This can also be represented visually as a pyramid model:

Figure 2 A figurative illustration of the relative numbers of different levels of objects in a typical history museum.
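To make the model concrete, here's a minimal JavaScript sketch (purely illustrative; the field names like hasContextualHistory are hypothetical, and the real judgement is far fuzzier) of bucketing a record into 'rockets', 'lockets' or 'sprockets' based on the amount and type of information it holds:

```javascript
// Illustrative only: bucket a record by the information available about it.
// Field names (hasContextualHistory, description) are hypothetical.
function distinctiveness(record) {
  if (record.hasContextualHistory) {
    return 'rocket';   // full contextual history: the iconic, storied objects
  }
  if (record.description && record.description.trim().length > 0) {
    return 'locket';   // catalogued, with some description
  }
  return 'sprocket';   // registered only: an accession number and little else
}

console.log(distinctiveness({ description: 'Brass refracting telescope, c.1780' })); // 'locket'
console.log(distinctiveness({}));                                                    // 'sprocket'
```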

References
Department for Culture, Media and Sport (DCMS) (2010) Principles of Selection for Listing Buildings. [Online] Available from: http://www.english-heritage.org.uk/content/imported-docs/p-t/principles-of-selection-for-listing-buildings-2010.pdf

Karvonen, M. (2010). "Digitising Museum Materials – Towards Visibility and Impact". In Pettersson, S., Hagedorn-Saupe, M., Jyrkkiö, T., Weij, A. (Eds) Encouraging Collections Mobility In Europe. Collections Mobility. [Online] Available from: http://www.lending-for-europe.eu/index.php?id=167

Russell, R., and Winkworth, K. (2009). Significance 2.0: a guide to assessing the significance of collections. Collections Council of Australia. [Online] Available from: http://significance.collectionscouncil.com.au/

Interview about museum metadata games and a pretty picture

I haven't had a chance to follow up Design constraints and research questions: museum metadata games with a post about the design process for the museum metadata games I've made for my dissertation project (because, stupidly, I slipped on black ice and damaged my wrist), so in the meantime here's a link to an interview Seb Chan did with me for the Fresh+New blog, Interview with Mia Ridge on museum metadata games, and a Wordle of the tags added so far.

There have been nearly 700 turns on the games so far, which have collectively added about 30 facts (Donald’s detective puzzle) and just over 3,700 tags (Dora’s lost data).

Some of the 1,582 unique tags added so far

Design constraints and research questions: museum metadata games

Back in June I posted parts of my dissertation project outline in 'Game mechanics for social good: a case study on interaction models for crowdsourcing museum collections enhancement'. Since then, I've been getting on with researching, designing, building and evaluating museum metadata games (in my copious spare time after work, in a year when we launched three major galleries).

I'm planning to blog bits of my dissertation as I write it up so there'll be more posts over the next month, but for now I wanted to contextualise the two games I'm evaluating at the moment.  In the next post I'll talk about the changes I made after the first solid round of evaluation.

Casual games
The two games, nicknamed 'Dora' and 'Donald', are designed as casual games – something you can pick up and play for five minutes at a time.  Design goals included: an instantly playable game that provides stress relief, supports a competitive spirit (but not necessarily against other people), offers an inherently rewarding experience and simple gameplay, and puts 'fun before do-gooding'.  The games were designed around a specific research-based persona ('Meet Janet', pdf link) – hopefully it's exactly right for some people who are close to the persona in various ways, and quite fun for a wider group.  It won't suit everyone, not least because definitions of 'fun' and expectations around 'games' can be deeply individual.

Design constraints
The games are also designed to test ideas about the types of objects and records that can be used successfully, and the types of content people would be able to contribute about the less charismatic and emotionally accessible reaches of science, technology and social history collections – this means that some of the objects I've used are quite technical, not all the images are great and small variations on object records are repeated (risking 'not another bloody telescope').  While this might match the reality of museum catalogues, would it still allow for a fun game?

The realities of a project I was building in my free time and my lack of graphic design and illustration skills also provided constraints – it had to be browser-based, it couldn't rely on a critical mass of concurrent players to validate actions or content, it had to help the player dive straight into playing and overcome any fears about creating content about museum objects, and it had to use objects ingested through available museum APIs (I selected broad subjects for testing but didn't individually select any objects).
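As a rough sketch of the kind of ingest step mentioned above, fetching playable objects for a chosen subject term might look something like this in JavaScript. The endpoint, query parameter and response fields are entirely invented for illustration, not a real museum API:

```javascript
// Hypothetical sketch: fetch objects for a broad subject term from a museum API.
// The URL and response shape below are invented for illustration only.
async function ingestObjectsBySubject(subject) {
  const url = `https://example-museum-api.test/objects?subject=${encodeURIComponent(subject)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  const data = await response.json();

  // Keep only records with images, matching the project's constraint,
  // and map them to the fields the games need.
  return data.objects
    .filter((obj) => obj.imageUrl)
    .map((obj) => ({
      id: obj.id,
      title: obj.title,
      imageUrl: obj.imageUrl,
      description: obj.description || '',
    }));
}

// Broad subjects were chosen for testing rather than hand-picking objects:
// ingestObjectsBySubject('astronomy').then((objects) => console.log(objects.length));
```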

I then added a few extra constraints by deciding to build it as a WordPress plugin – I wanted to take advantage of the CMS-like framework for user logins, navigation and page layout, and I wanted the code I wrote to be usable by others without too much programming overhead.  I'll need to tidy up the code at the end, but once that's done you should be able to install it on any hosted WordPress installation.  I'm making a related plugin to help you populate the database with objects (also part of an experiment in the effectiveness of letting people choose their own subject areas or terms to select playable objects).  I'll talk more about how I worked with those constraints and how they informed the changes I made after evaluation in a later post.

Different games for different purposes
I've been thinking about a museum metadata game typology, which not only considers different types of fun, but also design constraints like:

  • the type and state of the collection (e.g. art works, technical/specialist and social history objects; photographs and other media vs objects; reference collections vs selected highlights; 'tombstone' vs general vs interpretative records)
  • the type of data sought including information curators could add if they had infinite time (detail on the significance of the object, links to other subjects, people, events, objects, collections, etc); information that can be extrapolated from the existing catalogue record; things curators couldn't know (personal history, experiential accounts about the design, manufacture, use, disposal etc of objects); emotional responses; external specialist knowledge; amateur/hobbyist specialist knowledge; synonyms in every day language; terms in other spoken languages

I've also been playing with the idea of linking different game types to different 'life stages' of museum collection metadata.  For example, some games could help a museum work out which of its catalogued items seem more interesting to the public, others could help gather tags, create links between items or encourage players to research objects and record new information or links about them, and others still could work well for validating data created in earlier games.  The data I gather through evaluating the games I've designed will help test this model.
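A very rough JavaScript sketch of that typology idea (the stage and game-type labels are mine, purely for illustration):

```javascript
// Illustrative mapping of metadata 'life stages' to game types that might suit them.
// Stage and game-type names are invented labels for the idea described above.
const gamesByMetadataStage = {
  catalogued:     ['interestingness voting'],         // find which items appeal most to the public
  beingEnriched:  ['tagging', 'linking', 'research'], // gather tags, create links, record new information
  playerEnhanced: ['validation'],                     // check data created in earlier games
};

function suggestGameTypes(stage) {
  return gamesByMetadataStage[stage] || [];
}

console.log(suggestGameTypes('beingEnriched')); // ['tagging', 'linking', 'research']
```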

So, all that said, if you'd like to play (and help with my evaluation), the two games are:

Donald's detective puzzle – find a fact about an object
Dora's lost data – a simpler tagging game

'Game mechanics for social good: a case study on interaction models for crowdsourcing museum collections enhancement'

I've been very quiet lately – exams for my MSc and work on the digital infrastructure for two new galleries (and a contemporary science news website) opening next week at the Science Museum have kept me busy – but I wanted to take a moment to post about my dissertation project. (Which reminds me, I should write up the architecture I designed to extend our core Sitecore CMS with WordPress to support social media-style interactions with Science Museum-authored content.)

Anyway. This project is for my dissertation for City University's Human-Centred Systems MSc. I'm happy to share the whole outline, but it's a bit academic in format for a blog post so I've just posted an excerpt here. I'd love to hear your comments, particularly if you know of, or have been involved in creating, crowdsourced museum projects or games for social good.

'Game mechanics for social good: a case study on interaction models for crowdsourcing museum collections enhancement' is the current title – it's a bit of a mouthful but hopefully the project will do what it says on the tin.

Project description
The primary focus of this project is the design and evaluation of interactions applied to the context of an online museum collection in order to encourage members of the public to undertake specific tasks that will help improve the website.

The project will include a design and build component to create game-like interfaces for testing and evaluation, but the main research output is the analysis of museum crowdsourced projects and 'games for social good' to develop potential models for game-like interactions suitable for museum collections, and the subsequent evaluation of the proposed interaction models.

Aims and Objectives
This project aims to answer this question: can game-like interactions be designed to motivate people to undertake tasks on museum websites that will improve the overall quality of the website for other visitors?

More specifically, which elements of game mechanics are effective when applied to interfaces to crowdsource museum collections enhancement?

Objectives

  • Design game-like interaction models applicable to cultural heritage content and audiences through research, analysis and creativity workshops
  • Build an application and interfaces to create and store user-created content linked to collections content
  • Evaluate the effectiveness of game-like interaction models for eliciting useful content

Theory
Recent projects such as Armchair Revolutionary[1] and earlier projects such as Carnegie Mellon University's 'Games with a purpose'[2] and InterroBang?![3] are indicative of the trend for 'games for social good'. Crowdsourced projects such as the Guardian newspaper's examination of MPs' expense claims[4], the V&A Museum's image cropping[5], Brooklyn Museum's tagging game[6], the National Library of Australia's collaborative OCR corrections[7]; Chen's (2006) study of the application of Csikszentmihalyi's theory of 'flow' to game design; and Dr Jane McGonigal's ideas about multiplayer games as 'happiness engines'[8] all suggest that 'playful interactions' and crowd participation could be applied to help create specific content improvements on museum sites. Game mechanics may help make tasks that would not traditionally be considered fun or relaxing into compelling experiences.

Within the terms of this project, the output of a game-like interaction must produce an effect outside the interaction itself – that is, the result of a user's interactions with the site should produce beneficial effects for other site visitors who are not involved in the original interactions. To achieve this, it must generate content to enhance the site for subsequent visitors. Methods to achieve this could include creating trails of related objects, entering tags to describe objects, writing alternative labels or researching objects – these will be defined during the research phase and creativity workshops.

Methods and tools

The project is divided into several stages, each with its own methodology and considerations.

Research
The preliminary research process involves a literature review, research into game mechanics and the theory of flow, and research into museum audiences online. It will also include a series of short semi-structured interviews with people involved in creating crowdsourced projects on museum sites or game-like interactions to encourage the completion of set tasks (e.g. games for social good) in order to learn from their reflections on the design process; and analysis of existing sites in both these areas against the theories of game design. This research will define the metrics of the evaluation phase.

Creativity workshop(s)
The results of this research phase will set the parameters for creativity workshops designed to come up with ideas and possible designs for the game-like interfaces to be built. Possible objectives for the creativity workshops include:

  • designing methods for building different levels of challenge into the user experience, in an environment that does not easily support varying the challenge when museum-related skills remain at a constant level
  • creating experiences that are intrinsically rewarding to enable 'flow' within the constraints of available content

Build and test
In turn, the creativity workshops will help determine the interfaces to be built and tested in the later part of the project. The build will be iterative, and is planned to involve as many build-test-review-build iterations as will fit in the allocated time, in order to test as many variant interaction models as possible and support optimisation of existing designs after evaluation. User recruitment in this phase may be a sample of convenience from the target age group.

The interfaces will be developed in HTML, CSS and JavaScript, and published on a WordPress platform. This allows a neat separation of functionality and interface design. Session data (date, interface version, tester ID) can be recorded alongside user data. WordPress's template and plug-in based architecture also supports clear versioning between different iterations of the design, allowing reconstruction of earlier versions of the interfaces for later comparison, and enabling possible split A/B trials.
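As a small sketch of the session-data idea (the field and function names are hypothetical, not the project's actual code), each play session might be recorded with the interface version so that different design iterations can be compared later:

```javascript
// Illustrative sketch: record session data (date, interface version, tester ID)
// alongside the content a player creates, for later comparison of design iterations.
// Names are hypothetical, not the project's actual code.
function startSession(interfaceVersion, testerId) {
  return {
    sessionId: `s-${Date.now()}`,    // simple unique-ish session identifier
    date: new Date().toISOString(),  // when the session ran
    interfaceVersion,                // e.g. 'dora-v2': which design iteration was shown
    testerId,                        // participant identifier
    contributions: [],               // tags or facts created during the session
  };
}

function recordContribution(session, contribution) {
  session.contributions.push({ ...contribution, recordedAt: new Date().toISOString() });
}

// Usage: assign testers to interface versions to support a simple split (A/B) comparison.
const session = startSession('dora-v2', 'tester-12');
recordContribution(session, { type: 'tag', objectId: 'obj-42', value: 'telescope' });
```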

Analysis and write-up
Analysis will include the results of user testing and user data recorded in the WordPress platform to evaluate the performance of various interface and interaction designs. If the platform attracts usage outside the user testing sessions it may also include log file or Google Analytics analysis of use of the interfaces.

[1] https://www.armrev.org
[2] http://www.gwap.com/
[3] http://www.playinterrobang.com/
[4] http://mps-expenses.guardian.co.uk/
[5] http://collections.vam.ac.uk/crowdsourcing/
[6] http://www.brooklynmuseum.org/opencollection/tag_game/start.php
[7] http://newspapers.nla.gov.au/ndp/del/home
[8] http://www.futureofmuseums.org/events/lecture/mcgonigal.cfm

Scripting enabled – accessibility mashup event and random Friday link

Scripting Enabled, "a two day conference and workshop aimed at making the web a more accessible place", is an absolutely brilliant idea, and since it looks like it'll be on September 19 and 20, the weekend after BathCamp, I'm going to do my best to make it down. (It's the weekend before I start my Masters in HCI so it's the perfect way to set the tone for the next two years).

From the site:

The aim of the conference is to break down the barriers between disabled users and the social web as much as giving ethical hackers real world issues to solve. We talked about improving the accessibility of the web for a long time – let's not wait, let's make it happen.

A lot of companies have data and APIs available for mashups – let’s use these to remove barriers rather than creating another nice visualization.

And on a random Friday night, this is a fascinating post on Facial Recognition in Digital Photo Collections: "Polar Rose, a Firefox toolbar that does facial recognition on photos loaded in your browser."