These are a few of my favourite (audience research) things

On Friday I popped into London to give a talk at the Art of Digital meetup at The Photographers' Gallery. It's a great series of events organised by Caroline Heron and Jo Healy, so go along sometime if you can. I talked about different ways of doing audience research. (And when I wrote the line 'getting to know you' it gave me an earworm, and a 'lessons from musicals' theme.) It was a talk of two halves: in the first, I outlined different ways of thinking about audience research; in the second, I went into a little more detail about a few of my favourite (audience research) things.

There are lots of different ways to understand the contexts and needs different audiences bring to your offerings. You probably also want to test to see if what you're making works for them and to get a sense of what they're currently doing with your websites, apps or venues. It can help to think of research methods along scales of time, distance, numbers, 'density' and intimacy. (Or you could think of it as a journey from 'somewhere out there' to 'dancing cheek to cheek'…)

'Time' refers to both how much time a method asks from the audience and how much time it takes to analyse the results. There's no getting around the fact that nearly all methods require time to plan, prepare and pilot, sorry! You can run 5 second tests that ask remote visitors a single question, or spend months embedded in a workplace shadowing people (and more time afterwards analysing the results). On the distance scale, you can work with remote testers located anywhere across the world, ask people visiting your museum to look at a few prototype screens, or physically locate yourself in someone's office for an interview or observation.

Numbers and 'density' (or the richness of communication and the resulting data) tend to be inversely linked. Analytics or log files let you gather data from millions of website or app users, one-question surveys can garner thousands of responses, you can interview dozens of people or test prototypes with 5-8 users each time. However, the conversations you'll have in a semi-structured interview are much richer than the responses you'll get to a multiple-choice questionnaire. This is partly because it's a two-way dialogue, and partly because in-person interviews convey more information, including tone of voice, physical gestures, impressions of a location and possibly even physical artefacts or demonstrations. Generally, methods that can reach millions of remote people produce lots of point data, while more intimate methods that involve spending lots of time with just a few people produce small datasets of really rich data.

So here are a few of my favourite things: analytics, one-question surveys, 5 second tests, lightweight usability tests, semi-structured interviews, and on-site observations. Ultimately, the methods you use are a balance of time and distance, the richness of the data required, and whether you want to understand the requirements for, or measure the performance of, a site or tool.

Analytics are great for understanding how people found you, what they're doing on your site, and how this changes over time. Analytics can help you work out which bits of a website need tweaking, and measure the impact of any changes. But that only gets you so far – how do you know which trends are meaningful and which are just noise? To understand why people are doing what they do, you need other forms of research to flesh out the numbers.
One-question surveys are a great way of finding out why people are on your site, and whether they've succeeded in achieving their goals for being there. We linked survey answers to analytics for the last Let's Get Real project so we could see how people who were there for different reasons behaved on the site, but you don't need to go that far – any information about why people are on your site is better than none!
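For the technically inclined, the linking itself can be as simple as a join on a shared session identifier. Here's a minimal sketch in Python with pandas – the file and column names are invented for illustration, not taken from the Let's Get Real project:

    import pandas as pd

    # One row per survey response, e.g. columns: session_id, reason
    surveys = pd.read_csv("survey_responses.csv")
    # One row per visit, e.g. columns: session_id, pages_viewed, time_on_site_secs
    analytics = pd.read_csv("analytics_sessions.csv")

    # Join the two datasets on the shared session identifier
    joined = surveys.merge(analytics, on="session_id", how="inner")

    # Compare behaviour by stated reason for visiting
    print(joined.groupby("reason")[["pages_viewed", "time_on_site_secs"]].mean())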
5 second tests and lightweight usability tests are both ways to find out how well a design works for its intended audiences. 5 second tests show people an interface for 5 seconds, then ask them what they remember about it, or where they'd click to do a particular task. They're a good way to make sure your text and design are clear. Usability tests take from a few minutes to an hour, and are usually done in person. One of my favourite lightweight tests involves grabbing a sketch, an iPad or laptop and asking people in a café or other space if they'd help by testing a site for a few minutes. You can gather lots of feedback really quickly, and report back with a prioritised list of fixes by the end of the day. 
Semi-structured interviews use the same set of questions each time to ensure some consistency between interviews, but they're flexible enough to let you delve into detail and follow any interesting diversions that arise during the conversation. Interviews and observations can be even more informative if they're done in the space where the activities you're interested in take place. 'Contextual inquiry' goes a step further by observing the tasks you're interested in as they're actually performed. If you can 'apprentice' yourself to someone, it's a great way to have them explain why things are done the way they are. However, it's obviously a lot more difficult to find someone willing and able to let you observe them in this way, it's not appropriate for every task or research question, and the resulting data can be so rich and dense with information that it takes a long time to review and analyse. 
And one final titbit of wisdom from a musical – always look on the bright side of life! Any knowledge is better than none, so if you manage to get any audience research or usability testing done then you're already better off than you were before.

[Update: a comment on twitter reminded me of another favourite research thing: if you don't yet have a site/app/campaign/whatever, test a competitor's!]

Sharing is caring keynote: 'Enriching cultural heritage collections through a Participatory Commons'

Enriching cultural heritage collections through a Participatory Commons platform: a provocation about collaborating with users

Mia Ridge, Open University Contact me: @mia_out or https://miaridge.com/

[I was invited to Copenhagen to talk about my research on crowdsourcing in cultural heritage at the 3rd international Sharing is Caring seminar on April 1. I'm sharing my notes in advance to make life easier for those awesome people following along in a second or third language, particularly since I'm delivering my talk via video.]

Today I'd like to present both a proposal for something called the 'Participatory Commons', and a provocation (or conversation starter): there's a paradox in our hopes for deeper audience engagement through crowdsourcing – projects that don't grow with their participants will lose them as they develop new skills and interests and move on. This talk presents some options for dealing with this paradox, and suggests that a Participatory Commons provides a way to take a sector-wide view of active engagement with heritage content and redefine our sense of what it means when everybody wins.

I'd love to hear your thoughts about this – I'll be following the hashtag during the session and my contact details are above.

Before diving in, I wanted to reflect on some lessons from my work in museums on public engagement and participation.

My philosophy for crowdsourcing in cultural heritage (aka what I've learnt from making crowdsourcing games)

One thing I've learnt over the past few years: museums can be intimidating places. When we ask for help with things like tagging or describing our collections, people want to help but they worry about getting it wrong and looking stupid, or about harming the museum.

The best technology in the world won't solve a single problem unless it's empathically designed and accompanied by social solutions. This isn't a talk about technology, it's a talk about people – what they want, what they're afraid of, how we can overcome all that to collaborate and work together.

Dora's Lost Data

So a few years ago I explored the potential of crowdsourcing games to make helping a museum less scary and more fun. In this game, 'Dora's Lost Data', players meet a junior curator who asks them to tag objects so they'll be findable in Google. Games aren't the answer to everything, but identifying barriers to participation is always important. You have to understand your audiences – their motivations for starting and continuing to participate, and the fears, anxieties and uncertainties that prevent them participating. [My games were hacked together outside of work hours; more information is available at My MSc dissertation: crowdsourcing games for museums. If you'd like to see more polished metadata games, check out Tiltfactor's http://www.metadatagames.org/#games]

Mutual wins – everybody's happy

My definition of crowdsourcing: cultural heritage crowdsourcing projects ask the public to undertake tasks that cannot be done automatically, in an environment where the activities, goals (or both) provide inherent rewards for participation, and where their participation contributes to a shared, significant goal or research area.

It helps to think of crowdsourcing in cultural heritage as a form of volunteering. Participation has to be rewarding for everyone involved. That sounds simple, but focusing on the audiences' needs can be difficult when there are so many organisational needs competing for priority and limited resources for polishing the user experience. Further, as many projects discover, participant needs change over time…

What is a Participatory Commons and why would we want one?

First, I have to introduce you to some people. These are composite stories (personas) based on my research…

Two archival historians, Simone and Andre. Simone travels to archives in her semester breaks to stock up on research material, taking photos of most documents 'in case they're useful later' and transcribing key text from others. Andre is often at the next table, also looking for material for his research. The documents he collected for his last research project would be useful for Simone's current book, but they've never met and he has no way of sharing that part of his 'personal research collection' with her. Currently, each of these highly skilled researchers takes their cumulative knowledge away with them at the end of the day, leaving no trace of their work in the archive itself. Next…

Two people from a nearby village, Martha and Bob. They joined their local history society when they retired and moved to the village. They're helping find out what happened to children from the village school's class of 1898 in the lead-up to and during World War I. They are using census returns and other online documents to add records to a database the society's secretary set up in Excel. Meanwhile…

A family historian, Daniel. He has a classic 'shoebox archive' – a box containing his grandmother Sarah's letters and diary, describing her travels and everyday life at the turn of the century. He's transcribing them and wants to put them online to share with his extended family. One day he wants to make a map for his kids that shows all the places their great-grandmother lived and visited. Finally, there's…

Crowdsourcer Nisha. She has two young kids and works for a local authority. She enjoys playing games like Candy Crush on her mobile, and after the kids have gone to bed she transcribes ship logs on the Old Weather website while watching TV with her husband. She finds it relaxing, feels good about contributing to science and enjoys the glimpses of life at sea. Sites like Old Weather use 'microtasks' – tiny, easily accomplished tasks – and crowdsourcing to digitise large amounts of text.

Helping each other?

None of our friends above know it, but they're all looking at material from roughly the same time and place. Andre and Simone could help each other by sharing the documents they've collected over the years. Sarah's diaries include the names of many children from her village that would help Martha and Bob's project, and Nisha could help everyone if she transcribed sections of Sarah's diary.

Connecting everyone's efforts for the greater good: Participatory Commons

This image shows the two main aspects of the Participatory Commons: the different sources for content, and the activities that people can do with that content.

The Participatory Commons (image: Mia Ridge)

The Participatory Commons is a platform where content from different sources can be aggregated. Access to shared resources underlies the idea of the 'Commons', particularly material that is not currently suitable for sites like Europeana, like 'shoebox archives' and historians' personal record collections. So if the 'Commons' part refers to shared resources, how is it participatory?

The Participatory Commons interface supports a range of activities: the types of tasks historians typically do, like assessing and contextualising documents; activities that specialists or the public can do, like identifying particular people, places, events or things in sources; and typical crowdsourcing tasks like full-text transcription or structured tagging.

By combining the energy of crowdsourcing with the knowledge historians create on a platform that can store or link to primary sources from museums, libraries and archives with 'shoebox archives', the Commons could help make our shared heritage more accessible to all. As a platform that makes material about ordinary people available alongside official archives and as an interface for enjoyable, meaningful participation in heritage work, the Commons could be a basis for 'open source history', redressing some of the absences in official archives while improving the quality of all records.

As a work in progress, this idea of the Participatory Heritage Commons has two roles: an academic thought experiment to frame my research, and a provocation for GLAMs (galleries, libraries, archives, museums) to think outside their individual walls. As a vision for 'open source history', it's inspired by community archives, public history, participant digitisation and history from below… This combination of a large underlying repository and more intimate interfaces could be quite powerful. Capturing some of the knowledge generated when scholars access collections would benefit both archives and other researchers.

'Niche projects' can be built on a Participatory Commons

As a platform for crowdsourcing, the Participatory Commons provides efficiencies of scale in the backend work for verifying and validating contributions, managing user accounts, forums, etc. But that doesn't mean that each user would experience the same front-end interface.

Niche projects build on the Participatory Commons
(quick and dirty image: Mia Ridge)

My research so far suggests that tightly-focused projects are better able to motivate participants and create a sense of community. These 'niche' projects may be related to a particular location, period or topic, or to a particular type of material. The success of the New York Public Library's What's on the Menu? project, designed around a collection of historic menus, and of the British Library's Georeferencer project, designed around their historic map collection, demonstrates the value of defining projects around niche topics.

The best crowdsourcing projects use carefully designed interactions tailored to the specific content, audience and data requirements of a given project. These interactions are usually purpose-built, even when projects share infrastructure. For example, the Zooniverse body of projects uses much of the same underlying software, but each project is designed around specific tasks on specific types of material, whether classifying simple galaxy types, plankton or animals on the Serengeti, or transcribing ship logs or military diaries.

The Participatory Commons is not only a collection of content; it also allows 'niche' projects to be layered on top, presenting more focused sets of content through specialist interfaces designed around the content, audience and purpose.
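[A side note for the technically minded: one way to picture the layering is as a single shared pool of records, with each niche project as a focused view over it. This is a toy sketch in Python with invented names, just to make the idea concrete – nothing like a real system design.]

    from dataclasses import dataclass, field

    @dataclass
    class CommonsRecord:
        title: str
        source: str      # e.g. "county archive", "shoebox archive"
        year: int
        place: str
        tags: set = field(default_factory=set)

    commons = [
        CommonsRecord("Sarah's diary, vol. 1", "shoebox archive", 1898, "the village"),
        CommonsRecord("School register, class of 1898", "county archive", 1898, "the village"),
        CommonsRecord("Ship log", "national archive", 1915, "North Atlantic"),
    ]

    # A niche project is a focused view over the shared pool: same records,
    # narrower scope, with its own interface and tasks layered on top.
    def niche_project(records, place, year_range):
        first, last = year_range
        return [r for r in records if r.place == place and first <= r.year <= last]

    village_project = niche_project(commons, "the village", (1890, 1920))
    print([r.title for r in village_project])  # the two village records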

Barriers

But there are still many barriers to consider, including copyright and technical issues and important cultural issues around authority, reliability, trust, academic credit and authorship. [There's more background on this at my earlier post on historians and the Participatory Commons and Early PhD findings: Exploring historians' resistance to crowdsourced resources.]

Now I want to set the idea of the Participatory Commons aside for a moment, and return to crowdsourcing in cultural heritage. I've been looking for factors in the success or otherwise of crowdsourcing projects, from grassroots, community-led projects to big, glamorous, institutionally-led sites.

I mentioned that Nisha found transcribing text relaxing. Like many people who start transcribing text, she found herself getting interested in the events, people and places mentioned in it. Forums or other ways for participants to discuss their questions seem to help keep them motivated, and they also provide somewhere for a spark of curiosity to grow (as in this forum post). We know that some people on crowdsourcing projects like Old Weather get interested in history, and even start their own research projects.

Crowdsourcing as gateway to further activity

You can see that happening on other crowdsourcing projects too. For example, Herbaria@Home aims to document historical herbarium collections within museums based on photographs of specimen cards. So far participants have documented over 130,000 historic specimens. In the process, some participants also found themselves becoming interested in the people whose specimens they were documenting.

As a result, the project has expanded to include biographies of the original specimen collectors. It was able to accommodate this new interest through a project wiki, which has a combination of free text and structured data linking records between the transcribed specimen cards and individual biographies.

'Levels of Engagement' in citizen science

The pattern in science crowdsourcing projects is consistent enough that 'citizen science' has a model outlining the different stages participants can move through, from undertaking simple tasks, through joining in community discussion, to 'working independently on self-identified research projects'.[1]

Is this 'mission accomplished'?

This is Nick Poole's word cloud based on 40 museum mission statements. With words like 'enjoyment', 'access' and 'learning' appearing in museum missions, doesn't this mean that turning transcribers into citizen historians while digitising and enhancing collections is a success? Well, yes, but…

Paths diverge; paradox ahead?

There's a tension between GLAMs' desire to invite people to 'go deeper', to find their own research interests and begin to become citizen historians, and their desire to ask people to help with tasks GLAMs have set to support their own work. Heritage organisations can try to channel that impulse to start research into questions about their own collections, but sometimes it feels like we're asking people to do our homework for us. The scaffolds put in place to make tasks easier may start to feel like a constraint.

Who has agency?

If people move beyond simple tasks into more complex tasks that require a greater investment of time and learning, then issues of agency – participants' ability to make choices about what they're working on and why – start to become more important. Would Wikipedia have succeeded if it dictated what contributors had to write about? We shouldn't mistake volunteers for a workforce just because they can be impressively dedicated contributors.

Participatory project models

Turning again to citizen science – this time public participation in scientific research – we have a model that categorises participatory projects according to the amount of control participants have over the design of the project itself; or, to look at it another way, how much authority the organisation has ceded to the crowd. This model contains three categories: 'contributory', where the public contributes data to a project designed by the organisation; 'collaborative', where the public can help refine project design and analyse data in a project led by the organisation; and 'co-creative', where the public can take part in all or nearly all processes, and all parties design the project together.[2]

As you can imagine, truly co-creative projects are rare. Cultural organisations seem to find it hard to truly collaborate with members of the public, for many understandable reasons. The level of transparency required and the investment of time needed to negotiate mutual interests, goals and capabilities both increase as collaboration deepens. Institutional constraints and a lack of time to engage in deep dialogue with participants make it difficult to find shared goals that work for all parties. GLAMs sometimes try to take shortcuts and end up making decisions for the group, which means their 'co-creative' project is actually just 'collaborative'.

New challenges

When participants start to outgrow the tasks that originally got them hooked, projects face a choice. Some projects are experimenting with setting challenges for participants. Here you see 'mysteries' set by the UK's Museum of Design in Plastics and by San Francisco Public Library on Historypin. Finding the right match between the challenge set and the object can be difficult without some existing knowledge of the collection, and it can require a lot of ongoing time to encourage participants. Putting the mystery under the nose of the person who has the knowledge or skills to solve it is another challenge that projects like this will have to tackle.

Working with existing communities of interest is a good start, but it also takes work to figure out where they hang out online (or in-person) and understand how they prefer to work. GLAMs sometimes fall into the trap of choosing the technology first, or trying something because it's trendy; it's better to start with the intersection between your content and the preferences of potential audiences.

But is it wishful thinking to hope that others will be interested in answering the questions GLAMs are asking?

A tension?

Should projects accept that some people will move on as they develop new interests, and concentrate on recruiting new participants to replace them? Do they try to find more interesting tasks or new responsibilities for participants, such as helping moderate discussions, or checking and validating other people's work? Or should they find ways for the project to grow as participants' skills and knowledge increase? It's important to make these decisions mindfully, as the default is otherwise to accept a level of turnover as participants move on.

To return to lessons from citizen science, possible areas for deeper involvement include choosing or defining questions for study, analysing or interpreting data and drawing conclusions, and discussing results and asking new questions.[3] However, heritage organisations might have to accept that the questions people want to ask might not involve their collections, and that these citizen historians' new interests might not leave time for their previous crowdsourcing tasks.

Why is a critical mass of content in a Participatory Commons useful?

And now we return to the Participatory Commons and the question of why a critical mass of content would be useful.

Increasingly, the old divisions between museum, library and archive collections don't make sense. For most people, content is content, and they don't understand why a pamphlet about a village fete in 1898 would be described and accessed differently depending on whether it had ended up in a museum, library or archive catalogue.

Basing niche projects on a wider range of content creates opportunities for different types of tasks and levels of responsibility. Projects that provide a variety of tasks and roles can support a range of different levels and types of participant skills, availability, knowledge and experience.

A critical mass of material is also important for the discoverability of heritage content. Even the most sophisticated researcher turns to Google sometimes, and if your content doesn't come up in the first few results, many researchers will never know it exists. It's easy to say but less easy to make a reality: the easier it is to find your collections, the more likely it is that researchers will use them.

Commons as party?

More importantly, a critical mass of content in a Commons allows us to redefine 'winning'. If participation is narrowly defined as belonging to individual GLAMs, then when a citizen historian moves on to a project that doesn't involve your collection it can seem like you've lost a collaborator. But the people who developed a new research interest through a project at one museum might find they end up using records from the archive down the road, transcribing or enhancing its records during their investigation. If all the institutions in the region shared their records on the Commons, or let researchers take and share photos while using their collections, the researcher has a critical mass of content for their research, and hopefully, as a side-effect, their activities will improve links between collections. If the Commons allows GLAMs to take a sector-wide view, then someone moving on to a different collection becomes a moment to celebrate, a form of graduation. In our wildest imaginations, the Commons could be like a fabulous party where you never know what interesting people and things you'll discover…

To conclude – by designing platforms that allow people to collect and improve records as they work, we're helping everybody win.

Thank you! I'm looking forward to hearing your thoughts.


[1]M. Jordan Raddick et al., 'Citizen Science: Status and Research Directions for the Coming Decade', in astro2010: The Astronomy and Astrophysics Decadal Survey, vol. 2010, 2009, http://www8.nationalacademies.org/astro2010/DetailFileDisplay.aspx?id=454.

[2]Rick Bonney et al., Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report (Washington D.C.: Center for Advancement of Informal Science Education (CAISE), July 2009), http://caise.insci.org/uploads/docs/PPSR%20report%20FINAL.pdf.

[3]Bonney et al., Public Participation in Scientific Research: Defining the Field and Assessing Its Potential for Informal Science Education. A CAISE Inquiry Group Report.


Image credits in order of appearance: Glider, Library of Congress; Great Hall, Library of Congress; Curzona Allport, Tasmanian Archive and Heritage Office; Hålanda Church, Västergötland, Sweden, Swedish National Heritage Board; Postmaster General James A. Farley During National Air Mail Week, 1938, Smithsonian Institution; Canterbury Bankstown Rugby League Football Club's third annual Ball, Powerhouse Museum.

'Museums meet the 21st century' – OpenTech 2010 talk

These are my notes for the talk I gave at OpenTech 2010 on the subject of 'Museums meet the 21st Century'. Some of it was based on the paper I wrote for Museums and the Web 2010 about the 'Cosmic Collections' mashup competition, but it also gave me a chance to reflect on bigger questions: so we've got some APIs and we're working on structured, open data – now what? Writing the talk helped me crystallise two thoughts that had been floating around my mind. One, that while "the coolest thing to do with your data will be thought of by someone else", that doesn't mean they'll know how to build it – developers are a vital link between museum APIs, linked data, etc. and the general public. Two, that we really need either aggregated datasets or data using shared standards to get the network effect that will enable the benefits of machine-readable museum data. The network effect would also make it easier to bridge gaps in collections, reuniting objects held in different institutions. I've copied my text below; slides are embedded at the bottom if you'd rather just look at the pictures. I had some brilliant questions from the audience and afterwards; I hope I was able to do them justice. OpenTech itself was a brilliant day full of friendly, inspiring people – if you can possibly go next year then do!

Museums meet the 21st century.
Open Tech, London, September 11, 2010

Hi, I'm Mia, I work for the Science Museum, but I'm mostly here in a personal capacity…

Alternative titles for this talk included: '18th century institution WLTM 21st century for mutual benefit, good times'; 'the Age of Enlightenment meets the Age of Participation'. The common theme behind them is that museums are old, slow-moving institutions with their roots in a different era.

Why am I here?

The proposal I submitted for this was 'Museums collaborating with the public – new opportunities for engagement?', which was something of a straw man, because I really want the answer to be 'yes, new opportunities for engagement'. But I didn't just mean any 'public', I meant specifically a public made up of people like you. I want to help museums open up data so more people can access it in more forms, but most people can't just have a bit of a tinker and create a mashup. “The coolest thing to do with your data will be thought of by someone else” – but that doesn’t mean they’ll know how to build it. Audiences out there need people like you to make websites and mobile apps and other ways for them to access museum content – developers are a vital link in the connection between museum data and the general public.

So there's that kind of help – helping the general public get into our data; and there's another kind of help – helping museums get their data out. For the first, I think I mostly just want you to know that there's data out there, and that we'd love you to do stuff with it.

The second is a request for help working on things that matter. Linkable, open data seems like a no-brainer, but museums need some help getting there.

Museums struggle with the why, with the how, and increasingly with the "we are reducing our opening hours, you have to be kidding me".

Chicken and the egg

Which comes first – museums get together and release interesting data in a usable form under a useful licence and developers use it to make cool things, or developers knock on the doors of museums saying 'we want to make cool things with your data' and museums get it sorted?

At the moment it's a bit of both, but the efforts of people in museums aren't always aligned with the requests from developers, and developers' requests don't always reach someone who'll know what to do with them.

So I'm here to talk about some stuff that's going on already and ask for a reality check – is this an idea worth pursuing? And if it is, then what next?
If there's no demand for it, it won't happen. Nick Poole, Chief Executive of the Collections Trust, said on the Museums Computer Group email discussion list: "most museum people I speak to tend not to prioritise aggregation and open interoperability because there is not yet a clear use case for it, nor are there enough aggregators with enough critical mass to justify it."

But first, an example…

An experiment – Cosmic Collections, the first museum mashup competition

The Cosmic Collections project was based on a simple idea – what if a museum gave people the ability to make their own collection website for the general public? Way back in December 2008 I discovered that the Science Museum was planning an exhibition on astronomy and culture, to be called ‘Cosmos & Culture’. They had limited time and resources to produce a site to support the exhibition and risked creating ‘just another exhibition microsite’. I went to the curator, Alison Boyle, with a proposal – what if we provided access to the machine-readable exhibition content that was already being gathered internally, and threw it open to the public to make websites with it? And what if we motivated them to enter by offering competition prizes? Competition participants could win a prize and kudos, and museum audiences might get a much more interesting, innovative site. Astronomy is one of the few areas where the amateur can still make valued scientific contributions, so the idea was a good match for museum mission, exhibition content, technical context, and hopefully developers – but was that enough?

The project gave me a chance to investigate some specific questions. At the time, there were lots of calls from some quarters for museums to produce APIs for each project, but there was also doubt about whether anyone would actually use a museum API, whether we could justify an investment in APIs and machine-readable data. And can you really crowdsource the creation of collections interfaces? The Cosmic Collections competition was a way of finding out.

Lessons? An API isn't a magic bullet: you still need to support the dev community, and encourage non-technical people to find ways to play with it. But the project was definitely worth doing, even if just for the fact that it was done and the world didn't end. Plus, the results were good, and it reinforced the value of working with geeks. [It also got positive coverage in the technical press. Who wouldn't be happy to hear 'the museum itself has become an example of technological innovation' or that it was 'bringing museums out into the open as places of innovation'?]

Back to the chicken and the egg – linking museums

So, back to the chicken and the egg… Progress is being made, but it gets bogged down in discussions about how exactly to get data online. Museums have enough trouble getting the suppliers they work with to produce code that meets accessibility standards, let alone beautifully structured, re-usable open data.

One of the reasons open, structured data is so attractive to museum technologists is that we know we can never build interfaces to meet the needs of every type of audience. Machine-readable data should allow people with particular needs to create something that supports their own requirements or combines their data with ours to make lovely new things.
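To make that concrete, here's the sort of reuse I mean, sketched in Python. The endpoint and field names are invented for illustration, not a real museum API: fetch machine-readable records, keep what your audience needs, and link back to the source.

    import requests

    # Hypothetical endpoint and field names, for illustration only
    API = "https://example-museum.org/api/objects"

    resp = requests.get(API, params={"q": "astronomy", "format": "json"})
    resp.raise_for_status()

    for obj in resp.json()["results"]:
        # Often discoverability plus a link back to the source record is enough
        print(obj["title"], "-", obj["source_url"])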

Explore with us – tell museums what you need

So if you're someone who wants to build something, I want to hear from you about what standards you're already working with, which formats work best for you…

To an extent that's just moving the problem further down the line, because I've discovered that when you ask people what data standards they want to use, they tell you – and it turns out they're all different… but at least progress is being made.

Dragons we have faced

I think museums are getting to the point where they can live with the 80% in the interest of actually getting stuff done.

Museums need to get over the idea that linkable data must be perfect – perfectly clean data, perfectly mapped to perfect vocabularies and perfectly delivered through perfect standards. Museums are used to mapping data from their collections management systems for a known end-use; they struggle with open-ended requirements for unknown future uses.

The idea that aggregated data must be able to do everything that data provided at source can do has held us back. Aggregated data doesn't need to be able to do everything – sometimes discoverability is enough, as long as you can get back to the source if you need the rest of the data. Sometimes it's enough to be able to link to someone else's record that you've discovered.

Museum data and the network effect

One reason I'm here (despite the fact that public speaking is terrifying) is a vision of the network effect that could apply when we have open museum data.

We could re-unite objects across time, place and people, connecting visitors and objects regardless of the owning institution or the type of object or information. We could create highlight collections by mining data across museums, using the links people are making between our collections. We can help people tell their local stories as well as the stories of big subjects and world histories. Shared data standards should reduce the learning curve for people using our data, which would hopefully increase re-use.

Mismatches between museums and tech – reasons to be patient

So that's all very exciting, but since I've also learnt that talking about something creates expectations, here are some reasons to be patient with museums, and tolerant when we fail to get it right the first time…

IT is not a priority for most museums; keeping our objects secure and in one piece is, as is getting some of them on display in ways that make sense to our audiences.

Museums are slow. We'll be talking about stuff for a long time before it happens, because we have limited resources and risk-averse institutions. Museum project management is designed for large infrastructure projects, moving hundreds of delicate objects around while major architectural builds go on. It's difficult to find space for agility and experimentation within that.

Nancy Proctor from the Smithsonian said this week: "[Museum] work is more constrained than a general developer" – it must be of the highest quality; for everybody – public good requires relevance and service for all, and because museums are in the 'forever business' it must be sustainable.

How you can make a difference

Museums are slowly adapting to the participation models of social media. You can help museums create (backend) architectures of participation. Here are some places where you can join in conversations with museum technologists:

Museums Computer Group – events, mailing list http://museumscomputergroup.org.uk/ #ukmcg @ukmcg

Linking Museums – meetups, practical examples, experimenting with machine-readable data http://museum-api.pbworks.com/

Space Time Camp – Nov 4/5, #spacetimecamp

‘Museums and the Web’ conference papers online provide a good overview of current work in the sector http://www.archimuse.com/conferences/mw.html

So that's all fun, but to conclude – this is all about getting museums to the point where the technology just works, data flows like water and our energy is focussed on the compelling stories museums can tell with the public. If you want to work on things that matter – museums matter, and they belong to all of us – we should all be able to tell stories with and through museums.

Thank you for listening

Keep in touch at @mia_out or https://openobjects.org.uk/

Performance testing and Agile – top ten tips from ThoughtWorks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post is summarising a talk on Top ten secret weapons for performance testing in an agile environment, organised by the BCS's SPA (software practice advancement) group with Patrick Kua from ThoughtWorks.

His slides from an earlier presentation are online so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of the criticism from a requirements engineering perspective that agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but about how it does it and the qualities it has – usability, security, performance, etc.), and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments – much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing – probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or give them a story card to themselves, and manage them alongside the stories for the functional requirements.  He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale] as [person or role] I want [natural language description of the requirement].]

2. One team.
Team dynamics are important – performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' – working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience – testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories – pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view – identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down – try the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe test results – what do you see? Be data driven.
Formulate hypothesis – why is it doing that?
Design an experiment – how can I prove that's what's happening? Lightweight, should be able to run several a day.
Run experiment – take time to gather and examine evidence.
Is the hypothesis valid? If so, change the application code.

Like all good experiments, you should change only one thing at a time.

Don't panic, stay disciplined.

5. Play performance early
Scheduling around iterative builds makes it more possible. A few tests during build is better than a block at the end.  Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure – iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario; if slicing by scenario, test by going through a whole scenario for one persona.
Use visualisations to help digest and communicate test results, and build them in iterations too – e.g. colour to show the number of HTTP requests before you get error codes.
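[As a rough illustration of that kind of visualisation – my sketch in Python, not Patrick's example, with assumed column names for the results log:]

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed results-log format: one row per request, columns: minute, status_code
    results = pd.read_csv("results.csv")
    results["outcome"] = results["status_code"].apply(
        lambda code: "ok" if code < 400 else "error"
    )

    # Requests per minute, split by outcome, as a stacked bar chart
    counts = results.groupby(["minute", "outcome"]).size().unstack(fill_value=0)
    counts.plot(kind="bar", stacked=True, color={"ok": "green", "error": "red"})
    plt.xlabel("Minute of test run")
    plt.ylabel("Requests")
    plt.savefig("requests_by_minute.png")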

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation – continuous integration good.
Automated tests
Automated packaging
Automated deployment [yes please – it should be easy to get different builds onto an environment]
Automated test orchestration – playing with scenarios, put load generators through profiles.
Automated analysis
Automated scheduling – part of pipeline. Overnight runs.
Automated result archiving – you can check the raw output if you discover issues later

Why automate? Reproducible and constant; faster feedback; higher productivity.
Can add automated load generation, e.g. with JMeter, which can also run in distributed agent mode (see the sketch below).
Ideally, run sanity performance tests for show-stoppers at the end of functional tests, then a full overnight test.
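[A minimal sketch of what scripted load generation might look like, assuming JMeter is installed and a test plan already exists; the file names are illustrative:]

    import datetime
    import subprocess

    # Timestamped results file so every run's raw output is archived
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    results_file = f"results-{stamp}.jtl"

    subprocess.run(
        ["jmeter",
         "-n",                  # non-GUI mode
         "-t", "plan.jmx",      # the test plan to run
         "-l", results_file],   # where to write sampler results
        check=True,             # fail the pipeline if JMeter itself fails
    )
    print("Raw results archived as", results_file)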

8. Continuous performance testing
Build pipeline.
Application level – compilation and unit tests; functional tests; build RPM (or whatever distribution thingy).
Into performance level – 5 minute sanity test; typical day test.

Spot incremental performance degradation – set tests to fail if the percentage increase is too high.
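[A degradation check might look something like this sketch – the file formats and threshold are my assumptions, not from the talk:]

    import json
    import statistics
    import sys

    THRESHOLD_PCT = 10.0  # the agreed acceptable slowdown, as a percentage

    # Baseline from a previous known-good run, e.g. {"mean_response_ms": 180.0}
    baseline = json.load(open("baseline.json"))["mean_response_ms"]

    # This run's response times, one number per line, in milliseconds
    samples = [float(line) for line in open("response_times_ms.txt")]
    current = statistics.mean(samples)

    increase_pct = (current - baseline) / baseline * 100
    print(f"baseline={baseline:.1f}ms current={current:.1f}ms ({increase_pct:+.1f}%)")

    if increase_pct > THRESHOLD_PCT:
        sys.exit("Performance degraded beyond threshold - failing the build.")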

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.
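[As an illustration – my example, not Patrick's – here's the kind of unit test you might write for the analysis part of your performance tooling, runnable with pytest:]

    import math

    def percentile(samples, pct):
        """Return the pct-th percentile of samples, using the nearest-rank method."""
        if not samples or not 0 < pct <= 100:
            raise ValueError("need a non-empty sample and 0 < pct <= 100")
        ordered = sorted(samples)
        rank = math.ceil(pct / 100 * len(ordered))
        return ordered[rank - 1]

    # These fail in seconds, not after an overnight run depends on them
    def test_median_of_small_sample():
        assert percentile([30, 10, 20], 50) == 20

    def test_95th_percentile_of_uniform_range():
        assert percentile(list(range(1, 101)), 95) == 95

    def test_rejects_empty_input():
        import pytest
        with pytest.raises(ValueError):
            percentile([], 95)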

V model of testing – performance testing sits at the top right-hand edge of the V.

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase – here's what we learned and what we changed as a result – show the benefits of ongoing performance testing.

General comments from Q&A: you can do load generation and analyse session logs of user journeys. Testing is risk mitigation – you can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice – I'd definitely go to another SPA event.