Collected links and random thoughts on user testing

First, some links on considerations for survey design and quick accessibility testing.

Given the constraints of typical museum project budgets, it’s helpful to know you can get useful results with as few as five testers. Here’s everybody’s favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let’s face it – almost any testing is better than none. After all, you can’t do user-centred design if you don’t know what your users want.
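Nielsen's five-user figure comes from the Nielsen–Landauer model, in which each tester independently uncovers roughly 31% of the usability problems present (their published average; the rate varies by audience and task). A quick sketch of the diminishing returns:

```python
# Nielsen & Landauer's model: the fraction of usability problems found
# by n testers is 1 - (1 - L)**n, where L is the fraction a single
# tester uncovers (about 0.31 on average in their studies).
def problems_found(n, single_user_rate=0.31):
    """Expected fraction of usability problems found by n testers."""
    return 1 - (1 - single_user_rate) ** n

for n in (1, 5, 15):
    print(f"{n:2d} testers: {problems_found(n):.0%} of problems found")
```

Five testers find around 84% of problems under this model, which is part of the argument for several small, iterative rounds of testing rather than one big one.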

There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in “What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public” by Hiromi Sekiguchi and Hirokazu Yoshimura.

To quote from their abstract: “There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users point of view, including opinions of people who do not want to use the system”. [my emphasis]

They asked people to write down “as many ideas as they could – doubts, worries, feelings, and expectations” about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.

One surprising side-effect of good user testing is that it helps get everyone involved in a project to ‘buy into’ accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc, and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice and helps everyone re-focus on what the people who’ll be using the content will actually want to do with it.

I know I’m probably preaching to the converted here, but during Brian Kelly’s talk on Accessibility and Innovation at UKMW07 I realised that for years I’ve had an unconscious test for how well I’ll work with someone based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the ‘constraints’ of accessibility help create innovations. As 37signals say, “let limitations guide you to creative solutions”.

One of the points raised in the discussion that followed Brian’s talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate their intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor’s commitment to accessibility makes a real difference to the development process and outcomes. I don’t think user testing alone will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are considered at the core of the project from the outset.

At UK Museums and the Web 2007 I suggested looking at how other sites differentiate user-generated content from institutionally-created content. In that light, this post could be of interest: Newspapers 2.0: How Web 2.0 are British newspaper web sites?

Over the last two weeks I’ve reviewed eight British newspaper web sites in depth, trying to identify where and how they are using the technologies that make up the so-called “Web 2.0” bubble. I’ve examined their use of blogs, RSS feeds, social bookmarking widgets, and the integration of user-generated content into their sites.

Are shared data standards and shared repositories the future?

I keep having or hearing similar conversations about shared repositories and shared data standards in places like the SWTT, Antiquist, the Museums Computer Group, the mashed museum group and the HEIRNET Data Sans Frontières. The mashed museum hack day also got me excited about the infinite possibilities for mashups and new content creation that accessible and reliable feeds, web services or APIs into cultural heritage content would enable.

So this post is me thinking aloud about the possible next steps – what might be required; what might be possible; and what might be desired but would be beyond the scope of any of those groups to resolve so must be worked around. I’ll probably say something stupid but I’ll be interested to see where these conversations go.

I might be missing lots of the subtleties, but it seems to me that there are a few basic things we need: shared technical and semantic data standards, or the ability to map between institutional standards consistently and reliably; and shared data, whether in a central repository or in a service (such as federated search) capable of bringing together individual repositories into a virtual shared repository. Either way, the implementation details should be hidden from the end user – it should Just Work.
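As a toy illustration of the ‘mapping between institutional standards’ option, a field-level crosswalk might look something like this; the institutions, field names and shared schema below are entirely invented for illustration, not drawn from any real museum standard:

```python
# A minimal sketch of translating institutional field names into a
# shared target schema. All names here are hypothetical.
INSTITUTIONAL_MAPPINGS = {
    "museum_a": {"object_name": "title", "date_made": "date", "findspot": "location"},
    "museum_b": {"name": "title", "production_date": "date", "place_found": "location"},
}

def to_shared(record, institution):
    """Translate one institution's record into the shared schema,
    dropping fields the mapping doesn't cover."""
    mapping = INSTITUTIONAL_MAPPINGS[institution]
    return {mapping[field]: value for field, value in record.items() if field in mapping}

record = {"object_name": "Clay pipe", "date_made": "c. 1680", "findspot": "Thames foreshore"}
print(to_shared(record, "museum_a"))
# {'title': 'Clay pipe', 'date': 'c. 1680', 'location': 'Thames foreshore'}
```

The hard part, of course, isn’t the code – it’s agreeing the mappings consistently and reliably across institutions.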

My preference is for shared repositories (virtual or real) because the larger the group, the better the chance that it will be able to provide truly permanent and stable URIs; and because we’d gain efficiencies when introducing new partners, as well as enabling smaller museums or archaeological units who don’t have the technical skills or resources to participate. One reason I think stable and permanent URIs are so important is that they’re a requirement for the semantic web. They also mean that people re-using our data, whether in their bookmarks, in mashup applications built on top of our data or on a Flickr page, have a reliable link back to our content in the institutional context.

As new partners join, existing tools could often be re-used if a new partner has a collections management system or database already used by a current partner. Tools like those created for project partners to upload records to the PNDS (People’s Network Discovery Service; read more at A Standards Framework For Digital Library Programmes) for Exploring 20th Century London could be adapted so that organisations could upload data extracted from their collections management, digital asset or excavation databases to a central source.

But I also think that each (digital or digitised) object should have a unique ‘home’ URI. This is partly because I worry about replication issues with multiple copies of the same object used in various places and projects across the internet. We’ve re-used the same objects in several Museum of London projects and partnerships, but the record for that object might not be updated if the original record is changed (for example, if a date was refined or location changed). Generally this only applies to older projects, but it’s still an issue across the sector.

Probably more importantly for the cultural heritage sector as a whole, a central, authoritative repository or shared URL means we can publish records that should come with a certain level of trust and authority by virtue of their inclusion in the repository. It does require playing a ‘gate keeper’ role but there are already mechanisms for determining what counts as a museum, and there might also be something for archaeological units and other cultural heritage bodies. Unfortunately this would mean that the Framley Museum wouldn’t be able to contribute records – maybe we should call the whole thing off.

If a base record is stored in a central repository, it should be easy to link every instance of its use back to the ‘home’ URI, or to track discoverable instances and link to them from the home URI. If each digital or digitised object has a home URI, any related content (information records, tags, images, multimedia, narrative records, blog posts, comments, microformats, etc) created inside or outside the institution or sector could link back to it. The latest information and resources about an object would then always be available, along with any corrections or updates that weren’t replicated across every instance of the object.
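The home-URI idea could be sketched roughly like this; the class, fields and URIs are all hypothetical, just to make the one-canonical-record, many-linked-instances shape concrete:

```python
# Illustrative sketch: one authoritative 'home' record per object, with
# external uses registered against it so corrections propagate from a
# single source. All names and URIs here are invented.
class ObjectRecord:
    def __init__(self, home_uri, data):
        self.home_uri = home_uri
        self.data = data          # the authoritative record
        self.instances = []       # discoverable uses elsewhere

    def register_instance(self, instance_uri):
        """Record an external use of this object (a mashup, a Flickr page, etc)."""
        self.instances.append(instance_uri)

    def update(self, **changes):
        """Correct the home record once; every instance links back here."""
        self.data.update(changes)

pipe = ObjectRecord("http://example.org/objects/1234", {"date": "c. 1650"})
pipe.register_instance("http://example.org/exhibitions/london-life/pipe")
pipe.update(date="c. 1680")  # the refined date is now visible via the home URI
```

The point is that the refinement happens in one place, rather than having to chase every project that ever re-used the object.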

Obviously the responses to Michelangelo’s David are going to differ from those to a clay pipe, but I think it’d be really interesting to be able to find out how an object was described in different contexts, how it inspired user-generated content or how it was categorised in different environments.

I wonder if you could include the object URL in machine tags on sites like Flickr? [Yes, you could. Or in the description field]
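Flickr machine tags do take the form namespace:predicate=value, so a (hypothetical) ‘museum’ namespace could carry an object’s home URI. A minimal parser, with an invented namespace and URI for illustration:

```python
# Flickr machine tags have the form namespace:predicate=value.
# The 'museum' namespace and the URI below are hypothetical examples.
def parse_machine_tag(tag):
    """Split a machine tag into (namespace, predicate, value).

    Split on the first ':' and the first '=' only, so values that
    are URIs (which contain ':') survive intact.
    """
    namespace, rest = tag.split(":", 1)
    predicate, value = rest.split("=", 1)
    return namespace, predicate, value

tag = "museum:objecturi=http://example.org/objects/1234"
print(parse_machine_tag(tag))
# ('museum', 'objecturi', 'http://example.org/objects/1234')
```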

There are obviously lots of questions about how standards would be agreed, where repositories would be hosted, how the scope of each is decided, blah blah blah, and I’m sure all these conversations have happened before, but maybe it’s finally time for something to happen.

[Update – Leif has two posts on a very similar topic at HEIR tonic and News from the Ouse.

Also I found this wiki on the business case for web standards – what a great idea!]

[Update – this was written in June 2007, but recent movements for Linked Open Data outside the sector mean it’s becoming more technically feasible. Institutionally, on the other hand, nothing seems to have changed in the last year.]

“Sharing authorship and authority: user generated content and the cultural heritage sector” online now

I’ve put a rough and ready version of my paper from the UK museums and the web conference session on ‘The Personal Web’ online at Sharing authorship and authority: user generated content and the cultural heritage sector.

Back in the real world

I’m back in London after the 2007 Web Adept – UK Museums on the Web conference and the Mashed Museum hack day in Leicester. I’ll post my paper up later today. I think my head is still spinning from all the conversations and learning and hacking. I’ll write more when it’s all settled into my brain but one thing that’s clear is that the time might be ripe for the museums sector to pull together and think about and act on shared repositories, common or global object models, folksonomies, etc, in a strategic, transparent and gracious way. Maybe the papers presented this time next year will be about ‘the death of the institutional silo’.

On a personal note, I realised that I’ve used ‘extensible, re-usable and interoperable’ in every paper I’ve given in the past two years. I guess you can take the geek out of Open Source but you can’t take the Open Source out of the geek.