Usability articles at Webcredible

The article on 10 ways to orientate users on your site is useful because more and more users arrive at our sites via search engines or deep links. Keeping these tips in mind when designing sites helps us give users a sense of the scope, structure and purpose of a website, no matter whether they start from the front page or three levels down.

How to embed usability & UCD internally "offers practical advice of what a user champion can do to introduce and embed usability and user-centered design within a company" and includes 'guerrilla tactics' or small steps towards getting usability implemented. But probably the most important point is this:

The most effective method of getting user-centred design into the process is through usability testing. Invite key stakeholders to watch the usability testing sessions. Usability testing is a real eye-opener and once observed most stakeholders find it difficult to ignore the user as part of the production process. (The most appropriate stakeholders are likely to be project managers, user interface designers, creative personnel, developers and business managers.)

I would have emphasised the point above even if they hadn't. The difference that usability testing makes to the attitudes of internal stakeholders is amazing and can really focus the whole project team on usability and user-centred design.

Final diary entry from Catalhoyuk

I'm back in London now but here goes anyway:

August 1
My final entry of the season, as I'm on the overnight train from Cumra to Istanbul tonight. After various conversations on the veranda I've been thinking about the intellectual accessibility of our Catalhoyuk data and how that relates to web publication, and this entry is just a good way to stop these thoughts running round my head like a rogue tune.

[This has turned into a long entry, and I don't say anything trivial about the weather or other random things so you'd have to be pretty bored to read it all. Shouldn't you be working instead?]

Getting database records up on the website isn't hard – it's just a matter of resources. The tricky part is providing an interesting and engaging experience for the general visitor, or a reliable, usable and useful data set for the specialist visitor.

At the moment it feels like a lot of good content is hidden within the database section of the website. When you get down to browsing lists of features, there's often enough information in the first few lines to catch your interest. But when you get to lists of units, even the pages with some of the unit description presented alongside the list, you start to encounter the '800 lamps' problem.

[A digression/explanation – I'm working on a website at the Museum of London with a searchable/browsable catalogue of objects from Roman Londinium. One section has 800 Roman oil lamps – how on earth can you present that to the user so they can usefully distinguish between one lamp and another?]

Of course, it does depend on the kind of user and what they want to achieve on the Londinium site – for some, it's enough to read one nicely written piece on the use of lamps and maybe a bit about what different types meant, all illustrated with a few choice objects; specialist users may want to search for lamps with very particular characteristics.

Here, our '800 lamps' are 11846 (and counting) units of archaeology. The average user isn't going to read every unit sheet, but how can they even choose one to start with? And how many will know how to interpret and create meaning from what they read about the varying shades of brown stuff?

Being able to download unit sheets that match a particular pattern – part of a certain building, ones that contain certain types of finds, units related to different kinds of features – is probably of real benefit to specialist visitors, but are we really giving those specialist visitors (professional or amateur) and our general visitors what they need? I'm not sure a huge raw list of units or flotation numbers is of much help to anyone – how do people distinguish between one thumbnail of a lamp, or one unit number, and another in a useful and meaningful way? I hope this doesn't sound like a criticism of the website – it's just the nature of the material being presented.
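
One small technical answer to the '800 lamps' problem might be faceted browsing: summarise the collection along a few meaningful dimensions so that visitors choose a subset before they ever see a list. A minimal sketch of the idea in Python – the 'lamps' table and 'lamp_type' column are hypothetical stand-ins for whatever the real catalogue schema holds:

```python
# A sketch of faceted browsing, assuming a hypothetical 'lamps' table
# with a 'lamp_type' column -- the real catalogue schema will differ.
import sqlite3

conn = sqlite3.connect("catalogue.db")

# Rather than listing all 800 lamps at once, summarise them by type so a
# visitor can choose a meaningful subset before seeing any list at all.
facets = conn.execute(
    """
    SELECT lamp_type, COUNT(*) AS n
    FROM lamps
    GROUP BY lamp_type
    ORDER BY n DESC
    """
).fetchall()

for lamp_type, n in facets:
    print(f"{lamp_type}: {n} lamps")
```

The same GROUP BY trick would work for units – by building, feature type or finds category – and gives the general visitor a way in that a raw list of numbers never could.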

The variability of the data is another problem – it's not just about data cleaning (though the 'view features by type' page shows why data cleaning is useful) but about the difference between the beautiful page for Building 49 and the rather less interesting page for Building 33 (to pick one at random). If a user lands on one of the pages with minimal information, they may never realise that some pages have detailed records with fantastic plans and photos.

So there are the barriers to entry that we might accidentally perpetuate by 'hiding' the content behind lists of numbers; and there is the general intellectual accessibility of the information to the general user. Given limited resources, where should our energies be concentrated? Who are our websites for?

It's also about matching the data and website functionality to the user and their goals – the excavation database might not be of interest to the general user in its most raw form, and that's ok because it will be of great interest to others. At a guess, the general public might be more interested in finds, and if that's the case we should find ways to present information about the objects with appropriate interpretation and contextualisation – not only to convey the facts but also to help people have a more meaningful experience on the site.

I wonder if 'team favourite' finds or buildings/spaces/features could be a good way into the data – a solution that doesn't mean turning some kinds of finds or some buildings into 'treasure' that's more important than the rest. Or perhaps specialists could talk about a unit or feature they find interesting – along the way they could explain how their specialism contributes to the archaeological record (written as if to an intelligent thirteen-year-old). For example, Flip could talk about phytoliths, or Stringy could talk about obsidian, and what their finds can tell us.

Proper user evaluation would be fabulous, but in the absence of resources, I really should look at the stats and see how the site is used. I wonder if I could do a surveymonkey thing to get general information from different types of users? I wonder what levels of knowledge our visitors have about the Neolithic, about Anatolian history, etc. What brings them to the website? And what makes them stick around?
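
In the meantime, even a quick pass over the server logs would say something about who arrives and from where. A rough sketch, assuming Apache's 'combined' log format and a placeholder file name:

```python
# Count referrers in an Apache 'combined' format access log to see what
# brings visitors to the site. The log file name is a placeholder.
import re
from collections import Counter

referrers = Counter()
# combined format: ... "GET /path HTTP/1.1" 200 1234 "referrer" "user-agent"
log_line = re.compile(r'"[^"]*" \d+ \S+ "(?P<ref>[^"]*)"')

with open("access.log") as log:
    for line in log:
        match = log_line.search(line)
        if match and match.group("ref") not in ("-", ""):
            referrers[match.group("ref")] += 1

for url, count in referrers.most_common(10):
    print(count, url)
```

Search-engine referrers include the query people typed, which is a crude but free answer to 'what brings them to the website?'.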

Intellectual accessibility doesn't only apply to the general public – it also applies to the accessibility of other teams' or labs' content. There are so many tables hidden behind the excavation and specialist database interfaces – some are archived, some had a very particular usage, some are still actively used but carry the names of long-gone databases. It's all very well encouraging people to use the database to query across specialisms, but how will they know where to look for the data they need? [And if we make documentation, will anyone read it?]
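
And if documentation is to exist at all, some of it could generate itself. A hypothetical sketch that pulls a table inventory out of MySQL's information_schema – the driver, credentials and 'excavation' schema name are all placeholders:

```python
# Generate a rough table inventory from MySQL's information_schema.
# Assumes the MySQLdb driver; connection details are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="reader",
                       passwd="secret", db="information_schema")
cursor = conn.cursor()

# TABLE_COMMENT is only as useful as the comments people actually write.
cursor.execute("""
    SELECT TABLE_NAME, TABLE_ROWS, TABLE_COMMENT
    FROM TABLES
    WHERE TABLE_SCHEMA = 'excavation'
    ORDER BY TABLE_NAME
""")

for name, rows, comment in cursor.fetchall():
    print(f"{name} ({rows} rows): {comment or 'undocumented'}")
```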

It was quite cool this morning but now it's hot again. Ha, I lied about not saying anything trivial about the weather! Now go do some work.
(Started July 29, but finally posted August 1)

Collected links and random thoughts on user testing

First, some links on considerations for survey design and quick accessibility testing.

Given the constraints of typical museum project budgets, it's helpful to know you can get useful results with as few as five testers. Here's everybody's favourite, Jakob Nielsen, on why you can do usability testing with only five users, card sorting exercises for information architecture with 15 users and quantitative studies with 20 users. Of course, you have to allow for testing for each of your main audiences and ideally for iterative testing too, but let's face it – almost any testing is better than none. After all, you can't do user-centred design if you don't know what your users want.
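
For the curious, the five-user claim rests on a simple model: if one test user uncovers a proportion L of the usability problems (Nielsen puts the average around 31%), then n users are expected to uncover 1 - (1 - L)^n of them:

```python
# Nielsen's model: n test users uncover 1 - (1 - L)**n of the usability
# problems, where L is the share one user finds (roughly 0.31 on average).
def problems_found(n_users, l=0.31):
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
```

Five users come out at roughly 85%, which is why the returns diminish so quickly after that.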

There were a few good articles about evaluation and user-centred design in Digital Technology in Japanese Museums, a special edition of the Journal of Museum Education. I particularly liked the approach in "What Impressions Do People Have Regarding Mobile Guidance Services in Museums? Designing a Questionnaire that Uses Opinions from the General Public" by Hiromi Sekiguchi and Hirokazu Yoshimura.

To quote from their abstract: "There are usually serious gaps between what developers want to know and what users really think about the system. The present research aims to develop a questionnaire that takes into consideration the users' point of view, including opinions of people who do not want to use the system". [my emphasis]

They asked people to write down "as many ideas as they could – doubts, worries, feelings, and expectations" about the devices they were testing. They then grouped the responses and used them as the basis for later surveys. Hopefully this process removes developer- and content producer-centric biases from the questions asked in user testing.

One surprising side-effect of good user testing is that it helps get everyone involved in a project to 'buy into' accessibility and usability. We can all be blinded by our love of technology, our love of the bottom line, our closeness to the material to be published, etc., and forget that we are ultimately only doing these projects to give people access to our collections and information. User testing gives representative users a voice and helps everyone re-focus on what the people who'll be using the content will actually want to do with it.

I know I'm probably preaching to the converted here, but during Brian Kelly's talk on Accessibility and Innovation at UKMW07 I realised that for years I've had an unconscious test for how well I'll work with someone based on whether they view accessibility as a hindrance or as a chance to respond creatively to a limitation. As you might have guessed, I think the 'constraints' of accessibility help create innovations. As 37signals say, "let limitations guide you to creative solutions".

One of the points raised in the discussion that followed Brian's talk was about how to ensure compliance from contractors if quantitative compliance tests and standards are deprecated in favour of qualitative measures. Thinking back over previous experiences, it became clear to me that anyone responding to a project tender should be able to demonstrate an intrinsic motivation to create accessible sites, not just an ability to deal with the big stick of compliance, because a contractor's commitment to accessibility makes such a difference to the development process and outcomes. I don't think user testing will convince a harried project manager to push a designer for a more accessible template, but I do think we have a better chance of implementing accessible and usable sites if user requirements are at the core of the project from the outset.

I'm blogging this post on Twenty Usability Tips for Your Blog so I can find it again and because it's a useful summary. We're in the process of finding a LAMP host, which we'll be able to use for our OAI-PMH repositories as well as blogs and forums.
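
For anyone unfamiliar with OAI-PMH, it's a simple HTTP-and-XML protocol for harvesting metadata records. A minimal sketch of a harvest request – the verb and metadataPrefix are defined by the protocol, but the endpoint URL below is a made-up placeholder:

```python
# Fetch records from an OAI-PMH repository. 'ListRecords' and 'oai_dc'
# are standard protocol values; the base URL is a hypothetical endpoint.
from urllib.request import urlopen
from urllib.parse import urlencode

BASE_URL = "http://example.org/oai"  # placeholder repository endpoint

params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{BASE_URL}?{params}") as response:
    print(response.read(500))  # first chunk of the Dublin Core XML
```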

While on a Web 2.0 buzzword-ish kick, check out the LAARC's photos on Flickr and the Museum of London photo pool. The first LAARC sets are from community excavations at Bruce Castle and Shoreditch Park.

Notes on usability testing

Further to my post about the downloadable usability.gov guidelines, I've picked out the bits from the chapter on 'Usability Testing' that are relevant to my work, but it's worth reading the whole chapter if you're interested. My comments or headings are in square brackets below.

"Generally, the best method is to conduct a test where representative participants interact with representative scenarios.

The second major consideration is to ensure that an iterative approach is used.

Use an iterative design approach

The iterative design process helps to substantially improve the usability of Web sites. One recent study found that the improvements made between the original Web site and the redesigned Web site resulted in thirty percent more task completions, twenty-five percent less time to complete the tasks, and sixty-seven percent greater user satisfaction. A second study reported that eight of ten tasks were performed faster on the Web site that had been iteratively designed. Finally, a third study found that forty-six percent of the original set of issues were resolved by making design changes to the interface.

[Soliciting comments]

Participants tend not to voice negative reports. In one study, when using the 'think aloud' [as opposed to retrospective] approach, users tended to read text on the screen and verbalize more of what they were doing rather than what they were thinking.

[How many user testers?]

Perform usability testing with users:
– Early in the design process, usability testing with a small number of users (approximately six) is sufficient to identify problems with the information architecture (navigation) and overall design issues. If the Web site has very different types of users (e.g., novices and experts), it is important to test with six or more of each type of user. Another critical factor in this preliminary testing is having trained usability specialists as the usability test facilitator and primary observers.
– Once the navigation, basic content, and display features are in place, quantitative performance testing … can be conducted

[What kinds of prototypes?]

Designers can use either paper-based or computer-based prototypes. Paper-based prototyping appears to be as effective as computer-based prototyping when trying to identify most usability issues.

Use inspection evaluation [and cognitive walkthroughs] results with caution.
Inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. It is a common practice to conduct an inspection evaluation to try to detect and resolve obvious problems before conducting usability tests. Inspection evaluations should be used cautiously because several studies have shown that they appear to detect far more potential problems than actually exist, and they also tend to miss some real problems.

Heuristic evaluations and expert reviews may best be used to identify potential usability issues to evaluate during usability testing. To improve somewhat on the performance of heuristic evaluations, evaluators can use the 'usability problem inspector' (UPI) method or the 'Discovery and Analysis Resource' (DARe) method.

Cognitive walkthroughs may best be used to identify potential usability issues to evaluate during usability testing.

Testers can use either laboratory or remote usability testing because they both elicit similar results.

[And finally]

Use severity ratings with caution."

Useful background on usability testing

I came across www.usability.gov while looking for some background information on usability testing to send to colleagues I'm planning some user evaluation with. It looks like a really useful resource for all stages of a project, from planning to deployment.

Their guidelines are available to download in PDF form, either as an entire book or as specific chapters.