BBC experimenting with inline links in articles

I noticed the following link when reading a BBC article today:

BBC: We are trialling a new way to allow you to explore background material without leaving the page.
If you turn on inline links, they appear as subtle blue text against the usual grey. Some have icons indicating which site the link relates to (YouTube, Wikipedia); others don’t. Links with an icon open the content directly over the article; links without icons open in the same window, taking you away from the BBC story. Screenshot below:


The ‘Read more’ link goes to a page, ‘Story links trial’, which says:

For a limited period the BBC News Website is experimenting with clickable links within the body of news stories.

If you click on one of these links, a window will appear containing background material relevant to that word that is highlighted. The links have been carefully chosen by our journalists.

We are doing this trial because we want to see if you enjoy exploring background material presented in this way. It’s part of our continuing efforts to provide the best possible experience.

In addition to background material from the BBC News website, we are also displaying content from other sites, including Wikipedia, You Tube and Flickr.

I’d be really interested to know what the results of the trial are, and I hope the BBC share them. I’ve been thinking about inline links and faceted browsing for collections sites recently, and while the response would presumably vary if the links were only to related content on the same site, it would be useful to know how the two types of links are received.

The story I noticed the link on is also interesting because it shows how content created in a ‘social software’ way can be (probably wilfully, in this case) misinterpreted:

“Downing Street has been accused of wasting taxpayers’ money after making a jokey video in response to a petition for Jeremy Clarkson to be made PM.

A Conservative Party spokesman said: “While the British public is having to tighten its belts, the government is spending taxpayers’ money on a completely frivolous project.””

Notes from ‘Who has the responsibility for saying what we see?’ in the ‘Theoretical Frameworks’ session, MW2008

These are my notes from the second paper, ‘Who has the responsibility for saying what we see? mashing up Museum and Visitor voices, on-site and online’ by Peter Samis in the Theoretical Frameworks session chaired by Darren Peacock at Museums and the Web 2008.

The other session papers were Object-centred democracies: contradictions, challenges and opportunities by Fiona Cameron and The API as Curator by Aaron Straup Cope; all the conference papers and notes I’ve blogged have been tagged with ‘MW2008’.

It’s taken me a while to catch up on some of my notes – real life has a way of demanding attention sometimes. Any mistakes are mine, any comments or corrections are welcome, and the comments in [square brackets] below are mine.

Peter Samis spoke about the work of SFMOMA with Olafur Eliasson. His slides are here.

How our perception changes how we see the world…

“Objecthood doesn’t have a place in the world if there’s not an individual person making use of that object… I of course don’t think my work is about my work. I think my work is about you.” (Olafur Eliasson, 2007)

Samis gave an overview of the exhibitions “Take your time: Olafur Eliasson” and “Your tempo” presented at SFMOMA.

The “your” in the titles demands a proactive and subjective approach; stepping into installations rather than looking at paintings. The viewer is integral to the fulfilment of a work’s potential.

Do these rules apply to all [museum] objects? These are the questions…

They aimed to encourage visitors in contemplation of their own experience.

Visitors who came to the blog viewed 75% of pages. Comments were left by 2% of blog visitors.

There was a greater interest in seeing how others responded than in contributing to the conversation. Comments were a ‘mixed bag’.

The comments helped with understanding visitor motivations in narratives… there’s a visual ‘Velcro effect’ – some artworks stay with people – the more visceral the experience of various artworks, the greater the corresponding number of comments.

[Though I wondered if it’s an unproblematic and direct relationship? People might have a relationship with the art work that doesn’t drive them to comment; that requires more reflection to formulate a response; or that might occur at an emotional rather than intellectual level.]

Visitors also take the opportunity to critique the exhibition/objects and curatorial choices when asked to comment.

What are the criteria of value for comments? By whose standards? And who within the institution reads the blog?

How do you know if you’ve succeeded? Depends on goals.

“We opened the door to let visitors in… then we left the room. They were the only ones left in the room.” – the museum opens up to the public then steps out of the dialogue. [Slide 20]

[I have quoted this in conversation so many times since the conference. I think it’s an astute and powerful summary of the unintended effect of participatory websites that aren’t integrated into the museum’s working practices. We say we want to know what our visitors think, and then we walk away while they’re still talking. This image is great because it’s so visceral – everyone realises how rude that is.]

Typology/examples of museum blogs over time… based on whether they are open to comments, and whether they act like docents/visitor assistants and have conversations with the public in front of the artworks.

If we really engage with our visitors, will we release the “pent up comments”?
A NY Times migraine blog post had 294 reflective, articulate, considered, impassioned comments on the first day.

[What are your audiences’ pent up questions? How do you find the right questions? Is it as simple as just asking our audiences, and even if it isn’t, isn’t that the easiest place to start? If we can crack the art of asking the right questions to elicit responses, we’re in a better position.]

Nina Simon’s hierarchy of social participation. Museums need to participate to get to higher levels of co-creative, collaborative process. “Community producer” – enlist others, get cross-fertilisation.

Even staff should want to return to your blogs and learn from them.

[Who are the comments that people leave addressed to? Do we tell them, or do we just expect them to comment into empty space? Is that part of the reason for low participation rates? What’s the relationship between participation and engagement? But also, just because people aren’t participating in the forum you provide doesn’t mean they’re not participating somewhere else – they might be engaging with it in other forums, conversations in the pub, etc. Not everything is captured online, even if the seed is online and in your institution.]

Another way to find out what’s being said about your organisation

If you’re curious to know what’s being said about your institution, collection or applications, Omgili might help you discover conversations about your organisation, websites or applications. If you don’t have the resources for formal evaluation programmes, it can be a really useful way of finding out how and why people use your resources, and of figuring out how you can improve your online offerings. From their ‘about’ blurb:

Omgili finds consumer opinions, debates, discussions, personal experiences, answers and solutions. … [it’s] a specialized search engine that focuses on “many to many” user generated content platforms, such as, Forums, Discussion groups, Mailing lists, answer boards and others. … Omgili is a crawler based, vertical search engine that scans millions of online discussions worldwide in over 100,000 boards, forums and other discussion based resources. Omgili knows to analyze and differentiate between discussion entities such as topic, title, replies and discussion date.

Who’s talking about you?

This article explains how you can use RSS feeds to track mentions of your company (or museum) in various blog search sites: Ego Searches and RSS.

It’s a good place to start if you’re not sure what people are saying about your institution, exhibitions or venues or whether they might already be creating content about you. Don’t forget to search Flickr and YouTube too.
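If you want to automate these ego searches, a small script can poll search-result RSS feeds and pick out items that mention your institution. This is only a sketch under assumptions: the feed URL and the institution name below are placeholders for whatever search feeds you subscribe to, and it assumes standard RSS 2.0 `<item>` elements with `title`, `link` and `description` children – real blog-search feeds may vary.

```python
# Sketch: poll blog-search RSS feeds for mentions of your institution.
# The feed URL and search term are hypothetical placeholders.
import urllib.request
import xml.etree.ElementTree as ET


def mentions_from_rss(rss_xml: str, term: str):
    """Return (title, link) pairs for RSS items whose title or
    description mentions `term` (case-insensitive)."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        link = item.findtext("link", default="")
        if term.lower() in (title + " " + desc).lower():
            hits.append((title, link))
    return hits


if __name__ == "__main__":
    # Hypothetical search feed, e.g. a blog-search query for your
    # museum's name saved as an RSS URL.
    feed_url = "https://example.com/search?q=%22Example+Museum%22&output=rss"
    with urllib.request.urlopen(feed_url) as resp:
        for title, link in mentions_from_rss(resp.read().decode(), "Example Museum"):
            print(title, "->", link)
```

Run on a schedule (cron, say, once a day) this gives you a rolling list of new mentions without visiting each search site by hand.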

Interesting BBC article on the philosophy behind Craigslist:

Initially it was mostly coming in via email which we would reply to, but we’ve grown so much that now the more common thing is you set up a series of discussion forums in which users bring up various things that they think are important to change or modify in some way.

Users talk amongst themselves about things we’re doing poorly or could be doing better, and then we’re able to observe that interaction. It proves to be a very kind of efficient and interesting and useful way, nowadays, of digesting that feedback.

The other important aspect that you might not imagine initially is that all of the feedback is coming in as ‘voting with their feet’. We just watch how people are using particular categories.

If we see that, ‘oh users want to do this and we’re not currently enabling this’, then we try to code up some changes to better enable them to do whatever that is.

Notes on usability testing

Further to my post about the downloadable usability.gov guidelines, I’ve picked out the bits from the chapter on ‘Usability Testing’ that are relevant to my work, but it’s worth reading the whole chapter if you’re interested. My comments or headings are in square brackets below.

“Generally, the best method is to conduct a test where representative participants interact with representative scenarios.

The second major consideration is to ensure that an iterative approach is used.

Use an iterative design approach

The iterative design process helps to substantially improve the usability of Web sites. One recent study found that the improvements made between the original Web site and the redesigned Web site resulted in thirty percent more task completions, twenty-five percent less time to complete the tasks, and sixty-seven percent greater user satisfaction. A second study reported that eight of ten tasks were performed faster on the Web site that had been iteratively designed. Finally, a third study found that forty-six percent of the original set of issues were resolved by making design changes to the interface.

[Soliciting comments]

Participants tend not to voice negative reports. In one study, when using the ‘think aloud’ [as opposed to retrospective] approach, users tended to read text on the screen and verbalize more of what they were doing rather than what they were thinking.

[How many user testers?]

Performance usability testing with users:
– Early in the design process, usability testing with a small number of users (approximately six) is sufficient to identify problems with the information architecture (navigation) and overall design issues. If the Web site has very different types of users (e.g., novices and experts), it is important to test with six or more of each type of user. Another critical factor in this preliminary testing is having trained usability specialists as the usability test facilitator and primary observers.
– Once the navigation, basic content, and display features are in place, quantitative performance testing … can be conducted

[What kinds of prototypes?]

Designers can use either paper-based or computer-based prototypes. Paper-based prototyping appears to be as effective as computer-based prototyping when trying to identify most usability issues.

Use inspection evaluation [and cognitive walkthroughs] results with caution.
Inspection evaluations include heuristic evaluations, expert reviews, and cognitive walkthroughs. It is a common practice to conduct an inspection evaluation to try to detect and resolve obvious problems before conducting usability tests. Inspection evaluations should be used cautiously because several studies have shown that they appear to detect far more potential problems than actually exist, and they also tend to miss some real problems.

Heuristic evaluations and expert reviews may best be used to identify potential usability issues to evaluate during usability testing. To improve somewhat on the performance of heuristic evaluations, evaluators can use the ‘usability problem inspector’ (UPI) method or the ‘Discovery and Analysis Resource’ (DARe) method.

Cognitive walkthroughs may best be used to identify potential usability issues to evaluate during usability testing.

Testers can use either laboratory or remote usability testing because they both elicit similar results.

[And finally]

Use severity ratings with caution.”