'Shownar: reflecting online buzz around BBC programmes' [read: museum objects]

Call me mildly obsessive (sad, even), but I got really excited when I read this and mentally replaced 'BBC programme' with 'museum object'. From the BBC Internet Blog:

Today sees the launch of Shownar, a new prototype from BBC Vision which aims to track online buzz around BBC TV and radio programmes and reflect it back in useful and interesting ways, aiding programme discovery and providing onward journeys to discussion about those programmes on the wider web.

Shownar aims to track the wealth of activity that takes place around BBC programmes online and work out which are currently gaining the most attention.

So, how does it work? In the first instance, we decided to focus on tracking in-bound links to programme-related pages on bbc.co.uk, so we could be confident that the discussions were actually about a BBC programme … We took a look at a range of possible suppliers, and for this initial prototype chose data provided by Yahoo! Search BOSS, Nielsen Online's BlogPulse (which indexes over 100 million blogs), and Twingly (which searches microblogging services like Twitter, Jaiku and Identi.ca for links, even when they are shortened using URL shortening services such as TinyURL and bit.ly). We are also ingesting data from LiveStats, the BBC's own real-time indicator of traffic. Once ingested, this data is processed according to a specially created algorithm to calculate the 'buzz measure' for every BBC programme – more detail on the algorithm can be found on Shownar's Technical information page.
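The buzz algorithm itself is only described on Shownar's Technical information page, but the general shape – fold weighted link counts from each source into a single score per programme – might look something like this toy Python sketch. The source names and weights below are invented for illustration, not Shownar's:

```python
# Toy sketch only: not Shownar's actual algorithm.
# Weights per data source are invented for illustration.
SOURCE_WEIGHTS = {
    "blogpulse": 1.0,    # blog posts linking to a programme page
    "twingly": 0.5,      # tweets/microblog mentions
    "yahoo_boss": 0.8,   # general in-bound links from web search
    "livestats": 0.01,   # raw page views on bbc.co.uk itself
}

def buzz_measure(link_counts):
    """Combine per-source counts for one programme into a single score.

    link_counts maps a source name to the number of in-bound links or
    mentions seen for that programme over the sampling window.
    """
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in link_counts.items())

print(buzz_measure({"blogpulse": 12, "twingly": 40, "livestats": 3000}))  # 62.0
```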

The post discusses some of the interfaces and benefits – I think the possibilities are pretty endless, and over the year I'll be exploring how it might enhance the discoverability of the Science Museum's online collections and harness conversations about them.

Hat tip: @giv_p

'Finding yourself with Fire Eagle' at WSG Findability

On Wednesday I went to the WSG London Findability event, and I've finally got the last of my notes up.

The final talk was from Steve Marshall, on 'Finding yourself with Fire Eagle'.

Steve doesn't work on Fire Eagle itself, but he wrote the Python library for it.

Fire Eagle is a service that helps you manage your location data.

Most location-aware applications have two parts – getting the location, and using the location.

Better model – distribute the location information, but the application getting the location still has to know who's using it.

Even better model: a brokering service. Fire Eagle sits between any 'getting' applications and any 'using' applications, and handles the exchange.

[FWIW, 'Fire Eagle is a brokering service for location data' is probably the best explanation I've heard, and I'd heard it before but I needed the context of the 'get' and the 'use' applications it sits between for it to stick in my brain.]

So how does it work? In the web application context (it's different for desktop or mobile applications), the exchange goes like this – there's a rough code sketch after the list:
Web app: app asks for Request Token
Fire Eagle: returns Request Token
Web app: user sent to Fire Eagle with token in URL
Fire Eagle: user chooses privacy levels and authorises app
Web app: user sent back to callback URL with Request Token
Web app: app initiates exchange of Request Token
Fire Eagle: Request Token exchanged for Access Token
Web app: app stores Access Token for user
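This isn't Fire Eagle's documented API – the endpoint URLs below are placeholders and the verifier step is modern OAuth 1.0a rather than the 2008-era flow, and in practice you'd just use Steve's Python library – but as a minimal sketch of that exchange in Python (using requests-oauthlib) it might look like this:

```python
# Minimal sketch of the web-app token exchange. Endpoint URLs are
# placeholders, not the real Fire Eagle API.
from requests_oauthlib import OAuth1Session

REQUEST_TOKEN_URL = "https://fireeagle.example/oauth/request_token"  # placeholder
AUTHORIZE_URL = "https://fireeagle.example/oauth/authorize"          # placeholder
ACCESS_TOKEN_URL = "https://fireeagle.example/oauth/access_token"    # placeholder

def start_auth(consumer_key, consumer_secret, callback_url):
    """Steps 1-4: get a Request Token, then send the user off to authorise."""
    session = OAuth1Session(consumer_key, client_secret=consumer_secret,
                            callback_uri=callback_url)
    request_token = session.fetch_request_token(REQUEST_TOKEN_URL)
    return request_token, session.authorization_url(AUTHORIZE_URL)

def finish_auth(consumer_key, consumer_secret, request_token, verifier):
    """Steps 5-8: after the callback, swap the Request Token for an Access Token."""
    session = OAuth1Session(consumer_key, client_secret=consumer_secret,
                            resource_owner_key=request_token["oauth_token"],
                            resource_owner_secret=request_token["oauth_token_secret"],
                            verifier=verifier)
    return session.fetch_access_token(ACCESS_TOKEN_URL)  # store this per user
```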

You can manage your applications, and can revoke permissions (who can set or get your location) at any time. You can also temporarily hide your location, or purge all your data from the service. [Though it might be kept by the linked applications.]

How to use:
1. Get API key
2. Authenticate with user (OAuth)
3. Make API call

Concepts:
Locations can be a point or a bounding box.
Location hierarchy – a set of locations at varying degrees of precision.
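To make those two concepts concrete, here's a purely illustrative Python sketch of what a returned location hierarchy might look like – the field names are mine, not the real Fire Eagle response schema:

```python
# Illustrative only: field names and values are not the real Fire Eagle schema.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Location:
    name: str
    point: Optional[Tuple[float, float]] = None                # (lat, lon) for precise levels
    bbox: Optional[Tuple[float, float, float, float]] = None   # (south, west, north, east)

# A location hierarchy: the same place at varying degrees of precision,
# so an application only sees the level the user has authorised.
hierarchy = [
    Location("Westminster", point=(51.4975, -0.1357)),
    Location("London", bbox=(51.28, -0.51, 51.69, 0.33)),
    Location("United Kingdom", bbox=(49.9, -8.6, 60.9, 1.8)),
]
```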

[There was some good stuff on who could/is using it, and other ideas, but my notes got a bit useless around this point.]

In summary: you can share your location online, control your data and privacy, and easily build location services.

Discussion:
Question: what makes it special? Answer: it's not coupled to anything. It plays to the strengths of the developers who use it.

'Fire Eagle: twitter + upcoming + pixie dust'.

URLs are bookmarkable, which means they can be easy to use on a phone [hooray]. It doesn't (currently) store location history; that's being discussed.

Qu: what's the business model? Ans: it's a Brickhouse project (from an incubator/start-up environment).

All methods are HTTP requests at the moment; they might also use XMPP pings.

Qu: opening up the beta? Ans: will give Fire Eagle invitations if you have an application that needs testing.

I had to leave before the end of the questions because the event was running over time and I had to meet people in another pub, so I missed out on the probably really interesting conversations down the pub afterwards.

My notes:
Looking at their hierarchy of 'how precisely will you let application x locate you', it strikes me that it's quite country-dependent: a postcode identifies a location very precisely within the UK (to within one of a few houses in a row), while in Australia it just gives you the area of a huge suburb. I'm not sure how precise they are in the US, where postcodes might fit better in the hierarchy.

I've also blogged some random thoughts on how services like Fire Eagle make location-linked cultural heritage projects more feasible.

'Building websites with findability in mind' at WSG London Findability

Last night I went to the WSG London Findability event at Westminster University, part of London Web Week; here's part two of my notes.

Stuart Colville's session on 'Building websites with findability in mind' was full of useful, practical advice.

Who needs to find your content?

Basic requirements:
Understand potential audience(s)
Content
Semantic markup
Accessibility (for people and user agents)
Search engine friendly

Content [largely about blogs]:
Make it compelling for your audience
There's less competition in niche subjects
Originality (synthesising content, or representing existing content in new ways is also good)
Stay on topic
Provide free, useful information or tools
Comments and discussion (from readers, and interactions with readers) are good

Tagging:
Author or user-generated, or both
Good for searching
Replaces fixed categories
Enables arbitrary associations
Rich

Markup (how to make content more findable):
Use web standards. They're not a magic fix but they'll put you way ahead. The content:code ratio is improved, and errors are reduced.
Use semantic markup. Adds meaning to content.
Try the roundabout SEO test
Make your sites accessible. Accessible content is indexable content.

Markup meta:
Keywords versus descriptions. Tailor descriptions for each page; they can be automatically generated; they can be used as summaries in search results.
WordPress has good plugins – metadescription for auto-generated metadata; there are others for manual metadata.
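A purely illustrative Python sketch of the auto-generated description idea (this isn't how the WordPress plugin works, just the general shape: trim the page copy to a snippet-friendly length):

```python
import re

def meta_description(body_text, limit=155):
    """Build a per-page meta description by trimming the page copy to a
    search-snippet-friendly length, cutting at a word boundary."""
    text = re.sub(r"\s+", " ", body_text).strip()
    if len(text) <= limit:
        return text
    return text[:limit].rsplit(" ", 1)[0] + "…"

print(meta_description("Shownar is a prototype from BBC Vision which tracks "
                       "online buzz around BBC TV and radio programmes."))
```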

Markup titles and headings:
Make them good – they'll appear as bookmark etc titles.
One H1 per page; the most meaningful title for that page
Separate look from heading structure.

Markup text:
Use semantically correct elements to describe content. Strong, em, etc.

Markup imagery:
Background images are fine if they're only a design element.
Use image replacement if the images have meaning. There are some accessibility issues.
Use attributes correctly, and make sure they're relevant.

Markup microformats:
Microformats are a simple layer of structure around content
They're easy to add to your site
Yahoo! Search and Technorati index them; Firefox 3 will increase exposure.

Markup Javascript:
Start unobtrusive and enhance according to the capabilities of the user agent.
Don't be stupid. Use onclick if you need it, but don't kill the href (e.g. href="#").
Use event delegation – no inline events. It's search engine accessible, has nice clean markup and you still get all the functionality.
[Don't break links! I like opening a bunch of search results in new tabs, and if I can't do that on your online catalogue, I'll shop somewhere I can. Rant over.]

Performance and indexation:
Use 'last modified' headers – concentrate search agents on fresh content [see the sketch after this list]
Sites with Google Ads are indexed more often.
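Here's a minimal, standard-library-only Python sketch of the 'last modified' advice: answer a conditional GET so crawlers don't re-fetch pages that haven't changed. The framework wiring around it is left out:

```python
from email.utils import formatdate, parsedate_to_datetime

def conditional_get(page_mtime, if_modified_since=None):
    """Return (status, headers) for a page last changed at page_mtime
    (a Unix timestamp), honouring an If-Modified-Since request header."""
    headers = {"Last-Modified": formatdate(page_mtime, usegmt=True)}
    if if_modified_since:
        since = parsedate_to_datetime(if_modified_since).timestamp()
        if page_mtime <= since:
            return 304, headers   # not modified: the crawler keeps its cached copy
    return 200, headers
```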

URLs:
Hackable URLs are good [yay!].
Avoid query strings – they won't be indexed
Put keywords in your URL path
Use mod_rewrite, etc.

URI permanence:
"They should be forever". But you need to think about them so they can be forever even if you change your mind about implementation or content structure.
Use rewrites if you do change them.

De-indexing (if you've moved content):
Put up a 404 page with proper HTTP headers. A 410 ('intentionally gone') is nice.
There's a tool from Google to quickly de-index content.
Make 404s useful to users – e.g. run an internal search and display likely results from your site based on their search engine keywords [or previous page title].
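A small Python sketch of that 'make 404s useful' idea – pull the visitor's search keywords out of the referring URL and feed them to your internal search. The search_index object is a hypothetical stand-in for whatever site search you already have:

```python
from urllib.parse import urlparse, parse_qs

def suggestions_for_404(referrer, search_index):
    """On a 404, recover the search-engine keywords from the referring URL
    (many engines pass the query as ?q=...) and run an internal search."""
    if not referrer:
        return []
    query = parse_qs(urlparse(referrer).query)
    keywords = (query.get("q") or [""])[0]
    # search_index is hypothetical: any object with a .search(terms) method.
    return search_index.search(keywords) if keywords else []
```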

Robots.txt – really good, but use it carefully. Robots-Nocontent – Yahoo! introduced a 'don't index this part' marker for e.g. divs, but it hasn't caught on.

Moving content:
Use 301. Redirect users and get content re-indexed by search engines.

Tools for analysing your findability:
Google Webmaster Tools, Google Analytics, log files. It's worth doing – check for broken links, etc.

Summary:
Think about findability before you write a line of code.
Start with good content, then semantic markup and accessibility.
Use sensible headings, titles, links.

WSG London Findability 'introduction to findability'

Last night I went to the WSG London Findability event at Westminster University. The event was part of London Web Week. As always, apologies for any errors; corrections and comments are welcome.

First up was Cyril Doussin with an 'introduction to findability'.

A lot of it is based on research by Peter Morville, particularly Ambient Findability.

So what do people search for?
Knowledge – about oneself; about concepts/meaning; detailed info (product details, specs); entities in society (people, organisations, etc.)
Opinions – to validate a feeling or judgement; establish trust relationships; find complementary judgements.

What is information? From simple to complex – data -> information -> knowledge.

Findability is 'the quality of being locatable or navigable'.
Item level – to what degree is a particular object easy to discover or locate?
System level – how well does the environment support navigation and retrieval?

Wayfinding requires: knowing where you are; knowing your destination; following the best route; being able to recognise your destination; being able to find your way back.

The next section was about how to make something findable:
The "in your face" discovery principle – expose the item in places known to be frequented by the target audience. He showed an example of a classic irritating Australian TV ad, a Brisbane carpet store in this case. It's disruptive and annoying, but everyone knows it exists. [Sadly, it made me a little bit homesick for Franco Cozzo. 'Megalo megalo megalo' is also a perfect example of targeting a niche audience, in this case the Greek and Italian speakers of Melbourne.]

Hand-guided navigation – sorting/ordering (e.g. sections of a restaurant menu); sign-posting.

Describe and browse (e.g. search engines) – similar to asking for directions or asking random questions; get a list of entry points to pages.

Mixing things up – the Google 'search within a search' and Yahoo!'s 'search assist' box both help users refine searches.

Recommendations (communication between peers) – the searcher describes intent; casual discussions; advice; past experiences.
The web is a referral system. Links are entry doors to your site. There's a need for a relevancy system, whether via search engines (PageRank) or peer-based systems (Digg).

Measuring relevance (effectiveness):
Precision – whether it retrieves only relevant documents
Recall – whether it retrieves all relevant documents.

Good tests for the effectiveness of your relevance mechanism:
Precision = number of relevant and retrieved documents divided by the total number retrieved.
Recall = number of relevant and retrieved documents divided by the total number of relevant documents.
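A quick worked example of those two tests in Python, with documents represented as sets of ids:

```python
def precision(relevant, retrieved):
    # Of the documents we retrieved, what fraction were relevant?
    return len(relevant & retrieved) / len(retrieved)

def recall(relevant, retrieved):
    # Of all the relevant documents, what fraction did we retrieve?
    return len(relevant & retrieved) / len(relevant)

relevant = {"doc1", "doc2", "doc3", "doc4"}
retrieved = {"doc2", "doc3", "doc9"}
print(precision(relevant, retrieved))  # 2 of 3 retrieved are relevant ≈ 0.67
print(recall(relevant, retrieved))     # 2 of 4 relevant were retrieved = 0.5
```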

Relevance – need to identify the type of search:
Sample search – small number of documents are sufficient (e.g. first page of Google results)
Existence search – search for a specific document
Exhaustive search – full set of relevant data is needed.
Sample and existence searches require precision; exhaustive searches require recall.

Content organisation:
Taxonomy – organisation through labelling [but it seems in this context there's no hierarchy; the taxa are flat tags].
Ontology – taxonomy and inference rules.
Folksonomy – a social dimension.

[In the discussion he mentioned eRDF (embedded RDF) and microformats. Those magic words – subject : predicate : object.]

Content organisation is increasingly important because of the growing volume of information and the amount of it being shared. It's also a very good base for search engines.

Measuring findability on the web: count the number of steps to get there. There are many ways to get to data – search engines, peer-based lists and directories.
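One way to make 'count the number of steps' concrete is a breadth-first search over your own link graph: how many clicks from an entry point (home page, a search result, a directory listing) to the item? A toy Python sketch, with a made-up link graph:

```python
from collections import deque

def steps_to_reach(links, start, target):
    """Breadth-first search: minimum number of clicks from `start` to `target`
    in a link graph (page -> list of pages it links to). Returns None if the
    target isn't reachable at all."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        page, steps = queue.popleft()
        if page == target:
            return steps
        for next_page in links.get(page, []):
            if next_page not in seen:
                seen.add(next_page)
                queue.append((next_page, steps + 1))
    return None

# Made-up example graph.
links = {
    "home": ["collections", "about"],
    "collections": ["object-123", "object-456"],
}
print(steps_to_reach(links, "home", "object-456"))  # 2
```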

Recommendations:
Aim to strike a balance between sources, e.g. search engine optimisation and peer-based recommendations
Know the path(s) your audience(s) will follow (user testing)
Understand the types of search
Make advertising relevant (difficult, as it's so context-dependent)
Make content rich and relevant
Make your content structured

I've run out of lunch break now, but will write up the talks by Stuart Colville and Steve Marshall later.