What are the right questions about museum websites?

It should be fairly simple to answer the question, ‘what’s the point of a museum website?’ because the answer should surely be some variant on ‘to further the mission and goals of the museum’.

But what is it about being online, about being on or of the web that problematises that answer?

Is it that there are so many other sites providing similar content, activities and access to knowledge? Is it that the niche role many museums play in their local communities doesn’t translate into online space? Is it that other sites got in earlier and now host better conversations about museum collections?

Or is the answer not really problematic – there have always been other conversations about collections and ways of accessing knowledge, and the question is really about where museums and their various activities fit in the digital landscape?

I don’t know, but it’s Friday night and I should be on my way out, so I’m going to turn the question over to smarter minds… What are the right questions and why is it difficult for a museum to translate its mission directly to its website?

Update, the next day… This quote from an article, Lost professors: we won’t need academics in 60 years, addresses one of my theories about why translating a museum’s mission into the online context is problematic:

…there are probably several hundred academics in Australia who lecture on, say, regression analysis, and very few of us could claim to be in the top 1% – actually only 1% of us.

The web allows 100% of the students to access the best 1%. Where is the market for duplication of mediocre course material by research academics?

I’m not saying any museum content is mediocre, of course, but the point about the challenges of the sudden visibility of duplicated content remains. If the museum up the road or in the next town has produced learning activities or expert commentary about the same regional/national history events or objects, does it further your mission to post similar content? What content or activities can you host that is unique to your museum, either because of your particular niche collections or context or because no-one else has done it yet?

Also, for further context:
  • Report from ‘What’s the point of a museum website’ at MCN2011
  • Brochureware, aggregators and the messy middle: what’s the point of a museum website? (which is really about ‘what forms do museum websites take’)
  • earlier posts on What would a digital museum be like if there was never a physical museum? and the related Thoughts towards the future of museums for #kulturwebb
  • What’s the point of museum collections online? (Angelina’s succinct response: digital content recognises audience experiences, providing opportunities for personal stories to form a significant part of the process of interpretation)
  • and finally, thoughts about The rise of the non-museum – museums are possibly the least agile body in the cultural content market right now.

Organisational pain

If you work in a large organisation (or a cultural heritage organisation of almost any size), you may find cathartic release in reading this response to criticism of a large website from a member of its internal webteam:

…simply doing a home page redesign is a piece of cake. You want a redesign? I’ve got six of them in my archives. It only takes a few hours to put together a really good-looking one, as you demonstrated in your post. But doing the design isn’t the hard part, and I think that’s what a lot of outsiders don’t really get, probably because many of them actually do belong to small, just-get-it-done organizations. But those of us who work in enterprise-level situations realize the momentum even a simple redesign must overcome, and not many, I’ll bet, are jumping on this same bandwagon. They know what it’s like.

As always, I’m not particularly pointing the finger at my own institution, but I’ve definitely been there. Cultural heritage institutions tend to have bonus! added! overload on web teams, so the list of improvements you want to make is always much longer than the resources you have available.

How to build a web application in four days

There’s been a bit of buzz around ‘How To Build A Web App in Four Days For $10,000’. Not everything is applicable to the kinds of projects I’d be involved in, but I really liked these points:
  • The best boost you can give yourself or your team is to provide the time to be creative.
  • You’ll come back to your current projects with a new perspective and renewed energy.
  • It will push your team to learn new skills.
  • Simplify the site and app as much as possible. Try launching with just ‘Home’, ‘Help’ and ‘About’.
  • Make sure to build on a great framework.
  • Be technologically agnostic. If your developers are saying it should be built in a certain language and framework and they have solid reasons, trust them and move on.
  • Coordinate how your designers and developers are going to work together.
  • Get your ‘Creation Environment’ set up correctly. [See the original post for details]

“The coolest thing to be done with your data will be thought of by someone else”

I discovered this ace quote, “the coolest thing to be done with your data will be thought of by someone else”, on JISC’s Common Repository Interfaces Group (CRIG) site, via The Repository Challenge. The CRIG was created to “help identify problem spaces in the repository landscape and suggest innovative solutions. The CRIG consists of a core group of technical, policy and development staff with repository interface expertise. It encourages anyone to join who is dedicated and passionate about surfacing scholarly content on the web.”

Read ‘repository or federated search’ for ‘repository’ (or think of a federated search as a pseudo-repository) and ‘cultural heritage’ for ‘scholarly’ content, and it sounds like an awful lot of fun.

It’s also the sentiment behind the UK Government’s Show Us a Better Way, the Mashed Museum days and a whole bunch of similar projects.

Scripting enabled – accessibility mashup event and random Friday link

Scripting Enabled, “a two day conference and workshop aimed at making the web a more accessible place”, is an absolutely brilliant idea, and since it looks like it’ll be on September 19 and 20, the weekend after BathCamp, I’m going to do my best to make it down. (It’s the weekend before I start my Masters in HCI, so it’s the perfect way to set the tone for the next two years.)

From the site:

The aim of the conference is to break down the barriers between disabled users and the social web as much as giving ethical hackers real world issues to solve. We talked about improving the accessibility of the web for a long time – let’s not wait, let’s make it happen.

A lot of companies have data and APIs available for mashups – let’s use these to remove barriers rather than creating another nice visualization.

And on a random Friday night, this is a fascinating post on Facial Recognition in Digital Photo Collections: “Polar Rose, a Firefox toolbar that does facial recognition on photos loaded in your browser.”

Nice information design/visualisation pattern browser

infodesignpatterns.com is a Flash-based site that presents over 50 design patterns ‘that describe the functional aspects of graphic components for the display, behaviour and user interaction of complex infographics’.

The development of a design pattern taxonomy for data visualisation and information design is a work in progress, but the site already has a useful pattern search, based on order principle, user goal, graphic class and number of dimensions.

‘Finding yourself with Fire Eagle’ at WSG Findability

On Wednesday I went to the WSG London Findability event, and I’ve finally got the last of my notes up.

The final talk was from Steve Marshall, on ‘Finding yourself with Fire Eagle’.

Steve doesn’t work on Fire Eagle but made the Python library.

Fire Eagle is a service that helps you manage your location data.

Most location-aware applications have two parts – getting the location, and using the location.

Better model – distribute the location information, but the application getting the location still has to know who’s using it.

Even better model: a brokering service. Fire Eagle sits between any ‘getting’ applications and any ‘using’ applications, and handles the exchange.

[FWIW, ‘Fire Eagle is a brokering service for location data’ is probably the best explanation I’ve heard, and I’d heard it before but I needed the context of the ‘get’ and the ‘use’ applications it sits between for it to stick in my brain.]

So how does it work? In the web application context (it’s different for desktop or mobile applications):
Web app: app asks for Request Token
Fire Eagle: returns Request Token
Web app: user sent to Fire Eagle with token in URL
Fire Eagle: user chooses privacy levels and authorises app
Web app: user sent back to callback URL with Request Token
Web app: app initiates exchange of Request Token
Fire Eagle: Request Token exchanged for Access Token
Web app: app stores Access Token for user
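The token dance above can be sketched with a toy in-memory broker. Everything here (the TokenBroker class, its method names) is illustrative only, not the real Fire Eagle API; it just shows the request-token → user-authorisation → access-token exchange:

```python
import secrets

class TokenBroker:
    """Toy stand-in for a brokering service like Fire Eagle.

    Simulates the OAuth-style dance from the notes: request token ->
    user authorisation -> exchange for access token. All names here
    are hypothetical, not Fire Eagle's actual interface.
    """

    def __init__(self):
        self._request_tokens = {}   # request token -> authorised by user?
        self._access_tokens = set()

    def issue_request_token(self):
        token = secrets.token_hex(8)
        self._request_tokens[token] = False
        return token

    def authorise(self, request_token):
        # In reality the *user* does this on the broker's site,
        # choosing privacy levels before approving the app.
        if request_token not in self._request_tokens:
            raise KeyError("unknown request token")
        self._request_tokens[request_token] = True

    def exchange(self, request_token):
        # Only a request token the user has authorised can become
        # an access token; the request token is consumed.
        if not self._request_tokens.get(request_token):
            raise PermissionError("token not authorised by user")
        del self._request_tokens[request_token]
        access = secrets.token_hex(8)
        self._access_tokens.add(access)
        return access

# The web-app side of the dance:
broker = TokenBroker()
req = broker.issue_request_token()   # app asks for a Request Token
broker.authorise(req)                # user approves on the broker's site
access = broker.exchange(req)        # app swaps it for an Access Token
```

The point the talk made is visible in the shape of the code: the ‘getting’ and ‘using’ applications never talk to each other, only to the broker.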

You can manage your applications, and can revoke permissions (who can set or get your location) at any time. You can also temporarily hide your location, or purge all your data from the service. [Though it might be kept by the linked applications.]

How to use:
1. Get API key
2. Authenticate with user (OAuth)
3. Make API call

Concepts:
Locations can be a point or a bounding box.
Location hierarchy – a set of locations at varying degrees of precision.
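A minimal sketch of those two concepts: a point, a bounding box, and a hierarchy of the same place at decreasing precision. The types, coordinates and the share_at helper are all made up for illustration, not Fire Eagle’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Point:
    lat: float
    lon: float

@dataclass
class BoundingBox:
    south: float
    west: float
    north: float
    east: float

    def contains(self, p: Point) -> bool:
        return (self.south <= p.lat <= self.north
                and self.west <= p.lon <= self.east)

# A location hierarchy: the same place at varying degrees of precision.
exact = Point(51.5007, -0.1246)                       # a point
street = BoundingBox(51.499, -0.127, 51.502, -0.122)  # ~street level
city = BoundingBox(51.28, -0.51, 51.69, 0.33)         # ~Greater London

hierarchy = [exact, street, city]

def share_at(hierarchy, max_level):
    """Return the location at the user's chosen precision level
    (0 = most precise); a broker would hand this to the app."""
    return hierarchy[min(max_level, len(hierarchy) - 1)]
```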

[There was some good stuff on who could/is using it, and other ideas, but my notes got a bit useless around this point.]

In summary: you can share your location online, control your data and privacy, and easily build location services.

Discussion:
Question: what makes it special? Answer: it’s not coupled to anything. It plays to the strengths of the developers who use it.

‘Fire Eagle: twitter + upcoming + pixie dust’.

URLs are bookmarkable, which means they can be easy to use on a phone [hooray]. It doesn’t (currently) store location history; that’s being discussed.

Qu: what’s the business model? Ans: it’s a Brickhouse project (from an incubator/start-up environment).

All methods are http requests at the moment; they might also use XMPP ping.

Qu: opening up the beta? Ans: will give Fire Eagle invitations if you have an application that needs testing.

I had to leave before the end of the questions because the event was running over time and I had to meet people in another pub, so I missed out on the probably really interesting conversations in the pub afterwards.

My notes:
Looking at their hierarchy of ‘how precisely will you let application x locate you’, it strikes me that it’s quite country-dependent, as a postcode identifies a location very precisely within the UK (to within one of a few houses in a row) while in Australia, it just gives you the area of a huge suburb. I’m not sure if it’s less precise in the US, where postcodes might fit better in the hierarchy.

I’ve also blogged some random thoughts on how services like Fire Eagle make location-linked cultural heritage projects more feasible.

‘Building websites with findability in mind’ at WSG London Findability

Last night I went to the WSG London Findability event at Westminster University, part of London Web Week; here’s part two of my notes.

Stuart Colville’s session on ‘Building websites with findability in mind’ was full of useful, practical advice.

Who needs to find your content?

Basic requirements:
Understand potential audience(s)
Content
Semantic markup
Accessibility (for people and user agents)
Search engine friendly

Content [largely about blogs]:
Make it compelling for your audience
There’s less competition in niche subjects
Originality (synthesising content, or representing existing content in new ways is also good)
Stay on topic
Provide free, useful information or tools
Comments and discussion (from readers, and interactions with readers) are good

Tagging:
Author or user-generated, or both
Good for searching
Replaces fixed categories
Enables arbitrary associations
Rich

Markup (how to make content more findable):
Use web standards. They’re not a magic fix but they’ll put you way ahead. The content:code ratio is improved, and errors are reduced.
Use semantic markup. Adds meaning to content.
Try the roundabout SEO test
Make your sites accessible. Accessible content is indexable content.

Markup meta:
Keywords versus descriptions. Tailor descriptions for each page; they can be automatically generated; they can be used as summaries in search results.
WordPress has good plugins – metadescription for auto-generated metadata, there are others for manual metadata.
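The auto-generation idea is simple enough to sketch in a few lines. This is a hypothetical helper, not any particular plugin’s implementation; the 155-character limit is just a common rule of thumb for search-result snippets:

```python
import re

def meta_description(body_text, limit=155):
    """Auto-generate a per-page meta description from page content,
    truncated at a word boundary so it reads cleanly as a summary
    in search results."""
    # collapse whitespace so markup line breaks don't leak through
    text = re.sub(r"\s+", " ", body_text).strip()
    if len(text) <= limit:
        return text
    cut = text[:limit]
    # don't chop mid-word; drop the trailing fragment instead
    cut = cut.rsplit(" ", 1)[0]
    return cut + "…"
```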

Markup titles and headings:
Make them good – they’ll appear as titles in bookmarks, search results, etc.
One H1 per page; the most meaningful title for that page
Separate look from heading structure.

Markup text:
Use semantically correct elements to describe content. Strong, em, etc.

Markup imagery:
Background images are fine if they’re only a design element.
Use image replacement if the images have meaning. There are some accessibility issues.
Use attributes correctly, and make sure they’re relevant.

Markup microformats:
Microformats are a simple layer of structure around content
They’re easy to add to your site
Yahoo! search and technorati index them, Firefox 3 will increase exposure.

Markup Javascript:
Start unobtrusive and enhance according to the capabilities of the user agent.
Don’t be stupid. Use onClick, don’t kill href (e.g. href="#").
Use event delegation – no inline events. It’s search engine accessible, has nice clean markup and you still get all the functionality.
[Don’t break links! I like opening a bunch of search results in new tabs, and I can’t do that on your online catalogue, I’ll shop somewhere I can. Rant over.]

Performance and indexation:
Use ‘last modified’ headers – concentrate search agents on fresh content
Sites with Google Ads are indexed more often.

URLs:
Hackable URLs are good [yay!].
Avoid query strings, they won’t be indexed
Put keywords in your URL path
Use mod_rewrite, etc.
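Putting those three tips together: generate a keyword-rich path segment from the page title, then serve it via mod_rewrite or your framework’s routing instead of a query string. The slugify helper below is a hypothetical sketch, not from the talk:

```python
import re
import unicodedata

def slugify(title):
    """Turn a page title into a keyword-rich, hackable URL segment --
    the kind of path you'd serve instead of something like ?id=123."""
    # fold accented characters to plain ASCII, then lowercase
    text = unicodedata.normalize("NFKD", title)
    text = text.encode("ascii", "ignore").decode("ascii").lower()
    # collapse runs of anything non-alphanumeric into single hyphens
    text = re.sub(r"[^a-z0-9]+", "-", text).strip("-")
    return text
```

So a page titled ‘Fire Eagle & Findability!’ might live at /talks/fire-eagle-findability rather than /page?id=42.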

URI permanence:
“They should be forever”. But you need to think about them so they can be forever even if you change your mind about implementation or content structure.
Use rewrites if you do change them.

De-indexing (if you’ve moved content)
Put up a 404 page with proper http headers. 410 ‘intentionally gone’ is nice.
There’s a tool on Google to quickly de-index content.
Make 404s useful to users – e.g. run an internal search and display likely results from your site based on their search engine keywords [or previous page title].
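One way to sketch that ‘likely results’ idea: score your existing pages by keyword overlap with the missing URL and suggest the best matches. The function names and scoring here are made up for illustration; a real site might use the referrer’s search keywords instead:

```python
def keywords(path):
    """Split a URL path into a set of keywords."""
    return set(path.strip("/").replace("-", "/").split("/"))

def suggest_pages(missing_path, site_paths, limit=3):
    """Rank existing pages by how many keywords they share with the
    requested (missing) URL -- a cheap way to make a 404 page useful."""
    wanted = keywords(missing_path)
    scored = [(len(wanted & keywords(p)), p) for p in site_paths]
    scored = [(s, p) for s, p in scored if s > 0]
    scored.sort(key=lambda sp: -sp[0])
    return [p for _, p in scored[:limit]]
```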

Robots.txt – really good but use carefully. Robots-Nocontent – Yahoo! introduced ‘don’t index’ for e.g. divs but it hasn’t caught on.

Moving content:
Use 301. Redirect users and get content re-indexed by search engines.
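The de-indexing and moving advice boils down to a lookup table: 301 redirects users and gets content re-indexed, 410 tells crawlers it’s gone on purpose, 404 is the fallback. A toy sketch (the paths and mapping are hypothetical):

```python
# Hypothetical mapping from old paths to (status, new location).
MOVED = {
    "/old-exhibition": (301, "/exhibitions/2008-summer"),  # moved: redirect
    "/retired-microsite": (410, None),                     # intentionally gone
}

def respond(path):
    """Return (status, location) for a request path."""
    if path in MOVED:
        return MOVED[path]
    return (404, None)  # never existed (or we don't know about it)
```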

Tools for analysing your findability:
Google webmaster tools, Google analytics, log files. It’s worth doing, check for broken links etc.

Summary:
Think about findability before you write a line of code.
Start with good content, then semantic markup and accessibility.
Use sensible headings, titles, links.

WSG London Findability ‘introduction to findability’

Last night I went to the WSG London Findability event at Westminster University. The event was part of London Web Week. As always, apologies for any errors; corrections and comments are welcome.

First up was Cyril Doussin with an ‘introduction to findability’.

A lot of it is based on research by Peter Morville, particularly Ambient Findability.

So what do people search for?
Knowledge – about oneself; about concepts/meaning; detailed info (product details, specs); entities in society (people, organisations, etc.)
Opinions – to validate a feeling or judgement; establish trust relationships; find complementary judgements.

What is information? From simple to complex – data -> information -> knowledge.

Findability is ‘the quality of being locatable or navigable’.
Item level – to what degree is a particular object easy to discover or locate?
System level – how well does the environment support navigation and retrieval?

Wayfinding requires: knowing where you are; knowing your destination; following the best route; being able to recognise your destination; being able to find your way back.

The next section was about how to make something findable:
The “in your face” discovery principle – expose the item in places known to be frequented by the target audience. He showed an example of a classic irritating Australian TV ad, a Brisbane carpet store in this case. It’s disruptive and annoying, but everyone knows it exists. [Sadly, it made me a little bit homesick for Franco Cozzo. ‘Megalo megalo megalo’ is also a perfect example of targeting a niche audience, in this case the Greek and Italian speakers of Melbourne.]

Hand-guided navigation – sorting/ordering (e.g. sections of a restaurant menu); sign-posting.

Describe and browse (e.g. search engines) – similar to asking for directions or asking random questions; get a list of entry points to pages.

Mixing things up – the Google ‘search within a search’ and Yahoo!’s ‘search assist’ box both help users refine searches.

Recommendations (communication between peers) – the searcher describes intent; casual discussions; advice; past experiences.
The web is a referral system. Links are entry doors to your site. There’s a need for a relevancy system whether search engines (PageRank) or peer-based systems (Digg).

Measuring relevance (effectiveness):
Precision – if it retrieves only relevant documents
Recall – whether it retrieves all relevant documents.

Good tests for the effectiveness of your relevance mechanism:
Precision = number of relevant and retrieved documents divided by the total number retrieved.
Recall = number of relevant and retrieved documents divided by the total number of relevant documents.
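Those two formulas translate directly into code; here’s a small sketch:

```python
def precision_recall(retrieved, relevant):
    """Compute the two effectiveness measures from the talk.

    precision = |relevant AND retrieved| / |retrieved|
    recall    = |relevant AND retrieved| / |relevant|
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 2 of the 4 retrieved documents are relevant; 2 of the 3 relevant
# documents were found.
p, r = precision_recall(retrieved={"a", "b", "c", "d"}, relevant={"a", "b", "e"})
```

It also makes the next point concrete: a sample or existence search cares about the first number, an exhaustive search about the second.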

Relevance – need to identify the type of search:
Sample search – small number of documents are sufficient (e.g. first page of Google results)
Existence search – search for a specific document
Exhaustive search – full set of relevant data is needed.
Sample and existence searches require precision; exhaustive searches require recall.

Content organisation:
Taxonomy – organisation through labelling [but it seems in this context there’s no hierarchy; the taxa are flat tags].
Ontology – taxonomy and inference rules.
Folksonomy – a social dimension.

[In the discussion he mentioned eRDF (embedded RDF) and microformats. Those magic words – subject : predicate : object.]

Content organisation is increasingly important because of the increasing volume of information and sharing of information. It’s also a very good base for search engines.

Measuring findability on the web: count the number of steps to get there. There are many ways to get to data – search engines, peer-based lists and directories.

Recommendations:
Aim to strike a balance between sources, e.g. search engine optimisation and peer-based recommendations.
Know the path(s) your audience(s) will follow (user testing)
Understand the types of search
Make advertising relevant (difficult, as it’s so context-dependent)
Make content rich and relevant
Make your content structured

I’ve run out of lunch break now, but will write up the talks by Stuart Colville and Steve Marshall later.

Google release AJAX loader

From the Google page, AJAX Libraries API:

The AJAX Libraries API is a content distribution network and loading architecture for the most popular open source JavaScript libraries. By using the Google AJAX API Loader’s google.load() method, your application has high speed, globally available access to a growing list of the most popular JavaScript open source libraries including:

Google works directly with the key stakeholders for each library effort and accepts the latest stable versions as they are released. Once we host a release of a given library, we are committed to hosting that release indefinitely.

The AJAX Libraries API takes the pain out of developing mashups in JavaScript while using a collection of libraries. We take the pain out of hosting the libraries, correctly setting cache headers, staying up to date with the most recent bug fixes, etc.

There’s also more information at Speed up access to your favorite frameworks via the AJAX Libraries API.

To play devil’s avocado briefly, the question is – can we trust Google enough to build functionality around them? It might be a moot point if you’re already using their APIs, and you could always use the libraries directly, but it’s worth considering.