Can you capture visitors with a steampunk arm?

Credits: Science Museum

This may be familiar to you if you’ve worked on a museum website: an object captures the imagination of someone who starts to spread the link around, there’s a flurry of tweets and tumblrs and links (that hopefully you’ll notice in time because you’ve previously set up alerts for keywords or URLs on various media), others like it too and it starts to go viral. 50,000 people look at that one page in a day, 20,000 the next, furious discussions break out on social media and other sites… and then they’re gone, on to the next random link on someone else’s site.  It’s hugely exciting, but it can also feel like a missed opportunity to show these visitors other cool things in your collection, to address some of the issues raised, and to give them more information about the object.

There are three key aspects to riding these waves of interest: the ability to spot content that’s suddenly getting a lot of hits; the ability to respond with interesting, relevant content while the link is still hot (i.e. within anything from a couple of hours to a couple of days); and the ability to put that relevant content on the page where fly-by-night visitors will see it.

For many museums, caught between a templated CMS and layers of sign-off for new content, it’s not as easy as it sounds.  When the Science Museum’s ‘steampunk artificial arm’ started circulating on Twitter and then made Boing Boing, I was able to work with curators to get a post about it on the collections blog the next day, but there was then no way of adding that link to the Brought to Life page, which was all most people saw.

In his post on “The Guardian’s Facebook app”, Martin Belam discusses how the app has helped archived content live again:

Someone shares an old article with their friends, some of their friends either already use or install the app, and the viral effect begins to take hold. … We’ve got over 1.3 million articles live on the website, so that is a lot of content to be discovered, and the app means that suddenly any page, languishing unloved in our database, can become a new landing page. When an article becomes popular in the app, we sometimes package it with content. Because we know the attention has come at a specific time from a specific place, we can add related links that are appropriate to the audience rather than to the original content. …when you’ve got the audience there, you need to optimise for them

As a content company with great technical and user experience teams, the Guardian is better placed to put together existing content around a viral article, but still, I’m curious: are any museums currently managing to respond to sudden waves of interest in random objects?  And if so, how?

Notes from ‘Building a bilingual CMS’ at MCG’s Spring Conference

These are my notes from Chris Owen’s presentation, ‘Building a bilingual CMS’ (for the National Museum of Wales) at the MCG Spring meeting. Chris’ slides for ‘Building a bilingual CMS’ are online. There’s some background to my notes about the conference in a previous post. Any of my comments are in [square brackets] below.

Why did they build (not buy) a CMS?
Immediate need for content updating, integration of existing databases.
Their initial needs were simple – small group of content authors, workflow and security weren’t so important.
Aims: simplicity, easy platform for development, extensible, ease of use for content authors, workflow tailored to a bilingual site, English and Welsh kept together so easier to maintain.

It’s used on the main website, the intranet, SCAN (an educational website), Oriel I (more on that in a later talk), gallery touch-screens and the CMS admin.

The website includes usual museum-y stuff like visitor pages, events and exhibitions, corporate and education information, Rhagor (their collections area – more on that later too) and blogs.

How did they build it?
[In rough order:] they built the admin web application; created the CMS with simple data structures; added security and workflow to the admin; added login features to the CMS; integrated the admin site and the CMS; migrated more complex data structures; and added lots of new features.

They developed with future uses in mind but also just got on with it.

Issues around bilingual design:
Do you treat languages equally? Do you use language-selection splash screens or different domain names?
Try to localise URLs (e.g. use aliases for directories and pages so /events/ and /[the Welsh word for events]/ do the same [appropriate] thing and Welsh doesn’t seem like an afterthought). Place the language switch in a consistent location; consider workflow for translation, entering content, etc.
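The aliasing idea above can be sketched as a tiny routing table in which the English and Welsh paths resolve to the same internal page. This is a minimal illustration, not the actual National Museum of Wales code: the path names, page IDs and `resolve` helper are all assumptions (I’ve used the real Welsh words for “events” and “visit” purely as examples).

```python
# Illustrative sketch of localised URL aliasing: each localised path maps
# to the same internal page ID plus a two-character language code, so
# neither language looks like an afterthought in the address bar.

# Alias table: localised path -> (internal page ID, language code)
URL_ALIASES = {
    "/events/": ("events-index", "en"),
    "/digwyddiadau/": ("events-index", "cy"),  # Welsh for "events"
    "/visit/": ("visit", "en"),
    "/ymweld/": ("visit", "cy"),               # Welsh for "visit"
}

def resolve(path):
    """Return (page_id, lang) for a localised path, or None if unknown."""
    return URL_ALIASES.get(path)

print(resolve("/digwyddiadau/"))  # ('events-index', 'cy')
```

The point of the indirection is that both URLs render the same page template; only the language code differs downstream.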

Use two-character language codes (en/cy), organise your naming conventions for files and for database fields so Welsh isn’t extra (e.g. collections.en.html and the equivalent .cy.html); don’t embed localised strings in code. [It’s all really nicely done in XML, as they demonstrated later.]
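The naming-convention and no-strings-in-code advice might look something like the sketch below. The string table, `t` lookup and `localised_filename` helper are my own illustrative assumptions, not the museum’s implementation; only the en/cy codes and the `collections.en.html` pattern come from the talk.

```python
# Sketch: keep localised strings out of code, keyed by two-character
# language codes (en/cy), and build file names so Welsh isn't an add-on.

STRINGS = {
    "en": {"search": "Search", "events": "Events"},
    "cy": {"search": "Chwilio", "events": "Digwyddiadau"},
}

def t(key, lang="en"):
    """Look up a UI string for a language, falling back to English."""
    return STRINGS.get(lang, STRINGS["en"]).get(key, STRINGS["en"][key])

def localised_filename(base, lang, ext="html"):
    """Build names like collections.en.html / collections.cy.html."""
    return f"{base}.{lang}.{ext}"

print(t("search", "cy"))                      # Chwilio
print(localised_filename("collections", "cy"))  # collections.cy.html
```

Because both languages live in one symmetric structure, adding or fixing a translation never means hunting through templates or code.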

Coding tip: pull out the right language at the start, in the SQL query; this minimises bugs and the need to refer to the language again later.
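A minimal sketch of that tip, using sqlite3 purely for illustration: the localised column is selected under a generic alias at the top of the query, so everything downstream is language-agnostic. The table and column names are assumptions, not the actual schema.

```python
# Select the localised column once, up front, under a generic alias
# ("title"), so the rest of the code never checks the language again.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id TEXT, title_en TEXT, title_cy TEXT)")
conn.execute(
    "INSERT INTO pages VALUES ('events-index', 'Events', 'Digwyddiadau')"
)

def page_title(conn, page_id, lang):
    # Whitelist the language code before interpolating the column name;
    # column names can't be bound as SQL parameters.
    col = {"en": "title_en", "cy": "title_cy"}[lang]
    row = conn.execute(
        f"SELECT {col} AS title FROM pages WHERE id = ?", (page_id,)
    ).fetchone()
    return row[0] if row else None

print(page_title(conn, "events-index", "cy"))  # Digwyddiadau
```

Resolving the language in the query (rather than fetching both columns and branching in application code) means there is exactly one place where the language choice can go wrong.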

It’s built on XML, as they have lots of databases and didn’t want to centralise or merge them; this means they can just add new ones as needed.

Slide 16 shows the features; it compares pretty well to bigger open-source projects out there. It has friendly URLs, less chance of broken links, and built-in AJAX features, and they’ve integrated user authentication and groups so there’s one login for the whole website. The site has user comments (with approval) and uses reCAPTCHA. There’s also a slide on the admin features later – all very impressive.

They’ve used OO design. Slide 18 shows the architecture.

Content blocks are PHP objects – the pieces that go together to make up the page – and they’re localised. Because links are by ID, they don’t break when pages are moved. They’re also using MooTools.
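The ID-based linking point is worth unpacking: internal links store a stable page ID rather than a path, and the current URL is looked up at render time. Here is a hypothetical sketch of why that survives a page move; the lookup table, `render_link` helper and paths are all my own illustrative names.

```python
# Links store a stable page ID; the URL is resolved at render time,
# so moving a page updates every link to it automatically.

PATHS = {"steam-arm": "/collections/steampunk-arm/"}

def render_link(page_id, text):
    """Render an internal link by looking up the page's current URL."""
    return f'<a href="{PATHS[page_id]}">{text}</a>'

before = render_link("steam-arm", "Steampunk arm")
PATHS["steam-arm"] = "/objects/steampunk-arm/"   # the page is moved
after = render_link("steam-arm", "Steampunk arm")  # link follows the move

print(before)
print(after)
```

With path-based links, the move would have left every old link pointing at a 404; with ID-based links, only the lookup table changes.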

The future: they want more user-centric features; to work with the [Welsh project] People’s Collection and other collaborations; APIs with other sites for web 2.0-ish things; and more use of metadata features. They’ll make it open source if people think it would be useful.

They would open it up via something like SourceForge, but would take the lead.

[Overall it’s a really impressive bit of work, sensibly designed and well implemented. Between that and the Indianapolis Museum of Art I’ve seen some really nice IT web tools in museums lately. Well done them!]