Linking museums: machine-readable data in cultural heritage – meetup in London July 7

Somehow I've ended up organising a (very informal) event about 'Linking museums: machine-readable data in cultural heritage' on Wednesday, July 7, at a pub near Liverpool St Station. I have no real idea what to expect, but I'd love some feisty sceptics to show up and challenge people to make all these geeky acronyms work in the real museum world.

As I posted to the MCG list: "A very informal meetup to discuss 'Linking museums: machine-readable data in cultural heritage' is happening next Wednesday. I'm hoping for a good mix of people with different levels of experience and different perspectives on the issue of publishing data that can be re-used outside the institution that created it. … please do pass this on to others who may be interested. If you would like to come but can't get down to that London, please feel free to send me your questions and comments (or beer money)."

The basic details are: July 7, 2010, Shooting Star pub, London. 7:30 – 10pm-ish. More information is available at http://museum-api.pbworks.com/July-2010-meetup and you can let me know you're coming or register your interest.

In more detail…

Why?
I'm trying to cut through the chicken-and-egg problem: as a museum technologist, I can work towards getting machine-readable data available, but I'm not sure which formats and what data would be most useful to the developers who might use it. Without a critical mass of take-up for any one format, the benefits of a given data source are limited for developers; but museums seem to want a sense of where the critical mass is going to be so they can build for that. How do we cut through this and come up with a sensible roadmap?

Who?
You! If you're interested in using museum data in mashups but find it difficult to get started, or find the data available isn't easily usable; if you have data you want to publish; if you work in a museum and have a data publication problem you'd like help in solving; if you are a cheerleader for your favourite acronym…

Put another way, this event is for you if you're interested in publishing and sharing data about museums and collections through technologies such as linked data and microformats.

It'll be pretty informal! I'm not sure how much we can get done but it'd be nice to put faces to names, and maybe start some discussions around the various problems that could be solved and tools that could be created with machine-readable data in cultural heritage.

Google updates search, playing catch-up?

A quick post in case you've missed it elsewhere. Whether in response to the ridiculously-titled 'Wolfram Alpha' or to Yahoo!'s 'open strategy' (YOS) and its work on enhancing search engine results pages (SERPs) with structured data, Google have announced a "new set of features that we call Search Options, which are a collection of tools that let you slice and dice your results and generate different views to find what you need faster and easier", as well as "rich snippets" that "show more useful information from web pages than the preview text". Search Engine Land have compared Google's and Yahoo!'s offerings.

Update: some of the criticism rumbling on twitter yesterday has been neatly summarised by Ian Davis in 'Google's RDFa a Damp Squib':

However, a closer look reveals that Google have basically missed the point of RDFa. The RDFa support is limited to the properties and classes defined on a hastily thrown together site called data-vocabulary.org. There you will find classes for Person and Organization and properties for names and addresses, completely ignoring the millions of pieces of data using well established terms from FOAF and the like. That means everyone has to rewrite all their data to use Google's schema if they want to be featured on Google's search engine. It's like saying you have to write your pages using Google's own version of html where all the tags have slightly different spellings to be listed in their search engine!

The result is a hobbled implementation of RDFa. They've taken the worst part – the syntax – and thrown away the best – the decentralized vocabularies of terms. It's like using microformats without the one thing they do well: the simplicity.

Further, in the comments:

the point of decentralization is not to encourage fragmentation and isolation, but to allow people to collaborate without needing permission from a middleman. Google's approach imposes a centralized authority.

There's also a (slightly disingenuous, IMO) response from Google:

For Rich Snippets, Google search need to understand what the data means in order to render it appropriately. We will start incorporating existing vocabularies like FOAF, but there's no way for us to have a decent user experience for brand-new vocabularies that someone defines. We also need a single place where a webmaster can come and find all the terms that Google understands. Which is why we have data-vocabulary.org.

Isn't the point of Google that it can figure stuff out without needing to be told?

Rasmus Lerdorf on Hacking with PHP – tech talk at Open Hack London

Same deal as my first post from today's Open Hack London event – these are (very) rough notes, please let me know of clarifications, questions or comments.

Hacking with PHP, Rasmus Lerdorf

Goal of talk: copy-and-pastable snippets that just work, so you don't have to fight to get things working [there's not enough of this to help beginners get over that initial hump]. The slides are available at http://talks.php.net/show/openhack and these notes are probably best read as commentary alongside the code examples.

[Since it's a hack day, some] Hack ideas: fix something you use every day; build your own targeted search engine; improve the look of search results; play with semantic web tools to make the web more semantic; tell the world what kind of data you have – if a resume, use hResume or other appropriate microformats/markup; go local – tools for helping your local community; hack for good – make the world a better place.

SearchMonkey and BOSS are blending together a little bit.

What we need to learn
With PHP – enough to handle simple requests; talk to a backend datastore; how to parse XML with PHP; how to generate JSON; some basic JavaScript; a JavaScript utility library like YUI or jQuery.

Parsing XML: simplexml_load_file() – it can load from a URL or a local file.

Attributes on a node show up as an array. For namespaced elements, call children() on the node, naming the namespace as the argument.
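A minimal sketch of that pattern (the feed URL and element names are placeholders, not from the talk):

    <?php
    // Load XML from a URL or a local file.
    $xml = simplexml_load_file('http://example.com/feed.xml');

    foreach ($xml->channel->item as $item) {
        echo $item->title, "\n";            // child elements are object properties
        echo $item->enclosure['url'], "\n"; // attributes show up like array keys
        // Namespaced elements: call children() with the namespace as the argument.
        $dc = $item->children('http://purl.org/dc/elements/1.1/');
        echo $dc->creator, "\n";
    }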

Now we know how to parse XML, we can get at lots of other stuff.
Yahoo!'s context extraction service doesn't get enough attention. POST it all your text and it gives you back four or five key terms – you can then do an image search off them, or match ads to webpages.

You can use GET or POST (via curl) – the text is usually too much for GET.
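The POST-via-curl shape in PHP, for the record (the endpoint and parameter name are placeholders, not Yahoo's actual service):

    <?php
    $ch = curl_init('http://api.example.com/extract'); // placeholder endpoint
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, array('text' => $longText)); // the document to analyse
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // hand back the response rather than printing it
    $response = curl_exec($ch);
    curl_close($ch);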

PHP to JavaScript on initial page load: json_encode() → JavaScript.
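That is, something like this sketch (the data values are mine):

    <?php $place = array('name' => 'Shooting Star', 'lat' => 51.518, 'lon' => -0.081); ?>
    <script type="text/javascript">
    // The PHP array arrives as a JavaScript object literal when the page is built.
    var place = <?php echo json_encode($place); ?>;
    alert(place.name);
    </script>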

Javascript to PHP (and back)
If you can figure out these six lines of code, you can write anything in the world – this is how every modern web application works: server-side PHP, client-side JavaScript.

'There's nothing to building web applications, you just have to break everything down into small enough chunks that it all becomes trivial'.

AJAX in 30 seconds.
[Inline comments in the code would help people reading it without hearing the talk at the same time.]
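I didn't manage to copy the six lines down, so here's the general shape as a sketch of my own – a PHP script that emits JSON, and client-side JavaScript that fetches it (file and element names are mine):

    <?php // time.php – the server side: emit JSON
    echo json_encode(array('time' => date('H:i:s')));

    <script type="text/javascript">
    // The client side: request the JSON and use it.
    // (Older IE needs new ActiveXObject('Microsoft.XMLHTTP') instead.)
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var data = eval('(' + xhr.responseText + ')'); // or JSON.parse() where available
            document.getElementById('clock').innerHTML = data.time;
        }
    };
    xhr.open('GET', 'time.php', true);
    xhr.send(null);
    </script>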

JavaScript libraries to the rescue
load maps API, create container (div) for the map, then fill it.

Form – on submit call return updateMap(); with new location.

YGeoRSS – if you have a GeoRSS file, you can point the map at it.

GeoPlanet – assigns a WOE ID to a place. Locations are more than just a lat/long – they carry way more information. It basically gives you a foreign key: YQL is starting to make the web a giant database, and you can make joins across APIs – the WOEID works as the foreign key.

YQL – 'combines all the APIs on the web into a single API'.
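For instance (my sketch – the geo.places table and public endpoint are as I remember them, so treat the details as approximate):

    <?php
    $q   = 'select * from geo.places where text = "Liverpool Street Station"';
    $url = 'http://query.yahooapis.com/v1/public/yql?format=json&q=' . urlencode($q);
    $result = json_decode(file_get_contents($url));
    // Each place in the result carries a woeid you can join against other tables and APIs.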

Add a cache – it's nice to YQL, and also good for demos etc. Copy and paste the cache function from his slides – it does a local cache keyed on the URL, hashed with md5, using PHP streams [#defn]. Adding a cache speeds up development when hacking (especially as you won't be waiting for the wifi). [This is a pretty damn good tip because it's really useful and not immediately obvious.]
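I didn't get his exact function down, but the idea runs roughly like this:

    <?php
    // Cache a URL's contents in a local file keyed by the md5 of the URL;
    // re-fetch (via PHP streams) only when the copy is older than $ttl seconds.
    function cached_get($url, $ttl = 300) {
        $file = '/tmp/cache_' . md5($url);
        if (!file_exists($file) || filemtime($file) < time() - $ttl) {
            file_put_contents($file, file_get_contents($url));
        }
        return file_get_contents($file);
    }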

XPath on URL using PHP's OAuth extension

SearchMonkey – social engineering people into caring about semantic data on the web. For non-geeks, search plug-in mechanism that will spruce up search results page. Encourages people to add semantic data so their search result is as sexy as their competitors – so goal is that people will start adding semantic data.

'If you're doing web stuff, and don't know about microformats, and your resume doesn't have hResume, you're not getting a job with Yahoo.'

Question: how are microformats different to RDFa?
Answer: there are different types of microformats – some very specific ones, e.g. hResume, hCal. RDFa is about adding arbitrary tags to a page, even if there's no specific way to describe your data. But there's a standard set of mark-ups for a resume, so you can use that; if your data doesn't match anything at microformats.org then use RDFa or eRDF (?).

RDFa, SearchMonkey – tech talks at Open Hack London

While today's Open Hack London event is mostly about the 24-hour hackathon, I signed up just for the Tech Talks because I couldn't afford to miss a whole weekend's study in the fortnight before my exams (stupid exams). I went to the sessions on 'Guardian Data Store and APIs', 'RDFa SearchMonkey', Arduino, 'Hacking with PHP', 'BBC Backstage', Dopplr's 'mashups made of messages' and lightning talks including 'SPARQL and semantic web' stuff you can do now.

I'm putting my rough and ready notes online so that those who couldn't make it can still get some of the benefits. Apologies for any mishearings or mistakes in transcription – leave me a comment with any questions or clarifications.

One of the reasons I was going was to push my thinking about the best ways to provide API-like access to museum information and collections, so my notes will reflect that but I try to generalise where I can. And if you have thoughts on what you'd like cultural heritage institutions to do for developers, let us know! (For background, here's a lightning talk I did at another hack event on happy museums + happy developers = happy punters).

RDFa – now everyone can have an API.
Mark Birkbeck

Going to cover some basic mark-up, and talk about why RDFa is a good thing. [The slides would be useful for the syntax examples, I'll update if they go online.]

RDFa is a new syntax from W3C – a way of embedding metadata (RDF) in HTML documents using attributes.

e.g. <span property="dc:title"> – value of property is the text inside the span.

Because it's inline you don't need to point to another document to provide source of metadata and presentation HTML.

One big advance is that you can provide metadata for other items, e.g. images, so you can attach licence info to the image rather than the page it's in – e.g. <img src="photo.jpg" rel="license" resource="[creative commons licence URL]">

Putting RDFa into web pages means you've now got a feed (the web page is the RSS feed), and a simple static web page can become an API that can be consumed in the same way as stuff from a big expensive system. 'Growing adoption'.
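A hedged illustration of the idea – an ordinary HTML fragment whose RDFa attributes a consumer could read back as data (the vocabulary choice and values are mine):

    <div xmlns:dc="http://purl.org/dc/elements/1.1/" about="/collection/object/1234">
        <h1 property="dc:title">Portrait of a Lady</h1>
        Painted by <span property="dc:creator">an unknown artist</span>
        in <span property="dc:date">1745</span>.
    </div>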

Government department Central Office of Information [?] is quite big on RDFa, have a number of projects with it. [I'd come across the UK Civil Service Job Service API while looking for examples for work presentations on APIs.]

RDFa allows for flexible publishing options. If you're already publishing HTML, you can add RDFa mark-up then get flexible publishing models – different departments can keep publishing data in their own way, a central website can go and request from each of them and create its own database of e.g. jobs. Decentralised way of approaching data distribution.

Can be consumed by: smarter browsers; client-side AJAX, other servers such as SearchMonkey.

He's interested where browsers can do something with it – either enhanced browsers that could e.g. store contact info in a page into your address book; or develop JavaScript libraries that can parse page and do something with it. [screen shot of jobs data in search monkey with enhanced search results]

RDFa might be going into Drupal core.

Example of putting an ISBN in RDFa in a page: a parser can then go through the page, pull out the triples [some explanation of them as a mini database?], and pull back more info about the book from other APIs, e.g. Amazon – full title, thumbnail of the cover – e.g. in Pipes.
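Something along these lines, I'd guess (the markup and ISBN are illustrative, not the speaker's):

    <p about="urn:isbn:0000000000"> <!-- placeholder ISBN -->
        I've been reading <span property="dc:title">Some Book</span>
        (ISBN <span property="dc:identifier">0000000000</span>).
    </p>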

Example of FOAF – twitter account marked up in page, can pull in tweets. Could presumably pull in newer services as more things were added, without having to re-mark-up all the pages.

Example of chemist writing a blog who mentions a chemical compound in blog post, a processor can go off and retrieve more info – e.g. add icon for mouseover info – image of molecule, or link to more info.

Next plan is to link with BOSS. Can get back RDFa from search results – augment search results with RDFa from the original page.

Search Monkey (what it is and what you can do with it)
Neil Crosby (European frontend architect for search at Yahoo).

SearchMonkey is (one of) Yahoo's open search platforms (along with BOSS). Uses structured data to enhance search results. You get to change stuff on Yahoo search results page.

SearchMonkey lets you: style results for certain URL patterns; brand those results; make the results more useful for users.

[examples of sites that have done it to see how their results look in Yahoo? I thought he mentioned IMDb but it doesn't look any different – a film search that returns a wikipedia result, OTOH, does.]

Make life better for users – it's not just what Yahoo thinks results should be; you can say 'actually, this is the important info on the page'.

Three ways to change the SERP (search engine results page): mark up data in a way that Yahoo knows about – 'just structure your data nicely', e.g. video mark-up; enhance a result directly; or make an infobar.

Infobar – doesn't change the result you see immediately on the page, but it opens on the page. Example of an auto-enhanced result: Playcrafter. There's a link to the developer start page – how to mark it up, with examples, and what it all means.

User-enhanced result – Facebook profile pages are marked up with microformats – you can add as friend, poke, send message, view friends, etc. from the search results page. You can change the title and abstract, add an image, favicon, quicklinks, and key/value pairs. Create one at [link I couldn't see, but it's on the slides] – it's displayed on screen and you fill it out on a template.

Infobar – dropdown in grey bar under results. Can do a lot more, as it's hidden in the infobar and doesn't have to worry people.

Data from: microformats, RDF, XSLT, Yahoo's index, and soon, top tags from delicious.

If there's no machine data, you can write an XSLT – it 'isn't that hard', and there's lots of documentation on the web.

Examples of things that have been made: a tool that exposes all the metadata known for a page [URL on slide] – you can install it on the Yahoo search page; Map Monkey, which uses location metadata to put any page on the web with locations in it onto a map; Qype results for anything you search for.

There's a mailing list (people willing and wanting to answer questions) and a tutorial.

Questions

Question: do you need to use a special doctype [for RDFa]?
Answer: it was added to the spec that 'you should use this doctype', but the spec allows for RDFa to be used in situations where you can't change the doctype, e.g. RDFa embedded in a Blogger blogpost. Most parsers walk the DOM rather than relying on the doctype.

Jim O'D – excited that SearchMonkey supports XSLT – if you have a website with correctly marked-up tables, could you expose those as key/value pairs?
Answer: yes. XSLT is a fantastic tool for when you don't have data marked up – you can still get to it.

Frankie – question I couldn't hear. About info out to users?
Answer: if you've built a monkey, it's up to you to tell people about it for the moment. Some monkeys are auto-on, e.g. Facebook, Wikipedia… possibly in future, if you developed a monkey for a site you own, you might be able to turn it auto-on in the results for all users – they're not sure yet if they'll do it or not.
Frankie: is the plan that people get the monkeys they want, or go through a gallery?
Answer: it would be fantastic if they could work out what people are using them for and suggest ones appropriate to people doing particular kinds of searches, rather than having to go to a gallery.

The BBC, accessibility, the hCalendar microformat and RDFa

The BBC have announced (in 'Removing Microformats from bbc.co.uk/programmes') that they'll stop using the hCalendar microformat because of concerns about accessibility, specifically the use of the HTML abbreviation element (the abbr tag):

Our concerns were:

  • the effect on blind users using screen readers with abbreviation expansion turned on where abbreviations designed for machines would be read out
  • the effect on partially sighted users using screen readers where tool tips of abbreviations designed for machines would be read out
  • the effect of incomprehensible tooltips on users with cognitive disabilities
  • the potential fencing off of abbreviations to domains that need them

Until these issues are resolved the BBC semantic markup standards have been updated to prevent the use of non-human-readable text in abbreviations.
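For context, the markup pattern at issue is hCalendar's 'datetime design pattern', which hides an ISO 8601 machine date inside an abbr title – exactly the kind of non-human-readable text a screen reader with abbreviation expansion reads aloud. A typical (illustrative) example:

    <abbr class="dtstart" title="2008-06-27T19:30:00+01:00">7.30pm on 27 June</abbr>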

They're looking at using RDFa, which they describe as 'a slightly bigger S semantic web technology similar to microformats but without some of the more unexpected side-effects'.

Their support for RDFa is timely in light of Lee Iverson's presentation at the UK Museums on the Web conference (my notes). It's also an interesting study of what can happen when geek enthusiasm meets existing real world users.

More generally, does the fact that an organisation as big as the BBC hasn't yet produced an API mean that creating an API is not a simple task, or that the organisational issues are bigger than the technical issues?

Yahoo! SearchMonkey, the semantic web – an example from last.fm

I had meant to blog about SearchMonkey ages ago, but last.fm's post 'Searching with my co-monkey' about a live example they've created on the SearchMonkey platform has given me the kick I needed. They say:

The first version of our application deals with artist, album and track pages giving you a useful extract of the biography, links to listen to the artist if we have them available, tags, similar artists and the best picture we can muster for the page in question.

Some background on SearchMonkey from ReadWriteWeb:

At the same time, it was clear that enhancing search results and cross linking them to other pieces of information on the web is compelling and potentially disruptive. Yahoo! realized that in order to make this work, they need to incentivize and enable publishers to control search result presentation.

SearchMonkey is a system that motivates publishers to use semantic annotations, and is based on existing semantic standards and industry standard vocabularies. It provides tools for developers to create compelling applications that enhance search results. The main focus of these applications is on the end user experience – enhanced results contain what Yahoo! calls an "infobar" – a set of overlays to present additional information.

SearchMonkey's aim is to make information presentation more intelligent when it comes to search results by enabling the people who know each result best – the publishers – to define what should be presented and how.

(From Making the Web Searchable: The Story of SearchMonkey)

And from Yahoo!'s search blog:

This new developer platform, which we're calling SearchMonkey, uses data web standards and structured data to enhance the functionality, appearance and usefulness of search results. Specifically, with SearchMonkey:

  • Site owners can build enhanced search results that will provide searchers with a more useful experience by including links, images and name-value pairs in the search results for their pages (likely resulting in an increase in traffic quantity and quality)
  • Developers can build SearchMonkey apps that enhance search results, access Yahoo! Search's user base and help shape the next generation of search
  • Users can customize their search experience with apps built by or for their favorite sites

This could be an interesting new development. The questions for us: how well does the data we currently output play with it? Could we easily adapt our pages so they're compatible with SearchMonkey, and should we invest the time it might take? Would a simple increase in the visibility and usefulness of search results be enough? Could there be a greater benefit in working towards federated searches across the cultural heritage sector, or would this require a coordinated effort and agreement on data standards and structure?

Update: the Yahoo! Search Blog post 'The Yahoo! Search Gallery is Open for Business' has a few more examples.

'Building websites with findability in mind' at WSG London Findability

Last night I went to the WSG London Findability event at Westminster University, part of London Web Week; here's part two of my notes.

Stuart Colville's session on 'Building websites with findability in mind' was full of useful, practical advice.

Who needs to find your content?

Basic requirements:
Understand potential audience(s)
Content
Semantic markup
Accessibility (for people and user agents)
Search engine friendly

Content [largely about blogs]:
Make it compelling for your audience
There's less competition in niche subjects
Originality (synthesising content, or representing existing content in new ways is also good)
Stay on topic
Provide free, useful information or tools
Comments and discussion (from readers, and interactions with readers) are good

Tagging:
Author or user-generated, or both
Good for searching
Replaces fixed categories
Enables arbitrary associations
Rich

Markup (how to make content more findable):
Use web standards. They're not a magic fix but they'll put you way ahead. The content:code ratio is improved, and errors are reduced.
Use semantic markup. Adds meaning to content.
Try the roundabout SEO test
Make your sites accessible. Accessible content is indexable content.

Markup meta:
Keywords versus descriptions. Tailor descriptions for each page; they can be automatically generated; they can be used as summaries in search results.
WordPress has good plugins – metadescription for auto-generated metadata, there are others for manual metadata.
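For reference, the description itself is just a meta tag in the page head (the content here is illustrative):

    <meta name="description" content="Notes on building websites with findability in mind, from WSG London Findability." />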

Markup titles and headings:
Make them good – they'll appear as bookmark etc titles.
One H1 per page; the most meaningful title for that page
Separate look from heading structure.

Markup text:
Use semantically correct elements to describe content. Strong, em, etc.

Markup imagery:
Background images are fine if they're only a design element.
Use image replacement if the images have meaning. There are some accessibility issues.
Use attributes correctly, and make sure they're relevant.

Markup microformats:
Microformats are a simple layer of structure around content
They're easy to add to your site
Yahoo! search and Technorati index them; Firefox 3 will increase exposure.

Markup Javascript:
Start unobtrusive and enhance according to the capabilities of the user agent.
Don't be stupid: use onclick, but don't kill the href (e.g. by setting href="#").
Use event delegation – no inline events. It's search-engine accessible, the markup stays clean, and you still get all the functionality (see the sketch after this list).
[Don't break links! I like opening a bunch of search results in new tabs, and I can't do that on your online catalogue, I'll shop somewhere I can. Rant over.]
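A minimal sketch of those two points together – real hrefs that still work without JavaScript, and one delegated listener on the parent instead of inline events (the IDs and URLs are mine):

    <ul id="results">
        <li><a href="/search?q=findability">findability</a></li>
        <li><a href="/search?q=microformats">microformats</a></li>
    </ul>
    <script type="text/javascript">
    // One handler on the list catches clicks on every link, now and in future.
    document.getElementById('results').onclick = function (e) {
        e = e || window.event;
        var target = e.target || e.srcElement;
        if (target.nodeName == 'A') {
            // enhance here; only cancel the default if you've fully handled the click
        }
    };
    </script>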

Performance and indexation:
Use 'last modified' headers to concentrate search agents on fresh content (see the sketch after this list).
Sites with Google Ads are indexed more often.
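In PHP, that can look like this sketch ($lastChanged stands in for wherever your content's timestamp really lives):

    <?php
    $lastChanged = filemtime('content.html'); // hypothetical timestamp source
    header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastChanged) . ' GMT');
    // If the agent's copy is current, say so and send nothing else.
    if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
        strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastChanged) {
        header('HTTP/1.1 304 Not Modified');
        exit;
    }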

URLs:
Hackable URLs are good [yay!].
Avoid query strings, they won't be indexed
Put keywords in your URL path
Use mod_rewrite, etc.
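e.g. a minimal .htaccess sketch (the path and script names are mine) that keeps keywords in the URL while a query string does the work behind the scenes:

    RewriteEngine On
    # Users and search engines see /products/blue-widget;
    # the server actually runs product.php?slug=blue-widget.
    RewriteRule ^products/([a-z0-9-]+)$ product.php?slug=$1 [L]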

URI permanence:
"They should be forever". But you need to think about them so they can be forever even if you change your mind about implementation or content structure.
Use rewrites if you do change them.

De-indexing (if you've moved content)
Put up a 404 page with proper HTTP headers. 410 'intentionally gone' is nice.
There's a tool on Google to quickly de-index content.
Make 404s useful to users – e.g. run an internal search and display likely results from your site based on their search engine keywords [or previous page title].

Robots.txt – really good, but use it carefully. Robots-Nocontent – Yahoo! introduced a 'don't index this bit' marker for e.g. divs, but it hasn't caught on.

Moving content:
Use 301. Redirect users and get content re-indexed by search engines.
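In PHP a 301 is just a pair of headers sent before any output (the destination is a placeholder); the same approach with '410 Gone' covers the 'intentionally gone' case above:

    <?php
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://example.com/new-home-for-this-content');
    exit;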

Tools for analysing your findability:
Google webmaster tools, Google analytics, log files. It's worth doing, check for broken links etc.

Summary:
Think about findability before you write a line of code.
Start with good content, then semantic markup and accessibility.
Use sensible headings, titles, links.

WSG London Findability 'introduction to findability'

Last night I went to the WSG London Findability event at Westminster University. The event was part of London Web Week. As always, apologies for any errors; corrections and comments are welcome.

First up was Cyril Doussin with an 'introduction to findability'.

A lot of it is based on research by Peter Morville, particularly Ambient Findability.

So what do people search for?
Knowledge – about oneself; about concepts/meaning; detailed info (product details, specs); entities in society (people, organisations, etc.)
Opinions – to validate a feeling or judgement; establish trust relationships; find complementary judgements.

What is information? From simple to complex – data -> information -> knowledge.

Findability is 'the quality of being locatable or navigable'.
Item level – to what degree is a particular object easy to discover or locate?
System level – how well does the environment support navigation and retrieval?

Wayfinding requires: knowing where you are; knowing your destination; following the best route; being able to recognise your destination; being able to find your way back.

The next section was about how to make something findable:
The "in your face" discovery principle – expose the item in places known to be frequented by the target audience. He showed an example of a classic irritating Australian TV ad, a Brisbane carpet store in this case. It's disruptive and annoying, but everyone knows it exists. [Sadly, it made me a little bit homesick for Franco Cozzo. 'Megalo megalo megalo' is also a perfect example of targeting a niche audience, in this case the Greek and Italian speakers of Melbourne.]

Hand-guided navigation – sorting/ordering (e.g. sections of a restaurant menu); sign-posting.

Describe and browse (e.g. search engines) – similar to asking for directions or asking random questions; get a list of entry points to pages.

Mixing things up – the Google 'search within a search' and Yahoo!'s 'search assist' box both help users refine searches.

Recommendations (communication between peers) – the searcher describes intent; casual discussions; advice; past experiences.
The web is a referral system. Links are entry doors to your site. There's a need for a relevancy system whether search engines (PageRank) or peer-based systems (Digg).

Measuring relevance (effectiveness):
Precision – if it retrieves only relevant documents
Recall – whether it retrieves all relevant documents.

Good tests for the effectiveness of your relevance mechanism:
Precision = number of relevant and retrieved documents divided by the total number retrieved.
Recall = number of relevant and retrieved documents divided by the total number of relevant documents.
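For example, if a query returns ten documents of which six are relevant, and the collection holds twenty relevant documents in total, precision is 6/10 = 0.6 but recall is only 6/20 = 0.3.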

Relevance – need to identify the type of search:
Sample search – small number of documents are sufficient (e.g. first page of Google results)
Existence search – search for a specific document
Exhaustive search – full set of relevant data is needed.
Sample and existence searches require precision; exhaustive searches require recall.

Content organisation:
Taxonomy – organisation through labelling [but it seems in this context there's no hierarchy; the taxa are flat, like tags].
Ontology – taxonomy and inference rules.
Folksonomy – a social dimension.

[In the discussion he mentioned eRDF (embedded RDF) and microformats. Those magic words – subject : predicate : object.]

Content organisation is increasingly important because of the increasing volume of information and sharing of information. It's also a very good base for search engines.

Measuring findability on the web: count the number of steps to get there. There are many ways to get to data – search engines, peer-based lists and directories.

Recommendations:
Aim to strike a balance between sources e.g. search engine optimisation and peer-based.
Know the path(s) your audience(s) will follow (user testing)
Understand the types of search
Make advertising relevant (difficult, as it's so context-dependent)
Make content rich and relevant
Make your content structured

I've run out of lunch break now, but will write up the talks by Stuart Colville and Steve Marshall later.