Let’s push things forward – V&A and British Library beta collections search

The V&A and the British Library have both recently released beta sites for their collections searches.  I’d mentioned the V&A’s beta collections search in passing elsewhere, but basically it’s great to see such a nicely designed interface – it’s already a delight to use and has a simplicity that usually only comes from lots of hard work – and I love that the team were able to publish it as a beta.  Congratulations to all involved!

(I’m thinking about faceted browsing for the Science Museum collections, and it’s interesting to see which fields the V&A have included in the ‘Explore related objects’ panel (example).  I’d be interested to see any usability research on whether users prefer ‘inline’ links to explore related objects (e.g. in the ‘tombstone data’ bit to the right of the image) or for the links to appear in a distinct area, as on this site. )

I’m not sure how long it’s been live, but the British Library beta catalogue search features a useful ‘Refine My Results’ panel on the right-hand side of the search results page.  

There’s also a ‘workspace’, where items and queries can be saved and managed.  I think there’s a unique purpose for users of the BL search that most sites with ‘save your items’ functions don’t have – you can request items directly from your workspace in advance for delivery when next in the library.  My friendly local British Library regular says the ability to save searches between sessions is immensely useful.  You can also export to delicious, Connotea, RefWorks or EndNote, so your data is transportable – though unfortunately, when I tested it, my notes on an item weren’t exported with it.  I don’t have a BL login so I haven’t been able to play with their tagging system.

They’ve included a link to a survey, which is a useful way to get feedback from their users.

Both beta sites are already useful, and I look forward to seeing how they develop.

RDFa, SearchMonkey – tech talks at Open Hack London

While today’s Open Hack London event is mostly about the 24-hour hackathon, I signed up just for the Tech Talks because I couldn’t afford to miss a whole weekend’s study in the fortnight before my exams (stupid exams). I went to the sessions on ‘Guardian Data Store and APIs’, ‘RDFa SearchMonkey’, Arduino, ‘Hacking with PHP’, ‘BBC Backstage’, Dopplr’s ‘mashups made of messages’ and lightning talks including ‘SPARQL and semantic web’ stuff you can do now.

I’m putting my rough and ready notes online so that those who couldn’t make it can still get some of the benefits. Apologies for any mishearings or mistakes in transcription – leave me a comment with any questions or clarifications.

One of the reasons I was going was to push my thinking about the best ways to provide API-like access to museum information and collections, so my notes will reflect that but I try to generalise where I can. And if you have thoughts on what you’d like cultural heritage institutions to do for developers, let us know! (For background, here’s a lightning talk I did at another hack event on happy museums + happy developers = happy punters).

RDFa – now everyone can have an API.
Mark Birkbeck

Going to cover some basic mark-up, and talk about why RDFa is a good thing. [The slides would be useful for the syntax examples, I’ll update if they go online.]

RDFa is a new syntax from W3C – a way of embedding metadata (RDF) in HTML documents using attributes.

e.g. <span property="dc:title"> – the value of the property is the text inside the span.

Because it’s inline, you don’t need to point to a separate document as the source of the metadata – it lives alongside the presentation HTML.

One big advance is that you can provide metadata for other items, e.g. images – so you can attach licence info to the image rather than the page it’s in, e.g. <img src="" rel="licence" resource="[creative commons licence]">

Putting RDFa into web pages means you’ve now got a feed (the web page is the RSS feed), and a simple static web page can become an API that can be consumed in the same way as stuff from a big expensive system. ‘Growing adoption’.
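To make the ‘page as API’ idea concrete, here’s a minimal sketch (the page content and the consuming code are invented for illustration, not anything from the talk) of pulling RDFa property/value pairs back out of a static page with nothing more than an HTML parser:

```python
from html.parser import HTMLParser

class RDFaPropertyExtractor(HTMLParser):
    """Collect (property, text) pairs from RDFa 'property' attributes."""
    def __init__(self):
        super().__init__()
        self._current = None   # property name we're currently inside, if any
        self.triples = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if 'property' in attrs:
            self._current = attrs['property']

    def handle_data(self, data):
        if self._current:
            self.triples.append((self._current, data.strip()))
            self._current = None

# A static page with inline RDFa (invented example values)
page = '<p>Title: <span property="dc:title">Open Hack London</span></p>'
extractor = RDFaPropertyExtractor()
extractor.feed(page)
print(extractor.triples)  # [('dc:title', 'Open Hack London')]
```

The point is that the consumer needs nothing from the publisher beyond the page itself – no separate feed or API endpoint.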

Government department Central Office of Information [?] is quite big on RDFa, have a number of projects with it. [I’d come across the UK Civil Service Job Service API while looking for examples for work presentations on APIs.]

RDFa allows for flexible publishing options. If you’re already publishing HTML, you can add RDFa mark-up then get flexible publishing models – different departments can keep publishing data in their own way, a central website can go and request from each of them and create its own database of e.g. jobs. Decentralised way of approaching data distribution.

Can be consumed by: smarter browsers; client-side AJAX; other servers such as SearchMonkey.

He’s interested where browsers can do something with it – either enhanced browsers that could e.g. store contact info in a page into your address book; or develop JavaScript libraries that can parse page and do something with it. [screen shot of jobs data in search monkey with enhanced search results]

RDFa might be going into Drupal core.

Example of putting isbn in RDFa in page, then a parser can go through the page, pull out the triples [some explanation of them as mini db?], pull back more info about the book from other APIs e.g. Amazon – full title, thumbnail of cover. e.g. pipes.

Example of FOAF – twitter account marked up in page, can pull in tweets. Could presumably pull in newer services as more things were added, without having to re-mark-up all the pages.

Example of chemist writing a blog who mentions a chemical compound in blog post, a processor can go off and retrieve more info – e.g. add icon for mouseover info – image of molecule, or link to more info.

Next plan is to link with BOSS. Can get back RDFa from search results – augment search results with RDFa from the original page.

Search Monkey (what it is and what you can do with it)
Neil Crosby (European frontend architect for search at Yahoo).

SearchMonkey is (one of) Yahoo’s open search platforms (along with BOSS). Uses structured data to enhance search results. You get to change stuff on Yahoo search results page.

SearchMonkey lets you: style results for certain URL patterns; brand those results; make the results more useful for users.

[examples of sites that have done it to see how their results look in Yahoo? I thought he mentioned IMDb but it doesn’t look any different – a film search that returns a wikipedia result, OTOH, does.]

Make life better for users – not just what Yahoo thinks results should be, you can say ‘actually this is the important info on the page’

Three ways to do it [to change the SERP (search engine results page)]: mark up data in a way that Yahoo knows about – ‘just structure your data nicely’, e.g. video mark-up; enhance a result directly; make an infobar.

Infobar – doesn’t change the result you see immediately on the page, but it opens on the page. E.g. of auto-enhanced result – Playcrafter. Link to developer start page – how to mark it up, with examples, and what it all means.

User-enhanced result – Facebook profile pages are marked up with microformats – can add as friend, poke, send message, view friends, etc from the search results page. Can change the title and abstract, add image, favicon, quicklinks, key/value pairs. Create at [link I can’t see but is on slides]. Displayed on screen; you fill it out on a template.

Infobar – dropdown in grey bar under results. Can do a lot more, as it’s hidden in the infobar and doesn’t have to worry people.

Data from: microformats, RDF, XSLT, Yahoo’s index, and soon, top tags from delicious.

If no machine data, can write an XSLT. ‘isn’t that hard’. Lots of documentation on the web.

Examples of things that have been made – a tool that exposes all the metadata known for a page. URL on slide. Can install on Yahoo search page, add it in. Use location data to make a map – any page on the web with metadata about locations on it – map monkey. Get Qype results for anything you search for.

There’s a mailing list (people willing and wanting to answer questions) and a tutorial.

Questions

Question: do you need to use a special doctype [for RDFa]?
Answer: added to spec that ‘you should use this doctype’, but the spec allows for RDFa to be used in situations where you can’t change the doctype, e.g. RDFa embedded in a blogger blogpost. Most parsers walk the DOM rather than relying on the doctype.

Jim O’D – excited that SearchMonkey supports XSLT – if have website with correctly marked up tables, could expose those as key/value pairs?
Answer: yes. XSLT fantastic tool for when don’t have data marked up – can still get to it.
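For instance, a stylesheet in roughly this shape (the table class and output element names are invented for the sketch) would turn each two-column row of a marked-up table into a key/value pair:

```xml
<!-- Sketch: map each row of a two-column HTML table to a key/value pair -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <items>
      <xsl:for-each select="//table[@class='specs']/tr">
        <item key="{td[1]}" value="{td[2]}"/>
      </xsl:for-each>
    </items>
  </xsl:template>
</xsl:stylesheet>
```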

Frankie – question I couldn’t hear. About info out to users?
Answer: if you’ve built a monkey, up to you to tell people about it for the moment. Some monkeys are auto-on e.g. Facebook, wikipedia… possibly in future, if developed a monkey for a site you own, might be able to turn it auto-on in the results for all users… not sure yet if they’ll do it or not.
Frankie: plan that people get monkeys they want, or go through gallery?
Answer: would be fantastic if could work out what people are using them for and suggest ones appropriate to people doing particular kinds of searches, rather than having to go to a gallery.

“The coolest thing to be done with your data will be thought of by someone else”

I discovered this ace quote, “the coolest thing to be done with your data will be thought of by someone else”, on JISC’s Common Repository Interfaces Group (CRIG) site, via The Repository Challenge. The CRIG was created to “help identify problem spaces in the repository landscape and suggest innovative solutions. The CRIG consists of a core group of technical, policy and development staff with repository interface expertise. It encourages anyone to join who is dedicated and passionate about surfacing scholarly content on the web.”

Read ‘repository or federated search’ for ‘repository’ (or think of a federated search as a pseudo-repository) and ‘scholarly’ for ‘cultural heritage’ content, and it sounds like an awful lot of fun.

It’s also the sentiment behind the UK Government’s Show Us a Better Way, the Mashed Museum days and a whole bunch of similar projects.

‘Sector-wide initiatives’ at ‘UK Museums on the Web Conference 2008’

Session 2, ‘Sector-wide initiatives’, of the UK Museums on the Web Conference 2008 was chaired by Bridget McKenzie.

In the interests of getting my notes up quickly I’m putting them up pretty much ‘as is’, so they’re still rough around the edges. There are quite a few sections below which need to be updated when the presentations or photos of slides go online. Updated posts should show in your RSS feed but you might need to check your settings.

[I hope Bridget puts some notes from her paper on her blog because I didn’t get all of it down.]

The session was introduced as case studies on how cross-institutional projects can be organised and delivered. She mentioned resistance to bottom-up or experimental approaches, institutional constraints, and building on emerging frames of the web.

Does the frame of ‘the museum’ make sense anymore, particularly on the web? What are our responsibilities when we collaborate? Contextual spaces – a chance to share expertise in meaningful ways.

It’s easy to revert to ways previous projects have been delivered. Funding plans don’t allow for iterative, new and emergent technologies.

Carolyn Royston and Richard Morgan, V&A and NMOLP.
The project is funded by the Treasury’s ‘invest to save’ programme.

Aims:
Increase use of the digital collections of the 9 museums (no new website)
No new digitisation or curatorial content.
Encourage creative and critical use of online resources.
[missed one]
Sustainable high-quality online resource for partners.

The reality – it’s like herding cats.

They had to address issue of partnership to avoid problems later in project.

Focussed on developing common vision, set of principles on working together, identify things uniquely achievable through partnership, barriers to success, what added value for users.

Three levels of barriers to success: working in an inter-museum collaborative way, which was a first for those nationals; organisational issues – working inter-departmentally (people are ‘learning’ people or ‘web’ people, and not used to working together); personal issues – people involved who may not think of themselves as web or learning people.

These things aren’t necessarily built into the project plan.

Deliverables: web quests, ‘creative journeys’, federated search, [something I missed], new ways of engaging with audiences.

Web Quests – online learning challenge, flexible learning tool mapped to curriculum. They developed a framework. It supports user research, analysis and synthesis of information. Users learn to use collections in research.

Challenges: creating meaningful collection links; sending people to collections sites knowing that the content they’d find there wasn’t written for those audiences; providing support for pupils when searching collections; a sustainable content authoring tool and process.

[I wondered if the Web Quest development tools are extendible, and had a chance to ask Carolyn in one of the breaks – she was able to confirm that they were.]

Framework stays on top to support and structure.

Creative journeys:
[see slide]

They’re using Drupal. [Cool!]

[I also wondered about the user testing for creative journeys, whether there was evidence that people will do it there and not on their blogs, Zotero, in Word documents or hard drives – Carolyn also had some information on this.]

Museums can push relevant content.

What are the challenges?
How to build and sustain the Creative Journeys (user-generated content) communities, individually and as a partnership?
Challenge to curatorial authority and reputation
Work with messiness and complexity around new ways of communicating and using collections
Copyright and moderation issues

But partners are still having a go – shared risk, shared success.

Federated search
Wasn’t part of original implementation plan
[slide on reasons for developing]
The project uses a cross-collection search; it isn’t a cross-collection search project. The distinction can be important.

The technical solution was driven by project objectives [choices were made in that context, not in a constraint-free environment.]

Richard, Technical Solution
The back-end is de-coupled from front end applications
A feed syndicates user actions.

Federated search – a system for creating machine readable search results and syndicating them out.
Real time search or harvester. [IMO, ‘real time’ should always be in scare quotes for federated searches – sometimes Google creates expectations of instantaneous results that other searches can’t deliver, though the difference may only be a matter of seconds.]
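As a rough sketch of the ‘real time’ flavour (the partner sources and functions here are invented stubs, not NMOLP’s actual implementation): fan the query out to every source concurrently, wait up to a timeout, and merge whatever comes back in time:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stub partner searches standing in for real HTTP calls (invented for the sketch)
def search_museum_a(query):
    return [{'title': f'{query} teapot', 'source': 'Museum A'}]

def search_museum_b(query):
    return [{'title': f'{query} engine', 'source': 'Museum B'}]

def federated_search(query, sources, timeout=2.0):
    """Query every source concurrently; drop any that miss the timeout."""
    results = []
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(fn, query) for fn in sources]
        for future in as_completed(futures, timeout=timeout):
            results.extend(future.result())
    return results

hits = federated_search('steam', [search_museum_a, search_museum_b])
print(sorted(h['source'] for h in hits))  # ['Museum A', 'Museum B']
```

A harvester-based search would instead pull the partner data in ahead of time and query a local index, trading freshness for predictable response times.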

Data manipulation isn’t the difficult bit

Creative Journeys – more machine readable data

Syndicated user interactions with collections.
Drupal [slide]

Human factor – how to sell to board
Deploy lightweight solutions. RAD. Develop in house – no need to go to an agency.

[I’d love it if the NMOLP had a blog, or a holding page, or something, where they could share the lessons they’ve learnt and the research they’ve done, and generally engage with the digital museum community. A lot of these big infrastructure projects would benefit from greater transparency, as scary as that is for traditional organisations like museums. The open source model shows that many eyeballs make for robust applications.]

Jeremy Ottevanger and Europeana/the European Digital Library
[I have to confess I was getting very hungry by this point so you might get more detailed information from Jeremy’s blog when he adds his notes.]
Some background on his involvement in it, hopes and concerns.
“cross-domain access to Europe’s cultural heritage”
Our content is more valuable together than scattered around.

Partnership, planning and prototyping
Not enough members from the UK, not very many museums.
Launch November this year
Won’t build all of planned functionality – user-generated content and stuff planned but not for prototype.

Won’t build an API or all levels of multilinguality (in the first release). The interface layer may have 3 or 4 major languages; object metadata (maybe a bit) and original content of digitised documents.

Originals stay on the content contributors’ sites, so traffic ends up there. That’s not necessarily clear in the maquette (prototype). [But that knowledge might help address some concerns generally out there about off-site searches]

Search, various modes of browsing, timeline and stuff.

Jeremy wants to hear ideas, concerns, ambitions, etc to take to plenary meeting.

He’d always wanted personal place to play with stuff.

[Similarly to my question above, I’ve always wondered whether users would rely on a cultural heritage sector site to collate their data. What unique benefits might a user see in this functionality – authority by association? Live updates of data? Would they think about data ownership issues, or the longevity of their data and the reliability of the service?]

Why are there so few UK museums involved in this? [Based on comments I’ve heard, it’s about no clear benefits, yet another project, no API, no clear user need] Jeremy had some ideas but getting in contact and telling him is the best way to sort it out.

Some benefits include common data standards, a big pool of content that search engines would pay attention to in a way they wouldn’t on our individual sites. Sophisticated search. Will be open source. Multi-lingual technology.

Good news:
“API was always in plans”.

EDLocal – PNDS. EU projects will be feeding in technologies.

Bad news: API won’t be in website prototype. Is EDLocal enough? Sustainability problems.
‘Wouldn’t need website at all if had API’. Natural history collections are poorly represented.

Is OAI a barrier too far? You should be able to upload from spreadsheet. [You can! But I guess not many people know this – I’m going to talk to the people who coded the PNDS about writing up their ‘upload’ tool, which is a bit like Flickr’s Uploadr but for collections data.]
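The core of such an upload tool is a simple transformation – something like this toy sketch (the column names and data are invented, not the PNDS’s actual schema), where each spreadsheet row becomes one metadata record:

```python
import csv
import io

# A two-object 'spreadsheet' (invented example data)
sheet = io.StringIO("title,maker,date\nTeapot,Wedgwood,1790\nLoom,Arkwright,1775\n")

def rows_to_records(fileobj):
    """Turn each CSV row into a dict-shaped metadata record."""
    return [dict(row) for row in csv.DictReader(fileobj)]

records = rows_to_records(sheet)
print(records[0]['title'])  # Teapot
```

The hard part isn’t the parsing, of course – it’s agreeing on the fields and getting the records into the aggregator, which is exactly where OAI puts people off.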

Questions
Jim O’Donnell: regarding the issue of lack of participation. People often won’t implement their own OAI repository so that requirement puts people off.

Dan Zambonini: aggregation fatigue. ‘how many more of these things do we have to participate in’. His suggestion: tell museums to build APIs so that projects can use their data, should be other way around. Jeremy responded that that’s difficult for smaller museums. [Really good point, and the PNDS/EDL probably has the most benefits for smaller museums; bigger museums have the infrastructure not to need the functionality of the PNDS though they might benefit from cross-sector searching and better data indexing.]

Gordon McKenna commented: EDLocal starts on Wednesday next week, for three years.

George Oates: what’s been most surprising in collaboration process? Carolyn: that we’ve managed to work together. Knowledge sharing.

Yahoo! SearchMonkey, the semantic web – an example from last.fm

I had meant to blog about SearchMonkey ages ago, but last.fm’s post ‘Searching with my co-monkey’ about a live example they’ve created on the SearchMonkey platform has given me the kick I needed. They say:

The first version of our application deals with artist, album and track pages giving you a useful extract of the biography, links to listen to the artist if we have them available, tags, similar artists and the best picture we can muster for the page in question.

Some background on SearchMonkey from ReadWriteWeb:

At the same time, it was clear that enhancing search results and cross linking them to other pieces of information on the web is compelling and potentially disruptive. Yahoo! realized that in order to make this work, they need to incentivize and enable publishers to control search result presentation.

SearchMonkey is a system that motivates publishers to use semantic annotations, and is based on existing semantic standards and industry standard vocabularies. It provides tools for developers to create compelling applications that enhance search results. The main focus of these applications is on the end user experience – enhanced results contain what Yahoo! calls an “infobar” – a set of overlays to present additional information.

SearchMonkey’s aim is to make information presentation more intelligent when it comes to search results by enabling the people who know each result best – the publishers – to define what should be presented and how.

(From Making the Web Searchable: The Story of SearchMonkey)

And from Yahoo!’s search blog:

This new developer platform, which we’re calling SearchMonkey, uses data web standards and structured data to enhance the functionality, appearance and usefulness of search results. Specifically, with SearchMonkey:

  • Site owners can build enhanced search results that will provide searchers with a more useful experience by including links, images and name-value pairs in the search results for their pages (likely resulting in an increase in traffic quantity and quality)
  • Developers can build SearchMonkey apps that enhance search results, access Yahoo! Search’s user base and help shape the next generation of search
  • Users can customize their search experience with apps built by or for their favorite sites

This could be an interesting new development – the question is, how well does the data we currently output play with it; could we easily adapt our pages so they’re compatible with SearchMonkey; should we invest the time it might take? Would a simple increase in the visibility and usefulness of search results be enough? Could there be a greater benefit in working towards federated searches across the cultural heritage sector or would this require a coordinated effort and agreement on data standards and structure?

Update to link to the Yahoo! Search Blog post ‘The Yahoo! Search Gallery is Open for Business‘, which has a few more examples.