My notes from Christian Heilmann's talk on 'Reaching those web folk' with Yahoo!'s new-ish YQL, open data tables and APIs at the National Maritime Museum [his slides]. My notes are a bit random, but might be useful for people, especially the idea of using YQL as an easy way to prototype APIs (or implement APIs without too much work on your part).
For him it's about data on the web, not just technology.
Number of users is a crap metric, [should consider the user experience].
Stats should be what you use to discover where the problems are, not to pat yourself on the back.
If you think of your site as content, then visitors can become 'broadcasting stations' and relay your message. Information flows between readers and content. They're passing it on through distribution channels you're not even aware of.
Content on the web is validated with links and quotes from other sources, e.g. Wikipedia. People mix your information with other sources to prove a point or validate it, e.g. photos on maps.
How can you be part of it?
Make it easy to access. Structure your websites semantically (plain old semantic HTML) – the title is important, etc. Add more semantic richness with RDF and microformats. Provide data feeds or RSS. Consider the Rolls Royce of distribution – an API. Help other machines make sense of your content – search engines will love you too.
Yahoo index via BOSS API – Yahoo do it because they know 'search engines are dying'. Catch-all search engines are stupid. Apples are not the same apples for everyone. Build a cleverer web search.
http://ask-boss.appspot.com/ – nlp analysis of search results. Try 'who is batman in the dark knight' – amazing.
BOSS provides mainstream channel for semantic web and microformats. Microformats are chicken and egg problem. Using searchmonkey technology, BOSS lists this information in the results. BOSS can return all known information about a page, structured.
Key terms parameter in BOSS – what did people enter to find a site/page? http://keywordfinder.org/ – shows what successful websites have for a given keyword.
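A BOSS request is just a templated URL. A minimal sketch in Python, assuming the v1 endpoint shape and using a placeholder app ID (check the BOSS docs before relying on either):

```python
# Sketch: build a request URL for Yahoo's BOSS web search API.
# The /ysearch/web/v1/ path is the v1 shape as I recall it, and
# YOUR_APP_ID is a placeholder -- check the BOSS docs for both.
from urllib.parse import quote, urlencode

def boss_url(query, appid="YOUR_APP_ID", count=10):
    base = "http://boss.yahooapis.com/ysearch/web/v1/"
    # query goes in the path, options go in the query string
    params = urlencode({"appid": appid, "format": "json", "count": count})
    return base + quote(query) + "?" + params

print(boss_url("who is batman in the dark knight"))
```

Fetching that URL (with a real app ID) would return JSON results you can parse like any other feed.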
Clean HTML is the most important thing, semantic and microformats are good.
If your data is interesting enough, people will try to get to it and remix it.
[Curl has grown up since I last used it! Can be any browser, do cookies, etc.]
Now the web looks like an RSS reader.
Include RSS in your stats.
Guardian – any of their content websites put out RSS through CMS. They then provided an API so end users can filter down to the data they need.
Programmable Web – excellent resource but can be overwhelming.
The more data sources you use, the more time you spend reading API documentation, since every API is different – terms, formats, etc. The more sources you connect to, the more chances of error. The more stuff you pull in, the slower the performance of your website.
So you need systems to aggregate sources painlessly. Yahoo Pipes has a visual interface, but changes have to be made by hand.
You can't quickly use a pipe in your code and change it on the fly. e.g. change a parameter for one implementation. No version control.
So that's one of the reasons for YQL: Yahoo Query Language. SQL style interface to all yahoo data (all Yahoo APIs) and the web. Yahoo build things with APIs cos it's the only way to scale. Book: 'scalable websites', all about APIs.
Build queries to Yahoo APIs, try them out in YQL console. Provides diagnostics – which URLs, how long it took, any problems encountered. Allows nesting of API calls.
Outputs XML or JSON, consistent format so you know how to use that information.
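To use a query outside the console you just hit the public YQL endpoint with the query as a URL parameter. A minimal Python sketch (endpoint and query shape as documented at the time; the feed URL is only an example):

```python
# Sketch: calling the public YQL endpoint from Python.
# No API key is needed for public tables; the feed URL is a placeholder.
from urllib.parse import urlencode

YQL_ENDPOINT = "http://query.yahooapis.com/v1/public/yql"

def yql_url(query, fmt="json"):
    # YQL takes the whole SQL-style query as the 'q' parameter
    return YQL_ENDPOINT + "?" + urlencode({"q": query, "format": fmt})

url = yql_url('select title from rss where url="http://example.com/feed.xml"')
# urllib.request.urlopen(url).read() would then fetch the JSON/XML result
print(url)
```

Because the query is just a string, you can change a parameter per implementation on the fly – exactly what's awkward with Pipes.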
YQL also helped internally because of varying APIs between departments.
Gives access to all Yahoo services, any data sources on the web, including html and microformats, and can scrape any website.
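A few illustrative queries (the `html`, `rss` and `flickr.photos.search` tables were standard YQL tables; the URLs and search terms here are made up):

```sql
-- Scrape headings out of any page with the html table and an XPath filter
select * from html where url="http://example.org/" and xpath='//h2'

-- Query a feed like a database table
select title from rss where url="http://example.org/feed.xml"

-- Call a Yahoo API (Flickr search) through the same interface
select * from flickr.photos.search where text="maritime"
```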
Easy way to add your own information to YQL: tell Yahoo the endpoint where it can get the info.
[Though you do need RSS results from a search engine to point to – I'm going to see what we can output from our Google Mini and will share any code – or would appreciate some time-saving pointers if anyone has any. Yes, hello, lazyweb, that's my coat, thanks.]
Basically it's a way of providing an API without having to develop one.
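An open data table is just a short XML file pointing YQL at your endpoint. A sketch, with element names following the table.xsd schema as I understand it – every URL and name below is a placeholder, so check the open data tables docs before copying:

```xml
<!-- Sketch of a YQL open data table definition; illustrative only -->
<table xmlns="http://query.yahooapis.com/v1/schema/table.xsd">
  <meta>
    <author>Your Museum</author>
    <description>Search the collection</description>
    <documentationURL>http://example.org/api-docs</documentationURL>
  </meta>
  <bindings>
    <!-- maps "select ... where keyword='x'" onto your search URL -->
    <select itemPath="rss.channel.item" produces="XML">
      <urls>
        <url>http://example.org/search?q={keyword}</url>
      </urls>
      <inputs>
        <key id="keyword" type="xs:string" paramType="path" required="true"/>
      </inputs>
    </select>
  </bindings>
</table>
```

With that registered, `select * from yourmuseum.search where keyword="ships"` would work like any other YQL table.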
Concluding: you can piggyback on people's social connections with other people by making data shareable. [Then your data is shared, yay. Assuming your institution is down with that, and no copyrights or puppies were hurt in the process.]
APIs are a commitment – they have to be available all the time and take a lot of traffic, but it's hard to measure that traffic and its benefits. Making APIs scale is a pain and you have to be clever to do it. Pointing a YQL open data table at the search engine on your own site also works.
Saves documenting API? [??]
YQL handles the interface, caching and data conversion for you. Also limits the access to sensible levels – 10,000 hits/hour.
Jim – 'images from collection' displayed on a page as a badge thing, with YQL as the RSS browser. Can just create an RSS feed per exhibition, then make a new badge for each new exhibition.
Using YQL protects against injection attacks.
Comment from audience – YQL as meta-API.
Registering is basically making the XML file. You need a Yahoo ID to use the console. [The console is cool, basically like a SQL 'enterprise' system console, with errors and transaction processing costs.]
We had questions about adding in metrics, stats, to use both for reporting and keeping funders/bosses happy and for diagnostics – to e.g. find out which areas of the collection are being queried, what people are finding interesting.
GitHub repository as the place to register open tables to make them discoverable.
There's a YQL blog.
[So, that's it – it's probably worth a play, and while your organisation might not want to use it in production without checking out how long the service is likely to be around, etc, it seems like an easy way of playing with API-able data. It'd be really interesting to see what happened if a few museums with some overlap in their collections coverage all made their data available as an open table.]