If Web 3.0 = Semantic Web, is this the 'first major' Semantic Web application?

Via the Rough Type post 'Freebase: the Web 3.0 machine':

"Artificial intelligence guru Danny Hillis has launched an early version of the first major Web 3.0 application. It's called Freebase, and its grandiose epistemological mission is right up there with those of Google and Wikipedia.

The product of Hillis's latest company, Metaweb Technologies, Freebase is a user-generated brain. Like Wikipedia, it allows people to freely add information to it, in the form of text or images or, one assumes, anything else that can be rendered digitally. But it also allows users to add "metadata" about the information – tags that describe what a word or picture is and how it relates to other information.

The addition of rich meta tags in a standardized form is what makes Freebase a next-generation Web application – a manifestation of what Tim Berners-Lee long ago dubbed the Semantic Web and what has recently been rebranded Web 3.0 for popular consumption.

…Freebase is really more about the creation of a community of machines than a community of people. The essence of the Semantic Web is the development of a language through which computers can share meaning and hence operate at a higher, more human level of intelligence. The meta tags are crucial to that machine language. Freebase hopes to harness the (free) labor of a big pool of volunteers to add those tags, which is a labor-intensive chore (and a big hurdle on the path to Web 3.0)."
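To make the 'meta tags' idea concrete: the Semantic Web's native format is RDF, which expresses metadata as subject-predicate-object triples. Here's a minimal sketch in Python using the rdflib library (the entity and relation names are my own invention for illustration, not Freebase's actual schema):

```python
# A minimal sketch of machine-readable metadata as RDF triples.
# This illustrates the Semantic Web idea; the names below are
# invented for the example, not Freebase's actual data model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")  # hypothetical namespace

g = Graph()
museum = EX["Melbourne_Museum"]

# Plain text just says "Melbourne Museum"; the triples say what it
# *is* and how it relates to other things, so machines can use it.
g.add((museum, RDF.type, EX["Museum"]))
g.add((museum, RDFS.label, Literal("Melbourne Museum")))
g.add((museum, EX["locatedIn"], EX["Melbourne"]))

print(g.serialize(format="turtle"))
```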

It's worth checking out the IHT article linked above, A 'more revolutionary' Web. I liked this bit:

"A consequence of an open and diffuse Internet, he noted, is that unexpected outcomes can emerge from unanticipated places.

For instance, some early experiments in highlighting new relationships from existing Web data have come out of Flickr, a photo-sharing site that members categorize themselves, and FOAF, which stands for "friend of a friend," a research project to describe the various links between people.

Both add "meaning" where such context did not exist before, just by changing the underlying programming to reflect links between databases, Shadbolt said."
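FOAF documents are just RDF too. A quick sketch of the kind of 'friend of a friend' link Shadbolt describes, again with rdflib (the people are invented, though the FOAF vocabulary itself is real: http://xmlns.com/foaf/0.1/):

```python
# Sketch of FOAF-style links between people. The names are invented;
# the FOAF vocabulary is the real "friend of a friend" schema.
from rdflib import BNode, Graph, Literal
from rdflib.namespace import FOAF, RDF

g = Graph()
alice, bob = BNode(), BNode()

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))  # the "friend of a friend" link

print(g.serialize(format="turtle"))
```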

I like the idea of a Friday post looking at how people are interacting with and inhabiting museums. Here's a lovely photo of Melbourne Museum on Flickr. I have a personal interest in this photo because it reminds me of leaving the office late at night when I was working all hours to get the website finished before the launch of the museum.

I love the way this overhead photo has been marked up with notes to link to other historical photos and add layers of personal meaning: Whitechapel – a local history in pictures.

Yahoo Pipes – a new challenge? opportunity? for museums

I'm cheating and posting something I sent to the Museums Computer Group list.

Bill Thompson has written about Yahoo Pipes in 'The mash-up future of the web'.

If you haven't heard of Yahoo Pipes before, this is a reasonable summary from the article:

"Their new offering, Pipes, lets you take a data feed such as the result of a web search, or an RSS feed from a blog or news site, or a set of tagged photos on Flickr, and transform it to produce the outcome you want. You can then make it available for other people to see.

It's web-based, no more complicated than creating programs for Lego MindStorms, and already stirring up a lot of interest.

…Yahoo!'s Pipes do the same with a simple graphical tool that lets you define and connect data feeds, filters and user prompts, so that you can quickly build the service you want. You still need some technical ability, but you don't need to be a programmer."
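Pipes itself is a drag-and-drop tool, but the underlying idea (fetch a feed, filter it, republish the result) is easy to sketch in code. A rough Python equivalent of a one-filter pipe, assuming the feedparser library and a placeholder feed URL and keyword:

```python
# Rough code equivalent of a simple Yahoo Pipe: fetch an RSS feed,
# keep only the items matching a keyword, output the result.
# The feed URL and keyword are placeholders for this sketch.
import feedparser

FEED_URL = "http://example.org/museum-news/rss"  # hypothetical feed
KEYWORD = "exhibition"

feed = feedparser.parse(FEED_URL)

# The "filter" step of the pipe: keep matching entries only.
matches = [e for e in feed.entries
           if KEYWORD.lower() in e.get("title", "").lower()]

for entry in matches:
    print(entry.title, "-", entry.link)
```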

My first thought was 'cool, let's make sure our feeds are in a compatible format so people can use our data', and my second thought was 'how on earth will we measure usage?'.

It would be cool to know who's using our data and how, but overall, do we need to measure how it's used and how often it's accessed? Given that we probably can't anyway, are there other potentially useful indicators of use? Would use of our data in a mash-up affect our museums' Key Performance Indicators by driving traffic away from our sites? I'd like to say that's the wrong question, but website visitors count under some funding models.
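On the measurement question, about the best I can think of is watching who fetches our feeds in the server logs. A rough sketch that counts requests to a feed URL by User-Agent in an Apache-style combined log (the paths are placeholders, and User-Agent strings are neither complete nor trustworthy, so this is an indicator at best):

```python
# Crude sketch: count who is fetching a feed, by User-Agent, from an
# Apache combined-format access log. The log path and feed path are
# placeholders; User-Agent strings are easily spoofed, so treat the
# output as an indicator of use, not a measurement.
import re
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # hypothetical path
FEED_PATH = "/feeds/collection.rss"       # hypothetical feed URL

# Combined format: ... "GET /path HTTP/1.1" status size "ref" "agent"
line_re = re.compile(r'"(?:GET|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

agents = Counter()
with open(LOG_PATH) as log:
    for line in log:
        m = line_re.search(line)
        if m and m.group(1).startswith(FEED_PATH):
            agents[m.group(2)] += 1

for agent, hits in agents.most_common(10):
    print(hits, agent)
```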

From the AHDS blog:

The AHDS has investigated the usage statistics for the Stormont Papers resource. Two main points emerged:

1. User searches show the 'long tail' effect. The bulk of searches are not on the most popular terms (which account for 21% of searches), but on terms, phrases and words that are used very rarely (which account for 54% of searches).

2. All ten of the most popular search terms are available as pre-arranged links on the home page. A quick click on a link, rather than typing on a keyboard, is a more user-friendly way to discover what is within a resource.

Source: Users do not want what you expect, AHDS.
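The same check is easy to run against your own search logs. A sketch that works out what share of searches the top terms account for, assuming a plain-text log with one query per line (the file name is a placeholder):

```python
# Sketch: how much of total search volume do the top N terms cover?
# Assumes a plain-text log with one search query per line.
from collections import Counter

TOP_N = 10
with open("search_queries.txt") as f:  # hypothetical log file
    counts = Counter(line.strip().lower() for line in f if line.strip())

total = sum(counts.values())
top_share = sum(n for _, n in counts.most_common(TOP_N)) / total

print(f"Top {TOP_N} terms cover {top_share:.0%} of searches")
print(f"{len(counts)} distinct queries across {total} searches")
```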

"Shoppers are likely to abandon a website if it takes longer than four seconds to load, a survey suggests.

It found 75% of the 1,058 people asked would not return to websites that took longer than four seconds to load." Akamai study as reported on the BBC.

It's a study of online shopping habits, but I wonder if the same holds for cultural sector sites. The fact that I don't know says something either about the limits of my knowledge of existing audience evaluation or about the paucity of existing information.
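If four seconds is the threshold, it's worth timing your own pages occasionally. A rough check in Python; note that it times the HTML fetch only, so it understates what visitors actually wait for once images, CSS and rendering are added:

```python
# Rough check of how long a page takes to fetch. This times the HTML
# response only; images, CSS and rendering add more on top, so real
# visitor wait times will be longer than this number.
import time
import urllib.request

URL = "http://example.org/"  # substitute your own site

start = time.perf_counter()
with urllib.request.urlopen(URL) as response:
    response.read()
elapsed = time.perf_counter() - start

print(f"Fetched {URL} in {elapsed:.2f} seconds")
if elapsed > 4:
    print("Over the four-second threshold from the Akamai study")
```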

The BBC article doesn't report whether the study analysed the results by gender, but another piece, Key Website Research Highlights Gender Bias, suggests that gender makes a big difference to the user experience:

"Despite the parity of target audience, the results found that 94% of the sites displayed a masculine orientation with just 2% displaying a typically female bias."