Where does Web 2.0 live in your organisation?

Last night Lynda Kelly left a comment that pointed me to her audience research blog and to an interesting discussion on fresh + new back in June last year, which in turn led me to Organizational Barriers to Using Web 2.0 Tools. This post quoted a ‘nonprofit user’ who:

…pointed out to me that while she sees that social media tools make it easier for non-technical types to integrate technology into their workflow, at the same time there’s an ongoing organizational message that says “Leave the technology stuff to the IT department.”

Interestingly (and this is in part based on my experience in different organisations over the years), sometimes the IT department are given the message “leave the web to the marketing department” or the education department, or to the curators…

Given that social technologies are not, by definition, traditional publications like official ‘brand’ and venue messages or rigorous academic research, and may not yet have a place in the organisational publication programme, what is the practical effect of the ownership of web projects in a cultural heritage organisation?

And what happens if the ‘participatory web’ falls in an organisational limbo, with no-one able to commission or approve applications or content? More importantly, how can we work around it?

I think this is where some of the frustrations Frankie Roberto expressed come in – different departments have different priorities and working practices, and are more or less risk-averse (and have different definitions of ‘risk’).

(However, I don’t think you should underestimate the urge to archive and curate that many museum people feel. That archival urge possibly just goes along with the kinds of personalities that are drawn to work in museums. I have it myself, so maybe I’m too sympathetic to it.)

Another way to find out what’s being said about your organisation

If you’re curious to know what’s being said about your institution, collection or applications, Omgili might help you discover conversations about your organisation, websites or applications. If you don’t have the resources for formal evaluation programmes, it can be a really useful way of finding out how and why people use your resources, and of figuring out how you can improve your online offerings. From their ‘about’ blurb:

Omgili finds consumer opinions, debates, discussions, personal experiences, answers and solutions. … [it’s] a specialized search engine that focuses on “many to many” user generated content platforms, such as, Forums, Discussion groups, Mailing lists, answer boards and others. … Omgili is a crawler based, vertical search engine that scans millions of online discussions worldwide in over 100,000 boards, forums and other discussion based resources. Omgili knows to analyze and differentiate between discussion entities such as topic, title, replies and discussion date.

Google as encyclopedia?

On the BBC this morning: Google debuts knowledge project:

Google has kicked off a project to create an authoritative store of information about any and every topic.

The search giant has already started inviting people to write about the subject on which they are known to be an expert.

The system will centre around authored articles created with a tool Google has dubbed “knol” – the word denotes a unit of knowledge – that will make webpages with a distinctive livery to identify them as authoritative.

The knol pages will get search rankings to reflect their usefulness. Knols will also come with tools that readers can use to rate the information, add comments, suggest edits or additional content.

Nicholas Carr said the knol project was … an attempt by Google to knock ad-free Wikipedia entries on similar subjects down the rankings.

So much could be said about this. Is it a peer review system for the web? How are ‘experts’ discovered and chosen? What factors would influence whether an ‘expert’ agrees to participate? Would practices of academic inclusion and exclusion apply? Will it use semantic web technologies or methodologies? Will commercial factors affect the users’ trust in search results? How will it affect traditional content providers like encyclopaedias, and new content sources like Wikipedia? Are they duplicating existing knowledge systems just to provide a new revenue stream?

Ok, last Facebook post, I promise, but for Londoners, there’s Poke 1.0, a ‘Facebook social research symposium’:

This social research symposium will allow academics who are researching the ‘Facebook’ social networking site to meet and exchange ideas. Researchers are welcome from the fields of sociology, media, communication & cultural studies, information science, education, politics, psychology, geography and any other sphere of ‘internet research’. PhD and post-doctoral researchers are especially welcome, as are researchers considering Facebook as a potential area of research.

User-Generated Content Sites See Exponential Growth in UK Visitors

I missed this comScore report at the time (September 2006).

Leading User-Generated Content Sites See Exponential Growth in UK Visitors During the Past Year

“Web 2.0 is clearly architected for participation, as it attempts to harness the collective intelligence of Web users,” commented Bob Ivins, managing director of comScore Europe. “Many of the sites experiencing the fastest growth today are the ones that understand their audience’s need for expression and have made it easy for them to share pictures, upload music and video, and provide their own commentary, thus stimulating others to do the same. It is the classic network effect at work.”

While uniformly demonstrating strong traffic growth, UGC sites are also adept at keeping users engaged.