Personal web servers on a laptop or desktop machine are very handy if you're looking for a local development environment. This article offers a few options for Mac, Linux and Windows: Set up your personal webserver.
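(If all you need is to serve a folder of static files while you work, Python's built-in server is enough – a minimal sketch, not one of the options from the linked article:)

# A minimal local web server for static files - serves the current directory
# at http://localhost:8000/. A quick sketch, not a full development environment.
import http.server
import socketserver

PORT = 8000  # any free port will do

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print("Serving on http://localhost:%s/" % PORT)
    httpd.serve_forever()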
Month: May 2008
Get your (cultural heritage) geek on
The details for two events you might be interested in have been finalised.
The program for UK Museums on the Web Conference 2008 has been announced. It's a great line-up, so I'll see you there if you can get to the University of Leicester for 19 June 2008.
And the date and venue for BathCamp have been confirmed as Saturday 13th – Sunday 14th September 2008 at the Invention Studios in Bath. More information at that blog link, or my previous post: Calling geeks in the UK with an interest in cultural heritage content/audiences.
And I've been hassled by my legion of fans to point out that you can nominate me in the 'Programming and development blogs' category of the ComputerWeekly.com IT Blog Awards 08 (and you might win a £50 Amazon voucher). There's a lovely badge, but I can't quite bring myself to use it; I've only just gotten used to the idea that anyone apart from three or four people I know reads this blog. Anyway, there you go.
And if all that's too much excitement for you, go read about the lamest Wikipedia edit wars ever.
Notes from 'Aggregating Museum Data – Use Issues' at MW2008
These are my notes from the session 'Aggregating Museum Data – Use Issues' at Museums and the Web, Montreal, April 2008.
These notes are pretty rough, so apologies for any mistakes; I hope they're a bit useful to people, even though it's so long after the event. I've tried to include most of what was covered, but it's taken me a while to catch up on some of my notes and my recollection is fading. Any comments or corrections are welcome, and the comments in [square brackets] below are mine. All the Museums and the Web conference papers and notes I've blogged have been tagged with 'MW2008'.
This session was introduced by David Bearman, and included two papers:
Exploring museum collections online: the quantitative method by Frankie Roberto and Uniting the shanty towns – data combining across multiple institutions by Seb Chan.
David Bearman: the intentionality of the data production process is interesting, i.e. the data Frankie and Seb used wasn't designed for integration.
Frankie Roberto, Exploring museum collections online: the quantitative method (slides)
He didn't give a crap about the quality of the data; it was all about numbers – get as much as possible and see what he could do with it.
The project wasn't entirely authorised or part of his daily routine. It came in part from debates after the museum mash-up day.
Three problems with mashing museum data: getting it, (getting the right) structure, (dealing with) dodgy data
Traditional solutions:
Getting it – APIs
Structure – metadata standards
Dodgy data – hard work (get curators to fix it)
But it doesn't have to be perfect, it just has to be "good enough". Or "assez bon" (and he hopes that translation is good enough).
Options for getting it – screen scrapers, or Freedom of Information (FOI) requests.
FOI request – simple set of fields in machine-readable format.
Structure – some logic in the mapping into simple format.
Dodgy data – go for 'good enough'.
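[To make that concrete, here's a rough sketch of the scrape-and-simplify approach – my illustration, not Frankie's actual code; the museum URL, page structure and field labels are all invented:]

# A sketch of screen-scraping an object page and mapping it into a simple
# who/what/where/when/how structure. URL, markup and field labels are invented.
import requests
from bs4 import BeautifulSoup

def scrape_object(url):
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    record = {}
    # assume each field is marked up as <dt>Label</dt><dd>Value</dd>
    for dt in soup.find_all("dt"):
        label = dt.get_text(strip=True).lower()
        value = dt.find_next_sibling("dd").get_text(strip=True)
        record[label] = value
    # map the museum's internal labels onto the simple structure - 'good enough'
    return {
        "what": record.get("object name", ""),
        "where": record.get("place made", ""),
        "when": record.get("date", ""),
        "how": record.get("acquisition method", ""),
    }

print(scrape_object("http://example-museum.org/collections/object/1234"))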
Presenting objects online: the existing model doesn't give you a sense of the archive or the collection, as it's about the individual pages.
So what was he hoping for?
Who, what, where, when, how. ['Why' is the other traditional journalist's question, but it's too difficult to capture in structured information]
And what did he get?
Who: hoping for collection/curator – no data.
What: hoping for 'this is an x'. Instead got categories (based on museum internal structures).
Where: lots of variation – 1496 unique strings. The specificity of terms varies on geographic and historical dimensions.
When: lots of variation
How: hoping for donation/purchase/loan. Got a long list of varied stuff.
[There were lots of bits about whacking the data together that made people around me (and me, at times) wince. But it took me a while to realise it was a collection-level view, not an individual object view – I guess that's just a reflection of how I think about digital collections – so that doesn't matter as much as if you were reading actual object records. And I'm a bit daft cos the clue ('quantitative') was in the title.
A big part of the museum publication process is making crappy date and location and classification data correct, pretty and human-readable, so the variation Frankie found in the data isn't surprising. Catalogues are designed for managing collections, not for publication (though might curators also over-state the case because they'd always rather everything was tidied than published in a possibly incorrect or messy state?).
It would have been interesting to hear how the chosen fields related to the intended audience, but it might also have been just a reasonable place to start – somewhere 'good enough' – I'm sure Frankie will correct me if I'm wrong.]
It will be on museum-collections.org. Frankie showed some stuff with Google graph APIs.
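[As an illustration of the kind of quantitative view this makes possible – my sketch, not Frankie's code – you can count objects per category and hand the totals to the old Google Chart image API (since retired):]

# Count objects per category and build a pie chart URL for the (now retired)
# Google Chart image API. Illustrative only; the records are made up.
from collections import Counter
from urllib.parse import quote

objects = [
    {"what": "coin"}, {"what": "coin"}, {"what": "teapot"},
    {"what": "coin"}, {"what": "loom"},
]  # imagine thousands of scraped records here

counts = Counter(obj["what"] for obj in objects)
labels = "|".join(counts.keys())
values = ",".join(str(v) for v in counts.values())

chart_url = (
    "https://chart.googleapis.com/chart?cht=p"  # pie chart
    f"&chd=t:{values}"                          # data, text encoding
    f"&chl={quote(labels, safe='|')}"           # slice labels
    "&chs=400x200"                              # size in pixels
)
print(chart_url)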
Prior art – Pitt Rivers Museum – analysis of collections, 'a picture of Englishness'.
Lessons from politics: theyworkforyou for curators.
Issues: visualisations count all objects equally. e.g. lots of coins vs bigger objects. [Probably just as well no natural history collections then. Damn ants!]
Interactions – present user comments/data back to museums?
Whose role is it anyway, to analyse collections data? And what about private collections?
Sebastian Chan, Uniting the shanty towns – data combining across multiple institutions (slides)
[A paraphrase from the introduction: Seb's team are artists who are also nerds (?)]
Paper is about dealing with the reality of mixing data.
Mess is good, but… mess makes smooshing things together hard. Trying to agree on standards takes so long that you'll never get anything built.
Combination of methods – scraping + trust-o-meter to mediate 'risk' of taking in data from multiple sources.
Semantic web in practice – dbpedia.
OpenCalais – developed by ClearForest, which was bought by Reuters. Dynamically generated metadata tags about 'entities', e.g. possible authority records. There are problems with automatically generated data, e.g. guesses at people, organisations, whatever might not be right. 'But it's good enough'. You can then build on it so users can browse by people, then link to other sites with more information about them in other datasets.
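[Very roughly, the workflow is: post some free text, get back guessed entities with confidence scores, and decide how much to trust them. A sketch – the endpoint, parameters and response shape here are hypothetical stand-ins, not the real OpenCalais API:]

# A sketch of the 'machine-generated entities' workflow. The endpoint and
# response format are invented placeholders, NOT the actual OpenCalais API.
import requests

API_URL = "https://entity-extraction.example.com/extract"  # hypothetical
API_KEY = "your-key-here"                                   # hypothetical

def extract_entities(text):
    response = requests.post(
        API_URL,
        headers={"Authorization": API_KEY},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    # e.g. [{"type": "Person", "name": "Lawrence Hargrave", "score": 0.82}, ...]
    return response.json()["entities"]

description = "Model aeroplane built by Lawrence Hargrave in Sydney."
for entity in extract_entities(description):
    if entity["score"] > 0.7:  # 'good enough' - keep only confident guesses
        print(entity["type"], "->", entity["name"])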
[But can museums generally cope with 'good enough'? What does that do to ideas of 'authority'? If it's machine-generated because there's not enough time for a person in the museum to do it, is there enough time for a person in the museum to clean it? OTOH, the Powerhouse model shows you can crowdsource the cleaning of tags so why not entities. And imagine if we could connect Powerhouse objects in Sydney with data about locations or people in London held at the Museum of London – authority versus utility?
Do we need to critically examine and change the environment in which catalogue data is viewed so that the reputation of our curators/finds specialists in some of the more critical (bitchy) or competitive fields isn't affected by this kind of exposure? I know it's a problem in archaeology too.]
They've published an OpenSearch feed as GeoRSS.
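[Consuming that kind of feed is straightforward; a sketch with a made-up feed URL – the GeoRSS namespace is the standard one, everything else is illustrative:]

# Pull titles and coordinates out of a GeoRSS feed. The feed URL is invented;
# the georss namespace is the standard one.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example-museum.org/collection/search.rss"  # hypothetical
GEORSS = "{http://www.georss.org/georss}"

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.iter("item"):
    title = item.findtext("title")
    point = item.findtext(GEORSS + "point")  # "latitude longitude"
    if point:
        lat, lon = point.split()
        print(title, lat, lon)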
Fire Eagle, a Yahoo beta product. Link it to other data sets so you can see what's near you. [If you can get on the beta.]
I think that was the end, and the next bits were questions and discussion.
David Bearman: regarding linked authority files… if we wait until everything is perfect before getting it out there, then "all curators have to die before we can put anything on the web", "just bloody experiment".
Nate (Walker): is 'good enough' good enough? What about involving museums in creating better data and correcting it? [I think, correct me if not]
Seb: no reason why a museum community shouldn't create an OpenCalais equivalent. David: Calais knows what Reuters knows about data. [So we should get together as a sector, nationally or internationally, or as art, science, history museums, and teach it about museum data.]
David – almost saying 'make the uncertainty an opportunity' in museum data – open it up to the public as you may find the answers. Crowdsource the data quality processes in cataloguing! "we find out more by admitting we know less".
Seb – geo-location is critical to allowing communities to engage with this material.
Frankie – doing a big database dump every few months could be enough of an API.
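[A periodic dump really could be that simple – a sketch, with an invented table and column names, that writes the whole collection out as a CSV people can download and remix:]

# Dump a collections table to CSV as a poor man's API. Database, table and
# column names are invented.
import csv
import sqlite3

conn = sqlite3.connect("collection.db")  # stand-in for the real collections database
rows = conn.execute(
    "SELECT object_number, name, place, date, acquisition FROM objects"
)

with open("collection-dump.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["object_number", "name", "place", "date", "acquisition"])
    writer.writerows(rows)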
Location sensitive devices are going to be huge.
Seb – we think of search in a very particular way, but we don't know how people want to search i.e. what they want to search for, how they find stuff. [This is one of the sessions that made me think about faceted browsing.]
"Selling a virtual museum to a director is easier than saying 'put all our stuff there and let people take it'".
Tim Hart (Museum Victoria) – is the data from the public going back into the collection management system? Seb – yep. There's no field in EMu for some of the stuff that OpenCalais has, but the use of it from OpenCalais makes a really good business case for putting it into EMu.
Seb – we need tools to create metadata for us, we don't and won't have resources to do it with humans.
Seb – the Commons on Flickr is a good experiment in giving stuff away. Freebase – not sure if they'd go to that level.
Overall, this was a great session – lots of ideas for small and large things museums can do with digital collections, and it generated lots of interesting and engaged discussion.
[It's interesting – we opened up the dataset from Çatalhöyük for download so that people could make their own interpretations and/or remix the data, but we never got around to implementing interfaces so people could contribute the knowledge they created back to the project, or share the queries they'd run.]
Another model for connecting repositories
Dr Klaus Werner has been working with Intelligent Cultural Resources Information Management (ICRIM) on connecting repositories or information silos from "different cultural heritage organizations – museums, superintendencies, environmental and architectural heritage organizations" to make "information resources accessible, searchable, re-usable and interchangeable via the internet".
You can read more on these CAA07 conference slides: ICRIM: Interconnectivity of information resources across a network of federated repositories (pdf download), and the abstract from the CAA07 paper might also provide some useful context:
The HyperRecord system, used by the Capitoline Museums (Rome) and the Bibliotheca Hertziana (Max-Planck Institute, Rome) and developed as Culture2000 project, is a framework for the inter-connectivity of information resources from museums, archives and cultural institutes.
…
The repositories offer both the usual human interface for research (fulltext, title, etc.) and a smart REST API with a powerful behind-the-scenes direct machine-to-machine facility for querying and retrieving data.
…
The different information resources use digital object identifiers in the form of URNs (up to now, mostly for museum objects) for identification and direct-access. These allow easy aggregation of contents (data, records, documents) not only inside a repository but also across boundaries using the REST API for serving XML over a plain HTTP connection, in fact creating a loosely coupled network of repositories.
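To make the 'loosely coupled network of repositories' idea a little more concrete, here's a rough sketch of fetching a record by its URN over plain HTTP – the hostname, URN and element names are my inventions for illustration, not taken from the HyperRecord documentation:

# Resolve a record by URN from a repository's REST API and read the XML.
# Base URL, URN and element names are invented placeholders.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

REPOSITORY = "http://repository.example.org/records/"  # hypothetical base URL
urn = "urn:example:museum:object:12345"                # hypothetical identifier

url = REPOSITORY + urllib.parse.quote(urn, safe=":")
with urllib.request.urlopen(url) as response:
    record = ET.parse(response).getroot()

# a real record would follow the repository's own schema
print(record.findtext("title"))
print(record.findtext("institution"))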
Thanks to Leif Isaksen for putting Dr Werner in contact with me after he saw his paper at CAA07.
Notes from 'Everything RSS' at MW2008
These are my notes from the workshop Everything RSS with Jim Spadaccini from Ideum at Museums and the Web, Montreal, April 2008. Some of my notes will seem self-evident to various geeks or non-geeks but I've tried to include most of what was covered.
It's taken me a while to catch up on some of my notes, so especially at this distance – any mistakes are mine, any comments or corrections are welcome, and the comments in [square brackets] below are me. All the conference papers and notes I've blogged have been tagged with 'MW2008'.
The workshop will cover: context, technology, the museum sector, usability and design.
RSS/web feeds – it's easy to add or remove content sources; they can include rich media such as audio, images and video; and they're easily read or consumed via applications, websites and mobile devices.
The different flavours and definitions of RSS have hindered adoption.
Atom vs RSS – Atom might be better but not as widely adopted. Most mature RSS readers can handle both.
RSS users are more engaged – 2005, Nielsen NetRatings.
Marketers are seeing RSS as alternative to email as email is being overrun by spam and becoming a less efficient marketing tool.
The audience for RSS content is slowly building as it's built into browsers, email (Yahoo, Outlook, Mac), MySpace widget platform.
Feedburner. [I'm sure more was said about it than this – probably 'Feedburner is good/useful' – but it was quite a while ago now.]
Extending RSS: GeoRSS – interoperable geo-coded data; MediaRSS, Creative Commons RSS Module.
Creating RSS feeds on the server-side [a slide of references I failed to get down in time].
You can use free or open source software to generate RSS feeds. MagpieRSS, Feed Editor (Windows, extralabs.net); or free Web Services to create or extend RSS feeds.
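[If you'd rather hand-roll a feed than use one of those tools, RSS 2.0 is small enough to generate directly – a minimal sketch with placeholder titles and URLs:]

# A minimal hand-rolled RSS 2.0 feed. Titles and URLs are placeholders.
from xml.sax.saxutils import escape

items = [
    {"title": "New acquisition: ceramic teapot",
     "link": "http://example-museum.org/news/teapot",
     "description": "A new object has joined the collection."},
]

item_xml = "".join(
    "<item>"
    f"<title>{escape(i['title'])}</title>"
    f"<link>{escape(i['link'])}</link>"
    f"<description>{escape(i['description'])}</description>"
    "</item>"
    for i in items
)

feed = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<rss version="2.0"><channel>'
    "<title>Example Museum news</title>"
    "<link>http://example-museum.org/news</link>"
    "<description>Latest news from the Example Museum</description>"
    + item_xml +
    "</channel></rss>"
)
print(feed)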
There was an activity where we broke into groups to review different RSS applications, including Runstream (create your own RSS feed from static content) and xFruits (convert RSS into different platforms).
Others included rssfeedssubmit.com, aiderss.com, rssmixer.com (prototype by Ideum), rsscalendar.com and feedshow.com (OPML generator).
OPML – exchange lists of web feeds between aggregators. e.g. museumblogs site.
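[An OPML reading list is just as small – a sketch with placeholder blog names and feed URLs:]

# Build an OPML list of feeds. Blog names and feed URLs are placeholders.
import xml.etree.ElementTree as ET

feeds = [
    ("Example museum blog", "http://blog.example-museum.org/feed"),
    ("Another museum blog", "http://blog.another-museum.org/rss"),
]

opml = ET.Element("opml", version="1.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Museum blogs"
body = ET.SubElement(opml, "body")
for text, xml_url in feeds:
    ET.SubElement(body, "outline", type="rss", text=text, xmlUrl=xml_url)

print(ET.tostring(opml, encoding="unicode"))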
RSSmixer – good for widgets and stats, when live to public. [It looks like it's live now.]
RSS Micro – RSS feed search engine, you can also submit your feed there. Also feedcamp.
Ideas for using RSS:
Use meetup and upcoming for promoting events. Have links back to your events pages and listings.
Link to other museums – it helps everyone's technorati/page ranking.
There was discussion of RSSmixer's conceptual data model. It runs on Amazon EC2. [With screenshot.] More recent articles are in the front-end database, older ones in the back-end database.
RSS is going to move more towards being a rich media platform, so interest in mixing and filtering down feeds will grow, along with personalisation.
Final thoughts – RSS is still emergent. It won't have a definitive breakthrough but it will eventually become mainstream. It will be used along with email marketing as a tool to reach visitors/customers. RSS web services will continue to expand.
Regular RSS users, who have actively subscribed, are an important constituency. Feeds will be more frequently offered on websites, looking beyond blogs and podcasts.
RSS can help you reach new audiences and cement relationships with existing visitors. You can work with partners to create 'mixed' feeds to foster deeper connections with visitors.
Use RSS for multiple points of dissemination – not just RSS. [At this stage I really have no idea what I meant by this but I'm sure whatever Jim said made sense.]
[I had a question about tips for educating existing visitors about RSS. I'd written a blog post about RSS and how to subscribe, which helped, but that's still only reaching a tiny part of potential audience. Could do a widget to demonstrate it.
This was also one of the workshops or talks that made me realise we are so out of the loop with up-to-date programming models and deployment methods. I guess we're so busy all the time that it's difficult to keep up with things, and we don't have the spare resources to test new things out as they come along.]
Thumbs up to Migratr (and free and open goodness)
[Update: Migratr downloads all your files to the desktop, with your metadata in an XML file, so it's a great way to backup your content if you're feeling a bit nervous about the sustainability of the online services you use. If it's saved your bacon, consider making a donation.]
This is just a quick post to recommend a nice piece of software: "Migratr is a desktop application which moves photos between popular photo sharing services. Migratr will also migrate your metadata, including the titles, tags, descriptions and album organization."
I was using it to migrate stuff from random Flickr accounts people had created at work in bursts of enthusiasm to our main Museum of London Flickr account, but it also works for 23HQ, Picasa, SmugMug and several other photo sites.
The only hassles were that it concatenated the tags (e.g. "Museum of London" became "museumoflondon") and didn't get the set descriptions, but overall it's a nifty utility – and it's free (though you can make a donation). [Update: Alex, the developer, has pointed out that the API sends the tags space-delimited, so his app can't tell the difference.]
And as the developer says, the availability of free libraries (and the joys of APIs) cut down development time and made the whole thing much more possible. He quotes Newton's "If I have seen further it is by standing on the shoulders of giants", and I think that's beautifully apt.
We know it's worth doing, but how do we convince others?
Bootstrapping a Niche Social Network poses the question, "How do you bootstrap your social site if you're targeting a group that doesn't yet use software (or doesn't seem interested in using software)? While software designers can often see how useful their tool can be, normal users aren't so prescient. How do you get them to see the value in your software?", and provides some answers:
People don't want to be good at software. They want to be good at fun things like acting, writing, and ultimate frisbee.
…
Once you identify the areas where the software can improve the theatre folks life, you’ll have a much easier time convincing them to give it a shot. So in their mind they won’t be using "social network software", they’ll be using a tool to help them be a better theatre group.
This is an unfortunate side-effect of the social networking craze. We have new words that we're using to communicate among those of us who design the software, but for the vast majority of folks who will actually use the software, the terms don't mean very much. So while you may understand what I mean by "niche social network", the people actually in the niche social network think of themselves as performers, actors, or what-have-you.
See also: Social Media for Social Change Behind the Nonprofit Firewall (and the discussion in the comments).
The issues are a bit different for social networks – if you get it right then your users are your content creators, while you'll probably need others outside of IT to contribute if you want blogs or videos or photos about your organisation.
Finding real world metaphors also seems to help – Andy Powell described the Ning site for the Eduserv Foundation Symposium 2008 as "a virtual delegate list – a place where people could find out who is coming on the day (physically or virtually) and what their interests are". This description has made a lot of sense to people I've discussed it with – everyone knows what a conference delegate list looks like, and everyone has probably also wondered how on earth they'll find the people who sound interesting. A social network meets a need in that context.
Notes from 'The API as Curator' and on why museums should hire programmers
These are my notes from the third paper 'The API as Curator' by Aaron Straup Cope in the Theoretical Frameworks session chaired by Darren Peacock at Museums and the Web 2008. The slides for The API as Curator are online.
I've also included below some further notes on why, how, whether museums should hire programmers, as this was a big meme at the conference and Aaron's paper made a compelling case for geeks in art, arty geeks and geeky artists.
You might have noticed it's taken me a while to catch up on some of my notes from this conference, and the longer I leave it the harder it gets. As always, any mistakes are mine, any comments or corrections are welcome, and the comments in [square brackets] below are mine.
The other session papers were Object-centred democracies: contradictions, challenges and opportunities by Fiona Cameron and Who has the responsibility for saying what we see? mashing up Museum and Visitor voices, on-site and online by Peter Samis; all the conference papers and notes I've blogged have been tagged with 'MW2008'.
Aaron Cope: The API as curator.
The paper started with some quotes as 'mood music' for the paper.
Institutions are opening up, giving back to the community and watching what people build.
It's about (computer stuff as) plumbing, about making plumbing not scary. If you're talking about the web, sooner or later you're going to need to talk about computer programming.
Programmers need to be more than just an accessory – they should be in-house and full-time and a priority. It boils down to money. You don't all need to be computer scientists, but it should be part of it so that you can build things.
Experts and consumers – there's a long tradition of collaboration in the art community, for example printmaking. Printers know about all the minutiae (the technical details) but/so the artists don't have to.
Teach computer stuff/programming so that people in the arts world are not simply consumers.
Threadless (the t-shirt site) as an example. Anyone can submit a design, they're voted on in forum, then the top designs are printed. It makes lots of money. It's printmaking by any other name. Is it art?
"Synthetic performances" Joseph Beuys in Second Life…
It's nice not to be beholden to nerds… [I guess a lot of people think that about their IT department. Poor us. We all come in peace!]
Pure programming and the "acid bath of the internet".
Interestingness on Flickr – a programmer works on it, but it's not a product – (it's an expression of their ideas). Programming is not a disposable thing, it's not as simple as a toaster. But is it art? [Yes! well, it can be sometimes, if a language spoken well and a concept executed elegantly can be art.]
API and Artspeak – Aaron's example (a bit on slide 15 and some general mappy goodness).
Build on top of APIs. Open up new ways to explore collection. Let users map their path around your museum to see the objects they want to see.
Their experience at Flickr is that people will build those things (if you make it possible). [Yay! So let's make it possible.]
There's always space for collaboration.
APIs as the nubby bits on Lego. [Lego is the metaphor of the conference!]
Flickr Places – gazetteer browsing.
[Good image on slide 22]: interpretation vs intent, awesome (x) vs time (y). You need programmers on staff, you need to pay them [please], and you don't want them to be transient if you want to increase the smoothness of the graph between steps of awesomeness. Go for the smallest possible release cycles. Small steps towards awesome.
Questions for the Theoretical Frameworks session
Qu from the Science Museum of Minnesota: how do you hire programmers in museums – how do you attract them when salaries are crap?
Aaron – teach it in schools and go to computer science departments. People do stuff for more than just money.
Qu on archiving UGC and other stuff generated in these web 2.0 projects… Peter Samis – WordPress archives things. [So just use the tools that already exist]
Aaron – build it and they will come. Also, redefine programming.
There's a good summary of this session by Nate at MW2008 – Theoretical Frameworks.
And here's a tragically excited dump from my mind written at the time: "Yes to all that! Now how do we fund it, and convince funders that big top-down projects are less likely to work than incremental and iterative builds? Further, what if programmers and curators and educators had time to explore, collaborate, push each other in a creative space? If you look at the total spend on agencies and external contractors, it must be possible to make a case for funding in-house programmers – but silos of project-based funding make it difficult to consolidate those costs, at least in the UK."
Continuing the discussion about the benefits of an in-house developer team, post-Museums and the Web, Bryan Kennedy wrote a guest post on Museum 2.0 about Museums and the Web in Montreal that touched on the issue:
More museums should be building these programming skills in internal teams that grow expertise from project to project. Far too many museums small and large rely on outside companies for almost all of their technical development on the web. By and large the most innovation at Museums and the Web came from teams of people who have built expertise into the core operations of their institution.
I fundamentally believe that at least in the museum world there isn't much danger of the technology folks unseating the curators of the world from their positions of power. I'm more interested in building skilled teams within museums so that the intelligent content people aren't beholden to external media companies but rather their internal programmers who feel like they are part of the team and understand the overall mission of the museum as well as how to pull UTF-8 data out of a MySQL database.
I left the following comment at the time, and I'm being lazy* and pasting here to save re-writing my thoughts:
Good round-up! The point about having permanent in-house developers is really important and I was glad to see it discussed so much at MW2008.
It's particularly on my mind at the moment because yesterday I gave a presentation (on publishing from collections databases and the possibilities of repositories or feeds of data) to a group mostly comprised of collections managers, and I was asked afterwards if this public accessibility meant "the death of the curator". I've gathered the impression that some curators think IT projects impose their grand visions of the new world, plunder their data, and leave the curators feeling slightly shell-shocked and unloved.
One way to engage with curatorial teams (and educators and marketers and whoever) and work around these fears and valuable critiques is to have permanent programmers on staff who demonstrably value and respect museum expertise and collections just as much as curators, and who are willing to respond to the concerns raised during digital projects.
There's a really good discussion in the comments on Bryan's post. I'm sure this is only a sample of the discussion, but it's a bit difficult to track down across the blogosphere/twitterverse/whatever and I want to get this posted some time this century.
* But good programmers are lazy, right?
Notes from 'Who has the responsibility for saying what we see?' in the 'Theoretical Frameworks' session, MW2008
These are my notes from the second paper, 'Who has the responsibility for saying what we see? mashing up Museum and Visitor voices, on-site and online' by Peter Samis in the Theoretical Frameworks session chaired by Darren Peacock at Museums and the Web 2008.
The other session papers were Object-centred democracies: contradictions, challenges and opportunities by Fiona Cameron and The API as Curator by Aaron Straup Cope; all the conference papers and notes I've blogged have been tagged with 'MW2008'.
It's taken me a while to catch up on some of my notes – real life has a way of demanding attention sometimes. Any mistakes are mine, any comments or corrections are welcome, and the comments in [square brackets] below are mine.
Peter Samis spoke about the work of SFMOMA with Olafur Eliasson. His slides are here.
How our perception changes how we see the world…
"Objecthood doesn’t have a place in the world if there’s not an individual person making use of that object… I of course don’t think my work is about my work. I think my work is about you." (Olafur Eliasson, 2007)
Samis gave an overview of the exhibitions "Take your time: Olafur Eliasson" and "Your tempo" presented at SFMOMA.
The "your" in the titles demands a proactive and subjective approach; stepping into installations rather than looking at paintings. The viewer is integral to the fulfilment of a works potential.
Do these rules apply to all [museum] objects? These are the questions…
They aimed to encourage visitors in contemplation of their own experience.
Visitors who came to the blog viewed 75% of pages. Comments were left by 2% of blog visitors.
There was a greater interest in seeing how others responded than in contributing to the conversation. Comments were a 'mixed bag'.
The comments helped with understanding visitor motivations in narratives… there's a visual 'Velcro effect' – some artworks stay with people – the more visceral the experience of various artworks, the greater the corresponding number of comments.
[Though I wondered if it's an unproblematic and direct relationship? People might have a relationship with the art work that doesn't drive them to comment; that requires more reflection to formulate a response; or that might occur at an emotional rather than intellectual level.]
Visitors also take the opportunity to critique the exhibition/objects and curatorial choices when asked to comment.
What are the criteria of values for comments? By whose standards? And who within the institution reads the blog?
How do you know if you've succeeded? Depends on goals.
"We opened the door to let visitors in… then we left the room. They were the only ones left in the room." – the museum opens up to the public then steps out of the dialogue. [Slide 20]
[I have quoted this in conversation so many times since the conference. I think it's an astute and powerful summary of the unintended effect of participatory websites that aren't integrated into the museum's working practices. We say we want to know what our visitors think, and then we walk away while they're still talking. This image is great because it's so visceral – everyone realises how rude that is.]
Typology/examples of museum blogs over time… based on whether they're open to comments, and whether they act like docents/visitor assistants and have conversations with the public in front of the artworks.
If we really engage with our visitors, will we release the "pent up comments"?
A NY Times migraine blog post had 294 reflective, articulate, considered, impassioned comments on the first day.
[What are your audiences' pent up questions? How do you find the right questions? Is it as simple as just asking our audiences, and even if it isn't, isn't that the easiest place to start? If we can crack the art of asking the right questions to elicit responses, we're in a better position.]
Nina Simon's hierarchy of social participation. Museums need to participate to get to the higher levels of co-creative, collaborative process. "Community producer" – enlist others, get cross-fertilisation.
Even staff should want to return to your blogs and learn from them.
[Who are the comments that people leave addressed to? Do we tell them, or do we just expect them to comment into empty space? Is that part of the reason for low participation rates? What's the relationship between participation and engagement? And just because people aren't participating in the forum you provide doesn't mean they're not participating somewhere else – or engaging with it in other forums, conversations in the pub, etc. Not everything is captured online, even if the seed is online and in your institution.]
Amazon's new look (with a bit of transparency)
Just today I asked if anyone used drop-down menus anymore, and here Amazon have gone and launched a new design that uses them.
I don't know how many people would notice, but I like that they've provided a link (in the top right-hand corner with the text, 'We've had a redesign. Take a look') to 'A Quick Tour of Our Redesign'. The page highlights some of the changes/new features and provides answers to questions including 'Why did you change the site?', 'How did you decide on this design?' and 'What's different?'.
I'm guessing they've done their research and found that kind of transparency helps people deal with the changes – I was hoping to blog about our web redesign process, and I think this shows it's worth doing. I wonder how many people notice the 'redesign' link and are interested enough to click on it.