Why do museums prefer Flickr Commons to Wikimedia Commons?

A conversation has sprung up on Twitter about why museums prefer Flickr Commons to Wikimedia Commons, after Liam Wyatt, Vice President of Wikimedia Australia, posted "Flickr Commons is FULL for 2010. GLAMs, Fancy sharing with #Wikimedia commons instead?" and I responded with "has anyone done audience research into why museums prefer Flickr to Wikimedia commons?". I've asked before because I think it's one of those issues where the points of resistance can be immensely informative.

I was struck by the speed and thoughtfulness of responses from kajsahartig, pekingspring, NickPoole1, richardmccoy and janetedavis, which suggested that the question hit a nerve.

Some of the responses included:

Kajsa: Photos from collections have ended up at Wikipedia without permission, that never happened with Flickr, could be one reason [and] Or museums are more benevolent when it happens at Flickr, it's seen more as individuals' actions rather than an organisation's?

Nick: Flickr lets you choose CC non-commercial licenses, whereas Wikimedia Commons needs to permit potential commercial use?

Janet: Apart fr better & clear CC licence info, like Flickr Galleries that can be made by all! [and] What I implied but didn't say before: Flickr provides online space for dialogue about and with images.

Richard: Flickr is so much easier to view and search than WM. Commons, and of course easier to upload.

Twitter can be a bit of an echo chamber at times, so I wanted to ask you all the question in a more accessible place.   So, is it true that museums prefer Flickr Commons to Wikimedia Commons, and if so, why?

[Update: Liam's new blog post addresses some of the concerns raised – this responsiveness to the issues is cheering.  (You can get more background at Wikipedia:Advice for the cultural sector and Wikipedia:Conflict of interest.)

Also, for those interested in wikimedia/wikipedia* and museums, there's going to be a workshop 'for exploring and developing policies that will enable museums to better contribute to and use Wikipedia or Wikimedia Commons, and for the Wikimedia community to benefit from the expertise in museums', Wikimedia@MW2010, at Museums and the Web 2010. There's already a thread, 'Wikimedia Foundation projects and the museum community', with some comments. I'd love to see the 'Incompatible recommendations' section of the GLAM-Wiki page discussed and expanded.

* I'm always tempted to write 'wiki*edia' where * could be 'm' or 'p', but then it sounds like South Park's plane-rium in my head.]

[I should really stop updating, but I found Seb Chan's post on the Powerhouse Museum blog, Why Flickr Commons? (and why Wikimedia Commons is very different) useful, and carlstr summed up a lot of the issues neatly: "One of the reasons is that Flickr is a package (view, comment search aso). WC is a archive of photos for others to use. … I think Wikipedia/Wikimedia have potential for the museum sector, but is much more complex which can be deterrent.".]

Performance testing and Agile – top ten tips from ThoughtWorks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post summarises a talk on 'Top ten secret weapons for performance testing in an agile environment', organised by the BCS's SPA (Software Practice Advancement) group, with Patrick Kua from ThoughtWorks.

His slides from an earlier presentation are online so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of the criticism from a requirements engineering perspective that Agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but how it does it and the qualities it has – usability, security, performance, etc.), and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments – much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing – probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or given a story card of their own and managed alongside the stories for the functional requirements. He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale], as [person or role], I want [natural language description of the requirement]. An invented example of my own: 'So that search feels responsive, as a site visitor, I want results to appear within two seconds.']

2. One team.
Team dynamics are important – performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' – working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience – testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories – pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view – identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down – try the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe test results – what do you see? Be data driven.
Formulate a hypothesis – why is it doing that?
Design an experiment – how can I prove that's what's happening? Keep it lightweight; you should be able to run several a day.
Run the experiment – take the time to gather and examine the evidence.
Is the hypothesis valid? If so –
Change the application code.

Like all good experiments, you should change only one thing at a time.

Don't panic, stay disciplined.
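To make that concrete, here's a minimal sketch (mine, not his) of the kind of lightweight experiment he described, assuming a hypothetical local search endpoint – time a batch of identical requests, then check the numbers against your hypothesis before changing any code:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;

public class SearchLatencyExperiment {
    public static void main(String[] args) throws Exception {
        // Hypothesis: only the first (cold) request is slow.
        long[] ms = new long[20];
        for (int i = 0; i < ms.length; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080/search?q=test").openConnection();
            conn.getResponseCode();  // force the request to complete
            conn.disconnect();
            ms[i] = (System.nanoTime() - start) / 1_000_000;  // convert to ms
        }
        long first = ms[0];  // remember the cold request before sorting
        Arrays.sort(ms);
        System.out.printf("first=%dms median=%dms worst=%dms%n",
                first, ms[ms.length / 2], ms[ms.length - 1]);
    }
}
```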

5. Play performance early
Scheduling around iterative builds makes this more possible. A few tests during the build are better than a block at the end. Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure – iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario. If slicing by scenario, test by going through a whole scenario for one persona.
Use visualisations to help digest and communicate test results, and build them in iterations too – e.g. using colour to show the number of HTTP requests before you get error codes.
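By way of illustration, here's my own sketch of a scenario slice – the URLs are invented, but the idea is to walk one persona's whole journey end to end and time each step, rather than hammering a single page:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class VisitorScenarioTest {
    // One persona's journey, end to end; the paths are placeholders.
    private static final String[] JOURNEY = {
            "/", "/search?q=clocks", "/objects/1234", "/objects/1234/comments"
    };

    public static void main(String[] args) throws Exception {
        for (String path : JOURNEY) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:8080" + path).openConnection();
            int status = conn.getResponseCode();
            conn.disconnect();
            System.out.printf("%-30s %d %dms%n", path, status,
                    (System.nanoTime() - start) / 1_000_000);
        }
    }
}
```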

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation – continuous integration good.
Automated tests
Automated packaging
Automated deployment [yes please – it should be easy to get different builds onto an environment]
Automated test orchestration – playing out scenarios, putting load generators through different profiles.
Automated analysis
Automated scheduling – part of the pipeline; overnight runs.
Automated result archiving – so you can check the raw output if you discover issues later.

Why automate? It's reproducible and consistent, gives faster feedback, and improves productivity.
You can add automated load generation with e.g. JMeter, which can also run in distributed agent mode.
Ideally, run sanity performance tests for show-stoppers at the end of the functional tests, then a full overnight test.
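For what it's worth, a sketch of how a nightly run might be orchestrated from a build task – the test plan name and file paths are my placeholders, though -n (non-GUI), -t (test plan) and -l (results file) are JMeter's standard command-line options:

```java
import java.io.IOException;

public class NightlyPerformanceRun {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Run JMeter headless against the 'typical day' plan and keep the
        // raw results file for later archiving.
        Process jmeter = new ProcessBuilder(
                "jmeter", "-n",
                "-t", "typical-day.jmx",      // hypothetical test plan
                "-l", "results/nightly.jtl")  // raw results, archived per run
                .inheritIO()
                .start();
        if (jmeter.waitFor() != 0) {
            throw new IllegalStateException("performance run failed");
        }
    }
}
```

The same command with -R host1,host2 would point the run at remote agents, which is JMeter's distributed mode.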

8. Continuous performance testing
Build pipeline.
Application level – compilation and test units; functional test; build RPM (or whatever distribution thingy).
Into performance level – 5 minute sanity test; typical day test.

Spot incremental performance degradation – set tests to fail if the percentage increase is too high.
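A minimal sketch of such a gate (the file locations and the 10% threshold are my assumptions, not the talk's): compare the current run's mean response time against a stored baseline and break the build if it has crept up too far.

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class RegressionGate {
    // Fail the pipeline if mean response time degrades by more than 10%.
    private static final double MAX_INCREASE = 0.10;  // assumed threshold

    public static void main(String[] args) throws Exception {
        // Figures written out by earlier pipeline stages (paths are placeholders).
        double baseline = Double.parseDouble(
                Files.readString(Paths.get("perf/baseline-mean-ms.txt")).trim());
        double current = Double.parseDouble(
                Files.readString(Paths.get("perf/current-mean-ms.txt")).trim());
        double increase = (current - baseline) / baseline;
        if (increase > MAX_INCREASE) {
            System.err.printf("Mean response time up %.1f%% (%.0fms -> %.0fms)%n",
                    increase * 100, baseline, current);
            System.exit(1);  // break the build
        }
    }
}
```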

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.
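For example – my sketch, not from the talk – a percentile helper in the analysis code gets an ordinary JUnit test, just like production code:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PercentileTest {

    // The analysis helper under test – illustrative, not from the talk.
    static long percentile(long[] sortedMs, double p) {
        int index = (int) Math.ceil(p * sortedMs.length) - 1;
        return sortedMs[Math.max(index, 0)];
    }

    @Test
    public void ninetiethPercentileOfTenSamples() {
        long[] samples = {10, 12, 14, 16, 18, 20, 25, 30, 40, 90};
        assertEquals(40, percentile(samples, 0.9));
    }
}
```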

V model of testing – performance testing sits at the top right-hand edge of the V.

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase – here's what we learned and what we changed as a result – show the benefits of on-going performance testing.

General comments from the Q&A: you can do load generation and analyse session logs of user journeys. Testing is risk mitigation – you can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice – I'd definitely go to another SPA event.

Cosmic Collections – the results are in. And can you help us ask the right questions?

For various reasons, the announcement of the winners of our mashup competition has been a bit low key – but we're working on a site that combines the best bits of the winners, and we'll make a bit more of a song and dance about it when that's ready.

I'd like to take the opportunity to personally thank the winners – Simon Willison and Natalie Down in first place, and Ryan Ludwig as runner-up – and equally importantly, those who took part but didn't win; those who had a play and gave us some feedback; those who helped spread the word, and those who cheered along the way.

I have a cheeky final request for your time.  I would normally do a few interviews to get an idea of useful questions for a survey, but it's not been possible lately. I particularly want to get a sense of the right questions to ask in an evaluation because it's been such a tricky project to explain and 'market', and I'm far too close to it to have any perspective.  So if you'd like to help us understand what questions to ask in evaluation, please take our short survey http://www.surveymonkey.com/s/5ZNSCQ6 – or leave a comment here or on the Cosmic Collections wiki.  I'm writing a paper on it at the moment, so hopefully other museums (and also the Science Museum itself) will get to learn from our experiences.

And again – my thanks to those who've already taken the survey – it's been immensely useful, and I really appreciate your honesty and time.

Unintentional (?) Friday funny

It's a long time since I had one of these. I can go on blaming uni assessments and work, but it gets boring.

I assume it's not intentional, but this Guardian article, 'A world of screens and plastic has fed a cultish craving for relics of the past', is hilarious and beautifully quotable. As Linda Spurdle tweeted: 'I missed this training day! "Museum staff are trained to behave as acolytes to their objects.." prob stuck on H & S day'.

On the BBC/BM 'A History of the World': "Since this is radio, we are not allowed to see the objects, thus enhancing the status of their custodian as interceding priest. … Authenticity is essential and there must be no copies or representations – in ­MacGregor's case not so much as a ­picture." Well, you could look online.

And if it is true that "[i]t does not matter if no one ever sees the shard. Most museum objects are seen only by their guardians, albeit financed by tithes from taxpayers", we'd probably better hide the 230,000 Science Museum, National Railway Museum and National Media Museum objects online. On the other hand, I do like a good 'museum as church' argument, cos if it were true the office wouldn't have bundles of excited kids on the other side of the door and it might be quieter.

On a more serious note, whenever I come across articles like this it reminds me how far we have to go in helping people realise exactly how accessible, enjoyable, potentially challenging and just plain interesting our (your, their) museums are.

What do you mean by 'wireframe'?

This post on 'The future of wireframes?' chimed a few bells, not only because I'm revising for a Requirements Engineering exam but because I've been in the start-up phase for projects of all sizes lately and have been thinking hard about the best way to understand and communicate requirements. In doing so, I've realised that 'wireframes' has become one of those terms that mean different things to different people – and that of course, it's an entirely new term to people who haven't worked on a design phase of a digital project before. This summed up past and current definitions neatly:

For many years the primary role of wireframes was to specify software. We now use wireframes to investigate and explore how people will interact with a site. Using a ‘just enough’ approach, we often create a series of simple interactive prototypes to try out a variety of approaches to solving a problem. These prototypes can be made in HTML or they can be as simple as a series of Keynote slides for someone to click through.

This is a very different approach to wireframing. Rather than simply documenting where a link goes, the goal is to model and start experiencing what moving around a site feels like as quickly as possible. The prototype can then be tested and the results used to iteratively improve the end solution.

Of course, sites still need to be specified, but wireframes aren’t always the right tool for doing this.

Here's a list of wireframe and prototype tools – do you have any favourites?

A rare post from me – I've been completely caught up in work and my MSc for the past few months. Normal service will be resumed soon – I've still got to report on UKMW09 and a trip to Oslo to give a lecture on social media and museums, libraries and archives.