'War, Plague and Fire' and 'Bootstrapping Innovation in Museums' at 'Museum Ideas 2012 – Museums in the Era of Participatory Culture'

I've finally had a moment to catch up and post the first part of my notes from Museum/iD's conference, Museum Ideas 2012 – Museums in the Era of Participatory Culture. Overall it was a great conference that left me with a lot of things to think about for how museums can adapt and thrive in the current international context, and reminded me why museums should survive: they matter. I've posted my thoughts from the later sessions at Why museums matter: 'Museum Ideas 2012 – Museums in the Era of Participatory Culture' with a short summary of the whole event at the start.

Sharon Ament's keynote at Museum of London Docklands

The day was chaired by Ben Gammon who began by pointing out that innovation is no longer a luxury, it's now critical for survival.

The keynote speaker was the new Director of the Museum of London, Sharon Ament, who spoke on War, Plague and Fire: museums and libraries in the era of participatory culture. Previously Ament was director of public engagement at the Natural History Museum, and she drew on that background in her talk while also relating it to the collections of the Museum of London and the docklands location of the conference. She called for museums to look at what participatory culture means to the people they serve, especially when the individual has the capacity to be heard more loudly than ever before. The international context in which we're living – with civil unrest, economic crises and global warming – is a time of change and fear, which means that adaptation to the external environment is an important concept for museums today. Her talk, and some of the discussion afterwards, focused on the role of museums and libraries as venues for independent discovery, accessible to many because entry was free. She suggested that creative responses – small things that can happen spontaneously, like the 'pop-up' concept – might be useful for reaching people.

One final quote from this session, taken from the Salzburg Global Seminar and the Institute of Museum and Library Services report on 'Libraries and Museums in an Era of Participatory Culture': 'technology is a tool, not an objective, and that the creation of increased public value is the end goal. Identifying stakeholders’ needs means addressing human relationships, a sense of organization, and an intellectual construct to shape information and access'.

The next session was a 'fireside chat' with Rob Stein (Dallas Museum of Art) and Seb Chan (Cooper-Hewitt Museum) reflecting on 'Bootstrapping Innovation in Museums' and their experiences in changing museums. They discussed collaboration (Stein noted that everything he's built that's had a modicum of success has been a collaboration with lots of people), the pace of change in different museums (including the need to build a risk-tolerant culture), and the risk of assuming that technology is an inherent part of innovation (Stein observed that the change that needs to happen at DAM is cultural, about shifting ambition). How do you create a culture of innovation? Chan mentioned Elaine Heumann Gurian's Wanting to be the Third on your Block and said that the first thing he did when he started at the Cooper-Hewitt was create a space that gave people permission to change. He set up 'labs' as a space for people to talk about stuff, which also gave his immediate team a public voice for the first time. He pushed fast to get quick results on some straightforward things to start to set an expectation of speed and accelerate culture: 'right now, doing things fast matters more than doing things well'. He talked about cultivating rogues and tricksters in the museum to accelerate change and get a paradigm shift and suggested tackling root problems rather than symptoms for issues like copyright. They also discussed how to play up the fun of museum jobs to make them more attractive in a competitive tech jobs market, and the importance of putting some money into innovation where possible. Stein suggested that it's possible to support innovation without a budget, e.g. museums can hold 'research forums' where people share what they're working on.

Chan also said museums have turned themselves into 'exhibition farms', letting exhibitions suck huge amounts of resource; together with the obsession with 'finish', this slows innovation that could come from re-thinking how exhibitions and public programmes work together. Stein observed 'museums seem to like gargantuan problems, things that take five years to get out the door [like] exhibitions, publications, buildings.'

They discussed the mismatch between museum exhibition launch models and software models: 'people want to feel that something's finished when it launches, they want the party and the holiday'. But in software development, no-one takes a holiday straight after launch because they're watching what people do with the new software. [I was really interested in this section as it's something I've thought about a lot (e.g. does a museum's obsession with polish hinder innovation?) – I suspect museum technologists have two clashing mental models about how to work: one is the web agency model, based around cycles of 'launch, observe, iterate, update'; the other is the 'long slog to an unmovable launch date then onto the next project' of museums. When the rest of the world has moved on to the agile, iterative model, it's frustrating being tied to the museum model, particularly when it seems to have more flaws than benefits for modern audiences.] In closing they talked about the effectiveness of various models of innovation, whether attempts at top-down innovation, departments of innovation or more integrated models of innovation.

This post is already quite long, so I might hit publish now and come back to the other talks later.

Disclosure: my ticket was provided by Museum/iD.

Museums and iterative agility: do your ideas get oxygen?

Re-visiting the results of the survey I ran about issues facing museum technologists has inspired me to gather together some great pieces I've read on museum projects moving away from detailed up-front briefs and specifications toward iterative and/or agile development.

In 'WaterWorx – our first in-gallery iPad interactive at the Powerhouse Museum', Seb Chan writes:

"the process by which this game was developed was in itself very different for us. … Rather than an explicit and ‘completed’ brief be given to Digital Eskimo, the game developed using an iterative and agile methodology, begun by a process that they call ‘considered design‘. This brought together stakeholders and potential users all the way through the development process with ‘real working prototypes’ being delivered along the way – something which is pretty common for how websites and web applications are made, but is still unfortunately not common practice for exhibition development."

I'd also recommend the presentation 'Play at Work: Applying Agile Methods to Museum Website Development' given at the 2010 Museum Computer Network Conference by Dana Mitroff Silvers and Alon Salant for examples of how user stories were used to identify requirements and prioritise development, and for an insight into how games can be used to get everyone working in an agile way.  If their presentation inspires you, you can find games you can play with people to help everyone understand various agile, scrum and other project management techniques and approaches at tastycupcakes.com.

I'm really excited by these examples, as I'm probably not alone in worrying about the mis-match between industry-standard technology project management methods and museum processes. In a 'lunchtime manifesto' written in early 2009, I hoped the sector would be able to 'figure out agile project structures that funders and bid writers can also understand and buy into' – maybe we're finally at that point.

And from outside the museum sector, a view on why up-front briefs don't work for projects where user experience design is important. Peter Merholz of Adaptive Path writes:

"1. The nature of the user experience problems are typically too complex and nuanced to be articulated explicitly in a brief. Because of that, good user experience work requires ongoing collaboration with the client. Ideally, client and agency basically work as one big team.

2. Unlike the marketing communications that ad agencies develop, user experience solutions will need to live on, and evolve, within the clients’ business. If you haven’t deeply involved the client throughout your process, there is a high likelihood that the client will be unable to maintain whatever you produce."

Finally, a challenge to the perfectionism of museums.  Matt Mullenweg (of WordPress fame), writes in '1.0 Is the Loneliest Number': 'if you’re not embarrassed when you ship your first version you waited too long'.  Ok, so that might be a bit difficult for museums to cope with, but what if it was ok to release your beta websites to the public?  Mullenweg makes a strong case for iterating in public:

"Usage is like oxygen for ideas. You can never fully anticipate how an audience is going to react to something you’ve created until it’s out there. That means every moment you’re working on something without it being in the public it’s actually dying, deprived of the oxygen of the real world.

By shipping early and often you have the unique competitive advantage of hearing from real people what they think of your work, which in best case helps you anticipate market direction, and in worst case gives you a few people rooting for you that you can email when your team pivots to a new idea. Nothing can recreate the crucible of real usage.

You think your business is different, that you’re only going to have one shot at press and everything needs to be perfect for when Techcrunch brings the world to your door. But if you only have one shot at getting an audience, you’re doing it wrong."

* The Merholz article above is great because you can play a fun game with the paragraph below – in your museum, what job titles would you put in place of 'art director' and 'copywriter'?  Answers in a comment, if you dare!  I think it feels particularly relevant because of the number of survey responses that suggested museums still aren't very good at applying the expertise of their museum technologists.

"One thing I haven’t yet touched on is the legacy ad agency practice where the art director and copywriter are the voices that matter, and the rest of the team exists to serve their bidding. This might be fine in communications work, but in user experience, where utility is king, this means that the people who best understand user engagement are often the least empowered to do anything about it, while those who have little true understanding of the medium are put in charge. In user experience, design teams need to recognize that great ideas can come from anywhere, and are not just the purview of a creative director."


If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and A call for agile museum projects (a lunchtime manifesto) (10 March 2009).

Performance testing and Agile – top ten tips from Thoughtworks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post is summarising a talk on Top ten secret weapons for performance testing in an agile environment, organised by the BCS's SPA (software practice advancement) group with Patrick Kua from ThoughtWorks.

His slides from an earlier presentation are online so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of the criticism from a requirements engineering perspective that agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but how it does it and the qualities it has – usability, security, performance, etc.) and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments – much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing – probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or given a story card of their own and managed alongside the stories for the functional requirements. He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale] as [person or role] I want [natural language description of the requirement].]
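[To make this concrete, here's a rough sketch of my own – not something shown in the talk – of turning an explicit performance requirement into an executable check with Python's standard library. The endpoint URL and the two-second budget are placeholder assumptions.]

```python
import time
import unittest
import urllib.request

SEARCH_URL = "http://localhost:8080/search?q=ship"  # placeholder endpoint
BUDGET_SECONDS = 2.0  # the explicit, agreed non-functional requirement


class SearchPerformance(unittest.TestCase):
    def test_search_responds_within_budget(self):
        # Time a single request and assert it stays within the agreed budget.
        start = time.monotonic()
        with urllib.request.urlopen(SEARCH_URL, timeout=10) as resp:
            self.assertEqual(resp.status, 200)
        elapsed = time.monotonic() - start
        self.assertLess(
            elapsed, BUDGET_SECONDS,
            f"search took {elapsed:.2f}s, budget is {BUDGET_SECONDS}s",
        )


if __name__ == "__main__":
    unittest.main()
```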

2. One team.
Team dynamics are important – performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' – working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience – testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories – pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view – identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down – try the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe test results – what do you see? Be data driven.
Formulate hypothesis – why is it doing that?
Design an experiment – how can I prove that's what's happening? Lightweight, should be able to run several a day.
Run experiment – take time to gather and examine evidence
Is the hypothesis valid? If so, change the application code.

Like all good experiments, you should change only one thing at a time.

Don't panic, stay disciplined.

5. Play performance early
Scheduling around iterative builds makes it more possible. A few tests during build is better than a block at the end.  Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure – iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario; if slicing by scenario, test by going through a whole scenario for one persona.
Use visualisations to help digest and communicate test results, and build them in iterations too – e.g. colour to show how many HTTP requests succeed before you start getting error codes.
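[Again my own illustration rather than anything from the talk: a throwaway, standard-library-only visualisation that colours each load step by how many requests succeeded before error codes appeared. The sample data at the bottom is invented.]

```python
from collections import Counter

GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"


def summarise(results):
    """results: iterable of (load_step, http_status) tuples from a load run."""
    by_step = {}
    for step, status in results:
        # Bucket responses by status class (2xx, 4xx, 5xx) per load step.
        by_step.setdefault(step, Counter())[status // 100] += 1
    for step in sorted(by_step):
        ok = by_step[step][2]  # 2xx responses
        errors = sum(count for cls, count in by_step[step].items() if cls >= 4)
        bar = GREEN + "#" * ok + RED + "#" * errors + RESET
        print(f"{step:>4} concurrent users: {bar} ({errors} errors)")


if __name__ == "__main__":
    # Invented sample data: the site starts returning 503s at 50 users.
    summarise([(10, 200)] * 10 + [(50, 200)] * 40 + [(50, 503)] * 10)
```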

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation – continuous integration good.
Automated tests
Automated packaging
Automated deployment [yes please – it should be easy to get different builds onto an environment]
Automated test orchestration – playing with scenarios, put load generators through profiles.
Automated analysis
Automated scheduling – part of pipeline. Overnight runs.
Automated result archiving – can check raw output if discover issues later

Why automate? Reproducible and constant; faster feedback; higher productivity.
Can add automated load generation e.g. JMeter, which can also run in distributed agent mode.
Ideally run sanity performance tests for show stoppers at the end of functional tests, then a full overnight test.
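[For what it's worth, here's a minimal sketch of the automated load generation idea using only the Python standard library – not JMeter, which is the tool that was actually mentioned – with a placeholder URL and made-up concurrency numbers.]

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"  # placeholder URL


def hit(_):
    """Make one request and return (status, elapsed seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            status = resp.status
    except Exception:
        status = 0  # treat timeouts and connection errors as failures
    return status, time.monotonic() - start


def run(concurrency=20, total_requests=200):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(hit, range(total_requests)))
    timings = sorted(elapsed for _, elapsed in results)
    errors = sum(1 for status, _ in results if status != 200)
    p95 = timings[int(len(timings) * 0.95) - 1]
    print(f"{total_requests} requests, {errors} errors, 95th percentile {p95:.3f}s")


if __name__ == "__main__":
    run()
```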

8. Continuous performance testing
Build pipeline.
Application level – compilation and test units; functional test; build RPM (or whatever distribution thingy).
Into performance level – 5 minute sanity test; typical day test.

Spot incremental performance degradation – set tests to fail if the percentage increase is too high.
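[A rough sketch of that degradation check, assuming the pipeline writes a baseline and a current results file as JSON; the file names and the 10% threshold are my own placeholders, not from the talk.]

```python
import json
import sys

THRESHOLD_PCT = 10.0  # allowed percentage increase before the build fails


def check(baseline_path="baseline.json", current_path="current.json"):
    # Both files are assumed to contain {"p95_seconds": <number>}.
    with open(baseline_path) as f:
        baseline = json.load(f)["p95_seconds"]
    with open(current_path) as f:
        current = json.load(f)["p95_seconds"]
    increase_pct = (current - baseline) / baseline * 100
    print(f"p95 baseline {baseline:.3f}s, current {current:.3f}s ({increase_pct:+.1f}%)")
    return increase_pct <= THRESHOLD_PCT


if __name__ == "__main__":
    # Non-zero exit fails the pipeline stage.
    sys.exit(0 if check() else 1)
```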

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.

V model of testing – performance testing sits at the top right-hand edge of the V.
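[As an illustration of point 9, a small unit test for an analysis helper – the percentile function is my own invention, not code from the talk – written so the analysis step fails fast rather than after an overnight run depends on it.]

```python
import unittest


def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of timings."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]


class PercentileTests(unittest.TestCase):
    def test_median_of_odd_length_list(self):
        self.assertEqual(percentile([3, 1, 2], 50), 2)

    def test_p95_picks_a_near_worst_value(self):
        samples = list(range(1, 101))  # timings 1..100
        self.assertEqual(percentile(samples, 95), 95)


if __name__ == "__main__":
    unittest.main()
```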

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase – here's what we learned and what we changed as a result – show the benefits of on-going performance testing.

General comments from Q&A: can do load generation and analyse session logs of user journeys. Testing is risk mitigation – can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice – I'd definitely go to another SPA event.

Agile development presentation, dev8D

These are my very rough notes on Grahame Klyne's talk on Agile Development at JISC's dev8D event in February. Grahame works with bioinformatics and the semantic web at the zoology department of Oxford University. He is interested in how people can make useful things out of semantic web technologies.

Any mistakes are mine, obviously, and any comments or corrections are welcome. (I should warn that they're still rough, even though it's taken me a month to get them online.) My notes in [square brackets] below.

He started by asking people in the room to introduce themselves, which was a nice idea as most people hadn't met.

Agile and their project
Agile seemed to be particularly appropriate for development in a research context, where the final outcomes are necessarily uncertain. [I'm particularly interested in how they managed to build Agile development into the JISC project proposal, as reflected in 'A call for agile museum projects (a lunchtime manifesto)'. Even getting agile working in a university environment seems like an impressive achievement to me, though universities tend to be more up-to-date with development models than museums.]

They had real users, and wanted to apply semantic web technologies to help them do their research.

At the start of the project, all the developers went on a week-long training course on agile development, which was really important for the smooth running of the project as they all came out with a common view on how they might go forward. Everyone worked with a 'how can we make this work best for us' view of agile development.

Agile – what's it about?
Agile values individuals and interactions over processes and tools. It values responding to change over following a plan – this is particularly interesting when writing proposals for funders like JISC.

From a personal perspective, the key principles became: what we do is (end) user led; spend a lot of time communicating; build on existing working system code (i.e. value code over documentation); develop incrementally. It's not in all the textbooks but they found it's important – they 'retrospect'.

User-led: you need a real user as a starting point. Not sure how to advise someone working on a project without real users [I didn't ask, but why would you be doing a project without real users?]. It's far easier to elicit requirements from users if you have a working system.

Build and maintain trust in the team. [Important.]

Building on working code: start with something simple that works. Build in small steps – it's easy to say, but the discipline of forcing yourself to take tiny steps is quite hard. The temptation is to cram in a bit more functionality and think you'll sort it out later. When you get used to the discipline of small steps, it gets so much easier to maintain the flow of development.

Minimise the periods of time when you don't have a working system.

Don't sacrifice quality.

Always look for easy ways of implementing what you need to do now. Don't bring in a more complex solution because you think you might need it later.

Retrospection – 'the one indispensable practice of agile'? As a team, take time regularly to review and adjust.

More random points: test-led development is often associated with agile development. Think of test cases as specification – it's also a useful mindset for standards development groups. Test cases are particularly useful when working with external code bases or applications – even more so than when working with your own code. [There was quite a bit of discussion about this; I think whether it made sense to you depended on your experiences commissioning or reviewing possible applications for purchase for institutional use. I can think of occasions when it would have been a very useful approach for dealing with vendor oversell – it sounds like a sensible way to test the fit of third-party applications for your situation. See the sketch below.]
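[A hedged sketch of what 'test cases as specification' might look like when evaluating a third-party product: each test states a requirement the product must meet, and candidate_search below is a made-up stand-in for the vendor's API rather than any real library. The requirements themselves are invented examples.]

```python
import unittest


def candidate_search(query):
    """Stand-in for the vendor's search call; replace with the real product's API."""
    if not query:
        raise ValueError("empty query")
    return [{"title": "Ammonite", "image": None}]


class SearchRequirements(unittest.TestCase):
    """Each test encodes a requirement the product must meet before purchase."""

    def test_results_include_records_without_images(self):
        results = candidate_search("ammonite")
        self.assertTrue(any(record["image"] is None for record in results))

    def test_empty_query_is_rejected_cleanly(self):
        with self.assertRaises(ValueError):
            candidate_search("")


if __name__ == "__main__":
    unittest.main()
```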

Keep refactoring separate from the addition of new functionality.

Card planning: for e.g. user stories and tasks. It's a good solution in an environment with very strong characters: it allows everyone to be confident that their input is being noted, which means they don't hijack the session to make sure their points have been heard. The team can then review the cards and decide which are most important in the next small block of work.

Outcomes: progress had been steady and continuous rather than meteoric. What seems like a slow pace at the time actually gets you quite far. It produces a sense of continuous progress.

Unexpected architectural choices – he had a particular view about how web interactions were going to work in the project (e.g. the choice between server-side or client-side JavaScript). He was sceptical, but knew he could change his mind later: he could follow one path and change if necessary. This actually resulted in architectural choices that they would never have made upfront but which were best for the situation.

Discussion
Never refactor until you have to. Don't make stuff you *might* need, make it when you need it.

A call for agile museum projects (a lunchtime manifesto)

Yet another conversation on twitter about the NMOLP/Creative Spaces project led to a discussion of the long lead times for digital projects in the cultural heritage sector. I've worked on projects that were specced and had their goals agreed with funders five years before delivery, and two years before any technical or user-focussed specification or development began, and I wouldn't be surprised if something similar happened with NMOLP.

Five years is obviously a *very* long time in internet time, though it's a blink of an eye for a museum. So how do we work with that institutional reality? We need to figure out agile, adaptable project structures that funders and bid writers can also understand and buy into…

The initial project bid must be written to allow for implementation decisions that take into account the current context, and ideally a major goal of the bid-writing process should be finding points where existing infrastructure could be re-used. The first step for any new project should be a proper study of the needs of current and potential users in the context of the stated goals of the project. All schema, infrastructure and interface design decisions should have a link to one or more of those goals. Projects should be built around usability goals, not object counts or interface milestones set in stone three years earlier.

Taking institutional parameters into account is of course necessary, but letting them drive the decision-making process leads to sub-optimal projects, so projects should have the ability to point out where institutional constraints are a risk for the project. Constraints might be cultural, technical, political or collections-related – we're good at talking about the technical and resourcing constraints, but while we all acknowledge the cultural and political ones, that acknowledgement usually happens behind closed doors and not in a way that explicitly helps the project succeed.

And since this is my lunchtime dream world, I would like plain old digitisation to be considered sexy without the need to promise funders more infrastructure they can show their grandkids.

We also need to work out project models that will get buy-in from contractors and 3rd party suppliers. As Dan Zambonini said, "'Usability goals' sounds like an incredibly difficult thing to quantify", so existing models like Agile/sprint-esque 'user stories' might be easier to manage.

We, as developers, need to create a culture in which 'failing intelligently' is rewarded. I think most of us believe in 'failing faster to succeed sooner', at least to some extent, but we need to think carefully about the discourse around public discussions of project weaknesses or failures if we want this to be a reality. My notes from Clay Shirky's ICA talk earlier this year say that the members of the Invisible College (a society which aimed to 'acquire knowledge through experimental investigation') "went after alchemists for failing to be informative when they were wrong" – " it was ok to be wrong but they wanted them to think about and share what went wrong". They had ideas about how results should be written up and shared for maximum benefit. I think we should too.

I think the MCG and Collections Trust could both have a role to play in advocating more agile models to those who write and fund project bids. Each museum also has a responsibility to make sure projects it puts forward (whether singly or in a partnership) have been reality checked by its own web or digital specialists as well as other consultants, but we should also look to projects and developers (regardless of sector) that have managed to figure out agile project structures that funders and bid writers can also understand and buy into.

So – a blatant call for self-promotion – if you've worked on projects that could provide a good example, written about your dream project structures, know people or projects that'd make a good case study – let me know in the comments.

Thanks also to Mike, Giv and Mike, Daniel Evans (and the MCG plus people who listened to me rant at dev8D in general) for the conversations that triggered this post.


If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and Museums and iterative agility: do your ideas get oxygen? (21 November 2010).