Museums and iterative agility: do your ideas get oxygen?

Re-visiting the results of the survey I ran about issues facing museum technologists has inspired me to gather together some great pieces I've read on museum projects moving away from detailed up-front briefs and specifications toward iterative and/or agile development.

In 'WaterWorx – our first in-gallery iPad interactive at the Powerhouse Museum', Seb Chan writes:

"the process by which this game was developed was in itself very different for us. … Rather than an explicit and ‘completed’ brief be given to Digital Eskimo, the game developed using an iterative and agile methodology, begun by a process that they call ‘considered design‘. This brought together stakeholders and potential users all the way through the development process with ‘real working prototypes’ being delivered along the way – something which is pretty common for how websites and web applications are made, but is still unfortunately not common practice for exhibition development."

I'd also recommend the presentation 'Play at Work: Applying Agile Methods to Museum Website Development' given at the 2010 Museum Computer Network Conference by Dana Mitroff Silvers and Alon Salant for examples of how user stories were used to identify requirements and prioritise development, and for an insight into how games can be used to get everyone working in an agile way.  If their presentation inspires you, you can find games you can play with people to help everyone understand various agile, scrum and other project management techniques and approaches at tastycupcakes.com.

I'm really excited by these examples, as I'm probably not alone in worrying about the mismatch between industry-standard technology project management methods and museum processes. In a 'lunchtime manifesto' written in early 2009, I hoped the sector would be able to 'figure out agile project structures that funders and bid writers can also understand and buy into' – maybe we're finally at that point.

And from outside the museum sector, a view on why up-front briefs don't work for projects where user experience design is important. Peter Merholz of Adaptive Path writes:

"1. The nature of the user experience problems are typically too complex and nuanced to be articulated explicitly in a brief. Because of that, good user experience work requires ongoing collaboration with the client. Ideally, client and agency basically work as one big team.

2. Unlike the marketing communications that ad agencies develop, user experience solutions will need to live on, and evolve, within the clients’ business. If you haven’t deeply involved the client throughout your process, there is a high likelihood that the client will be unable to maintain whatever you produce."

Finally, a challenge to the perfectionism of museums.  Matt Mullenweg (of WordPress fame), writes in '1.0 Is the Loneliest Number': 'if you’re not embarrassed when you ship your first version you waited too long'.  Ok, so that might be a bit difficult for museums to cope with, but what if it was ok to release your beta websites to the public?  Mullenweg makes a strong case for iterating in public:

"Usage is like oxygen for ideas. You can never fully anticipate how an audience is going to react to something you’ve created until it’s out there. That means every moment you’re working on something without it being in the public it’s actually dying, deprived of the oxygen of the real world.

By shipping early and often you have the unique competitive advantage of hearing from real people what they think of your work, which in best case helps you anticipate market direction, and in worst case gives you a few people rooting for you that you can email when your team pivots to a new idea. Nothing can recreate the crucible of real usage.

You think your business is different, that you’re only going to have one shot at press and everything needs to be perfect for when Techcrunch brings the world to your door. But if you only have one shot at getting an audience, you’re doing it wrong."

* The Merholz article above is great because you can play a fun game with the paragraph below – in your museum, what job titles would you put in place of 'art director' and 'copywriter'?  Answers in a comment, if you dare!  I think it feels particularly relevant because of the number of survey responses that suggested museums still aren't very good at applying the expertise of their museum technologists.

"One thing I haven’t yet touched on is the legacy ad agency practice where the art director and copywriter are the voices that matter, and the rest of the team exists to serve their bidding. This might be fine in communications work, but in user experience, where utility is king, this means that the people who best understand user engagement are often the least empowered to do anything about it, while those who have little true understanding of the medium are put in charge. In user experience, design teams need to recognize that great ideas can come from anywhere, and are not just the purview of a creative director."


If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and A call for agile museum projects (a lunchtime manifesto) (10 March 2009).

Performance testing and Agile – top ten tips from ThoughtWorks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post summarises a talk, 'Top ten secret weapons for performance testing in an agile environment', given by Patrick Kua from ThoughtWorks and organised by the BCS's SPA (software practice advancement) group.

His slides from an earlier presentation are online so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of the criticism, from a requirements engineering perspective, that agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but how it does it and the qualities it has – usability, security, performance, etc), and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments – much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing – probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or given a story card of their own and managed alongside the stories for the functional requirements.  He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale], as [person or role], I want [natural language description of the requirement].]
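
[Filled in for a performance requirement, that might read something like 'So that visitors don't give up on search, as a gallery visitor I want search results to come back within two seconds with 100 people using the site' – my invented example, not one of theirs.]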

2. One team.
Team dynamics are important – performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' – working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience – testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories – pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view – identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down – try the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe the test results – what do you see? Be data driven.
Formulate a hypothesis – why is it doing that?
Design an experiment – how can you prove that's what's happening? Keep it lightweight; you should be able to run several a day.
Run the experiment – take time to gather and examine the evidence.
If the hypothesis is valid, change the application code.

Like all good experiments, you should change only one thing at a time.

Don't panic, stay disciplined.
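
[To make that concrete for myself, here's a rough Python sketch of that loop – the URL, sample size and cache hypothesis are all invented for the example, not from the talk:]

    import statistics, time, urllib.request

    def sample_response_times(url, n=20):
        """Observe: gather raw timings so decisions are data driven."""
        times = []
        for _ in range(n):
            start = time.perf_counter()
            urllib.request.urlopen(url).read()
            times.append(time.perf_counter() - start)
        return times

    url = "http://localhost:8080/search"  # invented example endpoint
    # Hypothesis: enabling the query cache halves the median response time.
    before = statistics.median(sample_response_times(url))
    # ... change exactly one thing (turn the cache on), then re-run ...
    after = statistics.median(sample_response_times(url))
    print("hypothesis holds" if after <= before / 2 else "hypothesis rejected")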

5. Play performance early
Scheduling around iterative builds makes it more possible. A few tests during the build are better than a block at the end. Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure – iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario. If slicing by scenario, test by going through a whole scenario for one persona.
Use visualisations to help digest and communicate test results, and build them up in iterations too – e.g. colour to show the number of HTTP requests before you get error codes.

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation – continuous integration good.
Automated tests
Automated packaging
Automated deployment [yes please – it should be easy to get different builds onto an environment]
Automated test orchestration – playing with scenarios, putting load generators through profiles.
Automated analysis
Automated scheduling – part of pipeline. Overnight runs.
Automated result archiving – so you can check the raw output if you discover issues later

Why automate? Reproducible and constant; faster feedback; higher productivity.
Can add automated load generation e.g. JMeter, which can also run in distributed agent mode.
Ideally run sanity performance tests for show stoppers at the end of functional tests, then a full overnight test.
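
[A sketch of what the orchestration and archiving steps above might look like, using JMeter's standard non-GUI flags – the test plan name and directory layout are made up for illustration:]

    import shutil, subprocess, time
    from pathlib import Path

    # Give each run its own timestamped archive directory.
    archive = Path("results") / time.strftime("%Y%m%d-%H%M%S")
    archive.mkdir(parents=True)

    # Run JMeter headless: -n (non-GUI), -t (test plan), -l (sample log).
    subprocess.run(
        ["jmeter", "-n", "-t", "typical_day.jmx", "-l", str(archive / "samples.jtl")],
        check=True,
    )

    # Keep the raw output so it can be checked if issues are discovered later.
    shutil.copy("jmeter.log", archive / "jmeter.log")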

8. Continuous performance testing
Build pipeline.
Application level – compilation and test units; functional test; build RPM (or whatever distribution thingy).
Into performance level – 5 minute sanity test; typical day test.

Spot incremental performance degradation – set tests to fail if the percentage increase is too high.
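
[Something like this minimal check could sit at the end of the pipeline – the file names and the 10% threshold are arbitrary examples:]

    import json, statistics, sys

    THRESHOLD = 0.10  # fail the build on more than a 10% increase

    baseline = json.load(open("baseline.json"))["mean_response_ms"]
    current = statistics.mean(
        float(line) for line in open("latest_timings.txt") if line.strip()
    )

    if current > baseline * (1 + THRESHOLD):
        sys.exit(f"FAIL: {current:.0f}ms against a {baseline:.0f}ms baseline")
    print(f"OK: {current:.0f}ms against a {baseline:.0f}ms baseline")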

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.
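
[For example, a percentile calculation in analysis code is easy to get subtly wrong, so it's worth a unit test of its own – a sketch using the nearest-rank method and invented numbers:]

    import math

    def percentile(samples, p):
        """Nearest-rank percentile: the value at rank ceil(p/100 * n)."""
        ordered = sorted(samples)
        rank = math.ceil(p / 100 * len(ordered))
        return ordered[rank - 1]

    def test_percentile_nearest_rank():
        assert percentile([50, 10, 30, 20, 40], 50) == 30
        assert percentile([50, 10, 30, 20, 40], 95) == 50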

V model of testing – performance testing sits at the top right-hand edge of the V.

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase – here's what we learned and what we changed as a result – show the benefits of on-going performance testing.

General comments from Q&A: you can do load generation and analyse session logs of user journeys. Testing is risk mitigation – you can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice – I'd definitely go to another SPA event.

Woohoo!

The results of JISC's dev8D 'developer happiness' prize have been announced – congratulations to List8D and their "web 2.0-friendly reading lists" – it's something I'd love to see in my own uni course.

And yay the Three Lazy Geeks, because we came second! That was a lovely surprise, and really the glacé cherry on the icing of the cake, because the whole event was a great experience and I really enjoyed working with Ian and Pete, as rushed as the whole thing was.

Agile development presentation, dev8D

These are my very rough notes on Graham Klyne's talk on Agile Development at JISC's dev8D event in February. Graham works with bioinformatics and the semantic web at the zoology department of Oxford University. He is interested in how people can make useful things out of semantic web technologies.

Any mistakes are mine, obviously, and any comments or corrections are welcome. (I should warn that they're still rough, even though it's taken me a month to get them online.) My notes in [square brackets] below.

He started by asking people in the room to introduce themselves, which was a nice idea as most people hadn't met.

Agile and their project
Agile seemed to be particularly appropriate for development in a research context, where the final outcomes are necessarily uncertain. [I'm particularly interested in how they managed to build Agile development into the JISC project proposal, as reflected in 'A call for agile museum projects (a lunchtime manifesto)'. Even getting agile working in a university environment seems like an impressive achievement to me, though universities tend to be more up-to-date with development models than museums.]

They had real users, and wanted to apply semantic web technologies to help them do their research.

At the start of the project, all the developers went on a week-long training course on agile development, which was really important for the smooth running of the project as they all came out with a common view on how they might go forward. Everyone worked with a 'how can we make this work best for us' view of agile development.

Agile – what's it about?
Agile values individuals and interactions over processes and tools. It values responding to change over following a plan – this is particularly interesting when writing proposals for funders like JISC.

From a personal perspective, the key principles became: what we do is (end) user led; spend a lot of time communicating; build on existing working system code (i.e. value code over documentation); develop incrementally. It's not in all the textbooks but they found it's important – they 'retrospect'.

User-led: you need a real user as a starting point. Not sure how to advise someone working on a project without real users [I didn't ask, but why would you be doing a project without real users?]. It's far easier to elicit requirements from users if you have a working system.

Build and maintain trust in the team. [Important.]

Building on working code: start with something simple that works. Build in small steps – it's easy to say, but the discipline of forcing yourself to take tiny steps is quite hard. The temptation is to cram in a bit more functionality and think you'll sort it out later. When you get used to the discipline of small steps, it gets so much easier to maintain the flow of development.

Minimise the periods of time when you don't have a working system.

Don't sacrifice quality.

Always look for easy ways of implementing what you need to do now. Don't bring in a more complex solution because you think you might need it later.

Retrospection – 'the one indispensable practice of agile'? As a team, take time regularly to review and adjust.

More random points: test-driven development is often associated with agile development. Think of test cases as specification – it's also a useful mindset for standards development groups. Test cases are particularly useful when working with external code bases or applications – even more so than when working with your own code. [There was quite a bit of discussion about this; I think whether it made sense to you depended on your experiences commissioning or reviewing possible applications for purchase for institutional use. I can think of occasions when it would have been a very useful approach for dealing with vendor oversell – it sounds like a sensible way to test the fit of third-party applications for your situation.]
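
[As a sketch of the idea, a 'fitness test' for a third-party search component might look something like the following – the vendor_search module and the behaviour it promises are entirely hypothetical:]

    import unittest

    class VendorSearchFitness(unittest.TestCase):
        """An executable record of what we expect the vendor's product to do."""

        def test_empty_query_returns_no_results(self):
            from vendor_search import search  # hypothetical vendor module
            self.assertEqual(search(""), [])

        def test_results_come_back_ranked(self):
            from vendor_search import search
            results = search("museum")
            scores = [r.score for r in results]
            self.assertEqual(scores, sorted(scores, reverse=True))

    if __name__ == "__main__":
        unittest.main()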

Keep refactoring separate from the addition of new functionality.

Card planning: for e.g. user stories and tasks. It's a good solution in an environment with very strong characters: it allows everyone to be confident that their input has been noted, which means they don't hijack the session to make sure their points have been heard… the team can then review and decide which are most important in the next small block of work.

Outcomes: progress had been steady and continuous rather than meteoric. What seems like a slow pace at the time actually gets you quite far. It produces a sense of continuous progress.

Unexpected architectural choices – he had a particular view about how web interactions were going to work in the project (e.g. the choice between server-side or client-side JavaScript). He was sceptical, but knew he could change his mind later – he could follow one path and change if necessary. It actually resulted in architectural choices that would never have been made upfront but which were best for the situation.

Discussion
Never refactor until you have to. Don't make stuff you *might* need, make it when you need it.

A call for agile museum projects (a lunchtime manifesto)

Yet another conversation on twitter about the NMOLP/Creative Spaces project led to a discussion of the long lead times for digital projects in the cultural heritage sector. I've worked on projects that were specced and had their goals agreed with funders five years before delivery, and two years before any technical or user-focussed specification or development began, and I wouldn't be surprised if something similar happened with NMOLP.

Five years is obviously a *very* long time in internet time, though it's a blink of an eye for a museum. So how do we work with that institutional reality? We need to figure out agile, adaptable project structures that funders and bid writers can also understand and buy into…

The initial project bid must be written to allow for implementation decisions that take into account the current context, and ideally a major goal of the bid writing process should be finding points where existing infrastructure could be re-used. The first step for any new project should be a proper study of the needs of current and potential users in the context of the stated goals of the project. All schema, infrastructure and interface design decisions should have a link to one or more of those goals. Projects should be built around usability goals, not object counts or interface milestones set in stone three years earlier.

Taking institutional parameters into account is of course necessary, but letting them drive the decision making process leads to sub-optimal projects, so projects should have the ability to point out where institutional constraints are a risk for the project. Constraints might be cultural, technical, political or collections-related – we're good at talking about the technical and resourcing constraints, but while we all acknowledge the cultural and political constraints it often happens behind closed doors and usually not in a way that explicitly helps the project succeed.

And since this is my lunchtime dream world, I would like plain old digitisation to be considered sexy without the need to promise funders more infrastructure they can show their grandkids.

We also need to work out project models that will get buy-in from contractors and third-party suppliers. As Dan Zambonini said, "'usability goals' sounds like an incredibly difficult thing to quantify", so existing models like Agile/sprint-esque 'user stories' might be easier to manage.

We, as developers, need to create a culture in which 'failing intelligently' is rewarded. I think most of us believe in 'failing faster to succeed sooner', at least to some extent, but we need to think carefully about the discourse around public discussions of project weaknesses or failures if we want this to be a reality. My notes from Clay Shirky's ICA talk earlier this year say that the members of the Invisible College (a society which aimed to 'acquire knowledge through experimental investigation') "went after alchemists for failing to be informative when they were wrong" – " it was ok to be wrong but they wanted them to think about and share what went wrong". They had ideas about how results should be written up and shared for maximum benefit. I think we should too.

I think the MCG and Collections Trust could both have a role to play in advocating more agile models to those who write and fund project bids. Each museum also has a responsibility to make sure projects it puts forward (whether singly or in a partnership) have been reality checked by its own web or digital specialists as well as other consultants, but we should also look to projects and developers (regardless of sector) that have managed to figure out agile project structures that funders and bid writers can also understand and buy into.

So – a blatant call for self-promotion – if you've worked on projects that could provide a good example, written about your dream project structures, know people or projects that'd make a good case study – let me know in the comments.

Thanks also to Mike, Giv and Mike, Daniel Evans (and the MCG plus people who listened to me rant at dev8D in general) for the conversations that triggered this post.


If you liked this post, you may also be interested in Confluence on digital channels; technologists and organisational change? (29 September 2012) and Museums and iterative agility: do your ideas get oxygen? (21 November 2010).

20% time – an experiment (with some results)

A company called Atlassian have been experimenting with allowing their engineers 20% of their time to work on free or non-core projects (a la Google). They said:

You see, while everyone knows about Google's 20% time and we've heard about all the neat products born from it (Google News, GMail etc) – we've found it extremely difficult to get any hard facts about how it actually works in practice.

So they started with a list of questions they wanted to answer through their experiment, and they've been blogging about it at http://blogs.atlassian.com/developer/20_percent_time/. It makes for interesting reading, and it's great to see some real evidence starting to emerge.

Hat tip: Tech-Ed Collisions.

Thumbs up to Migratr (and free and open goodness)

[Update: Migratr downloads all your files to the desktop, with your metadata in an XML file, so it's a great way to backup your content if you're feeling a bit nervous about the sustainability of the online services you use. If it's saved your bacon, consider making a donation.]

This is just a quick post to recommend a nice piece of software: "Migratr is a desktop application which moves photos between popular photo sharing services. Migratr will also migrate your metadata, including the titles, tags, descriptions and album organization."

I was using it to migrate stuff from random Flickr accounts people had created at work in bursts of enthusiasm to our main Museum of London Flickr account, but it also works for 23HQ, Picasa, SmugMug and several other photo sites.

The only hassles were that it concatenated the tags (e.g. "Museum of London" became "museumoflondon") and didn't get the set descriptions, but overall it's a nifty utility – and it's free (though you can make a donation). [Update: Alex, the developer, has pointed out that the API sends the tags space delimited, so his app can't tell the difference.]

And as the developer says, the availability of free libraries (and the joys of APIs) cut down development time and made the whole thing much more possible. He quotes Newton's 'If I have seen further it is by standing on the shoulders of giants', and I think that's beautifully apt.

How I do documentation: a column of bumph and a column of gold

All programmers hate documentation, right? But I've discovered a way to make it less painful and I'm posting in case it helps anyone else.

The first trick is to start documenting as soon as you start thinking about a project – well before you've written any code. I keep a running document of the work I've done, including the bits I'm about to try, information about links into other databases or applications, issues I need to think about or questions I need to ask someone, rude comments (I know, I look like such a nice girl), references, quick use cases, bits about functions, summary notes from meetings, etc.

Mostly I record by date, blog style. Doing it by date helps me link repository files, paper notes and emails with particular bits of work, which can otherwise be tricky if it's a while since you worked on a project or if you have lots of projects on the go. It's also handy if you need to record the time spent on different projects.

I just did it like this for a while, and it was ok, but I learnt the hard way that it took a while to sort through it all when I needed to send someone else some documentation. Then I made a conscious decision to separate the random musings from the decisions and notes on the productive bits of code.

So now my document has two columns. The first column is all the bumph described above – the stuff I'd need if I wanted to retrace my steps or remind myself why I ended up doing things a certain way. The second column records key decisions or final solutions. This is your column of gold.

This way I can quickly run down the items in the second column, organise it by area instead of by date and come up with some good documentation without much effort. And if I ever want to write up the whole project, I've got a record of the whole process in the column of bumph.
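
[A trimmed – and entirely invented – example of what one day's entries might look like:

    12 March, bumph: tried linking object IDs via the events table – dead end, the IDs aren't unique across departments; ask J about the authority file; see repository revision 142.
    12 March, gold: object IDs are only unique within a department, so always qualify them with the department code.]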

You could add a third column to record outstanding tasks or questions. I tend to mark these up with colour and un-colour them when they're done. It just depends how you like to work.

It's amazingly simple, but it works. I hope it might be useful for you too. Or if you have any better suggestions (or a better title for this post), I'd love to hear them.

Open Source Jam (osjam) – designing stuff that gets used by people

On Thursday I went to Google's offices to check out the Open Source Jam. I'd meant to check them out before, and since I was finally free on the right night and the topic was 'Designing stuff that gets used by people', it was perfect timing. A lot of people spoke about API design issues, which was useful in light of the discussions Jeremy started about the European Digital Library API on the Museums Computer Group email list (look for subject lines containing 'APIs and EDL' and 'API use-cases').

These notes are pretty much just as they were written on my phone, so they're more pointers to good stuff than a proper summary, and I apologise if I've got names or attributions wrong.

I made a note to go read more of Duncan Cragg on URIs.

Paul Mison spoke about API design antipatterns, using Flickr's API as an example. He raised interesting points about which end of the API provider-user relationship should have the expense and responsibility for intensive relational joins, and designing APIs around use cases.

Nat Pryce talked about APIs as UIs for programmers. His experience suggests you shouldn't do what programmers ask for but find out what they want to do in the end and work with that. Other points: avoid scope creep for your API based on feature lists. Naming decisions are important, and there can be multilingual and cultural issues with understanding names and functionality. Have an open dialogue with your community of users but don't be afraid to selectively respond to requests. [It sounds like you need to look for the most common requests as no one API can do everything. If the EDL API is extensible or plug-in-able, is the issue of the API as the only interface to that service or data more tenable?] Design so that code using your API can be readable. Your API should be extensible cos you won't get it right first time. (In discussion someone pointed out that this can mean you should provide points to plug in as well as designing so it's extensible.) Error messages are part of the API (yes!).
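
[On that last point, a quick sketch of what an error message designed as part of the interface might look like – the function and its ID format are invented:]

    def get_record(record_id):
        """Fetch a record; the error tells the caller how to fix their call."""
        if not isinstance(record_id, str) or not record_id.startswith("rec_"):
            raise ValueError(
                f"record_id must be a string like 'rec_123', got {record_id!r}"
            )
        ...  # look up and return the record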

Christian Heilmann spoke on accessibility and made some really good points about accessibility as a hardcore test and incubator for your application/API/service. Build it in from the start, and the benefits go right through to general usability. Also, provide RSS feeds etc as an alternative method for data access so that someone else can build an application/widget to meet accessibility needs. [It's the kind of common sense stuff you don't think someone has to say until you realise accessibility is still a dirty word to some people.]

Jonathan Chetwynd spoke on learning disabilities (making the point that it includes functional illiteracy) and GUI schemas that would allow users to edit the GUI to meet their accessibility needs. He also mentioned the possibility of wrapping microformats around navigation or other icons.

Dan North talked about how people learn and the Dreyfus model of skill acquisition, which was new to me but immediately seemed like something I need to follow up. [I wonder if anyone's done work on how that relates to models of museum audiences and how it relates to other models of learning styles.]

Someone whose name I didn't catch talked about behaviour-driven development, which was also new to me and tied in with Dan's talk.