Performance testing and Agile – top ten tips from Thoughtworks

I've got a whole week and a bit off uni (though of course I still have my day job) and I got a bit over-excited and booked two geek talks (and two theatre shows). This post summarises a talk on 'Top ten secret weapons for performance testing in an agile environment', organised by the BCS's SPA (software practice advancement) group, with Patrick Kua from ThoughtWorks.

His slides from an earlier presentation are online, so you may prefer just to head over and read them.

[My perspective: I've been thinking about using Agile methodologies for two related projects at work, but I'm aware of the criticism, from a requirements engineering perspective, that agile doesn't deal well with non-functional requirements (i.e. not requirements about what a system does, but about how it does it and the qualities it has – usability, security, performance, etc.), and of the problems of integrating graphic and user experience design into agile processes (thanks in part to an excellent talk @johannakoll gave at uni last term). Even if we do the graphic and user experience design a cycle or two ahead, I'm also not sure how it would work across production teams that span different departments – much to think about.

Wednesday's talk did a lot to answer my own questions about how to integrate non-functional requirements into agile projects, and I learned a lot about performance testing – probably about time, too. It was intentionally about processes rather than tools, but JMeter was mentioned a few times.]

1. Make performance explicit.
Make it an explicit requirement upfront and throughout the process (as with all non-functional requirements in agile).
Agile should bring the painful things forward in the process.

Two ways: non-functional requirements can be dotted onto the corner of the story card for a functional requirement, or given a story card of their own and managed alongside the stories for the functional requirements.  He pointed out that non-functional requirements have a big effect on architecture, so it's important to test assumptions early.

[I liked their story card format: So that [rationale], as [person or role], I want [natural language description of the requirement]. A performance story in that format might read something like (my example, not one from the talk): so that search feels instant, as a returning customer, I want search results to appear within two seconds at peak load.]

2. One team.
Team dynamics are important – performance testers should be part of the main team. Products shouldn't just be 'thrown over the wall'. Insights from each side help the other. Someone from the audience made a comment about 'designing for testability' – working together makes this possible.

Bring feedback cycles closer together. Often developers have an insight into performance issues from their own experience – testers and developers can work together to triangulate and find performance bottlenecks.

Pair on performance test stories – pair a performance tester and developer (as in pair programming) for faster feedback. Developers will gain testing expertise, so rotate pairs as people's skills develop.  E.g. in a team of 12 with 1 tester, rotate once a week or fortnight.  This also helps bring performance into focus through the process.

3. Customer driven
Customer as in end user, not necessarily the business stakeholder.  Existing users are a great source of requirements from the customers' point of view – identify their existing pain points.  Also talk to marketing people and look at usage forecasts.

Use personas to represent different customers or stakeholders. It's also good to create a persona for someone who wants to bring the site down – try on the evil hat.

4. Discipline
You need to be as disciplined and rigorous as possible in agile.  Good performance testing needs rigour.

They've come up with a formula:
Observe test results – what do you see? Be data driven.
Formulate hypothesis – why is it doing that?
Design an experiment – how can I prove that's what's happening? Lightweight, should be able to run several a day.
Run the experiment – take time to gather and examine the evidence.
Is the hypothesis valid? If so –
Change the application code.

Like all good experiments, you should change only one thing at a time.
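
For what it's worth, here's a minimal sketch in Python of the kind of lightweight experiment he described – the URL, sample size and use of plain urllib are my own assumptions, not anything from the talk:

```python
import statistics
import time
import urllib.request

# Hypothetical target and sample size -- kept small enough that the whole
# experiment runs in minutes, so you can fit several into a day.
URL = "http://localhost:8080/search?q=test"
SAMPLES = 50

def time_request(url):
    """Time a single request in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - start

timings = [time_request(URL) for _ in range(SAMPLES)]

# Gather the evidence before deciding whether the hypothesis holds.
print(f"median: {statistics.median(timings):.3f}s")
print(f"p95:    {statistics.quantiles(timings, n=20)[-1]:.3f}s")
```

Change one variable, re-run, and compare the numbers.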

Don't panic, stay disciplined.

5. Play performance early
Scheduling around iterative builds makes this more feasible. A few tests during each build are better than a block at the end.  Automate early.

6. Iterate, Don't (Just) Increment
Fishbone structure – iterate and enhance tests as well as development.

Sashimi slicing is another technique.  Test once you have an end-to-end slice.

Slice by presentation or slice by scenario. If slicing by scenario, test by going through a whole scenario for one persona.
Use visualisations to help digest and communicate test results, and build them in iterations too – e.g. use colour to show the number of HTTP requests served before you start getting error codes.
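
To illustrate the kind of visualisation he meant (my sketch with made-up numbers, not anything from his slides), even a simple colour-coded scatter of status codes against load makes the point where errors start obvious:

```python
import matplotlib.pyplot as plt

# Made-up results: (concurrent users, HTTP status code) per request batch.
results = [(10, 200), (50, 200), (100, 200), (200, 200),
           (400, 500), (800, 503)]

users = [u for u, _ in results]
codes = [c for _, c in results]
# Green while the app answers normally, red once error codes appear.
colours = ["green" if c < 400 else "red" for c in codes]

plt.scatter(users, codes, c=colours)
plt.xlabel("concurrent users")
plt.ylabel("HTTP status code")
plt.title("Status codes as load increases")
plt.show()
```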

7. Automate, automate, automate.
It's an investment for the future, so the amount of automation depends on the lifetime of the project and its strategic importance.  This level of discipline means you don't waste time later.

Automated compilation – continuous integration is good here.
Automated tests
Automated packaging
Automated deployment [yes please – it should be easy to get different builds onto an environment]
Automated test orchestration – playing with scenarios, putting load generators through different profiles.
Automated analysis
Automated scheduling – part of the pipeline. Overnight runs.
Automated result archiving – so you can check the raw output if you discover issues later (see the sketch below).

Why automate? Reproducible and constant; faster feedback; higher productivity.
Can add automated load generation e.g. JMeter, which can also run in distributed agent mode.
Ideally run sanity performance tests for show stoppers at the end of functional tests, then a full overnight test.
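
A rough sketch of what automated orchestration and result archiving could look like – the test plan name and directory layout are my inventions, though -n (non-GUI), -t (test plan) and -l (results log) are standard JMeter command-line options:

```python
import shutil
import subprocess
from datetime import datetime
from pathlib import Path

# Timestamped directory so every run's raw output is kept for later digging.
run_dir = Path("results") / datetime.now().strftime("%Y%m%d-%H%M%S")
run_dir.mkdir(parents=True)

# Run JMeter headless: -n non-GUI, -t test plan, -l raw results log.
# Assumes jmeter is on the PATH and overnight.jmx exists.
subprocess.run(
    ["jmeter", "-n", "-t", "overnight.jmx", "-l", str(run_dir / "results.jtl")],
    check=True,
)

# Archive the test plan alongside the results so the run is reproducible.
shutil.copy("overnight.jmx", run_dir / "overnight.jmx")
```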

8. Continuous performance testing
Build pipeline.
Application level – compilation and test units; functional test; build RPM (or whatever distribution thingy).
Into performance level – 5 minute sanity test; typical day test.

Spot incremental performance degradation – set tests to fail if the percentage increase is too high.
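
Something like this sketch, perhaps – fail the pipeline stage when the mean response time drifts too far above a stored baseline (the file names and the 10% threshold are my assumptions):

```python
import json
import statistics
import sys
from pathlib import Path

THRESHOLD = 1.10  # fail if more than 10% slower than the accepted baseline

# Hypothetical files: baseline.json holds the last accepted mean response
# time, timings.json holds this run's raw response times in seconds.
baseline = json.loads(Path("baseline.json").read_text())["mean_seconds"]
current = statistics.mean(json.loads(Path("timings.json").read_text()))

if current > baseline * THRESHOLD:
    print(f"FAIL: mean {current:.3f}s vs baseline {baseline:.3f}s")
    sys.exit(1)  # non-zero exit fails this stage of the pipeline

print(f"OK: mean {current:.3f}s is within {THRESHOLD - 1:.0%} of baseline")
```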

9. Test drive your performance test code
Hold it to the same level of quality as production code. TDD useful. Unit test performance code to fail faster. Classic performance areas to unit test: analysis, presentation, visualisation, information collecting, publishing.
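
For example (my sketch, not from the talk): if your analysis code includes a percentile helper, an ordinary unit test catches mistakes in it far faster than an overnight run would:

```python
import unittest

def percentile(samples, pct):
    """Nearest-rank percentile of a list of timings."""
    ordered = sorted(samples)
    # Nearest-rank method: the ceil(pct/100 * n)-th value, 1-indexed.
    index = max(0, -(-len(ordered) * pct // 100) - 1)
    return ordered[int(index)]

class PercentileTest(unittest.TestCase):
    def test_p95_of_one_to_hundred(self):
        self.assertEqual(percentile(list(range(1, 101)), 95), 95)

    def test_single_sample(self):
        self.assertEqual(percentile([42], 50), 42)

if __name__ == "__main__":
    unittest.main()
```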

V model of testing – performance testing sits at the top right-hand edge of the V.

10. Get feedback.
Core of agile principles.
Visualisations help communicate with stakeholders.
Weekly showcase – here's what we learned and what we changed as a result – show the benefits of on-going performance testing.

General comments from the Q&A: load generation can be informed by analysing session logs of user journeys. Testing is risk mitigation – you can't test everything. Pairing with clients is good.

In other news, I'm really shallow because I cheered on the inside when he said 'dahta' instead of 'dayta'. Accents FTW! And the people at the event seemed nice – I'd definitely go to another SPA event.

One thought on “Performance testing and Agile – top ten tips from Thoughtworks”

  1. Hi Mia,

    Thanks for the great comments and what great notes you managed to capture. It seems like you got a bit out of this session.

    I'd be happy to chat about any of this stuff in more detail. Cheers!
