A recent email:
[x] and I are over quality and we put folks from our teams on to the scrum development teams to help ensure quality. Some of the key questions we have are regarding how we ensure quality. How do we know with the scrum process that the quality is adequate? What are key ways we can report and track the quality during a scrum development cycle?
Remember how Tito Puente moved the drummer from the back of the band to the front of the band? That’s what Scrum requires you to do with QA.
The Scrum framework itself is silent on engineering practices. Scrum does require you to build a potentially-shippable product increment every Sprint. “Potentially shippable” generally means you could confidently get it out the door within one stabilization Sprint, or less. A stabilization Sprint is not a testing Sprint. Every Sprint is a testing Sprint. That means the team gets zero credit for work that isn’t tested.
No one said Scrum was easy. If your testing is any good (more than a bunch of unit tests), you may find it difficult to get this done every Sprint. It’s a lot of extra work. This responsibility is owned by the whole team, because the whole team won’t get credit if it’s not done/done/done.
To make this explicit, our training encourages your teams to negotiate a robust definition of “done” for every Product Backlog Item. Write this right on the card until the whole team internalizes the new habits. This means taking on fewer Product Backlog Items during each Sprint Planning Meeting. Welcome to real life. A smaller amount of thoroughly-tested work is worth more to us than a larger amount of low quality work. Instead of reporting and tracking regression failures, we fix everything we broke in the same Sprint we broke it, or the item’s not demonstrated at the Sprint Review Meeting. Sometimes it’s slow going, but this way we always know where we stand (unlike the FBI).
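To make the idea concrete, here is one hypothetical way a team might encode a per-item definition of “done” as an explicit checklist. The criteria listed are illustrative only; every team negotiates its own, and nothing here is prescribed by Scrum itself:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    """A Product Backlog Item with its negotiated definition of 'done'."""
    title: str
    # Each entry is one negotiated "done" criterion and whether it is met.
    # (These particular criteria are invented examples.)
    done_criteria: dict = field(default_factory=lambda: {
        "unit tests pass": False,
        "regression suite green": False,
        "code reviewed": False,
        "acceptance criteria demonstrated": False,
    })

    def is_done(self) -> bool:
        # The item counts (and is demonstrated at the Sprint Review)
        # only when every negotiated criterion is satisfied.
        return all(self.done_criteria.values())

item = BacklogItem("Export report as CSV")
item.done_criteria["unit tests pass"] = True
print(item.is_done())  # False: partial credit is no credit
```

The point of writing the checklist down (on the card, or in code like this) is that “done” stops being a matter of opinion at the Sprint Review.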
If your team gets sick of all the extra work (which increases every Sprint as your codebase grows), and is willing to learn new skills, it will automate as much as possible: end to end (system) testing, load testing, “negative testing”, security testing…. When anyone can reach the “push to test” button and get rapid feedback whether it’s broken, they can make more radical design changes than they would otherwise because they’re not flying blind. Another useful engineering practice we borrow from the eXtreme Programming folks: continuous integration.
Of course you will still need some manual testing.
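As a minimal sketch of the “push to test” idea (the function and test names below are invented for illustration): one entry point runs every automated check and gives rapid pass/fail feedback, so anyone can press the button before and after a risky design change.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class PushToTestSuite(unittest.TestCase):
    # Functional check: the happy path still works after a change.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    # "Negative testing": invalid input must fail loudly, not silently.
    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # The "push to test" button: one command, immediate feedback.
    unittest.main()
```

In a real codebase the same single command would also kick off the end-to-end, load, and security suites, typically from the continuous integration server on every check-in.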
Remember that the principle behind the practice of combining QA skills with design/coding skills on one team is to tighten the feedback loop. Don’t track bugs; detect them when they’re created and fix them! (Of course if any slip through, you can create Product Backlog Items for them to be prioritized like all other work.) The traditional practice of waiting until the end to test plays havoc with our release planning. Maybe we can predict how long it takes to test, but how long will it take to fix the things we find during testing? And then how long will it take to fix the things we broke while fixing those things? With the long feedback loop of the waterfall process we can’t predict how long it will take for the ball to stop bouncing. A flimsy definition of “done” (in Scrum, or any other approach) leads to an unbounded amount of work before we can ship.
The software industry has an imbalance of skills, personnel, and clout. There hasn’t been much career incentive for our best and brightest to get good at QA. I visited one company that gave their “developers” the desks near the window while the “testers” were clumped toward the center. (They nearly threw me out of that window when I suggested grouping by cross-functional teams instead.) Scrum can change that when combined with the Agile engineering practices and a robust definition of “done” for Product Backlog Items.
Software Process Mentor
(former embedded systems design engineer and embedded systems verification engineer)