Earlier this week, we ran a webinar titled “Technical Debt – the High Cost of Future Change”. The topic was, of course, technical debt in Agile projects. Although we left what we thought was ample time for questions, it turned out there were many more than we had time for. So, as promised, we are posting the questions and our responses here. I hope they are helpful! (Note: the answers below were not written by Michael James, the webinar presenter.)
Q: I’ve found that when asked, people come up with a very ‘full’ definition of done, but in the end they don’t follow it. How do we walk our own walk?
A: I could answer this in a number of ways, but I’ll take a more hard-edged approach here. It is about accountability. One aspect of Scrum is the idea of the “single wringable neck”. At the end of the day, it is the product owner who determines whether a story has met the agreed-upon definition of done. If he or she chooses to ignore aspects of that definition for whatever reason (expediency, a desire to be liked, business pressure), he or she ultimately bears the responsibility for the impact.
Q: Isn’t it likely that someone writing poor code might also write poor and ineffective tests?
A: Absolutely. Agile practices are designed to uncover and surface issues like this so that they can be addressed. This is why practices such as pair programming, code review, daily stand-ups, retrospectives, and sprint review meetings are so important. There is no magical fix for poorly written tests, but once we know the problem exists we can work to fix it.
Q: How was “easiness” vs. “difficulty” of changing the code measured? What metrics exist for what seems like a semi-subjective value?
A: There are many variables in estimating the difficulty of a code change. We could talk about things like the complexity of the code (e.g., cyclomatic complexity), the experience of the programmers, their familiarity with the domain and/or the specific module, the quality of the documentation, and so on. It ends up being a fairly subjective estimate. In Scrum teams, story sizes are estimated in relative terms, using story points.
The primary benefit of using relative estimates is that you are asking the team to judge the difficulty of new work relative to work that has already been completed. A team can easily make judgments like “this will be twice as hard as that” and produce useful estimates without spending a great deal of time on them. Estimates are subjective guesses anyway; accepting that lets the team put more time into building something and less time into guessing how long it will take to build.
Planning Poker is one technique for building relative estimates and for coming to consensus on the effort or relative size of the stories.
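As a rough illustration, here is a minimal sketch of one Planning Poker round in Python. The consensus rule (all votes land on the same card or an adjacent one) and the deck values are my own simplification for illustration, not part of any formal Scrum definition:

```python
# A toy Planning Poker round. Card values follow the common
# modified-Fibonacci deck; the consensus rule here is an assumption:
# all votes within one card of each other counts as agreement.

DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def poker_round(estimates):
    """Return (consensus_value, needs_discussion) for one round of votes.

    estimates: the card values chosen by team members.
    If the spread is more than one adjacent card, the outliers explain
    their reasoning and the team votes again.
    """
    indices = sorted(DECK.index(e) for e in estimates)
    if indices[-1] - indices[0] <= 1:
        # Close enough: take the higher of the two adjacent cards.
        return DECK[indices[-1]], False
    return None, True

# Example: one outlier forces another round of discussion.
value, discuss = poker_round([3, 5, 5, 13])
```

In practice the value of the technique is less the number it produces and more the conversation it forces when the votes diverge.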
Q: Technical debt always meant those things that you postpone doing that you know must be done, and that become more expensive to do as time goes on; that is, the technical “debt” has compounding “interest” applied.
A: I like the extension of the metaphor to include interest on the “debt” because it is quite apt. That said, I would also point out that there are often valid and justifiable reasons for incurring technical debt. As long as the teams incurring that debt know what they are doing, have justified it, and have the means to pay it off in the future, that is fine. (Perhaps there should be some kind of Consumer Protection Agency for developers? Never mind…)
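To make the interest metaphor concrete, here is a toy calculation. The 10% per-sprint “interest rate” is invented purely for illustration; real debt grows unevenly and is much harder to measure:

```python
# A toy model of compounding technical debt: a fix that costs
# `principal_hours` today gets `rate_per_sprint` harder each sprint
# as the surrounding code churns. All numbers are illustrative only.

def debt_cost(principal_hours, rate_per_sprint, sprints_deferred):
    return principal_hours * (1 + rate_per_sprint) ** sprints_deferred

# An 8-hour fix deferred for 6 sprints at 10% "interest" per sprint
# grows to roughly 14 hours of work.
cost = debt_cost(8, 0.10, 6)
```

Even as a back-of-the-envelope exercise, running numbers like these can help a team justify paying debt down sooner rather than later.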
Q: How often does pair programming fail as the result of pairing developers where one has a dominant and one has a passive personality — and how can this be detected and treated?
A: In pair programming, two programmers work together at one workstation – a driver and an observer. The driver is the person who types in the code, while the observer reviews each line as it is typed. The idea is that the driver focuses on the tactical aspects of the task at hand, while the observer takes a strategic view, looking for ways to improve the code and considering how it fits into the overall system.
Generally, you know there is a problem when you see a lack of engagement between the two. Simply put, they are not talking. A good way to encourage communication is to swap their roles frequently – at least once per day.
Q: If the automated tests exist – but are as old as the code they are testing – how would that help?
A: They help prevent regressions, where a change in one part of the code breaks something elsewhere that was untouched.
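Even dated tests encode the behavior the system had when they were written, which is exactly what a regression check needs. A minimal sketch, using a hypothetical `normalize_name` helper invented for this example:

```python
# A regression test pins down existing behavior. If a later change to
# normalize_name, or to code it depends on, alters this behavior, the
# test fails immediately. The function and its rules are hypothetical.

def normalize_name(raw):
    """Collapse runs of whitespace and title-case a customer name."""
    return " ".join(raw.split()).title()

def test_collapses_whitespace():
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"

def test_single_word():
    assert normalize_name("ada") == "Ada"
```

Run with a test runner such as pytest; wiring tests like these into a Continuous Integration build is what turns old tests into an ongoing regression safety net.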
Q: What happened to defensive programming?
A: I think it is an issue of semantics. The principles of defensive coding are well represented in Agile development techniques. These include reducing source code complexity, engaging in formal code reviews, software testing (especially in the context of Continuous Integration), reuse, and so on. I would posit that an experienced and mature Agile team is already practicing defensive programming.
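As a small sketch of what defensive coding looks like in practice: validate inputs at the boundary and fail loudly with a clear message, rather than letting bad data propagate. The function and its validation rules are invented for illustration:

```python
# Guard clauses reject invalid input immediately, so a bug in the
# caller surfaces here with a clear error rather than as corrupted
# data downstream. Example function invented for illustration.

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not isinstance(price, (int, float)) or price < 0:
        raise ValueError(f"price must be a non-negative number, got {price!r}")
    if not 0 <= percent <= 100:
        raise ValueError(f"percent must be between 0 and 100, got {percent!r}")
    return round(price * (1 - percent / 100), 2)
```

Combined with automated tests that exercise both the happy path and the rejection cases, guard clauses like these are defensive programming in everything but name.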