How well do your DevOps metrics provide insight into the speed, risk, and quality of software delivery? Watch this video to learn the four most critical measures of DevOps performance.
In the video I talk about how, at VersionOne, we have put a lot of thought into what it takes to create a data-driven DevOps organization, and it starts with flow metrics.
We have a challenge in DevOps. The minute we convert backlog items into source code and then convert that source code into binary artifacts, we lose all visibility into flow. It’s very difficult to track a specific backlog item, in the form of a binary artifact, as it moves through every single stage of our value stream.
Visualizing Delivery Flow
Creating the capability to track backlog items through delivery is the first step to solving this challenge. We call that affiliation: the ability to affiliate a specific backlog item with specific code, connect that code to specific artifacts, and then track those artifacts as they move from phase to phase, so the story moves with them.
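The affiliation idea above can be sketched as a simple data model linking a backlog item to its commits, the commits to their build artifacts, and each artifact to its current delivery stage. The class names, fields, and stage names here are illustrative assumptions, not a VersionOne API:

```python
# Minimal sketch of "affiliation": backlog item -> commits -> artifacts -> stage.
# All names here are hypothetical, chosen only to illustrate the linkage.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    stage: str  # e.g. "build", "test", "staging", "production"

@dataclass
class Commit:
    sha: str
    artifacts: list = field(default_factory=list)

@dataclass
class BacklogItem:
    item_id: str
    commits: list = field(default_factory=list)

    def stages(self):
        """Every delivery stage currently holding an artifact for this item."""
        return {a.stage for c in self.commits for a in c.artifacts}

story = BacklogItem("S-101")
story.commits.append(Commit("a1b2c3", artifacts=[Artifact("app-1.4.2.jar", "staging")]))
print(story.stages())  # {'staging'}
```

With this chain in place, asking "where is story S-101 right now?" becomes a lookup rather than guesswork.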
Visualizing Portfolio Delivery Flow
Tracking the flow of backlog items across each step in your value stream map is important, but often what people really care about are combinations of backlog items, like features and epics. A product owner might ask: what’s the distribution of this epic across my value stream map right now, or how much of this epic has already been delivered to my end users? It would also be valuable to know whether every single backlog item in a feature has reached a specific stage in your value stream map. Having that kind of visibility is really important.
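Those two product-owner questions can be answered with a couple of small aggregations over per-item stage data. This is a sketch under assumed stage names, not any particular tool's API:

```python
# Sketch: distribution of an epic's backlog items across value-stream stages,
# and the fraction already delivered to end users. Stage names are assumptions.
from collections import Counter

def epic_distribution(item_stages):
    """item_stages maps backlog item id -> its current stage."""
    return Counter(item_stages.values())

def percent_delivered(item_stages, delivered_stage="production"):
    total = len(item_stages)
    done = sum(1 for s in item_stages.values() if s == delivered_stage)
    return 100.0 * done / total if total else 0.0

epic = {"S-1": "production", "S-2": "staging", "S-3": "qa", "S-4": "production"}
print(epic_distribution(epic))  # Counter({'production': 2, 'staging': 1, 'qa': 1})
print(percent_delivered(epic))  # 50.0
```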
The next step in creating a data-driven organization is to take this real-time visibility and start to convert it into flow metrics that provide objective measurements of how your value stream is performing.
The first flow metric that we have to consider is lead time: how long it takes the average work item to travel from development all the way to the end user.
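As a concrete illustration of the definition above, lead time per work item is just the elapsed time between its start-of-development and delivered timestamps, averaged across items. The timestamps here are made up for the example:

```python
# Sketch: lead time = time from start of development to delivery to the end
# user, averaged over work items. The dates below are illustrative.
from datetime import datetime

def lead_time_days(started, delivered):
    return (delivered - started).total_seconds() / 86400

items = [
    (datetime(2019, 3, 1), datetime(2019, 3, 8)),  # 7 days
    (datetime(2019, 3, 2), datetime(2019, 3, 5)),  # 3 days
]
avg_lead_time = sum(lead_time_days(s, d) for s, d in items) / len(items)
print(avg_lead_time)  # 5.0
```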
Delivery Work In Progress
The next set of metrics that we have to track is work in progress (WIP). We do a really good job of understanding work in progress through development, but the minute we start converting backlog items into code and then convert that code into binary artifacts, it becomes really difficult to track work in progress as those artifacts move through each phase of delivery.
Touch Time & Wait Time
The goal is to understand and visualize work in progress even after stories get compiled into binary artifacts, and to know exactly how much value is stacking up at every phase of delivery. For example, how much work in progress do we have in staging right now?
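Once each work item is affiliated with its current delivery phase, the staging question above reduces to a count per phase. The phase names are assumptions for illustration:

```python
# Sketch: counting WIP per delivery phase, so "how much WIP is in staging
# right now?" stays answerable after stories become binary artifacts.
from collections import Counter

def wip_by_phase(item_phases):
    """item_phases maps work item id -> its current delivery phase."""
    return Counter(item_phases.values())

current = {"S-1": "build", "S-2": "staging", "S-3": "staging", "S-4": "qa"}
wip = wip_by_phase(current)
print(wip["staging"])  # 2
```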
The next set of metrics that I think are really important and really helpful are concepts we borrowed from Lean: touch time versus wait time. Touch time is the amount of time we actually spend adding value to a user story or defect; wait time is the amount of time it sits stationary, waiting for some next step, some next activity.
If you can calculate touch time and wait time at the work-item level, that starts to provide some really powerful information. If you also know the cycle time through a particular phase of delivery, for example the quality assurance phase, you can start to ask:
- How efficient is this phase?
- What’s the ratio of touch time to the overall cycle time of that phase?
That starts to show us where the waste is, where the friction is, and where the opportunities are to start streamlining flow.
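The ratio in the list above is often called flow efficiency in Lean: touch time divided by total cycle time (touch plus wait) for a phase. A minimal sketch, with illustrative numbers:

```python
# Sketch: flow efficiency for one delivery phase, expressed as touch time
# over total cycle time (touch + wait). The hours below are illustrative.
def flow_efficiency(touch_hours, wait_hours):
    cycle = touch_hours + wait_hours
    return touch_hours / cycle if cycle else 0.0

# A story that spent 6 hours being actively worked in QA and 18 hours waiting:
print(flow_efficiency(6, 18))  # 0.25, i.e. 75% of the QA cycle time was waiting
```

A low ratio for a phase is a direct pointer to where work is sitting idle rather than being worked.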
It’s also good to understand, over the course of a release, what percentage of the work introduced can’t be tied back to specific business value. What’s the percentage of rogue commits, for example?
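One way to compute that percentage is to scan commit messages for a backlog-item reference; anything without one is "rogue." The `S-123` id convention used here is an assumption for illustration, not a fixed standard:

```python
# Sketch: percentage of commits in a release that can't be tied back to a
# backlog item (here, any message lacking a story id like "S-123").
import re

STORY_ID = re.compile(r"\bS-\d+\b")  # assumed id convention

def rogue_commit_pct(commit_messages):
    if not commit_messages:
        return 0.0
    rogue = sum(1 for m in commit_messages if not STORY_ID.search(m))
    return 100.0 * rogue / len(commit_messages)

msgs = ["S-101 add login form", "fix typo", "S-102 validate input", "hotfix"]
print(rogue_commit_pct(msgs))  # 50.0
```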
In addition to simple risk measures like the percentage of rogue commits in a code base, we’re starting to imagine some really innovative and exciting ways to think about risk as backlog items move through our value stream maps.
One of the ones that I find most exciting is cyclomatic complexity, or fragility. We’ve been able to measure the cyclomatic complexity and fragility of our code base for years, and we’ve got great tools to help us do that, but what we haven’t been able to do until now is measure the cyclomatic complexity or fragility of specific backlog items. For example:

- Specific user stories
- Specific features
Or answer questions like: what’s the relative fragility of release A compared to release B or release C?
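One plausible way to get there, assuming you already have file-level complexity scores from an existing analysis tool, is to roll those scores up through the files each story's commits touched, then aggregate stories into releases. The scores and file names below are illustrative inputs, not output of a real analyzer:

```python
# Sketch: rolling file-level cyclomatic complexity up to backlog items and
# releases via the files each story touched. Complexity values are made up.
def item_fragility(files_touched, complexity_by_file):
    """Sum the complexity of every file a backlog item's commits touched."""
    return sum(complexity_by_file.get(f, 0) for f in files_touched)

complexity = {"auth.py": 14, "billing.py": 32, "ui.py": 5}
story_files = {"S-201": ["auth.py", "ui.py"], "S-202": ["billing.py"]}

scores = {item: item_fragility(files, complexity)
          for item, files in story_files.items()}
print(scores)  # {'S-201': 19, 'S-202': 32}

# Release-level fragility: aggregate over the stories in each release.
release_a = sum(scores[i] for i in ["S-201", "S-202"])
print(release_a)  # 51
```

Comparing these release-level sums is one simple way to make "release A is more fragile than release B" an objective statement.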
Rate of Change
One last class of metrics that I think is interesting with regard to risk is change visibility. We can now track how dynamic our code base is through various stages of our value stream. We would expect our code base to be very dynamic in the early stages of our DevOps value stream, but as we get closer to quality assurance, and certainly beyond the definition of done or beyond code complete, we would expect that code base to be very stable.
Being able to understand the rate of change, and how dynamic our code base is as we approach various stages of delivery, is important because the more change we have later in the value stream, the more risk we carry. There’s a lot of study and research around the risk of late-stage change, but providing that visibility to all stakeholders, and being able to measure it objectively, is really important.
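One simple, objective measure of that late-stage risk is the share of total code churn that lands in late stages. The stage names and the choice of which stages count as "late" are assumptions for illustration:

```python
# Sketch: code churn (lines changed) bucketed by value-stream stage, flagging
# change that arrives late in delivery. Stage names are assumptions.
def late_stage_churn_pct(changes,
                         late_stages=frozenset({"qa", "staging", "production"})):
    """changes: list of (stage, lines_changed) pairs. Returns the percentage
    of total churn that landed in late stages, where change carries most risk."""
    total = sum(n for _, n in changes)
    late = sum(n for s, n in changes if s in late_stages)
    return 100.0 * late / total if total else 0.0

changes = [("dev", 800), ("dev", 600), ("qa", 150), ("staging", 50)]
print(late_stage_churn_pct(changes))  # 12.5
```

A rising value for this metric across releases is a concrete signal that change is drifting later in the value stream.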
We’ve talked about some flow metrics and some risk metrics. These are just a couple of examples of how you can begin to build a data-driven DevOps organization and ultimately help you get good at getting better.
I hope this video inspires you to take another look at what metrics you are using to measure DevOps performance. Learn more about how you can accelerate the speed, reduce the risk, and ensure the quality of DevOps deployments with VersionOne Continuum.