What Matters Most for DevOps Value Stream Efficiency
CollabNet reached out to its global audience at the end of last year and conducted an industry survey to learn more about DevOps priorities and the concerns of executive leaders, project managers, engineers and other software development and testing professionals.
We asked more than 200 respondents, primarily from Fortune 2000 organizations, a few questions to better understand how we could serve their needs. The questions asked were designed to uncover respondents’ priorities and perceptions regarding the barriers faced when optimizing software delivery value streams.
If you’ve followed CollabNet at all in the last few years, you should know two very important themes for us are DevOps visibility and metrics. These are two of our passions, two abilities we have sought to provide to large organizations developing software. We have sat across the table from dozens of IT professionals who expressed a need for those abilities. In this survey, we wanted to find out: what are the main challenges to improving value stream efficiency today? And are these two capabilities, metrics and visibility, still as necessary as we thought?
Flow Metrics and Visibility
For starters, we asked participants which of the following areas posed the greatest challenge to value stream efficiency:
1. Value Stream Automation
2. Audit and Compliance
3. Metrics, Reporting & Visibility
4. Value Stream Integration
The answer was not too surprising. Knowledge is power, or in this case, knowledge is efficiency. Organizations need to be able to view and measure the entire lifecycle with all its tools, people and processes, in order to improve quality and efficiency. You can see the responses in the graphic below.
Nearly half of the respondents indicated that the greatest barrier to efficiency was a lack of metrics, reporting and visibility. But what about the other half? Don’t worry, we’ll get to automation and compliance. For now, let’s take a closer look at metrics.
More About Metrics
What metrics exactly are we talking about? We wanted to know where folks were in collecting feedback and data throughout the lifecycle. It’s important to have insight into flow and efficiency not only in operations, at the end of the cycle, but also early on, where the value stream starts.
Nearly half of our surveyed enterprise organizations were still working to build basic operational reports and dashboards for DevOps initiatives. Operational metrics include basic measures of activity, such as the number of code commits, builds, tests failed, defects found, deployments, and releases. Nearly one third of survey participants are beginning to focus less on basic operational metrics, and are turning their attention toward lean flow metrics that precisely describe how features and backlog items move through each activity in the value stream.
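To make the distinction concrete, here is a minimal sketch of what a lean flow metric looks like in practice: computing cycle time per backlog item from stage-transition events. The item IDs, stage names, and timestamps below are hypothetical, illustrative data, not output from any particular tool.

```python
from datetime import datetime

# Hypothetical stage-transition events, as a tool-integration layer
# might export them. Timestamps are ISO-8601 strings.
events = [
    {"item": "FEAT-101", "stage": "in_progress", "at": "2019-03-01T09:00:00"},
    {"item": "FEAT-101", "stage": "done",        "at": "2019-03-04T17:00:00"},
    {"item": "FEAT-102", "stage": "in_progress", "at": "2019-03-02T10:00:00"},
    {"item": "FEAT-102", "stage": "done",        "at": "2019-03-03T10:00:00"},
]

def cycle_times(events):
    """Return hours from 'in_progress' to 'done' for each work item."""
    started, finished = {}, {}
    for e in events:
        ts = datetime.fromisoformat(e["at"])
        if e["stage"] == "in_progress":
            started[e["item"]] = ts
        elif e["stage"] == "done":
            finished[e["item"]] = ts
    return {
        item: (finished[item] - started[item]).total_seconds() / 3600
        for item in finished if item in started
    }

print(cycle_times(events))  # {'FEAT-101': 80.0, 'FEAT-102': 24.0}
```

An operational metric, by contrast, would simply count the events (commits, builds, deployments); a flow metric like this one ties the timestamps together to describe how work actually moves through the value stream.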
We asked participants what sort of metrics they are currently focused on.
As you can see, the greatest emphasis is on operational metrics. Let me take a moment for a shameless plug and let you know that CollabNet’s DevOps solution, Continuum, measures the entire lifecycle, gathering data on the flow of the value stream as well as basic operational metrics. This allows managers to make intelligent decisions, informed by data.
In our first question, 27 percent indicated automating the value stream to be their greatest challenge. Let’s dig into automation for a moment.
Automate My Testing Process
We wanted to know: what exactly are their automation priorities? In this next question, respondents indicated their top two automation priorities.
Replacing manual testing processes with an automated testing strategy is clearly the top DevOps automation target (35 percent), as testing is most likely to be the biggest delivery bottleneck. Second is automating the software deployment process (18 percent). There is also interest in further integrating local automation into a larger “value stream” orchestration workflow (15 percent). What does this mean? We wrote a blog post about this a year ago.
Testing is increasingly becoming automated and we’ve seen a rise in “continuous testing,” where testing is baked in much earlier in the software development lifecycle. In order to trigger automated testing appropriately, your organization needs analytics from a number of places, and across tools. When a number of point solutions exist for different stages in the lifecycle—plan, monitor, and release, for example—in order to achieve efficiency across the entire DevOps lifecycle and make data-driven decisions, these tools need to be rationalized, and analytics must span the gaps.
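One simple way to picture event-driven continuous testing is a mapping from lifecycle events to the test suites they should trigger. The event names and suite names below are hypothetical, a sketch of the idea rather than the API of any specific product.

```python
# Hypothetical lifecycle events mapped to the test suites they trigger.
# Events could arrive from different point solutions (SCM, CI, deploy tools).
TRIGGERS = {
    "commit.pushed":    ["unit"],
    "build.succeeded":  ["integration"],
    "deploy.completed": ["smoke", "performance"],
}

def suites_for(event_name):
    """Return the test suites a lifecycle event should kick off."""
    return TRIGGERS.get(event_name, [])

print(suites_for("deploy.completed"))  # ['smoke', 'performance']
```

In a real pipeline the returned suites would be dispatched to a CI system; the point is that one routing layer spans tools, so testing is triggered by events from anywhere in the lifecycle rather than only at the end.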
Don’t Forget About Compliance
Back to our survey, we next asked participants about compliance. Some of the primary ideals of DevOps can seem to be at odds with the need enterprise organizations have to satisfy compliance and governance requirements. So we asked survey participants about their top two compliance concerns. More than 45 percent indicated a need for improved process consistency and automated documentation to ensure key process controls have been satisfied. There was also strong agreement that automated audit documentation could streamline the overall compliance process (38 percent), and that improved business value traceability could simplify release management processes (31 percent).
Smaller is Better
One of the core themes in both Agile and DevOps is the objective of reducing the amount of change in each incremental release to deploy smaller bundles of change on a faster cadence. When we asked participants to identify the top obstacles facing smaller change batches in the enterprise, 36 percent of respondents pointed to the lack of clear visibility into discrete process bottlenecks (in the form of objective data). As you can see, 26 percent of respondents reported that legacy planning practices were responsible for larger batch sizes, and another 25 percent cited manual processes, such as testing and deployment, as drivers for larger batch sizes in their organization.
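Batch size itself is easy to measure once release data is in one place. Here is a minimal sketch, using made-up release tags and work item IDs, of computing the number of changes shipped per deployment and the average batch size; shrinking that average is the goal the paragraph above describes.

```python
# Illustrative release data: each release tag maps to the work items
# (features and fixes) it shipped. Tags and IDs are hypothetical.
releases = {
    "v1.4": ["FEAT-90", "FEAT-91", "BUG-12", "BUG-13", "FEAT-95"],
    "v1.5": ["FEAT-101", "BUG-20"],
    "v1.6": ["FEAT-102"],
}

# Batch size = number of changes bundled into each release.
batch_sizes = {tag: len(items) for tag, items in releases.items()}
avg = sum(batch_sizes.values()) / len(batch_sizes)

print(batch_sizes)   # {'v1.4': 5, 'v1.5': 2, 'v1.6': 1}
print(f"{avg:.2f}")  # 2.67
```

Tracked over time, a falling average with a rising release count is objective evidence that an organization is moving toward smaller batches on a faster cadence.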
Unifying Value Streams to Reduce Risk
Many enterprises are trying to more tightly couple upstream planning activities with downstream DevOps automation. We asked survey participants to tell us why, and nearly 50 percent of them said that unified value streams improve software quality and/or reduce delivery risk.
Nearly one third of respondents believe that connecting upstream and downstream data to measure value stream performance is the key to continuous improvement and value stream optimization.
Well, there you have it, straight from the horse’s mouth! Did any of these survey responses surprise you? How do you believe the industry should respond?
Let’s hear your thoughts, please leave a comment below and feel free to ask any questions!
To learn more about VersionOne Continuum for measuring the DevOps value stream, please visit .