A blog about software delivery, Agile & DevOps by Mirco Hering

Monthly Archives: February 2015

Those of you who know me personally know that nothing gets me on my soapbox quicker than a discussion about measuring productivity. Just in the last week I have been asked three times how to measure this in Agile. I was surprised to notice that I had not yet put my thoughts on paper (well, in a blog post). This is well overdue, so here are my thoughts.

Let’s start with the most obvious: productivity measures output, not outcome. The business cares about outcomes first and outputs second; after all, there is no point creating Betamax cassettes more productively than a competitor if everyone buys VHS. Understandably, it is difficult to measure the outcome of software delivery, so we end up talking about productivity. Having swallowed this pill and being unable to give anything but anecdotal guidance on how to measure outcomes, let’s look at productivity measurements.

How not to do it! The worst possible way that I can think of is to measure literally by output. Think of widgets or Java classes or lines of code. If you measure this output you are at best not measuring anything meaningful and at worst encouraging bad behaviour. Teams that focus on creating an elegant, easy-to-maintain solution with reusable components will look less productive than the ones just copying things or creating new components all the time. This is bad. And think of the introduction of technology patterns like stylesheets: all of a sudden, for a redesign you only have to update one stylesheet instead of all 100 web pages. On paper this would look like a huge productivity loss, updating 1 stylesheet rather than 100 pages in a similar timeframe. Innovative productivity improvements will not be accurately reflected by this kind of measure, and teams will not look for innovative ways as much, given they are measured on something different. Arguably function points are similar, but I have never dealt with them, so I will reserve judgement until I have firsthand experience.

How to make it even worse! Yes, widget- or line-of-code-based measurements are bad, but it can get even worse. If we base measurements on these, we do not incentivise teams to look for reuse or componentisation of code, and we are also in danger of destroying their sense of teamwork by measuring what each team member contributes. “How many lines of code have you written today?” I have worked with many teams where the best coder writes very little code, because he is helping everyone else around him. The team is more productive by him doing this than by him writing lots of code himself. He multiplies the team’s strength rather than growing its productivity linearly by doing more himself.

Okay, you might say that this is all well and good, but what should we do? We clearly need some kind of measurement. I completely agree. Here is what I have used and I think this is a decent starting point:

Measure three different things:

Delivered Functionality – You can do this by measuring how many user stories or story points you deliver. If you are not working in Agile, you can use requirements, use cases or scenarios: anything that actually relates to what the user gets from the system. This is the closest to measuring outcome and hence the most appropriate measure. Of course these items come in all different sizes and you’d be hard pressed to strictly compare two data points, but the trend should be helpful. If you do some normalisation of story points (another great topic for a soapbox), that will give you some comparability.

Waste – While it is hard to measure productivity and outcomes, it is quite easy to measure the opposite: waste! Of course you should decide contextually which elements of waste you measure, and I would be careful with composites unless you can translate them into money (e.g. “all the waste adds up to 3 MUSD”, not “we have a waste index of 3.6”). Composites of such diverse elements as defects, manual steps, process delays and handovers are difficult to understand. If you cannot translate these into dollars, just choose 2 or 3 main waste factors and measure them. Once they improve, find the next one to measure and track.
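To make the "translate waste into money" idea concrete, here is a minimal sketch. All the waste categories, counts, hours and the hourly rate are hypothetical assumptions, purely for illustration:

```python
# Minimal sketch: translate measured waste into a dollar figure rather than
# an abstract index. All counts, hours and the rate are hypothetical.
HOURLY_RATE_USD = 80  # assumed blended team rate

waste_items = {
    "defect rework":       {"count": 120, "hours_each": 4},
    "manual deploy steps": {"count": 52,  "hours_each": 2},
    "handover delays":     {"count": 30,  "hours_each": 6},
}

def waste_cost_usd(items, rate=HOURLY_RATE_USD):
    """Total cost of all measured waste items in dollars."""
    return sum(i["count"] * i["hours_each"] * rate for i in items.values())

print(f"Estimated waste: ${waste_cost_usd(waste_items):,}")
```

A figure like this is immediately comparable period over period and speaks to the business in its own terms, which a composite index does not.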

Cycle time – This is the metric I consider meaningful above all others. How long does it take to get a good idea implemented in production? You should use the broadest definition that you can measure, then break it down into sub-components to understand where your bottlenecks are and optimise those. Many of these will be affected by the level of automation you have implemented and the amount of lean process optimisation you have done.
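The breakdown into sub-components can be as simple as diffing per-stage timestamps. A minimal sketch, where the stage names and dates are hypothetical:

```python
from datetime import datetime

# Minimal sketch: compute total cycle time and its sub-components from
# per-stage timestamps. Stage names and dates are made up for illustration.
stages = {
    "idea_raised":   datetime(2015, 2, 2),
    "dev_started":   datetime(2015, 2, 9),
    "dev_done":      datetime(2015, 2, 16),
    "in_production": datetime(2015, 2, 20),
}

def stage_durations(stages):
    """Days spent between each consecutive pair of stages."""
    names = list(stages)
    return {f"{a} -> {b}": (stages[b] - stages[a]).days
            for a, b in zip(names, names[1:])}

total_days = (stages["in_production"] - stages["idea_raised"]).days
print(stage_durations(stages))
print("total cycle time:", total_days, "days")
```

Fed with real timestamps from your backlog and deployment tooling, the per-stage numbers point straight at the bottleneck to optimise first.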

This is by no means perfect. You can game these metrics just like many others and sometimes external factors influence the measurement, but I strongly believe that if you improve on these three measures you will be more productive.

There is one more caveat to mention. You need to measure exhaustively and in an automated fashion. The more you rely on just a subset of the work, and the more you track activities manually, the less accurate these measures will be. This also means that you need to measure things that don’t lead to functionality being delivered, like paying down technical debt, analysing feature requests that never get implemented, or defect triage. There is plenty of opportunity to optimise in this space: paying technical debt down quicker, validating feature requests quicker, reducing feedback cycles to cut defect triage times.

If you are like me, at some stage you learned about the Waterfall methodology. The origin of the Waterfall methodology is often attributed to Winston Royce and his paper “Managing the Development of Large Software Systems”. Recently I have heard many people speak about this paper and imply that it has been misunderstood by the general audience: rather than prescribing Waterfall, it was actually recommending an iterative (or shall we call it Agile?) approach. I just had to read it myself to see what is behind these claims.

I think there is some truth to both interpretations. I will highlight four points of interest and provide a summary afterwards:

Fundamentals of Software Development – I like the way he starts by saying that fundamentally all value in software delivery comes from a) analysis and b) coding. Everything else (documentation, testing, etc.) is required to manage the process; customers would ideally not pay for those activities, and most developers would prefer not to do them. This is such a nice and simple way to describe the problem. It speaks to the developer in me.

Problems with Waterfall Delivery – He then goes on to describe how the Waterfall model is fundamentally flawed and how, in reality, stage containment is never successful. This picture and its caption are what most Agile folks use as evidence: “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps.” So I think he again identifies the problem correctly, based on his delivery experience at NASA.

Importance of Documentation – Now he starts to describe his solution to the waterfall problem in five steps. I will spare you the details, but one important point he raises is documentation. To quote his paper: “How much documentation? My own view is quite a lot, certainly more than most programmers, analysts or program designers are willing to do…” He basically uses documentation to drive the software delivery process and has some elaborate ideas on how to use it correctly, much of which makes complete sense in a waterfall delivery method.

Overall solution – At the end of the paper he provides his updated model, and I have to say it looks quite complicated. To be honest, many other delivery frameworks like DAD or SAFe look similarly complicated, so we should not discount it for that reason alone. I did not try to fully understand the model, but it is basically a waterfall delivery with a few Agile ideas sprinkled in: early customer involvement, having two iterations of the software to get it right, and a focus on good testing.

Summary – Overall I think Winston identifies the problems and starts to think in an Agile direction (okay, Agile didn’t exist then, but you know what I mean). I think his approach is still closer to the Waterfall methodology we all know, but he is going in the right direction with iterations and customer involvement. As such, his paper is neither the starting point of the Waterfall model nor the starting point of an Agile methodology. I think a software archaeologist would see it as an in-between model that came before its time.

I have already written about the importance of DevOps practices (or, for that matter, Agile technical practices) for Agile adoption, and I don’t think many people argue the contrary. Ultimately, you want those two things to go hand in hand to maximise the outcome for your organisation. In this post I want to take a closer look at popular scaling frameworks to see whether they explicitly or implicitly include DevOps. One could of course argue that Agile models should focus on just the Agile methodology and its associated processes and practices. However, given that the technical side is often the inhibitor to achieving the benefits of Agile, I think DevOps should be reflected in these models to remind everyone that software is created first and foremost by developers.

Let’s look at a few of the more well known models:

SAFe (Scaled Agile Framework) – This one is probably the easiest, as DevOps is explicitly called out in the big picture. I would however consider two aspects of SAFe as relevant for the wider discussion: the DevOps team and the System Team. While the DevOps team covers the aspects that have to do with deployment into production and the automation of that process, the System Team focuses more on development-side activities like continuous integration and test automation. For me there is a problem here, as it feels a lot like the DevOps team is the operations team and the System Team is the build team. I’d rather have them in one System/DevOps team with joint responsibilities. If you consider both of them as just concepts in the model and have them working closely together, then I feel you start getting somewhere. This is how I do it on my projects.

DAD (Disciplined Agile Delivery) – In DAD, DevOps is woven into the fabric of the methodology, but not spelled out as clearly as I would like. DAD is much more focused on the processes (perhaps an inheritance from RUP, as both were influenced/created by IBM folks). There is however a blog post by “creator” Scott Ambler that draws all the elements together. I still feel that a bit more focus on the technical aspects of delivery in the construction phase would have been better. That being said, there are a few good references if you go down to the detailed level: the Integrator role has an explicit responsibility to integrate all aspects of the solution, and the Produce a Potentially Consumable Solution and Improve Quality processes call out many technical practices related to DevOps.

LeSS (Large-Scale Scrum) – In LeSS, DevOps is not explicitly called out, but it is well covered under Technical Excellence. Here it talks about all the important practices and principles, and the descriptions of each of them are really good. LeSS puts much less focus on telling you exactly how to put these practices in place, so it will be up to you to define which team, or who in your team, should be responsible for them (or, in true Agile fashion, perhaps it is everyone…).

In conclusion, I have to say that I like the idea of combining the explicit structure of SAFe with the principles and ideas of LeSS to create my own meta-framework. I will certainly use both as references going forward.

What do you think? Is it important to reflect DevOps techniques in a Scaled Agile Model? And if so, which one is your favourite representation?

I recently got asked about the principles that I follow for DevOps adoptions, so I thought I’d write down my list of principles and what they mean to me:

Test Early & Often – This principle is probably the simplest one. Test as early as you can and as often as you can. In my ideal world we find more and more defects close to the developer by a) running tests more frequently, enabled by test automation, b) providing integration points (real or mocked) as early as possible, and c) minimising the variables between environments. With all this in place, the proportion of defects found outside of the development cycle should drop significantly.
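Point b) above, mocked integration points, can be sketched in a few lines. The `checkout_total` function and the pricing client here are hypothetical names, but the pattern is general: the check runs in seconds on every commit, long before a real integrated environment exists.

```python
from unittest import mock

# Minimal sketch: test against a mocked integration point so defects are
# found close to the developer. All names here are hypothetical.

def checkout_total(pricing_client, items):
    """Sum the price of each item using an external pricing service."""
    return sum(pricing_client.price(item) for item in items)

# The real pricing service may not exist yet; a mock stands in for it.
client = mock.Mock()
client.price.side_effect = lambda item: {"book": 20, "pen": 3}[item]

assert checkout_total(client, ["book", "pen"]) == 23
print("checkout test passed")
```

Later, the same test can be pointed at the real service to minimise the variables between environments, which is exactly point c).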

Improve Continuously – This principle is the most important and the hardest to do. No implementation of DevOps practices is ever complete. You will always learn new things about your technology and solution, and without being vigilant about it, deterioration will set in. The corollary is that you need to measure what you care about; otherwise all improvements will be unfocused and you will be unable to demonstrate them. Measure the metrics that best represent your areas of concern, like cycle time, automation level, defect rates, etc.

Automate Everything – Easier said than done, this is the one most people associate with DevOps. It is the automation of all processes required to deliver software to production or any other environment. For me this goes further than that, it means automating your status reporting, integrating the tools in your ecosystem and getting everyone involved to focus on things that computers cannot (yet) do.

Cohesive Teams – Too often I have been on projects where the silo mentality was the biggest hindrance to progress, be it between delivery partner and client, between development and test teams, or between development and operations. Not working together and not having aligned goals and measures of success will make the technical problems look like child’s play. Focus on getting the teams aligned early in your process so that everyone is moving in the same direction.

Deliver Small Increments – Complexity and interdependence are the things that make your cycle time longer than required. The more complex and interdependent a piece of software is, the more difficult it is to test and to identify the root cause of any problem. Look for ways to make the chunks of functionality smaller. This will be difficult in the beginning but the more mature you get, the easier this becomes. It will be a multiplying effect that reduces your time to market further.

Outside-In Development – One of the most fascinating pieces of research I have seen is by Walker Royce on the success of different delivery approaches. He shows that delays are often caused by integration defects that are identified too late. They are identified late because that is the first time you can test in an integrated environment. Once you find them, you might have to change the inner logic of your systems to accommodate the required changes to the interface. Now imagine doing it the other way around: you test your interfaces first, and once they are stable you build out the inner workings. This reduces the need for rework significantly. This principle holds for integration between systems and for modules within the same application, and there are many tools and patterns that support outside-in development.
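A minimal sketch of what "test the interface first" can look like in code. The interface, stub and converter names are all hypothetical; the point is that the contract is agreed and exercised before the real implementation exists:

```python
from abc import ABC, abstractmethod

# Minimal sketch of outside-in development: stabilise the interface first,
# test consumers against a stub, and build the real implementation later
# behind the same contract. All names here are hypothetical.

class RateProvider(ABC):
    """The agreed interface; this contract is stabilised first."""
    @abstractmethod
    def rate(self, currency: str) -> float: ...

class StubRateProvider(RateProvider):
    """Stands in until the real provider is built."""
    def rate(self, currency: str) -> float:
        return 1.5 if currency == "EUR" else 1.0

def convert(amount: float, currency: str, provider: RateProvider) -> float:
    return amount * provider.rate(currency)

# Consumers are developed and tested against the stub; swapping in the
# real provider later requires no change to this code.
print(convert(100, "EUR", StubRateProvider()))
```

Because the inner workings only ever have to satisfy an already-stable contract, the late integration defects that Royce describes, and the rework they cause, are largely avoided.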

Experiment Frequently – Last but not least you need to experiment. Try different things, use new tools and patterns and keep learning. It’s the same for your business, by using DevOps you can safely try new things out with your customers (think of A/B testing), but you should do the same internally for your development organisation. Be curious!

If you follow these seven principles I am sure you are on the right track to improving your speed to market.