Q&A with Gil Zilberfeld on Agile Product Planning and Management

Gil Zilberfeld gave a presentation about the new agile at the Agile Eastern Europe 2015 conference. InfoQ interviewed Zilberfeld about better ways to do product planning and tracking, his thoughts on #NoEstimates, how to include value in product planning discussions, and how to improve decision making in product development.

InfoQ: Organizations are looking for better ways to do product planning and tracking. Can you mention some of them?

Zilberfeld: Until recently, organizations would start their agile journey at the development team level. Scrum, kanban and other methods have given us ways to plan and track. Scrum has planning processes out of the box: for a release, for an iteration and for a work day. Burn-down charts and task boards visualize status and, if analyzed well, can offer a good picture. For example, I worked with a team that consistently delivered 60% of their backlog in an iteration. Luckily, we just needed to look at a burn-down chart to see that the team worked at a consistent pace, and that they were spending too much time planning things that would never enter the iteration.
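The signal Zilberfeld describes can be made concrete with a few lines of arithmetic. This is a minimal sketch with hypothetical numbers, not data from the team he mentions:

```python
# Hypothetical per-iteration data for a team that plans more than it delivers.
planned_points = [50, 48, 52, 50]    # story points committed each iteration
completed_points = [30, 29, 31, 30]  # story points actually delivered

# Completion rate per iteration, then the average across iterations.
rates = [done / planned for done, planned in zip(completed_points, planned_points)]
average_rate = sum(rates) / len(rates)

print(f"average completion rate: {average_rate:.0%}")
# A stable rate around 60% suggests a consistent pace, and that a large
# share of planning effort goes into items that never enter the iteration.
```

A flat average like this is exactly what a burn-down chart shows visually: the pace is predictable, so the fix is to plan less, not to push the team harder.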

Kanban offers a kanban board, but that’s deceptively simple. If we apply kanban’s "Make policies explicit" principle, we can track how work flows and where it gets stuck. If we create a cumulative flow diagram (CFD), we can plan ahead based on lead time and WIP. In both cases, we’re collecting the information on which we’re going to base our plans. The benefit is getting some predictability into our process. If we apply lean principles, the theory of constraints and queuing theory can increase predictability many-fold, by visualizing the process, identifying bottlenecks and improving flow.
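The "plan ahead based on lead time and WIP" step follows from Little's Law, which relates the quantities a CFD makes visible. A minimal sketch, with hypothetical numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
# On a CFD, WIP is the height of the in-progress band and throughput
# is the slope of the "done" line. All figures here are hypothetical.
avg_wip = 12.0             # items currently in progress
throughput_per_week = 4.0  # items finished per week

lead_time_weeks = avg_wip / throughput_per_week
print(f"expected lead time: {lead_time_weeks:.1f} weeks")  # 3.0 weeks
```

Read the other way, the same relation shows why limiting WIP shortens lead time: with throughput held steady, halving WIP halves the expected wait.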

Organizations that have that part figured out are now looking to apply the same principles to portfolio management. SAFe offers release trains for that, and kanban scales from development teams to full product teams. This has started happening only in recent years, so it would be wise to treat reported results (both successes and failures) cautiously as proof. The more people involved, the more complex the process gets, and predictability starts to suffer. Once again, visibility into the process leads to better predictability and the ability to improve.

InfoQ: What are your thoughts about #NoEstimates?

Zilberfeld: When I first heard of #NoEstimates, it sounded like developers were vying for even more control, taken from project sponsors. I mean, aren’t the people paying for the project allowed even a fraction of control over it?

As I became more interested, I asked what estimates are really used for. It probably wouldn’t surprise you: we’re looking for predictability! Sponsors want estimates because they believe they will help them make the best decisions about the project. Alas, this doesn’t always happen. Apart from dysfunctions like estimates magically becoming commitments, or estimate inflation, we don’t get the tool we think we need.

Estimates may be easy to ask for, but they are rarely helpful. In fact, they drive decisions based on cost, rather than value. When we make cost-driven decisions, we mitigate risk, rather than innovate. In that sense, estimates limit our ability to create new and more powerful solutions.

For me, #NoEstimates became not-just-estimates. If our estimates matter for decisions, we need to support them with other factors, like complexity evaluation and historical data. If they are not useful, or are low-confidence, then don’t invest too much in them. And of course, integrate the value of the project into the equation: sometimes it is worth it regardless of the cost.

InfoQ: Estimates are about costs, not about the value that products and services can bring as you mentioned. Are there things that we can do to include value in the discussion when we are doing product planning?

Zilberfeld: It’s funny: we usually ask for cost estimates when making decisions, but we almost never question value. We assume that someone has already done the right prioritization. On the other hand, I’ve witnessed many projects where, after asking for cost estimates and getting bloated numbers, the projects still went forward. Why? Because they were valuable enough.

Value estimation is as hard as cost estimation. However, practitioners experiment with methods such as Cost of Delay (CoD) to estimate the value of features and products.

For example, let’s look at a set of options:

Feature A can sell more, so it can bring more money.

Feature B may support retaining existing customers.

Feature C can increase our capability to add more features quickly.

Each one of those can be evaluated in actual currency. We can now estimate how much money we expect to earn because of feature A, or how much we’re going to save with B. Next we ask what the cost of delay of each is, so we can compare them. Then we can check the impact of implementing feature C first, weighing it against the early revenues from A or savings from B that we would be delaying.

Once we put all features on the same baseline and estimate the value of each, we can decide which to go with first.

CoD methods and their like allow us to base decisions on more information, not just cost estimates.
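One common way to turn the comparison above into a ranking is CD3 (Cost of Delay Divided by Duration), which favours the option that unlocks the most value per week of work. A hedged sketch, with all figures hypothetical:

```python
# Hypothetical Cost of Delay figures for the three features above.
features = {
    # name: (cost of delay in $ per week, estimated duration in weeks)
    "C (faster delivery)":  (8_000, 1),
    "A (new sales)":        (20_000, 4),
    "B (retention saving)": (12_000, 2),
}

# CD3 = cost of delay / duration; higher means schedule it sooner.
ranked = sorted(features.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)

for name, (cod, weeks) in ranked:
    print(f"{name}: CD3 = {cod / weeks:,.0f}")
```

With these numbers the ranking comes out C, then B, then A: even though A carries the largest absolute cost of delay, the quick win C buys its capability improvement at the lowest delay to everything else.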

InfoQ: Product development involves a lot of decision making. Can you give some examples of how this is typically done? Which challenges do organizations face when making such decisions?

Zilberfeld: I just gave a talk called "ROI is Dead" at the ACE! conference, in which I talked about how product managers decide what to do, from gut feeling to running complex simulations. We base many decisions at the portfolio level on what PMs tell us. You could say that product managers are gamblers. Unfortunately, even if the gamble was right, the product may fail regardless: a good product can fail miserably just because the same company released a bad product two years ago, got a bad reputation, and now no one gives the new one a second (or even a first) look.

So the biggest issue organizations face is complexity. Complexity is not new; it was always there. But in the new agile world, it is hard for us to ignore it. When old products kill new ones, that’s complexity. When we’re hit with unforeseen events, that’s complexity. It’s the project (and sometimes company) killer.

InfoQ: What can organization do to improve their decision making in product development? How can they handle complexity?

Zilberfeld: Don’t ignore complexity. Instead, we need to find a way through the fog without taking too many risks. We need to assume ignorance on our part, at every level: business, development, operations. Once we admit it, the hard part begins: we need to change the way we work.

For years, product managers defined the product, and then the development team took over for months. The only feedback we got came when the product was released. We can’t continue working that way anymore. Instead of a multi-month cycle, we need to create experiments that are not risky. Instead of a $1M investment in a whole product, we need to find out for $10K whether it actually solves a problem. If we’re right, we can continue validating our assumptions incrementally. If not, we have just saved the company a whole lot of money.

When I work with product people, I continue to badger them about their confidence in the value of their backlog, including asking for proof. When there isn’t any, we plan a short experiment, go to the customer and learn. If we’re right, we continue. If not, we change our plans.