By contrast, modern product cycles rely on shorter iterations, something along the lines of this diagram:

The assumption behind modern approaches is that the road to good software is shorter when taking small steps with frequent turns than when taking large steps with fewer, more radical turns. (This is geometrically true in the diagrams…)

Old-style product cycles consisted of three main steps: planning (negotiation, prioritization, scheduling), development (design, coding, testing), and launch (alpha/beta, release, outbound marketing). The main question I was trying to tackle in the talk was how the corresponding activities map to product cycles with frequent releases.

On a side note, some organizations use old-style product cycles (infrequent software releases) while applying “agile development” techniques internally (that is, frequent internal releases). While perhaps better than nothing, this approach misses, in my mind, much of the benefit of agile software development. At the end of the day, the biggest benefit is adapting to customer feedback, and when the software never reaches real customers, that value diminishes.

The areas I was trying to tackle in the talk were:

How does planning occur in an environment with no defined planning period (“beginning of the release”), where the working assumption is that many of the details (and the associated effort) will only be revealed during development? And what do roadmaps look like in such an environment?

How do product launches occur in an environment with no defined launch period, where software instead becomes ready in chunks? How and when does customer feedback get incorporated into the cycle?

How does one integrate the new approaches and opportunities brought about by agile development? Mostly, agile approaches facilitate experimentation through proofs of concept and the like (with variants such as MVP, MSP, and lean).

Here are some of the practices we’ve come to follow over the years:

Planning Cadence

Our planning cadence at Webcollage is as follows:

Annually: high-level priorities for the year and a straw-man product framework. We keep a lot of slack, which grows the further out we plan.

Quarterly: we review priorities again and adapt for the upcoming quarter. We still keep slack at around 50%.

Planning Communication

We present external roadmaps to customers in a way that reflects our high-level framework.

As part of the roadmap, we do not normally commit to specific features and timelines. We’ve come to realize that hard commitments directly reduce our degrees of freedom, that is, our ability to be agile. This in turn limits our ability to innovate and bring more value to our entire customer base. (This was a heated discussion at the talk; to some people, the mere idea sounded like science fiction.)

Launch Cadence

We release software every two weeks.

We hold a short weekly meeting (up to one hour), which involves the leadership of many areas of the company: Products, R&D, Professional Services, Pre-Sales, Product Marketing, Operations, Technical Services, Technical Support. During the meeting we review noteworthy features in the last iteration and in the upcoming iteration, and identify follow-on action items and tasks (around launch, rollout etc.).

Launch Management

Features can very rarely be completely ready within one iteration. For one thing, creating product documentation requires a working product, which only exists at the end of each iteration.

I spent some time during the talk presenting Feature Flags: the ability to turn features on or off, oftentimes in the production environment, post-installation. In our environment, we often roll out features incrementally: start with internal users; then open to select customers; then perhaps open to most customers, except ones whose day-to-day work may be affected, making sure we communicate with them properly; then turn the feature on for all customers; and finally, remove the old behavior. (This topic, too, yielded some heated discussion around the potential need to support a large number of configurations, an issue we have not encountered so far.)
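The staged rollout described above can be sketched in a few lines. This is a minimal illustration, not our actual implementation; all names (`RolloutStage`, `FLAGS`, `is_enabled`) are hypothetical, and the per-flag configuration would normally live in a database or config service so it can be changed in production:

```python
from enum import IntEnum


class RolloutStage(IntEnum):
    """Progressively wider audiences for a feature."""
    OFF = 0
    INTERNAL = 1          # internal users only
    SELECT_CUSTOMERS = 2  # opted-in early customers
    MOST_CUSTOMERS = 3    # everyone except explicitly excluded accounts
    ALL = 4


# Hypothetical flag configuration, editable post-installation.
FLAGS = {
    "new-editor": {
        "stage": RolloutStage.SELECT_CUSTOMERS,
        "select": {"acme", "globex"},   # customers who opted in early
        "excluded": {"initech"},        # customers to communicate with first
    },
}


def is_enabled(flag: str, customer: str, internal: bool = False) -> bool:
    """Decide whether `customer` sees `flag` at its current stage."""
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False  # unknown flags default to off
    stage = cfg["stage"]
    if stage == RolloutStage.OFF:
        return False
    if stage == RolloutStage.ALL:
        return True
    if internal:
        return True  # internal users see everything from INTERNAL onward
    if stage == RolloutStage.SELECT_CUSTOMERS:
        return customer in cfg["select"]
    if stage == RolloutStage.MOST_CUSTOMERS:
        return customer not in cfg["excluded"]
    return False  # INTERNAL stage, external caller
```

Advancing a rollout is then just bumping a flag’s `stage`; removing the old behavior corresponds to deleting the flag and the dead branch behind it.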

Feedback Scheduling

As I mentioned in other posts, our methodology is based on Kanban and facilitates “open iterations”. In other words, we allow customer (and internal) feedback to enter the current iteration. This reduces predictability with respect to new functionality, but increases the speed at which we are able to adapt.

Previously, when we shut down iteration content to new requests (as dictated, for example, by Scrum), we ended up with an odd-even syndrome: because it took a few days for feedback to be received and analyzed, it could only be handled in the following iteration.

With our current Kanban-based approach, we can schedule issues resulting from customer feedback even if an iteration has started.
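As an illustrative sketch (the class and names below are hypothetical, not our actual tooling), an open iteration is essentially a pull system whose only hard constraint is the work-in-progress limit; feedback items can enter the queue at any time, even mid-iteration:

```python
from collections import deque


class KanbanBoard:
    """Minimal Kanban sketch: items may enter 'ready' at any time;
    only 'in progress' is constrained, by the WIP limit."""

    def __init__(self, wip_limit: int):
        self.wip_limit = wip_limit
        self.ready = deque()
        self.in_progress = []
        self.done = []

    def add(self, item, expedite: bool = False):
        # Customer feedback can be scheduled mid-iteration;
        # expedited items jump to the front of the queue.
        if expedite:
            self.ready.appendleft(item)
        else:
            self.ready.append(item)

    def pull(self):
        # Pull-based flow: start new work only when under the WIP limit.
        if self.ready and len(self.in_progress) < self.wip_limit:
            self.in_progress.append(self.ready.popleft())

    def finish(self, item):
        self.in_progress.remove(item)
        self.done.append(item)
```

The contrast with a locked sprint is that `add` never fails or waits for an iteration boundary; predictability is traded for responsiveness, exactly the trade-off described above.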

Proof of Concepts and Feature Depth

In the old days, product managers had to be gamblers. They would gather feature requests and, all processes considered, essentially gamble on which features would be successful. By the time the next release launched, everyone hoped they had been right. (And in many cases they were not, as is evident from the adoption rate of Windows Vista, for example.)

Nowadays, proof-of-concept releases have become a standard business tool. Variations of the concept go by different names, from the 3 L’s (Launch-Listen-Learn) to MVP (Minimum Viable Product), MSP (Minimum Sellable Product), and a few more.

We’ve found that the conventional focus on “user stories” misses the point when it comes to proofs of concept. While a user may indeed need to accomplish a certain task (hence a “story”), the issue is more about the “depth” at which a feature (or story) is implemented.

Clear agreement on, and communication of, the feature depth (i.e., its level of completeness, robustness, and finesse) help keep everyone (especially coding and testing) on the same page. When the feature is deemed successful, one can and should iterate on depth, improving completeness, robustness, and finesse.

All in all, I don’t believe there’s magic in managing product lifecycles in agile environments. Unfortunately, many of the old-style practices aren’t optimized for this environment; and, many of the tools are too new to provide a true end-to-end solution. My goal was merely to share one company’s know-how to potentially increase other companies’ confidence in moving in a similar direction.

7 responses to “Planning and Launching Software Products in an Agile Environment”

Thought-provoking article, this.
This might work in SaaS environments, but how do you make this ‘every few weeks release plan’ work well for on-premise deployments, and that too of a large, complex enterprise application? Upgrade projects are expensive for customers, and they get tougher as the mission criticality of the app increases. At least in my experience, on-premise customers don’t like frequent releases for large enterprise software.
And how do you handle the issue of training the user base consistently if new things get added every few weeks? End user training is a significant part of any upgrade exercise for enterprise software, delivered on premise or SaaS.
I understand the benefits of short releases, quick turnaround, faster feedback, etc., but how do you really implement that practically for a large enterprise product?

At the end of the day, though, I don’t think there’s a silver bullet to overcome enterprises’ reluctance to upgrade often. In fact, I suspect this issue puts the ecosystem (vendors and customers) in a trap, and is one of the catalysts for customers eventually migrating to SaaS-based solutions (in a similar way to how Salesforce.com has taken over the CRM space). I wrote down some of my thoughts on this topic in a previous post.

That said, I am guessing there are creative ways to shrink the gap. But first, one must acknowledge that this is an issue worth addressing. Once it is acknowledged as a risk/opportunity (for example, I believe rapid releases are one of the key factors driving Google Chrome’s adoption compared to Internet Explorer), one can move forward with various ways to lessen the gap.

Here are some techniques I would try if I were running an on-premise product:

Streamline the actual installation process. Make sure the process itself is seamless and does not require user intervention. This is oftentimes not the case with enterprise software, but one needs to get as close to a seamless upgrade as Google Chrome (or, for that matter, WordPress). This means that the various elements now requiring manual intervention (backup, schema upgrades, configuration file changes, restarting components) need to be automated.

Come up with a supporting release strategy. Consider having two types of releases: major releases and ongoing (stable and working) releases. The latter can be used by pilot customers, new deployments, agile customers, etc. Within this strategy, one must consider the various end-to-end elements, like documentation. Feature Flags may help. However, some parts (like official training) may be deferred to major releases.

Come up with a go-to-market strategy for customers. For example, it may be that customers would be open to, or even interested in, automatically upgrading their test/stage environments. I know we (as a user of some on-premise software) would love automatic upgrades of some of our pre-production environments, so we can always test out the latest version.

Drive the organizational change. Naturally, whatever process is devised, internal staff (sales, marketing etc.) needs to be aligned. It may mean implementing some of the approaches I covered in the post, like more regular planning and launch related activities, etc.

I haven’t managed on-premise software for several years now, so these are ideas from some distance. What’s clear to me is that for on-premise software, this requires significant investment (investment that is carried out in a SaaS environment too, but is taken for granted there). I’m pretty convinced that on-premise vendors will not have a choice; otherwise, they’ll eventually be driven out by SaaS competitors. I’m guessing that, as in all other cases of disruptive innovation, the challenge is prioritizing this strategic but seemingly remote risk (a.k.a. the innovator’s dilemma).

Great article Eilon, thanks for sharing it! Fwiw, add my experiences to yours as additional anecdotal data from a few dozen projects with different teams over the last decade or so.

As an answer to Anand…
Eilon described two of the three main drivers of value from incremental delivery – (1) breaking up the work into chunks to get a better flow of delivery, (2) getting customer feedback and incorporating it “on the fly” so that the product gets “better” faster, and (3, which he didn’t mention) – delivering (less) value to customers earlier, while delivering more value overall – as a net “better” product.

It’s the third point where on-premise vs. SaaS is usually contentious. Really, as you point out, it is a cost-benefit analysis, where adoption costs (training, integration, deployment, etc) exist for the customer beyond just “accept the free update.” This can happen in SaaS too (although expectations are managed differently, and some costs are not incurred), but is more prevalent as an on-premise issue. I also have seen it much more in enterprise software than consumer (but it can exist for OTS software too).

When a customer deploys an update into production is a function of balancing those cost-benefits. Different domains (and customers) will choose to either (a) deploy infrequent production-system updates, or (b) reduce the cost-to-deploy, and deploy more frequently, or (c) some of each.

However, you can still get the second benefit that Eilon describes. Assuming you have a staging server and/or test environment that runs on “real world data” – and if you don’t, that’s a fundamental issue, for any complex enterprise software – you can deploy to that environment and get the feedback that powers benefit (2), without incurring the costs required to unlock the benefits of (3).

In discrete cases, when you’ve built to support it, you can also A/B test incremental improvements and get “real data” about those improvements in a production environment. This is something you would manage at a smaller scale than a monolithic ERP update: something like “10 percent of the call center users will be using the new UI, while 90% continue using the existing one.”
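A common way to implement that kind of percentage split is deterministic bucketing: hash the user id together with the experiment name so that each user always lands in the same bucket for a given experiment, without storing per-user assignments. A minimal sketch (the function name and scheme here are illustrative, not tied to any particular framework):

```python
import hashlib


def in_experiment(user_id: str, experiment: str, percent: int) -> bool:
    """Deterministically assign a user to an experiment bucket.

    Hashing user_id together with the experiment name keeps the
    assignment stable across sessions while keeping buckets
    uncorrelated between different experiments.
    """
    key = f"{experiment}:{user_id}".encode()
    digest = hashlib.sha256(key).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100  # bucket in 0..99
    return bucket < percent  # e.g., percent=10 → roughly 10% of users
```

So “10 percent of the call center users see the new UI” becomes `in_experiment(user_id, "new-call-center-ui", 10)`, evaluated at render time, with the remaining 90% falling through to the existing UI.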

Eilon,
I like the explicit agreement to address feedback without the odd-even (teams that I’ve been on have called it “tick tock”) cycle factor. That’s a nice technique to manage team expectations and address things with greater immediacy. There are definitely times when feedback leads to head-slapping epiphanies, upon which an immediate and obvious solution can be built. There are other times when you need feedback to aggregate, so that you can identify trends and underlying issues (e.g., it isn’t that the widgets in the UI are bad, or that the workflow is “off,” but rather that we are “solving the wrong problem”). In those cases, there is some (important and valid) time to be spent before the raw feedback (data) is converted into appropriate goals (insight) and then scheduled for implementation (solutions). But relaxing the “sprint is in lockdown” requirement does help with that. Shorter iterations also help with that.

Two cents I would add: teams that are newer to agile struggle with “breaking” the lockdown, while more experienced teams have no issue with it. At least for the teams with which I’ve worked.

Eilon, thanks for sharing your article. I do agree with Anand that your article is more suited to SaaS software products than enterprise software products, where every upgrade, no matter how small, must pass user acceptance/regression testing that frequently lasts weeks. This makes upgrading expensive.

There is a place for Agile development practices in building enterprise software products, but the role of Product Management and the areas covered are different. We are finding the Agile process produces more reliable release plans and reduces the risk of regressions after making a new release. I think the biggest area of improvement using Agile is the concept of sprint planning. We have two-week sprints. Once a sprint starts, nothing can change the planned sprint payload. Any customer requests or bugs reported during a sprint are planned into the next sprint. Preventing interruptions to developers’ and QA’s daily tasks has boosted productivity and reduced regressions. There are no interruptions to the current sprint, and the role of Product Management is to provide development and QA the necessary ground cover.

One other area is that the Agile method itself provides a method for continuous process improvement. Every two weeks we evaluate the process and suggest changes to improve productivity.

Nice article and good comments. I have led teams that build mission-critical enterprise on-prem software and on-prem appliances, as well as SaaS. SaaS is great but often not an option for customers, for many reasons. Also, there is a big difference in what customers are willing to tolerate based on the criticality to their business. If your product drives, say, the trading practice for a major bank, the SLAs are completely different from, say, an HR application. Large enterprises almost always have a fixed deployment window plus a test and validation cycle before deployment. They often have long lock-down windows (for example, it is not uncommon to see four months of lock-down around the Christmas holidays). The key to leveraging Agile with these customers is getting the product into their test labs, where they want to see fast, nimble movement toward their needs; that is where Agile really gives you an advantage. They still want a single release they can deploy in production for a year (at least), but Agile lets you make this release a killer product.