Agile Mistakes Part 2: A failed conversion to Agile

Having provided a high-level comparison of waterfall and agile methodologies (or frameworks) in my previous article, I will now begin to analyze core areas where misconceptions arise and create problems in an environment that has been declared agile.

A failed conversion from waterfall to agile

The term “requirements” means many things to many people, and I’ve often found that even in a waterfall environment there can be confusion. I once worked in a company that had a robust business development team whose job was to analyze trends, get ahead of the industry direction, and discover new business opportunities. This team flew around the country to meet with current customers and prospects alike, talking to them about their needs, frustrations, and visions, and then capturing those conversations in a series of business documents such as business cases, RFPs, and SOWs. Once a project was approved, the PMO converted these into project documentation.

It was common to hear the business team discuss the “requirements” for the new product. And, in effect, they were requirements. But they were such high-level descriptions of a business need that there was a massive gulf between the theoretical need described and the nitty-gritty details that had to be elicited in order to translate the idea into actual code.

At a certain point, business analysts were brought in to dig deeper. They translated the 20,000-foot view of the business requirements into progressively detailed statements that described how the system should behave. So a statement such as “We need our customers to be able to order satellite television service online” was broken down into thousands of detailed requirements, phrased like: “The application shall allow the online customer to register an account.” Subordinate details then flowed from this, describing every aspect of page layout: button sizes and labels, font types, colors, and sizes, field validation rules, and so on. This type of description was generally referred to as “software requirements,” sometimes “system requirements.” But one relatively constant limitation was placed on these requirements: they should describe what was being built and what the stimulus/response behavior would look like, not how the system should be built.

That level of detail was left up to the architects and developers, captured in design artifacts that flowed from the technical design process after developers had fully analyzed and approved the “software requirements.” Confusingly, these design artifacts might also be called “system requirements.” We then input the business requirements, the software requirements and the system (design) requirements into an application and carefully performed traceability across thousands of requirements.
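To make the traceability idea concrete, here is a minimal sketch in Python. The requirement IDs, texts, and three-level hierarchy are hypothetical illustrations, not the company's actual tooling; the point is simply that each lower-level requirement records the parent it traces back to, so any line of design detail can be walked back up to the originating business need.

```python
# Hypothetical requirement records: id -> (level, text, parent_id).
# A design requirement traces to a software requirement, which traces
# to a business requirement (parent None marks the top of the chain).
requirements = {
    "BR-1":  ("business", "Customers can order satellite TV service online", None),
    "SR-12": ("software", "The application shall allow the online customer "
                          "to register an account", "BR-1"),
    "DR-87": ("design",   "Registration form validates email format", "SR-12"),
}

def trace_to_business(req_id):
    """Follow parent links upward, returning the chain of IDs from the
    given requirement to the business requirement it derives from."""
    chain = []
    while req_id is not None:
        chain.append(req_id)
        req_id = requirements[req_id][2]
    return chain

print(trace_to_business("DR-87"))  # ['DR-87', 'SR-12', 'BR-1']
```

Commercial tools such as Rally or Jira manage this linkage at scale, but the underlying model is no more than this parent-pointer structure repeated across thousands of requirements.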

This process often took many months to complete before actual development ever began. And during the entire life cycle, the term “requirements” was bandied about loosely as if the term meant the same thing to everyone, which it did not.

As one might imagine, this laborious, tedious, time-consuming process often resulted in many months of expensive work being performed, only to have the customer decide to cancel the project before the first lines of code were developed or, even worse, just as code was about to be released for the first time.

And of course, the company suffered horrible embarrassments when customers first saw the product of many months of work only to find it did not in any way resemble what they thought they were getting. They held emergency meetings with the client to reassure them and figure out what had to be changed. In some cases, service-level agreements had been signed that included hefty penalties for failing to meet expectations and delivery dates, sometimes fining the company tens of thousands of dollars per day. In short, fingers pointed, heads rolled and morale was utterly destroyed.

Executive managers, always looking for a “new way” to improve efficiency, heard about this new “whiz-bang” development process called agile and thought it sounded nifty. With nary more than a cursory examination of the process, they came to the conclusion that the analysis phase of software development could be abbreviated, cutting costs and shortening time to market.

They had heard that code should be delivered more frequently, so they decided that they would perform “agile iterations” that lasted only three to six months before delivering some code. Requirements would no longer need to be so detailed; instead, many of the previously stated requirements that fit snugly into the realm of “application standards” would be assumed, not captured. Even aspects such as page layout and the design of buttons and other features would be left to the teams to determine. The “brilliant” architects and developers would have more freedom to shape the product.

They made an announcement about the changes and essentially told the teams to go figure out how it would work, implement the changes, and watch the miracles happen.

But miracles didn’t happen.

Customers were still not happy. Delivery dates slipped. The product still did not meet expectations, and new problems arose: common features in one part of the product no longer looked or functioned like the same features in another part of the product. Customer requirements were either not documented at all, improperly captured, or—to great embarrassment—documented but then lost entirely!

What went wrong?

Senior executives had decided to “go agile” without selecting a framework, and without contracting coaches to implement one. They failed to recognize the complexity of the proposed cultural change and how to manage it in a way that would result in success.

Executives had read briefly about agile methodologies without gaining an in-depth understanding of how to implement one successfully. They did not understand what constituted a good “user story” and why, nor how to prioritize stories properly. They also failed to implement anything resembling backlog grooming, burndown charts, daily standup meetings, retrospectives, or any of the other standard agile practices and ceremonies that are crucial to a framework.

Although members of the team had warned that they were uncomfortable with the quality of requirements and the way the projects were being handled, they were ignored; executives, after all, know more than the grunts.

Certain technical resources heard that they would have more creative leeway with implementation, and so they unilaterally changed described functionality without prior approval. Sometimes they added features that were not requested (scope creep). Other times they altered functionality without validating that it met client needs (missing requirements). And quite often, they directed teams to focus their work on functionality that they thought was important but that was a much lower priority for the customer.

Executives thought that if they had previously delivered code once every six months, then doing so once every two to three months would make them agile. But they had failed to hold frequent demonstrations of the functionality along the way, and as a result were surprised to find that the solution had strayed so far from what was desired that it was very difficult, and quite expensive, to fix.

Thus, the effort went awry from the start. Only high-level business requirements were gathered, and details were left for the business analysts, architects, and developers to decide based upon their understanding of what the customer wanted. Requirements were no longer captured in any methodical manner, and were scattered across a ticketing system instead of kept in a centralized location. No applications such as Rally or Jira, designed to assist with managing agile projects, were used. More shockingly, the teams did not even know how to manage story boards, so even the most basic tools for achieving success were overlooked.

Clearly, management fundamentally failed to understand that agile is a process, not a lack of process.

They failed to grasp that functionality still needed to be documented: just because software requirements specification (SRS) documents were no longer part of the process did not mean that requirements elicitation and management could largely be ignored. Nor should the early warnings from experienced team members have fallen on deaf ears.

Senior managers who are considering a transition to an agile framework have a responsibility to fully understand everything that goes into the change and how to manage it appropriately. But that is just the beginning. They must also comprehensively examine the cultural changes that will be required, and plan carefully how to make the transition. And finally, they must provide their teams with the training and tools needed to operate within an agile environment. Had these executives done so, their projects would very likely have been moneymakers instead of bleeding away profits.

Book-learning and reading articles are no substitute for direct expert advice; it would be wise to hire a consultant to assist with the process.

Curtis Reed's career in software development has spanned many areas of responsibility. Starting as a Technical Trainer for an international corporation, he trained clients on satellite technology and billing systems in English, Spanish and Portuguese. He later spent a decade as Business Analyst and transitioned to Project Manager in a Waterfall environment. Subsequent employment gave him experience in multiple Agile environments. He writes to share "lessons learned" over 15 years in the SDLC under multiple methodologies, in the hope that other budding analysts and project managers benefit from his experience.