Wednesday, November 30, 2011

I've been having many conversations recently about how to set up the agile teams I'm coaching with the right Product Owner. As we all know, the PO must be empowered to make decisions, yet must also be knowledgeable enough about what the software should do that she can make constant small decisions for the team so they don't have to wait. The PO understands the big picture, understands the small picture, and can set priorities.

I blogged a few months back about how the "Team Room" must be considered a metaphor, not a literal prerequisite to trying agile for the first time. I know I am stepping into equally sacred cow pies here, but I am going to throw my weight behind greater thinkers than I who have already posited that the "Product Owner" should be considered as a team, not an individual. A small village, not a literal person. Consider these "Product Owner Team" proposals from Mike Cottmeyer, Ben Linders, and Marc Löffler, specifically when considering how to do the Product Owner role at scale.

Product Owner Team enthusiasts posit that if the village can reach consensus and speak with one voice in a timely way on a consistent basis, the whole agile development team will be in fine shape. The trick is building the right village.

"The Body Politic" from governingtemptation.wordpress.com

I'm currently helping a large vertical in a corporate environment structure the product ownership function for its teams, and it's going to look something like this:

1 Executive Point person where the buck stops (contacted as infrequently as possible)

Some large number of SMEs who have details about value and usage of the feature (contacted only as needed by appointment, with a fall-back appointment arranged in case the first one falls through)

1 Business-side "feature point person" per desired feature (readily available, but not in the metaphorical team room)

A team of business analysts, user acceptance testers, and business systems analysts who are responsible for knowing which SMEs are needed for each feature, and are able to get those people to the table in a reliable way (part of the core team, always working in the metaphorical team room)

For the software features these teams are building, the "value" of a particular function is generally a corporate legal compliance issue. The real "Product Owner" who is paying to get the work done has skin in this game only insofar as they need to get a signed approval from an auditor who has made a "finding" against them. If they don't get this approval, they will face true Profit and Loss consequences. They are motivated. Sadly, this person whose neck is on the line for the new feature most likely doesn't ever see the software in use by the people who use it.

On the other hand, the people who do see the software have no funding authority for the team. In a waterfall world, their only power is to scream loudly when they first see the software (at UAT or even in production). They get their way, but in the least convenient way for the business and for the technologists who need to absorb a huge influx of last minute requirements from an unexpected direction, all in the unlovely package of a "production down" situation.

If you will pardon the use of a second major metaphor, this situation is like that of the company that makes chairs for airport seating areas. The airport will willingly buy chairs that are uncomfortable, so long as they are made of knife-resistant Kevlar, if wear and tear is what it's optimizing for. The people who actually sit in the chairs don't get the last say.

At any rate, in this enterprise environment, the actual requirements for the software are spread among many corporate divisions and controlled by many different national laws, and the people who do the actual work are spread across many parts of the business. The person with "final say" is actually an executive at a high enough level to placate others who may be annoyed at what is happening in their operational area in order for this business owner to be in compliance. Powerful executives are not available to sit in a team room all day long. They must be consulted sparingly.

The people who know all the details of the various "sunny day" and "cloudy day" scenarios of software use are scattered all over the enterprise, and all over the real world, and they only speak for a piece of the whole software puzzle. They have day-to-day responsibilities with live clients, and they are certainly not available to sit in the team room, nor do they have the objectivity to make decisions.

The actual decision-making Product Owner, therefore, must be a group of people who together have the right business-side contacts to put together a proposed solution that meets the needs of all stakeholders with the right level of priority for each stakeholder. That's where we get to the trifecta of business analyst, business systems analyst, and user acceptance tester. Decisions need to be made by the whole village, so some simultaneous-in-time meetings will need to be set up to get full Product Ownership up and running. But it is quite likely the whole Product Owner team may never meet simultaneously. The Product Owner body politic will need to self-govern in a way that will be as efficient as possible for the team, but at least it doesn't need to ask the software developers to guess what to do.

I imagine that this scenario will make some agile purists totally freak out, but I think if we're serious about taking agile to "enterprise scale," we have to be prepared to scale things like the team room and the product owner as well, and we need to be comfortable with that.

Tuesday, November 22, 2011

I've been pondering further difficulties of being a product owner, both silently and aloud, so yesterday I was happily bowled over by a new idea on the topic from my new ThoughtWorks colleague Jasper "Dutch" Steutel (@dutchdutchdutch for twitterphiles). He calls his discovery the "design spike," and we ended up talking together about a related concept, the "value spike." So what's this all about? Aside from being "Vampire Month" on the Pragmatic Agilist?

From http://io9.com/james-marsters

The Problem

It may be different in a small start-up or a firm well-organized into small, highly integrated business/technology verticals, but in a typical large corporate enterprise with matrixed silos, it is quite challenging for a Product Owner to speak for all of the stakeholders on a project.

The PO must be able to speak fluently to the technical people on the team.

The PO must be able to provide one reliable set of priorities which reflect acquiescence among all of the PO's horizontal peers in the organization, plus their direct reports. If there are competing priorities among those peers, creating and maintaining this uber-backlog will involve some serious facilitation chops.

The PO must also be able to speak authoritatively within the funding hierarchy and hierarchies in which the project finds itself. The "funding authority" is likely an executive with her own relationships to maintain.

The PO should also be a user of the software and be able to speak to the full user experience; the PO should be able to conduct UAT.

While they're thus occupied, POs must also be steadily (if not continuously) available to the team.

For those of you who aren't already familiar, a "spike" or "tracer bullet" is a short piece of work within an agile project that one or two programmers may be assigned to do outside of the iteration structure. The spike investigates an unknown technology problem well enough that the related work can be estimated. There's a nice explanation in this blog post about these related concepts and their origins.

An example would be that the team discovers it needs to use a new visualization technology for a planned dashboard, but nobody on the team has ever used the technology before, so the team simply doesn't know how long the work related to that new technology will take. At the point where the team urgently needs to know how hard the new technology is to work with, the group agrees to send a pair of programmers off for a pre-specified amount of time (a "time box") to learn enough about it that the project work can be estimated.

Any work the programmers do in this spike typically will be thrown away--the spike produces team learning, not reusable software. Once you know how long the tasks related to the new technology are going to take, you can make appropriate decisions about what to include and what to postpone from your current planned release, compared to other features which have already been estimated. You adjust the backlog accordingly, and the team moves forward. Conceptually:

The spike allows for backlog adjustments based on needed new information. But note that no new working software has been written during the spike; the output of the spike is just learning.

Design Spikes
Dutch, who has brought many years of product management experience to ThoughtWorks, points out that on a product-oriented team, you may be as likely to need a "design spike" as a "technology spike" before you can complete backlog grooming, or even complete a story that is currently in play in an iteration.

In this case, the developers may or may not know how to write the software behind the story, but what is very clear as you talk it through in the team room is that nobody knows what the desired user experience should be for the software. What do you do? You take the story out of play for the current iteration, and hold a workshop for the PO and any SMEs who can speak for customers, or even customers themselves, and determine what the user experience should be. This could be expressed as a wireframe or a photograph of a white board--unless the question is specific to the design at the CSS level, you may find it more helpful to come out of the design spike with team learning about the desired user experience, not a complete web page template.

Note that in an idealized "Continuous Delivery" project, every iteration actually calls for a software implementation of a design spike, and so-called A/B user testing determines what your next step should be by measuring the way actual customers use the software. Notice what happens here--if you're working with known technology, you may do an entire project without doing a software spike, in the standard sense of the word. Even though some of the software gets thrown away, you're always building the real software, not a throw-away architecture. But from a design perspective, continuous delivery is nothing but a set of design spikes which result in team learning, and throw off the software itself as a side-effect.

Value Spikes

So let's take the spike concept back to the poor, overworked Product Owner stuck doing a vendor workflow implementation for internal use at a very large company. This thing is not being ported to an iPad any time soon.

The team is all sitting around in the team room talking about the new dashboard you're implementing in this sprint. As it happens, the developers are not familiar with the visualization technology, and they are eager to go off and have a spike to figure it out. And goodness knows they deserve it--all they do is write integration code all day long. But wait--before two lucky programmers run off to play with something new, the team stops to ask the Product Owner what the value of this dashboard is going to be. It seems like a lot of architectural investigation for a product that is going to use 3D to visualize work orders going from the "pending" state to the "done" state. Why, you ask, is 3D necessary for this? What is the value?

Most likely the Product Owner does not know, in this environment, how the team ended up with a request for a 3D workflow state change visualizer. There are a lot of players, and there's a lot of politics, and it's a big project, and this is only one of a thousand requested features.

This is where the "value spike" comes in. Just as you should stop work to get a general idea of the effort involved in specific stories, or the user experience required, you should also stop work to allow the Product Owner a time box to assemble the right SMEs for a meeting to determine the authorship and impact of a requirement whose value seems questionable. The PO does not have this information top of mind any more than the developer has information on every possible technology ready to hand.

In this case, the PO will return within the time box with a fresh view of the value of the feature, and just as happens with a technology spike, the PO will do a new cost/benefit analysis based on the value the feature will bring compared to the cost it will take to develop it, and modify the project backlog accordingly. It looks like this:

Seem familiar?
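To make the "modify the project backlog accordingly" step concrete, here is a minimal sketch in Python of the cost/benefit re-ranking after a value spike. The story names, value points, and cost points are all invented for illustration; real teams will have their own scoring schemes.

```python
# Hypothetical sketch: after a value spike, the PO revises a story's value
# and the backlog is re-ranked by value-to-cost ratio. All figures invented.

def rank_backlog(stories):
    """Order stories by value-to-cost ratio, highest first."""
    return sorted(stories, key=lambda s: s["value"] / s["cost"], reverse=True)

backlog = [
    {"name": "3D workflow visualizer", "value": 3, "cost": 13},
    {"name": "Bulk order import", "value": 8, "cost": 5},
    {"name": "Audit-trail export", "value": 5, "cost": 8},
]

# The value spike reveals the visualizer is the sponsor's top priority,
# so its value estimate is revised sharply upward.
backlog[0]["value"] = 21

for story in rank_backlog(backlog):
    print(story["name"])
```

Before the spike, the visualizer ranks last (3 points of value for 13 of cost); after it, first. That's the "feature jumps to the front of the backlog" moment from the Smellovision story.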

In case the suspense is killing you, in our hypothetical case we'll stipulate that the Managing Director who staffs the data entry area has trouble understanding why things are so slow. The project sponsor, manager of these data entry people, has made it her top priority to make the staffing problem crystal clear. She would implement "Smellovision" if it were possible. This feature jumps to the front of the backlog, the team completes it, the sponsor is happy, and everyone has a good day.

Design and value spikes should be tactics that every Product Owner keeps handy. You don't have to be omniscient if you have a technique that lets you become expert on one little piece at a time. And that's as close as we get to Enterprise Fun these days.

Tuesday, November 15, 2011

Product Ownership is very difficult. Take a big step away from the Agile Manifesto and think for a moment about project stakeholders, user stories, and how they don't fit together as neatly in real life as they do in Mike Cohn's User Stories Applied, as awesome as that book is. How in the world is it possible for there to be a single person standing in for all project stakeholders in negotiating with the team?

From http://www.implementingscrum.com/2007/04/23/the-cast-of-implementingscrum-infamous-yet/

Conveniently, Cohn himself points out The First Fallacy of the Product Owner. And that is, of course, that such a being actually exists:

On an ideal project we would have a single person who prioritizes work for developers, omnisciently answers their questions, will use the software when it’s finished, and writes all of the stories. This is almost always too much to hope for, so we establish a customer team. The customer team includes those who ensure that the software will meet the needs of its intended users. This means the customer team may include testers, a product manager, real users, and interaction designers. (User Stories Applied, p. 8)

As Cohn says, the Product Owner may in fact be a "customer team" of some sort. And this team needs to somehow get onto the same page so that if the non-product-owners on the team have a question, they get only one answer, no matter who on the team they talk with. Scary, but true, and very real life. Can that be done? Yes, certainly. But it requires trust and discipline on the "customer team," and it may not come naturally at first. And wait, there's more!

The Second Fallacy of the Product Owner is that the main people with whom the project must be concerned are the real users of the software. Such a fallacy relies on a confusion between project "stakeholders/sponsors" and project "end users." They are not the same! What do you do in a corporate environment in which the "customer" with the budget is an executive decision-maker who will never use the software or even see it? On real projects in corporate environments, your product owner needs not only to understand and manage the desires of competing software users, but also to build a consensus all the way up the executive chain of any sponsoring stakeholder organizations, and keep these sponsoring stakeholders as well as the end users (not to mention developers, testers, and other team members) all on the same page. And of course business goals and user needs keep changing. And that brings us to the related:

Third Fallacy of the Product Owner, which is that "business value" can be determined by operational end users. Don't get me wrong. Executives, and even some line managers, are the last people in the world you should go to, if you want to find out how software is used in the wild. You will certainly build a big, unfortunate loser piece of software if you just listen to "the brass." They don't know! They probably don't even know all the systems their employees use to keep the business running. You must listen to the real users of the software.

But if your goal is to deliver "high value" software first, and "lower value" software later, then the real users won't have the full picture either. You need the executives to make decisions like "just skip that whole part of the old process--that never made sense." And this begins to get very tricky indeed, because the "customer team" is now dealing with Stakeholder A who may be in a position to deeply change Stakeholder B's job, or even eliminate it. So if you're under the impression that the "customer team" is one big happy family off in the "Product Owner room" all together, you need to let that go too.

So where do Behavior Driven Development (BDD) and Feature Injection come into the discussion? My colleague Jeffrey Davidson just put together a brilliant slide presentation on these very topics, which you can see here. BDD and Feature Injection are both methodologies which have been described as a step forward in terms of gaining a common understanding of system behavior between the "business side" of a team, represented by the Product Owner, and the "development side," represented by the developers and systems testers. Because BDD and Feature Injection allow the system to be described as a series of examples, rather than a series of "the system shall" statements, business people focus on business value, and developers figure out the best technical way to get the business value out fast.

But BDD and Feature Injection provide something even more valuable, if you're being asked to be a Product Owner, or to be part of a customer team. Both techniques provide a way to get real software users and executive stakeholders onto the same page, and to keep them there as well. And that is a very good thing.

BDD, as Jeff says, is all about describing software in terms of examples, instead of in terms of the components that make it up. (Please also see this timely repost of Martin Fowler's bliki post on "Specification By Example.") "Given" a certain circumstance, "When" a certain real software user does something, "Then" you should see a certain result. What does the Given/When/Then formulation do to help the customer team?

High level user stories ("features" or "epics") described in terms of given/when/then give real software users a succinct view of how the software brings overall revenue to the firm. You may also gain this type of benefit by revising the order of the traditional Mike Cohn-style user story elements: "As a <role>, I <realize a specific business value>, by <feature>." Here the roles are far more likely to be executive roles like "as the CEO" or "as the CFO," but it's wonderful to know what it is that the team is building, in terms of the overall flow of revenue to the company.

Low level user stories, the size that can be developed by the team, clarify to executive stakeholders exactly what real software users are doing and why. Most normal executives cannot withstand even a single "the system shall" statement, but they may participate eagerly in a discussion couched in terms of "given/when/then," and be able to allay fears among the real software users that they have a specific need for some part of the system that is particularly obnoxious to use. That's a good thing too.
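For the code-minded, a Given/When/Then scenario maps naturally onto an executable test. Here is a minimal sketch in plain Python; the domain (a daily transfer limit) and all figures are invented, and a real team might use a BDD tool that parses the scenario text directly, but the shape is the same.

```python
# A minimal sketch of a Given/When/Then scenario written as a plain Python
# test. The daily-limit rule and the dollar figures are invented examples.

DAILY_LIMIT = 1_000  # hypothetical business rule: at most $1,000 sent per day

def can_transfer(amount_sent_today, requested):
    """Would a further transfer stay within the daily limit?"""
    return amount_sent_today + requested <= DAILY_LIMIT

def test_transfer_over_daily_limit_is_refused():
    # Given a customer who has already sent $800 today
    sent_today = 800
    # When she requests a further $300 transfer
    allowed = can_transfer(sent_today, 300)
    # Then the transfer is refused
    assert not allowed

test_transfer_over_daily_limit_is_refused()
```

The scenario reads the same to the CFO ("given a customer near her daily limit...") and to the developer who automates it, which is precisely the point.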

What about Feature Injection? Feature Injection, invented by Chris Matts, and explored in print primarily by Chris and fellow FI aficionado Liz Keogh, says that you need to start all software development discussions by talking in terms of the type of business value that makes sense to the CEO and the CFO. Chris and Liz will tell you that the team should be describing, with examples, what the new software will be doing for the business when it's done. So the process is: 1) identify the value, 2) identify the feature that will give you the value, 3) describe that feature in terms of examples. Kent McDonald provides a nice, "gentle" introduction to Feature Injection here.

A customer team which uses Feature Injection to build a word bridge between "value" and "features," and then describes those features entirely in terms of BDD's "given, when, then" scenarios, may find itself aligned not only with software developers and testers, but also with itself.

Thursday, November 10, 2011

As a coach and trainer, I have noticed that when I start the "Roles, Personas and Goals" discussions, attendees in the room are 40% more likely to start surreptitiously checking e-mail on their smartphones than when we talk about comparatively exciting topics such as "stand up meetings," "story boards," the "burn up versus burn down chart" debate, or "evolutionary design." I had to lure you to this blog post, in fact, by riding on the coat-tails of the Breaking Dawn, Part 1 premiere tonight at midnight. You aren't interested! You have heard it all before! "To write good software, you need to know who will be using it and what they want to accomplish." Blah blah blah--sounds like something your mom would say. "Give the roles names, and think of them as people. If multiple types of people play the same roles, give them different names, and call those things 'personas.'" Now you sound trendy and slightly unhinged. Let's go back to the burn-ups.

From http://www.fantasybooksandmovies.com/edward-cullen.html

Let's not! I'm going to simulate a "requirements" conversation with your business users twice, for purposes of comparison. "Before" will represent what you may be doing now. "After" will represent the same conversation, except that all players focus in a disciplined manner on roles and goals--who is doing the action and why? Could it be that such a slight change of focus will give you measurable improvements to your software? I say yes!

Before ("the system shall"):

Analyst: "So to complete requirement A-445, after the screen prompts for the three criteria, if the first check box is activated and the fine amount is under $50, the save button is disabled and an error message is displayed."

Business user: "Right."

Analyst: "So we're done then?"

Business user: "Yes."

After (someone in particular is doing something for some reason):

Analyst: "So who has access to the screen where you can enter the fine amount?"

Business user: "The receptionists in the front office."

Analyst: "Let's call our sample receptionist 'Gayatri,' okay?"

Business user: "Um, okay, if you say so."

Analyst: "All right, so a person who lost their library card walks up to Gayatri, and Gayatri brings up the screen where she can enter the fine amount. She asks whether the person has lost a card before, and if the answer is yes, the fine needs to be $50 or more, depending on her mood. If not, they can get a new card for some amount less than $50, based on a sliding scale that Gayatri maintains. Is that right?"

Business user: "Um, no, actually not. We have a triage person who handles the library card issues. If the person has lost their library card more than ten times, the triage person calls an armed escort to take the person out of the library forever. If it's less than ten, the triage person updates a flag on the person's record to show that it's either the first card lost, or some number larger than that. They're the ones who update the 'repeat offender' flag."

Analyst: "Okay, let's call the triage person Jens."

Business user: "Uh huh."

Analyst: "Does Jens use the same screen that Gayatri uses?"

Business user: "No, Jens gets a screen with a panic button and no fine amount. That's why I didn't bring it up. We're changing the rules around fine amount, not the repeat offender flag. Do you agile guys get partially lobotomized before they let you loose?"

Analyst: "Would you expect Gayatri to need to update the repeat offender flag, or should it be locked down?"

Business user: "Hm. Interesting point. We're instituting this new rule to ensure that the library can protect its fine revenue. We set up Jens's job in the first place to separate enforcement from the clerical function of just putting in the fine amount."

Analyst: "So the 'repeat offender' flag should be disabled on the fine entry screen?"

Business user: "Yes it definitely should. We don't want Gayatri gaming the system. We've had a history of soft-hearted admins resetting patrons to non-offenders just to take a little bit of money off of the fine. They're just enablers. Those people are monsters--they go through ten, fifteen library cards a year!"

Analyst: "Yikes! Okay, so we're making two changes to the fine entry screen: first, lock down the repeat offender flag and only allow it to be edited on the panic screen by Jens. Second, enforce that when Gayatri hits 'save,' if the repeat offender flag is set to 'yes,' she must collect $50 or more."

Business user: "Yes, that's right."

Analyst: "So we're done then?"

Business user: "Yes."

This may seem like a fanciful and trivial example, but I hope it illustrates the point. In this case, actual library revenue could have been affected if one field had been left editable rather than changed to read-only. Moreover, by describing actual people doing actual tasks, this analyst was able to find out about a whole additional screen she didn't know about before. I've been on projects where analysts focused on "the system" didn't find out until months into actual software development that "the system" was only one of TWO systems that Jens, Gayatri, and the other imaginary personas were using to keep data up to date. The workload for the project doubled in a day, once someone shadowed an actual data entry person to see what they did.
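For what it's worth, the two changes the analyst and the business user agreed on translate directly into save-time validation rules. Here is a minimal sketch in Python; the field names and screen names are invented, and only the $50 floor and the read-only flag come from the conversation itself.

```python
# Sketch of the two agreed changes as validation on the fine entry screen
# (Gayatri's screen). Field names are hypothetical; the rules are from the
# library-fine dialogue: lock down the repeat offender flag, enforce $50.

def validate_fine_entry(record, changes):
    """Return a list of errors for a proposed save from the fine entry screen."""
    errors = []
    # Change 1: the repeat offender flag is read-only on this screen;
    # only Jens's triage (panic) screen may edit it.
    if "repeat_offender" in changes:
        errors.append("repeat offender flag can only be changed on the triage screen")
    # Change 2: repeat offenders must be fined at least $50.
    fine = changes.get("fine_amount", record.get("fine_amount", 0))
    if record.get("repeat_offender") and fine < 50:
        errors.append("fine for a repeat offender must be $50 or more")
    return errors
```

A soft-hearted admin entering a $20 fine for a flagged patron, or trying to reset the flag itself, now gets an error instead of quietly costing the library its fine revenue.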

People-focused requirements gathering is the only type of requirements gathering trustworthy enough upon which to base your company's cash flows, or anything else important to your operations. Even if you are an analyst who is a subject matter expert in her own right, take the time to mentally walk through the process as performed by your actual end users, and don't focus too quickly on the details of "the system" and "the screen." The focus on people itself is important, and it will bring you and your company significant return on investment.

Friday, November 4, 2011

Jim Highsmith recently posited that "velocity is killing agility!" which is kind of a fun hypothesis. Jim observes that company leaders he talks with around the world these days are a little too quick to measure the effectiveness of their agile software development teams by keeping track of the teams' velocity (the average amount of estimated software effort the team delivers per software delivery iteration).

This is quite ironic, of course, since one of the rudimentary things you learn when you first study agile software development is that "velocity" is measured in abstract terms like "points," or "ideal hours." The numbers are relative to each other, mapping points to time is a fluid process, and the mapping is only valid for one team at a time. The idea of comparing velocity across teams is so absurd, in fact, that there is an entire web site devoted to the concept of estimation using fruit (cherry for very small amounts of effort; watermelon for large amounts). If each team chooses a different theme (fruit, mammals, buildings in Chicago, planets), one can see even more clearly that trying to compare one team to another is a recipe for disaster.

But of course these executives aren't always being quite so blunder-headed with their metrics as to compare one team to another. Instead, as Jim describes it, they:

Try to get teams to deliver faster and faster--if you delivered 5 points in this iteration, try for 6 next time. Or if your average team velocity was 5 in this project, keep the team together and try to get up to an average of 8 in the next.

Evaluate effort solely in terms of the software's value at first release to the market--if you measure your effectiveness by "how quickly you get to feature-complete," you quickly lose track of important things like "how quickly can you change" with the market, "how much does it cost" to maintain the thing, and even "how delighted are customers to use the application."

Lose sight altogether of the actual business bottom line. In real life, software developers are being paid to deliver value to their home businesses, whether measured in increased revenue, decreased cost, increased customer satisfaction, decreased risk of government regulation non-compliance, increased greenness, or anything else in the host organization's balanced scorecard.

These leaders are falling into the classic "it goes to 11" trap made famous by Christopher Guest's character, Nigel Tufnel, in the immortal movie, Spın̈al Tap. Tap aficionados will remember that Nigel, lead guitarist for the band, is very proud of his amplifier, specially produced with control knobs that go to "11," not just "10." Nigel doesn't even understand the question "but why wouldn't you just get a louder amp?" so pleased is he that his goes to 11.

But what is a leader to do, if she wants to measure the productivity of her IT staff? You need to figure out who to promote and who to put in the basement without a stapler, after all. I would recommend the following, taking Jim's points in reverse order:

Measure the value generated by software investments. This is not new--it's Jim's point in his velocity blog post, and in it, he also cites Martin Fowler's 2003 bliki post, "CannotMeasureProductivity," on this exact point. At the end of the day, a business is in business to create value, not to ensure its developers are working at an abstract maximum capacity. If Team A's software generated $10 million in value over two years, and Team B's software generated $1 million, and both efforts cost the same, then it would be worthwhile asking some questions about why one effort was so much more valuable to the business than the other. You may get a range of interesting answers, some having to do with how well the team is working, and some having to do with the overall value of the concept they were delivering.

Evaluate your return on investment over the life of the product, not just one quarter. In IT just as in other investments, leaders often think too much in the immediate term. Certainly, it's a good idea to rush to market with software when you can get a first-mover advantage. Even in this case, however, your IT investment should be made with an eye to the long term. What will you do after you have jumped into the market with the first offering of this kind? Have you positioned yourself to maintain your lead, because you have what Jim calls a "supple delivery engine"? Or have you worn yourself out, and put an idea on the market which can easily be copied by others? Will a competitor with a robust software delivery methodology quickly leapfrog you and pull ahead? Unless you're on the brink of insolvency, you need to look at the expected return on investment for the life of the product in the marketplace, not just your profit and loss over this budget year.

Balance each software-development specific metric with a complementary one. You may have good reasons to measure the amount of software a team is developing. Perhaps you have concerns about specific individuals whom you feel aren't carrying their weight. Never say never, I say. But if you are going to measure velocity, then make sure you measure other things as well, including quality (fit of software to use and actual failures of software to work as intended) and complexity (is the software impossible to change because it's so poorly written?). These three metrics balance each other out to some degree, and teams should hit minimum standards in all three dimensions, not just one. If you ask for only speed, that's all you're going to get, and you won't like it.
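One way to picture the balance: treat each metric as a minimum bar to clear rather than a score to maximize. A minimal sketch, with entirely hypothetical thresholds and team numbers:

```python
# Invented sketch: judge a team against minimum bars on three balancing
# metrics instead of maximizing velocity alone. Thresholds are hypothetical.

THRESHOLDS = {
    "velocity": 5,         # points per iteration: at least this much
    "escaped_defects": 3,  # failures found after release: at most this many
    "complexity": 10,      # average cyclomatic complexity: at most this
}

def meets_minimums(metrics):
    """True only if the team clears the bar on all three dimensions."""
    return (
        metrics["velocity"] >= THRESHOLDS["velocity"]
        and metrics["escaped_defects"] <= THRESHOLDS["escaped_defects"]
        and metrics["complexity"] <= THRESHOLDS["complexity"]
    )

# A team that "goes to 11" on speed alone fails the balanced check;
# a steadier team that clears all three bars passes.
fast_but_sloppy = {"velocity": 11, "escaped_defects": 9, "complexity": 24}
steady = {"velocity": 6, "escaped_defects": 1, "complexity": 7}
```

The amp that goes to 11 loses to the one that is simply louder, across the board.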

Jim makes the proposal that agilists have been too quick to give all of the decision-making power to the Product Owner from the business side, and he suggests remedying this problem by instead creating a team to make decisions with one person from the business and one from IT. I'm not sure I agree with this solution, since it's extremely powerful to have one person (the person who will live with the results of the decision) calling the shots. However, I do think that if business people begin to embrace the notion that software quality has actual P&L ramifications for the business, they will naturally want to consult the tech lead about what will create the best business results.

Please read Jim's post--he suggests other really good things, like using value points to measure software value as delivered, rather than focusing on the effort it took to get there. As always, there is a lot there.

About Me

Elena is a Principal Business Architect for ThoughtWorks, London. In this capacity, she focuses on transforming business architecture to better support digitally enabled retail clients. Prior to ThoughtWorks, Elena was a Program Manager and Chief Agilist for the Treasury Services vertical at JPMorgan Chase, followed by projects which measurably improved scalability and productivity in IT processes for the Corporate and Investment Bank (CIB) and the Consumer and Community Bank (CCB). In addition to business architecture, Elena’s areas of professional interest are value chain mapping, change management, and non-annoying IT productivity strategy and measurement tactics.