Tuesday, December 20, 2011

I don't mean to go all "woo woo" on you, but you already have a personal online brand. Don't believe me? Bring up your favorite browser and type your name in quotation marks ("Firstname Lastname"), and do a quick search. Try it again with your middle initial added. Did something come up? THAT's it! It's your personal brand!

From http://tentblogger.com/blog-brand/

Do you like what you see? If not, or even if you do, please, in this joyful holiday season, take some time to give yourself the priceless gift of strategic personal brand management. Here are three personas to consider as you do it:

The Shadow: let's say you're a very private person, and you would like to minimize your online brand altogether. Take some basic defensive measures. If you participate in any online social media, learn how to use the privacy settings for each site you use, and set them to maximum. You can set up Twitter, Facebook, and most blogging sites so that your content is viewable solely by people you designate. For reasons related to advertising revenue, these sites tend to change their privacy rules frequently--check and update your settings regularly, say, once a month, or when the headlines indicate Facebook has violated your basic civil rights again.

Additionally, here's a handy blog post on "How to Un-Google Yourself," to get rid of matches on your name that you didn't create and that you don't want to keep.

Business Up Front, Party in the Back: let's say you're a person who has a good work/life balance, but who would like to be in the public eye primarily on professional matters. You'll see advice to the contrary, and I suppose it all blurs together for the young people these days, but if you're a BUF/PITB person, here's how you can create an online business presence that will vastly overshadow your online social/personal presence.

Minimum business presence: create a LinkedIn profile that provides an online version of your resume, and use it to reach out to your professional contacts. You might also want to create a professional Twitter account which you use to make useful, succinct, or witty work-related comments.

Social presence: If you are the head of MI6, for example, you might want to avoid having a social presence at all. You may need to share this policy with your wife as well. However, most of us can keep up with our friends on the internet without creating international incidents. I personally reserve my Facebook account for people I'm actually friends with, and keep my account on maximum security ("friends only") for purposes of photos, tagging, and so on. I suppose I could create personal Twitter and blog accounts as well, but my poor friends have enough to put up with on Facebook as it is.

Note that your friends will often be very supportive of your business endeavors, so you may want to make them contacts on LinkedIn or make them aware of your blog posts, and so on, but the reverse is not always true. Business contacts may very well not care where you went on vacation this year. A week or so ago, I personally had to disconnect from a Twitter contact who posted 137 Twitpic photos of his baby.

Here's a handy list from Mashable of some things you can and should do to build a reassuring online presence that will help carry you through times of employment and unemployment with grace and dignity.

Joan of Arc: would you like to change the world? And, in particular, are you a woman who would like to change the world to increase the number of women going into Science, Technology, Engineering, and Math fields? If so, you need to take this online personal brand thing even a little further. It's not good enough just to look professional online. You need to become a famous woman scientist, technologist, engineer, or mathematician so you can be a role model.

Don't do it for yourself--do it for the children!

I'm completely serious. A recent MIT study showed that role models are immensely important in drawing people into particular fields. The received wisdom on this is that "women need to brag more often." And yet a recent long-term Catalyst study supports something many of us have felt intuitively: women are less likely to promote themselves than men, partly because they are actually penalized, in the long term as well as the short term, if they do it too obviously. It's a seemingly discouraging downward spiral. Without a critical mass of role models, women don't go into the STEM fields as much, and even those who do don't tell people about it.

I have a crafty plan about this that I'm still thinking through, and no doubt you will think of one yourself if you read the Catalyst article, but for now, let's just say you are a woman who wants to stand up and be counted, damn the consequences. Here's what you can do:

Fame part 1: make yourself known. Set up a blog for yourself, and blog consistently and interestingly on topics for which you want to become a "known expert." Optionally, you may want to claim a URL based on your name, firstname-lastname.com, and point that address to your blog. You may want to become a speaker at conferences, and point to your appearances (past and upcoming) from your blog, your LinkedIn profile, and your Twitter account. And so on. You can always wrap your thoughts with modest, self-deprecating disclaimers. That's totally fine. Just be out there, be present, and be counted.

Fame part 2: write a book. Once you have a following of some sort, you will find it much easier to sell your book concept to a publisher.

Fame part n: (quantitative!) feel free to seek luminary status in tandem or in teams. If you don't want to become famous solo, then find a partner and become famous as a pair. If you're both women, so much the better!

But, then again, not everyone wants to be Joan of Arc. And that's...okay. No matter who you want to be, however, please make sure that the person someone finds on Google when they type in your name is not someone you're embarrassed to be associated with.

Monday, December 5, 2011

As a little palate cleanser, I thought I would do a quick blog today about my new Samsung Galaxy Tab 10.1, in case any of you are thinking of buying a tablet.

The Apple Newton

Executive Summary: I'm pretty sure you should get an iPad.

But there are some things I learned while trying to get used to my tablet that could help you make your own decision. If you read "the literature" online, you will see lots of people comparing screen brightness, processor speed, and number of applications available in the respective Android and Apple application stores. Yadda yadda. Here's what you should really think about first.

Battery management: it turns out you will want to use the device on battery power. It's meant to be a "portable" sort of thing. So find out before you buy: how many hours do you need to charge your tablet compared to the number of hours it runs unplugged? What applications are available to help you conserve the battery WITHOUT rooting the device and voiding the warranty? Hint:

Kindle charges fast and lasts forever.

iPad charges pretty fast and lasts pretty long.

SGT10.1 requires 10 hours of charging for 10 hours of useful life. Sort of a 1:1 mapping. Easy to remember, I suppose!

File transfer: can you connect your tablet with a cable to your PC and transfer files back and forth? Like your music files, for example? Hint:

If you have an Apple PC, like a MacBook Pro, you will be able to transfer files to your iPad and vice-versa.

If you have a PC running Windows, like a Dell, or an Acer, or a Gateway, you will be able to transfer files to your iPad or your SGT10.1.

If you have a MacBook Pro you will not be able to transfer files via cable to your SGT10.1. There are many unhandy workarounds, however.

iTunes: if you are unable to move your music files to your tablet, can you simply rev up your iTunes app? Whoops, no, sorry, that doesn't work on the SGT10.1.

Headphones/microphone: what if you want some kind of high-tech noise-blocking headphones for your new tablet? You will probably be fine with third-party headphones that don't have a microphone built in, but you need to be very careful about buying third-party headsets that include a microphone, because Samsung/Nokia map the headphone/microphone connector backwards compared to the way Apple/BlackBerry do. In general, headsets marked "iPod compatible" won't work with your device and vice-versa. Bose has a whole set of different connectors to deal with this, but most headset vendors just do it one way--the Apple way.

Operating system upgrades: if the manufacturer of your operating system issues a new update, what do you need to do to get it on your device?

Kindle: no idea, actually.

iPad: Apple has supported all of its devices with every upgrade it has ever released.

SGT10.1: you need to wait for Samsung to decide if it feels like upgrading your version of Android. But that's okay, because the upgrade will break a bunch of your applications anyway.

Forgotten A/C adapter cord: what happens if you leave your power adapter at home when you go on vacation? If you have an iPad, you can walk into any Radio Shack, Target, or Walgreens and pick up a replacement. If you have the Galaxy Tab 10.1 you just go ahead and run out of battery because nobody stocks these, and the connector is so proprietary nobody is even allowed to think about building a third-party replacement. Of course if you never forget things like power adapters, this doesn't matter to you.

Animated wallpaper: does your chosen tablet allow you to equip it with animated wallpaper that looks like a bunch of swaying, decorated Christmas trees in a snowy landscape, where every once in a while Santa flies by? I'm psyched to say that I have equipped my SGT10.1 with just such a wallpaper, and it makes me happy every single time I pick it up. It also shows the sky with time-of-day-appropriate lighting.

I'm not sure if iPad supports this marvelous functionality, but it's a good thing SGT10.1 does, because otherwise I might feel a little bad about what I got!

Wednesday, November 30, 2011

I've been having many conversations recently about how to set up the agile teams I'm coaching with the right Product Owner. As we all know, the PO must be empowered to make decisions, yet must also be knowledgeable enough about what the software should do that she can make constant small decisions for the team so they don't have to wait. The PO understands the big picture, understands the small picture, and can set priorities.

I blogged a few months back about how the "Team Room" must be considered a metaphor, not a literal prerequisite to trying agile for the first time. I know I am stepping into equally sacred cow pies here, but I am going to throw my weight behind greater thinkers than I who have already posited that the "Product Owner" should be considered a team, not an individual. A small village, not a literal person. Consider these "Product Owner Team" proposals from Mike Cottmeyer, Ben Linders, and Marc Löffler, specifically when considering how to do the Product Owner role at scale.

Product Owner Team enthusiasts posit that if the village can reach consensus and speak with one voice in a timely way on a consistent basis, the whole agile development team will be in fine shape. The trick is building the right village.

"The Body Politic" from governingtemptation.wordpress.com

I'm currently helping a large vertical in a corporate environment structure the product ownership function for its teams, and it's going to look something like this:

1 Executive Point person where the buck stops (contacted as infrequently as possible)

Some large number of SMEs who have details about value and usage of the feature (contacted only as needed by appointment, with a fall-back appointment arranged in case the first one falls through)

1 Business-side "feature point person" per desired feature (readily available but not in the metaphorical team room)

A team of business analysts, user acceptance testers, and business systems analysts who are responsible for knowing which SMEs are needed for each feature, and are able to get those people to the table in a reliable way (part of the core team, always working in the metaphorical team room)

For the software features these teams are building, the "value" of a particular function is generally a corporate legal compliance issue. The real "Product Owner" who is paying to get the work done has skin in this game only insofar as they need to get a signed approval from an auditor who has made a "finding" against them. If they don't get this approval, they will face true Profit and Loss consequences. They are motivated. Sadly, this person whose neck is on the line for the new feature most likely doesn't ever see the software in use by the people who use it.

On the other hand, the people who do see the software have no funding authority for the team. In a waterfall world, their only power is to scream loudly when they first see the software (at UAT or even in production). They get their way, but in the least convenient way for the business and for the technologists, who need to absorb a huge influx of last-minute requirements from an unexpected direction, all in the unlovely package of a "production down" situation.

If you will pardon the use of a second major metaphor, this situation is like that of the company that makes chairs for airport seating areas. The airport, optimizing for wear-and-tear, will willingly buy uncomfortable chairs as long as they are made of knife-resistant Kevlar. The people who actually sit in the chairs don't get the last say.

At any rate, in this enterprise environment, the actual requirements for the software are spread among many corporate divisions and controlled by many different national laws, and the people who do the actual work are spread across many parts of the business. The person with "final say" is actually an executive at a high enough level to placate others who may be annoyed at what is happening in their operational area in order for this business owner to be in compliance. Powerful executives are not available to sit in a team room all day long. They must be consulted sparingly.

The people who know all the details of the various "sunny day" and "cloudy day" scenarios of software use are scattered all over the enterprise, and all over the real world, and they only speak for a piece of the whole software puzzle. They have day-to-day responsibilities with live clients, and they are certainly not available to sit in the team room, nor do they have the objectivity to make decisions.

The actual decision-making Product Owner, therefore, must be a group of people who together have the right business-side contacts to put together a proposed solution that meets the needs of all stakeholders with the right level of priority for each stakeholder. That's where we get to the trifecta of business analyst, business systems analyst, and user acceptance tester. Decisions need to be made by the whole village, so some simultaneous-in-time meetings will need to be set up to get full Product Ownership up and running. But it is quite likely the whole Product Owner team may never meet simultaneously. The Product Owner body politic will need to self-govern in a way that will be as efficient as possible for the team, but at least it doesn't need to ask the software developers to guess what to do.

I imagine that this scenario will make some agile purists totally freak out, but I think if we're serious about taking agile to "enterprise scale," we have to be prepared to scale things like the team room and the product owner as well, and we need to be comfortable with that.

Tuesday, November 22, 2011

I've been pondering further difficulties of being a product owner, both silently and aloud, so yesterday I was happily bowled over by a new idea on the topic from my new ThoughtWorks colleague Jasper "Dutch" Steutel (@dutchdutchdutch for twitterphiles). He calls his discovery the "design spike," and we ended up talking together about a related concept, the "value spike." So what's this all about? Aside from being "Vampire Month" on the Pragmatic Agilist?

From http://io9.com/james-marsters

The Problem

It may be different in a small start-up or a firm well-organized into small, highly integrated business/technology verticals, but in a typical large corporate enterprise with matrixed silos, it is quite challenging for a Product Owner to speak for all of the stakeholders on a project.

The PO must be able to speak fluently to the technical people on the team.

The PO must be able to provide one reliable set of priorities which reflect acquiescence among all of the PO's horizontal peers in the organization, plus their direct reports. If there are competing priorities among those peers, creating and maintaining this uber-backlog will involve some serious facilitation chops.

The PO must also be able to speak authoritatively within the funding hierarchy or hierarchies in which the project finds itself. The "funding authority" is likely an executive with her own relationships to maintain.

The PO should also be a user of the software and be able to speak to the full user experience; the PO should be able to conduct UAT.

While they're thus occupied, POs must also be steadily (if not continuously) available to the team.

For those of you who aren't already familiar, a "spike" or "tracer bullet" is a short piece of work, outside the iteration structure, to which one or two programmers on an agile project may be assigned. The spike investigates unknown technology problems well enough that you can estimate them. There's a nice explanation of these related concepts and their origins in this blog post.

An example: the team discovers it needs to use a new visualization technology for a planned dashboard, but nobody on the team has ever used the technology before, so the team simply doesn't know how long the related work will take. At the point where the team urgently needs those details, the group agrees to send a pair of programmers off for a pre-specified amount of time (a "time box") to learn enough about the new technology that the project work can be estimated.

Any work the programmers do in this spike typically will be thrown away--the spike produces team learning, not reusable software. Once you know how long the tasks related to the new technology are going to take, you can make appropriate decisions about what to include and what to postpone from your current planned release, compared to other features which have already been estimated. You adjust the backlog accordingly, and the team moves forward. Conceptually:

The spike allows for backlog adjustments based on needed new information. But note that no new working software has been written during the spike; the output of the spike is just learning.

Design Spikes
Dutch, who has brought many years of product management experience to ThoughtWorks, points out that on a product-oriented team, you may be as likely to need a "design spike" as a "technology spike" before you can complete backlog grooming, or even complete a story that is currently in play in an iteration.

In this case, the developers may or may not know how to write the software behind the story, but what is very clear as you talk it through in the team room is that nobody knows what the desired user experience should be for the software. What do you do? You take the story out of play for the current iteration, and hold a workshop for the PO and any SMEs who can speak for customers, or even customers themselves, and determine what the user experience should be. This could be expressed as a wireframe or a photograph of a white board--unless the question is specific to the design at the CSS level, you may find it more helpful to come out of the design spike with team learning about the desired user experience, not a complete web page template.

Note that in an idealized "Continuous Delivery" project, every iteration actually calls for a software implementation of a design spike, and so-called A/B user testing determines what your next step should be by measuring the way actual customers use the software. Notice what happens here--if you're working with known technology, you may do an entire project without doing a software spike, in the standard sense of the word. Even though some of the software gets thrown away, you're always building the real software, not a throw-away architecture. But from a design perspective, continuous delivery is nothing but a set of design spikes which result in team learning, and throw off the software itself as a side-effect.

Value Spikes

So let's take the spike concept back to the poor, overworked Product Owner stuck doing a vendor workflow implementation for internal use at a very large company. This thing is not being ported to an iPad any time soon.

The team is all sitting around in the team room talking about the new dashboard you're implementing in this sprint. As it happens, the developers are not familiar with the visualization technology, and they are eager to go off and have a spike to figure it out. And goodness knows they deserve it--all they do is write integration code all day long. But wait--before two lucky programmers run off to play with something new, the team stops to ask the Product Owner what the value of this dashboard is going to be. It seems like a lot of architectural investigation for a product that is going to use 3D to visualize work orders going from the "pending" state to the "done" state. Why, you ask, is 3D necessary for this? What is the value?

Most likely the Product Owner does not know, in this environment, how the team ended up with a request for a 3D workflow state change visualizer. There are a lot of players, and there's a lot of politics, and it's a big project, and this is only one of a thousand requested features.

This is where the "value spike" comes in. Just as you should stop work to get a general idea of the effort involved in specific stories, or the user experience required, you should also stop work to allow the Product Owner a time box to assemble the right SMEs for a meeting to determine the authorship and impact of a requirement whose value seems questionable. The PO does not have this information top of mind any more than the developer has information on every possible technology ready to hand.

In this case, the PO will return within the time box with a fresh view of the value of the feature, and just as happens with a technology spike, the PO will do a new cost/benefit analysis based on the value the feature will bring compared to the cost it will take to develop it, and modify the project backlog accordingly. It looks like this:

Seem familiar?

In case the suspense is killing you, in our hypothetical case we'll stipulate that the Managing Director who staffs the data entry area has trouble understanding why things are so slow. The project sponsor, manager of these data entry people, has made it her top priority to make the staffing problem crystal clear. She would implement "Smellovision" if it were possible. This feature jumps to the front of the backlog, the team completes it, the sponsor is happy, and everyone has a good day.

Design and value spikes should be tactics that every Product Owner keeps handy. You don't have to be omniscient if you have a technique that lets you become expert on one little piece at a time. And that's as close as we get to Enterprise Fun these days.

Tuesday, November 15, 2011

Product Ownership is very difficult. Take a big step away from the Agile Manifesto and think for a moment about project stakeholders, user stories, and how they don't fit together as neatly in real life as they do in Mike Cohn's User Stories Applied, as awesome as that book is. How in the world is it possible for there to be a single person standing in for all project stakeholders in negotiating with the team?

From http://www.implementingscrum.com/2007/04/23/the-cast-of-implementingscrum-infamous-yet/

Conveniently, Cohn himself points out The First Fallacy of the Product Owner. And that is, of course, that such a being actually exists:

On an ideal project we would have a single person who prioritizes work for developers, omnisciently answers their questions, will use the software when it’s finished, and writes all of the stories. This is almost always too much to hope for, so we establish a customer team. The customer team includes those who ensure that the software will meet the needs of its intended users. This means the customer team may include testers, a product manager, real users, and interaction designers. (User Stories Applied, p. 8)

As Cohn says, the Product Owner may in fact be a "customer team" of some sort. And this team needs to somehow get onto the same page so that if the non-product-owners on the team have a question, they get only one answer, no matter who on the team they talk with. Scary, but true, and very real life. Can that be done? Yes, certainly. But it requires trust and discipline on the "customer team," and it may not come naturally at first. And wait, there's more!

The Second Fallacy of the Product Owner is that the main people with whom the project must be concerned are the real users of the software. Such a fallacy relies on a confusion between project "stakeholders/sponsors" and project "end users." They are not the same! What do you do in a corporate environment in which the "customer" with the budget is an executive decision-maker who will never use the software or even see it? On real projects in corporate environments, your product owner needs not only to understand and manage the desires of competing software users, but also to build a consensus all the way up the executive chain of any sponsoring stakeholder organizations, and keep these sponsoring stakeholders as well as the end users (not to mention developers, testers, and other team members) all on the same page. And of course business goals and user needs keep changing. And that brings us to the related:

Third Fallacy of the Product Owner, which is that "business value" can be determined by operational end users. Don't get me wrong. Executives, and even some line managers, are the last people in the world you should go to, if you want to find out how software is used in the wild. You will certainly build a big, unfortunate loser piece of software if you just listen to "the brass." They don't know! They probably don't even know all the systems their employees use to keep the business running. You must listen to the real users of the software.

But if your goal is to deliver "high value" software first, and "lower value" software later, then the real users won't have the full picture either. You need the executives to make decisions like "just skip that whole part of the old process--that never made sense." And this begins to get very tricky indeed, because the "customer team" is now dealing with Stakeholder A who may be in a position to deeply change Stakeholder B's job, or even eliminate it. So if you're under the impression that the "customer team" is one big happy family off in the "Product Owner room" all together, you need to let that go too.

So where do Behavior Driven Development (BDD) and Feature Injection come into the discussion? My colleague Jeffrey Davidson just put together a brilliant slide presentation on these very topics, which you can see here. BDD and Feature Injection are both methodologies which have been described as a step forward in terms of gaining a common understanding of system behavior between the "business side" of a team, represented by the Product Owner, and the "development side," represented by the developers and systems testers. Because BDD and Feature Injection allow the system to be described as a series of examples, rather than a series of "the system shall" statements, business people focus on business value, and developers figure out the best technical way to get the business value out fast.

But BDD and Feature Injection provide something even more valuable, if you're being asked to be a Product Owner, or to be part of a customer team. Both techniques provide a way to get real software users and executive stakeholders onto the same page, and to keep them there as well. And that is a very good thing.

BDD, as Jeff says, is all about describing software in terms of examples, instead of in terms of the components that make it up. (Please also see this timely repost of Martin Fowler's bliki post on "Specification By Example.") "Given" a certain circumstance, "When" a certain real software user does something, "Then" you should see a certain result. What does the Given/When/Then formulation do to help the customer team?

High level user stories ("features" or "epics") described in terms of given/when/then give real software users a succinct view of how the software brings overall revenue to the firm. You may also gain this type of benefit by revising the order of the traditional Mike Cohn-style user story elements: "As a <role>, I <need to incur a specific business value>, by <feature>." Here the roles are far more likely to be executive roles like "as the CEO" or "as the CFO," but it's wonderful to know what it is that the team is building, in terms of the overall flow of revenue to the company.

Low level user stories, the size that can be developed by the team, clarify to executive stakeholders exactly what real software users are doing and why. Most normal executives cannot withstand even a single "the system shall" statement, but they may participate eagerly in a discussion couched in terms of "given/when/then," and be able to allay fears among the real software users that they have a specific need for some part of the system that is particularly obnoxious to use. That's a good thing too.
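To make the Given/When/Then formulation concrete, here is a minimal sketch of a scenario written as an executable Python test. Everything here is hypothetical and invented for illustration (the library-fine rule, the function name, the amounts); it is not from any real project, just one common way teams turn a scenario into an automated check:

```python
# Hypothetical business rule under discussion: patrons who have lost a
# library card before pay a fine of at least $50; first-timers pay less.
def fine_for_lost_card(cards_lost_before: int) -> int:
    """Return the replacement-card fine in dollars."""
    return 50 if cards_lost_before >= 1 else 25


def test_repeat_offender_pays_at_least_fifty():
    # Given a patron who has lost a card before
    cards_lost_before = 2
    # When the receptionist enters the fine
    fine = fine_for_lost_card(cards_lost_before)
    # Then the fine is $50 or more
    assert fine >= 50


def test_first_timer_pays_less_than_fifty():
    # Given a patron who has never lost a card
    # When the receptionist enters the fine
    # Then the fine is under $50
    assert fine_for_lost_card(0) < 50


test_repeat_offender_pays_at_least_fifty()
test_first_timer_pays_less_than_fifty()
```

The point is not the code itself but that the scenario reads the same way to the business user, the executive, and the developer.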

What about Feature Injection? Feature Injection, invented by Chris Matts, and explored in print primarily by Chris and fellow FI aficionado Liz Keogh, says that you need to start all software development discussions by talking in terms of the type of business value that makes sense to the CEO and the CFO. Chris and Liz will tell you that the team should be describing, with examples, what the new software will be doing for the business when it's done. So the process is: 1) identify the value, 2) identify the feature that will give you the value, 3) describe that feature in terms of examples. Kent McDonald provides a nice, "gentle" introduction to Feature Injection here.
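The three Feature Injection steps might be jotted down like this on a card. This is a hypothetical sketch; the value statement, feature, and scenario are invented purely for illustration:

```
1) Value:   As the CFO, I want to reduce late-payment write-offs.
2) Feature: Automatic reminders for overdue invoices.
3) Examples (BDD style):
     Given an invoice that is 30 days overdue
     When the nightly reminder job runs
     Then the customer receives a reminder e-mail
```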

A customer team which combines Feature Injection to build a word bridge between "value" and "features," and then describes those features entirely in terms of BDD's "given, when, then" scenarios may find itself aligned not only with software developers and testers, but also with itself.

Thursday, November 10, 2011

As a coach and trainer, I have noticed that when I start the "Roles, Personas and Goals" discussions, attendees in the room are 40% more likely to start surreptitiously checking e-mail on their smartphones than when we talk about comparatively exciting topics such as "stand up meetings," "story boards," the "burn up versus burn down chart" debate, or "evolutionary design." I had to lure you to this blog post, in fact, by riding on the coat-tails of the Breaking Dawn, Part 1, premiere tonight at midnight. You aren't interested! You have heard it all before! "To write good software, you need to know who will be using it and what they want to accomplish." Blah blah blah--sounds like something your mom would say. "Give the roles names, and think of them as people. If multiple types of people play the same role, give them different names, and call those things 'personas.'" Now you sound trendy and slightly unhinged. Let's go back to the burn-ups.

From http://www.fantasybooksandmovies.com/edward-cullen.html

Let's not! I'm going to simulate a "requirements" conversation with your business users twice, for purposes of comparison. "Before" will represent what you may be doing now. "After" will represent the same conversation, except that all players focus in a disciplined manner on roles and goals--who is doing the action and why? Could it be that such a slight change of focus will give you measurable improvements to your software? I say yes!

Before ("the system shall"):

Analyst: "So to complete requirement A-445, after the screen prompts for the three criteria, if the first check box is activated and the fine amount is under $50, the save button is disabled and an error message is displayed."

Business user: "Right."

Analyst: "So we're done then?"

Business user: "Yes."

After (someone in particular is doing something for some reason):

Analyst: "So who has access to the screen where you can enter the fine amount?"

Business user: "The receptionists in the front office."

Analyst: "Let's call our sample receptionist 'Gayatri,' okay?"

Business user: "Um, okay, if you say so."

Analyst: "All right, so a person who lost their library card walks up to Gayatri, and Gayatri brings up the screen where she can enter the fine amount. She asks whether the person has lost a card before, and if the answer is yes, the fine needs to be $50 or more, depending on her mood. If not, they can get a new card for some amount less than $50, based on a sliding scale that Gayatri maintains. Is that right?"

Business user: "Um, no, actually not. We have a triage person who handles the library card issues. If the person has lost their library card more than ten times, the triage person calls an armed escort to take the person out of the library forever. If it's less than ten, the triage person updates a flag on the person's record to show that it's either the first card lost, or some number larger than that. They're the ones who update the 'repeat offender' flag."

Analyst: "Okay, let's call the triage person Jens."

Business user: "Uh huh."

Analyst: "Does Jens use the same screen that Gayatri uses?"

Business user: "No, Jens gets a screen with a panic button and no fine amount. That's why I didn't bring it up. We're changing the rules around fine amount, not the repeat offender flag. Do you agile guys get partially lobotomized before they let you loose?"

Analyst: "Would you expect Gayatri to need to update the repeat offender flag, or should it be locked down?"

Business user: "Hm. Interesting point. We're instituting this new rule to ensure that the library can protect its fine revenue. We set up Jens's job in the first place to separate enforcement from the clerical function of just putting in the fine amount."

Analyst: "So the 'repeat offender' flag should be disabled on the fine entry screen?"

Business user: "Yes it definitely should. We don't want Gayatri gaming the system. We've had a history of soft-hearted admins resetting patrons to non-offenders just to take a little bit of money off of the fine. They're just enablers. Those people are monsters--they go through ten, fifteen library cards a year!"

Analyst: "Yikes! Okay, so we're making two changes to the fine entry screen: first, lock down the repeat offender flag and only allow it to be edited on the panic screen by Jens. Second, enforce that when Gayatri hits 'save,' if the repeat offender flag is set to 'yes,' she must collect $50 or more."

Business user: "Yes, that's right."

Analyst: "So we're done then?"

Business user: "Yes."

This may seem like a fanciful and trivial example, but I hope it illustrates the point. In this case, actual library revenue could have been affected if one field had been left editable rather than changed to read-only. Moreover, by describing actual people doing actual tasks, this analyst was able to find out about a whole additional screen she didn't know about before. I've been on projects where analysts focused on "the system" didn't find out until months into actual software development that "the system" was only one of TWO systems that Jens, Gayatri, and the other imaginary personas were using to keep data up to date. The workload for the project doubled in a day, once someone shadowed an actual data entry person to see what they did.
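
As a sketch, the two changes the analyst and business user converged on could look like this in code (screen names, field names, and validation messages are invented; this is not from any real system):

```python
# Sketch of the two rules the roles-and-goals conversation surfaced.
# Screen names, field names, and the $50 threshold come straight from
# the (fictional) library example above.

MINIMUM_REPEAT_OFFENDER_FINE = 50

def can_edit_repeat_offender_flag(screen_name):
    # Change 1: only Jens's triage screen may edit the repeat offender
    # flag; it is locked down on Gayatri's fine entry screen.
    return screen_name == "triage_panic_screen"

def validate_fine_entry(repeat_offender, fine_amount):
    # Change 2: on save, repeat offenders must be charged $50 or more.
    if repeat_offender and fine_amount < MINIMUM_REPEAT_OFFENDER_FINE:
        return (False, "Repeat offenders must be fined $50 or more.")
    return (True, "")
```

Note that the first rule would never have been written down at all under the "the system shall" version of the conversation.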

People-focused requirements gathering is the only type of requirements gathering trustworthy enough to base your company's cash flows on, or anything else important to your operations. Even if you are an analyst who is a subject matter expert in her own right, take the time to mentally walk through the process as performed by your actual end users, and don't focus too quickly on the details of "the system" and "the screen." The focus on people is itself important, and it will bring you and your company significant return on investment.

Friday, November 4, 2011

Jim Highsmith recently posited that "velocity is killing agility!" which is kind of a fun hypothesis. Jim observes that company leaders he talks with around the world these days are a little too quick to measure the effectiveness of their agile software development teams by keeping track of the teams' velocity (the average amount of estimated software effort the team delivers per software delivery iteration).

This is quite ironic, of course, since one of the rudimentary things you learn when you first study agile software development is that "velocity" is measured in abstract terms like "points" or "ideal hours." The numbers are relative to each other, mapping points to time is a fluid process, and the mapping is only valid for one team at a time. The idea of comparing velocities across teams is so absurd, in fact, that there is an entire web site devoted to the concept of estimation using fruit (cherry for very small amounts of effort; watermelon for large amounts). If each team chooses a different theme (fruit, mammals, buildings in Chicago, planets), one can see even more clearly that trying to compare one team to another is a recipe for disaster.
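
A toy example (teams and numbers invented) makes the non-comparability concrete:

```python
# Toy illustration: two teams deliver the exact same three features,
# but each calibrates its own private point scale, so their
# "velocities" cannot be meaningfully compared.

team_a_estimates = {"login": 2, "search": 3, "reports": 5}    # points
team_b_estimates = {"login": 8, "search": 13, "reports": 21}  # also "points"

velocity_a = sum(team_a_estimates.values())  # 10 points per iteration
velocity_b = sum(team_b_estimates.values())  # 42 points per iteration

# Identical output, wildly different numbers: ranking Team B "4x faster"
# than Team A is nonsense, because the unit is private to each team.
print(velocity_a, velocity_b)
```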

But of course these executives aren't always being quite so blunder-headed with their metrics as to compare one team to another. Instead, as Jim describes it, they:

Try to get teams to deliver faster and faster--if you delivered 5 points in this iteration, try for 6 next time. Or if your average team velocity was 5 in this project, keep the team together and try to get up to an average of 8 in the next.

Evaluate effort solely in terms of the software's value at first release to the market--if you measure your effectiveness by "how quickly you get to feature-complete," you quickly lose track of important things like "how quickly can you change" with the market, "how much does it cost" to maintain the thing, and even "how delighted are customers to use the application."

Lose sight altogether of the actual business bottom line. In real life, software developers are being paid to deliver value to their home businesses, whether measured in increased revenue, decreased cost, increased customer satisfaction, decreased risk of government regulation non-compliance, increased greenness, or anything else in the host organization's balanced scorecard.

These leaders are falling into the classic "it goes to 11" trap made famous by Christopher Guest's character, Nigel Tufnel, in the immortal movie, Spın̈al Tap. Tap aficionados will remember that Nigel, lead guitarist for the band, is very proud of his amplifier, specially produced with control knobs that go to "11," not just "10." Nigel doesn't even understand the question "but why wouldn't you just get a louder amp?" so pleased is he that his goes to 11.

But what is a leader to do, if she wants to measure the productivity of her IT staff? You need to figure out who to promote and who to put in the basement without a stapler, after all. I would recommend the following, taking Jim's points in reverse order:

Measure the value generated by software investments. This is not new--it's Jim's point in his velocity blog post, and in it, he also cites Martin Fowler's 2003 bliki post, "CannotMeasureProductivity," on this exact point. At the end of the day, a business is in business to create value, not to ensure its developers are working at an abstract maximum capacity. If Team A's software generated $10 million in value over two years, and Team B's software generated $1 million, and both efforts cost the same, then it would be worthwhile asking some questions about why one effort was so much more valuable to the business than the other. You may get a range of interesting answers, some having to do with how well the team is working, and some having to do with the overall value of the concept they were delivering.

Evaluate your return on investment over the life of the product, not just one quarter. In IT just as in other investments, leaders often think too much in the immediate term. Certainly, it's a good idea to rush to market with software when you can get a first-mover advantage. Even in this case, however, your IT investment should be made with an eye to the long term. What will you do after you have jumped into the market with the first offering of this kind? Have you positioned yourself to maintain your lead, because you have what Jim calls a "supple delivery engine"? Or have you worn yourself out, and put an idea on the market which can easily be copied by others? Will a competitor with a robust software delivery methodology quickly leapfrog you and pull ahead? Unless you're on the brink of insolvency, you need to look at the expected return on investment for the life of the product in the marketplace, not just your profit and loss over this budget year.

Balance each software-development specific metric with a complementary one. You may have good reasons to measure the amount of software a team is developing. Perhaps you have concerns about specific individuals whom you feel aren't carrying their weight. Never say never, I say. But if you are going to measure velocity, then make sure you measure other things as well, including quality (fit of software to use and actual failures of software to work as intended) and complexity (is the software impossible to change because it's so poorly written?). These three metrics balance each other out to some degree, and teams should hit minimum standards in all three dimensions, not just one. If you ask for only speed, that's all you're going to get, and you won't like it.
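
One way to picture the balance (thresholds and names invented for illustration): treat the three metrics as minimum bars that must all be cleared, rather than a single speed dial to crank to 11.

```python
# Sketch: a team must clear minimum bars on speed, quality, AND
# complexity, not just velocity. All thresholds are invented.

def team_is_healthy(velocity_points, escaped_defects, avg_complexity):
    fast_enough = velocity_points >= 8    # speed: points per iteration
    good_enough = escaped_defects <= 2    # quality: bugs found in production
    maintainable = avg_complexity <= 10   # e.g. average cyclomatic complexity

    return fast_enough and good_enough and maintainable

# A "fast" team shipping buggy, tangled code still fails the check:
assert not team_is_healthy(velocity_points=15, escaped_defects=9, avg_complexity=30)
assert team_is_healthy(velocity_points=9, escaped_defects=1, avg_complexity=7)
```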

Jim makes the proposal that agilists have been too quick to give all of the decision-making power to the Product Owner from the business side, and he suggests remedying this problem by instead creating a team to make decisions with one person from the business and one from IT. I'm not sure I agree with this solution, since it's extremely powerful to have one person (the person who will live with the results of the decision) calling the shots. However, I do think that if business people begin to embrace the notion that software quality has actual P&L ramifications for the business, they will naturally want to consult the tech lead about what will create the best business results.

Please read Jim's post--he suggests other really good things, like using value points to measure software value as delivered, rather than focusing on the effort it took to get there. As always, there is a lot there.

Saturday, October 22, 2011

Let's say you are in charge of the "services" operation within the IT department of a large enterprise. You're a government entity, a telecommunications giant, or some other titan of industry. Other IT organizations have grown up around you in the enterprise over time, and they're writing cute little front-ends that get information from customers to your services, and pass the results back. They're doing iPods and tablets, and you're still dealing with Cobol. Your colleagues are all concerned with "cascading style sheets" and "user experience" and color schemes and the like, but you're doing all the grungy, large-scale back-end work that actually causes the money to pour into your organization and keep you all paid.

Image courtesy of http://www.apolloamusements.com

Needless to say, your vast IT cube farm in the sub-basement is not equipped with a foosball table.

Suddenly, one day, you get an edict that your enterprise is going to "go agile!" A perky trainer, most likely bringing with her stickers, pipe cleaners, and brightly colored balloons, explains that in the brave new world you are entering, you will now be demonstrating your software to your customers every two weeks. It will be a "feel good" moment for you and for them.

This is where you break it to Ms. Perky that you have no user interface. There is nothing to show. If you do your work correctly, calculations no-one sees will now occur differently in some way, and sometime, some system, somewhere, probably running on the experimental browser of someone's 4-year old's Nintendo DS, is going to register that new calculation as an asterisk on the screen. As ThoughtWorks project management guru Joe Zenovich says, "Backend Stories Make for Unexciting Demos," although of course he goes on to show you what to do about that, as does "Scrumwiz" in How To Demo Your Backend Software Increment.

So fear not, Backend System Vice President! You too can experience the fun and excitement of agile software development. There is, indeed, a way for you to structure development "stories" around your work which will be useful to customers, will keep your conversations interesting and can be demonstrated at a biweekly showcase! Solutions fall into three categories:

Pair up. If your system has one or two main customers, and your teams are dependent on each other, then build and demo the stories together. Sometimes what happens is that your team is doing "the big Oracle database" and their team is doing the "slick web interface," because you have different skill sets on your teams. But from a business perspective, you're both producing exactly one experience for your customer--they interact with the screen, the screen interacts with your systems. The best thing you can do for your customer is show them steady progress on actual software they can use, and that means structuring your work together and doing a joint demo.

Create viewable mocks or stubs. As my rockstar BA colleague @jenny_wong pointed out when I consulted her on this matter, your services do not work in isolation. If your system serves as the back end to a lot of disparate customers, then you will still need to know what each client system is expecting your back end services to do, from a customer perspective. This map of usage serves as the basis for the details of the interface between your system and all of its clients. In this case, for functional automated testing purposes, you will likely need to create an all-purpose "mock" or "stub" which will mimic the front-end behaviors the real systems will give you (for the difference see Martin Fowler's "Mocks Aren't Stubs"). Seeing those tests run can be quite informative and inspiring for customers of the various front-end systems, particularly when accompanied by a powerpoint deck that explains what they are seeing.
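
As a sketch of the idea (service, client, and the fine formula are all invented), a hand-rolled stub front end can exercise a back-end service in front of a showcase audience:

```python
# Sketch: a stub "front end" that stands in for the real client systems
# during a showcase, feeding canned requests to the back-end service and
# displaying what comes back. Names and the fine formula are invented.

class BackendFineService:
    def calculate_fine(self, days_overdue):
        return days_overdue * 0.25  # say, 25 cents per overdue day

class StubFrontEnd:
    """Mimics the front-end behavior the real client systems will have."""
    def __init__(self, service):
        self.service = service

    def demo(self, sample_days):
        return [(d, self.service.calculate_fine(d)) for d in sample_days]

for days, fine in StubFrontEnd(BackendFineService()).demo([4, 40, 400]):
    print(f"{days} days overdue -> ${fine:.2f} fine")
```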

Just show powerpoint plus log files flashing across the screen. At worst, you may want to show them a running batch file spitting out status messages at rapid pace to a screen, with some accompanying powerpoint showing progress this week on the architecture compared to progress last week. Seeing the fast-paced letters fill the screen can produce a small amount of adrenalin in the most weathered front-end system veteran, particularly if this week's messages are different than last week's.

As Mike Cohn, so often the last word on all things scrum, says on his web site, what ties these story-writing and story-demo techniques together is that at the end of the day, you and your team of grungy back end system coders may never see a customer or even daylight, but you are in business because someone calls your service to do something, and someone calls back later to see what happened. The key to making your work interesting to your funding authorities on a biweekly basis is to bring in the people who are in touch with your front end system or systems and show them the return they're getting on their investment.

Sunday, October 16, 2011

If you are a BA looking down the barrel of an agile adoption at your work place, you may feel worried that you will be switching from reams of paper stored in large binders to index cards. And not the big cards either. You're looking at the 3x5 ones. You feel this plan is ridiculous. You may murmur something to yourself about "insane fads!" or "damage control!" or "keep secret requirements locked in my desk and bring them out later to save the day!" Take heart, dear friends.

card wall from http://www.scrumology.net/2011/09/15/kanban-kickoff/ (Mr. Yuck is public domain)

Things may be different at start-ups, in small shops, and in places already whirring like Kanban tops. But in most fair-sized enterprise IT shops I've worked in, your first efforts at agile implementation will not aim to reduce the quantity of requirements documentation significantly. Instead, they change the timing at which the requirements are written, and with that change in timing, reduce the work you will spend as a BA filing and executing change requests, and reworking traceability matrices. The overall work may also be reduced since you don't write details for features you don't implement, but the ratio of working software to the number of words and diagrams in corresponding documentation may not change at all.

That's right: you will still end up with high quality requirements at familiar levels of bulk. Let's compare what you do now with what you might do with a more agile process.

Requirements-first technique (you likely do this now):
1) You and your group may write requirements for 3-6 months. (This may be divided into separate "business" and "systems" requirements phases). You may then have some "architects" do some further system requirements in the form of a detailed "systems architecture document" (SAD).

2) Once development starts, if anything changes, you file or update a change request and re-write all of the linked requirements (and the traceability matrix that links them together!) to match the actual behavior of the working software. Changes can occur because:

the market changed, so now your business needs something different

you have learned more about what you're doing, so you don't want exactly the same implementation you wanted before

the development group ran out of time, so a bunch of requirements need to be removed.

or for other reasons.

If development takes 3 months, and system test takes 3 months, and UAT takes 3 months, you could end up spending 15-18 months first writing, then continuously re-writing the requirements. And if you're in the kind of shop that has to "trace changes," your resulting documents may be very hard to read, because everything you ever wrote is still there, color coded or struck out.

"Just in time" requirements technique (typical enterprise-level agile teams do this):
1) You, the SMEs, the developers, and the testers all get together for a "release planning" workshop to determine what the high-level goals are, and what the system will look like (also at a high level) to meet those goals. You create a flexible outline of the pieces of the system using the index cards (sometimes called "story" cards), arranged in the rough order you plan to build those pieces. You can think of this as just writing the table of contents for what would have formerly been your requirements documents.

2) Just before development starts, you work with SMEs and other BAs to write fully detailed requirements for each of the stories you intend to tackle in the upcoming 2-week iteration. You may start an iteration ahead, or you may just start a few days ahead. The fully detailed requirements attached to a story are called "narratives" at ThoughtWorks, or sometimes just "detailed requirements." As you go, you keep writing the details, the stuff that adheres to your table of contents, just before the developers actually start coding. If changes occur to the big picture or any of the implementation that's going to occur down the line, that's fine. You let developers complete the work for the current iteration, but you change the remaining "release plan" cards to reflect the team's new learnings and business needs.

If your workshop took 2 weeks, and development takes 3 months, you have now completed exactly one outline of the work (the release plan), and one set of detailed documents for what you actually did (the narratives attached to the stories you did). Roughly speaking, 18 months turns into a little over 3.

Graphically:

So if someone tries to tell you that agile lacks rigor, or that you don't need BAs to facilitate SME discussions and record the results any more in an agile project, you may want to take their hysteria with a pound or so of salt. Ask them: are they a theory-drunk zealot, or have they actually tried it in your environment?

I was originally going to start the title of this post with "BA Corner," but I didn't, because my teenage daughter has confided in me that whenever I say "BA," she thinks "Bad A**." You can imagine the resulting dinner conversations, when I attempt to talk shop. At any rate, if you're feeling particularly fierce today, please feel free to re-read this post substituting her "BA" definition. Power to the BAs!

'Using "working software" as the measure of progress is narcissistic,' since it focuses on what developers are interested in (the software), not what the business is interested in (the value the software brings to the business)

It's a further cop-out to think 'that great software can be developed through a process of dead-reckoning with business people.' (the hyperlink is Gualtieri's). Incremental design by committee may be democratic, but it's not going to create any iPods.

I found myself very entertained by this blog post. Particularly the "dead-reckoning" reference. Anyone who knows Johnny Cash's famous song, "One Piece at a Time," has probably thought of that song when first encountering evolutionary design.

From http://lyricsdog.eu/lyrics/505397

But what are we to aspire to, if we use the "cop-out" label on "working software," "customer collaboration," and "responding to change?" That would make us one for four, Agile Manifesto-wise. Here, I found myself less in agreement with Gualtieri. He proposes four new pillars to replace the four old ones:

Parallel (implement things in parallel, not serially)

Immersive (developers should know the business, so they can build an experience, not just write some loops)

Software (needed for completeness, since this is a new software manifesto)

Studio (re-imagine software development as producing a customer experience)

I think it's actually three pillars, not four, plus as Gualtieri points out wryly, the pillar names together build to an unfortunate acronym. But that's okay.

What I was surprised by is that when Gualtieri says "agile is a cop-out," he doesn't mean that the original manifesto was too focused on the development point of view (at the expense of other team members and stakeholders), but instead, that developers should aim for more. They should stop being part of the cast, and take the heroic central roles they've always meant to have. The problem: '[d]evelopers often blindly rely on businesspeople, business analysts, user experience designers, and customers to tell them what will make a great user experience.' Gualtieri's solution? Developers should do user experience themselves and cut out the middle man!

Great software talent are renaissance developers who have passion, creativity, discipline, domain knowledge, and user empathy. These traits are backed by architecture, design, and by technical know-how that spans just knowing the technology flavor of the day.
Yikes, and measuring progress by "working software" is narcissistic? Moreover, how many people out there can actually be "best" at passion, creativity, discipline, domain knowledge, user empathy, architecture, design, technical know-how, all simultaneously, and while still holding a day job? I think I know one person who has six out of seven of these, but he can't design his way out of a paper bag, and gets help with that.

I had actually expected Gualtieri to take this somewhere different. Having agreed with his diagnosis that agile needs to be more than coding, and needs to take more interest in delighting customers, I had thought he might take this more in the direction of Jez Humble's Continuous Delivery or Eric Ries's Lean Startup, where small teams build towards compelling "delightful" customer visions by breaking down and testing ideas against actual customer use, and modifying to fit. The "definition of done" in this case isn't "deployed" but rather "deployed with an ability to do A/B testing and measure usage patterns to determine next steps."

Or, alternatively, I thought he might invoke Steve Jobs, so much in our minds these days, to say that having a single person of genius impose a vision with an iron hand is more important than everyone having a say all the time, and developers should have some respect for that concept.

Or indeed, I thought he might be going in the direction of Stephen Denning, whose appealing "Radical Management" corporate vision seems to meet Gualtieri's needs for Manifesto tweaking by spreading "the Wisdom of Crowds" all the way up into senior management, empowering small teams to come up with delightful products by getting executives out of the way.

Of course I agree with Gualtieri that a focus on "customer delight" appears to be an engine which can drive corporate success quite vigorously. And of course design by committee and an over-reliance on consensus leads to least-common-denominator solutions which maximize team peace at the expense of measurable customer value. And of course coders should take some responsibility for entering into discussions about design visions or user interactions, and not require minute written instructions. But I would picture getting to "customer delight" as being a process of refactoring the way teams work together, and perhaps elevating our joint respect for user experience designers as people with distinct skills not available to the entire general population. A coder can be an experience designer but need not be (and vice versa). Let's just make sure there's always at least one on the team.

Tuesday, October 4, 2011

It was a bright and sunny day, and suddenly an agile software development project began.

Scott Ambler started his classic 2008 Dr. Dobbs article "Iteration Minus One" this way. And of course he went on to drive home the point that projects don't just emerge fully staffed like Aphrodite from the waves. Somebody has to do the logistics.

But how much planning and setup are you allowed to do on a project before they revoke your agile badge and change the secret handshake? Although the Agile Manifesto celebrated its tenth anniversary this summer at the Agile 2011 "Return to Snowbird" conference, people seem as disagreeable as ever about what the runway should look like for an agile project, and how long it should be.

Just before that conference, Vikas Hazrati's InfoQ article urged readers to put their teams straight to work: "every iteration needs to produce working software." He cited George Dinwiddie, Alistair Cockburn, and Ken Schwaber, concluding "there seems to be a general consensus that iteration zero could be best avoided, if possible." Hazrati conceded that "chartering" could still occur, but not "building a detailed backlog and creating the infrastructure."

Are you kidding me? If we're working with software teams that currently think nothing of spending a whole quarter producing nothing but requirements, do we seriously have to be all macho about "showing the business side some working software" after an hour or even a week?

In most real-life corporate teams I have seen, "jump in and scrum" creates a situation where software is prolific and predictability is a four-letter word. (Who's counting letters? Pencil pusher!) Sure, the old estimates were off, but the "just scrum" people are now telling the stakeholders that they are wrong to even ask for specific estimates around their return on investment. Questions like "what will I get by the end of the summer?" are waved away with "gotcha--we're agile now." Throw in concepts like the "overflow scrum," (where you do testing of the stuff you wrote over the last quarter or two to make it actually work), and IT has just used agile vocabulary to really screw over the businesses that fund them. No--this vileness isn't "pure agile," but it certainly seems to happen a lot.

In non-theoretical situations, I recommend that newbie corporate agilists take deep breaths, take their fingers off the keyboard, and do three things before starting Iteration 1, the sum of which will more than pay for itself in project accountability, transparency, predictability, quality, and actual value to someone in particular:

Build a lean business case, consisting of a hypothesis that the business will gain more than it loses by investing in a software project with a particular direction. The business case should include the specific measures the funding authority will use, along the way, to test that the investment was worthwhile.

Run a chartering, quick-start, discovery, or inception process, in which all relevant project stakeholders from business and technology (including "operations" people) meet together in person for a short amount of time (2 weeks should be more than enough for most projects--ThoughtWorks has even seen this work for a project lasting a year) to agree on the big picture from both a business and technical perspective.

Schedule an "iteration 0," "sprint 0," or "tech setup time," during which business-side people (product owner/business analyst/tester/UAT tester) begin working out details of the stories for iteration 1 (talking across the table as needed to their development and testing buddies), and technical-side people (devs and automated test experts) set up work stations; build, test, UAT, and production environments; and a functioning continuous integration and deployment environment that does nothing more than build "hello, world!"
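
The "hello, world!" bar in that last item is deliberately low: the point of iteration 0 is to prove the pipeline plumbing works end to end. A minimal sketch of such a walking-skeleton smoke check (everything here is invented for illustration):

```python
# Sketch: the smoke check an iteration-0 CI pipeline might run. The
# "application" is just an inline hello-world; the point is proving the
# build-run-verify plumbing works before any real features exist.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('hello, world!')"],
    capture_output=True,
    text=True,
)

assert result.returncode == 0, "build/run step failed"
assert result.stdout.strip() == "hello, world!", "unexpected output"
print("walking skeleton is alive")
```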

The Lean Business Case
Far too few businesses actually think through their project portfolio in terms of what the likely actual ROI will be for each project (increased revenue, decreased cost, increased customer satisfaction, faster speed to market, quality, or achievement of other non-financial corporate goals). Even if you did nothing else whatsoever to become more agile, really great things would happen at your company if someone wrote down ahead of time what the claimed ROI was for each project, and then compared it to what actually happened when the project was delivered. And the "lean" business case would go further--calculate a minimum product to test in the market, actually measure how it does, and evolve from there, always measuring as you go. But take baby steps. Try for just ONE testable hypothesis for the whole project and see how you like it. Success is contagious.
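
Sketched in code (project name, measure, and all figures invented), the one-hypothesis business case is nothing more than writing down a claim and later checking it:

```python
# Sketch: record ONE testable ROI hypothesis before the project starts,
# then compare it to what actually happened after delivery.
# The project, measure, and all figures are invented for illustration.

hypothesis = {
    "project": "online fine payment",
    "claim": "front-desk hours spent handling fines drop by more than half",
    "measure": "front-desk staff-hours on fine handling per year",
    "baseline_hours": 4000,
    "target_hours": 1500,
}

actual_hours = 2100  # measured a year after delivery

met = actual_hours <= hypothesis["target_hours"]
gap = actual_hours - hypothesis["target_hours"]
print("hypothesis met" if met else "missed target by %d hours" % gap)
```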

Quickstart
I've long outed myself as a fan of in-person kickoffs that go longer than the typical executive pep talk you may be used to. I've now discovered that as usual, my "discovery" was covered in a book even before the Agile Manifesto--please read Kent Beck and Martin Fowler on "Planning Extreme Programming" for advice which is as true now as it was in 2000. There are things you need to do even before you grudgingly start your "Iteration 0" concept, whatever that might be. Get the whole team together, business and technical stakeholders alike. Establish the relationships that will carry into phone and email conversations for months to come. Let everyone understand what's going on. And to give your business more predictability and control than they ever dreamed possible, go ahead and put together a "span plan" that shows what users will do with your software, divided into logical coding segments of no longer than an estimated 3-5 days apiece.

During this time, the group should put together a small charter document, a backlog of high-level epics/features which comprise the current project vision, a high-level set of scenarios for use of the software, a high-level architecture, and a detailed story backlog for the first release where all stories have been sized at about 3-5 days, estimated in nebulous units of time.

Despite now-conventional wisdom about "just Scrum," please lay out a sensible and fairly complete initial "master story list" or "product backlog" up to the first release. Allow some additional room in the plan for stakeholders to change their minds (say, 20%), figure out what the expected number of iterations will be, and lay out which stories you'll do when. The plan will change, but now everyone is looking at the backlog with a fixed cost and fixed time release in mind, and everyone understands that scope is the only thing you can play with.
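The release arithmetic here is simple enough to sketch. Assuming (hypothetically) a backlog sized in story-days, a known team capacity per iteration, and the 20% buffer for changed minds mentioned above:

```python
import math

def iterations_needed(story_days, capacity_per_iteration, buffer=0.20):
    """Estimated iterations to the release: total story-days, inflated
    by a change buffer, divided by the team's per-iteration capacity."""
    planned = sum(story_days) * (1 + buffer)
    return math.ceil(planned / capacity_per_iteration)

# e.g. twelve stories averaging 4 days each, and a team that completes
# 20 story-days per 2-week iteration:
print(iterations_needed([4] * 12, 20))  # -> 3
```

The output is a planning scaffold, not a promise--but it gives the fixed-cost, fixed-time conversation a concrete starting point.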

Iteration 0
Once your release plan is done, plan for some time to prepare before you start to measure your velocity in Iteration 1. You need some requirements to be well understood so developers can hit the ground running with those first stories on the first day of real coding. You may very well do technical investigation work at this time ("spikes"), as well as setting up environments and detailing out initial stories and acceptance criteria.

Again, don't plan to do this for a whole quarter--if you're doing 2-week iterations, start the project with a 2-week Iteration 0 to get ready. Don't belabor it, don't buy all of your production hardware before it goes on clearance, and certainly don't lay out "the whole data layer" in Iteration 0.

Saturday, October 1, 2011

My ThoughtWorks colleague Rolf Andrew Russell recently pointed out on our company intranet that there are implicitly two definitions out there of the agile manifesto's principle of "continuous delivery of value":

"'business' CD - what we sometimes call the full monty. minimizing lead time from idea to production and then feeding back to idea again."

He further observed, "When a business person or executive hears the term 'continuous delivery' they naturally interpret it as the latter. And this makes sense because 'business' CD speaks to the true business value and encompasses 'technical' CD. But 'business' CD is way more than just the devops stuff. It is changing the way XD, PMO, analysis, etc, work, and minimizing their lead times."

And yet it's still hard to find a list of the practices a CD-oriented person should follow if she is not a developer or QA, but is instead an Agile user experience designer, program manager, product owner, or business analyst.

Technical CD

But let's start with "technical CD." What is that? We've been very excited lately at ThoughtWorks about the concept of technical continuous delivery, championed by Rolf as well as Jez Humble, Martin Fowler, and TW alum David Farley. Supported by a practice called "Continuous Deployment," this concept captures the imagination especially if you're in a corporate IT department, because you picture a world where not only can you have software teams WRITE software quickly, but you can also have them DEPLOY quickly and safely to production.

Perhaps you live in a world where you've managed to set up a hyperperforming agile team, and you've started delivering small chunks of tested software to your pre-deployment environment every two weeks. Your users are delighted, even though your well-tested software has to sit in a QA holding bin for a week before it moves laboriously to the UAT testing server, and from there, in ponderous splendor, to production. Months have passed, and a small startup in Albuquerque is already making money on your concept--except theirs is at version 3.0.

Now picture that you can put that software into production at will, even ten times per day. And it's perfectly safe, because you can decide later whether you want to turn new functionality on or not. And you can even be ITIL compliant. That is a beautiful vision for many of us, and CD is consequently and justifiably quite popular. Jim Highsmith built this wonderful diagram to show how you might progress from agile development practices to "continuous integration" (frequent check-ins to main, with automated builds), and from there to "continuous deployment," and the accompanying strategic impact to be derived from this progression:
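The "decide later whether to turn it on" trick is usually a feature toggle. A minimal sketch (the flag name and both code paths are invented for illustration):

```python
FLAGS = {"new_checkout": False}  # deployed to production, switched off

def legacy_checkout(cart):
    return sum(cart)

def new_checkout(cart):
    return round(sum(cart) * 0.95, 2)  # hypothetical launch discount

def checkout(cart, flags=FLAGS):
    """Deploy continuously, release on demand: the new path runs only
    when the toggle is flipped, so shipping the code is always safe."""
    if flags.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

Flipping the flag in configuration releases the feature without a redeploy, which is part of what keeps ten-deploys-a-day compatible with change control.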

From http://www.jimhighsmith.com/2010/12/22/continuous-delivery-and-agility/strategic_continuous_delivery/

But note that introducing "technical CD" into your environment only gets you from development kick-off to operational deployment. Although your IT department is now building software "right" in a big way, how do you know that you're building the right thing? That's what brings us to:

Business CD

What would the "continuous delivery business case" look like? I think you can look at this from (at least) two perspectives:

CD powers the lean startup. As Jez Humble points out in this InformIT article (and elsewhere), continuous deployment enables what Eric Ries calls "the Lean Startup." Within the development cycle, agilists have sped things up considerably by eliminating "Big Upfront Design" (BUFD) in favor of a general design plan whose details evolve to fit the customer need. Analogously, the Lean Startup calls for eliminating the Big Upfront Business Case (BUFBC) in favor of a general vision and strategy, followed by a series of small experiments which lead to modifications in both the strategy and the resulting software as you go. Because the Lean Startup comes equipped with CD developers and QAs, it could be 20 minutes from the time a software behavior is requested to the time it is deployed in production to a controlled subset of the software's user base. What this means is that:

Business-side counterparts of CD technical practitioners must therefore be skilled entrepreneurs: Ries's book provides this diagram for the business side of CD:

Image taken from: http://gumption.typepad.com/blog/entrepreneuria/

Business-side CD practitioners need to be people who can identify what Ries calls the value and growth assumptions in their business cases, devise experiments to test those assumptions (starting with the kind of very small experiments developers can build and deploy quickly), interpret the resulting data, and pivot the strategy to allow for the next round of experiments.
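The "controlled subset of the user base" piece is mechanically simple. Here's a sketch of deterministic experiment bucketing (the experiment name and percentage are illustrative; real systems add exclusion groups, logging, and statistical analysis):

```python
import hashlib

def in_experiment(user_id: str, experiment: str, percent: int) -> bool:
    """Stable assignment: hash user+experiment into the range 0-99 and
    compare to the rollout percentage, so each user always sees the
    same variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Roughly 10% of users land in a hypothetical "one-click-signup" test:
cohort = sum(in_experiment(str(uid), "one-click-signup", 10)
             for uid in range(1000))
print(cohort)  # on the order of 100 users
```

The devising-experiments and interpreting-data parts are the hard, business-side work; the plumbing above is the easy part.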

What does that mean for today's corporate PMO director, product owner, business analyst or user experience designer? That answer isn't obvious, but let's take some solace from the fact that the best "traditional" software architects were delighted to descend from their ivory towers and start making software prototypes instead of writing vast theoretical documents. The best build and release technologists were happy to become specialists in designing and building deployment pipelines instead of sweating with hardware and firmware "surprises" under pressure. In the same way, the best business-side people who interact with software development have an opportunity to stop making huge, unjustified bets based on risky assumptions, and instead build up excellent and well-grounded business cases.

So as you put your next agile pilot together, think about your reforms starting at the idea phase, and your business case evolving right along with the software--cost of change is now quite low. You can and must do experiments that fail in order to feel confident about the strategy that succeeds. Sure, it's scary, but it's also tremendously exciting, and it can't be done by technology alone. This is management, analysis, and design stripped to their essence. (But you can leave your hat on).

I've always felt happier belonging to "yes-and" teams than to "no-but" ones, and I wanted to see for myself how improv philosophies and techniques could provide a useful framework for agile/lean software development. So, inspired by Lisa, here's what I found out.

Most inquiries into improv lead to Del Close, granddaddy of Chicago-style improvisational comedy, and co-author of Truth in Comedy: The Manual of Improvisation. Del famously bequeathed his skull to Chicago's Goodman Theater, to be used as Yorick in productions of Hamlet. His co-author and executor, Charna Halpern, was unable to obtain his actual skull and was forced to donate a purchased replacement, which skulduggery (sorry) was subsequently unearthed (again) by The Chicago Tribune and reported on with great amusement by The New Yorker in 1999.

Trust... trust your fellow actors to support you; trust them to come through if you lay something heavy on them; trust yourself.

Avoid judging what is going down except in terms of whether it needs help (either by entering or cutting), what can best follow, or how you can support it imaginatively if your support is called for.

LISTEN

Do these apply? Well, it's not clear in the software development context that jokes shouldn't be allowed, unless the team includes a particularly bad punster. That caveat aside, these rules are wonderful for describing how to behave when you're part of a team, not an individual achiever.

In particular, the advice to trust, avoid judgement, provide support, and check impulses seems like it would go quite a long way to create a team where everyone would want to be working with the best part of their brain at all times.

What I like the best, though, is the advice to "save your fellow actor, don't worry about the piece." The example Todd Charon provides in his YouTube video is of a case where a troupe was performing on a slippery stage, and the first person went out and fell off a chair while pretending to screw in a lightbulb. Without hesitation, a second troupe member ran out on stage and did the exact same thing, turning a potentially embarrassing and show-stopping moment into "part of the show."

There are a lot of lessons for us in this vignette about a unified and supportive team stance in the face of adversity, and about how to communicate with stakeholders in a face-saving way at all times. But at the end of the day, what I like about this advice is that as you look over the course of your career, you really DON'T care if this project or that one succeeded or failed. Even my successful software projects have been rendered obsolete by the passage of time. I think fondly of the dBASE III purchase request generating system I wrote in 1985, for example. But what matters over time is the relationships you build with the people around you.

I'm pretty sure "focus on the present" is another rule of improv which holds very true for agile software development teams. But how nice that this art form shows how focusing on the present sets you up well for a life well lived.

About Me

Elena is a Principal Business Architect for ThoughtWorks, London. In this capacity, she focuses on transforming business architecture to better support digitally enabled retail clients. Prior to ThoughtWorks, Elena was a Program Manager and Chief Agilist for the Treasury Services vertical at JPMorgan Chase, followed by projects which measurably improved scalability and productivity in IT processes for the Corporate and Investment Bank (CIB) and the Consumer and Community Bank (CCB). In addition to business architecture, Elena’s areas of professional interest are value chain mapping, change management, and non-annoying IT productivity strategy and measurement tactics.