Tuesday, March 27, 2012

You have seen it before. After months of defining and redefining
the requirements, the business signs off. Development and testing occur. The software
is delivered and the response is often, “That is not what I asked for!” Each
side “lawyers up” and the battle rages on.

This is not a rare occurrence. Anyone who has done
development has at least one similar story to share. Why does this happen?

This is frequently the result of “contract development”. Developers
are tired of being blamed. They demand detailed requirements. They demand all
the details from the business before beginning development. By having all
the requirements defined, signed off and locked down, they reason, they can
deliver what was asked for and not end up in hot water.

The problem is developers have painted the business into a
corner. While the business may know in a broad sense what they want, they are
not sure of what is possible, let alone all the details. The business is fearful
of signing the requirements document. Given the hoops that one must jump
through to make a change, is it any wonder they have nightmares about what they
might have missed? Consequently, the requirements become a catch-all of
everything under the sun. The gathering takes longer than necessary. Even after
much time the requirements are vague, confusing and cumbersome. It is only when
the project manager says the project is behind schedule and over budget that
the requirements are signed.

This is all about avoiding blame instead of delivering
value. Not a good place to be. What are the chances of delivering business
value with such a start?

Many of the agile practices are put in place to prevent the
failure brought about by “contract development”. It is collaboration and tight
feedback loops that enable the developers and business to quickly and iteratively
discover what is needed.

What is needed is for the business to have “skin in the game”.
The business has to be as responsible for delivery as the developers. How do you
accomplish business accountability? This is primarily possible via the role of
the product owner. The product owner owns

determining what is in the product backlog

setting priorities for the items in the backlog

changing the items in the backlog or adjusting
the priorities each iteration

determining when an item in the backlog is done

Talk about “skin in the game”. The business, via the product
owner, plays a critical role in the success of the team. They are continually involved
in making key decisions and trade-offs. The product owner is aware that the budget
is fixed and of the need to deliver ASAP. After all, the ultimate business need is
that “it takes no time and costs no money”.

The product owner helps the team to focus on delivering as
many backlog items as possible for the dollars and time budgeted. The product
owner soon comes to realize that when he or she makes changes, they often come at the
expense of completing other items on the backlog. Thus, the product owner weighs changes
carefully and deliberately, yet the business makes changes when justified by
business value.

The wall that existed between development and the business soon
disappears. It is no longer two groups with arched backs and defensive postures.
Instead, the two organizations are working in close tandem attempting to
complete as much as possible under the current constraints. Little time is
wasted arguing about requirements or what is a defect. Everyone is focused on
the solution and maximizing what can be delivered. It is soon recognized by
everyone that during development, issues are uncovered and trade-offs must be
made. Given the team wants the best solution possible, there is little time
spent covering one’s backside or arguing about the original spec, project
schedules or budget. Everyone is focused on the best solution. Talk
about eliminating waste.

All on the team are accountable for the time and dollars
spent and the value delivered. It is the team taking accountability and credit for
the results. It is the recognition by the developers that not all requirements
are knowable to begin with. It is the recognition by the business that they
will change their minds and priorities as they see the solution unfold and that
the scope needs to be adjusted with the new reality. It is only when both sides
recognize that everyone on the team is accountable that the two sides become a
single team and high performing.

By the way, a funny thing happens to the project manager.
When the project manager sees the business involved with the team and proud of
what the team has accomplished, the project manager becomes less rigid about
the plan and begins to understand the difference between being “plan driven”
and “value driven”. They watch as everyone becomes more focused and excited
about the solution and less concerned about a plan that was built when the
least was known about what was possible.

It is all about shared focus and accountability, about
breaking down the walls built by self-preservation. The product owner is the
catalyst that brings the business and the developers together. The product
owner responsibilities ensure the business has as much “skin in the game” as
the development organization. The product owner brings two sides to the same
side. Contracts disappear and value emerges.

Bottom line: never skimp on the product owner. Do with
one less pair of developers, one less tester or business analyst. However, never ever
go without a product owner. Without a product owner, the business has no “skin
in the game”.

Tuesday, March 20, 2012

Of all the agile topics I have listened to, none seems to
generate as much passion and emotion as the topic of points. What is a point? How
is a point calculated? Can you compare points across teams?

A typical reaction to standardizing the definition of a point
is an emotional one. Many teams do practice relative sizing for point estimates, that
is, if feature X was 1 point, what is the estimate for feature Y? Usually
feature Y would be estimated as a factor of feature X. Typical point values would be 1, 2,
3, 5, 8, etc. (it is common to use a Fibonacci or geometric progression when estimating).

The approach I have taken with points goes against conventional thinking. Oftentimes, the initial reaction is that it is apostasy. However, once
cooler heads prevail, much of the resistance disappears.

To begin with, points are a means to estimate. They are not a measure of productivity. They are a
means to size the effort for a card by a pair on a team. A team will measure its ability to do
work in points and use the points it completed from the last sprint as a
baseline to plan its capacity for the next sprint. As each card is estimated in
points, it is tallied against the total points a team completed in the
previous sprint. The total points completed by the team in the previous sprint
are the team’s current capacity. Typically, a team will plan a few additional cards
beyond its capacity in case things go better than they did in the previous
sprint.

It is stated by some that points are an estimate of complexity. Most
insist that points are not tied to time. If you suggest that a point is tied to
a block of time, “them are fightin’ words!” Perhaps it is because time was used
with waterfall estimates and we hate to tie an agile estimate to a block of
time. I say it is because time was often used as a means to “whip teams”.
Consequently, we avoid the time-estimate experience by using points.

However, if you ask a team how many points they completed in
the last iteration they can tell you. If you ask how many pairs the team had
for the last iteration they can also tell you that. If you ask the team for the
length of the iteration you will get an answer. From there, figuring out the
time it takes a pair to do a point is a simple calculation.

Example: take a team composed of two pairs of developers
(assuming a right-sized team that also includes business analysts and automated
testers along with the Scrum Master and Product Owner) with an iteration two
weeks in length (9 business days for development, 1 day for show and tell,
sprint planning and the retrospective). Assume the team completed 18 points.
You can then perform the following calculation.

1 point = 9 days / (18 points / 2 pairs) = 1 day

That is, it took one pair one day
to complete one point. (Yes, the example is contrived to make the math simple,
but it works with any number of points completed by a team.)
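
The arithmetic generalizes to any team. Here is a minimal Python sketch of the calculation (the function name is mine, just for illustration):

```python
def days_per_point(dev_days, points_completed, pairs):
    """Days it takes one pair to complete one point.

    dev_days: development days in the iteration (9 in the example)
    points_completed: points the team finished last iteration (18)
    pairs: developer pairs on the team (2)
    """
    points_per_pair = points_completed / pairs
    return dev_days / points_per_pair

# The contrived example from above: one pair takes one day per point.
print(days_per_point(9, 18, 2))  # -> 1.0
```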

Once I realized that it was a myth that points were not tied
to time (time is a law of nature and the fourth dimension we all live in, even
on agile teams), I decided to embrace it. Not to use it as a whip or to intimidate developers, but to improve
the team’s ability to plan. I tried the following experiment.

I asked the team to set its relative size for a point. I
asked the team to recall a card that took one day to complete. With that card in mind, I asked the team to set the card’s relative size to one
point. With the team’s common understanding of a point, the team then played planning
poker. The other constraint used by the team was that we would only work on cards that were
estimated to be one, two or three points. If a card was five points or more, we
decided the card was not yet defined and would need to be further decomposed
before it was ready to be estimated. Thus, a one-point card equated to one day
of work for a pair, two points equated to two days of work and a three-point card
was estimated to take a pair three days. By keeping all cards small (one, two or three points), we improved the team’s ability to estimate via planning poker.

Note: the team was not asked to estimate based on time, but rather to normalize a point for the team (and across teams) by recalling recent cards that took about a day. There are no rules on setting relative size. A team or teams can set relative size to anything they choose. By coming up with a common definition across teams for the one-point relative size, the teams were able to keep the value of a point from drifting and to maintain some consistency in what a point meant for each team across all teams.

Taking this approach with over a dozen teams for a couple of years, I observed the following. If one point was the work that one
pair could do in a day’s time, during the two-week iteration it could be
calculated that a pair could complete 9 points. If the team had three pairs, the
calculation for the total team would be about 27 points (i.e., 3 pairs, times 9
days, times 1 point per pair per day). However, in practice, that is not what I observed.

Using the above assumptions, instead of the teams hitting
the points as calculated above, the teams would consistently come in at about
80% of their estimated points. So in the above example, instead of completing
27 points, the team would typically complete about 22 points. The 80% rule appeared
to apply to teams of two, three, four or five pairs (we did not have any one-pair
teams).

To introduce terms, the calculated points are called “ideal
points”. The 80% of ideal points are the “actual points” completed by the team.
We will use these terms again.
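
Expressed as a quick Python sketch (the 80% factor is the observation above; the function names are mine):

```python
def ideal_points(pairs, dev_days):
    """Calculated capacity: one point per pair per development day."""
    return pairs * dev_days

def actual_points(pairs, dev_days, efficiency=0.8):
    """Observed capacity: roughly 80% of ideal for established teams."""
    return round(ideal_points(pairs, dev_days) * efficiency)

# Three pairs, nine development days in the iteration.
print(ideal_points(3, 9))   # -> 27
print(actual_points(3, 9))  # -> 22, about 80% of ideal
```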

Now remember, a point is an estimate. It is not a measure of
an absolute quantity of work that a team can do. For example, two teams can estimate
a backlog of cards. Their estimates will likely be different. Each team takes
into account its skill level, the complexity of the environment and technology
and the difficulty of the features. For a given team, one point is that team’s
estimate of the work that one of its pairs can do in a day. This gives the
team complete control over its estimates and enables it to consider the
critical factors needed to complete the work. This is essential for a
self-directed team. A self-directed team owns its estimates. One team’s estimates can never be used by another team for its estimates.

Once I understood the relationship between an established
team’s “ideal points” and its “actual points”, I would use the same approach
for new teams. What I discovered was that a new team would typically take
between 3 to 5 iterations to hit its “actual points”. I was able to apply the
three-to-five-iteration ramp-up to full capacity (a team’s “actual points”) across a number
of new teams. It worked surprisingly well and proved to be very useful as a planning tool.
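
As a planning aid, the ramp-up can be penciled in ahead of time. The three-to-five-iteration window is what I observed; the linear shape in the sketch below is only an assumption for illustration:

```python
def rampup_plan(target_actual, iterations=4):
    """Hypothetical linear ramp to a new team's full "actual points".

    target_actual: the team's steady-state actual points (e.g., 22)
    iterations: iterations the ramp-up takes (3 to 5 observed)
    """
    return [round(target_actual * (i + 1) / iterations)
            for i in range(iterations)]

# Rough points to expect each iteration while a new team ramps up.
print(rampup_plan(22))  # climbs to 22 by the fourth iteration
```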

Furthermore, under crunch, I noticed that some established
team’s “actual points” might exceed 80% of its “ideal points”. However, it came
at a cost. Such teams would express weariness and did not believe they could sustain
such a pace for an extended period of time. Seeing a team’s “actual points” climb above 80% of its “ideal
points” became an early warning to me that the team was running hot and needed to plan fewer points for the next iteration. On the other hand, if a team was not hitting 80% of its ideal points, it would indicate to me that there were blockers in the system. I would then assist the team by identifying and removing those blockers, which enabled the team to hit its “actual points”.
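
That early-warning rule is easy to automate. A sketch of the check; the 5% tolerance band is illustrative, not a number I measured:

```python
def capacity_health(completed, pairs, dev_days, baseline=0.8, tol=0.05):
    """Compare completed points against ideal points (pairs * dev_days)."""
    ratio = completed / (pairs * dev_days)
    if ratio > baseline + tol:
        return "running hot - plan fewer points next iteration"
    if ratio < baseline - tol:
        return "below baseline - look for blockers in the system"
    return "healthy"

print(capacity_health(26, 3, 9))  # 26 of 27 ideal points: running hot
print(capacity_health(22, 3, 9))  # about 81% of ideal: healthy
print(capacity_health(15, 3, 9))  # about 56%: look for blockers
```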

If you have a development center composed of a number of teams,
then by standardizing on a relative size of one point equals one pair’s effort for one day,
it becomes easy to understand whether the teams are estimating correctly. It also becomes
easier to plan a new team’s ramp-up, and it enables one to recognize when a team
or teams are running too hot or perhaps have blockers that prevent them from
hitting their actual capacity.

Try it on your teams. I would like to hear about your
experiences. The teams I worked with became very comfortable with their
estimates once they standardized on the relative size of a point. They could
easily calculate their “ideal capacity” for a sprint and then would commit to their
“actual capacity”. The model was very sustainable.

Some might say, why use points at all, just use time or hours? I say no. The reason is that hours provide false precision. The agile estimating process uses discrete points such as one, two or three points for a reason. While it is true that any one card typically does not take exactly one, two or three days, in aggregate the estimates prove to be an accurate measure. By keeping each card’s estimate less precise (done in points, not hours), the net result is that the total points estimated for the sprint backlog prove to be a good measure of the team’s capacity to do work.

One other reason that hours don’t work: hours per point can vary for a team depending on the ratio between developer pairs and business analysts and automated testers, not to mention the Scrum Master and Product Owner. Don’t fall into the trap of using hours. It does not allow for the variability in team makeup, and it provides a false sense of precision that does not exist on any team or project, agile or not.

Oh, by the way, even if you standardize on the relative size
of one point equals a pair effort for one day, you still cannot compare teams.
There are too many other variables including skills, technology stacks, feature
complexity, environments etc. However, it does make it easier for a team to
know if their points are reasonable. It also keeps the number of points
provided by similar teams consistent in size. What it does not do is ensure
that the amount of actual work completed across teams is the same. Given the complexity of teams,
organizations, businesses and technologies, I do not know how that will ever be
possible. Nor is it necessary for teams to be productive and provide real business value.

Monday, March 12, 2012

Defects reflect a breakdown in a team's system, i.e., the way they work. The greater the number of defects, the more a team's system is in need of repair.

While each team member has one’s own personal reaction to defects (anger, disappointment, fear, frustration, indifference, etc.), none of those responses will help a team identify and address the root cause of a defect, that is, the breakdown in the way the team works.

High performing teams are cross-functional, yet each team member plays specific roles in the team's system. While teams collectively have ownership for delivering value to the customer, each role also has a primary focus. It is important that each team and its members understand the proper response to defects. If a team learns to respond properly, defects will grow increasingly rare.

To a team's customers, a defect creates a lack of confidence in the team's ability to deliver a product or products. The greater the number of defects, the greater the loss of trust. A team can have a great product road map and strive to climb its customers' value chain; however, if a team's quality is lacking, its road map will provide little value and have little chance of succeeding.

When a defect is opened the following must occur

validate - a team should attempt to replicate the defect in its environment when possible, if that is not possible, a team should analyze the data to determine if the defect is actually a defect

categorize - a team should understand the severity of the defect, an example severity definition may be

major - has significant negative business impact to which there is no known work around

minor - has negative business impact but a known work around exists and can be easily implemented

cosmetic - has little impact but is detectable

count - all defects must be counted, it is essential that a team count defects and view them in the larger context of trends, only then will a team know if they are properly addressing the breakdowns in their system

track and make visible - it is the business of every team member to know the state of its system (i.e., the way it works), defect trends are a key component to the wellness of its system
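
The count and track-and-make-visible steps need not be elaborate. A minimal sketch of a severity tally (the records shown are made up for illustration):

```python
from collections import Counter

# Each defect record carries the severity assigned during categorization.
defects = [
    {"id": 101, "severity": "minor"},
    {"id": 102, "severity": "major"},
    {"id": 103, "severity": "minor"},
    {"id": 104, "severity": "cosmetic"},
]

# Tally by severity so the team can watch the trend iteration over iteration.
counts = Counter(d["severity"] for d in defects)
print(counts["major"], counts["minor"], counts["cosmetic"])  # -> 1 2 1
```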

Across the board each team member needs to learn to respond as follows

Automated Regression Testers, look at the regression suite to determine if the team is missing a regression test, if so, add it, if not, determine how the defect passed through the regression tests and plug the hole

Information Developers (Documentation), look at the product information and determine if the information is wrong, misplaced or unclear, if so, address it

Infrastructure Engineers, look at the environment to determine if it is responsible for defects passing through, if so, what is the plan to fix the environment

Developers, look at the unit and acceptance tests to determine if they are missing a unit or acceptance test or tests, if so, add them, if not, determine how the defect passed through the test harness and fix it

Product Managers/Owners/Business Analysts, look at the acceptance criteria to determine if they are missing the proper acceptance criteria (happy and sad paths), if so, add them, if not, determine how the defect passed through the exit (acceptance) meeting and fix it

Leaders (Tech Leads, Scrum Masters, Managers, Directors and VPs), look at the system, determine what is missing or blocking the system from working properly, once the item or items have been identified, fix it

A team's product is a reflection of the team and the way it works. When a team releases its product to its customers, it is promising its customers that the product is ready to use. That the team has confidence that the product will add value and create a positive experience for the users. When a customer encounters a defect, the team has failed to deliver on its promise. The team has let its customer down. The customer has every right to walk away from the team and its product.

It is important for each team member to accept one's role in delivering great products. Each team member should come to work each day with the goal of creating great customer experiences. Yes, a team may have challenges; however, if the team members all agree to continuously take the actions outlined above, they will soon be able to deliver on their promise of great user experiences to their customers.

Thursday, March 1, 2012

The enterprise can be a dangerous place. Risk taking is rarely rewarded. Most attempt to stay under the radar.

It takes a special leader to bring about an agile transformation. The enemy of change is resistance. Resistance is fueled by complaints. Complaints come from those that hide behind the process. Their motivation is self-preservation. They are the pillars of the status quo.

Complaints attract attention. After all, not many understand agile. If so many are complaining, the complaints must be true. However, there is one group that is not complaining. It is the business. They are the beneficiaries of the working software produced by a high performing team. Working software that is delivered frequently and with quality. The business is elated, euphoric really. Who knew it was possible to deliver quality working software frequently, continuously, at a sustainable pace? They want to pinch themselves.

Where does the resistance come from? Often it comes from the run team. It also comes from the Project Management Office. Too frequently the source is senior leadership. However, they all are conflicted. The complaints are heard yet the support from the business is very real.

Middle management is the most threatened. They have learned to measure their worth by the size of their budgets and organizations. "If teams are both high performing and self-directing, what am I to do?" they ask.

It is a tricky dance for a transformation leader. One must learn to leverage the good will of the business to manage through the resistance to change. People won't easily give up what they know. They have been trained for years to believe in detailed project plans, large signed-off requirements documents, change control processes, high-level and low-level designs and detailed test plans. They've been trained to always have someone to blame (i.e., project managers, requirement leads, tech leads, test leads, etc.) when things go wrong. They call it holding people accountable. It really is code for finger pointing and redirecting blame when the project stumbles or fails. They throw others under the bus and sell themselves as strong leaders. It is an act of self-preservation. Anti-courage.

How many of your colleagues do you know that are willing to assume the risk required of a transformation leader? Not many. Are you? Why not? The role is often not welcomed. The change that comes is disruptive. People are threatened. It is hard to make friends, build relationships and become a "member of the club" with so many changes going on and so many resisting and complaining.

A transformation leader has inner drive. Passion for making things better. A passion stronger than the ambition of self promotion. The corporate culture does not often breed such a person.

Organizations that go through transformation realize benefits never thought possible. The tension between the business and the developers disappears. The team hits a sustainable pace and the resulting software is a game changer for the users. The business can't say enough good things about the team. It is the value of their work that motivates the team.

Transformation is not possible without courage. Not a faux courage, nor a staged courage. It is a courage born out of a passion to make things better. The drive for continuous improvement is stronger than the instinct of self-preservation. If courage is the engine of transformation, successfully delivering working software is the fuel.

Is your organization fortunate enough to have such a leader? Do you have the courage to support transformation or are you a pillar of the status quo?