This chapter describes seven properties set up by the best teams. Crystal Clear requires the first three. Better teams use the other four properties to get farther into the safety zone. All of the properties aside from osmotic communication apply to projects of all sizes.

I only recently awoke to the realization that top consultants trade notes about the properties of a project rather than the procedures followed. They inquire after the health of the project: Is there a mission statement and a project plan? Does the team deliver frequently? Are the sponsor and various expert users in close contact with the team?

Consequently, and in a departure from the way in which a methodology is usually described, I ask Crystal Clear teams to target key properties for the project. "Doing Crystal Clear" becomes achieving the properties rather than following procedures. Two motives drive this shift from procedures to properties:

The procedures may not produce the properties. Of the two, the properties are the more important.

Procedures other than the ones I choose may produce the properties for your particular team.

The Crystal family focuses on three properties, frequent delivery, close communication, and reflective improvement, because they should be found on all projects. Crystal Clear takes advantage of small team size and proximity to strengthen close communication into the more powerful osmotic communication. Aside from that one shift, experienced developers will notice that all the properties I outline in this chapter apply to every project, not just small-team projects.

By describing Crystal Clear as a set of properties, I hope to reach into the feeling of the project. Most methodology descriptions miss the critical feeling that separates a successful team from an unsuccessful one. The Crystal Clear team measures its condition by the team's mood and the communication patterns as much as by the rate of delivery. Naming the properties also provides the team with catch phrases to measure their situation by: "We haven't done any reflective improvement for a while." "Can we get more easy access to expert users?" The property names themselves help people diagnose and discuss ways to fix their current situation.

Property 1. Frequent Delivery

The single most important property of any project, large or small, agile or
not, is that of delivering running, tested code to real users every few months.
The advantages are so numerous that it is astonishing that any team doesn't
do it:

The sponsors get critical feedback on the rate of progress of the team.

Users get a chance to discover whether their original request was for
what they actually need and to get their discoveries fed back into
development.

Developers keep their focus, breaking deadlocks of indecision.

The team gets to debug their development and deployment processes and
gets a morale boost through accomplishments.

All of these advantages come from one single property: frequent delivery. In
my interviews, I have not seen any period longer than four months that still
offers this safety. Two months is safer. Teams deploying to the Web may deliver
weekly.

Have you delivered running, tested, and usable code at least twice to your
user community in the last six months?

Just what does "delivery" mean?

Sometimes it means that the software is deployed to the full set of users at
the end of each iteration for production use. This may be practical with
Web-deployed software or when the user group is relatively small.

When the users cannot accept software updates that often, the team finds
itself in a quandary. If they deliver the system frequently, the user community
will get annoyed with them. If they don't deliver frequently, they may miss
a real problem with integration or deployment. They will encounter that problem
when it is very late, that is, at the moment of deploying the system.

The best strategy I know of in this situation is to find a friendly user who
doesn't mind trying out the software, either as a courtesy or out of
curiosity. Deploy to that one workstation, for trial (not production)
usage. This allows the team to practice deployment and get useful feedback
from at least one user.

If you cannot find a friendly user to deliver to, at least perform a full
integration and test as though you were going to deliver. This leaves
deployment as the only step with a potentially undiscovered flaw.

The terms integration, iteration, user viewing, and
release get mixed together these days. They have different effects on
development and should be considered separately.

Frequent integration should be the norm, happening every hour, every day, or,
at the worst, every week. The better teams these days have continuously running
automated build-and-test scripts, so there is never more than 30 minutes from a
check-in until the automated test results are posted.
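Such a continuously running build-and-test loop can be quite small. The sketch below is illustrative only: the `git` checkout and the `./run_tests.sh` script are hypothetical placeholders for whatever version control and test commands your project actually uses.

```python
"""Minimal continuous build-and-test loop (illustrative sketch only)."""
import subprocess
import time
from typing import Optional

POLL_SECONDS = 300  # look for new check-ins every five minutes


def needs_build(last_built: Optional[str], head: str) -> bool:
    # Build whenever the head revision differs from the last one built.
    return head != last_built


def head_revision() -> str:
    # Commit id of the current checkout.
    out = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True)
    return out.stdout.strip()


def build_and_test() -> bool:
    # Substitute your project's real build and test commands here.
    return subprocess.run(["./run_tests.sh"]).returncode == 0


def watch() -> None:
    # Call watch() to start the loop. Post the results somewhere the
    # whole team can see them: a wall display, a web page, an email.
    last_built = None
    while True:
        subprocess.run(["git", "pull", "--ff-only"])
        head = head_revision()
        if needs_build(last_built, head):
            ok = build_and_test()
            print(f"build {head[:8]}: {'PASS' if ok else 'FAIL'}")
            last_built = head
        time.sleep(POLL_SECONDS)
```

The point of the short polling interval is the 30-minute bound mentioned above: a check-in never waits long before its test results are posted.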

Simply performing a system integration doesn't constitute an
iteration, since an integration is often performed after any single
person or subteam completes a fragment of a programming assignment. The term
iteration refers to the team completing a section of work, integrating
the system, reporting the outcome up the management chain, doing their periodic
reflective improvement (I wish), and, very importantly, getting emotional
closure on having completed the work. The closure following an iteration is
important because it sets up an emotional rhythm, something that is important to
us as human beings.

In principle, an iteration can be anywhere from an hour to three months long.
In practice, iterations usually run from two weeks to two months.

The end date of an iteration is usually considered immovable, a practice
called "time boxing." People encounter a natural temptation to extend
an iteration when the team falls behind. This has generally shown itself to be a
bad strategy, as it leads to longer and longer extensions to the iteration,
jeopardizing the schedule and demotivating the team. Many well-intentioned
managers damage a team by extending the iteration indefinitely, robbing the team
of the joy and celebration around completion.

A better strategy is to fix the end date and have the team deliver whatever
they have completed at the end of the time box. With this strategy, the team
learns what it can complete in that amount of time, useful feedback to the
project plan. It also supplies the team with an early victory.
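Because the end dates are immovable, they can be computed once, up front. A minimal sketch, with an illustrative start date and two-week iteration length:

```python
from datetime import date, timedelta


def time_box_ends(start: date, length: timedelta, count: int) -> list[date]:
    """End dates for a series of fixed-length iterations (time boxes).

    The dates are fixed once from the start date and never move; a team
    that falls behind delivers whatever is complete on the end date.
    """
    return [start + length * (i + 1) for i in range(count)]


# Six two-week iterations starting Monday, January 3, 2005:
ends = time_box_ends(date(2005, 1, 3), timedelta(weeks=2), 6)
print(ends[0], ends[-1])  # 2005-01-17 2005-03-28
```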

Fixed-length iterations allow the team to measure their speed of
movement: the project's velocity. Fixed-length iterations also give the
project the rhythm that people describe as the project's
"heartbeat."
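With iterations of equal length, velocity is simply the average amount of work completed per iteration. A minimal sketch; the use of story points as the unit of work is my assumption here, not something Crystal Clear prescribes:

```python
def velocity(completed_per_iteration: list[float]) -> float:
    """Average work completed per fixed-length iteration.

    Only meaningful when every iteration is the same length; otherwise
    the numbers are not comparable from one iteration to the next.
    """
    return sum(completed_per_iteration) / len(completed_per_iteration)


# Work completed (in story points, an assumed unit) over three iterations:
print(velocity([18, 22, 20]))  # 20.0
```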

Some people lock the requirements during an iteration or time box. This gives
the team peace of mind while they develop, assuring them that they will not
have to change direction and can complete at least something. I once
encountered a group trying out XP where the customer didn't want the trial
to succeed. This customer changed the requirements priorities every few days so
that after several iterations the team still had not managed to complete any one
user story. In such hostile environments, both the requirements locking and
the peace of mind it brings are critical. Requirements locking is rarely needed in
well-behaved environments.

The results of an iteration may or may not get released. Just how often the
software should be sent out to real users is a topic for the whole team,
including the sponsor, to deliberate. They may find it practical to deliver
after every iteration, they may deliver every few iterations, or they may match
deliveries to specific calendar dates.

Frequent delivery is about delivering the software to users, not
merely iterating. One nervous project team I visited had been iterating monthly
for almost a year but had not yet delivered any release. The people were getting
pretty nervous, because the customer hadn't seen what they had been
working on for the last year! This constitutes a violation of frequent
delivery.

If the team cannot deliver the system to the full user base every few months,
user viewings become all the more critical. The team needs to arrange for
users to visit the team and see the software in action, or at least one user to
install and test the software. Failure to hold these user viewings
correlates with eventual failure of the project, when the users finally,
and too late,
identify that the software does not meet their needs.

For the best effect, exercise both packaging and deployment. Install the
system in as close to a real situation as possible.