October 08, 2007

I was working on site for a client recently when I said, “Jim, I hate you.”

Jim’s head pops over the side of the cubicle next to me. He flashes me a quizzical look and asks, “Why is that?”

“Because I can’t write a single line of code now without writing a test to support it. It is all your fault.”

He smiles and replies, “Well, if you are blaming me for writing better code, I’m ok with that.”

Just in case you don’t know, Jim Erwin is our resident rock star (literally, he was a musician in a previous life) and the author of FoxUnit, a unit testing framework for Visual FoxPro. He is also the Software Development Practice Manager and the evangelist for Test Driven Development (TDD) at Visionpace. In addition to having a great sense of humor he is my ever patient coach on TDD.

I took my first TDD course from Jim and I have been incorporating it into my development practices ever since, but not without some trepidation. I try to keep myself open to new methods and not to discard an idea until I have tried it. I will usually adopt a new idea into my development and practice it for a while before I make up my mind on how useful it is. My first impression of TDD was that it really wasn’t that different from how we always did things. Other than that whole idea of writing a test that was bound to fail before you even started, how different could it be? We always test our code, right? I mean, it is not like we write some code and just throw it out there for our users. So how is this TDD thing so different?

It wasn’t until I developed the habit of testing my software with unit tests that I really found out. I have to admit that it wasn’t easy at first. There were all kinds of problems, most of which I created by not following good practices. Early on I wanted to test everything with FoxUnit. I tested forms and reports and anything else that was part of my application. But that is not what unit testing is all about.

As I progressed as a student of TDD, I began to realize that unit testing meant just that: you test units, not complete applications. As developers we are familiar with the concept of writing small pieces of code with discrete functionality. I’ve known that for years; it was something I was taught in college, and that was longer ago than I want to admit. I thought I knew and practiced it. However, it wasn’t until I started unit testing that I became truly aware of what it meant.

The more I tried to test my software, the more I started thinking of how I could break it down into testable units. There were times I wanted to test some object but would end up testing ten other things at the same time because of the built-in dependencies. That forced me to look at how I could reduce those dependencies.

As I continued to practice TDD I found myself thinking of how I was going to test the software before I wrote it. When I realized that, I also realized that I wasn’t really doing TDD. When you practice TDD, you write the test first and the code later. What I had been doing was writing the code first and trying to find a way to test it later. When I started writing the test first, I started finding ways to reduce the dependencies.
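To make the test-first rhythm concrete, here is a minimal sketch of what "write the test first" looks like. The post's actual tooling is FoxUnit for Visual FoxPro; Python's built-in unittest is used here only as a stand-in, and `parse_full_name` is an invented example function, not anything from the original project.

```python
import unittest

# In TDD this function does not exist yet: the tests below are written
# first and fail (red), then just enough code is written to pass (green).
def parse_full_name(full_name):
    """Split 'First Last' into a (first, last) tuple."""
    first, _, last = full_name.strip().partition(" ")
    return first, last

class ParseFullNameTests(unittest.TestCase):
    def test_splits_first_and_last(self):
        self.assertEqual(parse_full_name("Jim Erwin"), ("Jim", "Erwin"))

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(parse_full_name("  Jim Erwin "), ("Jim", "Erwin"))

# Run with: python -m unittest thisfile.py
```

Notice that thinking about the second test before writing the function is exactly what forces the "how will I test this?" question up front, which in turn pushes toward small units with few dependencies.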

Jim, I hate you because I can never write code the same again. You taught me to use TDD. You opened my eyes to a different way to write software. You forced me to see some of the imperfections in how I write code. My code is cleaner because of you, and I have the tests to prove it.

Jim is smiling, and he is ok with that.

…

If you are using FoxUnit or practicing TDD, I would love to hear about your experience. Send us an email or stop by FoxUnit.Org and let us know what your experience has been.

June 08, 2007

As we coach software development teams in Kansas City, we are often asked about the process of breaking User Stories down into development tasks (anything related to implementing and verifying a User Story). This is one of the areas teams struggle with when adopting an Agile process. It's actually something they often struggle with under almost any process, but most Agile approaches rely heavily on it for a number of reasons.

1) Iteration Planning

Tasks are a good last check for iteration planning purposes. Even though we can rely (to some extent) on past actual velocity to get a sense of what a team might be able to achieve in subsequent iterations, breaking stories down into development tasks/hours allows the team to leverage what they've learned so far during a given release in terms of some stories being potentially larger or smaller than previously thought. They can use that information to develop a better, more reliable/realistic iteration plan.

During planning, openly discussing development tasks gives all team members a view of what will take place while implementing the stories selected for the iteration. During that discussion they can help each other clarify and improve their development plan and discuss lesser known areas of the code, database, etc. to help other team members learn and become more productive in their development.

2) Iteration Burndown (how many hours are left to do within a given iteration)

Having tasks with hour estimates enables the team to discuss (during the Daily Stand Up meeting) why certain tasks might be taking longer than planned, why some tasks were overlooked when a story was initially discussed, and why some tasks weren't ultimately needed. With this information it becomes apparent when developers are struggling with a given task and may need help.

It also forces the team to think more thoroughly about the tasks needed to complete a story and aids in becoming better at tasking and task estimating during subsequent iteration planning meetings. It also allows the team to get an earlier sense of whether the goals and story points planned for a given iteration will be achieved, and to consider adjustments sooner.
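The burndown figure itself is nothing more than a running sum. A small sketch, with an entirely invented task list, shows the arithmetic behind what gets discussed at the stand-up:

```python
# Hypothetical task list for one story: (task, estimated hours, hours remaining).
tasks = [
    ("write acceptance tests", 4, 0),   # done
    ("build data layer",       8, 3),   # in progress
    ("wire up UI form",        6, 6),   # not started
]

estimated = sum(est for _, est, _ in tasks)
remaining = sum(rem for _, _, rem in tasks)

print(f"Planned: {estimated}h, remaining: {remaining}h")
# The iteration burndown is just this 'remaining' figure recorded each
# day; if it isn't falling roughly toward zero, the Daily Stand Up has
# something concrete to talk about.
```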

3) Accountability

Defining and estimating tasks makes team commitments to an iteration public knowledge and increases the sense of urgency about getting better at task definition and estimation. It also can help with maintaining focus on agreed-upon tasks and indirectly trims waste related to gold plating. It can encourage team members to seek assistance on tasks they might be struggling with when their 2-hour task still isn't done after a couple of days' effort and the team's velocity begins to suffer.

Developing for a story without tasks can lead to stories that don't get done, or don't get done as hoped. Without tasks, the team loses the ability to provide assistance (since nobody knows what you're doing), and overall iteration and ultimately release predictability suffers.

4) Shared and Individual Learning

Discussing, defining, estimating and tracking tasks allows the entire team to learn about the problem domain, especially when the domain or parts of it might be new to certain team members. It also helps all team members become better about planning the work needed for all stories and helps them to become better definers and estimators of tasks.

5) Tasking Encourages Better Design

Thinking through a plan of attack for implementing user stories and creating steps (a.k.a. tasks) to achieve it tends to create a higher level of focus and optimize overall productivity. It also facilitates design discussion, often resulting in better and more complete story implementation.

6) Forecasting Velocity

When you don't have the luxury of running an iteration to get an actual velocity but need to provide stakeholders with some sense of cost and schedule, you need to forecast the team's velocity. Using tasks is very effective for this: estimate team capacity, break stories down into tasks/hours until the capacity is filled, and add up the points for the stories you just tasked. You now have a forecasted velocity to provide a preliminary forecast of cost and schedule.
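The capacity-filling procedure above can be sketched in a few lines. All numbers here (team size, hours per day, story sizes) are invented purely for illustration:

```python
# Hypothetical capacity: a 2-week iteration, 4 developers,
# ~5 productive task-hours per person per day.
capacity_hours = 4 * 10 * 5  # 200 hours

# Stories as (story points, estimated task hours), in priority order.
stories = [(3, 60), (5, 90), (2, 40), (8, 120), (1, 20)]

filled = 0
forecast_velocity = 0
for points, hours in stories:
    if filled + hours > capacity_hours:
        break  # capacity is full; stop tasking
    filled += hours
    forecast_velocity += points

print(f"Forecasted velocity: {forecast_velocity} points "
      f"({filled}/{capacity_hours} hours tasked)")
```

With these made-up numbers the first three stories fit (190 of 200 hours), giving a forecasted velocity of 10 points per iteration to feed a preliminary cost and schedule forecast.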

7) Tasks Serve as Reminders

When you task, typically at the beginning of an iteration (during the planning meeting), you have the user's attention and can ask the questions you'll need answered to think through your plan of attack for a given story in terms of the development tasks that will be necessary. Even a few days into the actual iteration, you'll forget at least part of what you discussed with the user if you don't have recorded tasks, and you'll potentially have less access to the user to confirm or reconfirm tasks and/or stories.

8) Talk to the Dog

Having to talk out loud and/or in a public setting about tasks you'd need to complete a user story tends to create greater focus than just beginning to code. It typically creates better overall productivity and thoughtfulness in approaching the implementation of a story.

9) What if you hear, "I can't/won't task."?

If a team or team member simply says they won't task, that's more of a personal discussion. Asking them why they won't task is an obvious starting point, but it may ultimately speak more to ego, personality or an underlying resistance to change.

When a team or team member claims they can't task, you have more information to work with. A common stated reason is that they "just have to start coding" and don't really know the tasks until they start working on the story. One approach is to say, "Fine, then write down the tasks as you uncover them during coding and we'll discuss and learn from those to make you a better up-front tasker." You can also allow a "research" task (2-4 hours) for a developer to spend time looking at the code, database, etc. for the purpose of ultimately tasking a user story.

Above are just a few broad reasons why tasking is useful for software development, in particular for Agile processes.

June 04, 2007

Please join Martin Olson for a presentation on Agile Development. The presentation will be on June 26, 2007 at Visionpace in Independence, MO. It will be from 11:30-1:00.

Explore how agile software development can improve your bottom line with just-in-time delivery of "clean code that works". Learn why the mantras "inspect and adapt" and "release early and often" have led to a revolution in system development and delivery.

Martin Olson is a Project Manager at Visionpace with over fifteen years of industry experience. Martin has been working with agile development and the Visionpace team for five years. As a Certified Scrum Master, Martin assists Visionpace's clients in defining and steering their projects to a successful completion.

To reserve your seat for this FREE LUNCHEON and presentation, please contact Kelly at 816-350-7900 to RSVP no later than June 22. See you there!

March 07, 2007

At a recent Agile KC meeting in Kansas City, Missouri we met to discuss, among other things, topics that might interest software developers and managers alike for upcoming meetings. One of the topics that always comes up when such a list is discussed is testing. It's interesting to see how software developers increasingly are discussing ways to test software. Everyone (QA, developers, managers, customers, etc.) wants to know what "done" means.

I know we have seen increased interest from our consulting clients in Kansas City, where we coach software developers on agile development techniques and coach teams and managers on agile project management. Software developers are interested in defining "done" (just as the other stakeholders in the project are) through the use of not only unit tests, where they're able to demonstrate the integrity of the code at the unit level, but also acceptance or functional testing, where the focus is on defining "done" in the customer's eyes.

Acceptance tests, sometimes known as "conditions of satisfaction," define the requirements for software features, which are most often comprised of one or more user stories. They are defined from the perspective of the customer for a given business need, which is exactly where the emphasis should be in the first place. The combined set of tests, ideally maintained and run in an automated fashion to help speed up development, makes up the overall requirements for the system.
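One hedged way to picture a "condition of satisfaction" as an automated check follows. Everything here, the pricing rule, the function name and the numbers, is invented for illustration; in practice the condition comes verbatim from the customer:

```python
def order_total(prices, member=False):
    """Total an order; members get 10% off orders over $100.
    (Invented business rule, standing in for a real customer condition.)"""
    total = sum(prices)
    if member and total > 100:
        total *= 0.9
    return round(total, 2)

# Acceptance tests, phrased the way the customer stated them:
# "A member spending more than $100 pays 10% less."
assert order_total([60, 50], member=True) == 99.0
# "Non-members pay full price."
assert order_total([60, 50], member=False) == 110
# "The discount only applies over $100."
assert order_total([40, 50], member=True) == 90
```

Run automatically every iteration, a suite like this is the executable form of the system's requirements: the software is "done" exactly when all such checks pass.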

Having the requirements defined in advance of each iteration planning session is a highly effective way to keep the team pointed in the right direction and create a focus for software development efforts. The idea is that at the end of each iteration, working software is available to potentially ship because it works, or is "done," in that it has passed the tests defined at the beginning of the current iteration and all tests defined in previous iterations.

Knowing what targets to shoot for each iteration lets the team focus on what needs to be done, eliminates gaming other metrics in use and helps a team identify what adjustments they need to make to reach the goal of delivering running tested software every iteration in a just-in-time manner.

This emphasis not only ensures the software output matches the requirements, it results in leaner, more concise and targeted code that's developed with those conditions of satisfaction in mind. There's less of a tendency to get off track and build features that weren't requested, resulting in lower maintenance costs and a better team, in that they develop a shared understanding of the business domain and the software that supports it. It also helps with schedule and cost predictability, in that untested code represents an undefined amount of work (potential bugs) that, if left to deal with toward the end of a project, can create the whack-a-mole development that often accompanies the classic "stabilization" phase.

We have found that this "ultimate" metric is the best target to hang out at the end of every iteration. Even if it takes a number of iterations to establish the necessary rhythm to achieve the frequent and consistent delivery of running tested software, it's a goal worth tracking and building a team process around.

January 29, 2007

I was recently in a planning poker meeting, a step in the release planning phase of agile software development projects in which user stories are sized relative to each other. This is a critical step in estimating the cost and schedule of a project. In our process we typically call these meetings "sizing" rather than estimating, because we are specifically sizing the stories relative to one another; a separate step then actually determines the estimate in terms of hours, and therefore cost and schedule.

This was one of the most painful meetings I've attended in recent memory. Painful because of the communication (or lack of it) taking place during the meeting. While there was no shortage of experienced and talented developers and other stakeholders in the meeting, there was a significant shortage of meeting etiquette, which resulted not only in the typical thrashing that can plague such meetings but in interruptions, sidebar conversations and conversation threads that didn't pertain to the specific user stories or, in some cases, the project being sized.

We typically track the average time spent per user story across projects, when it's feasible to do so, in order to inspect and adapt and find ways to optimize the time we spend sizing each story. What we noticed at the end of this particular meeting was that over 80% of the time was spent discussing items either not specific to the story or, in some cases, far too specific, which was an indication for us that not enough time was spent in the most productive parts of sizing a story.

Based on this performance, we have decided to display simple meeting etiquette rules similar to those we've seen in other corporate meetings: focus on the participant who is speaking, limit interruptions unless they're necessary to understand the point at hand, and limit side conversations, again unless they're seen as necessary.

In the context of a user story sizing meeting we've also reminded ourselves of simple adjustments such as letting the customer read the story and then letting developers ask questions by taking turns. We'll ask those people not speaking to write down points or comments they feel they need to make. We hope these steps will go a long way towards focusing the conversation and reducing the needless amounts of static that we witnessed during this meeting and in general help optimize the time spent sizing stories across all of our projects.

While we acknowledge that it's impossible to completely eliminate interruptions, thrashing and even some sidebar conversations, especially when you have very bright and creative people in the room as we did for this meeting, we also feel strongly that if this isn't addressed it will result in resources not being spent wisely.

January 17, 2007

We talk a lot about our experiences in this column about our agile software development practice...so much so, that perhaps it sounds a little self-serving at times. After all, we're heavily invested in agile methodology. So here's a link to another developer's (Russ Nemhauser's) blog...a developer who joined an agile team at a Large Software Company Somewhere in the State of Washington. His background was in BUFD (big up-front design) and he therefore approached this Scrum project (in .NET) with a great deal of skepticism. Read about his conversion here. He also gives an example of test-driven development and talks about why he's now a believer...so much so that he's uncomfortable writing code before he writes the test(s)....and how (at least in this example) it was a time saver. (YMMV, of course.)

An interesting idea:

This doesn't solve the problem of some clients requiring large functional specification documents, but it does offer at least one potential change to the way they're written: the functional specification can be written AFTER the majority of functionality has been developed and delivered. This is a huge step toward an accurate specification and it also drastically reduces the amount of time it takes to write the document.

January 16, 2007

Visionpace has been working with a client, providing agile coaching, over the last few weeks. Like all of our clients, the organization is full of bright people, making a viable product, and struggling to balance new development projects against existing legacy code. In this instance, the situation has led to a few critical people wearing multiple hats and juggling a lot of different responsibilities.

Since these resources are in such demand for a lot of competing things, there always seems to be a bottleneck around them. As one might imagine, the responsibilities that involve human interaction (managing people or supporting customers) take a higher priority than those that are more technology derived (like testing).

Some of the common problems that we’ve seen in this situation include:

The team has spent a significant amount of time refining and re-defining what the scope of a story is during the iteration.

During the iteration planning the developers work with the customer proxies to define the stories and acceptance tests for them, but the tests are not always captured at this time.

The tests are sometimes too narrowly defined or too broadly defined to be of any use in validating the code.

There is often the desire to select stories to work on that have not been fully clarified. They are the highest priority, or are the next logical step, but during iteration planning some questions arise that need outside input. At these times, the notion is to work on what we know now and we’ll fill in the blanks before we run out of defined tasks. Usually the answer to the unknown things changes the ‘known’ items and leads to more questions. This cycle is repeated a few times before the dust settles.

Some of the smells associated with these problems are:

Constantly adding tasks during the iteration to account for missed features

Running out of tasks in the backlog mid-iteration because the features were misunderstood.

Generating a lot of code inventory during the iteration because it is waiting on testing (developers saying, ‘I think it’s done, but it needs to be tested.’)

Switching gears from implementing tasks to iteration planning (user story discussion and task breakdown) mid-iteration. This isn’t always bad. It just shouldn’t be the norm.

So what’s the proposed solution to these smells you ask? Inspect and adapt. We’ve been including the expert user in the iteration planning meetings and asking them to review the user story with the developers. This causes the developer to get into discussions with the customer proxy about the pros and cons of what is or isn’t included in the story. It causes overall confusion as to what needs to be tasked out, and can lead to a feeling of uncertainty about what needs to be implemented. The focus of the iteration planning moves to architecture possibilities and eventually a ‘code speak’ between the developers about where we should go. To avoid this we’re going to try something new on the project that we’ve used successfully in the past with other Visionpace clients.

We’ll break the iteration down into two steps: iteration prep and iteration planning. The goal is to define what the user story is (and isn’t) in the iteration prep. The output from the iteration prep is a set of written acceptance tests and some low fidelity models of forms, reports, etc. for the user story. The tests are then used in the tasking portion of the iteration planning meeting to refine the scope of the conversations. The features of the user story are described to the development team via the list of acceptance tests, and the low fidelity mock ups are used to explain the tests. (Say it in tests!) Another benefit of this approach is that the acceptance tests are refined in the iteration planning, and when the developers feel that they have implemented the feature, they have a series of tests to confirm their belief. Finally, the tests from the iteration prep are either added to a manual test script or incorporated into an automated testing tool.

The takeaway for all of this is that the roles and responsibilities are leveraged in this situation to minimize chaos. The customer (or proxy), the scrum master, and other appropriate parties work to define what the user story is without worrying about underlying architecture or developer centric details. This definition is delivered to the developers in the form of low fidelity screen mock ups and documented acceptance tests, so they can focus on defining what they are being asked to implement.

December 11, 2006

As you've worked on or with teams making the change to more of an Agile Software Development approach, you've probably heard comments along the lines of, "Hey, this is just assigning a name to what we've already been doing." -or- "These are all just common sense concepts."

As we coach teams on adopting Agile principles, we try to respond to comments like this by confirming that yes, Agile principles mostly represent filtering out activities that don't seem to be useful (at least not in every situation) and doing more of the remaining activities most of the time. These activities (or Agile principles) tend to be the items that people list as being useful for delivering working and tested code on a frequent, consistent basis. They also tend to be the elements people list as existing on past teams they viewed as successful.

We encourage teams to not get bogged down in calling the process Agile (or not) but to just focus on identifying and alleviating their software development pains by applying principles that may or may not be a traditional part of one or more Agile methodologies. It goes without saying that every team and situation is different, so one way to view Agile principles is that they provide a framework of useful concepts that you can introduce (gradually in some cases) to attempt to fix a broken process. Some of them will be common sense (depending on which team member is viewing them), some fill in holes in a current process that isn't working and others add "just enough" structure to provide useful metrics and oversight for management and all stakeholders.

So on the topic of being or becoming Agile, the debate shouldn't focus on "installing yet another methodology". It should focus more on identifying and admitting process pains and deciding which principles (and how deeply you employ them) will be useful to begin addressing the pain.

November 27, 2006

As the holidays approach and the cold weather begins moving into Kansas City, it's a good time to reflect on our efforts and goals in the areas of custom software development and agile process coaching.

In the area of User Story Points estimation we made one seemingly subtle change in that we refer to the activity as "Sizing" a User Story rather than "Estimating". For us, it seemed to further focus the discussion on the size of a given User Story, especially when triangulating the story relative to other already-sized stories. This has been useful in a variety of ways, including helping move people past struggling to understand the concept of a story point.

We look at discussing and sizing a story in terms of the elements common to most stories (at least in our software development), which tend to fall along the classic three layers: UI, business and data. Traditionally, teams have relied mostly on free-form discussion and developer (sizer) intuition to determine point values and during triangulation. While we agree that expert developer opinion and intuition is highly valuable, sessions can sometimes be prone to thrashing (excess discussion), fatigue and personality dominance.

When we conduct sizing sessions (using the Planning Poker approach), we read a User Story card and then the customer answers questions from sizers (developers) until there are no more questions. This is fairly standard in most Planning Poker approaches. But instead of sizers flipping one card to reveal their overall size opinions, they use three cards: one for the UI layer, one for the business layer and one for the data layer. Sizers are already thinking about the complexity, and therefore the amount of work, involved in implementing a user story, and they usually do so in terms of design, testing and development along the lines of the three classic layers. So we have them assign layer point values (using the same point scale they would for overall story points) to generate discussion about differing opinions and to confirm assumptions, even when their size opinions are similar the first time they flip their cards.

In some cases, we'll even have the sizers shout out the objects (Forms, Reports, Classes, Controls, Tables, Stored Procs, Views, etc.) they had in mind across the layers and record those on the wall, partly as reminders of effort we already accounted for in sizing other stories. The resulting layer size opinions and object counts help with triangulation toward an overall story size. This is useful both in the current sizing session and for future triangulation as new stories are uncovered during subsequent iterations.

The final consideration is developer (sizer) intuition. Even with object counts and layer assumptions in place, if the group's gut feeling is that similar stories shouldn't share the same overall point assignment, because design, acceptance testing, etc. make one story belong in another point column, then the story gets moved.
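The post doesn't prescribe any arithmetic for combining the three-card flips, so the sketch below (with invented sizers, layers and numbers) only illustrates the mechanic: a wide spread within a layer shows exactly where opinions differ and where the discussion should go first.

```python
# Hypothetical card flips by three sizers, one card per layer,
# on a typical planning-poker point scale (1, 2, 3, 5, 8, 13).
flips = {
    "Ann": {"ui": 2, "business": 5, "data": 3},
    "Bob": {"ui": 2, "business": 8, "data": 3},
    "Cal": {"ui": 3, "business": 5, "data": 3},
}

# Per-layer spread between the highest and lowest card flags
# where the sizers disagree and assumptions need discussing.
spreads = {}
for layer in ("ui", "business", "data"):
    votes = [sizer[layer] for sizer in flips.values()]
    spreads[layer] = max(votes) - min(votes)
    marker = "  <- discuss" if spreads[layer] > 1 else ""
    print(f"{layer:>8}: {sorted(votes)}{marker}")
```

Here the business layer (spread of 3 points) is where the conversation would focus, while the data layer's unanimous flip confirms shared assumptions without further discussion.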

Doing the above adds some time to the sizing session, but it also saves time by reducing thrashing and directing a portion of the discussion along the lines of the story layers. The net result is that no more time is taken overall, and there is typically a higher degree of confidence in story sizes.

November 06, 2006

Visionpace recently conducted another ScrumMaster Certification course, and a few of our associates participated as well as students from other organizations. (If you’re not familiar with the course, check http://www.visionpace.com/developereducation.html.) The Visionpace folks have been engaged in agile development (XP and Scrum) as developers for anywhere from one to three years. This training allowed them to consider agile development from the Scrum Master perspective. As such, it generated a lot of good conversation over some recent lunches.

One such topic, which I think is universal to all learning, is that one can’t go to a training session and then expect to be fully proficient at the end of the training. Nor can one go to training, never use the skills presented, and expect to retain them. In order to become proficient in a new skill set (be it a new language, new technology, new art form, or new project management approach) one has to continuously use, hone and train with that skill set.

This was the topic of conversation recently. It seems that a lot of people expect that if they send a person to a class and/or have them read a book, that person will then be an expert in the topic overnight. (Does this sound familiar? “Jetson, we’ve been hearing a lot about agile development lately and figure if it can work at Chrysler it can work at ACME Sprockets. Attend this three day seminar next week and be ready to tell the Board what we need to do when you get back.”) It’s not unlike your boss telling you to go home over the weekend, read a white paper about swimming and spend a day or two at the pool swimming laps because you’re going to be the captain of the new company swim team. In order to be proficient at anything, you have to train. You have to condition yourself for the skill and continually build on it.

At Visionpace we strongly believe in the phrase ‘inspect and adapt’ as part of this continual conditioning. Collectively and individually we look at our projects and the associated processes to see what we’re doing well and what we need to address. We have regular reviews and sessions to help each other become more productive team members. This is good because it allows us to review and reinforce our actions and processes. Over time we find the results continually improve and, as one would expect, proficiency increases. A simple concept that is often overlooked in organizations.