The Kanban Story: What We Do and What We Don’t Do

by Pawel Brodzinski on November 30, 2009

The last two posts of the series (on the Kanban board and on setting up the whole thing) were a Kanban-implementation-in-a-pill kind of story. As you may guess, that's not all, since there are many practices Kanban doesn't prescribe which you can use and, most of the time, you do.

To show the big picture, here are two lists: practices we use and those we don't.

Things we do

• Cross-functional team.
As I mentioned at the very beginning, we have a cross-functional team, which is a great thing. Virtually every action connected with our product in any way is performed within the team.

• Co-location.
We sit together in one small room. That's so damn great. I wouldn't trade it for a private office even if you paid me.

• Roles.
We have old-school roles within the team. Developers who, guess what, develop. We have our commando guy who mixes the roles of QA, system administration and maintenance. He could work as our bodyguard too. My role is project and product management, which in our case could be called Product Owner as well. We also have a guy responsible for business development. Nothing fancy here. Everyone just knows what they're supposed to do.

• Measuring lead time.
When I initially wrote a draft for this post, this was on the "things we don't do" list. For some time it wasn't needed much, since with no production environment it wasn't crucial to know whether a specific feature would be ready in 2 weeks or in 16 days. Now, as we need more precise estimates, measuring lead time has emerged as a useful tool, so we started doing it.
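The mechanics behind this are simple enough to sketch. The post doesn't say how lead times are recorded, but a minimal version, assuming you note the date a sticky enters work and the date it hits Done (the feature names and dates below are invented for illustration), could look like this:

```python
from datetime import date
from statistics import mean, median

# Hypothetical record of sticky notes: when work started and when it was done.
cards = [
    {"feature": "login form", "started": date(2009, 11, 2), "done": date(2009, 11, 13)},
    {"feature": "csv export", "started": date(2009, 11, 5), "done": date(2009, 11, 23)},
    {"feature": "audit log",  "started": date(2009, 11, 9), "done": date(2009, 11, 25)},
]

# Lead time here = calendar days from start to done for each card.
lead_times = [(c["done"] - c["started"]).days for c in cards]

print("lead times:", lead_times)    # [11, 18, 16]
print("average:", mean(lead_times)) # 15
print("median:", median(lead_times))# 16
```

With a handful of completed cards like this, "will it be ready in 2 weeks or 16 days" stops being a gut call and becomes a lookup.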

• Continuous improvement.
Implementing Kanban was an improvement over the old methodology. Starting to measure lead times was another. We look for tools or frameworks that improve the way we develop our products in terms of speed, quality, code readability, etc. Nothing fancy here – we just try to make our lives a bit easier whenever we can.

• Unit testing.
We write and maintain unit tests. We don't have 100% coverage, though. I trust our developers to write unit tests wisely. Most of the time they also add a unit test covering the bug scenario as they fix bugs.
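The add-a-test-with-every-bugfix habit can be sketched in a few lines. Everything here is hypothetical (the `parse_version` function and its whitespace bug stand in for whatever the real bugs looked like), but the shape is the same: fix the bug, then pin it down with a test so it can't come back.

```python
import unittest

def parse_version(text):
    """Parse 'major.minor' into a tuple of ints.

    Hypothetical example: the buggy version crashed on surrounding
    whitespace; the fix strips the input first.
    """
    major, minor = text.strip().split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_plain_input(self):
        self.assertEqual(parse_version("2.1"), (2, 1))

    def test_whitespace_regression(self):
        # Regression test added together with the bug fix:
        # ' 2.1\n' used to raise ValueError before the fix.
        self.assertEqual(parse_version(" 2.1\n"), (2, 1))
```

Run with `python -m unittest` against the module; the regression test documents the bug as much as it guards against its return.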

• Continuous integration.
Oh, is there someone who doesn’t do that?

• Static code analysis.
There are coding standards we follow. They're checked during every build. After a couple of weeks the coding standards became second nature and basically invisible, so it's hard to even say it's a real practice.

Things we don’t do

• Iterations.
At least no formal iterations. We don't have sprints whatsoever. To bring some order we group some features and call them an "iteration," but that's nothing like scrumish iterations. I'll write a bit more about our pseudo-iterations soon, since they aren't something very standard or by-the-book.

• Stand-ups.
We have permanent sit-downs instead. Whenever something needs to be discussed, we just start the discussion without waiting for a morning stand-up. The no-meeting culture has worked surprisingly well so far.

• Formal retrospectives.
Same as in the previous point. We just launch "I don't like it, let's change it" kinds of chats whenever someone has some valuable input. You could call them informal retrospectives on call.

• Burndown charts.
We have no fixed scope within an iteration to burn, since we don't have real iterations. I occasionally draw burndowns for a specific feature (a single sticky note). I do it mainly to check how well my schedule simulation (estimates plus statistical analysis) works.
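The post doesn't spell out what "estimates plus statistical analysis" means in practice. One common approach that fits the description is a small Monte Carlo run: draw each feature's duration from its estimate range many times and read off percentiles. A sketch under those assumptions (the estimates and the triangular distribution are my choices, not the author's):

```python
import random

random.seed(7)  # reproducible illustration

# Hypothetical per-feature estimates in days: (optimistic, most likely, pessimistic).
features = [(2, 4, 8), (1, 2, 5), (3, 5, 10)]

def simulate_once(features):
    # Draw each feature's duration from a triangular distribution and sum
    # them, since features flow through the board one after another.
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in features)

runs = sorted(simulate_once(features) for _ in range(10_000))

# Read off percentiles: the date you can commit to with 50% vs 90% confidence.
print("50th percentile: %.1f days" % runs[len(runs) // 2])
print("90th percentile: %.1f days" % runs[int(len(runs) * 0.9)])
```

The gap between the 50th and 90th percentiles is a useful honesty check on any single-number estimate.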

• Code review.
I'm a big fan of code review. I tried to implement it twice in my teams and failed twice. As long as the developers don't want to do code reviews, I'm not going to force them. It just doesn't work that way.

• Pair programming.
This one I don't believe in. Neither do our developers.

• Test driven development.
We write unit tests. We write them after the code, not before. This is another practice I'm not a big fan of, and another that takes people who strongly believe in it to make it work.

If I forgot to mention something on either list, just let me know what I omitted and I shall provide an update.

True, but that’s true of any adoption. You’ve got to make the developers (and others in the team) want to adopt the technique, otherwise the adoption will fail. I believe this is an issue of leadership, as well as an issue of adopting techniques for the wrong reasons (i.e. pushing new techniques into an environment, instead of pulling them based on need and value).

Yes, if you want to succeed implementing a new practice, you need to let people believe, or make them believe, in it. Otherwise you'll fail. And yes, if you want to make them believe, it's all about leadership.

I'm surprised that your team is against code reviews. When we worked on our Definition of Done (http://bit.ly/lnap53) it was my teammates who added a point on peer code reviews, because they recognized that it is a great way of:
– learning about the code and understanding all aspects of the project,
– finding bugs and spotting things that will hurt our development later.
In short, they understood that having code reviews is beneficial mostly for them. Personally, I believe having peer code reviews in your Definition of Done is essential for producing high-quality code, and I'm really happy my teammates share this opinion.

I have a problem understanding the reasons why developers vote against it. Don't they like feedback on what they do?

Alas, I have no idea how to make someone want to have a code review if he/she doesn't feel like it. …have you tried bribing them? ;)

@Tomek – Actually, it has changed over time. We evolved, and the team has changed its views on code review. You didn't expect us to stay with what we had a year and a half ago, did you?

Anyway, the longer I think about it, the more I'm convinced that we did have some kind of code review. Since we had collective code ownership (it isn't on the list as it came as sort of a surprise for us), developers were often working on code they hadn't written themselves. So each time they were basically starting with a code review and refactoring.

This realization might be a reason to change the stance against code review.

In terms of convincing people of any practice, not only code reviews, I don't believe in extrinsic motivation. Since I wasn't able to convince the developers at that moment, and I definitely didn't want to take the power of deciding on engineering practices away from them, it was really a no-brainer.

The only option left was waiting until the developers changed their attitude, and they actually did. Maybe tricking them into doing some informal code reviews helped, but does it really matter?

@Tomek – I'd say the objections were pretty typical. People didn't want to "waste" time reviewing something which already works. They didn't expect to find real issues, except for some formatting glitches, and they were afraid of having extra tasks (code reviews) on top of their regular work, with the same deadlines.

I'd say that code review is one of those rather unintuitive practices – unless you try it and get it right, it's really hard to get people to jump on the bandwagon.

I also think the atmosphere in the team is crucial here, since as long as people treat code review as a potential attack on themselves, you don't even have a chance of getting it to work.