Multi-hatting is a fact of life in organizations of any size. As long as you remain aware of what hat you’re wearing at any particular moment, life is good, right? I’ve said as much myself.

When I talk about knowing what hat you’re wearing, I usually mean it in the sense of knowing whether you own the outcome or are simply a stakeholder in someone else’s outcome. Sadly, knowing who owns the outcome isn’t enough to do the job well. Different roles have different outlooks.

A typical multi-hat in many organizations, large and small, is engineering manager and product owner. Need a product owner for a team? Slap an engineering manager in there – we pay them a lot, they know the product and the team and have a little broader outlook than the troops. But what happens when they start managing a backlog, in particular writing epics?

In the course of managing backlog – writing stories and epics – I’ve bumped up against the “what is an epic” question a lot, especially when backtracking from story to epic. I’ll think about the story one way and infer the epic from it, create that epic, then later realize that it fits another epic that looks at the story entirely differently. An analysis of the epics in question split them into two very clear categories – stuff I write when my head is in the code and stuff I write when my head is in the backlog.

Let’s take a real-life example from the Dog-Food-Diet series. We want to remotely control multiple 120VAC lights and I wanted to start work on a single story called “turn AC outlet on/off”. It was not part of an epic. So I scanned the list of epics, didn’t see the right one and promptly wrote an epic labelled “Controllable power strip”. Typical engineer-think – leap to the solution and generalize. That epic describes a thing that we’re not intending to sell – a controllable power strip – rather than the intent we’re trying to accomplish within this product which is to turn lights on and off.

Fortunately, as I discovered later, there was already an epic in the backlog that neatly illustrated the point I’m trying to make here – “Hands-off control of light”. A totally different perspective on the work.

Epic I wrote when my head was in the code – Controllable power strip

Epic I wrote when my head was in the backlog – Hands-off control of light

What you see with the first of those epics is my wearing the hat without assuming the perspective. A “controllable power strip” isn’t an aggregation of business value, it’s a real-world thing that’s kind of like the component that we think we want to build to achieve the business goal of “turning a light on and off”.

The backlog is, first and foremost, an expression of business value. If the things in it don’t express business value then there’s no obvious sense to the order that you build them in and you’ve lost the point to the backlog. In other words, do your backlog items describe how you’re going to build it, or why you’re going to build it?

Note on tool use: If we’re building a controllable power strip to achieve this function it feels weird to not categorize this story as “controllable power strip”. JIRA offers two ways to capture the knowledge that “how” we’re going to turn lights on and off is through a controllable power strip that we’ll build into the system. You can use components, or tags.

I chose a tag for this particular example. We have an Android Things embedded device, an API, a web UI and the physical installation itself at the component level. If we had a team and a separate backlog for each of these, the power strip might be a component in that backlog. It doesn’t particularly matter as component and tag are simple attributes in JIRA, not structural features.

There ARE tools that use JIRA’s component field as a structural item. FeatureMap comes to mind. It’s a tool that translates back and forth between story maps and backlogs. I like the product but … it uses the component field to group stories into features. If you’re already using components for actual components then you’ll probably end up reorganizing things to work with it. Going back to the Dog-Food product, for instance, and its components, a story map with columns labelled “embedded”, “web UI”, “API” and “physical installation” makes no sense whatsoever.

I’m writing this post, and tagging it with “Empty Test Suite” all over the place, because Googling “Empty Test Suite” won’t tell you what I discovered. In fact, the StackOverflow method of debugging this led me on a couple of hours of bad adventure.

The setup:

Android Things developer preview 0.6

Raspberry Pi connected hardware

IntelliJ

JUnit tests using AndroidJUnitRunner

Almost all my tests (200+ at this point) are run on-device as they require hardware. After hours of fiddling with Gradle, I get my tests, 45 minutes worth, running on the device. Even hooked it up to the CI pipeline.

Then I went on a couple of hours worth of refactoring. Run CI again, and bang! Empty Test Suite.

That error message seems pretty clear. The test runner couldn’t find any tests to run. To me, this said that I’d broken my build configuration – either grabbing a bad dependency, picking the wrong test runner or otherwise hosing the build.gradle somehow. Setting up connected testing was just painful enough in those particular ways that this seemed almost inevitable.

Well, the error message wasn’t clear, or at least complete. What happened was that my Android Application was blowing up on a simple null pointer in the onCreate and crashing the process, which the test runner, bless its heart, interprets as an empty suite. If I had, first time out, looked at the logcat:

I would have seen that I’d written an easily fixed bit of crappe code, not a painful gradle misconfiguration. If the tests had been running locally, not connected, the crash would have reported to the console just like the erroneous “Empty Test Suite” message, and again, the issue would have been obvious.
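When a connected test run dies this way, the fastest check (assuming `adb` is on your PATH and the device is still attached) is to dump just the crash entries from logcat:

```shell
# Dump-and-exit (-d) only error-level AndroidRuntime entries, which is
# where uncaught exceptions like that onCreate NPE get logged.
adb logcat -d AndroidRuntime:E '*:S'
```

If an “Empty Test Suite” is really a process crash, the stack trace will be sitting right there.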

It’s standard practice these days to have a weird, bearded man in your startup. A guy you can cite when someone says something innocuous like “Everyone likes kittens!”, and you say “Well, except for Jason ….”. Everyone nods. Yeah, he makes the code magic, but what’s the deal with Jason and the pet ferret?

But do you really need Jason? Or put more precisely, is the value of a bearded man worth the price you pay to buy him? I’m going to make the case today that it isn’t. This flies in the face of the advice you’ll get from angels, VCs, accelerators and of course bearded men everywhere, so I’m tilting at a windmill that will continue to turn no matter what I say. Left to their own devices, that’s what bearded men tend to do.

A bearded man.

Let’s say you’re a non-technical founder – an MBA or Subject Matter Expert. You have a product idea. The standard route to MVP is that you recruit a technical co-founder, give him half the company and he makes it happen. I enjoy this state of affairs. As a bearded man, I need people like you to need people like me because in a just capitalist world, a guy like me who couldn’t sell fire to the Eskimos and doesn’t play well with others would starve.

To you, the non-technical cofounder, the benefit of having the bearded man on-board is that you can go deep on customer development and shallow on product development because, theoretically, your bearded man has that covered. You do what you like, and are good at, Jason does what he likes and is good at … your path to IPO is clear.

But consider this … the base salary for guys like Jason at jobs that he can do blindfolded is 150k-250k. The entire time he’s working for you, recruiters will be knocking on his LinkedIn reminding him of this. And he can land those gigs without trying. True story: I once showed up late for a job interview wearing torn jeans and a black Jack Daniels sweatshirt with the sleeves hacked off, rocking an epic unkempt biker beard and got the job. You have to overpay to get a bearded man and even if you do that’s no guarantee that you can keep him because it’s so easy and profitable to leave.

Consider also that really hard technical work comes in fits and starts at many raw startups. When the MVP ships and your critical path to survival is sales and bizdev, not product development, a huge chunk of your equity, cash and mindshare may be tied up doing non-critical-path make-work because “Jason doesn’t sell”. The more stereotypically technical Jason is, the uglier that period of time is going to be for you and the more bad decisions you’re going to make just to “keep Jason’s head in the game” or to serve his misconception that product development is the be-all, end-all.

And finally, think about how hard it really is to build the thing you want to build. I’ve talked to dozens of non-technical co-founders over the last decade or so, and 90% of them are pitching pure execution plays – no new science involved. Everything is work, but not everything is hard work.

You’re pitching yourself to investors as a scalable CEO – someone who can figure out when, and how much, to pay up for the things you need and when to find another way to get it done. If you start your venture by overpaying for a technical cofounder, you’re not a scalable CEO.

What’s that “other way”? That’s your problem. Remember, I’m the bearded man. I like things the way they are. But I have seen a few different approaches that work.

Be creative. One non-technical cofounder I know took his idea to undergrad CS classes, more than once, and had them work up versions for him. Is this textbook, ideal-world product development? No. Did he enjoy it? No. Did it work? Yes – he made it work. Remember your Lean Canvas? (You have done your Lean Canvas, right?) What’s your unfair advantage? If you have a way of making this happen without overpaying a bearded man, that’s an unfair advantage.

Find actual volunteers. Non-technical co-founders scam free work from friends and randos all the time. As a bearded man I find this annoying and, in another mood will tell you “Don’t be that guy”, but today I’m here to actually help you. So yes, sometimes you need to be that guy asking people to do stuff for you for free. You will be rejected a lot.

Find equity volunteers. Face it, paying people with equity is the equivalent of not paying them at all. That said, there are way more equity volunteers out there than you think. And there are reputable, proven contract houses that will work for equity if they believe in the founder and the business. The trick is in finding them and making them believe.

Pay up. Find a contractor at someplace like toptal and pay them to do the work. This is actually harder to manage than actual volunteers or equity volunteers. It’s also, in my experience, least likely to succeed. The more I think about it, the less I like this option. I take it back. Don’t do this.

And now, having wasted a lot of your time delivering you something you almost certainly didn’t want and can’t use, I will say what every true bearded man says when he does something like that:

A minimum viable spouse – plenty of room for improvement, but cheap and widely available

At the “surprise ending” of the Dog Food Diet series we ran headlong into a situation where the product became invaluable way before we’d finished the backlog. Without aiming at it, or even thinking about it, we’d run into the Minimum Viable Product.

When I train Scrum, I run into lots of excellent people who hate the concept of MVP. The idea of doing a “minimum” anything is repulsive to them. They tend to be the craftsmen among us, and their philosophy of work is that whatever they build, it should be the best. They’re the kind of people I want building my products for me. The “minimum” in minimum viable product is like nails on a chalkboard to them.

V = R/E

It’s useful in those cases to think of MVP differently – as Maximally Valuable Product. Value has a numerator and a denominator – revenue divided by effort: V = R/E. If you’re prioritizing well, there comes a time in the product’s lifecycle when the revenue from each subsequent incremental release starts to decline. If effort remains constant, the “value” of each release declines relative to the ones before it.
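To make the arithmetic concrete, here’s a quick sketch with invented numbers (none of these come from the actual project): effort per release held constant, revenue per release declining.

```python
# V = R/E with made-up numbers: constant effort, declining revenue.
# If you're prioritizing well, later releases earn less revenue,
# so value per release declines even though the work costs the same.
releases = [
    ("R1", 100, 20),  # (name, revenue, effort)
    ("R2", 60, 20),
    ("R3", 25, 20),
    ("R4", 10, 20),
]

values = [revenue / effort for _, revenue, effort in releases]
print(values)  # [5.0, 3.0, 1.25, 0.5] -- each release worth less than the last
```

The exact numbers don’t matter; the shape does. Once the curve flattens out, every additional release is costing you the same and paying you less.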

Graph of relative value delivered by sprint on the Dog Food Diet project

In The Dog Food Diet’s Sprint 7, Scrum product ownership pushed us off the edge of this cliff really abruptly. The team, even the Product Owner, didn’t see it coming. But in Sprint 7 the customer said, in pretty clear terms, “We’ve got this. We’re good. Stop helping.”.

The market told us what the minimum viable product was. By delivering just that and no more as quickly as we could, we built the maximally valuable product for the business.

I’ve been droning on about Scrum (or hiding in my cubicle at athena) and haven’t handed out any righteous startup-CEO wisdom in a while so, based on a Quora query, here’s some advice for all you business types as you leap from the walled garden of corporate life into the wild world of startups.

Once you leave the corporate world for startup land you quickly discover all the things that corporate did for you that you now have to do for yourself. Like “personal IT” i.e. the services you use and the devices you carry. Here, from careful observation of several startup CEOs, I offer some distilled best practices when it comes to handling your own IT.

Keep the Windows XP laptop from your last job. Startup CTOs love Windows update. And it’s okay if the thing crashes every couple of hours and only boots every other try, hard-reboot fixes anything.

Insist on leaving it just as it was so you can continue to use all the expensive Windows tools your old company paid for. That way you can keep trying to domain-login to a network you’re not even on anymore.

Whatever you do, your laptop and all the programs on it must be entirely different from the laptop and programs that everyone else in the company uses. This will ensure that your CTO never gets complacent.

Virus protection slows the machine down so turn it off whenever you need the machine to go faster. Remember whatever trouble you get into, your technical cofounder can get you out of it. After all, he’s a genius/ninja/rockstar. You said so yourself. Oh, and McAfee is the best – you can tell because their founder is a psycho.

Names are hard, and domain names are REALLY HARD. So you need a shitload of them. And you never know when your current startup is going belly-up so you should register all the company domain names in your personal GoDaddy account. All DNS registrars are the same. And keep adding domains to your personal account even after the CTO has setup a corporate account somewhere else. After all transferring domains is easy.

If you’re one of my former cofounders and recognize yourself here, pat yourself on the back, but realize that you are not alone. Every one of these best practices has been vetted by multiple co-founders. In some important ways, all startup CEOs are the same.

So I have this condition called gout. It’s manageable, as anyone can tell you, if you behave within very reasonable dietary guidelines. These very reasonable dietary guidelines include avoiding things like organ meat (liver, brains), which is pretty easy; broccoli (seriously?); and beer (forget it).

There’s also an unwritten, but very real, rule of living with gout which goes something like “don’t poke the beast”. In other words, because gout multiplies any inflammatory issue you might have, don’t do anything like “getting out of bed” that might inflame the joints.

So I start this new job, and as part of the initiation we go off to do a week in the woods team building. Really not my thing, and I’m paying very close attention to “not poking the beast”. By dinner time on the last night I’m feeling pretty good. I haven’t gotten fired yet, my team (GO PLOWS!) isn’t in last place, and the beast snores contentedly. And that’s where it all goes to shit.

They feed us a huge dinner at the restaurant on the top of the mountain where the views look like this:

Yes. That really is the view. We stomped all over the mountain all day and I’m beat and hungry. There’s a fabulous shrimp thing on offer. Garlic and oil. I can’t get enough, and go back for thirds.

In case you haven’t guessed by now, those very reasonable dietary guidelines include “don’t eat shrimp”. Within 24 hours my left big toe is bright red, swollen and ON FIRE. With every heartbeat the toe throbs as if I was hitting it with a hammer. And of course it’s my own damned fault.

Oh well, accidents happen. I knew the shellfish restriction but because I seldom run into a situation where shellfish is the best thing on offer I totally forgot about it. Like I said, it happens.

The usual treatment for this is to hit it with a decent dose of prednisone (steroid), slap ice on it, keep it off the ground for a week or so and once again, you’re good to go. Sadly, rest is not an option. In fact, this job seems, in a relentlessly upbeat way, to be determined to kill me before I even figure out where the men’s room is.

First, there’s the walking tour of the campus, sprawled scenically over a quarter of a mile of low-rise brick buildings. My cube is, of course, as far from the garage as you can get without being off-campus. And being a big company, the first few weeks is an endless march from one meeting room to another. While this is a laid-back place, full of dogs and such, I just can’t imagine that putting my foot up on the conference table with an ice bag on it will be well-received. So I sneak in ice-breaks between meetings and continue to poke the beast.

And after one day in the office, we have to fly to Austin for a week of walking around. Like I said, they’re trying to kill me.

So of course the toe doesn’t get better as fast as it usually does. In fact, after a couple of weeks, when we’re winding down the prednisone, the toe’s only a little better but because I’m continuing to poke the beast by walking like a duck all day both knees and the other big toe are starting to get involved. It’s clear that the usual script isn’t going to work here.

<Skip long sad story about communication and scheduling misfires within the office of my regular rheumatologist>

<Skip shorter, equally sad story about communication and scheduling misfires between my regular rheumatologist and the covering rheumatologist>

<Skip even shorter, equally sad story about communication and scheduling misfires within the office of the covering rheumatologist>

So we’re a month out from the start of this attack and it’s not fixed because I didn’t rest it enough, and I’m finally in to see a rheumatologist. She takes a history, takes a look at the right toe where the attack has now moved and proposes a two-prong strategy: lots more prednisone (which I agree with) and potentially some colchicine but only if a blood test shows good liver/kidney function. Cool.

It’s noon time on a Friday. As she puts in the lab order I ask innocently,

ME: Is that gonna get done in time to write the colchicine scrip today?

HER: Oh sure, I’ll write STAT on it and we’ll have the results by the end of the day today.

Somehow, I am not reassured. But I limp gamely across the hallway to the lab, literally 20 feet away. The order is right there and they’re waiting for me. What service! The guy looks at my order and has only one question for me:

HIM: Are you getting imaging done?

ME: Huh? No.

But my spidey-senses are tingling. This can’t simply be an idle inquiry from a nosey phlebotomist.

ME: But it says STAT there right? That means it’s going to be done right away, right?

HIM: Oh yeah, it says STAT right here.

As I leave the lab my spidey-sense has not calmed down one bit, so I ask one more time:

ME: STAT, right?

HIM: Right

Suffice it to say, one should always, always, always trust one’s spidey-sense. STAT written in that field, on that form, does not mean shit, because the tests weren’t run by end of day, no results for my rheumatologist and thus no colchicine for me. The fact that I wasn’t “going for imaging” put my sample in the “whenever” bin, STAT be damned.

So to put this in perspective, the strategy to kill off this gout attack that my rheumatologist cooked up wasn’t executed because two offices on the same Epic installation and located physically within 20 feet of each other couldn’t communicate the fact that this test needed to be run TODAY rather than next business day (i.e. three calendar days).

Even if the colchicine scrip is written on Monday, because of the way the prednisone scrip is written, it’ll be paired with 30mg of prednisone, rather than the 50mg it would have been paired with on Friday. A tragedy? No. Less effective? Probably. Completely unacceptable? You bet.

What does all this say about anything? Even when providers and staff are on the same system, nay, within the same physical office, working as a team, the most expensive health care in the world delivers a quality of service that would be unacceptable from a dry-cleaner or an auto mechanic. I work with teams for a living, and if two of them dropped the ball like this, there would be consequences. By contrast, I suspect that by the metrics Atrius Health collects, this episode will count as a huge success.

This slipup with the lab and rheumatology was only the last in a series of unpleasant interactions with the system (see skipped episodes above) that all illustrate the same point – providers within the same office, or across offices in the same EMR, are unable or unwilling to communicate such that what’s best for the patient actually happens. Which proves, as much as anything can, that what’s best for Harvard Vanguard/Atrius Health and what’s best for the patient are not the same thing.

Full disclosure: All this sadness takes place within Harvard Vanguard/Atrius Health and Epic. I now work for AthenaHealth. That said, I can’t say this wouldn’t happen at an AthenaNet provider.

This project, as it stands today, is clear proof of the axiom that beauty is in the eye of the beholder. As a Project Management Professional looking at this project I’d be horrified at the size of the remaining backlog. As a Business Analyst, I’d be appalled that critical features have fallen to the bottom of the backlog. As a Product Designer, I’d be miffed that we’ve done the absolute minimum design we could get away with. And as a Scrum Trainer and Coach I am embarrassed at some of the #scrumfail we’ve committed.

As a Scrum Product Owner, though, I’d be … content. To understand why, we have to look at one of the recurring impediments in this project – one that really blew up in Sprint 7.

We want to be ‘hardcore’ Scrum. The stricter you are about working the methodology, the faster you deliver value. That’s the theory, and amidst the various #scrumfails we’ve clung grimly to it. So we’ve kept our Definition of Done strict, which includes deployment to production. As a result, we left 15 points undone in Sprint 7 not because we couldn’t do the work, but because we couldn’t get into production to deploy it.

Our access to production has always been limited but in Sprint 7 it was nearly completely shut off. Why? Because the product worked so well that we’d come to rely on it. The value of the new features didn’t outweigh the cost of shutting down production for the time it took to deploy them.

In six sprints we’d gone from not having it at all, to completely relying on it.

Our burndown (see below) screams #scrumfail, but we’re completely reliant on the thing that got built. That’s success. We’ve proved the 80/20 rule of product management – 80% of the value is in 20% of the features. What’s left in the backlog is largely the 80% of the features that provide only 20% of the value.

As a team member building the product, I really want to build those features. As a Product Manager, I really want to be able to sell those features. But as a business, we really didn’t need those features. Who could have known?

The business can choose to do, or not do, what remains on the backlog. The relentless prioritization in Scrum has forced us to build first, only what is provably most valuable. Remember in our last post – Product Management Interlude – I looked ahead and guessed we’d be done somewhere around Sprint 10? Even then, it hadn’t yet struck me how close we were to an MVP for this product. Yet here we are, at the end of Sprint 7 with what is needed, and nothing more. That, to me, is beautiful.

Luke, I AM Your Father

All that remains to be done on this project, the real must-haves, is a small amount of cleanup/future-proofing, some market analysis, and a case presentation to management of the commercialization question.

So this constitutes a bit of a “surprise ending” to a series that was supposed to be about spinning up a kick-ass Scrum team and doing a project. Our Scrum board and burndowns are still full of #scrumfail, but when the product’s done, well … it’s done. So as the team backlog has already started filling up with ‘the next thing’, now seems like a good time to wind up this series, at least as far as following the action sprint to sprint goes.

Post Mortem

I intended this series to show Scrum in action, warts and all, on a real project. I didn’t know how it would turn out. There are a few things that I find notable about it.

My expectation was that we’d take a couple of sprints to get a baseline velocity then progress upward from there. But it didn’t work out that way. In the end, our velocity was all over the place.

I expected that, since we know our Scrum, we’d dive right in and put on a Scrum clinic. It turned out that it took us several sprints to start doing it right. The pull of the dark side, over-commitment particularly, is strong.

I never dreamed that we’d end up declaring victory this early with so many of the original stories drowning at the bottom of the backlog. Knowing that the methodology drives that phenomenon is one thing, but seeing it in action is quite another.

I spent a lot of time pointing out where 1-week sprints instead of 2-week probably would have helped us fix our #scrumfail faster. That said, my opinion of 1-week sprints remains unchanged. IMHO, 1-week sprints are just another way for project managers to steal your weekend.

As Dilbert once said (sort of) “Welcome to product management. Two drink minimum.” At this point, 6 sprints into the project our release burndown looks something like this:

Aaaaaaannnnnnyyyyyyyhooo ….

This release is going to happen, probably sometime around Sprint 10. There is no question that the product works and the functional release goals will be substantially met. There are stories that have appeared out of nowhere and gone to the top of the backlog, stories that seemed critical that have fallen to the bottom of the backlog and stories that have failed in execution and had to be reworked from scratch.

Given that this is an internal release there’s little angst that goes into “calling it done”. The more interesting questions revolve around what comes next. After this release we’ll have something with a positive ROI when sold to one customer, us. What about customers who aren’t “us”?

Up to this point we’ve put a small handful of PO stories on the backlog: generating a lean canvas for the product, maintaining an up-to-date bill of materials to give us an upper-limit unit cost, and writing some better-than-internal documentation. Now our PO needs to build a decision about whether or not to sell the product to people who aren’t “us”.

We’ve reached a commercialize-or-not decision for the product. The team, including the PO, desperately wants to commercialize. You want them to want this. “Purpose” is a key part of motivation (Autonomy, Mastery, Purpose) and if your team doesn’t feel they are serving a big, worthwhile purpose then they won’t perform well. But this means that if the team and PO just sit down and talk about whether or not to commercialize all you’ll hear is various flavors of “Hell yeah!”. We need metrics.

Because of the company’s unique (alright not-so-unique) position, “time to break-even” and “maximum exposure before break-even” put hard limits on the investment we can make in the product. A version of the product that takes too long to break even, or takes too much investment to launch at all would require the company to bring in a round of investment, which pushes the decision up a level from the PO/Management Team level where it lives now. Again, typical constraints.

Getting those metrics done well, and being hard-headed about them is top priority. What do we need to get those metrics? For this particular product:

Cost of labor to build this. This should fall out from a well-constructed product backlog and the velocity metrics we’ve already produced.

Legal and regulatory. There are expensive regulations that need to be met to commercialize this; we know what they are, we just need hard numbers and timelines for meeting them.

Cost of go-to-market. Given a product that meets regulatory and does things people will pay for, what does it cost us to get them to buy enough of it that we reach break-even without breaking our total investment ceiling?
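As a sketch of how these numbers might hang together, with every figure invented for illustration (the velocity, costs and margins below are not the product’s real numbers):

```python
# Back-of-the-envelope commercialization metrics. All numbers hypothetical.

# Cost of labor: falls out of the sized backlog and measured velocity.
remaining_points = 120          # sized commercial backlog
velocity = 50                   # points per sprint, from our metrics
cost_per_sprint = 30_000        # fully loaded team cost per sprint

labor_cost = (remaining_points / velocity) * cost_per_sprint

# Legal/regulatory and go-to-market: hard numbers we still have to chase down.
regulatory_cost = 40_000
go_to_market_cost = 50_000
total_investment = labor_cost + regulatory_cost + go_to_market_cost

# Time to break-even at a projected monthly gross margin.
monthly_margin = 15_000
months_to_break_even = total_investment / monthly_margin

# The company's hard limits: exceed either and the decision moves up a level.
MAX_EXPOSURE = 200_000
MAX_MONTHS = 18

go = total_investment <= MAX_EXPOSURE and months_to_break_even <= MAX_MONTHS
print(total_investment, round(months_to_break_even, 1), go)  # 162000.0 10.8 True
```

The point of being hard-headed about it: the go/no-go falls out of two comparisons, not of how badly the team wants to ship.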

Second priority is the potential return on this investment. We split this into two questions that will look familiar to fans of Lean Customer Development:

Does anyone besides us have the problem this product solves?

How many people have this problem and what will they pay to solve it?

Scrum Implications of Commercialization Decision

What does all this mean for the team? In large companies, except for the BOM and people-cost, all this “business stuff” would disappear into a black hole of decision-making populated by business trolls doing exclusively business stuff. But we’re a small company and we need to do all this with pretty much the same team that’s actually building the product.

We’ll use a typical structure that enables swarming. Our PO will structure the Lean Customer Development stories, and the team/PO will figure out how to get them done and run them off the same Sprint Backlog that we build the product from.

On top of all that, the team realizes that given the amount of work and sheer calendar time that needs to go into this decision, we’re very late getting started. Customer development interviews, in particular, eat calendar time. We’ve already started Sprint 6, with almost exclusively product development stories and won’t want to wait two weeks to start on commercialization stories. Fortunately, our kaizen from Sprint 5 was to implement the interrupt buffer pattern and we now have 10 points of interrupt we can draw from without angst.

So What Do We Do First?

There’s a bunch of stuff we need to do to build this commercialization decision. That stuff becomes stories on our product backlog. Our team is doing backlog refinement weekly, so our PO needs to write those stories, get them sized and prioritized, recognizing that he has only 10 points of interrupt for this sprint where he can jam in unplanned work.

Notes on sprint length (are you bored with sprint length yet?): this once again argues for one-week sprints over two-week sprints. Realizing that it will take at least a couple of days to create new PO stories, we could easily put off starting on them until next week in a one-week sprint system. In a two-week sprint system we’re forced to use the interrupt buffer to accommodate the urgency of these PO stories.

We’ve explicitly excluded physical aesthetics from this first release, but there are limits to everything and the mess of wires in production threatens to make trouble down the road. Add to that the fact that we need to take production apart to enable some of the control-function stories (wires that weren’t brought out to an accessible terminal) and it makes sense for this to be a goal.

Feedback-controlled function is the embedded system making an intelligent, sensor-based, not schedule-based, local decision to “do something”.

Sprint Planning: Our velocity is 54 and we take 54, including an interrupt buffer which we eyeball at 10 points.

Everyone’s anxious that the product backlog is thinning out and starting to be dominated by business-value research stories; bigger, more difficult technical stories; and stories that apply more strongly to a commercial version of the product than to the in-house version. A decision point is looming for our PO structure.

Sprint Review: We showed the radically reorganized installed hardware and a control function directed by internal sensor feedback. The sprint goal was met, and with room to spare.

Retrospective: Lots to talk about at this retro. Looking at the chart, we flat-lined for the first three days, got on the curve, then flat-lined again. We’re repeating a pattern of floating “above the curve” for most of the sprint and bringing it in, if we do, at the end of the second week. This could mean we’re taking in stories that are too big, but looking at the second half of the burndown chart we see that the stories we burned were all ones, twos and threes, so we’re good there.

This floating-above-the-curve phenomenon is a built-in tension in doing almost anything, by almost any methodology. Jeff Sutherland always compares burning down work to landing an airplane, which is a great visual analogy. The biggest problem with floating above the curve as a way of life is the corrosive effect it has on both the Scrum Master and Product Owner. Your SM and PO can have confidence that you’ll land the plane, but that confidence will be based on faith and history, not on the current state of the chart. There may be a happiness-metric benefit to taking a quick win at the start of the sprint every now and then.
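The eyeball test the team runs against the chart amounts to comparing remaining points each day to a straight-line glide path. Here’s a minimal sketch; the committed scope matches this sprint’s 54 points, but the daily remaining totals are invented to illustrate the flat-line-then-land pattern:

```python
# Sketch: flag days where remaining work floats above the ideal burndown line.
# Daily remaining-point totals below are made up for illustration.
committed = 54   # points taken at sprint planning
days = 10        # working days in a two-week sprint

# End-of-day remaining totals: flat early, a drop, flat again, land at the end.
remaining = [54, 54, 54, 43, 38, 38, 38, 30, 18, 0]

for day, left in enumerate(remaining, start=1):
    ideal = committed * (1 - day / days)   # straight-line "glide path"
    status = "above the curve" if left > ideal else "on/below the curve"
    print(f"day {day:2d}: {left:2d} pts remaining, ideal {ideal:4.1f} -> {status}")
```

With numbers like these, most days print “above the curve” and only the final burn brings the plane in, which is exactly the pattern the retro is worrying about.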

Despite our velocity, we also had three dangling stories that fell into two camps. One was a story that was complete except for deployment to production. Remember, like any ongoing business we’re limited in our access to production and in the size of the window during which we can bring it down.

The other camp was a pair of stories having to do with controlling a motor. The team fears them. They’re messy and risky, so no one wants to touch them. It was probably a mistake to put them at the end of the sprint backlog, where they’d be easier to punt. We’ll swarm at least one of them at the beginning of Sprint 7.

Jira note: We took in two stories about the same physical part – a three-pointer to jury-rig it to get another week out of it, and a five-pointer to replace it entirely. We ended up doing the five-pointer on day eight without needing the jury-rig, so we removed the three-pointer from the sprint. In the burndown chart that descoping is represented just like completed work, but Jira is smart enough not to count it toward our velocity.
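Assuming Jira’s behavior is as described, the accounting difference can be sketched in a few lines; the story sizes come from the example, the variable names are mine:

```python
# Descoping vs. completing: both pull points off the burndown chart,
# but only completed work counts toward velocity.
completed = [5]   # the five-pointer we actually took to done
descoped = [3]    # the three-pointer removed from the sprint, never done

burndown_drop = sum(completed) + sum(descoped)  # scope reduction on the chart
velocity_credit = sum(completed)                # only done work earns velocity

print(burndown_drop, velocity_credit)  # prints "8 5"
```

That gap between the chart and the velocity number is why a descoped story can make the burndown look great without making the team look faster.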

How did our kaizen from Sprint 5 work out, part 1? TDD – TDD was epic. The team loves it so much we started looking at BDD. Note that like most everything that seems radical and new, BDD is actually a decade old now.

When we took this kaizen, we estimated it at 5 points. Then I realized that it was stupid to size this kaizen. That’s what you see on day 5, where five points disappear from the scope of the sprint. My thinking here went as follows. TDD requires effort, but that effort for each story is part of the story itself, not a separately sized thing. And theoretically it should be speeding us up, so if we count velocity for doing TDD and also get a velocity bump on the individual stories, we’ll be double-counting one kaizen, which we’ll have to “give back” next sprint when it’s not the kaizen anymore.
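The double-counting argument can be made concrete with a toy calculation; the 5-point kaizen size is from the example, the story-work figure is invented for illustration:

```python
# If the TDD kaizen is sized and counted, velocity inflates by that size once.
story_work = 40    # points of real story work in the kaizen sprint (made up)
kaizen_size = 5    # the 5-point estimate we initially gave the TDD kaizen

inflated = story_work + kaizen_size  # velocity if the kaizen counts
honest = story_work                  # velocity next sprint, same real effort

print(inflated - honest)  # prints "5": the points we'd have to "give back"
```

The sprint after the kaizen, those 5 points vanish and the team appears to slow down even though it did exactly the same amount of story work.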

So put the kaizen on the backlog, but don’t size it because it doesn’t directly create business value.

How did our kaizen from Sprint 5 work out, part 2? The interrupt buffer came in handy almost immediately. On the fifth day of the sprint we had a production emergency, a three-point story, so we added it to the sprint backlog. But we want those three points to “come out of the interrupt buffer”, so we reduced the point estimate on the interrupt buffer to seven. That’s the spike that shows up on day 5: adding three points of scope for the new story, subtracting three points of scope from the interrupt buffer. And when we burned down those three points the same day, we ended up, as expected, on the same slope as if we’d just burned a three-pointer from the sprint backlog.
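The day-5 bookkeeping can be replayed step by step. This is a sketch: the buffer and emergency sizes come from the example, while the remaining-scope starting value is simplified to the full 54-point commitment:

```python
# Interrupt-buffer bookkeeping: a 10-point buffer placeholder on the
# sprint backlog absorbs a 3-point production emergency.
remaining = 54   # points remaining on the sprint backlog (simplified)
buffer = 10      # the buffer is itself a sized backlog item

emergency = 3
remaining += emergency   # the emergency story enters the sprint...
remaining -= emergency   # ...and the buffer item is re-estimated 10 -> 7
buffer -= emergency
assert remaining == 54   # net scope change is zero

remaining -= emergency   # the team burns the emergency the same day
print(remaining, buffer) # prints "51 7": same slope as any 3-point burn
```

The add and the re-estimate cancel, so the only lasting trace on the chart is the normal downward burn, plus a buffer that now has 7 points of capacity left.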

Sprint Goal: At least one control function installed and working within the physical element, and more sensors reporting actual values. Until now the embedded system has been reporting values to the web but not responding to commands from the web. Now we want to largely complete the reporting function and implement one instance of the control function.

Sprint Planning: Backlog was in good shape, our velocity was 40, we took 40.

Sprint Review: We showed one control function working within the embedded unit. The one we chose to show runs from a schedule within the device rather than being controlled from the web, because that was the highest value to the PO. The team using the system didn’t need manual control (which was also harder), just simple scheduled control.

Retrospective: Finally it all seemed to click. We under-committed our actual velocity and finished early, way early, then piled on the wins.

Note that we burned down to our last committed story on the 11th day of our 14-day sprint, then started taking stories off the backlog without taking that last one to done. The last story was temporarily blocked. The sprint goal was already crushed, and we knew we’d get that last one in, so we went to the backlog and ended up completing six more stories in addition to the last one from the commit.

How did our kaizen from Sprint 4 work out?

We more than doubled our velocity from Sprint 4. Our official kaizen was to only commit our actual velocity, which probably accounts for some of our acceleration. Some of it was luck/random variation. But we think that improving our testing game, which has been ongoing and included test infrastructure stories on the backlog, probably accounts for far more.

What is our kaizen for next sprint?

The team attributed most of our acceleration to more and better testing, done sooner. Stories of the kind that used to fail integration or, worse, crap out in production now sailed through both and went straight to done. So we chose to double down on that by trying TDD for all new code this sprint.

We also haven’t managed to eliminate interruptions – work added mid-sprint. No one ever does. The cure for this is the interrupt buffer. The average interrupt load for the first five sprints has been X points, so we’ll start with that. This is something we should have been doing all along.

Taking two kaizen is not ideal. This was driven as much by the team not wanting to wait two weeks to do something they’d already decided to do and knew how to do as by anything else. That makes it another sprint-length note, again in favor of one-week sprints.