I’m a recruiter. But I don’t judge my candidates based on their length of employment on each company they worked with. Candidates seem like job hoppers if they frequently change jobs, but I see them beyond being job hoppers. Each employment have their own stories. Each candidates have their own passion and strength. You will never know what will happen on your job until you’re in it every day. I’m happy to hire a candidate as long as he performed excellently in interview. I believe each new joiner will bring a new change in company. I am Gen-Y and I hate traditional recruitment mind-set.

A gentleman replied:

Job hoppers are a problem for hiring managers and teams – they are not team oriented and tread a ‘greedy’ path of ‘growing bucks’ instead of steady state and investing in a Career. DO not agree at all – I hate this Gen Y recruitment mindset.

In this post I would like to counter this huge generalisation with one or two of my own, and hopefully we can land somewhere in the middle, where the reality lies.

Great talent often “job hops” because of the sheer number of companies with a culture of mediocrity, and the lack of companies with a culture of excellence.

High performers get bored because their talent is nowhere near fully utilised in the majority of organisations in which they are likely to end up.

They get frustrated because their ideas are continually ignored.

They get bemused by the baffling decisions made by senior management without involving the folks who will be affected most by the decisions.

They get tired of management constantly harping on about their teams needing to be “more efficient”, or “delivering what they committed to”, or being “lazy”.

What of the folks who don’t “job hop”?

Often those who stay with companies for years and years are folks who are so comfortable they should be coming to the office wearing slippers.

They see no need to stretch their capabilities. They will never challenge the status quo. They will dutifully “do their job” every day, without ever really learning new techniques or improving their approach. They lack ambition.

The #NoEstimates conversation is largely about estimating nowadays rather than NOT estimating.

Estimating, but in a probabilistic way. People often refer to this type of estimating as forecasting. Using cycle time. Throughput. Variance. Little’s Law. Monte Carlo.

All famously good stuff.
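The forecasting techniques above can be illustrated with a tiny Monte Carlo simulation over historical throughput. This is a minimal sketch, not a recommended tool: the throughput history and backlog size are made-up numbers, and a real forecast would draw on your own tracking data.

```python
import random

# Hypothetical weekly throughput history (stories finished per week).
# In practice, pull these numbers from your own delivery data.
throughput_history = [3, 5, 2, 6, 4, 5, 3, 4]

def forecast_weeks(backlog_size, history, trials=10_000):
    """Monte Carlo: resample past throughput to forecast weeks to finish."""
    results = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # sample a past week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report 50th and 85th percentile outcomes, not a single number.
    return results[trials // 2], results[int(trials * 0.85)]

p50, p85 = forecast_weeks(backlog_size=30, history=throughput_history)
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```

The point of the percentile pair is that the forecast is a probability distribution, not a date: you communicate a range with a confidence level rather than a deterministic commitment.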

But I don’t want people thinking that’s all there is to the conversation. Many folks have interpreted it that way.

For me, larger questions remain. For example, is it possible, in certain situations, to deliver value to the customer at a rate which negates the need for doing any estimating at all, both up front and ongoing? Quick enough that they do not need to make any decisions or commitments based on anticipated delivery, only what was actually delivered?

Beyond whether this is possible or not in certain contexts, why might it actually be important or desirable to be in this state of not needing estimates? I can get away with not eating apples, but is it actually useful for me to not eat apples?

Well, the fact that estimates are usually needed implies that decisions and commitments of some form are made based on them. This is a common argument for why estimating is unavoidable when working with customers in uncertain domains.

However, often the knock-on effects of an initially inaccurate estimate are damaging financially or culturally. So I can imagine, in certain situations, it might be possible, and desirable, for the customer to ask for delivery of tiny working increments which can provide value for them right away and, explicitly, no estimates are asked for because doing so would create potentially irreversible knock-on effects. Perhaps losing another customer’s trust by not meeting your “commitment” to them. Perhaps having to trash another project for which you had a team lined up to work on if things “went to schedule”.

I can imagine a few reasons why we might want to enter a working relationship in which we explicitly value the rapid delivery of added value over the anticipated delivery of value at some future point. Not to mention the trusted working relationship side of things. “Customer collaboration over contract negotiation”.

These are the broader questions I’m interested in. We get it, we can forecast with data to avoid deterministic estimation rituals and provide more solid, transparent estimates of when we will be done, or what will be done by when.

But can #NoEstimates thinking actually take us further? Into whole new ways of working with our stakeholders and customers?

This is a concept I devised a couple of years ago, and it seems there is a new #NoEstimates audience that would like to know more about it.

A Slicing Heuristic is essentially:

An explicit policy that describes how to "slice" work Just-In-Time to help us create consistency, a shared language for work and better predictability.

Crucially, the heuristic also describes success criteria to ensure it is achieving the level of predictability we require.

The Slicing Heuristic is intended to replace deterministic estimation rituals by incorporating empirical measurement of actual cycle times for the various types of work in your software delivery lifecycle. It is most effective when used for all levels of work, but can certainly be used for individual work types. For a team dabbling in #NoEstimates, a User Story heuristic can be an extremely effective way of providing empirical forecasts without the need for estimating how long individual stories will take.

However, if you are able to incorporate this concept from the portfolio level down, the idea is that you define each work type (e.g. Program, Project, Feature, User Story, etc.) along with a Slicing Heuristic, which forms part of that work type’s Definition of Ready.

For example,

"A feature ready to be worked on must consist of no more than 4 groomed user stories"

or

“A user story ready to be worked on must have only one acceptance test”.

The success criteria will describe the appropriate level of granularity for the work type. For example, you might want user stories to take no more than 3 days, and features no more than 2 weeks.

Here is the really important part. The idea is not to slice work until you estimate it will take that long. You never explicitly estimate the work using the Slicing Heuristic. Instead, as the work gets completed across the various work types you use the heuristic(s) to measure the actual cycle times, and then inspect and adapt the heuristic(s) if required.

At the user story level, I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less. However, there are alternatives. Instead of acceptance tests you could use e.g. number of tasks:

"A user story must have no more than 6 tasks".
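This measure-and-inspect loop can be sketched in a few lines of code. The completed-work log and the target cycle times below are hypothetical, made-up values; the shape of the check is what matters.

```python
from datetime import date
from statistics import mean

# Hypothetical completed-work log: (work type, start date, done date).
completed = [
    ("story",   date(2024, 3, 4), date(2024, 3, 6)),
    ("story",   date(2024, 3, 5), date(2024, 3, 8)),
    ("story",   date(2024, 3, 7), date(2024, 3, 12)),
    ("feature", date(2024, 3, 4), date(2024, 3, 15)),
]

# Success criteria from the slicing heuristics (max days per work type),
# e.g. stories in 3 days or less, features in 2 weeks or less.
targets = {"story": 3, "feature": 14}

def review_heuristics(completed, targets):
    """Compare average actual cycle time per work type against its target."""
    verdicts = {}
    for work_type, target in targets.items():
        times = [(done - start).days
                 for t, start, done in completed if t == work_type]
        if times:
            avg = mean(times)
            verdicts[work_type] = (avg, avg <= target)
    return verdicts

for work_type, (avg, ok) in review_heuristics(completed, targets).items():
    status = "on target" if ok else "adapt the heuristic"
    print(f"{work_type}: avg cycle time {avg:.1f} days -> {status}")
```

Notice there is no estimation step anywhere: the heuristic slices the work, the data measures it, and the retrospective adapts the heuristic when the measurements drift past the success criteria.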

Here is an example Slicing Heuristic scenario for a Scrum team using the feature and user story heuristics described above:

Product Owner prioritises a feature that she wants worked on in the next Sprint

PO slices feature into user stories

If feature contains more than 4 stories, it is sliced into 2 or more features

PO keeps slicing until she has features consisting of no more than 4 user stories; they are now ready to be presented to the team

Note: Unless this is the very first feature the team is developing, the PO now has an estimate of how long the feature(s) will take, based on historical cycle time data for the feature work type; no need to ask the team how long it will take

In Sprint Planning, team creates acceptance tests for each user story

If there is more than 1 acceptance test, story is sliced into 2 or more stories

Team keeps slicing until all stories consist of only one acceptance test

PO now has an even more reliable forecast of when the feature(s) will be delivered because she can now use the user story cycle time data in conjunction with the feature data

Team delivers each story, and records its cycle time in a control chart

If a story is taking longer than 3 days, it is flagged for conversation in Daily Standup

Multiple outliers are a sign that the heuristic should be adapted in the Sprint Retrospective

When the feature is delivered, its cycle time is measured also

Again, if features are taking longer than is acceptable for the heuristic, the heuristic should be adapted to improve predictability (e.g. reduce maximum number of user stories per feature to 3)
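The outlier-flagging steps in the scenario above (flag a long-running story for the Daily Standup, adapt the heuristic if outliers pile up) can be sketched like this; the cycle times and thresholds are illustrative:

```python
# Hypothetical story cycle times (days) recorded on the control chart.
cycle_times = [2, 1, 3, 5, 2, 4, 3, 2, 6]

HEURISTIC_LIMIT_DAYS = 3   # "a story should take no more than 3 days"
OUTLIER_THRESHOLD = 2      # multiple outliers => adapt in retrospective

# Stories exceeding the heuristic's limit are outliers.
outliers = [t for t in cycle_times if t > HEURISTIC_LIMIT_DAYS]

if outliers:
    print(f"{len(outliers)} stories exceeded {HEURISTIC_LIMIT_DAYS} days"
          " - raise in Daily Standup")
if len(outliers) > OUTLIER_THRESHOLD:
    print("Multiple outliers - consider adapting the heuristic"
          " in the Sprint Retrospective")
```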

This is an idea for a series of focused retrospectives called CAIN (Continuous Attention to Individual’s Needs). It is inspired by, and based upon, Bob Marshall’s Antimatter Principle (Bob is @flowchainsensei on Twitter – you can find all related posts and tweets here).

The premise of the Antimatter Principle is simple – Attend to folks’ needs.

Think of CAIN as one example of a concrete implementation of this principle, or a method.

CAIN adapts the typical retrospective questions of “What’s working well?“, “What’s not working well?” and “How can we improve?” to directly address the needs of folks in a team in a systematic way.

The team is looked upon as a group of individuals with unique human needs rather than purely a homogeneous unit. Continuous improvement efforts are focused on the habitual attendance to each individual team member’s needs (hence the name CAIN) rather than trying to ascertain the needs of the team as a whole.

From a Toyota Kata perspective, the current condition is the number of unmet needs in the team. The target condition is zero unmet needs. The team as a whole will continuously endeavour to reduce the number of unmet needs of the individuals in the team via deliberate actions and experiments identified in the retrospective.

What’s working well for me? (needs being met)
What’s NOT working well for me? (needs NOT being met)

Each team member spends time individually reflecting on events since the last retrospective that directly addressed one or more of their innate needs, and those that did the opposite. They are also invited to highlight needs that feel unmet due to something that didn’t happen.

:) "I feel very valued this week, and that I am starting to form friendships in this team."

For this exercise, the team might find it useful to refer to a model for representing human needs, such as Maslow’s Hierarchy of Needs.

Folks are invited to consider their emotional response to events rather than trying to be rational or scientific about what is working and not working. How they feel about what happened (or didn’t happen) rather than an objective assessment of what is effective or not.

For this to happen, it is more important than ever that the team feels they are in a safe environment, so the need for a safety check is paramount. Folks are being invited to share intimate thoughts and needs as human beings, so a high degree of trust is required. Consider that CAIN might also be used as an approach to build this required trust in the first place.

It’s also worth pointing out here that reflecting on one’s actual needs is not a simple task*. It is far easier for us to talk about what we want, or think we want, rather than what we actually need.

However, there is much inherent value in simply talking about needs – especially those of the deepest human kind – even if the “needs” that get identified are not truly the innate needs of the individuals. With practice, the team will become more effective at identifying genuine needs, and in the meantime they will at least be talking about them, building trust and perhaps making their work environment more joyful in small ways.

*This is also apparent in so-called requirements elicitation, where folks try and identify what the customer needs by asking them what they want. Actual needs are somewhat intangible in practice, and tend to emerge over time rather than be identifiable in the present.

How might we attend to my unmet needs?

Having celebrated the needs of its members that are being met, the team will turn its attention to addressing unmet needs. Folks are invited to spend time individually thinking of ideas that might reduce the unmet needs count.

All ideas are presented, and the group votes for the one they think might have the biggest impact.

An experiment is formed, and each team member goes back to her/his work routine, hopefully with an enriched view of her/his own needs as well as those of their colleagues.

What next?

My hypothesis is that CAIN might reap better results than other retrospective approaches because it reduces the risk of groupthink. No attempt is made to collate things identified as (not) working well into a team consensus. It is always the needs of the individuals that are focused on.

CAIN also might reap positive results because it focuses on the strongest lever for improving effectiveness: mindset. The conversations that arise when folks are unravelling their personal and professional needs will reveal differences in mindset – dissonance – which, left unaddressed, will result in a perpetuation of ineffective strategies for getting needs met, leading to conflict, competition and poor results for the team and, ultimately, the entire organisation.

I invite you to give CAIN a try in your next team retrospective, and share your experience 🙂

1. Folks have shared goals with the organisation at a shared time

— i.e. they are synchronised with each other and the organisation’s goals

When we think about what makes Agile such an effective approach to software product development, we think about a single team, working toward a single product vision, happily iterating and incrementing toward a common goal.

As soon as you add just one more team or product in the mix (often referred to as “scale”), you have already added significant complexity to the situation in terms of product prioritisation, team processes, methods, estimation, relationships, dependencies and more. In short, keeping the magic of single team/single goal becomes increasingly difficult — seemingly impossible.

Well, it’s certainly not impossible. Agile organisations make principle-led decisions that allow them to keep the single team/product magic alive. They ensure that every person in the organisation has a clear goal at any given time — the same goal as that of the team they work with — which in turn is aligned to the correct organisational goal at that precise time.

Principles are one thing, but structurally this might sound too hard. Again, yes it’s hard, but it can be achieved via clear and ongoing prioritisation of initiatives (the things of [assumed] value that we want to achieve as an organisation), and forming autonomous teams/squads/tribes around them. If there are dependencies between teams, act to minimise them — remove them completely if possible — for your highest priority initiatives. Push the dependencies down the priority list.

It can also be achieved by forming teams around long-lived themes, such as customer capability. For example, MYOB builds accounting software. If I were to ask one of our SME customers “What does MYOB software enable you to do?“, the type of answer that would come back would be “banking“, “taxes“, “payroll“, “reporting“, etc.

These functions are all candidates for forming cross-functional squads around — squads that include all folks required to deliver end-to-end value within that area of capability for the customer. Suddenly, our business mission of “Making business life easier” is broken down into “Making banking easier“, “Making taxes easier“, “Making payroll easier“, and so on.

As a side note – this kind of customer-centric approach is also a hallmark of a truly agile organisation.

2. Decisions are made quickly and daily by all

— via ubiquitous information, not based on rank

In knowledge work organisations, thousands of little decisions are made every day. If folks do not have good information with which to make those decisions — or they are not empowered to make them for some other reason — there can be a huge impact on the organisation, both culturally and economically.

For example, if I am asked to deliver two outcomes, and it becomes apparent it is only possible for me to deliver one of them, what should I do? Do I have the appropriate information (and authority) to choose which one to sacrifice? What about technology choices? How should I deal with this customer situation? Should I fix this bug? Should I ship this feature now or delay delivery for a week?

If I cannot make these decisions quickly — in a way that is consistent with how others would decide (because we all have access to the same information), and without fear of being punished for making the wrong decision — the organisation might effectively grind to a halt.

An agile organisation makes decisions quickly — based on clear decision frameworks that enable everyone from the CEO to the cleaner to fearlessly make good decisions every day.

3. “How the work works” is optimised for sustainable responsiveness to customer needs, not output nor strategic certainty

— i.e. responding to change over following a plan

An agile organisation is not [necessarily] one that can churn out masses of features every week. It is also not one that jumps around, shifting priorities from one week to the next. Instead, it is an organisation that can seize opportunities very quickly — opportunities that arise either via internal ideation or changes in the market.

An agile organisation should have a strategy, sure, but it recognises that the strategy might be misguided, or needs to change for some other reason, so it ensures that teams can adapt quickly to a change in strategy rather than require a painful restructure. It does not put all its eggs in one basket and optimise for delivery of the strategy. It actually embraces agility itself as a strategy.

Lead time is a common metric for lean/agile organisations to focus on, and rightly so – how quickly can we turn an idea or request into real value for a customer?

In theory, it is easy to turn that killer idea into a real thing for a customer quickly, even if your organisation does not yet have the necessary infrastructure for true agility. You can achieve it via an authority figure usurping other “priorities” and thus seeing their request expedited by a team swarming around it, doing what they are told to do as quickly as possible at the expense of everything else. Look how quickly critical production issues are resolved when all the key people are thrust together immediately with a single, shared focus and goal.

This is why organisational agility requires not just responsiveness but sustainable responsiveness. The expedite situation above can never be sustained. True agility is being able to turn on a sixpence due to work being done in small enough batches — and with enough slack in the system for folks to be quickly available when new and better opportunities arise than the ones we currently have.

4. Where rituals and practices are required in order to achieve 1-3, the preference is always for individuals and interactions over processes and tools

— and conveying information to and within a team via face-to-face conversation.

Organisations typically default to processes and tools when trying to address improvements in performance. Truly agile organisations are full of folks who recognise that improving the quality and quantity of their interactions with other folks is the key to improved performance.

For example, frequent all-hands gatherings to plan together, celebrate and review achievements, learning and progress toward goals — over trying to get everyone to put all their tasks in Jira.

5. All hiring is for mindset over skills over experience, which allows for implicit trust that folks will always commit to doing their best

— i.e. build projects [sic] around motivated individuals – give them the environment and support they need, and trust them to get the job done

This one almost speaks for itself. If every hiring manager in the organisation hires other folks with a mindset aligned to the desired collective organisational mindset, they are [by design] hiring people they trust and who will be motivated to achieve the organisation’s goals.

There is no place — nor need — for hierarchical authority, carrots and sticks in effective agile organisations.

6. Managers address the system, not the individuals

— i.e. improving “the way the work works” trumps trying to improve individual performance

Managers and leaders who focus their efforts on trying to improve the performance of individuals in their teams are playing a low percentage game in terms of the likelihood of any significant effect on organisational performance. Managers are far better served addressing “the way the work works” — the system conditions — to achieve the kind of improvements they are being asked to make.

If you want your organisation to get better in a particular area (e.g. efficiency), fix the environment to support improved efficiency for the whole eco-system, not any one team or individual.

(I’ve referred in my talks to this systems approach to improvement as “building a network for high speed trains” rather than “trying to make trains faster“).

7. There is no delineation of “business” and “IT”

— technology folks work daily with other folks who share the primary concern of serving customer needs, thus the organisation operates as one “we” rather than many “them’s”

As I’ve already alluded to, organisational agility requires high trust between individuals, teams and departments. Unfortunately, organisations are typically built around silos instead, which encourages and then perpetuates a low-trust environment. This is why folks in, say, development teams, end up referring to folks in, say, sales or marketing teams as “the business”.

High level strategy and day-to-day task execution must be mutually respected in equal measure by the folks who are responsible for them. Everyone in an organisation is “the business”, and until folks recognise this and live it daily they cannot be part of a true agile organisation.

8. There is continuous attention to [technical] excellence, simplicity, learning and improvement

Organisational agility requires an embedded continuous learning and improvement culture. There are always better ways of doing things in complex environments (such as knowledge work organisations). “Best practice” and cookie-cutter processes will never achieve agility.

Instead, we must experiment with models, heuristics and methods that allow us to adapt, pro-act, react and enact with respect to what we’re building and how we’re building it.

What other signs are there of an agile organisation? I’m sure there are more — please share 🙂

Being one of the early contributors to the #NoEstimates (NE) hashtag, and a regular blogger on the topic, I am understandably regarded as a “#NoEstimates advocate”. When I get introduced to folks at meetups and conferences, a typical exclamation is “Hey, you’re the #NoEstimates guy!”

Another consequence of my reputation as a pioneer of the “movement” is that I will often get asked questions that, when answered, are deemed to represent the views of all NE advocates or, more bizarrely, NE itself. It’s as if NE is a thing that can have an opinion, or is a single method/approach. “What does NE say about X?” or “Here’s what the NE’ers think“.

What some don’t realise is that there are wide and varied disagreements between so-called NE advocates. It’s similar to the variety of viewpoints that you would get within, say, a political party. The party represents a set of values and principles, but there will rarely be a situation where all the members agree with every policy proposed or pushed through in the party’s name. I guess the same could be said of Agile too.

Folks are naturally interested in the practicalities of what a #NoEstimates approach might look like. This is fantastic, and I welcome questions and discussion on this. I engage in such conversations often. But I do want to make a point about an underlying presumption behind most of the questions I receive. Here are some of the most typical ones:

“How do you prioritise at the portfolio level without estimates?”

“How can you make decisions in the presence of uncertainty without estimates?”

“How do you convince senior management to implement #NoEstimates?”

“How can we minimise the number of things we need to estimate?”

What these questions have in common is the presumption that “not estimating” at all levels of work is where we want to head. That the goal is to reduce our estimates across the portfolio, with zero estimates as utopia. That the premise of #NoEstimates is the less we estimate, the more effective we will be.

DOING NO ESTIMATES, or even FEWER ESTIMATES, has never been the destination from my point of view.

My focus has always been on improving the way we work such that estimating becomes redundant.

This means understanding our business better. Becoming more stable and predictable in our method of working. Building relationships based on high levels of trust and respect. Reducing dependencies between teams. And so on.

People ask “So, Neil, how do we get started with #NoEstimates? Should we simply stop estimating and see what happens?”

The answer to this is a categorical “NO“, at least from where I sit. There are a set of minimum conditions (or “barriers to entry”) before you can get anywhere near being in an environment where you do not need to estimate. Other NE’ers might not answer in the same way, but that has always been my stance. Read my earlier #NoEstimates posts if you don’t believe me!

My views have certainly evolved on the topic, and some of my early work might take a slightly more extreme stance. But I would never advise people to stop doing anything without knowing anything about their context. Even if I did know their context, I would be suggesting small experiments rather than simply stopping doing something that may be of value to that team and/or wider organisation.

Some people see #NoEstimates as meaning “NO ESTIMATES”, and can’t see beyond that.

If anyone wants to start tweeting to these hashtags, go ahead! I prefer to tweet to where the conversation actually is (and shorter hashtags :)), and trust that the reader does their own research and understands the nuances of the debate. You need to scratch well beneath the surface to find where the “NE’ers” agree and disagree.

The destination, in our jobs as software professionals, is becoming more effective at building great software for our customers. The journey is one of continuous improvement via experimentation. We can use Agile, Lean and Kanban principles to help us with that. We can use Scrum, XP, Kanban Method, SAFe, LeSS and other methods to help us with concrete implementations of the principles.

#NoEstimates started as just another Twitter hashtag. It has since become an enduring symbol of an industry that is unhappy with the prevailing way estimation is done, and the effect that has on what we’re trying to achieve professionally and personally. Some critics have cited “poor management” as the root cause of the dysfunctions we see around estimation. If that’s true, and estimates aren’t to blame, what next? How do we address a widespread problem with poor management?

Simply telling people how to do better estimations won’t do the trick. #ShouldWeDoNoEstimates? Perhaps, perhaps not. Either way, let’s at least have a bloody good debate about how we go about things in the workplace. Let’s put our heads together and “uncover better ways of working”.

Behind the NE hashtag is a world of opinion, ideas, principles and approaches that may be worth exploring and experimenting with on your journey to becoming more effective at software development. Many have done so. Many continue to do so.

One of the criticisms of the #NoEstimates (NE) stance is the view that even contemplating not estimating is a non-starter – because estimates are “for those paying our salaries”, not those doing the work. That the business folk in our organisations need to know what will happen and when in order to run their company successfully.

OK, even if NE advocates could successfully argue against that assertion, perhaps it is time we started to acknowledge the “impediments” of the debate? The back and forth arguments that prevent us from moving forward to a more constructive place.

Perhaps it is time to find the common ground, and build on that.

A simple truth is that business wants (needs) both speed and predictability. I think we can all (mostly) agree on that 🙂

Some NE critics argue that we should learn better estimation skills such that our predictability improves. Yep, sure. Difficult to argue that learning to do something better is a bad thing.

Given that we have to do a lot of estimating as software practitioners, learning and using more effective estimation techniques seems a good idea.

However, in return for NE advocates acknowledging that we need to provide estimates for those asking, and get better at doing so, I think it’s time for the critics to acknowledge that arguing better estimation as the answer to all the dysfunctions surrounding software estimation is another impediment to the debate moving forward.

I see common ground in that we are all trying to create better predictability for our teams, customers and internal stakeholders. If we put aside “better estimation” as a way of doing that, how else might we do it?

Better predictability can be achieved in many other ways:

Stability and autonomy of teams

Limited WIP of initiatives (1 per team at any given time)

Frequency of delivery of “done” software

Cadence of collaboration – planning, building and reviewing together

High visibility and transparency of work and progress

Shared understanding of variability, its effects on long range planning and how/when to try to minimise it or leverage it to our advantage

to name but a few.

To take the NE debate forward, we need to find ways to provide “those paying our salaries” with the predictability they need, while at the same time moving away from the dysfunctional behaviours associated with holding teams accountable to estimates in highly unpredictable environments.

What is an unpredictable software development environment? One in which one or more of the things listed above are not being addressed. It might not be a stretch to suggest that’s pretty much every software shop on the planet.

There is common ground between critics and advocates in this debate. Let’s move on from “no estimates for anyone!” and “just learn better estimation techniques” – these arguments will perpetuate butting heads.

Instead, let’s explore – together – how we might create more predictable environments for our software teams, such that estimation becomes easier (and, in some cases, redundant).

“We are uncovering better ways of developing software by doing it and helping others do it.”

One of my frustrations as a software practitioner is our seemingly programmed human bias toward keeping the status quo.

I guess it wouldn’t be so bad if the status quo (pictured above) was actually something approaching effective, inspiring or at least motivating. But unfortunately the reality for many (most) people making their living in the crazy (in a bad way) world of software development remains one of boredom, dysfunction, wasting time on unimportant things, going along with stupid decisions (or lack of them), stress, hatred of Mondays, being put in our place by our “superiors”, et cetera, et cetera.

“23,858 tweets and counting. Worthwhile or a colossal waste of time?”

I tweeted this yesterday. Often I wonder why I stay in an industry that suffers from the afflictions listed above. My work mood swings from utter dejection to tremendous elation. Like the software we create, the variability in my mental state is subject to wild fluctuations.

Here’s the thing. The reason I do this; the reason I stay in the industry, tweet opinions, tips and debate; the reason I write these blog posts; the reason I give a significant portion of my time freely, mostly at my own cost, to talk at meetup groups, conferences and company brown-bag lunches; is…

Because I want to play a small part in creating a better world of work for those involved in software development.

Particularly developers, who I believe have been treated for years like some kind of underclass in organisations of all sizes and industries. Crammed like sardines into some dark, dingy corner of the building, given to-the-letter specifications of some crappy software system that will keep them busy for a few months and then will never be used by a soul. Forced to commit to an estimate of how long this will all take (minus whatever needs to be trimmed off because the estimate doesn’t fit into the already agreed timelines). Constantly being micro-managed and asked “why is this taking so long?” and “why is this so hard?”.

Yes, I’m angry about this. And I want things to change. So I’m trying to do that in my own little way.

I want us to start treating smart, motivated people with the respect they deserve – right from the moment we hire them. Why on earth companies put engineers through 3 or 4 rounds of interviews and then fail to actually trust them once they get the job is beyond me. Managers continue to spoon feed solutions to their subordinates because they “can’t be trusted” to solve business problems quickly and efficiently enough.

This is why I am challenging the status quo in our industry. Some people find what I write or say provocative. One-dimensional. Context-less. “It depends on the context”, people say. “There’s no one right way. No advice is universal.”

I get disappointed (sometimes annoyed) when people who have never met me and know nothing about my professional reputation and abilities confuse what I tweet as “professional advice”, and then start questioning my integrity and ability as a consultant. It is hypocritical and way off the mark.

The reason why people write blog posts with provocative titles, and tweet with controversial hashtags, is because it is interesting. It invites conversation and debate. It stirs things up a bit. God knows (and so should the rest of us) that this industry is in dire need of some stirring up.

I was questioned by a couple of people about a tweet I wrote recently:

“In fact my tip is NEVER do a MoSCoW prioritisation. The implied fixing of scope makes change very difficult. Order things instead.”

A tweet, I might add, that was retweeted dozens of times, so obviously resonated with many.

I was told that my opinion was “unjustified”. That I shouldn’t make “categorical statements”. That “never is a long time”. That some poor soul may take my advice (assuming a tweet constitutes professional advice?!) and destroy a project because I am uninformed about their “context”.

I am constantly told the same kind of things about the #NoEstimates debate. That I can’t tell people not to estimate because I don’t know their context. Their boss might need estimates. Sometimes we need them, sometimes we don’t. Et cetera, et cetera.

With all due respect to these people, they are completely missing the point. For a start, I think it’s ridiculous to suggest that people would read a tweet from little old me and that would somehow create a chain of events that would destroy a project. Even if I were someone with anywhere near the influence and expertise of the great Ron Jeffries or Kent Beck, I don’t think I would wield that kind of power over people.

I do not use Twitter to dish out free professional advice. It is a forum for opinion, conversation and debate. Well written tweets resonate with people in some way, such that they retweet them, favourite them or, preferably, start conversations about them.

Perhaps reading a tweet like the one above will encourage someone to think a bit more about a practice that they have always done without question. To look into alternative ways of organising and prioritising work. To completely reject what I’m saying. Good tweets create a reaction, and whether this reaction is an angry disagreement or a nodding of the head, it has done its job.

Twitter is not to be taken too seriously, but the conversations it can create are serious and, I believe, are helping us as an industry to increasingly question long established practices. This can help us improve the way we work. The way we think. It is vitally important for us to have our world view challenged on a regular basis. This is how we learn and evolve.

I don’t just want to read tweets saying that “it depends on context”. Stuff that confirms my world view. Stuff that I agree with all the time. If every piece of advice or opinion “depends on context” then we might as well just give up trying to improve things.

“Depending on your context, you might want to consider alternatives to MoSCoW prioritisation. However, if it works for you then fine, just keep on doing it.”

Politically correct, perhaps, but it’s not exactly going to give me a reaction. I’ll probably not even notice that tweet on my timeline. “Be happy”. Ooh, can’t say that, it depends on context.

Moving away from social media for a second and into the real world of professional coaching and consulting: as Agile coaches, I believe we can do much, much more for our clients. If someone tells me that I’m being unprofessional for suggesting better alternatives to MoSCoW then we are on different planes, I’m afraid. I know that there are certain principles and practices that have proved effective for me time and time again.

I’m not alone on this. I believe some statements are universally applicable, regardless of context. Questioning the way we do things doesn’t depend on context. Respecting each other and striving to work more collaboratively doesn’t depend on context. Adopting good engineering practices will help you to deliver incrementally and iteratively at a constant pace over time – this is universally applicable also.

Of course context is important – to me that’s so obvious that I can’t believe people keep saying it. We know that. It goes without saying.

But it’s not the point. The point is that many, many companies are still struggling to grasp the principles and practices that we in the Agile and Lean community know can increase effectiveness. Our clients deserve better advice from us than “well, if that’s working for you then keep on doing it”. We all know that something “working” is a perception and may actually be destroying the morale of the employees, or even putting the business as a whole at risk.

It is not “professional” for us to keep playing the context card. We need to be bold in our decisions and advice-giving. Take risks. Challenge the status quo. Encourage innovation, not just of products but of process too. Be a true change agent rather than just blending into the environment.

If you like what I tweet and blog, that’s wonderful so please do keep following! If you don’t like it, please unfollow. Twitter is wonderful because it is the ultimate pull system. If we don’t like what we see we can block and unfollow. We can filter out content that doesn’t interest us. It’s brilliant. And I shall continue to use it to challenge, provoke and generate conversation and debate. I cannot begin to measure how much I have learned and evolved my thinking thanks to conversations on, or starting on, Twitter. I’m pretty sure others will say the same.

And I will continue to help clients, in their context, get better whilst trying to create happy and humane workplaces. I want to live in a world where people enjoy going to work. Work takes time away from our family and friends, and we spend most of our waking hours there, so for God’s sake, if we’re not enjoying it then what are we doing?

I don’t get it right all the time. Probably not even most of the time. But I do this because I care. I will continue to risk getting lambasted by people and losing the respect of gurus and experts. Like the rest of us, I don’t know it all – far from it. But I do not learn by being uncontroversial and not pushing the boundaries of what I believe or how I think things should work.

Thanks for listening 🙂

Note: I will write a follow-up post about MoSCoW prioritisation itself. Aside from the fact that it perpetuates the myth of “requirements” (if something is not a “must-have” then how can it be a requirement?), I’m not including my further ideas on the topic here because it’s not really what this post is about.

Many have already written about the damage it can do, and about some better alternatives to set you on the road to delivering a successful project (read: building a successful product). For starters, Joakim Holm wrote a great post about it the other day. And there’s lots more to investigate using our friend Google!

I am privileged to publish this guest post from the brilliant and lovely Michael Rembach (@mrembach).

In October I stumbled across a blog article about product development using Scrum and the hindering effect that Scrum can have on the innovation process, especially if the organisation is fully ‘agile immersed’. The blog was written by Brian de Haaf (@bdehaaff), the co-founder of Aha! – a product management software company. While the article was well written and brought up many salient points about innovation, I disagree with the overall premise that Scrum has innovation-limiting behaviours. You can read the original article here: zite.to/17HnE4S

The first thing I’d like to point out is that I agree with the points about innovation in the article. Innovation practices, such as having a shared vision, engendering trust in your organisation and having a strategic direction, are all vital ingredients for success, and even more so in technology companies. The thing about innovation is that it’s a cultural thing, and no framework/methodology/philosophy in the world is going to make your company innovative without the desire (or need) to innovate. Having a myopic view of your product because you’re ‘Agile’ misses the point of the delivery focus and discounts the innovation-enabling practices that Agile encourages.

Scrum, and other Agile methodologies, are essentially delivery focussed which is why there is a requirement for product owners to focus strongly on the Sprint cycle and the short-term delivery timeline that it brings. However, this does not and should not excuse the product owner for not checking that what is being delivered is aligned to the strategic goals for the product or in fact, the organisation. The two aren’t mutually exclusive and a product owner is responsible for communicating that vision to the project team so that they are aware of the purpose of the product. Constantly checking in with the vision by all the team should ensure that what is being built doesn’t deviate from the intention of the product’s purpose. The product owner is simply not performing her role properly if she suffers from the myopic concern with delivery-cycles without also ensuring that the product is meeting its intended strategic objectives.

Rather than inhibiting innovation, I posit that Agile has a number of practices that encourage innovative behaviour:

MVP – the primary reason for creating a minimum viable product is to determine that what you’re trying to produce is viable, but it also serves a couple of other important purposes. The first is prototyping: you have the opportunity to experiment with your solution, try something small and novel, and see if it works. The second is that it gives you the opportunity to solicit feedback from your clients, the product ecosystem and anywhere else. This is a primary source of knowledge for decision-making.

Fast-failure – Agile methodologies allow you to fail quickly and learn some valuable lessons before it costs you too much. Innovation is all about finding out new ways to do things and failing fast and safely is one of the best ways to forge new paths.

Continuous learning through retrospectives – a learning organisation is an innovative organisation and retrospectives provide an excellent opportunity to improve not only what we are producing (again, you can look at the strategic alignment at the end of every sprint or release cycle), but also how we work together.

Embracing change – if making changes to your product is painful then your ability to be innovative will be too. Agile methodologies accept that change is inevitable from the get go and therefore provide less resistance to innovating during the development of a product.

Innovation is difficult at the best of times. As Clayton Christensen illustrates in his famous Innovator’s Dilemma, history is filled with the burnt-out shells of successful companies that died as a result of not being able to change. To succeed, innovation needs to be part of the organisation’s culture. The premise that progressive, change-embracing frameworks like Scrum inhibit innovation does not recognise these aforementioned practices. Agile won’t make you innovative, but it sure can help encourage it.

Everyone had fun and was intensely engaged throughout. There were loads of interesting dynamics emerging from the teams, perhaps surprising given the contrived nature of the experiment.

Set up

We set up three same-sized (10-12 people) teams, each with:

an identical jigsaw puzzle (way too big to be completed)

a Product Owner (to provide the vision and direction) and

a Scrum Master (to help the team achieve the PO’s vision)

We opted for 3 × 15-minute iterations, with 3 minutes for a Retro in between.

Each team was told to use a different method – one was a Scrum team, one was a “mob team” and one was a “no rules” team. Here’s what that meant:

Scrum team

Must have Planning (including estimation), Review and Retro in each iteration

We provided Planning Poker cards for the estimation but the team was free to choose whatever estimation method they liked

Must only work on “stories” agreed in Planning – new stories can’t be introduced mid-iteration

Stories are only “done” when PO accepts them (in Review or before)

“Mob” team

No formal ceremonies required

Team all works on one story at a time until “done” (single-piece flow approach)

No estimation

Retro encouraged but not “enforced”

“No Rules” team

Can work like the Scrum team, the Mob team, any combination of the two, or any other way they like

Outcome

Scrum team delivered most stories (3; the other teams delivered 2 each)

Whole group was asked to vote on which they thought was the best outcome

“No rules” team won (emphatically)

Scrum team lost

Interesting Observations

Here are some empirical observations of the evening’s events and outcomes, along with my interpretation of what they indicate in an Agile/#NoEstimates context (marked with ==> underneath each observation).

Scrum team

Delivered most in terms of stories but least in terms of value, both for their Product Owner and as voted for by the wider group
==> Output ≠ Value
==> Comparing teams in a useful way would require consistent measures of both effort and value velocity across teams

Spent far too large a proportion of time (particularly in the first iteration) in planning, and needed to be alerted to this fact
==> Consistent timeboxing is important to ensure there is time to do all that is required, and for less variability of outcomes

A member of the team openly admitted that he inflated an estimate because he did not agree with the value of the story that the PO wanted to do next
==> Estimates are often gamed, and for various reasons

“No rules” team

Implicitly chose not to estimate, but instead to maximise the time they had for building

Eventually delighted their Product Owner (and wider group), but during the game the PO felt like:

The approach to delivery was too ad hoc, even chaotic, especially at the beginning
==> Teams must collaborate in order to be co-ordinated, improve and deliver the right outcomes

Stories were too large (epic-sized), so delivery all happened near the end rather than incrementally
==> Smaller stories have lower variability and can help with early and frequent delivery, creating better predictability for the PO/customer and lessening the need for estimates
==> Larger, higher-variability stories rely on estimates of time, or at least relative size, to provide the illusion of predictability

Started with no process at all, but this was deemed unproductive (with such a big team), so they split into smaller teams with focused goals
==> Smaller teams are more effective because it is easier to collaborate, change direction, gain consensus, etc.

General

Scrum and Mob teams both delivered purely incrementally (concentrating on the edges of the puzzle) rather than iteratively (identifying a recognisable area of interest and building upon it), although stories were clearly too big
==> An iterative approach is crucial for risk management, predictability and delivering the right thing (value); without such an approach you have no choice but to estimate

Product Owners all felt like they weren’t being listened to – this had particularly bad consequences for the Scrum and Mob teams, perhaps due to their purely incremental approach
==> Important for all team voices to be heard, especially given the PO is driving what should be built in order to deliver on the vision

As with many simple and now commonplace “Agile practices”, debates still rage on about the Daily Standup (Scrum) meeting, a meeting which has somehow become a ritualistic signal that a team is “Agile” but is often an equally conspicuous signal of the exact opposite.

I’ve been in many organisations where God forbid anyone asks whether we should get rid of the meeting, or even change it, despite the fact that no one is getting any value out of it every single goddamn day*.

*Except some managers. A daily status update meeting? Terrific! The Daily Standup is an opportunity to micro-manage people every single day without having to approach their desks!

I digress. The point is, people still question the value of the Daily Standup and, if it is indeed valuable, how we might make it more effective.

I share the view of the Scrum Guide on this – at least in what the spirit of an effective Daily Standup meeting is, if not necessarily the prescribed format.

An effective Daily Standup meeting, for me, is one in which the team inspects and adapts both product and process.

That is to say it is an alignment meeting. A daily planning meeting. An opportunity to change our path if there is a better one. We do not have to (and should not) wait for the Sprint Review (product) and Retrospective (process) for this. Continuous improvement is about daily inspection and adaptation.

Here are some of the more effective questions that can be used in a Daily Standup meeting:

How will we work together today to move toward our goal?

What should we focus on today?

What should we not do that we originally thought we would do?

How will we remove this impediment right now?

Given we are a little behind, how might we simplify this product increment?

It is about purposeful intent for the day. It is certainly not intended as a status meeting. If managers and others outside of the core team are not getting the information they require from conversations or the team wall then it will surely pay dividends to improve visibility and transparency in the way people interact while doing their work rather than have a daily status update meeting.

In fact, I would go as far as saying that the ritual of an unchanging Daily Standup meeting is usually a smell of poor collaboration in and between teams on the actual work to be done. Some companies mistake this meeting as a way of actually getting people to collaborate. It’s almost as if they think that the benefits of collaboration, as Agile promotes, can be gleaned simply by having this meeting.

Unfortunately it is not that simple. Standing (or sitting) people together does not make them collaborate.

Collaboration is an organic thing and only comes if the “way the work works” is designed to encourage it.

I sometimes see or hear the argument that, “because we’re Agile we should make the meeting fit with the way we currently work“, and that doing this will intrinsically make it more valuable. So, the argument continues, it’s OK if it becomes a status update meeting because that’s what the environment demands.

The issue with this approach is that the environment in which you currently operate is likely one of managers wanting status updates. One of traditional ways of doing things.

But in order to be effective with an Agile approach we have to do things differently. To think differently.

Agile does not mean “make compromises”. It is about mindful changes in the way we work to move toward improved effectiveness. If something feels a bit different and uncomfortable then it may well be a sign you are on the right track.

As coaches, we ought to let the team decide how they can get most value from a Daily Standup meeting. Then, rather than focusing all our attention on how to improve the meeting, we should instead be helping the managers create an environment in which actual collaboration (working together effectively toward common goals) is encouraged and starts to feel natural.

Where excellence, rather than dogma, can prevail.

P.S. Standing up is not mandatory! If the meeting is timeboxed to 15 minutes then it will be quick regardless of whether you’re sitting down, standing up or doing the cha-cha.

Next week I am speaking at a SIGiST (Specialist Group in Software Testing) event in Melbourne. Having to prepare my presentation has encouraged (OK, forced) me over the past couple of weeks to re-immerse myself in the world of quality, testing and BDD (Behaviour Driven Development).

Despite everything we’ve learned about the value of conversations when deciding what to build into our software; about the value of automating as much of our testing as possible, both to shorten the feedback loop between things breaking and us knowing about them breaking, and to instill confidence among the stakeholders and the team that we can rapidly add new features without breaking existing ones; and about the value of taking a test-driven approach to building our software, based on real user behaviour rather than code behaviour, to enforce good design practices and ensure the software does what it is supposed to do – I still constantly see and hear of teams struggling with their approach to quality.

Some are struggling to find time to improve due to a combination of legacy systems with brittle or no automated test coverage and looming deadlines for new products or features. Some are struggling to create a short enough feedback loop for testing software increments as they are built, so that problems can be addressed before code is deployed, or before developers have moved on to the next feature, or even the one after that.

There is no denying that it is crucial to get the technical practices right from the start. Enough has been written about this. BDD at all layers, continuous integration and automated acceptance and regression tests.

However, when you find yourself adopting a legacy system or process – i.e. you or your predecessors haven’t got the technical practices right from the start – then your only viable option will usually be to improve things gradually. Have developers learn how to write, and then implement, automated acceptance tests. Chuck out flaky record-and-playback UI tests and replace them with robust unit, integration and browser tests using best-of-breed tools. Embed testers in the development team. Gradually start to do all the things that ideally would have been done from the start.

It seems like a desperate situation, but all is not lost. Far from it. I feel that a common mistake teams and businesses make is to place too much focus too early on the necessary technical improvements.

In my experience, the most important thing to improve is the conversations between the business people, customers and the development team.

One effective technique for doing this is the Three Amigos approach, where the customer / Product Owner / BA has a chat with a developer and a tester from the team to agree on the acceptance criteria for a new feature or story before it is undertaken. From this conversation the team can decide exactly what tests are needed, and where they should be implemented, in order to prove that the completed functionality will do what it is supposed to do.

A mature Agile team would now write the necessary tests in their tool of choice (e.g. JBehave for Java), the developers would write just enough code for the tests to pass, then refactor. When all the acceptance tests pass, the story is considered “done” from a functional perspective.
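To make the flow concrete, here is a minimal sketch of acceptance-test-first development in plain Python rather than JBehave. The “free delivery over $50” story, the `Order` class and the threshold values are all hypothetical examples invented for illustration; the point is only the shape of the workflow: executable Given/When/Then checks agreed before coding, then just enough code to make them pass.

```python
class Order:
    """Just enough code to make the acceptance tests below pass."""
    FREE_DELIVERY_THRESHOLD = 50.00  # hypothetical acceptance criterion
    DELIVERY_FEE = 5.00

    def __init__(self):
        self.items = []

    def add_item(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # Acceptance criterion from the Three Amigos conversation:
        # orders of $50 or more ship free.
        if subtotal >= self.FREE_DELIVERY_THRESHOLD:
            return subtotal
        return subtotal + self.DELIVERY_FEE


# Given/When/Then written as executable checks, agreed with the
# PO and tester before the developer starts coding.
def test_order_under_threshold_pays_delivery():
    order = Order()                # Given an order
    order.add_item(30.00)          # When it totals under $50
    assert order.total() == 35.00  # Then the delivery fee applies

def test_order_at_threshold_ships_free():
    order = Order()                # Given an order
    order.add_item(50.00)          # When it totals $50 or more
    assert order.total() == 50.00  # Then delivery is free

test_order_under_threshold_pays_delivery()
test_order_at_threshold_ships_free()
```

When all such checks pass, the story meets its agreed criteria from a functional perspective; in a real team the same tests would live in the tool of choice and run on every build.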

But what if the tester and/or developers have little or no experience with an automated testing approach? I have worked with teams in this situation and it cannot be fixed right away (or even at all if there is no willingness from the business to invest in training and slack time to address the problem).

Let’s say the tester is traditional in his approach, and would typically create test cases which he will use to manually test the code when it comes to him from the developer. What tends to happen here is that the developer writes the code for the story, then hands it off to the tester, who then hands it back because the code doesn’t do what the tester expects it to do. This to-ing and fro-ing can happen once, twice, three times. It’s time consuming and frustrating for everyone, and makes it very difficult to complete product increments in a timely fashion.

However, if the tester and the developer have a conversation before the developer starts coding (with the PO/BA in the Three Amigos meeting, or just-in-time in a story kick-off), the tester can take the developer through his test cases (derived from the acceptance criteria) so that the developer understands everything that the tester expects to work when he is handed the code.

Over time in these conversations the developer will start making suggestions, so the test cases become more collaborative and thus effective. He will also want to make sure the story does not bounce back to him from the tester when he’s coded it, so he may do some more manual testing of the functionality or even write some (more) unit tests before handing the story to the tester. His confidence in his code is likely to have improved, and the bounce-backs become the exception rather than the rule.

The key to building in quality is first and foremost in the conversations, because they create improvements in the way we work together, whatever situation we are in technically. The good technical practices will emerge from the better conversations. Agile is largely about focusing on technical excellence but, as the first line of the Manifesto tells us, more important still are the interactions between the people doing the work. Continuous improvement allows us to start where we are and take one step at a time.

These up-front and ongoing conversations, such as the Three Amigos, can have a massive impact on your effectiveness both individually and as a team, and on the quality and maintainability of your product, increasing your agility to adapt and innovate. Adding such conversations to your process is a great sign of continuous improvement and of embracing the first and most important line of the Agile Manifesto.

"Various projective practices upon trending have been used to forecast progress, like burndowns, burn-ups, or cumulative flows. These have proven useful. However, these do not replace the importance of empiricism. In complex environments, what will happen is unknown. Only what has happened may be used for forward-looking decision-making."
-- Scrum Guide

Agile/Scrum teams are often asked to estimate how long a release might take. Or an entire project. Sometimes this is done under the guise of relative size estimates like T-shirt sizes – or, perhaps more commonly, story points – coupled with an estimated (or guessed) velocity. This is sometimes done even with new teams that have no velocity history.

Scrum, as defined in the Scrum Guide, places a large emphasis on the use of empiricism. Aside from the quote above, the following nuggets can also be found:

"Scrum is founded on empirical process control theory, or empiricism. Empiricism asserts that knowledge comes from experience and making decisions based on what is known. Scrum employs an iterative, incremental approach to optimize predictability and control risk."

My interpretation of Scrum is that, while the Development Team are expected to estimate each PBI (Product Backlog Item), they are not asked nor expected to determine delivery dates, or how much work will be completed by a delivery date.

At Sprint Review:

"The Product Owner discusses the Product Backlog as it stands. He or she projects likely completion dates based on progress to date (if needed)"

So, the Product Owner uses the estimates on the PBIs combined with the empirical knowledge gained from what has actually been done to determine completion dates of a set of PBIs (e.g. a release). At no point does the Product Owner ask the team what will get done (beyond the current Sprint).

This use of empiricism is often neglected by Scrum teams. Teams are asked to project release dates, sometimes several months out, without any velocity history. This is not making projections based on what has actually happened. It is not empirical, and does not work in a complex, ever changing environment.

"A Product Backlog is never complete. The earliest development of it only lays out the initially known and best-understood requirements. The Product Backlog evolves as the product and the environment in which it will be used evolves. The Product Backlog is dynamic; it constantly changes to identify what the product needs to be appropriate, competitive, and useful. As long as a product exists, its Product Backlog also exists."

If you are using estimates, it is important that you use probabilistic estimates based on real, empirical data. Scrum suggests this. Practitioners suggest this also. Don’t ask the team to forecast any further out than the current Sprint. As the Product Owner, use real data to make forecasts and decisions. Asking the team to make longer term projections is not respecting the data showing what is actually getting done.
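The empirical forecasting described above can be sketched in a few lines. This is a hedged illustration, not a prescribed technique from Scrum: the velocity history and backlog size are invented, and the function name `forecast_sprints` is my own. It shows how a Product Owner might resample what has actually happened, Sprint by Sprint, to produce a probabilistic range of completion dates instead of asking the team for a long-range commitment.

```python
import random

def forecast_sprints(velocity_history, remaining_points, runs=10000, seed=42):
    """Monte Carlo: how many Sprints until remaining_points are done,
    sampling only from the team's observed per-Sprint velocities?"""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(runs):
        done, sprints = 0, 0
        while done < remaining_points:
            # Sample a Sprint's output from what has actually happened.
            done += rng.choice(velocity_history)
            sprints += 1
        outcomes.append(sprints)
    outcomes.sort()
    # Report percentiles rather than a single-point "commitment".
    return {
        "p50": outcomes[runs // 2],         # even odds of finishing by here
        "p85": outcomes[int(runs * 0.85)],  # a more conservative forecast
    }

history = [21, 14, 25, 18, 9, 22]  # hypothetical observed points per Sprint
print(forecast_sprints(history, remaining_points=120))
```

The spread between the percentiles makes the variability visible: the wider the gap, the less the team’s history supports a confident date, which is exactly the information a single estimated number hides.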

Well, I don’t want that happening again. How can I make sure I don’t forget to let the dogs out again? Another foul up (forgive the pun) will be difficult to take.

Perhaps I could put a sign up on the wall in the landing, on the way to my bedroom: “DON’T FORGET TO LET THE DOGS OUT!” Won’t be foolproof, but it might help. My wife might decide she can’t trust me to let the dogs out every evening, so she will start reminding me every night, or coming into the living room to check.

Of course she might forget to do this one night. If that happens to coincide with a night on which I also forget, the same outcome may occur.

Now who’s to blame?

This kind of scenario might sound oddly familiar if you work in an IT department or work for a software development company. An innocent mistake (like releasing an obscure but potentially damaging bug), leading to blame of the individual, leading to more control of releases (processes and procedures) and a “don’t fuck up” culture.

Of course we don’t want the dogs to crap on the rug. Blaming me for this incident, imposing more control (the sign on the wall) and reducing trust in me (my wife checking I’ve put the dogs out) *may* solve the problem. But in reality there is still a chance that it will happen again. People make mistakes. People repeat mistakes.

Problem dissolution

By employing a systems thinking approach to this scenario, we can look to *dissolve* the problem. That is, the problem of “the dogs might crap on the rug during the night” is actually removed rather than its probability reduced.

If I install a doggy door, the dogs can get in and out whenever they need to, so they will never be stuck inside when they need to crap. My wife will never have to worry about me messing up again, and blaming me for my stupidity. We won’t need signs up on the wall, serving as a constant reminder to myself and my family that I messed up.

Sometimes buggy software will be released, no matter how high the quality of our code or the stringency of our release procedures. Because people miss things. People make mistakes. People repeat mistakes.

If we make releasing really quick and easy, we can update our tests and release bug fixes before there is any time for blame and increased control to become necessary.

Do you look to merely solve problems in your organisation, or to dissolve them?

This is the first in a series of small posts aimed at new Scrum teams, organisations newly adopting Scrum and people who have been doing Scrum for a while but are struggling to get the results they crave.

This post is based on a response I gave to a question in a LinkedIn forum:

“The BA role is an integral and implicit part of Product Owner Role in Scrum. What is your take on this?”

This is a very common question among those new to Scrum and Agile. It’s an interesting one and a classic example of why, in my opinion, companies the world over are failing to do well with Scrum.

To begin to answer it, I will let the Scrum Guide do the talking:

The Scrum Team consists of a Product Owner, the Development Team, and a Scrum Master.

Scrum Teams are self-organizing and cross-functional.

The Product Owner is the sole person responsible for managing the Product Backlog.

The Product Owner is one person, not a committee.

Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;

Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;

Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,

Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.

Departmental silos are entrenched in the way companies typically do things. They are part of the system. The culture. As a result, the urge to maintain departmental silos is strong.

I would suggest this is a key reason why Scrum implementations might (and do) fail.

Straight off the bat, certain elements of the Scrum Guide are typically ignored or deliberately rejected. These elements may or may not turn out to be key in your organisation, but the fact is they are in there for very good reason. It is a mistake to assume from the outset that your context requires removal of these elements.

Scrum is not asking companies to remove departmental silos, but it is asking that these silos are ignored such that they do not exist within the Scrum team. In the Scrum team, everyone building the product increment is part of the Development team. There are only 2 other people in the team – the Product Owner and the Scrum Master. That’s it. That’s the Scrum team model. Period.

There is absolutely no prescription as to who should be in the Development Team, only that the team has all of the skills and capabilities required within it to build a product increment, and that the team jointly owns all of the work, activities and decisions. In order for effective teamwork to flourish, Scrum says that roles should be left at the door.

That does not mean that our individual expertise and experience is left at the door along with our job titles. On the contrary, the best self-organising teams decide how best to leverage the expertise within the team.

If the question asked in the LinkedIn discussion was actually:

“Are the typical activities undertaken as a BA part of the Product Owner’s responsibilities in Scrum?”

then my answer would be that these, and any other activities involved in building and managing a product’s development lifecycle end-to-end, are shared between the Scrum Master, Product Owner and Development Team. This is made very clear in the Scrum Guide.

To that end, there is no “BA role” in Scrum, much like there is no “tester”, “QA” or “UX designer” role. Roles are part of traditional siloed thinking. Scrum (and Agile) deliberately focus instead on cross-functional teams. Roles are a function of the particular company, not of the activities that need to be done as part of product development.

To get the best results from Scrum it is a good idea to stop thinking about what roles you need in the team, and instead think about what activities are required to build your product. A good self-organising Scrum team will share these activities regardless of whether they have a specialist, designated BA or not.

Personally I like to encourage “collaborative analysis”, where all of the “what” and “why” for every decision, every story, is talked about by the whole Scrum team. Then the “how” is handled by the Development Team.

The popular model of having BAs “write stories” and hand them off to the developers in the team is highly ineffective, bears none of the hallmarks of a collaborative, self-organising team, and is about as far from both Scrum and Agile as you can get.

To build products effectively with Scrum, it’s a good idea to map out all of the activities that are required to build the product. Forget current roles and responsibilities for now. Once you’ve listed the activities, gather a team that can execute those activities in their entirety. If your company has BAs and you need one of them for your Scrum team then by all means have them in the team.

But please remember to ask yourself this key question:

“Is the BA part of the Development Team or are they the Product Owner?”

The SAFe approach to normalised story points makes a classic mistake that everyone seems to make with story points. It is not “relative sizing” to compare stories to a reference story that has been estimated in time (in this case “about a day”).

As soon as you introduce time as a basis for your reference story, and use, say, story points on the Fibonacci sequence, all of the comparisons you make are based on time, i.e. a 2-point story equates to 2 days, 5 points to 5 days, etc.

Even if you are not doing this consciously you will do it unconsciously. So all you have done is estimated “how long” the stories will take to deliver. This is not relative sizing!

The whole point of using relative sizing instead of time-based estimation is that humans are better at comparing the size of things than we are about making absolute judgement of size, e.g. we’re good at being right that building A is bigger than building B, but we’re not so good at being right that building A is about 200 metres high and building B is 150 metres.

Unfortunately when it comes to tasks that we perform, our natural tendency is to use absolute terms because the “size” of a task essentially equates in our brains to “how long”. The fact that story points are numbers doesn’t help with this. Where story points completely lose their value is when we start deliberately equating a point value with a length of time.

True relative sizing of a backlog is to pick a low-value story (one that you are unlikely to implement for some time) and not estimate it at all. What you now do is compare other stories to that story, i.e. I think story C will take longer than story B, story D will take longer than story C, story E is about the same size as story C, etc. At no point do we actually predict how long something will take. We are simply saying which stories will take longer than others, by our estimation.

When a new story emerges you then do the same thing – decide whether it will take longer than the reference story, less time or about the same. Because you have not yet implemented the reference story, you cannot be influenced by the actual time it took.

You can now measure progress against the total backlog as you deliver the stories.
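Measured this way, progress is simply the fraction of the backlog delivered, with each story carrying only a relative label rather than a duration. A trivial sketch, with invented stories and an unestimated reference story R:

```python
# Hypothetical backlog: each story carries only a comparison against the
# unestimated reference story R -- never a time value.
backlog = {
    "A": "smaller than R",
    "B": "about the same as R",
    "C": "bigger than R",
    "D": "bigger than R",
    "E": "about the same as R",
}
done = {"A", "B"}  # stories actually delivered so far

# Progress is empirical: what is finished versus what remains,
# with no prediction of how long the rest will take.
progress = len(done) / len(backlog)
print(f"{progress:.0%} of the backlog delivered")  # prints 40%
```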

One thing I do agree with in the SAFe approach is that you should not do any re-calibration/re-estimation. As soon as you start re-estimating stories based on how long things are actually taking, you are being influenced by time. This not only throws off the relative calibration of the backlog but also ignores the inherent variability of software increments; i.e. there will be outliers within size groups that take significantly longer (or shorter) than the modal average.

P.S. If you’ve read my other #NoEstimates stuff on this blog you will know I do not advocate the use of story point estimations at all, especially due to the way they are typically misused and abused. However, there may be some potential value in doing relative size estimates (e.g. T-shirt sizes), if done right, for one or more teams working from the same initial product backlog in order to give some indication of the overall viability of the initiative and to provoke discussion within the team(s) about the value and possible approaches for undertaking individual pieces of work, aka “what shall we do next”.

Introduction

A continuing theme of counter-arguments posed at the #NoEstimates ideas is that development cost estimates are required in order both to manage risk and to derive value.

This blog post intends to give further insights into how risk can be effectively managed, and how we might determine the value of our initiatives, without the need for making up front and deterministic development cost estimates.

Risk

“Risk is the probability of an unfavorable impact to the project” – Glen Alleman (@galleman).

From the risk angle, the argument goes along the lines that the built-in “risk management” in Agile approaches is not aligned with conventional definitions of risk management in software development.

I’ll go along with this. Agile (and #NoEstimates) does not take the conventional approach to software risk management, which sees project success as “on time, on budget” and thus requires an up front estimate of total scope, cost and duration.

Agile/#NoEstimates offers an alternative way to manage risk on projects (and, no, I’m not talking about Agile Estimation, the spin-off brand of traditional estimation promoted by Mike Cohn). I’ll explain more about this later.

Value

The argument regarding value is that estimated cost is required to determine value, given that value is related both to the timing of when things are released and to how much it costs to develop the things that will (potentially) generate value. That the worth of something to someone can only be evaluated if we know how much that thing costs.

Again I agree to an extent, but there are two key sticking points for me here. One is that we only know how much software development costs after the fact. People say “we need to estimate because we need to know the cost”. Estimating, however accurately we think it is being done, does not allow us to know the cost.

Before the event we can only estimate what will be done and how much it will cost. In addition, the further out we are estimating cost and value, the riskier (and potentially costlier) our estimates become.

By estimating, rather than fixing, cost we have no greater insight into the value, which is also estimated. Essentially we are increasing our risk by estimating both cost and value rather than just value, which is what #NoEstimates promotes. More on this later.

The other sticking point is that value is often highly subjective and personal. I know how valuable a particular brand new Ferrari is, partly because I know how much it costs. That said, if you gave me two different Ferraris to test drive and didn’t tell me how much they cost, I would tell you which one I prefer. Which one was more valuable to me. This has nothing to do with the cost. The one I prefer might be significantly cheaper, but its value to me is higher because it’s more fun to drive and I prefer the look of it.

The same applies with software. There is so much to consider when we try and measure value. Aside from the empirical measure of monetary returns, we have to consider the needs of the customers, the stakeholders and our corporate strategy (to name but a few), not to mention the fact that all of these things change over time.

Agile is about delivering value early, not trying to predict how to maximise value over a given timeframe or a product’s lifecycle. It is the early delivery of value that allows us to tune and adjust our course for maximum longer term benefit.

This is why it is an alternative, and completely viable, approach and should be considered as such.

Agile Risk Management

The key aspects of Agile that help us manage risk effectively are:

Iteration

Continuous selection of highest value work (i.e. making decisions)

Fixed, cross-functional teams with 100% focus on current project

Early and frequent delivery of end-to-end working software increments and

Empirical measures of progress toward goals

With Waterfall projects, the need for conventional risk management is clear. We have no way of measuring progress from day one in terms of working software because we are carrying out requirements analysis, specification and design phases before we write a line of code. People are often working on multiple projects and so we must allocate a percentage of their time to the project at hand.

The only way to measure percentage progress toward project completion is to have a breakdown of the SDLC phases and tasks within each, estimated in days/weeks, and tick them off as we go along. If we don’t complete all the necessary tasks for a given phase in the estimated timeframes, we are “off track” and we need to take corrective action.

With a phased delivery approach, the only way to manage risk is to have an estimate of the total scope, cost and duration of the project.

But if we are working in an Agile way, we are not taking a phased approach to project delivery. We are delivering full end-to-end working solutions in an iterative manner, early and frequently. We are working in fixed, cross-functional teams so teams costs are known and consistent.

This approach allows us to manage risk and measure progress toward project completion (meeting of stakeholder goals within a given budget) from the get-go.

Progress

If we are truly iterating by delivering vertical slices through the system, after our first iteration we will be able to measure progress toward the project goals. We will have delivered a working, albeit perhaps low quality, solution to the problem. We may even have actually met the project goals.

Either way, we can inspect what we have done and decide if we are on the right track. If we are, we can iterate over our solution, improving quality in the desired areas and incrementing new features. If we are not, or we see a better way of solving the problem, we can throw away what we’ve done and start again. We may even decide to scale up our efforts and add more teams, if there is emergent estimated value in doing so.

Given that in Agile we are delivering end-to-end working software from the get-go, we are not burdened with the problems we faced in our Waterfall projects when measuring progress. We have the ability to empirically measure progress because we are delivering “done” functionality, as opposed to hitting pre-determined “milestones” which are not based on what we have actually delivered in terms of a working product.

In Waterfall, so long as we are hitting our milestones then the project status is “green”. For software product development projects, this means that we are deferring our risk management until we actually start writing code. We don’t know that the scope of what we want to build is achievable, and we can’t reduce scope until we actually realise it’s too much (well into the development phase, deep into the project).

In Agile we can manage scope right from the beginning, because we are continually focusing on building the most valuable thin, vertical slices which represent iterations over an end-to-end solution to the problem. We can empirically measure how much we got done and how much is left to do. We can regularly take proactive decisions to cut scope or switch to an alternative approach to improve our chances of delivering a successful outcome. What should we do next for maximum value and maximum impact in meeting our goals? What should we not do? What is the simplest approach for our next iteration?

This is risk management.

These kinds of conversations enable us to focus on doing the simplest thing, for maximum impact, given the budget that we have available. To not wait 9 months to deliver a solution but to deliver a solution in 1 month, then make it better.

Most “Agile” projects are not managing risk

If we decide up front in a project inception on the requirements (product backlog) and solution we will be sticking to, and estimate it will take, say, 9 months, all we will do is incrementally build the solution, usually in horizontal slices, components or modules.

After each “iteration” we will not have a holistic view of what we’re building.

This is a very common approach by “Agile” teams. In this situation we are deferring the management of risk until we actually have a system that can meet (some of) the needs of the project stakeholders, usually late in the game when the deadline is getting close.

This is not risk management. If we work in this way we cannot work with #NoEstimates.

How do we estimate value without estimating development cost?

OK, so assuming we have the capability and will to deliver vertical slices through a solution early and rapidly, and we have a fixed cross-functional team, 100% committed to the project at hand, we can focus on the potential value of the ideas we want to build while controlling cost using small “drips”.

When we use ROI to decide whether a project is worth pursuing, or which of 2 or more potentially valuable projects we should choose given limited people and resources, we base the “investment” measure on the estimated cost, of which the development costs are part, and the “return” is the value we expect to generate, measured on the same scale as the investment (usually money).

There is a flaw with this approach.

6 months, 2 years, it’s all the same!

Let’s say we estimate a project will take 6 months of development time, costing $500k. We expect that when the product is complete it will generate $2m in revenue. The timing of when that revenue gets generated is key. Will we get anything at all before the product is built in its entirety? Will there be a few months of marketing required after all the features are done before we will start seeing the cash rolling in?

The implication of the timing of value generation is that the actual ROI of what we’re building in a 6-month project might still be negative after 6 months of development time, even if we get everything done that we originally wanted done (and estimated).

Now compare that to, say, a project with an estimated duration of 2 years. After 6 months, the ROI of the two projects will be identical. Our net loss in both cases is $500k, so our ROI is -100%; we have spent half a million bucks with nothing (yet) to show for it.
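The arithmetic behind this comparison is simple enough to sketch. Using the figures from the text ($500k spent over 6 months, value landing only when something is released), plus an invented third scenario in which a modest slice of value ships early:

```python
# Figures from the text: a 6-month project costing $500k in total,
# i.e. a fixed-team burn rate of $500k / 6 per month.
monthly_burn = 500_000 / 6

def roi(value_delivered, months_elapsed):
    """Return on investment so far: (return - investment) / investment."""
    invested = monthly_burn * months_elapsed
    return (value_delivered - invested) / invested

# After 6 months, neither project has released anything, so both show -100%.
print(f"6-month project at month 6: {roi(0, 6):.0%}")  # prints -100%
print(f"2-year project at month 6:  {roi(0, 6):.0%}")  # prints -100%

# Whereas shipping even a modest early slice (hypothetical $200k of value)
# changes the picture well before the "project" is complete.
print(f"Early-value project at month 6: {roi(200_000, 6):.0%}")  # prints -60%
```

The design point here is not the numbers themselves but that ROI is a function of *when* value lands, not just how much is eventually expected.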

So, given the erratic, inconsistent and numerous ways we can measure value in software, is the traditional ROI approach an ideal decision making model in this domain?

Agile is about early delivery of value, not trying to predict maximum value

The upshot of this is that the less risky approach to generating a positive “ROI” is to work on options that will potentially generate value early, i.e. with relatively small and simple effort. Put simply, if we prioritise initiatives by virtue of which ones we expect to generate value early rather than how much value they will generate over the product’s lifecycle then we do not need to batch these initiatives up into “projects” and estimate how long the project will take.

This can easily be reverse engineered. If our starting point is a “project”, with a list of requirements, the best thing we can do to manage risk (keep our decisions within the bounds of the near, more certain, future) and ensure we deliver value early is to pick the most valuable requirement/problem to solve and come up with a simple, creative approach to fulfilling that requirement in a very short timeframe.

What’s next? One at a time…

The team can go away for, say, 1 month, after which time we holistically assess where we’re at in terms of fulfilling that requirement. What have we learned? Is this requirement still the most valuable one to work on (ignoring sunk costs)? Are we better off ditching what we’ve done and investing in attacking another requirement?

Our measure of what is valuable must reset after each iteration. It’s irrelevant how much we’ve already spent (sunk cost fallacy).

We need to constantly concern ourselves with what is the most valuable thing to do next. This is Agile. This is #NoEstimates.

And this is risk management. Yes, it’s an approach that requires a different way of thinking about how we choose what work to invest in, how much to invest and the decisions we make along the way. But it is risk management nonetheless.

But we can’t do this when $200m is at stake!

The #NoEstimates debate has hit a point where the main remaining arguments are around its application in big money projects. Most of the original dissenters – who have now spent time reading more about the ideas put forward by myself and the other #NoEstimates crew – are now in agreement with us that, at least for small scale projects, we can get away with not doing “micro-estimates”, and indeed it may be preferable to work this way.

But when it comes to “macro-estimates” – i.e. how much of the customer’s money are we going to spend – it is argued that a #NoEstimates approach is not viable. That when “you are spending someone else’s money” you need a plan (estimated schedule) to ensure you deliver what is required for the money, with some deterministic level of confidence.

The irony of this argument is that when the big number guys come out swinging with their big numbers, these numbers are estimates! When we call a project that we haven’t yet completed, or even started, a “$200m project”, what we are actually saying is “our customer has a $200m budget and we have to deliver what they want for their money”. In other words, the decision has been made to go ahead, and the budget is $200m. There is no go/no-go decision to be made – it’s already been decided that the project is going ahead, and they want a result for $200m.

For me, with such large sums and timeframes at play, there is all the more reason to manage risk by drip funding small amounts and iterating over a solution in the way I’ve described. Scaling up where required. Tuning and adjusting.

The alternative is to manage risk by using probabilistic estimation techniques based on past projects such as Monte Carlo simulations to derive a total estimated cost with a confidence interval, and then constantly adjust these calculations as the project progresses. But I maintain that the Agile way, where we start from a budget or fixed deadline and then actively build and manage scope along the way, is preferable because it harnesses the creativity of designing and building great software and allows us to welcome and embrace change every step of the way.
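For concreteness, a crude sketch of what such a probabilistic, past-project-based estimate might look like, using Python’s `statistics.quantiles`. The cost data is entirely invented, and real approaches (e.g. full Monte Carlo over scope and duration) are considerably more involved:

```python
import statistics

# Hypothetical: actual final costs of ten comparable past projects, in $m.
past_project_costs = [1.8, 2.4, 2.1, 3.0, 2.6, 2.2, 2.9, 1.9, 2.5, 2.7]

# Deciles of the historical data give a rough 80% interval
# (10th to 90th percentile) for the cost of a similar new project.
deciles = statistics.quantiles(past_project_costs, n=10)
low, high = deciles[0], deciles[-1]

print(f"~80% of comparable past projects cost between ${low:.2f}m and ${high:.2f}m")
```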

Create the future rather than predict it

Instead of trying to nail down a plan and predict outcomes, we are forging our own future based on current market conditions at any given time, and the way we feel about what we’ve built so far. We are controlling our costs by working with fixed teams in short timeboxes, and we are constantly assessing the value of what we’re building.

If we work this way we do not need to estimate things up front. Empirical data is being generated as we go along, and we can look at the market with fresh eyes after each iteration. We can see what we’re getting done and what we’re not. We can change our mind on whether we care that we didn’t get the things done that we wanted to get done. We can see which of our assumptions were true and which were false. We can steer our ship in whichever direction we need to avoid the iceberg ahead, while remaining focused on the destination.

This is at the heart of #NoEstimates from my point of view. It is possible to work this way. It is not easy to get to a position where you are able to, but if you can get to that place it is, as Ron Jeffries describes it, “the best known way to work”.

Systems Thinking tells us that we are products of the system in which we operate. That we will perform based upon the ways we are being measured.

Personally, I am acutely aware when the way I am being measured is also a target. I know such a measure is not an effective way of helping me contribute to reaching the organisation’s goals.

But the thing I struggle to understand is that if we are gaming the system, and know we are doing so, at what point do our ethics kick in? What is our tipping point?

I once worked with a team that was battling against technical debt. Regression bugs were appearing with increasing frequency due to a lack of automated integration test coverage with legacy systems. My team wanted to do the right thing and fix the bugs that they found, despite the fact that it was not them who created the bugs, but were concerned that they were falling behind with their own work.

They assigned no blame to the unfortunate soul who checked in the code that caused the regression. In fact, they didn’t even find out who the culprit was until after time had already been spent determining the cause of the bug. There was much complexity in the interactions between components and a gaping lack of integration tests across them. The team just wanted to fix the problem, add some appropriate tests to prevent the problem from happening again, and move on.

The problem for me was that this was impacting on our project schedule. The team were supposed to be working on stories for my project but instead were spending time on bugs created by other teams. I was being measured on the delivery of the agreed scope in the agreed timeframe, not on our software delivery effectiveness across the portfolio. Surely it was in my best interest to ask the team not to work on other people’s bugs? My delivery schedule was being jeopardised. I would be held accountable for this. I would be asked tough questions. Why didn’t I deliver everything I said I would?

But here’s the thing. Despite how I am measured, I am passionate about creating good outcomes for the stakeholders, the customer and the company, not my specific project. I do not see the work to be done as a set of easily definable story cards. In this and other similar situations I wanted my team, and other teams, to spend time reducing technical debt across the board, improving code quality, collaborating with each other to find ways of making everyone’s lives easier, etc.

I can choose to let the system define me. To be a product of the system. Or I can choose to question things. To think holistically about how we can improve.

The system will reject this. But at least I can go to sleep at night knowing that I am doing what I believe is right.

How much do your ethics influence the decisions you make or don’t make in the workplace?

1. You’ve mentioned on Twitter that in your opinion, #NoEstimates = Agile + Real Options. For the curious newbie, what does this mean?

The approach I talk about is very much underpinned in Agile principles. In fact it’s what I believe Agile was intended to be at its core (although I’ve had some disagreement from the likes of Ron Jeffries and Alistair Cockburn on this point).

To summarise #NoEstimates from my point of view:

Constraints breed creativity

Use real constraints to drive decisions, e.g. “this is how much we want to spend” or “we need something by June in time for Wimbledon”

Create mini-constraints (i.e. drip funded iterations) to promote a creative approach to what we are going to build to address the problem at hand

Build awesome teams

Create fixed, capable teams so we know how much our time costs

Scale up team capacity if enough positive value has emerged (by adding teams, not people to teams)

Empower our teams to be bold and free in making solution choices, with focus on “building the right thing” and “delighting customers and stakeholders”

Keep our options open

Cover multiple, potentially valuable options with small experiments rather than committing to one option per team for long periods

Reassess options frequently to ensure an initiative is still valuable (ignoring sunk costs) and is more valuable than other options to which we could divert our team capacity

Anything we haven’t yet built (e.g. our product backlog) is only an option – we shouldn’t assume we’ll build it and shouldn’t worry how “big” it is unless we actually want to do it now, or very soon

Put the “iterate” back into “iterations”!

Truly iterate over the solution (holistic determination of where to take the product next) rather than just incrementing pre-determined backlog items

Deliver early and frequently, with very small (even daily) feedback loops – this makes us predictable

Create collaborative working agreements

Create flexible, collaborative working agreements with our customers which allow us to truly embrace change and deliver to customers’ present needs rather than their needs when we started

Allow customer to cut the cord early if they are happy with what they have (or not happy with progress)

Start from a position of trust rather than paranoia (which traditional contracts are based on)

Favour empiricism over guesswork

Keep work items small and simple, and limit WIP to create a predictable system

Slice features into simple, unambiguous stories using a heuristic rather than estimation rituals

Price work per feature if appropriate, using empirical average cost of features to guide price rather than a deterministic estimate of individual features

Use cycle time and throughput to make near-term prioritisation calls, not to determine release dates (there are no big releases in this approach anyway)

Shift focus away from estimation

Create a culture of honesty by removing negative estimation culture (i.e. get rid of story points and the notion of estimates as promises or deadlines)

Make work and project success about creative delivery of value (i.e. “what shall we do next?”) rather than “on time, on budget”, schedules, deadlines, etc.
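The “price work per feature” point above can be sketched in a few lines. All figures are invented, and the 20% margin is an assumption purely for illustration:

```python
# Hypothetical sketch of pricing work per feature from empirical data,
# rather than from a deterministic estimate of each individual feature.
completed_feature_costs = [8_200, 11_500, 9_800, 7_400, 12_100]  # invented, in $

# Empirical average cost of features delivered so far.
average_cost = sum(completed_feature_costs) / len(completed_feature_costs)

margin = 1.2  # assumed 20% margin, illustrative only
price_per_feature = average_cost * margin

print(f"Quoted price per feature: ${price_per_feature:,.0f}")  # prints $11,760
```

Outliers average out across many small, similarly-sliced features, which is what makes a single empirical price workable at all.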

2. Describe what you mean by a “slicing heuristic”

Essentially it’s a policy for how we break up our work. For example, “A user story must have only one acceptance test”. Rather than breaking features into stories and then estimating the stories, we can use the heuristic, measure our cycle times and then inspect and adapt the heuristic if required.

I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less.
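Measuring the heuristic’s effect is mechanical. A minimal sketch with an invented cycle-time log, checking the average against the ~3-day guide figure mentioned above:

```python
from datetime import date

# Hypothetical cycle-time log: (started, finished) per story, gathered
# after applying the "one acceptance test per story" slicing heuristic.
stories = [
    (date(2024, 3, 4), date(2024, 3, 6)),
    (date(2024, 3, 5), date(2024, 3, 8)),
    (date(2024, 3, 7), date(2024, 3, 11)),
    (date(2024, 3, 8), date(2024, 3, 10)),
]

cycle_times = [(end - start).days for start, end in stories]
average = sum(cycle_times) / len(cycle_times)
print(f"Average cycle time: {average:.1f} days")  # prints 2.8 days

# Inspect and adapt: if the average drifts above ~3 days, revisit the
# slicing heuristic itself -- don't start estimating individual stories.
if average > 3:
    print("Stories are coming out too big: adapt the heuristic")
```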

3. How does your approach differ from that of Woody Zuill? Or, are there more similarities than differences?

I can’t speak for Woody but I feel that Woody’s approach is simpler than mine. He believes that if you follow the Agile Manifesto properly then the need for estimates dissipates.

I agree with him in principle but see systemic issues, particularly in analytic/mechanistic organisations, that I feel need to be addressed in order for #NoEstimates to strike a chord with more traditional managers and executives. At its core though, #NoEstimates is about exploring various approaches to delivering software without the use of estimates, and the commonality between our approaches seems to be the continuous delivery of small increments of high quality, valuable software.

4. Do you think any team can work without estimates? What’s the minimum “barrier to entry” ?

Any team (with the right coaching and knowledge) can embrace the slicing of work, limiting of WIP and measurement of throughput/cycle times, even if they are being asked to estimate with story points or time. #NoEstimates is not about refusing to estimate.

If you’re talking more about the overall approach from the portfolio level down, I’d say there is a minimum barrier to entry:

Mini constraints such as weekly demo/review with customer (small, early and frequent releases)

This looks very much like any typical “Agile” team to me 🙂

5. What advantages does working without estimates provide your team over, say, a team that is using longer cadences, eg. Scrum?

My approach is entirely compatible with Scrum. In some ways I think that it’s what Scrum was intended to be (or at least, in my opinion, should be).

If a Scrum team is working in 2-week Sprints, truly iterating, delivering working software every Sprint, inspecting and adapting the product etc. then this looks very much like the approach I am advocating.

6. A common criticism of #NoEstimates is that when you slice off functionality to deliver (the “heuristic” approach) you are, in effect, estimating. Is this a correct interpretation? Why/why not?

Well, arguably, if you create a heuristic for producing “small” work then I can understand why it is interpreted that way. However, I don’t believe it is estimating. The point is to create simple and unambiguous story cards. The “smallness” is a by-product of doing this.

If we don’t get the smallness we’re looking for (after measuring the result) then we inspect and adapt the heuristic. At no point do we actually look at a card and say “I estimate that this is small”. We trust in the heuristic.

7. You’ve been a really vocal advocate for working without estimates, standing up to some tough questions from established agile practitioners. Why do you think this topic has so many people so roused?

Because the way software projects are typically governed is largely driven by estimates, so it touches almost everyone in the industry. Estimation is an established way of doing things, so challenging it is deemed controversial.

8. What would your advice be to a team considering working without estimates? What should their first steps be?

Don’t simply stop estimating. Try and get better at creating simple, unambiguous slices of functionality. Measure your throughput. Compare story count data with your story point data. Discover for yourselves if a #NoEstimates approach is right for you and a good fit for your organisational culture.
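One way to make that comparison concrete is to forecast the same backlog two ways, once from story counts and once from story points, and see whether the points earned their keep. A minimal sketch, with entirely made-up sprint history:

```python
# Hypothetical per-sprint delivery history: how many stories and how
# many story points were actually completed each sprint.
history = [
    {"delivered_stories": 9,  "delivered_points": 21},
    {"delivered_stories": 11, "delivered_points": 24},
    {"delivered_stories": 10, "delivered_points": 19},
    {"delivered_stories": 12, "delivered_points": 26},
]

sprints = len(history)
avg_story_throughput = sum(s["delivered_stories"] for s in history) / sprints
avg_point_velocity = sum(s["delivered_points"] for s in history) / sprints

# Forecast how many sprints a remaining backlog would take under each model.
remaining_stories, remaining_points = 42, 90
by_count = remaining_stories / avg_story_throughput
by_points = remaining_points / avg_point_velocity

print(f"forecast by story count:  {by_count:.1f} sprints")
print(f"forecast by story points: {by_points:.1f} sprints")
```

If the two forecasts consistently agree (as they do in this contrived data), the estimation effort behind the points bought you nothing that a simple count would not have.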

People need estimates. So they can predict how much software will cost and how long it will take.

People need umbrellas. So they don’t get wet when it rains.

Although, some people don’t need umbrellas. They have awesome waterproof jackets with hoods. They have solved the problem of “how do I stop getting wet?” with a different solution to the humble umbrella.

People need to know what time the trains are running so they can plan their trip to work. Some people do not need to know this because they take the London Underground, where trains typically arrive every 2 or 3 minutes.

What’s your point, Neil, you might be asking? My point is that when people are debating against the #NoEstimates movement, they always seem to gravitate toward the same two arguments:

People need estimates, so we should provide them

We cannot simply start building software without having an idea how long it will take or how much it will cost

To the first point, people only need estimates if we determine that the only solution to the problem of wanting to know “how long and how much” is to make a guess. People who have found other solutions to that problem do not need estimates.

I now wonder: just because the people who still need estimates have not discovered any alternative solutions, does that mean they need estimates or that they think they need them? Or simply prefer to use them over other solutions?

People do not need umbrellas. They need a way to stay dry on a rainy day.

To the second point, I categorically want to put an end to the myth that #NoEstimates equates to #NoPrice or #NoDate. If you read my previous blog posts on the subject or read my tweets you will hopefully understand that my point is the absolute opposite. We DO need a price and/or a date. The only difference is how we arrive at those things.

With estimation, you guess one or both of them (and, in doing so, have a stab at scope too – otherwise what are you estimating?)

With #NoEstimates you set the price and/or date, either through experience and choice (e.g. setting the price/date for the kind of work you do regularly, with a fixed team and cost) or through a real budgetary or time constraint (e.g. “I’ve only got $100k, what can we build for that?” or “The Australian Open starts in 3 months so the Aus Open app needs to be ready to go live the day before”.)

You then incrementally and iteratively deliver, setting mini-constraints within the wider constraint that breed creativity, innovation and predictability of delivery, and have a flexible working and payment arrangement with the customer.

People need certainty about what they will get and how much they have to spend. Unfortunately there is no certainty in software design and development. However, I would argue that #NoEstimates gives greater certainty than estimating does.

When estimating a date or cost you are creating uncertainty around those things, because you are guessing. You are saying “we’ll deliver somewhere between here and here”. However, if your delivery date and/or cost is set by a real constraint, as advocated by the #NoEstimates approach, you have created certainty around those things.

Yes, you may decide to shift the date/cost as you get closer to the initial figures, or once the customer decides they are happy with what they have. You have been delivering frequently and learning about what you are building. You have been creating data, such as throughput and cycle times, and using heuristics and slicing to reduce work increment size, so informed decisions can be made along the way. But you will only go beyond those initial figures if the emergent value of what has been built, and other data you have gathered, suggests that you should. Scope remains uncertain whether you estimate or not.

People still need 500-page business requirement documents. People still need separate test teams and development teams. But there are alternative solutions which may render these needs unnecessary. The alternatives to estimation are real, both at the project and the portfolio level, and are being used by many people across the globe in varying sized businesses.

All I ask is that we consider those alternatives and do not stop searching due to need.

It is no secret to my Twitter followers, and perhaps beyond the Twitter-sphere, that I am on a crusade of sorts to get people considering other ways besides estimating when it comes to costing software development projects and tasks. Such a view remains controversial, even among Agile practitioners. People argue that there is no alternative; customers want estimates, so we must provide. Stakeholders need to know when things will get done. Estimation is seemingly one of the few remaining immutable practices hanging over from the Waterfall era.

One of the common criticisms of my view is that it is unduly dismissive. When asked by our boss or a customer for an estimate, we can’t simply palm them off and say “I don’t estimate! Talk to the hand, sir!”

Of course this is true. But I should point out that I actually see nothing wrong with being asked for an estimate of how long something will take. What I object to is being asked to carry out (or ask my team to carry out) estimation rituals whose results will then be used for making important business decisions.

We cannot palm people off, but what we can do is offer alternative, empirical approaches to traditional and “Agile” forms of estimating, explain exactly how we will provide the required information and why such approaches offer advantages over guessing “how long” or “how big”.

First off, I would suggest that there are many problems with the “how long/big” approach, the biggest of which is that such an estimate does not take into account the:

Inherent unpredictability of building software

Current work in progress (i.e. the team/dev may not be able to start the work “now”, or even for a few days, weeks or longer)

Capacity to do the work (i.e. the team/dev may make the estimate based on certain assumptions of team size which turn out to be false, or a colleague being there who ends up not being), nor

Any upcoming changes in priorities (i.e. something may jump above the piece of work in priority).

From a task point of view, what is estimated as a “10 minute job” may end up taking a day or longer due to one or more of the above. I’m sure you have seen this situation many times over. From a project point of view, this situation is magnified and can be hugely costly, even catastrophically so. 3 month projects become 6 months. 1 year projects become 3 years.

In a situation where there are small tasks flowing through from the customer to the development team that are unpredictable in their timing (e.g. BAU work queues, feature development, etc.), a far better, probabilistic approach to get some semblance of predictability is to do the following:

Measure actual lead times of every piece of work and plot them in a Lead Time Distribution graph

Measure throughput (you can start by simply counting the number of cards in the “done” column at the end of every week)

Use a fixed WIP limit on cards in progress (start, if you like, with the natural limit of team size)

You can now use Little’s Law to calculate average lead time for a card at position n in the queue, i.e. (WIP + n) / throughput:

e.g. With a WIP limit of 2 and a throughput of 4 cards per day: Lead time = (2 + 1)/4 = 0.75 days (i.e. on average it will take three quarters of a day for a card at the top of the queue to be delivered)

With the same formula you can predict where a card 2nd, 3rd or xth in the queue will get done, which is very helpful for guiding your prioritisation:

e.g. Using the same example above, a card 2nd in the queue will likely be done in (2 + 2)/4 = 1 day, while a card 6th in the queue will likely be done in (2 + 6)/4 = 2 days

Bear in mind the only way this formula can provide useful numbers is by having a WIP limit that is fixed (as far as possible). There will of course be variability in how long each card takes, but the law of large numbers will even this out to an acceptable average and it’s certainly far more scientific than asking people to estimate each card.
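The Little’s Law arithmetic above can be wrapped in a tiny helper. This is just a sketch; the WIP limit of 2 and throughput of 4 cards per day are the figures implied by the worked example:

```python
def lead_time(position, wip_limit, throughput_per_day):
    """Average lead time in days for a card at the given queue position,
    per Little's Law as applied above: (WIP + n) / throughput."""
    return (wip_limit + position) / throughput_per_day

# Figures from the worked example: WIP limit of 2, throughput of 4 cards/day
WIP, THROUGHPUT = 2, 4

for n in (1, 2, 6):
    print(f"card at queue position {n}: ~{lead_time(n, WIP, THROUGHPUT):.2f} days")
```

Because the inputs are measured throughput and a fixed WIP limit rather than per-card guesses, the prediction improves automatically as the team’s real data accumulates.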

Note that if you use Scrum, and thus the team breaks down features into small tasks just-in-time at the beginning of every Sprint, you can use the same principles as above to determine when a new feature might be delivered (Scrum has a WIP limit over the Sprint length of the number of tasks in the Sprint Backlog, throughput is the number of “done” stories/tasks divided by the Sprint length, etc.).

Over time you can achieve a higher level of confidence with the predictions as you start to identify and split out different work types, determine probability of delivery times using your Lead Time Distribution graph, etc.
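A minimal sketch of reading probabilities off a Lead Time Distribution, using the nearest-rank percentile method (the lead times below are made-up sample data):

```python
import math

# Measured lead times in days, one per completed card (illustrative only)
lead_times = [1, 1, 2, 2, 2, 3, 3, 4, 5, 8, 9, 13]

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest observed lead time within
    which at least pct% of cards were completed."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

print(f"50% of cards done within {percentile(lead_times, 50)} days")
print(f"85% of cards done within {percentile(lead_times, 85)} days")
```

This lets you answer “when will it be done?” with a confidence level (“85% of cards like this finish within 9 days”) instead of a single guessed number, and the answer sharpens further once you split the data by work type.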

What about “how long will this project take?” !! Warning !! You can scale this approach up to the portfolio level. But… do bear in mind that building an entire software product rarely has a finite end point or a repeatable result because it is not possible (nor desirable) to define all of the scope required to deliver a delightful, valuable outcome. Use such predictions with extreme caution. There is no substitute in software product development for creating certainty around costs and delivery times via fixed agile teams delivering working software early and often, short feedback loops with the customer, etc.

So, next time you’re asked “how long” or “how big” about a software project or task, don’t palm off your boss or your customer with simply “I don’t estimate!”. Perhaps you might consider answering: “I don’t estimate! But… here is how we can save ourselves the cost of estimation meetings and make empirical predictions going forward to answer these questions with more confidence.”

This is the second in a series of blogs about why I believe we should not be estimating software projects. The first post talked about estimating at the team level, whereas here I talk about the contractual level and how to arrive at more Agile, iterative working arrangements.

Agile team, same old contract

Traditional software contracts, particularly with external parties, are based on:

Establishment of scope

Estimated time to deliver that scope

A price derived from that time + associated costs + profit margin

Many, if not most, of today’s software contracts are based on similar premises, even in supposedly “Agile” projects. In order to mitigate the risk of their deliverable running late and bumping up the cost, many customers demand fixed price contracts. Others demand that the supplier contractually fixes the delivery date, to ensure some obligation around that date is met, and shy away from time-and-materials engagements. Suppliers often like the fixed time approach as well because it creates predictability around cost. Fixed price contracts provide certainty around the project’s ROI, assuming it can be delivered at a low enough cost, and customers like to know how much they are spending.

There is nothing inherently wrong with any of these approaches or the reasons behind doing them. The problem lies in how we arrive at delivery dates and prices. In order for a contractual engagement between a supplier and customer to be worthwhile to the supplier it must deliver a positive return on investment. Usually this means that the money received from the customer for the supply of the product or service must exceed the money spent by the supplier providing it. So how do we balance that equation? Customers want certainty they will get what they want in the agreed timeframe and/or for the agreed price, while suppliers want to make sure they make a profit on the engagement. Seems simple enough. But what is missing from these scenarios? Even if both parties accept the well-understood iron triangle of time/cost, scope and quality, and that at least one of the three must be variable, is this enough on which to base a low risk and mutually valuable contract? I believe the answer is no, and not just because scope needs to be movable.

Quality is variable, not fixed

What?! Sounds controversial but I believe it to be true. In addition to the need for scope being variable, Agile folk also tend to talk about quality being fixed and uncompromising, meaning that time and cost can also be variable to deliver the best possible outcomes. Aside from the fact that leaving the cost and/or completion time of a project open is generally deemed an unacceptable way to conduct business, and likely why many businesses shy away from “Agile” contracts or working arrangements, I actually think it is un-Agile to fix quality. By this I’m not talking about code quality (the debate about what are bugs and acceptable levels of bugs in minimum viable and evolving products is for another blog post, another day). I mean quality in terms of what the customer defines as quality, and for me they are the only ones qualified to do so. IMO quality is an ever-changing variable in a project, just like scope. The difference is that the customer defines quality, either explicitly or implicitly, consciously or unconsciously. Scope, however, is defined by the supplier. Personally I think of quality in the context of products and services as:

“A subjective meeting of a need or requirement to the satisfaction or delight of the customer.”

If it is fair to say that what might delight a particular customer one day might not do so in 6 months’ time, and that what delights that customer right now may horrify another customer right now, I believe it is also fair to posit that quality ought not be fixed. I believe quality is what we should try and achieve, and it is what the customers want, but we cannot fix what it means to achieve it. We will fail if we concentrate on time/cost and/or scope without making sure we are adjusting our delivery behaviour to suit the customer’s perception of quality. When we talk about projects being either “on track” or “off track” we always base it on our own interpretation of whether we are meeting the customer requirements. I believe the only way we can know if we are on or off track is by asking the customer. They are the ones who know what they want. And this will most likely change. And this is fine! Great, in fact! That’s why we’re being Agile, and why they signed an Agile contract, right?

Don’t deliver the requirements, deliver what the customer wants

Delivering all the scope the customer wants may not actually delight them. It may even annoy them. Or cost them big time. They’ve hired you because you’re an awesome web design company with a great track record. They love your previous creative, innovative designs. And now you have done exactly what your customer has told you to do and it looks crap because your customer does not have a flair for web design. They are the customer, you are the supplier. You are the expert in what you do. You should be telling the customer the scope that will meet their requirement, not the other way round. And they should be telling you whether you are meeting their requirements or not. I believe you can never be “on track” in a truly Agile project, at least in a Gantt chart or velocity-based-Agile-release-plan sense, because the entire fabric of what you are building can change at any moment. If the contractual arrangement is done right then change is absolutely fine, to be expected and welcomed.

Agile contracts – the reality

So what really is an Agile contract?

Fixed price contracts are fine. Fixed time contracts are fine. But here are the caveats:

Do not fix time based on an estimate of cost because that inherently means you are agreeing to up-front scope detail that will likely bite you on the arse later and restrict the customer’s ability to request changes (and yours to welcome them) for their competitive advantage

If the customer does not fully understand and embrace the inherent unpredictable, creative and innovative nature of quality software solutions then work with them at your peril

If you don’t want to turn away work so you try and agree scope with the customer because “they insist”, and then base dates and times on estimates, do not pretend this is an Agile contract and make sure all parties understand the implications of this

Know your costs by having a fixed team and determine a “final” delivery date, or allow the customer to determine it

If the delivery date is acceptable to both supplier and customer then you now have a certain delivery date, no guesswork required; if the customer wants delivery sooner, reduce the price AND the expectation of quality

When you purchase something more cheaply outside of software, e.g. a cheap old banger of a car, you can assume you will likely receive a lower level of quality – why is software any different?

Negotiate a flexible, iterative, drip-funded contract that allows the customer to retreat early (either because they’re already happy with their product or because they’re not happy with the progress; if it’s the latter learn from their feedback, improve and move on)

The aim is to delight the customer and make a profit so do not simply do what they ask you to do; they are buying your expertise and guidance for meeting their need, so don’t take this responsibility lightly and think you’re serving the customer simply by “delivering customer requirements”

Deliver early and often (duh!); iterate, don’t just increment, and make this part of the working agreement

If possible give the customer a sense of the kind of outcome they can expect for varying price and/or delivery times (based on previous work done by your company) and give them options to “upgrade” or “downgrade”

Remember we’re supposed to “welcome” change?

Yes, don’t try and fix scope. But be prepared to move around on quality also. Allow the customer to accept an earlier version of your product because it does the job and they’re delighted they don’t need to spend any more cash on achieving their desired outcome. Or to love their product so much that they now want to spend more enhancing it. This is variable quality, in my book. Variable scope refers to the cost-side of building software; the amount of work we need to do to reach a specified outcome. Variable quality refers to the value the customer feels they are getting. It’s subjective, dependent on the customer and their particular circumstances. Delivering high value outcomes to the customer may cost more than lower value outcomes or they may not, depending on what the customer feels about the iterative outcomes. That “old banger” that you bought for $1000 may actually provide very high value and quality to you personally. Or it may be housing a classic engine that you didn’t previously know about, giving it emergent value. To someone else it’s a worthless piece of junk.

In the same way software solutions, products and services are entirely subjective in their quality. Some people think Microsoft Word is awesome and feature-packed and they base their entire business operations around it. Some think it is terrible, buggy and doesn’t do anything they want it to do. Let’s not pretend that delivering “quality” software is a predictable outcome any more than fixed scope is.

Variable quality pertains to the wonderful opportunities we ought to have with Agile software development for correcting the course and building the right thing; truly welcoming and embracing change for the customer’s (and our) benefit. This is what Agile contracts should be about IMO. Remove the uncertainty of time and cost by making them certain, and celebrate with your customers or suppliers the uncertainty around exactly what will be built. Why not consider basing your contracts on a mantra more along the lines of:

“We guarantee we will work with our customers’ time and budget constraints to iteratively build and evolve a delightful outcome to an agreed level of expectation.”

And for everyone’s sake, we should not be estimating in order to do it.

Introduction

This is the first in a series of essays exploring the huge topic of estimation within software development projects.

There are many different contexts in which estimates are given, and I am going to try and cover off as many as I can think of in these blogs, but the pattern of my argument will remain consistent: I believe we ought not make decisions in software projects based on estimates and that there are better alternatives for both the suppliers of software products (financially and ethically) and their customers (internal and external). Many of these alternatives are being used in real companies delivering to real customers with great effect.

Given the vastness of the topic, this post focuses purely on the scenario of one Scrum (or other method of iterative product development) team delivering a software product without estimating. Issues of scaling up or down capacity (adding or removing teams) will be covered in a later post about estimating at the portfolio level.

Will we deliver on time?

This is a question that often gets asked of a software development team at the beginning and throughout a project, and is a key reason why many believe we need to estimate. However, the ironic twist of seeking predictability by making predictions based on guesses is not lost on most people. We all know, or at least suspect, that we’re plucking numbers out of thin air. That we don’t yet know or understand the solution. Or the domain. We comfort ourselves by calling our guesses “educated” or “quick and dirty”, to justify using them to make important business decisions.

Building software is by its very nature unpredictable and unrepetitive. While building software we cannot easily break down the work into same-sized, repeatable widgets like we can when manufacturing car parts. Unlike car production, the exact product we are building is unknown until we’ve built it, so how can we break the work down into smaller parts up front? One increment of software is not like the next. Software development is a creative, variable pursuit, and solutions are often revealed as we go along. For this reason, fixing scope in software projects is not really possible. Even if it were, it is becoming widely accepted that attempting to do so is undesirable because such an approach does not allow for (or, at least, does not embrace) emergent design, requirements, change and innovation. If we accept that scope is always variable, we must also accept that the delivery date may end up as a moving goalpost while we scamper to deliver what we think is fixed scope “on time” and “on budget”.

So, if it is true to say the concepts of “on time” and “on budget” are usually based on an estimate of how long it will take (and how much it will cost) to build software to meet a fixed set of requirements, rather than a concrete time or budget constraint, it is likely fair to say that we may take longer to deliver the software than we initially estimated. Yes, we may also be quicker than we thought. Or we may get our estimate just right. But, regardless of the outcome, does it actually matter how “correct” our estimates were? Does the act of estimating our work have any impact at all, positive or negative, on the delivery of great software or its return on investment?

Vision is key

To build software we need a clear vision and shared purpose of what success looks like. When commencing with a potentially valuable software initiative we need well understood high level goals, not the detail of how we will achieve those goals. In true iterative fashion we can then align our just-in-time decisions about how we will improve the product in the next iteration (i.e. what we will build next, aka top items in the Product Backlog) with these goals. I posit that trying to estimate how long it will take to deliver software to achieve one or more high level goals, and then basing real decisions on this estimate, is a questionable approach. Don’t we want our solution and architecture to emerge? Don’t we want to welcome and embrace changes for the customer’s competitive advantage as the product evolves and becomes more real to the users? These are key principles in the Agile Manifesto and I believe they lie at the heart of a truly Agile approach to building software.

Remove the unknowns

Instead of depending on an accurate estimate for predictability we can take away the unknowns of cost and delivery date by making them… well, known. The Product Owner can fix the delivery date based on a concrete budgetary and/or time constraint (e.g. 3 days before the Australian Open starts for the Australian Open app is a concrete time constraint, and “we have to build something for $30,000” is a concrete budgetary constraint). Within that constraint the team can then fix incremental delivery dates (e.g. end of every Sprint) to allow focused effort on iterative product evolution (it’s not good to have priorities changing every day on a whim) and provide the opportunity to deliver early and/or under budget. This approach is also useful where there is no concrete budget or delivery date, although the need for interim release dates diminishes if the team (and organisation) is mature enough to have a continuous delivery model.

Estimating sprint velocity is waste

Rather than fix the solution up front (which is required in order to give a “how long” estimate), or make forecasts every Sprint about how many points or stories will get done, I believe teams ought to commit at the outset to building and delivering the best possible product by a given date and/or for a given amount of money. For me, release planning using, e.g., velocity (“how many points can we deliver by the release date?”, or “what is our release date given our remaining scope and velocity?”) is contrary to an iterative approach (holistic, evolutionary improvement of the product) and is more in line with a purely incremental approach (delivering a pre-defined Product Backlog feature by feature).

When we estimate and use velocity as a planning tool we are making an assumption of how much can get done in a time period. For that information to be useful and meaningful we need to have an amount of stuff in mind that we want to deliver (i.e. a fully estimated Product Backlog). I don’t think it would be too controversial to suggest that all the time (and therefore $$$) spent on estimating backlog items that do not end up getting delivered is waste (at least in the Lean sense).

But what about all the time and $$$ spent on estimating backlog items that do get delivered? To answer that question, I will ask one more question: “Did the PO ever prioritise one story over another based on it having a lower estimated cost (story point size)?” If the answer to this question is “No” then I conclude that all estimating in this context was waste because no decision was made based on the estimates that were given (instead the PO simply prioritised the highest value stories). If, however, the answer is “Yes” then estimates controlled what I believe should be value-based decisions. Estimating a backlog up-front and then release planning using velocity is a cost-based approach. While costs are obviously important in running a software project and, indeed, a business, if decisions are made purely on cost then some of the great software we use and rely upon today (e.g. much of what is made by Google, Facebook, Apple, Yahoo, Spotify, etc.) would never have been built and we would have one explanation as to why there is so much crappy, expensive, bloated software in the world.

Iterate, don’t estimate

I believe iterative (Agile) development is 100% about making decisions based on customer and/or business value, using empiricism over guesswork and fixing cost by having a fixed team (a la the Spotify “squad” model) with known timeframes (frequent, predictable release dates as opposed to “deadlines”, which are release dates for “fixed” scope based on imaginary constraints). Knowing our costs and delivery dates gives us certainty which allows us to embrace the delicious uncertainty of building great software.

btw – Having a fixed delivery date doesn’t mean that we will necessarily stop building our product on the delivery date. We may have already stopped or we may choose to continue. What it does mean is that we will continually make go/no-go decisions based on the emergent or potential value of what we are building rather than estimating the cost of a particular solution.

Shift focus to “small”

From the team’s point of view, I believe it is far more valuable to get better at breaking down stories JIT (and only JIT – any earlier is potentially wasteful) to be as small as possible (or, at least, as is practically possible) than to “increase velocity”. For me, a high-performing team has the ability to deliver frequent “done” increments to the product that can generate immediate feedback and/or potential value for those using it. Clearly the smaller the increments the more frequently delivery can happen, which leads to shorter feedback loops and increased learning and flexibility for the PO to prioritise emergent features over features she originally thought she wanted/needed that have diminished in value, or even to take a complete change of direction. This, in my opinion, is far more in tune with true business agility.

The importance of how many stories or points gets delivered in a Sprint becomes truly insignificant when the team is delivering frequent changes to the product and putting them in the hands of users. This, for me, is the crux of why software projects are trying to embrace an Agile approach. But until the estimation stops I believe we’re being held back from true high performance which can deliver awesome outcomes for customers.

I love this approach to splitting up user story value by considering vertical slices through the technical solution.

Iterative and incremental development is a tricky art to master. Delivering very small increments of value takes some practice. With iterative development we must be happy to frequently revisit areas of the system that we are building as we learn more about them, which is quite different from the traditional approach (broad and shallow engineering versus narrow and deep).

This is where I believe the Agile Manifesto authors were coming from when they spoke about “Simplicity, the art of maximising the amount of work not done”: implementing the simplest technical solution in order to deliver value quickly. It does not necessarily constitute the final solution, and it certainly does not mean “quick and dirty”. We still need code quality (unit/integration/acceptance tests), and the goal is to have a usable system – something we ourselves would be happy to use and would be able to provide feedback on.

But for an individual user story we are simply trying to meet the goal of that story in the quickest and simplest way possible while providing an acceptable technical solution to meet that purpose. If the code is simple and maintainable we can easily build upon it if required, and the required architecture will evolve as we both iterate and increment.

So we want stories as small as possible (no more than a couple of days of work) and with the simplest acceptable solution under the covers. A good way of looking at it is “what’s the minimum amount of code I need to write to pass the acceptance tests?” (this approach of course leads naturally into the worlds of TDD and BDD, which I encourage you to read more about).
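To make that question concrete, here is a minimal sketch in Python. The feature, the discount rule and all the names are invented for illustration; the point is simply writing just enough code to make the acceptance tests pass, and nothing more.

```python
# Hypothetical acceptance tests for a made-up "bulk discount" story:
# orders over $100 get 10% off. All names and rules are illustrative only.

def price_with_discount(total):
    """The minimum code needed to satisfy the tests below - nothing more."""
    return total * 0.9 if total > 100 else total

def test_discount_applied_over_100():
    assert price_with_discount(200) == 180

def test_no_discount_at_or_under_100():
    assert price_with_discount(100) == 100

test_discount_applied_over_100()
test_no_discount_at_or_under_100()
print("all acceptance tests pass")
```

If a later story demands tiered discounts, the tests grow first and the code follows – the architecture evolves as we iterate, exactly as described above.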

Working this way enables us to get early feedback on the feature and decide whether to invest more effort (via more stories) for that feature, thus allowing the flexibility for the product owner to prioritise a different area of the system if (s)he so wishes.

Have a great weekend everyone. Perhaps consider making the goal of your Sprint Planning meeting on Monday to split your stories down even smaller using some of the excellent techniques available. The benefits are numerous.

To demonstrate why this request is nonsensical, first imagine a mature, high-performing Agile team that delivers on average 10 stories of roughly the same size in every 2-week Sprint (i.e. 1 story per working day).

Now imagine we asked the team to take on just ONE story every Sprint. Their capacity is 10 stories, but we ask them to only deliver 1. What might happen?

Well, we can’t be sure, but it is fairly safe to assume that the 1 story will almost certainly be delivered. We can also be pretty sure that it will be of an extremely high quality, given that the team are working well under capacity and so have plenty of time to dedicate to ensuring a bug-free and pleasant user experience. They may also spend extra time on exploratory testing, ensuring that the whole product, of which this story is a small part, is not hiding some ugly buggy behaviour. If they do find some bugs, they may fix them and add some tests to their regression suite to ensure the bugs don’t recur, increasing the holistic quality and maintainability of the system.

Given that the team knows they are an awesome, high performing team and they have plenty of time to spare in the Sprint, they will likely spend a large portion of their time not working at all. Having fun. Slacking off a little. Giving their brains time to breathe, to reset. Enhancing their team culture and spirit.

From a planning point of view, we may not have speed but we sure have predictability. We know that the team delivers 1 story every Sprint so we can very easily figure out when our product will be delivered with close to (if not exactly) 100% confidence.

OK, now let’s instead imagine we ask the team to deliver 2 stories per Sprint. It’s not too much of a stretch to assume we would get a similar result to the above, except this time some (albeit small) sacrifices will be made. Perhaps some of the extra, luxury activities will be left out. Perhaps all of the aforementioned activities will be done, but with less time spent on them. So a little less story and product quality. A little less fun and recuperation time. A little less team building. While it’s highly likely that the team will deliver the 2 stories, the probability is slightly less than when we asked them to deliver 1 story. So we have a little less predictability.

What about if we extend this scenario to 5 stories? Then 8? Now imagine we’re struggling to hit a contractual deadline so we feel the need to “speed up”. So we ask the team that predictably delivers 10 stories to deliver 12 (now we’re over capacity). Or even 14?

Hopefully you can see where I’m going with this. The more stories we ask the team to deliver, the less time they can spend on quality, the more likely shortcuts will be taken, the more likely technical debt will be incurred, the more likely team culture and effectiveness will suffer, the less fun will be had, the more fried the team’s brains will be and the less predictable we will become at delivering software.
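The thought experiment above can be sketched as a toy simulation. Every number here is invented: story effort is drawn from a lognormal distribution with mean 1 effort unit, and the team’s capacity is assumed to be 10 units per Sprint. The only point it demonstrates is the shape of the curve – delivery probability (predictability) collapses as the requested load approaches and exceeds capacity.

```python
import random

random.seed(1)

CAPACITY = 10.0   # effort units the team can absorb per Sprint (hypothetical)
TRIALS = 10_000

def delivery_probability(requested_stories: int) -> float:
    """Fraction of simulated Sprints in which every requested story fits."""
    successes = 0
    for _ in range(TRIALS):
        # Each story's true effort varies; mean effort is 1 unit
        # (lognormal with mu = -sigma^2/2 so that the mean is exactly 1).
        total = sum(random.lognormvariate(-0.125, 0.5)
                    for _ in range(requested_stories))
        if total <= CAPACITY:
            successes += 1
    return successes / TRIALS

for n in (1, 2, 5, 8, 10, 12, 14):
    print(f"ask for {n:>2} stories -> delivered in full "
          f"{delivery_probability(n):.0%} of Sprints")
```

Asking for 1 or 2 stories yields near-certain delivery; asking for 12 or 14 against a capacity of 10 makes full delivery a rare event – which is the “less predictable” outcome the paragraph above describes.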

Read that again – the “faster” we ask (or worse, tell) our teams to go, the less predictable at delivering software we become, and that software is more likely to be of a lower quality. Allowing our teams to deliver at a constant, sustainable pace ensures quality, predictable software delivery, a higher chance of happy teams and happy customers, which leads to higher business value (e.g. profit).

In short, by allowing the team to find the right balance and deliver high quality software at their capacity, a cycle of success is created.

So, managers, please think twice before asking your teams to speed up, i.e. deliver more stories (or story points) than usual in a Sprint or sequence of Sprints. It’s like asking a marathon runner to start running faster after 32k for the final 10k – you’re increasing the chances of long term failure (not completing the marathon at all due to fatigue) for a potential short term gain (running some quicker kilometers).

If I want someone to, say, build me a website, in most cases there are two possible constraints I have. I either have a maximum amount I want (or have available) to spend, or I need my website delivered by a particular date. In a truly Agile project, both of these are the same for the supplier because there is a fixed team, i.e. time constraint = budgetary constraint.

Back to my requirements. Let’s say I have $5000 available. If I engage a web design company, I can choose to not tell them my constraint, perhaps because I want to save money and get the “best/cheapest quote”. I can simply ask “how much will my website cost, given that I want x, y and z?”

This is the predicament many software companies have – how do we determine a price for the customer? The answer is invariably to take the customer’s requirements, devise a solution and estimate how long that solution will take. This will then derive the cost to the company, which will determine the price to the customer.

As customers, let’s stop and think about this. Is this the approach I want the web design company to take? Does this provide the best possible value for me? When I engage the web company, which of the following would I rather?

A: Stay coy about my $5000 budget, and the company comes back and tells me they can build my site for $4500, having based that price on a fixed design/solution and a guess of how long that design will take to build. Perhaps they’ve actually shaved time from the team estimates in order to undercut a competitor. Perhaps they’ve added on time as a “buffer”, increasing the price for me. We will sign a contract based on a SoW detailing what I will get for my money. If I want to change any of the detail as I start to see the website built, I will need to pay extra or drop some of the originally agreed features. These changes will need to be costed accordingly, again based on a guess of how long the new feature will take compared to the original feature.

B: Reveal my budget. They come back and say that my $5000 buys 5 weeks of work, and the team will build the best possible website they can for that price. They might show me examples of other clients’ websites that cost around $5000 to give me an idea of the quality my website will have. They will work with me in weekly iterations to ensure I’m happy with the progress, that I can change things as we go along and that the things most important to me are always being built first. They will deploy my site to a demo URL daily so I can see the site evolve and provide feedback at any time. If after a week, or two weeks, or three weeks, I’m not happy with what is being produced, I can choose to end the relationship. This makes it clear to me that the web company is absorbing much of my risk and that they are very confident they will do a great job for me. I as the customer am the one gauging the progress against my requirements, rather than them estimating that they are “on track”. They want to form a working relationship with me in order to build the right thing, so that they might get my repeat business and so that I might recommend them to my friends and colleagues. Their mantra is to delight their customers.

Option A requires estimation (guessing/risk/uncertainty), upfront design and makes change hard. Option B requires no estimation, design can change and emerge as we go along, embraces changes as I see the site evolve and shows a company wanting to work closely with me to achieve a result I am delighted with. One that is prepared to front extra risk (of losing money on the contract) because they are so confident in the quality of work they do and of the relationships they form with their customers.

I find it curious when people criticise Scrum as if it is competing with Kanban. I don’t believe it is, and I don’t believe it is particularly worthwhile debating Scrum vs Kanban as two Agile methods because that’s not really the case. Kanban and Scrum have quite different purposes (although they do perhaps have similar intentions).

Put simply, the purpose of Kanban is to create a kaizen culture, one whose primary concern is that of learning, improvement and process evolution using “the scientific method”. Conversely, despite Scrum having lofty yet admirable aims of “changing the world of work”, the purpose of Scrum is to enable teams to develop products effectively. Scrum is generally a bottom-up, team-based approach and so, as the Kanban brigade rightly point out, it is not particularly (if at all) effective at instilling a kaizen culture (fortnightly team retrospectives, even done well, do not create a culture of continuous improvement in an organisation). It’s also not great as an enterprise solution to perceived effectiveness problems unless the organisation really understands the cultural implications of moving to Scrum across the board and has a collective mindset that can buy-in and adapt.

But here’s the rub. To me it’s not about whether an organisation should choose Scrum or Kanban – both are frameworks or methods for different contexts and different intended outcomes. Many companies have identified that they are crap at delivering software and want to get better at it. Rightly or wrongly, these companies are not seeking a kaizen culture. They simply want to deliver software better (by their terms), not improve their effectiveness overall. I am not saying this is a good thing but at least by choosing Scrum to (try and) improve their software delivery it might just get them thinking about the importance of learning and improvement to overall organisational effectiveness. I know from personal experience of coaching new Scrum teams (imposed or not) that they begin to get curious about Scrum and Agile, and then the curiosity spreads to Lean and Kanban. A good coach will introduce teams and their managers to Lean and Kanban concepts and techniques within Scrum (or evolving away from it as the team grows in confidence) as part of a drive for true self-management, measuring, learning and improvement. I have seen, and been part of, many Scrum-ban implementations. They may not have changed their companies for the better as a whole but they certainly helped those companies deliver software better, which is what Scrum ultimately is intended for.

As for the argument about Scrum prescribing roles, meetings and processes, I believe this is down to mindset. If rather than describing the Scrum framework by what it “prescribes” (I prefer the word “recommends” but I will continue to use the word “prescribes” because I see no harm in prescribing something within a framework that one chooses to use) we instead describe it by what it intends, Scrum is a framework for enabling teams to iterate over a product until the business or customer deems it valuable enough to ship. So, if you’re in a position where you want to develop a product iteratively (or at least incrementally) and want to put a team together to do that, Scrum is (potentially) an excellent choice. If you were to choose just Kanban for developing a product, which of course you could, then by default you will not be changing anything about the way you currently work. This is not necessarily a good thing.

For example, Kanban does not prescribe iterations but often Kanban implementations use some kind of iterative process (even if it’s just having a fortnightly review of the product) and teams do this for good reason. Sure, having iterations (Sprints in Scrum) doesn’t guarantee an iterative and incremental approach to building the product but it at least hints it might be a good idea. Even if you don’t fix your scope within the timebox it still makes sense to have (say) fortnightly demos and a chance for everyone to review and evaluate the product holistically. This is a sound and effective approach to software delivery, as borne out by the Agile Manifesto’s recommendation of measuring progress via working software and delivering value early and often.

Similarly, Kanban doesn’t prescribe cross-functional teams, so if you happen to have silos of developers, testers, designers, etc. working in a Waterfall fashion with hand-offs then you will continue to work in that way and not reap the benefits (at least early in the game) of forging collaborative relationships and working as a cross-functional team until such time as the kaizen to try this is agreed upon. This approach may be better in the long run in terms of organisational effectiveness, but in the short term it could be a slow path – too slow for the business to accept – to delivering shippable increments early and often and measuring progress with working software.

Being a framework, Scrum prescribes meetings and roles; without them there would be no guidance toward delivering value early and often, or toward breaking down complex problems by building an end-to-end shippable product in increments as a team. In other words, if you take these meetings and roles away, it’s not really a framework, is it?! The meetings point out the importance of continuous business/customer feedback, prioritisation and trade-offs (as does the Product Backlog), just-in-time planning, correcting your course, team process improvements, etc. The roles point out that the traditional Project Manager role contains a conflict between serving the team and serving the business, and that an iterative (Agile) approach to software development requires coaching at both the team and business level – hence the Scrum Master and Product Owner roles.

A product development framework without some semblance of structure renders it useless as a framework. If the framework is abused (as it often is, but this is not the fault of Scrum) then its effectiveness will be diminished or negated completely. But this does not mean that Kanban is better than Scrum for product development or that Scrum should not be used. In the right context and with the right mindset, Scrum can be extremely effective.

To be honest it all depends on context (as it always does) but, put simply, if an organisation wants to change in terms of improving software delivery, Scrum may well be more effective than Kanban. If an organisation recognises that it needs to embrace a kaizen culture, not just to be better at shipping software, then pure Kanban could be the way to go. But trashing Scrum because it is not always good as an enterprise solution (ironically it can be but doesn’t prescribe how to do this) or because it defines structure (which guides towards effective practices congruent with Agile) seems glib to me.

Scrum and Kanban are different approaches for different contexts but can work beautifully together in certain situations (generally product development in a team and company with the right mindset to be open to new, collaborative approaches to delivering value). One can evolve into the other, either way. They are both interesting and have noble principles. There is much to learn, and teach, in both.

I was pondering this morning about the difference between Learning, Believing and Knowing. The differences may seem obvious but I’d like to explore whether the following is true:

Does learning lead to knowing or merely to believing?

What constitutes knowing something?

If a fact requires experience to confirm it, what if we have no experience of the subject of the fact?

We say things like “you learn something new every day!”, but how much of the stuff that is absorbed into our brains on a daily basis is actually learning? Since I started using Twitter a couple of years ago I feel that I have learned a great deal from many people on many subjects. Similarly, as I read blogs, articles and books and talk to people, I feel I am learning more and more. But what do we mean when we say we are learning? Do we mean that we are acquiring new facts (or believe we are), or are we merely merging what we are told and what we have seen and read into our own opinions and views of what we know?

Does Peru exist?

This seems a silly question but I am using it to make an important distinction between knowledge and belief. Of course the answer to this should be a unanimous “yes”. But why am I so sure that Peru exists? I have never been there. I can’t remember talking to anyone who says they have been there. The reason I know it exists is that there is overwhelming evidence of its existence that I have observed. I have seen pictures (claiming to be) taken in Peru. I have seen video footage (supposedly) shot in Peru. I’ve seen (what I’m told is) Peru on satellite images of the Earth. It is a “fact”. Right?

“A fact (derived from the Latin factum, see below) is something that has really occurred or is actually the case. The usual test for a statement of fact is verifiability, that is whether it can be proven to correspond to experience.”

Hang on, so I can only verify that Peru’s existence is a fact if it has been proven to correspond to experience? Well, I have no experience of Peru other than the pictures, video, etc. that I’ve seen, so until I’ve actually got on a plane and gone to Peru, can I be absolutely 100% sure it exists? If I’m really pushed, might my confidence level be only 99.9999999%? I’m relying on other people’s proof and experience to be so sure that Peru exists. Rather like we rely on scientific understanding of the world to establish facts that would be impossible for us individually to verify (like gravity) and reject information that is not established as fact (like the existence of a higher being, intelligent design, etc.).

I don’t remember the instant when I first heard there was a country called Peru. Let’s assume as a child I heard someone mention it and I asked my parents “What’s Peru?”, to which my Dad answered “It’s a country in South America”. Now, my question here is: at the point my Dad told me of Peru’s existence as a country in South America, did I learn that Peru exists or did I simply begin to believe that Peru exists? I was a child, so I was also told of Santa Claus and the Tooth Fairy’s existence. What made Peru’s existence more real to me?

“Simply put, the Model explains how the effectiveness of any knowledge-work organisation is a direct function of the kind of mindset shared collectively by all the folks working in the organisation – managers, executives and employees, all.

effectiveness = f(mindset)”

Since I first learned of the Marshall Model’s existence (I observed it personally, and so can you with the link above, so can verify as a fact that the Marshall Model exists), I have read more about it, interacted with Bob on Twitter and blog posts and from all this have gleaned a genuine interest in organisational effectiveness (thanks Bob, if you’re reading this).

What’s also interesting to me, though, is how I have embraced the rightshifting concept to the point that I tell others about it. I now know not only about its existence but also what it tells us about organisations. Or do I? Bob came up with the model and so obviously believes – knows – it to be a true reflection of organisational effectiveness. But when I read more and talked to Bob about it, did I learn more about the model or did I merely start believing more in the model? Do I now know that effectiveness is a function of mindset, do I merely believe it, or have I simply learned that someone else believes or knows it?

I have always felt in my career that there are certain types of organisation when it comes to culture and how they get things done, and that I certainly prosper more readily in, to use Bob’s model, the more rightshifted organisations. So is there a chance that when I saw the Marshall Model my cognitive bias leaned me towards its principles and helped me embrace it as observable and true? Or do I actually have evidence that the model is true, and thus have I learned the model’s effects as fact?

My cognitive bias also leaned me towards Agile because the values and principles align with me as a human being. One might call this “mindset”. I coach Agile principles and practices and have observed certain behaviours causing certain results, some repeatedly. But all of my experiences, and what I consider to be knowledge, are based on my own view of the work and the world. Without continued learning on everything I think I know about, even things I consider myself an “expert” in, I cannot be sure that I actually know enough, or ever will. For all I know, everyone else I encounter might think I’m a complete duffer when it comes to product development, even though I think I’m quite good at it!

Learn to learn

We all use our knowledge every day in our work and our personal lives. I do think, though, that it’s very important to acknowledge that much of what we think we know may actually just be things we believe and have never verified to be fact.

This is one of the many reasons why learning is the key word of the three used in the title of this post. We cannot know, or even believe in, something until we have learned about it. I learned about God as a child and started to believe in Him. I learned about Santa Claus and believed in Him too. But I never really knew that either existed. I certainly thought I knew (presents arrived on Christmas Day), but I didn’t. Unless we recognise that we must learn how to learn, and then continue to learn daily, infinitely, we cannot purport to truly know anything.

Introduction

After a year or two of “having a hunch” about this, and after many years of either estimating work or working to someone else’s estimates, I’ve now finally come to the conclusion that the use of estimation of any kind in a project is not only a waste of time but actually destructive.

I am fully aware this is an extremely controversial statement, so I am going to be as thorough as I can in explaining how I came to this conclusion via experience, data and validation. Indeed, when I read Vasco Duarte’s post about this several months ago, I saw his “point” (no pun intended) but also argued the merits of using story point estimation for the purposes of:

Up-front sizing of a project to determine its validity within a given budget or timeframe

Increasing shared understanding and knowledge within the team based on the discussions that arise from a Planning Poker session

Allowing the PO to make trade-off decisions between different sized stories (based on ROI)

Measuring team velocity

Continually validating the initial project sizing by predicting scope-fit within a given release date

Allowing the team to measure and improve its performance

Why shouldn’t we estimate?

I have since come to the conclusion that some of these things do not need to be done at all, and the other things can be done without the need for estimating (guesswork) of any kind. I would now additionally argue that even if you acknowledge the shortcomings of estimation and use ranges, account for uncertainty, etc., the act of estimation in itself is destructive for the following reasons:

“Fixed” scope project delivery expectations are often (always?) based on an up-front estimate of scope (guess) and how long that scope will take to be delivered (another guess), leading to the obvious dysfunctions like death-marches, low quality, etc.

If the budget is fixed, there is no way of going “over budget” in order to deliver the fixed scope. Yet “over budget” is a common term used when describing failed projects. If your budget is truly a constraint then you will only deliver what can be delivered. Agile methods ensure that what you deliver is of the highest value to the business.

I chatted to a team member earlier and he complained of feeling pressure to increase velocity. I asked him where this pressure was coming from and he said that it stemmed from the concern that the project will fail if the team isn’t able to deliver more stories more quickly. No one is actually specifically asking the team to deliver more, but there is an implied pressure to do so because they are aware the budget is running out. This mindset comes from years of poorly funded, gated projects, death marches, focus on productivity rather than quality and canned or failed projects.

Asking teams to estimate how long their work will take (or how many points they will deliver in a Sprint or a Release – same thing) carries connotations that their output is being measured by an external party (a manager), creating an environment of fear and the massaging of figures to reflect what is desired rather than what is predicted.

To increase velocity the team simply needs to over-estimate stories to give the illusion of delivering more. They may not do this consciously, but it may happen subconsciously. The project manager pats them on the back, but all that has happened is that the same amount of “done” working software has been delivered.

It’s time to get real and use real data to reflect real progress, whether it’s good news or bad.

We shouldn’t be defining all our scope up front, meaning we shouldn’t estimate all our scope up front, meaning we shouldn’t be defining our delivery date based on our scope.

We should be fixing our Release 1 delivery date and aiming to build the best possible product by that date (variable scope).

As soon as we introduce the word “estimation”, the default mindset is to consider “how long will this project take?” (even if this isn’t asked explicitly). This causes us to consider the complete scope and duration of the project (this is anti-Agile, and I won’t go into why it’s a bad idea because enough has been written about that elsewhere).

How do we size a project?

Short answer – you shouldn’t. If you don’t have a firm deadline for your project (e.g. day 1 of the Grand Prix for a Grand Prix app), you will have a budget for your project (set by the PMO or the external customer), from which you can derive a deadline. The smart thing to do is to then plan an interim release (say at the halfway point) where you can gauge how the project is going based on the working software measure.

For example, if your budget gives you enough cash for ten 2-week Sprints (given a fixed, 100% allocated team), clearly you need to assume that your go-live date is in 20 weeks time. But the aim should be to get working software in a production environment in 2 weeks time (after Sprint 1). You should then iterate over the product, allowing requirements (scope) to emerge and shape the direction the product takes, and take time to reassess after Sprint 5.
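The arithmetic in that example can be made explicit. Every figure below is hypothetical; the point is simply that with a fixed, fully allocated team, a budget converts directly into a number of Sprints, a latest go-live date and a sensible mid-project checkpoint.

```python
from datetime import date, timedelta

# All figures are invented for illustration.
BUDGET = 200_000              # total available funds
TEAM_COST_PER_WEEK = 10_000   # fully loaded cost of the fixed team
SPRINT_WEEKS = 2

weeks_funded = BUDGET // TEAM_COST_PER_WEEK       # how long the budget lasts
sprints_funded = weeks_funded // SPRINT_WEEKS     # 2-week Sprints funded

start = date(2024, 1, 1)      # hypothetical project start
go_live = start + timedelta(weeks=weeks_funded)
checkpoint = start + timedelta(weeks=SPRINT_WEEKS * (sprints_funded // 2))

print(f"{sprints_funded} Sprints funded; go-live no later than {go_live}")
print(f"Reassess after Sprint {sprints_funded // 2}, on {checkpoint}")
```

No estimation is involved here: the budget is the constraint, and the dates fall out of it.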

These things are not predictable up front – estimation will set you up with a load of scope (expectations) that will not get delivered and will only create unnecessary analysis time (money) and pressure.

How does the team get shared understanding of a story?

Simple. When a new item is added to the top of the product backlog, the team will discuss it in Sprint Planning and break it down if necessary. If it doesn’t need breaking down then it is likely already well understood. If it does then the act of breaking it down will necessitate conversations around the implementation detail that will facilitate shared understanding.

In short, the team does not need to be in an estimation session to discuss and break down a story.

How can the PO make trade-off decisions?

The PO probably needs to know the ROI of a story when introducing it to the team to be delivered. In order to calculate the ROI she needs to know how much it will cost to deliver (i.e. how long it will take).

Traditionally a team would estimate the item using story points, and then the PO, armed with the team’s velocity, could estimate the item’s ROI. But without story points, how can this be done?

This is where the concept of “implicit estimation” comes into play. In order to create predictability in the flow of work, the team will break down stories just-in-time (in Sprint Planning) so that they are all roughly the same size. This is something that happens naturally throughout the course of the project. Over time the size of stories normalises because the team naturally wants bite-size chunks to work on in the short time period of the Sprint. They get used to delivering a certain number of stories, give or take, in a Sprint.

So for the PO to cost the item, she just needs to ask the team if it is understood or needs breaking down. If the PO considers it high enough priority she will want to introduce it in Sprint Planning so that it gets built right away, if it makes sense to do so. Sprint Planning is the place for the team to break down the story if required and decide if it can be delivered in the Sprint. If it can, the cost of the item is essentially 2 weeks of team wages (assuming production deployment is done at the end of the Sprint – a continuous delivery model can improve speed to market and ROI, but that’s a discussion for another day).
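A quick sketch of that costing, with invented figures: if the team is fixed and the story fits in one Sprint, its cost is simply the team’s wages for that Sprint, and the PO can derive ROI from her own valuation of the feature – no story points required.

```python
# Hypothetical figures for illustration only.
TEAM_WEEKLY_WAGES = 12_000   # fully loaded team wages per week
SPRINT_WEEKS = 2

# The item fits in one Sprint, so its cost is one Sprint of wages.
item_cost = TEAM_WEEKLY_WAGES * SPRINT_WEEKS

po_estimated_value = 60_000  # the PO's own valuation of the feature
roi = po_estimated_value / item_cost

print(f"Item cost: ${item_cost:,}; ROI: {roi:.1f}x")
```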

If the item can’t be delivered in the Sprint, the PO can simply look at how many stories have been spawned from the epic item and determine the likelihood of it being delivered in the next Sprint or the Sprint after, based on how many stories the team usually gets through. This leads me nicely on to the topic of how we measure velocity in the absence of story points.

How do we measure velocity?

Now I’m moving firmly into Duarte territory. The answer is we count stories rather than accumulate story points, hence negating the need to estimate. As I mentioned before, teams break stories down into roughly the same size, so counting how many stories are delivered in each Sprint makes for a satisfactory measure of velocity. If the team usually delivers 5 stories with zero defects and then one Sprint delivers 6 or 7 stories with zero defects, an improvement has been made (disregarding variance, which exists whatever unit you use to measure velocity).

Due to the hunch I mentioned earlier, I have been tracking velocity as both story count and points for my current team and making projections using both methods. As I suspected (and as Duarte points out with much supporting data), story count provides just as good a measure of progress and predictability as story points do, if not better. So why spend all the time, cost and effort on estimation sessions and velocity calculations?

While story count works great for velocity, I would still warn against using this or any other velocity measure as a way of predicting when you can deliver. You should know when you are delivering and only be predicting what you can deliver at that date. Don’t leave your delivery date to chance, even if you are using historical data rather than guesswork to predict how many stories can be done.

What you can do, however, is use velocity to help the PO understand scoping trade-offs in the backlog (“the data tells me the team can deliver 20 more stories before the release date, so I’ll make sure the most important 20 are at the top of the backlog“).
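The PO's trade-off in that quote is just two pieces of arithmetic: Sprints remaining until the release date, times historical stories per Sprint. A hedged sketch, with hypothetical dates and throughput:

```python
# Sketch of the scoping trade-off: given a fixed release date and the team's
# historical story count per Sprint, how many backlog items can make the cut?
# Dates, Sprint length and throughput below are all invented for illustration.
from datetime import date

SPRINT_LENGTH_DAYS = 14  # assuming 2-week Sprints, as in the post

def stories_before_release(today: date, release: date,
                           stories_per_sprint: int) -> int:
    """Stories the team can plausibly deliver in the full Sprints remaining."""
    full_sprints = (release - today).days // SPRINT_LENGTH_DAYS
    return full_sprints * stories_per_sprint

# Example: 8 weeks to the release, team delivers 5 stories per Sprint.
print(stories_before_release(date(2014, 1, 6), date(2014, 3, 3), 5))  # → 20
```

The output tells the PO how many items to keep above the line in the backlog; it predicts scope at a known date, not the date itself.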

Conclusion

It’s taken me several years to come to this conclusion. But, if you think about it, people laugh and joke about estimates all the time. Everyone knows they’re a guess. Everyone knows they’re wrong. Yet we continue to make them. I believe it is time for us to acknowledge that it makes far more sense to eliminate the risk and cost of estimation completely and use only empirical data (as Agile and Scrum promote) to make predictions.

In a world without estimation overhead the team is likely to be happier and more productive, the waste of spending time on estimating rather than delivering working software is eliminated, and the PO has real data with which to make decisions rather than guesses made under pressure.

I’ve just watched a presentation that’s made me so angry it’s prompted me to write my first blog post in ages! Sorry I’ve been away so long 🙂

I’m not a fan of the “Scaled Agile Framework”, to say the least. Dean Leffingwell, a man I generally find myself agreeing with, is in on it. However, this framework is a horrible, money-making bastardisation of Scrum, Agile and Waterfall, a Frankenstein’s monster sold to large companies who are too afraid to really change and just want to increase productivity, reduce defect counts, etc. and find a place in the “Agile” world for their managers.

The whole concept of iterating over a product rather than simply incrementing features is fundamental to Agile and Scrum but completely bypassed with this framework. Continuous delivery in order to tap into the market as early as possible and adapt the product is ignored (instead a 2-day release plan meeting is held in which all the features the PM wants done in the next 10 weeks are broken down into user stories and put into Sprints – yuk).

There is even a “hardening Sprint”, a fancy term for a 2-week phase of bug-fixing and deployment activities, because companies “really need it” (read: it’s too hard to truly get things “done done”, so we’ll leave time for it at the end – and of course “the end” is a deadline date based on an estimate of how long all the features will take to build, i.e. guesswork around fixed requirements – ring any bells?). Yuk yuk yuk!

Scrum scales perfectly well without this framework, thank you very much! Each product has a backlog, derived from an overall program backlog at the portfolio level. Each product has one to many synchronised teams – done! Why synchronise the whole frigging organisation’s product development?! Yeah, like that will work. It means no single team can adapt its process, because the process is locked into the organisation’s “Agile” framework.

Scrum-at-scale is far better because it holds true to the founding principles of Agile and Scrum while still allowing hundreds of people to work together towards a common goal. If the business needs to change program priorities then it can, because it is doing Scrum! Simply cease work (if required) on the product or work stream being moved down the backlog at the end of the next Sprint, and start the team (or a different team) on the new product.