July 2011

26 July 2011

It is two in the morning and Lynda is staring at the ceiling. Barney has consistently slowed the team’s work; he is always an obstruction, and he responds harshly and complains every time someone suggests that he match the rest of the team’s performance. Lynda hates firing people, but the facts are the facts.

By eight a.m. Lynda is writing Barney’s “You Gotta Go” letter. Company policy is to have an exit interview with at least three managers. So she goes to Nate and asks if he will come. He looks startled, but says “Yes.”

She then goes to Joanne, who also looks startled, but says “Yes.”

Lynda then goes to HR to let them know that there is consensus about Barney.

Ten minutes later, Nate and Joanne show up in HR asking why Barney is being fired. They think Barney is not an obstruction, but that he’s been given way too much work to do and is bogged down.

Lynda’s logic was sound. Barney was not pulling his weight and had to go. When she visited her colleagues, she never even asked if they agreed. She just assumed they wouldn’t disagree with such sound logic.

Lynda is a victim of the False Consensus Effect.

The False Consensus Effect is no stranger to the modern worker. We make choices that are perfectly rational and logical given our current conditions. We don’t realize that there are other, equally logical, explanations and reactions that, when examined, could yield a much better outcome.

Logic, as much as we’d like to think otherwise, is shaped by context, and we often don’t know the full context of a situation. The current Tea Party movement is an excellent case in point. The members of the Tea Party have limited information about the context of their beliefs. Within that limited world view, their claims make perfect sense: smaller government means less control and more freedom.

That is an airtight argument. You simply can’t argue against it. It’s a logical chain. Therefore they must be right. Therefore others must agree with them. Therefore those that don’t are incorrect.

Unfortunately, the truism does not tell the complete story. While just about anyone you ask would support smaller government (myself included), only a handful of people actually belong to or support the Tea Party movement. This is because there are many differing views on the role of government in different contexts.

Most people who have been through a disaster appreciate the fact that there is someone to help with clean water and food. A quick search of Tea Party and FEMA shows that disaster relief would be very high on the Tea Party’s list of agencies to remove. While I’m all for smaller government, having been through a series of tornadoes followed by high heat and no drinking water … I was pretty happy to see FEMA and the Red Cross show up.

In business this happens often: we get to the end of a meeting and ask, “Are we all in agreement?” Participants nod and mumble. We take that as a yes, but what was really agreed to? Odds are, the people at the table have half-heartedly agreed to something much less grandiose than what is in your mind right then.

False Consensus can be dangerous and expensive. When you are moving forward, ask yourself often, “Am I sure I have buy-in from others?” You’ll find yourself asking and usually getting positive responses. But every so often you will be surprised – your assumptions are not common opinion.

25 July 2011

Years ago, Ricardo Montalban (known to his friends as Ricardo Gonzalo Pedro Montalbán y Merino) understood that our moods were signals. Messages that told people that things were going okay, not-so-okay, or downright horribly.

Below, those signals are smileys.

Stupid, simple, immediately recognizable facial drawings.

They are also perhaps the most important business metric you will ever find.

The image below is a team kanban using smileys to note how people felt about a particular task. Upon completion, the person or people doing the work annotate the ticket with something denoting their mood. We’ve seen people do this with everything from simple faces to pictures of angry cats or mushroom clouds.

This is to combat, among other things, something called the Availability Heuristic. The Availability Heuristic says that we estimate a higher probability for events that we actually hold in our memories. Meaning, if we remember it, we assume it will happen more often than events we have not experienced or previously heard about. In addition, we tend to remember emotionally charged events, so we are more likely to remember the big wins and big problems than their less emotional counterparts.

This means we tend to focus on very painful or very wonderful (read: relatively rare) events that we want to eradicate or experience, respectively. We will get together to discuss the emotionally charged, but rare, events and forget the smaller ones. They become the comme ci, comme ça of our day – just normal stuff that happens.

But those small victories and small pain points are the ones where solutions, whether to continue or to improve, are much easier to envision and act on.

Those smileys measure Subjective Well Being at a very fine level. They say, “How did you feel today? Right now! At the moment of doing?” That measure can then be used later to review tasks and see what it is in our daily work that really does make us perform better, that really does make us happy, that really does upset us just a little.
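If a team tracks its board digitally, that fine-grained review can be sketched in a few lines of code. The snippet below is only a minimal illustration, with invented task types and mood labels, of tallying mood annotations per kind of task so that the small, recurring wins and pains become visible.

    from collections import Counter, defaultdict

    # Hypothetical completed tickets: (task type, mood annotation).
    # In practice these would come straight off the board as tickets reach "Done".
    completed = [
        ("deploy", "frown"), ("code review", "smile"), ("deploy", "frown"),
        ("status report", "meh"), ("code review", "smile"), ("deploy", "meh"),
    ]

    # Tally moods per task type so small, recurring pains and wins stand out.
    moods_by_task = defaultdict(Counter)
    for task_type, mood in completed:
        moods_by_task[task_type][mood] += 1

    for task_type, moods in sorted(moods_by_task.items()):
        print(task_type, dict(moods))

Nothing here is more sophisticated than counting, and that is the point: the data is the mood on the ticket, and the review is simply looking at it in aggregate.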

When we decide what to improve as individuals, teams, and companies, seeking out the real day-to-day minor victories and small imperfections and understanding their impacts is crucial. This gives us cost-effective, immediate improvements with very high return. It can free us from huge and unnecessary rules or changes meant to combat the rare but emotionally painful events – and put those events in perspective.

18 July 2011

In 1977 and 1978 there were 1,185 UFO sightings by Brits. Nearly two UFOs a day were apparently buzzing England – one can only imagine it was to get tickets to see Queen at Wembley. What was going on that had the UK’ers looking up from their mushy peas long enough to see UFOs?

They were coming down from a Spielbergian high – in 1977 Close Encounters was released and everyone saw it.

When we put our minds to it, we can create just about anything – including UFOs. In 2009, the Guardian ran an article about what they called “The Will Smith Effect,” in which sightings of UFOs were markedly higher in years when movies involving aliens (funny or not) were in the theatres. (Or Doctor Who did something notable.)

Now, most of those who phoned in a UFO sighting truly believed that they had seen one, or at least that there was a good chance they had.

So, the question is … how do we keep UFOs out of our retrospectives? Continuous Improvement demands that we spot ill-suited patterns and deal with them. When we are looking for patterns in our work, how can we see the real patterns and not items that represent our biases, current popular memes, or other false positives?

The UFO-happy Brits had been seeded with an expectation that they would find a UFO if they looked up in the sky. And, the thing is, it was not a very strong seeding. It wasn’t like Prime Minister Callaghan came out and said, “We have been contacted by Aliens, they seem pretty cool, watch the skies for the little buggers.”

So, with a relatively weak seeding, UFO sightings skyrocketed. All of them false positives.

When we tell people in continuous improvement that things can always be improved, or when we get together in a retrospective and place a not-so-subtle demand that we find ways to do things better, we are seeding expectations. Being good professionals, people go looking for improvements.

Ideas for continuous improvement still need to be examined and vetted. Teams in the process of continuous improvement should watch that the Will Smith Effect isn’t turning up UFOs instead of real, demonstrable opportunities for change.

14 July 2011

Not-manager: I have too much to do and am constantly interrupted by the other guys needing detail.

Manager: None of them have this problem, you’re a lazy whiner! Stop complaining and get your work done!

In March, we wrote a post entitled “You Cannot Yell at a Board with Stickies on It.” That post described how using a Personal Kanban actually helped depersonalize work. Depersonalization here means that when something was delayed or difficult to do, the visualization helped us see that the work was a team issue and not a reason to blame an individual.

Our tendency to blame people first is called Fundamental Attribution Error or Correspondence Bias. I like Fundamental Attribution Error because it’s very graphic and, in the workplace, apt. The results of this are simple and devastating: when we are part of a team and a team member’s task is not getting completed, or their work is routinely poor, we will approach them with questions like:

“What is wrong with you? Why is your work not getting completed?”

Because we focus primarily on the person in front of us, we look for answers regarding that person. Since the person is all that is visualized, the person becomes the problem. Also, since interpersonal issues can be a rat’s nest to solve, we actively avoid looking for reasons outside that person. We want the problem to be a personal defect so we can order that person to solve it.

Nice, neat, contained, and seemingly logical.

Since the problem is rarely the person, and more often a policy, a procedure, a communication breakdown, or random events, we are then not satisfied with the results. Rather than taking responsibility for their horrible personal behavior, the person says, “I’m not getting my work done because the graphic design manager is a jackass and won’t get me the materials I need or give me access to the people I need to talk to.”

This is valid, from their point of view. But the manager probably (not definitely, but probably) is not a jackass – the person you are angry with is also engaging in Fundamental Attribution Error. Soon we get a Fundamental Attribution Cascade, and blame, disrespect, and distrust proliferate around the office.

As the “You Cannot Yell at a Board with Stickies on It” article suggests, most of these issues arise from ambiguities in communication. In other words, we don’t have clarity into what others are doing, they don’t have clarity into what we are doing, and that creates a ripe field of breakdowns in which to harvest a whole crop of Fundamental Attribution Errors.

People care less about what others do than about why they do it. Two equally rambunctious nephews may break two equally expensive crystal vases at Aunt Sofia's house, but the one who did so by accident gets the reprimand and the one who did so by design gets the thumbscrews. Aunts are in the business of understanding what makes nephews act as they do…

In this case, Aunt Sofia presumably has some insights – or some clarity – into the actions and motivations of the boys. The key issue here, though, is the first sentence: people care less about what others do than about why they do it.

The question is … do we understand why they do it, or do we engage in Fundamental Attribution Error? Without a kanban or something visual to take the focus from the individual and place it squarely on the situation-in-context, all we have to blame is each other. Basically, we get into bias soup here because we’re misattributing a problem in workflow to a person’s character or motivations.

The visualization of work provides clarity to the team and helps defuse this chaos.

13 July 2011

When I ask you if you would like an apple, you may say “Yes.” Because you feel that an apple would make you happy. So I place an apple before you and you begin to eat it. And you are happy.

But what if I were to place two apples on the table – the one you would have happily eaten and another that looks slightly fresher?

You will choose the fresher apple and eat it and be happy.

That all makes sense. But if I asked you, “Would you have enjoyed eating that other apple?” you would likely say “No,” even though in our alternate, no-choice reality you were perfectly happy with that apple.

This is a simplistic view of “Distinction Bias.” Distinction Bias says that when we compare things directly, we tend to overestimate the importance of their differences. Yes, the fresher apple was better – but it didn’t make the less-fresh apple unenjoyable.

If there were five apples on the table, you might take a few minutes to examine each apple so that you would be sure you had the best one. Each minute you spend laboring over that decision between mostly-alike things is likely waste.

Distinction bias causes us to over-examine and over-value the differences between things as we scrutinize them. Psychologists and economists note that we have two ways of examining the suitability of something. The first is through single evaluation – with one apple, we ask questions like “will this apple taste good?” The other is through joint evaluation – we compare two things and ask questions like “will this apple taste better than that apple?”

That second question is very different than the first. We switch into joint evaluation mode when we have choices and we take the exercising of choice very seriously – even when those choices have very limited impact. We overestimate that impact.

In my house, I have a 32” HDTV which is the perfect size for where we have it. In its space, it is big, has a great display, and is a fine television. The other day I went to Costco and there was this HUGE HDTV, a smaller one, a smaller one than that, and then a really really tiny one.

In cavernous Costco, the tiny display almost seemed comic. Why would anyone buy such a small TV? When I approached it, the impossibly small set was a 32”. The same one that’s in my house, in fact.

When seen in the Costco context, when subjected to joint evaluation, the perfectly suitable 32” option was somehow unsuitable. In a home context, with single evaluation, the television is wonderful.

Today, on Facebook, I was struck by this A/B test apparently put to me by Mark “So sue me” Zuckerberg himself. He wanted to know which place I liked better – Alyssa Lewis’ Seattle Pie Company (which makes some of the most excellent pies I’ve ever had, amazing crusts, wonderful fresh fillings) or Bryan Voltaggio’s VOLT restaurant in Frederick, Maryland (which creates some of the most incredible experimental American cuisine I’ve ever had).

So if you want a pie or dessert or even an awesome chicken pie – I’m sending you straight to Seattle Pie Company. If you want a cool creative dinner that will surprise and delight you – I’m sending you to VOLT. (They do an avocado mousse that is unbelievable.) I have been to both several times and have never (not once) been disappointed.

Facebook wants to build up a database of preferences so they can market stuff to people more effectively. The problem is they’ve created an A/B test that makes sense to them (two restaurants Facebook knew I enjoyed) and actively encourages false distinctions from distinction bias by making me jointly evaluate two things that are actually dissimilar or even complementary. Ranking Seattle Pie Company against VOLT is like ranking oxygen against drinking water. The results can only be misleading.

In business and in our personal lives, we want to optimize for something. Sometimes it is cost, other times it might be effort, still others it might be profitability, screen size, or capacity. When we are given a choice, we want to optimize the choice. That is natural and even a good thing. But sometimes we go a little overboard, getting stuck in analysis paralysis and taking way too long to make a decision – negotiating our way through only marginal distinctions.

12 July 2011

When people decide that the nine-to-five world isn’t for them any more and it’s time for them to enter the consulting world, I give them this piece of advice:

You need eight “sure things” to get one project.

There are a variety of reasons why this is the case. Today I’d like to talk about the one that squarely rests on our own shoulders: The Overconfidence Effect.

Let me describe this to you, and then give yourself a few minutes to think of all the ways this little bugger works its way into your daily life.

The overconfidence effect is where people put too much stock in their own opinions, judgments, or decisions. They are then either surprised or indignant in the future when things do not go as expected. The tendency then is not to say, “Oh wow, I was wrong,” but to say, “Who messed up my obviously correct expectation?”

No one messed up your expectation, man. You were wrong!

We are wrong all the time. Probability works that way. It is simply improbable that we will be right all the time. But we fall into the trap of the overconfidence effect repeatedly: we feel confident in a potential outcome and act on that confidence rather than on evidence.
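A rough worked example shows how quickly confidence outruns evidence. The numbers below are invented for illustration, not drawn from any data:

    # If you feel 90% sure about each of eight independent "sure things"...
    confidence = 0.9
    print(f"Chance all eight go your way: {confidence ** 8:.0%}")  # about 43%

    # ...while each "sure thing" actually closes about one time in eight,
    # then eight prospects yield, on average, a single project.
    true_close_rate = 1 / 8
    print(f"Expected projects from eight prospects: {8 * true_close_rate:.1f}")  # 1.0

Feeling 90% sure eight times over still leaves you with less than even odds of being right across the board, and if the real close rate is one in eight, eight prospects average out to exactly one project.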

This is also closely linked to something called Optimism Bias. This, as it sounds, is the tendency to actually believe the positive outcome you are planning for will happen. Now, heaven help us if we don’t have at least SOME Optimism Bias or we’d never bother to do anything.

I worked with one person who seriously based his company on Optimism Bias. He once told me that the key to success was to be “relentlessly optimistic.” That didn’t turn out to be such a good business plan for him. After repeatedly making decisions based on optimism and having few of them come to pass, he closed his company and went to find options elsewhere.

Eight sure things to get one project.

We will not only fool ourselves from time to time – we stand the chance of fooling ourselves often.

The key here, for all projects, is to recognize that overconfidence and optimism bias play a role in how we interpret plans, potential, and outcomes. Be confident. Be optimistic. I certainly would not advocate pessimism. But never lie to yourself, and look for those points where you might be doing just that.

11 July 2011

The Gemba is where it all happens. It’s the factory floor, the crime scene, ground zero. It’s where the evidence is, where the witnesses are, and where clarity is found. In Lean we are told to “Go to the Gemba” – to walk the floor, talk to the people, and make the change happen. The Gemba, in Lean, is where managers go to learn the real story from the ground level.

But we have a horrible (sometimes catastrophically so) history of doing this.

In 1847, Ignaz Semmelweis was at the Gemba and was destroyed by the Gemba. The Gemba actively worked against itself. Semmelweis was a doctor dealing with a terrifying problem: new mothers were dying of something called “childbed fever” in obstetric wards. All doctors shared a desire to rid the world of this calamity.

Semmelweis figured out that if medical staff simply washed their hands with a disinfectant of chlorinated lime after performing an autopsy, the rate of disease dropped dramatically. His peers (other doctors) found his theory laughable – even repugnant. His theories could not possibly explain all the cases of childbed fever. Discredited and, worse yet, unable to prove his theory correct while knowing that mothers were continuing to die, Semmelweis fell into a deep depression and was committed to an institution, where he later died.

A few years later, Louis Pasteur greatly advanced the notion of germ theory and proved Semmelweis correct. It wasn’t necessarily the handling of cadavers itself that was causing childbed fever; it was pervasive germs, which required more than soap to remove.

The result from this is now known as the Semmelweis Reflex. This is a professional (and general psychological) reflex to reject new evidence that in some way threatens established norms or firmly held beliefs. Throughout my career, I’ve watched this play out while building freeways (what do we need ramp metering for!?), planning rail systems (why not just drive?!), contracting software projects (what do you mean you can't say exactly what the software will look like?!), and introducing new process ideas (creating tests first is a waste of time!). People require an extremely high level of proof to offset a strongly held belief. And yes, sometimes those people were me.

We are beginning to use visual controls like kanban to help see past our Semmelweis Reflexive tendencies, but these controls will not save us outright. Even if we visualize the problem and make these blind spots explicit, we will still find ways to explain them away.

So, remember this. When you conduct your retrospectives, when you examine your workflows, when you take a look at your own lives, question institutionalized biases that may be making your own Semmelweis Reflex fire.

08 July 2011

As might be clear from the last several posts, I’m reading a lot about bias right now. I want to learn why we have such a hard time accepting variation as part of life (change happens), why we have a tendency to over-plan when we know it will lead to disappointment, why we deny that the disappointment has happened, why we blame circumstances beyond our control for repeated failures, and on and on.

In short, why do people repeatedly and predictably fall into traps of poor logic, bad citizenship, and self destruction?

If we as managers, as workers, as leaders, as humans – can’t get a grasp on why we make uninformed decisions or ignore information … how will we ever be able to make decisions effectively?

Do you like people to tell you what to do? Do you like your freedoms taken away by others? Do you really enjoy someone limiting your choices?

My guess is probably not.

When you make rules for others, this is exactly what you are doing. Even if it is “for their own good,” the more rules you have, the more freedoms you have removed. For this reason, when we make rules or set process, we need to understand that no matter how well we explain these new rules – no matter how great the buy-in from others in the group – there will be those that resent or break those rules.

When you set a rule, people ask two questions:

(1) What is the minimum amount I need to do to not break this rule?

(2) What is the easiest way for me to not follow this rule at all?

This is due to Reactance – a tendency in people to push back against reductions in perceived or real freedoms. The more the new rule is unexplained, unexpected, or disruptive, the more reactance you will encounter.

Most organizational change or processes come with outside rules that actively and immediately engage reactance behaviors. People don’t like being controlled. You don’t, I don’t … no one does.

What people do like is clarity. A law we willingly follow does not limit our freedoms by fiat. It merely becomes a cultural norm that supports our preferred way of life. If we have clarity over the work we are doing, what our team members are doing, and how we fit into the production of value – we’ll likely need fewer rules to control our behavior. Fewer rules mean less reactance.

This does not mean structure should be abandoned; it means that our approach to rule making needs to be examined. Collaborative creation of rules, of process, and of the rate of change is required for healthy rule creation and adoption.

--

To quickly turn that on its head … in the software world I have seen IT groups and development groups adopt much-needed and healthy agile methodologies, and the rest of the organization engage in reactance behavior as the techies approached them and said, “Our new process means you have to follow these new rules.” Since they were not involved in the decision to migrate to agile, and do not see themselves as direct beneficiaries of the new rules, they push back.

07 July 2011

“I shall not today attempt further to define the kinds of material I understand to be embraced within ["hard-core pornography"]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”

I’ve used this quote a lot lately in talks about quality and completeness. There are some things that are entirely contextual and, therefore, lend themselves to ambiguity. This is the way language works. Some (few, perhaps) words are very concretely defined, like “five” … “five” means one more than four of something. Five years old, five batteries, five muppets singing, five people in my family.

For everything else, we have different degrees of ambiguity. The current ambiguity of the week might be what “guilty” means for Casey Anthony.

A lot of that ambiguity is driven by our past experiences, which make up our “world view.” Our world view is basically a filter by which we take in information and rapidly process it. Often, this is very easy and painless. Our more-than-likely-shared world view says a vacuum-cleaner-shaped object is probably a vacuum cleaner. This lack of variability makes vacuum cleaners fairly safe for discussion.

But other topics, like religion, elected officials, education, leadership styles, international relations, HR policies, health care, road improvements, anti-trust law, and so on, have a passel of biases attached to them. Regardless of which of many possible sides of these issues we may personally be on, the existence of bias is apparent.

We don’t shut off our biases at work either. So, today I’d like to discuss a key bias (anchoring bias) and explore how it might make things easier or harder for us at the office.

Easier? Yes, easier. Many biases we set up are the result of actively training our brains to respond quickly to certain situations. If we are working with a particular system and that system has three states (A, B, and C), and we know that A requires response 1, B requires response 2, and C requires response 3 – we will start to watch for those three states and quickly respond to them.

Further, if we notice over time that B happens 70% of the time, C happens 23% of the time and A only happens 2% of the time, we’ll start to assume that B is going to usually happen, C will happen sometimes, and A is rare. We will then tool ourselves and our environment to suit mostly condition B.

We develop a cognitive bias about the outcomes from the machine.

How strongly we hold onto that bias is key. We can use it as a rule of thumb: “B happens a lot, so I’m going to prep myself for B. But if it changes, I’ll just adapt.” Or we can use it as something to anchor our processes to: “We’ve seen that B happens 70% of the time, so we are going to train everyone to expect B 70% of the time and develop policies and procedures around this 70% number. It has always been 70% and always will be.”

Harder. As we get closer to that second one, we approach what is called “Anchoring Bias.” We begin to rely on that 70% number and institutionalize policies around it. If that number happens to change, neither we nor our systems can easily adjust. The 70% figure has become part of our world view. A change in that number is no longer merely a fluctuation; it becomes a major upheaval.
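One way to keep the rule-of-thumb flavor without institutionalizing the number is to keep re-estimating it. The sketch below is only an illustration, with invented condition names and an arbitrary window size: it estimates condition frequencies from a rolling window of recent observations instead of hard-coding “B happens 70% of the time” into policy.

    from collections import Counter, deque

    WINDOW = 200                   # how much recent history to trust (arbitrary)
    recent = deque(maxlen=WINDOW)  # old observations fall out automatically

    def observe(condition):
        recent.append(condition)

    def frequencies():
        # Current best estimate of how often each condition occurs.
        total = len(recent) or 1
        return {c: n / total for c, n in Counter(recent).items()}

    # If the mix of A, B, and C shifts, the estimate shifts with it within
    # one window, rather than waiting for an anchored number to be revisited.
    for condition in ["B"] * 140 + ["C"] * 46 + ["A"] * 4:
        observe(condition)
    print(frequencies())  # roughly {'B': 0.74, 'C': 0.24, 'A': 0.02}

When the mix shifts, the estimate shifts with it, and the 70% never gets the chance to harden into policy.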

Some may call the products of anchoring bias “sacred cows” – concepts that are central to the operations of your group. We will always have core beliefs that make up the mission of our organization. Missions, by default, need to have some type of world view – and moreover a clear and well-defined world view. But the more we add to it, the more we run the risk of unhealthy anchoring bias.