Ben Rady wrote a post pondering whether there’s a special inflection point when you have exactly two developers. I think there is. On the benefit side of the equation there’s a step function when your number of collaborators goes from 0 to 1. While more people on a team brings the potential for more benefits, there are certainly diminishing returns with each additional person. By contrast, on the cost side, communication paths grow quadratically with team size (a team of n has n(n-1)/2 of them), and a team of two has exactly one path, well before that growth starts to bite. So far this is just the limit of “smaller teams are more efficient”, yet I think there’s a bit more.
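To make the cost side concrete, here is a quick sketch (my own illustration, not from Ben’s post) of how the number of pairwise communication paths scales with team size:

```python
# Number of distinct pairwise communication channels in a team of n people.
# Every pair of people is a potential channel, so paths(n) = n * (n - 1) / 2.

def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for size in [1, 2, 3, 5, 8]:
    print(f"{size} people -> {communication_paths(size)} paths")
# 1 person has 0 paths, 2 people have 1, 3 have 3, 5 have 10, 8 have 28:
# each person added must open a channel to everyone already on the team.
```

A team of two sits at the point where the one channel that exists is also the whole team talking; past that, the overhead compounds.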

Two developers means we can pair, and that pair is the entire team. Pairing on coding means the entire team has intimate knowledge of every change. Pairing on refactoring means the entire team feels the code is clear. Pairing on deployment and operations means the entire team has masterful knowledge end-to-end. Beyond just collaboration there’s a step change in ownership and alignment here. A two-developer team that did not pair would not, in my model, reap the same benefits. And I think there’s more to Ben’s situation.

There is an additional gain when it is not just two developers but the entire team size is two. When your two developers are also doing your business analysis and your product management, they know not just the how and the what; they fully understand the why of the work. They have a direct connection to the users so they hear every request, every complaint, and every story. There’s something visceral about having intimate knowledge of every single detail and the reason for it. This lack of any filters is, for me, the final element to explain the power you can get from having just two.

Open acceptance allows learning. Narrowness & restriction are valuable for protection, but they have opportunity costs that are largely hidden. It pays to investigate them to make informed design choices in every system – social & technical.

I think Michael’s defense can be applied fairly directly to communication between humans: “Open communication allows learning. Narrowing and restricting what’s acceptable is valuable for protection, but there are hidden opportunity costs. It pays to make informed design choices in how and what you communicate.”

Making informed design choices is what I’m after in my study of psychology, communication skills and techniques. This working with humans stuff is hard!

Dr. Burns divides his five secrets into the three categories of Empathy, Assertiveness and Respect:

Empathy

Disarming Technique: start with the words “you’re right” and find the truth in what the other person is saying

Thought Empathy and Feeling Empathy: summarize what the other person has said using their own words (TE), and make a guess as to how they are probably feeling based on what they’ve said (FE)

Inquiry: demonstrate interest in learning more about what they are thinking and feeling

Assertiveness

“I Feel” Statements: share your own feelings using the construction of “I’m feeling” followed by an emotion. “I’m feeling worried” is in keeping with the formula; “I’m feeling you need to try harder” is not.

Respect

Affirmation: communicate respect and caring

For our practice at the meetup we tried an exercise suggested by Benjamin. Dividing into pairs, person A would attack person B (on a topic suggested by person B) and person B would defend themselves.

A: “You’re always leaving dishes in the sink and I’ve had about enough of it. Why don’t you ever think of other people?”
B: “I was running late as it was. You know how busy I’ve been!”
…

With this kind of back and forth the volume and energy of the conversation would escalate until we stopped the exchange after 30 seconds or so. Then we would start again on the same topic, with the same people, but this time with person B using the five secrets, and in particular starting off with the Disarming Technique.

A: “You’re always leaving dishes in the sink and I’ve had about enough of it. Why don’t you ever think of other people?”
B: “You’re right, I did leave the dishes in the sink today. I’m hearing that in your mind I’m not thinking of other people. I can see how upsetting that would be.”
…

Listening to this exercise again and again over the course of the meetup it was almost magical how effective the techniques were. After the empathetic reply the emotional tone of the discussion was entirely different. Person A might be using the same words but they just couldn’t summon the same emotional energy. It was hard to keep ranting when you were getting an empathetic response. I’m very encouraged to find new ways to practice and apply these techniques.

In Action Science workshops I teach a technique I call Coherence Busting. To understand why it is useful I ask the audience to imagine themselves making a proposal: “While you are talking you notice the main stakeholder — the person in the audience you most hope to persuade — glance at their watch. What do you do?”

I ask this question to allow the audience to experience the decision-making heuristics that Daniel Kahneman describes in his book Thinking, Fast & Slow. Kahneman models our consciousness as two systems: our fast, automatic, unconscious System 1 and our slow, deliberate, effortful System 2. Part of what makes System 1 fast are the shortcuts it uses. Two of these shortcuts consistently arise with the watch example. The first is that we assume that a coherent story must be correct. The second is that we limit the facts to what we can immediately recall, a process Kahneman calls What You See Is All There Is (WYSIATI).

These two shortcuts are displayed in the watch scenario.

We unconsciously construct a coherent story for what the glance means — for instance “she has somewhere else to go”. This story is based on first thoughts of what the glance might mean (WYSIATI). The coherence in our story gives us the sense that our story is true. We then design our actions in response to a story we made up. This is the key lesson of the watch example: We feel as though we are responding to the reality of the situation, because WYSIATI and coherence cause us to mistake our single plausible story for the truth.

This is why we need Coherence Busting.

With the watch example, I ask the audience to describe what they think the glance means. After I have harvested the normal stories from the audience (“they’re bored”, “their attention has drifted”, “they are running short of time”) I ask them to consider other possible meanings of the glance, all of the possible reasons, even wildly implausible ones (“they have their plan for world domination written on their hand”). Now the audience generates dozens of possible reasons: it is a nervous habit, they were admiring their new watch, there could have been an alert on a smartwatch, maybe an itch on their wrist, and lots more. What makes this Coherence Busting is not just that there are lots of options but that the options are mutually incompatible. Once we can imagine conflicting explanations we are no longer trapped by the original coherent story. These options were always there, but it requires invoking System 2, our conscious and effortful thought process, to bring them to the surface. That’s not something we do when we feel we already have a good explanation. So what would trigger us to use Coherence Busting?

I’ve found Coherence Busting a useful tool to reach for when I recognize that I’m frustrated. A common pattern for me is to get frustrated when I can’t come up with a justifiable explanation for the other person’s actions, when I don’t like the explanation that System 1 has suggested for me. When I recognize that pattern I try and think of at least three incompatible motivations for why they might be behaving the way they are. The technique of Coherence Busting is a way of reminding myself that there are infinitely more possibilities than I have considered. It allows me to let go of the story I’ve made up about the other person. It reminds me that if I want to understand what the other person is thinking, I’m going to have to get out of my head and into theirs — probably starting with asking them a genuine question about what they are thinking.

Much of my work with Action Science is learning the skill of asking good questions. Coherence Busting reminds me to use those skills.

At a certain point in reading Daniel Kahneman’s Thinking, Fast & Slow I realized I had discovered a possible explanation for the mystery of “museum sleepies”. Museum sleepies is my wife’s term for the fatigue we feel after a rather short time in a museum, a term we’ve used much more frequently since moving to London. I know this is a common experience, and a search will find various explanations, both physical and mental. What I got from Kahneman is a better model of what’s happening to us, and — very exciting to me as a learning nerd — a testable prediction.

The focus of Kahneman’s work is decision theory, and the title mirrors the model of cognition he uses. We can imagine that our mind has two systems for processing the world and making decisions. System 1 is fast and intuitive (think Blink), unconscious, involuntary, and cognitively cheap. System 2 is our deliberate, conscious, analytical thought process that is also, unfortunately, both slow and cognitively expensive. Most of the book is an explanation of how our reliance on System 1 results in predictable cognitive biases. To get there Kahneman first describes experiments to establish a measure of mental effort, evidence that all types of mental effort draw on the same limited pool of resources, and that engaging our System 2, focusing our attention and performing deliberate analysis, draws off that limited pool.

In the larger world the findings Kahneman describes have serious implications in terms of ego depletion and decision fatigue. In our tour of the museum there isn’t much at risk, but it struck me as an application of the same findings. After reading Thinking, Fast & Slow my model for the museum sleepies is that I’m using my System 2 to analyze one artifact after another and that this is draining my limited pool of mental energy. This model leads me to predict that someone trained to analyze the artifacts wouldn’t suffer the same effects. The trained person would see the same artifacts differently because they would have a set of prebuilt patterns and categories to draw upon. They would notice the patterns largely through System 1; System 2 efforts would be brief and efficient. Finally, I expect I could use deliberate practice to train myself to understand some class of artifacts, and that henceforth I would no longer suffer the same fatigue when viewing that type of collection.

I’ve spent a lot of effort in the past few years trying to develop my abilities to promote organizational learning. Much of my focus has been on the Action Science approach to effective communication. Another part of it has been trying to teach the scientific mindset that learning is about detecting and correcting error, which we can do by making testable hypotheses and testing them. I don’t plan to conduct any cognitive psychology experiments with art historians, curators, and artists, but I love the fact that someone could. This museum sleepies story has become part of my arsenal for illustrating some important concepts in cognition, in training, and in theory making. I hope you find it useful too.

At the prompting of Douglas Squirrel I just read Yossi Kreinin’s blog post People can read their manager’s mind. This seemingly magical power is the mundane result of combining “People generally don’t do what they’re told, but what they expect to be rewarded for”, and people are good at spotting what is actually rewarded. As a manager/leader I’m taking away a heuristic I want to test: when I’m not able to get alignment with my stated goals I’m going to pretend the team is reading my mind, and that they are heeding my hidden thoughts rather than my words. From that mindset, what does that suggest about my own values? The results aren’t likely to be flattering. I embrace the idea that “the only way to deal with the problems I cause is an honest journey into the depths of my own rotten mind.”

(As much as I embrace that message for myself, I’d warn non-managers against seeking solace in the article, against using it as a shield to deny their own accountability. Yes, the article says that it is an “insane employee” who will work to fix important but unglamorous problems, and that “the average person cannot comprehend the motivation of someone attempting such a feat”. Do you find solace in being average? In being powerless? I don’t. I think it is worth always seeking to improve, and to improve the organization I’m part of. I believe someone out there could improve the situation I’m in, so if I’m frustrated it is probably my fault.)

This past year I’ve spent a lot of my time on Action Science as a route to organizational learning, and one of the real insights I’ve had is how painful it is to learn. Learning, according to Chris Argyris, is the detection and correction of error. The emotionally difficult part is detecting our own errors, in being genuinely open to the idea that we are part of why things aren’t working better than they are. In my view a professional should cultivate the mindset that they need to improve. Otherwise they risk being a “scrub”.

None of us like to be wrong. I’ve tested this with many audiences, asking them “how does it feel when you’re wrong?” “Embarrassing”, “humiliating” or simply “bad” are among the most common answers. Stop now and try and think of your own list of words to describe the feeling of being wrong.

These common and universally negative answers are great from a teaching perspective, because they are answers to the wrong question. “Bad” isn’t how you feel when you’re wrong; it’s how it feels when you discover you were wrong! Being wrong feels exactly like being right. This question and this insight come from Kathryn Schulz’s TED Talk, On being wrong. Schulz talks about the “internal sense of rightness” we feel, and the problems that result. I think there’s a puzzle here: we’ve all had the experience of being certain while also being wrong. If the results are “embarrassing”, why do we continue to trust our internal feeling of certainty?

My answer comes from Thinking, Fast & Slow. That sense of certainty comes from our System 1, the fast, intuitive, pattern recognition part of our brain. We operate most of our lives listening to System 1. It is what allows us to brush our teeth, cross a street, navigate our way through a dinner party. It is the first filter for everything we see and hear. It is how we make sense of the world. We trust our sense of certainty because System 1 is the origin of most of our impulses and actions. If we couldn’t trust System 1, if we had to double check everything with the slow expensive analytical System 2, we would be paralyzed. So we need our System 1 and we need the sense of certainty it provides. We also need to be aware it can lead us astray.

When our sense of being right guides us we are acting from a Model I / Unilateral Control mindset. The result of a Unilateral Control mindset is less information, reduced trust and fewer opportunities to learn. And we all like to learn, right? I now ask my audiences this question and I get universal nods. We all like to learn. “No you don’t”, I reply. “You just told me that the feeling of becoming aware you were wrong feels bad! Well guess what? That’s what learning feels like.” That’s my recent ah-ha moment: that we claim we like to learn, but when it actually comes to learning, to correcting a wrong belief with a right one, we don’t like it.

I find this discrepancy very interesting, very revealing. My theory is that when we imagine learning we are thinking of writing on a blank slate. It is about learning facts where before there were none. That is a good feeling, we get a little chemical kick from our brain when that happens. We don’t imagine correcting our mistaken beliefs when we think of learning, and that’s a real shame, that should change. By all rights we should value that kind of learning even more than learning new facts: “It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.” (Will Rogers)

I think the problem is that we are primates. To primates, from an evolutionary psychology standpoint, status is everything. Status is the primary determinant of reproductive success. Losing status can be the same as a reproductive, evolutionary death sentence. In our modern knowledge economy, chest thumping is the assertion that we are right, and winning the fight is proving the other person wrong. That’s how we put them in their place (in the status hierarchy). This means our instinctive reaction to becoming less wrong tends to be negative. The loss of status feels too high a price to pay for learning. Even trying to help someone else become less wrong is understood as a risky prospect. We don’t want them to lose face, we don’t want them to get angry with us for correcting them. Thus the habits of Unilateral Control, protecting both ourselves and others, are reinforced.

All of this explains why developing habits for learning, developing Model II / Mutual Learning habits, requires a lot of practice. We are fighting decades of acculturation on top of millions of years of evolution. To win this fight we need to be committed to what we are fighting for. We need to care more about learning than being right. We’ve got to care about making the most informed choice possible. When I can remember to hold these values in mind it becomes easier to act differently. I can go seek out those people who are most likely to disagree with me, who are most likely to teach me something. I can deliberately share my chain of reasoning and invite others to poke holes in it. With practice, lots of practice, I can come to see the person who corrects me as more friend than rival, and to feel the correction as the victory of joint learning rather than an individual moment of shame.

Last week at the August session of the London Action Science Meetup we started with a discussion of the phrase “the story I’m making up…” I love this phrase! It captures the process of the Ladder of Inference, but it has an immediate emotional resonance that the ladder does not.

I came across this phrase from an unlikely source: Oprah.com. Now I’ve got nothing against Oprah, but this just isn’t a URL that shows up a lot in my browser history! So the real credit goes to @frauparker, who is someone I clearly trust well enough to follow off into the well-scrubbed bright and shiny parts of the internet; in this case an article by Brené Brown on How to Reckon with Emotion and Change Your Narrative, an excerpt from her new book Rising Strong. The article is well worth reading but I’m going to take a look at it through the narrow lens of the action science workshops I’ve been leading.

For a few months now I’ve been leading weekly lunchtime workshops at TIM Group where anyone who is interested can join in a discussion of some two-column case study. This might be a canned case study, or one people recall and write down on the spot, but the best sessions have been when someone has come with something in mind, some conversation in the past or future that is bothering them. As a group then we try to help that person to explore how they were feeling, what they didn’t share and why, and how they might be more effective in the future. One of the common observations, and one this article reminded me of, is how hesitant the participants are to put their feelings into words. It seems we can almost always have a productive conversation by asking the series “How did you feel at that point?” followed by “Is there any reason you didn’t share that reaction?” These questions get quickly to the real source of our frustrations that are hiding back in the shadows. Bringing them into the light we find out something quite surprising: they were stories we were making up.

The article has good advice that will be familiar to students of action science and the mutual learning model, suggestions like separating facts from assumptions, and asking “what part did I play?” It also had a key action that is hidden in plain sight. Do you see it in this exchange from the article?

Steve opened the refrigerator and sighed. “We have no groceries. Not even lunch meat.” I shot back, “I’m doing the best I can. You can shop, too!” “I know,” he said in a measured voice. “I do it every week. What’s going on?”

As I read the remainder of the article from the author’s point of view it was easy to overlook the role Steve played here. It was his calm empathetic question that allowed the author to reply with the title phrase: “The story I’m making up is that you were blaming me for not having groceries, that I was screwing up.” I really struggle to generate this kind of response, to not get caught up in my own emotional reaction. So I admire what Steve accomplished here.

I’ve long said that “stories are the unit of idea transmission”. What I hadn’t realised before reading this article was how powerful I’d find using the word story to describe what is going on in my head. I’m really looking forward to practicing with this phrase in future conversations.

The last time I spoke at a Devopsdays was London 2013 (video here). That was another fun talk, and had some overlap in content, but I did feel that I tried to put too many concepts in a single 30 minute talk. My goal this time was to be much more deliberate and leave enough time for each concept. Where I ended up is a talk in three parts. Part one is cognitive psychology, how our mind generates an illusion of certainty where we don’t deserve it. Part two is Action Science and the Mutual Learning Model as a set of behaviours that are appropriate for an uncertain world. Part three is the need for practice. This section uses the piano analogy and then — my big risk! — a live demonstration. A brave member of the audience joined me on stage to try applying the concepts I’d just discussed.

As you sit and watch this video I hope it is the final section of the talk that makes the biggest impression. All the video watching, all the reading, all the learning will mean nothing if you don’t act, if you don’t practice and find the limits of your current abilities and then learn to move beyond them. And I’ll make the first step easy: Download the slides and try the exercise in the video. What would you say to Ted? Write it out and read it aloud. How did you do? Maybe you want to try again…