These days I speak extensively about how we designed Lunar Logic as an organization. After all, going through a transition from a traditional management model to a situation where the company has no managers at all is quite an achievement. One of the pillars of managerless organizational design is autonomy.

After all, decisions won’t just make themselves. Someone has to call the shots. Once we got rid of managers, who would normally make almost all the decisions, we needed everyone else to embrace decision-making. For that to happen, we needed to distribute autonomy.

Decentralizing control requires decentralizing both the authority to make decisions and the information required to make these decisions correctly.

Don Reinertsen

Authority refers to the formal power to make a decision. However, I tend to make a clear distinction between authority and autonomy. Ultimately, as a manager, I can give my team the authority to make a decision. At the same time, though, I can instill fear or exert pressure on the decision-makers, so that before they actually make their call they ask me what I think about the topic and go with my advice. This means that even if authority has been distributed, autonomy is not there.

As a corollary, I may not have formal authority but still feel courageous enough to make a decision. If that is an acceptable part of the organizational culture, it means I have autonomy without authority. By the way, the latter case is interesting, as it pictures an attitude I’m very fond of: ask forgiveness rather than permission.

I’m not going to fundamentally disagree with Don Reinertsen, though. As a matter of fact, we are on the same page, as he follows up with his train of thought.

To enable lower organizational levels to make decisions, we need to give them authority, information, and practice. Without practice and the freedom to fail upon occasion, they will not take control of these decisions.

Don Reinertsen

In the first quote Don talks about the prerequisites for decentralizing control. In the second he focuses on enabling it. He adds a crucial part: people need practice. This, as a consequence, means that occasionally they will fail, a.k.a. make bad decisions.

And that’s exactly what autonomy is at its core.

In the vast majority of cases autonomy is derived from authority. It doesn’t work the other way around, though. In fact, the situation of having formal authority but no real autonomy to make a decision is fairly common. It is also the worst thing we can do if we want people to feel more accountable for the organization they’re with.

Not only do they realize that the power they got is virtual, but once that happens they’re not even back to square one. It’s worse. They got burned. So they won’t jump on the autonomy bandwagon again when they are asked to get more involved in decision-making.

The other day we had a brief discussion at Lunar Logic on the idea that the company should provide hand cream for us. While normally we don’t really discuss such petty expenses, this time quite a few people got involved.

One could say that the discussion itself cost the company more than a stash of hand cream that would suffice for several years. And they would be right.

Why was I involved then? And why would I write about it afterwards?

The thing is, we don’t make decisions in isolation. Of course we can look at any decision in its individual context. It’s all about hand cream and a few dollars, right?

Not really. Or at least not only. The meta-decision being made was about the extent to which the company provides its employees with stuff. It was about setting, or rather resetting, which benefits are available.

Of course, at any company there are things that almost everyone would use, like coffee, tea, paper towels, etc. These are no-brainers.

But then, very quickly, we enter the land of less obvious options. Like hand cream. Ultimately, not everyone would be using it. I’m betting around half the people, maybe. So we’d be making a small nice gesture to some.

The question is: should we be making such small nice gestures to other groups?

We have quite a bunch of people who cook lunches at the office. Should we buy cooking oil for them? Or spices? These would all be small expenses, after all.

So how about free food available at the office? Well, given that we have a couple of vegans, a healthy load of vegetarians, some burger lovers, a diabetic, a couple of people on a gluten-free diet, and a couple more trying to lose a few pounds, there would always be someone left out. These aren’t obvious decisions anymore.

These kinds of calls are really about deciding where we set the limits. What is acceptable. It’s not about hand cream. It’s about what rationale is enough to justify an expense on the company’s account. We are talking about norms.

Have I just said “norms”? Oh well, it seems we are talking about organizational culture now.

organizational culture

the behavior of humans who are part of an organization and the meanings that the people attach to their actions

includes the organization’s values, visions, norms, working language, systems, symbols, beliefs, and habits

Simply put, organizational culture is the sum of the behaviors of everyone in an organization. Not only the behaviors themselves, though, but also what drives these behaviors: shared values, common principles, rules, and norms.

This is why I got involved in the discussion about hand cream. The trigger was the realization that we were just about to change a norm, and I’d rather have an explicit discussion about that beforehand. Such a change may shift the common attitude from “we’re not doing such things here” to “yeah, we’ve seen that happen before, so it’s OK.”

What’s more, giving all sorts of benefits away is not something that can be taken back seamlessly. As Daniel Kahneman points out in his profound book Thinking, Fast and Slow, we think differently about something we gain than about something we lose.

In other words, getting hand cream is all fine and nice, but almost instantly it becomes the new norm that hand cream is there. We’ve just set a new expectation level. Once we stopped supplying the cream, we would perceive that as a loss. The cost of removing a benefit is bigger than the gain we got from introducing it.

That’s why we can’t label changes that affect organizational culture as safe to fail. As in: let’s try the hand cream thing and if people don’t care we’ll just stop buying it. When we touch organizational culture there’s no rollback button. Even when we technically bring the situation back to square one, culturally it’s different, because we have a new experience and so we look at things differently.

That’s why I will get involved occasionally in discussions like the one about hand cream. And that’s why it was worth a blog post.

I’ve been known to bring up research on collective intelligence in many situations, e.g. here, here, or here. In my case, the research findings heavily influenced my perception of how to build teams and design organizations. The crucial lesson was that social perceptiveness and having everyone heard in discussions are key to achieving high collective intelligence. This, in turn, translates to high effectiveness of a team in pretty much any flavor of knowledge work.

Since the original work was published, the research has been repeated and its findings confirmed. Nevertheless, in the software industry we tend to think we are special (even though we are not), and thus I often hear the argument that trading technical skills for social perceptiveness is not worth it. The reasoning is that technical skills easily translate to better effectiveness in what is our bread and butter—building software. At the same time, fuzzy things like empathy do not.

The research, indeed, was run on people from all walks of life. At the same time, every niche has specific prerequisites that enable any productivity at all. I don’t deny that there is a specific set of technical skills required to get someone contributing to the work a team tries to accomplish. That’s likely true in any industry, and software development is no different.

As a matter of fact, sufficient fluency with engineering is the first thing we validate when we hire at Lunar Logic. The way we define it, though, is “good enough”. We want to make sure that a new team member won’t hamper the team they join. Beyond that, we don’t care too much. It resonates with a simple realization: it is much easier to learn how to code than to develop empathy, or social perceptiveness in general.

The whole approach is based on the assumption that the findings on collective intelligence hold true in our context. Now, do they?

Google’s research on its own teams, known as Project Aristotle, suggests they do. It’s not technical excellence that lands teams in the group of accomplishers. By the way, neither is management style—it was orthogonal to how well teams were doing. The patterns that were vividly visible were caring about other team members and equal access to discussion time.

What’s more, the teams that did well against one goal seemed to do well against other goals too. Conversely, teams that were below average were so in a consistent manner. The secret sauce seemed to work fairly universally across different challenges.

What a surprise! After all, we are not as special as we tend to think we are.

I could leave it here, as one of those “You see? I was right all that time!” kinds of posts. There is more to learn from the Google story, though. One aspect mentioned often in the research is norms, either explicit or implicit. This refers to the specific behaviors that are allowed and supported and, as a result, to organizational culture.

When we talk about teams, we talk about culture pockets, as teams, especially in a big organization, may differ quite a bit from one another.

It seems that even slight changes, such as the attitude in group discussions, can boost collective effectiveness significantly. If we look deeper at what drives such behaviors, we’ll find two keywords.

Empathy and respect.

Empathy is the enabler of social perceptiveness. It is the magic powder that makes people see and care for others. It pays off because an empathic person is likely to make everyone around them better. Note: I’m using a very broad definition of empathy here, as there is a whole discussion of how empathy is defined and decomposed.

Then we have respect, which results in psychological safety, as people are neither embarrassed nor rejected for sharing their thoughts. This, in turn, means that everyone has equal access to ongoing conversations and everyone is heard. Simply put, everyone contributes. Interestingly enough, this is often perceived as a nice-to-have trait in organizations but rarely as a core capability that every team needs to demonstrate.

A corollary to that is the observation that both respect and care for others sit deep down in the iceberg model of organizational culture. It means we can roughly sense the capabilities of an organization when it comes to collective intelligence. It’s enough to look at the execs and the most senior managers. How much do they care for others? How respectful are they? Since organizational culture spreads very much in a top-down manner, it is a good organizational climate metric.

I would risk a bold hypothesis that, statistically speaking, successful organizations have leaders who act in a respectful and empathic way. I have no proof to support the claim, and of course there’s anecdotal evidence of how disrespectful Steve Jobs or Bill Gates could be. That’s why I add “statistically speaking” to the hypothesis. Does anyone have relevant research on that?

Finally, there is something I reluctantly admit, since I’m not a believer in the “fake it till you make it” approach. It seems that some rules and rituals can actually drive collective intelligence up. There are techniques for taking turns in discussions. On one hand, they create equal access to conversation time. On the other, they fake respect in this context. They challenge ego-driven extroverts and, eventually, may trigger the emergence of true respect.

Similarly, we can learn to focus on the perception of others so that we see better how they may feel. It fakes empathy but, yet again, it may trigger the right reactions and, eventually, help develop the actual trait.

In other words, we are not doomed to fail even if so far we have paid attention to technical skills only and ended up with an environment that is far too nerdy.

However, we’d be so much better off if we built our teams bearing in mind that empathy and respect for others are the most important traits for candidates. Yes, for software developers too.

There’s one observation that I pretty much always bring to the table when I discuss the rates for our work at Lunar Logic. The following is true whenever we buy anything, but when it comes to buying services the effect is magnified: a discussion about price in isolation is the wrong discussion to have.

What we should be discussing instead is value for money: how much value I get for what I pay. In a product development context the discussion is interesting, because value is not provided simply by adding more features. If it were, if the dynamics of “the more features, the better the product” worked, we could distill the discussion down to efficient development.

For anyone with just a little bit of experience in product development such an approach would sound utterly dumb.

Customers who use a product don’t want more features or more lines of code; they want their problem solved. The ultimate value is in understanding the problem and providing a solution that effectively addresses it.

The “less is more” mantra has been around for years. But it’s not necessarily about minimalism; it’s more about understanding the business hypothesis, the context, the customer, and the problem, and proposing a solution that works. Sometimes it will be “less is more”. Sometimes the outcome will be quite stuffed. Almost always the best solution will be different than the one envisioned at the beginning.

I sometimes use a very simple, and not completely made up, example. Let’s assume you talk to a team that is twice as expensive as your reference team. They will, however, guide you through the product development process, so that they’ll end up building only one third of the initial scope. It will be enough to validate, or more likely invalidate, your initial business hypothesis. Which team is ultimately cheaper?

That first team is not cheaper if you take into account the cost of developing an average feature. Feature development is, however, neither the only nor the most important outcome they produce. At twice the rate and one third of the scope, the total bill comes to roughly two thirds of what the reference team would charge for the full scope. Looking from that perspective, the whole equation looks very different, doesn’t it?

This is a way of showing that in every deal we trade different currencies. Most typically, but not necessarily, one of these currencies is money. We have already touched on two more: functionality, or features, and validation of the business hypothesis. We could go further: code quality, maintainability, scalability, and so on.

Now, it doesn’t mean that all these currencies are equally important. In fact, to stick with the example I already used, rapid validation of a business hypothesis can be of little value to a client who just needs to replace an old app with a new one based on the same, proven business model.

In other words, in different situations different currencies will bear different value for the purchasing party.

The same is true for the other side of the deal. It may cost us differently to provide a client with a scalable application than to build high-quality code. This is a function of the skills and experience we have available at the company.

The analogy goes even further than that. We can pick any currency and look at how much each party values it. The perception of value will differ. It will differ even if we are talking about the meta-currency—money.

If you are an unfunded startup, money is a scarce resource for you. If, at the same time, we are close to our ideal utilization (which is between 80% and 90%), the additional money we’d get may not even be good compensation for the lost options, and thus we’d value the money much less than you do.

On the other hand, if your startup has just signed round B funding, the abundance of available money will make you value it much less. And if we have just finished two big projects, have nothing queued up, and plenty of developers are slacking, then we value money more than you do.

This is obviously related to the current availability of money and its reserves (put simply: wealth) in a given context. Daniel Kahneman described it with a simple experiment. If you have ten thousand dollars and you get a hundred dollars, that’s pretty much meh. If you have a hundred dollars and you get a hundred dollars, well, you value that hundred much, much more.

Those two situations create very different perceptions of the offer one party makes to the other. They also define two very different business environments. In one, it is highly unlikely that the collaboration would be satisfying for both parties, even if it happens. In the other, odds are that both sides will be happy.

This observation creates a very interesting dynamic. The most successful deals will be those where each party trades a currency it values low for one it values highly.

In fact, it makes a lot of sense to be patient and look for deals where there is a good match on this account, rather than to jump on anything that seems remotely attractive.

Such an attitude requires a lot of organizational self-awareness on both sides. At Lunar Logic we think of ourselves as product developers. It’s not about software development or adding features. It’s about finding ways to build products effectively. That requires a broader skill set and a different attitude. At the same time, we expect at least a bit of Lean Thinking on the part of our clients. We want to share the understanding that “more code” is pretty much never the answer.

Only then will we be trading currencies in a way that makes it a good deal for both parties.

And that’s exactly the pattern that I look for whenever I say “value for money.”

I’m a huge fan of Real Options. Along with Cynefin, it is one of the models that can be applied very universally across different domains. No wonder that some time ago I proposed applying Real Options as a sense-making mechanism that connects the different levels of work being done in an organization.

Simply put, pieces of potential work, be they projects or products, are options. We rarely, if ever, can effectively work on all the potential initiatives we have on our plates. That’s why we end up picking, a.k.a. committing to, only a subset of the options we have.

Each commitment to start an initiative instantly generates a set of options at a lower level of work. Once we commit to run a project, there are many ways we can structure the work and many possible feature sets we could end up building. We again have a set of options available and, again, eventually commit to executing some of them. That, in turn, generates options on a layer of finer-granularity work items, say, individual features. It goes all the way down to the most atomic work items we have.

We need an accompanying mechanism to close the feedback loop between the layers of work. We simply need to provide information back to the higher level. Think of situations like a project taking longer than expected. We obviously want that information to be taken into account when we make commitments at the portfolio level. Ultimately, it means that the available capabilities have changed, and thus it influences the set of options we have at the portfolio level.

Again, similar dynamics will be seen between any two neighboring layers of work. Specific technical choices for features will influence how other features are built or how much time we’d need to make changes in a product.

The model can easily be scaled up to reflect all the layers of work present in an organization. In big companies there will be multiple layers of work even within portfolio management alone.

The underlying observation is that we very, very rarely need information to be escalated further than between neighboring levels of work. In other words, a single feature that is late will not affect the decision-making process at the portfolio level. By the same token, a commitment to start a new project, as long as it takes available capabilities into account, will be of little interest to a feature team involved in an ongoing initiative.

There is, however, one basic assumption that I subconsciously made when proposing this model. The assumption is about autonomy.

Work flows down to the finer-granularity level through a commitment at a coarser-granularity level. The commitment, however, is not only an expression of good will that we want to build something. If we make a commitment to run a project, we need to fund and staff it. Part of the commitment is providing the people, skills, and resources required to accomplish that project within the expected constraints, be it time, budget, scope, etc.

If there are other constraints that are important, they need to be explicitly described when the commitment is made. One example that comes to mind would be the ultimate goals for a product or a project. It can be about technical constraints: for whatever reason, the technologies a product will be built in may be fixed. Another common case would be high-level dependencies, e.g. between two interconnected systems.

Such constraints need to be explicit and need to be expressed when the commitment is made, simply because they influence what options we will have at the lower level of work.
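
The mechanics described above can be sketched as a toy model. All of the names here (classes, initiatives, constraint keys) are made-up illustrations, not a real tool or our actual process; the point is only to show a commitment at one level carrying explicit constraints and spawning options at the next level down.

```python
# Toy sketch: commitments at one level of work generate options
# at the next, finer-granularity level.

class Option:
    """A potential piece of work at the lower level; choosing among
    these is the lower-level team's responsibility."""
    def __init__(self, name):
        self.name = name

class Commitment:
    """A commitment made at the higher level, with its explicit
    constraints (deadlines, fixed tech, dependencies) attached."""
    def __init__(self, name, constraints=None):
        self.name = name
        self.constraints = constraints or {}
        self.child_options = []

    def generate_options(self, names):
        # Committing at this level instantly creates a set of
        # options one level down.
        self.child_options = [Option(n) for n in names]
        return self.child_options

# Portfolio level: commit to a project, stating constraints explicitly.
project = Commitment("rebuild checkout", {"deadline": "Q3", "stack": "existing"})

# The commitment yields feature-level options; the project team,
# not the portfolio level, picks which ones to execute.
options = project.generate_options(["one-click pay", "guest checkout", "saved carts"])
print([o.name for o in options])
```

The design choice worth noticing is that the higher level only shapes the option set through constraints; nothing in the model lets it force a specific choice on the level below.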

There’s also another important reason why we want explicit constraints. When we move our perspective to a different level of work, we also change the team involved in the work. In the most common scenario the team context changes from the PMO, through a project team, to a feature team as we go down through the picture.

And that’s exactly where autonomy kicks in. A commitment at a higher level of work generates options at a lower level. What options we get depends on the constraints we set. These are all prerogatives of the team making decisions at the higher level.

The specific choice among the available options, on the other hand, is the responsibility of the team operating at the lower level.

Obviously, we don’t want a PMO leader telling developers how to write unit tests. That’s an extreme example, though, and yet I see violations of autonomy all over the place.

Let’s start from the top. The role of the PMO in such a scenario would be to pick the initiatives we want to run, a.k.a. make project- or product-level commitments. Part of the process would be defining relevant constraints for each commitment: things like staffing and funding the new initiative, sharing expected deadlines, etc. This is supposed to provide a fair amount of predictability and safety to the team that will be doing the actual work.

One crucial part of defining constraints is making the goals of the initiative explicit. What are we trying to achieve with this product or project? In other words, why did we decide to invest the time of that many people and that much money, and why do we believe it was a good idea?

And now the final part: the PMO should get out of the way. The options now live with a product team or a project team. That team should have the autonomy to pick the ones they believe are best. Interference from the top disables autonomy and, as such, becomes a source of demotivation and disengagement. It is very likely that such interference would yield a suboptimal choice of options too.

The pattern remains the same when we look at any two neighboring layers of work. For example, we will see similar dynamics between a product team and a feature team.

The influence on which options get executed happens through the definition of constraints, not by enforcing a specific choice of options. The different levels of work are, in a way, isolated from each other: by the mechanism of commitment that yields options at a lower level, by feedback loops going up, and finally by distributing authority and maintaining autonomy to make decisions within one’s own sphere of influence.

Unsurprisingly, the latter gets abused fairly commonly, which is exactly why we need to be more aware and mindful of the issue.

I’m a long-time fan of visual management. Visualizing work helps to gather the low-hanging fruit: it makes the biggest obstacles instantly visible and thus helps to facilitate improvements. By the way, these early improvements are typically fairly easy to apply and have a big impact. That’s why we almost universally propose visualization as the practice to start with.

At the same time a real game-changer in the long run is when we start limiting work in progress (WIP). That’s where we influence the change of behaviors. That’s where we introduce slack time. That’s where we see emergent behaviors. That’s where we enable continuous improvements.

What’s frequently reported though, is that introducing WIP limits is hard for many teams. There’s resistance against the mechanism. It is perceived as a coercive practice by some. Many managers find it really hard to go beyond the paradigm of optimizing utilization. People naturally tend to do what they’ve always been doing: just pull more work.

How do we address that challenge? For quite some time the best idea I had was to try it as an experiment with a team. Ultimately, there needs to be team buy-in to make WIP limits work. If there is resistance against a specific practice, e.g. WIP limits, there’s not much point in enforcing it. It won’t work. However, why not give it a try for some time? There doesn’t have to be any commitment to continue after the trial.

The thing is that it usually feels better to have less work in progress. There’s not that much multitasking and context switching. Cycle times go down, so there’s more of a sense of accomplishment. The pressure on the team often goes down as well.

There’s also one way to show that the team is actually doing much better. It’s enough to measure the start and end dates of work items. That allows us to figure out both cycle times (how much time it takes to finish a work item) and throughput (how many work items are finished in a given time window).
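
As a minimal sketch of that measurement (the item log and dates below are invented for illustration; any tracker export with start and end dates would do), both metrics fall out of simple date arithmetic:

```python
from datetime import date

# Hypothetical log of finished work items: (start_date, end_date).
items = [
    (date(2024, 1, 1), date(2024, 1, 4)),
    (date(2024, 1, 2), date(2024, 1, 8)),
    (date(2024, 1, 5), date(2024, 1, 9)),
]

# Cycle time: elapsed days from starting an item to finishing it.
cycle_times = [(end - start).days for start, end in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# Throughput: items finished within a given time window (one week here).
window_start, window_end = date(2024, 1, 1), date(2024, 1, 8)
throughput = sum(1 for _, end in items if window_start <= end < window_end)

print(cycle_times)     # per-item cycle times in days
print(avg_cycle_time)  # average cycle time
print(throughput)      # items finished in the window
```

Nothing more than start and end dates is needed, which is why this measurement is cheap enough to run before and after any change to how the team pulls work.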

If we look at Little’s Law, or rather its adaptation in the Kanban context, we’ll see that:

Average Cycle Time = Average Work in Progress / Average Throughput

It means that if we want to get shorter cycle time, a.k.a. quicker delivery and shorter feedback loops, we need either to improve throughput (which is often difficult) or cut work in progress (which is much easier).

We are then in a situation where we understand that limiting WIP is a good idea and yet realize that introducing such a practice is not easy. Is there another way?

One strategy that has worked very well for me over the years is to change the discussion around WIP limits altogether. What do we expect when WIP limits are in place? Well, we want people to pull less work and to focus on finishing items rather than starting them.

So how about focusing on these outcomes directly? It’s fairly simple. It’s enough to write down simple guidance. Whenever you finish work on an item, first check whether there are any blockers. If there are, attempt to resolve them. If there aren’t any, look at the unassigned items, starting from the part of the board that’s closest to the done column (typically the rightmost part of the board). Then gradually move toward the beginning of the value stream. Start a new item only when there’s literally nothing you can do with the ongoing items.

If we take a developer as an example, the process might look like this. Once they finish coding a work item, they look at the board. There is one blocked item. It just so happens that the blocker is waiting for a response from a client. A brief chat within the team may reveal that there’s no point in pestering the client for feedback on the ticket for now, or that it’s a good moment to remind the client about the blocker.

Then the developer goes through the board, starting at the point closest to done. If the process in place is something like development, code review, testing, and acceptance by the client, the first place to look is acceptance. Are there any work items on which the client has shared feedback, i.e. where we need to implement some changes? If so, that’s the next task to focus on. In fact, it doesn’t matter whether the developer was the author of the ticket, although if any of the tickets used to be theirs, that may be an input for prioritizing.

If there’s nothing to act upon in acceptance, then we have testing. Are there any bugs from internal testing that need to be fixed? Are there any tickets waiting for testing that the developer can take care of? Of course, we have the age-old discussion about developers not being willing to do actual testing. I would, however, point out that fixing bugs for fellow developers is an equally valuable activity.

If there’s nothing in testing that can be taken care of then we move to code review and take care of anything that’s waiting for code review or implement feedback from code review that’s been done already.

Then we move to development and try to figure out whether the developer can help with any ongoing work item. Can we pair with other team members? Or maybe there are obstacles where another pair of eyes may be useful?

Only after going through all these steps does the developer move to the backlog and pull a new ticket.
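
The whole guidance boils down to a scan order over the board, which can be sketched as a simple policy function. This is a toy model, not our actual tooling; the column names and board representation are assumptions for illustration.

```python
# Columns scanned from closest-to-done back toward the beginning of the
# value stream. A new item is pulled from the backlog only as a last resort.
PULL_ORDER = ["blocked", "acceptance", "testing", "code review", "development"]

def next_task(board):
    """Pick the next thing to work on from a board: {column: [tickets]}.

    Prefers unblocking and finishing ongoing work over starting new work.
    Returns ("backlog", ticket) only when nothing else is actionable,
    and (None, None) when the board is completely empty.
    """
    for column in PULL_ORDER:
        tickets = board.get(column, [])
        if tickets:
            return column, tickets[0]
    backlog = board.get("backlog", [])
    return ("backlog", backlog[0]) if backlog else (None, None)

board = {
    "blocked": [],
    "acceptance": [],
    "testing": ["fix bug #12"],
    "development": ["feature #7"],
    "backlog": ["feature #9"],
}
print(next_task(board))
```

With this board the policy picks the ticket waiting in testing, not the backlog item: the developer only starts new work when every column ahead of the backlog is empty, which is exactly the behavior an explicit WIP limit is meant to produce.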

The interesting observation is that in the vast majority of cases there will be something to take care of. Just try to imagine a situation where there’s literally nothing that’s blocked, nothing that requires fixing or improvement, and nothing in a waiting queue. If we face such a situation, we likely don’t need to limit work in progress any further.

And that’s the whole trick. Instead of introducing an artificial mechanism that yields specific outcomes, we can focus on those outcomes directly. If we can get the team to adopt simple guidance for choosing tasks, we have effectively made them limit work in progress, and likely with much less resistance.

Now, does it matter that we don’t have explicit WIP limits? No, not really. Does it matter that the actual limits may fluctuate a bit more than when the process has hard limits? Not much. Do we see actual improvements? Huge.

As I write these words, I’m on my way home from Lean Agile Scotland. While summarizing the event, Chris McDermott mentioned a few themes, two of them being organizational culture and experimentation.

Experimentation is definitely my thing. I am into organizational culture too. I should have been happy when Chris rightly pointed to both as themes of the event. Yet at that very moment alarm lights went off in my head.

We refer a lot to safe to fail experiments. We talk about antifragile or resilient environments. And then we quickly turn to organizational culture.

The term culture hacking pops up frequently.

And I’m scared.

The reason is that in most cases there is no safe to fail experiment when we talk about organizational culture. The culture is an outcome of everyone’s behaviors. It is ultimately about people. In other words, an experiment on the culture, or a culture hack if you will, means changing people’s behaviors.

If you mess it up, more often than not there’s no coming back. We may introduce a new factor that influences how people behave. However, removing that factor does not bring the old behaviors back. And not only that: often there’s no simple way to introduce another factor that would restore the old status quo.

There’s a study which showed that introducing a fine for showing up late at a daycare to pick up a child resulted in more parents being late, as they felt the fine excused their behavior. This was quite an unexpected outcome of the experiment. The even more interesting part, however, is that removing the fine did not affect the parents’ behavior at all: they kept showing up late more frequently than before the experiment.

It’s natural. Our behaviors are an outcome of the constraints of the environment and of our experience, knowledge, and wisdom.

We can affect behaviors by changing the constraints. The change is not mechanistic though; we can’t exactly predict what’s going to happen. At the same time the change affects our experience, knowledge, and wisdom, and thus irreversibly changes the bottom line.

I can give you a simple example. When we decided to go transparent with salaries at Lunar Logic it was a huge cultural experiment. What I knew from the very beginning, though, was that there was no coming back. Sure, we could make salaries “non-transparent” again. Would that change what people learned about everyone’s salary? No. Would that change the fact that they look at each other through the perspective of that knowledge? No.

It might even have affected the way they look at the company in a negative way, as suddenly some of the authority that they’d had would have been taken away. In other words, even from that perspective, they’d have been better off if such an experiment hadn’t been run at all than if it had been tried and rolled back.

I’m all for experimentation. I definitely do prefer safe to fail experiments. I am however aware that there are whole areas where such experiments are impossible most of the time, if not all of the time.

The culture is one such area. It doesn’t mean that we shouldn’t be experimenting with the culture. It’s just that we should be aware of the stakes. If you’re just flailing around with your culture hacks there will be casualties. Having an experimentation mindset is a lousy excuse.

I guess part of my pet peeve with understanding tools and methods is exactly this. When we introduce a new constraint – and a method or a tool is a constraint – we invariably change the environment and thus influence the culture. Sometimes irreversibly.

It gets even trickier when the direct goal of the experiment is to change the culture. Without understanding what we’re doing, it’s highly likely that such a culture hack will backfire. Each time I run an experiment on a culture, I like to assume the change will be irreversible, and then I ask myself once again: do I really want to run it?

Context Switching: The Good and the Bad (August 31, 2015)

Multitasking is bad. We know that. Sort of. Yet still, we keep fooling ourselves that we can efficiently do a few things at the same time.

When I talk about limiting work in progress I point out a number of reasons why multitasking and its outcome – context switching – is harmful. One of them is the Zeigarnik Effect.

The Zeigarnik Effect is the observation that our brains remember unfinished tasks much better than finished ones. Not only that though. If we haven’t finished something, we will also have intrusive thoughts about it.

So it’s not only that it’s easy for us to recall tasks that we haven’t finished. We don’t fully control when we think about these tasks either.

What are the consequences? Probably the most important outcome is that, in a situation where we handle a lot of work in progress, it is an illusion that we are focusing on a single task. This is an argument that I’d frequently hear: it doesn’t matter that we have a dozen work items in development. After all, at any given time I only work on a single feature, right?

Wrong. What the Zeigarnik Effect suggests is that our brains will be switching context regardless of what we consciously intend.

An interesting angle on that discussion is that the Zeigarnik Effect has been disputed – it isn’t a universally accepted phenomenon. Let me run a quick validation with you then. When was the last time that, while doing something completely different, you had an intrusive thought about an unfinished task? Be it an email you forgot to send, a call you didn’t make, a chore you were supposed to do, or whatever else.

We do have those out of the blue thoughts, don’t we? Now, think what happens when we do. Our brain instantly switches the context. It doesn’t really matter what we’ve been doing prior to that: driving a car, coding or having a discussion.

That’s exactly where the multitasking tax is rooted.

It’s not all bad though. We frequently use the Zeigarnik Effect to help us. A canonical example is when we struggle with solving a complex problem and give up, just to figure it all out in the shower, while brushing our teeth, or while lying in bed after we’ve woken up in the morning. We simply release the pressure of sorting it all out instantly and let our brain take care of it.

And it does. At a moment that’s convenient for our thinking process we face a context switch that brings us the solution to the puzzle we’ve been facing.

It is worth remembering that this is a case of context switching as well. It just so happens that we’ve been taking a shower, so it doesn’t hit our productivity. The pattern in both cases is exactly the same though.

That’s why it is worth remembering that adding more and more things to our plate doesn’t make us more effective at all. At the same time we may use exactly the same mechanism to let our brain casually kick in when we struggle with a difficult problem.
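The delivery-time cost of working on everything at once can be shown with back-of-the-envelope arithmetic. The numbers below are invented for illustration: three tasks of ten days each, done one after another versus interleaved day by day.

```python
# Back-of-the-envelope illustration of the multitasking tax (numbers invented).
# Three tasks of 10 days of effort each.

tasks, effort = 3, 10

# Sequential: finish one task before starting the next.
# Tasks are delivered on days 10, 20 and 30.
sequential_finish = [effort * (i + 1) for i in range(tasks)]

# Round-robin: switch to the next task every day. All tasks crawl forward
# together, so task i only finishes on day 28, 29 or 30.
round_robin_finish = [tasks * effort - (tasks - 1) + i for i in range(tasks)]

avg_seq = sum(sequential_finish) / tasks   # average delivery day, sequential
avg_rr = sum(round_robin_finish) / tasks   # average delivery day, round-robin
```

Even with zero overhead per switch, the average delivery day jumps from 20 to 29; any real switching cost – including Zeigarnik-style intrusive thoughts – only makes the interleaved case worse.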

Whenever a topic of motivation at work pops up I always bring up Dan Pink’s point. In the context of knowledge work, in order to create an environment where people are motivated we need autonomy, mastery, and purpose.

The story is nice and compelling. However, what we don’t realize immediately is how high Dan Pink sets the bar. Let me leave the purpose part aside for now; it is worth a post of its own. Let’s focus on autonomy and mastery.

First of all, especially in the context of software development, there’s a strong correlation between the two. Given that I have enough autonomy in how I organize my work and how the work gets done, I most likely can pursue mastery as well. There are edge cases of course, but most frequently autonomy translates to mastery (not necessarily so the other way around though).

The problem is that the way organizations are managed does not support autonomy across the board. The vast majority of organizations employ hierarchy-driven structures. A line worker has a manager, that manager has their own manager, and so on and so forth up to the CEO.

The hierarchy itself isn’t that much of an issue though. What is an issue is how power is distributed within the hierarchy. Typically specific powers are assigned to specific levels of management. A line manager can do that much. A middle manager that much. A senior manager even more. Each manager is a ruler of their own kingdom.

Why is power distribution so important? Well, ultimately in knowledge organizations power is used for one purpose: making decisions. And decision-making is a perfect proxy if we are interested in assessing autonomy.

Of course each ruler has a fair amount of flexibility when it comes to deciding how decision-making happens in their teams. There are, however, mechanisms that discourage them from changing the common pattern, i.e. a dictatorship model.

The hierarchical, a.k.a. dictatorship, model has its advantages. Namely, it addresses the risks of indecisiveness and unclear accountability. Given that power is clearly distributed across the hierarchy, we always know who is supposed to make a decision and thus who should be held accountable for it.

That’s great. Unfortunately, at the same time it discourages attempts to distribute decision-making. As a manager I’m still held accountable for all the relevant decisions, so I’d better make them myself or double-check whether I agree with those made by the team.

This in turn means that normally there’s very little autonomy in hierarchical organizations.

It brings us to a sad realization. The most common organizational structures actively discourage autonomy and authority distribution.

If we come back to where we started – what the drivers for motivation are – we would derive that we should see really low levels of motivation out there. I mean, the vast majority of companies adopt the hierarchical model as if it were the only option available. Not only that though. Even within a hierarchical model we may introduce a culture that encourages autonomy, yet very, very few companies do so.

We could conclude that if the above argument holds, we should expect really low levels of motivation across the global workforce. It is a safe assumption that high motivation results in high engagement and vice versa.

Interestingly enough, Gallup ran a global survey on employee engagement. The bottom line is that only 13% of employees are engaged at work. Thirteen. It would have been a shock if not for the fact that we just proposed that one of the current management paradigms – a prevalent organizational structure – is unsuitable for introducing autonomy across the board, and thus high levels of motivation.

In fact, active disengagement, which translates to being openly disgruntled, is universally more common than engagement. Now, that tells a story, doesn’t it?

It is also a challenge for the dominant management paradigm that makes a rigid hierarchy by far the most popular organizational structure out there. While such a hierarchy addresses specific risks, it isn’t the only way of dealing with them. The price we pay for following that path is extremely high.

By now, Minimum Viable Product (MVP) is for me mostly a buzzword. While I’ve been a huge fan of the idea since I learned it from Lean Startup, these days I feel like one can label anything an MVP.

Given that Lunar Logic is a web software shop, we often talk with startups that want to build their product. I can recall one, maybe two ideas that were really minimal in the sense that they would validate a hypothesis and yet require the least work to build. The normal case is that I can easily figure out a way of validating a hypothesis without building half, or even two thirds, of the initial “MVP”.

With enough understanding of the business environment it’s fairly easy to go even further than that, i.e. cut down even more features and still get the idea (in)validated.

The prevalent approach is still to build a fairly feature-rich app that covers a bunch of typical scenarios that we think customers would expect. The problem is that it means thinking in terms of features, not in terms of customers’ problems.

Given that Lunar has been around for quite a long time – it’s going to be its 11th birthday this year – we also have a good sample of data on how successful these early products are. Note that I’m focusing here on whether an early version of a product survived, rather than whether it was a good business idea in the first place.

Roughly 90% of the apps we built are not online anymore. It doesn’t mean that all these business ideas were failures. Some eventually evolved away from the original code base. Others ended up making their owners rich after they sold the product to, e.g., Facebook. The reasons vary. The vast majority simply didn’t make the cut though.

From that perspective, the only purpose these products served was knowledge discovery. We learned more about the business context. We learned more about the real problems of customers and their willingness to pay for solving them. We learned that specific assumptions we’d had were completely wrong and others were spot on.

In short, we acquired information.

In fact, we bought it, paying for building the app.

This is a perspective I’d like our potential clients to have whenever we’re discussing a new product. Of course we can build something that will cost 50 thousand bucks, release it, and only then figure out what happens. Or maybe we can figure out how to buy the same knowledge for much less.

There are two consequences of such an approach.

One is that most likely there will be a much cheaper way to validate assumptions than building the app. The other is that we introduce one more intermediate step before deciding to build something.

The step is answering how much knowing a specific thing is worth to us. How much would we pay to know whether our business idea would work or not? This also boils down to: how much will it be worth if it plays out?

I can give you an example. When we were figuring out whether our no-estimation cards made sense as a business idea, we discussed the numbers: how much we might charge for a deck, what volumes we could expect. The end result of that discussion was that the potential business outcomes didn’t even justify turning the cards into a product of their own.

We simply abandoned the productization experiment, as the cost of learning how much we could earn selling the cards was bigger than the potential gain. Validating such a hypothesis wasn’t economically sensible.

By the way, eventually we ended up building the site and made our awesome cards available but with a very different hypothesis in mind.

In this case it wasn’t about defining a Minimum Viable Product. It was rather about figuring out how much potential new knowledge is worth and how much we’d need to invest to acquire it. The economic equation didn’t work initially, so we put the effort on hold until we pivoted the idea.

If we turned that into a simple puzzle it would be obvious. Imagine that I have two envelopes. There is a hundred-dollar bill inside one and the other is empty. How much would you be willing to pay for the information of where the money is? Well, mathematically speaking, no more than 50 dollars. That’s simple.
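The envelope puzzle is just an expected-value calculation. A minimal sketch, using the numbers from the puzzle:

```python
# The two-envelope puzzle as an expected-value calculation.
# One envelope holds $100, the other is empty; without information your best
# expected outcome is $50, so perfect information is worth at most $50.

prize = 100
p_guess_right = 0.5                      # blind pick: 50/50 chance

ev_without_info = prize * p_guess_right  # expected value of guessing: $50
ev_with_info = prize                     # with the information you always win

value_of_information = ev_with_info - ev_without_info  # the most it's worth: $50
```

The same arithmetic applies to an MVP: the most a validation experiment is worth is the expected upside of knowing the answer minus what you’d expect to capture by guessing.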

If only we could have such a discussion about every feature we build into our products, we would add much less waste to software. The same is true for whole products.

Next time someone mentions an MVP, you may ask what hypothesis they’re going to validate with it and how much validating that hypothesis is worth. Only then will a discussion about the cost of building the actual thing have enough context.

And yes, employing such an attitude does mean that much of what people call MVPs wouldn’t be built at all. And yes, I just said that we commonly encourage our potential clients to send us much less work than they initially want. And yes, it does mean that we get less money for building these products.

And no, I don’t think it affects the financial bottom line of the business. We end up being recommended for our Lean approach and for taking care of our clients’ best interests. It is a win-win.