It’s based on a 2002 research paper by Baron and Hannan exploring the impact of “employment blueprints” on companies’ success.

I was excited to read the 25-page paper as it covers one of the topics that I’m most passionate about: the impact of organizational choices and decisions on company performance. On a more meta level, it also seems to corroborate a broader pattern that I’ve observed: we have a serious knowledge-management problem when it comes to organizational design. This paper was published in 2002. Thirteen years later it is being rediscovered, but during that time the debate about some of its insights (for example: hiring for fit vs. potential) raged on, uninformed by data, as always…

Reading the paper left me with mixed feelings. On the one hand, the “blueprint” approach seems to resonate and yields some pretty interesting insights. On the other hand, the opacity around the authors’ analysis casts serious doubt on the validity of those insights, at least in my opinion.

The Good

The team looked at a 200-company data set from the “Stanford Project on Emerging Companies”. They classified the employment decisions each company made along three dimensions: basis of attachment and retention, criterion for selection, and means of control & coordination:

They then clustered the 36 possible permutations into five common blueprints (archetypes):

After looking at the impact blueprint selection has on softer aspects, such as other choices in organization-building (timing of bringing in specialized HR capacity, sequence of hiring compared to other business milestones, level of early attention to organizational concerns), and exploring some initial attributes of blueprint switching (who switches, why, and to what), the authors turn their attention to the main research question:

Once a certain blueprint was chosen, do the benefits of switching outweigh its costs?

To answer that question, the authors first explore the intrinsic costs and benefits of each blueprint. The costs are measured in the level of administrative overhead over time, and the benefits are measured through the impact on three performance indicators:

Likelihood of Failure

Likelihood of IPO

Annual growth rate in market capitalization (post IPO)

The “commitment” blueprint seems to be the clear winner on the first two, while the “star” blueprint wins on the last one.

Then they turn to explore the impact of changing the blueprint, and find that while it slightly increases the chance of IPO, it has more profound effects in increasing the likelihood of failure, increasing employee turnover and reducing yearly growth in market cap. Perhaps the most interesting finding w/r/t employee turnover was the following:

“It turns out that CEO succession does have a strong effect on turnover. However, this effect appears to be due entirely to the tendency for CEO succession to be accompanied by changes in HR blueprints”

The Bad

As I’ve alluded to in the intro, the opacity around the statistical analysis done by the authors is an issue that kept bothering me throughout the paper. The authors mention several times that they’ve taken other factors into account while performing the analysis, but don’t reveal any of the data. A subsequent working paper sheds some additional light, but not enough to alleviate my concerns. This is particularly troubling for several reasons:

Small data set – only 156 companies

Selection bias – only 42 of the companies went public, so any analysis of post-IPO performance is particularly selection-prone

Unclear statistical significance of the results – for example, w/r/t the information shown in figures 6 & 7, the authors acknowledge in the working paper that: “The differences among models (aside from the contrast vis-à-vis Commitment) are not jointly significant”

Unclear explanatory power – I’m probably not using the right statistical term here, but here’s the issue: almost all the information is presented in relative terms, looking at the impact of one blueprint compared to another, typically using the “engineering” blueprint as the default. However, what portion of the overall variability in the performance indicators is explained by blueprint selection (and whether that insight is statistically significant) is never discussed. Put differently: it seems unlikely, in the heavily-scrutinized post-IPO environment, that if switching from the “engineering” blueprint to the “star” blueprint yielded an 80% improvement in annual market-cap growth, only 11% of companies would switch blueprints…

Unique time bias – the boom-and-bust period of the late 90s and early 00s

Bottom Line

The subsequent working paper exposes another facet of this paper, which may help explain some of the analytical challenges with it. In it, the authors state that:

“The main focus of this research was to learn whether changing initial blueprints destabilized the SPEC companies”

And that is indeed the less controversial part of the research. I really like the idea of organizational blueprints/archetypes and think it merits further exploration. Beyond that, I personally find it hard to overcome the analytical challenges in making a call on whether one blueprint is better than the others.

He argues that lack of clarity on the scope in which people are interested in making an impact is a significant driver of job dissatisfaction and churn. He illustrates it with an example from the healthcare space: becoming a doctor, a hospital administrator or a healthcare policy-maker are all ways to drive positive change in healthcare, but at very different scopes. Each of those represents implicit trade-offs around the control over the change you may seek to have, as well as its speed and tangibility. Different people who are motivated by the opportunity to move healthcare forward may vary in their preferences on those implicit trade-offs. Unawareness of those personal preferences may lead them to pursue career opportunities that, while promoting the cause they care about, still lead to job dissatisfaction.

I’ll conclude with the same question Aaron chose to conclude his post with: which one are you?

As an industry we, for the most part, know how to scale up our software. […] We also know how to scale up our organizations, putting in the necessary management structures to allow thousands of people to work together more or less efficiently.

On the other hand, I’d argue that we don’t really yet have a good handle on how to scale that area that exists at the intersection of engineering and human organization. […] And, worse, it often seems that we don’t even understand the importance of scaling it as we go from that first line of code to thousands of engineers working on millions of lines of code.

Peter’s piece consists of two main parts. The first part is a play-by-play history of Twitter’s code base and development methodologies, highlighting the key areas where a focus on “engineering effectiveness” would have helped.

The second part decomposes “engineering effectiveness” into three main areas:

Reduction of tech debt, first where it tends to accumulate the most (tooling) and then elsewhere in the code base

Help in the dissemination of good practices (around code reviews, design docs, testing, etc.) and the reduction of bad practices

Building tools which help engineers do their job better

In that second part Peter also suggests a model to determine the optimal level of investment in “engineering effectiveness” (ee):

Where “E” is total effectiveness (which we’re trying to maximize), “eng” is the total engineering headcount, “ee” is the engineering-effectiveness group headcount, “b” is the boost that the first engineering-effectiveness hire brings to the rest of the engineering team, and “s” is the scaling impact that each additional engineering-effectiveness hire contributes (0 < s < 1, since we should assume diminishing returns).

Assuming b=0.02 (2% effectiveness boost) and s=0.7, for a total engineering headcount of 10, 100, 1000 and 10000, he gets an optimal ee headcount of 0, 2, 255, and 3773 respectively. As the engineering org scales, a larger portion of the total headcount should be dedicated to making the rest of the engineering org more effective, with ~100 engineers being the inflection point of making the investment worthwhile (for these b and s values).
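The formula itself appeared as an image in Peter’s post and isn’t reproduced here, but the variable descriptions and the quoted numbers are consistent with E = (eng − ee) × (1 + b · ee^s). A minimal Python sketch under that assumed formula (the function names and the brute-force search are mine, not Peter’s):

```python
def total_effectiveness(eng, ee, b=0.02, s=0.7):
    """Total effective output of an org with `eng` engineers, `ee` of
    whom work on engineering effectiveness. The remaining (eng - ee)
    engineers each get a multiplicative boost of b * ee**s."""
    return (eng - ee) * (1 + b * ee ** s)

def optimal_ee(eng, b=0.02, s=0.7):
    """Brute-force the integer ee headcount that maximizes E."""
    return max(range(eng + 1),
               key=lambda ee: total_effectiveness(eng, ee, b, s))

if __name__ == "__main__":
    for eng in (10, 100, 1000, 10000):
        print(eng, optimal_ee(eng))
```

Note that the effectiveness curve is extremely flat near its maximum, so an integer search like this one can land a headcount or two away from the exact figures quoted above; the qualitative conclusion, that the optimal ee share of total headcount grows as the org scales, is unchanged.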

Another important aspect here is in providing guidance on the type of initiatives that such organizations should take on: breadth very quickly trumps depth – making 1,000 engineers 2% more effective (a gain equivalent to ~20 engineers) has a much greater overall impact than making 10 engineers 50% more effective (~5 engineers).

This model is particularly interesting since it can easily be generalized to any other group whose mission is to help a larger part of the org be more effective. These support groups, in companies that are wise enough to have them, tend to be staffed and funded based on a fixed headcount ratio to the total headcount of the org they support. Peter’s analysis suggests that when those organizations scale significantly, the traditional approach will lead to under-investment. Adopting this more refined methodology, and having a thoughtful conversation about the appropriate “s” and “b” values for the particular use case, will likely lead to a better outcome.

People tend to be defensive about the responsibilities they own in a company. It’s natural that they struggle with giving those responsibilities to new employees and trusting that they’ll do as good a job as they did. And yet, giving away responsibility is exactly what we need them to do in order to effectively scale the company. As Molly puts it: “giving away responsibility — giving away the part of the Lego tower you started building — is the only way to move on to building bigger and better things”. More people does not mean less work for the people already there; it means the company can do more as a whole.

Her advice to managers is to be proactive in communicating about this challenge. Acknowledge that this feeling of defensiveness around giving away responsibility is completely normal, but that getting beyond this initial, emotional reaction is exactly what the company needs them to do in order to be successful. Focusing on the bright, new, shiny Lego tower that you need them to build next is also a good idea.

Molly argues that the true scaling chaos happens approximately when the company has 30-750 people (every company is a bit different). Beyond that, the scaling challenges manifest themselves mostly on a departmental level, rather than a corporate level. She identified three distinct growth phases in which scaling presents different challenges:

30 – 50 people: communication, which has been almost effortless until that point, becomes exponentially more challenging. The best solution here is to start putting things down on paper: mission, values, philosophies, etc. and being particularly mindful about over-communicating them.

50-200 people: this is the most critical phase in the shaping of the company culture. Thought and focus must be directed to building the systems that’ll take the values off the paper and make them real. One of the hardest and most important aspects of this is pruning the talent pool – letting go of the people who are not a good fit for the culture we’re trying to create. It should only take a couple of months to assess whether someone is a good culture fit. And if the answer is “no” – action must be taken quickly.

200-750 people: At this point, the personality and habits of the organization are pretty much molded. The focus now shifts to scaling and preserving them as more people join. Onboarding, training and other business practices are key. Any desired cultural change at this point will be challenging, and must be undertaken deliberately, assuming a lot of work will have to be done by the CEO and leadership team in order to make it happen.

at an Agile conference four years ago, and I added it to my reading queue. For whatever reason, other books kept getting ahead of it (a relatively rare occurrence in my queue), until it was mentioned in a book I recently finished, causing it to jump back to the top.

The book chronicles David’s personal story of taking command of an under-performing nuclear submarine, the USS Santa Fe, and transforming it into a top-performing one. What makes this book a worthwhile topic for this blog is the way David chose to go about doing that: by pushing power/control/decision-making down the chain of command – in the complete opposite direction from traditional Navy doctrine.

The biggest lesson from David’s experience, in my opinion, is best summarized in his own words:

“Control, we discovered, only works with a competent workforce that understands the organization’s purpose. Hence, as control is divested, both technical competence and organizational clarity need to be strengthened.”

Many books and articles make the case for why pushing control/power/decision-making down the org (oftentimes erroneously referred to as “empowerment”) is important. But the tight coupling between doing so, technical competence and organizational clarity was illuminating to me. Reflecting on past situations when I hesitated to delegate control, or did so and was disappointed with the outcome, I can almost always attribute the root cause of my hesitation or disappointment to one, or both, of these elements.

Even though the quote I shared above is taken from the book’s introduction, this is not one of those would-have-been-better-as-a-10-page-HBR-article books. The book adds color and nuance to this high-level idea, and breaks it down into specific mechanisms that David used to push down control, improve technical competence and enhance organizational clarity. Many of these can be adapted from a submarine setting to a corporate setting (environments that are worth comparing and contrasting in more depth).