The debate over the impact of automation on employment has been raging fiercely in the last several years, as advances in artificial intelligence and robotics have made some science fiction-esque scenarios look more plausible, and technology-enabled services like Uber have upended conventional labor models. But when do you think people first started debating this issue? Some would point to the early Industrial Revolution, and in particular the Luddite protests of the early 19th century. While this is the canonical historical example – so famous it added the word “Luddite” to our vocabulary – as a starting point for the debate over technology and employment, the Luddites turn out to be at least 600 years too late!

In this age of rapid technology-driven change, it is interesting to read history and see how much in human affairs actually stays constant over the centuries. There are a number of socioeconomic trends that many believe are unique to our modern era, but which were in fact also prevalent in pre-industrial times. It is also interesting to parse out the things that truly have changed over time – the genuine fundamental shifts in human affairs.

Some months ago I read Carlo Cipolla’s Before the Industrial Revolution: European Society and Economy 1000 – 1700. The book is a very interesting and enjoyable read on European economic history that never feels dry despite being a true scholarly work full of hard facts and figures. Given how wrapped up I am in the technology industry, I couldn’t help but read this book with a particular eye toward technological change, and a desire to understand society’s relationship with technology prior to the critical inflection point of the Industrial Revolution. I found many interesting and surprising ways in which pre-industrial society functioned, with respect to economic and technological affairs, very similarly to today. In this post I will highlight some of the more interesting comparisons.

I should disclaim that just because a socioeconomic phenomenon is centuries-old does not mean we should accept it as immutable or dismiss those who wish to change it. However, we should acknowledge that if something has been going on for centuries, there may be something in human nature that strongly favors its persistence, and efforts to change it will be an uphill battle. In some cases, we’d be better off learning from history and finding ways to mitigate the effects of a particular phenomenon, rather than fighting it directly.

Similarities between the pre-industrial economy and today

Labor-saving technology faced resistance due to fears of job loss
Many have pointed out that the debate over technology’s displacement of jobs is not new. However, there is still a perception that this conflict began with the Industrial Revolution. In fact, there is evidence the debate goes back to at least the thirteenth century, when water mills were being introduced into the process of making textiles. Citing Cipolla’s book:

In France the adoption of mills sparked violent protests among workers who maintained that the new technology was detrimental not only to the quality of the product but also to employment.

Interestingly, Cipolla makes the case that water mills and windmills were extremely significant precursors to the Industrial Revolution because they were the first incarnation of the harnessing of energy sources that were not living things (human, animal, or plant).

Hobbyists tinkered with technologies that offered no immediate practical benefit
The mechanical clock is a good example: when first introduced, it was actually inferior to the sundials and water clocks it would eventually replace. As Cipolla explains:

The earliest mechanical clocks kept time so imperfectly that they had to be continually adjusted, the corrections being made by “clock governors” who turned the hour hand (the minute hand appeared only a good deal later) backward or forward precisely on the basis of sundials and water clocks. The first mechanical clocks cannot therefore be regarded as substitutes for sundials and water clocks.

Mechanical clocks initially brought zero benefit over other options, but some forefather of today’s “engineer in a garage” found them conceptually interesting and was drawn to their potential.

Many important civilian technologies had origins in the military
Many technologies we use on a daily basis were developed as part of military research, such as GPS, the Internet, and even the general-purpose computer (designed primarily to calculate ballistic trajectories). Though we associate this phenomenon with the twentieth century and particularly WWII and the Cold War, it dates back much further. Again from Cipolla:

So many developments of a technological nature, from the casting of iron to the emergence of schools of veterinary science and engineering, had military beginnings… It was certainly with an eye to greater effectiveness in battle that the technical innovations in iron working and horse breeding had been first promoted. Eventually, in the course of the twelfth century, both the use of the horse and that of iron were handed down from the squires to the peasants.

Civilian adoption of iron and horses led to improvements in agricultural productivity that were certainly not envisioned by the military, much in the same way that the early computing researchers could not possibly foresee all the eventual impacts of computers on society.

Humans rapidly depleted natural resources, often in very irresponsible ways that governments could not succeed in controlling
In pre-industrial Europe, the primary environmental issue was deforestation. Public concern about deforestation dates back to at least the 13th century. In France this concern led to a series of royal and local ordinances regulating the consumption of timber. Similar statutes appeared all over Europe in the following centuries, but were generally ineffective at preventing what Cipolla describes as “parasitic and extremely wasteful” destruction of forests. England in particular experienced a timber crisis at the beginning of the 17th century, despite a number of Acts of Parliament. However, it can be argued that scarcity drove innovation, as the shortage of timber forced England to make use of another type of fuel: coal. This was one of the factors that set England on a course toward the Industrial Revolution. A contemporary at the time wrote:

There is so great a scarcitie of wood through the whole Kingdom, that not only the Citie of London, all haventowns [ports] and in very many parts within the land, the inhabitants in generall are constrained to make their fires of sea-coal or pit-coal.

There was a war for talent
Tech companies know that hiring the best is critical to success (Mark Zuckerberg famously said the best engineers are ten times more productive than average engineers). Although the term “war for talent” was apparently coined as recently as 1997, the importance of human capital was recognized by leaders even back in medieval times. Cipolla writes:

In the Middle Ages and in the Renaissance, the relevance of human capital to economic prosperity was taken for granted. Governments and princes were active in trying to attract artisans and technicians and in preventing their emigration.

For example, a welcoming immigration policy, tax breaks, and interest-free loans to artisans were all part of an economic development policy of the Commune of Bologna in 1230.

In fact, movement of human capital was often closely associated with shifting economic tides, such as the flight of professionals with “artisanship, commercial know-how, entrepreneurial spirit and, often, hard cash” from the Spanish-controlled southern Netherlands to the Northern Netherlands in the late 16th century. In the following century, the Netherlands became the most dynamic economy in Europe, while Spain went into decline, lacking human capital and heavily indebted after a military spending binge fueled by gold and silver from the Americas.

Economic booms and busts were quite common and severe
Although economic cycles tend to be associated with our modern capitalist economy, severe recessions were actually quite common in pre-industrial Europe. One of the reasons recessions were so severe was the fact that most capital was in the form of working capital, specifically inventory, rather than fixed capital like production equipment. Working capital is extremely volatile relative to fixed capital, as Cipolla explains:

When investment is in the form of working capital, disinvestment is easier: one sells existing stocks (if one can) and refrains from replenishing them. This means that when stocks and inventories make up a large fraction of the existing capital, disinvestment can be more massive and drastic than if a large fraction of investment is sunk in fixed capital.
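Cipolla’s mechanism is easy to see with a toy calculation (the numbers below are hypothetical, purely for illustration, not from the book): if inventory can be sold off quickly while fixed capital largely cannot, an economy holding most of its capital as stocks can disinvest far more sharply in a downturn.

```python
# Toy illustration (hypothetical figures): what fraction of total capital
# can be liquidated quickly in a downturn, given the inventory/fixed split.
def liquid_share(total, inventory_frac, inv_sell=0.8, fixed_sell=0.1):
    """Fraction of total capital that can be disinvested, assuming
    inventory is mostly sellable (80%) but fixed capital is not (10%)."""
    inventory = total * inventory_frac
    fixed = total - inventory
    return (inventory * inv_sell + fixed * fixed_sell) / total

# Pre-industrial pattern: capital held mostly as stocks and inventories.
print(liquid_share(100, 0.9))  # 0.73 -> deep, fast disinvestment is possible

# A pattern with capital mostly sunk in plant and equipment.
print(liquid_share(100, 0.2))  # 0.24 -> disinvestment is damped
```

The sell-off percentages are invented, but the qualitative point survives any reasonable choice: the larger the inventory share, the more massive and drastic the potential disinvestment, just as the quote describes.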

Differences between the pre-industrial economy and today

Some of the aforementioned similarities between the pre-industrial and modern economies may have given the mistaken impression that the Middle Ages was a vibrant economic period in which a modern businessman or technologist would have felt right at home. In fact, pre-industrial Europe was a generally miserable time and place to be alive, something I will expand upon when discussing the period’s abysmal living standards. But besides the prevalence of death and poverty – things that have deservedly garnered this period its reputation as the Dark Ages – what else was different?

A disconnect between science and technology
In the Middle Ages and early Renaissance, science and technology were seen as completely distinct, as embodied in the saying scientia est unum et ars est aliud (science is one thing and technology is another). Due to a combination of flawed epistemology and rigid class distinctions, science was viewed as a philosophical endeavor and not a utilitarian one. Science was the domain of upper class gentlemen funded by inheritance, or perhaps aristocratic patronage, who had no interaction with craftsmen who produced anything. The idea of commercializing academic research was completely unheard of.

In the 17th and 18th centuries this began to change with the adoption of the modern scientific method and also an emerging view that science could be directed at the material improvement of mankind. In time, the linkage between “scientists” and “artisans” was strengthened. Today we have professors using their research to found companies worth tens of billions of dollars.

The economic power of the Church
One of the things I found most alien about the economic landscape in the Middle Ages was the role played by the Church. Today, the Catholic Church is still an enormously wealthy institution, more so than I realized until I looked up the figures. The Economist estimated the Church’s annual spending in the US at $172 billion, roughly equivalent to the revenue of General Motors and not too far behind Apple. However, that’s still less than 5% of the US federal budget of $3.8 trillion. In the Middle Ages, the Church was a tremendous economic force, in many places more wealthy and influential than the government. Cipolla writes:

About 1430 the English monasteries owned about 15% of the English land, while the rest of the Church owned another 10% and the Crown only 6%.

With few productive places to invest long-term capital, fixed capital was disproportionately directed to construction of churches and monasteries:

A medium-sized city such as Pavia (Italy) in the fifteenth century, with about 16,000 inhabitants, had over one hundred churches, and this was in no way exceptional. Hospital beds, on the other hand, were so scarce that, until well into the 19th century, it was common practice in every part of Europe to place two or three patients in one bed.

The slow diffusion of technology
The instantaneous sharing of knowledge is a powerful driver of our modern economy. It enables new technology, once it is invented, to achieve pervasive adoption very quickly. It also enables aggressive recombinant innovation, supercharging the pace of new technology introductions. In pre-industrial Europe, technology spread very slowly. Remember those water mills the French were protesting in the 13th century? While they were common in some areas of Europe in the 10th century, it was not until roughly 500 years later that their use had spread across most of Western Europe. Compare that to today, when new technology can reach billions of people in a few decades or less.

Abysmal living standards and economic productivity
The most obvious difference between pre-industrial and modern economies is how far we’ve advanced in terms of living standards and productivity. Pre-industrial Europe was not a good time or place to be alive. Society was wracked by war, famine, and epidemics, and any progress came at a snail’s pace. The Industrial Revolution ushered in sustained per-capita growth in productivity and living standards, essentially bringing the exponential function to human affairs for the first time. With two hundred years of compounded annual growth behind us, GDP per capita is now on the order of 100x what it was in 1800. It is not the intent of this post to rehash how much of a miraculous discontinuity the Industrial Revolution represents in the course of human history, but studying pre-industrial Europe can be a sober reminder of how miserable life used to be.
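As a quick sanity check on that 100x figure, the implied compound annual growth rate over two centuries works out to only about 2.3%:

```python
# Annual growth rate g implied by a 100x increase over 200 years:
# (1 + g) ** 200 = 100  =>  g = 100 ** (1 / 200) - 1
growth = 100 ** (1 / 200) - 1
print(f"Implied annual growth rate: {growth:.2%}")  # 2.33%
```

A seemingly modest rate, sustained for two centuries, is the entire gulf between pre-industrial and modern living standards.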

Everyday life could be horrific to an extent difficult for a resident of a 21st century developed economy to imagine. A 17th century Italian wrote of walking in the city:

You cannot walk down the street or stop in a square or church without multitudes surrounding you to beg for charity: you see hunger written on their faces, their eyes like gemless rings, the wretchedness of their bodies with the skins shaped only by bones.

Although fertility was normally higher than mortality, pre-industrial Europe was characterized by frequent catastrophes that would wipe out any population growth achieved in “normal” times. There was a total lack of an economic safety net:

In a year of bad harvest or of economic stagnation, the number of destitute people grew conspicuously. We are accustomed to fluctuations in unemployment figures. The people of pre-industrial times were inured to drastic fluctuations in the number of beggars.

The cities of pre-industrial Europe – even major ones such as London – were such death-traps that they actually had higher mortality than fertility, and only grew because of continued inflow of migrants from the countryside. Disease was of course a major contributor to mortality. Plague was a more or less regular feature of the landscape, repeatedly flaring up in various pockets of Europe over the centuries. One particular plague, the Black Death, killed 25 million people out of a total European population of 80 million in the short span of 1347-1351.

Historical Lessons?

I believe the similarities between pre-industrial and industrial economies illustrate that there are certain facets of human nature that will always influence society: resistance to technological progress when it impacts one’s livelihood, the importance of hobbyists and recreational tinkerers, the power of fear (of one’s enemy) to drive innovation, the difficulty in controlling resource consumption at a societal level, the importance of human capital, and the ups and downs of the business cycle. Yet, whether in spite of or because of these factors, paradigm shifts in human affairs are possible. This makes the future reassuringly difficult to predict.

I don’t spend a lot of time thinking about macroeconomics or monetary policy. I did spend part of my career in public market investing, but I realized that the operation of businesses and the power of technological innovation were more compelling to me than interest rates and macroeconomic indicators. Every so often, however, I have a train of thought that spans both the world of tech startups and the world of macroeconomics. Listening to Mohamed El-Erian discussing Fed policy on the radio yesterday resurfaced some thoughts I periodically turn over in my head about the experimentation occurring at the Federal Reserve, and how it compares with the experimentation engaged in by the technology innovation ecosystem. The two types of experimentation have markedly different risk profiles, and they make me wonder how we can introduce low-risk experimentation into government policymaking.

It is widely acknowledged that the policy actions taken by the Fed and other central banks since the financial crisis have been unprecedented and experimental. Since 2009 the Fed, through its quantitative easing program, has purchased about $3 trillion in mortgage-backed securities, federal agency debt, and long-term Treasuries. The sheer scale of this program is difficult to comprehend. It is true that history holds examples of central bank balance sheet expansions, in the US and elsewhere, that were comparable in magnitude to the recent post-crisis expansion, but these were usually to finance wartime spending as in WWI and WWII. Today’s massive balance sheet expansion is intended to stimulate demand and employment, which has little or no precedent in history. In addition, the Fed has stated its intention to eventually unwind this massive expansion, which has no precedent because the Fed did not shrink its balance sheet after WWII.

No one knows exactly what consequences the Fed’s actions will have. There are many who argue strongly that they’re certain of what the consequences will be (rampant inflation, etc.), but these commentators are playing armchair quarterback. The only thing one can say for sure is that a $3 trillion experiment is likely to have some unforeseen results. The scale of the experiment means the consequences, wherever they occur, are unlikely to be trivial. The full impact may not be observed for many years to come. Fed policy doesn’t just impact the US economy, either. Given the US dollar’s status as the world’s reserve currency, the impacts are likely to be global and possibly even greater in developing countries than at home in the US.

Startups do lots of experimentation as well. Like the Fed in recent years, startups make big, bold bets in the face of tremendous uncertainty, and there is little precedent for what they are doing. From this experimentation emerges lots of failure and lots of brilliance. Yet the innovation ecosystem is markedly different from the world of monetary policy, because the costs of failure are well-contained (without constraining the upside of successful experiments). A failed startup has zero systemic impact on the technology industry as a whole. Venture capital portfolios are set up to accommodate failure; it is expected that many startups won’t succeed. A loss on one investment is made up for by gains on another. A failed startup is more difficult for the entrepreneurs involved, because they lack the diversification enjoyed by their VC backers. They pour themselves wholeheartedly into a single venture. Yet thankfully, our tech industry is so dynamic that a failed entrepreneur can typically rebound quickly and either move on to a new startup or work for a bigger company. In today’s environment of acqui-hires, failed startups can even make a return for their founders and investors, though this feels more like a cyclical rather than structural feature of the industry.

Within this environment that is so highly robust to failures, aggressive experimentation is encouraged and rewarded. The Lean Startup philosophy has provided a framework for conducting these experiments as efficiently as possible, and technologies ranging from infrastructure-as-a-service to 3D printing help accelerate the process by shortening the feedback cycle.

By contrast, central banks are not robust to failure. A failed monetary policy experiment can have a detrimental impact on the entire global economy and billions of people in some form or another. Experimentation at the Fed lacks the characteristics that make experimentation by startups such a powerful force for improving standards of living, namely that the costs of failure are borne by a few but the potential benefits to society as a whole are unbounded.

To me, this raises an important question: how can governments and central banks conduct policy experiments in which the costs of failure are contained? Policies must evolve over time, and there are surely beneficial but untested policies out there. It would seem that a logical method is to try out policies on a local level and adopt the most successful policies on a larger scale. The legalization of marijuana in Colorado and Washington is a good example of local policy experimentation that could pave the way for a national policy shift. Unfortunately, I’m not aware of any ways the Fed can conduct legitimate policy experiments where the cost of failure is contained, but perhaps they exist or will exist in the future.

At the risk of sounding like I’m trumpeting the tech industry as a model for the way everything should work, it seems like governments and maybe even central banks could benefit by adopting systems that permit low-risk experimentation. I am sure there are others who have thought about and studied this more than I have. I look forward to a day when policymaking is both more innovative and less systemically dangerous.

I have been spending a lot of time investigating the OpenStack ecosystem recently, including a journey to Hong Kong for the OpenStack Summit. In this post, I intend to present some thoughts and observations on the state of OpenStack and why the momentum it has gained recently could mark the cusp of a very important transition in the IT industry. I don’t intend to describe what OpenStack is, so check out http://www.openstack.org/ if you are new to the game.

Here is the TL;DR summary:
1. OpenStack has picked up a lot of momentum in the last 6 – 9 months. This is significant because if the momentum continues, OpenStack could prove to be a classic disruptive technology, the kind of disruptive force that makes venture capitalists excited and incumbent vendors fearful.
2. OpenStack and truly dynamic cloud architectures represent a fundamental rethinking of IT infrastructure, not just an evolution of existing static virtualization infrastructure.
3. How effectively VMware and other incumbents navigate this transition is an important open question in the industry. My expectation is the cracks in VMware’s armor will become increasingly apparent moving forward.
4. There will be many opportunities for new and innovative vendors to capitalize on OpenStack and the shift to dynamic cloud architectures. Thinking futuristically, I am interested in the potential for new technologies to replace today’s hypervisors and perhaps even operating systems.

The Cloud Shift

OpenStack has gained a lot of steam in the last 6 – 9 months due to a confluence of growing support from heavyweight IT vendors, crossing the “production grade” maturity threshold, and the growing competitive pressures on certain classes of enterprise and service provider to rethink their infrastructure capabilities. If OpenStack is a long-term success, we will likely look back on this period as the key inflection point. It is still early days. When I first started trying to estimate the scope of existing OpenStack deployments, some people told me there were perhaps 10k – 20k physical servers in production. I think the actual number is at least 40k, but this number is still not huge in the grand scheme of things. More significant is the number of PoCs underway, including some in mainstream enterprises like insurance companies rather than just early adopters like service providers. The OpenStack professional services firms are going gangbusters as the market tries to figure out how to get started with this technology (which points to one of the near-term challenges: OpenStack is still too hard to deploy).

The cloud architecture embodied by OpenStack represents a fundamental rethinking of IT infrastructure, even though some CIOs still fantasize that the cloud is merely an evolution of their existing virtualization infrastructure, and some vendors still fantasize that they can get by with merely “cloudwashing” their existing products. Amazon Web Services pioneered how a true cloud infrastructure should behave: completely self-service, rapidly and automatically provisioned, elastically scalable. They also set an example for how this infrastructure should be implemented: with pools of commodity hardware managed by powerful distributed software (not that other web-scale companies like Google and Yahoo weren’t exploiting this model in some fashion as well). The fact that such an architecture makes heavy use of server virtualization is just about the only major similarity with existing enterprise IT infrastructure. In a true cloud architecture virtual machines are short-lived, run on cheap commodity hardware, and are highly tolerant of the failure of any individual node. In a traditional static virtualization environment, VMs are long-lived, run on expensive dedicated hardware, and have the expectation that their dedicated hardware will be nursed back to health if it goes down.

More enterprises and service providers are waking up to the competitive reality that they need to either use AWS or make their infrastructure work like AWS. Not for cost reasons, but for revenue reasons – they need to launch new services rapidly to remain competitive. OpenStack is a way to accomplish the necessary infrastructure transformation. It isn’t easy, and not just because the technology is new, but also because such an architecture requires new organizational processes, and people are slower to change than technology. Nevertheless, there are many signs that the cloud model is the future of infrastructure, both public and private.

The Incumbents’ Dilemma

If this is the case, a billion-dollar question is: how will the incumbent vendors navigate this transition? Many large incumbents have something to lose here, such as those who have built their businesses selling big centralized hardware products that are now threatened by the paradigm of commodity hardware pools. But the one I am really watching is VMware. The data center is becoming software-defined, and VMware is a software company, which would seem to position it more favorably than the hardware incumbents. VMware has a very dominant position in server virtualization, which it is wisely leveraging to become more entrenched in business processes. But again, virtualization is just about the only similarity between a traditional enterprise VMware environment and a true cloud. Neither VMware’s technology nor its economic model is geared toward a massive number of short-lived VMs running on pools of expendable commodity hardware. Its pricing model doesn’t work in this new paradigm, and the escalating costs of running a VMware stack are causing a number of enterprises to start at least evaluating alternatives to avoid lock-in.

In many ways, OpenStack is a classic disruptive technology to VMware: it starts off inferior on many axes that matter to the mainstream, and VMware’s bread and butter accounts won’t touch it with a 10-foot pole, yet with time it could become superior to VMware even on the traditional axes. There is still growth for VMware at the high end of the market, where the most mission critical workloads have yet to be virtualized, and accordingly VMware is putting much of its energy into tapping this remaining bastion of growth. Yet concurrently, the low end workloads like test & dev (where VMware itself got its start) are starting to be placed on OpenStack-based clouds. As VMware moves up market, the low end is being eaten out from under it in a classic disruptive wave. We are starting to see some organizations bifurcate their infrastructures, keeping legacy static workloads on a traditional stack powered by incumbent vendors, and placing new dynamic workloads on clouds powered by OpenStack and new scale-out technologies like Ceph. With more maturity, OpenStack could eventually become superior to the legacy stack even for mission critical workloads, and the disruptive cycle will come full circle.

Does VMware comprehend the risk, and understand what it must do to adapt? My sense is the leadership at VMware gets it. They are smart people. The acquisitions of Nicira and DynamicOps were in some ways an acknowledgement of the changing landscape. But translating acknowledgement into execution all the way down through the organization is very hard. How many big companies are able to successfully navigate such a major transition and come out ahead, as opposed to simply preserving the legacy business for as long as possible while ever-so-slowly being supplanted? How many incumbents successfully navigated the smartphone and tablet revolution, for example? My expectation is the cracks in VMware’s armor will become increasingly apparent moving forward.

Opportunities Ahead

There are many risks to OpenStack, such as its decentralized structure as an open source project and the current tendency to value new features over unglamorous things like stability. However, whether or not it ends up being OpenStack, some form of cloud management system will ultimately make a big mark on the industry, and right now OpenStack is the clear lead contender.

This is an exciting time for OpenStack and the cloud. There will be many opportunities for new and innovative vendors to capitalize on the paradigm shift that the cloud represents. One area I think is particularly interesting is the emergence of new computing models that replace virtual machines entirely. We are already seeing a lot of interest in things like Docker and various other containerization approaches. Recently Rackspace acquired ZeroVM, which creates distributed VMs that can tap into massive distributed processing power rather than that of a single host. Moving yet further down the stack, not even Linux is safe in this new world, with cloud-optimized operating systems like OSv emerging. These ideas could be the topic of entire blog posts in themselves, so I will leave things here. I look forward to discussing these ideas further with anyone interested.