Monday, 17 October 2011

The Law of Disruption Occupies Wall Street

From Tea Party activists to Wall Street occupiers; from the Middle East to Europe and back. We’re seeing passionate and sometimes violent reactions to the slow pace of institutional change.

Citizens are calling foul on political and social institutions that no longer reflect their values, using technologies, tools, and devices invented in the last decade to organize, coordinate, and speak.

Around the world, protesters are writing their manifestos on WordPress, arranging marches using Facebook, and chanting on Twitter. Their weapons of choice are smartphone apps, mobile broadband, and social networks. (In most cases, it’s well worth noting, these technologies were designed for entirely different purposes, or perhaps with no particular purpose in mind.)

Technology is not only the agent of change; it is also the catalyst. Indeed, I see all of these movements as fallout from what I call the Law of Disruption, a principle of modern life that becomes more determinative as new technologies enter the social bloodstream ever faster. Even though I’ve been writing this column for several months, I’ve never explained what the Law of Disruption is. Now seems like a good time to correct that omission.

The Law of Disruption can be stated simply: Social, political, and economic systems change incrementally, but technology changes exponentially. In the widening gap between the potential change technology makes possible and the actual change existing institutions achieve, the likelihood of surprising, radical, and unintended shifts is fast increasing.

Borrowing a term from venture investing, in 1998 I called these surprises “killer apps,” a phrase I intended to be provocative. If existing institutions didn’t learn to move faster, to adapt more quickly, to make more creative use of new technology, they stood to be victims. Not so much of start-up businesses as of the technology itself, operating through entrepreneurs.

It was like the old joke about the two campers who hear a bear rummaging around outside their tent at night. “Why are you putting your shoes on?” the one camper asks the other. “You can’t outrun a bear.” “I don’t have to outrun the bear,” the other camper replies. “I just have to outrun you.”

After a decade of operating principally on business and economic systems, the Law is now shifting its focus to law and government. To see what’s coming in the next decade, it’s useful to begin with a review of what’s already happened in the last one.

The Persistence of “Normal Science”

I first described the Law of Disruption in my 1998 book “Unleashing the Killer App.” Reviewing dozens of early Internet start-ups that were wreaking havoc on the business models and supply chains of established “brick-and-mortar” industries, I realized that what drove the innovators most was not so much their big ideas or even their youth. It was the accelerating pace of technological change.

The acceleration was in turn a function of Moore’s Law—Intel co-founder Gordon Moore’s 1965 prediction that computing power would double every 12 to 18 months even as price held constant. Later work suggests Moore’s Law applies equally to other key drivers of the Internet revolution, including communications speeds and data storage. Together, the relentless push toward faster, cheaper, and smaller computing made change possible at an exponential pace.
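The gap this creates is easy to underestimate, because exponential doubling quickly dwarfs the few-percent-a-year improvement typical of institutions. A toy sketch (the 18-month doubling period and 3% institutional growth rate are illustrative assumptions, not figures from the text) makes the arithmetic concrete:

```python
# Toy illustration: exponential technological change vs. incremental
# institutional change, per the Law of Disruption.

def moores_law_capacity(years, doubling_months=18):
    """Relative computing power after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

def incremental_change(years, annual_rate=0.03):
    """Relative institutional capacity, growing a few percent per year."""
    return (1 + annual_rate) ** years

for years in (5, 10, 20):
    tech = moores_law_capacity(years)
    inst = incremental_change(years)
    print(f"after {years:2d} years: technology x{tech:>8.0f}, "
          f"institutions x{inst:.2f}, gap x{tech / inst:,.0f}")
```

Over Kuhn’s twenty-year generational horizon, the technology side of the ledger grows by a factor of roughly ten thousand while the institutional side doesn’t quite double—the widening gap in which, as the Law predicts, radical and unintended shifts accumulate.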

So why, I wondered, did actual change occur so much more slowly? And why, in particular, were the most entrenched institutions—including government and business—the least able to take advantage of the revolutionary potential of new technologies?

The answer, oddly enough, came from MIT historian Thomas Kuhn. Just a few years before Gordon Moore’s first articulation of Moore’s Law, Kuhn published the first edition of his seminal work, “The Structure of Scientific Revolutions.”

Looking over the history of major changes in scientific thinking—what Kuhn termed “paradigm shifts”—he found a recurring pattern of resistance, counter-revolution, and, finally, acceptance.

Kuhn uses the example of Copernican astronomy, which Galileo supported with observations through his new telescope. Astronomers (and others, including the Vatican) had a vested interest in a view of the solar system in which all celestial bodies, including the Sun, revolved around the Earth. Galileo’s evidence to the contrary needed to be explained away, even when doing so required revisions to the old model that eventually made it look absurd.

Scientists who had been trained as students in the particular dogma of their field—the paradigm—could not be expected to embrace a radically different one, even as evidence mounted for a model that better approximated reality.

That’s because what scientists are trained to do is not to think big thoughts so much as to refine the dominant paradigm—what Kuhn called “normal science.” Look at the professional journals of physicists, economists, biologists, and other scientists, and you’ll quickly realize that most academic research reflects normal science: small experiments, gaps in the literature filled in, tiny adjustments to an existing model of how some aspect of the world works.

In fact, Kuhn goes on, a true paradigm shift tends to take at least twenty years to become the new normal, even after the evidence has become overwhelming. Why twenty years? That’s the amount of time, Kuhn concluded, for the existing generation of practicing scientists to retire or die off.

The current generation, in other words, never makes the shift to the new paradigm; it’s only when the next generation takes over the field that the old paradigm—encoded in textbooks, maps, experiments, and training materials—can be discarded.

Looking at business and government reaction to technological revolutions, particularly in information technology, I came to the conclusion that Kuhn’s work had broader application than just the sciences. CEOs, legislators, judges—all are likewise trained in the dominant paradigm of their age (increasingly at graduate business and law schools).

Like scientists, they spend their careers in the “normal science” of working within the paradigm to achieve modest improvements and relative efficiencies. A few more percentage points of market share, more focused incentives and penalties, clearer statements of rules—these are the normal science of social institutions.

The Computing Revolution’s True Nature

When revolutionary change occurs, social institutions likewise resist, struggling mightily to explain away a new reality in the language of the old way of doing things. Take information technology. Business computing began in 1954 with the first Univac installed for commercial use—a payroll system for General Electric.

Following that model, computers were long seen as tools for automating existing business practices, offering improved efficiency but not competitive advantage.

In the 1970s, mainframe computers running back-office accounting and manufacturing applications became a cost of doing business—a source of productivity improvement, but one whose benefits were largely competed away and passed on to customers as lower prices. No one saw computers as revolutionary tools for redefining customer interactions—at least, no one inside large corporations.

But something unexpected happened. Personal computers moved from the bottom of the food chain to the front line of experimentation, pulling the information their users wanted rather than pushing it back up for consolidation and summarization. Spreadsheets and other “what if” tools became the transitional killer apps, putting computing power in the hands of users to do with as they pleased, not as they were told.

Then followed the explosive growth of the Internet, a non-proprietary data communications protocol that took full advantage of Moore’s Law. Initially, it was ignored by business and policy leaders alike.

IT departments, well drilled in the “normal science” of incremental improvements and low-risk investing, dismissed it through the early 1990s as an academic or at best scientific computing tool, not fit for high-volume, high-reliability transaction processing. Technically, they were right. TCP/IP was an inferior networking standard compared to proprietary architectures such as IBM’s SNA and Digital’s DECnet.

That, of course, assumed that the purpose of computing was to codify and automate existing hierarchies and one-way communications. As with all revolutions, the true potential of Moore’s Law wasn’t realized until a new generation of entrepreneurs, venture investors, engineers and—perhaps for the first time—users began to experiment with the Internet, not as a tool for automation but as a technology first and foremost of collaboration.

The Internet, and the devices and applications that sprang up to take advantage of it, allowed for a remarkable range of new kinds of interactions in every conceivable supply chain—whether that meant product design and customer service, government transparency and accountability, or new forms of family and personal relationships embodied in social networks.

Once those new interactions were discovered, they moved quickly from the frontier back to mainstream life. Customers now demanded access to business information. More than that, they demanded the right to express their views on how products and services performed—and how they ought to be improved. Social, ecological, and open-access values were articulated. Markets emerged to supply these and other aspirations—markets that might never have taken shape without disruptive technologies to help define new demands.

Shift Happened

Since the publication of “Unleashing the Killer App,” the revolutionary nature of Moore’s Law has only become more pronounced. In good economies and bad, booming and busting stock markets, through political upheaval and social change, computing continues to drive deeper into human experience, enabling change even as it redefines the nature of interactivity.

Along the way, many paradigms have been challenged, with predictable responses from those most closely tied to their propagation. In business, I observed CEOs frustrated both by the ability of start-ups to capture the imagination and loyalty of new customers and by their own paralysis in responding, let alone initiating. Not surprisingly, that frustration was particularly acute in industries that had long been stabilized by regulation (airlines, communications, utilities, financial services) or by cartel (lawyers, doctors, and other professional service providers).

Even when industries were granted dramatic deregulatory freedom, the old paradigm persisted. Ironically, one of the toughest obstacles to change was existing computer systems, which had embodied obsolete business practices and information flows in inflexible software code that no one was brave enough to hack.

In my role as shaman of the killer app religion, senior executives regularly confessed to me that they simply couldn’t change their way of looking at the business. In the end, faced with the inevitability of disruptive change, they wanted simply to last long enough to retire and let the next generation figure out what to do. (My advice to those executives was to retire as soon as possible, which some of them, to their credit, actually did, although never soon enough.)

Traditional businesses had many valuable assets that could be leveraged in competition with the start-ups. That was the good news. The bad news was that the valuable assets weren’t the physical ones that determined success in the industrial age. Few business leaders were willing to accept that the trucks, printing presses, retail locations and other physical plant that dominated the balance sheet had become liabilities overnight.

But online commerce turned the value proposition upside down. Shopping at home was more convenient than any retail experience, especially in an era where low unemployment translated to incompetent customer service at the point of sale. Information goods—including news, entertainment, and money, for starters—could begin and end life as bits, traveling cheaply and instantly over phone lines.

The real value for the incumbents was trapped in what I called the “hidden balance sheet”—the transaction data, expertise, and relationships carelessly filed away in the aptly named data warehouse. Intelligence about customers and suppliers, deep industry expertise, and brands to which only lip service was paid were the truly valuable assets of the brick-and-mortars.

Few businesses found them in time. Nipping at the heels of every slow-moving Blockbuster was a reckless Netflix, able to cancel out the advantage (if any) of an existing customer base with the decreasing cost of new user acquisition made possible by viral marketing and cheap broadband. And customer loyalty proved chimerical, especially when businesses tried to secure that loyalty through closed systems and product lock-in.

Either way, in some industries more than others, the paradigm shift occurred, leaving the existing participants at best reconfigured and at worst out of business.

Often, the process took a long time, but the result was never in much doubt. When Amazon launched its website in 1995, it audaciously called itself “Earth’s Biggest Bookstore.” Barnes & Noble sued on the ground that calling itself a “store” was false advertising, because Amazon had no retail outlets.

That, of course, was the point of e-commerce. But Barnes & Noble and other book retailers (like retailers in other categories) were more comfortable suing to protect the old paradigm than finding ways of leveraging their existing assets to compete in a new reality.

Here the law proved a valuable ally, to slow if not to stop the disruption. Copyright, patent, antitrust, and other bodies of industrial law were called to duty, applied not to their traditional problems but to stop technological progress itself, by any means necessary. Napster, MP3.com, and even Microsoft were stalled or destroyed. But YouTube, iTunes, and Google were waiting in the wings. Technology, as always, adapted faster than law.

Now, less than twenty years later (take that, Kuhn!), the book business has changed utterly, leaving many casualties. Traditional publishers are still struggling to find their place in the new order, and continue to resist the move from physical to electronic—first of the distribution of books, and now the books themselves.

How do online sales fit in the making of a bestseller list? How should e-books be priced so as not to cannibalize hardcovers? You can hear the old geocentric astronomers at work, tugging at their beards as they fretfully erase and redraw the orbits of the planets to avoid the new reality. There’s a new center of the universe, and it ain’t the Earth.

But, at the same time, what has emerged is a far more convenient and cost-effective experience for customers. Both my oldest and my youngest friends have adapted quickly to the Kindle’s winning combination of low cost, light weight, readable text, and a virtual library available through the Internet. The sentiment and status attached to the physical book—an artifact of an era when books were scarce and literacy a sign of wealth—are fading fast.

Amazon, meanwhile, hacked its own systems, and allowed itself to be taken by the tidal wave of change to wherever Moore’s Law led it. The company morphed quickly from selling books in a new way to selling everything in a new way, and from there to recognizing itself as a platform—as software—that could be leveraged not just to other merchandise but to other merchants.

The company now offers cloud computing services, extending the platform beyond merchandising to any complex set of interactions. With the breathtaking success of the Kindle and its successor products, the company has taken the next step in its accelerating evolution, becoming a platform not just for other product categories and other businesses but also for its customers.

The Policy of Disruption

Since 2007, I have been increasingly focused on applying the Law of Disruption to regulation and policy. Business, for better or worse, is well along the path to change. Law is not. In 2009, I published “The Laws of Disruption,” looking at the ten most intense legal battles at the border of traditional existence and digital life. These included privacy, copyright, antitrust, crime, patents, infrastructure, and human rights.

Fights over how to rewrite these sinking bodies of industrial law for our increasingly virtual lives have only intensified in the last two years, and in many ways are converging into a general revolution.

Grumblings over one-sided terms of service, limits on remixing content, government surveillance and excessive patent protections have sharpened into movements and advocacy, including Creative Commons, the Electronic Frontier Foundation, and TechFreedom, a new policy think tank aimed at limiting all forms of regulatory interference with innovation. (I work with TechFreedom as an adjunct fellow.)

Despite what existing governments may think, anarchy is not the only alternative to their continued monopoly. Rather, the revolutionaries—sometimes groping, sometimes articulate—are striving for a new social contract, one based on the unique social and economic properties of information.

The problem with existing law is baked right into the founding of the modern state. Democratic systems of government, after all, are designed to change slowly and deliberately, through separation of powers and checks and balances that ensure the passions of the day are tempered with wisdom before significant change occurs.

The business of government is truly normal science—a good day in Washington is a day in which absolutely nothing happens. And for the most part, when it comes to the regulation of innovation, doing nothing is the best way to help.

Governments do best when they establish a healthy environment for entrepreneurs—avoiding taxation of emerging industries, establishing markets that function with minimal transaction costs, incentivizing long-term research and investment and encouraging self-regulation of dynamic industries.

Safe harbors, including Section 230 of the U.S. Communications Decency Act, which protects online publishers from lawsuits over third-party content, establish clear (or clearer) boundaries for acceptable behavior, reducing the risk of failure for new ventures. California’s refusal to enforce most non-compete clauses allows talent to flow where it needs to go without undue friction, perhaps a key (but largely unsung) factor in the success of Silicon Valley over other high-tech geographies.

Governments do their worst when they try to intervene and micromanage fast-changing realities, especially when those realities are being shaped by technologies with which they have no experience or expertise. For then they are fighting the Law of Disruption, asking technology to change at the pace of the modern bureaucratic state. It’s a doomed combination, like keeping one foot on the dock and the other in a speedboat.

In the last few years, I’ve participated in dozens of hearings and meetings on Capitol Hill to talk about regulating “the Internet.” There’s a bizarre and worrisome ritual at these meetings. Elected officials begin the conversation by confessing they’ve never used the products and services they proceed to praise or condemn. They feel obliged to act, they say, because they know their children are using them all the time. Why do they take such pride in their ignorance? And what are they really worried about?

The result isn’t surprising. The last decade in particular is littered with failed efforts to “solve” problems of online life that regulators didn’t define or even understand in the first place. At the federal, state, and international levels, we have a body of worthless law aimed poorly at a range of early artifacts, including spam, spyware, identity theft, privacy, pornography, gambling, intellectual property, bullying, and net neutrality.

Many of these issues turned quickly into other issues; some were solved by new technology, or by joint actions of users and providers. Some got worse.

In every case, new laws and new regulations did nothing to help. But they are hardly inert. Laws and rules are fixed in time in ways that technology is not. So even the best-intended laws can, and increasingly do, have unintended consequences later on, often exacerbating the very problems they were meant to solve.

The Electronic Communications Privacy Act (ECPA), a 1986 law on electronic surveillance, has never been updated, leaving most data stored in the cloud seizable without a warrant by law enforcement agents. The Computer Fraud and Abuse Act, a statute aimed at protecting government computers from hackers, has been warped to impose criminal sanctions for violating the terms of service of social networking sites. Expect more, not fewer, of these perversions.

The Revolution Will Be Tweeted

As growing resistance to today’s political institutions suggests, governments have yet to embrace the reality of technology-driven paradigm shifts. Citizens and consumers alike are making their own rules, writing their own laws, and drafting their own constitutions for digital life. Some are working constructively on the new; others are more focused on dismantling what they don’t like.

Elected officials would be wise to heed the lessons of history: Don’t obsess over the speck of dust in your neighbor’s eye when you have a log in your own. Give evolving forms of governance the benefit of the doubt. Embrace the change and the technology that’s causing it. Or retire, quickly.

As economic, social, and political life migrates to the Internet, governments increasingly feel the gravitational pull to follow. And there is a role for government in the information economy.

As our digital paradigm evolves, we’ll need wise leaders and sound law to preserve the order of digital society. The sooner policymakers learn to stop fighting Moore’s Law and leverage their true assets, the more likely existing institutions of governance will find a meaningful place in the new reality.

That, in any case, is the common theme of the revolts and protests happening around the world this year. Though they have different origins and different grievances, each manifests the Law of Disruption in its frustration with incremental change and unintended consequences.

The stakes are higher now than they were when I first formulated the Law of Disruption. In politics, unlike business, violent revolution is always the last resort of the wielders of killer apps. That’s an outcome we need to avoid as much as possible.