
The Computing Revolution’s True Nature

When revolutionary change occurs, social institutions likewise resist, struggling mightily to explain away a new reality in the language of the old way of doing things. Take information technology. Business computing began in 1955 with the sale of the first Univac for commercial use—a payroll system for General Electric.

Following that model, computers were long seen as tools for automating existing business practices, offering improved efficiency but not competitive advantage.

In the 1970s, mainframe computers running back-office accounting and manufacturing applications became a cost of doing business, a source of productivity improvement that was largely competed away into lower costs for customers. No one saw computers as revolutionary tools for redefining customer interactions, at least no one inside large corporations.

But something unexpected happened. Personal computers moved from the bottom of the food chain to the front line of experimentation, pulling the information their users wanted rather than pushing it back up for consolidation and summarization. Spreadsheets and other "what if" tools became the transitional killer apps, putting computing power in the hands of users to use as they wanted, not as they were told.

Then followed the explosive growth of the Internet, built on non-proprietary data communications protocols that took full advantage of Moore's Law. Initially, it was ignored by business and policy leaders alike.

IT departments, well drilled in the "normal science" of incremental improvements and low-risk investments, dismissed it through the early 1990s as an academic or, at best, scientific computing tool, not fit for high-volume, high-reliability transaction processing. Technically, they were right. TCP/IP was an inferior networking standard compared with proprietary architectures such as IBM's SNA and Digital's DECnet.

That, of course, assumed that the purpose of computing was to codify and automate existing hierarchies and one-way communications. As with all revolutions, the true potential of Moore's Law wasn't realized until a new generation of entrepreneurs, venture investors, engineers and, perhaps for the first time, users began to experiment with the Internet, not as a tool for automation but as a technology first and foremost of collaboration.

The Internet, and the devices and applications that sprang up to take advantage of it, allowed for a remarkable range of new kinds of interactions in every conceivable supply chain—whether that meant product design and customer service, government transparency and accountability, or new forms of family and personal relationships embodied in social networks.

Once those new interactions were discovered, they moved quickly from the frontier back to mainstream life. Customers now demanded access to business information. And more, they demanded the right to express their views on how products and services performed—and how they ought to be improved.

Social, ecological, and open-access values were articulated. Markets emerged to supply these and other aspirations; markets that might never have taken shape without disruptive technologies to help define new demands.