People often say that cloud computing is just a reinvention of the service bureaus of old. Rather than being a forward advance, they suggest, accessing all our applications and data from vast, centralized cloud platforms is a lurch backwards to the time-honored mainframe model of computing. But perhaps instead cloud computing will force a reinvention of the mainframe. History has a habit of repeating itself in unexpected ways.

The cornerstone of cloud computing is multi-tenancy, in which a single infrastructure or application instance is shared simultaneously among many different customers. People sometimes claim that multi-tenancy itself is a principle invented long ago by mainframe software engineers. Of course the validity of the assertion depends on your definition of multi-tenancy. There’s no doubt that virtualization is a technology with a deep mainframe pedigree, introduced in the 1960s to allow ‘time sharing’ of mainframe computing resources.

As Wikipedia explains, time sharing meant more efficient use of these expensive machines: “in most cases users entered bursts of information followed by long pauses, but a group of users working at the same time would mean that the pauses of one user would be used up by the activity of the others.”

The service bureaus that sprang up in the 1960s took advantage of time sharing technology to make expensive computing resources available on a pay-for-use basis to companies that couldn’t otherwise afford them. Today’s cloud computing providers use large-scale virtualization of commodity x86-based server farms to make highly scalable computing available on a low-cost, pay-for-use basis. In making those resources vastly more affordable, they’re repeating the achievement of the service bureaus. But the cloud computing and applications stack allows them to offer a far richer selection of capabilities.

In cloud applications, the multi-tenancy reaches all the way down into the database, time-sharing individual tables, as well as out across the entire environment, using complex policy settings to time-share integrations and customizations. Although I can’t claim to be an expert on the inner workings of modern mainframes, I would venture to suggest that this is at an entirely different level of sophistication than traditional time-sharing architectures.
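To make the database-level part of that concrete, here is a minimal sketch of shared-table multi-tenancy: every row carries a tenant identifier, and every query is scoped by it, so many customers effectively time-share one physical table. The table and tenant names are invented for illustration, and real cloud platforms layer metadata-driven schemas and policy engines on top of this basic idea; this is only a sketch of the principle.

```python
import sqlite3

# One shared physical table; the tenant_id column is what keeps
# each customer's rows logically isolated from the others.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        tenant_id  TEXT    NOT NULL,
        account_id INTEGER NOT NULL,
        name       TEXT    NOT NULL,
        PRIMARY KEY (tenant_id, account_id)
    )
""")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    [("acme", 1, "Main"), ("acme", 2, "Ops"), ("globex", 1, "Main")],
)

def accounts_for(tenant_id):
    # The tenant filter is applied on every access: one table,
    # many isolated views of it.
    return conn.execute(
        "SELECT account_id, name FROM accounts WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(accounts_for("acme"))    # acme's rows only
print(accounts_for("globex"))  # globex's rows only
```

Each tenant sees only its own slice of the shared table, which is the sense in which multi-tenancy “reaches all the way down into the database.”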

What I do find interesting, though, is where cloud computing seems to be leading some of the larger cloud providers in their hardware choices. Even though users of cloud computing resources don’t have to care about the underlying hardware, providers increasingly do. Google, for example, has its servers built to a custom design because there are huge savings to be made on power bills by changing some of the components ordinarily shipped with general-purpose PCs. Meanwhile, leading chip designers including Intel and ARM have now begun to work on new generations of low-power microprocessors that will further reduce the power and cooling needs of cloud data centers. So while software engineers work to maximize usage of the underlying computing, hardware engineers are battling to minimize its operating costs, further improving affordability.

There’s a parallel trend towards more specialization within cloud computing infrastructure, matching the capabilities of underlying hardware to specific application types. For high volume analytics, for example, a cloud provider might offer an infrastructure that’s designed for large-scale in-memory processing, whereas for high-volume transactions, there may be more emphasis on the speed and robustness of the connection between processors and data storage. Once cloud providers are operating at large enough scale, it may make sense to have different hardware platforms for different computing tasks and then be able to charge customers extra for using those higher-performing resources.

The potential outcome of these developments is an increasing specialization of cloud infrastructure platforms, which some might well view as a return, full-circle, back to the principle of mainframe computing. Like the mainframes of old, these huge constructs of digital machinery will fill entire rooms rather than single racks, and they will be built from specialized components designed specifically for the purpose. The big difference is that, whereas in the old days the service bureaus were very much in a minority, in the future it is providers rather than individual enterprises who will be the dominant operators of these cloud-scale computing platforms.