Cloud "operating systems." Cloud frameworks such as OpenStack -- and integrated solutions such as Microsoft's Cloud OS or VMware's vCloud -- provide the means to orchestrate all those virtualized infrastructure resources and offer them to users on a self-service basis. Such automation is essential for auto-scaling, where a cloud automatically pours on infrastructure resources as user demand for an application increases. Amazon has its own proprietary system, which the private cloud solution Eucalyptus emulates, while HP and Rackspace offer OpenStack public clouds. SoftLayer, acquired by IBM in the summer of 2013, also offers some OpenStack public cloud services, which it plans to ramp up this year.
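The auto-scaling behavior described above boils down to a simple control loop: watch a load metric, add capacity when it crosses a high-water mark, release capacity when it falls below a low-water mark. A minimal sketch of that logic, with hypothetical thresholds and function names (no vendor's actual API):

```python
# Sketch of the decision logic behind cloud auto-scaling: compare a load
# metric against thresholds and adjust the instance count. All names and
# thresholds here are illustrative assumptions, not any cloud's real API.

def scale_decision(current_instances, avg_cpu_percent,
                   scale_up_at=75, scale_down_at=25,
                   min_instances=1, max_instances=10):
    """Return the instance count for the next control interval."""
    if avg_cpu_percent > scale_up_at and current_instances < max_instances:
        return current_instances + 1      # demand rising: add capacity
    if avg_cpu_percent < scale_down_at and current_instances > min_instances:
        return current_instances - 1      # demand falling: release capacity
    return current_instances              # within band: hold steady

print(scale_decision(3, 90))  # → 4
print(scale_decision(3, 10))  # → 2
print(scale_decision(3, 50))  # → 3
```

Real cloud frameworks wrap this loop in policies (cooldown periods, step sizes, scheduled scaling), but the core is the same threshold comparison.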

Configuration management. Puppet, Chef, Ansible, and Salt make it much easier to configure and maintain dozens, hundreds, or even thousands of servers. Major cloud providers find them essential to orchestrating their data centers, as do more and more enterprise shops.
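What these tools share is a desired-state, idempotent model: you declare what a server should look like, and the tool computes and applies only the changes needed to get there. A toy sketch of that comparison step, with made-up resource names:

```python
# Illustrative sketch of the desired-state model that Puppet, Chef,
# Ansible, and Salt share: diff declared state against actual state and
# act only on the drift. The resource names below are hypothetical.

def plan_changes(desired, actual):
    """Return the (resource, target) actions needed to converge."""
    actions = []
    for resource, target in desired.items():
        if actual.get(resource) != target:
            actions.append((resource, target))  # only drifted items are touched
    return actions

desired = {"nginx": "installed", "ntp": "running", "sshd_port": 22}
actual  = {"nginx": "installed", "ntp": "stopped", "sshd_port": 2222}
print(plan_changes(desired, actual))  # → [('ntp', 'running'), ('sshd_port', 22)]
```

Because running the same plan twice against a converged server yields no actions, the same manifest can be applied safely across thousands of machines.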

Server-side storage caching. Flash storage has become standard equipment on many data center servers. Why not assemble all the flash memory into one large, distributed cache, vastly reducing the percentage of reads and writes that must travel all the way to the SAN? It's a red-hot idea that PernixData has done a great job of implementing.
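The mechanics are those of any read cache: hot blocks are served from local flash, and only misses travel to the SAN. A toy model of the general technique (not PernixData's implementation), using an LRU policy and counting how many reads actually reach the SAN:

```python
from collections import OrderedDict

# Toy model of a server-side flash read cache in front of a SAN. This
# illustrates the general caching technique only; block IDs, the LRU
# policy, and the dict standing in for the SAN are all assumptions.

class FlashCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block_id -> data, in LRU order
        self.san_reads = 0           # reads that had to travel to the SAN

    def read(self, block_id, san):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # hit: served from local flash
            return self.cache[block_id]
        self.san_reads += 1                     # miss: fetch from the SAN
        data = san[block_id]
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least-recently-used block
        return data

san = {b: f"data-{b}" for b in range(100)}
cache = FlashCache(capacity=8)
for b in [1, 2, 3, 1, 2, 3, 1, 2]:              # a small, hot working set
    cache.read(b, san)
print(cache.san_reads)  # → 3 (only the first touch of each block hit the SAN)
```

Eight reads, three SAN trips: the rest were absorbed locally, which is exactly the traffic reduction the paragraph describes.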

Data layer technologies. The decades-long dominance of the RDBMS has been broken. A new generation of NoSQL databases has emerged, some of which got their start as back ends for cloud services and have now begun to invade the enterprise. In our hyperconnected cloud era, new database technologies are being complemented by enterprise solutions that process data and update these repositories in real time.

NoSQL. We've covered the various flavors of NoSQL databases extensively, with Andrew Oliver's 2012 classic "Which freaking database should I use?" still attracting readers. The whole idea behind NoSQL is to be able to scale out by simply adding servers to a cluster -- and to avoid the RDBMS overhead of laboriously rearchitecting every time you need to change the data model. In 2013, the leading vendor of NoSQL document databases, MongoDB, became the second open source company after Red Hat to be valued at more than $1 billion.
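The scale-out idea can be sketched in a few lines: route each document key to a node by hashing, so capacity grows simply by adding servers to the list. (Production systems use consistent hashing or range sharding to limit data movement when nodes change; this minimal modulo version is an illustration, not MongoDB's actual mechanism.)

```python
import hashlib

# Minimal sketch of hash-based sharding, the idea behind NoSQL scale-out.
# Node names and keys are hypothetical; real stores use consistent
# hashing or range sharding rather than simple modulo placement.

def node_for(key, nodes):
    """Deterministically map a document key to one node in the cluster."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
docs = ["user:1001", "user:1002", "order:77", "order:78"]
placement = {doc: node_for(doc, nodes) for doc in docs}

# Scaling out is just extending the node list; keys redistribute.
nodes.append("node-d")
new_placement = {doc: node_for(doc, nodes) for doc in docs}
```

Note there is no schema here to rearchitect: a document is stored wherever its key hashes, regardless of its internal shape.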

New Hadoop frameworks. The ubiquitous open source solution for storage and analysis of semi-structured data, Hadoop has always had two basic components: HDFS (Hadoop Distributed File System) and the MapReduce job execution layer. The successor to MapReduce arrived in October 2013 with Hadoop 2.0: the whimsically named YARN (Yet Another Resource Negotiator), which enables multiple Hadoop applications to run simultaneously and takes Hadoop beyond batch processing. Also, Cloudera's Impala, Pivotal's HAWQ, and Apache Sqoop provide new, vital bridges between SQL databases and Hadoop.
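For readers unfamiliar with the programming model YARN generalizes beyond: MapReduce expresses a job as a map phase that emits key-value pairs and a reduce phase that aggregates them per key. The canonical word count, as a pure-Python sketch rather than Hadoop's actual API:

```python
from collections import defaultdict

# Miniature word count in the MapReduce style: map emits (key, value)
# pairs, reduce aggregates per key. A conceptual sketch of the model,
# not Hadoop's Java API; in Hadoop, HDFS splits the input and the
# framework handles the shuffle between the two phases.

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word, 1)              # map: emit one pair per word

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:                # shuffle/reduce: sum values per key
        counts[word] += n
    return dict(counts)

docs = ["big data big clusters", "big jobs"]
print(reduce_phase(map_phase(docs)))
# → {'big': 3, 'data': 1, 'clusters': 1, 'jobs': 1}
```

Because both phases are embarrassingly parallel per split and per key, the job scales across a cluster; YARN's contribution is scheduling many such applications, batch or otherwise, on the same cluster at once.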

Event-stream processing. You could argue that the subtext of CES 2014 was the Internet of things, with wearable computers and household appliances providing all sorts of telemetric data to be digested by enterprises. But the Internet of things is already achieving liftoff in a few industrial sectors -- and the key technology to handle that flood of data is event-stream processing. When the VMware spin-off Pivotal launched in April 2013, it did so with a $105 million investment by GE, which is busily embedding millions of sensors in all sorts of industrial products, with Pivotal's GemFire meant to handle the real-time input. Joining the event processing party last November were Amazon with its Kinesis service and Salesforce with its Salesforce1 integration platform.
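The core operation in event-stream processing is aggregating events into time windows as they arrive, rather than batching them for later analysis. A generic sketch of fixed-window aggregation over sensor readings (illustrative only; not GemFire's or Kinesis's API):

```python
from collections import defaultdict

# Minimal event-stream sketch: fold a stream of sensor readings into
# fixed time windows on arrival. Event shapes, sensor IDs, and the
# 60-second window are assumptions for illustration.

def windowed_max(events, window_seconds=60):
    """events: iterable of (timestamp, sensor_id, value) tuples.

    Returns {window_start: {sensor_id: max_value_in_window}}.
    """
    windows = defaultdict(dict)
    for ts, sensor, value in events:
        bucket = ts - (ts % window_seconds)          # align to window start
        prev = windows[bucket].get(sensor, float("-inf"))
        windows[bucket][sensor] = max(prev, value)   # update incrementally
    return dict(windows)

events = [(5, "temp-1", 70.0), (30, "temp-1", 74.5), (65, "temp-1", 71.2)]
print(windowed_max(events))
# → {0: {'temp-1': 74.5}, 60: {'temp-1': 71.2}}
```

The key property is that each event updates the window state incrementally on arrival, so results are available the moment a window closes, which is what makes the technique suitable for millions of industrial sensors.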