Modern enterprise data centers are among the most technically sophisticated business operations on earth. Ironically, they are also often bastions of inefficiency, with equipment utilization frequently well below ten percent and roughly 30 percent of the servers in those facilities comatose (drawing electricity but performing no useful information services). The operators of these facilities also struggle to keep pace with rapid changes in deployments of computing equipment.

In the past, we were concerned with recovering applications and data within the confines of the physical data center. Disaster recovery focused on rebuilding the existing hardware, software, and application infrastructure at a location apart from a compromised operations site. In other words, we needed a box to replicate the box.

Multiple vendors keep insisting that DCIM is easy, as if saying it enough times will make it so. I know it isn’t true because I work with their customers years later, as they replace these “easy” DCIM solutions that never delivered the value they thought they were buying. It is not enough to import your Visio diagram, transfer data from spreadsheets to a database, and call it a DCIM solution. DCIM solutions are not magical. DCIM is more than a hardware or software product; it is a process that involves multiple groups in your organization, many of which currently work in silos.

While “boring” is a good thing as far as data center operators are concerned, 2015 was not a boring year for the data center industry. From data center outages on both sides of the Atlantic, caused by lightning and an explosion, to one of the biggest data center providers thinking of offloading its data centers, to a big change to the way the industry’s most important reliability rating system works, the year delivered plenty of excitement for some and anxiety for others.

All signs indicate that 2016 will be a year of many challenges. Disruptive technologies will be introduced, the exponential growth in computing power will continue, and businesses will demand prompt responses to rapidly changing requirements. At the same time, the pressure to be highly resource-efficient will not let up.

As a result of these challenges, we predict the following changes will emerge in 2016:

CA Technologies has decided to get out of the data center infrastructure management (DCIM) market, where it was considered one of the leaders.

The New York-based IT infrastructure management software giant will no longer sell its stand-alone DCIM software solution, called CA DCIM, which has been deployed in data centers operated by Facebook and NTT-owned RagingWire, among others.

There’s no magic number for the length of the hardware refresh cycle that works for everyone, but the set of variables that together determine the ideal time to replace a server is fairly uniform across the board. Identifying those variables and analyzing how they interact is a question Amir Michael and his team at Coolan, a data center hardware operations startup, recently set out to answer.
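To make the trade-off concrete, here is a minimal break-even sketch in Python. It is not Coolan’s model; the cost structure and every figure in it (energy price, maintenance growth, server prices, power draws) are illustrative assumptions, but it shows how a handful of variables interact to produce a refresh point.

```python
# Illustrative server-refresh break-even sketch. This is NOT Coolan's model;
# every number below is a made-up assumption for demonstration only.

ENERGY_COST = 0.10      # $/kWh, assumed
HOURS_PER_YEAR = 8760

def annual_power_cost(watts: float) -> float:
    """Electricity cost of running a server at a constant draw for one year."""
    return watts / 1000.0 * HOURS_PER_YEAR * ENERGY_COST

def keep_cost(year: int) -> float:
    """Yearly cost of keeping the old server: power plus maintenance,
    with maintenance assumed to rise 20% per year as parts age."""
    return annual_power_cost(400) + 300 * (1.2 ** year)

def replace_cost(capex: float = 4000, life: int = 4) -> float:
    """Yearly cost of a new server: amortized purchase price plus power.
    The new machine is assumed to do the same work at half the draw."""
    return capex / life + annual_power_cost(200)

# Find the first year in which replacing becomes cheaper than keeping.
for year in range(1, 8):
    keep, replace = keep_cost(year), replace_cost()
    print(f"year {year}: keep ${keep:,.0f}/yr vs replace ${replace:,.0f}/yr")
    if replace < keep:
        print(f"-> under these assumptions, refresh around year {year}")
        break
```

In practice the same structure holds; what shifts the break-even point is exactly the set of variables the Coolan team is studying, such as performance-per-watt gains between generations, failure rates, and actual utilization.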

Expected to come online late next year or early in 2017, this will be Amazon Web Services’ first cloud region in the UK and its third in Europe, the company’s CTO Werner Vogels wrote in a blog post. The other two are in Germany and Ireland.

Growing complexity in today’s data centers has increased the risk involved in combining power, cooling, racks, cabling, and management components into an efficient facility, because the essential skills needed to design and integrate them are in short supply.

Smart organizations have therefore turned to tightly integrated, aisle-based physical infrastructure modules, or PODs, along with non-containerized integrated infrastructure solutions, to optimize the use of power, space, and cooling capacity while simplifying specification, design, validation, procurement, and installation.
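To illustrate why the modular approach simplifies capacity planning, here is a back-of-the-envelope sizing sketch. The rack count, per-rack load, and PUE figure are assumptions for demonstration, not any vendor’s specifications.

```python
# Back-of-the-envelope POD sizing. All parameters are illustrative
# assumptions, not any vendor's specifications.
import math

RACKS_PER_POD = 12   # racks enclosed in one aisle-based module (assumed)
KW_PER_RACK = 8.0    # average IT load per rack (assumed)
POD_PUE = 1.3        # assumed power usage effectiveness of the module

def pods_needed(total_it_load_kw: float) -> int:
    """Number of PODs required to house a given IT load."""
    pod_it_capacity = RACKS_PER_POD * KW_PER_RACK
    return math.ceil(total_it_load_kw / pod_it_capacity)

def facility_power_kw(total_it_load_kw: float) -> float:
    """Total facility draw once cooling and distribution overhead is included."""
    return total_it_load_kw * POD_PUE

load = 500.0  # kW of IT load to deploy
print(f"{pods_needed(load)} PODs, "
      f"{facility_power_kw(load):,.0f} kW total facility power")
```

The point of the exercise is that once power, space, and cooling are fixed per module, growth planning reduces to multiplication: add a POD, add a known quantum of capacity.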