What is driving lower data center energy use?

A recently released report by a consulting professor at Stanford University finds that the growth in data center electricity use from 2005 to 2010 was significantly lower than the doubling projected from the 2000 to 2005 growth rate. Based on the estimates in an earlier report on data center electricity use, worldwide consumption grew by only about 56% over the 2005 to 2010 period instead of doubling, while data center electricity use in the United States grew by about 36%.
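To put those cumulative figures in perspective, the quick calculation below (a rough sketch using only the percentages quoted above) converts them into approximate compound annual growth rates and compares them to the rate a five-year doubling would imply.

# Rough sketch: annualized growth rates implied by the cumulative figures above.

def annual_rate(cumulative_growth, years=5):
    """Convert cumulative growth over `years` into a compound annual rate."""
    return (1 + cumulative_growth) ** (1 / years) - 1

for label, growth in [("expected doubling", 1.00),
                      ("worldwide, ~56%", 0.56),
                      ("United States, ~36%", 0.36)]:
    print(f"{label}: ~{annual_rate(growth) * 100:.1f}% per year")

# Prints roughly 14.9%, 9.3%, and 6.3% per year, respectively.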

Based on estimates of the installed base of data center servers for 2010, the report points out that growth in installed volume servers slowed substantially over the 2005 to 2010 period, increasing by about 20% in the United States and 33% worldwide. The installed base of mid-range servers fell faster than the 2007 projections anticipated, while the installed base of high-end servers grew rapidly instead of declining as projected. Although Google's data centers could not be included in the estimates (because Google assembles its own custom servers), the report estimates that they account for less than 1% of electricity used by data centers worldwide.

The author attributes the lower energy use to the impact of the 2008 economic crisis and to improvements in data center efficiency. While I agree that improving data center efficiency is an important factor, I wonder whether the 2008 economic crisis has a first-order or second-order effect on data center electricity use. Did a dip in the growth rate for data services cause the drop in the rate of new server installs, or is the market converging on the optimum ratio of servers to services?

My data service costs are lower than they have ever been, although I suspect we are flirting with a local minimum, as it has been harder to renew or maintain discounts for these services this year. I suspect my perceived price inflection point is the result of service capacity finally reflecting service usage. The days of huge excess capacity for data services are fading fast, and service providers may no longer need to sell those services below market rate to attract users for that excess capacity. The migration from all-you-can-eat data plans to tiered or throttled accounts may also be an indication that the excess capacity of data services is finally being consumed.

If the lower-than-expected energy use of data centers is caused by the economic crisis, will energy use spike once we are completely out of the crisis? Or is the lower-than-expected energy use due more to the market converging on the optimum ratio of servers to services? If so, does the economic crisis materially affect energy use during and after the crisis?

One thing this report was not able to do was ascertain how much work was being performed per unit of energy. I suspect the lower-than-expected energy use is analogous to the change in manufacturing within the United States, where productivity continues to soar despite significant drops in the number of people actually performing manufacturing work. While counting the number of installed servers is relatively straightforward, determining how the efficiency of their workloads is changing is a much tougher beast to tackle. What do you think is the first-order effect that is slowing the growth rate of energy consumption in data centers?
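If workload data were available, a work-per-unit-of-energy metric would be simple to track; the sketch below is purely illustrative, assuming one can already measure some proxy for useful work (transactions served, for example) and the energy consumed over the same interval.

# Illustrative only: useful work delivered per kilowatt-hour, assuming the
# workload counter and energy reading come from existing monitoring.

def work_per_kwh(units_of_work, energy_kwh):
    if energy_kwh <= 0:
        raise ValueError("energy must be positive")
    return units_of_work / energy_kwh

# Hypothetical quarterly snapshots: (transactions served, kWh consumed)
snapshots = {
    "Q1": (1.2e9, 4.0e5),
    "Q2": (1.5e9, 4.1e5),
}

for quarter, (work, energy) in sorted(snapshots.items()):
    print(f"{quarter}: {work_per_kwh(work, energy):,.0f} transactions per kWh")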

This entry was posted on Wednesday, August 3rd, 2011 at 2:41 pm and is filed under Energy Management, Question of the Week.

3 Responses to “What is driving lower data center energy use?”

I think the doc missed it completely. Use of electricity by data centers has been capped by the utilities since 2007/8 because the grid can't support any more power going into supercomputer and server farm facilities. That's the main reason.

Although I fail to see how this discussion relates to embedded or real-time technology, I will add my 2 cents:
The reduction in energy consumption, in my opinion, is due to technology, ideology, and most importantly economics.
Electricity is one of the largest OpEx line items for a data center, so with increased demand and competition, downward price pressure motivated operators to optimize and save on energy costs.
Environmental ideology calling for CO2 emission reduction, which has expanded into the mainstream, has added significantly to the motivation.
Technologies such as Virtualization and Storage Area Networks enabled a significant improvement in efficiency: instead of dedicating entire machines to a service, hardware resources can now be allocated based on actual load and usage, while unused hardware can be turned off (a rough consolidation sketch follows this response).
Processor technology had to overcome a barrier at about the same time: Intel could no longer simply increase the clock speed of its new processors, because heat density had reached a critical level. The methodical redesign of the logic circuits that followed produced a leap in MIPS/Watt efficiency.
These and other advances over the past decade have undoubtedly contributed to the trend, but I am doubtful the economic downturn had much impact at all; this market is far too conservative to make changes so quickly.
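As a back-of-the-envelope illustration of the consolidation effect described in the response above (every number here is assumed purely for illustration, not taken from the report):

import math

# Assumed numbers, for illustration only.
servers_before = 100      # one lightly loaded machine per service
util_before = 0.10        # ~10% average utilization on dedicated hardware
util_target = 0.60        # target utilization per virtualized host
idle_power_w = 200        # draw of a mostly idle dedicated server
loaded_power_w = 350      # draw of a well-utilized virtualization host

# Same aggregate load, packed onto fewer, better-utilized hosts.
servers_after = math.ceil(servers_before * util_before / util_target)

power_before_kw = servers_before * idle_power_w / 1000
power_after_kw = servers_after * loaded_power_w / 1000

print(f"hosts: {servers_before} -> {servers_after}")
print(f"power: {power_before_kw:.1f} kW -> {power_after_kw:.1f} kW")
# With these assumed numbers: 100 -> 17 hosts, roughly 20 kW -> 6 kW.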

A significant improvement in efficiency has come from hard disk drive manufacturers prioritizing power management in their products. Some years ago, large data center users were reporting that they had hit capacity limits purely because the local power grid could not support any growth in consumption. The HDD manufacturers responded with large investments in data caching algorithms, power management within the drive, vastly increased storage capacity per unit, and faster host interfaces.