Data centres can run at significantly higher temperatures and humidity levels than they do today without affecting overall equipment failure rates, according to The Green Grid.

In its latest report, "Data Centre Efficiency & IT Equipment Reliability", The Green Grid states that the current perception of data centre equipment's tolerance to heat and humidity is based on archaic practices dating back to the 1950s, resulting in an enormous waste of money and carbon.

Harkeeret Singh, who contributed to the report, said that periods of high heat and humidity can be offset by periods of more favourable environmental conditions, during which water- and air-side economisers can be used for cooling. This allows data centres to reduce their reliance on mechanical chillers without any detriment to overall failure rates.

"While we are not yet ready to do away completely with mechanical cooling, the industry is making constant progress in minimising the need for air conditioning thanks to economisers, better data centre design, and more efficient operating practices," said Singh

Intel has been advising its customers to increase the temperature in their data centres for some time. Speaking to Techworld at the start of this year, Richard George, director of cloud services at Intel, said that companies can save four percent in energy costs for every 1C they turn up the heat.
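As a rough illustration of Intel's figure, the sketch below assumes the four percent saving compounds with each degree the setpoint is raised. The compounding model and the baseline cost are illustrative assumptions, not figures Intel has stated.

```python
# Hedged sketch: compound the roughly 4% per-degree saving Intel cites.
# The compounding model and baseline cost are illustrative assumptions.
def cooling_cost(baseline, degrees_raised, saving_per_degree=0.04):
    """Estimated annual cooling cost after raising the setpoint."""
    return baseline * (1 - saving_per_degree) ** degrees_raised

baseline = 100_000  # hypothetical annual cooling energy cost (GBP)
for delta in (1, 5, 10):
    print(f"+{delta}C: {cooling_cost(baseline, delta):,.0f}")
```

Under these assumptions, a 10C rise in operating temperature would cut the example cooling bill by roughly a third.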

Now some of the data centre equipment manufacturers are also urging customers to cut down on their use of mechanical chillers and adopt more eco-friendly forms of cooling. Dell, for example, is currently pushing its Fresh Air initiative, which relies on outside climate conditions for cooling.

Hugh Jenkins, enterprise programs manager at Dell, told Techworld that the company aims to ensure that the infrastructure that goes into data centres will work within fresh air environments. Typically that means customers are able to run systems at slightly higher ambient temperatures.

Dell's latest generation of servers is designed to operate 10C hotter than traditional server infrastructure. Jenkins said that they can do this by intelligently detecting and throttling workloads to accommodate higher operating temperatures.

"35C has tended to be at the top end of operating temperature, and if you move outside of that, all bets are off, the systems are not tested to operate beyond that," said Jenkins. "With the latest systems we've been designing, they're actually able to operate at 45C."

Jenkins said that customers can set policies so that if the external temperature gets too high, non-critical applications will automatically be closed down. Fans within the servers can also be programmed to automatically counteract any dramatic changes in temperature.
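The kind of policy Jenkins describes might look like the sketch below: when the inlet temperature crosses a threshold, non-critical workloads are shed first. The thresholds, workload names, and policy function are all hypothetical; this is not Dell's actual management software.

```python
# Hedged sketch of a temperature-based workload policy. The thresholds,
# workload names, and shedding logic are illustrative assumptions.
CRITICAL_LIMIT_C = 45      # Dell's stated upper bound for its latest systems
NON_CRITICAL_LIMIT_C = 40  # hypothetical policy threshold

def apply_temperature_policy(inlet_temp_c, workloads):
    """Return the workloads allowed to keep running at this temperature."""
    if inlet_temp_c <= NON_CRITICAL_LIMIT_C:
        return workloads                      # normal operation
    if inlet_temp_c <= CRITICAL_LIMIT_C:
        # Shed non-critical applications to reduce heat load.
        return [w for w in workloads if w["critical"]]
    return []                                 # beyond rated limits: shed all

workloads = [
    {"name": "billing-db", "critical": True},
    {"name": "batch-report", "critical": False},
]
print([w["name"] for w in apply_temperature_policy(42, workloads)])
# prints ['billing-db']
```

In a real deployment this decision would be driven by server sensor telemetry rather than a single temperature reading.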

"Being able to take out the capital cost of investing in expensive air con and chillers and then the operational cost of running that is quite a considerable sum of money," said Jenkins.

"Not every customer will be able to get to that, but even being able to safely run those data centres at a higher ambient temperature is going to be helpful in terms of achieving efficiency and being able to de-risk that decision by knowing that your server infrastructure can cope with those higher ambient temperatures."