One of the eternal concerns for any data center is cooling the mass of metal that makes a room a data center. Over the years we've seen a significant decrease in the power consumption and heat generation of a general-purpose server, as well as an order-of-magnitude decrease in those variables from a per-server-instance standpoint. Where once stood 2U servers with several 3.5-inch disks running a single server instance, you'll now find two 1U servers with 2.5-inch drives or no disk at all, running maybe 30 server instances.

But the fact remains that we're also seeing a proliferation of logical server instances. A server-by-server comparison of a data center five years ago and the same infrastructure today should show a decrease in the number of physical servers, but that's not a guarantee.

In the meantime, costs for power and cooling have not been stagnant. Power consumption is, and likely always will be, a source of pain in the data center budget. I can recall a time in the engineering labs at Compaq when the monthly power bills for the data center-sized labs ran into the hundreds of thousands of dollars, and that was many moons ago.

Data center power draw comes from two main sources: the hardware (servers, storage, and networking) and the cooling systems. The larger and hotter the metal, the more power needed to cool it, to exhaust the hot air, and to maintain suitable humidity. There are many ways to combat the laws of physics and maintain reasonable intake temperatures. There are water-cooled racks and in-row cooling units that serve to bring the cold air where it's most necessary -- at the server inlet.
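The standard way to express that split between IT load and cooling overhead is PUE (Power Usage Effectiveness): total facility power divided by IT power. As a rough sketch -- the numbers below are hypothetical, not measurements from any real facility:

```python
# Rough PUE (Power Usage Effectiveness) sketch; the figures are
# made up for illustration, not drawn from a real data center.
def pue(it_kw, cooling_kw, other_kw=0.0):
    """Total facility power divided by IT (server/storage/network) power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hot, loosely contained room: nearly as much power spent on cooling as on IT.
print(pue(it_kw=500, cooling_kw=400))   # 1.8
# Tighter containment / in-row cooling: far less overhead.
print(pue(it_kw=500, cooling_kw=150))   # 1.3
```

The closer that number gets to 1.0, the less of the power bill is going to moving heat around instead of doing work.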

These methods aren't for general use, however. They generally require careful hot- and cold-aisle designs, and while they can reduce overall power and cooling bills, they can also cost more up front. Perhaps surprisingly, these designs are also very effective in smaller builds, where, say, eight racks can be cooled by only two in-row units.

These units can be water-cooled with a chill-water unit mounted to the roof or behave like normal air conditioners, using plenum space for exhaust and intake. Generally speaking, the water-cooled units will be a better long-term bet. Building construction considerations and potential plenum blockages can make it a challenge to run the air units. Your mileage may vary depending on rack density and the actual equipment present, but in-row cooling definitely has a place in the data center.

And of course, we have the massive AC units bolted to the walls of the room or on the roof, pumping out 68-degree Fahrenheit air nonstop either through dedicated ducting or through a raised floor, while pulling in hot air from the room. This is the traditional method. However, the larger issue today may be not what kind of cooling system should be used, but at what temperature should the data center operate.

We've all been in data centers that run at 68 degrees ambient. It might have been 100 degrees outside, but you still wore a sweater or a coat to do server maintenance. Hot servers mean angry servers, after all. But 68 degrees may no longer be necessary. If you take a look at the recommended operating temperatures of many servers and network devices, you'll find those boxes shouldn't be operated above 90 to 95 degrees. That's a long way from 68.

That gap is also part of the equation, though. If we run at 68 and have a cooling problem that causes temperatures to rise, it will take longer to reach critical levels than if we're running at, say, 85. In a big room, a sizable cooling outage can push the room into critical temps in minutes. Once at that level, the cooling systems have to work much harder to return the space to normal operating temps. This is generally known as A Really Bad Day. It is to be avoided at all costs.
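To put a rough number on that headroom, here's a back-of-the-envelope sketch. The 2-degrees-per-minute heating rate is my assumption for the sake of the example, not a measured figure; real rooms vary widely with rack density and airflow:

```python
# Back-of-the-envelope headroom during a total cooling outage.
# HEAT_RATE_F_PER_MIN is a hypothetical assumption, not a measurement.
CRITICAL_F = 95.0          # upper end of typical vendor intake limits
HEAT_RATE_F_PER_MIN = 2.0  # assumed temperature rise with cooling down

def minutes_to_critical(ambient_f, critical_f=CRITICAL_F,
                        rate=HEAT_RATE_F_PER_MIN):
    """Minutes before the room reaches the critical intake temperature."""
    return max(0.0, (critical_f - ambient_f) / rate)

print(minutes_to_critical(68))  # 13.5 minutes of buffer
print(minutes_to_critical(85))  # 5.0 minutes -- a much shorter fuse
```

Under those assumptions, the 68-degree room buys you more than twice the reaction time of the 85-degree room, which is exactly the trade-off against the power bill.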

I'm not saying we should turn up the heat in our rooms or blithely ignore hot servers, but the fact of the matter is that modern servers may do just fine with an ambient or intake temperature around 80, depending on the cooling coverage and redundancy. It might be a good idea to take a look at the hardware you're running and what it can handle, then maybe tap the thermostat up a few degrees, especially when the weather outside is frightful.