Unless you are a supermodel or a rock star, it is not cool to be hot anymore--particularly in the data center. Things have always been hot and heavy in the data center--think IBM mainframes, 3390 disk drives, and whirring tape drives and you'll get the right image--but the heat density of data centers is reaching an all-time high. How high? Try a tenfold jump in power consumption and heat over the past decade. And this is causing a lot of problems.

That's why Hewlett-Packard this week announced the Modular Cooling System, which lets racks of servers plug into existing water-based data center cooling infrastructure. HP co-developed the system with Rittal, a German supplier of industrial enclosures, power distribution, and cooling systems.

Yes, water-cooling--through necessity--is coming back in vogue.

According to Paul Perez, vice president of storage, networking, and infrastructure for HP's Industry Standard Servers group, a fully populated rack of servers can throw off 15 kilowatts of heat, and denser racks of blade servers--which by design cram computers into even smaller spaces--can hit 30 kilowatts per rack in heavy configurations. No one wanted to go back to water-cooling in the data center--mainframes were mocked for it by Unix suppliers a decade and a half ago--but the water-cooling infrastructure is already in place in a lot of data centers, and it is the quickest and easiest way to deal with the heat thrown off by the modern X64, Itanium, and RISC servers that HP sells.
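To see why those rack figures break air cooling, a back-of-the-envelope density calculation helps. The kilowatt numbers are the ones Perez cites; the rack footprint (roughly 20 square feet including its share of aisle space) is an illustrative assumption, not an HP figure.

```python
# Rough heat density per rack, using the kilowatt figures quoted by Perez.
# FOOTPRINT_SQFT is an assumed footprint (rack plus aisle share), for
# illustration only.

RACK_KW_STANDARD = 15   # fully populated rack of rack-mount servers
RACK_KW_BLADES = 30     # heavy blade-server configuration
FOOTPRINT_SQFT = 20     # assumed floor space per rack, incl. aisle share

for label, kw in [("rack servers", RACK_KW_STANDARD),
                  ("blade servers", RACK_KW_BLADES)]:
    watts_per_sqft = kw * 1000 / FOOTPRINT_SQFT
    print(f"{label}: {watts_per_sqft:.0f} W per sq ft")
# rack servers: 750 W per sq ft
# blade servers: 1500 W per sq ft
```

Raised-floor air cooling was typically engineered for a small fraction of those densities, which is why pushing more cold air at the problem stops working.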

The Modular Cooling System is an air-water heat exchanger, half a rack wide, that bolts onto the side of an HP server rack. The rack is completely closed off: hot air from the back of the rack is drawn through the exchanger, where the heat is absorbed by water and pumped away to an outside chiller, and cooled air is blown back to the front of the rack to keep the computers cool. The system delivers 2,700 cubic feet per minute of cool air, distributed along the full height of a 42U rack. And the Modular Cooling System doesn't just work on ProLiant rack servers and BladeSystem blade servers, but also on HP's Itanium-based Integrity systems, its older HP 9000 and NonStop machines, and its StorageWorks storage arrays.

Customers that already have water-cooling in their data centers are golden: the system costs $28,500 per rack, and all you have to do is plug it into your existing water pipes and chiller. HP's Factory Express service can equip new racks with the cooling system, too, so you can roll them right into the data center, ready to plug into the cool water. Perez says that HP's System Insight Manager systems software for its server products can see the Modular Cooling System gadgetry and send alerts to administrators if something goes awry; in the future, SIM will be able to proactively deal with the situation--for example, shutting down servers in a rack if the water pump fails so they don't melt. HP has yet to productize the dynamic cooling system it built for DreamWorks, which was based on a cluster of ProLiant DL360 servers and which had a sophisticated network of room sensors and adaptive cooling units that could keep the room cool and cut energy usage by 60 percent. (Having heard numbers like that, plenty of customers will want whatever this DreamWorks setup was.)

HP's sales pitch for the Modular Cooling System is pretty straightforward--it will allow blade server-level power densities in a data center that was never designed for them. And, because it takes heat directly out of server racks instead of relying on inefficient air cooling systems, companies can spend less money on cooling--as much as 30 percent less, says Perez. But the savings can be much larger than that. Perez calculates that an 11,000 square foot data center consumes about 3.6 megawatts of power and costs about $16 million to build in a major metropolitan area. With the Modular Cooling System as designed by HP and Rittal, you can cram the same servers into about 3,000 square feet, you don't need a raised-floor, air-cooled environment, and you can chop the cost to about $7.4 million. HP has deployed the Modular Cooling System in its HP Labs facility in Palo Alto, California, where it has cut the energy bill by 25 percent; the data centers in its Houston facility also use it.
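Perez's two scenarios can be run through quickly. The inputs below are the figures quoted in the article; the derived per-square-foot numbers are our arithmetic, not HP's.

```python
# Compare the conventional air-cooled data center Perez describes with
# the water-cooled alternative. Inputs are the article's figures; the
# densities and per-square-foot costs are derived for illustration.

conventional = {"sqft": 11_000, "build_cost": 16_000_000}
water_cooled = {"sqft": 3_000, "build_cost": 7_400_000}
power_watts = 3.6e6  # same 3.6 MW server load in both cases

for name, dc in [("conventional", conventional),
                 ("water-cooled", water_cooled)]:
    density = power_watts / dc["sqft"]            # watts per sq ft
    cost_per_sqft = dc["build_cost"] / dc["sqft"]
    print(f"{name}: {density:.0f} W/sq ft, ${cost_per_sqft:,.0f}/sq ft to build")

savings = 1 - water_cooled["build_cost"] / conventional["build_cost"]
print(f"build cost cut by {savings:.0%}")
# build cost cut by 54%
```

In other words, the water-cooled room costs more per square foot to build but needs so much less floor space that the total build cost falls by more than half.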

In addition to the water-cooling system, HP also this week rolled out a new set of racks, the HP 10000 G2 Series, that allow all of HP's storage and servers to use the same racks. Up until now, the HP and Compaq lines have had seven distinct types of racks, which were not compatible with each other even though the form factors of the servers they held were uniform and standard. This new rack was designed to speed up the Factory Express custom-build process, according to Perez, but in all honesty, just moving to one new rack from seven different types (with seven different rail systems and other widgets) would probably account for most of the improvements that workers see in the factory. The new racks have more intelligent power distribution units, too, which were co-developed with Eaton, the power specialists from Cleveland, Ohio. The 36U rack costs $1,200, and the 42U rack costs $1,249.

And finally, HP has announced new power assessment services to help companies cut their power bills even as they add more power-hungry servers to their data centers. "Just like data centers have to over-provision their servers for peak workloads, they have over-provisioned their data center power and cooling for hot spots," explains Perez. HP is partnering with Eaton to analyze power distribution from the point electricity leaves the substation to the point it ends up in the data center. So HP's experts are willing, for a fee, to do a power assessment, look at the temperatures and thermals of your data center, and draw up a data center plan for you. Such a service debuted out of necessity, says Perez, a few years ago when HP rolled out the power-hungry Superdome servers. Companies didn't have a lot of experience with such behemoths, and HP wanted to make sure they were kept fed with electricity and cool enough to perform well.
