Posted
by
CmdrTaco on Thursday November 03, 2005 @04:42PM
from the it's-getting-hot-in-here dept.

mstansberry writes "In part three of a series on the price of power in the data center, experts debate the merits of raised flooring. It's been around for years, but the original raised floors weren't designed to handle the air flow people are trying to get out of them today. Some say it isn't practical to expect air to make several ninety-degree turns and actually get to where it's supposed to go. Is cooling with raised floors the most efficient option?"

Being, literally, a grey-beard who remembers working on intelligent (3270-series) terminals and water-cooled mainframes and Unix and DOS punks crowing about how "the mainframe is dead"... things like Citrix, LTSP, liquid-cooled racks, and IBM setting new records in "number of mainframe MIPS sold every year" really amuses me.

There are a number of slashdot visitors that do actually care about server room issues. The fact that you don't understand the need does not negate its importance.

Large organizations rely on server rooms for their computing environment. Having a cobbled environment where the file server is on the 3rd floor, and the application server is in the janitor's closet, etc. is a recipe for disaster. Troubleshooting connectivity issues (among others) can end up costing more than the apparent simplicity of such a design.

Understanding ways to better cool the space that our servers occupy is important. And being able to do so in a cost-effective manner is also important. The organization that I work in has one in-house server room (containing 60 racks of servers) and one co-located server room (containing 72 racks of servers). Heat and power are the two killers. If we experience a 50% power loss (assume that one power grid is knocked out), do we have enough power to run AND cool the server room? If not, what percentage of my gear do I need to shut down to prevent overheating, without impacting critical business systems (like payroll)?

If we can find a cheaper / better / more cost effective method for cooling that utilizes less power, or find a way to use the cooling systems that we have in a more efficient manner, is that not worth an article on slashdot?
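That 50%-loss question is just arithmetic once you pin down a few numbers. A minimal sketch in Python, with every figure (IT load, cooling overhead, feed capacity) made up for illustration:

```python
# Hypothetical capacity-planning sketch: how much gear must be shed
# after losing half the utility feed? All numbers are made up.

def fraction_to_shed(it_load_kw, cooling_overhead, total_capacity_kw, surviving_fraction):
    """Return the fraction of IT load to shut down so that the remaining
    IT load plus its cooling fits in the surviving power capacity."""
    surviving_kw = total_capacity_kw * surviving_fraction
    # Each kW of IT load also costs `cooling_overhead` kW of cooling.
    sustainable_it_kw = surviving_kw / (1 + cooling_overhead)
    if it_load_kw <= sustainable_it_kw:
        return 0.0
    return 1 - sustainable_it_kw / it_load_kw

# 60 racks at 4 kW each, cooling costing 0.7 kW per IT kW, 600 kW total feed.
shed = fraction_to_shed(it_load_kw=240, cooling_overhead=0.7,
                        total_capacity_kw=600, surviving_fraction=0.5)
print(f"Shut down {shed:.0%} of the IT load")
```

With those made-up numbers you'd have to shed roughly a quarter of the gear; the point is that the answer should be computed in advance, not discovered during the outage.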

Easy: you upgrade to a whiskey habit, which requires no refrigeration and makes no telltale psssst when you open it. Also, indefinite shelf life, sealed or open. When I worked for a dot-com in 2000-2002, we were allowed beer after 5:00pm, and my boss, the CTO, said it was OK for me to keep a bottle of hard stuff in my desk so long as I waited till past 5 before taking a shot.

One of the new trends is side-to-side flow. Draw cooled air from the raised floor on the left side and exhaust hot air through the suspended ceiling on the right. To reduce interference, route power through the floor and data cables through the ceiling or vice-versa. This way, no system has to take any other's heat.

Some datacenters have very odd cooling systems... some even distribute cold air from the top and collect hot air at the floor, quite a questionable choice.

Funniest story I ever heard involved a 1970's computer-room retrofit into an old commercial chemistry building. The computer room was a big area in the center with a hall completely around it, and small labs all along the outside of the hall. They ran more AC and put in a raised floor, but otherwise just pretty well crammed the mainframe in. One thing they didn't consider was the fire system.

Sprinklers are as bad for chemical labs as for computers, but what the building had instead wasn't much better.

Yes... the bodies of mice. In all seriousness, every so often we get the most awful smell in our server room. That's when we call Rentokil, and they inevitably find the bodies of dead mice in the raised flooring of our server room. Bear in mind it's a couple of floors up. When people said to me "you are never more than 10 feet away from a rat when you are in London" I took it to mean horizontal distance, not *actual* distance (I didn't imagine that many rats lived on every floor of buildings...)

Amateur. You keep the beancounter's "backups" on that rack. Your blackmail and porn tapes are kept with your fine liquors in the fireproof vault, which your boss *thinks* he has the combination to (when he, in fact, has the emergency code that, when punched in, immediately triggers the halon release with the 60-second delay disabled).

After reading this very insightful article summary, I was planning to completely replace all of the ductwork in my house on the assumption that air can't go around corners. You just saved me several thousand dollars.

Indeed. It's been years since I've seen a raised floor. As far as I know, most new datacenters use racks and overhead wire guides instead. The reason for this is obviously not the air flow. The raised floor made sense when you had only a few big machines that ran an ungodly number of cables to various points in the building. (At a whopping 19.2K, I'll have you know!) Using a raised floor allowed you to simply walk *over* the cabling while still allowing you to yank some tiles for easy troubleshooting.

(Great way to keep your boss at bay, too. "Don't come in here! We've got tiles up and you may fall in a hole! thenthegruewilleatyouandnoonewillnoticebwahaha")

With computers being designed as they are now, the raised floor no longer makes sense. For one, all your plugs tend to go to the same place. i.e. Your power cords go to the power mains in one direction, your network cables go to the switch (and ultimately the patch panel) in another, and your KVM console is built into the rack itself. With the number of computers being managed, you'd be spending all day pulling up floor tiling and crawling around in tight spaces trying to find the right cable! With guided cables, you simply unhook the cable and drag it out. (Or for new cables, you simply loop them through the guides.)

Obviously you realize that as the equipment contents of datacenters change, it doesn't make sense to change the room structure all that much? Hence many older datacenters have retained their raised floors. Of course, their air conditioners were also designed for raised floors.

I don't know where you've worked, but every datacenter I've seen has had a raised floor, and all of them still had at least one mainframe structure still in use... hence, they still routed cables under the floor for them, by design.

Actually, with the way computers are being designed now, raised flooring and proper cooling is even MORE of an issue than it was.

With the advent of blades, the heat generated per rack space is now typically MUCH higher than it was back in the day. If anything, the raised flooring should be redesigned, as it can't cope with the airflow that is needed for higher-density server rooms.

You'll find that a number of racks are being redesigned with built-in plenums for cooling... a cold feed on the bottom, and a hot return at the top, with individual ducts for various levels of the rack.

There are even liquid-cooled racks available for the BIG jobs.

I think that it's not so much that we're going to get rid of raised floors, but just redesign the materials and layout of them to be more effective with the needs of today.

That'd apply if you're forcing air into a pipe and watching it come out the other end. The issue is that if you're forcing air into, say, an underfloor system that's full of different shapes, air passages, etc., the airflow will tend to the path of least resistance, and you'll get less airflow in pockets, which might cause problems.

I still don't see why running cabling under the floor is worse than running it in overhead trays. Either the trays are too high to get at without a ladder (thus making them at least as inconvenient as floor tiles), or they're too low and you bash things into them.

Overhead tray systems also suffer from a fairly rigid room layout, and I have yet to see a data center being used the way it was originally laid out after a few years. Raised flooring allows for a lot of flexibility for power runs, cabling runs and so on without having to install an overhead tray grid.

Raised flooring also offers some slight protection against water leaks. We had our last raised floor system installed with all the power and data runs enclosed in liquidtight conduit due to the tenant's unfortunate run-ins with the building's drain system and plumbing in the past.

I guess overhead tray makes sense if all you want to do is fill 10,000 sq ft with rack cabinets, but it's not really that flexible or even attractive, IMHO.

You're right in some sense; the pressure underneath the plenum will force air through no matter what. There are, however, two problems. The first is that turbulence underneath the floor can turn the directed kinetic energy of the air into heat... this can be a real drag. In circumstances where you need to move a lot of air, the channel may not even be sufficiently wide.

More importantly, the air ends up coming out where the resistance is less, leading to uneven distribution of air. If you're grossly overbudget and just relying on the ambient temperature of the machine room, this isn't a problem. But when you get close to the edge it can totally push you over.
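The uneven-distribution point can be sketched with a toy model: hold the underfloor pressure fixed, and the flow through each opening scales with that path's conductance, so the easy paths hog the air. This assumes simple linear (laminar-style) resistances, which real plenums don't obey exactly:

```python
# Toy model: with a fixed pressure under the floor, flow through each
# opening scales inversely with that path's resistance, so low-resistance
# tiles near the AC units starve the high-resistance paths farther away.
# Linear (laminar) resistance is assumed for simplicity.

def flow_distribution(resistances, total_flow=1.0):
    """Split a fixed total flow across parallel paths, each getting
    flow proportional to 1/R (same pressure drop across every path)."""
    conductances = [1.0 / r for r in resistances]
    total = sum(conductances)
    return [total_flow * g / total for g in conductances]

# Tile near the AC unit (R=1) vs. a tile behind cables and turns (R=4):
print(flow_distribution([1.0, 4.0]))  # the easy path takes 80% of the air
```

Scale that up to dozens of tiles and cable bundles and you get exactly the starved pockets described above.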

It's more a matter of airflow. If you have high airflow, it can matter. For example, whether you drive your car with the wind or against it, either way you get where you're going; it just takes more energy to fight the wind.

Granted, this is 70mph wind stuff we're talking about, so it likely wouldn't apply in a datacenter environment. Although it'd be fun to imagine certain co-workers getting sucked into the hurricane-force winds. Tune in tonight at 7 for "When Datacenters Attack!"

Granted, this is 70mph wind stuff we're talking about, so it likely wouldn't apply in a datacenter environment.

You've obviously not been in our data center. Raised floor, two rows of racks, air blown up from the floor in front of the racks (every panel immediately in front of the racks), hot-air returns in the ceiling behind the racks (center aisle). There's about a 10 degree difference between the front and back sides of the racks, and more than one person has complained about the "Marilyn Monroe" effect.

1) Resistance. Turns, right-angled plenums, or obstructions from cables/power cords would impede airflow, right?
2) While atmospheric differential is key, the magnitude of the differential would indicate how much resistance/efficiency there is.
3) Even a perfectly working system is only capable of delivering a certain amount of cool airflow. With these hotter and hotter computers, at some point the equipment exceeds your airflow budget.

Ever try to pull a string around a corner, or ten corners? Your pull may be the same, but the result is not. When the air is forced to turn a corner, it creates more friction than if it is pushed/pulled in a straight line. This serves both to heat the air and to make the motors creating the negative/positive atmospheres do that much more work.

I do wonder how much difference either effect really has. Doesn't seem like there should be much. Raised floors are optimal for taking advantage of convection currents.

Well, for starters, wasting a few thousand square feet of usable space for ventilation is silly. Also you may not want to bring in fresh air. If it's 100 out and 70 in the room, why bring in 100 degree air? Also moving air by convection is not a quick process.

So long as you have positive air pressure under your floor, you'll get *some* effect from your perf tiles. However, as I'm sure some fluid dynamics folks will jump in with, air flow is a HARD problem. Yeah, so you're getting cold air coming up through your perfs. Well, most of them. Some of them are actually pulling air DOWN. Why?

If you're bored, check out TileFlow [inres.com]. It's an underfloor airflow simulator. You put in your AC units, perf tiles, floor height, baffles, you name it. It will (roughly) work out how many CFM of cold air you're going to see on a given tile. It's near-realtime (takes a second to recalculate when you make changes), so you can quickly add/remove things and see the effect. I spent some time messing with this a couple of years ago, and it's very easy to set up a situation where you have areas in your underfloor with *negative* pressure.

The article basically summed it up for me:

McFarlane said raised floors should be at least 18 inches high, and preferably 24 to 30 inches, to hold the necessary cable bundles without impeding the high volumes of air flow. But he also said those levels aren't realistic for buildings that weren't designed with that extra height.

I'd go with 24 inches MINIMUM, myself. Also, proper cable placement (i.e., not just willy-nilly) goes a long way towards helping airflow issues. Like they said, though, you don't always have the space.

Of course, with the introduction of a blade chassis or 4, you suddenly need one HELL of a lot more AC:)

True, but IMO not the best way to handle wiring; overhead runs are much easier and cleaner. Every raised floor environment I have worked in was a mess under the floor and a nightmare to run new cables through.

If cooling is not a concern, concrete slab with overhead runs is the best way. If cooling is an issue, use the raised floor for cooling only, and overhead runs for cables.

I know a cooling technician who once got lost in a company's ductwork. Crawled around for an hour or two, found a spot where his cell got a bit of reception, and called up someone with a map to guide him out.

When I was at IBM's Cottle Rd. facility, now (mostly) part of Hitachi, they had just finished rebuilding their main magnetoresistive head cleanroom (Taurus). They took the idea from the server techs, and dug out eight feet from under the existing cleanroom (without tearing down the building) and put in a false floor.

All of the chemicals were stored in tanks under the floor. Pipes ran vertically, and most spills (unless it was something noxious) wouldn't shut down much of the line. It was a big risk, but, if what I hear is correct, people still say it's the best idea they had in a while.

Ever try to play a trumpet? Ever map out the air ducts in a large building? If something is airtight, putting air in one end will move air out the other end.

I am by no means a cooling expert, and I certainly have never designed a cooling system for a server room (I just service the servers, thanks). But complaining that the air has to "turn 90 degrees" seems a little silly to me. Is there something I'm missing that an expert can clarify here?

If something is airtight, putting air in one end will move air out the other end.

The problem lies with larger datacenter environments. Imagine a room the size of a football field. Along the walls are rows of air conditioners that blow cold air underneath the raised floor. Put a cabinet in the middle of the room and replace the tiles around it with perforated ones and you get a lot of cooling for that cabinet. Now start adding more rows & rows of cabinets along with perforated tiles in front of each of them. Eventually you get to a point where very little cold air makes it to those servers in the middle of the room because it's flowing up through other vents before it can get there. What's the solution? Removing servers in the middle of hotspots & adding more AC? Adding ducting under the floor to direct more air to those hotspots? Not very cheap & effective approaches...

Put a cabinet in the middle of the room and replace the tiles around it with perforated ones and you get a lot of cooling for that cabinet.

Maybe this is the problem. Every industrial datacenter I have been in places racks over either empty spaces or tiles with a large vent in them. The rack has fans in it to force air through vertically (bottom to top). A few perforated tiles get scattered about for the humans, but I have been in some datacenters without them, to maximize airflow to the racks.

Every industrial datacenter I have been in places racks over either empty spaces, or tiles with a large vent in them.

That works to an extent, but what if the cabinet is pretty much fully loaded? We loaded up 8-foot cabinets with 30+ 1U dual-CPU servers. The amount of air coming up through the holes underneath the cabinets was never enough to cool all that hardware down. Besides, my original example was just that - an example.

Not an expert, but I had some HVAC work done recently in my home. The blower moving the air only has a certain amount of power. Hook it up to a duct ten feet long, and output basically equals input. Hook it up to a duct ten *miles* long, even a perfectly airtight one, and the power you put into one end will be lost by the other end, because the air molecules lose momentum (and gain heat) as they bounce off each other and the walls of the duct.
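That intuition matches the standard Darcy-Weisbach relation for straight duct: friction pressure drop grows linearly with length, so the ideal fan power needed to hold the same flow grows with it. A sketch (friction factor and dimensions are assumed round numbers, not measurements):

```python
# Rough sketch of why duct length matters: friction pressure drop grows
# linearly with length (Darcy-Weisbach), and the fan power needed to keep
# the same flow is delta_p * volumetric_flow. Numbers are illustrative.

def friction_drop_pa(length_m, diameter_m, velocity_ms,
                     friction_factor=0.02, air_density=1.2):
    """Darcy-Weisbach pressure drop for a straight round duct, in pascals."""
    return friction_factor * (length_m / diameter_m) * air_density * velocity_ms**2 / 2

def fan_power_w(delta_p_pa, flow_m3s):
    """Ideal fan power to sustain the flow against that pressure drop."""
    return delta_p_pa * flow_m3s

flow = 0.5                                   # m^3/s through a 0.3 m duct
velocity = flow / (3.14159 * 0.15**2)        # ~7 m/s
for length in (3, 30, 300):
    dp = friction_drop_pa(length, 0.3, velocity)
    print(length, "m:", round(fan_power_w(dp, flow), 1), "W")
```

Ten times the duct, ten times the power; every elbow adds an equivalent length on top of that.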

I used to work in a large building which had air ducts for heating/cooling. Unfortunately, the air pressure wasn't well balanced to compensate for the location of the Sun and office walls (which were added after the office block was built). So people ended up with either freezing cold blasts of air (the North/West sides) or being cooked by the heat of the Sun (the South/East sides). Those in the centre got no natural daylight at all, and in the offices at the end of the air duct the air would become stale.

It can turn on a dime, but it also stays on that dime; poor circulation results. Trumpets have nice (if tight) curves, and even building ducts can have redirects inside the otherwise rectangular runs to minimize trapped airflow in corners. For the most part, even those corners are curved to help the stream of air.

Most server rooms aren't part of the duct. For example, the one here is large and rectangular, with enormous vents at either end. Not very well designed.

Airflow is a very complicated problem. My old employer had at least three AC engineers on full-time staff to work out how to keep the tents cold (I worked for a circus, hence the nick). The ducting we had to do in many cases was ridiculous.

Why do you think Apple engineering used to use a Cray to work out the air passage through the old Macs? Just dropping air conditioning into a hot room isn't going to do jack if the airflow isn't properly designed and tuned. Air, like many things, doesn't like to turn 90 degrees; it needs to be steered.

That won't work for the same reason that leaving the cover off of many old Unix workstations would cause them to overheat - the air doesn't go where you need it. Take a look inside a Sparc IPX or something, and it will give you an idea of what directed airflow is all about. Now, multiply that by a factor of a gojillion.

I interned at ARL inside of Aberdeen Proving Grounds this past summer and when touring the supercomputer room (more like cluster room these days), the guide said they used one of the computers in the room to simulate the airflow in that room so they could align the systems for better cooling. How geeky is that!

An engineering firm was hired to do some upgrades to our two-room computer facility, which included a fan to circulate air between the rooms. We asked what the CFM of the fans was and how often the air would be exchanged between the rooms. Their answer: dunno, never thought of that. Good thing we did.

Perhaps more importantly, better software solutions can make large hardware systems unnecessary. Instead of running and cooling 10 servers for a certain purpose, write better software to allow you to do the same thing on just one or two servers. If you cut down the amount of servers in the room by enough, you don't even need dedicated cooling.

I am waiting for the day when someone invents a computer that doesn't need to be cooled or generate excess heat.

Think about the lightbulb... A standard 60-watt incandescent bulb generates lots of heat. A better design is something like the LED bulbs that generate the same amount of lumens with much less power and, more importantly, little to no heat.

Good design can allow these devices to not generate excess heat, hence eliminating the need for the raised floor.

This is essentially impossible, unless you consider so-called "reversible computing". But reversible computing must be adiabatic, and thus very slow. Basically, as you slow a computation down you begin to approach ideal efficiency. See http://en.wikipedia.org/wiki/Reversible_computing [wikipedia.org]

Fast computing is made possible by destroying information (that's all computers do really, they destroy information). That destruction process entails an entropy cost that must be paid in heat.
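That entropy cost has a concrete number: Landauer's principle puts the floor at k·T·ln 2 joules per bit erased. A quick sketch of how far real hardware sits above that floor (the 300 W server figure is an arbitrary assumption):

```python
# Landauer's principle: erasing one bit at temperature T dissipates at
# least k*T*ln(2) of heat. A sketch of the thermodynamic floor; the
# 300 W server figure below is an assumption for illustration.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin):
    """Minimum energy to irreversibly erase one bit."""
    return k_B * temp_kelvin * math.log(2)

limit = landauer_limit_joules(300)        # roughly room temperature
print(f"{limit:.2e} J per bit erased")    # ~2.87e-21 J

# A 300 W server could, in principle, erase ~1e23 bits per second before
# hitting the floor; real machines are many orders of magnitude worse,
# so today's heat output is an engineering problem, not a physical limit.
print(f"{300 / limit:.1e} bits/s at the limit")
```

So the parent is right that the heat cost is fundamental, but the fundamental part is astronomically smaller than what current silicon dissipates.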

But when a white LED delivers 15-19 lumens per watt, it's about the same as a 100W incandescent and five times worse than a fluorescent. LEDs appear bright because they put out a fairly focused beam, not because they put out lots of light.
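A quick arithmetic sketch of what those efficacy figures mean for heat: for a fixed light output, power drawn (and therefore heat dumped) scales inversely with lumens per watt. The numbers below are the rough 2005-era ballparks from the comment, not measurements:

```python
# Luminous efficacy comparison: for the same light output, a lower
# lumens-per-watt source draws more power, and essentially all of that
# power ends up as heat. Figures are ~2005-era ballparks.

sources = {
    "incandescent 100W": 17,  # lm/W
    "white LED (2005)": 17,   # comparable, per the comment
    "fluorescent": 85,        # roughly 5x better
}

target_lumens = 1700  # about what a 100W incandescent emits
for name, lm_per_w in sources.items():
    watts = target_lumens / lm_per_w
    print(f"{name}: {watts:.0f} W drawn, nearly all of it as heat")
```

Which is why swapping the *light source* analogy onto servers only goes so far: the useful output of a server is computation, and almost 100% of its input power still exits as heat regardless of design.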

That only works until you have a situation where you need to cut the green wire with the yellow stripe, NOT the black wire with the white stripe, in order to shut down your server before it explodes. That oxygenated fluid is pink, making colour detection damn near impossible.

Now, if you're willing to host an alien spaceship at the bottom of your datacentre, maybe they could lend a hand...

Racks need to be built more like refrigerators: foamcore/fiberglass insulated, with some nice weatherstripping to create a chamber of sorts. Since the system would be nearly sealed, convection currents from the warm exhaust air rising off the servers in the rack would pull cold air down. Cold air goes in through the bottom of the rack, heats up, and gets pushed back out through the top.

Note that I'm not calling the parent poster stoopid, but rather the design of forcing cold air through the *floor*. As the parent here notes, cold air falls. This is presumably why most home fridges have the freezer on top.

I was most surprised to read this article. I've never worked in a data center, but I have worked in semiconductor production cleanrooms, and given the photos I've seen of data centers with the grated flooring, I guess I always assumed the ventilation was handled the same way as in a cleanroom.

Someone needs to create an air interconnect standard that lets server room designers snap cold air supplies onto a standard "air-port" on the box or blade. The port standard would include several sizes to accommodate different airflow needs, and distribution from large supply ports to a rack of small ports on servers. A Lego-like portfolio of snap-together port connections, tees, joints, ducts, plenums, etc. would let an IT HVAC guy quickly distribute cold air from a floor, wall or ceiling air supply to a rack of servers.

I would think that if one had multiple racks, the ventilation could be done in between them, for example sucking the return air out of the middle of a pair of racks, and feeding fresh air in the sides. This could be extended as needed.

My thinking is a good rack system should have the airflow under control.

We had an issue where I once worked because we had so many servers that the general server room that many different groups used was no longer adequate for our needs, since we were outgrowing our allotted space. Now instead of building us a new server room with the appropriate cooling (which presumably would have included raised flooring), we got a closet in a new building. This is obviously not much fun for the poor people who worked outside the closet, because the servers made a good deal of noise and even with the door closed were quite distracting.

Now, we had to get building systems to maximize the air flow from the AC vent in the room to ensure maximum cooling and the temperature on the thermostat was set to the minimum (about 65 F I believe). One day, while trying to do some routine upgrades to the server, I noticed things not going so well. So I logged off the remote connection and made my way to the server room.

What do I find when I get there? The room temperature is approximately 95 F (the outside room was a normal 72) and the servers are burning up. I check the system logs and guess what: it has been like this for nearly 12 hrs (since sometime in the middle of the night). To make this worse, our system administrator was at home on vacation around X-Mas, so of course all sorts of hell was busting loose.

We wound up getting the room down after the people from building systems managed to get us more AC cooling in the room; however, the point is it was never really enough. Even on a good day it was anywhere from 75 F to 80 F in the room, and with nearly a full rack and another one to be moved in, it was never going to be enough. This is what happens, though, when administrations have apathy when it comes to IT and the needs of the computer systems, particularly servers. Maybe we should bolt servers down and stick them in giant wind tunnels or something...

Okay, screw this post [slashdot.org] about putting the servers in a giant tank filled with a coolant. Put the servers in a vertical wind tunnel so you can practice your sky diving while swapping a hard disk!

We have been using raised flooring in our data center for decades and never had any cooling issues. Granted, we have 4 large air handlers for the room, but when running a raised floor one must have the proper system in place. Some hardware is designed to get its air right from the floor and some is not. Our large server racks don't have floor openings, so we have vent tiles in the floor on the front side and the servers in turn suck the cool air through. Raised floor is a great place to route cables/power/phone.

It's unlikely a computer room is going to get "too small" unless your company is growing at an astounding rate. Moore's law has been making computers smaller and faster and more power-efficient by several dB per year.

More likely the powers that be have overbought capacity, in order to expand the apparent size and importance of their empire. I've seen several computer rooms that could have been replaced with three laptops and a pocket fan.

Or, alternatively, the powers that be don't want to buy all-new hardware every 18 months because Moore's so-called law told them to. Maybe it's often more cost effective to add another server in parallel to the existing ones than to buy new servers, move everything off the old ones onto the new ones, then throw the old servers out.

The raised floor has more to do with how heat moves in an environment than with how you move air through a duct. Most raised floors don't have major ducting under them. In our data centers the raised floor provides a controlled space that we can use to modify temps.

Heat rises. Our original designs back in 2002 for our data center called for overhead cooling using a new gel-based radiator system. It would have been a great solution and caused us to go with a lower raised floor, just for cables and bracing.

Where I've worked it was primarily for running wires, not cooling. I've also worked in places that have the overhead baskets, and quite frankly, although they are convenient, they are 'tugly. They are great for temporary installations and where stuff gets moved a lot, but I'd rather have my critical wires away from places where they can get fiddled with by bored individuals.

So, no, I don't think they will be obsolete any time soon. But hey, I'm an old punchcard guy.

I'm in a data center right now with two rack-mounted clusters and three IBM Z series machines plus a load of other kit. Without the raised flooring AND the ventilation systems things would get pretty toasty here, but it has to be done right. The clusters are mounted in back-to-back Compaq network racks which draw air in the front and push it out the back. We therefore have 'cold' aisles where the air is fed in through the raised floor and 'hot' aisles where the hot air is taken away to help heat the rest of the building.

The only other option would be water cooling but that's viewed by my bosses as supercomputer territory.

We worked very closely with Liebert ( http://www.liebert.com/ [liebert.com] ) when we recently renovated our data center for a major project. The traditional CRAC (Computer Room AC) units supplying air through a raised floor is no longer viable for the modern data center. CRAC units are now used as supplemental cooling, and primarily for humidity control. When you have 1024 1U, dual-processor servers producing 320 kW of heat in 1000 sq ft of space, an 18-inch raised floor (with all kinds of crap under it) is not adequate to supply the volume of air needed to cool that much heat in so small a space.

We had intended to use the raised floor to supply air, but Liebert's design analysis gave us a clear indication of why that wasn't going to work. We needed to generate air velocities in excess of 35 MPH under the floor. There were hotspots in the room where negative pressure was created and the air was actually being sucked into the floor rather than being blown out from it. So, we happened to get lucky, as Liebert was literally just rolling its Extreme Density cooling system off the production line. The system uses rack-mounted heat exchangers (air to refrigerant), each of which can dissipate 8 - 10 kW of heat, and can be tied to a building's chilled water system or a compressor mounted outside the building.

This system is extremely efficient as it puts the cooling at the rack, where it is needed most. It's far more efficient than the floor based system, although we still use the floor units to manage the humidity levels in the room. The Liebert system has been a work horse. Our racks are producing between 8 - 9 kW under load and we consistently have temperatures between 80 - 95 F in the hot aisle, and a nice 68 - 70 F in the cold aisles. No major failures in two years (two software related things early on; one bad valve in a rack mounted unit).
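For a sense of why the floor couldn't keep up, the standard sensible-heat rule of thumb (Q in BTU/hr ≈ 1.08 × CFM × ΔT in °F) can be turned around to estimate the airflow a 320 kW room needs. The 25 °F aisle-to-aisle rise below is an assumption, not a figure from the post:

```python
# Standard sensible-heat rule of thumb: Q(BTU/hr) = 1.08 * CFM * dT(F),
# i.e. CFM = watts * 3.412 / (1.08 * dT). Applied to the 320 kW /
# 1000 sq ft room above; the 25 F temperature rise is assumed.

def cfm_required(heat_watts, delta_t_f):
    """Airflow (cubic feet per minute) to remove a sensible heat load
    at a given supply-to-return temperature rise."""
    btu_per_hr = heat_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

cfm = cfm_required(320_000, delta_t_f=25)
print(f"{cfm:,.0f} CFM")   # on the order of 40,000 CFM through the floor
```

Pushing that volume through an 18-inch plenum already half-full of cable is what drives the 35 MPH underfloor velocity figure, and hence the move to in-rack exchangers.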

This seems to be more about bad rack design than raised floors. It's a basic principle of ducting design that, as the airflow spreads out from the source through different paths, the total cross section of the paths should stay roughly constant. (Yes, I am simplifying, and I am sure someone can explain this better and in more detail. Yes, duct length and pressure drop are important. But the basic concept is true. If I want consistent airflow in my system, and the inlet is one square metre, the total of all the outlets should add up to roughly one square metre as well.)

It's a basic principle of ducting design that, as the airflow spreads out from the source through different paths, the total cross section of the paths should stay roughly constant.

I used to do commercial HVAC work, and everybody in the business does the opposite from what you describe. The ducts are largest near the air handler, and they are smallest at the end of the line. Typically, the main trunk of the duct gets smaller in diameter after each branch comes off of it and goes to a diffuser.

The problem is that power density has gone through the roof. It used to be that a rack of computers was between 2kw and 5kw. Modern blade servers easily push that up to 25kw per rack. You'd have to have 10 feet or more of space below the floor to accomplish cooling with an external source, thus the move to in-rack cooling systems, and the new hot aisle / cold aisle systems.

Wiring is now usually ABOVE the equipment, and with 10Gigabit copper, you can't just put all of the cables in a bundle any more, you have to be very careful.

It's a brave new datacenter world. You need some serious engineering these days, guessing just isn't going to do it. Hire the pros, and save your career.

Heating Ventilation and Air Conditioning (HVAC) design is based upon how air moves through a given pipe or duct.

When you are designing for a space (such as a room) you design for the shortest amount of ductwork for the greatest amount of distribution. Look up in the ceiling of an office complex sometime and count the number of supply and return diffusers that work to keep your air in reasonable shape. All of the ducts that supply this air are smooth, straight and designed for a minimal amount of losses.

All air flow is predicated on two important points within a given pipe (splits and branching within the ductwork are not covered here): the pressure loss within the pipe and how much power you have to move the air. The higher the pressure losses, the more power you need to move the same amount of air. Every corner, turn, rough section, and extra length of pipe adds to the amount of power needed to push the air through at the rate you need.
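To put illustrative numbers on that (all assumed, just to show the shape of it): fan power is airflow times total pressure drop divided by fan efficiency, so quadrupling the losses quadruples the power for the same CFM.

```python
# Illustrative sketch, assumed values: fan power P = Q * dP / efficiency.
# Every elbow and rough section raises dP, and power rises with it.

def fan_power_watts(cfm, pressure_drop_pa, fan_efficiency=0.6):
    q = cfm / 2118.88               # CFM -> m^3/s
    return q * pressure_drop_pa / fan_efficiency

# The same 4000 CFM through a clean straight run vs. a path whose
# 90-degree turns and restrictions add up to four times the drop:
print(round(fan_power_watts(4000, 250)))    # ~790 W
print(round(fan_power_watts(4000, 1000)))   # ~3150 W
```

The pressure-drop figures are placeholders; the point is the linear relationship, not the absolute watts.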

Where am I going with all of this? Well, under-floor/raised-floor systems do not have a lot of space under them, and it is assumed that the entire space under the floor is flexible and can be used (i.e. no impediments or blockages). Ductwork is immobile and does not appreciate being banged around. Most big servers need immense amounts of cooling. A 10"x10" duct is good for roughly 200 CFM of air. That much air is good for 2-3 people (this is rough, since I do not have my HVAC cookbook in front of me... yes, that is what it is called). Servers need large volumes of air, and if that ductwork is put under the floor, pray you don't need any cables in that area of the room.
Before you ask: why don't we just pump the air into the space under the floor and let it find its way there? Air is like water: it leaves by the easiest path possible. Place a glass on the table and pour water on the table and see if any of the water ends up in the glass. Good chance it ends up spread out on the floor, where it was easiest to leak out. Unless air is specifically ducted to exactly where you want it, it will go anywhere it can (always to the easiest exit).
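The duct-capacity figure is easy to sanity-check (the velocities here are my assumptions): capacity is just cross-sectional area times air velocity.

```python
# Sanity check with assumed velocities: duct capacity = area * velocity.

def duct_cfm(width_in, height_in, velocity_fpm):
    area_ft2 = (width_in / 12.0) * (height_in / 12.0)
    return area_ft2 * velocity_fpm

# A 10"x10" duct at a gentle 300 ft/min lands right around the 200 CFM
# quoted above; even at a noisy 1500 ft/min it tops out near 1000 CFM,
# still far short of what a dense modern rack needs.
print(round(duct_cfm(10, 10, 300)))    # 208
print(round(duct_cfm(10, 10, 1500)))   # 1042
```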

Ductwork is a very space-consuming item. Main trunks for two- and three-story buildings can be on the order of four to five feet wide and three to four feet high. A server room by itself can require the same amount of cooling as the rest of the floor it is on (ignoring wet bulb/dry bulb issues, humidity generation and filtering; we are just talking about the number of BTUs generated). A good-sized server room could easily require a separate trunk line and return to prevent the spread of heated air throughout the building (some places do actually duct the warm air into the rest of the building during the winter). Allowing this air to return into the common plenum return will place an additional load on the rest of the building's AC system. Place the server room on a separate HVAC system to prevent overloading the rest of the building's AC system (which is designed on a per-square-foot basis, assuming a given number of people/computers/lights per square foot when the floor plan does not include a desk layout).
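To see why a server room earns its own trunk line, convert its heat load into the tonnage an HVAC engineer works in (1 ton of cooling = 12,000 BTU/hr; the room size below is my own example, not from the post):

```python
# Hedged example, assumed load: server heat in kW -> AC tonnage.

def tons_of_cooling(server_kw):
    btu_per_hr = server_kw * 1000 * 3.412   # 1 W ~= 3.412 BTU/hr
    return btu_per_hr / 12000.0             # 1 ton = 12,000 BTU/hr

# A 60-rack room averaging a modest 5 kW per rack:
print(round(tons_of_cooling(60 * 5)))   # ~85 tons
```

That is a serious chunk of machinery on its own, before you even start on humidity control or redundancy.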

Raised flooring is useful for several reasons; moving cool air through a data center is only one of them. While requiring air to make severe turns to get out of the floor isn't optimal, most cabinets and the equipment in those cabinets are engineered with this in mind. Air is generally drawn in through the front of the cabinet and device, and warm air blows out the back. Fans in the equipment pull the air in - the air doesn't have to "turn" on its own again (not that it really did in the first place). Warm air then rises after leaving the device, where it is normally drawn back into the top of the AC unit.

Raised flooring also provides significant storage for those large electrical "whips" where 30A (in most US DCs anyhow) circuits are terminated, as well as a place to hide miles of copper and fiber cable (preferably not too close to the electrical whips). Where else would you put this stuff? With high density switches and servers, we certainly aren't seeing less cable needed in the data centers. Cabinets that used to hold five or six servers now hold 40 or more. Each of these needs power (typically redundant) and network connectivity (again, typically redundant), so we actually have more cables to hide than ever before.
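For a rough idea of how many whips that means per cabinet (the voltage, derating, and rack load below are my assumptions, not from the post):

```python
import math

# Assumed US numbers: a 30 A branch circuit at 208 V, derated to 80%
# for continuous load, feeding a dense 25 kW cabinet.

def usable_watts(amps=30.0, volts=208.0, derate=0.8):
    return amps * derate * volts

rack_watts = 25000
per_circuit = usable_watts()                    # ~5 kW per 30 A whip
circuits = math.ceil(rack_watts / per_circuit)  # -> 6 circuits
print(per_circuit, circuits)   # before doubling for redundant feeds
```

Double that for redundant A/B power and a single dense cabinet is trailing a dozen whips under the floor.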

Cabinets are built with raised flooring in mind. Manufacturers expect your cabling will probably feed up through the floor into the bottom of the cabinet. Sure, there is some space in the top of the cabinets, but nothing like the wide open bottom!

Anyhow, there you have the ideas of someone who is quickly becoming a dinosaur (again) in the industry.

Raised floor cooling was designed back when the computer room held mainframe and telephone switch equipment with vertical boards in 5-7 foot tall cabinets. The tile was holed or removed directly under each cabinet, so cool air flowed up, past the boards and out through the top of the cabinet. It then wandered its way across the ceiling to the air conditioners' intakes and the cycle repeated.

Telecom switching equipment still uses vertically mounted boards for the most part and still expects to intake air from the bottom and exhaust it out the top. Have any AT&T/Lucent/Avaya equipment in your computer room? Go look.

Now look at your rack mount computer case. Doesn't matter which one. Does it suck air in at the bottom and exhaust it out at the top? No. No, it doesn't. Most suck air in the front and exhaust it out the back. Some suck it in one side and exhaust it out the other. The bottom is a solid slab of metal which obstructs 100% of any airflow directed at it.

Gee, how's that going to work?

Well, the answer is: with some hacks. Now the holed tiles are in front of the cabinet instead of under it. But wait, that basically defeats the purpose of using the raised floor to move air in the first place. Worse, that mild draft of cold air competes with the rampaging hot air blown out of the next row of cabinets. So, for the most part your machines get to suck someone else's hot air!

So what's the solution? A hot aisle / cold aisle approach. Duct cold air overhead to the even-numbered aisles. Have the machines in the cabinets to either side face that cold aisle. Duct the hot air back from the odd-numbered aisles to the air conditioners. It doesn't matter that the hot aisles are 10-15 degrees hotter than the cold aisles, because air from the hot aisles doesn't enter the machines.

Well, you're close. You are correct that the answer lies in a "hot aisle/cold aisle" configuration. The difference is, it works better when the cold air is coming up from below the raised floor tiles. Why? You must keep in mind that you're not trying to pump "cold" air in, you're trying to take heat out, and as Mother Nature knows, heat rises. So why not harness the natural convection of heat, allow it to flow up to the ceiling, add some "perf" ceiling tiles, and use the space over the ceiling to return the warm air to the AC units?

I spent the first 8 years of my professional life stuck working in NOCs with standard raised flooring; cooling was just one of the many things the floor was needed for.

Examples:

Wiring: Not everyone likes to use overhead ladders to carry cables around. In the Army we had less than 50% of our wiring overhead, the rest was routed thru channels underneath the raised flooring.

HVAC Spill protection: Many of our NOCs had huge AC units above the tile level, and these things could leak at any moment. With raised flooring the water will pool at the bottom instead of running over the tiles and causing an accident. We had water sensors installed, so we knew we had a problem as soon as the first drop hit the floor.

If the natural airflow patterns are not enough for a specific piece of equipment, it does not take a lot to build ducts to guarantee cold air delivery underneath a specific rack unit.

The one thing I did not like about the raised floors was when some dumbass moron (who did NOT work within a NOC) decided to replace our nice, white, easy-to-buff tiles with carpeted tiles. 10 years later and I still can't figure out why the hell he would approve that switch, since our NOC with its white tiles looked fricking gorgeous just by running a buffer and a clean mop thru it. The tiles with carpeting were gray, so they darkened our pristine NOC.

I bet many of the people against raised flooring are landlords that don't want to get stuck with the cost of rebuilding flooring if the new tenant does not need a NOC area. I have been to a NOC in a conventional office suite; they basically crammed all of their racks into what seemed to be a former cubicle island. The air conditioning units were obviously a last-minute addition, and it looked like the smallest spill would immediately short the loose power strips on the first row of racks in front of them. Shoddy as hell.

I wouldn't use water but something where, if a leak occurs, nothing bad happens. Anti-freeze is pretty much inert and transfers heat well. IIRC, some of the Cray supercomputers were water cooled. So I guess that technology belongs to SGI (for now), since they bought Cray.

I wouldn't use water but something where, if a leak occurs, nothing bad happens. Anti-freeze is pretty much inert and transfers heat well.

Water (non-pure... which it will be as soon as it hits your computer) conducts electricity.
Antifreeze is no better; it conducts electricity too.

The liquid you're looking for is Fluorinert, but the price is on the order of hundreds of dollars per gallon.
When you consider the price, you'll see why many people just use water and high-quality plumbing. Why use $500 of Fluorinert when water and good pipes will do?

Cooling, IMO, is a secondary use of raised floors. The real usefulness is the ability to run cabling from any point A to any point B in the floor space.

That's good to an extent, as long as the cable runs aren't too long. Go take a look at an enterprise grade colocation hosting facility and you may change your mind. I've spent a lot of time at one of the top-tier MCI facilities. It has a raised floor that's used for cooling and power distribution, but all networking is done via 3 or 4 layers of overhead cable trays.

Without the raised floor, you have to put your rat's nest of cabling somewhere else, which almost certainly means vertical.

I don't believe there should be a rat's nest of cabling _anywhere_ in a datacenter. I hate raised floors because they allow techs to get sloppy. Vertical wiring trays eliminate that possibility by putting a hackish wiring job on display for everyone to see.

When your datacenter is new, you should pre-wire patch panels in each cabinet for SAN and Ethernet. Each cabinet should have a PDU.