
1sockchuck writes "Google has begun operating a data center in Belgium that has no chillers to support its cooling systems, which will improve energy efficiency but make weather forecasting a larger factor in its network management. With power use climbing, many data centers are using free cooling to reduce their reliance on power-hungry chillers. By forgoing chillers entirely, Google will need to reroute workloads if the weather in Belgium gets too warm. The facility also has its own water treatment plant so it doesn't need to use potable water from a local utility."

I have to back this up. TFA says the maximum temperature in Brussels is 66 to 71 degrees F. I recall it being warmer than that during the summer I lived there. I can't quite remember the exact temperature, but 24 or 25 C (which is in the mid-to-upper 70s F) comes to mind.

If global warming ever did what the alarmists keep saying it's going to do, chillers would probably become completely irrelevant, since about two thirds of Belgium would be continuously surface-mounted with a very large water-cooling rig and heatsink, sometimes known as the North Sea.

If it weren't for the required internet connectivity, Google could go off the grid completely. But they already own so much fibre, and the public internet seems to need Google more than Google needs it.

Soon they will generate all their own power from wind and solar, convert all their employees' shit to power so they don't need the sewerage system either, and send all their traffic through the network of low-earth-orbit satellites they are about to launch, which will also conveniently beam solar power back down to them.

So basically, at the end of the day, they will be able to buy or swindle a plot of land from some country with low taxes, bring in all their own employees, contribute absolutely nothing to the local economy, and leave when the sun goes down. It's great, really: it saves them on the lawyers that would otherwise help them pussyfoot through swaths of modern over-regulation, and the satellites will help them get past any censorship/connectivity problems.

And if China starts shooting down their satellites, Google will make satellites that shoot back.

So basically everything gets rerouted on a hot day. OK, that sounds fine until you realize that most of the outages of Google's products were due to rerouting. It also seems odd that the cost of building a (hopefully redundant) datacenter that is this unreliable would be less than consolidating it with another one and using electrical cooling.

Well, it might be unreliable, but I think you're overestimating the reliability of normal data centers. Even if failure is twice as likely at this data center as at others, I think it still improves overall performance and reliability enough that it's worth building. Or at least Google seems to think so.

Of course, if they have to do the rerouting 10 or so times a year, they will get the kinks worked out. That is a far better management scheme than having a fail-over plan that never really gets tested. Also, when temps rise, they probably won't be completely off-lining this data center, just a fraction of the containers within it.

I also wonder if they might not be fibbing a little; air handlers come in different types. For chilled-water use they wouldn't have compressors, since the chilled water is run through the coils.

If you have chilled water, you have a chiller, which means you have compressors. Process water or ground-source water usually is not cold enough to be an effective cooling medium: you want a high delta T between the entering air temperature and the entering water temperature to induce heat transfer. Closed-loop ground-source water is extremely (prohibitively) expensive, and open loop is quite a maintenance hassle due to water treatment. High-efficiency chillers paired with evaporatively cooled water towers with economizer capability are very efficient and reliable. Usually you can get down to around 0.5 kW per ton with high-efficiency chillers at full load, and with multiple staged compressors you can do even better at part-load conditions. The cooling towers are usually pretty low, around 0.05 to 0.15 kW per ton. Use VFDs on the secondary pumps and cooling tower fans, and you can get cooling in at 0.75 kW per ton for the whole plant at peak, and even lower at part-load conditions (95% of the time).
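
Those rules of thumb can be tallied up in a quick sketch. The split between chiller, tower, and pumps/fans below just restates the figures quoted in this comment; it is illustrative, not a design calculation.

```python
# Illustrative plant-efficiency tally using the rule-of-thumb figures
# quoted above; not a design calculation.

def plant_kw_per_ton(chiller=0.5, tower=0.10, pumps_fans=0.15):
    """Electrical input per ton of cooling delivered, at full load."""
    return chiller + tower + pumps_fans

total = plant_kw_per_ton()
# 1 ton of cooling = 12,000 BTU/hr, about 3.517 kW of heat removed
cop = 3.517 / total  # coefficient of performance of the whole plant
print(f"{total:.2f} kW/ton, plant COP ~ {cop:.1f}")
```

In other words, a well-run chilled-water plant at these figures removes roughly 4.7 units of heat per unit of electricity consumed.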

I just designed a data center for a large Big Ten university and there were no large air handlers involved at all. The system had two 400-ton chillers with the chilled water piped directly to rack-mount APC fan coils. Without "green" being the basis of design, the chiller system still operates right at about 1kW/ton.

It's mildly interesting to know how many kW of power it takes to move some water, but it would be more interesting to know how many kW of power it takes to transfer heat. With your measurements, how much heat can you transfer with a ton of water, and how does the temperature of the computers compare to the ambient air?

A ton is a measure of the amount of heat transferred. See this [wikipedia.org] for more details. It's also worth noting how much of the heat transfer is done by way of allowing the water in the system to evaporate.

The short answer is that the "ton" mentioned above is, in the HVAC industry, roughly equivalent to the amount of cooling a ton of ice (frozen water) would provide. Some days I wish my industry would just unhitch the horse and burn the buggy it was attached to.

1 ton is a unit of cooling equal to 12,000 BTU/hr, not a weight. The typical rule of thumb is 2.4 GPM per ton, which is based on a standard 10°F delta T, usually 44°F to 54°F. Assuming 100 feet of head and 50% mechanical efficiency, 1 BHP will move about 20 gallons of water per minute; 1 BHP is about 0.75 kW.
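
The 2.4 GPM figure follows directly from those definitions; here is a quick check using standard water properties and the 10°F delta T mentioned above:

```python
# Deriving the "2.4 GPM per ton" rule of thumb from first principles.
TON_BTU_HR = 12_000      # 1 ton of cooling = 12,000 BTU/hr
LB_PER_GAL = 8.34        # weight of a gallon of water, lb
CP_WATER = 1.0           # specific heat, BTU per lb per degF
DELTA_T = 10.0           # standard chilled-water delta T (44F -> 54F)

# heat removed (BTU/hr) = GPM * 60 min/hr * lb/gal * cp * deltaT
gpm_per_ton = TON_BTU_HR / (60 * LB_PER_GAL * CP_WATER * DELTA_T)
print(f"{gpm_per_ton:.2f} GPM per ton")  # -> 2.40 GPM per ton
```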

I am kind of confused about "how many kW of power it takes to transfer heat." Heat moves from high to low; you have to pump cold water through a coil and force warm air across that coil. The amount of heat transferred is a function of the face velocity and temperature of the air across the coil, the flow rate and temperature of the fluid through the coil, and the characteristics (fin spacing, fin size, material) of the coil.

The temperature of the computers isn't really the important factor, it is the heat rejected. Again using rules of thumb, you can assume that 80% of the electrical power delivered to the computers will be dissipated as heat. The total of that heat rejected along with the other heat inputs to the space, e.g. lighting, walls, roof, window loads, etc., will determine your cooling load. Almost all of this load is sensible, meaning heat only, for other occupancy types you would also have to consider latent (moisture) loads as far as people and ventilation air in determining the amount of cooling needed.

"Again using rules of thumb, you can assume that 80% of the electrical power delivered to the computers will be dissipated as heat."

100% of the electrical power delivered to the computer is dissipated as heat; it's the law. It will be far less than the nameplate power (which the electrical designers size to), and perhaps 80% of what is delivered to the building (after transformer, UPS, and PDU losses), but it all ends up as heat (unless you're splitting hairs about acoustical energy emissions and velocity pressure in the exhaust, which are small and quickly converted to heat).

The units were mounted on the roof, but were packaged AAON 2 x LL210 chillers (and a full 400 ton backup) with no exposed exterior piping. Glycol reduces the specific heat of the fluid and increases the specific gravity, so it can move less heat and takes more power to move. I only add glycol to the system if freezing is an issue.

You do not need a chiller to operate a datacenter in many environments at all. Based on the 2nd edition of ASHRAE's Thermal Guidelines for Data Processing Environments (which was developed with direct input from the major server providers), you can run a datacenter at up to 90F. Seriously, 90F into the rack. When it comes out the back of the rack, you collect the heat exhaust at 100-110F. "Chilled" water at 81F is more than enough to knock that 110F down to 90F, ready to go back into the front of the rack.

The 81F water can be produced directly from open cooling towers (direct evaporation) whenever the wet bulb is lower than 76F (a 4-degree tower approach plus 1F across the flat plate that isolates the datacenter loop from the open tower loop).
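
The 81F number is just the stack-up of the approaches described here; the approach values are this commenter's figures, not universal constants:

```python
# Stack-up of the cooling-tower arithmetic described above.
wet_bulb_f = 76.0        # design ambient wet-bulb limit
tower_approach_f = 4.0   # open-tower approach to wet bulb (assumed)
plate_approach_f = 1.0   # penalty across the isolating flat-plate HX

loop_supply_f = wet_bulb_f + tower_approach_f + plate_approach_f
print(f"datacenter loop supply: {loop_supply_f:.0f} F")  # -> 81 F
```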

You designed an efficient datacenter, but you're five years behind the cutting edge (not actually a bad thing for most critical-environment clients). The next wave of datacenters will have PUEs of 1.2 or less and redefine the space from a noisy but cool place to hang out into a hot machine room with industrial heat-exhaust design.
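
For readers unfamiliar with PUE (Power Usage Effectiveness): it is total facility power divided by IT equipment power, so 1.0 is the theoretical floor. The overhead splits below are hypothetical, just to show what a 1.2 target implies:

```python
# PUE = total facility power / IT equipment power.
def pue(it_kw, cooling_kw, other_kw):
    return (it_kw + cooling_kw + other_kw) / it_kw

legacy = pue(it_kw=1000, cooling_kw=700, other_kw=300)      # older chilled site
next_wave = pue(it_kw=1000, cooling_kw=120, other_kw=80)    # chiller-less target
print(f"legacy PUE {legacy:.1f}, next-wave PUE {next_wave:.1f}")
```

Going from 2.0 to 1.2 means cutting non-IT overhead from a full kilowatt per IT kilowatt down to 200 watts, which is why eliminating compressors matters so much.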

I actually just finished a chiller-less 8MW schematic design and analysis for a bid. It was my second this month (the first was a cakewalk, with an extreme Twb of 67F; the second was west coast light conditions).

PS: Secondary pumps? Seriously? Unless you have to boost up to 25 psi to feed a Cray or some other HPC, I thought everyone who cared had moved on to variable primary-only pumping. (Sorry, feeling a bit snarky after hitting a 40-hour week on Wednesday...)

Servers may be able to operate at 90-100F, but they simply won't last as long being cooked compared to equipment that lives at cooler temperatures. This probably doesn't matter if you're Google and don't care about burning hardware, or if you have money to spare and are always installing new equipment, or would rather generate truckloads of electronics waste replacing servers faster than a cooler facility would just to get a PUE to brag about. The rest of us will have to settle for server rooms with air conditioning.

"Servers may be able to operate at 90-100, but they simply won't last as long being cooked compared to equipment that lives at cooler temperatures."

Operating 500 hours a year at 90F (the peak of the allowable range) is unlikely to impact longevity. 100F is outside the allowable range. Your opinion is contradicted by what IBM, Intel, Dell, Sun, and numerous datacenter owners, along with the design professionals at ASHRAE, have developed over the course of several years of research and many (mostly dull) meetings.

The only actual experience I have that contradicts your opinion is a UPS shorting its inverter and static bypass after two years of being in an 80-90 degree room during the summer, giving it a peak internal temp of 110.

Correlation doesn't equal causation. I've seen more failures in 70F UPS rooms...
Some equipment did have trouble with high temps, but all new equipment can and should be able to take 80F normal operating temperature with limited excursions up to 85-90F.
And the more common correlation is claims of a higher failure rate for servers at the top of a rack. In most traditional datacenters you can find hotspots where recirculation from behind results in a continuous 80+F temperature into the server. When we talk

If you have chilled water, you have a chiller, which means you have compressors

While I agree with you on that point, I think what the GP was saying is that it is indeed possible that the CRAC units being used at Google's "Chiller-less" Data Center could very well have compressors contained in them, along with condenser barrels.

You'll notice TFA does not specifically say that there is no mechanical cooling happening at this site, only that their focus is on free cooling. It could very well be that these CRACs have the ability to modulate between mechanical cooling and waterside economizer operation.

Remember that even on hot days, not all of the traffic through the datacenter needs to be rerouted, and I'd imagine a location for a datacenter like this was chosen partly for the infrequency of days that will require rerouting. Do you know how much it costs to cool a datacenter, and how much this will save? I don't, but Google probably does, and they probably wouldn't do something like this without weighing the savings against the potential costs of decreased hardware lifespan from running hot and losses due to downtime. I would also imagine Google will be working to greatly increase stability during rerouting, given the comments at the end of TFA about other power-saving uses, such as routing traffic to datacenters where it's night: it's colder outside, so "free cooling" can be used, and off-peak electricity rates are in effect.

I think the concept is interesting, and it makes me wonder if we'll see more datacenters built in areas of the world more conducive to projects like this in the future.

"I think the concept is interesting, and it makes me wonder if we'll see more datacenters built in areas of the world more conducive to projects like this in the future."

Already happening in a way. Check out EDS's Wynyard facility. They didn't eliminate the chillers entirely last I looked, but in that climate they could have if they trusted the outdoor conditions and local code officials (open cooling towers are subject to abrupt shutdown if there is a Legionella scare anywhere nearby in Europe).

That is where ice storage systems become interesting and cost-effective. In the States, usually half of a commercial energy bill is peak demand. If you can shift that energy usage to nighttime to build up your ice storage, moving your main power draw to off-peak, the savings can be very significant and create payback times measured in months, not years.
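
A back-of-envelope version of that argument, with hypothetical US commercial tariff numbers (the chiller size, rates, and demand charge below are assumptions for illustration, not from the article):

```python
# Shifting chiller load to off-peak via ice storage: rough monthly math.
chiller_kw = 500.0
demand_charge = 15.0                   # $/kW of monthly peak demand (assumed)
peak_rate, offpeak_rate = 0.15, 0.06   # $/kWh (assumed)
peak_hours = 10 * 22                   # 10 on-peak hours/day, 22 weekdays

# Running the chiller on-peak: energy cost plus the demand charge.
on_peak_cost = chiller_kw * peak_hours * peak_rate + chiller_kw * demand_charge
# Making ice overnight: same energy (storage losses ignored), no demand hit.
off_peak_cost = chiller_kw * peak_hours * offpeak_rate

print(f"${on_peak_cost:,.0f}/mo on-peak vs ${off_peak_cost:,.0f}/mo off-peak")
```

With these made-up rates the monthly cooling bill drops from roughly $24,000 to roughly $6,600, which is how payback ends up measured in months; real tariffs vary widely.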

If you're Google you can afford to have multiple data centers located around the world, each with excess capacity to take up the slack if the temperature gets a bit too hot at one of them. Of course, for the rest of the world, having just one major data center is a big investment and the idea of maintaining excess capacity in case the weather in Belgium is not favorable is a complete fantasy.

"The temperature is a bit high in Belgium, so lets just transfer the load to one of the hundreds of other data

Or, if you're Google, you have a metric shit-ton of servers and don't care too much about reducing the MTBF of a few hundred racks by running them hot.

I was under the impression (mostly from tons of slashdot articles on the subject), that Google had done the research on this and determined that the higher temperatures did NOT reduce the MTBF of the hardware. Seriously, 90F into the equipment isn't that hot.

They sit where temperate ocean water meets cold arctic air, resulting in a relatively narrow and predictable temperature band which happens to be perfect for cooling datacenters with minimal, if any, conventional HVAC. Their power is green, and they have lots of it: they use a combination of hydropower and incredibly abundant geothermal heat for power generation. Recently, undersea fiber cables have been laid down, greatly increasing their connectivity to the outside world.

More people can and should do this. 27C is plenty cool enough for servers. It annoys me to go into a nipple-crinkling datacenter knowing they're burning more juice cooling the darned thing than they are crunching the numbers. A simple exhaust fan and some air filters would be fine almost all of the time, and would be less prone to failure.

It's probably not as much about the energy bill as it is about the PR.

If it weren't PR, they'd have chillers "just in case," even if turned off most of the time. As it stands, they may be exposed to a large risk: a month-long heat wave could leave them paying idle employees and taxes, and taking a hit on capital depreciation, for zero productive output from a datacenter they presumably built because they were banking on that output.

Of course, there may be something unique about the site/strategy that makes th

I've seen facilities that are largely cooled by climate pretty far north that still keep chillers on hand in the event of uncooperative weather.

Very true, but this is Google we're talking about; their re-routing ability is phenomenal thanks to the sheer number of data centres they have across Europe. The latency cost of a re-route away from Belgium to northern France or northern Germany on a hot day is minor; most companies wouldn't have so many similar data centres near the one shut down by inclement weather. It's a risk that will more than pay off. Besides, everyone gets lethargic on hot days; it seems Google now does too!

Google has been actively developing a reputation in the corporate world for squeezing the most CPU-bang out of a buck, and a great way to do that is by cutting down on the amount of power a CPU uses.

A few weeks back there was an article on Slashdot which discussed a previously unseen Google innovation in its servers: a built-in 12-volt battery that cut the need for a UPS (which lowered costs by lowering both the power flowing to the CPU and the power required to cool the servers).

I wonder if it would be feasible to have massive passive cooling (heat sinks, fans, exhausts from the data center, etc.) and run the data centers that are currently in night (i.e. on the dark side of the planet), constantly rotating the workload around the planet to keep the hottest centers in the coolest part of the planet. The same logic could be applied to moving workloads between the northern and southern hemispheres.

Yes, there would be tons more telecommunication to do, with the attendant impacts on performance,

Except that the highest load on data centers is generally during the local day, or at least not at 5am when it's the coldest. I would imagine routing traffic all the way to the dark side of the planet would produce less-than-acceptable latency for most uses. This might work for other types of work, but I don't think it would work for anything web- and response-time-based like Google.

Plus, routers/bandwidth aren't exactly cheap, and costs would go up if companies started using these methods. I'm not goi

1 - You are thinking too small, and apparently you are falling for the idea that Google is "just a search engine". It's OK; many people do that. It is not all "web", nor do they have to use all their power for themselves.

The idea is to offer "virtualized workloads" to your customers: workloads that you then shift around the world to the "cheapest work-area at the moment". Which MIGHT cost you a pretty penny, unless you have your own global network of datacenters built for just such a purpose. You know... somet

I did read the PP, and I've even replied to it. And I thought that I was clear enough in my reply, but apparently not.

See... the game is not the most power-efficient cooling, or even the best cooling. The game is "most bang per buck invested in the server infrastructure".

Now... saving money by reducing cooling costs with huge passive cooling farms is a nice idea, but not as easily calculable as simply switching to cheaper electricity. Sure, should you move your servers to Siberia you would get a shitload of passive cooling

An Enabler for "Follow the Moon"?

The ability to seamlessly shift workloads between data centers also creates intriguing long-term energy management possibilities, including a "follow the moon" strategy which takes advantage of lower costs for power and cooling during overnight hours. In this scenario, virtualized workloads are shifted across data centers in different time zones to capture savings from off-peak utility rates.

This approach has been discussed by cloud technologists Geva Perry and James Urquhart as a strategy for cloud computing providers with global data networks, who could offer a "follow-the-moon" service to enterprise customers who would normally build data centers where power is cheap. But this approach could also produce energy savings for a single company with a global network - someone like Google.
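
A minimal sketch of what such a scheduler might look like for deferrable (non-latency-sensitive) work. The site names, UTC offsets, and off-peak window below are hypothetical, and DST is ignored for simplicity:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sites with fixed UTC offsets (hours).
SITES = {"belgium": 1, "oregon": -8, "singapore": 8}

def night_sites(utc_now, start=22, end=6):
    """Return sites whose local hour is inside the overnight off-peak window."""
    picks = []
    for name, offset_h in SITES.items():
        local_hour = (utc_now + timedelta(hours=offset_h)).hour
        if local_hour >= start or local_hour < end:
            picks.append(name)
    return picks

noon_utc = datetime(2009, 7, 16, 12, 0, tzinfo=timezone.utc)
print(night_sites(noon_utc))  # -> ['oregon'] (04:00 local there)
```

A real implementation would also weigh spot electricity prices and current outdoor wet-bulb temperatures per site, but the time-zone rotation above is the core of the "follow the moon" idea.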

I think Google needs to start investing some time and money into buying or building Nuclear Power Facilities.

It could pay off for them, because they certainly don't need all of the power they would generate and could sell some back to the country/state/region they build it in.

Sounds like a win-win to me.

P.S. - Please don't start a flame war about how Nuclear Power is 'unclean' or 'dangerous' -- in today's society it is cleaner, more efficient and just as safe, if not safer, than coal-fired generators.

Nuclear reactors have lead times of 10 years or more, and you are proposing this for an internet-based business. Reactors are also insanely expensive and carry enormous political problems.
Um, yeah... like, that's totally going to work.

Maybe this sort of initiative is just what is needed to renew public interest in nuclear power. If a business like Google can show that it is clean, safe and reliable, perhaps governments and "environmentalists" can see through the FUD and support nuclear for national grid power.

As the number of chiller-less data centers in the Northern Hemisphere increases, New Zealand may become the ideal location to build alternate-climate data center capacity to deal with hot summers in Europe and North America... :)

The only problem is getting enough bandwidth to/from New Zealand to make the data center worth it. It's one thing to build one there and have it be super efficient and climate-effective, but it's another to have all that greatness and not be able to get enough information to/from it.

This picture [infranetlab.org] only shows 1 cable from NZ to NA, and none to Europe, so I'd say it's out of the picture.

But if your data center is in, say, Minnesota, it seems like you could balance the temperature with outside air for many months out of the year. Obviously you'd need to light up the chillers in the summer, but running them 4 months out of the year seems like a huge energy savings compared to running them year-round.

I remember visiting Superior in the summer, and the lake water was freezing f'ing cold even in June. I wonder if you could run a closed-loop heat exchanger without screwing up the lake environment?

I don't know about natural lakes, but man-made ponds have been used for just that purpose.

Man-made ponds are used because the EPA crawls up your ass if you want to use a natural body of water for any commercial/industrial outflow.

Note: I'm not saying that's a bad thing. I'm glad the "good old days," when chemicals, raw sewage, and cooling water were dumped willy-nilly into the waterways and drinking supply, are gone. Warm up an area of water 10 or 15 degrees Fahrenheit and you'll kill most everything living in it but algae.

Well, the Great Lakes are basically barren down in the deepest zones, so that's one heck of a heatsink. I know someone was talking about using the water coming into Chicago as a heatsink for free cooling, since the water is too cold to be usable immediately anyway (about 56F, I think).

Cornell University actually did this exact thing to cool a good chunk of the campus. It's called lake source cooling [cornell.edu]. While there will of course be some environmental impact, the energy usage is 20% of normal chillers and thus is, I'm sure, an environmental net gain.

Geothermal heat transfer is a great way to do cooling (and heating if needed). The initial investment will be high, but the savings will also be high in the long run (and not just in $). Add solar panels and/or wind turbines that can power the heat pump and some of the data center equipment. I had a solar system installed on the roof of my house a few years ago; I received major incentive rebates from the state, I can sell my SRECs, and I get "free" electricity, and in a few more years' time the cost of

(I'm not asserting anything about how much heat the Presque Isle Plant releases into Lake Superior or about how much damage that heat does, but it probably releases a significant amount of heat, and it probably has some sort of license)

The rough answer is yes, you can, but there are probably questions about how much you are willing to screw up the environment and whether or not you can get licensed:

In high school we went down to the Oyster Creek nuclear power plant a few times on class trips. They have a cooling water drain to a creek that eventually feeds into an estuary. In the winter, the area is teeming with wildlife that love it; there are clam beds unlike anything in the area around it. They've actually created fish kills by turning off the

"Why they can't further extract useful energy from this hot water I don't know."

I blame that bastard Carnot personally for this... They could get additional work out of that hot water, but it gets prohibitively expensive the lower your delta T between hot and cold gets. I was all stoked about finding some sort of Stirling heat engine to run off datacenter waste heat, until I worked the numbers and found the annual average maximum theoretical efficiency was under 15%.
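
The Carnot bound the parent ran into is easy to reproduce. The temperatures below are illustrative (roughly 110F exhaust air against a 70F ambient sink), not the parent's exact figures:

```python
# Carnot limit on work extracted from low-grade datacenter waste heat.
def f_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

t_hot = f_to_kelvin(110.0)   # waste-heat exhaust temperature (assumed)
t_cold = f_to_kelvin(70.0)   # ambient heat-sink temperature (assumed)
eta_max = 1.0 - t_cold / t_hot
print(f"Carnot efficiency limit: {eta_max:.1%}")  # ~7%
```

Real Stirling engines achieve only a fraction of the Carnot limit, so the practical numbers come out even more dismal than the parent's sub-15% theoretical ceiling.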

The short answer is yes: water takes a staggering amount of energy to change temperature (it's one of the many properties of the stuff that are really weird). A big lake makes an ideal dumping ground for waste heat. What's more, the environmental impact is going to be minimal: even the biggest data centre isn't going to produce enough waste energy to have much effect.

(A big data centre consumes about 5MW of power. The specific heat capacity of water is about 4kJ/kg.K, which means that it takes 4kJ to raise the temperature of one kilogram of water by one kelvin. Assuming all that gets dumped into the lake as heat, you're raising the temperature of about 1,250 litres per second by one kelvin. A small lake, say 1km x 1km x 10m, contains ten billion litres! So you're going to need to run your data centre for about eight million seconds, or roughly 90 days, to raise the lake's temperature by one measly degree. And that's ignoring the cooling off the surface, which would vastly overpower any amount of heat you could put into it.)
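
Running the same round figures (and treating a litre of water as a kilogram, which is close enough here), 5 MW against 4 kJ/(kg·K) gives about 1,250 L/s warmed by one kelvin, and on the order of 100 days to warm the whole lake by one degree:

```python
# Verifying the lake back-of-envelope with the same round figures.
power_w = 5e6            # 5 MW datacenter, all rejected as heat
cp = 4000.0              # J/(kg*K), specific heat of water (rounded)
lake_litres = 1000 * 1000 * 10 * 1000  # 1 km x 1 km x 10 m deep

litres_per_s_per_K = power_w / cp            # 1 litre ~= 1 kg of water
seconds_per_K = lake_litres * cp / power_w   # time to warm the lake 1 K
print(f"{litres_per_s_per_K:.0f} L/s warmed 1 K; "
      f"{seconds_per_K / 86400:.0f} days to warm the lake 1 K")
```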

(The same applies in reverse. You can extract practically unlimited amounts of heat from water. Got running water on your property? Go look into heat pumps.)

In fact, if you were dumping waste heat into a lake, it would make sense to try to concentrate the heat to produce hotspots. You could then use these for things like fish farming. Warm water is always useful.

At those latitudes the subterranean temperature remains pretty stable all year long. Drill into the side of a mountain or hill with a boring tool, leave the edges rough (with a smooth poured/paved floor for access), and just drop your server containers in there with power coming in. If you go all the way through the hill you can use the natural air currents to push/pull air through the tunnels, and the natural heat-absorption qualities of stone will keep the temperature down. I'd be surprised if any active "cooling" were needed at all.

The ancient Persians had a passively cooled refrigerator called the yakhchal [wikipedia.org] which "often contained a system of windcatchers that could easily bring temperatures inside the space down to frigid levels in summer days."

Perhaps the Google datacenter could employ some variation of their technique.

This reminds me of a technique for cooling water in a desert which could tenably be applied to the data center as well.

Basically, a container is filled with water, closed/sealed, wrapped with a damp/wet towel, and buried in the ground (or just placed somewhere in the sun, I suppose). The evaporation of the moisture in the rag draws heat from the inside of the container, resulting in frigid water.

Put a data center on a dry coastal equatorial area and harness solar to desalinate the water. Build th

You need both windcatchers and an underground water reservoir (a qanat). The windcatchers create a lower-pressure zone which pulls air in through the qanat, where there is evaporative cooling. I don't think this would get near freezing temperatures unless your water source is really cold.

There is a way to make ice in a dry environment by exposing water to the coolness of the night sky and insulating it during the day.

So the fundamental upshot is that the point-to-point speed of the internet will be directly correlated with the average temperature of various cells, on a large scale. The statistical effect will be there; I'd wager this would be a remarkably accurate and near-real-time barometer of global temperature.

It's good to read some good news for a change... but it won't hit too many headlines: "Giant Google, Billion-Dollar Company, Does Something Good".
This "good" I speak of is someone with means and vision getting out there and just doing something. I still think Google could easily turn to the dark side... but that is a whole different post ;)

I'm not sure I understand why they constructed their own water treatment plant. I would think that it would be more energy efficient on the whole to use the already constructed municipal system in the area.

Municipal water (at least here in the US) means "chlorinated water". Chlorine does terrible things to pipes, coolers, pumps, everything. Having your own water treatment system means the chlorine never gets in, saving bundles in maintenance. To get an idea, find two similar water-cooled vehicles: one which has had chlorinated water added to the radiator routinely, and another whose owner has been more choosy. Look down into those radiators. I've actually seen copper radiators corroded out in states t

They want water treated to work well with their cooling equipment, not just water good enough to drink. For instance, where I am there are a lot of manganese salts in the drinking water that are perfectly safe to drink but tend to stick to hot surfaces. Using this water, you would eventually clog up the pipes of a cooling system with the same brown gunk you get as a thin layer on the inside of electric kettles. There is other stuff that can precipitate out at different temperatures and basically leave you

Basically the underground temperature will roughly be the mean year-round above-ground temperature, so long as you're not in a geothermally active area; that's where your 10-13C number comes from. But that only applies if you don't inject more heat than the ground can conduct away. The year-round mean temperature in London is probably around 10C, but the temperature in the London Underground in mid-summer on the deep lines is more like 30C: too much heat generated by the trains and people for the rock/clay to conduct away.

I wonder how much this is a cynical marketing and public policy exercise.
A few months ago, the European Commission announced an ambitious programme for the IT industry, with European energy conservation targets to be met by 2012, and lo and behold, look who's here preening its feathers?

Unless you introduce a heat source; then it's extremely difficult to lose that heat, since you're in such a well-insulated environment. Similar problems exist in cooling spacecraft: sure, it's "cold" in space, but if you have a local heat source that you want to shed, where do you send it, and how?