Tuesday, June 2, 2009

Grid Heating: Putting Data Center Heat to Productive Use

Dr. Paul Brenner, a research scientist in the Center for Research Computing at the University of Notre Dame, has been advocating a novel idea called grid heating. He recently won a "Green IT Award" from the Uptime Institute for his work. Here is a short introduction to the idea:

Around the world, large data centers consume enormous amounts of power. In addition to the energy needed to spin disks and rearrange electrons, an approximately equal amount of power is needed to run the air conditioners and fans to remove that heat from the data center. In this sense, data centers are doubly inefficient, because they are using power to both heat and cool the same space. If we could put that heat to productive use, then we could save energy on cooling the data center, as well as save energy that would have otherwise been used to generate heat.

Last year, Dr. Brenner constructed a prototype of this idea at the city greenhouse in South Bend, which was struggling with enormous heating bills during the winter. He constructed a small cluster, and placed it in the Arizona Desert display in the greenhouse, where the plants need the highest temperature. Notre Dame paid the electricity bill, the greenhouse got the benefit of the heat, and the computers simply joined our campus Condor pool. Everybody wins, and nobody has to pay an air conditioning bill.

However, the first cluster was just a prototype, and couldn't generate nearly enough heat for the entire greenhouse. So, this year, Dr. Brenner is building a small data center in a modular shipping container next to the greenhouse. With a new electricity and network hookup, the data center will run several hundred CPUs, and function as a secondary furnace for the facility, hopefully reducing the heating bill by half over the winter.

The new facility will significantly add to our campus grid, and will also give us some interesting scheduling problems to work on. The greenhouse needs heat the most during the winter, and to a lesser extent during the summer, so the computing capacity of the system will change with the seasons. Further, the price of electricity varies significantly during the day, so jobs run in the dead of night may be cheaper than those run during the day. If we can connect our "campus grid" to the "smart electric grid", we can make the system automatically schedule around these constraints.
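To give a flavor of what such a scheduler might look at, here is a minimal sketch of a dispatch decision that weighs season and electricity price. The thresholds, the winter months, and the price cap are all invented for illustration; a real policy connected to the smart electric grid would use live pricing data and the greenhouse's actual heat demand.

```python
def should_run_job(hour, month, price_per_kwh, price_cap=0.10):
    """Decide whether to dispatch a job right now.

    In winter the heat itself is valuable, so run freely; the rest of
    the year, run only when electricity is cheap, e.g. in the dead of
    night. All thresholds here are illustrative assumptions.
    """
    winter = month in (11, 12, 1, 2, 3)   # heat demand is highest
    cheap = price_per_kwh <= price_cap    # off-peak pricing
    night = hour < 6 or hour >= 22        # dead of night
    return winter or (cheap and night)

# A January afternoon runs regardless of price; a July afternoon waits.
print(should_run_job(14, 1, 0.15))   # True
print(should_run_job(14, 7, 0.15))   # False
print(should_run_job(2, 7, 0.08))    # True
```

The interesting research question is exactly how to fold these signals into the matchmaking that a system like Condor already performs.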

5 comments:

How does the cluster as heater work during the summer months? Does the cluster itself run hot? Do some of the machines take themselves offline if the greenhouse or cluster gets too hot? Will the energy savings from the winter compensate for any cooling costs that might arise?

At a technical level, the cluster simply acts like a furnace connected to a thermostat: when it is too cold, the cluster runs and generates heat; when it is too hot, it stops in order to cool down. As a refinement, you can configure the system to vary the number of busy CPUs in order to damp oscillations.
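A toy version of that control loop might look like the following. The setpoint, deadband, and linear scaling are illustrative assumptions, not the controller actually deployed at the greenhouse.

```python
def busy_cpus(temp_f, total_cpus, setpoint_f=75.0, band_f=10.0):
    """Return how many CPUs to keep busy at the current temperature.

    Below the deadband the cluster runs flat out (maximum heat); above
    it, the cluster idles to cool down; inside the deadband, the load
    varies linearly with temperature to damp oscillations.
    """
    if temp_f <= setpoint_f - band_f:
        return total_cpus                 # too cold: full heat output
    if temp_f >= setpoint_f + band_f:
        return 0                          # too hot: let it cool
    fraction = (setpoint_f + band_f - temp_f) / (2 * band_f)
    return round(total_cpus * fraction)

# With 100 CPUs and a 75 F setpoint:
print(busy_cpus(60, 100))   # 100 -- cold morning, furnace mode
print(busy_cpus(75, 100))   # 50  -- at the setpoint, half load
print(busy_cpus(90, 100))   # 0   -- hot afternoon, idle
```

The proportional region is what distinguishes this from a bang-bang furnace: instead of slamming between all-on and all-off, the load ramps smoothly as the greenhouse approaches the setpoint.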

So, it is possible to run during the summer, but you must either accept that you have less computing capacity, or you must turn on the A/C unit in the cluster. The former saves energy, but the latter accomplishes more computing. The organization can then decide whether the value of the computing is worth the price of cooling.

The question of whether the winter savings compensates for cooling in the summer depends on so many variables that it has to be answered on a case-by-case basis for each installation. (For the greenhouse, we don't know yet.) At the moment, all we can say is that we are saving during the winter and doing no worse than before in the summer.

For cooling in the summer (if you are planning a new greenhouse): before the foundation is laid, place geothermal tubing underground (in areas where that is possible) and connect a ground-coupled heat exchanger to the cluster container's thermostat. http://en.wikipedia.org/wiki/Ground-coupled_heat_exchanger

Prof. Douglas Thain

About Me

I am an associate professor at the University of Notre Dame, where I conduct research and teach classes in distributed systems, operating systems, and compilers. Read more at my homepage or at my research lab.