Mechanical and electrical engineers frequently share the same ultimate goals. But surprisingly often, they try to reach those goals without actually talking to each other.

Take, for example, researchers in each discipline working on the IT data center. "We're both concerned with the growing energy footprint of enterprise computing," notes HP Labs mechanical engineer Amip Shah. "But too often we don't speak the same language."

The result, Shah suggests, is that data centers aren't nearly as energy efficient as they could be.

That's a big deal -- data centers consume approximately 1.5% of all electricity generated in the United States, for example. And additional energy and resources are used in their manufacture, a figure that can be calculated to understand the total energy, or 'exergy,' consumed by a typical data center over its 'cradle-to-cradle' lifespan.
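The cradle-to-cradle accounting described here boils down to a simple sum: the exergy embedded in manufacturing plus the exergy consumed in operation over the equipment's life. The sketch below illustrates the arithmetic only; the function name, the PUE overhead factor, and every numeric input are hypothetical placeholders, not HP's actual figures.

```python
# Illustrative cradle-to-cradle exergy sum. All numbers are invented
# placeholders for the shape of the calculation, not real HP data.

def lifetime_exergy(embedded_mj, avg_power_kw, lifetime_years, pue=1.6):
    """Total lifetime exergy (MJ) = embedded (manufacture) + operational (use).

    pue is the data-center overhead multiplier (cooling, power delivery)
    applied to the IT equipment's own draw.
    """
    hours = lifetime_years * 365 * 24
    operational_mj = avg_power_kw * pue * hours * 3.6  # 1 kWh = 3.6 MJ
    return embedded_mj + operational_mj

# Hypothetical rack: 500 GJ embedded, 5 kW average draw, 4-year life.
total = lifetime_exergy(embedded_mj=500_000, avg_power_kw=5, lifetime_years=4)
```

The key property the HP team exploited is visible in the structure: the embedded term is fixed at manufacture time, so every operational efficiency gain shrinks only the second term.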

Now, though, a team that combines mechanical engineers from HP's Sustainable Ecosystems Research Group with computer system architects from the company's Intelligent Infrastructure Lab has found a way to speak in a common tongue.

And that shared approach is leading to significant insights into how the data center of the future might be designed so that its total exergy consumption is substantially reduced.

A common language

"The first thing we had to do was to take the materials-based perspective of mechanical engineers and convert it into something computer architects could deal with," explains Partha Ranganathan, himself a computer engineer and a principal investigator with HP's Intelligent Infrastructure Lab.

To do this, the cross-disciplinary team took a broad set of computer architecture terms and mapped each one to its impact on sustainability.

"Contrary to our expectations going in," Ranganathan reports, "we found that the energy it takes to make the system was already about 20 to 30% of total exergy consumption for the data center. And when you factor in all the work that we computer engineers are doing to improve energy efficiency as we operate data centers, we realized that the embedded proportion will only keep getting bigger."
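Ranganathan's point is a matter of simple arithmetic: because the embedded exergy is spent before the machine is ever switched on, cutting operational consumption mechanically raises the embedded share of the total. A toy illustration with invented numbers:

```python
# Invented numbers: embedded exergy is fixed at manufacture time, so
# as operational exergy falls, the embedded share of the total rises.
embedded = 25.0  # arbitrary units, ~25% of an initial total of 100

for operational in (75.0, 50.0, 25.0):
    share = embedded / (embedded + operational)
    print(f"operational={operational:>4}: embedded share = {share:.0%}")
```

Halving operational consumption twice takes the embedded share from a quarter to half of the total, which is why the team turned its attention to the material side of the ledger.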

This insight in turn suggested that a key way to reduce overall exergy consumption for data centers would be to reduce the amount of material used to construct them – what the HP researchers began to refer to as 'dematerializing' the data center.

The dematerialized data center

The notion of a 'dematerialized' data center had already been raised by Chandrakant Patel, director of HP's Sustainable Ecosystems Research Group and a recognized authority on sustainability and information technology.

Armed now with a common reference set that allowed them to accurately compare energy use in both manufacture and operation, the HP Labs team was able to begin putting flesh on Patel's theoretical bones.

The redesign they came up with was radically new. Gone were the conventional 'pizza boxes' housing individual servers which then slotted into columns of server racks. Instead, pairs of vertical backbones, similar to those that anchor walls of modular bookshelves, held columns of server blades plugged directly into the supports.

"It turns out that when you do that, a lot of things break!" admits Ranganathan.

"But that was where the collaboration got really vital," he says. "Thanks to what became a very iterative process, bouncing ideas back and forth between the mechanical and computer engineers, we fairly quickly figured out what made sense."

A scale model

Next the team needed to evaluate their new design. To help do this they called on undergraduate HP intern Tobin Gonzalez to build a scale model of what they had proposed.

While a miniaturized data center can't reproduce every aspect of a real data center's performance, it can offer valuable insights into areas such as the overall structural integrity of the concept and how it might best be cooled.

For example, says Gonzalez, "we wanted to think about the sort of boards we would plug into this." Thanks to sensors placed throughout the model, he says, "we could run tests to see how different arrangements of blades and different workloads will affect the input and output temperatures of the system and how effectively it cools."
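One simple way to reason about the kind of sensor data Gonzalez describes is the temperature rise from inlet to outlet, a common first-pass proxy for how hard a configuration works the cooling system. The sketch below assumes hypothetical sensor readings and a hypothetical helper; none of it reflects the team's actual instrumentation.

```python
# Hypothetical inlet/outlet readings (degrees C) from two invented
# blade arrangements in a scale-model run; all values are made up.
readings = {
    "dense_bottom": {"inlet": 22.0, "outlet": 41.5},
    "spread_even":  {"inlet": 22.0, "outlet": 34.0},
}

def temperature_rise(run):
    """Delta-T across the system: a crude proxy for cooling load."""
    return run["outlet"] - run["inlet"]

for name, run in readings.items():
    print(f"{name}: dT = {temperature_rise(run):.1f} C")
```

A lower delta-T for the same workload suggests the arrangement moves heat out more effectively, which is the sort of comparison the instrumented model makes cheap to run.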

Those results fed back into a new set of theoretical designs for an even more dematerialized data center.

The final step was to go back to traditional server designs and compare their exergy costs with the new models. "The good news," says Ranganathan, "is that we found that by designing the data center in this new way we can reduce exergy consumption by about 50%, while keeping its performance exactly the same."

A new paradigm

"The important thing here is that we had both the tools and the models," says computer engineer Jichuan Chang, who was also part of the multi-lab team. "And that we developed our idea at the same time as developing the methodology."

Equally significant, says Shah, is how this collaborative project was able to validate theoretical work in IT sustainability.

"We have always maintained that designing from cradle-to-cradle with sustainability in mind will get you the most effective cost of ownership," he explains. "And this really helps us validate that hypothesis. It is a very powerful proof point in terms of our ability to do these types of things for other types of ecosystems as well."

There are still plenty of questions to be answered before anyone starts building and selling a new generation of radically dematerialized data centers, notes Shah. "But you could see things like the research we've done on fan controls show up a year from now in our products. And then there will be a continuous stream of incremental innovation that hopefully will ultimately lead to really disruptive innovation."

Eventually, the entire paradigm of how data centers are built, shipped and used could change, Shah believes.

"Today everything gets built at point A, then assembled at point B, then delivered to the customer at point C and installed at location D," he says. "But from a cradle-to-cradle standpoint, is that the right way to be doing it?"

All the HP researchers agree that key to their ongoing success will be their ability to keep electrical and mechanical engineers talking much earlier in the development process than usual.

"The longer term implications of tying two research communities together like this are really exciting," Shah adds. "If this becomes a standard methodology that we go through, then I have no doubt we are going to discover possibilities that we didn't imagine existed."

The value of interns

Two important contributors to HP's multi-lab data center team have been its undergraduate interns, Tobin Gonzalez and Justin Meza.

Gonzalez is an incoming senior at the University of Washington in Seattle and comes to HP Labs through the company's HP Scholar program.

Being exposed to a research environment while still taking foundational classes in computer science has been inspirational, says Gonzalez. "There's a lot to see here and a lot of interesting ideas," he explains. "People are working on stuff that I'd never thought of."

Before Gonzalez, then-undergraduate Justin Meza performed a similar role. Meza is now a PhD student at Carnegie Mellon University, but still keeps in contact with the HP team. Indeed, his graduate research project was inspired by his time at HP.

That kind of long-term relationship is something HP researchers always hope to build with their interns as it helps create and cement ties with the broader academic research community, says HP's Partha Ranganathan.

And more immediately, interns offer a perspective that experienced researchers often find hard to achieve.

Undergraduates in particular, he says, "come in with a lot of ideas. They also don't have predetermined notions of what the correct answer is, and that is incredibly valuable to us."