Posted
by
samzenpus
on Wednesday July 22, 2009 @07:32PM
from the slow-your-roll dept.

snydeq writes "Datacenter operators seeking increased server density may soon turn to power capping, an emerging technology that limits the amount of electricity a server can consume, InfoWorld reports. The practice, which can be applied at the rack level, ensures that no server draws above a set power level, thereby increasing datacenter capacity within a rack-level power envelope by as much as 20 percent, according to a proof-of-concept study at Baidu, China's largest search company. As with powering down servers during off hours, of course, power capping incurs calculated risk, as those in charge of business-critical applications may be reluctant to set power limits below maximum utilization. Yet given IT's need to contend with the permanent energy crisis, the notion of power capping the datacenter could prove advantageous."

lame... a much better way of handling this is what datacenters are already doing: simply sell you power circuits, say 20 amps each, for a set price. if they want to discourage power use, they simply have to raise the price. low tech, and it works perfectly well. you can always get a good power strip with an ammeter on it if you want to know what your servers are drawing.


This is certainly an alternative. I think that either way we should see owners of devices examining how much processing power they really need.

With SSDs and the reduced issue of platter spin-up, I can imagine servers that stand by and wake up if there is any activity on a specified port.
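One way that wake-up path could be sketched: a small always-on frontend accepts the incoming connection and fires a Wake-on-LAN magic packet at the sleeping server. The MAC address and broadcast details below are placeholders; this only illustrates the standard magic-packet format (six 0xFF bytes, then the MAC repeated sixteen times).

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: six 0xFF bytes, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9 is conventional for WOL)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(build_magic_packet(mac), (broadcast, port))
```

The server's NIC has to support wake-on-LAN and have it enabled in firmware, of course; waking on arbitrary port traffic (rather than a magic packet) needs NIC-level pattern-match support.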

SSDs don't actually consume much less power than platter-based drives, but they're far more expensive. The spin-up/down is mitigated if there's good caching on the controller, and I think that's the key here -- caching. You can have a couple of terabytes on a server, but the drives aren't called until something is requested which *isn't* on the controller's buffer.

I should point out, though, that the power consumption of HDs is insignificant compared to the rest of the system.

Just as I posted that I remembered that idle power draw on 3.5" drives isn't 1W, it's ~4W... I was thinking about 2.5" drives. This is compared to the 40W to 60W that a typical CPU will consume.
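Putting those numbers together, a quick back-of-the-envelope (the 4-drive count is a hypothetical server, not from the comment) shows why idle disks are a minor slice of the draw:

```python
# Share of CPU+disk power attributable to idle 3.5" drives, using the
# ballpark figures above; the 4-drive server is hypothetical.
idle_drive_w = 4       # ~4 W idle per 3.5" drive
cpu_w = 50             # midpoint of the 40-60 W CPU range
drives = 4
drive_total_w = idle_drive_w * drives
share = drive_total_w / (drive_total_w + cpu_w)
print(drive_total_w, round(share, 2))  # 16 W, about a quarter of CPU+disk draw
```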

This is why I was suggesting a smart system that can deactivate cores or subsystems until they are actually needed. Most servers aren't designed with energy consumption in mind, so they still draw a fair amount of power even when only five users are visiting the web site.

If they could be made to use the least resources possible to get the job done, then it would probably end up saving a fair amount of money long term.

Finally found the term they used: "power gating". Here's an Ars article [arstechnica.com] about it.

The relevant bit: Traditionally, Intel has been able to shut down an unused core by cutting its active power, but even though it's in a sleep state, that core is still dissipating plenty of power because of leakage current. Intel's power gating technique involves a new transistor design, and it lets Intel cut the leakage current as well, so that the sleeping core's power dissipation drops to near zero.

The spin-up/down is mitigated if there's good caching on the controller, and I think that's the key here -- caching. You can have a couple of terabytes on a server, but the drives aren't called until something is requested which *isn't* on the controller's buffer.

Something not on the controller's buffer would include new data received from the network. Caching works fine for reads, but if you rely on it for writes, you could lose Durability [wikipedia.org].

This is a great way to waste money. If you don't use power capping, then you'll be paying for 20A but only using 10A-12A if you're lucky. Power capping allows you to use the full 16A that you're paying for.
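As a rough illustration of that point (the wattages are invented and 120 V is assumed): provisioning by nameplate peak strands capacity that a cap can reclaim.

```python
# Servers per 20 A / 120 V circuit (16 A continuous under the NEC 80% rule),
# provisioned by nameplate peak vs. an enforced power cap. Numbers illustrative.
circuit_w = 16 * 120          # 1920 W usable on the circuit
nameplate_peak_w = 400        # must assume worst case without capping
capped_w = 300                # ceiling enforced by power capping
servers_uncapped = circuit_w // nameplate_peak_w
servers_capped = circuit_w // capped_w
print(servers_uncapped, servers_capped)  # 4 vs 6 servers on the same circuit
```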

The real 'cool factor' starts to kick in when this stuff is fully automated. In the demos that I've seen, you can take a rack of dev/test servers and drop the power they are using off hours, then give the power back (cpu clock rate, etc.) during the day. We already have something similar in our company datacenters: the HP systems (running ESX) we have now balance (via vMotion) running systems at night and power down some of the hosts (about half of them on a normal night, based on current load), saving power.

I still don't understand why the data center needs to get involved. You can just manage your own power, and using a conventional power meter, the data center can just collect the bill for what you use.

One only wonders how long it will be until every spreadsheet process becomes "business critical" to override restrictions such as this.

When the business involved has to pay a larger and larger bill, that which is considered "critical" is increasingly analyzed as the bottom line gets thinner and thinner. When margins are fat, daily $4 lattes are "critical" to staff morale. Conversely, when times are really tight, staff morale is critical at "well, you ain't fired yet, is yeh?".

I think the problem is that too often the people who run the servers and the people who benefit from the servers have entirely different budgets and suffer from terrible myopia.

If some IT budget administrators were put in charge of telecom they'd save money by not letting anybody have a phone. Sure, that saves money, but of course completely ignores why employees have phones in the first place.

I can't tell you how many times at work I've seen applications suffering with performance problems.

"Permanent energy crisis"? There's no such thing as a permanent crisis. Yes, energy costs are going up because we're more sensitive to the impact of new capacity. But that hardly constitutes a crisis. The word "crisis" has been practically stripped of meaning - everything these days is a goddamn crisis. When the girlfriend you were about to dump gets pregnant - that's a crisis. A few bucks more on your energy bill - not a crisis.

Peak oil isn't a crisis either. We can replace every erg of energy from oil with nuclear if we're motivated to do so. Even if we don't, during my lifetime peak oil will mean as an American I might, at some point, have to pay almost as much for gas as the Europeans are paying now, plus a few commodities will cost more.

I think you need to understand a little more about the politics and economics of oil before making a statement like this. A couple things to get you started:
1) Determine the relationship between the US dollar and economy and global oil trade
2) Understand why EROEI is so significant when discussing alternative energies

Sigh. I understand all that, but the numbers are hardly insurmountable, or even very uncomfortable. How many nuclear power plants could we have built for the trillion dollars we spent on "stimulus"? Four or five in every state, by my calculations. The idea that everything is just going to fall apart when the price of oil goes up is just silly.

The Stone Age didn't end because they ran out of stone. The Oil Age might end when we start to run low on oil, but that doesn't mean we won't have plenty of alternatives.

The economic argument for all sorts of magic coming from having oil traded in USD is weak. A barrel of oil is worth whatever the next buyer thinks a barrel of oil is worth; a dollar is worth whatever the next guy who gets it thinks it's worth. These things are both fungible, and they're both pretty liquid. There's a vibrant currency exchange market. If people think the dollar or the barrel-o-crude is not worth what it used to be, the prices are perfectly capable of shifting to match. Look at the last big recession and oil crisis of the 1980s. Look at 2008, for crying out loud. The US dollar may wax and wane, the US economy may shrink 10% in a bad year, but oil dropped from over $100 a barrel to something like $30.

As for the money supply, the Federal Reserve is pretty capable of generating as much or as little of our little fiat currency as they feel like. The national debt (and the price at which people are willing to buy it worldwide) is what's going to be weighing on the US and its economy over the next several decades, much more than any medium-of-exchange games. The government and the private sector compete for loans: when there's more debt, it's more expensive for private firms to borrow, and that hurts economic growth - because look! Treasury bonds! They're nice and safe. Why would you invest in a risky old business in /this/ economy?

It's not that easy. The US FED is the only legitimate source of USD. So whenever someone needs USD to buy crude oil, he somehow has to get USD, and that means he either takes a loan with the US FED, or he tries to sell something to someone who already has USD, which means that in the end he has to do business with the U.S., directly or indirectly. Because the US FED has a monopoly on the USD, it is the only institution that can directly manipulate the price of the USD. So no, the price of the USD is not set purely by the market.

nothing to understand, we have coal sufficient for centuries and the means to make it into every major type of hydrocarbon fuel.

It's true that there's enough coal for centuries. However, if we keep using it the way we do now, those centuries will be spent underwater. The power crisis will kick in when sea levels start rising so fast that power will be capped during peak hours, for *everyone*. Either we start adapting now, or we'll pay for it tenfold later.

by the way, the ocean levels have been rising since the last ice age. Man might be making them rise a wee bit faster, so really all these places you see on the news evacuating because of higher ocean levels are doing so because they were right at ocean level. Might as well move now, since they would have had to move later anyway.

Peak oil isn't a crisis either. We can replace every erg of energy from oil with nuclear if we're motivated to do so. Even if we don't, during my lifetime peak oil will mean as an American I might, at some point, have to pay almost as much for gas as the Europeans are paying now, plus a few commodities will cost more.

I think peak oil is real, but we won't get any warning. It'll just happen that we run out of oil.

Think about it for a little while - we've already seen that when gas prices jump, people change their behavior.

How could you not believe there's an energy crisis? The average American uses the energy equivalent of 60+ personal slaves. 90% of that is provided by limited fossil fuels that are having an irreversible impact on global climate.

We have zero viable plans to replace any of it any time soon.

And in case you haven't been paying attention to the current breeding-age generation, a pregnant girlfriend is a ticket to 20 years of government cheese. It's not a crisis at all.

I must admit I'm not sure how to convert units of "personal slaves" into kilowatt-hours. I must have gotten my degree too early, or, I guess, too late. But assuming I understand the gist of your comment, and further assuming scientists have accurately assessed the impact of fossil fuels on the earth's climate (which would exclude coal as a long-term power source), I don't see why nuclear power can't produce all the power we'll need for hundreds of years. And that's assuming we can never get reasonable efficiency out of renewables.

Sixty slaves? Sounds like you've found a solution to our nation's unemployment crisis! I bet using people as energy sources would also help correct our obesity crisis, which would in turn lessen the effects of our current health care crisis. Returning people to work would help the financial crisis. These laborers wouldn't need much schooling, so there goes the education crisis. My goodness, it's brilliant!

Of course, all these crises could be the equivalent of the Y2K crisis... much ado about nothing.

You should set the power cap slightly higher than the server's typical power usage, so the server rarely or never slows down. Also, in corporate IT there is no other provider, so the alternative to power capping is usually to not buy any more servers.
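A sketch of that rule of thumb (the wattages and 10% headroom below are made-up numbers): cap each server slightly above its typical draw, then sanity-check that the sum of caps still fits the circuit.

```python
# Cap each box ~10% above its typical draw, then verify the rack still fits
# the circuit's continuous rating. All figures here are illustrative.
def plan_caps(typical_watts, circuit_w, headroom=0.10):
    caps = [round(w * (1 + headroom)) for w in typical_watts]
    return caps, sum(caps) <= circuit_w

caps, fits = plan_caps([250, 250, 300, 310], circuit_w=1920)
print(caps, fits)  # caps land just above typical, and the rack fits
```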

It depends what they are overselling and to what degree. Overselling a plane screws whoever is left behind (large impact over small set of customers). Overselling bandwidth slows someone's download or game (small impact over many customers).

Overselling a rack and causing servers to a) fail or b) corrupt data costs hundreds of thousands or more pretty quickly in damages and legal fees. It's a much wiser business decision to just increase the power capacity (not that some suits will think this is the right call).

Honestly, these days if a business isn't overselling they're leaving money on the table.

Yes, and it is that kind of amoral and unethical crap that is giving the shaft to the consumer day after day after day.

Just because you can do something, and it will be profitable, does not mean that you should do it.

Overselling anything should be expressly prohibited by law in the strongest terms possible. Honestly, it really pisses me off. At the very least, the amount of overselling should be disclosed to customers.

Yes, and it is that kind of amoral and unethical crap that is giving the shaft to the consumer day after day after day.

You're right! Gyms, for example, shouldn't sell memberships to more people than they can fit in the gym at the same time. Otherwise, if every one of their members decided to go to the gym at once, they'd have to turn some away!

(Seriously - overselling is just another risk. Taking no risks is bad. Taking too much risk is bad.)

Beacon Power is an American corporation specializing in flywheel based energy storage headquartered in Tyngsboro, Massachusetts. Beacon designs and develops products aimed at utility frequency regulation for power grid operations. The storage systems are designed to help utilities match supply with varying demand by storing excess power in arrays of 2,800-pound (1,300 kg) flywheels at off-peak times for use during peak demand.

Plus, proper power capping can reduce infrastructure cost, since the power distribution and conversion systems in the datacenter have to be designed for the maximum load, which means they need a lot of headroom on their power supply. By capping, they can reduce that maximum load.

The only problem... is how they will reduce power while minimizing the interruption to the workload at peak times. Not an easy problem.

Those are insanely expensive and only store enough power for a few MW of load for a handful of minutes. In theory they are better than a UPS in the long run, due to not needing to replace batteries every few years, but they aren't going to shift much load from peak to off-peak.

The basic unit of Beacon Power's approach is the Smart Energy 25, basically an enormous steel vacuum bottle holding a 2,800-pound cylinder made of carbon and fiberglass composite that is levitated by magnets.

Beacon's flywheels take in electricity and use a motor to spin the cylinder so fast that its surface hits Mach 2. The spinning cylinder stores most of the electricity's energy for as long as needed (thanks to the near-frictionless vacuum and magnetic bearings).

Eric Sonnichsen, who founded Test Devices in 1972, points out that a wheel created from carbon fibers is safer than a steel wheel, because even if a few fibers break, the wheel won't come apart. On the other hand, if a flywheel does disintegrate, says Sonnichsen, "it's more like potentially lethal lumps of coal coming at you."

Storing power is not as easy as it looks. If you have a good idea, let us know.
Huge batteries are expensive and lose their charge over time.
You could use the power to pump water higher and then use that potential energy later, but there are so many conversions of energy along the way that it's not efficient enough.

Tell me why it doesn't make sense to buy power at off-peak rates and store it locally to meet peak demands.

This has been possible for consumers in the UK for decades. In the UK, they have "storage heaters". These are heaters that store heat at night, at off-peak rates (the meters measure peak-time and off-peak usage), and then release the heat during the day when it is needed. Many people also use delay timers on other devices such as dishwashers to take advantage of off-peak rates.

It amazes me that this concept has failed to reach the USA, although it is a lot more difficult to store "cool" than it is to store heat, so perhaps the possible gains are smaller across much of the USA.

The problem isn't with household usage patterns but with the fact that the utilities generally don't have a time-tiered pricing system. My electricity meter is of the old-fashioned dumb sort, as is most people's, so there's no incentive to use power at off-peak times. That's changing, but slowly. If smart meters were ubiquitous, I think people would respond (and there are many places in the US that need more heating than the UK!).

It does make sense. That's exactly what i/o Data Centers is doing in Phoenix. They're installing a thermal storage system [datacenterknowledge.com] at their huge Phoenix ONE data center. The building chillers will cool a solution of water and 28 percent glycol. The thermal storage tank contains Cryogel ice balls, which freeze when the system is charging at night and then cool the glycol solution during the day. The glycol solution is then pumped through a heat exchanger, which chills water in a separate loop used in the data center.

If I'm reading the article right, this doesn't save any energy at all (and might increase total energy consumption). It's a way of spreading out power use over time, so you can get more servers without increasing peak power capacity. It makes sense for some loads (why run at 2 GHz for 2 minutes of every 30 when you can save power by running at 500 MHz for 8 minutes?). But for interactive loads it won't be good.
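For that race-vs-crawl example, the rough cubic power law discussed elsewhere in the thread makes the energy gap concrete. The constants are arbitrary since only the ratio matters:

```python
# Energy for equal work: 2 GHz for 2 min vs 500 MHz for 8 min (same total
# cycle count), assuming dynamic power scales ~ f^3. Only the ratio matters.
def energy_units(freq_ghz, minutes):
    return (freq_ghz ** 3) * minutes  # power ~ f^3, times runtime

fast = energy_units(2.0, 2)   # burst at full clock
slow = energy_units(0.5, 8)   # crawl, four times as long
print(fast / slow)  # 16.0 -- the slow run uses ~1/16th the energy
```

Real chips don't follow the cubic law exactly (voltage can't keep dropping with frequency forever, and static leakage persists), so treat the 16x as an upper bound on the idea, not a measurement.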

I can't see any legitimate provider capping power usage. We have 20A running to a client rack by default - if they need more circuits, we charge them per circuit. The only place I can see people wanting to make an argument for capping power usage is if a provider has oversold their power infrastructure and is starting to feel the pinch because they're not charging enough. Same goes for bandwidth: if you price things cheaper and cheaper to attract customers, I believe it's unethical to then raise rates after the fact because of poor planning/forecasting.

Power capping is intended to be used by the server owner; e.g. in a colo that would be the customer, not the provider. You give the customer a circuit and they use capping to fit as many servers as possible on it.

I think you have it backwards. A CPU running at 1 GHz would use 10-20% of the power of a CPU running at 2+ GHz. A modern processor's power consumption follows roughly a cubic curve. To increase frequency, you have to increase voltage, which gives a 2nd-order 1/2 * C * V^2 energy loss per cycle. Power is that energy per cycle times the frequency, so the overall formula is P = 1/2 * f * C * V^2, and since V scales roughly with f, that works out to O(f^3).
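A quick numeric check of that formula (the capacitance, voltages, and the assumption that V halves along with f are all illustrative, not real chip figures):

```python
# Dynamic power P = a * C * V^2 * f. If voltage tracks frequency, halving the
# clock cuts power ~8x, consistent with the cubic curve described above.
def dynamic_power(activity, c_farads, volts, freq_hz):
    return activity * c_farads * volts ** 2 * freq_hz

p_fast = dynamic_power(0.5, 1e-9, 1.2, 2e9)  # 2 GHz at 1.2 V
p_slow = dynamic_power(0.5, 1e-9, 0.6, 1e9)  # 1 GHz at 0.6 V (assumed scaling)
print(round(p_fast / p_slow, 3))  # ~8x
```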

Then why do entire lines of CPUs have the same TDP? For example, the Core 2 Duo is 65W across the board from 1.8 GHz to 3 GHz

The entire line has exactly the same TDP because they are exactly the same chip. Intel automatically tests each one as it's manufactured, and puts it into the appropriate speed bin. After a while, Intel's manufacturing technology improves, and they actually make too many of the fast chips. At that point, every speed bin gets the same chip.

Not a DC where I worked, but one where we had rack space. Each cabinet started at 15-20A, and you could request up to three strips (45-60A total) per rack.

They mentioned at one point that they were actually reaching the capacity for power on our floor, so we'd have to be careful about power as ordering more strips wouldn't be an option unless another rack elsewhere let some go.

Scaling down the use or throttling the usage during off-peak times could be a good thing, but how much energy would it really save in the long run? It has been noted that datacenter energy usage could double by 2011. Would steps like this really make a dent in that trend? I would grant that it might lower the curve; however, it won't stop the trend of growing datacenter energy usage.
I don't subscribe to the "permanent energy crisis" argument. There *is* a "permanent political crisis", though.

Datacenter power usage isn't going to double in two years. It can't. There isn't that sort of power capacity in the US today.

Sure, we might build more plants. But even a common nearly-off-the-shelf coal plant is going to require five years to bring online. And as you point out, we don't have five years.

If it is going to be a decision between turning off power to homes at peak times or preventing new businesses from being established, we are probably going to turn off power to homes. At least in the short term.

This happens to be why my quarter rack space has only 2 1U computers in it. It was supposed to be a quarter rack (10U), but I was told I had only 7U of space. Okay, not a problem, I can put in 7 1U systems, 14 if I purchase the half sized systems. Then I was told I have only 2A, oh, and here's a switch that'll turn it off if you go over. Which means my quarter rack has two 1U servers in it.

Worse, even the full rack is allowed only 15A before you have to buy a secondary power conduit to the rack at this particular colo.

I suspect it's more a way for the facility to make money than it is to reduce energy usage. When I visited the facility last to move boxes, 4 racks were being emptied, and a good 60% of them were completely empty anyway, so the facility may not be around long in this economy.

A rack is only allowed 15A (in your case), or usually 16A out of 20A, because NEC code states you can only drive a circuit at 80% of its max capacity (16A on a 20A circuit, 24A on a 30A circuit, etc.). Colo customers should take this into account when they buy power. If you want to buy 10U of space and expect to fill it with ten 1U servers, be prepared to pay an appropriate price, as it's not just space, but power, and the cooling to remove all that power going into those servers. Every watt of heat you make is a watt the facility has to pump back out.
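The 80% continuous-load derating works out like this:

```python
# NEC-style continuous-load derating: plan for no more than 80% of the
# breaker rating. Matches the 16 A / 24 A figures cited above.
def usable_amps(breaker_amps):
    return breaker_amps * 0.8

for breaker in (20, 30):
    print(f"{breaker} A breaker -> {usable_amps(breaker):.0f} A continuous")
```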

Ed Nisley of Circuit Cellar shared an interesting statistic a couple of years ago: it costs approximately $2 per watt per year. This counts both supplying the power to the computer and powering the fans and chillers to take it back out... Needless to say, I don't put friends' servers in my basement....
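That rule of thumb is easy to apply; the 300 W and 5 kW loads below are just example figures:

```python
# Ed Nisley's ~$2/watt/year rule of thumb, covering both power-in and
# the cooling needed to pump the heat back out.
def annual_cost_usd(watts, dollars_per_watt_year=2.0):
    return watts * dollars_per_watt_year

print(annual_cost_usd(300))    # one 300 W server: $600/year
print(annual_cost_usd(5000))   # a modest 5 kW rack: $10,000/year
```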

Americans (and Europeans outside of France) are going to get over their nuclear power allergy really, really quickly once their lifestyles start to suffer more than a certain amount. Especially when counties like Iran and India start using it heavily and manage to undercut our economies with cheaper energy.

Flat out, if the carbon / climate change problem is as bad as is portrayed these days, and we think peak oil is a looming problem, nuclear is the ONLY rational response to the problem over the next few decades.

Sorry, it is probably too late for that. It would likely take 10 years to build a nuclear plant of any size. We are going to be seeing huge power shortages in the US pretty soon, and conservation isn't going to make a bit of difference. There isn't any point in shaving a couple of megawatts off the load when the shortage is a couple of gigawatts.

Building a coal plant today would require at least five years, if we started construction tomorrow. We aren't. I don't believe there are any new major power plants under construction.

I'll believe this when I see more sites that start dropping unnecessary ads and tracking when under heavy load. (Slashdot does some of that; under heavy load, most users get a canned home page. When the system is less busy, the "customization" machinery is used.)

The real killer is overdoing "customization". Customization makes serving pages far more expensive. Consider Google's problems with "Michael Jackson" searches. Google used to answer the most common queries from a cache in the first server to receive the query.

Datacenters need reliable power and water. They will likely be able to outbid other users for the foreseeable future. They don't want to have to move their datacenter if a coal or nuclear power plant is scrapped.

It depends on how it saves power. For almost any business, you will need your servers running 24/7, especially if they are hosting web servers for your business. Running a power-capping technology that shuts down the servers hosting your web site doesn't make much sense, because when it is shut down nobody can access it. The whole point of running a web server is to provide 24/7 service even if your business is closed and everyone went home. And for some businesses, people work all shifts and there is never a downtime window.

Web servers aren't, of course, the only type of servers running out there. You know the whole big shut-off-the-lights initiatives all over the world in office buildings? Well, this is much the same. You may not need terminal servers going 24/7, or even some databases that are only used during business hours for internal operations. Or you likely have development, test, and live environments - why not shut down dev and test during hours when no one uses them? Or for larger operations, smart monitoring may spot other systems that can be powered down.

Correct me if I'm wrong, but wouldn't power-per-calculation actually be hurt by that?
If you do this, you might end up with 50 racks drawing 50kW (for instance). Yet you'd actually be able to do the same number of calculations, with _less_ power, if you had 40 racks drawing 45kW.
Rob.
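Rob's comparison in numbers (the 1000 "work units" are an arbitrary stand-in for total throughput, and the kW figures are his hypotheticals):

```python
# Same total work, two provisioning strategies: which gives more work per kW?
def work_per_kw(total_kw, work_units=1000.0):
    return work_units / total_kw

spread_out = work_per_kw(50)   # capped servers spread across 50 racks, 50 kW
packed = work_per_kw(45)       # same work on 40 busier racks, 45 kW
print(spread_out < packed)  # True: fewer, fuller racks do more work per kW
```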

In our datacenters we have already started to employ special air-conditioning units, raised operating temperatures, redesigned floor layouts, etc. All of this to reduce power used - not because we do not have enough power, but because it is commercially attractive to show how "Green" our datacenters are. I suppose this can be seen as another way to prove how green you are, especially if you host services and storage rather than dedicated systems.

energy crisis or not, reducing the amount of energy we use in the datacenter is a good thing all around. if not for global warming, then for business expense, and if not for that, then for the potential to make life as a NOC power and cooling specialist a lot easier (we hope).

Sure, power costs in Quebec have gone up over time, but not THAT much. If the US refuses to build sufficient capacity, just move servers to Montreal or some other Canadian city with cheap renewable power.

APC doesn't allow per-outlet monitoring, only per-PDU monitoring. UCSD or UC Berkeley is working on small per-outlet devices to both monitor and control loads (i.e., servers), which are to cost less than $20 apiece. Assume they are controllable via HTTP, and you can do some fine-grained power control.
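Taking the comment's own assumption at face value, per-outlet control over HTTP might look something like this. The address, endpoint paths, and JSON fields are invented purely for illustration and do not correspond to any real product's API.

```python
# Purely hypothetical sketch of per-outlet HTTP control. The controller
# address, URL scheme, and payload shape are all made up.
import json
import urllib.request

BASE = "http://10.0.0.50"  # placeholder address of an outlet controller

def build_set_request(outlet, on):
    """Construct (but don't send) a request to switch one outlet on or off."""
    body = json.dumps({"state": "on" if on else "off"}).encode()
    req = urllib.request.Request(f"{BASE}/outlet/{outlet}", data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

# Sending would then be: urllib.request.urlopen(build_set_request(3, False))
```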

Basically every colo datacenter is metered. Standard is 2x20A circuits (16A max draw) per rack. If you want 30A or quad 20A circuits in your rack, you generally pay a hefty overage charge. At our DR colo provider they require metered PDUs. I have this capping capability with my HP servers, which means I can fill a rack right up to the edge but make sure I don't overload the circuit by keeping my peak usage closer to average. You also need to use staggered startup on your servers if you want to play it that close.