
You can also select systems that cold-boot rapidly. Model to model and brand to brand, servers exhibit wide variances in power-up delay. This metric isn't usually measured, but it becomes relevant when you control power consumption by switching off system power. It needn't take long. Servers or blades that boot from a snapshot, a copy of RAM loaded from disk or a SAN can go from power-down mode to work-ready in less than a minute. The most efficient members of a reserve/disaster farm can quiesce in a suspend-to-RAM state rather than be powered down fully so that wake-up does not require BIOS self-test or device querying and cataloging, two major sources of boot delay.

Which thereby "debunks" point one: the power-on stresses of a cold boot won't occur if the systems come up from hibernate instead. But the effects, if any, will still be there, because the temperature still cycles.

If you're booting those servers diskless with PXE and NFS, the boot time should be negligible. I should imagine the trick would also be to bring additional resources online before you reach the point where you must tell users to wait while the server boots. The magic would be in predicting near-term future use...

Actually you're pretty close with VMware at the moment - VM instances can be 'hot' migrated, so you can clump them up on one server, power the rest down, and fire them up/migrate when demand shows. Your response won't be great, but at least you will be able to respond to dynamic load fluctuation.

Actually VM tech goes a long way to doing that anyway, provided you've a vaguely good concept of workload fluctuations.

You just set high and low load thresholds for server on/off. Add a load balancer that simply adds a new server to the pool when it notices it's there and removes it when it's gone. So no need to try to predict stuff.
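The threshold scheme described above can be sketched roughly. Everything here (the node names, the `rebalance` helper, the assumption that the load balancer discovers new nodes on its own) is hypothetical glue, not a real management API:

```python
# Rough sketch of threshold-based server power management.
# power-on/power-off and load reporting are assumed, not real APIs.

HIGH_WATER = 0.80  # power another server on above 80% average load
LOW_WATER = 0.30   # power a server off below 30% average load

def rebalance(active, idle, avg_load):
    """Return (active, idle) node lists after one scaling decision."""
    if avg_load > HIGH_WATER and idle:
        active = active + [idle[0]]   # boot one more node; the load
        idle = idle[1:]               # balancer picks it up on its own
    elif avg_load < LOW_WATER and len(active) > 1:
        idle = idle + [active[-1]]    # drain and power down one node
        active = active[:-1]
    return active, idle

active, idle = rebalance(["web1"], ["web2", "web3"], 0.95)
```

The point of the poster's design is exactly that this loop needs no prediction: it only reacts to the load it can already see.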

5 seconds or 3 minutes, the server boot times are largely irrelevant. If you think you're going to handle a slashdotting, you are mistaken; you can't handle one-off events this way. You would have to go from 1 to 100 servers and connections in 5 seconds.

What it can do is grow really quickly if a service becomes very popular very quickly, or reduce your datacenter costs if it's typically used only 9-5. Or even dual-purpose processing: servers do X from 9-17 and Y from 15-20.

Take a look at the proceedings from the International Conference on Autonomic Computing for the last few years, and you will see papers from universities and companies like Intel and HP describing efficient ways of doing exactly this.

Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down.

They must be using a different version of XP than I am... When I 'Hibernate' my laptop, it dumps the RAM to a file on the hard drive and then powers off completely. When I 'Stand By' my system, it keeps everything in RAM.

They must be using a different version of XP than I am... When I 'Hibernate' my laptop, it dumps the RAM to a file on the hard drive and then powers off completely.

You must be using a different version of XP than I am... When I 'Hibernate' my laptop, it attempts to dump the RAM to a file, throws a hissy fit like a coddled freshman after their first exam, fails miserably, flickers the screen, disables the Hibernate option, and then just sits around until the battery drains.

I had the same problem. So I installed a patch (http://support.microsoft.com/?kbid=330909) from Microsoft.
Essentially it's looking for a contiguous free area on your HD to save your RAM to. I believe the fix is to disable this "feature".

If you are using electric heat, chances are you don't live in a cold climate and pay for air conditioning for much of the year, negating any "savings". Here in cold-balls Canada, EVERYONE has central heating; it's too expensive to use electricity. That being said, I do agree that datacenters' heat should be used to heat useful things (office bldgs, like you suggest).

I know a total of 5 people who don't use natural gas for heating, and 4 of them use propane as they're so far out of the way the gas network doesn't reach them. Only 1 guy uses non-central (heating controlled on a room-by-room basis) electric. In terms of raw dollars-per-joule, gas is a way better proposition. Even after the latest electric rate jump (from 6 cents to 9 cents per kWh), gas is still about 1/3 the cost of electric heat.

Back in the very early '90s we moved into the original 7 World Trade Center. There was no heat in that building; it was air conditioned 365 days a year.
People and computers heated all the floors. Unoccupied floors were damn cold in winter, I can tell you.

I have no control over the heating in my apartment. Thus, I have to have an air conditioner running 24/7/365 because my room gets too hot from the two computers I have running... and they're both sub-gigahertz machines. Heaven help me if I upgrade to a real machine.

If you have a shit system that's really slow and badly written, display the following:

"The sub-optimal response you are experiencing will soon be resolved as we are utilising quantum replicators to produce more server hardware for your request. Once complete we will travel back in time and resubmit your request. Thank you for using One-Born-Every-Minute hosting. Have a nice day."

I stopped reading at #1: "Fact: The same electrical components that are used in IT equipment are used in complex devices that are routinely subjected to power cycles and temperature extremes, such as factory-floor automation, medical devices, and your car."

Well, yes, except for the fact that it's a total lie. Cars, factory automation, and medical devices most certainly do NOT use "the same" components. While they may do the same things, and even be functionally equivalent, they are rated to much higher temperature and stress levels than consumer or even server grade components. Just ask the folks who have been trying to install "in-car" PCs with consumer grade components.

There is also more bullshit in that statement than meets the eye. Power cycling a system can cause failure if you have cheap soldering or marginal parts. Powering up a system causes it to heat up, and things expand when they heat up. If you have a solder joint that isn't done right, the expanding and contracting will cause it to break eventually. I've actually seen surface-mounted parts fall off a board because of shoddy soldering.

Yeah, true, the real problem was shoddy soldering, but the heating cycles helped it along.

However, silicon is silicon, capacitors are still made from the same things

Thank you for playing the game, but you have lost [badcaps.net]. Rather than using more expensive Nippon electronics, the Chinese parts you used had a few parts per million more impurities. This led to early thermal failure of your mainboard.

If you would like to play the game again, please acquire more venture capital and buy quality next time. You may still lose the game to your manufacturer buying counterfeit parts, using the wrong specification solder, or unforeseen interactions from running at many gigahertz at high temperature.

This show has been hosted by an automation robot that costs 75 times what your laptop does and still has occasional electronics failures. :)

Functionally, the devices might not be used to perform the same things... However, silicon is silicon, capacitors are still made from the same things, and inductors are little more than (get this!!!) coils of wire whose behavior is set by the turns and geometry.

Nope, totally different animals, huh?

Right. And I'm the same as Albert Einstein because I have DNA, amino acids, and funny hair. Where's my Nobel Prize?

A Pinto is the same as a Mercedes because it's made of steel, has 4 wheels, and an engine. I want $75,000 for my used Ford.

My wife is the same as Elle McPherson because she has hair, tits, and a vagina. My wife should be the supermodel (no, really, honey, I was serious on that last one. No, wait...WAIT!...)

For a Web site, put up a static page asking users to wait while additional resources are brought online.

We're sorry for the inconvenience, but our systems seem to have been shut down. We've asked leroy, rufus, and heraldo to hit the power button, and we assure you that, once they've found that button, they will push it, and then, once the mandatory scandisk operation has completed, the Windows server screen will appear, and once the kernel operations have completed, the services you have requested will be available.

And that will be awesome!

While you're waiting, here are some links to our competitors' sites. Remember to open them in a new tab, so you can occasionally come back and hit "refresh". We promise, we're almost ready to serve you.

Yeah, I don't know who wrote that bit in the article, but they're just dumb. If you run any kind of system with a load balancer in front of it, you can easily script starting up additional machines as soon as your monitoring says you reach 90% capacity.

When a system suspends to disk, it uses no power. When a system suspends to RAM, it uses VERY LITTLE (strobe) power, and you can configure wireless adapters and USB devices to be turned OFF when you suspend to RAM. (I'm using "suspend" for both cases - FUCK the sleep/suspend/standby/hibernate/whatever for 2 different states bullshit.) A laptop's charging circuitry and AC adapter are independent of the power state, so of course the adapter is going to be running all the time to keep the battery charged and power the system.

They admit that the power use is negligible when suspending to disk or RAM (and probably running 3 wireless mice that don't turn off, in an idiotic attempt to boost their non-existent numbers).

They don't admit that they couldn't find anyone who thought that the green light on the power brick meant it was off and using no power.

Myth 7 is true as well. NiCd batteries do suffer from memory effects, and their capacity decreases over time. Conditioning a NiCd will remove the memory effect, but will not restore lost capacity due to general age.

NiMH batteries have much less of a memory effect, and less of a capacity loss through age. There is no need to condition a NiMH battery. Just drain it fully and then recharge it in a cheapo dumb charger, or buy a better charger (which will likely advertise a battery conditioning feature anyway).

LiIon batteries do lose capacity over time. If a cell (the smaller cells, not the 6 or 9 individual batteries in your laptop's battery) is completely depleted, it won't recharge again. If a cell is overcharged (or overheated), it will pop, and you've lost that capacity, and maybe your pants + laptop if the damn thing catches fire.

"Myth" 8 is true, as long as you remember that the hard drive is just one item drawing juice in a system.

"Myth" 9 is true, as long as you do it right. The problem with DC is that you lose power over distance. Converting from AC to DC in a dedicated box can be more efficient than any server power supply, more reliable, and output cleaner power. The issue is distance.
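The distance problem is just Ohm's law: loss in the run is I²R, so for the same delivered power, a lower distribution voltage means more current and much more loss. A rough illustration (the 0.05 Ω run resistance is an invented number, purely for scale):

```python
# Resistive loss over a distribution run: I = P / V, P_loss = I^2 * R.
# The 0.05 ohm run resistance is an illustrative assumption.
def line_loss(power_w, volts, resistance_ohm):
    current = power_w / volts
    return current ** 2 * resistance_ohm

loss_48v = line_loss(500, 48, 0.05)    # ~5.4 W lost feeding 500 W at 48 VDC
loss_230v = line_loss(500, 230, 0.05)  # ~0.24 W over the same run at 230 V
```

Which is why low-voltage DC only makes sense over short runs, e.g. within a rack.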

"Myth" 10 is true. "As soon as possible" means "When the servers are on fire or when we're 6 months overdue on our replacement cycle, whichever comes first...maybe". Energy costs are through the roof, and it makes sense for that to be a high priority in determining what you buy. You may even want to buy a more efficient server/power supply/switch/UPS/line conditioner EARLY if your budget allows for it. We all know that any money sitting around unused will get grabbed up by someone else, so use it or lose it.

That replaced equipment still has value (especially if you replace it early), and if you can resell it, you'll usually wind up ahead in the long run.

Myth No. 6: A notebook doesn't use any power when it's suspended or sleeping. USB devices charge from the notebook's AC adapter.
Fact: Sleep (in Vista) or Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down. Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

How about building an energy efficient PC! I have a LP AMD 64 x2 with a Geforce 7600GS, 2 HDDs, 2 GB of RAM, a TV tuner, and an 85% efficient PSU, and I peak at around 150W; using 140W at idle is insane. For the next generation of games I'm thinking about upgrading to a 9600 GSO, but that will up my idle and peak numbers by at least 20W, so I'm holding off till I get a game that really needs it.

OK, I have a stupid question. How do you tell how much power a component is going to pull before you buy it?

Looking around newegg, I only see a handful of "green" items that advertise their low power usage.

For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.

For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.

5 watts in S3 is pretty bad in my book. Disconnect all USB devices and check again what your S3 power consumption is. If it is still high, most likely the PSU you have is not efficient. It could also come from other things like the motherboard, but most of the time it is the PSU. If your system idles at 120w, and 230w during load, you might be able to run with as low as a good 350w rated PSU. For example if your current PSU was around 70% efficient and you replaced it with an 80% efficient one, then during load your 230w draw would drop to around 201w. But you'll have to check and see if you can find the efficiency numbers for your current PSU.
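The arithmetic in the comment above checks out; working it through with the same figures (230 W measured at the wall with a 70% efficient PSU, then swapping in an 80% one):

```python
# Wall draw for the same DC load under two PSU efficiencies
# (figures taken from the comment above; all approximate).
def wall_draw(dc_load_w, efficiency):
    """Power drawn from the wall to deliver dc_load_w to the components."""
    return dc_load_w / efficiency

dc_load = 230 * 0.70                   # ~161 W actually delivered to components
better_psu = wall_draw(dc_load, 0.80)  # ~201 W at the wall with an 80% PSU
```

In other words, the components' draw stays fixed and only the conversion overhead changes, which is why the savings scale with the efficiency ratio.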

How do you tell how much power a component is going to pull before you buy it?

There's no single source, but there are some useful websites.

80plus.org [80plus.org]
Silent PC Review [silentpcreview.com] - they generally provide both noise and power consumption measurements in their reviews.
Silent PC Review Forums [silentpcreview.com] - more anecdotal, but at this point it is still good data. Many users post their own tests and measurements on the boards. It helps you get an idea of what's achievable and what isn't. There are also some nicely compiled charts that combine data from different sources. I find the numbers are sometimes inaccurate but not too far off.

0.14 kW * $0.18/kWh * 8 h * 200 days = $40 a year of electricity? Not much of a saving to be had from reducing that. Or am I doing it all wrong?
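Working the parent's figures through with the units kept straight (kW × $/kWh × hours), the number does come out right:

```python
# Annual electricity cost of a 140 W idle draw, using the parent's figures.
power_kw = 0.14          # 140 W average draw
rate_per_kwh = 0.18      # dollars per kWh
hours = 8 * 200          # 8 h/day over ~200 working days a year
annual_cost = power_kw * rate_per_kwh * hours   # ~= $40 a year
```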

Something's wrong about the units but I think the figure is right. Of course, it also depends on what climate you're in, if you're running an AC to get rid of those 140W again that costs too, while here up north a lot of the time it supplements other heating in the winter so it actually costs less. Anyway I haven't bothered to check but I imagine my 42" TV draws more when it's on so it's not like the computer is the big sinner. For me lower power is about less fans and less noise, there's no way 140W draw i

Nevertheless, even $200 a year is not a major expense, and a 10% power saving is not so cost-effective.

I'm reversing the reasoning of the demand/cost relationship: if the cost is so low, what is this energy crisis we keep hearing about? (I know the facts, but I fully expected a higher cost for energy given the situation.)

I lock my PC in the evening and turn off my monitor. Shutting down takes 5 minutes. Starting up takes 15 minutes. I just checked those times this morning to raise it with IT. This does not include logging into the remote system with Citrix, which takes another 10 minutes.

So the company has a choice:
1) Pay me (and everybody else in the company) for 20 minutes
2) Pay the electricity for not turning off the PC
3) Find a solution that makes it possible to do all of this faster.

Outlook Web Access (OWA). There, I've just saved you 10 minutes per day, or around 60 hours per year. I don't know how many members of staff your company has, but if you're using Citrix, I'd imagine quite a lot... let's say 500.

Um, your computer is way underpowered or your IT department sucks, because 15 minutes to boot is crazy. I have an old T42 with a 4200 rpm HDD and it only takes about 5 minutes to boot, and that's with multiple server-type services installed (I have two copies of MSDE installed, if that tells you anything). Also, Citrix logon times at my shop are ~90 seconds average, and they will be more like 30 once I get the users' profiles onto a faster file server.

It's possibly a combination of the two. My old work laptop (Tosh Centrino, 1.6 or 1.8GHz, 1GB RAM, Win2K) used to take around 12 minutes to boot from cold. Quite a bit of this is due to the Pointsec full disk encryption software, followed by SAV, followed by the usual corporate crippleware. Horrible. In the end it became a tethered desktop as I couldn't be bothered taking it anywhere.

Lots of companies have lazy desktop admins who write one giant login script that checks every resource available in the system for every user, even though most of those users will not use most of the resources it is checking for. Smart companies have created multiple scripts and figured out smart ways to quickly identify which scripts the logging-in user needs to run, thus reducing boot-up time significantly.

Myth No. 3: The power rating (in watts) of a CPU is a simple measurement of the system's efficiency. Fact: Efficiency is measured in percentage of power converted, which can range from 50 to 90 percent or more. The AC power not converted to DC is lost as heat... Unfortunately, it's often difficult to tell the efficiency of a power supply, and many manufacturers don't publish the number.

I'm not sold on taking advice from someone who doesn't understand the difference between the wattage rating of a CPU and the wattage rating of the power supply. They're completely different components.

I like how this plays with the following assertion filed under "Myth No. 9: Going to DC power will inevitably save energy."

"New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process."

So, when it suits his argument, power supply efficiencies range from 50-90% and are kept hidden by manufacturers. Then, when that doesn't suit his argument, all of a sudden power supplies are at least 95% efficient, and everyone knows that.

Myth No. 9: Going to DC power will inevitably save energy. Fact: Going to DC power entails removing the power supplies from a rack of servers or all the servers in a datacenter and consolidating the AC-DC power supply into a single unit for all the systems. Doing this may not actually be more efficient since you lose a lot of power over the even relatively small distances between the consolidated unit and the machines. New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process. Your savings will really depend on the relative efficiency of the power supplies in the servers you're buying as well as the one in the consolidated unit.

This is completely wrong. The author missed two of the three power conversions that take place in a data center. Data center UPS units take the AC current, convert it to DC, then back again just so the server can convert it back to DC. Even if you have 95% efficiency at each stage, the conversion losses will add up.
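The point about stacked conversions is easy to quantify: chained stage efficiencies multiply, so even three 95%-efficient stages lose about 14% overall. A quick sketch:

```python
# Chained power-conversion stages multiply: eff_total = e1 * e2 * ...
def chain_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# UPS AC->DC, UPS DC->AC, then the server PSU AC->DC, each 95% efficient:
three_stage = chain_efficiency([0.95, 0.95, 0.95])  # ~0.857 overall
# Eliminating the middle AC stage leaves two conversions:
two_stage = chain_efficiency([0.95, 0.95])          # ~0.9025 overall
```

So removing even one 95%-efficient stage recovers roughly five points of overall efficiency, which is the whole argument for DC distribution.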

People wouldn't be going DC if it didn't result in measurable power savings.

With the US power system, you do avoid four high-loss (DC:AC or AC:AC) power conversions and replace them with two lower-loss (DC:DC) conversions. But compared to rest-of-world electrical systems, you only save two AC:AC conversions, which will just gain you two points or so.

I like 600 VDC as a solution, but it will only work well for the biggest consumers, where you can justify a significant increase in capital cost with the energy savings. It's nice to have a single 4.8 MW critical power bus (with a couple of spares).

Data center UPS units take the AC current, convert it to DC, then back again just so the server can convert it back to DC. Even if you have 95% efficiency at each stage, the conversion losses will add up.

There are only two stages in there that are affected. You aren't going to get DC from the power company.

And while the final stage is affected, since the servers are getting DC at lower (48V?) voltages, do you really think a DC power supply is possibly going to be any more efficient than an AC one? Just because the

If I remember correctly, the power savings is not in using AC or DC; it is in stepping up the voltage so that less current flows, resulting in lower power loss due to the innate resistance of the lines, a process that is possible using either AC or DC. Tesla and Edison fought over that one, comparing the relative safety of AC vs DC, a war of pure FUD IIRC.

Taking ten suppositions and making suppositions about those suppositions (I'm getting dizzy) is not debunking. All I see here is lots of questionable, completely unattributed information.
For example: "The average 17-inch LCD monitor consumes 35 watts of electricity". Really? Where did this information come from? Did you pull it from the glossy for a 17" monitor? Did you just test your own monitor? Did you test a large sample of monitors? Did you pull it from a study? Out of your ass?

At the risk of sounding like an idiot, that is a fairly accurate guess for 17" LCDs (TN panels, anyway). Since pretty much all 17" LCDs are TN rather than high-contrast panels, it doesn't really hurt to generalize.

Power ratings on monitors aren't like the ratings on computer power supplies. By effectively estimating average and peak power draw, the manufacturer can save money. If the AC adapter is rated to handle too little power, the adapter or monitor will prematurely fail. If the adapter is rated too high

Turning off your computer is always a good time to give the hamsters food and water and let them rest, so in the morning your computer will be nice and fast. If it takes the parent's computer 15 minutes, his hamster needs less weight.

Yes, they've been redefined by Messrs Hyneman and Savage. Basically, if something takes a ridiculous amount of effort to blow up, then it is debunked, or "busted". If it blows up without too much provocation, it is "confirmed". If it merely catches fire, it is "plausible".

Microsoft's Windows Messenger (MSN Messenger, Live Messenger... whatever they call it these days) Group wrote an awesome abstract [microsoft.com] of how they cycle servers on & off to handle the load while saving power.

Now, for reasons pointed out in other comments, TFA from Infoworld is a mix of good info and horseshit.

Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

I thought "suspend to disk" and "hibernate" were synonyms. A suspended computer shouldn't draw any more power than a computer that's turned off, b

My favourite story (or urban legend) is when an employee came in to an IT shop on the weekend and shut down all of the A/C cooling units for the Data Centre. He claimed that he was "going 'Green' and saving power" because "...all of those computers in that room have their own fans."
I'm pretty sure he was let go after that...or promoted to management.

Potentially outright wrong: that no one cares if a customer is made to wait for a server to boot to get served. That's not a generalization to be made lightly... It is true, though, that suspend-to-RAM has not received the attention it deserves in the data center. A great many server-class systems and options are not designed to cope with suspend-to-RAM, and thus you must be careful banking on this. The industry should correct it, but a facility can't bank on it yet (just put pressure on your vendors to make it so...)

Straw man: a supposed 'myth' that leaving on LCD monitors is fine for energy savings, with the remarkable clarification that being off saves more power... Who would have thought?

Other straw man: that you will unconditionally save money by rapid upgrades to the latest efficient technology. I don't think anyone is foolish enough to believe that compulsively following any technical treadmill will lead to an overall financial gain.

Probably the biggest and most annoying/disruptive power-saving myth is Daylight Saving Time. Every year, the power companies announce that they don't notice any change whatsoever in power consumption.

That list of myths debunked seems pretty sensible, even in details that run counter to conventional wisdom. But even though the list properly cautions several times that most any equipment left plugged in will still drain power while doing nothing useful (infinitely bad efficiency), the article still makes an inefficiency mistake of its own:

Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

Over the course of a year, 2 unnecessary watts is 17.532 unnecessary kWh. Sure, that's only about $1.75 at about $0.10/kWh [doe.gov]. But that's for each device. At home, in addition to sleeping computers, there are dozens of devices with AC adapters wasting watts most of the day (and night), which is possibly hundreds of dollars wasted. In offices and datacenters, possibly thousands to hundreds of thousands of dollars a year wasted. And each kWh means loads of extra greenhouse CO2 unnecessarily pumped into the sky, even if it's (still) cheap to so recklessly pollute.

Which is what the One Watt Initiative [wikipedia.org] is designed to minimize. The US government has joined the global efficiency effort, mandating purchases of equipment that consumes no more than 1 watt in standby mode. Whatever the global impact of 3 W wasted in standby, it can be cut by two-thirds by switching to 1 W.

In the short run, that makes energy bills lower (and, by saving heat from standby devices, further lowers energy costs due to less required cooling). In the long run, we've got more fuel and intact climate left to work with - and that stuff just costs way too much to replace when it runs out.
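The per-device arithmetic a few paragraphs up (2 W ≈ 17.5 kWh/year ≈ $1.75) checks out, using an average year of 8,766 hours:

```python
# Cost of a constant 2 W standby draw over one year at $0.10/kWh.
watts = 2
hours_per_year = 8766               # 365.25 days * 24 h
kwh = watts * hours_per_year / 1000
cost = kwh * 0.10                   # ~= $1.75 per device per year
```

Trivial per device, which is exactly why it only matters multiplied across millions of devices.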

Google developed their own power supply, and open-sourced the hardware, saying it saves them tons of energy and the rest of the world should use it. Mind you, it is DC, and it means a total DC data center, but really that isn't a bad idea.

Actually, Google's point was that they wanted motherboards that ran on 12 VDC only. [64.233.179.110] PC power supplies are still providing +12, -12, +5, -5, and +3.3 V. Most of those voltages are there for legacy purposes, and DC-DC converters on the motherboard are doing further conversions anyway. So there's no reason not to make motherboards that only need 12 VDC. Disks are already 12 VDC only, so this gets everything on one voltage. This simplifies the power supply considerably and avoids losses in producing some voltages that aren't used much.

But Google wasn't talking about using 12 VDC distribution within the data center. The busbars required would be huge at such a low voltage. They were talking about using 12 VDC within each rack. Distribution within the data center would still be 110 or 220 VAC.

Actually, HDDs use +12v for the motors and +5v for the electronics. If you have a 3.5" FDD, it only uses 5v. If you don't believe me, try swapping the yellow (12v) and red (5v) wires going into the power connector on your HDD some time... here's a hint: the smoke you see coming off the electronics isn't from putting 5v into something that expects 12v. (Note: if you're really dumb enough to do this, I won't be held responsible for ruining your HDD.)

Spinning up and down hard drives: as discussed in plenty of places, including here on /. I believe, you can dramatically reduce the life of drives when you cycle them due to mechanical wear and tear.

Do you have any data on this? This is one of those commonly held beliefs that has absolutely no facts behind it. I've seen a Google whitepaper that pretty conclusively debunked commonly held assumptions that drives fail because of temperature and "wear and tear". From a mechanical standpoint, this belief also does not make any sense.

I saw the Google whitepaper, and it debunked very little about the temperature "myth"; not sure about wear and tear.

With regard to temperature, the study had a couple of fundamental flaws.

* The temperature measurements came from the drives themselves. That means if, say, an unreliable hard drive model also underreported its temperature, it would totally skew the results.
* It was data from servers running in a well-cooled datacenter. That means there was virtually no data about drives running at the kind of temperatures

This is one of those commonly held beliefs that has absolutely no facts behind it.

The data sheet for my Hitachi HDS721075KLA330 [hitachigst.com] drive rates it at 50,000 load/unload cycles. If it powered up 50 times a day (which would be quite possible in a desktop with aggressive power savings enabled), it's specced to last about 3 years.
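That datasheet figure works out as the parent says: at 50 load/unload cycles a day, the rated 50,000 cycles are used up in under three years:

```python
# Hypothetical duty cycle: a datasheet rating of 50,000 load/unload
# cycles consumed at 50 power-ups per day (aggressive desktop power saving).
rated_cycles = 50_000
cycles_per_day = 50
years_to_rated = rated_cycles / cycles_per_day / 365.25   # ~2.7 years
```

Of course the rating is a spec floor rather than a guaranteed failure point, but it shows why aggressive spin-down policies trade energy against mechanical wear budget.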

From a mechanical standpoint, this belief also does not make any sense.

The people who actually built it seem to disagree with you. Hint: a spinning hard drive takes little energy to stay in motion. A stopped hard drive takes quite a bit of torque to spin up to running speed in a small number of seconds.