Posted
by
Zonk
on Sunday March 02, 2008 @09:20PM
from the saving-a-little-bit-of-juice-all-the-time dept.

tringtring writes "Computer World reports on an HP Labs researcher who foretells a future in which power management features will be built into the processor, memory, server, software and cooling systems. Coordination will be paramount. 'What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?' This future is the vision of Parthasarathy Ranganathan, the man behind the "No Power Struggles" project at Hewlett-Packard. Power management systems will have to operate holistically, without one component conflicting with another, Ranganathan says. Ranganathan is just one of many researchers at the tech industry's biggest labs studying how future data centers will handle increasing demands for processing capability and energy efficiency while simplifying IT."

Man you must have sucky apples wherever you live. Michigan Fujis are typically much, much sweeter than the usual California navel oranges we get around here. The oranges are typically a bit sour as well, while the Fujis are only slightly tart. Now Pink Lady apples, on the other hand, are very tart, and yet still sweet. Other, more lame, apples like Granny Smith and McIntosh are typically less sweet and less crunchy.

There, now I've both compared apples to oranges AND apples to apples. LOL

I also have a battery with twice the volume and a chemistry with 4x the energy density of the old laptop's battery. Not to mention that this battery is much newer and in better overall condition. I have 8x the battery capacity and ~3x the CPU power (1600MHz vs 475MHz). Factor in that the older laptop has an internal floppy drive as well as a DVD drive (the newer laptop lacks the floppy drive), plus a hard disk that draws 5 watts more than the one in the newer laptop. I should be seeing nearly thrice the battery life by your logic.

You made the assumption that I had a 4750MHz CPU, the same peripherals, and the same size battery with the same battery chemistry, and that similar peripherals use the same amount of power. You also failed to account for the power management systems present in current laptops, which did not exist 10 years ago. Yet another thing you failed to account for is the supposed increase in efficiency (and decrease in overall power consumption) claimed by PC manufacturers, especially with regard to laptops. You even forgot to account for the age of the battery: 10 years vs. a week-old warranty replacement of a less-than-nine-month-old battery.

I have a battery with 8x the capacity in a system with less hardware and a supposedly more efficient CPU which is only about 3x faster, with components that claim lower power consumption and overall better power management than my 10-year-old laptop from the same manufacturer. Why am I seeing 1/3 the battery life of the old system rather than the 3x increase that logic and mathematics tell me I should be seeing?
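The arithmetic behind that expectation is simple enough to sketch. Every number below is a made-up placeholder, not a measured spec from either machine; the point is only the ratio:

```python
# Rough battery-life estimate: runtime (h) = capacity (Wh) / average draw (W).
# All figures here are illustrative assumptions, not real measurements.

def runtime_hours(capacity_wh, avg_draw_w):
    """Idealized runtime, ignoring regulator losses and battery aging."""
    return capacity_wh / avg_draw_w

old_capacity = 25.0              # hypothetical 10-year-old pack, Wh
new_capacity = 8 * old_capacity  # 2x volume * 4x energy density

old_draw = 18.0                  # hypothetical average draw, W
new_draw = 2.7 * old_draw        # even granting the new box ~2.7x the draw

print(runtime_hours(old_capacity, old_draw))  # ~1.4 h
print(runtime_hours(new_capacity, new_draw))  # ~4.1 h, i.e. roughly 3x
```

Even if the newer machine drew nearly triple the power on average, 8x the capacity should still roughly triple the runtime; seeing 1/3 instead means the real average draw is far higher than the spec sheets suggest.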

(and closer to 3x by mass). It would be better to compare the stated capacities, rather than your assumptions.

I imagine the newer screen is also faster and brighter, both of which increase power draw (LED backlights improve brightness per watt though, so if one of those is involved...). So you aren't lying, but you aren't being very careful.

The real question is: what the hell is software doing with all these resources? Why is it always on the shoulders of hardware to improve power specs? I have an idea: how about not requiring billions of processor cycles to support the 12 layers of indirection, redirection, abstraction, obfuscation, and 12 megs of NOPs just to change the color of an icon? It is mind-boggling to think about what a modern processor does; I suspect most of it is crud left over from poor software decisions that we must drag around forever.

There is absolutely nothing that can be done about this now. Software and abstractions are a lost cause.

In the whole picture, hardware is just another layer of abstraction, built of more interacting layers. But today's hardware comes from orders of magnitude fewer suppliers than software, and is much more tightly controlled and built to spec.

Another thing: hardware engineers are usually taught in universities. Software "engineers" are usually not.

As for abstractions, they allow things that were simply impossible before. Abstractions allow tuning a design on criteria such as maintainability, extensibility, supportability, etc. Yes, making software more maintainable can reduce performance, but it also reduces the long-term cost of keeping it working.

Would you rather pay more for software that has less features but is faster?

I'd rather pay more for software that had the same amount of features but fewer years of crufty hack layered upon crufty hack.

Quite simply, we're talking about Windows here (and maybe Norton). Mac OS 7 did a great job of providing both abstraction and speed in a maintainable environment on a 68030: a chip so slow that you wouldn't even notice it working as a co-processor on a modern machine.

I'd rather pay more for software that had the same amount of features but fewer years of crufty hack layered upon crufty hack.

You seem to be downplaying the costs that are incurred when throwing away working code to build new code. Non-trivial code takes a lot of time and effort to build. For example, let's look at Mozilla [wikipedia.org]. The Wikipedia article mentions the decision to scrap the codebase sometime in 1998. When did the 1.0 version of Mozilla come out? 2002, four years later.

What kind of laptop is it? My girlfriend always used a Powerbook, and got about 2 hours with it...then she had to get a Dell, and was amazed to discover it got 6+ hours on a charge. To which I responded '...yea...your mac didn't?'

Both are HP; I thought I made that clear in my original post. I should also state that the CPU is less than 2x as fast as the older laptop's when running on battery (it scales back to 800MHz), and I have the backlight set to drop to 40% on battery. Wireless is configured to drop from 54Mbps to 24Mbps and halve its transmit power when running on battery.

The 10 year old laptop does none of this, everything runs at full power, full brightness, full speed, all the time.

Wait till you plug in an HP All In One printer. You'll get 15 desktop icons and a bunch of Taskbar quick launch icons. With 30 new high priority processes using half your CPU and all your memory, your battery life will drop to minutes, assuming your machine even meets the OS requirements.

"What happens if you turn all these elements on at the same time?" the principal research scientist at HP Labs asks. "How do I make sure that the system doesn't explode?"

That's certainly a worry for me. The last thing I want when I turn on a "processor, memory, server, software and cooling systems" is for the system to explode. Being a dedicated slashdotter, and therefore Linux user, I have little worry that the software will cause any manner of combustion event, but I'd never really considered the dangers of using a processor and memory at the same time. I was thinking of getting more RAM, but given that I'm already running a dual-core, perhaps I should hold off on the extra gig until I hear from HP.

Getting a live CD to boot is not hard... unless you're booting Feisty with an ATI card not supported by the radeon driver (all the Mobility Radeons and some others).
I really don't understand how people could be too prideful to ask for help. When I started using Linux (Ubuntu) I shamelessly asked for help or searched the internets for even the smallest of problems or questions (fstab, xorg.conf, other easy stuff).

Stealing your thoughts right from mid-stream, I quote: "That's certainly a worry for me. The last thing I want when I turn on a "processor, memory, server, software and cooling systems" is for the system to explode. Being a dedicated slashdotter, and therefore Linux user". That Linux user crack is really uncalled for under these circumstances :)

I can see the arguments between brands already: "Your chip is sucking all the power and making mine look bad!" "No, yours is!"

I mean, we have enough problems with benchmarking as it is; I can't see how they would make that kind of "coordination" work when not all pieces of the computer are of the same brand. Sure, you can test which component draws the most power, but they can always say the others aren't sending enough info, etc...

I don't care about squabbling over who's better; I care about proprietary, not easily replaceable parts. Whatever moron at HP said "hey, let's start putting specialized hardware in our computers" should be fired. It's like Dell's crap that you can't replace with standard parts, so that they can charge 3x the real price when you buy replacement parts from them directly. Guess how much a replacement motherboard was for a 6-year-old Dell Dimension? $130! (I bought one used on eBay for $50 though)

The downside of not having those "manual" systems is that the user, no matter how well-versed they are, cannot adjust the system to do what they want. Yes, your analogy is very valid for the Average Joe in terms of cars, but when a real user needs to make their car or truck do more, they have no way of doing it. If I want to give my truck more gear ratios for better mileage, I just slap on an over/underdrive. Plus, automatic isn't always better, such as is the case with four-wheel drive.

There is something to this. In a data center, if you have a brown-out or full power drop, the strain on power systems when restoring power is what can only be described as epic.

When you take a 1400-amp backup system and drop it up and down like a yo-yo in a lightning storm, stress tends to bring out the worst of Murphy's Law. If all the components in a data center were orchestrated, that can be mitigated, nearly to 'not a worry' status.

Monitors? Low priority in most cases. Redundant supplies? In some cases, bring them up separately. Cooling fans could be delayed by some seconds depending on usage. It may seem like negligible power use, but on startup each system will draw its max current, and when all do at the same instant, the peak draw can be overwhelming. In fact, computers themselves could bring up hardware in an orchestrated manner to reduce the startup surge.
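The effect of staggering is easy to see with a toy model. The inrush numbers and timings below are invented for illustration, not measured from any real hardware:

```python
# Toy model: each machine draws `surge` amps for its first 2 seconds,
# then settles to `idle` amps. All numbers are made up for illustration.

def draw_at(t, start, surge=8.0, idle=2.0, surge_len=2):
    """Current draw (A) of one machine at time t, powered on at `start`."""
    if t < start:
        return 0.0
    return surge if t - start < surge_len else idle

def peak_draw(starts, horizon=60):
    """Worst-case combined draw over the horizon, sampled each second."""
    return max(sum(draw_at(t, s) for s in starts) for t in range(horizon))

n = 40
simultaneous = [0] * n                   # everything powers on at once
staggered = [i * 3 for i in range(n)]    # one machine every 3 seconds

print(peak_draw(simultaneous))  # 320.0 (all 40 surging together)
print(peak_draw(staggered))     # much closer to the steady-state draw
```

In the staggered case, at most one box is in its surge window at a time, so the peak stays near the steady-state total instead of being dominated by simultaneous inrush.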

In addition to this, by adding power management it's possible to reduce data center power use. If you monitored temperature and turned off fans when not needed: less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way that, for example, the unused ports on a quad NIC card could be powered off after configuration... NIC cards could be the last thing to be powered up.
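The fan idea is basically a thermostat with hysteresis, so the fan isn't toggled on every minor temperature wiggle. A minimal sketch, with completely made-up thresholds:

```python
# Hysteresis control: turn the fan on above one threshold, off below a
# lower one, and hold state in between. Thresholds are illustrative only.

ON_THRESHOLD = 45.0   # degrees C: turn fan on at or above this
OFF_THRESHOLD = 38.0  # degrees C: turn fan off at or below this

def next_fan_state(temp_c, fan_on):
    """Return the new fan state given the current temperature reading."""
    if temp_c >= ON_THRESHOLD:
        return True
    if temp_c <= OFF_THRESHOLD:
        return False
    return fan_on  # in the dead band, keep the current state

fan = False
for reading in [30, 40, 46, 41, 39, 38, 36]:
    fan = next_fan_state(reading, fan)
    print(reading, fan)
```

The gap between the two thresholds is what prevents rapid on/off cycling, which itself wastes power and wears out the fan.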

This type of design is practically rocket science. If you look at systems that go into space, you will see that they count every milliamp of current draw and manage it with precision. Power use is a big concern for spacecraft.

Believe it or not, I like to have audio indication on many systems. I am at the point now that hearing certain sounds in conjunction with other events lets me know instinctively what is happening. I'm reasonably certain that one or two power outages would have a symphony of things going on with your proposal, and that symphony makes it easier to determine what is happening than reading several thousand kilobytes of log files. I like the idea. Even just knowing when something 'not normal' has happened by audible signal would be useful.

In addition to this, by adding power management it's possible to reduce data center power use. If you monitored temperature and turned off fans when not needed: less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way that, for example, the unused ports on a quad NIC card could be powered off after configuration... NIC cards could be the last thing to be powered up.

If heat and energy usage were that much of a problem, then the laws of capitalism would dictate that the market solve it.

How so? You've replaced the AC input to the power supply with a DC input, at a lower voltage and with increased transmission losses. Whatever the input, it gets converted into AC in the voltage regulator.

So when you're purchasing power from the grid and you're metered not on use but on peak draw, this will save you a LOT of money. Coordinating the power-on of a number of systems that draw a lot more at power-on than in normal operation (think turning on 100 laser printers all at the same time!) keeps that peak down.
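Demand-metered tariffs make this concrete. The rates and figures below are invented for illustration; real tariffs vary wildly by utility:

```python
# Sketch of peak-demand billing: you pay for energy used AND for the
# month's highest instantaneous (really, windowed) draw.
# All tariff numbers here are made up for illustration.

ENERGY_RATE = 0.10    # $ per kWh consumed
DEMAND_CHARGE = 15.0  # $ per kW of the month's highest metered draw

def monthly_bill(kwh_used, peak_kw):
    return kwh_used * ENERGY_RATE + peak_kw * DEMAND_CHARGE

# Same energy consumed either way; only the startup peak differs.
everything_at_once = monthly_bill(20_000, peak_kw=120.0)
staggered_startup = monthly_bill(20_000, peak_kw=60.0)

print(everything_at_once)  # 2000 + 1800 = 3800.0
print(staggered_startup)   # 2000 + 900  = 2900.0
```

Halving the startup peak cuts the bill by $900 in this toy example, even though total energy use is identical.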

Improving power management in the hardware is a good idea, but the real problem is probably simpler. Current PCs use a power management protocol that doesn't seem to be easy to understand and is in certain cases just badly implemented. It really gets on my nerves when I buy a new motherboard and there is no way to get the system to go to sleep. I am not too sure whether to blame this on Windows, the hardware, or a bad specification. Can anyone tell me whether EFI (the replacement for the BIOS) provides a better way to handle this?