
eebly asks: "I'm working with a team at my school to help ensure that some of our new building projects are environmentally friendly. As part of that, we want to include good, energy efficient computers. The new building will be labs, so we can't sacrifice power. We're looking for suggestions on what machines use low amounts of power, techniques to make the machines run with less, and ways to tweak the software to make things like sleep mode get used more effectively."

One option, albeit an expensive one, is to make the machines, except for your highest-load servers, notebooks and laptops. Compared to desktop PCs, they sip power thanks to low-power CPUs and integral LCDs. The problem here is that they'll be more expensive, more difficult to upgrade, and more expensive to repair if something goes wrong.

One of the easiest ways, though, would be to insist that the green features of all the desktop PCs be turned on. For example, my home machine goes into hibernate/suspend-to-disk after 20 minutes of inactivity: the machine powers down and the monitor turns off, but the system comes back with a keypress in a few seconds.

One other stab in the dark could be cooling: if you get PCs that do not require active cooling, there is no need for a fan (I don't know how much juice a fan uses), and perhaps they won't heat the lab up as much, which means the HVAC system won't run as much?

These days the up-front cost of a Mac isn't much higher than the price of a name-brand PC. Macs, however, use much less power, in part because they are PowerPC based. They save power with the lower-wattage CPU, by not needing active cooling, and they also have a more mature set of power management features than PCs do. If you don't need Windows in a lab, Macs would be a great way to save power. They can still open MS Word documents under MacOS, and they run Debian just as well as PCs do.

On the other hand, LCDs tend to be more easily damaged by people sticking their fingers all over them and otherwise manhandling them, as often happens in school environments. LCDs have many advantages, but hardened LCD displays of usable size will co$t.

Ostensibly, if a machine doesn't need a fan, it's not generating as much heat as one that does.
Example: the new Apple G4s have a truly ginormous heatsink, but they also have a huge (120mm!) fan in there, 'cuz the case is cramped and that puppy sucks power.
The iMac, on the other hand, doesn't throw as much heat and so can afford to be shoved in a translucent polycarbonate case with a CRT on top.

Many architects don't like to use skylights anymore because they tend to not be maintained properly, especially by school systems. They then leak and the architect gets shit, so they end up being more trouble than they are worth.
In my middle school, I remember that there were lots of skylights in the cafeteria, but they were all sealed up.

Very helpful, thanks. Also, since I've been using an LCD for the last two months I can't stand to use a CRT, especially a curved one. Spoilt, am I.

One problem, though, that may affect the viability of equipping a lab with nice, safe, enviro- and electro-friendly LCDs: how do you stop people from poking the screen with their fingers/pens when they're pointing things out? (Extreme Programmers must hate this, too!)

Should you be looking at converting to LCD: there's a new display technology on the horizon (that I learned about here at /.) that I believe uses organic materials, thereby being cheap to produce and cheap on power.
The main problem with LCD cost is that the panels are expensive to manufacture. What you pay isn't profit; it's actually going into the raw goods!


I think Zip drives are configured by default to spin down after 15 minutes of inactivity. This time can be changed through the software, at least using the Properties dialog in Windows. Is there a way to change this setting from Linux?
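For hard disks under Linux, at least, hdparm's -S flag sets the spindown timeout, though its encoding is a bit odd (values 1-240 are multiples of 5 seconds; 241-251 are units of 30 minutes). Here's a small helper to convert minutes to a -S value; the /dev/hda device name is an assumption about your hardware, and the actual hdparm call is commented out so nothing spins down by accident:

```shell
# Convert a desired spindown time in minutes to hdparm's -S value:
# 1-240 mean multiples of 5 seconds (up to 20 min),
# 241-251 mean 1-11 units of 30 minutes.
spindown_value() {
  mins=$1
  secs=$((mins * 60))
  if [ "$secs" -le 1200 ]; then
    echo $((secs / 5))          # 5-second units
  else
    echo $((240 + mins / 30))   # 30-minute units (rounds down)
  fi
}

spindown_value 15    # prints 180 (180 * 5 s = 15 min)

# Then, as root, something like:
# hdparm -S $(spindown_value 15) /dev/hda
```

I don't know of an equivalent knob for Zip drives under Linux, so treat this as an IDE-disk answer only.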

This is off-topic, but another environmental advantage of LCD over CRT monitors is the reduced lead content. From what I have read, most CRTs have a couple of pounds of lead in them. I believe Massachusetts considers CRTs to be hazardous waste.

Of course, I don't have a good reference for any of this, so I could just be talking crazy-talk.

This is already done in some computer centres. Many networking devices (not so sure about processing equipment) have an option for a 48 volt DC feed. Presumably this voltage was chosen as it's the standard line voltage for telephone systems.

Not to get off topic, but aside from what CAN be done now to save power in a computer lab, I was curious what MIGHT be done someday?

What does the future hold for energy use in computers? Are they going to use more, or less? How can we conserve more power, or make what we do use go farther for the buck?

Do flatscreen monitors use less power than traditional ones, or do they just look cool and take up less space? Instead of having to cool our CPUs with heat sinks, isn't there a way to design the chips not to waste so much energy as heat?

We go through periods of invention and innovation. Right now, invention is at an all time low, so we're focusing on innovation, to improve or extrapolate what we already have.

So what kind of innovation do we have to look forward to with computers and energy use? You tell me.

Not really. Computers like 5 volts DC (3.3 volts and 12 volts are common too). The lower the voltage, the shorter the distance it will travel easily. DC didn't lose to AC because it was inefficient over distance per se; it lost because it is easy to change voltages with AC. The power line outside my house runs at several thousand volts. Inside I have 120 volts, which isn't as good for long distances, but is safer. Of course we can transform DC voltages today, but it is a lot more complex than with AC. 5 volts will not travel the length of your house unless your power wires are as thick as the walls themselves, and you can't afford that.
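The wire-thickness point is just Ohm's law: for the same power, lower voltage means more current, and resistive loss goes with the square of the current. A quick sketch (the 200 W load and 0.05-ohm wiring resistance are made-up illustrative numbers, not measurements):

```shell
# I^2 * R loss for delivering a given power at a given voltage,
# assuming 0.05 ohms of wiring resistance.
wire_loss() {  # wire_loss <watts> <volts>  -> watts lost in the wire
  awk -v p="$1" -v v="$2" 'BEGIN { i = p / v; printf "%.1f\n", i * i * 0.05 }'
}

wire_loss 200 120   # prints 0.1  -- negligible at 120 V
wire_loss 200 5     # prints 80.0 -- at 5 V you'd cook the walls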

Switching power supplies *do* have transformers; they are necessary as isolation devices. The difference is where the transformer sits. A conventional supply uses a large transformer to step the 120 volt, 60 Hz AC down to a lower-voltage 60 Hz AC, then feeds that to diodes to produce DC. A switching supply instead feeds the incoming 120 volt, 60 Hz AC directly to the diodes to produce DC, then chops that into a direct current that varies in voltage (sort of an alternating current that doesn't "alternate", i.e. never crosses the zero line into the opposite polarity). Those variations are at a much, much higher frequency than 60 Hz, which means they can be imposed on the primary winding of a much smaller, lighter transformer, and that is where the isolation from the wall socket is performed. (The same principle is why aircraft alternators produce 400 Hz current for rectifying into DC: to save space and weight.)

Switching power supplies, like those used in most PCs, don't have transformers. I've noticed that some newer equipment uses miniature switching power supplies instead of "wall wart" transformers.

I just built some Linux boxen, and the motherboards that I used (EPoX 8KTA3+, no affiliation) support APM/ACPI, even though the boxen are not laptops.

My question is, how do I use it? Do I enable it in the BIOS config, and then set up Linux to control it? Or is the control done on the BIOS side as well? (Do I just pretend they are laptops, and follow the HOWTOs for laptop APM?)

Any suggestions or links would be appreciated. I've never had a laptop, so I don't even know the basic terminology.
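For what it's worth, here's the rough sequence I'd expect on a 2.4-era kernel. Treat it as a sketch, not gospel: the BIOS menu names and init-script path are assumptions that will vary by board and distribution.

```shell
# 1. Enable power management in the BIOS setup screen.
#
# 2. Build the kernel with APM support (CONFIG_APM=y, under
#    "General setup" / "Power Management"), or boot an APM-enabled
#    kernel with the "apm=on" boot parameter.
#
# 3. Start the userland daemon (path varies by distribution):
#      /etc/init.d/apmd start
#
# 4. Check that the kernel is talking to the BIOS:
#      cat /proc/apm
#
# 5. Suspend on demand with the apm utility:
#      apm -s
```

So yes: for the most part you can pretend they are laptops and follow the laptop APM HOWTOs; the kernel side is identical, desktops just lack a battery to report on.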

Apple's G4 Cube does this. However, if you think about it, including a fan does not change the heat generation properties of the computer. If your system cools itself by convection, that doesn't mean that it generates less heat...just that it's more cleverly designed.

Well, I was thinking about sending +/- 12 Volts, then dropping the voltage down to the other levels closer to the device being used (actually, 12 and 24 volt systems are used in solar and RV applications, so it can be done). I wasn't meaning to imply that I would have a 5V wiring system. While even 12V wiring would still be pretty thick, it wouldn't be that bad. However, the conversion from 12V to anything else might not be that efficient...

I am not an electrical engineer, so what I am proposing may be so much bull. Can anyone give insight on the pros/cons of such a system?

Nearly every device in the home (and office) is plugged into an AC outlet. What is the other end of the cord generally plugged into? That's right: some form of transformer. It may be a large steel monster or a tiny, efficient switching supply, but in the end, nearly all of our systems run on DC, not AC.

A bit (and I would venture, in certain cases, a lot) of energy is wasted in the conversion of AC to DC, generally as heat. Since each device has one of these transformers, each device is wasting a bit of energy in the conversion process.

What if, instead of having multiple transformers, you could use one (or a few) larger-capacity, more efficient transformers? Could this be done? Or would there actually be diminishing returns with this kind of system?

What if it could be done for a whole house, or a floor of a building? AC to the curb; from there, DC. Would the power losses be too big to do such a thing (i.e., would the wires carrying the DC heat up too much)? I know that AC won out over DC because DC lost so much energy over long transmission routes (among other reasons), but in the confines of a small building, would it really lose that much?

I wouldn't use such a system for large-load devices (which use a lot of energy no matter what, like washers/dryers/heaters/AC/stoves/etc.). But for other devices like computers, printers, network hubs, etc., it sounds like an interesting solution...


For a lab, wouldn't it be possible to have plexiglas enclosures for the LCD screens made?

I mean, sure, it's an extra $40 per unit, and it isn't completely tamper proof, but it just adds protection for the screen. Something that's anti glare coated would be even better.

An excellent point on the CRTs (it's not uncommon for as much as half of a system's power draw to be from the CRT alone), but don't forget that many drives draw a fair amount of power on their own, and while floppies and CDs are typically good about spinning down on their own, hard drives, removables, and other media are not! I find that spinning down hard drives, zip/jaz drives, and the like at the same interval as the monitor can reduce power consumption by a further 10-15 percent.

Of course, the same caveat applies to drives as to CRTs: if the drive spins down too often, the power spike drawn when it spins back up will negate any savings from the time it spent turned off. This makes discovering an efficient interval imperative to creating a power-friendly environment. I've found that an interval of 8-12 minutes works best in office environments, but in an environment like yours, I wouldn't be surprised if an interval as high as 20 minutes would be more appropriate. I'm afraid some serious observation of 'users in their natural environment' may be necessary to pinpoint a good value.
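To put rough numbers on that break-even point: if you assume (made-up but plausible figures) a drive that draws 6 W at idle and a spin-up that costs an extra 150 joules, the drive has to stay down about 25 seconds just to pay for the spin-up, and everything beyond that is profit:

```shell
# Minimum time (seconds) a drive must stay spun down before the saved
# idle power exceeds the extra energy of one spin-up.
breakeven_secs() {  # breakeven_secs <idle_watts> <spinup_joules>
  awk -v w="$1" -v j="$2" 'BEGIN { printf "%.0f\n", j / w }'
}

breakeven_secs 6 150   # prints 25
```

So the break-even time itself is short; the real cost of an over-aggressive interval is the wear and the annoying multi-second pause on every spin-up, which is why the 8-20 minute range above is sensible.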

Have you considered hard-drive-less machines? Even if you need heavy CPU power, you can still get a P3/Athlon with plenty of RAM and save a spinning disk. Speaking of RAM, skip the RDRAM as it is very power hungry.

That would save a LOT of power in a hurry. Of course, it requires a better investment in networking; managed switches rather than hubs, a nice fast server, and someone competent to set it all up.

This solution is heavily dependent on the way you intend to use these machines, but should be considered.

I agree with the other suggestions offered here (possibly use laptops, turn on all 'green' features, etc.) but add one: turn the computers (and printers, modems, hubs, and everything else) off when not in use for extended periods. My personal criterion is 45 minutes. There are arguments to be made that it might shorten machine life, take too much time to boot, and a host of other reasons, but being here in the midst of the California power crunch I've decided my power bill is going to take precedence over everything else.

I've also thought for a while that putting skylights in certain areas of schools would be a real power saver. Maybe your lab could consider doing that, if it were possible and made sense.

Turn on the power management on the PCs, but there isn't a huge amount of savings to be had there, on the grander scale. I would concentrate much more on the HVAC system, and making sure the new lab is very well insulated. Fluorescent lights (as annoying as they can be) help a lot as well vs. incandescent lamps.

CRT monitors (as suggested above) also take up a lot of power, and so do laser printers - better ones typically have sleep modes as well.

On the CPU side, just about anything besides an AMD or Intel CPU will get you better mileage. PowerPC Macs would be a good place to start. If you are willing to run Linux in the labs, a Netwinder or some of the MIPS-based Cobalt Micro systems would be an option. If you can find a Transmeta-powered system you could even run Windows and still get decent power savings.

On the disk side, the really energy-efficient way to go is to use solid-state memory of some kind. That's kind of pricey, though, so the next strategy is to use low-power hard disks and software that will spin them down when not in use.

On the RAM side, I understand both DDR memory and RDRAM have lots of power saving features, although I don't know of a motherboard that is taking advantage of them yet.

More important than RAM though is the display. I think you'll find that going with LCD displays will save you significant amounts of energy.

Gee, doesn't it sound like G4 PowerBooks might be the answer to your problems? ;-)

One other thing to consider is software which will do its best to shut down idle components of the system. DPMS, ACPI, APM, etc. are all important aspects of getting this right.

Transformers are inherently inefficient. They lose something like (1-(sqrt(2)/2)) of the power put through them (sorry, my EE classes were a *long* time ago). Big ones are almost exactly as efficient as small ones.

There is one place where a significant amount of power is lost that could be regained: the transmission line. Electrical losses average out to about the same as the above figure. This power is, essentially, radiated back down the transmission line, and is eventually lost to resistance (heat). Now, all that is needed is to impedance-match your electrical devices to the transmission line. This can be accomplished using a simple LC circuit.

Of course, every new device you plug in will change the impedance of the transmission line. So the device will have to be able to adapt dynamically.

If you can build a device that monitors the impedance of the power line, then impedance matches it with the electrical device, you'll be rich. Very rich. That is, assuming you can market & sell the damn thing.

Previous posters have noted that LCDs drain less power than CRTs. If you can find ones that are lit by ambient light instead of a backlight, you'll cut power requirements by a huge amount again.

Netbooting instead of hard drives.

Do a ROM boot off of the network card, and store files on a good file server. Dump in lots and lots of RAM so that you don't miss the swap space (RAM is cheap). Congratulations; you've now slashed another large contributor to the power budget.

Just make sure you're using 100-mbit or better.

Unless you're *sure* you need them, leave out CDROMs.

Students will use these as CD players, draining power. Ditto sound cards; there's no reason to have them unless you're doing sound editing in the labs.

Use simple graphics cards.

You don't need the power baggage associated with a kickass graphics card's high clock rates. Unless you're doing hardware-assisted rendering, an old-fashioned card with just 2D acceleration will be fine.

A floppy drive drains power, but is occasionally needed, and isn't used much, so keep it in. Beyond that, most of the frills can go.

In my experience, I've noticed that standard CRT monitors use up a large amount of power. To illustrate this, just put your hand over the CRT that you are using to read this message to feel the heat generated. More heat = more power used. LCD folks, don't bother trying this, you won't feel much heat, I specified CRT, OK?

When I installed my UPS at home, I went from a 25-minute power backup time with the monitor on to a 42-minute backup time with the monitor off. That tells me that my monitor (17") is sucking up almost as much as my computer. And my computer isn't the most friendly model out there: the processor is an old 1st-rev PII 300MHz heater (I think 40 watts), plus several SCSI drives and a SCSI card, TV tuner card, internal DSL modem, Ethernet, an ATI graphics card (with a heatsink on the card to illustrate how much power it sucks down), a CDROM, and a CDRW. (Granted, the power backup numbers were not taken while copying a CD, but the point is still valid.)

The answer is to use somewhat aggressive settings on the monitor power off settings (don't bother with screensavers) or to switch to LCD displays.

Now, there is a huge power drain when the monitor turns itself back on after DPMS off, so you will have a net power loss if the monitor is only shut down for a few seconds at a time. Don't get too aggressive with those settings.
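Under X, the three DPMS timeouts (standby, suspend, off) can be set with xset. The 10/15/20-minute values below are just a non-aggressive starting point, not a recommendation, and the commands are shown commented since they need a running X server:

```shell
# Set DPMS timeouts in seconds: standby after 10 min, suspend after
# 15 min, power off after 20 min.
# xset dpms 600 900 1200

# Check what is currently configured:
# xset q
```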

Of course, LCD displays have a higher up-front cost, but the power savings from them are twofold: first, the display itself won't be eating up the power; second, you won't need to cool the room as much.

If you are using Linux boxes, it is possible to do a full shutdown on modern hardware by calling the regular shutdown program from a screensaver-like program. One could modify one of the activity monitor programs so it shuts the system down if there is no activity for a period of time. This would shut the system down overnight and during slow days in the lab.
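A minimal sketch of that idea: the decision itself is just a threshold compare, which a cron job could pair with the regular shutdown program. The `compute_idle_minutes` helper here is hypothetical; how you actually measure idleness (parsing `w`, querying the screensaver) is left open:

```shell
# Returns success if the box has been idle at least <threshold> minutes.
should_shutdown() {  # should_shutdown <idle_minutes> <threshold_minutes>
  [ "$1" -ge "$2" ]
}

# A cron job might then do (commented out for safety; the helper
# compute_idle_minutes is a placeholder you'd write yourself):
# idle=$(compute_idle_minutes)
# should_shutdown "$idle" 45 && /sbin/shutdown -h now
```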

There is also a patch available that does HD spindown for Linux. From what I hear it now works on IDE drives. That alone will get you 5-10 watts per machine.

The nightly scripts run by cron can have their run times changed so they run just after the lab closes, followed by an automated shutdown. Watch that you provide enough spacing between script runs so the previous one's output is available to the next script; this is important for the accounting scripts. You can even do some simple recoding to serialize them, shortening the time it takes to get them all done in the right sequence. That would let cron start them, and the last one would shut the system down when it finishes.
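A sketch of the serialized version: one cron entry runs a wrapper that executes each nightly script in order, stops the chain if one fails (so a later script never runs against missing output), and would call shutdown at the end. Script names are placeholders, and the shutdown is commented out until the chain is trusted:

```shell
# Run the given scripts strictly in sequence; abort on the first failure
# so each script can rely on its predecessor's output being complete.
run_nightly_chain() {
  for script in "$@"; do
    "$script" || { echo "chain stopped at $script"; return 1; }
  done
  echo "chain complete"
  # /sbin/shutdown -h now   # uncomment once the chain is trusted
}

# e.g. from a single cron entry just after lab close:
# run_nightly_chain /etc/nightly/rotate-logs /etc/nightly/accounting
run_nightly_chain true true   # prints "chain complete"
```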