Cool - thanks. I had a cart full of parts ready when the registered/nonregistered question came up.

My main HTPC just flaked out (again), and I'm suspecting the MOBO now. It's an Asus P5E-VM HDMI that gets great reviews everywhere, but has never really been all that stable for me in the 3 years I've owned it.

So I'm dumping it. I'll swap my existing GA-MA74GM-S2 into the HTPC. And pick up the Biostar A760G-M2+, a 5050e, and some ECC ram for use in the server.

Mine is an unRAID server, and although the Biostar A760G-M2+ isn't mentioned anywhere in the "supported motherboards" list, I assume the Biostar's 760G will work, since my current AMD 740 board is supported. The "supported motherboards" list includes a bunch of 740 and 780 boards.

I've always been curious, though: even if the BIOS doesn't explicitly support ECC, but the CPU does, what happens if you use ECC memory? Maybe it actually does "just work" behind the scenes? I found this page with some Linux ECC utilities (and I'd be surprised if something similar didn't exist for Windows). It would be interesting to get several "non-ECC" boards and experiment to see if ECC support actually does work.
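For what it's worth, on Linux the kernel's EDAC subsystem is one way to see whether ECC is actually active. This is just a sketch: the sysfs path below is the standard EDAC location, but whether it shows up at all depends on the kernel having an EDAC driver for your particular memory controller.

```shell
# If an EDAC driver has claimed the memory controller, these sysfs
# files exist and count corrected (ce) / uncorrected (ue) ECC errors.
# No mc0 directory at all usually means ECC reporting isn't active.
mc=/sys/devices/system/edac/mc/mc0
if [ -d "$mc" ]; then
    echo "ECC active: $(cat "$mc"/ce_count) corrected, $(cat "$mc"/ue_count) uncorrected"
else
    echo "no EDAC memory controller found (ECC reporting inactive)"
fi
```

A steadily incrementing ce_count on known-marginal RAM would be fairly strong evidence that ECC really is working behind the scenes.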

Apparently, memtest86+ (www.memtest.org) can verify ECC functionality. I know some version of memtest is included on all Ubuntu installation CDs; I *think* it's memtest86+. You don't need to install the OS: just boot from the CD and select memtest at the first menu screen.

[quote]I've always been curious, though: even if the BIOS doesn't explicitly support ECC, but the CPU does, what happens if you use ECC memory? Maybe it actually does "just work" behind the scenes? I found this page with some Linux ECC utilities (and I'd be surprised if something similar didn't exist for Windows). It would be interesting to get several "non-ECC" boards and experiment to see if ECC support actually does work.

Apparently, memtest86+ (www.memtest.org) can verify ECC functionality. I know some version of memtest is included on all Ubuntu installation CDs; I *think* it's memtest86+. You don't need to install the OS: just boot from the CD and select memtest at the first menu screen.[/quote]

Indeed, I've noticed that when I've used memtest86+ over the years.

However, I'm not so sure it works. I've run memtest86+ with two ECC motherboards now, the Biostar A760G and the Intel S3210SHLC, with the same Crucial ECC memory in both cases. In neither case did memtest86+ verify the ECC functionality (i.e. it looked no different from running memtest on non-ECC memory).

That is really interesting. I bought this Gigabyte 740G board (and a 4850e CPU) last September, and this year, much to my delight, ATI decided to move it to the "legacy" category driver-wise (nice job; the thing had been on the market less than a year when it was declared legacy). I slapped in a GeForce 8400GS, but meh. I want to go back to video-card-less and low power. This 760G seems to be DirectX 10, so hopefully ATI will keep us supplied with drivers for more than half a year... I am using two 2.5" disks (5400 RPM), so I hope the machine won't eat much; indeed, I bet the two 22" monitors eat a hell of a lot more.

The AMD 4050e ($40), the 4850e, and the 5050e have all been deactivated on the 'Egg. That pretty much means they won't be back. They aren't on ZipZoomFly, Tiger Direct, or Directron. You will find a couple of places have them on the 'Net, but they want $100 for them.

After just 8 months in production, the AMD 710 X3 seems to be hitting the sidelines as well, making room for the more expensive 705e, typically $125 and rated at 65 watts.

There are a couple of 45 watt single processors on the Egg currently, one of them is 45 nm tech.

"There are a couple of 45 watt single processors on the Egg currently, one of them is 45 nm tech."

Explain to me how watts relate to 45 or 65 nm. I don't understand that. Like, how much more power would you expect to use with a 65-watt, 45 nm part as opposed to a 45-watt, 45 nm one? Or do they just run cooler, or what?

nm = nanometers. It refers to the size of the smallest features that can be etched into the CPU itself, roughly the width of the transistors and the "wires" (conductive paths; there are no wires per se) connecting them.

The smaller these features are, the less electricity the CPU will use and the less heat it will produce. It also allows a physically smaller CPU to be built. Overall, smaller is better for the manufacturer and for us as well. Heat is often what limits a CPU's maximum speed.

Note that companies seldom make things identical and smaller. If they did, you would very likely see less power consumption and less heat; what they often do instead is add more to the package. A CPU can go from 65 to 45 nm, but if the total transistor count goes up (for example, a larger cache), what may happen is similar heat and power consumption with a larger increase in performance.
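The trade-off above can be made concrete with the standard back-of-the-envelope formula for dynamic CPU power, P ≈ C·V²·f (switched capacitance times voltage squared times frequency). All the numbers below are illustrative assumptions, not real CPU specs; the point is just that a shrink that lowers both capacitance and voltage cuts power sharply at the same clock:

```python
# Rough dynamic power estimate: P ~ C * V^2 * f (arbitrary units).
# Capacitance scales roughly with transistor count and feature size;
# the specific values here are made up for illustration.
def dynamic_power(cap, volts, freq_mhz):
    """Relative dynamic power in arbitrary units."""
    return cap * volts**2 * freq_mhz

p_65nm = dynamic_power(1.00, 1.30, 2600)  # hypothetical 65 nm part
p_45nm = dynamic_power(0.75, 1.10, 2600)  # shrink: less C, lower V, same clock

print(f"45 nm part draws ~{p_45nm / p_65nm:.0%} of the 65 nm part's dynamic power")
```

Which is why a die shrink alone tends to drop the TDP class (95 W to 65 W to 45 W), unless the vendor spends the savings on more cache or higher clocks instead.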


I just thought I'd update this to say that I've had the system running stably for a few weeks now.

With the Biostar A760G motherboard, 5050e CPU (stock voltage/speed, but using OS-based frequency scaling), 2 x 2 GB of PC2-5300 ECC RAM, boot from CompactFlash, four 1 TB WD RE-2 Green drives, a Seasonic S12-II 350 PSU, an Intel 82574L PCIe NIC, and some fans... the system has had an average power consumption (over the last few weeks) of around 60 watts.

However, I've noticed that the Kill-A-Watt occasionally shows a total system power draw of around 50 watts. I believe this is due to the head parking of the WD Green drives (I can hear that characteristic clicking noise when the power consumption drops). This is consistent with SPCR's findings that the drives use about 2 watts less (each) when the heads are unloaded. With four drives, plus a couple of watts lost to PSU inefficiency, the numbers add up.
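The arithmetic checks out. A quick sanity check, using the per-drive figure from the post and an assumed (not measured) PSU efficiency of 80% at this low load:

```python
# Sanity check on the ~10 W drop at the wall when all heads park.
drives = 4
per_drive_savings_w = 2.0             # SPCR figure: heads unloaded vs loaded
dc_savings = drives * per_drive_savings_w   # 8 W saved on the DC side
psu_efficiency = 0.80                 # assumption for this PSU at ~60 W load
ac_savings = dc_savings / psu_efficiency    # what the Kill-A-Watt would see

print(f"expected wall-power drop: {ac_savings:.0f} W")  # ~10 W, matching 60 -> 50
```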

Clearly, since the long-term average is closer to 60 watts, it means that when the heads do park, they don't stay parked for long. So now I'm looking at how I can tune the operating system not to access those drives when idle. Since this machine is idle most of the time, I believe I should be able to get the long-term average power consumption closer to 50 watts.

[quote="Jay_S"]@matt_garman,
Your Biostar looks like an awesome fit, though. Especially considering the ECC support. Looks like just the right host for a Supermicro AOC-USAS-L8i. Would make a killer little file server.

[b][EDIT][/b] Looking at the Biostar's manual, it states that ECC memory is NOT supported... ???

[b][EDIT #2][/b] Never mind - the BIOS manual shows all the ECC options. Don't know why the main manual says it's not supported...[/quote]

I received an email from Biostar support where they state that most types of ECC memory are supported (registered or unbuffered), but the ECC functions aren't supported.

[quote]I received an email from Biostar support where they state that most types of ECC memory are supported (registered or unbuffered), but the ECC functions aren't supported.[/quote]

I've owned this board for about a week now. I can confirm what matt_garman says about ECC: it's fully supported in the BIOS. Whether or not it's actually working, I have no way to test. I paired it with an AMD 4050e and 2 GB of Kingston unbuffered PC2-4200 ECC RAM. This is my unRAID server, which has three WD10EADS drives. I have yet to check power consumption because my Kill-A-Watt is getting flaky (sometimes works, sometimes not), and it's a pain to take down the server.

I'm still playing with airflow in my case (CM Centurion 590), and will be re-designing it for positive pressure. I'll take power readings at that time.

[quote]Clearly, since the long-term average is closer to 60 watts, it means that when the heads do park, they don't stay parked for long. So now I'm looking at how I can tune the operating system not to access those drives when idle. Since this machine is idle most of the time, I believe I should be able to get the long-term average power consumption closer to 50 watts.[/quote]

I'm not sure how your drives are used, but I believe you can get the OS to avoid accessing them when idle by revising your file system setup.

Looking at your server setup, I think you might have some system directories living on these drives, e.g. /var, /tmp, or swap. CentOS and other Red Hat-based distros place system log files in /var/log, so having any of these directories on those drives will cause them to spin up for each write and spin down again when idle.

I have a Fedora-based server that I set up about 2.5 years ago. The system is an MSI K8MM-V with an AMD Turion 64 MT-32. The OS and system files (including swap, /tmp, /var, etc.) go on a 2x 120 GB 2.5" drive array (mdadm RAID 1), and the data (MythTV recordings, photos, music, videos, etc.) goes on a 4x 320 GB SATA 3.5" Seagate 7200.10 RAID 5 array. I have the following lines in my /etc/rc.local:

What they do is set the drives to go into standby (spin down) after 10 minutes of idle.
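The actual rc.local lines didn't survive in the post above, but based on the description (hdparm setting a 10-minute standby timeout), they would look something like the following. The device names are assumptions for a four-drive array; hdparm's -S values from 1 to 240 are in units of 5 seconds, so 120 means 10 minutes:

```shell
# Reconstructed /etc/rc.local excerpt (the original lines were lost).
# hdparm -S sets the standby (spindown) timeout; -S 120 = 120 * 5 s
# = 600 s = 10 minutes of idle before the drive spins down.
command -v hdparm >/dev/null || exit 0   # skip on systems without hdparm
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    hdparm -S 120 "$disk"
done
```

Note that -S only configures the drive's own idle timer; whether the drives actually stay spun down depends on nothing in the OS touching them.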

Complete setup at idle is 52 W. The system is getting a little long in the tooth, but it still serves me well for now, serving files and SD MythTV recordings. I will definitely have to upgrade if I want to go HD, as it's only pushing data over my gigabit LAN at about 28 MB/s with the CPU hitting 100% usage.

[quote]I'm not sure how your drives are used, but I believe you can get the OS to avoid accessing them when idle by revising your file system setup.

Looking at your server setup, I think you might have some system directories living on these drives, e.g. /var, /tmp, or swap. CentOS and other Red Hat-based distros place system log files in /var/log, so having any of these directories on those drives will cause them to spin up for each write and spin down again when idle.[/quote]

The spinning disks are used exclusively as a data store. All the system files and directories reside on a compact flash card (I'm using one of those PATA to CF adapters). In other words, typical system activity (logging, swapping, etc) should not affect the spinning drives.

[quote]What they do is set the drives to go into standby (spin down) after 10 minutes of idle.

Complete setup at idle is 52 W. The system is getting a little long in the tooth, but it still serves me well for now, serving files and SD MythTV recordings. I will definitely have to upgrade if I want to go HD, as it's only pushing data over my gigabit LAN at about 28 MB/s with the CPU hitting 100% usage.[/quote]

My goal is not to have the disks spin down, but go into the mode where the heads are unloaded (aka "parked"). With these Western Digital Green Power drives, this is a power state that is lower than "normal" idle+spinning, but higher than spun-down. On my system, with the drives spinning and the heads loaded/unparked (i.e. "normal"), AC power consumption is about 60 Watts; with the disks spinning but the heads parked, system is about 50 Watts; with the disks spun down, power usage is about 38 Watts.

I'm on the fence about spinning down; it definitely uses the least power, but I'm afraid of too many spinup cycles causing premature failure in the drive. My drives are the "enterprise" RE2 models. At least at one time, conventional wisdom said that enterprise drives were designed to run 24/7 and therefore don't handle frequent spinup cycles well. That's just based on stuff I read at one time; it may no longer be valid (or perhaps was never valid!).

Anyway, I haven't had time to finish digging into the "maximize parked head time in linux" problem, but you can see the discussion I started on the Linux-RAID mailing list here: "linux disk access when idle".

[quote]The spinning disks are used exclusively as a data store. All the system files and directories reside on a compact flash card (I'm using one of those PATA to CF adapters). In other words, typical system activity (logging, swapping, etc) should not affect the spinning drives.[/quote]

Ah! Yes, you did mention the system drive on the CF card somewhere. I was thinking you might have considered the write impact of swap, logs, etc. on the CF card, so I wrongly assumed you had placed them on the array instead.

matt_garman wrote:

Anyway, I haven't had time to finish digging into the "maximize parked head time in linux" problem, but you can see the discussion I started on the Linux-RAID mailing list here: "linux disk access when idle".

It's odd that the drive heads should reactivate after 5 minutes. My RAID 5 array spins down after 10 minutes of no activity and stays that way until there's I/O going to the drives.

Going to look at my own setup again when I get home tonight. Might be something I did that keeps it spun down until there's I/O. I'll update here if I find anything.

I'll be watching the thread at the Linux-RAID mailing list too.

One interesting point (that I haven't yet posted to the Linux-RAID mailing list) is that if the disks actually spin down (i.e. lowest power state), they will stay that way for more than five minutes; in fact, they will stay spun-down until data is explicitly requested from them.

My conclusion is that the head parking logic is more "sensitive" than the spinup/spindown logic... I don't know how else to explain it.
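A side note for anyone trying to reproduce these observations: the drive's spin state (though not the head-park state) can be queried from Linux without waking the drive. The device name below is an assumption, and both commands need root:

```shell
# hdparm -C reports "active/idle" (spinning, heads loaded or parked),
# "standby" (spun down), or "sleeping"; querying doesn't wake the drive.
hdparm -C /dev/sdb 2>/dev/null || echo "hdparm unavailable or no such device"

# Head parks on the WD Green drives increment SMART attribute 193
# (Load_Cycle_Count), so watching that counter reveals park frequency:
smartctl -A /dev/sdb 2>/dev/null | grep -i load_cycle || true
```

Polling Load_Cycle_Count over an idle hour would show directly how often the heads park and unpark.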

[quote]One interesting point (that I haven't yet posted to the Linux-RAID mailing list) is that if the disks actually spin down (i.e. lowest power state), they will stay that way for more than five minutes; in fact, they will stay spun-down until data is explicitly requested from them.

My conclusion is that the head parking logic is more "sensitive" than the spinup/spindown logic... I don't know how else to explain it.[/quote]

Sorry, I've been too busy at work. I've had a look through my server, and other than those hdparm commands in /etc/rc.local to spin down the drives, I could not find anything else that causes them to stay spun down until there's an I/O request.

I'm not sure it's "too sensitive"; it could just be the way the head-parking logic is implemented by WD. I'm not using WD disks anywhere, but could there be some DOS-based utility from the manufacturer to configure that setting?

For a few weeks now I've been planning to set up my own little NAS. I read this thread with huge interest, mainly because of the good description of the power-saving functions implemented in the Biostar A760G M2+. Three days ago my Biostar arrived, and the first thing I did was flash the BIOS. My Sempron 140 wasn't supported by the installed BIOS, so I didn't spend much time and looked for the newest version. I can't tell if that's the reason, but I can't lower the Vcore below 1.075 V, or the multiplier below x8. With CnQ the voltage is the same, but the multiplier is x4.

To avoid the BIOS limitations, I installed AMD's OverDrive utility. This way it was possible to set the multiplier to x4, but the voltage remains at 1.075 V. You can lower the voltage, but as soon as you confirm, it jumps back to the value above.

My question: does anyone else have problems with this? I thought it was possible to adjust the voltage manually, but actually I believe you can only choose among the VIDs... Right now I'm a little disappointed, because this way I won't get my energy consumption below 40 W, using a 300 W power supply and a 2.5" disk.

[quote]I received an email from Biostar support where they state that most types of ECC memory are supported (registered or unbuffered), but the ECC functions aren't supported.[/quote]

I was about to order the Biostar board, but this finding put me off. I've since searched various manufacturers' web sites and found the ASUS M4A78L-M LE. It seems to offer everything the Biostar offers, and has official ECC support. Unfortunately, the web site is suspiciously devoid of details. Anyone have an idea if/when this will be available (and, from looking at the specs, whether it would be worth waiting for)?

Has anyone with this board had any trouble with it detecting memory? I have a 1 GB stick of PC2-6400 in it and it only recognizes 768 MB... or is the video memory permanently locked at 256 MB? I can't see a way to change it; I'd like to dedicate the lowest possible amount.

edit: I'm an idiot, found it, fixed it, though I'm still interested in the stuff below

Or could that be gotten around with the "headless mode" available in the BIOS? Of course, if that disables local screen/keyboard access, how would you ever switch it back? The manual is no help whatsoever.

Is the latest BIOS on the Biostar website good to have installed? I have one from 7/28/09 currently installed, but the newest is from 9/11/09.

Thanks in advance for any help, and thanks for bringing my attention to this board. Got it open-box at Newegg; now I get to upgrade the HTPC and bring its dual-core (BE-2300) over to this box to run my WHS for (hopefully) many, many years.

Can't wait for WHS to be updated to the newer (Vista/W7/Server08) codebase. The older stuff really just does not like hardware changes that the newer stuff can handle much better. The ancient 250 GB Seagate I put in needs to be replaced first; SpeedFan did its little SMART analysis and there are sectors failing, etc. Noisy, hot, and probably a power hog too. Tried setting the BE-2300 to run at 1 GHz, 0.825 V, but it only shaved 1-2 W off the CnQ default idle at 1 V, so I put it back to regular to get full performance when needed. Reinstalled the plugins and Connector software, updated accounts, and it's all back to the way it was (after a billion MS updates). Now to start dumping recorded TV onto this thing and figure out how to start ripping our DVD collection and putting it on here so we can watch from the different PCs and the Extender.

I measured 33 W (EDIT: WRONG, the meter is not accurate) idle: Biostar TA690G, 4850e, cheap 300 W PSU, old 250 GB HDD, four memory sticks (3 GB), two optical drives, floppy, and... an HD 4670! Should be 25 W without the 4670 (or not... I'm not sure). The best low-power board?? I mean, what barrier? Broken long ago.

Last edited by Klusu on Fri Nov 06, 2009 1:49 am, edited 1 time in total.

What did you measure with? My Asus mATX board with a 5050e/780G, new HD, two sticks of memory, etc. was over 40 W at idle; with the 4670 it was over 50. That was using a Kill-A-Watt on 120 V AC to get draw from the wall.

With the meter according to which I pay for electricity. Devices similar to the Kill-A-Watt can be very wrong (I had one). This isn't the first computer I've measured. I was surprised myself, but here on SPCR I've read several reports of around 30 W. It seems the 740G/760G boards are no lower power than the 690G (no reason they would be, with the chipset's claimed idle consumption of about 1 W). I confess I shaved some 4 W off by removing three resistors in the PSU, connected between 0 and +12 V, +5 V, and +3.3 V. I had to, because the +5 V one was very hot, the circuit board was brown, and one capacitor had died. Modecom Feel 300 W. My 4670 draws 7 W DC. 4850e at 0.775 V, 1.1 GHz idle. Memory at 1.95 V (the lowest setting in the BIOS).

[quote]Or could that be gotten around with the "headless mode" available in the BIOS? Of course, if that disables local screen/keyboard access, how would you ever switch it back? The manual is no help whatsoever.[/quote]

Good question - probably via a serial port, a console cable, and another PC running HyperTerminal (or similar). Or popping the CMOS battery!

psiu wrote:

Is the latest BIOS on the Biostar website good to have installed? I have one from 7/28/09 currently installed, but the newest is from 9/11/09.

Mmm. Unless you're having problems that are reported fixed by the newer BIOS, I wouldn't bother.
