I remember my Intel D510MO wasn't able to run Ubuntu or Debian, and some tests even showed better power consumption on Windows.

Not sure where you got that info from, but Ubuntu has run on that board fine from the day it was released. I tested power consumption back then and recently, and it's still very low, in line with reviews online.

I would hope so! But unless it includes an early-alert feature, ECC can't work miracles if the RAM is going bad (as opposed to handling the odd error caused by radiation or something).

Does SMART report RAM errors? Does it flag a drive with excessive RAM errors as failing? I don't know, which means it can't be a very common failure mode, but it'd be interesting to know.

For system RAM, there's software that can notify you when errors are corrected by ECC (relying on features which are not necessarily included in affordable hardware).

I'm not in any way saying ZFS is bad, just that some of its features are overblown. If you are worried about data corruption, ZFS will not help you in 99.9% of the cases. You need something above the filesystem for that.

Yeah, obviously with some effort you can get most of the features that ZFS provides. You can use hardware RAID with a redundant power source to avoid the write-hole issue. You can use a good modern filesystem with checksums to identify silent data corruption. You can use LVM or dynamic disks (or whatever it's called on Windows) to overcome partitioning problems.

The thing is that with ZFS you have all these features 'out of the box', with some additional benefits and very easy administration.

I have personally experienced a very simple problem: on RAID1 (two disks mirrored), one chunk of data on one disk got damaged. RAID could not tell which disk held the correct data, so the read failed. These are the kinds of problems that RAID does not solve. And this would never happen on a ZFS mirror, because it integrates hardware redundancy with filesystem checksums.
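A minimal sketch of the principle behind the mirror scenario above. This is not the actual ZFS implementation (ZFS stores checksums in parent block pointers on disk); it just illustrates how a checksum kept separately from the data lets a mirror read pick the undamaged copy, where plain RAID1 cannot. All names here are illustrative.

```python
import hashlib

def read_mirrored_block(copies, expected_checksum):
    """Return the first copy whose SHA-256 matches the stored checksum.

    A plain RAID1 read has no way to arbitrate between divergent copies;
    with an out-of-band checksum, the damaged copy is simply rejected.
    """
    for copy in copies:
        if hashlib.sha256(copy).hexdigest() == expected_checksum:
            return copy
    raise IOError("all mirror copies failed checksum verification")

# One disk silently corrupted its copy of the block.
good = b"original block contents"
bad = b"original block contentz"   # one-byte silent corruption
stored = hashlib.sha256(good).hexdigest()

assert read_mirrored_block([bad, good], stored) == good
```

In real ZFS the verified good copy is also written back over the bad one ("self-healing"), which this sketch omits.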

I am not saying that ZFS is the only possible solution, but it really feels like a different world. Once I tried it, I never looked back.

I remember my Intel D510MO wasn't able to run Ubuntu or Debian, and some tests even showed better power consumption on Windows.

Not sure where you got that info from, but Ubuntu has run on that board fine from the day it was released. I tested power consumption back then and recently, and it's still very low, in line with reviews online.

That was my personal experience. I was one of the first to own this board in the Czech Republic, and after install and restart Ubuntu showed just a black screen. There are a lot of threads around the internet about it (there was some missing driver or something). I spent two days trying to fix it. Windows Server 2008 worked much better...

Sorry, but simple logic proves the CERN report (as reported in the linked article, at least) is impossible in the real world. Modern storage does not have a byte error rate of one in 3 * 10^7, i.e. one error in every 30 MB. The modern world could not function if that were true. ZFS could not even deal with it, as its own checksums would get corrupted regularly at that error rate.
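A back-of-the-envelope check of the magnitude being disputed, as a worked calculation (the 3 * 10^7 figure is taken from the post above; everything else follows from arithmetic):

```python
# One byte error per 3e7 bytes would mean one error roughly every 30 MB.
bytes_per_error = 3 * 10**7

mb_per_error = bytes_per_error / 10**6        # MB read between errors
errors_per_tb = 10**12 / bytes_per_error      # errors per TB read

print(mb_per_error)          # 30.0 MB between errors
print(round(errors_per_tb))  # ~33333 errors for every TB read
```

At tens of thousands of errors per terabyte read, ordinary computing would be visibly broken, which is the point the post is making.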

The second article deals with data loss, which is not the same thing as corruption. Hard drives lose data all the time; no one is disputing that. What they don't do is magically return something different from what was originally written. They either return the requested data or an error. As I said before, they have at least an order of magnitude more error checking than ZFS has.

Many people forget that we had RAID with built-in checksumming; it was called RAID 2. It's long gone, since it was quickly proven to be useless with modern drives.

Quote:

I have personally experienced a very simple problem: on RAID1 (two disks mirrored), one chunk of data on one disk got damaged. RAID could not tell which disk holds correct data, so the read process failed. These are the kind of problems that RAID does not solve. And this would never happen on ZFS mirror, because it integrates hardware redundancy with filesystem checksums.

How would ZFS help if both disks appear good? If both return data, how do you decide which checksum is correct? It's a major logical problem with ZFS: it claims that hard drives cannot be trusted, yet trusts them with the checksum data.

Linking to the scribblings of salesmen isn't an effective way to establish credibility.

washu wrote:

Sorry, but simple logic proves the CERN report (as reported in the linked article at least) is impossible in the real world. Modern storage does not have a byte error rate of 3 * 10^7.

They weren't talking about random byte errors but mostly about more serious problems, which can be troubleshot if the admin is paying attention, or prevented if she had read about other people's problems in the first place:

Bernd Panzer-Steindel, CERN/IT wrote:

- 64k regions of corrupted data, one up to 4 blocks (large correlation with the 3ware-WD disk drop-out problem) (80% of all errors)

washu wrote:

How would ZFS help if both disks appear good? If both return data, how do you trust which checksum is correct? It's a major logical problem with ZFS, it claims that hard drives cannot be trusted, yet trusts them with the checksum data.

The scenario was partial data loss on one hard drive (as if some part of the disk had been overwritten with garbage), not inconsistency following power loss or something. Garbage rarely has the right checksum.

The issue is rather: how common is it for a drive to return garbage? The answer is evidently: a lot less common than it is for controllers *or indeed the system RAM in low-end systems* (notice the thread's topic?) to return garbage.

Now if the RAID implementation gorkypl "personally experienced" fails every time it encounters a read error on a drive, that's a different problem! Because the normal effect of "one chunk of data on one disk got damaged" is a read error. Surely everyone here has experienced at least one.

The scenario was partial data loss on one hard drive (as if some part of the disk had been overwritten with garbage), not inconsistency following power loss or something. Garbage rarely has the right checksum.

I see what you are saying, but how would part of one disk get overwritten by garbage, assuming the drive is part of a RAID set? That would require a serious failure in the OS or the RAID system. ZFS could not deal with the OS being faulty; it has to place some trust in it.

Well yeah, it seems to me that the case in which this feature of ZFS is most useful is: solid OS, unreliable storage. In other words, expensive servers attached to loads of cheap storage.

But if all you've got is a couple of drives attached to a cheap Bay Trail board (meaning your OS is your RAID system and is likely to be affected by any corruption affecting your storage), running ZFS might be more risky than not.

Wonder if you guys would mind taking your ZFS theology debate to its own thread and letting this one stay on topic regarding Bay Trail boards?

On topic: any word on when/if SM will release a BIOS upgrade supporting proper UEFI boot options on the X10SBA(-L)? The 32-bit UEFI boot loader in BIOS 1.0b is a real pain. Also, anybody got OpenELEC running stable on it? Currently running XBMCbuntu and I'm still having sleep/wake issues and HDMI stability issues when the AV receiver is off. I need more stable options.

Had some talks with ASRock, and it looks like the Q1900DC-ITX will be available pretty soon, and even the pricing looks reasonable. As soon as they confirm, I will post it here. I also asked for one piece for review.


Stay tuned

Some Japanese pricing is now showing up. It is 25% higher than the J1900-ITX, ~87.5 EUR.

There is a PCIe x1 Zotac nVidia GT610. There are also [mod: deleted non-functional link] Chinese funky risers for x1 to x16, where power comes from Molex or SATA power cables and the signals are passed from the x1 slot to the x16 riser via a USB3 cable. I'll be testing that with a somewhat old nVidia Quadro FX580 and a Radeon HD6450 (both with low power usage; they don't need the extra power plug) as a third part of the review. I'm curious how much can pass via PCIe x1 2.0 and how it will perform vs. the J1900 graphics (is that graphics limited by slow RAM, etc.?). In general they should be below HD4000 performance.
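For the "how much can pass via PCIe x1 2.0" question, a rough worked estimate from the spec numbers: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b line coding, which caps raw payload bandwidth well before protocol overhead is counted.

```python
# PCIe 2.0 per-lane bandwidth estimate (before packet/protocol overhead).
gigatransfers_per_s = 5e9                       # PCIe 2.0: 5 GT/s per lane
payload_bits_per_s = gigatransfers_per_s * 8 / 10  # 8b/10b encoding: 80% efficient
x1_bytes_per_s = payload_bits_per_s / 8

print(x1_bytes_per_s / 1e6)           # 500.0 -> ~500 MB/s for an x1 link
print(16 * x1_bytes_per_s / 1e9)      # 8.0   -> ~8 GB/s for a full x16 slot
```

So the riser gives the card roughly 1/16 of its normal host bandwidth, which matters most for texture uploads and buffer copies rather than steady-state rendering.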

Can someone point me to the best way to boot FreeDOS on the Gigabyte with F1 firmware? I prepared a FAT16 2 GB USB key, but it does not want to boot (black screen, needs a power cycle; I'm using the DVI output). I'm not even sure that the DVI output is enabled during boot. I tried to boot from a SATA hard disk with a working Win8 installation (in legacy mode), but it did the same (black screen, no disk activity).

The strange thing is, after a CMOS reset I left a USB key inserted and it booted; sadly it was not the correct drive.

The PCIe x1 slot can't be used for a graphics card (maybe with an extra PSU); otherwise this seems to be the best Bay Trail board so far.

Because let's face it, for boards that idle around 10-15 W, there is no classic PSU.

I'm testing three x16 graphics cards connected via a riser-like board to the ASRock x1 slot. Power is provided via a Molex or SATA power cable. That seems to be working; some benchmarks drop compared with x16 (results to come). The bigger problem is that stronger GPUs need the 6-pin additional power plug, which usually doesn't exist in small cases with fanless power adapters (and it would probably have to be ~200 W or more depending on the card).
