just brew it! wrote: Personally, I would not use spanned volumes for anything

Sounds like there's a good story behind that. So what happened?

No particular story. Just the fact that it has the disadvantage of RAID-0 (reduced reliability) without the performance advantage (no striping). If you want that sort of flexibility (ability to arbitrarily remap storage from multiple drives to multiple filesystems), your best bet is LVM on top of MD (with the underlying MD array configured as RAID-1 or RAID-5).
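A minimal sketch of that LVM-on-MD layout, assuming two empty partitions at /dev/sdb1 and /dev/sdc1 (the device names, the "storage" volume group, and the sizes are all illustrative, and everything here needs root):

```shell
# 1. Build a RAID-1 mirror out of the two partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# 2. Put LVM on top of the array.
pvcreate /dev/md0                  # mark the array as an LVM physical volume
vgcreate storage /dev/md0          # create a volume group named "storage"
lvcreate -L 500G -n media storage  # carve out a 500 GB logical volume

# 3. Format and mount it like any other block device.
mkfs.ext4 /dev/storage/media
mount /dev/storage/media /srv/media
```

The point of the stack is that the remapping flexibility lives in LVM while the redundancy lives in MD underneath it, so losing one drive doesn't take out the logical volumes.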

Just a quick mention of UPnP, since no one seems to have brought it up and you've got an Xbox and various Windows PCs you want to share media files with. I've never really bothered with it myself, but besides normal Windows file sharing/Samba, it's the way to share media files with other devices. There are several UPnP media servers available in the Ubuntu repos, but I can't recommend one over another as I've never used any in anger.
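For what it's worth, getting one of those UPnP servers running is usually just a package install plus a pointer at your media directories. The package names below are examples from memory, so check what your Ubuntu release actually ships:

```shell
# See what UPnP media servers your release carries:
apt-cache search upnp media

# Install one of them (mediatomb and ushare are two that have shipped
# in the Ubuntu repos; pick whichever your release has):
sudo apt-get install mediatomb

# Point its config file under /etc/ at your media directories, restart
# the service, and the Xbox should discover the server on the network
# automatically.
```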

I wasn't implying that you would. But doing a simple spanned volume has the same reliability issues (if one disk dies you lose the whole logical volume), without the performance advantages.

Not really. RAID 0 without parity or mirroring has a pretty high failure rate when not implemented on enterprise-grade RAID controllers. Implemented on something like WD Greens, you increase the chance of failure, as latency, performance, and MTTF aren't nearly as controlled or guaranteed as they are on RE2/RE3/RE4 drives. Layering LVM on top of a software RAID 0 implementation is worse still, because you're adding another point of failure without adding redundancy or improving MTTF. Implementing LVM alone allows for partition migration without increasing complexity while adding flexibility, which goes a long way toward giving the OP options for an enterprise-grade backup solution where JBOD does not.

I went back and forth with what to suggest considering the meager hardware. However, I thought that since the OP dove into the linux swimming pool almost head first it would be best to give the OP options with the possibility that future dialogue would allow for further recommendation of moving the logical backup volume to another server as backups are intended to be.

Thanks again for all the advice. It has been extremely helpful and useful. I just wanted to give everyone an update on my status with the server. Over the long weekend I got some testing in and I got the file serving and sharing aspect down along with how the mounting and file systems work in Ubuntu. Here's what I ended up with as a hard drive breakdown in case people want to know for future reference.

Disk 1 - WD EARS - 2TB drive

/boot - 300 MB
/ - 20 GB
swap - 4 GB
/Disk1 - 2 TB (it is less; it is whatever the remaining space is)

Disk 2 - WD EADS - 2TB drive

/Disk2 - 2TB
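For reference, that layout corresponds to an /etc/fstab along these lines. The UUIDs are placeholders (`blkid` reports the real ones), and the filesystem types are assumptions:

```shell
# <file system>     <mount point>  <type>  <options>  <dump>  <pass>
# UUID=xxxx-boot    /boot          ext2    defaults   0       2
# UUID=xxxx-root    /              ext4    defaults   0       1
# UUID=xxxx-swap    none           swap    sw         0       0
# UUID=xxxx-disk1   /Disk1         ext4    defaults   0       2
# UUID=xxxx-disk2   /Disk2         ext4    defaults   0       2
```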

I split up my different server shares between Disk 1 and Disk 2. There are a few reasons I chose to do this breakdown the way I did.

1) Although LVM seemed intriguing, I did not understand exactly how everything worked with the file systems. I tried many installs but in the end I did not feel comfortable with it. In addition, I did not like not knowing which drive my stuff was on. For some reason this is bothersome to me.

2) I know some of you are going to ask where my backup is, but I have two separate HDDs (3.5 TB total, and I only have slightly over 2 TB of data) where I store the data via an eSATA dock. I feel safer if it is not connected directly to the computer.

Overall I am very pleased with the server so far. It feels so light on the Atom server, and I am starting to look into more things I can do with it, such as remote access and FTP, which seem to be the next step. I may try some other features as well and definitely want to hear different add-on ideas.

Once again, thank you. I would not have been able to do this without you guys (and gals).

'To Start Press Any Key.' Where's the ANY key? If something's hard to do, then it's not worth doing. You know, boys, a nuclear reactor is a lot like a woman. You just have to read the manual and press the right buttons.

Geeze, you need to reread JBI's original post. All JBI originally said was that using LVM to span two drives gives you the same potential reliability issue as striping across two drives -- i.e. you have two drives that could fail instead of one. You're still spinning your wheels implying he suggested combining the two. Combining them was something that you introduced into the conversation for some reason.

It's always fun to see how many approaches there are to stuff like this.

I just built a new file server this week to replace my aging ReadyNAS. I ended up getting a Newegg bundle which included a SuperMicro Atom-based board and a SuperMicro mid-tower (with an 80 Plus 300 W power supply).

That SuperMicro board is a bit expensive, but I wanted something with low power draw, passive cooling and which had a lot of SATA onboard.

The bundle was $199 US, which I think is a pretty nice deal. I'm not entirely crazy about the case, but it's not bad. Sans hard disks, it was using about 24 W at the mains with FreeBSD (FreeNAS) booted up; I've not yet measured it with the disks in there (4 x 2 TB WD Green drives). I had 4 GB of SODIMMs from a broken laptop which I put into it. The SuperMicro board also has a USB port right on the motherboard, the same kind of USB port you'd find on the front of a case. I put the USB stick with FreeNAS on it there and boot from it.

FreeNAS really is a nice setup. I have run it on a smaller Via C7 system and it's been rock solid. FreeNAS on even the crappiest of hardware (like my 1.1 GHz Via C7, passively cooled) is way faster than a two year old ReadyNAS, both in terms of UI responsiveness and network throughput.

For the filesystems, I'm not using RAID or ZFS. Part of the reason is to "keep it simple": I like the idea that I can just take a disk out and use it in another FreeBSD system easily. Some of the storage is network backups, and I don't need redundancy on those, so I don't want to run RAID there. I have an rsync job set up in FreeNAS that rsyncs data across some disks in the middle of the night, so the necessary data is copied across disks. I like that better than RAID because if I delete a file, it's still rsynced safely elsewhere (until 2am, anyway). I really wanted to use ZFS, but it's still a bit experimental in the current FreeNAS, and reliability is more important to me than the (admittedly cool) features of ZFS. I predict I'll be running ZFS in the next couple of years though...
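A hand-rolled version of that nightly rsync job might look like the sketch below. The paths and the 2am cron line are hypothetical; FreeNAS sets this up through its web UI rather than a script:

```shell
# mirror_tree: copy one directory tree onto another disk.
# -a preserves permissions and timestamps. Deliberately NOT using
# --delete, so a file removed from the source survives in the copy
# until you prune it yourself.
mirror_tree() {
    src="$1"
    dst="$2"
    mkdir -p "$dst"
    rsync -a "$src/" "$dst/"
}

# Hypothetical example invocation:
#   mirror_tree /mnt/disk1/documents /mnt/disk2/backup/documents
#
# Run nightly at 2am via cron:
#   0 2 * * * /usr/local/bin/nightly_mirror.sh
```

The no-`--delete` choice is what gives the "deleted file is still safe elsewhere" behaviour described above, at the cost of the mirror slowly accumulating cruft.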

FreeNAS also can spin down disks when not in use, and those WD Green disks use almost nothing when not spinning. Since FreeNAS runs off of a USB stick, the disks are really only used when data access occurs, so one of the disks only spins up once a week for a network backup to be made to it.
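On a plain Linux box, the equivalent spin-down behaviour can be had from hdparm (FreeNAS just exposes it as a per-disk setting in the UI). The drive path here is illustrative:

```shell
# -S sets the standby timeout: values 241-251 mean (n-240) * 30 minutes,
# so 242 = spin down after 1 hour of idle.
sudo hdparm -S 242 /dev/sdb

# Check the drive's current power state (active/idle vs. standby):
sudo hdparm -C /dev/sdb
```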

esc_in_ks, do you remember how much power your ReadyNAS pulled (idle/standby and spinning)? How many bays did yours have? $200 for that bundle is nice; those empty two-bay ReadyNAS models cost that much.

wibeasley wrote: esc_in_ks, do you remember how much power your ReadyNAS pulled (idle/standby and spinning)? How many bays did yours have? $200 for that bundle is nice; those empty two-bay ReadyNAS models cost that much.

It's roughly the same power draw, the ReadyNAS (NV+, 4 bay model) is perhaps a little higher. With three Barracuda 7200 rpm drives, it pulls around 45-50W with all drives spinning (but idle), IIRC. According to the Barracuda data sheet (7200.11, hey, only one out of four failed for me!) they draw 5W at idle, meaning the ReadyNAS pulls around 30-35W.

bitvector wrote:Geeze, you need to reread JBI's original post. All JBI originally said was that using LVM to span two drives gives you the same potential reliability issue as striping across two drives -- i.e. you have two drives that could fail instead of one. You're still spinning your wheels implying he suggested combining the two. Combining them was something that you introduced into the conversation for some reason.

Um, I'm not going to argue about this past this post; there's really no reason to. My stance was not "spinning my wheels" in regards to combining the two. My stance is that the chance of failure is far higher on RAID 0 (and the chance of recovery lower) vs. LVM. The two technologies are vastly different in how they are implemented.

RAID 0 stripes across two drives without parity of any kind and with minimal error checking in the process; if a drive fails, the data is irrecoverable unless you are going to spend major capital. In a software RAID/fake RAID setting the chance of data recovery is even worse than with a hardware RAID controller, as the method of implementation varies slightly between versions.

With LVM, even during a drive failure the amount of data loss depends on the position of the logical volume. Depending on that position, logical volumes can be recoverable even after a drive failure. The amount of data recoverable certainly varies, but the possibility of recovery is greater in this setting, provided a backup of the LVM partition layout exists, than with striping, which breaks up a file of a given size between two disks. Once you add in the ability to snapshot and migrate logical volumes, comparing RAID 0 vs. LVM is like comparing apples and oranges. My response was merely pointing out the differences, not describing the scenario of combining LVM with RAID 0, which is why I asked for clarification and attempted to restate my viewpoint.
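For readers following along, the snapshot and migration features mentioned here look roughly like this, with hypothetical volume and device names (root required, so treat it as a sketch rather than something to paste in):

```shell
# Take a point-in-time snapshot of a logical volume before risky changes;
# 5G is the space reserved for copy-on-write deltas:
lvcreate --snapshot --size 5G --name media-snap /dev/storage/media

# Migrate all data off a failing or retiring drive while the logical
# volumes stay mounted and in use:
pvmove /dev/sdb1 /dev/sdc1

# Then drop the old drive from the volume group:
vgreduce storage /dev/sdb1
```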

That being said, my main reason for responding to this post was to make sure my previous response was not taken as a slam or an insult by JBI. I do apologize if that is the case. My goal has always been to help others and offer clarification when possible. My suggestions might leave room for more clarity, and I am always up for respectful dialogue about the best solution to whatever technical issues arise. Again, I apologize to JBI if my previous response came off a little harsh; my responses usually come after work.

I really like your build strategy for a low-power file server. That SuperMicro board looks phenomenal, and I really should eventually look at upgrading to an Atom server board. My Atom board should really serve another purpose because of its lack of SATA ports and expandability. (Mine only has 3 :( )

The reason I ultimately chose to go with Ubuntu (Linux, really) was for a learning experience, more or less, in addition to having it serve my basic needs. That's why I switched from WHS. I do find it interesting how different people choose different options. I guess I can give FreeNAS a VM trial just to experience it.
