If you're not booting off the zpool, then just symlink things where you want them.

We run lots of 1U boxes with root on zfs because the boxes simply don't have the space for enough drives to have a separate pair of mirrored "boot" drives. There are downsides, but it's also kind of nice to be able to do things like snapshot all the partitions with the OS on them before doing an upgrade - it makes rolling back any oopses pretty simple.
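
Speaking of which, the snapshot step is only a couple of commands - roughly this, assuming the usual "rpool" root pool name and whatever your boot environment dataset happens to be called:

# recursively snapshot everything in the root pool before the upgrade
zfs snapshot -r rpool@pre-upgrade
# if the upgrade goes sideways, roll the affected dataset(s) back, e.g.:
zfs rollback rpool/ROOT/<your-BE>@pre-upgrade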

I'd like to use the same SSD device as a log and cache device. I think it's possible but being new to Solaris/OI, I'm at a loss as to how to do so. I think I'd have to specify different slices/partitions... And if I need to partition, what kind of partitions do I pick in fdisk?

Anyone know how to do this? And if so, can you include the specific steps?
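
(I'm not on OI myself, but the general recipe I've seen is: give the SSD a single Solaris2 partition in fdisk, carve it into two slices with format's partition menu, then hand each slice to zpool separately. A rough sketch, with "tank" and c0t2d0 as made-up names:)

# slice 0 becomes the log (ZIL) device, slice 1 becomes L2ARC cache
zpool add tank log c0t2d0s0
zpool add tank cache c0t2d0s1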

It's true, ZFS does need RAM for speed. I didn't think it did, but I upgraded my Microserver (running FreeNAS 0.7.5, ZFS v28) from 1.5GB (despite having two 1GB DIMMS?!?) to 8GB at the weekend. Speed increase was instantly apparent when doing file moves between the "scratch" disc (used for BitTorrents) and the ZFS RAID array. A 1GB video used to take upwards of 45 seconds to move, now it's less than 10 seconds. Yay!

I recently upgraded my ZFS server (the one that started this thread). The motherboard had failed - bad capacitors in a board from 2008, shame on MSI! The original build posted in this thread ran 24/7 from May 2008 to August 2011. It was in storage from August until November when I moved into my new house.

I swapped the 45W AM2 AMD 4050e from the ZFS server into my HTPC which has an AM2/AM3 motherboard with DDR2. This freed up the 65W AM3 Athlon II X2 240 that I was using in the HTPC. I bought a new AM3 motherboard and 16GB of DDR3 to go with the X2 240.

I am running the latest version of FreeNAS as an OS, replacing the original Solaris 10 install. One thing I have noticed is that performance over CIFS on gigabit LAN is much worse than under Solaris - read speeds of ~34MB/s instead of ~80MB/s...

I have a 60GB OCZ Vertex that has been retired from other uses that I will be installing as a cache device.

Doubt it. I'm a big Seagate user, and have had next to no failures. I think the last truly bad Seagate I had was the 4.5GB Medalist SCSI; I went through a dozen of those in the space of a year.

I'm running some very old 500GB Barracuda ES drives in my FreeNAS - they're hellishly noisy with bearing/platter whine and noisy actuators, they've been perfectly reliable, despite over 4 years of use before I got them. I have two spare.

I realized that I have one 2TB WD Green that I bought before the floods. I wonder if there would be a consequence to using mis-matched drives (1 WD and 3 Seagate), assuming I can change the head park time on the WD to something more reasonable. I never did that on my 750GB WD Green drives, and their head park counts after years of being on are... very, very large.

Having trouble replacing a bad drive. I have a 4-disk raidz, one drive is repeatedly getting soft and hard errors. I also have controller issues with the box, and am currently only able to attach 4 drives... so I can't put the new drive in and do a simple replace.

So I pulled the bad drive (yes, after positively identifying it), put the new (replacement) drive in... first I had to mark it online, then tried replace, no dice. So then I went to format, labeled it and formatted it, but I'm still getting an I/O error when I try to do a "zpool replace Tank c4t4d0". I'm going to swap the (failing) drive back in for now... it is failing, but slowly... this was just an attempt to replace it before it croaked.

Any tips for replacing when you can't put both physical drives in at the same time?

Can you provide more specific error messages than "no dice"?

What controller are you using (e.g. on-board SATA, IBM M1015, etc.)? What driver are you using for said controller (AHCI, mpt_sas, etc.)?

I'll post more specific info tonight when I'm in front of the box again. Controller = onboard SATA. The error only says "Cannot replace: I/O error" (from memory, so I may be missing something, but it was a very bland/generic message). I did not "offline" the drive first, which may be part of the issue.

Again, I'll take another crack at it after I get the kid to bed tonight, and post more specific info if it fails.

OK - cfgadm -l | grep sata (only way I could see it without dealing with a few screens of USB info) shows me each sata disk, with the following relevant bit:

sata0/4 disk connected unconfigured unknown.

all other drives report "configured" and "OK" for the last two columns.

Quote:

devfsadm -Cv

The 1st time I ran it, I had several screens of output as it removed files - seemingly drive identifiers not in use (saw lots of "c5t5d1"-type entries which don't match any of my disks). Subsequent runs came back empty, I'm assuming b/c the list had been purged.

Quote:

cfgadm -al

Again, I had to add | grep sata to the end to filter it down. It shows the following:

sata0/4 disk connected unconfigured unknown
sata0/5::dsk/c4t5d0 disk connected configured ok

So it is missing the "::dsk/c4t4d0" label.

Running a "format" command, it does not see the drive. (fwiw, yes, the system does support hot-swap, disks are in hotsawp trays, and the system indicates removal/detection of the drive without errors when I swap them)

Rebooted, and now format sees the device - c4t4d0 is present. Offlined the new drive, onlined it, re-ran the replace command... no errors this time.

Thanks guys... not sure which step made the real difference, but appreciate the extra eyes on the issue. Will let it resilver overnight (have about 4.5TB of data on the array) and see how it looks in the AM. Will give it a few days, and then RMA the old drive... unless I see continued issues on the new one, in which case I'll move to replacing SATA cables and/or trying to mix the ports around.
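
In case it helps anyone searching later, the whole dance boiled down to roughly this (pool/device names as above; the cfgadm configure line is the step I suspect would have saved the reboot, but I didn't actually try it):

cfgadm -al | grep sata          # spot the port stuck at "unconfigured"
cfgadm -c configure sata0/4     # (untested here) configure the port without rebooting
devfsadm -Cv                    # clean out stale /dev links
format                          # confirm the new disk (c4t4d0) is visible
zpool replace Tank c4t4d0       # kick off the resilver
zpool status Tank               # keep an eye on resilver progress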

It wasn't clear from the earlier response, but since onboard SATA is being used, is your onboard controller in AHCI mode? AHCI mode supports hotswap while the other mode (I forget the name at the moment) does not.

Thanks again guys. Chris - I'll add that in next time I'm working on the box. Still wasn't quite done when I left for work this AM.

dj2k4... I believe it is, but will need to wait until I bounce the box to check. Again, it was detecting the drive being swapped without any intervention (showing disconnected and detected upon removal and insertion) so I THINK it is, but can't swear to it until I can take a peek in the BIOS.

I still 1/2 suspect that the issue wasn't REALLY the drive, but a bad cable or port. Will monitor it, maybe subject it to daily scrubs for a few days, and if that turns out to be the case, instead of RMA'ing the old drive, may just add it as a hot spare once I resolve the issue.

Initial attempt to rebuild (resilver) failed. The *NEW* drive dropped out of the array after too many errors.

Removed the side panel of the case. Pulled the hot-swap cage, reseated the SATA cable. Re-inserted the hot-swap cage. "Replaced" the drive in the pool. Watched the system resilver with no further errors. Looked at what is probably a perfectly good drive sitting on my desk, shrugged, and reminded myself to check the cheap stuff first next time.

On that note... seriously contemplating picking up one of these http://www.newegg.com/Product/Product.a ... 6816118112 before any more drives. I have an 8-port PCI-X card in my old box, but the connectors are too close together to use locking SATA cables. I know the BR10 card can be found fairly cheaply, but I figure that if I'm going to spend any $$, I should buy something that can use drives bigger than what I have, so that I'm not limited to my current capacity. The LSI card can handle 3+ TB drives, so it at least gives me an avenue to grow down the road....

I have built myself a home NAS using an HP Microserver and FreeBSD: the thing runs a 4x2TB RaidZ, Samba, PF firewall, UPnP, Transmission and Sabnzbd daemons beautifully, and nothing else (I didn't install a window manager as I run it headless).

Lately, I've been thinking of also running Windows Server on the box to experiment with creating a home domain, which would mean virtualization. Is there any way I can do so without bogging the machine down too much? The thing has an Athlon II Neo 1.3GHz and 4GB of RAM, so I'd really prefer not to have to install a third OS to act as host, especially if a window manager is also required.

Also, how does ZFS react to being run in a virtualized OS, and can an existing RaidZ be migrated to a virtual machine? This is a deal-breaker to me, as I cannot afford to lose the data in the array.

While I have no specific experience w/ FreeBSD....

As long as you run a virtualization platform that will let you pass physical drives directly to the guest OS, you should have no issues with ZFS - just export the pool first, set up your guest OS, map the drives to that guest, and import. ZFS runs very well under ESXi.
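
The export/import half is two commands - something like this, with "tank" standing in for your pool name:

# on the old bare-metal install, before handing the disks to the guest
zpool export tank
# inside the guest OS, once the physical disks are mapped through
zpool import            # lists any pools found on the attached disks
zpool import tank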

That being said... it sounds like your hardware (CPU/RAM) may be on the light side for running multiple guest OSes with decent performance.

It's one of the illumos (OpenSolaris descendant) distros with an aim towards being a hypervisor. It boots off a USB stick (or DVD), leaving all the disks available for your ZFS pool. Part of the pool is used for persistence across reboots.

The virtual machines available are Zones and KVM. For the Unix-y services you can load them up in the much lighter weight zones and set up KVM for the Windows server (and any other VMs you might need in the future).

Where it gets tricky is the handling of the Unix-y stuff. Who knows what is portable to illumos, whether something is kernel-based (and so needs the global zone), how it has to deal with the persistence setup, etc. Additionally, SmartOS is rather new, so the documentation is still sketchy and the community is small.

I think it's a neat idea (and one I'm planning to migrate to from my OpenIndiana setup) but because it seems like a rather unique approach it is not for everyone, and it might not even be worth the effort in some cases.

If I used SmartOS (or I suppose any of the newer releases that support KVM), is there a convenient way to do the graphical install of Windows in a VM while still just having a headless server? Virt-viewer would appear to allow this if I were running Linux, but other than the NAS box, everything else I have is running Windows.

I haven't really used SmartOS yet but I believe you use vmadm through the CLI (through SSH or some such) to create the virtual machines (either zones or KVM). After the KVM system is created and booting you use VNC from your desktop to handle GUI portions of installation/operation. After installation you can then switch to Remote Desktop directly to your Windows VM.
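
From what I've read it's roughly along these lines - field names from memory, so treat this as a sketch and check vmadm(1M) before trusting it:

# windows.json - a minimal KVM manifest (all values are placeholders):
#   { "brand": "kvm", "vcpus": 2, "ram": 4096,
#     "disks": [ { "boot": true, "model": "virtio", "size": 40960 } ],
#     "nics":  [ { "nic_tag": "admin", "model": "virtio", "ip": "dhcp" } ] }
vmadm create -f windows.json    # prints the UUID of the new VM
vmadm info <uuid> vnc           # shows the host/port to point a VNC client at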

LSI has made it much easier to get a hold of the sas2ircu utility. Maybe this is now common knowledge but for a while I was getting v5 from an obscure Supermicro link and v7 from an obscure LSI for Oracle link.

Additionally, I found a much more concise utility, diskmap.py. It's a wrapper for sas2ircu; however, it was written against the v5 output. Sometime between v5 and v7 the output format was expanded, breaking the diskmap.py utility. To correct this you can use the following diff:

You, sir, rock and I owe you a beer. Found the card from a seller on Amazon for $70, new. $25 to Monoprice for a couple of cables, and I think I know what I'm doing next weekend....

Just wanted to follow up - thanks again Chris. Got the card for $80 from an Amazon reseller, and picked up two cables from Monoprice. All came in last night.

Flashing the card was a little bit of a pain, just b/c without a Windows box around creating the bootable USB stick was annoying, but I got it done and the flashing itself went fairly straightforwardly after that. Exported my pool beforehand, imported after connecting to the new card... not seeing much of a speed change, but I expect this to definitely be MUCH more stable than the flaky ports on my mobo, and using the 1-to-4 cables cleans up the inside of the case a bit as well. Overall a great deal for the $$ in my mind, and I'm seeing about 280MB/sec writes and 350MB/sec reads on a 4-disk raidz (with slower, 5900rpm disks). Running Bonnie++ on it now just to see how it does, but already satisfied that it is MORE than adequate for my needs.

Looking at building a 2nd box that will mostly be an ESX "test" box - I just scrounged up all of my older, left-over drives, and have the following on-hand:

2x 500GB
3x 160GB
1x 320GB

(and a handful of laptop drives that I haven't evaluated yet)

I'm thinking a mirror of the 500s and a raidz of the 160s, but is there anything else that makes sense? I don't really need a lot of capacity on this box - my main storage box now has 4x2TB + 4x750GB arrays in it, so this is strictly VM storage. That being said, I've seen a few things advocating a mirror over raidz for VM storage due to higher IOPS... would it make sense to put the 160GB drives in a 3-way mirror? Again... I don't really care too much about capacity, this is just going to be used for the occasional experiment, and I'm trying not to spend any $$ on additional disks beyond what I already have on-hand.

On a basic level you're looking at roughly the same performance characteristics for a single 2-way mirror vs. a RAIDZ over 3 disks. Read throughput roughly tracks the number of disks you can read from, and write throughput tracks the number of vdevs - a 2-way mirror gives you two readable disks, a 3-disk raidz gives you two data disks, and either way it's a single vdev.

That brings up a question. How DOES ZFS deal with mis-matched vdevs within the same pool? Ex: On my main storage box, I have two pools - one a single raidz of 4x2TB, and one a raidz of 4x750GB. Would there be any point at all to backing up the smaller pool and wiping it to merge into the larger? Would ZFS simply stripe the first 2.2TB across all disks until the smaller disks reached capacity, and then stripe the remaining data across the remaining free space on the 4x2TB array?

I'm curious, as my eventual plan is to replace the 750gb drives with 2 or 3tb drives once prices drop somewhat and when I start nearing capacity. Just trying to plan ahead...

Ok, follow up to my previous question. I have now bought 8GB of RAM for the server (Amazon has 4GB DDR3 banks for 20 EUR each, so why the hell not). What about installing Windows Server as the main OS and running FreeBSD in a virtual machine on top of it: would the hardware still be too light?

Quote:

Would ZFS simply stripe the first 2.2TB across all disks until the smaller disks reached capacity, and then stripe the remaining data across the remaining free space on the 4x2TB array?

I don't know the particulars of the code, but my understanding is it would be fairly balanced across the vdevs based on % used and not on absolute value. Assuming a fresh start, your 2TB drives would get the majority of the data, with a proportional amount going to the 750GB drives. ZFS does dynamic striping and, from what I understand, when vdevs get close to full Bad Things(tm) happen, so I believe there's quite a bit of logic to try to keep some vdevs from filling up sooner than others.

However, if you were to zfs send/recv the data from your (currently) separate 750GB pool to your 2TB pool and then merge the disks into one pool, you'd have all your current data on the 2TB drives and nothing on the 750GB drives. New data would be spread out (under what exact algorithm, I don't know), but it would probably be biased initially towards the higher-%-free drives until things were a little better balanced.
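
The send/recv part of that is the easy bit, at least - very roughly, with made-up pool names (double-check the recv flags against the man page before doing it for real):

# snapshot the whole small pool and stream it into the big one
zfs snapshot -r smallpool@migrate
zfs send -R smallpool@migrate | zfs recv -Fd bigpool
# once the data is verified on the big pool, reuse the old disks as a second vdev
zpool destroy smallpool
zpool add bigpool raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0   # example device names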

If you're willing to be a guinea pig you can go through with the merge procedure and then watch zpool iostat -v <poolname>. That will show you the capacity columns (alloc, free) as well as the operations/bandwidth on a per-disk and per-vdev level. Queue up a single, large transfer like a DVD image and/or a slew of tiny files like your web cookies directory and see how the system behaves.
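
A trailing interval makes it refresh continuously, which is handy for watching the balance while the transfer runs:

# per-vdev and per-disk alloc/free, ops and bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5     # "tank" is just an example pool name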

Quote:

Ok, follow up to my previous question. I have now bought 8GB of RAM for the server (Amazon has 4GB DDR3 banks for 20 EUR each, so why the hell not). What about installing Windows Server as the main OS and running FreeBSD in a virtual machine on top of it: would the hardware still be too light?

My general experience has been that RAM is quite often the first bottleneck encountered so 8GB would be wonderful.

As to my previous post regarding SmartOS, I just realized you're running AMD. That is currently not supported for KVM on SmartOS (Intel only, ATM).

If you use Hyper-V for your virtualization needs but still want to use ZFS for the NAS portion under a virtualized FreeBSD, then you might run into some performance issues (and potentially corruption issues), as it appears that system doesn't support VT-d to pass the hardware through directly.

If you really want ZFS, then you might be best off running your Windows server as a VM under VirtualBox or something similar. If you really want an SMB-based NAS and AD, then you might be best off with Windows on the bare metal, with Hyper-V running FreeBSD (or whatever) for your one-off services like UPnP, Sabnzbd, etc. You'll need to prioritize and adjust accordingly.

Unless someone else who knows more of the particulars of that hardware chimes in, it appears you can't quite have your cake and eat it too.

Hmmm... does the new Windows Server 8 Beta include the recently announced Pooled Storage? That might be a kinda awesome route to go. It's not quite ZFS but it's much closer and it would certainly be a valuable learning experience.

I'm testing an Intel 311 SSD (Larsen Creek) as a slog device. I got it with the goal of improving my NFS write performance on my home NAS, an OpenIndiana 151a server. It definitely has done that, so I'm mostly happy with it. At the same time, I'm also slightly disappointed it isn't faster than it is.

I have done three tests, which simply time the cp(1) speed for three sets of files from my Mac via NFSv3 over gigabit ethernet to ZFS:

I also tested a brand new Intel 520 SSD (Cherryville). It was somewhat faster than the 311 except in the small file test 3. I thought it might be much faster since Sandforce drives are supposed to be wickedly fast writers. I'll keep the 311 in the slog role though, since it is SLC and should hold up better over time.

I'm not sure how the 311 ended up faster than the temporary ramdisk in test 3. Maybe there was some competing hard drive activity on the Mac side when I did that test with the ramdisk. Or maybe the ramdisk wasn't large enough. (I made a mistake copying the results. Ramdisk was fastest.)

With the 311, watching iostat on the server over 5 second intervals, I didn't see average writes faster than 34 MiB/sec or more than 350 transactions per second. Intel says this drive can sustain write speeds "up to" 105 MB/s and can do 4KB random writes "up to" 3300 IOPS. I didn't get near either of those stats, unfortunately. There's probably a very specific benchmark that is needed to wring out that kind of performance.
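
(That's just stock iostat with an interval, something along the lines of:)

# illumos/OpenIndiana: extended per-device stats, repeating every 5 seconds
iostat -xn 5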

So overall, this looks like a good addition. It would have been an awesome addition if it came closer to the manufacturer's write specs. I'm sure Intel would want me to buy another 311 and stripe them.

Anybody else using a 311 as slog? Do you see similar numbers? Any suggestions for boosting performance? Is there another inexpensive (<$200) device that is worth using as a slog?

Surprisingly topical and timely - here's a blog post @ nex7.com (http://nex7.com/node/12) about using Intel 311 drives as slog drives. While the bulk of the article is on the effect (or lack thereof) of multiple ZIL drives, it also covers the single-drive performance case.

It would be interesting to see what numbers you get in the same set of tests with sync disabled on the pool as well. (Just as a high-water mark for what the drives "could" do.)
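
That's a one-property change if you want to try it - "tank" here is a placeholder for the pool name, and remember to flip it back afterwards:

zfs set sync=disabled tank    # benchmarking only - sync-write guarantees go away
# ...re-run the same cp tests...
zfs set sync=standard tank    # restore normal behavior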

Quote:

Anybody else using a 311 as slog? Do you see similar numbers? Any suggestions for boosting performance? Is there another inexpensive (<$200) device that is worth using as a slog?

I finished putting together a PostgreSQL server that's all-SSD recently (on top of ZFS). Performance on two mirrored pools of 160GB Intel 320s is fantastic - spinny disks that would do the same would force me from a 1U to 4U case and cost a ton more cash. When I was doing testing, I went through some basic benchmarks - 2 SATA, 2 SATA + 2 SSDs for ZIL, and then 2 SSDs.

In short, the 2 SATA + 2 SSD ZIL was very, very close to the all-SSD option. Do bear in mind this only helps with sync writes, but in my case, I'm running a DB, so almost everything is a sync write. For many common applications though I did see some nice speedups with the combo, so we'll probably be velcroing a pair of SSDs into a few more 1U boxes in the near future. It's a no-brainer - the smallest Intel 320 is under $100, and there's no benefit to using larger drives for ZIL.
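
For anyone following along, adding the pair as a mirrored log vdev is a one-liner (device names made up):

# mirrored slog so a single SSD failure doesn't take the sync-write path down with it
zpool add tank log mirror c0t4d0 c0t5d0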