I would like to rebuild my file server using FreeBSD. I know that ZFS is still considered experimental there, but I'd like to know if anyone has had positive experiences with it on FreeBSD. From my understanding, 64-bit is more stable than 32-bit, and throwing plenty of memory at it via kernel tuning and a few other tricks can help. I would love to hear your pros and cons. BTW, the volume would be 3 TB. I know that's nowhere near any ZFS limit, but I have seen odd things happen while the kinks get worked out of a new file system, much less a hybrid volume manager / file system such as ZFS.

Well, I think memory could be an issue if you'll be using real-time compression on ZFS. I haven't tried ZFS yet, but I will soon. From the things I've read, though, I haven't heard anything really bad or negative about it.

Come on now. The guy is asking about your experience with ZFS on FreeBSD, not on OpenSolaris or a Commodore 64.
@nniemeyer:
I manage 3 production FreeBSD servers (mail & DNS). But I would also like to experiment with ZFS on my home file server. So far, I hear that people encounter a lot of problems.

Can anyone elaborate on this?

George

__________________
...when you have excluded the impossible, whatever remains, however improbable, must be the truth.

I've played around with ZFS in FreeBSD a little bit, but so far only on my laptop. I am planning on giving it a go for a home fileserver I'll be putting together as soon as I get my hands on a monitor so I can actually run the installer...

I can say that it didn't give me any problems whatsoever running on my MacBook (CD) with 2 GB of RAM, and being able to have /usr/ports compressed was pretty nice and actually saved a decent amount of HD space.

I was using it at home until I lost two harddrives at once (due to old age, thank heavens for current backups). Waiting for the replacements to arrive before rebuilding the system with raidz2.

Hardware used:

MSI Master K7D motherboard

dual-AthlonMP 1.8 GHz CPUs

3 GB SDRAM

Promise FasTrak ATA133 controller

2x 100 GB ATA100 HDs

2x 160 GB ATA100 HDs

3Com 3c905C-TX NIC

I had ad0s1 and ad2s1 configured as a 36 GB gmirror RAID1 array. ad0s2 and ad2s2 were 2 GB swap partitions. And ad0s3, ad2s3, ad4s1, and ad6s1 were part of a 300 GB raidz zpool.

/var, /usr, /usr/src, /usr/local, /usr/ports, /home, and /home/samba were all zfs filesystems. /var had a quota of 20GB, and /usr/src had compression enabled.
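If anyone wants to replicate that setup, it could be created roughly like this. The pool name "tank" is an assumption on my part (I didn't mention the real one above):

```shell
# Hypothetical recreation of the datasets described above.
# Pool name "tank" is an assumption; adjust to your own pool.
zfs create tank/var
zfs set quota=20G tank/var            # cap /var at 20 GB
zfs create -p tank/usr/src            # -p creates tank/usr as needed
zfs set compression=on tank/usr/src   # compress the source tree
```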

For the two weeks my drives lasted (using old, scrounged-together drives is not recommended), things worked nicely. No kernel panics, no issues. /home/samba held 100 GB of mp3 music and mp4 movies shared via Samba to laptops around the house.

The only kernel tuning I did was to set vfs.zfs.prefetch_disable="1" and vfs.zfs.arc_max="1G" in /boot/loader.conf. The first disables prefetch, a feature known to cause issues at the time, and the second caps the ZFS ARC cache at 1 GB of memory.
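For reference, those two tunables look like this in /boot/loader.conf:

```shell
# /boot/loader.conf -- the only ZFS tuning applied on this box
vfs.zfs.prefetch_disable="1"   # disable prefetch
vfs.zfs.arc_max="1G"           # limit the ARC to 1 GB
```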

At work, I'm testing zfs on a:

Tyan h2000M motherboard

2x dual-core Opteron 2.0 GHz CPUs

8 GB DDR2-667 ECC RAM

3Ware 9650SE-12ML SATA RAID controller (PCIe)

12x 500 GB harddrives in JBOD setup

onboard Intel 10/100 NIC for management

Broadcom 10/100/1000 NICs (unused)

Intel Pro/1000MT quad-port NIC (will be used in a 4-port lagg(8) interface)

For testing, we have 64-bit FreeBSD 7-STABLE installed on 1 drive (to be gmirrored to another), a raidz2 zpool made up of 9 drives, and 1 drive left over as a spare for testing rebuilds.
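For anyone curious, a pool like that could be built along these lines. The pool name and device names are assumptions, since I haven't listed the real ones:

```shell
# Sketch of a 9-disk raidz2 pool; "storage" and da1..da9 are
# hypothetical names -- substitute your own pool and devices.
zpool create storage raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9
zpool status storage   # verify the vdev layout and health
```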

There's currently 1 zfs filesystem enabled. The server is running a nightly rsync of another server into the zfs filesystem, and snapshotting the filesystem after each rsync. We'll see how things go on Monday. We haven't done any tuning of any kind (still running GENERIC) as yet.

We're investigating using this as a remote backup solution. (rsync the entire server once, then just sync the changes, and snapshot after each sync, and save 30 snapshots.)
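The nightly cycle could be scripted along these lines. The filesystem name "tank/backup", the host "backuphost", and the exact rsync flags are assumptions; the commands are echoed as a dry run, so drop the "echo" prefixes to run them for real:

```shell
#!/bin/sh
# Hedged sketch of the rsync + snapshot + prune cycle described above.
# "tank/backup" and "backuphost" are placeholder names.
FS="tank/backup"
KEEP=30
SNAP="${FS}@backup-$(date +%Y-%m-%d)"

# 1. Pull the remote server's changes into the local ZFS filesystem.
echo rsync -aH --delete "backuphost:/" "/${FS}/"

# 2. Snapshot the filesystem so tonight's state is preserved.
echo zfs snapshot "${SNAP}"

# 3. Prune: keep the 30 newest snapshots, destroy the rest.
#    -S creation lists newest first; tail -n +31 yields the older ones.
echo "zfs list -H -t snapshot -o name -S creation ${FS} | tail -n +$((KEEP + 1)) | xargs -n1 zfs destroy"
```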