Debian GNU/kFreeBSD on production

Yesterday I began using Debian GNU/kFreeBSD “squeeze” on thorin, my main workstation.

During the last few weeks I had to work through some of the limitations that were holding me back, such as automated driver loading and FUSE. I was lucky enough that other people filled in the missing pieces I wanted, such as NFS client support and a GRUB fix for a bug that broke booting from mirrored pools.

I have to say that I’m very satisfied. Barring a couple of small nuisances here and there, the result is quite impressive:

System responsiveness has noticeably improved, especially on disk-intensive operations such as loading my browser. I believe lightweight ZFS compression has a big influence on this: the default algorithm is fast enough that it saves time rather than spending it.
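If you want to try this yourself, enabling compression is a one-liner. A minimal sketch, assuming a hypothetical pool named `tank` with a `home` dataset (your names will differ); on squeeze-era ZFS, `compression=on` picks the default lightweight algorithm:

```shell
# Enable the default lightweight compression on a dataset
# ("tank/home" is a placeholder; substitute your own pool/dataset):
zfs set compression=on tank/home

# See which algorithm is in effect and how much space it is saving:
zfs get compression,compressratio tank/home
```

Only data written after the property is set gets compressed; existing blocks stay as they are until rewritten.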

Partition management became a lot simpler. Instead of allocating a fixed size to each file system, as legacy setups do, ZFS shares the pool’s storage among all of them. This lets me add, remove or resize file systems or swap volumes on my running system without worrying about where they will be allocated. (I have two pools, one mirrored and one not, and I can add or remove things from either. That’s as far as my involvement with allocation goes.)
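To make this concrete, here is a sketch of the kind of on-the-fly management I mean, assuming a hypothetical pool named `tank` (the ZVOL device path shown is the usual one on FreeBSD kernels, but it may differ on your system):

```shell
# Create a new file system; it draws space from the shared pool,
# no partitioning or sizing decisions needed:
zfs create tank/projects

# Create a 2 GiB swap volume (a ZVOL) and enable it
# (device path is an assumption; check your /dev layout):
zfs create -V 2G tank/swap
mkswap /dev/zvol/tank/swap
swapon /dev/zvol/tank/swap

# Later, grow the swap volume or drop a file system outright;
# freed space simply returns to the pool:
zfs set volsize=4G tank/swap
zfs destroy tank/projects
```

All of this happens on the running system, which is exactly what makes it so much more pleasant than juggling fixed partitions.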

I no longer have to sit through long waits when adding a new physical disk or replacing a broken one. For various reasons I have a lot of old used disks at my disposal, and they tend to break easily, so before putting them to use I used to run badblocks on them to be sure they were safe (reliability is very important to me). Now I simply plug them in and trust ZFS checksumming to do the right thing.
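The equivalent of that old badblocks ritual, sketched with a hypothetical pool named `tank`, is to let ZFS read back and verify everything it has stored:

```shell
# Read every allocated block in the pool and verify its checksum:
zpool scrub tank

# Report scrub progress and any read/write/checksum errors per device:
zpool status -v tank
```

On a redundant (mirror or RAID-Z) pool, a scrub not only detects bad blocks but repairs them from the good copy, which is why a flaky old disk is far less scary than it used to be.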

No more waiting through a long fsck. That was especially annoying when I was in a hurry and, as luck would have it, this happened to be the 30th boot since the last check.

That’s basically my personal experience as a newbie Debian GNU/kFreeBSD user. Of course my perspective is very limited because I just started, and yes, I am biased.

Anyway, what about yours? If you have installed Debian GNU/kFreeBSD, was it meant for production or just a “toy machine”? If you considered using it in production, did it satisfy your needs, or did something hold you back? Leave your comment!

20 Responses to “Debian GNU/kFreeBSD on production”

Hi, have you tried XFS? It’s an amazing thing! fsck runs every time you mount the file system, and the time is constant no matter the size (you won’t notice a difference!)

As for running it in production: check with Jeff Epler, he set up a second debian-kfreebsd production host and he is happy. I will do it on my new “router” with NFS and virtualization (waiting for Debian kFreeBSD 10 for http://wiki.freebsd.org/BHyVe support! hope we will get something new and awesome :) like awesomewm :) (but still I like fluxbox…)

If there were a port of it for the powerpc architecture, I would be interested in trying it, too. It would be easier than using the current FreeBSD distribution, since everything but the kernel is like good old Debian.

I like the idea behind ZFS and pools! I have a Debian GNU/kFreeBSD virtual box as a “toy” system. I don’t remember how I installed it, but I believe I used UFS. I’m going to look into ZFS and either migrate over or do a reinstall. I actually didn’t think kFreeBSD was stable enough to be used as a main production box.

I have been running the read-write GnuPG GIT and SVN server under kFreeBSD for more than a year now without any problems. Now that almost all repos are migrated to GIT, I plan to host the public GIT repo and web access on that box as well.

For nearly two years I have been running my old X31 under kFreeBSD. In the first months suspend/resume worked really nicely, but then came an X update and it all broke; however, it boots pretty fast. After a year without working WLAN, it now works again and I am very satisfied with the box. It is not only used as my laptop but also as an always-powered-up IMAP server for my mail.


ZFS has been an install option since the squeeze release. However, it only supports single-disk pools with a single file system on them.

The latest development version (available in daily builds) supports pools made of multiple disks (which can be combined in striped, mirror or RAID-Z mode), with multiple file systems in them (each either ZFS or a legacy file system via a ZVOL).
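For readers unfamiliar with those layouts, here is a sketch of what each looks like at pool-creation time. The pool name `tank` and the `ada0`–`ada2` device names are placeholders (kFreeBSD device naming may differ on your machine):

```shell
# Striped pool (no redundancy), space of both disks combined:
zpool create tank ada0 ada1

# Mirrored pool: each block stored on both disks:
zpool create tank mirror ada0 ada1

# RAID-Z pool: three disks with single-parity redundancy:
zpool create tank raidz ada0 ada1 ada2

# A legacy file system hosted on a ZVOL inside the pool
# (ext2 via mke2fs, purely as an illustration):
zfs create -V 10G tank/legacy
mke2fs /dev/zvol/tank/legacy
```

Note these `zpool create` variants are alternatives: a given set of disks goes into one pool in one of these modes.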

Actually, I haven’t tried it. I’m familiar with it but it doesn’t provide what I wanted. I’ll explain with an example. In my setup, I have:

root: My root file system

home: My /home

sid: A sid system I can chroot into (but I can also boot directly from it, mounting it as /)

These file systems are all part of the same 15 GiB pool. So far so good, you can do the same with LVM.

However, and this is what makes ZFS attractive to me, I don’t have to allocate a fixed amount of space to each file system. They simply grow using storage from the pool, and the only (shared) limit is the size of the pool.

This implies, among other things, that I can add and remove file systems in any pool without worrying about the disk space they use. If I remove a file system from a pool, I’m not stuck with unused space in that pool as a result. If I want to add a file system to a pool, I don’t have to remove or resize something else first.
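A short sketch of what that looks like day to day, again with a hypothetical pool named `tank`:

```shell
# Every dataset reports the same shared AVAIL, the pool's free space:
zfs list -r tank

# Creating and destroying file systems needs no space planning;
# freed space is immediately available to every other dataset:
zfs create tank/scratch
zfs destroy tank/scratch

# If you do want an upper bound on one file system, set a quota:
zfs set quota=5G tank/home
```

The quota is optional and can be changed or removed at any time, so even the one sizing decision you can make is not a commitment the way a partition size is.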