
Reader tail.man points out this press release from Debian which says that the port of the Debian system to the FreeBSD kernel will be given equal footing alongside Debian's several other release ports, starting with the release of Squeeze. Excerpting from this release:
"The kFreeBSD architectures for the AMD64/Intel EM64T and i386 processor architectures are now release architectures. Severe bugs on these architectures will be considered release critical the same way as bugs on other architectures like armel or i386 are. If a particular package does not build or work properly on such an architecture this problem is considered release-critical. Debian's main motivation for the inclusion of the FreeBSD kernel into the official release process is the opportunity to offer to its users a broader choice of kernels and also include a kernel that provides features such as jails, the OpenBSD Packet Filter and support for NDIS drivers in the mainline kernel with full support."

Why would you not want to use APT on a server? What part of automatic dependency handling, automatic unneeded package pruning, easy security update application, and secure package retrieval do you not want on your servers?
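For the record, every item in that list maps to a one-liner. A sketch (the package name is just an example):

```shell
# Automatic dependency handling: installing one package pulls in everything it needs
apt-get install postfix

# Automatic pruning of packages that were only installed as dependencies
apt-get autoremove

# Easy security updates: refresh the index, then apply everything pending
apt-get update && apt-get upgrade

# Secure retrieval: the archive is signed, and apt verifies packages against
# the trusted keyring
apt-key list
```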

ZFS with snapshotting and the rest is usable for any filesystem, even root ones. True, ZFS is a memory hog, but man, imagine a root filesystem where you get filesystem-provided revision control for *every* file...

While packages with better FreeBSD compatibility are nice, I wonder whether attracting more release-critical bugs won't slow down Debian releases even further. If it's all positive development, then great, but I'd like to know the downsides too in order to tell whether it's a good or a bad decision.

On the other hand, maybe Debian really can improve on the FreeBSD experience; apt rocks, and the Debian project does perhaps a better job than anyone of combining the disparate parts of the GNU/Linux ecosystem into a coherent operating system.

I am not a big fan of the BSD userland, and I typically install "prefixed Gentoo" on my Macs. (Basically, it brings in a GNU userland, a fresh compiler toolchain, etc. It works well, but the repositories are very basic: it can help set up a Unixy programming environment, but not a feature-complete Unixy desktop system.)

kFreeBSD Debian can potentially make Apt a real option on Macs. Fink sucks. Debian's repositories are much better.

As someone who has had a lot of experience with both, I switched to BSD in 1999. Back then the main reason was Ports. Need to install MySQL? cd /usr/ports/databases/mysql && make install. Then go grab a cup of coffee, come back, and it would have fetched everything it needed, compiled, and run. Or you could fetch a pre-compiled binary via pkg_add -r mysql. Hell, with the first few versions of PostgreSQL I used, the only way I could get the damn thing to work was to use BSD ports. The best you had with Linux was RPM, and that was dependency hell at times.

Also, back in the day it had a better TCP/IP stack, was generally more stable as a server platform, and had decent SMP support. And frankly it was far easier to support than "Linux" was back then, because there was a single FreeBSD, not umpteen different flavors.

Today it has ZFS and DTrace ported over from Solaris. I know ZFS hasn't made it into the mainline Linux kernel yet; not sure about DTrace. But both are handy tools.

Currently we're deployed 100% on FreeBSD for our web, mail, and database servers running PostgreSQL. But that has more to do with using Pair Networks than any other single factor. They've been 100% FreeBSD and consistently in the top 10 in terms of uptime according to Netcraft.

For the past 10 years, I've found FreeBSD to be a stable, secure server operating system that doesn't take a lot of system resources to run. It seems like Linux takes about 256MB of RAM in most default configs these days to run a web server, whereas our BSD machines were using closer to 150MB for the core OS, and that was with both systems running Apache 2.

FreeBSD is the kernel's name and the operating system's name. Linux, FreeBSD, NetBSD, OpenBSD, and DragonFly BSD are monolithic kernels, which is what makes each of them an operating system in its own right, not just a kernel. The same goes for SunOS.

GNU takes credit from others by insisting that GNU software is the most important part of the system. For normal users, Xorg, Firefox, KDE, or even Amarok matter more than any GNU software. Even the Linux kernel matters more than the GNU software: without it, none of the GNU software would run. Not glibc, not bash, not GNOME, and so on. And you don't need GRUB to boot Linux; you can use whatever bootloader you want, so GNU isn't needed there at all.

GNU should have its OWN operating system working, but Hurd is not ready. Building Hurd on the Mach microkernel was a bad choice, and it slowed Hurd's progress toward becoming a complete OS even further.

The GNU project has nice things to say and has produced good software. But it is just pathetic that they try to claim others' fame and honor while expecting everyone else to respect them and honor their work, as if GNU were the one pure thing in the universe.

(And those who try to claim that GNU software is part of Linux (which is a complete operating system, not just a kernel the way microkernels are) should explain whether they call their computer CPU/GNU or Motherboard/GNU, and whether they call the United States England/United States. GNU people always want everyone to forget that monolithic kernels are the old way to build an operating system.)

Apt is indeed awesome, but FreeBSD's package system is GREAT. I never had a complaint with the way FreeBSD handles packages, and there are some fantastic package utilities in ports that make nightmarish tasks freakishly easy, such as package pruning.

Apt, ports, pacman, and the like are more-or-less converging, feature-wise. I'm sure package pruning is freakishly easy with Apt by now too. The nice thing about Apt is the fact that the Debian team is behind it. Essentially, Apt and the Apt repositories are the heart of Debian. You get the same kind of quality control for packages that all the other Debian ports have.

Of course, Ports has the BSD teams behind it, but no "central repository" for quality control of programs outside of their domain.

One great advantage in BSD is that the base system is not packaged, so if you ever start having major package issues, you can simply wipe out all packages and reinstall your applications from scratch.
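With the old pkg_* tools of that era, the wipe-and-reinstall described above looks roughly like this (package and port names are illustrative):

```shell
# Remove every installed third-party package; the base system is untouched
pkg_delete -a

# Reinstall what you actually need, from pre-built packages...
pkg_add -r mysql-server

# ...or from the ports tree
cd /usr/ports/databases/mysql && make install clean
```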

I don't see how that is an advantage. I can do that with Debian. Heck, now I can take a Debian Linux system, create a chrooted environment from which to build FreeBSD, install FreeBSD, install Debian for FreeBSD, and upgrade to the latest Linux kernel, without ever shutting the machine off. The point being, you can always do what you suggest.

This sounds insane to people who approach it from the usual angle. Linux has a lot more support for all the junk and semi-junk hardware out there, but some of the GNU core Unix userland is of questionable quality. All of us have cursed GNU creeping featurism in the command-line utilities and GNU libc problems at one time or another. You would think people would want the Linux kernel and the FreeBSD Unix userland. So why go the other way round?

There are very specific needs being addressed by using the FreeBSD kernel inside a Debian system.

FreeBSD's ports system for third-party applications has only a development head, and that has caused an increasing number of problems. FreeBSD has stable branches and releases for the kernel and for the "core Unix" userland, including binutils and gcc/g++, but not for third-party applications. At the time this was created it was great, because what we wanted then was a stable base system to do "server stuff" with, and the ports/applications were just for accessing things: a light desktop that didn't do much except run xterm and emacs.

Today, I see two main problems with what worked a few years back:

1) those "server style" third-party applications aren't sitting flat on a Unix anymore. They are stacks of dependencies of considerable depth. It's not an Apache with mod_cgi and the base Perl system anymore.

2) some third-party applications have become very aggressive lately and can be unusable in their newest releases. Many people bash GNOME and/or KDE; my own favorite target is Xorg. The Xorg server has caused me the most headaches across all my Linux and FreeBSD machines in recent years.

So, here's the trick. FreeBSD has only one branch in ports, so even if you use an older -STABLE release branch of the FreeBSD core system, you still get the newest releases of third-party applications via ports. That's why my *most* stable OS (FreeBSD) has caused me the most headaches lately: it upgrades me to the newest Xorg *first*, not last as it should.

I don't want to distract too much from the point of this posting by giving reasons why people want the FreeBSD kernel, let's just say there are enough of us. But no matter how much you want the FreeBSD kernel, many see increasing problems with ports/applications for the reasons I gave.

Debian provides stable branches for all applications, and that makes some people who don't generally like Linux still go "PLING!".

In addition to all that, Debian's packaging system, the way it is kept working (few package screwups on upgrade), and the way it integrates /etc/* file management are simply first class and blow other Linuxes out of the water, too. Debian's packaging is the best out there; I haven't seen anyone challenge that notion in a long time.

So, rather suddenly, you have demand for the FreeBSD kernel inside a Debian application-provision system, and here we are.


(BTW, what blows my mind for real is that FreeBSD is now partially sold based on driver availability. Because they kept their NDIS windoze driver integration system alive and maintained when Linux didn't. That is... something, I have to think about it)

There has been a lot of hype about ZFS but what use is it in a desktop system? And honestly, while APT is great for desktop systems, I really wouldn't use it much on a server. So unless there is some amazing benefit for the average user with ZFS why even have this port as a main system?

You must be kidding. You can snapshot your whole root, or your home directory, or anything, and automate it for backups. There is even integration with GNOME's Nautilus to browse ZFS snapshots at the file level. You can create new filesystems and snapshots on the fly, compress them, enforce quotas, export via NFS, send snapshots to a remote system or dump them to a flat file, etc. If you don't have enough disks to use single- or dual-parity RAID-Z, you can even have ZFS record multiple copies of each block. ALL of those have uses on desktop or workstation systems.
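A rough sketch of the commands behind those claims, assuming a pool named "tank" with a dataset "tank/home" (all names illustrative):

```shell
# Snapshot a filesystem, instantly and cheaply
zfs snapshot tank/home@before-upgrade

# Create a new filesystem on the fly, with compression and a quota
zfs create -o compression=on -o quota=10G tank/scratch

# Export over NFS without touching /etc/exports
zfs set sharenfs=on tank/home

# Send a snapshot to a remote box, or dump it to a flat file
zfs send tank/home@before-upgrade | ssh backuphost zfs receive tank/backup
zfs send tank/home@before-upgrade > /mnt/usb/home.zfs

# No spare disks for RAID-Z? Keep two copies of every block on one disk
zfs set copies=2 tank/home
```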

I'm not sure whether FreeBSD supports a ZFS root or not, but Solaris does if you want a taste. If anything, ZFS root is under-hyped. When troubleshooting updates, instead of booting into your old kernel with a trashed userland, you can boot into an old BOOT ENVIRONMENT, as easily as picking a different GRUB entry.

Parent is confusing "kernel" with "base installation", "OS" with "kernel", "microkernel" with "kernel", and "monolithic kernel" with "base installation".

And to those of you who think glibc, gcc, and autotools are not important: I dare you to build a fully open-source Linux distro without them, or even just replace them on your own box. I have tried to make myself a uClibc-based Gentoo, and I still have nightmares about it.

Anyway, let's just call it Debian and be done with it. uname -a will fill you in on the rest.

This is called pinning [debian.org], if anyone is looking for the solution.
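Pinning lives in /etc/apt/preferences; a minimal example that tracks stable but allows pulling individual packages from testing:

```
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 650
```

With this in place, `apt-get -t testing install foo` pulls a single package from testing while everything else stays on stable.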

Or to make lists of required packages, even for your own scripts.

"equivs" [debian.org] can be used to create empty packages for the sole purpose of manipulating dependencies. I usually use it to kill packages that are otherwise demanded by other important metapackages, though you could also use it to 'hold' dependencies for a broken third-party .deb package.
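The workflow, roughly (the package name "dummy-mta" is illustrative):

```shell
apt-get install equivs

# Generate a template control file, then edit it: set Package: dummy-mta
# and, say, Provides: mail-transport-agent
equivs-control dummy-mta.control

# Build the empty package and install it
equivs-build dummy-mta.control
dpkg -i dummy-mta_1.0_all.deb
```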

Yes, I realise you can do without a partition, but when I first set up the LVM volume I followed the best practice found in the howto [tldp.org] I linked.

Using the whole disk as a PV (as opposed to a partition spanning the whole disk) is not recommended because of the management issues it can create. Any other OS that looks at the disk will not recognize the LVM metadata and display the disk as being free, so it is likely it will be overwritten. LVM itself will work fine with whole disk PVs.
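In practice, that means creating the PV inside a partition flagged for LVM rather than on the bare disk (device names are illustrative):

```shell
# Recommended: partition the disk so other tools see a partition table
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 1MiB 100%
parted /dev/sdb set 1 lvm on
pvcreate /dev/sdb1

# Works, but other OSes see the disk as empty and may clobber it:
#   pvcreate /dev/sdb
```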

In future, on dedicated machines, I probably won't use that method, but then again LVM has issues anyway. I set up my first volume without RAID, as the disks were of differing sizes. I added a new disk some time later, which very quickly began to exhibit problems. Because the disk wouldn't read properly, I couldn't retrieve any data from it, and therefore couldn't remove the PV, because it still had data on it. The only way out was to shut down, remove the disk, remove the volume, and then use the metadata to rebuild the volume. I lost a few files, but the main worry was whether I would get any files back at all. To this day, I have files visible on the volume that correspond to lost data. I cannot find any way to delete these files, as they exist only in the metadata, not on the actual disk. I'm not confident that creating a replacement volume (on the same drives) would allow me to inherit an accurate filesystem from the old volume.

So my only option at present is to buy progressively bigger drives, run them in for a few weeks, add them to the volume, and then use the free extents to move data off the older drives. This procedure is fraught with risk. Any tips?
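One lower-risk way to drain an old drive, once the new one has burned in, is pvmove, which relocates extents while the volume stays online (volume-group and device names are illustrative):

```shell
pvcreate /dev/sdc1        # prepare the new, larger disk
vgextend myvg /dev/sdc1   # add it to the volume group
pvmove /dev/sdb1          # migrate all allocated extents off the old PV, online
vgreduce myvg /dev/sdb1   # drop the old PV from the volume group
pvremove /dev/sdb1        # and wipe its LVM label
```

pvmove checkpoints its progress in the LVM metadata, so an interrupted move can be resumed by running plain `pvmove` again.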