
An anonymous reader writes "The Debian Squeeze release is going to be accompanied by a first-rate kFreeBSD port, and now early benchmarks of this port have started coming out using daily install images. The Debian GNU/kFreeBSD project is marrying the FreeBSD kernel with a GNU userland and glibc while making most of the Debian repository packages available for kfreebsd-i386 and kfreebsd-amd64. The first Debian GNU/kFreeBSD benchmarks compare its performance to Debian GNU/Linux with the 2.6.30 kernel while the rest of the packages are the same. Results are shown for both i386 and x86_64 flavors. Debian GNU/kFreeBSD may be running well, but it has a lot of catching up to do in terms of speed against Linux."

As a long-time Debian user who also has to work with FreeBSD sometimes, I don't get it. Why use FreeBSD with GNU apps when you can just run FreeBSD? And why FreeBSD and not, let's say, OpenBSD or NetBSD?
What is the advantage of using the FreeBSD kernel instead of the Linux kernel? I have access to every Linux app when I use FreeBSD, and to be honest, if I knew my way around BSD as well as I do Debian, I would probably switch. But I am missing the improvement for Debian here. Can someone please clear this up for me a bit?

The FreeBSD kernel can be faster than Linux, but there are a lot of poorly written apps that assume they absolutely must run on Linux or were written expecting GNUisms. Now you can run those too.

FreeBSD is generally the more generic and performance-driven of the BSDs, with a larger developer base than the other BSDs. The odds for very good performance and good hardware support are in FreeBSD's favor over OpenBSD or NetBSD.

Porting apps to different platforms can have the advantage of exposing or exacerbating new or hard-to-find bugs in software, the end result being that everyone gets a better final product out of it.

Before seeing this benchmark I took for granted that 64-bit would be faster, or at least on par, in all tests. How do you explain that 32-bit is faster in some tests?

For most workloads, it's generally considered a wash. Larger data structures require more cache and more memory, and force more memory to be accessed. On the other hand, you also get more registers in 64-bit mode. As a result, some things run slower and some things run faster, depending on the nature of the application. On average it's likely to be a wash.

The big exceptions are workloads that rely on the 32-bit PAE extensions; there, generally speaking, 64-bit is going to be a lot faster. Even so, there are some odd exceptions which will hopefully fuel the imagination about what's possible.

One such exception comes from the PostgreSQL guys. On Windows, for example, they strongly recommend running 32-bit PostgreSQL on 64-bit Windows. This seems really non-obvious at first, but there is a good reason for it. If you use a 64-bit OS, you get large pointers which can address large quantities of memory without PAE tricks. And since PostgreSQL spawns a process for each back-end, that means you can run more heavy-hitting (very large data sets, heavy queries, etc.), concurrent back-ends without taking a performance hit. Additionally, PostgreSQL relies heavily on the OS to cache files. With a 64-bit OS cache, large data sets can be readily cached by the OS and quickly return results to the 32-bit PostgreSQL. This means PostgreSQL directly benefits from 64-bit-sized file caches despite running as a 32-bit application. And best of all, a 32-bit PostgreSQL means smaller data structures and more efficient cache use, with twice the effective cache a 64-bit application would get. It's almost the best of both worlds.

As the above example illustrates, sometimes a mix can provide ideal results, but on average, consider it break-even unless you plan on having 4GB or more in your box. And even then...;)

But on x86, you are only guaranteed 4 *real* general-purpose registers. x86_64 increases this number to 16. With a good compiler, the register allocator will use all of these, and you get far fewer memory loads, each of which can take on the order of 75+ CPU cycles on a cache miss, or 5+ cycles on a cache hit.

For best performance on x86-64, you want pointers to remain 32 bits, but still run in Long mode. The OS should make sure that everything is mapped below the 4GB line for the process. As far as I am aware, no operating systems actually support this mode of operation. Without that, for any process using less than 4GB of address space, you have some advantages and some disadvantages when running in 64-bit mode. Whether the advantages outweigh the disadvantages, or vice versa, depends on the code.

Whoa, wait just a second. Before asking whether the operating system supports this, shouldn't we first ask "does the hardware support this?", or more specifically, "does Intel's implementation support this?" Because as far as I can tell from Wikipedia [wikipedia.org], it doesn't.

It's good to have choices. Even if Linux is the best choice for you today, you can never know that it will be the best choice for you forever. Providing Debian GNU/kFreeBSD not only offers Debian users the option of using the FreeBSD kernel instead of Linux, but also offers FreeBSD users a way to use the GNU userland instead of FreeBSD's.

Moreover, in making different kernels and userlands work together, areas where this is problematic are identified and improved, so that other projects besides Debian can benefit, too.

The end result is that you gain more options to mix and match parts to build the system exactly the way you want it.

``I'll stick with my stage 1 Gentoo which is fast, optimized and ready to go.''

It would also be interesting to see benchmarks of functionality actually provided by the respective kernels. E.g. performance of fork, fork+exec, socket, accept, reads and writes on IPC, multiprocessor/multicore/hyperthreading performance, etc. Past benchmarks have shown that there can be dramatic differences between operating systems especially when large numbers of something (processes, filehandles, CPUs, etc.) are being used simultaneously.

Also, I am missing a description of exactly how they measured. Did they recompile the benchmark suite from scratch on each platform? Which compiler was used, and with which settings? Are they running the same binaries on both? How exactly did they arrive at the presented values? Is each bar the result of a single run, or did they run each benchmark multiple times and account for any variation in observed scores somehow?

As others have already mentioned, it would also be interesting to see how a regular FreeBSD system would fare.

All in all, interesting benchmarks. My conclusion: there isn't that much of a difference between the tested versions of Linux and kFreeBSD in these benchmarks. The difference between 32-bit and 64-bit is usually more pronounced. If you need the highest performance for your application, you'll still have to run your own tests.