
An anonymous reader writes "The Debian Squeeze release is going to be accompanied by a first-rate kFreeBSD port and now early benchmarks of this port have started coming out using daily install images. The Debian GNU/kFreeBSD project is marrying the FreeBSD kernel with a GNU userland and glibc while making most of the Debian repository packages available for kfreebsd-i386 and kfreebsd-amd64. The first Debian GNU/kFreeBSD benchmarks compare the performance of it to Debian GNU/Linux with the 2.6.30 kernel while the rest of the packages are the same. Results are shown for both i386 and x86_64 flavors. Debian GNU/kFreeBSD may be running well, but it has a lot of catching up to do in terms of speed against Linux."

Before seeing this benchmark I took for granted that 64 bits would be faster, or at least be on par, in all tests. How do you explain that 32 bits is faster in some tests?

The only way I can explain it is that some piece of code is not yet optimized to run on 64 bits. This kind of proves my impression right; I can still wait a while before upgrading to a 64-bit OS. The overall performance gain might or might not be there, depending on your use cases.

But on x86, you are only guaranteed 4 *real* general purpose registers. x86_64 increases this number. With a good compiler, the register allocator would use all of these, and you would have much fewer loads from main memory, which can take on the order of 75+ cpu cycles on a cache miss, or 5+ cycles on a cache hit.

Like the other poster said, there's register renaming: this allows pipelined instructions to be executed in parallel even though they happen to be using the same registers. For example, if you have a pipeline of 8 instructions, the first four instructions might be using R1, and the second set of four instructions might be using R1 too, but for different _independent_ reasons. Now you can't store the same R1 value to two different locations, since the second four instructions would generate a different "R1" value.

Are they able to store double the words in 32 bit mode though? Or does it just stick a 32bit word in a 64bit register, and waste half of it?
If that's the case, it shouldn't give x64 a performance hit...

It's not because your CPU is running a 64-bit OS that suddenly every data format has to be replaced with one using 64-bit integers. And it's not because your CPU is running a 32-bit OS that you aren't allowed to manipulate anything bigger than 32 bits.

The OS's bitness has almost no impact on which data formats can be used; it only affects how fast those formats are processed, and how much memory can easily be addressed in a straightforward way.

A 256 x 256 RGBA bitmap with 8 bits per channel will always take the same amount of memory, whether the OS is running 32-bit or 64-bit code. Only with a 64-bit OS will it be much easier to store more than 3 GiB worth of textures. And even a 32-bit OS can manipulate 1024-bit data structures like crypto keys (just a little more slowly, because the CPU internally won't be able to do 64-bit operations).

Also, most OSes are LLP64 or LP64, meaning that the default "int" is still 32 bits. Thus code recompiled for 64 bits will tend to use approximately the same amount of data as the original 32-bit code.

It's not because your CPU is running a 32-bit OS that you aren't allowed to manipulate anything bigger than 32 bits.

I can't speak for other OS's, but in the case of Linux, a 32-bit OS does mean there is a 32-bit limit on general registers (EAX, EBX, ECX, EDX, ESP, EBP, EIP, ESI, EDI). During a task switch, only the lower 32 bits of the general registers will be saved into the process state, even on a 64-bit CPU. When the kernel switches back to the process, only the lower 32 bits will be restored.

in the case of Linux, a 32-bit OS does mean there is a 32-bit limit on general registers (EAX, EBX, ECX, EDX, ESP, EBP, EIP, ESI, EDI).

Well, that's still 8 regs * 4 bytes = 32 bytes per task, vs. 16 regs * 8 bytes = 128 bytes per task. You'll need quite a lot of task switching to fill the cache.

And even then, switching tasks is a rather rare event on the scale of other computations (Linux can switch tasks at 1000 Hz when the low-latency desktop option is compiled into the kernel), so it is less dramatic if the switch does force a reload from RAM.

The parent poster mentioned problems with *data* structures, not with code.

And the code itself is less likely to grow. First, there isn't a big difference in length between x86_64 and stock x86 opcodes; the whole x86 family has used opcodes of varying length since day 1. There's nothing as dramatic as the difference between ARM's native 32-bit and Thumb 16-bit modes.

Second, in 64-bit mode the number of available registers increases dramatically, as said elsewhere in this thread. As such, the net result is

If you're talking about x64, the primary 64-bit consumer desktop / laptop CPU architecture, has it occurred to you that code running in the CPU's 32-bit mode also benefits from the doubled cache? It's not like the 32-bit code only uses half the cache, with 64-bit code using the full cache.

64-bit code uses more memory, which can in some cases result in decreased performance. Also, I believe on some Intel CPUs some performance features are not available in 64-bit mode (I forget the exact details)...

As others have said, 64-bit programs take more memory to run. There's nothing inherently faster about 64-bit registers and operations unless you're dealing with integers that get that big (which, in most everyday programs, they don't). What makes 64-bit faster isn't just "more bits", but optimizations. 32-bit code is typically compiled for the lowest common denominator: i386. However, x86-64 CPUs are guaranteed to be at least i686 compatible (you're also guaranteed a certain level of SSE compatibility and such). In that regard, it's the code optimization we can rely on, not "more bits" (which, due to extra memory usage, will typically make things SLOWER, not faster), to make things faster.

However, not every app or test really benefits that much from i686 optimizations. For those that don't, don't deal in larger numbers, AND don't use so much memory that a 64-bit chip is needed to address it, 32-bit builds will typically be faster.

As to stability, x86-64 is well past the "new" stage. The specification is 10 years old and processors based on it are 7 years old; Linux support was almost immediate. Just how long does it take for you to consider it not bleeding edge anymore? :)

You missed the fact that there are more registers in 64-bit mode than on the famously register-starved 32-bit x86. More places to put things can't hurt, even if you're not dealing in 64-bit values.

The problem with 64-bit is that a lot of code is still hand-tuned for the maximum possible performance on 32-bit arches, and in at least a couple of the cases listed in the benchmarks I wouldn't be shocked if there was some hand-written assembler involved. I have also noticed GCC has some performance tweaks that work around the lack of registers on 32-bit that also tend to get enabled on 64-bit.

On most architectures, 64-bit code is slower. Pointers are bigger, which means you need more memory bandwidth to load them and you use more cache holding them. On x86-64, the situation is confused by the fact that 64-bit means 'using Long mode,' as well as 'using 64-bit pointers'.

Long mode gives you 64-bit registers (so you can store 64-bit values in a single register, rather than spread across two, doubling the number of 64-bit values you can store in registers), more registers, and a few other benefits like removing the 'must use EAX as the target' restriction on a lot of instructions (reducing the number of register-register moves, and decreasing instruction cache usage as a side effect). 64-bit pointers use more memory bandwidth and data cache.

For best performance on x86-64, you want pointers to remain 32 bits, but still run in Long mode. The OS should make sure that everything is mapped below the 4GB line for the process. As far as I am aware, no operating systems actually support this mode of operation. Without that, for any process using less than 4GB of address space, you have some advantages and some disadvantages when running in 64-bit mode. Whether the advantages outweigh the disadvantages, or vice versa, depends on the code.

> Whether the advantages outweigh the disadvantages, or vice versa, depends on the code.

That is exactly what I had figured out by intuition, I guess ;-))

I have learned (or got refreshed on?) some logical explanations to this fact here today, thanks to you and some others.

I have to admit that I don't remember taking the time to evaluate 32 bits vs 64 bits advantages, just postponing that analysis and an eventual upgrade to later. I though I was past due on that matter but with what I have read today, I wil

There are some neat things you can do with a 128-bit architecture. For example, you could assign an IPv6 subnet to your machine and use pointers and IPv6 addresses interchangeably. That's more or less what SGI does with their high-end machines; each pointer holds a 48-bit local address and a 16-bit node ID. The kernel can then access memory on other machines transparently via a cache coherency protocol. For at least the next decade or so, however, the overhead of doing this on a global sca

For best performance on x86-64, you want pointers to remain 32 bits, but still run in Long mode. The OS should make sure that everything is mapped below the 4GB line for the process. As far as I am aware, no operating systems actually support this mode of operation. Without that, for any process using less than 4GB of address space, you have some advantages and some disadvantages when running in 64-bit mode. Whether the advantages outweigh the disadvantages, or vice versa, depends on the code.

The architecture doesn't have to support it. It is a chastity vow from the program: while it may use 64-bit pointers, it simply opts to use only 32-bit ones while still running in 64-bit mode. Sounds confusing, but it's really simple.

The registers are still 64-bit (because the CPU is in 64-bit mode). The memory slots used to store pointers are 32-bit (because no more is needed). The 64-bit instruction set still has "load 32 bits from memory", which can be used to load 32-bit pointers into 64-bit registers.

The reason no one is doing it is because the C API states that the pointers should be 64-bit when in 64-bit mode. If you use a trick like this, your application will no longer be following standards, making it unable to use standard libraries.

Correction: the C ABI says that, and the C ABI is defined on a per-platform basis (for example, FreeBSD and Linux use slightly different calling conventions on IA32). It's up to the operating system to define the ABI or ABIs that it supports. Solaris, IRIX, and most other commercial UNIX variants have been happily supporting both 32-bit and 64-bit ABIs on 64-bit platforms for well over a decade.

Of course we shouldn't ask that, because it would be a stupid thing to ask. The hardware doesn't care. If you use 32-bit pointers then you zero-extend them when you load them into 64-bit registers. From the hardware's perspective, you're using 64-bit pointers, but the top 32 bits are always 0. It's up to the OS (and compiler) and has absolutely nothing to do with the hardware. It is the responsibility of the OS to define the memory layout for the application and if the OS refuses to map anything above

Before seeing this benchmark I took for granted that 64 bits would be faster, or at least be on par, in all tests. How do you explain that 32 bits is faster in some tests?

For most workloads, it's generally considered a wash. Larger data structures require more cache, more memory, and more memory accesses. On the other hand, you also get more registers in 64-bit mode. As a result, some things run slower and some things run faster, depending on the nature of the application. On average it's likely to be a wash.

The big exception is setups that use the 32-bit PAE extensions. There, generally speaking, 64-bit is going to be a lot faster. Even still, there are some odd exceptions which will

Because Postgres is really a Unix-type program and not an NT-type program, you mean; NT has cheap thread creation and expensive process creation, while Unix generally features the reverse situation. If Postgres were NT-ized and used multiple threads instead of multiple processes... perhaps things would be different. The bit-width of the system makes no difference to the amount of RAM used for file caches; are you talking about how fast you can shovel data with a 64-bit processor? Data shoveling is the only

Yes, process model vs. threaded model. On NT, a threaded application is limited to the maximum addressable space of one process, for all back-ends. Since PostgreSQL uses a process model, each back-end is limited to the maximum addressable process space. Thus on a 64-bit Windows box, you can load it up with memory and EACH back-end can manipulate up to 2GB-4GB of address space, depending on configuration.

The bit-width of the system makes no difference to the amount of RAM used for file caches

Absolutely it does. I have no idea why you would believe otherwise. Cache

It only makes a difference, as you say, over 4GB. Or on castrated platforms, 3GB. But since you can't actually use more than 4GB without ugly, performance-killing hacks, anyone using more than 4GB is going to need to use 64 bit just to use all their memory. But that's not just the file cache, that's everything.

I think somewhere I lost you and I'm not really sure how since the conclusion is extremely obvious. You're even making exceptions which were already clearly spelled out or flat out obvious.

Which is faster: 1GB of DB files cached, or 10GB of DB files cached? The latter is only possible on a 64-bit OS. Thus even 32-bit applications can benefit in this environment, despite the fact that they are 32-bit and are not directly accessing all of that data within their own 32-bit address space. That's entirely the point of every

And why would that be surprising? 64-bit lets you address more memory, and they ran the tests on a machine whose memory a 32-bit system could address in full. 64-bit pointers are obviously larger too, so the 32-bit version effectively has more memory and better cache usage.

Some programs aren't going to take advantage of 64 bit registers and so on, but are going to suffer from worse cache performance.

I'm sorry, what?! What "many tests" are you speaking of? 32-bit Debian Linux was notably better only on compilation (which is to be expected) and POV-Ray. A couple of tests showed a very small advantage for the 32-bit system, but 64-bit won MOST of the 27 tests hands down.

Now, I am really surprised to see that Debian Linux 32-bit is actually faster than Debian Linux 64-bit in many tests!

I'm not so surprised to see that somebody didn't read the graphs very well. 32 bit was faster in only 4 out of 25 tests (16%). Further, 2 of those were only marginally faster to the point where they barely count as a clear lead. Conversely in the majority of cases, 64 bit was not only faster but significantly faster. To the point where I wonder if there were other configuration differences -- for example I don't understand why you'd see a much higher hard drive TPS rate under 64 bit (something like 4x)

Hehe, sorry, my fault. I used "many" to mean "more than one", and that was a mistake. After reviewing an English dictionary, it seems that in English "more than one" is expressed by "several". I used to think they were almost synonyms, but "many" means "a lot" more than "several" does.

My God, how many times must I have sounded like I was bullshitting before ;-))

Again, my mistake; I even learned the real meaning of the word "many" today!! ;-)) /. teaches me all kinds of stuff, including getting better at Eng

The benchmarks must be CPU or I/O bound... why should they benchmark sleeping apps?

If they're I/O bound, userspace and kernelspace are both waiting for the drive, and the layout of the files on disk will have more impact than the kernel. You are right that things should be CPU-bound; I should clarify that I meant userspace-CPU-bound. You want to benchmark things that are system-call-heavy, like concurrent apps that use lots of synchronisation primitives.

I really can't hope for last-minute changes to the kernel to bring a big improvement in system performance.

FreeBSD development follows three branches. -CURRENT is where all of the latest stuff is. -STABLE is where stuff that has been tested a bit in -CURRENT goes. -RELEASE branches are where only bug fixes (no new features) go. The 8.x series began development as 8-CURRENT shortly after 7.0 was released (about two years ago). Some features were then back-ported to the 7-STABLE branch, but only those that could be moved without invasive changes that might affect system stability.

8-RELEASE is the latest stable release and has had two years worth of new features on top of the one shipped with Debian, including, among other things:

Improvements to processor affinity and scalability in the scheduler.

A completely new USB stack.

A newer version of ZFS.

Improvements to the sound subsystem (now contains a full OSS 4 implementation, per-vchan volume control, a massively improved mixing algorithm, and other improvements)

A new NFS implementation, including NFSv4 support.

Network stack virtualisation for jails.

It's not a matter of last minute changes, it's a matter of not getting the last two years of improvements. I know that Debian likes the stable-and-tested versions of things, but they don't seem to apply that policy to the Linux kernel.

... you'll almost exclusively be testing usermode code unrelated to the OS.

Nobody cares about what OS itself does.

All people want from an OS is that their applications run, and run as well as possible.

... but you'll mostly be testing 3rd party code and the quality of your compiler's optimizations, not the OS kernel.

The statement is valid for, e.g., MS-DOS. But it was never valid for OSes supporting virtual memory and I/O abstraction: the way the kernel does things impacts application performance quite noticeably. I/O optimization (read-ahead, delayed write-back) and virtual memory management (application memory allocation, stack, context switching) all have a direct influence on user-space performance. And that is w

I don't use BSD, I just wanted to see if the "BSD is dying" troll still posted. It has been years, eh?

It does also seem to me that the kFreeBSD thing is meant to make certain features available to developers, and maybe to be more reliable; "faster, faster" isn't being sold as part of the bill of goods. Yet the talk returns to speed, speed, speed.

But what do I know... I work as a nurse. Although... I DO love a fast computer.

As a long-time debian user who also has to work with freebsd sometimes, I don't get it. Why use freebsd with GNU apps, when you can just run freebsd? And why freebsd and not, let's say, openbsd or netbsd?
What is the advantage in using the freebsd kernel instead of the linux kernel? I have access to every linux app when I use freebsd, and to be honest, if I knew my way around bsd as I do under debian I would probably switch. But I am missing the improvement for Debian here. Can someone please clear this up for me a bit?

Considering that what they've done (most of it, anyway) can be accomplished with a few flags in make.conf, it's not exactly impressive. Tell the system to use glibc instead of its native libc, then rebuild world (to rebuild the built-in GNU tools against glibc) and build the ports for the other GNU tools you want, and you've got what they made.

You could probably write a fairly trivial sh script to do this on a generic FBSD install.

Thank you and the other people for giving your views. "Because we can" sounds like a perfectly legitimate reason, and I've done my share of "well, because I want to see if I can" myself.
ZFS is indeed a valuable improvement for Debian, I think, and that is cool (well, I think that is cool).
Still, I am not really satisfied with the answers (sorry, please don't take this personally, because it isn't personal); I just hope the devs have some sort of grand master plan that totally makes sense.

FreeBSD has some cool features in the kernel, like ZFS support, Jails, and a 10Gb optimised network stack. I've found that the FreeBSD kernel responds better when the system is under heavy load. That's a godsend when you're trying to fix an issue. It's a high quality kernel, and is extremely stable.

Software management in Debian is much nicer than on FreeBSD. The configuration files are more consistent, and keeping your system clean and tidy is an easy process. Doing updates, and even major upgrades, is of

As someone who is just now stepping into FreeBSD and who has managed Debian/Ubuntu systems for quite some time, I do find the front-end tools for package management on Debian a bit nicer than pkg_add/pkg_delete on FreeBSD, but I know there are many other tools on FreeBSD for this purpose that I haven't found yet :)

The other day, I was installing an old FreeBSD system for compatibility with some stuff I had. I figure it's like installing an old Linux, right?

Wrong. When I install an old Linux, I can install all the old software. The *.rpm or *.deb files exist. FreeBSD doesn't work like that. It has ports. If your system is old, you're screwed. The ports system is only 100% available for the latest release. For older releases, there is a sort of weak idea that maybe it kind of sort of ought to be maintained when somebod

This is wrong. The Ports system is based on CVS, so in essence you can go back version by version, all the way to the beginning, and select the versions of the software to install at will, without having to depend on precompiled binaries.
You use the supfile to select the ports version you need.
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/cvs-tags.html [freebsd.org]
gives you all the branch tags that you can check out via historical CVS.
But Alter Relationship obviously didn't read the handbook, and started c
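For concreteness, a historical checkout might be driven by a supfile along these lines (paths and the date are placeholders; the ports collection is typically checked out by date on the HEAD tag, while src uses the RELENG_* branch tags from the handbook page above):

```
# ports-supfile (sketch): pull the ports tree as it was on a given date
*default host=cvsup.FreeBSD.org
*default base=/var/db
*default prefix=/usr
*default release=cvs
*default tag=.
*default date=2009.06.01.00.00.00
*default delete use-rel-suffix
ports-all
```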

Yes, your slackware from 1994 is full of security holes. The FAIL here is you trying to install an unsupported version of the OS and then complaining about it when it doesn't work.

Never did I suggest that I was running a random server exposed to the Internet. The box could be running some lab equipment that costs millions of dollars to replace, using a proprietary control program that won't run on a more modern OS. The box could be running a legacy app, with a modern system controlling things over a serial line. Shit happens.

The fact that an unsupported OS suffers bit rot is disgusting. The bits should just sit there forever, as they do for every Linux distribution except Gentoo.

The FreeBSD kernel can be faster than Linux, but there are a lot of poorly written apps that think they absolutely must run on Linux, or were written expecting GNUisms. Now you can do that.

FreeBSD is generally the more generic and performance-driven of the BSDs, with a larger developer base than the other BSDs. The odds of very good performance and good hardware support are in FreeBSD's favor over OpenBSD or NetBSD.

I prefer the GNU userland. As for the kernel, it's another option. Even though I prefer KDE, I wouldn't say the GNOME devs are wasting their time, or that Debian shouldn't allow users to install it. FreeBSD's kernel seems to be more performance-tuned than OpenBSD's or NetBSD's. Makes sense to me.

Why use freebsd with GNU apps, when you can just run freebsd? And why freebsd and not, let's say, openbsd or netbsd?

They actually have a NetBSD port [debian.org] as well as a Hurd port [debian.org]. They also have a nifty why NetBSD [debian.org] section. There doesn't seem to be a similar page for kFreeBSD, but I assume the reasons are similar.

When George Mallory, the guy who attempted to climb Mount Everest several times (and almost succeeded, though the most successful attempt was also fatal in the end), was asked why exactly he would try to climb it, as it was extremely dangerous and he wasn't even a scientist or a cartographer, he said one simple thing.

"Because it's there."

Sure, there probably are some practical purposes for a version of Debian running the FreeBSD kernel, but whatever those might be, I think it's not a matter of "what for" bu

Along the same line, I'm reminded of JFK's challenge to put a man on the moon before 1970. After the Sputnik shock and the subsequent regrouping, we were ready to push our limits, doing things "not because they are easy, but because they are hard." And by the time Apollo 11 returned, with everybody safe and sound, the benefits of the space program were reaching society at large.

With such a large challenge, overcoming the hurdles on the way produces benefits that often can't be quantified beforehand. And

I think that sums up for me why I'd be interested in the project. I'm not interested in running Linux with a BSD kernel for work because I like things that are reliable and well tested and documented by a large userbase. I'm not running it personally because I don't want to spend the time on something without a strong obvious benefit, but I appreciate that someone is.

The goal of doing something like this isn't to produce a product that is better than anything else, it is to see what you can do and see what

If you have a mighty herd of servers, desktops, and kiosks, all sharing various automation scripts, supporting both FreeBSD and GNU command-line apps could be a pain, due to subtle differences in command-line options, etc. It's possible to create a blizzard of "if then" to work around it, but why bother?

But I am missing the improvement for Debian here.

Overall, none really. The way ports work in Debian is: if enough people volunteer to maintain a port, and they are successful, then we have a new port. Heck, that is the way everything works in the Debian project: if something meets a certain standard of excellence, it's in, no matter if it's a package, docs, artwork, shared VCS, human-language translation, a network service, a mirror, or, in this case, a port. Debian is thankfully not a deletionist stronghold like that dumpy embarrassment known as Wikipedia.

This link provides a one-page summary of each attempted Debian port, successful and... not so successful:

This link provides a one page summary of each attempted Debian port...

Don't forget the unofficial Debian ports; one really interesting one is Debian Interix [debian-interix.net], which is a Debian userland on top of Microsoft's Unix layer for Windows. It hasn't had a lot of activity recently because it's mainly run by one guy, and Interix doesn't get a lot of bugfix love from Microsoft (surprise surprise), but compared to the abomination that is Cygwin it's already light-years ahead. All it needs is a decent community...

The FreeBSD kernel gives you a few nice things. ZFS, DTrace, and a high-performance in-kernel sound system that eliminates the need to mess about with things like PulseAudio just to get half a dozen applications going 'bing' at the same time while another one plays music (although this got a lot of improvements in the FreeBSD 8 kernel, which isn't in Debian yet, as did ZFS). It also gives you the ULE scheduler, which has had several years of testing and refinement (unlike Linux's scheduler-of-the-week) and performs very well (was outperforming Linux by a large margin on 8+ cores, now they're pretty similar). It includes Jails, which are like chroot but with a complete environment inside so you can have a different IP, different users, and so on in a jail (and you can create them with a complete clone of a skeleton system almost instantly with ZFS clones).

As to why you'd use Debian rather than FreeBSD, the big difference is glibc rather than BSD libc. When people talk about Linuxisms in code, they most often really mean GNUisms: the code depends on something weird in glibc rather than on anything specific to the kernel. It will therefore work with glibc on kFreeBSD just as it would with glibc on Linux. You may also prefer the GNU userland utilities. Some people install these on FreeBSD anyway, but with Debian they are the default ones. This means that a few other common GNUisms (e.g. assuming that /bin/sh is bash and that POSIX utilities accept GNU arguments in shell scripts) will work.

This means that it's easier to port crappy code (and there is a lot of it about) from GNU/Linux to GNU/kFreeBSD than to FreeBSD. I've written a bit about which bits are GNU and which bits are Linux [informit.com] before: most of what the user or developer interacts with is GNU.

Yes, that's true, and on the networking side netgraph is amazingly useful if you want to simulate unusual network conditions (I used it during my PhD to simulate running my code on a high-latency link with lots of jitter and 5% packet loss, for example). There's something similar for Linux, but netgraph is amazingly easy to use.

oh btw... I ran Debian as my linux of choice between 1999 and 2007 (as mail/proxy/firewalls for various business clients)... so I'm not just a linux user who has no fucking clue about BSD, or a BSD user who has no clue about linux... :)

The only reason I have RHEL at the moment is that it's an officially supported platform for our ERP solution.

You're a sysadmin, and you're running Debian almost exclusively. You have a large number of automation scripts that you use to do your job (security updates, auditing, provisioning, general maintenance, etc). All of them are expecting to run on Debian, because all you run is Debian. So you, as a sysadmin, decide you want to use ZFS somewhere.

Options 1 and 2 will most likely require you to tweak or rewrite a lot of your scripts. I shouldn't need to explain why option 3 is a bad idea. Since you're working with Debian userland, going with option 4 seems like it would be the path of least resistance. Seems pretty useful.

It would also be interesting to see benchmarks of functionality actually provided by the respective kernels. E.g. performance of fork, fork+exec, socket, accept, reads and writes on IPC, multiprocessor/multicore/hyperthreading performance, etc. Past benchmarks have shown that there can be dramatic differences between operating systems especially when large numbers of something (processes, filehandles, CPUs, etc.) are being used simultaneously.

Also, I am missing a description of exactly how they measured. Did they recompile the benchmark suite from scratch on each platform? Which compiler was used, and with which settings? Are they running the same binaries on both? How exactly did they arrive at the presented values? Is each bar the result of a single run, or did they run each benchmark multiple times and account for any variation in observed scores somehow?

As others have already mentioned, it would also be interesting to see how a regular FreeBSD system would fare.

All in all, interesting benchmarks. My conclusion: there isn't that much of a difference between the tested versions of Linux and kFreeBSD in these benchmarks. The difference between 32-bit and 64-bit is usually more pronounced. If you need the highest performance for your application, you'll still have to run your own tests.

The speed difference is a few percent. For most people, that's not noticeable. My kernel CPU usage stays well below 10% most of the time, even when the CPU is busy, so even a 50% difference in kernel performance wouldn't be particularly important. Much less important, for example, than things like ZFS, DTrace, a decent kernel sound system, and so on.

It's not noticeable for most people... but for those of us in situations where it is noticeable, that sort of difference is interesting. For example, my office has a debian box that runs at a continuous load average of around 5. Shave 10% off that and we'd notice.

I think speed is everything when you're writing an article for a benchmark site. Note that I'm not disagreeing with your ironic implication that there are other things to look at, but obviously it's a lot easier to churn out some graphs than to try to compare two OSes/suites/whatever on other important metrics, such as security or usability. Leave that to the media troll sites--there's no shortage of them.

It's good to have choices. Even if Linux is the best choice for you today, you can never know that it will be the best choice for you forever. Providing Debian GNU/kFreeBSD not only offers Debian users the option of using the FreeBSD kernel instead of Linux, but also offers FreeBSD users a way to use the GNU userland instead of FreeBSD's.

Moreover, in making different kernels and userlands work together, areas where this is problematic are identified and improved, so that oth