diegocg writes "Linus Torvalds has officially released version 2.6.32 of the Linux kernel. New features include virtualization memory de-duplication, a rewrite of the writeback code that is faster and more scalable, many important Btrfs improvements and speedups, ATI R600/R700 3D and KMS support and other graphics improvements, a CFQ low-latency mode, tracing improvements including a 'perf timechart' tool that tries to be a better bootchart, soft limits in the memory controller, support for the S+Core architecture, support for Intel Moorestown and its new firmware interface, run-time power management support, and many other improvements and new drivers. See the full changelog for more details."

I'm not perfectly happy with the term "virtualization memory de-duplication". Linux 2.6.32 introduces what is called "KSM", an acronym not to be confused with KMS (Kernel Mode Setting); it expands to "Kernel Samepage Merging" (though other expansions with a similar meaning have already emerged). It does not target virtualization or hypervisors in general (or QEMU/KVM in particular) alone. KSM can help save memory in any workload where many processes keep a great deal of identical data in memory: you mark a region of memory as (potentially) shared between processes, and redundant pages within that region are collapsed into a single copy. If one of the sharing processes later decides to modify part of that data on its own, KSM automagically branches out a distinct, exclusively modified copy for it. From what I've seen so far, all that's needed for an app to benefit from KSM is a call to madvise(2) with some special magic, and you're good to go.
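
For the curious, the "special magic" is the MADV_MERGEABLE flag. Here is a minimal, hedged sketch of what an application would do, assuming a 2.6.32 kernel built with CONFIG_KSM and ksmd switched on via /sys/kernel/mm/ksm/run (the buffer size and fill pattern are purely illustrative):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #ifndef MADV_MERGEABLE
    #define MADV_MERGEABLE 12   /* value from <asm-generic/mman-common.h>, for older libc headers */
    #endif

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;          /* 64 MiB of anonymous memory */

        /* KSM only scans page-aligned anonymous mappings, so use mmap. */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* Mark the region as mergeable: ksmd may now collapse identical
         * pages inside it into a single shared, read-only copy. */
        if (madvise(buf, len, MADV_MERGEABLE) != 0)
            perror("madvise(MADV_MERGEABLE)");

        memset(buf, 0x42, len);                 /* lots of identical pages to merge */

        /* A later write to a merged page just triggers copy-on-write,
         * giving this process its own private copy again. */

        munmap(buf, len);
        return EXIT_SUCCESS;
    }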

I really like how Linux is evolving in the 2.6 line. Now if LVM snapshot merging really makes it into 2.6.33, I'll be an even happier gnu-penguin a few months down the road!

I have a system running a 2.6.32-rc6 kernel with KSM and the latest KVM (which includes support for this, but it's turned off by default)... Because I run a number of virtual images that boot the same kernel and system libs (different apps of course), it saved me over 1 GB of memory on the host.

No, as Ubuntu releases are version-stable and backport security fixes only (Firefox being the exception to that rule). You may install the kernel from the mainline kernel PPA (http://kernel.ubuntu.com/~kernel-ppa/mainline/) though. Just fetch the .deb that fits your architecture, and install it via `sudo dpkg -i /path/to/your/downloaded/archive.deb`.

2.6.32's KMS and R600/700 improvements are expected to give a huge 3D performance boost to the open source ATI drivers - can't wait to test this!

This is indeed excellent although it needs to be backed up by support from the X driver. Currently I am running Ubuntu Karmic on a Radeon HD 3600 series card (RV635, which counts as an R600 series - quite confusing) and 3D support sucks. Both the "radeon" and "radeonhd" drivers only have basic support for these chips - desktop effects don't really work.

I was using the fglrx driver on Jaunty, which worked OK, but it seems to be getting worse with every release. In Karmic it was so broken I just gave up on it

The Fedora team has backported the KMS and R600/700 improvements to FC12, which I've been running for a few weeks now. While it's better than nothing, 3d performance still has a way to go. The performance of my old Heretic II game is still unacceptably slow.

The ATI drivers usually took the sacrifice of a goat to get them to work, but their performance was far superior. Too bad ATI won't support recent releases of Fedora.

If you're ok with not-so-open drivers, nvidia 3D cards have worked for years. I am waiting for quality open source 3D linux drivers, but until then, at least 3D can and has worked reliably on linux. (the nvidia-settings tool is reasonable enough that you generally don't need to edit config files)

"In Internet slang, a troll is someone who posts controversial, inflammatory, extraneous, or off-topic messages in an online community, such as an online discussion forum, chat room or blog, with the primary intent of provoking other users into an emotional response[1] or of otherwise disrupting normal on-topic discussion."

Karma butchery aside, everyone who modded this post troll is an idiot. Since when was expression of opinion and description of personal experience in any way inflammatory or e

Like the strip, and it raises a valid point. The bottom line is that kernel development advances more quickly than user interface and applications for the same reason that physics advanced more quickly than say... psychology. That is, because developing a faster kernel is a much easier problem than developing a fun, usable desktop environment. It's easier to write, easier to test, and easier to debug. People tend to gravitate towards problems that they think they can solve--and ignore the problems they don't understand or don't want to deal with.

Personally, I think that the best way forward for Linux on the desktop would be to take GNUstep to the next level. There's a LOT of code there already written, and with a bit more work you might be able to have source-level compatibility with Mac OS X--which would give you access to a bunch of commercial apps. And, most importantly, the ability of the OpenStep API to produce a world class desktop--best in the world in fact--is proven. After 10 years, I don't think that either KDE or GNOME have really done all that much for Linux on the desktop... it's time to try a different approach.

Of course, I'm just kibbitzing, not bringing code. So what right do I have to say anything?

Looks like you didn't get your psychology right. The problem is that a desktop environment is, in fact, much /easier/ to create than the kernel is to enhance, and that makes it extremely boring. Desktop environments are trivial, but dull, to make. They are a perfect example of a job you should be getting paid for.

Personally, I think that the best way forward for Linux on the desktop would be to take GNUstep to the next level. [...] After 10 years, I don't think that either KDE or GNOME have really done all that much for Linux on the desktop...

Purely technical solutions to marketing and promotional problems rarely work, so it's unsurprising that GNOME and KDE haven't done much for Linux on the desktop, since their marketing and promotional efforts are pretty minor. Of course, switching technical approaches to focus on GNUste

That is, because developing a faster kernel is a much easier problem than developing a fun, usable desktop environment.

I disagree, it's not an easier problem. It is, however, a much more interesting problem to solve, especially to skilled hackers.

One other aspect here is that the target audience is bigger for the kernel. Desktop uptake is still very low, but the kernel is used by any device that runs Linux, whether it's a router, a smartphone, a server, or a netbook. This has the side effect of kernel hacking being better financed than desktop development, as there are more commercial players interested specifically in the kernel, who couldn't care less about KDE or GNOME.

Desktop uptake is still very low, but the kernel is used by any device that runs Linux, whether it's a router, a smartphone, a server, or a netbook. This has the side effect of kernel hacking being better financed than desktop development, as there are more commercial players interested specifically in the kernel, who couldn't care less about KDE or GNOME.

If I hadn't already replied in this article, I probably would have modded you up. This point is hard for many to understand, but it's quite possible that the to

I disagree, it's not an easier problem. It is, however, a much more interesting problem to solve, especially to skilled hackers.

Whether or not it's an easier problem to solve overall, it's an easier problem for the kind of people who actually write code to define concretely and validate solutions to, since the skill set needed to do that for this problem is closely related to the skill set of programmers. This is important, because to successfully solve a problem (or, in the case of problems that progres c

...developing a faster kernel is a much easier problem than developing a fun, usable desktop environment.

I agree with tulcod's response -- kernel development is usually much harder than desktop development. However, there is one important difference. A faster kernel is a measurable goal. While you might be able to make a "fun, usable desktop environment" for a single person, and maybe even for a good percentage of the population, you will never, ever satisfy everybody. Half the people want more op

Well, GNOME and KDE (I prefer one of them, but it is not relevant to this post) have done lots for Linux on the desktop. I have been running it for a number of years because I find it more pleasant to use than Windows. And I am not alone.

And the millions of people using it are doing so against active attacks from a number of organizations. Mainly closed software companies, and also (mainly in the past) political organizations and governments.

What do you propose? To rewrite the kernel in Python? Sorry, but something needs to run on the hardware itself eventually; never mind that the language used has little to do with your complaints. User-mode graphics are OK for basic desktop use, but forget it if you want decent performance for 3D.

I agree with you about the monstrosities. The kernel is full of classes and virtual functions implemented by hand in C. C++ provides exactly those features without the syntactic overhead and associated bugs. Because they are a built-in feature, everyone does them the same way. Not only that, but it uses less memory (kinder on the cache) because it stores only one copy of each vtable. It also provides *exactly* the same features as C for when they are really needed.
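
For anyone who hasn't peeked inside, the "classes and virtual functions" in question are hand-rolled ops structs full of function pointers. A toy sketch of the pattern in plain C (my_ops and my_device are made-up names for illustration, not real kernel identifiers):

    #include <stdio.h>

    struct my_ops {                      /* the hand-rolled "vtable" */
        int  (*open)(void *dev);
        void (*close)(void *dev);
    };

    struct my_device {
        const struct my_ops *ops;        /* one shared ops table per device type */
        const char *name;
    };

    static int null_open(void *dev)
    {
        printf("open %s\n", ((struct my_device *)dev)->name);
        return 0;
    }

    static void null_close(void *dev)
    {
        printf("close %s\n", ((struct my_device *)dev)->name);
    }

    static const struct my_ops null_ops = {
        .open  = null_open,
        .close = null_close,
    };

    int main(void)
    {
        struct my_device d = { .ops = &null_ops, .name = "null0" };

        d.ops->open(&d);                 /* "virtual" dispatch through the ops table */
        d.ops->close(&d);
        return 0;
    }

In C++ the compiler would generate the equivalent vtable and dispatch for you, which is exactly the point being made above.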

"I'm tired of the Linux kernel; it's really not that great."Linux has always been more or less the entire FLOSS pool. Nothing in it is meant for one goal. You get all these different goals. Yes it's far from elegant in that respect.

But... It's an ecosystem where people throw in stuff. It's like evolution. You start with crap. Make countless modifications. The distros then choose what's important. Everything that sucks dies. Everything that is better than the version before sticks.

I'm tired of the Linux kernel; it's really not that great. Everyone seems obsessed with C, going as far as to spawn these kinds of monstrosities [wikipedia.org] just to force modern features into a traditional platform.

No it would not. You are looking for ABI compatible, not source-level compatible (or even API compatible). And who the hell would want to duplicate the nightmare in OS X programming where Apple couldn't even decide if they were going to go with Carbon or Cocoa for 64-bit? Then of course, they axed one in spite of what they were saying previously.

It was not a this or that decision, Carbon is a legacy API that got left at 32-bit, and Cocoa was always the one to use going forward. In the end it forced developers like Trolltech to port to Cocoa which is A Good Thing TM.

There is nothing the kernel developers can do about this. On my machine, which is a dual 2218 Opteron with dual Nvidia 8800 GTS video cards running 4 monitors, just playing Flash on a single screen will bring one CPU core to its knees, and normal-sized Flash videos will sometimes drop frames on top of that. The same video, if saved to a local file and played with xine/mplayer etc., will use 1-2% CPU power at the lowest CPU frequency (1 GHz).

It makes one wonder, with Microsoft encroaching on Adobe's turf like this, shouldn't they at least try to cover their ass and give great support on non-MS platforms? I really don't get what's going on as far as their strategy goes... Or maybe there just isn't any strategic planning.

I'm glad to see Btrfs improving so rapidly. I hope popular distros start including support for it, but more importantly, start using it as the default filesystem.

It's time for the ext-based filesystems to die. They are a technology that was obsolete a decade ago.

ReiserFS was set to kill them off, but unfortunately found another victim first... JFS and XFS only work well in certain high-end niches. But Btrfs is much better as an all-around filesystem, which is why it has a chance to finally put an end to ext

How does Btrfs compare to ZFS? I've been using ZFS-on-FUSE, and absolutely love the incredible data integrity and volume management features that it provides. The new support for deduplication will also be wonderful once implemented.

Of course, the performance and the idea of trusting my data to FUSE leave much to be desired.

(On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out. Windows users will be stuck with NTFS, Linux

It wouldn't be a derivative work to write a driver if you did so from scratch. But to do so from scratch is... shall we say a "non-trivial problem." It would be better to have a BSD-licensed filesystem that could be relicensed as appropriate--GPL for Linux, proprietary for Windows and Mac, BSD for... ahem... BSD, etc.

Never mind that, I think that the whole objection to a BSD license in this case would be that such a license could not prevent MS (or Oracle, or Apple) from "embracing and extending" the whole filesystem so that the "standard that everybody uses" is no longer free

I think he was referring to the parent. The whole point of the BSD license is to not give a rat's a** about who does what with the code.

One could easily add an extra clause to the standard BSD license that states that any derivative work must be fully compatible with the reference Btrfs implementation in order to bear the Btrfs name. Many projects include such naming clauses.

But even assuming MS would use an open filesystem, they would want to alter it to make it incompatible with everyone else's implementation... And they can't do that very well if people are able to download the source.

If the only real, accurate "specification" is the source code, then it's damn hard to create a compatible and reliable new implementation from scratch. File systems are complex, concurrent (meaning many files being accessed simultaneously) and performance-critical as well as reliability-critical. Getting it right is hard, while getting it wrong is bad, so there need to be really good reasons to even try to do it, instead of using something that al

Let's say that the copyright owners of btrfs create the windows/osx/solaris/aix drivers?

Or more realistically, if the copyright owners of btrfs grant an interested third party a special license to create a lgpl'd btrfs driver?

There can't be a "special license" to do an LGPL version. Once such a version is out, well, it's out. So they could just as well make the whole thing LGPL (which IMHO would be a good idea).

But if there are a lot of developers, getting everybody to agree (or even reaching everybody) is a lot of work. Replacing, i.e. rewriting, the code of those who don't agree might be an option, depending on how much of it there is.

But I'm pretty sure they actually thought about it and chose GPL over LGPL because they wanted to, so they're

On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out.

Well, ZFS itself has a GPL-incompatible license, but that doesn't prevent it from being usable in Linux as an independent user-space process through FUSE. The same approach could be imagined under a non-GPL-compatible OS: have the GPL implementation as a standalone userspace daemon. (Which is not a bad idea - it gives more freedom to upgrade.)

Windows users will be stuck with NTFS

No matter what. Even if some kernel guru released a tri-licensed LGPL/BSD/proprietary perfect file system, Microsoft will still be using NTFS and promising WinFS soon for wha

"For removable media, UDF could be a good candidate too. It's getting widespread availability, specially since Microsoft added support for writing on Vista and Win7."

Getting slightly off-topic, but after the FAT patent-trolling recently this interests me.

I went and dug up the sadly-neglected udftools package and installed it. Sure enough, the following command (found with a bit of Googling) seems to produce a filesystem on my SD card that can be read from and written to just fine by Linux, Mac OSX (Leopar

MS will never support a filesystem they don't control unless forced to, and certainly won't make it the default...

BSD, Solaris and OSX all support UFS, as does Linux.... Linux also supports the hfs+ filesystem currently used by OSX, not sure if bsd/solaris do but there are bsd licensed drivers for it so no reason not to.

BSD, Solaris and OSX all support UFS, as does Linux.... Linux also supports the hfs+ filesystem currently used by OSX, not sure if bsd/solaris do but there are bsd licensed drivers for it so no reason not to.

Linux only sort-of, depends on flavor. I can't reliably mount a CF card r/w from pfSense (FreeBSD) under linux.

If Btrfs's design proves to be good, there is no reason why there can't be both GPL and non-GPL implementations written for it. I think one of the requirements for a universal filesystem to be successful is to have more than one implementation.

FAT32 will have to die in the market when people get sick of files over 2 GB getting truncated. The end is near for FAT.

(On the downside, I'm peeved that Btrfs is GPL licensed, which will prevent it from becoming "the one true filesystem" from here on out. Windows users will be stuck with NTFS, Linux users will get Btrfs, Mac users will get whatever apple is secretly working on, and the BSD/Solaris camp will get to keep ZFS. None of them will be compatible, and FAT32 somehow remains the only viable option for removable media.)

You may as well stop holding your breath now. Microsoft will never support a general purpose filesys

I would prefer to use EXT2 on small SD cards, so as to support filesystem permissions... Or how about something like JFFS2 - a filesystem actually designed for flash media.

FAT32 is a pretty garbage filesystem, and it's patent-encumbered. An open filesystem without the weaknesses of FAT32, and which is supported everywhere, would be extremely useful. It won't happen though, so long as MS has sufficient market share to bury any open filesystem; they want people locked into their proprietary, patented filesystems and

JFFS2 is designed for unmanaged NAND flash, not flash cards with built-in controllers that emulate IDE drives. Therefore you can't use it on SD cards, CF cards, or anything that has a built-in memory controller.

It's time for the ext-based filesystems to die. They are a technology that was obsolete a decade ago.

ReiserFS was set to kill them off, but unfortunately found another victim first... JFS and XFS only work well in certain high-end niches.

In my experience, JFS offers most of the benefits of ReiserFS while being lighter on the CPU. So it is definitely not just for the high end. It has also turned out to be more stable than Reiser, though in recent years this has evened out.

On some of my machines there have been consistent problems with using JFS on the root partition, but this may be due to the init scripts. No data has been lost, though, and on non-root partitions JFS has consistently been rock solid for me. This includes a number of x86, Powe

If you look at the filesystem benchmarks, JFS is often not the fastest, but it scores best in terms of CPU usage. I've found that on a netbook, which has a very fast disk (i.e. flash) and not much CPU, JFS is actually the best option. YMMV of course; I came to this conclusion before ext4 was released, and I haven't tried pre-release filesystems like Btrfs.

Since EXT2/3 and recently EXT4 have some level of popularity among users, there are applications in Windows to mount them at startup if you are using a dual-boot system. There wasn't anything wrong with ReiserFS or JFS or any other filesystems that I tried - but the EXT family was the only one which could be easily used under Windows.

Unless I am out of date, there's only really ext2 in windows via ext2fsd, which will mount ext3, but sans journaling, and will only mount ext4 if you disable extents, which is one of the major features.

This 'Per-backing-device writeback' is pretty significant. I'm sure the feature film and database industries will love it especially:

The new system has much better performance in several workloads: In a benchmark with two processes doing streaming writes to a 32 GB file on 5 SATA drives pushed into an LVM stripe set, XFS was 40% faster and Btrfs 26% faster. A sample ffsb workload that does random writes to files was found to be about 8% faster on a simple SATA drive during the benchmark phase. File layout is much smoother in the vmstat stats. An SSD-based writeback test on XFS performs over 20% better as well, with the throughput being very stable around 1 GB/sec, where pdflush only manages 750 MB/sec and fluctuates wildly while doing so. Random buffered writes to many files behave a lot better as well, as do random mmap'ed writes. A streaming vs. random writer benchmark went from a few MB/s to ~120 MB/s. In short, performance improves in many important workloads.

Being somewhat ignorant of the inner workings of Xen, VMware, KVM and the like, the very idea that VMs would share memory at all seems rather risky in terms of them being sandboxed from each other. Besides a hypervisor being able to allow many VMs to run basically any OS, it would also seem that there is a security element involved, e.g. running Windows in one VM, Linux in another and NetWare in yet another: the three would not have the ability to know the others were there and would therefore be safe from being hac

Instead of storing multiple copies of the same data in memory, it stores a single read-only copy and points the others to it. If you try to write to it, it traps, creates a new read/write instance which is exclusive to you and then points you at it...

Shared libraries work in much the same way. Shared libraries have been implemented pretty securely for many years now.
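
If you want to watch the merging happen, the KSM counters are exported through sysfs. A rough sketch that just dumps them (it assumes CONFIG_KSM is enabled and that ksmd has been started by writing 1 to /sys/kernel/mm/ksm/run):

    #include <stdio.h>

    /* Print one KSM counter from sysfs, e.g. "pages_sharing". */
    static void show(const char *name)
    {
        char path[128];
        unsigned long val;
        FILE *f;

        snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
        f = fopen(path, "r");
        if (f && fscanf(f, "%lu", &val) == 1)
            printf("%-14s %lu\n", name, val);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show("pages_shared");    /* "master" pages KSM keeps around            */
        show("pages_sharing");   /* process pages currently pointing at them   */
        show("pages_unshared");  /* scanned pages that are currently unique    */
        show("full_scans");      /* how many times ksmd has scanned everything */
        return 0;
    }

The higher pages_sharing is relative to pages_shared, the more memory KSM is saving.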

I grok the lib concept: each program gets its own data segment and the code is run in a single image (if you will), and that conserves memory. Each data area is private to the program unless they explicitly share it through some IPC mechanism.

This is interesting, as it seems like a way to write malware. If I wanted to deliberately run the machine into the ground, I could just look for those data areas and keep attempting to write to them, forcing the OS to keep duplicating them over and over again. Now

This isn't possible, as the guest OS cannot get more memory allocated to itself than has been assigned to it. Let's say you have five guest machines that "share" all their memory through this new feature. If you gain access to one of the machines and completely rewrite all the memory it has inside it, you still have only doubled the memory usage, as the host OS allocates the new memory for the guest OS. Rewriting the same memory multiple times will not increase memory usage any more than that, as it will s

Disabling or enabling this should be no harder than one command line option to qemu.

Disabling it is the right thing to do if you don't need the memory saving benefit. It will waste CPU and throw your stuff out of data caches. I think most people will be more concerned with saving memory though, I know I am.

So how does one deal with no VGA console support? I know nothing about what is going on in the video card industry. Nevertheless, I find this quite interesting and would really appreciate it if you could provide some more information so a layman like me can understand what this means.

Kiss text mode goodbye, because the powers that be refuse to support it.

The ATI/Radeon driver, at least, provides its own hi-res console mode (all in the kernel, not dependent on Xorg, works with fbcon to act as a framebuffer driver), so why would you want the old VGA (low-res, low-refresh) mode anyway?

For special/embedded systems, they can always leave out the specific driver (ATI/NV/Intel) that does KMS and keep the old VGA console code; it's still in the kernel and available as an option.

You haven't lost anything, you just can't use both at the same time now, e.g. you can

I'm very interested in the new make target. Specifically, "make localmodconfig". It seems that this new target will check your current .config, and also check whatever modules are currently loaded. It then creates a new config file which only builds the modules you are currently using. This could be a great time and space savings, as opposed to building everything and the kitchen sink as distros tend to do. It gives you a fairly easy and sane way to truly tweak your kernel to fit your box, or to script it to fit a whole bunch of dissimilar boxes.

There's also a "make localyesconfig" that will be even more useful for me, particularly for removing the need for an initrd. I can now do a "make localyesconfig" and not have to try to guess what particular combination of compiled-in options is required for the computer to start up, then add in the additional things as modules.

The 2D specs were released in September 2007. The 3D specs were released in January 2009. Drivers do not write themselves immediately just because the specs are out, it still takes some time. But it's getting there, and they won't go away like the closed drivers will, the moment the manufacturer feels it's no longer profitable to maintain them.