Posted
by
samzenpus
on Sunday March 18, 2012 @11:04PM
from the check-it-out dept.

diegocg writes "Linux 3.3 has been released. The changes include the merge of kernel code from the Android project. There is also support for a new architecture (TI C6X), much improved balancing and the ability to restripe between different RAID profiles in Btrfs, and several network improvements: a virtual switch implementation (Open vSwitch) designed for virtualization scenarios, a faster and more scalable alternative to the 'bonding' driver, a configurable limit to the transmission queue of the network devices to fight bufferbloat, a network priority control group and per-cgroup TCP buffer limits. There are also many small features and new drivers and fixes. Here's the full changelog."

Arch Linux will probably support it in a few days. The packages have been marked outdated, and there is already a 3.3rc7-1 release ( https://aur.archlinux.org/packages.php?ID=50893 [archlinux.org] ) in the wild that will probably be the basis for the update to 3.3.

Also, it's quite surprising to me, since as far as I know it's necessary to use TI's compiler to generate C6X code. I found one initiative to port GCC to it, but as far as I know it never got finished. My understanding is that it's no small job to get Linux to compile on unsupported compilers, so I'm interested in the toolchain they're using.

It seems pretty clear stuff is not just being shoved in willy-nilly for Android. There have been many debates about including this piece or that piece, and about whether the implementation should be identical to the Android version. Many parts are not in yet, and some may not go in at all. The Android suspend solution may never go in; mainline may eventually get a system that serves the same purpose in a different way, and Android may eventually adopt that. LWN and the LKML posts they link to give a pretty good overview, short of reading all the code commits.

There isn't an easy answer to your question. In general, bufferbloat is when you get latency or jitter issues because some network device upstream of you has a large buffer, which it fills before it starts dropping your packets. Dropped packets are how software relying on TCP is notified of network congestion so it knows to throttle back. Other protocols may be affected differently (you might notice VoIP delay or bad lag on your Xbox).

To combat this, the idea is to limit your traffic in buffers you control, which are (typically) smaller than your ISP's and modem's buffers, so the upstream ones stay empty and the link stays highly interactive. In general, this means limiting your data rates to below your bandwidth and prioritizing packets by interactivity requirements. The Linux kernel additions in 3.3 allow you to set the buffer size smaller for the entire interface, with the goal of reducing the delay induced by the Linux router/bridge. They also add the ability to prioritize traffic and limit buffers by cgroup (which is like a process categorization or pool that has certain resource limits), but this isn't particularly helpful in your forwarding situation.
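To make that concrete, here's a rough sketch of poking at the new knobs. The interface name (eth0), the byte limit, and the cgroup name are all placeholders; the Byte Queue Limits sysfs files and the net_prio cgroup are what 3.3 introduced, but check your own paths before copying anything.

```shell
# Placeholder interface eth0; adjust to your hardware.
# Cap how many bytes the driver may queue for transmission (BQL, new in 3.3):
echo 60000 > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_max

# Shrink the qdisc transmission queue as well:
ip link set dev eth0 txqueuelen 100

# Put the current shell (and its children) in a network-priority cgroup:
mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio
mkdir /sys/fs/cgroup/net_prio/interactive
echo "eth0 5" > /sys/fs/cgroup/net_prio/interactive/net_prio.ifpriomap
echo $$ > /sys/fs/cgroup/net_prio/interactive/tasks
```

As noted above, the cgroup bits mostly matter for traffic the box originates itself, not for a pure forwarding/router setup.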

For my own QoS setup, I usually use a script similar to this HTB one [lartc.org]. It requires some tuning, and getting your queue priorities right requires some understanding of the traffic going through your network. A lot of high-level netfilter tools (Smoothwall, DD-WRT, etc.) have easier-to-use QoS tools which may better suit your purposes. Having not used one, I'm not in a position to recommend them.
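For anyone who hasn't seen one of those scripts, a stripped-down HTB sketch looks something like this. The interface, the 800kbit ceiling, and the port-22 filter are illustrative only; tune the rates to just under your real uplink speed.

```shell
# Minimal HTB shaping sketch for an uplink (eth0 and rates are placeholders).
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 800kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 200kbit ceil 800kbit prio 1
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 600kbit ceil 800kbit prio 2
# Steer interactive traffic (here: SSH) into the higher-priority class:
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dport 22 0xffff flowid 1:10
```

The key trick is that the root rate is below your actual line rate, so the queue builds here, where you control it, instead of in the modem.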

Not true. Kexec replaces the whole kernel, which means the system is reset. Ksplice applies and removes patches (security updates mainly) while the kernel is running, which means all the processes keep running as if nothing happened.
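To illustrate the kexec side of that difference, a typical invocation looks like this (kernel and initrd paths are examples; point them at real files):

```shell
# Load a new kernel into memory, reusing the current boot parameters:
kexec -l /boot/vmlinuz-3.3 --initrd=/boot/initrd-3.3 \
      --command-line="$(cat /proc/cmdline)"

# Jump into it immediately -- this resets userspace, unlike Ksplice,
# which patches the running kernel with processes left untouched.
kexec -e
```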

I was trying to remember the last time I built a Linux kernel. It would have been somewhere in the early 2.6.x series, on Debian Sid. Even in those early days I didn't really notice a difference in performance (unless I was compiling in drivers for specific hardware). The kernel image was smaller, and I knew that was better, but other than that it all ran about the same. Now I almost wonder if the performance "increase" I saw back in the 2.2 days was all in my head. I used to see some performance differences in compiled FreeBSD kernels on my really old boxes (a 300MHz K6-2 with 128MB), but I think the differences have gotten smaller and smaller since the 4.x days.

Like Wonko says, it's not a huge bit of effort to build a kernel. But I don't really see a reason to do it. I should give it a shot just for old times' sake, heh.
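For anyone tempted to try it again, the build hasn't changed much since the 2.6 days. Roughly (run inside an unpacked kernel source tree; the target names below are the standard kbuild ones):

```shell
# Start from a default config (or "make oldconfig" to carry one forward,
# or "make menuconfig" to pick drivers by hand):
make defconfig

# Build the kernel image and modules in parallel:
make -j"$(nproc)"

# Install modules, then the kernel image itself (needs root):
sudo make modules_install install
```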

The idea, I believe, is more that userspace is responsible for deciding which device(s) are used for transmission and notifying the kernel, rather than for sending the packets itself. If you've got an active/backup bonding setup, it makes sense to perform connectivity checks from userspace, which can be flexible and complex, then notify the kernel to switch away from or remove devices that have lost connectivity.

The libteam [github.com] daemon that's in development seems to have a round-robin mode planned, and I'd hope 802.3ad, but I guess we'll have to wait and see how that works out. I'm sure it'll still need kernel support for the bonding implementations; it's just the monitoring and management functions that are being extracted.
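As a taste of what that split looks like, teamd (the libteam daemon) is driven by a JSON config, with the runner and link-watcher living in userspace. This is a sketch only; libteam was still in development at the time, so treat the config keys and flags as illustrative.

```shell
# Hypothetical active/backup team over eth0 and eth1.
cat > /tmp/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "link_watch": { "name": "ethtool" },
  "ports": { "eth0": {}, "eth1": {} }
}
EOF

# -f: config file, -d: daemonize; teamd does the monitoring in userspace
# while the team kernel driver forwards the packets.
teamd -f /tmp/team0.conf -d
```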

That's rubbish. I have a triple monitor setup and KDE will happily let me make a panel 100% of any one screen, or 100% of all three (if you wanted to do that for some insane reason) at any orientation.

The TI C6X line of chips is not only VLIW; they are "DSP" chips, optimized for signal-processing operations. Also, this chip has no MMU. Nobody is going to build a tablet computer or any other general-purpose device based on one of these.

I think for the near term at least, anyone using a TI C6X will be using the TI C compiler. TI has a whole IDE, called Code Composer Studio. [ti.com]

But now we have the possibility of running Linux on the chip.

The one time I worked with a TI DSP chip, I didn't really have an operating system. Just a bootstrap loader, and then my code ran on the bare metal, along with some TI-supplied library code. Now I'm working with an Analog Devices DSP chip and it's the same situation. For my current purposes I'm not using any OS at all. But Linux support could potentially be great; for example, if you were using a platform with an Ethernet interface, you could use the Linux networking code; if you were using a platform with USB, you could use Linux USB code and file system code and so on.

Bullshit. Not only does a merge of Android kernel features not mean you can play angry birds under some regular Linux distro (you'll need, oh, Dalvik and Android's windowing system which is not X11), you can already play Angry Birds in Chrome [google.com], no Wine required. The kernel is entirely irrelevant. If you don't know what you're talking about, just shut up.

Okay, so what are the kernel changes that users need? Filesystems - we currently have a choice of ext2, ext3 and ext4 - what's inadequate about any of them that couldn't be resolved in an ext5? Any reason why re-stripable RAID can't be in that?

The general notion is that btrfs will "be" ext5 (i.e. it will be the next "updated" but still stable and mainstream FS), and that there will not be a filesystem with the actual name "ext5". For those who don't need btrfs features, ext4 will suffice. This is also the intent of Theodore Ts'o, the principal developer of ext3/4.

I believe the reason for this is that the innovation going on in filesystems is centered around some big rethinks, e.g. btrfs uses a copy-on-write B-tree (a concept introduced in 2007). It would be a pain in the neck (or impossible) to innovate like this and remain backwards compatible with ext2/3/4, thus btrfs is not called ext5.
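The restriping feature mentioned in the summary is a good example of the kind of thing ext4 can't easily grow: Btrfs 3.3 lets you convert a live filesystem between RAID profiles via balance filters. Roughly (device and mount point are placeholders, and you need a btrfs-progs new enough to know the convert filters):

```shell
# Add a second disk to an existing single-device btrfs filesystem:
btrfs device add /dev/sdb /mnt

# Restripe: convert data (-d) and metadata (-m) chunks to RAID1,
# rewriting them across both devices while the FS stays mounted.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
```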

One thing they could do as far as the Linux kernel goes is work on drivers - particularly Wi-Fi drivers - and do what's possible to ensure that 3.3, or 3.4, supports just about every peripheral device out there. Aside from that, as far as I can tell, the Linux kernel is pretty much complete.

I suspect that this badly-worded Slashdot summary really means that the kernel can now expose the C6x as a device and manage userspace access to it when running on something like an OMAP SoC. Running Linux on the C6x itself would be pointless in the extreme.

So... which part of this release actually provides a compelling reason to use Linux over any other OS?
You've been itching for something to run on that TI C6X system you built?

The fanboyism here makes me gag. Apple has nothing on you guys.

My post had nothing to do with fanboyism. I currently use Windows, Mac, and 5 different flavors of Linux, not counting my TV, car stereo, smartphone, and at least 3 other devices that run modified Linux code.

I have been using Linux off and on for years. But it has only been recently that I have really been modifying it and making it do what I want and how I want. Currently I have my MBP, which I need to have Windows installed on, due either to software differences between the Windows and Mac versions (i.e. QuickBooks) or because I need access to some Windows items while on my Mac.

I use a Windows machine for some games only because the gaming industry seems to feel that there is only one OS worth the time. Even Mac is lacking on a lot of the games I play. And I am not a big fan of running some games through CrossOver for Linux.

When I use Linux, it is for everything else, from my firewall to development. But I am dependent on my phone and tablet, which run Android. Now, with the Android kernel merge, I may have a greater use for Linux than before.

And as I stated, I just hope I don't have to start back at the basics. I am no guru by far, but I really don't want to have to trash what I have learned in the last year or so and start again. I'll have to wait and see what the distros look like.

OK, I admit to careless reactionary phrasing, but still, the point stands. The phrasing of the original post implied that KDE 'lacks the simplest functions', which is untrue, hence the rubbish comment. The feature is there, and if it doesn't work for them, that's a bug, not the feature being absent.

I do think it's a problem if one tries to add a new empty panel after deleting the default. KDE pulled out the resize button after the 3.x release, and the panel won't occupy the full width until you add enough widgets to it, which I suppose is indeed very annoying. This isn't a bug; it appears they want it to work that way. There's more to the list of KDE stupidities: you cannot drag a widget in the panel to change its position, you cannot add a desktop icon for your custom binary or script, etc.
Compared to GNOME 3 insanity though, KDE is still a very usable desktop.

"toll quality" basically means that whatever path your voice takes thru operator's network, QoS, CoS or whatever the service management method applied to your frames, package, signal etc. the end result would be identical to the performance of a legacy twisted pair cable that phone systems used to be built on. A toll quality connection as a general rule:

1) Would not require any specific equipment other than a native phone device (assuming that PRI ports are native to the phone system, which is a subject of never-ending discussion amongst some old farts like me around here...)

2) Would not cause any digital disturbance to voice quality like packet loss, jitter etc.

You just need to click on the cashew (or right-click and choose Panel Settings; if there is no cashew, unlock the panel first), then drag the stoppers to change the size. As for dragging a widget, you can do that from the same view, and you can add an application-launcher widget and point it to your custom binary or script. All of this has been there since the first KDE 4.x builds I used, even the really buggy early ones.