
eldavojohn writes "Linus has announced the release of 2.6.35 for people to download and test after he found not a lot of changes between this week and last. The big features to look out for include: 'Transparent spreading of incoming network traffic load across CPUs, Btrfs improvements, KDB kernel debugger frontend, Memory compaction and Support for multiple multicast route tables' as well as various performance and graphics improvements. Linus also praised the community saying that 'regression changes only' after rc1 improved this time around and gave numbers to back it up saying 'in the 2.6.34 release, there were 3800 commits after -rc1, but in the current 35 release cycle we had less than 2000.' Good to see the process is becoming more refined and controlled after the first release candidate — hopefully there's no impending burnout."

I understand why, but there are a ton of people out there who think OSS is OSS. You wonder why corporations are wary of OSS? It's because of this. I really hope this project [github.com] goes somewhere, or that Debian's kFreeBSD [debian.org] project works as well as I'm hoping.

Reminds me of this joke: I was walking across a bridge one day, and I saw a software developer standing on the edge, about to jump off. I immediately ran over and said "Stop! Don't do it!"

A ton of people out there who both think that all the Free Software/Open Source licenses are the same and are waiting impatiently for ZFS in Linux? Somehow I doubt it. And corporations are wary of OSS because the Linux developers aren't breaking Sun's (/Oracle's) purposefully GPL-incompatible license? Actually, did you have a point? I think I missed it. :-)

I'm increasingly wary of BtrFS, due to claims that there are fundamental design flaws. This does not mean I believe such claims (although I observe LWN's top file-system-contributing journalist is quitting her job, her entire career path and her State), but it does mean that I want to see someone do a proper systematic analysis of the methods used and algorithms chosen. I'll probably use it anyway. Radical filesystem architecture is hard, and better options are almost always likely to exist - the question I have is how much impact this actually has on the performance and safety of BtrFS. A little? A lot? About average for filesystems?

Well, I dunno -- "reliable in general use" might have some conditions that make it completely unreliable or otherwise unacceptable in certain use-cases. E.g., many modern filesystems cannot detect latent sector errors that are not otherwise detected by the hardware itself. Depending on how your filesystem is implemented, a latent sector error could wipe out your critical disk structures (a la ReiserFS) or do nothing at all (a la ZFS). The only meaningful way to interpret a new filesystem's capabilities is

To an extent, yes, which is why I said I'll probably end up using it anyway. It is better in many ways than existing alternatives (although XFS and JFS are still filesystems I like for specialist work). However, if you are planning any major migrations, it can't merely be better than existing alternatives because you have to factor in the effort and the risk of error in the migration. Also, never, ever use version 1.0 of something. Anyways, the only way to factor in the overheads in any meaningful way is to

I'm increasingly wary of BtrFS, due to claims that there are fundamental design flaws.

The only 'claims of fundamental design flaws' I'm aware of are that it has bad performance in some pathological cases. Which is true of every single filesystem ever produced and likely true of every filesystem you'll ever use in the future.

I'm certainly not aware of it having any flaws that ZFS doesn't; my main concern is that Oracle won't want to fund any more BTRFS development now they also own ZFS.

From reading the mailing list thread, my impression was that it was a storm in a teacup, and the real problem was just a simple bug rather than a fundamental misdesign. Or if you want to be slightly less charitable, a case of "concern trolling".

As with *any* experimental subsystem in the kernel, use at your own risk. If you really need the features afforded by btrfs over ext4 or reiserfs at this point in the game, then do what I do: use LVM. Reiser over a disk-spanning LVM gives you most of the advantages that btrfs has over ext4, without having the hassle of being experimental. I still plan on switching to btrfs once it's out of experimental, but for now I'm able to do everything that has me wanting btrfs for my fileserver with those two.

I was walking across a bridge one day, and I saw a man standing on the edge, about to jump off. So I ran over and said "Stop! Don't do it!" "Why shouldn't I?" he said. I said, "Well, there's so much to live for!" He said, "Like what?" I said, "Well... are you religious or atheist?" He said, "Religious." I said, "Me too! Are you Christian or Buddhist?" He said, "Christian." I said, "Me too! Are you Catholic or Protestant?" He said, "Protestant." I said, "Me too! Are you Episcopalian or Baptist?" He said, "Baptist!" I said, "Wow! Me too! Are you Baptist Church of God or Baptist Church of the Lord?" He said, "Baptist Church of God!" I said, "Me too! Are you original Baptist Church of God, or are you reformed Baptist Church of God?" He said, "Reformed Baptist Church of God!" I said, "Me too! Are you reformed Baptist Church of God, reformation of 1879, or reformed Baptist Church of God, reformation of 1915?" He said, "Reformed Baptist Church of God, reformation of 1915!" I said, "Die, heretic scum," and pushed him off.

I understand why, but there are a ton of people out there who think OSS is OSS. You wonder why corporations are wary of OSS? It's because of this.

I would wonder if they were, but they're clearly not. Corporations love Linux. It's less expensive and commodity. It frees them from expensive proprietary hardware vendors (Sun SPARC, HP Itanium, etc.) and lets them find the right x86/x86-64 servers for them. They can use free versions (e.g., CentOS) in some environments and paid enterprise versions (e.g., RHEL) in others. Most of the big enterprise packages (Oracle, DB2, WebSphere, JBoss, SAP, etc.) are available on Linux. The enterprise data center is a war between Linux and Windows (with the mainframe, AS/400, and other legacy platforms hanging on, though they are rarely growing).

The "SCO scare" is a thing of the past. I can tell you from personal experience after many years in the infrastructure world that the license headaches with Linux distros are nothing compared to the eternal headaches that I've had with companies like Veritas/Symantec, Oracle, etc.

Most of the decision-makers, technical architects, etc. in corporations do not operate at the "why ZFS is better" level. Does $LINUX_DISTRO support RAID, SAN multipath, and other common enterprise storage needs? Great. That's all we need. Frankly, while ZFS is great, it's not enough of a game changer to make someone buy Solaris over Linux.

It got me to switch. And every single person that I've talked to about it has, too.

I'm not talking about the companies that just USE Linux internally. I'm talking about the companies that sell something Linux-based.

TiVo, for example (the reason for GPLv3). There are companies that may want to pick up and run with something Linux-based but are afraid of what might happen if they build a product around it. So they scrap the whole idea.

Versus a BSD license, which Apple created OS X around; companies have built specialty Fre

For a kiosk turnkey product, an extensible OS that can work for the server or desktop is probably not that useful. You're never going to want to run commodity software on it, you're never going to want to extend it, and you're never going to want to make use of the flexibility it has.

Rather, you'd be much more interested in a real-time OS that is compact (so that most of the memory can be used for double-buffering the video and buffering the network traffic and disk activity) and supports only the absolute

Meanwhile my TV, webcam and Blu-Ray players all appear to run Linux, as did the media players and cameras I used to work on. There are a ton of embedded Linux systems in all kinds of markets even when a real-time OS might make more sense.

If the OS is truly a commodity, then it usually makes sense to go with the one that is the cheapest to acquire and also to maintain over the lifetime of the product. As both BSD and Linux cost the same to acquire, the real question becomes:

Is it better to take advantage of the improvements others make to the OS, knowing that any improvements you make have to be given up, or to keep your improvements secret, knowing that your competitors can keep their improvements secret too?

If you switched for ZFS without carefully considering whether it would meaningfully help for your particular use cases, you probably spent a lot of money and effort for no gain.

For most people, ZFS is a cpu-sink that offers slightly more convenient volume management, at a high price for hardware overhead and latency.

But you have to use it on Solaris, because their UFS infrastructure is so out of date that you can't support a reasonable number of spindles (without investing even MORE money in moving that problem off the box entirely).

It has some neat whizzy bits, but those whizzy bits are not at all free, and things most people seem to not need.

Don't be so sure of that. FUD is alive and well. Last summer I interviewed with a bank about a three-month contract to move some data. When I asked them about the requirements for the platform/environment, I was told, flatly, that I could use anything that I wanted, as long as it wasn't open source. Open source means that anyone can see the flaws in the software and exploit them. I had two choices: I could keep my mouth shut and take the contract, or I could speak the truth and blow my chances. I spoke up.

Were I in their shoes, I would realise that commercial software comes with no more of a warranty than open source. Despite all the money they extract from you, commercial vendors provide you no warranty whatsoever and you have to agree to these terms before they will let you use the software.

You can also buy commercially supported versions of open source; there are a huge number of such products available now.

If you want a system so critical that it flies a plane, then you typically write it in house (there aren't that many places that actually build planes). You test it extremely thoroughly (far more so than any commercial vendor does), and then you have multiple redundant backup systems too.

The reality is that many decision makers in business and government simply don't understand very much when it comes to technology. They buy into propaganda that open source is bad, but will happily buy things like Cisco ASA firewalls without realising they run Linux.

I even mentioned that you can buy commercially supported versions of open source software; I suggest you read the post again. Hint: it's the second paragraph.

The difference is that open source gives you the choice: you can get the software free and self-support, you can pay for support, you can even pay for the software if you choose, and there are often multiple sources you can buy support from. Proprietary software takes away these choices; you have to pay for

I am also a professional software engineer/network engineer by trade since 1995

A young whippersnapper, then. When you have a bit more experience of the real world you might start to understand just how many critical systems already run on Linux.

BTW, I can make a safe bet that anyone writing avionics software is not running it on Windows either. Back when I was writing avionics software it all ran on custom hardware with no OS worth speaking of; and having a 'free' OS wasn't much of a benefit when our hardware was selling for the price of an expensive sports car.

It's true, though: we're a full-on Microsoft shop, and even though we use some poor products that are shown to be failing us, when I migrated some things to OSS (Visual SourceSafe to Subversion comes immediately to mind), my bosses all insisted we evaluate all the commercial offerings first. They equate 'free' with 'crap'.

As it is, even though we migrated successfully and used it for nearly 2 years, they still went and bought the worst (IMHO) SCM I've ever had the misfortune to use (Serena Dimensions). It cost

And it's quite likely that their explicit intention was to be Linux-incompatible as well. Had Linux been licensed under some other terms, the license for ZFS would likely have been chosen to be incompatible with those instead. For instance, if Linux were BSD-licensed, Sun could have just released ZFS under the GPL: while in theory it's perfectly compatible, in practice a BSD project will refuse GPL patches.

Which really makes sense, as Linux has been replacing Solaris in lots of places, and I imagine Sun didn't want to help them with that.

I've never read anything from Sun saying it, but that hardly matters. It was well known Sun had a dislike for Linux. Sun was losing more licenses to Linux than to any other competing OS. Every market share study indicates this, and has for a very long time. Sun was also losing mind share to Linux, which is why they created their Solaris/GNU distros and added Linux compatibility. In doing so they stayed relevant and "cool". All of a sudden Sun announces/releases ZFS, whereby its license is completely incompatibl

You mention the zfs github and kFreeBSD, but are you aware of Nexenta?

Honestly, I'm not sure why it's not as well acknowledged as kFreeBSD. The myopia involved there seems to be similar to what you make light of with your joke.

In the event that you really haven't heard of it, Nexenta is basically OpenSolaris kernel with Ubuntu userland.

You get apt. No, it's not Debian, but if we're looking at ZFS implementations, it's a far cry better than the alternatives (FreeBSD = buggy crap and you've got to use ports; OpenSolaris = you've got to use Solaris and shoehorn usable modern tools in).

I'm not sure why we need to stick with Linux, per se, and what's wrong with OpenSolaris kernel/CDDL. Serious question here: is there something wrong I'm missing?

From where I'm sitting - user and admin of Linux for close to a decade, now - there's really not much of an advantage to using (or developing for) Linux over, say, FreeBSD other than the community of developers (including the install base, financial backing, etc.) and what that provides for you. I'm not sure if a BSD compatible license could ever get the financial support (from the likes of RedHat, IBM, Intel, etc.) Linux does because it could be 'turned against them', but for most people (administrators, developers, etc.) there's no inherent reason, one way or the other.

Automation helps to some extent; I very much doubt a project like Debian could support the number of ports it does without the array of software and infrastructure they have for autobuilding.

But that alone is not sufficient: a group of porters who care about bringing an architecture up to the coverage level required to become a release architecture, and who have the skills to get it there, is also needed.

Once a port reaches the status of release architecture things get easier for the porters because one of th

The lack of adoption of OpenSolaris compared to Linux has to do with real-world considerations. The summary of the reasons is that Sun waited too long to roll it out; they should have done it in the late '90s. That would have solved the issues: it doesn't support the amount of hardware Linux does, doesn't scale from embedded devices to supercomputers, doesn't have tens of thousands of packages made for it, and is much harder to admin (speaking as a certified Solaris engineer).

Perhaps the people who fear Linus is going to burn out again spent too many years watching Seinfeld and deeply internalized "no hugging, no learning". Linus != George. OTOH, given his acidic tongue, he's probably not well suited to a career in stand up comedy. Anyone else think that Larry McVoy would make a good Kramer? </rimshot>

You have to admit, it's somewhat disconcerting that there's nobody in his coattails to take over.

Unlike Microsoft or some other big software company/project, Linux really has one controlling hand. If Linus goes kaput tomorrow, face in his Wheaties, it would take a non-trivial period of time to get someone up to speed and filling his shoes.

Sure, there are other "non-current" Linux developers/maintainers, and there are many others who have been doing the job in the past. But that's an entirely different development model than the 2.6 tree has been, and there's nobody who "fills in" for Torvalds when he wants to take a break. The man is 40; he's going to have to slow down sooner rather than later. He's certainly not keeping up his percentage of code commits, never mind the volume of code (though quite possibly the quality). He's got 3 daughters and a wife; the man has to sleep at SOME point.

That said, I'm really pleased to see the decrease in regressions. I was starting to think that it was all open source OSes that were going down the shitter of late, but I am pleased Linux is still improving (though I do still consider the removal of the anticipatory scheduler a regression).

It just makes me uneasy that anything as big as Linux has such a small point of failure. It's possible I'm overlooking the importance of the distro kernel teams and other people who contribute, or overlooking something else, but as it stands now, his continued pivotal position makes me uneasy.

The lack of a unified "stable" kernel for distros to pull from (given 2.6's continued march) and at the same time the lack of a "real" development/next-generation kernel makes me likewise uneasy.

You have to admit, it's somewhat disconcerting that there's nobody in his coattails to take over.

There are at least a couple of good developers who could easily take over, starting with the maintainer of the linux-next tree; and if there were a huge disagreement, then I'm sure the Linux Foundation could step in if need be.

The lack of a unified "stable" kernel for distros to pull from (given 2.6's continued march) and at the same time the lack of a "real" development/next-generation kernel makes me likewise uneasy.

You would only say that if you haven't been using Linux long enough to remember when it was exactly the way you wish it were. Back in the 2.4.x / 2.5.x days, people got so tired of features taking so long to be ready that they started backporting the changes from 2.5.x to 2.4.x, essentially making both branches unstable. For all of the whining, kernel releases are a lot less buggy now, with fewer distro deviations from mainline. And as a bonus, features actually get better testing, because fewer changes need to be tested at a time.

I remember the 2.0 kernel days just fine. I don't remember those "features" getting backported and making things unstable.

I do remember there being a lot of custom patchlevel kernels, though. Lots of people did it, myself included. It was quite straightforward, because the base kernel didn't typically have configuration failures like it does now, and the patches were relatively simple. These days, it's a bit of a pain in the ass to build a kernel due to the odds and sods not building (a rare but workable scena

The fix for World of Warcraft under WINE made it into 2.6.35, though it is not mentioned in the changelist above. WoW 3.3.5 crashed under recent Linux kernels because it apparently made use of the "icebp" instruction, whatever that is; the kernel stopped sending SIGTRAP for icebp instructions in an earlier 2.6 build for whatever reason.

Since there seems to be no place on the internet to post feature requests for Linux, here are four points from my list:

1. User-space scheduling. It would be nice if a process could have better control on the priority of each of its threads. For example, on a web service where multiple users are active, it is often necessary to give each user his/her share of the cpu. Right now this is rather difficult to do in a fair way, since multiple threads may belong to the same user.

2. Recursive strace: Currently it is not possible to run "strace" on a process which is already being straced. So, for example, "strace -f strace -f ls" will not work (you'll get an "operation not permitted" inside the first strace). This makes it impossible for programs to use strace (or the related ptrace system call), since other programs which might also use strace may depend on them.

3. "Nice" for bandwidth. It would be great if there was a command similar to "nice", which acts not on cpu-cycles but instead on bandwidth.

4. "Select" or "poll" with access to inter-thread synchronization structures. Select and poll are system calls which act mainly on file-descriptors. However, sometimes you'd like to wait also on a mutex or semaphore. Some support for this would be great.

This list is just from the top of my head. I could probably come up with a lot more.

I think what Linux needs is a way to internally associate the process id with each data packet. Then, when a packet is routed through some bandwidth-limited channel (like your network card), the kernel could assign a priority to the packet based on the pid. But this is just an idea.

The iptables solution is just too simple, I think. Also, you have to invoke it as root.

Regarding "nice" for bandwidth, there is wondershaper, but it has very coarse controls.

There is a very nice program called NetLimiter for Win32. I would love to see a clone of it for Linux. Of course, all it really would be is some iptables magic, and it would unfortunately have to run with root privileges. Pyshaper is abandoned but seemed to be on the right track.

I would love to see user/group permissions introduced into the kernel's packet filtering to remove the need for root access in some cases. A user *sh

It would be way better to do this as a libc pre-load shim: just overload "open" to get the path names/IP addresses of what file/connection is associated with each fd, then override read and write to track I/O rates, and block when exceeded. Could also do it based on IOPS... (counting each read/write as an op, and giving an op budget...)
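The preload-shim idea might look something like this hypothetical write-counting shim (names are made up; a real version would also wrap pwrite, writev, send, etc., and add locking for thread safety):

```c
/* shim.c — a hypothetical LD_PRELOAD shim that counts bytes written.
 * Build: gcc -shared -fPIC shim.c -o shim.so -ldl
 * Use:   LD_PRELOAD=./shim.so some_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <unistd.h>

size_t bytes_written = 0;   /* running total; a rate limiter would check this */

ssize_t write(int fd, const void *buf, size_t count)
{
    /* Look up the real libc write() the first time through. */
    static ssize_t (*real_write)(int, const void *, size_t);
    if (!real_write)
        real_write = (ssize_t (*)(int, const void *, size_t))
                     dlsym(RTLD_NEXT, "write");

    ssize_t n = real_write(fd, buf, count);
    if (n > 0)
        bytes_written += (size_t)n;   /* a limiter could sleep here instead */
    return n;
}
```

As noted elsewhere in the thread, statically linked binaries (and programs that make raw syscalls) bypass the shim entirely, since there is no dynamic symbol to interpose on.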

Nice is done in libc -- at least the interface for handing the nice setting to the scheduler is. It is just an added setting in the kernel's process scheduler. You'll find that a lot of people argue a lot about how to do scheduling right. The nice setting is just one constant that is inserted into an algorithm that has to exist anyway.

For I/O, traditionally there has never really been a "scheduler". The whole idea was to use the device to its maximum capability all the time. Devices a

A big problem with doing it per file is that many processes run from the same file. It's not at all uncommon to have multiple processes accessing the network that are the same binary file. Think about interpreters or virtual machines like python or java.

You don't want to set the priority on the network interface, any more than you want to set priority on access to /dev/sda. In both cases, it is meaningless. For network, you need to have the choice to do it by the remote IP & port combo, or the local one. It does not matter whether it is interpreted or not; they will all get to libc eventually.

For example, I would make DNS requests very high priority. I would have something like *.dns, assigned very high priority, and ptp stuff relatively low, which

1. User-space scheduling. It would be nice if a process could have better control on the priority of each of its threads. For example, on a web service where multiple users are active, it is often necessary to give each user his/her share of the cpu. Right now this is rather difficult to do in a fair way, since multiple threads may belong to the same user.

Isn't this what pthreads condition variables are for? Or can you explain what you want in more detail?

The main point is that it is not currently easy to wait for both a file and a mutex, for example. A workaround could be waiting on a pipe, and writing to the pipe in a separate thread, but of course this may become hairy.

From a user-space programmer's perspective, it would be nice to have the synchronization objects and file descriptors in the same id-space, but perhaps a simpler solution is possible.

Btw, thanks for the pointer to cgroups. I'm not sure if it entirely matches my needs, but I'll look i

The particular problem I have is that I want to make an installer which tracks all the write operations of its child processes. This can be easily done using strace. However, now imagine somebody else also writing an installer (also using strace), and using my installer as one of its child processes. On Linux, this will fail.

Also imagine somebody else using strace to inspect what my installer is doing... this will also fail.

If all you are doing is using strace to figure out where the program is writing, then a libc shim is probably a better idea than strace. You pre-load the shim, it tells you where the child process is writing, with no extra process, and full "recursion" support.
http://www.jayconrod.com/cgi/view_post.py?23 [jayconrod.com]
No, it doesn't work with statically built binaries... I think static binaries are stupid and should die. But as someone else mentioned, the only way around such things is kernel support for the function

Well, replacing libc with a shim would not be very robust, since libc is entirely optional, and, like you said, static binaries make this approach fail.

(Of course, I could edit those static ELF binaries, and replace the kernel calls, etc. But this seems to be totally not the right way (tm) to do it.)

Besides, having the ability to intervene at the interface between a process and the kernel, and hence be able to totally control the I/O behavior of a process, is something that, in my opinion, should be pr

Besides, having the ability to intervene at the interface between a process and the kernel, and hence be able to totally control the I/O behavior of a process, is something that, in my opinion, should be present in a modern OS. Think of all the possibilities for debugging, sandboxing, install-software, automatic dependency checking by build-tools, etc.

None of the functions you describe need to be in the kernel to be effective. For debugging, shims are fine; for sandboxing, you can use any of a variety of VMs; for installing software, there are tools like fakeroot (again using a libc shim), used extensively in Debian packaging.

I think that, a decade ago, static binaries made some sense. But they don't anymore for anything but a few corner cases. In general, the arguments against static linking (binary size, memory footprint, security updates, and the cur

"Furthermore, after reviewing this GPL our lawyers advised us that any products compiled with GPL'ed tools - such as gcc - would also have to have its source code released. This was simply unacceptable."

This sounds like FUD to me. I do not think the intent of your post is clean. Or maybe you have no clue and should consider getting better lawyers next time... then, if the GPL still does not work for you, use some BSD flavor as the OS for your next project.

As a consultant for several large companies, I'd always done my work on
Windows. Recently however, a top online investment firm asked us to do
some work using Linux.

... then...

although it was tough to do, there really was no
option: We had to rewrite the code, from scratch, for Windows 2000.

Hey, David, is that you? Some time back, I received an email from you (reproduced below): is the offer still available?

Dear Sir/M,
I am Mr.David Mark. an Auditor of a BANK OF THE NORTH INTERNATIONAL,ABUJA (FCT).
I have the courage to Crave indulgence for this important business believing that
you will never let me down either now or in the future.
Some years ago, an American Mining consultant/ contractor with the
Nigeria National Petroleum Corporation, made a numbered time (fixed) deposit
for twelve calendar months, valued $12M.USD (TWELVE MILLION US DOLLARS) in an account.

Believability: 1/10. I would have given you a zero, except I notice one comment here that seems to think it's a legitimate point.

Humour: 6/10. The punch line was honestly not expected, and elicited a smile from me. But it would need a bit more work to truly be hilarious.

Anger response: 4/10. A fairly good natured troll. It does little to incite anger, but I think that if you worked on it a bit more and made the story more plausible, you could be a real contender, inciting hundreds of flames.

Overall: 5/10. A nice effort, but a little too obvious, and the punchline just wasn't enough, given the length of the post. The punchline could have been delivered in one simple paragraph.