Posted by Soulskill on Tuesday May 13, 2014 @05:55PM
from the must-have-been-union dept.

rastos1 sends in a report about a significant bug fix for the Linux kernel (CVE-2014-0196).
"The memory-corruption vulnerability, which was introduced in version 2.6.31-rc3, released no later than 2009, allows unprivileged users to crash or execute malicious code on vulnerable systems, according to the notes accompanying proof-of-concept code available here. The flaw resides in the n_tty_write function controlling the Linux pseudo tty device. 'This is the first serious privilege escalation vulnerability since the perf_events issue (CVE-2013-2049) in April 2013 that is potentially reliably exploitable, is not architecture or configuration dependent, and affects a wide range of Linux kernels (since 2.6.31),' Dan Rosenberg, a senior security researcher at Azimuth Security, told Ars in an e-mail. 'A bug this serious only comes out once every couple years.' ... While the vulnerability can be exploited only by someone with an existing account, the requirement may not be hard to satisfy in hosting facilities that provide shared servers, Rosenberg said."

If the kernel developers allowed bugs to be clearly marked as security vulnerabilities, then it would be trivial to use the Git commit history to identify the individuals who are merging these exploits into the kernel.

The GIT entry for the bug was entered Dec 3, 2013. So that means, at a minimum, the bug was known and left unfixed for 5 months. That's a bit excessive for a "bug this serious only comes out once every couple years" kind of bug. I'll agree that 5 months is a lot shorter than 5 years, but it's still far too long.

Taking off-topic potshots against FOSS in response to a misinformed post which incorrectly describes the date of the bug report, in response to a post which inaccurately maligns the attitude of kernel developers towards security bugs?

For fuck's sake, we're three levels deep in FUD here. Someone throw me a rope so I can pull myself out of this quagmire of bullshit.

Fact: FOSS proponents have very frequently claimed in the past that OSS was free of security issues because of all the code review that was happening.
Fact: The code shipped 5 years ago, according to the story.
Fact: The story is about a security issue that shipped.

Therefore, pointing out that the proponents of FOSS are full of shit because a bug shipped is not off-topic for a story about a bug shipping in open-source software.

I was simply posting that the argument about when the bug was first reported is i

Personally, I'd say that the only advantage frequently claimed for FOSS in the past was that it was, then, so niche that no one would find it worthwhile to try to exploit. Times have changed now. For example: Firefox, Chromium, and, I'd say, even desktop Linux aren't safe anymore by that criterion (server Linux never was safe, since servers are such juicy targets).

Sorry, but your personal view isn't what the proponents actually pointed to as the advantage. The proponents claimed exactly what I wrote they claimed. Searching through Slashdot posts from the past 12 months finds such claims.

Personally, I agree with your position on why most FOSS code has been exploited; that just doesn't fit what many proponents were claiming.

Bugs can be ancient. Anyone remember that Windows VDM bug that affected every version of Windows based on NT? How is this bug different?

Bugs have to be found; you can't expect every bug to just be easy to find. That's how things like Heartbleed and the VDM bug go undiscovered for years. I'm sure there are bugs almost as old as Linux itself in the kernel, and I'm almost certain there are bugs in Windows affecting everything from 3.1 up.

Well, in a normal situation I'd say yes, but Linux's response to all bugs is similar: patch it as soon as there's a good patch. Now, if it were a certain company in Redmond that scales its response based on customer "value", yeah, security bugs had best get fast-tracked. I honestly prefer the "fix all bugs and don't embargo fixes" response that Linux takes to the "when we discover bugs (Heartbleed), we'll let the Cool Kids in on it first and then release it weeks later to the average user" response that Go

Don't see where your flamebait actually changes anything. It certainly provides nothing new, because you can say "they're rude" all day; the question is whether the bug in question is fixed, and when. Yes, the chances are very good that a bug submitter is going to get a "patch or GTFO" response. In the overall scheme of things, I'd say that's as good as can be expected, given that many other groups respond with legal threats.

Excuse me, but what flamebait? I did not insult you or your argument, instead I made a valid counter argument.

Oh, and my point wasn't that the maintainers are rude. My point is that the security industry keeps insisting that the Linux team practice responsible disclosure, and they keep arguing there is no need or benefit.

A bug that allows remote code execution or even a DoS is a much, much bigger issue than fixing the user experience or minor stability issues.

I agree security vulnerabilities are worse than simple bugs. However, a DoS by itself is not. Our entire network infrastructure is already vulnerable to DoS, so vulnerabilities of this sort are just par for the course, really.

Might want to check the GIT report again. To quote: "... which allows local users to cause a denial of service (memory corruption and system crash) or gain privileges by triggering a race condition involving read and write operations with long strings. ..."

Notice that the bug permitted an easy denial of service attack, and, with more effort, privilege escalation.
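For the curious, the access pattern the CVE description names — long writes through a pty racing against concurrent reads — can be sketched harmlessly in userspace. This is only an illustration of the plumbing on a patched kernel, not the exploit; the payload size and contents here are arbitrary choices:

```python
import os
import pty  # Unix-only: gives us a pseudo-terminal master/slave pair
import threading

master_fd, slave_fd = pty.openpty()

# The CVE's trigger involved long writes through the tty layer racing
# against other accesses; here one thread pushes a long string through
# the slave end while the main thread concurrently drains the master.
payload = b"A" * 4096  # arbitrary, roughly one tty buffer's worth

def writer():
    os.write(slave_fd, payload)

t = threading.Thread(target=writer)
t.start()

received = bytearray()
while len(received) < len(payload):
    received += os.read(master_fd, 1024)
t.join()

# Plain bytes pass through tty output processing unchanged.
assert bytes(received) == payload
```

On a vulnerable kernel, the real PoC hammered this kind of path from multiple threads until the n_tty buffer handling raced with itself; on a patched kernel the sketch above just moves bytes.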

I'm not referring to any specific DoS, just that DoS as a general class isn't necessarily a security vulnerability; i.e., specific DoS vulnerabilities might also be security vulnerabilities, but being a DoS vulnerability does not automatically make something a security vulnerability.

You're missing the point: our network infrastructure is already DoS vulnerable. A remote DoS is just another drop in a full pool. To suggest that a DoS vulnerability "compromises" the remote system doesn't seem justified.

Look at the GIT entry. It was entered Dec 3, 2013. A few months earlier than "end of last month". Also the disclaimer on the GIT entry means that the bug could have been discovered even earlier, so the Dec 3 date is merely a "no later than" boundary on the discovery date.

And by "GIT entry" you mean "CVE entry", which clearly says "Disclaimer: The entry creation date may reflect when the CVE-ID was allocated or reserved, and does not necessarily indicate when this vulnerability was discovered, shared with the affected vendor, publicly disclosed, or updated in CVE."

Look at the CVE entry. None of the linked documents are earlier than last month.

Your argument does not convince. Why reserve a CVE ID and sit on it for six months? Maybe there is a reason; I'm not familiar with how CVE internals work. But your argument is pointless without more support. It sounds like you don't understand the system you're defending, which seems rather silly.

but this is open source, and open-source proponents have always claimed that the advantage of open source is that the bugs are discovered by the thousands of pairs of eyes before they ship. So either this bug was discovered five years ago but not fixed, or there is no inherent security advantage to open source. Which falsehood have you been telling all these years, boys?

Ahoy, mod parent up. That's an important distinction. In addition to the claimed "eyes searching for bugs", there's already a sea of bugs that have been found and properly reported, but they get fixed slowly. Some of these are critical bugs. Now someone comes to say "you ignore the fact that proprietary software is no better". And it isn't! But the claim that bugs get fixed quickly in OSS is not true. It's a myth, just like the eyeballs thing.

This was a privilege-escalation exploit, which means you already need an account on the computer to do anything.

Any account would do. Even say, "nobody".

All you need is the ability to run an arbitrary binary, for which a buggy CGI script is more than adequate. Basically, if you have a bit of shellcode running, that's sufficient. Once you have that going, you can easily exploit your way to more privileges.

That said, for the time being we now have a good way to root our Android phones.

> This is crap. A bug that allows remote code execution or even a DoS is a much,
> much bigger issue than fixing the user experience or minor stability issues.

You're not a security professional. You should have to put that in your sig file. The Linux kernel is used in many different situations, and clearly some security problems only pose a risk in some of those situations. E.g., a lot of embedded systems will never be vulnerable to this particular issue.

Compared to Heartbleed and this, I don't know of any other bug of similar criticality and impact.

Any remotely exploitable bug that allows for remote code execution / privilege escalation without user interaction is just as bad or worse.

After all, Heartbleed was "just" a remotely exploitable memory "leak"; but if you have remote code execution, you can scan memory and send home anything interesting, to the same effect as Heartbleed, plus anything else you might want to do once you are running on the system.

The problem was well discussed in 2009 here:
A tempest in a tty pot
https://lwn.net/Articles/34382... [lwn.net]
The result was that after a heated debate, Alan Cox was blamed for allowing old code to stay because emacs would lose terminal output, and Greg KH was summoned to step up as the TTY maintainer. The new TTY/PTY guys became James Simmons, the framebuffer guy, and C. Scott Ananian, the former jack-of-all-trades for the One Laptop per Child Foundation. Curiously enough, it was not Linux server systems like Red Hat Enterprise that were vulnerable for almost 5 years, but the popular Linux desktop distros like Ubuntu.

I read through the POC, it seemed safe enough to play with, so I've tried it out on a few different servers here (CentOS & Debian Stable). On the CentOS boxes it dies before it even gets started trying to overflow into a tty, and on my Debian machine it's been going for 5 minutes (using up to 90% CPU, but still leaving the machine quite usable), and still hasn't got anywhere.
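Before pointing a PoC at a box, a rough first check is the kernel release string. A caveat: the fix landed upstream around 3.15-rc and was backported to the stable series, and distro kernels (like the CentOS ones above) backport fixes without bumping the version, so a check like this can only ever say "possibly affected", never "patched". The parsing below is a sketch with that caveat baked in:

```python
import re

def kernel_tuple(release):
    """Extract the leading numeric components of a kernel release
    string, e.g. "3.2.0-4-amd64" -> (3, 2, 0)."""
    return tuple(int(n) for n in re.findall(r"\d+", release)[:3])

def possibly_affected(release):
    # CVE-2014-0196 was introduced in 2.6.31-rc3 and fixed upstream
    # before 3.15.  Because stable and distro kernels backport fixes
    # without changing the version, treat a True result as "go read
    # the distro changelog", not as "vulnerable".
    return (2, 6, 31) <= kernel_tuple(release) < (3, 15, 0)
```

Usage would be something like `possibly_affected(os.uname().release)`; a False result from a mainline kernel is meaningful, a True result from a distro kernel mostly isn't.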

Something like this would be impossible with the driver executing in an isolated process. Memory corruption would still be possible, of course (unless the driver was written in a secure language), but it would be local.
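A toy sketch of that idea, with a made-up "driver" living in its own process: when it dies, the rest of the system sees a failed request and restarts it instead of going down with it. Everything here (the class, the "crash" request, the echo protocol) is invented for illustration; a real microkernel does this with fast IPC rather than Python pipes:

```python
import multiprocessing as mp
import os

def driver_main(conn):
    # Hypothetical "driver": services requests over IPC until it hits
    # a bug; os._exit stands in for the driver corrupting itself.
    while True:
        req = conn.recv()
        if req == "crash":
            os._exit(1)
        conn.send(req.upper())

class IsolatedDriver:
    def __init__(self):
        self.proc = None
        self.conn = None

    def _ensure_running(self):
        if self.proc is None or not self.proc.is_alive():
            self.conn, child = mp.Pipe()
            self.proc = mp.Process(target=driver_main, args=(child,),
                                   daemon=True)
            self.proc.start()
            child.close()  # parent keeps only its own end of the pipe

    def call(self, req):
        # A driver crash is contained: the caller sees a failed request
        # (None), and the next call transparently restarts the driver.
        self._ensure_running()
        self.conn.send(req)
        try:
            return self.conn.recv()
        except EOFError:
            self.proc.join()
            return None
```

The point of the sketch is the failure mode: the equivalent of this CVE's memory corruption would kill `driver_main` and nothing else, whereas in a monolithic kernel the corrupted state is shared with everything.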

As I suggested in this thread here in Dec 2012: http://developers.slashdot.org... [slashdot.org] "One thing of concern to me about this (not knowing the kernel communications culture or the previous interactions of Linus and this maintainer) is whether the Linux kernel (and development community) has maybe reached some point where old development methods are breaking down in trying to support an ever-growing monolithic kernel approach? I initially [resisted] using Linux in the 1990s because I knew there were alternative a

The L4 family, QNX, K42, and Nemesis/Expert show that the kernel/userspace division is enough.

In fact, there are systems that don't even have two "rings" and are secure. One example is Microsoft Research's Singularity, which uses a secure programming language; another is the Go component design (can't find a link, sadly), which uses segmentation combined with a lightweight processor abstraction for protection. There are many others.

The L4 kernels have switching times as low as a few hundred cycles between comp

The Linux kernel is oriented towards supporting all the cutting-edge hardware. This is not going to make security very practical. OpenBSD is, ah, stodgy. But what OpenBSD brags about is "no remote holes in the default install", not "no local exploits".

I think that Linus cannot fix this sort of issue. Theo could take lessons from Linus on being nasty around systemd, but Linus has not been consistently nasty, and I think it's too late.