Posted
by
Soulskill
on Tuesday April 15, 2014 @04:16PM
from the or-at-least-marginally-less-unsafe dept.

jammag writes: "Heartbleed has dealt a blow to the image of free and open source software. In the self-mythology of FOSS, bugs like Heartbleed aren't supposed to happen when the source code is freely available and being worked with daily. As Eric Raymond famously said, 'given enough eyeballs, all bugs are shallow.' Many users of proprietary software, tired of FOSS's continual claims of superior security, welcome the idea that Heartbleed has punctured FOSS's pretensions. But is that what has happened?"

In the self-mythology of FOSS, bugs like Heartbleed aren't supposed to happen when the source code is freely available and being worked with daily.

False. Bugs can and do happen. However, what can also happen with open source software is that entities other than the group working on the project can find bugs. In this case, Google found the bug. If the source were not open, maybe it would have never been officially recognized and fixed.

We're surrounded by tiny errors in the world. Heck, they're even built into our DNA. The vast majority of tiny little errors do no harm, and we don't notice them. We gloss over them, like a typo in a book. It's just that every once in a while, a tiny little error can occur that snowballs into something much greater. Like cancer. Or a massive, accidental security leak.

More eyeballs usually do make bugs more shallow, but only if the eyes know what to look for.

Many eyeballs may make bugs shallower, but those many eyeballs don't really exist. Source availability does not translate into many people examining that source. People, myself included, may like to build and install packages, but that's it.

What we need are intelligent bots to constantly trawl source repositories looking for bugs. People just don't have the time any more.

I don't think anyone claims that open-source software won't ever have security issues. The claim is that the open-source model tends to find and correct the flaws more effectively than the closed-source model, and that the soundness of the resulting product tends to be better on average.

One case does not disprove that. The key words there are "tends" and "on average".

Anyone can view the source of an open source project, which means anyone can find vulnerabilities in it. Specifically, hackers wishing to exploit the software, as well as users wishing to audit and fix it. But someone who knows what they're doing has to actually look at the source for that to matter, and this rarely happens.

Hackers must black-box closed source software to find exploits, which makes it more difficult than finding them in open source software; the flip-side is that they can only be fixed by the few people who have the source. If the hacker doesn't disclose the exploit and the people with access to the code don't look for it, it goes unpatched forever.

Open source software provides an advantage to both sides: hackers can find exploits more easily, and users can fix them more easily. With closed source, you're at the mercy of the vendor to fix their code, but at the same time it's more difficult for a hacker to find a vulnerability without access to the source.

Then, we consider how good fuzzing techniques have gotten and... well, as it becomes easier to find vulnerabilities in closed source software, open source starts to look better.

So, the "with many eyes all bugs are shallow" notion fails. There were not enough eyes on the OpenSSL library, which is why nobody discovered the bug.

Except that someone did discover the bug, while they were looking at the code because it was open source. And they did report it. And it did get fixed. Later than anyone would want, of course, but it happened. Maybe similar errors are being missed in the Windows and Mac implementations right now.

I don't know; Microsoft got caught with a bug that let you waltz right through a password check by entering all spaces, which is arguably worse than forgetting to put a character limit back on something. Admittedly the stakes are not the same, but with open source you can check the code, and enough people do that it works. It's also safer in terms of checking for back doors and sloppy coding, since anyone can audit it.

That it reacts fast is good. That the bug could be audited in the source, in public, is good.

We should remember that FLOSS reacted very quickly to the "revelation," but the bug itself has been sitting there for years, which isn't really supposed to happen.

It's nice we know how long it's been there, and can have all kinds of philosophical discussions about why the OpenSSL folks decided to write their own malloc.

Also OpenSSL was effectively a monoculture and just about every SSL-encrypted internet communication over the last two years has been compromised. OpenSSL has no competition at its core competency, so the team really has no motivation to deliver an iteratively better product, apart from their need to scratch an itch. FLOSS software projects tend not to operate in a competitive environment, where multiple OSS products are useful for the same thing and vie for placement. This is probably bad.

This doesn't really change it, because think how a proprietary SSL library would've handled this. The vulnerability was found specifically because the source code was available and someone other than the owners went looking for problems. When was the last time you saw the source code for a piece of proprietary software available for anyone to look at? If it's available at all, it's under strict license terms that would've prevented anyone finding this vulnerability from saying anything to anyone about it. And the vendor, not wanting the PR problem that admitting to a problem would cause, would do exactly what they've done with so many other vulnerabilities in the past: sit on it and do nothing about it, to avoid giving anyone a hint that there's a problem. We'd still have been vulnerable, but we wouldn't know about it and wouldn't know we needed to do something to protect ourselves. Is that really more secure?

And if proprietary software is written so well that such vulnerabilities aren't as common, then why is it that the largest number of vulnerabilities are reported in proprietary software? And that despite more people being able to look for vulnerabilities in open-source software. In fact, being a professional software developer and knowing people working in the field, I'm fairly sure the average piece of proprietary software is of worse quality than the average open-source project. It's the inevitable effect of hiring the lowest-cost developers you can find combined with treating the fixing of bugs as a cost and prioritizing adding new features over fixing problems that nobody's complained about yet. And with nobody outside the company ever seeing the code, you're not going to be embarrassed or mocked for just how absolutely horrid that code is. The Daily WTF is based on reality, remember, and from personal experience I can tell you they aren't exaggerating. If anything, like Dilbert they're toning it down until it's semi-believable.

This, and I suspect a lot of shilling by proprietary software vendors playing up the failure of the "many eyes make bugs shallow" idea. This wasn't so much a failure of the open source model as it was a failure to properly vet commits to the code of a project before accepting them into the main tree, and that could happen just as easily under a closed source development model as an open source one. That might be OK for small hobby projects, and perhaps even major projects that don't have quite so major ramifications in the event of a major flaw, but hopefully this will serve as a wake up call for projects that aim to form some kind of critical software infrastructure. For such projects, requiring that commits be reviewed and "signed off" by one or more other developers would perhaps have caught this bug, and others like it, and could perhaps work very well in conjunction with some of the bug-bounty programmes out there. Of course, "Find a flaw in our pending commits, and get paid!" only works if the code is open for inspection...

True, but it is also easier for malicious people to find vulnerabilities when they have the source code. There are other disadvantages too: a broad developer base allows vulnerabilities to be deliberately introduced more easily, and it's harder to enforce standards.

I searched and couldn't find a good study or any reliable evidence either way. There is good and bad open source software and there is also good and bad commercial software. Posting with absolute certainty that open source is more secure will get you modded up around here but I would like to see some evidence.

" just about every SSL-encrypted internet communication over the last two years has been compromised."

No, it really hasn't.

It's accurate to say that just about every Open-SSL encrypted session for servers that were using NEW versions of OpenSSL (not all those ones out there still stuck on 0.9.8(whatever) that never had the bug) were potentially vulnerable to attack.

That's bad, but it's a universe away from "every SSL session is compromised!!!" because that's not really true.

You seriously think that black hats bother with reading millions of lines of code in the hope of finding an exploit, when all they have to do is play with the data sent to services and applications and see if anything misbehaves? Which is why exploits are found in roughly equal measure in closed and open software.

Correct -- I could imagine that there are lots of "heartbleeds" in closed source software that can and will be exploited. Whether it becomes public and puts pressure on the development staff to fix, is another story.

"The problem here is that people have been using the argument that Open Source is better because these issues can't happen "because" of the visibility."

No, just no. No one with any sort of a clue ever argued these issues cannot happen with Free Software. It's good practice, it helps, but it's no silver bullet. That's just as true as it ever was and this news in no way contradicts that.

1. Proprietary software could have a million bugs like this. You just wouldn't know it. They do not become less dangerous because they are proprietary, nor do security flaws become more dangerous because they are in open-source code.

2. Open-source software at least has the possibility of being looked at over and over. Proprietary code may be reviewed or not, depending on the resources, interest, and monetization capability of that code. A possible review by all relevant coders in the world is always more review than by a limited team of programmers and analysts at one company.

3. The real problem with Heartbleed is the time that passed between the code being written and the bug being discovered. That delay exacerbates the security problem. However, there will be some sort of statistical (probably Poissonian or normal) distribution of the time required to catch a bug after its introduction into code. As with anything, there are outliers. Heartbleed, with its serious and longstanding flaw, must be considered an outlier unless shown otherwise. I have not seen evidence that this happens on a regular basis with any software, FOSS or otherwise.

I would appreciate it if future Slashdot discussions were let out through the upper orifice with some maturation period in the brain, rather than through the lower orifice after festering in the colon.

There is plenty of evidence for the effectiveness of good code reviews, but most of it shows rapidly diminishing returns with the number of reviewers.

To me this is an argument *for* open source software. It *takes* LOTS of eyes to catch bugs, precisely *because* there are diminishing returns from adding more code reviewers. It is only by having hundreds or thousands of them that you can hope to catch the ones that would otherwise go unnoticed.

By the time you've had more than four or five people take a look, the difference in effectiveness from adding more barely even registers, unless one of the additional reviewers has some sort of unique perspective or expertise that makes them not like the others.

And one easy way to have a diverse group of code reviewers is to have a lot of them.

Given that almost every major FOSS system software project has had its share of security bugs, there is really very little evidence to support Raymond's claim at all.

Every piece of software of any reasonable size has security bugs. The fact that we know about them is because someone found them, which is exactly what is supposed to happen.

Gloat? About what? This only provides proof of the benefits of open source - a significant flaw was discovered, which is exactly the claimed advantage - the more eyes, the better.

Anyone who would claim that proprietary software is somehow more secure is making a huge leap - there are only a few eyes, if any, looking for unreported issues - so there may be even more serious issues which have existed for much longer, which only a few bad guys know about. If MS or anyone else thinks that their proprietary SSL implementation has no security breaches, let them put a guarantee with full financial liability behind that thought.

How can you be a good chess player if you do not lose the odd game? So the open source code got a strike against it; I am sure GNU and open source teams are coming back at this with a vengeance, developing better protection methods. Stuff like this will rally security teams. Sure, not all bugs and vulnerabilities can be caught, but the ones that are will have the living s--t kicked out of them. Chalk it up to valuable experience. I am sure developers are whipping themselves into a mea culpa frenzy. A bit of humility will go a long way toward making something superior.

Encryption is not security through obscurity. Encryption is security through rigorous openness and review.

"Security through obscurity is generally a pejorative term referring to a principle in security engineering, which attempts to use secrecy of design or implementation to provide security." The secret key in cryptography is neither design nor implementation.

No, just no. No one with any sort of a clue ever argued these issues cannot happen with Free Software.

No, they haven't made that claim in so many words. But they've sure as hell implied it for years now. That's the whole line of thought that Raymond's statement (quoted in TFS) is based on.

Huh? The quote is "given enough eyeballs, all bugs are shallow." That's a clear admission that open software, like all other software, contains bugs; that's why you want the many eyeballs. Any claim otherwise is a symptom of not understanding plain English. Eric's whole point was that the bugs in open software will be found and fixed faster than the bugs in other software, due to the population of interested people who will study it, looking for the bugs. Nothing in that quote implies (to anyone with reasonable understanding of English and basic logic) that open software doesn't have bugs. I expect Eric would just chuckle at the very idea of software without bugs.

(Actually, someone near him should ask him. Tell us whether he chuckles, or snickers, or just gets a sad look on his face. Or maybe he'll say "Well, there is a conjecture that bug-free software exists, but it has never been observed in the field by reliable observers." ;-)

A much more useful conclusion from this story (if you're serious about computer security) is that this bug has been found and fixed in OpenSSL, but with its proprietary competitors, we have no way of knowing what horrible exploits they may be hiding. And you'd be a dummy to think they don't have exploits; every chunk of security-related software has exploits. The meaningful question is whether they can be found and fixed by the people using the software. If not, you'd be a fool to use that software.