Reasons for taking either approach, or a combination of the two, include cultural norms, financial considerations, legal positioning, national security, and so on - all of which in some way relate to a culture's view of the effect of having a given system open or closed source.

One of the core concerns is security. A common position against open source systems is that an attacker might exploit weaknesses in the system once they are known. A common position against closed source systems is that a lack of awareness is at best a weak security measure, commonly referred to as security through obscurity.

The question is: are open source systems, on average, more secure than closed source systems? If possible, please cite analyses from as many industries as possible - for example, software, military, and financial markets.

Before answering "how safe is this vs that", we need a system of measurement. How do you measure the number of vulnerabilities? This will be harder for closed-source, which I think is why people often feel safer with open source.
–
Nathan LongFeb 6 '12 at 14:59

When you show someone how a lock is made, it is only a matter of time before he picks it.
–
gerdiOct 30 '14 at 20:32

5 Answers

The notion that open source software is inherently more secure than closed source software -- or the opposite notion -- is nonsense. And when people say something like that it is often just FUD and does not meaningfully advance the discussion.

To reason about this you must limit the discussion to a specific project: a piece of software which scratches a specific itch, is created by a specified team, and has a well-defined target audience. For such a specific case it may be possible to reason about whether open source or closed source will serve the project best.

The problem with pitting all "open source" implementations against all "closed source" implementations is that one isn't just comparing licenses. In practice, open source is favored by most volunteer efforts, and closed source is most common in commercial efforts. So we are actually comparing volunteer-driven development with commercially funded development, not merely one license with another.

I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.
–
this.joshJun 8 '11 at 21:12

True, but giving "access to the source code" is potentially extremely valuable. Having outsider eyes look over your code brings new perspectives that might be missing in the dev team. You could even do something like stripe.com/blog/capture-the-flag, with a real project, with prizes for the best bug found (obviously not releasing details until a fix is out!)
–
naught101Jun 14 '12 at 0:56

Heartbleed is a good example of this. OpenSSL has been, well, open for years. Still, this huge security hole went undetected for ages.
–
Sameer AlibhaiJun 4 '14 at 19:53

Maintained software is more secure than software which is not. Maintenance effort being, of course, relative to the complexity of said software and the number (and skill) of people who are looking at it. The theory behind opensource systems being more secure is that there are "many eyes" which look at the source code. But this depends quite a lot on the popularity of the system.

For instance, in 2008 several buffer overflows were discovered in OpenSSL, some of which led to remote code execution. These bugs had been lying in the code for several years. So although OpenSSL was opensource and had a substantial user base (this is, after all, the main SSL library used for HTTPS websites), the number and skillfulness of source code auditors was not sufficient to overcome the inherent complexity of ASN.1 decoding (the part of OpenSSL where the bugs lurked) and of the OpenSSL source code (quite frankly, this is not the most readable C source code ever).
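To make concrete what such a bug looks like, here is a minimal sketch of the classic length-handling mistake in a binary decoder. This is not the actual OpenSSL ASN.1 code; the record format, function names, and parameters are invented for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical record format: one length byte followed by that many
 * payload bytes. Both functions copy the payload into out[]. */

/* BUGGY: validates the attacker-controlled length against the input,
 * but never checks it against the size of the destination buffer. */
int parse_record(const uint8_t *in, size_t in_len,
                 uint8_t *out, size_t out_len)
{
    if (in_len < 1)
        return -1;
    size_t payload_len = in[0];         /* attacker-controlled */
    if (1 + payload_len > in_len)
        return -1;
    memcpy(out, in + 1, payload_len);   /* overflows out[] whenever
                                           payload_len > out_len */
    (void)out_len;                      /* never consulted - the bug */
    return (int)payload_len;
}

/* FIXED: also bounds the copy by the destination buffer. */
int parse_record_safe(const uint8_t *in, size_t in_len,
                      uint8_t *out, size_t out_len)
{
    if (in_len < 1)
        return -1;
    size_t payload_len = in[0];
    if (1 + payload_len > in_len || payload_len > out_len)
        return -1;
    memcpy(out, in + 1, payload_len);
    return (int)payload_len;
}
```

Note that the buggy version looks plausible at a glance - it does validate a length - which is exactly why bugs of this shape can sit in widely read code for years.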

Closed source systems have, on average, far fewer people doing QA. However, many closed source systems have paid developers and testers, who can commit to the job full time. This is not really inherent to the open/closed question; some companies employ people to develop opensource systems, and, conceivably, one could produce closed source software for free (this is relatively common in the case of "freeware" for Windows). However, there is still a strong correlation between having paid testers and being closed source (correlation does not imply causality, but that does not mean correlations should be ignored either).

On the other hand, being closed source makes it easier to conceal security issues, which is bad, of course.

There are examples of both open and closed source systems with many, or very few, security issues. The opensource *BSD operating systems (FreeBSD, NetBSD and OpenBSD, and a few others) have a very good track record with regard to security. So did Solaris, even when it was a closed source operating system. On the other hand, Windows has (or had) a terrible reputation in that matter.

Summary: in my opinion, the "opensource implies security" idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source. However, you not only want a secure system, you also want a system that you positively know to be secure (not being burgled is important, but being able to sleep at night also matters). For that role, opensource systems have a slight advantage: it is easier to be convinced that there is no deliberately concealed security hole when the system is opensource. But trust is a fleeting thing, as was demonstrated with the recent tragicomedy around the alleged backdoors in OpenBSD (as far as I know, it turned out to be a red herring, but, conceptually, I cannot be sure unless I check the code myself).

Of course, how important security is to the maintainer of the software is critical. Software can be maintained for usability without being maintained for security.
–
this.joshJun 8 '11 at 21:09

+1 for raising the issue of maintenance. Also the "enough eyeballs" theory (also known as Linus' law), depends greatly on having trained eyeballs - and when it comes to subtle security bugs, there are far fewer.
–
AviD♦Jun 9 '11 at 23:03

I think the easiest, simplest take on this is a software engineering one. The argument usually goes: open source software is more secure because you can see the source!

Do you have the software engineering knowledge to understand the kernel top down? Sure, you can look at a given driver, but do you have complete enough knowledge of what is going on to really say "ah yes, there must be a bug there"?

Here's an interesting example: not so long ago a null pointer dereference bug appeared in one of the beta kernels. It was a fairly big thing, discovered by the guy from grsecurity (the PaX patches). The code dereferenced a pointer before checking it for null,

and the pointer == NULL check was optimised out by the compiler - rightly so, from the compiler's standpoint: since a null pointer cannot legally be dereferenced to reach a struct member, the pointer in that function can be assumed never to be null, and the compiler removed the check the developer expected to be there.
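The shape of that bug can be sketched in a few lines of plain C (heavily simplified, with invented names; the real code was in the kernel's tun driver):

```c
#include <stddef.h>

struct device { int flags; };

/* Simplified sketch of the dereference-before-check pattern. */
int poll_sketch(struct device *dev)
{
    int flags = dev->flags;   /* dereference happens first...        */

    if (dev == NULL)          /* ...so an optimising compiler may    */
        return -1;            /* assume dev != NULL and delete this  */
                              /* branch entirely.                    */
    return flags;
}
```

With GCC at -O2 the null check can legitimately be removed, because dereferencing a null pointer is undefined behaviour; the -fno-delete-null-pointer-checks flag exists to suppress exactly this class of optimisation. Reading the source, the check appears to protect the caller - the generated binary tells a different story.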

Ergo, vis a vis, concordantly, the source code for such a large project may well appear correct - but actually isn't.

The problem is the level of knowledge needed here. Not only do you need to be fairly conversant with (in this case) C, assembly, the particular kernel subsystem, and everything else that goes along with developing kernels, but you also need to understand what your compiler is doing.

Don't get me wrong, I agree with Linus that with enough eyes, all bugs are shallow. The problem is the knowledge in the brain behind the eyes. If you're paying 30 whizz kids to develop your product but your open source project only has 5 people who have a real knowledge of the code-base, then clearly the closed source version has a greater likelihood of fewer bugs, assuming relatively similar complexity.

Clearly, this also varies over time for any given project, as Thomas Pornin discusses.

from isc.sans.edu/diary.html?storyid=6820: "In other words, the compiler will introduce the vulnerability to the binary code, which didn't exist in the source code." This is a blatantly absurd, meaningless statement. The source code is buggy, so it is vulnerable. The way the compiler generates code determines which exploits are possible.
–
curiousguyJun 27 '12 at 12:42

Ok fair enough, you're right, I was wrong - he's dereferencing tun when tun could be NULL - which is downright bad. Fair enough. I'll remove the reference to an offending gcc option, since that wasn't the issue. The rest of the example, as an illustrative point, stands just fine.
–
user2213Jun 27 '12 at 16:17

I think the premises most people use to differentiate between closed and open source are pretty well defined. Many of those are listed here, and both sides have their advocates. Unsurprisingly, the proponents of closed source are those who sell it. The proponents of open source have also made it a nice and tidy business (beyond a few who have taken it on as a religion).

The pro open source movement speaks to the basics, and when it comes to security in general, these are the points that fit most into the discussion:

The Customization premise

The License Management premise

The Open Format premise

The Many Eyes premise

The Quick Fix premise

So breaking this down by premise, I think the last two have been covered rather succinctly by others here, so I'll leave them alone.

The Customization Premise
As it applies to security, the Customization Premise gives companies that adopt the software the ability to build additional security controls onto an existing platform without having to secure a license or convince a vendor to fix something of theirs. It empowers organizations that need to close a gap, or see one, to increase the overall security of a product. SELinux is a perfect example; you can thank the NSA for giving that back to the community.

The License Management Premise
Often it is brought up that if you use F/OSS technologies you don't need to manage technology licenses with third parties (or if you do, there are far fewer of them), and this can be true of entirely open source ecosystems. But many licenses (notably the GPL) impose requirements on distributors, and most real-world environments are heterogeneous mixes of closed and open source technologies. So while it does ultimately cut down on software spend, the availability of the source can lead some companies to violate OSS licenses by keeping source private when they have an obligation to release it. This can ultimately turn the license management premise into a liability (which is the closed source argument against licenses like the GPL).

The Open Format Premise
This is a big one, and one I tend to agree with, so I'll keep it short to keep from preaching. Thirty years from now I want to be able to open a file I wrote. If the file is "protected" using proprietary DRM controls and the software I need to access it is no longer sold, the difficulty of accessing my own content has increased dramatically. If the format used to create my document is open, and available in an open source product from 30 years ago, I'm likely to be able to find it and legally be able to use it. Some companies are jumping on the "open formats" bandwagon without jumping on the open source bandwagon, so I think this argument is a pretty sound one.

There is a sixth premise that I didn't list, because it is not well discussed. I tend to get stuck on it (call it paranoia). I think this sixth premise is the feather in the cap of defense departments around the world. It was spelled out to the world when a portion of the Windows 2000 source was leaked.

The Closed Source Liability premise
If a company has been producing a closed source code library or API through multiple releases over the decades, small groups of individuals have had access to that source throughout its production. Some of these are third-party audit groups, and developers who have moved on to other companies or governments. If that code is sufficiently static - maintaining compatibility being one of the claimed benefits of closed source - some weaknesses can go unannounced for many years. Those who have access to that closed source have the freedom to run code analysis tools against it to study those weaknesses, and the bug repositories of those software development shops are full of "minor" bugs that could lead to exploits. All of this information is available to many internal individuals.

Attackers know this, and want this information for themselves. This puts a giant target on your company's internal infrastructure if you are one of these shops, and as it stands, your development processes become a security liability. If your company is large enough, and your codebase well distributed enough, you can even be a target for human infiltration efforts. At this point the Charlie Miller technique - bribe a developer with enough money and he'll write you an undetectable bug - becomes a distinct possibility.

That doesn't mean the same thing can't get into OSS products the same way. It just means you are sitting on a set of data that, when released, can expose weaknesses in your install base. Keeping it private has created a coding debt against your customers' installed systems that you cannot pay back immediately.

+1 @Ori: Do you know of any OSS that had a backdoor that was found and clearly designed to be one? Also, who is Charlie Miller - is there a Wikipedia page, or something of the like?
–
blundersJun 12 '11 at 1:50

He's a "security researcher" who's famous for his Pwn2Own exploits. He mentions the human element of coding exploits in his Defcon 2010 talk, which is humorous enough to watch on its own. youtube.com/watch?v=8AB3NcCkGNQ
–
OriJun 12 '11 at 4:24

@blunders Nothing that has been identified and publicized as such. I could go digging for some specifics, but the problem with a well designed "bug" is that it shouldn't be easy to differentiate between an accident and a deliberate placement.
–
OriJun 12 '11 at 14:16

@Ori, your 3rd point - the Open Format premise - while a good point, is not necessarily a benefit strictly of F/OSS software. Indeed, your last statement in that paragraph contradicts the rest: "Some companies are jumping on the "Open Formats" bandwagon without jumping on the Open Source bandwagon", which shows that it's independent of open source. (Admittedly, in some minds that's not the case, but that's not true.)
–
AviD♦Jun 12 '11 at 18:52

The upshot is that open and closed source are about equivalent, depending on how much testing gets done on them. And by "testing" I don't mean what your average corporate drone "tester" does, but rather real in-the-field experience.