Who's Guarding the Guards? We Are

A recent DevX editorial, which expressed concern over the ability of open source to be maliciously exploited, is dead wrong, say many readers. In this rebuttal, one developer explains why open source's peer review security model offers the best protection against those exact types of malicious attacks.

by Ladd Angelius

Feb 12, 2004

The editorial published on February 11, "Open Source Is Fertile Ground for Foul Play," suggests three areas where security might be a concern for governments when considering open source software. However, all three arguments are flawed "straw men" when subjected to rational analysis. Indeed, some of the author's own arguments demonstrate the strengths of open source when weighed against any closed source alternative.

First, the author, DevX Executive Editor A. Russell Jones, suggests that security breaches could be inserted into open source software by an insider, perhaps hidden in code submitted as a fix or an extension. While there is a remote possibility of this occurring (this is conceded as "not terribly likely" even by the author), there is a far greater possibility of this occurring when patching closed source software.

For example, all software is constantly being updated, whether it is open source or closed source. The same malicious code insertion danger applies to closed source software, except that no one but the software vendor ever sees the code changes. It's as if only the vendor has the keys to the hood of your car, and only they get to see the engine. No one can tell whether malicious code is being added because no one can see the source code. If Microsoft wanted a backdoor into anyone's network or PC (assuming they don't have one already; only they know), they could roll it out in the next Windows Update, and there is absolutely no way, beyond a whistleblower in their organization, that anyone would find out. Voila! Instant access to your private information. It's the perfect crime, because no one would ever know.

Open source software, on the other hand, is transparent: the integrity of the code can be verified by anybody. You can open the hood and verify, with your own eyes, that the mechanic really did install the new generator you were charged for. Better still, because the source code can be read by anyone, many other people have already verified it before you. The built-in ability to review the code keeps everyone honest.

There is an old Russian proverb, "Trust, but verify." Open source allows us to do that. In the closed source world, you cannot verify; you have only blind trust.
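To make "trust, but verify" concrete: with open source, you can check that the code you received is exactly what the project published, for instance by comparing a file's checksum against the value the project distributes alongside it. The sketch below is purely illustrative (the file path and expected digest are hypothetical, and it stands in for whatever checksum tooling a given project actually ships):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks
    so even large source archives fit in constant memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """Compare the file's digest against the published checksum."""
    return sha256_of(path) == expected_hex.lower()

# Hypothetical usage: the archive name and digest are placeholders.
# if not verify("somedistro-src.tar.gz", published_digest):
#     raise RuntimeError("archive does not match the published source")
```

A cryptographic signature (for example, a GPG-signed checksum file) goes a step further, verifying not only that the bytes are intact but that they came from the publisher you trust.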

Second, Jones suggests that "distributions will be created and advertised for free or created with the express purpose of marketing them to governments at cut-rate pricing." Again, why does open source take the fall? Any "malicious vendor" could more easily conceal malicious code in a closed source software package.

For the sake of argument, I'll concede the point: a malicious group could create an open source distribution and attempt to sell it, or give it away, to the government. And, who knows, maybe the government would be stupid enough to buy it instead of using a common, tested distribution. And maybe the government would have no experts of its own to look under the hood and examine the code. (Although if the government is going to be that ridiculously dense, it may as well rely on closed source software.) And while we're at it, let's assume the government would also use this software to protect the country's most valuable assets. Hmm. Even in this extraordinary case, only if the software is open will this hypothetical government be able to examine the goods when questions arise. With a closed source distribution, it would be impossible. And since the move to open source is only beginning, closed source is the reality in almost every case today.

Lastly, and of most concern to the author, "an individual or group of IT insiders could target a single organization by obtaining a good copy of Linux and then customizing it for an organization, including malevolent code as they do so." This, once again, is not a security concern that applies solely to open source. Any group of IT insiders in an organization that has the administrative network access necessary to accomplish that already wields the power they need to act maliciously; they could install any code on any machine.

This last argument reminds me of the old saying, "You can catch a bird with your hand, if you first put some salt on its tail." If you're close enough to the network to have administrative access and influential enough to control the specific patches applied to all the servers, then you can add a backdoor in any number of ways regardless of any open source software. In other words, if you can get close enough to put salt on the bird's tail, just reach out and grab it. You don't need any open source software.

It's hard to believe that an opinion criticizing the security of open source software would attempt to use the phrase "Quis custodiet ipsos custodes?" in defense of that position. "Who's guarding the guards?" We are. All of us. We are guarded, in government, corporate America, and elsewhere, only because of the transparency we have built into those systems. If we do not insist on that transparency, then we remove all capability for oversight, and then no one can "guard the guards."