Ben Laurie blathering

Commenting on Google’s claim that Chrome was designed to be virus-free, I said:

Bruce Schneier, the chief security technology officer at BT, scoffed at Google’s promise. “It’s an idiotic claim,” Schneier wrote in an e-mail. “It was mathematically proved decades ago that it is impossible — not an engineering impossibility, not technologically impossible, but the 2+2=3 kind of impossible — to create an operating system that is immune to viruses.”

What I was referring to, although I couldn’t think of his name at the time, was Fred Cohen’s 1986 Ph.D. thesis where he proved that it was impossible to create a virus-checking program that was perfect. That is, it is always possible to write a virus that any virus-checking program will not detect.

Now, if what you’re interested in is PR, then it seems you can get away with these kinds of statements; certainly I have not seen a single public challenge. But if you care about rigour, you have to do rather better, since Schneier’s claim is demonstrably wrong. Why? Well, here goes … Cohen’s proof relies on computer science’s only trick: diagonalisation[1]. Basically, I assume that I have some perfect virus detector, V. If I give V a program, p, it returns true or false, depending on whether p is a virus or not. Let’s be charitable and assume that we can define what a virus is well enough to allow such a program to exist. Let’s also define what is meant by “perfect” – by that we mean that any program that exhibits virus behaviour will be classified as a virus and any that does not will be classified as not a virus.

Then Cohen says: fine, write a program, c like this:

if (V(c))
    do_nothing();
else
    be_evil();

Now, if V(c) returns true (i.e. c is a virus), then c does nothing, and therefore V is wrong. Similarly, if it returns false, then c behaves evilly, and once more V is wrong. QED, no perfect virus checker is possible. So far, we are in agreement.
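To make the diagonal construction concrete, here is a small Python sketch (the names `make_contrarian` and `naive_detector` are mine, not Cohen's): given any claimed-perfect detector V, we build the program c from the proof, which consults V about itself and does the opposite of whatever V predicts.

```python
# Sketch of Cohen's diagonal argument. Feed any claimed-perfect detector V
# a program c that asks V about itself and then does the opposite.

def make_contrarian(V):
    """Build the program c from the proof: it consults V about itself."""
    def c():
        if V(c):
            return "do_nothing"   # V called c a virus, so c behaves innocently
        else:
            return "be_evil"      # V called c clean, so c misbehaves
    return c

def naive_detector(p):
    """A (deliberately bad) detector that classifies every program as clean."""
    return False

c = make_contrarian(naive_detector)
print(c())  # prints "be_evil": the detector said "clean", yet c misbehaves
```

Whatever detector you plug in, c contradicts its verdict on c, which is the whole proof in one closure.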

Can we go from this to “it is always possible to write a virus that any virus-checking program will not detect”? No, because the proof only talks about perfect virus-checking programs. If the virus checker is allowed to be wrong sometimes, then the proof no longer works. In particular, if the virus checker can return false positives (i.e. claim that innocent programs are viruses) but is not allowed to return false negatives, then we can, indeed, have a virus checker that would keep our system free of viruses. Why? Because the virus checker will always detect a virus, by definition, but the diagonalisation proof no longer works – in particular, the case where V(c) is true no longer leads to a contradiction.
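To see why the contradiction evaporates once false positives are permitted, here is a self-contained sketch (names invented for illustration): a detector that flags everything, fed Cohen's contrarian program.

```python
# Sketch: a detector that may emit false positives but never false negatives.
# The extreme case: it flags every program, including Cohen's contrarian one.

def paranoid_detector(p):
    return True  # every program is treated as a virus

def contrarian():
    """Cohen's program c: ask the detector about itself, do the opposite."""
    if paranoid_detector(contrarian):
        return "do_nothing"   # flagged as a virus, so it behaves innocently
    else:
        return "be_evil"

# V(contrarian) is True and contrarian then does nothing. V's verdict is a
# false positive, not a false negative, so the proof's contradiction fails.
assert paranoid_detector(contrarian) is True
assert contrarian() == "do_nothing"
```

The detector is wrong about the contrarian program, but wrong in the safe direction: nothing evil ever runs.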

If we want to go a little further and show that such a program can, in fact, exist, we can do that quite easily. For example, consider the program V that always returns true: this would prevent any programs at all from running, so our OS wouldn’t be all that useful, but it would be virus-free. Less frivolously, we could have a list of non-virus programs, and V could return false for any program in the list and true for all others. Even less frivolously, it is possible to imagine an analysis thorough enough, for some restricted set of cases, to permit reasonably general programs to pass the test without admitting any viruses (though it would also reject many perfectly innocent programs). At that point we would have to pin down the definition of “virus” in order to say what the analysis might be – but it could, for example, require that the program be written in some restricted, easily-analysed language and avoid constructs that are hard to deal with.
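The whitelist variant can be sketched in a few lines (the list contents and names are invented for illustration): V returns false – “not a virus” – only for programs on an approved list, and true for everything else, so it can over-block but never under-block.

```python
# Sketch of the whitelist detector: false positives are allowed (innocent
# programs not on the list are blocked), false negatives are impossible
# (nothing off the list ever runs).

APPROVED = {"text_editor", "compiler"}  # hypothetical approved programs

def whitelist_detector(program_name):
    """Return True ("virus", refuse to run) unless the program is approved."""
    return program_name not in APPROVED

assert whitelist_detector("text_editor") is False   # allowed to run
assert whitelist_detector("novel_program") is True  # blocked, perhaps unfairly
```

The cost is exactly the one conceded in the text: many perfectly innocent programs are refused along with every virus.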

So, sorry, Schneier. It has not been shown that it is impossible, in the 2 + 2 = 3 sense, to write a virus-free OS. Indeed, it has been shown that it is, in fact, possible – though I would certainly agree that it is an open question how hard it would be to create an OS that’s both useful and virus-free.

[1] Don’t get me wrong; it’s a good trick.

This entry was posted on Sunday, August 2nd, 2009 at 17:45 and is filed under Rants, Security.

4 Comments

“return true” is virus-free assuming the compiler used to compile it is trustworthy – along with the executing environment, and even the electrons used to power that environment. Maybe Ken at Google could have some more arguments? 😉

I think the biggest problem in Schneier’s statement is assuming that computer security must rely on virus-checking programs. Security-by-design as explored by capability-based design is quite promising, although I agree with you that many challenges remain (especially towards ease of use and programming).

“So, sorry, Schneier. It has not been shown that it is impossible, in the 2 + 2 = 3 sense, to write a virus-free OS. Indeed, it has been shown that it is, in fact, possible – though I would certainly agree that it is an open question how hard it would be to create an OS that’s both useful and virus-free.”

This seems a bit unfair. Technically speaking, Mr. Schneier is correct (even if he may only focus on diagonalization in his communication). It may be a short answer to a long explanation, but he is still correct in the end, true? What seems to be elided is the trust that is assumed in the firmware, the OS, the compilation of both, and the virus-checking application, all of which would need to be formally verified as completely secure. In layman’s terms, you are saying that you have found the person who “shaves the barber” in Russell’s paradox, and I would like to see how that is possible (reference: http://en.wikipedia.org/wiki/Barber_paradox).

“Less frivolously, we could have a list of non-virus programs, and V could return false for any program in the list and true for all others. Even less frivolously, it is possible to imagine an analysis that’s thorough enough for some restricted set of cases to permit reasonably general programs to pass the test without allowing any viruses (obviously we would also disallow many perfectly innocent programs, too), but at this point we’d have to define “virus” to drill down into what that analysis might be – but it could, for example, require that the program be written in some restricted, easily-analyzed language, and avoid constructs that are hard to deal with.”

Seriously? Like a signature based system for virus detection? Again, the trust is on the anti-virus software to stop viruses, but now you are asking it to only “trust” applications that are on a list (like a firewall handles a rule set?). So then, how do you prove that the programs in the list match the ones trying to operate on the system? Hashing? How do you verify the hashes are correct? Third party? How do you verify the third party is correct? Uh oh… now we get back to the Verisign issue. Who verifies Verisign?
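The hashing idea the comment raises can be sketched in a few lines (all data invented for illustration): the whitelist stores content digests rather than names, which answers “how do you match programs against the list?” while leaving the commenter's deeper question – who vouches for the list – untouched.

```python
import hashlib

# Hypothetical whitelist keyed by SHA-256 digest of the program's bytes.
trusted_binary = b"print('hello')"  # stand-in for an approved program image
APPROVED_HASHES = {hashlib.sha256(trusted_binary).hexdigest()}

def is_approved(program_bytes):
    """Run a program only if its exact bytes match an approved digest."""
    return hashlib.sha256(program_bytes).hexdigest() in APPROVED_HASHES

assert is_approved(b"print('hello')") is True   # byte-identical: allowed
assert is_approved(b"be_evil()") is False       # anything else: blocked
# Note: this only relocates the trust question to whoever signs the list.
```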

I would have to say that if what you are saying is true, then, by your rationale, why not take it a step further? I will quote Bruce Potter during his speech at DefCon a few years back:

“Fix the code.”

Why not create an operating system that was completely virus-free to start with? Then we would not have to worry about viruses to begin with, correct? But isn’t that what the great *nixes of the world are trying to do? Even my beloved FreeBSD falls short of being perfect.

He said that “we wouldn’t even need firewalls if the code was written correctly.” Is this really true? If so, why isn’t this a Shmoo project? The reason is that this is just not a reality. It could be, if things like buffer overflows and directory traversals were the only issues in information security. But the issues are much deeper than we sometimes acknowledge.

Anyways, I think your ideas are great, but this one would only hold water until it became “top dog” of the market. At that point, I would expect the same issues that befell Microsoft, and all of the anti-virus manufacturers for Windows, to fall upon the product you describe above. At least you’re optimistic! Most INFOSEC people are “glass half empty” types…