As far as ClamAV etc. is concerned, it has long been the stance of a large part of the security community that public security is better. ("Public security is always more secure than proprietary security...For us, open source isn't just a business model; it's smart engineering practice." -- Schneier). The reasoning is that yes, the "bad guys" can see your code, but so can the good guys. Things you may have overlooked that the "bad guys" might find easily, the "good guys" can point out to you; whereas if no one could see your code, those flaws might be found too late.
–
doylerJan 11 '12 at 21:40

4 Answers

Viruses and antivirus software play an elaborate game of hide-and-seek, and have done so for at least the last 20 years. An antivirus tries to "look at" all the places where a virus may hide; new viruses try to find new hiding places that antiviruses do not yet know of. Reverse engineering is done in both directions, and having access to source code certainly makes reverse engineering easier.

Still, access to source code should not be fatal in that direction. Obscurity is the only weapon in the virus arsenal: once reverse-engineered, it becomes obvious how the virus replicates itself, how to detect it, and how to remove it. Having the source code of a virus would kill it. On the other hand, having the complete source code of an antivirus does not tell you how to defeat it; it only tells you what kinds of viruses will not escape it. A virus developer would still have to find a new path, one the antivirus developer did not think of, through which the virus may spread and proliferate. The virus developer could gather the same information through testing. Having access to the antivirus source code thus makes virus development quantitatively less tiresome, but not qualitatively easier.

Depressingly, most antivirus software primarily detects viruses through so-called signatures (chunks of code which appear in some known virus but probably not in "normal" software), and most viruses try to escape antiviruses by simply having a new, distinct signature. This is depressing because it means that both virus and antivirus developers are still, basically, in the Stone Age. The antivirus developers win by adding new signatures as fast as possible; the virus developers win by being so numerous that antivirus developers are overwhelmed. There is no elegance here, just raw numbers. However, it means that an antivirus is mostly a big grep engine; the theory of implementing such systems has been known and masterfully described for decades (since 1973 for volume 3, specifically). So there is very little for a virus developer to learn from the source code of an antivirus.
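To make the "big grep engine" point concrete, here is a minimal sketch of signature-based scanning. All signature names and byte patterns below are made up for illustration; a real engine would use a multi-pattern automaton (Aho-Corasick or similar) over a database of millions of signatures, but the detection model is the same.

```python
# Naive signature scanner: a blob of data is flagged if it contains
# any known byte pattern. Patterns here are purely illustrative.
SIGNATURES = {
    "Fake-Sig-A": b"\xde\xad\xbe\xef\x00\x01",
    "Fake-Sig-B": b"MALWARE-MARKER-42",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

clean = b"hello world"
infected = b"junk..." + b"\xde\xad\xbe\xef\x00\x01" + b"...more junk"
print(scan(clean))     # []
print(scan(infected))  # ['Fake-Sig-A']
```

This is why the source code itself reveals so little: the scanning logic is textbook substring matching, and all the actual knowledge lives in the signature database.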

To sum up, I would say that the damage resulting from antivirus source code leakage is mostly a public relations issue: as a "security company", they should know better than to let their corporate assets leak that way. But it does not have much impact on the actual security offered by the product itself.

I'm not certain this is necessarily security through obscurity by design. Symantec, like many corporate entities before it, sees its source code as intellectual property; it is not necessarily guarded for the sole purpose of protecting the product from directed attacks. (Note that it doesn't hurt to have a little security by obscurity, as long as it isn't your only means of protection.)

However, with that said, having their (old) source code read can open up new avenues for exploits. Although it is five years old, it is entirely possible (and quite probable) that parts of the source code have been reused in many of their more recent products.

Personally, I have reviewed -- as part of my job -- source code for products that have been iterated on for the past 10 years or more. The latest editions almost always retain at least some code from their initial versions, because the core usually gets augmented rather than rewritten.

I was thinking that it might be possible to discover and exploit vulnerabilities in the AV itself, and craft viruses that attack the AV directly. Seeing that Symantec has a lot of power over the OS, owning the AV would be quite the rootkit.

Yeah, but closed-source projects like IE get hacked daily... Hackers are going to use fuzzing and automation to uncover these flaws.
–
rookJan 11 '12 at 18:36

You are correct, but code review is far easier and more efficient than fuzzing.
–
schroeder♦Jan 11 '12 at 18:49

@schroeder "code review is far easier and more efficient than fuzzing" Is that what your personal experience tells you, or your intuition? Why is free software not attacked a lot more than closed-source software?
–
curiousguyMay 21 '12 at 0:12

@curiousguy By 'free' I assume you mean 'open source'. Such projects are attacked quite often, but with many eyes able to review the code, obvious issues are revealed quickly. It is my experience, and it follows logically: you can either read the code to see how an input is filtered, or you can fuzz to discover the filtering.
–
schroeder♦May 21 '12 at 1:19

@schroeder "By 'free' I assume you mean 'open source'." yes (or just that the source code is accessible). I have in mind a ridiculously obvious null-dereference bug in the Linux kernel that went unnoticed; now, a missing break in sudo! If such obvious bugs go unseen, then I don't believe code review will detect more subtle problems caused by interaction between components (like unwanted race conditions). To me it proves that in practice most source code is almost never read. There is just too much of it, and the amount keeps increasing.
–
curiousguyMay 22 '12 at 15:14

It is definitely possible that heuristic-based algorithms are easier to evade if you know specifically what they check for; it makes the problem much less of a black-box test. Beyond that, it depends on what was leaked: if the attackers got the signatures themselves, that would be invaluable, because you would then know exactly what the scanner is looking for and could modify your code accordingly.
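That last point can be illustrated with the same naive signature-matching model used throughout this thread. The byte pattern below is entirely made up; the sketch only shows why byte-exact signatures are brittle once known: flipping a single byte inside the signed region defeats detection.

```python
# Hypothetical known signature (made up for illustration).
SIGNATURE = b"\xde\xad\xbe\xef\x00\x01"

def detected(data: bytes) -> bool:
    """Byte-exact signature match, as in a naive scanner."""
    return SIGNATURE in data

payload = bytearray(b"prefix" + SIGNATURE + b"suffix")
print(detected(bytes(payload)))  # True

# An attacker who knows the signature flips one byte inside the
# signed region; the byte-exact match now fails.
idx = bytes(payload).index(SIGNATURE)
payload[idx] ^= 0xFF
print(detected(bytes(payload)))  # False
```

This is exactly why knowing the signature database is more valuable to an attacker than knowing the engine code, and why real products layer heuristics and emulation on top of plain pattern matching.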