Developers still use primitive security measures, despite many examples of stolen or maliciously modified programs

InfoWorld | Apr 1, 2014

It's one of the great computer security lessons. In the early 1980s, Ken Thompson, one of the original creators of Unix and a widely trusted computer pioneer, revealed that he had inserted a hidden backdoor into a source code compiler.

In his acceptance speech for the Association for Computing Machinery's most prestigious honor, the Turing Award, Thompson admitted that he had created a parasitic C compiler that, when it detected it was compiling the source code of a login program, would silently generate a backdoor known only to him. He could essentially log onto any system whose logon access control programs had been built with his compiler.

It was pure genius. The backdoor lived only in the compiled compiler binary: whenever the compiler detected it was compiling its own source, it re-inserted the backdoor-planting logic, so even the compiler's source code stayed clean. Reviewed, uncompiled source code would never show the backdoor, and almost no one inspects their compiled binaries, so detection of the malicious code was highly unlikely. Thompson claimed he never released his rogue compiler, but there are reports that it, and programs compiled by it, made it outside his Bell Labs laboratory. His lesson was that you could never really trust anyone else's program or source code.
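Thompson's trick can be illustrated with a deliberately simplified sketch. Nothing below is Thompson's actual code: the `compile_source` function, the `check_login` routine, and the "kt-master" master password are all invented for illustration. A toy "compiler" scans source text for a login routine and splices in a backdoor before the code ever runs:

```python
# Toy illustration of a "trusting trust"-style backdoor (NOT Thompson's
# real code; all names and the master password are invented).

# The backdoor the "compiler" will splice into any login routine it sees.
BACKDOOR = '\n    if password == "kt-master": return True  # injected\n'

def compile_source(source: str) -> str:
    """Pretend compiler: returns the code that will actually run.

    If the source defines a login check, silently inject a backdoor
    that accepts a master password known only to the attacker.
    """
    header = "def check_login(user, password):"
    if header in source:
        return source.replace(header, header + BACKDOOR, 1)
    return source  # everything else "compiles" clean

# The developer's reviewed, clean source -- no backdoor visible here.
clean_source = '''
def check_login(user, password):
    return user == "alice" and password == "secret"
'''

# "Compile" and load the code, as a build pipeline would.
namespace = {}
exec(compile_source(clean_source), namespace)
check_login = namespace["check_login"]
```

Reviewing `clean_source` reveals nothing suspicious, yet the compiled `check_login` now accepts the attacker's master password for any user, while behaving normally for everyone else.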

Turns out even trusting your own source code may be more difficult than it seems. Beyond all the coding bugs that your legitimate developers may unknowingly insert, you may not be able to trust your source code for a variety of other, more nefarious reasons, including:

It can be stolen and reviewed by malicious outsiders who exploit found weaknesses.

You can have rogue, legitimate developers, who for a variety of reasons may insert exploitable weaknesses into the code.

It may rely on other, external source code that has unintentional or intentional backdoors (like Thompson's exploit).

Malicious outsiders could insert undetectable backdoors or weaknesses, which would then be distributed by the vendor as legitimate software.

These risks aren't merely theoretical. There are literally dozens of instances of stolen source code, including:

The Linux kernel itself was once maliciously modified, but this was caught before it could be approved and distributed. And the huge Chinese APT attack known as Operation Aurora, which targeted dozens of companies, was designed to steal and modify vendor source code.

Most sophisticated development teams use source code repository tools known as source code managers, version control systems, or software configuration managers. In the open source world, the leading tools are Apache Subversion, Git, and Mercurial. Not surprisingly, however, version control is not security control. Most of the source code compromises I listed above were running one of these products. In some cases the source code manager was abused; in others it was the trusted distribution point.
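To make the distinction concrete, here is a small sketch (assuming a Unix shell with Git installed). Git's object hashes let anyone verify that a repository's history has not been corrupted, but they say nothing about whether a change was authorized: `git fsck` will happily pass a backdoored commit made by anyone with write access.

```shell
# Sketch: Git verifies integrity of history, not legitimacy of changes.
set -e
workdir=$(mktemp -d)
cd "$workdir"

git init -q repo && cd repo
git config user.email dev@example.com
git config user.name "Dev"

# An attacker with write access commits exactly like anyone else.
echo 'int main(void) { return 0; }' > main.c
git add main.c
git commit -qm "initial"

# fsck checks object hashes -- it detects corruption, not malicious intent.
git fsck --full
```

Stronger guarantees, such as signed commits and tags, mandatory code review, and tightly restricted write access, have to be layered on top of the version control system itself.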

At the very least, development teams need to be aware of the potential for source code compromise and abuse. Source code should be among the most protected assets in any company. I know many companies that require two-factor authentication and keep their source code on air-gapped networks. That can make the job much harder for an outside intruder.