Biotech, Info Tech, the Future and Space

Last October, I speculated on the best ways to go about designing and implementing a software backdoor. I suggested three characteristics of a good backdoor: low chance of discovery, high deniability if discovered, and minimal conspiracy to implement.

The critical iOS vulnerability that Apple patched last week is an excellent example. Look at the code. What caused the vulnerability is a single line of code: a duplicated “goto fail;” statement. Since the second statement isn’t guarded by a conditional, it always executes, jumping past the final signature check while the error variable still holds zero, so verification reports success.

The flaw is subtle, and hard to spot while scanning the code. It’s easy to imagine how this could have happened by error. And it would have been trivially easy for one person to add the vulnerability.

Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.

EDITED TO ADD (2/27): If the Apple auditing system is any good, they would be able to trace this errant goto line not just to the source-code check-in details, but to the specific login that made the change. And they would quickly know whether this was just an error, or a deliberate change by a bad actor. Does anyone know what’s going on inside Apple?

EDITED TO ADD (2/27): Steve Bellovin has a pair of posts where he concludes that if this bug is enemy action, it’s fairly clumsy and unlikely to be the work of professionals.

Schneier is a guy to listen to. There are a lot of things discussed in the comments about this because we have so little information.

It fits his criteria. With the logs of changes on hand, Apple should be able to backtrack and figure out how this happened. The best conspiracy theory would have to include the possibility that anyone signing off on correct testing of the code was also involved, unless the same person who added the code also signed off on the testing.

Makes for a great story, even if it is much more likely that human error was involved.