A sampling of theories behind Wednesday's notice that TrueCrypt is unsafe to use.

Wednesday's bombshell advisory declaring TrueCrypt unsafe to use touched off a tsunami of comments on Ars, Twitter, and elsewhere. At times, the armchair pundits sounded like characters in Oliver Stone's 1991 movie JFK as they speculated wildly, and contradictorily, about what was behind a notice that raised far more questions than it answered. Here are some of the more common theories, along with facts that either support or challenge their accuracy.

Warrant or National Security Letter canary

Theory: Borrowing a page from the Lavabit crypto service that former NSA contractor Edward Snowden used, Wednesday's advisory was what legal practitioners call a "canary," intended to signal receipt of a confidential demand from a law-enforcement or national security entity. Since National Security Letters (NSLs) can impose draconian penalties on those who make the demands known, this theory goes, the TrueCrypt developers issued a thinly veiled warning to users that they should no longer count on the program to prevent snooping by the US government.

Pros: Several elements of the advisory left many readers with the vague sense that the writers' tongues were planted firmly in their cheeks. Most obvious was the advice that TrueCrypt fans—a mish-mash of privacy-loving Linux, Mac, and Windows users—should abandon the cross-platform app for BitLocker, Microsoft's proprietary encryption program that runs only on selected versions of Windows. With much less prominent mention of FileVault or LUKS—the rough Mac and Linux equivalents of BitLocker, respectively—some people regarded the advice as so absurd as to be a wink and a nudge signaling something much more serious was going on.

Cons: Like most conspiracy theories, this one is based on little more than speculation. Beyond that, it's not clear that authorities know the identities of the anonymous TrueCrypt developers or, assuming they do, that the developers reside in a jurisdiction where an NSL would be legally binding. What's more, the parallels between Lavabit and TrueCrypt are at best tenuous. Whereas Lavabit's private key allowed the feds to decrypt all user data, there's no similar secret that the TrueCrypt developers could reveal. TrueCrypt's source code is also publicly available, so it's not clear what the developers could say that the NSA doesn't already know, and that openness makes TrueCrypt an ineffective, and therefore unlikely, target for a US-engineered backdoor. Finally, a preliminary audit of TrueCrypt uncovered no evidence of any backdoors.

An elaborate hoax perpetrated by hackers

This theory is especially popular among TrueCrypt loyalists, who have a hard time accepting the possibility that they're being abandoned by a developer team they have admired and relied on for more than a decade. The idea is that the truecrypt.org site and the TrueCrypt page on SourceForge were both hijacked, along with the private encryption keys used to certify that TrueCrypt releases were authentic. The hackers then used the pilfered assets to perpetrate one of the biggest hoaxes the crypto and security world has ever seen.

Pros: Pretty much the same as the pros for the NSL canary theory. That is, the logic in Wednesday's advisory is too absurd to be taken at face value. A better explanation, proponents hold, is that the post and the signed TrueCrypt update are the work of pranksters who successfully hacked the developers.

Cons: As of press time, none of the developers have come forward to say they were hacked or to warn TrueCrypt users of a key compromise that, if real, could be accurately described as ruinous for users. Further, representatives of SourceForge have reported receiving no complaints of a hack from the TrueCrypt project team and seeing no indication that the account has been compromised. Aside from the lack of logic in the post and the bizarre exit it implies, there seems to be no support at all for this theory. It rests solely on the highly subjective intuition of people who can find no other explanation.

The developers found a vulnerability that was too onerous to fix

There are few code bases more complex than those that carry out sophisticated cryptographic functions. It's easy for bugs to creep in that can completely undermine security. This theory holds that the developers discovered a catastrophic flaw that couldn't easily be fixed. Rather than tipping off eavesdroppers by disclosing the bug's existence, the developers simply walked away.

Pros: Like the pros above, this theory helps explain some of the more counterintuitive portions of the post and exit strategy. And given how hard it is to do crypto securely, it's entirely plausible.

Cons: Like the other theories, it's based entirely on conjecture.

The developers simply threw in the towel

Ars reader Marlor posted a competing theory. It may not be as sexy as many of the others, but based on what's known at the moment, it's at least as plausible. Simply put, it holds that after 10 years of slinging code and with more than two years since their last update, the developers behind this rag-tag crypto project called it quits.

"I can't comprehend the conspiracy theories flying around about this," Marlor wrote. Later, the reader added: "If I developed a piece of security software, and wanted to cease development, I'd make a similar statement. 'Don't use this anymore. It's not maintained, and should therefore be considered insecure.' Otherwise, if a vulnerability is discovered, everyone will scream: 'Fix it now! Nobody told us to stop using it!'"

Promoted Comments

With the audit, we're going to have an audited codebase that somebody trusted can branch off and use to pick up where the original devs left off. It should also answer some questions about the status of the project, etc.

So I think everything will be fine, just going to be a bit of work going forward if it needs to be forked.

Well, at this point the alternatives are to fork it or let it go extinct.

The problem is that the license, while permitting forking, may not permit such forks to be used commercially. Also, the auditors have, IIRC, said that the code is kind of a mess. At this point, it might be better to perform a clean-room implementation and place it under a vetted FOSS license.

Its usefulness has been limited in recent years anyway by the lack of support for GPT system partitions. A few months ago, I started looking around for something to replace it for just that reason, but I have yet to find anything. I'm wondering what it is about GPT that makes it so difficult to implement WDE. (Serious question there, though I realize that it may sound a little accusatory.)