It was a popular and widely used encryption toolkit similar to Microsoft’s BitLocker and Apple’s FileVault.

The idea is that by encrypting and decrypting data at the operating system level, just before every chunk is written to disk and immediately after it’s read back in, you can’t accidentally miss anything.

Your operating system and temporary files are scrambled; leftover fragments of deleted files are scrambled too; even sectors on the disk that are blank are encrypted so you can’t tell they’re empty.

That’s known as FDE, short for full-disk encryption, and it’s a very handy way of reducing the risk of data leakage if a crook runs off with your laptop, or you leave it in a taxi.

With FDE, it’s no longer possible just to put your hard disk into another computer, or boot up from a recovery CD, and look through the files.
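The key property of FDE is that every sector is encrypted independently, with the sector number mixed into the encryption so that identical plaintext in different sectors produces different ciphertext. (Real products such as TrueCrypt and VeraCrypt use a proper block cipher in XTS mode for this.) Here’s a toy sketch of the idea in Python, using a hash-based keystream purely for illustration – this is emphatically not real cryptography, just a way to see how the sector number feeds into the scrambling:

```python
import hashlib

SECTOR_SIZE = 512  # bytes; a common disk sector size


def _keystream(key: bytes, sector_no: int, length: int) -> bytes:
    # Derive a per-sector keystream from the key plus the sector number,
    # so identical plaintext in different sectors encrypts differently.
    out = b""
    counter = 0
    while len(out) < length:
        block = key + sector_no.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]


def xor_sector(key: bytes, sector_no: int, data: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    ks = _keystream(key, sector_no, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


key = b"\x01" * 32
plain = b"secret data".ljust(SECTOR_SIZE, b"\x00")

enc = xor_sector(key, 7, plain)
assert xor_sector(key, 7, enc) == plain   # decrypting round-trips
assert xor_sector(key, 8, plain) != enc   # the sector number matters
```

Note that even the all-zero padding at the end of the sector comes out scrambled, which is why an FDE disk gives no hint of which sectors are “empty.”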

A puff of mystery

Anyway, TrueCrypt vanished in a puff of mystery just over two years ago when the developers abruptly pulled the plug on the project.

It was the opening words of the shutdown notice that caused the excitement:

WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues.

For all we ever knew, the developers simply decided they’d had enough, or fell out with each other, or realised that if they had to do a full rewrite for the forthcoming Windows 10 they might never escape from the cryptocoding treadmill.

Or perhaps they were forced to shut down by one or another intelligence agency who felt that the product was too strong?

Fast forward two years and a new project called VeraCrypt, another open source FDE toolkit, has arisen from the ashes of TrueCrypt.

Indeed, at the start of August 2016, the VeraCrypt team announced that they were going to get their source code audited.

Inspectable by anyone

Open source encryption products pride themselves on being “inspectable by anyone,” precisely because they’re open source, but the problem is that very few people are properly qualified to do cryptographic audits.

There used to be an adage in open source that “given enough eyeballs, all bugs are shallow”, meaning that someone, somewhere, is bound to spot any problems sooner or later, because the bugs are sitting right there in the code for everyone to see…

…but recent history tells us that’s a myth: some bugs are subtle, or complex, or specialised enough that they stay hidden for years.

Worse still, security holes like backdoors aren’t bugs – they’re programmed in on purpose, so the coders often go to great lengths to hide them.

So, the audit was supposed to increase public trust in VeraCrypt.

Just this week, however, the Open Source Technology Improvement Fund (OSTIF), which gives financial support to VeraCrypt, released an announcement cloaked in almost as much mystery as the posting that terminated TrueCrypt in 2014:

We have now had a total of four email messages disappear without a trace, stemming from multiple independent senders. Not only have the emails not arrived, but there is no trace of the emails in our “sent” folders. In the case of OSTIF, this is the Google Apps business version of Gmail where these sent emails have disappeared.

This suggests that outside actors are attempting to listen in on and/or interfere with the audit process.

We are setting up alternate means of encrypted communications in order to move forward with the audit project.

Interestingly, the article announcing the “breach” is explicitly titled OSTIF, QuarksLab, and VeraCrypt E-mails are Being Intercepted, although in this case, it looks as though the emails are being destroyed.

You’d think that an outside actor who wanted to snoop on what you are up to would intercept non-destructively, by looking at the messages but letting them go anyway.

After all, deleting the messages doesn’t serve much purpose: firstly, it draws attention to the problem; and secondly, it doesn’t really stop the messages from getting through, because the senders can simply transmit them again.

What’s the conspiracy?

As you can imagine, conspiracy theorists are all over this.

Just like last time, however, when TrueCrypt imploded, the explanation might be entirely innocent. (Hands up, anyone who has never lost emails, apparently without trace, at a vital instant.)

So far, however, both VeraCrypt and its auditors seem to be as good as admitting (insisting, even) that someone has unauthorised insider access to their email.

The announcement concludes:

If nation-states are interested in what we are doing we must be doing something right. Right?

That seems a strange leap of logic.

Firstly, we don’t actually yet know whether a nation-state (let alone two or more of them) was involved in the first place.

Secondly, ending up with hackers inside your email accounts is not, in fact, a terribly good sign that you’re doing something right.

I’d prefer to see a conclusion more along the lines of, “We’re going to find out how the breach happened, and tell you how we plan to stop it happening again.”

Great article Paul. I agree – I think they’d have been better off saying that they are going to investigate how the breach took place, rather than gloating about being onto something so good that nation-states want to snoop. I am glad that they’re going down the route of source-code inspection. I seem to recall TrueCrypt had a similar inspection and it came up fruitless (which further confuses things, because their website anagram hinted at the FBI). Anyway, I will also be following this story as it unravels, with some interest!

The whole thing seems strange. It appears to me that they are either trying for a “we’re so awesome that governments are afraid of us, so send us money” type of thing, or else something bad has already happened and they are preparing to blame it on government interference.

Hmmm. That’s a weird sort of “cleared up”. The statements I quoted above were retrieved from the above URL this afternoon. No mention that my guess about “hands up” may be correct. Just continued insistence that there was “interception” 🙂

The italicized sections are too light a shade of gray (#98999A) and are very difficult for me to read. You might try making the italicized fonts a bit darker, e.g. color:#606060; this still provides color separation from the main content, without causing eye strain for users by using a shade so light that it almost blends into the background.

We haven’t ignored the very small number of people who have complained.

We just haven’t changed it (yet, anyway).

The vast majority of readers don’t have a problem with it, and the visibly different grey level sets it off nicely against the body text, without needing to use a shaded background. We used to use “reverse-out” for quoted text, which was easier to read but was visually less clear in the layout because we also use reverse-out for ancillary explanations (what would be a sidebar in a print magazine).

I’ve encountered in person only one reader who found the light grey hard to handle, but her whole screen was hard to read because the brightness and contrast were set all wrongly. We fixed that and everything got better 🙂

May I suggest, if you really can’t read it, that in the meantime you just use your browser’s Reader Mode, which is designed to strip out design details like grey text and custom fonts, precisely for the delight of people who prefer consistently high-contrast text in a standardised layout.

I think it will sort out your problem without forcing us to change the underlying site CSS for everyone else.

“Secondly, ending up with hackers inside your email accounts is not, in fact, a terribly good sign that you’re doing something right.”

Not sure if you mean this in the bad-security sense, the bad-choice-of-communicating-via-Gmail sense, or the they’ve-done-something-bad-that’s-getting-looked-into sense? Regardless, just a couple of bits that might add to the discussion.

I mean, if they have a secure password and 2FA, maybe that’s why they presume it’s a nation-state with backdoor access.

It could also be malware, but what it comes down to is that only they know what was in those emails, and if the contents revealed exploits that need to be patched, you’re going to be suspicious.

Now they just need to start using encrypted email, and none of the backdoored providers.

It seems that the theory advanced by one of the people in the group involved is that there was some SNAFU between the Mail app on OS X (I think) and Google’s mail server, and no need for alarm. In other words, the messages ended up neither sent nor received, if that doesn’t sound rather obvious 🙂

So it may be as simple as “some IMAP weirdness ate my homework,” or something like that.

That doesn’t remove the mystery of how we got to the dramatic claims of interception.

After browsing the comments, it apparently may be “a bit late” for me to throw a “new” conspiracy theory in here… but here’s my two cents (I think you need 50c for a local call from a payphone nowadays – how times change…)
One of the theories behind TrueCrypt’s closure was that it may have been shut down due to government-forced compromise of the code, accompanied by a gag order. If something even vaguely similar were to happen to VeraCrypt, then announcing an unrelated security breach (even if it’s a complete fabrication) could drive people away (for their own safety) and/or make people look a little closer. So: a deliberate lie, in order to keep people safe from code that they can’t openly abandon.
This conspiracy theory stuff is kind of fun I guess, but I think I’ll stick to other entertainment.

One way to say, “Hello world, I’m stupid.” If you are doing research on one of the most widely used pieces of encryption software (claimed to be NSA-proof), Gmail is not the best option. They should check out alternative communication tools, e.g. from prism-break.org.

Only 4 mails?
I am wondering what was in those messages.
Then one could imagine the root cause of their eradication, which would lead to a plausible explanation.
Up to now we don’t have any clue like that, and as such, I am inclined to see it as a new way to get known by creating a conspiracy, but…

OSTIF’s rhetoric points to one of two things: either they know more than they are saying, and write as if everyone already knew what they haven’t explicitly said, OR they are fantasizing on the basis of suspicion. Either way, that doesn’t make impossible what is, or possible what is not. You can be paranoid and still have enemies, to paraphrase Woody Allen.
So how could a simple fellow such as myself have any idea of what is going on, or of what is not going on?

You wouldn’t. And that’s part of the problem. (It’s also why Naked Security exists and spends a lot of time and effort writing complex technical stuff in plain English without jargon. Well, without too much jargon.)

That’s also why I read Naked Security. Articles cannot limit themselves to the evidence; posing the unknowns and exposing the twilight zones is essential indeed. There is always at least one valuable point, a philosophical one: that of squeezing the question by questioning it repeatedly. What I fear is less the objectivity of the journalism, when applicable (it is here), than the hasty conclusions and interpretations of us all, modest and savvy readers alike.

About a decade ago, Charlie Miller, now better known for Jeep hacking, found an exploitable bug in the Samba open source product, from looking at the public source code. Instead of simply reporting it so the Samba guys could fix it, he decided to make money off it by selling it as a zero-day.

IIRC he had it up for sale via a broker for something like a year until he got the price he wanted – somewhere close to $100k, from memory.

In all that time, no one else found the bug – it was only disclosed after the purchaser had finished with it.