
The first phase of the audit focused on the TrueCrypt bootloader and Windows kernel driver; architecture and code reviews were performed, as well as penetration tests including fuzzing interfaces, said Kenneth White, senior security engineer at Social & Scientific Systems. The second phase of the audit will look at whether the various encryption cipher suites, random number generators and critical key algorithms have been implemented correctly.

Is there something preventing an audit elsewhere? Is it illegal to send the source code overseas? And how are these audits done? There aren't a lot of details in TFA. Is it like a big Wiki where anybody can look at the code and report what they find, or are the auditors vetted, with specific sections assigned to them?

I'm asking seriously. I'm not a developer, so I don't know. But I worry about security and snooping.

Nothing to stop anyone anywhere from looking. And I don't see how an "NSA letter," even to someone in the USA, would stop them from exercising their first amendment rights and writing whatever they wanted, or from adding comments to the code and posting them somewhere, etc.

Why does it rule those countries out? If you perform your audits in all 3 countries, the only things being missed are backdoors put in by all 3 security agencies cooperating together. Add in China or some other "we don't like the USA" country and you get better odds.

Tell me this: if the NSA did put a backdoor in the package and if this audit found it, how would the NSA know about it in time to prevent it being reported? Sending a security letter to the auditors would just be considered proof that there was a backdoor to be hidden. The auditors may have been forced not to reveal anything about it to the general public, but you can bet that the people over at TrueCrypt would have found out about it and eliminated it as soon as possible, although they'd probably have had to pretend that they found the flaw themselves to protect both themselves and the auditors.

The NSA was _able_ to put in back doors. According to the report, the build environments were not safe enough or well enough controlled, or verified, to _prevent_ back doors. Given the NSA's strong interest in having one, and their level of skill, I'm afraid I'd have to assume that they did, indeed, create one. Whether a system at risk of such a back door is good enough for personal or even business use is something you'd have to decide on a personal basis.

NSA letters, if my occasional skimming on the topic is correct, are gag orders about themselves as well. There are apparently ways to respond legally in public to these without revealing one has been received, but it involves not talking rather than talking.

Technically, if an NSA backdoor existed in the codebase, you would be prevented from reporting it by an NSA letter, subject to immediate imprisonment and confiscation.

Two responses.

First, I suspect that if they were confronted with an NSL they could go the Lavabit route and simply suspend the audit project with no explanation. IANAL, but I don't think the NSA can compel them to falsify the audit results.

Second, if they are smart, they can have it audited multi-nationally with independent auditors to make it harder for any government gag orders to stick.

The problem with the NSA is we have no idea what their capabilities are, technologically or legally. They are clearly violating the constitution already, and there seems to be no one willing or able to stop them. So if they did come to you with an NSL, no matter how ridiculous or unconstitutional it was, what choice would you have? You could go to the media, but how embedded in the media are they? Do they have standing NSLs with all the media organizations out there? You could go outside the country, but those newspapers are governed by their own country's version of the NSA, which works in close relationship with ours. This really is a global totalitarian secret police state. They haven't started herding people into camps or anything, but really... what's to stop them?

Do they have standing NSLs with all the media organizations out there?

I think there'd be less Snowden leak coverage if there were. :)

You could go outside the country, but those newspapers are governed by their own country's version of the NSA, which works in close relationship with ours

Like China & Russia? Governments want their own security as much as their own intelligence agencies want to break it... there are too many pieces moving in opposite directions for there to be a credible global coverup of a transparent audit of open source software.

No, you think you've found a possible security hole and you email your friend Mike and ask him to look it over and see what he thinks. The NSA intercepts the email, and immediately sends you the security letter.

Since Snowden's revelation about the NSA's clandestine $10 million contract with RSA, I hope that, as well as checking that the code implements some known encryption algorithm properly, they also confirm that the algorithm itself is mathematically unadulterated (by the NSA or whoever).

Since Snowden's revelation about the NSA's clandestine $10 million contract with RSA,

If you're on NSA's radar you've got bigger problems than TrueCrypt's trustworthiness or lack thereof. The NSA doesn't have to have a back door into AES (or the other algorithms) when they have an arsenal of zero day exploits, side channel attacks, social engineering, and TEMPEST techniques at their disposal. The average user should be far more concerned about these attack vectors (from any source, not just NSA) than the security of the underlying encryption algorithm.

The Diceware FAQ [std.com] sums up the problem rather succinctly: "Of course, if you are worried about an organization that can break a seven word passphrase in order to read your e-mail, there are a number of other issues you should be concerned with -- such as how well you pay the team of armed guards that are protecting your computer 24 hours a day."

Oh hell, they'll just sneak into your home in the middle of the night and plant a hardware bug or key logger into your computer.

One favorite tactic used by law enforcement is to install cameras in your residence facing where you normally use your computer. They got a child pornographer this way; his use of TrueCrypt didn't help, because they had video of him entering the password and simply entered it once they seized the computer.

TrueCrypt cannot reasonably protect you from law enforcement or state-sponsored spying like the NSA's. It might protect you from some non-tech police agency in some shit hole country being able to access it, but then they just use the standard non-tech password extraction method.

There was, that's the entire point. You can't win against the state. The state can take action by force; the warrant is a check on that system, but regardless, no matter what you do and the technical precautions you take, the state, if patient and cautious, can easily acquire the information to breach those protections. It can range from the camera put in your house to the $5 wrench. Those advocating for TrueCrypt to protect you from the state are simply wrong that it can protect you.

Well, here's the thing. They had enough on the guy to get the warrant to plant the camera. Without encryption (or, in the case of Heartbleed, with broken encryption), they can likely find ways to snarf all that information without a warrant, in which case it could (more easily) become a case of "find people fitting profiles we don't like, then sift through all this information and look for something that sticks"

In case you've been living under a rock for the last year, the target of the NSA is everyone. Not that they put you on the same level as the Chinese military, of course, but nobody's off their radar, and if they can grab your data or metadata easily they will, because you could be a terrorist or at least the friend of a friend of a friend of a terrorist. It's not that the average joe would stand a chance if they threw everything in their arsenal at us, but those "zero day exploits, side channel attacks, social engineering, and TEMPEST techniques" don't come free, and using them greatly increases the chances of exposing them. The question is more like "Does the NSA grab all the TrueCrypt containers used as backup on Dropbox/GDrive/whatever and rifle through everyone's data?" than "If the NSA really wants the contents of my laptop, would this really stop them?"

They are doing a dragnet, if you become a person of interest... THEN they have this big collection of data on you to use, but before that, you're just another random datapoint that they aren't expending resources on... or wasting their precious exploits on.

The metadata argument wears thin on me. If my phone number is two or three levels removed from a terrorist, I really don't see why it's objectionable for the Government to take a cursory look at my call logs. They'll quickly find that I'm a rather boring sort, whose connection with the terrorist was likely limited to ordering the same take-out, and my privacy isn't significantly impacted by having someone review my call logs after obtaining a court order.

And when somebody at NSA examines or leaks your metadata, and your wife finds out about the emails to your mistress, or your employer finds out about the emails to a competitor about possibly taking a job there, or somebody finds out about your emails to a $DISEASE support group, or your fondness for Albanian furry porn, no matter how legal, you may have problems, or at least embarrassment.

If your call logs are actually secure barring a court order, you're fine. If they leak (something like LOVEINT or a

Snowden basically walked out of the NSA with all their secrets; who's to say a few dozen or hundred other contractors didn't do the same thing before him? Everything the NSA knew or had access to before 2013 was most likely available in blackhat circles through clandestine leaks.

Any backdoors in TrueCrypt would be a security disaster, and the NSA has already proven itself willing and able to put backdoors in highly trusted security software. It's also proven itself incapable of keeping secrets.

Yes, but who will audit the audit? Because it is open source we can meta-audit, much like how Slashdot meta-moderates. Otherwise the audit would be useless to us, much like a corporation paying for an audit of itself and presenting that to the public as proof of its good work.

Open source is only useful if someone looks AND has the skills to understand it.

Just being open source doesn't mean dick and you fanboys really should get that through your head. You all stand around waxing on about how 'many eyes' see it... assuming SOMEONE ELSE is looking... and no one actually is because... because... 'its open source! anyone can look!!@$!@%!@%&'

When are you guys going to actually come back to reality. OSS is great fo

The first phase of the audit focused on the TrueCrypt bootloader and Windows kernel driver. Not really surprising that they didn't find any critical security issues in those parts. The high value bugs should be in the crypto parts and how they are implemented.

The crypto is implemented in the driver, as well as the bootloader. The application known as TrueCrypt just flips configuration bits around, loads keys into RAM, and tells the driver when to mount/dismount containers, etc. The bootloader needs to know enough to mount the system partition and hook into the BIOS so that the regular OS bootloader can take over using its normal calls. Once it loads the kernel and related drivers, truecrypt.sys takes over handling container IO.

The separate formatting utility probably contains some too, since it's used to create containers.

What? I should probably assume you are joking, but in case you are not:

This is a stupid statement. If someone is American and they have a bank account in another country, both are able to be true. They are an American with an offshore bank account. Similarly, just because the NSA is American and has impacted the concept of security does not mean that Americans cannot evaluate or produce secure code. And just to be more antagonizing than you are being, guess what? You used 'American' and 'security' in the

I've been coding in C a long time, and one of the medium-severity faults makes no sense to me: "Windows kernel driver uses memset() to clear sensitive data". The reasoning they give is: "...However, in a handful of places, memset() is used to clear potentially sensitive data. Calls to memset() run the risk of being optimized out by the compiler."

WTF?!? I suppose a smart compiler can optimize out a memset() if it's directly preceded by a calloc() or something, but I have never had any compiler just ignore my request to memset(). What am I missing here?

Great article. Including the OpenSSL bug(s) he pointed out... I was expecting something esoteric, but it turned out to be really straightforward, i.e. the type of error you make at 2am: taking the size of the pointer instead of the actual size of the buffer.

Say you store a password in a memory buffer. Use it. Then overwrite it with a call to memset(). If this buffer is never used again, a compiler may think this is a wasted write and optimise out the call to memset().
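The pattern at issue can be sketched in a few lines of C (names here are illustrative, not actual TrueCrypt code):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only. The buffer is filled, used, then scrubbed with
 * plain memset(). Because nothing reads `password` afterwards, the
 * store is "dead", and an optimizing compiler is allowed to delete it
 * under the as-if rule. */
void handle_secret(void)
{
    char password[64];
    snprintf(password, sizeof password, "hunter2");
    /* ... derive keys, authenticate, etc. ... */
    memset(password, 0, sizeof password);  /* candidate for dead-store elimination */
}

/* By contrast, when the zeroed bytes are observable by the caller, the
 * compiler must keep the write. */
void observable_scrub(char *buf, size_t len)
{
    memset(buf, 0, len);
}
```

Whether the first memset() actually survives depends on the compiler and optimization level; the audit's point is that nothing in the C standard forces it to survive.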

If you call memset on some allocated memory and then free that memory, what (apart from clearing sensitive data from physical RAM) functional difference does removing the call to memset make? None?

The longer the data remains in memory, the wider the window to read it via some other exploit. (Also, anything running as root could potentially access it.) This is precisely what happened with Heartbleed.

But the program performs functionally the same. That's the rule followed when doing compiler optimisations.

memset has nothing to do with Heartbleed by the way, nor does any compiler optimisation.

You also don't guarantee the original data is overwritten. If your application is paged out of RAM before the call to memset, when it gets loaded back into RAM it can be pointing to a different physical memory location. You're now overwriting... something completely different.

But the program performs functionally the same. That's the rule followed when doing compiler optimisations.

memset has nothing to do with Heartbleed by the way, nor does any compiler optimisation.

The program will generate the same output, yes, but the security implications are not the same. This is actually tangentially related to Heartbleed - if the memory had been zeroed when freed, the scope of the exploit would have been greatly reduced, as only currently allocated blocks would have been vulnerable. Furthermore, the most common reason for using custom mallocs in security-critical applications is to do exactly that: zero all memory immediately upon freeing.

This is actually tangentially related to heartbleed - if the memory had been zeroed when freed, the scope of the exploit would have been greatly reduced, as only currently allocated blocks would have been vulnerable

The blocks holding the certificate private key are always allocated, so always vulnerable.

This is completely incorrect. Until it is freed (or realloc'ed), the address returned by malloc will point to the same data, regardless of whether it is in the L1 cache, RAM, or paged to disk. Were this not the case, each program would need to implement its own MMU.

So virtual memory is completely useless, because paging to disk doesn't free up physical RAM for other processes?

There is a SecureZeroMemory() function in the depths of the Win32 API. Its description is rather concise: it overwrites a memory region with zeroes, and is designed in such a way that the compiler never eliminates a call to this function during code optimization.

So don't use memset to zero memory.

There is still the risk that one process reads data from RAM that another process was using, unless the OS zeros out the memory before allocating it. That's so

No idea why the paper talks about the compiler optimizing it out; that's obviously wrong. However, in the next paragraph, it reveals that swap space is the reason. You might, after the page fault and swap-in, initialize the buffer via memset -- however, this doesn't erase the previous data from swap space. Apparently, some "secure" memset-like routine does that.

There seems to be a major trend towards making compilers create code that is as different as possible from what the programmer wrote without being so different that the programmer actually notices. One might assume it's a secret NSA plot to defeat security measures in all software everywhere. You know, if one was incredibly paranoid, that is.

It's hard to say whether this is justified behavior. As an example, consider this code from a link an AC posted [viva64.com]:

That, and memset in Windows doesn't zero by default, as an optimization, until the page is hit (or some such pattern that I don't fully recall)

There's a specific kernel API for zeroing memory because memset, even if called, may choose not to do anything. ZeroMemory is the generic way; SecureZeroMemory removes the 'option' to skip the zeroing and always does it.

Using memset to scrub memory on Windows, then not doing anything with it that requires the memory to actually be in active use

What you're missing is that some compilers get very aggressive about removing code when optimizing. I don't have the C standard here, but the C++ standard says the compiler can do anything as long as it keeps volatile variable access and calls of I/O library routines the same, in the same order. This means that, if you have a chunk of memory and memset() it and nothing of that chunk is referenced for an I/O operation or volatile variable access, it can go.
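A common portable workaround (a sketch in the same spirit as SecureZeroMemory() or C11's optional memset_s(); the function name here is my own) is to write through a pointer to volatile, since volatile accesses are part of the observable behavior the compiler must preserve:

```c
#include <stddef.h>

/* Sketch of a scrub the compiler cannot elide: each store goes through
 * a volatile-qualified pointer, so it counts as observable behavior
 * and survives optimization. Slower than memset(), but that rarely
 * matters for small secrets. */
void secure_memzero(void *p, size_t n)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
        *vp++ = 0;
}
```

This only addresses the dead-store problem; it does nothing about copies of the secret left in registers, swap space, or hibernation files.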

The backdoor is not in the source; it is in the MSVC++ compiler. The NSA is not stupid: putting the backdoor in the source itself would be risky, and it would be much wiser to put the backdoor in the MSVC++ compiler itself.

One way to detect a backdoored compiler to a fairly high certainty is diverse double-compiling [dwheeler.com], a method described by David A. Wheeler that bootstraps a compiler's source code through several other compilers. For example, GCC compiled with (GCC compiled with Visual Studio) should be bit for bit identical to GCC compiled with (GCC compiled with Clang) and to GCC compiled with (GCC compiled with Intel's compiler). But this works only if the compiler's source code is available. So to thwart allegations of a backdoor in Visual Studio, perhaps a better choice is to improve MinGW (GCC for Windows) or Clang for Windows to where it can compile a working copy of TrueCrypt.

We know there's a difference between Windows containers and Linux containers, that being the ~64KB of random data at the end of the header for a Windows container instead of ~64KB of 0's in a Linux container.

This difference is not a result of some difference in the source code of Truecrypt when compiled under Windows. Where could the backdoor be?

I use TrueCrypt. Not that it likely matters given all the other back-doors on my Lenovo Wintel laptop, but I use a passphrase from Hell, and I suspect even the NSA's biggest cracker would have trouble with it.

Other than the backdoors in various places on this toxic waste dump of security, the biggest security threat to my passphrase from Hell is TrueCrypt itself. TrueCrypt by default does 100% useless password strengthening (key stretching, or whatever it's called). Its strongest mode, which you have to

A good strong PBKDF2 is probably sufficient, but yeah, 2k rounds is pathetic. iPhones were doing better (admittedly, their passphrases tend to be very short) several years ago, and that's on a mobile CPU. Having a limit of 2k rounds doesn't even make sense; it's not like it's harder to code for more rounds or something. The only real limit should probably be 0xFFFFFFFF rounds (assuming 32-bit ints), because why have a limit at all?
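To make the iteration-count argument concrete, here is a toy sketch of key stretching. This is NOT a real KDF -- real code should use PBKDF2, scrypt, or Argon2; the mixing step below is just FNV-1a, chosen for brevity. The only point is the shape: every password guess costs the attacker `rounds` times the work of one hash, so ~2,000 rounds multiplies their per-guess cost by ~2,000, and hundreds of thousands of rounds multiply it proportionally more.

```c
#include <stdint.h>

/* Toy key stretching: iterate a cheap mixing function `rounds` times.
 * Illustrative only -- a real implementation would use a vetted KDF. */
uint64_t toy_stretch(const char *password, uint64_t salt, unsigned rounds)
{
    uint64_t h = 1469598103934665603ULL ^ salt;  /* FNV-1a offset basis */
    for (unsigned r = 0; r < rounds; r++) {
        for (const char *p = password; *p; p++) {
            h ^= (uint64_t)(unsigned char)*p;
            h *= 1099511628211ULL;               /* FNV-1a prime */
        }
        h ^= r;                                  /* fold in the round counter */
    }
    return h;
}
```

The attacker's only way to test a guess is to pay for all the rounds, which is why raising the count from 2,000 to, say, 500,000 costs the legitimate user a fraction of a second but multiplies a brute-force campaign's cost 250-fold.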

You should use a keyfile as well as a password. It makes things much harder for an attacker, because something like a hardware keylogger or audio analysis to recover keystrokes can't see which file you selected. When it comes to breaking your key, there is no way to know after the fact that a keyfile was used, so they will probably waste a large amount of time trying a dictionary attack on the password before even realizing that they need to also try any of the 100,000+ files on your computer. That is assuming you used a file on your computer; if it was on an external drive they didn't collect when they grabbed it, they are screwed. Keep a few corrupt USB flash drives around just to make them wonder if they had it but broke it.

True, but if you are that paranoid you can use a VM with the hardfile in an encrypted container on the host OS that is protected by a keyfile.

It's actually a nice way to do it because you can have the host OS as something like a read-only bootable Linux DVD, and use it as an outer layer that somewhat mitigates attacks on the host OS. For example if the host OS was running a VPN/Tor and sending all traffic from the inner host OS over that there would be no way, short of the user making a mistake, for the hos

I just added a keyfile as you suggested. I put it on a couple of USB keys, so I have a backup, and now in theory my encrypted volume can't be mounted without having the physical key. That should greatly increase my passphrase protection, as well as the volume contents (basically a list of all my various user/password credentials at various sites). I'm still running TC in Windows, and several times I've answered "yes" to let various programs make changes to my hard disk, and my machine probably comes with

I don't think you understand what's going on. PBKDF has absolutely nothing to do with 'protecting' your password. It's done because passwords suck ass as encryption keys.

TrueCrypt is taking your password and turning it into something USEFUL as a key for encryption, not 'protecting it'.

Standard passwords are pathetically low on entropy; a full Twitter or SMS post is still not 256 bits of useful entropy, and it's unlikely your passwords are anywhere near that. I admit I don't know your password, but if you're only using the standard character set, I can safely say it's pathetically low on entropy. You need full binary keys generated from good random sources, but you'll never remember those, will you? Imagine trying to type one somewhere.

What the hashing does is take your password and contort it into a larger key that is more useful than whatever pathetic string of text you throw at it. It does so in such a way that, like all hashing processes are supposed to, you can't go backwards, because bits are discarded along the way.

2000 rounds is pretty low, but that's only a small part of the encryption/decryption process. And your password (as I understand TrueCrypt) really just protects a larger private key, which is what is actually used for encryption. It's been a while since I've looked at or used TrueCrypt, so I may be wrong about that last particular bit.

I do write encryption software for a living. And again, it's not about protecting your password or making it harder to guess; it's about turning your crappy password into a useful encryption key, nothing more.

I don't do this for a living, but I'm not totally ignorant about this topic. [password-hashing.net] TrueCrypt does a poor job strengthening passwords. TC's users would be far better protected if TC ran something even as lame as PBKDF2 for a full second, with rounds somewhere in the hundreds of thousands or millions. Not only does TC do a poor job protecting my data, but when an attacker does manage to guess a user's low-entropy password, he can then try that password all over the place to see where else the user has used it. T

Not only does TC do a poor job protecting my data, but when an attacker does manage to guess a user's low-entropy password, he can then try that password all over the place to see where else the user has used it

That's not at all unique to TrueCrypt. If someone guesses a user's password, it's the user's fault they used the same password elsewhere.

Password-strengthening before encryption is not the same as salting & hashing passwords for later authentication, where rainbow tables and "guessing" a password

I'm always amazed at how hard something as simple as password hashing can be. Yes, it's the user's fault for reusing passwords, but we should try to protect him anyway, because it's very common. Part of the job of the computer security industry is protecting stupid people. Improving this situation is one reason for the Password Hashing Competition.

You are right that password strengthening before encryption is a different problem from user authentication, but the solutions tend to be the same. You can u

Yeah, it's the user's fault for not being able to remember 37 different strong passwords which change from time to time. (I haven't counted the passwords I use from time to time, but three dozen feels in the ballpark.) Heck, I can't remember that many, and I use a system, as well as having many low-security accounts on the same password.

If you try a system that approximately nobody seems able to use, after extensive efforts at user education, and the system sounds impractical to everybody who knows some

A passphrase from hell doesn't protect you from a keylogger. It does, however, put the burden on whatever organization hacks your computer to justify why they installed a keylogger since you can demonstrate that the long password couldn't possibly be brute forced. If they try to hide their tracks it is difficult to use a parallel reconstruction to explain away how they got the long password. Just don't ever fall for the trap of thinking you are invulnerable.