
msm1267 (2804139) writes "A cryptanalysis of TrueCrypt will proceed as planned, said organizers of the Open Crypto Audit Project who announced the technical leads of the second phase of the audit and that there will be a crowdsourcing aspect to phase two. The next phase of the audit, which will include an examination of everything including the random number generators, cipher suites, crypto protocols and more, could be wrapped up by the end of the summer."

Where do you get this? When I read the license, it largely reads as less restrictive than GPL 3.0. Section III of the license discusses exactly what is required to create derivative products. Basically, you have to make sure that no one will confuse it with TrueCrypt, you have to make the source available, and you can't change the license.

The only problem I can see with it from the perspective of the people around here is that it wasn't spawned by Stallman.

Insofar as lawyers don't like the wording, it's because it's a bit ambiguous on some fine details; but it's not as restrictive as you seem to think.

Moreover, for the license to actually be a problem someone would have to come forward, establish they actually have copyright standing, and then sue you over making a fork.

So what, realistically, are the risks? That some anonymous devs who shut down the project and advocated that everyone switch to alternative systems will come out of the woodwork to sue you for copyright infringement and 'damages', despite your best efforts to follow their license, which DOES actually allow forking, and for which you wouldn't be charging for copies? There are no profits to sue for, and then there is the acute impossibility of you 'damaging' their interests given that they discontinued the original project and burned it to the ground.

I honestly don't understand the fear. I mean, sure, there is a risk there, but if you incorporate a nonprofit, continue to give it away for free, and retain the terms of the license, the risk is small.

Even if the authors did come out of the woodwork and sue you, so what? Your non-profit shuts down - that's the worst case. Far more likely, you just walk away with little more than a cease and desist and/or a small fine, and that's assuming the court even finds against you, which, given the ambiguity of the license and your attempt to adhere to it as best as possible, isn't all that likely in the first place.

Yet the lawyers say it's 'highly restrictive' and 'dangerous' to anyone who goes near it -- the same lawyers who approved the non-compete clauses that now have Silicon Valley under a class action? Where was their sage advice about risk then?

Then put YOUR ass on the line and do what you suggest. Suggesting other people put their asses on the line for your benefit just means you're a dick.

You seem to be taking this rather personally. Why? vux984 can't make you or me or anyone else do what they don't want to do, even if he does suggest it would be okay to do so. The "dick" accusation is a petty way to state your disagreement.

Suggesting other people take risks and put their asses on the line for your personal benefit IS being a dick. There is nothing I'm "taking personally" about pointing out that obvious fact. Are you socially obtuse enough that you don't recognize a dick move when it happens? Here's another dick move: sleeping with your friend's wife/girlfriend. Would you defend that as well, and claim someone is taking it personally when they point out it's a dick move?

A lot of GNU tools haven't been updated in around two decades yet no one feels like they need to be rewritten.

I was shocked to find out the other day that the cron most Linux distributions use was last updated in 1993.

And I am shocked that people have to reinvent the wheel over and over. Not to mention skipping regression checks and bringing out a new and 'better' version which lacks features. There is a time when 'simple' tools are done and just do their job. IIRC, tcpwrapper is in the same boat and is being dropped.

The TrueCrypt source is also - by most accounts - a huge ungodly mess that hasn't seen a significant update in at least the past two years.

Not seen a significant update in at least two years, check. But huge, ungodly mess? Nah: 4.45 MB uncompressed; subtract 491 kB bitmaps and icons, 902 kB user guide, 117 kB license and readme texts in several versions, 250 kB string localization, and 150 kB resource, project and solution files, and you're talking approximately 2.5 MB of code, divided into several logical directories. I skimmed the main files and they look decently formatted and commented, on the longish side but with plenty of whitespace. I think probably under 100 kLOC total, a lot of it standard cryptographic primitives, installer, GUI and so on. Once you've made sure they don't contain any funny business, the actual logical core seems to be more like 20-30 kLOC, quite manageable for one man to grasp.
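For anyone who wants to sanity-check an estimate like that themselves, a rough line count is easy to script. This is a minimal sketch; the extension list, the non-blank-line heuristic, and the directory name are my assumptions, not anything from the TrueCrypt tree:

```python
# Count non-blank source lines under a directory tree, skipping docs,
# bitmaps, and other assets by extension. Heuristic only: comments and
# code are counted alike.
import os

def count_loc(root: str, exts=(".c", ".h", ".cpp")) -> int:
    total = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    total += sum(1 for line in f if line.strip())
    return total

# e.g. print(count_loc("truecrypt-7.1a-source"))  # hypothetical unpack dir
```

Run against the real source tree, a count in this style is what the "under 100 kLOC" ballpark above is about.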

It's actually just a bit over 110 kLOC, but you were close. The crypto code is mostly very good. The GUI code must have been written by someone else, because it totally sucks, IMO. I was just porting it to wxgtk3.0 today from wxgtk2.8, and of course all the crypto compiled without even a warning, other than some AES code I need to look into. The GUI was a freaking nightmare. They implemented their own string class. How stupid is that? Well, they didn't just implement a string class, but they implemented a directory string class, a filename string class, a "volume" string class, a "volume info" string class, and about a dozen other string classes, most of which don't actually have any useful functionality, and just require all kinds of casting operators. Stupid stupid stupid...

I haven't looked at the firewall between the GUI and crypto code yet. Obviously there's a fuse driver in Linux and I would not expect it to link with the GUI code at all, but I need to check. Given that the crypto code rocks, and the GUI code sucks, it's critical that they be in separate processes. That would be needed in any case, since you can't trust all that GUI library code living in the same process as the crypto core.

The GUI was a freaking nightmare. They implemented their own string class. How stupid is that? Well, they didn't just implement a string class, but they implemented a directory string class, a filename string class, a "volume" string class, a "volume info" string class, and about a dozen other string classes, most of which don't actually have any useful functionality, and just require all kinds of casting operators.

Sounds like the GUI came from a completely different project. Possibly even on a different p

What would you say about those who claim that the deniable encryption doesn't work because the parts of an encrypted volume that hold actual data have lower entropy than the parts that hold the random data? I cannot understand that claim since, as far as I understand it, encryption algorithms such as AES use probabilistic encryption [wikipedia.org] and should have as high entropy as random data. Usually high entropy data is associated with data that is hard to compress (especially when discussing lossy compression of v

From this security analysis [privacy-cd.org], there is a 64K-ish block in the header that is filled with random data in Windows, but encrypted 0's in Linux. There's no simple way to ensure the Windows header is indistinguishable from true random data, but the Linux version should be OK. As for the rest of the unused portion of the volume, I haven't checked the code. If it's using a pseudo-random number generator that isn't cryptographically strong, then it may be distinguishable. However, the entropy argument seems wrong to me. If the unused portion has measurably lower entropy than true random data, then the random number generator in question must have been compromised.

You can ensure that the encrypted data looks random because you are the one encrypting it. You can't, however, ensure that the random data in Windows actually looks random. A string like "monkeys can write" can result from a random source. I mean, monkeys could, theoretically, write all of Shakespeare's works given infinite time. Random doesn't mean it looks random. Random means there is no structure/logic behind it. It can *look* like something with meaning, or not.
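The entropy measurement underlying this exchange can be sketched in a few lines. True random bytes and the output of a sound cipher should both sit essentially at the 8.0 bits-per-byte maximum, which is why a measurably lower-entropy "data" region would itself point to a broken RNG or cipher rather than to deniability failing:

```python
# Shannon entropy of a buffer, in bits per byte (max 8.0 for byte data).
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    n = len(data)
    counts = Counter(data)
    # + 0.0 normalizes the -0.0 that a single-symbol input would produce
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) + 0.0

print(entropy_bits_per_byte(os.urandom(1 << 16)))  # very close to 8.0
print(entropy_bits_per_byte(b"A" * 4096))          # 0.0: one symbol, no entropy
```

Note the caveat from the thread: a short sample *can* look non-random by chance, so a single low reading on a small block proves little.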

Just because something hasn't been updated doesn't automatically mean it's broken. Everyone's hopped onto this nonsensical upgrade treadmill. Software doesn't 'wear out.' If it's buggy, it will stay buggy. If it's working, it will stay working.

As far as supported vs unsupported software goes, you should be assuming your system can be compromised and planning accordingly anyway, whether you get updates or not.

Just because something hasn't been updated doesn't automatically mean it's broken. Everyone's hopped onto this nonsensical upgrade treadmill. Software doesn't 'wear out.' If it's buggy, it will stay buggy. If it's working, it will stay working.

As far as supported vs unsupported software goes, you should be assuming your system can be compromised and planning accordingly anyway, whether you get updates or not.

That's true for something like an ASCII text editor where the requirements are dead simple. However when encryption, and in particular fancy-tricks encryption like deniability are part of the requirements, you bet your ass that problems will appear out of nowhere. Humans make mistakes, and humans make software, so humans make software with mistakes. Just because it passed every practical review and test the first time around, doesn't make it future-proof. With the source code and enough time, someone wi

I take it to mean crowdsourcing the attempts to verify the integrity of TrueCrypt.

White said the next phase of the cryptanalysis, which will include an examination of everything including the random number generators, cipher suites, crypto protocols and more could be wrapped up by the end of the summer. Some of the work, White said, could be crowdsourced following a model used by Matasano, known as the Matasano Crypto Challenges. The now-defunct challenges were a set of more than 40 exercises demonstrating

Sorry. It is hard to convey in written words, but the emphasis in my sentence is on the word 'rebuild'. I would rather they rebuild it open source vs closed source. Since I am implying a rewrite, it would be their prerogative.

If the TrueCrypt devs really gave up because they think it is pointless, then they should open source the code (BSD, Apache2, GPL, MIT). There is no reason not to, unless they had contributors who passed away.

So finally, was the duress canary activated or not? If it is "still there" according to that tweet, that should mean it was not activated.

I think by now things are clear enough: The alternatives immediately after it happened were defacement or canary. As a defacement would have been cleaned up by now, it has to be canary. And yes, the developers would go to prison if they made that really clear, so a minimum of independent intelligence is required to see it.

If it is a NSA/NSL canary, then the devs are restricted in what they can say about why they are abandoning the project. The logical choice, and the easiest lie to remember, is that "we are just tired of developing it."

Which, unfortunately, is also the same exact thing they would say if they were just giving up on developing it. So the only real clues are the content of the current web page, and the changes made to the new 7.2 TrueCrypt. That they suggest using BitLocker without a TPM chip (I never thought I'd be suggesting the use of a pre-made TPM chip; honest) and that the solution involves upgrading to the pro version of Windows . . . it doesn't pass the smell test. Serious crypto guys wouldn't suggest those tools when drunk, much less just because they are quitting.

That they suggest using BitLocker without a TPM chip (I never thought I'd be suggesting the use of a pre-made TPM chip; honest) and that the solution involves upgrading to the pro version of windows . . . it doesn't pass the smell test. Serious crypto guys wouldn't suggest those tools when drunk, much less just because they are quitting.

Indeed. Or the fact that for OS X, they give "encryption: none" as a selection. Or the slap-dash look. (My guess is the look is specifically meant to suggest a defacement in order to get maximum press exposure and make the message even clearer when, a few days later, it becomes clear it is not a defacement.)

All quite clear, but the reasoning required seems to exceed what effective intelligence some people have.

It looks more and more like a not-too-pretty negative canary. Like a website self-defacement automatically triggered if a number of people fail to do some things regularly. Really open-sourcing things takes work. The ridiculous travesty of the original website can be put up automatically.

TrueCrypt's source code is based on the earlier tool, Encryption For The Masses (E4M) [1997] by Paul LeRoux, who abandoned it in 2000 when he joined SecurStar to make the closed-source DriveCrypt with Shaun Hollingworth (who wrote a predecessor, Scramdisk). That's why the licence looks the (horrible) way it looks; it's an update of the E4M licence.

When the TrueCrypt Team released the first version of their fork, the project lead David Tesarik got a whole bunch of nastygrams from a manager at SecurStar who alleged Paul LeRoux had stolen E4M from them and open-sourced it without their permission: https://groups.google.com/forum/#!topic/alt.security.scramdisk/HYa8Wb_4acs

Which was complete bullshit, of course, as E4M had been opened years before SecurStar existed and they themselves published it on their website under the E4M licence, so nothing actually came of it - except 9x support was removed because it used Shaun's 'Scramdisk' driver, and he hadn't given permission to distribute with E4M if the name was changed, hence 1.0a.

Wouldn't be surprised if there was a Slashdot article about it. Peter Gutmann suggested it'd be right up /.'s alley. :) /akr

According to Ken Thompson, if you don't also analyze all the tools involved in the software build and load process at the machine code level, you still can't really trust the code [bell-labs.com]. That means compilers, linkers, loaders, etc. Someone who knows what they are doing, and has enough motivation to go through the effort, could insert code into a compiler that does whatever they want when your code is built with it, and hides itself at the source level.

You just might want to look at 'Diverse Double-Compiling' as a method of countering the attack described by Ken Thompson in 'Reflections on Trusting Trust'. A paper on DDC is at http://www.acsa-admin.org/2005... [acsa-admin.org]

It doesn't do anything about the same issue with linkers, the OS's executable loader, your CPU, etc. I suppose you could also try to apply the same concept to them, but then you get to my next issue...

If your problem is that you don't know if you can trust your compilers, a solution that starts with "first, go get a trusted compiler" is kind of an infinitely recursive solution.

It does address the issues you mentioned. As for the tool chain (compiler, linker, loader, etc), that is addressed by making them diverse. The term 'compile' means the entire chain from source to binary which includes the entire tool chain. As for the CPU issue, there's nothing in the source that mandates that you must create a binary for the same CPU as you're executing on. So do DDC on multiple CPU families (Intel, ARM, PPC, etc) and compare the final results. And the beauty of DDC is you can do it even i

And for compiling something like a basic C compiler, one could feasibly write their own using ASM from a base of something like CC500 (a 600ish line C compiler). Use said custom compiler to build something like PintOS (full code review possible by one person, I had to do so in collegiate OS courses) on a micro that is running nothing but your compiler from a RS232 port that you are monitoring with a logic analyzer (to watch out for stray data from the 'host' computer at this point). This gets you up to OS a

The OS, loader and CPU are minor issues, as they do not have the power to analyze your code. And getting a compiler you trust is simple: Write it yourself. Unless the compiler you use to do that was specifically designed to attack your compiler, it will be ineffective.

Incidentally, the risk of this attack actually happening in practice is very low, as it is exceedingly difficult to implement and as soon as it has bugs the risk of discovery is pretty big.

The OS, loader and CPU are minor issues, as they do not have the power to analyze your code.

You mean are not supposed to have. You are being awfully trusting here of un-analyzed code.

And getting a compiler you trust is simple: Write it yourself.

In what universe is that simple? Writing a functional C compiler takes on the order of man-decades. C++ is a factor of 10 longer [stackoverflow.com]. For a large team, you'd have to somehow be able to trust your entire team (and your network security!) for that entire time. For a single person, it would be a lifetime's work (or more).

Why not? Assume, for discussion, a malicious compiler. It looks for common code used in encryption and changes parts of the code (see Reflections on Trusting Trust). Identifying the keys should not be that hard with known algorithms, so go for that. Then just replace all keys with 0xDEADBEEF or another known pattern of bits. Voila: encrypted data that can be opened only with code compiled via the corrupt compiler, or by the attacker who knows what bit pattern was used.

Simple: The result of the encryption has to be bit-identical for things to work. Attacks on crypto by corrupting the cipher are not practical for widely distributed software. And even if you corrupt the cipher, it has to be bijective in order to work at all. Not easy.
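The bijectivity point is worth illustrating. A Feistel network is a bijection on its block space by construction, even when its round function is arbitrary and non-invertible, which is exactly why block ciphers built this way can always be decrypted. A minimal sketch (the round function, keys, and 32-bit block size are placeholders, not real crypto):

```python
# Toy 32-bit Feistel network: encrypt/decrypt are exact inverses regardless
# of what round_f does, because each round only XORs one half with a value
# computed from the other half.
def round_f(half: int, key: int) -> int:
    # Arbitrary, non-invertible mixing; bijectivity does not depend on it.
    return (half * 2654435761 + key) & 0xFFFF

def feistel_encrypt(block: int, keys) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in keys:
        left, right = right, left ^ round_f(right, k)
    return (left << 16) | right

def feistel_decrypt(block: int, keys) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in reversed(keys):
        left, right = right ^ round_f(left, k), left
    return (left << 16) | right

keys = [0xBEEF, 0xCAFE, 0x1234]
ct = feistel_encrypt(0xDEADBEEF, keys)
print(hex(feistel_decrypt(ct, keys)))  # 0xdeadbeef round-trips exactly
```

The structural point: a corrupted cipher either remains a bijection (and produces wrong, hence bit-detectable, output) or stops being one (and decryption visibly breaks), so silent tampering is hard.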

Ah, I see. That is not how it works in any sane design. In a sane design, the password is unknown to the container and protects a master key that has been generated in a cryptographically strong fashion. In a sane design, there also is no space where you could put a copy of the master key or password "protected" in a fashion you describe.

Of course, bitLocker may just do that, exactly as you describe, and not be a sane design. One more reason to insist that crypto-software is open, the metadata and design is

It's perfectly sane if you're the NSA or affiliated with them, not so sane if you are using products they've tampered with.

The point with the compile chain/tool is that the compiler can be modified to build in exactly that kind of feature (there's an example from Bell Labs, I think, that did something very similar, since C compilers are compiled by previous versions of themselves).

It's far more ubiquitous than it should be; for example, these guys http://www.phoenixintelligence... [phoenixintelligence.com] have a ton of hardware installed at micro

He is wrong and it was only ever a strong hypothesis on his part. Newer research shows that it is a lot easier to build in a way that excludes compiler backdoors: http://www.dwheeler.com/trusti... [dwheeler.com]

The idea is fascinating. It basically says if you have a really crappy and simple compiler that can compile your code and that you can trust, you can propagate that trust on a really good and complicated compiler. Writing a crappy and simple C compiler can be done in a few weeks.
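That trust propagation can be sketched as a toy model, under the simplifying assumption that a correct compiler's output *semantics* depend only on the source, while the output *bytes* depend on the source plus the behavior of the compiler doing the compiling. All names here are illustrative, not from the DDC paper:

```python
# Toy model of Diverse Double-Compiling (DDC). A "binary" is a dict: "sem"
# is the behavior it implements, "bytes" stands in for its machine code.
def compile_src(compiler: dict, source: str) -> dict:
    # A correct compiler translates the source faithfully.
    return {"sem": source, "bytes": hash((compiler["sem"], source))}

a_src = "source code of compiler A"                 # published source of A
trusted = {"sem": "independent compiler cT", "bytes": 0}

# The distributed, self-hosted binary of A in the honest case...
honest_a = {"sem": a_src, "bytes": hash((a_src, a_src))}
# ...and a Trusting-Trust-style backdoored build of the very same source.
evil_sem = a_src + " + hidden backdoor"
evil_a = {"sem": evil_sem, "bytes": hash((evil_sem, a_src))}

stage1 = compile_src(trusted, a_src)  # build A's source with the trusted compiler
stage2 = compile_src(stage1, a_src)   # rebuild A's source with the stage-1 result

print(stage2["bytes"] == honest_a["bytes"])  # True: binary matches its source
print(stage2["bytes"] == evil_a["bytes"])    # False: backdoor is detected
```

The crappy trusted compiler only has to be *correct*, not fast or good: stage1 behaves like A, so stage2 comes out bit-identical to an honest self-hosted A.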

He is wrong and it was only ever a strong hypothesis on his part. Newer research shows that it is a lot easier to build in a way that excludes compiler backdoors: http://www.dwheeler.com/trusti... [dwheeler.com]

The idea is fascinating. It basically says if you have a really crappy and simple compiler that can compile your code and that you can trust, you can propagate that trust on a really good and complicated compiler. Writing a crappy and simple C compiler can be done in a few weeks.

The person I replied to said it could be done in a few weeks. What you suggest takes months to years. And front-panel switches for RAM? WTF are you even talking about? Why would you even need to do that if you've already done everything by hand?

"compile by hand" and "assemble by hand" means "write out the results on paper".

After that, you have to get the machine code into core. That's what the front panel [wikipedia.org] is for.

Is this somehow new to you? Are you really that young, and that unfamiliar with computing history?

Of course, if you have a functional operating system you think you can trust, you can poke the machine code into a file using a binary editor (that you think you can trust), and then execute that file as the compiler.

In the DDC technique, source code is compiled twice: once with a second (trusted) compiler (using the source code of the compiler's parent), and then the compiler source code is compiled using the result of the first compilation. If the result is bit-for-bit identical with the untrusted executable, then the source code accurately represents the executable.

Is this saying "write your own compiler, then use it to compile GCC, then use that to compile GCC"? I.e., the normal process for bootstrapping GCC to a new architecture?

Meh, better be running those compiles on a completely trusted OS (which you built how?), on a completely trusted processor (the masks were checked how?). I guess it's a good idea, since the more diverse you go on platforms, the more likely you'd be to find one that's trus

It is not gibberish, the thing is just complicated. Look into the thesis to really understand what is going on. As he also has a formal proof that this works, the level of confidence I have in it is very high.

"Gibberish" is a bit much. But the GP is exactly right on all other counts. What is being proposed is a ridiculous amount of work, relies on perfect security thereafter, and only addresses a single vector of many that were mentioned in Thompson's talk.
It reads a lot more like a "go back to sleep, all is well" paper than an actual practical solution. If I'm wrong, then surely people are doing what he suggested right now. So who are they?

Have a look at this one. It is a bit different. We are not talking about code verification here. This is a proof that the approach works, including a trace from a proof-checker and all steps. Can be verified by anybody competent in maybe a week or so.

The program (at 7.1a) is still completely useful for an individual or business to scramble personal/business records, in case the computer is lost or stolen, or the overnight cleaning lady is snoopy, etc.

This is what we are seeing in the field. A number of large financial institutions and government organizations who we deal with on a regular basis have already told us that they are no longer going to use TrueCrypt.

Most of them are moving towards SecureZip from PKware because it supports AES-256 and is FIPS 140 compliant. Others seem to be okay with 7Zip's "encrypted zip" feature (also AES-256). Others are looking at random packages that I have never heard of before last week, like BestCrypt. Of course there are others who want to go with Symantec's PGP.

This has proven to be a major pain in the ass. For all of its warts, TrueCrypt was the de facto standard for secure data exchange. Now we are seeing a Balkanization of encryption software, and organizations are moving in different directions.

Personally I think that TrueCrypt is good enough for transferring data on an external USB drive and protecting it against accidental or intentional theft (by anyone other than the NSA). However it is going to be impossible to convince others of that, and I cannot state it with 100% certainty so I am not even trying to have that conversation within the business context.

As long as Client X is demanding encryption tool Z, that is fine. We will use that tool and let them shoulder the risk. After all, they are telling us what to use, not the other way around.

Why would these organizations switch to unknowns? If they trusted truecrypt up to this point, why not continue to trust? These closed source applications could be backdoored and there's no way of really finding out. If you think source auditing is difficult, try auditing a binary.

It was never possible to trust truecrypt or anything else with 100% certainty.

BestCrypt is made by Jetico, a Finnish crypto software/hardware company that's been around since the early 90's. Their OTFE is top notch, and the Linux version is full-featured with a GUI. Both binary and source code packages for Linux can be downloaded for free, though they don't advertise it. In fact, BestCrypt was used in the Bill Clinton White House. Check them out: www.jetico.com

So what if there is? Assuming that your organization did audit 7.1 and found no problem, what makes it a risk now? Sure, you wouldn't want to migrate to 7.2 in a year's time, and any fork from 7.1 would require a new audit; but I would hope that if you put that much effort into it, you would audit 7.2 internally, or any further fork version as well, which would leave you with either a 'this is clean' or 'this is fishy' answer.

I don't doubt that many large organizations are looking at directions to migrat