Just out of curiosity, since I don't frequent anywhere but Ars that covers communications security:

How secure is TOR really? Has it ever been systematically and thoroughly examined for vulnerabilities? I am familiar with the theory behind it, but not nearly conversant enough in cryptography to understand where any weaknesses lie.

This is an example showing both the good and bad of open-source software I guess. The good is anyone can review the code and find bugs like this. The bad is few people ever do, leaving this bug active for a long time.

This is an example showing both the good and bad of open-source software I guess. The good is anyone can review the code and find bugs like this. The bad is few people ever do, leaving this bug active for a long time.

I think what's particularly egregious here is not simply the incompetence, but that the developers are violating their own tenets of "being radically open" regarding Cryptocat's development. Hopefully their ethics aren't as questionable as they seem, either:

Even though I qualified for their bug bounty, I never got anything. My guess is my bug is too big, since it means that all messages after May 7th, 2012 are crackable. In a comment I was asked for my name, but I have not been added to their bug hunt page. I guess I should have the "t-shirt, sticker, money, and a mention on our Wall of Unquestionable Greatness!" coming sometime, but I haven't heard anything about it.

He did make it to their "Wall of Unquestionable Greatness," so perhaps the bounty situation was also rectified. I guess a t-shirt could suffice for a bug report of this pedigree...

"The bug stems from programming that confused the difference between strings of digits and an array of integers"

Wow, just wow. And there is worse in the "autopsy". Since many of the readers aren't programmers, you might want to put something in the article to suggest just how stupendously bad this programming is. I would suggest the analogy of a mailman who thinks that the zip code is the street address. "Let's see, where the heck is house 93760?" And then not noticing he had a problem for a year.

And worse, from Steve Thomas' autopsy: "... they generate random data by first generating a random floating point number instead of random bits or bytes. I don't know of any legitimate crypto software that does this. They generate a random floating point number by getting 16 random bytes of data with values less than 250 and converting each of them to a single decimal digit."
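To make concrete how much entropy that scheme throws away, here is a minimal sketch (in Python for illustration; Cryptocat itself is JavaScript, and the exact byte-to-digit mapping is my assumption, not their code):

Code:
import math
import os

def broken_random_float():
    # Reconstruction of the scheme Thomas describes, NOT Cryptocat's
    # actual code: build a float from 16 decimal digits, each digit
    # derived from one random byte with value < 250.
    digits = []
    while len(digits) < 16:
        b = os.urandom(1)[0]
        if b < 250:          # 250 = 25 * 10, so b % 10 stays uniform
            digits.append(b % 10)
    return float("0." + "".join(str(d) for d in digits))

# 16 raw random bytes carry 16 * 8 = 128 bits of entropy;
# 16 decimal digits carry only 16 * log2(10), about 53 bits.
print(16 * math.log2(10))    # ~53.15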

Moral(s) of the story:
- If you aren't a security expert, don't write security code. And even then, only if absolutely necessary. Touching any piece of security code at all is basically like performing brain surgery. If you do it for less than absolute necessity, you're a moron.
- Don't rely on your own opinion to determine whether you are a security expert.
- If you aren't a really, really good programmer, don't write any kind of code upon which the lives or safety of people depend. Caring about it, and wanting to do it, are irrelevant.
- Don't even begin to think your code is secure until it has had the holy hell tested out of it by talented people truly dedicated to breaking it, who have access to the source code.

And for whoever wrote this code, there is a special moral: don't write any more code, period.

but SSL is vulnerable to a variety of known attacks. That susceptibility is precisely why activists and other privacy-minded people frequently insist on using programs that offer a much stronger level of encryption.

Correct me if I'm wrong, but don't people use TOR because it hides more information than SSL, not because SSL has a "variety of known attacks"? If SSL were as insecure as you imply, it wouldn't be the standard for transferring credit card data on the net.

but SSL is vulnerable to a variety of known attacks. That susceptibility is precisely why activists and other privacy-minded people frequently insist on using programs that offer a much stronger level of encryption.

Correct me if I'm wrong, but don't people use TOR because it hides more information than SSL, not because SSL has a "variety of known attacks"? If SSL were as insecure as you imply, it wouldn't be the standard for transferring credit card data on the net.

Correct me if I'm wrong as well, but the way I read it, they said "SSL" has known attacks, as compared to "TLS", which probably also has known attacks, but maybe fewer. It may be a controversial, and within the context of this article perhaps meaningless, distinction, but my understanding is that SSL has been replaced by TLS, which has fewer known attack vectors.

Colloquial usage seems to be that they are used interchangeably, however, so the way I read it is most likely incorrect. (Note: I ain't no security expert.)
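For what it's worth, the practical upshot of that distinction is that a modern client can simply refuse the legacy SSL protocol versions. A minimal sketch using Python's standard ssl module (3.7+), just to illustrate the idea:

Code:
import ssl

# Client context that rejects SSLv2/SSLv3 and early TLS outright;
# handshakes succeed only against servers speaking TLS 1.2 or newer.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2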

Good, snappy coverage. It'd be nice to have a longer analysis, in layman's terms, of the broader significance of how bad these mistakes were. It might also be worth pointing to the fact that Cryptocat's developer has been running around telling everyone his software's been audited - and there were no bugs found! - to underscore the relative value of some auditing processes.

A minor editorial point: in paragraph 4 you refer to 'Thomas', but only in the subsequent paragraph do you explain who/what this person is. It might be helpful to add the explainer on who he is the first time you introduce his last name to better guide readers through the article. As it stands the structure is jarring.

I think this highlights a bigger concern that people are trusting of "open source" and essentially falling into a false sense of security by assuming that other people will verify that it's secure for them.

I personally worry about this all the time with things like TrueCrypt and 1Password. What sort of validation do we really have that there's no backdoor to these programs, or critical flaws that would completely invalidate them?

I think this highlights a bigger concern that people are trusting of "open source" and essentially falling into a false sense of security by assuming that other people will verify that it's secure for them.

I personally worry about this all the time with things like TrueCrypt and 1Password. What sort of validation do we really have that there's no backdoor to these programs, or critical flaws that would completely invalidate them?

You never have such an assurance, just as you never have the assurance that the company you just bought something from on the Internet isn't off shopping with your credit card number.

It comes down to how reliable people are, and people need to make their own judgements, based on experience, evidence, and logic, about how reliable those companies are.

Bottom line: people need to educate themselves about this kind of stuff, and make informed choices. It's no different than buying or using any other kind of product.

This is an example showing both the good and bad of open-source software I guess.The good is anyone can review the code and find bugs like this.The bad is few people ever do, leaving this bug active for a long time.

The problem with "anyone can review the code" has always been that while it's technically true, the number of people who can actually conduct a meaningful review of complex code is not generally large, and the number of those who actually do is basically zero. There are plenty of other advantages to open source, but yeah.

From what I understand, serious security reviews have to focus more on how code is compiled/executed than just the source anyway, and that's an even more specialized area.

I think this highlights a bigger concern that people are trusting of "open source" and essentially falling into a false sense of security by assuming that other people will verify that it's secure for them.

I personally worry about this all the time with things like TrueCrypt and 1Password. What sort of validation do we really have that there's no backdoor to these programs, or critical flaws that would completely invalidate them?

TrueCrypt is open-source, but has anybody actually reviewed the source code?

Yes. In fact, the source code is constantly being reviewed by many independent researchers and users. We know this because many bugs and several security issues have been discovered by independent researchers (including some well-known ones) while reviewing the source code.

As TrueCrypt is open-source software, independent researchers can verify that the source code does not contain any security flaw or secret 'backdoor'. Can they also verify that the official executable files were built from the published source code and contain no additional code?

Yes, they can. In addition to reviewing the source code, independent researchers can compile the source code and compare the resulting executable files with the official ones. They may find some differences (for example, timestamps or embedded digital signatures) but they can analyze the differences and verify that they do not form malicious code.
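As a toy illustration of that "compile and compare" check, something like the following would locate where two builds differ, so a reviewer can judge whether the differences are benign (timestamps, signatures) or something worse. The file names are hypothetical:

Code:
import hashlib

def differing_offsets(official_path, rebuilt_path):
    # Compare two builds byte-for-byte and report where they differ,
    # so a reviewer can check whether the differences are just
    # timestamps or embedded signatures rather than injected code.
    a = open(official_path, "rb").read()
    b = open(rebuilt_path, "rb").read()
    if hashlib.sha256(a).digest() == hashlib.sha256(b).digest():
        return []  # bit-for-bit identical builds
    return [i for i in range(min(len(a), len(b))) if a[i] != b[i]]

# Hypothetical usage:
# print(differing_offsets("official/truecrypt.exe", "mybuild/truecrypt.exe"))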

I think this highlights a bigger concern that people are trusting of "open source" and essentially falling into a false sense of security by assuming that other people will verify that it's secure for them.

I personally worry about this all the time with things like TrueCrypt and 1Password. What sort of validation do we really have that there's no backdoor to these programs, or critical flaws that would completely invalidate them?

TrueCrypt is open-source, but has anybody actually reviewed the source code?

Yes. In fact, the source code is constantly being reviewed by many independent researchers and users. We know this because many bugs and several security issues have been discovered by independent researchers (including some well-known ones) while reviewing the source code.

As TrueCrypt is open-source software, independent researchers can verify that the source code does not contain any security flaw or secret 'backdoor'. Can they also verify that the official executable files were built from the published source code and contain no additional code?

Yes, they can. In addition to reviewing the source code, independent researchers can compile the source code and compare the resulting executable files with the official ones. They may find some differences (for example, timestamps or embedded digital signatures) but they can analyze the differences and verify that they do not form malicious code.

Weren't the Cryptocat guys essentially saying the exact same thing?

It's still just about a network of trust. Who reviewed the code? How do we know they reviewed it? How do we know they are security experts? How do we know the person who verified the reviewers are actually security experts knows what he's talking about? And so on, and so on.

TrueCrypt is open-source, but has anybody actually reviewed the source code?

Yes. In fact, the source code is constantly being reviewed by many independent researchers and users. We know this because many bugs and several security issues have been discovered by independent researchers (including some well-known ones) while reviewing the source code.

As TrueCrypt is open-source software, independent researchers can verify that the source code does not contain any security flaw or secret 'backdoor'. Can they also verify that the official executable files were built from the published source code and contain no additional code?

Yes, they can. In addition to reviewing the source code, independent researchers can compile the source code and compare the resulting executable files with the official ones. They may find some differences (for example, timestamps or embedded digital signatures) but they can analyze the differences and verify that they do not form malicious code.

I see a lot of "can"s and "may"s, and only a few "do"s.

And to be clear, I'm not calling out TrueCrypt and saying it's a bad program; I'm simply pointing out a more widely known piece of software. I actually figure TrueCrypt gets a lot more scrutiny because of its popularity, to the point that even many of my less tech-literate friends know the name. As for Cryptocat, this is the first time I've ever heard of the program, which makes me wonder how many people are qualified and interested enough to properly review it (as another reader already mentioned).

Another person also pointed out that Cryptocat made the same basic claims of "people submit bugs and we fix it", and "anyone can verify this is legit", yet they still had this arguably MASSIVE flaw.

My point is that there are a LOT of people out there that see "open source security solution" and automatically assume "oh, this is good then. Other people check that it's good, so I don't have to."

What's interesting is that the creator of Cryptocat, Nadim Kobeissi (who has no problem with the media introducing him as "cryptographer" or "security researcher") commented the following last year to journalists when asked about an issue with MEGA's (mega.co.nz) encryption:

“It’s a nice website, but when it comes to cryptography they seem to have no experience,” says Nadim Kobeissi, a 22-year-old cryptographer and creator of the secure chat software Cryptocat, who began poring over the public portions of Mega’s code as soon as it debuted over the weekend. “Quite frankly it felt like I had coded this in 2011 while drunk.” (see: http://www.forbes.com/sites/andygreenbe ... -promises/)

"The bug stems from programming that confused the difference between strings of digits and an array of integers"

Wow, just wow. And there is worse in the "autopsy". Since many of the readers aren't programmers, you might want to put something in the article to suggest just how stupendously bad this programming is. I would suggest the analogy of a mailman who thinks that the zip code is the street address. "Let's see, where the heck is house 93760?" And then not noticing he had a problem for a year.

And worse, from Steve Thomas' autopsy: "... they generate random data by first generating a random floating point number instead of random bits or bytes. I don't know of any legitimate crypto software that does this. They generate a random floating point number by getting 16 random bytes of data with values less than 250 and converting each of them to a single decimal digit."

I agree that it is definitely worth highlighting the magnitude of this problem. I think it's less a "bug" and more a fundamental misunderstanding of how to use a security primitive and probably a basic lack of understanding of programming or security conventions in general.

I like the mailman analogy, since it emphasizes that this isn't the sort of mistake you can imagine being made by someone who really understood the field. A mailman might mistake a '6' for an '8', but he's not going to think your zipcode is the street you live on. Similarly, a crypto programmer seems unlikely to think a function for a 256-bit elliptic curve only needs a key with 54 bits of entropy or that it was normal to pass crypto keys as an ASCII string of digits.
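For the curious, the ~54-bit figure is just arithmetic (my reconstruction, assuming a key made of 16 decimal digits, as the autopsy describes):

H = \log_2\left(10^{16}\right) = 16 \log_2 10 \approx 53.2 \text{ bits}

versus the 256 bits of entropy a private key for a 256-bit curve is supposed to carry.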

Quote:

Moral(s) of the story:
- If you aren't a security expert, don't write security code. And even then, only if absolutely necessary. Touching any piece of security code at all is basically like performing brain surgery. If you do it for less than absolute necessity, you're a moron.
- Don't rely on your own opinion to determine whether you are a security expert.
- If you aren't a really, really good programmer, don't write any kind of code upon which the lives or safety of people depend. Caring about it, and wanting to do it, are irrelevant.
- Don't even begin to think your code is secure until it has had the holy hell tested out of it by talented people truly dedicated to breaking it, who have access to the source code.

And for whoever wrote this code, there is a special moral: don't write any more code, period.

I don't want to be too hard on the programmer here, since everyone has to go through the learning process. But someone's learning process should absolutely not be creating and releasing a cryptographic tool they portray as suitable for sensitive use. IMO, that's incredibly irresponsible.

i have repeatedly said and posted negative things about nadim and the cryptocat project because of exactly what we are seeing here: neither nadim nor the other cryptocat developers have a real background in writing secure code, and they have repeatedly proven that they simply don't care about doing the crypto correctly. i stand by my comments that nadim et al should stop making crypto software, since they have put and continue to put people at risk with their low-quality architecture and coding.

cryptocat 1.x was so broken that it had to be completely re-architected, almost entirely based on comments from people who make well-written crypto software, meaning that the 2.x design is based on listening to other people's criticisms and not on original ideas. this reflects badly on the project both from a "do your own homework" and a threat-model standpoint. doing crypto in js is doomed to fail for more reasons than i care to elaborate on, and not using an existing crypto library that receives demonstrable scrutiny from crypto experts (not just someone who gives a security/crypto audit) is a bad idea. there is a reason openssl is used in many projects, despite it being a total PITA to use.

Just out of curiosity, since I don't frequent anywhere but Ars that covers communications security:

How secure is TOR really? Has it ever been systematically and thoroughly examined for vulnerabilities? I am familiar with the theory behind it, but not nearly conversant enough in cryptography to understand where any weaknesses lie.

There are some weaknesses in TOR, but for the average person it's as good as you are going to get without having to jump through a bunch of hoops, à la Freenet.

The "disputed" amount of time is due to that fact that Cryptocat only acknowledged it being broken from version 2.0.0 to 2.0.42. Even though it was broken 1.1.147 to 2.0.42. Also 19 months includes when they were using a broken implementation of Diffie-Hellman.

As mentioned in my post, "Cryptocat tried PBKDF2, RSA, Diffie-Hellman, and ECC and managed to mess them all up because they used iterations or key sizes less than the minimums."

Since they started Cryptocat, it has only been secure enough for 70 days. This includes the last 32 days. I'm counting days their code has been in Git. The real number is less, since it took 12 days for the Firefox plug-in to get approved.
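To make the "minimums" point above concrete, here is what PBKDF2 looks like when invoked with defensible parameters, using Python's standard hashlib; the iteration count is an illustrative floor, not an official minimum:

Code:
import hashlib
import os

# PBKDF2-HMAC-SHA256 with a random salt and an iteration count high
# enough to make brute force expensive. The failure mode described in
# the autopsy is using far too few iterations (or too-small keys).
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"user passphrase here", salt,
                          100_000, dklen=32)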

This has been bugging me for a long time: how hard would it be to plant a backdoor into some not-so-actively-developed package in one of the more popular Linux distros?

Edit: I do not mean some obvious backdoor but something subtle, just a buffer overflow that allows code execution.

There have been enough spectacular accidental bugs (e.g. the Debian OpenSSL one!) that introducing one intentionally is clearly plausible. That's just the risk you take when you're using third-party software though - open source or not. Modern software projects are far too complex to verify yourself, and even with OSS it's not like you never install binaries (which could conceivably differ from said source!).

This has been bugging me for a long time: how hard would it be to plant a backdoor into some not-so-actively-developed package in one of the more popular Linux distros?

Edit: I do not mean some obvious backdoor but something subtle, just a buffer overflow that allows code execution.

There have been enough spectacular accidental bugs (e.g. the Debian OpenSSL one!) that introducing one intentionally is clearly plausible. That's just the risk you take when you're using third-party software though - open source or not. Modern software projects are far too complex to verify yourself, and even with OSS it's not like you never install binaries (which could conceivably differ from said source!).

Yes, I think it needs to be clear that the OpenSSL project might be garbage, but SSL itself is fine. Crypto++ is a much better library! And .NET is pretty easy to use once you get the hang of their buffers.

"Moral(s) of the story:- If you aren't a security expert don't write security code. And even then, only if absolutely necessary. Touching any piece of security code at all is basically like performing brain surgery. If you do it for less than absolute necessity, you're a moron...

You are not being strict enough. Security experts need to engage in one of two activities only:
1) Create new algorithms/schemes and fix them until their implementations meet their theoretical security properties. This vetting process takes many years and continues until the algorithm fails. This makes you a security researcher.
2) Implement off-the-shelf algorithms using tested implementation methods. In addition, google your particular crypto library to make sure that not only is the algorithm still secure, but that the implementation for your platform has not been found to be defective, and that you are not making any of the published rookie mistakes. This makes you a security expert, given enough experience and a lack of mistakes.
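A sketch of what activity (2) looks like in practice: design nothing yourself and drive a well-reviewed library instead. This uses the third-party Python "cryptography" package, whose Fernet recipe picks the algorithms, key sizes, and authentication for you:

Code:
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # 256 bits, urandom-backed
f = Fernet(key)
token = f.encrypt(b"meet at the usual place")
assert f.decrypt(token) == b"meet at the usual place"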

Unless I'm missing something about how to guess keys, reducing the key space from 2^54 to 2^27 (from Thomas' autopsy mentioned in the article) is emphatically NOT halving the keyspace but reducing it by a factor 2^27, that is 130 MILLION times fewer keys. Going from (random example) 130 million hours to crack to 65 million hours to crack isn't much progress but going from 130 million to ONE hour makes a difference.

The issue is not that SSL might have bugs, but that this bug makes secrecy once again dependent on the host. If the host is broken into, all bets are off. This threat was a driving factor for cryptocat to move from served JS to a browser extension.

As to using crypto tools or Tor: always make sure that their threat model is relevant to your use. Also check if your threat models differ. Eg: "What if someone were to pay cryptocat/hushmail/vpn provider to do X?".

Tor for example contains a vulnerable phase during circuit building against a sufficiently capable adversary controlling the net close to your pc (eg Iran). This has been somewhat amended in the current client (it warns on what it deems "too many" failures).

Maybe the best solution is to recursively encrypt the message with a series of different programs and algorithms. It would be nice to have a standard API to simplify the process.

Absolutely not. The underlying assumption of this is that some of your encryption programs are broken. Security should not be trial-and-error. On a more technical side of things, this would be extremely slow, and would require numerous different keys since you can't assume that an intermediate program won't leak the key.

Maybe the best solution is to recursively encrypt the message with a series of different programs and algorithms. It would be nice to have a standard API to simplify the process.

Absolutely not. The underlying assumption of this is that some of your encryption programs are broken. Security should not be trial-and-error. On a more technical side of things, this would be extremely slow, and would require numerous different keys since you can't assume that an intermediate program won't leak the key.

1. You could use a subset of the available keys/implementations/algorithms for each message.
2. Assuming the worst-case scenario may be the wise thing to do when your life is at risk.
3. Do not use it for videoconferencing.

Unless I'm missing something about how to guess keys, reducing the key space from 2^54 to 2^27 (from Thomas' autopsy mentioned in the article) is emphatically NOT halving the keyspace but reducing it by a factor 2^27, that is 130 MILLION times fewer keys. Going from (random example) 130 million hours to crack to 65 million hours to crack isn't much progress but going from 130 million to ONE hour makes a difference.

I did not notice this when I read it. Nice catch. I think I corrected this in my head when I was reading it. This takes the square root of the key space, which divides the exponent by 2.
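Spelled out, since the distinction trips people up:

\frac{2^{54}}{2^{27}} = 2^{27} = 134{,}217{,}728 \approx 1.34 \times 10^{8}, \qquad \sqrt{2^{54}} = 2^{27}

so the attack gets roughly 134 million times cheaper, not twice as cheap: halving the exponent takes the square root of the key space.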

Maybe the best solution is to recursively encrypt the message with a series of different programs and algorithms. It would be nice to have a standard API to simplify to process.

TrueCrypt already does this - it supports cascaded triple encryption. Three different encryption algorithms can thus be used automatically, which substantially improves security and mitigates the risk of a weakness being found in any individual encryption algorithm.
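A toy sketch of the idea (not TrueCrypt's actual XTS-based cascade): two unrelated ciphers with independent keys, so breaking one algorithm alone is not enough. Uses the Python "cryptography" package; a real cascade also needs careful key and nonce management:

Code:
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

k1 = AESGCM.generate_key(bit_length=256)
k2 = ChaCha20Poly1305.generate_key()
n1, n2 = os.urandom(12), os.urandom(12)

inner = AESGCM(k1).encrypt(n1, b"secret message", None)   # layer 1: AES-GCM
outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)     # layer 2: ChaCha20

# Decrypt by peeling the layers in reverse order.
plain = AESGCM(k1).decrypt(n1, ChaCha20Poly1305(k2).decrypt(n2, outer, None), None)
assert plain == b"secret message"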

There's an implicit assumption in some comments here that Cryptocat was vetted by an open-source community and/or security researchers, and that therefore that community failed. That is incorrect. It has been widely panned as a terrible piece of work for some time; no one was fooled except for some traditional journalists and/or bloggers. If you take the recommendations of WSJ, NYT, or Forbes about security--those are the publications posting pro-Cryptocat fluff pieces--you're going to have a bad time.

If you read through that, you'll realize Cryptocat has been broken since it was introduced. The first version was just fundamentally stupid. Bruce Schneier didn't even look at it himself--he read the existing pieces tearing it apart, agreed it was junk, and said "CryptoCat is moving to a browser plug-in model": https://www.schneier.com/blog/archives/ ... tocat.html

Then they came out with their 2.0, which used a better theoretical model--but one that had this serious implementation bug. At no point, ever, has Cryptocat been recommended by any credible researcher. They have all looked at this thing and laughed at how bad it was from day one. The only people who claimed otherwise were stupid journalists writing clickbait.

Absolutely not. The underlying assumption of this is that some of your encryption programs are broken. Security should not be trial-and-error.

The assumption is actually that any of your encryption programs may be broken, and that is something a prudent person should consider. The underlying model is not so much 'trial-and-error' as 'take nothing for granted.' Your advice is predicated on how things should be, not how they are.

What you mean is a string of decimal digits and an array of bytes. It's actually a byte array, nothing to do with "integers"; a byte equals an integer only if you treat it as a uint8, and that's incidental. The Ars article confused me until I read the original bug report, which is far more accurate. I expected more from an Ars writer.
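The confusion in a nutshell, with made-up values: the same "key" as a decimal string versus as raw bytes. The string can only ever take 10^16 values, the byte array 256^16:

Code:
key_as_string = "4816152342108675"   # 16 ASCII digits: 10**16 possibilities
key_as_bytes = bytes([72, 13, 200, 5, 91, 33, 7, 150,
                      64, 219, 180, 2, 99, 41, 255, 17])  # 256**16 possibilities

print(10 ** 16)    # 10_000_000_000_000_000
print(256 ** 16)   # ~3.4e38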