Posted
by
Soulskill on Friday November 04, 2011 @11:49AM
from the another-one-bites-the-dust dept.

Orome1 writes "Microsoft, Mozilla and Google have announced that they are revoking trust in Malaysia-based DigiCert, an intermediate certificate authority authorized by well-known CA Entrust, following the issuing of 22 certificates with weak keys, lacking in usage extensions and revocation information. 'There is no indication that any certificates were issued fraudulently, however, these weak keys have allowed some of the certificates to be compromised,' wrote Jerry Bryant of Microsoft's Trustworthy Computing."

In the coming days they will probably revoke other certification authorities with similar names. This case was just the next step. It's a slow process, and finding uppercase letters in the middle of a name doesn't make things any faster.

Both Mozilla and Microsoft made sure to note that there is no relationship between DigiCert Malaysia and Utah-based DigiCert Inc., which is a member of the Windows Root Certificate Program and Mozilla’s root program.

Who in their right mind would generate such a certificate for (presumably) a production system?

Why didn't the CA have some sort of system to detect such short keys?

The CA I use doesn't allow anything shorter than 2048 bits to be signed. While the policy may be a bit strict, as 1024-bit keys still have their uses (there's a lot of hardware that only deals with 1024-bit keys), at least they're erring on the side of caution. I'm sure they're not the only one with such a policy.
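For what it's worth, the gate the grandparent asks about is trivial to implement once the CSR is parsed. A minimal sketch in Python -- the policy floor and function name are hypothetical, and a real CA would of course validate far more than the modulus length:

```python
MIN_RSA_BITS = 2048  # hypothetical policy floor, like the parent's CA uses

def acceptable_modulus(n: int) -> bool:
    """Reject any RSA modulus shorter than the policy floor."""
    return n.bit_length() >= MIN_RSA_BITS

# Stand-ins for real moduli: a 512-bit value like the ones in this
# incident fails the check, while a 2048-bit value passes.
weak = 2**511 + 1
strong = 2**2047 + 1
print(acceptable_modulus(weak), acceptable_modulus(strong))  # False True
```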

My main curiosity is why any administrator would generate 512-bit RSA keys for their own servers, knowing that they're weak.

I wonder if there's some old Malaysian-language "Guide to setting up SSL" website that they're following? I'd be curious if there's any commonality between all the 512-bit keys. That, or some particular software that has that keylength in the default configuration file.

Which is a bit of an interesting decision, as it doesn't compromise anyone except the individuals foolish enough to generate insecure RSA keys and submit them, and there are numerous ways they could have screwed up their own security that the CA could never detect anyway. What's even more interesting is that big-name CAs have been allowed to remain trusted despite issuing fraudulently obtained certificates for major websites. I think the size of this CA has a lot more to do with this decision than the severity of its mistakes.

Except in this case its not the browser makers that would need to fix it, its companies like Microsoft who accept these certificates as valid for code signing when they were not explicitly marked with a "can be used for code signing" flag.

Except it doesn't, as the bad certs were also missing certificate extensions, which means they can be used for any purpose once the private key is recovered by factoring the modulus. And indeed, from one of the articles: [net-security.org]

"I have been contacted by Entrust who say that two of the certificates issued by the Malaysian DigiCert Sdn. Bhd. were used to sign malware used in a spear phishing attack against another Asian certificate authority," reports Sophos' Chester Wisniewski.

Question: I'm not a crypto guy, so my apologies if this sounds noobish, but it's just something I've been curious about. When I started out in the '80s I remember being told how strong 128-bit was, followed by how strong 256-bit was, then 512-bit, and now you're saying anything less than 2048-bit is shit. So my question is this: how fast are we going through these key lengths, and with the frankly insane amounts of hardware that keep coming down the pipe, is this going to end up as some sort of "bit race" between the white hats and the black hats?

>RSA for example needs two prime numbers as a keypair, so while the key length might be 512 bit, there are actually not that many from those 2^512 numbers to choose from. Also, certain key values are prone to attacks.

How many is "not that many"? Bruce Schneier, in Cryptography Engineering, calculates that roughly 1 in 1386 numbers near 2^2000 is prime. Near 2^512, primes are even more frequent, according to prime-counting estimates. [wikipedia.org]
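That 1-in-1386 figure is just the prime number theorem: the density of primes near N is roughly 1/ln(N), and ln(2^2000) = 2000 * ln(2), which is about 1386. A quick check in Python:

```python
import math

# Density of primes near N is ~1/ln(N) (prime number theorem).
# ln(2^bits) = bits * ln(2), so there's no need to build the huge integer.
for bits in (512, 2000):
    one_in = math.log(2) * bits
    print(f"near 2^{bits}, roughly 1 in {one_in:.0f} numbers is prime")
# near 2^512, roughly 1 in 355 numbers is prime
# near 2^2000, roughly 1 in 1386 numbers is prime
```

So there are still astronomically many 512-bit primes; the weakness of 512-bit RSA comes from factoring getting easier, not from running out of primes.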

That said, RSA security is well known to grow sublinearly with key length. EC fortunately has much better properties, although EC certainly has its own drawbacks. A 256-bit EC key has similar security to a 128-bit AES key (insofar as you can compare those), and a 512-bit EC key is about the same as 256-bit AES. With RSA you quickly get to 16K-bit keys to reach a similar security level; try generating a 16K RSA key pair and doing a few signings to see what that means in practice.

That's a good question. I will attempt to answer it, with the caveat that I'm also not a crypto expert.

Most of the relatively shorter key lengths you see these days, such as 128-bit and 256-bit, refer to symmetric encryption algorithms like AES. At this point in time, such key lengths are secure for the foreseeable future. These algorithms tend to be quite fast (AES has hardware acceleration in many CPUs, which can encrypt or decrypt data at 1GB+/sec in some cases, and around 300MB/sec on many non-accelerated CPUs), but they require that both parties exchanging encrypted data share the same key. (Hence the name "symmetric" -- the same key is used for encrypting and decrypting.)

The two parties could exchange a shared symmetric key in advance by means of a trusted channel, like a trusted courier, or by meeting in person. This can be extremely difficult in the real world, though.

The longer-length keys you often see (1024-bit, 2048-bit, 4096-bit and, in the case mentioned in the article, the not-very-secure-at-all 512-bit length) are "asymmetric" keys -- when they're created, one creates a "public key" and a "private key" that are linked in a certain mathematical way. The public key can be distributed widely, while the private key must be kept secret. If Alice wants to send Bob a secure message, she can encrypt it with Bob's public key, but the message can only be decrypted with Bob's private key -- even if someone intercepts the encrypted message and has Bob's public key, they are unable to decrypt it.
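The mechanics are easy to see with a toy RSA example. This is a sketch only -- the primes are absurdly small and chosen purely for illustration, so it is completely insecure:

```python
# Toy RSA: two secret primes, catastrophically small -- never do this for real.
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse (Python 3.8+)

def encrypt(m):           # anyone can do this with the public key (e, n)
    return pow(m, e, n)

def decrypt(c):           # only the private-key holder knows d
    return pow(c, d, n)

c = encrypt(42)
print(c, decrypt(c))      # the ciphertext, then 42 again
```

With a 512-bit modulus, the step an attacker needs -- factoring n back into p and q -- was already within reach of modest resources by 2011, which is exactly why these weak DigiCert-issued keys were compromisable.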

Asymmetric encryption is extremely slow relative to symmetric encryption (I seem to recall reading that it's about a thousand times slower), so sending large amounts of data with it alone would be impractical. Fortunately, modern cryptosystems use a hybrid model: they use asymmetric keys to exchange a shared secret key that is then used for faster symmetric encryption. This allows quick symmetric ciphers to be used while solving the problem of exchanging the symmetric key without needing to meet in person.

SSL, for example, uses such a method. A simplified description follows: when your browser connects to a secure website the server sends you its public key (which has been digitally signed by a certificate authority who vouches for the identity of the server). Your browser checks the signature to make sure it's actually been issued by the authority and, if it checks out, creates a random symmetric key, encrypts it with the server's public key and sends it to the server. The server decrypts the symmetric key with its private key. Both client and server then encrypt all future communications with the symmetric key.
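A toy end-to-end sketch of that hybrid handshake, with tiny made-up RSA parameters and a keystream XOR standing in for the real symmetric cipher -- illustration only, nothing here is secure or actual TLS:

```python
import itertools
import os

# Server's toy RSA key pair (hypothetical, insecure demo values).
p, q = 61, 53
n, e = p * q, 17                      # public key, sent to the client
d = pow(e, -1, (p - 1) * (q - 1))     # private key, kept by the server

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Stand-in for a real symmetric cipher such as AES."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

# 1. Client picks a random session key and wraps it with the public key.
session_key = int.from_bytes(os.urandom(1), "big") % n
wrapped = pow(session_key, e, n)

# 2. Server unwraps it with its private key; both sides now share a secret.
recovered = pow(wrapped, d, n)
assert recovered == session_key

# 3. All further traffic uses the fast symmetric cipher.
shared = recovered.to_bytes(2, "big")
ct = xor_stream(shared, b"hello over our toy channel")
print(xor_stream(shared, ct))         # b'hello over our toy channel'
```

(Real TLS adds certificate validation, key-derivation functions, authenticated encryption, and -- in modern suites -- forward-secret key exchange, but the asymmetric-wraps-symmetric shape is the same.)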

Because asymmetric and symmetric encryption keys use entirely different mathematical methods to secure data, their keylengths aren't directly comparable. According to NIST [keylength.com], a 3072-bit asymmetric key is about as strong as a 128-bit symmetric key.

I am a cryptographic security researcher. I will give some background before answering your specific questions. Information security is subject to the same pressures as other forms of conflict -- pressures otherwise known as escalation, an "arms race", or even "evolution". Cryptography is one such armament in the information-security arsenal, and while cryptography is subject to the constant pressure of Moore's Law, as you quite rightly assert, more cataclysmic changes can occur through leaps in mathematics and cryptanalysis.

Thanks to you and Pete for explaining this subject in much closer to layman's terms than I've ever seen it tackled; it does make me think of a couple of follow-up questions, if you don't mind.

Since, as you pointed out with Enigma (which IIRC still has a handful of messages that haven't been cracked after all these years), there are going to be advances coming down the pipe, and both AES-128 and RSA-1024 have expiration dates, wouldn't it be smarter to try to jump a little ahead of the curve? By that I mean, wouldn't it be smarter to just go ahead and switch to 512-bit AES and 4096-bit RSA when the previous scheme expires? Or is that too computationally expensive with current technology?

> Thanks to you and Pete for explaining this subject in much closer to layman's terms than I've ever seen it tackled, it does make me think of a couple of follow-up questions if you don't mind.

Not at all; your questions are pertinent and well-framed.

> Since as you pointed out with Enigma (which IIRC there is still a handful of messages they still haven't cracked after all these years) there are gonna be advances coming down the pipe and that both AES 128 and RSA 1024 have expiration dates, wouldn't it be smarter to try to jump a little bit ahead of the curve? By that I mean wouldn't it be smarter to just go ahead and switch to 512 bit AES and 4096 RSA when the previous schema expires? Or is that too computationally expensive with current technology?

Yes, going too far beyond current standards is expensive. As you imply, once computational overhead is considered (particularly in terms of server hardware), the cost of supporting increased key lengths is significant. For ciphers embedded in hardware devices there is further pressure to reduce footprint and fabrication costs, as well as motivation to build in some amount of redundancy. Economic pressure therefore acts to resist the urge to overshoot on key length.

Thanks for the response; I knew there had to be a gotcha I hadn't seen. As a humble PC repairman I know all too well that the weakest link is often not the hardware but the little meatsack in front of the keyboard. I had a teacher who was once given a tour of a "secure server" farm, and the BOFH kept going on and on about how their insane password schema made them "hackproof," until the teacher finally got fed up and said, "Tell you what: let me loose in the place for 10 minutes and see what I can bring you."

I bet when you see some beautiful security system turned into a mess because of bad policies, you feel like I do when I hand over some box I lovingly built, only to have them turn it into a spyware/adware-laden mess in less than a month. Just like that scene in History of the World, Part I where the artist gets his work pissed on by the critic!

This is more proof that Malaysia is not a real place. I mean look up some pictures of their subway or their big skyscrapers. Fake photoshopped renderings. Now think about where it is on a map. You can't. Because it isn't.

"DIGICERT is in the center of an effective trust model that the government is creating to address the issue of information security and the negative perception that has been painted in association with online transactions." *BREATH*

"Customers won't transact business at your website unless they are certain it's secure."

"The username and static password scheme has been widely used for verification online. Nevertheless, many have recognized this scheme as being obsolete, as it can no longer be trusted to provide adequate security."

I wonder if there's something for Linux that's equivalent to Blizzard's Warcraft password inspector. He contacted me last week, asking to inspect my password to ensure that it's secure. It was kind of embarrassing that my account got hacked, and my credit card maxed out, shortly after I'd sent him my password. Fortunately though I was able to regain access and change my password. I forwarded the new password to the inspector and apologized if he had trouble trying to use the old one. Email the Blizzard guy to see if he knows the Linux password inspector. His address is paswordinspecter@blizzard-account-admin.shulinhost.cn

The CA model is clearly broken: it is a chain that is too long, with too many weak links. We have hundreds of root CAs, and combined with intermediate CAs the number could be in the thousands. That is too many points of failure, any one of which can bring down the entire system.

The following needs to be done immediately:

First: Eliminate intermediate CAs. If an entity does not qualify as a root CA, why should it be allowed to issue trusted certificates?

Second: Restrict root CAs by geography. It may be okay to trust the Chinese Post Office for *.cn, *.hk, etc. domains, but why should we trust it for the *.ca or *.com domains of Canadian companies? Why not restrict root CAs to geographic zones and to domain suffixes?
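X.509 actually has a mechanism in this spirit already: the nameConstraints extension can limit which domain suffixes a CA's certificates may chain to; it is just rarely deployed on root or intermediate CAs. A sketch of what that looks like in an OpenSSL configuration (the section name is hypothetical):

```
# Hypothetical openssl.cnf fragment: a CA certificate carrying this
# extension can only issue certs that validate for .cn and .hk names
# in clients that enforce nameConstraints.
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
nameConstraints  = critical, permitted;DNS:.cn, permitted;DNS:.hk
```

Had Entrust cross-signed DigiCert Sdn. Bhd. with constraints like these, the blast radius of this incident would have been far smaller.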

The CA model is broken. Always has been. Your browser comes with several hundred baked-in CAs, each with complete authority over what your browser considers a trustworthy connection. It's like a RAID 0 array with 600 drives: just asking for trouble, huh? And it's hard or even impossible to tell when one of those drives is reading or writing bad data. Like the truism about hard drives -- "hard drives just fail (so keep backups)" -- CAs fail. Evidently.

The reason the CA system is broken is that we're not using a whitelist model of "trust no one." I've had to address this in Firefox by going through the entire list of certificates and marking every one of them as untrusted, and the funny thing is, I've only had to create a dozen exceptions to that model. These are websites I depend on, such as my bank, merchants (Newegg), and Google, since I use its https mode. Seriously, it did surprise me that I only needed 12 exceptions to the rule.

The average user doesn't have the know-how to do that. Normal users freak out if they see that they have to accept a certificate - to them, it means their computer is about to burst into flames and hacker ninjas are going to come through the window and steal their credit cards. Also, there still isn't anything stopping one of the few CAs you created exceptions for from being tampered with.