Open-source software developer Kai Engert has proposed an overhaul to the Internet's SSL authentication system, aiming to minimize the damage that would result from the compromise of one of the authorities trusted by major browsers.

Under version 2 (PDF) of Engert's Mutually Endorsing CA Infrastructure proposal, people connecting to Google Mail, Twitter and other sites protected by SSL would draw on one of three randomly selected notaries to verify that the digital credential being presented is valid. By comparing the SSL certificate's contents to data contained in the voucher returned by the notary, the person's Web browser or e-mail program could quickly spot credentials that have been forged, even when they've been signed using the private key of a legitimate certificate authority. The notaries—or "voucher authorities" as they're called—would be made up of existing CAs.

"The introduction and requirement of vouchers has the benefit that controlling a single CA will no longer be sufficient," Engert, a software developer at Red Hat and a contributor to the Mozilla Project's security team, wrote in the proposal. "If the presence of a valid voucher were mandatory, at least two CAs would have to be involved to create a working rogue identity, one CA signing the certificate, another CA using its VA to produce a voucher."

At a minimum, the vouchers would contain a cryptographic hash of the certificate the end user wants to access, a single IP address used by the site, a timestamp recording when the data was collected, and a digital signature using the underlying VA's private key. It might also include data concerning intermediate certificates used by the SSL certificate, recent OCSP—or online certificate status protocol—responses for the certificate and intermediate certificates, and proof that the VA signing certificate hasn't been revoked.
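To make the voucher layout concrete, here is an illustrative sketch of how a VA might build and a client might verify one. Everything here is invented for illustration: the field names are hypothetical, and an HMAC stands in for the VA's actual public-key signature so the example stays dependency-free.

```python
import hashlib
import hmac
import json
import time

# Stand-in for the VA's private signing key (a real VA would use
# public-key signatures, not a shared secret).
VA_KEY = b"va-signing-key-stand-in"

def make_voucher(cert_der: bytes, site_ip: str) -> dict:
    """Build a voucher over a certificate, per the minimum fields described."""
    body = {
        "cert_sha256": hashlib.sha256(cert_der).hexdigest(),  # hash of the cert
        "site_ip": site_ip,                                   # one IP used by the site
        "timestamp": int(time.time()),                        # when data was collected
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(VA_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_voucher(voucher: dict, cert_der: bytes) -> bool:
    """Check the signature and that the voucher matches the presented cert."""
    body = {k: v for k, v in voucher.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(VA_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(voucher["signature"], expected)
            and body["cert_sha256"] == hashlib.sha256(cert_der).hexdigest())

cert = b"placeholder certificate bytes"
voucher = make_voucher(cert, "203.0.113.7")
assert verify_voucher(voucher, cert)
assert not verify_voucher(voucher, b"forged certificate")
```

The key property the proposal relies on is visible even in this toy version: a forged certificate fails verification unless the attacker also controls the VA's signing key.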

Fractures in the Web's foundation of trust

Critics have complained for years that the web of trust used to prevent eavesdropping on webmail, banking transactions, and other sensitive Internet-based sessions is hopelessly broken. With more than 600 entities authorized to mint certificates that are trusted by major browsers, all it takes is the compromise of one of them for an attacker to forge a credential for any site. That point was dramatically underscored last year when hackers breached Netherlands-based DigiNotar and created counterfeit credentials for Google Mail, Mozilla's add-ons download site, and other sensitive services. The Gmail certificate alone was used to snoop on an estimated 300,000 Gmail users, an audit later showed.

Since then, a flurry of competing alternatives and enhancements to the fractured SSL system have surfaced. Among them is Convergence, proposed by Moxie Marlinspike, a researcher who has repeatedly exposed serious flaws in the underlying SSL protocol. Convergence relies on a loose confederation of notaries that independently vouch for the validity of a given SSL certificate. One of the key benefits of the system is a "trust agility" that allows users to query specific notaries they trust.

It also provides privacy protections not available with regular SSL. Under the current system, certificate authorities track huge numbers of requests for SSL-protected websites and map them to individual IP addresses. Convergence uses two separate notaries that are intentionally kept in the dark when vouching for a certificate. One notary gets to see the IP address of the Convergence user but not the SSL certificate she wants validated. The other one sees the certificate but not the IP address.
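The information split between Convergence's two notaries can be shown schematically. This is a toy sketch of the data-separation idea only, with all names invented; the real protocol encrypts the query for the second notary so the relay cannot read it.

```python
import hashlib

def query_via_relay(client_ip: str, cert_der: bytes) -> dict:
    """Model which notary learns what when a cert is checked via a relay."""
    # Notary A (the relay) sees who is asking, but only an opaque blob.
    seen_by_a = {"client_ip": client_ip}
    # Notary B sees which certificate is being checked, but not by whom.
    seen_by_b = {"cert_sha256": hashlib.sha256(cert_der).hexdigest()}
    return {"notary_a": seen_by_a, "notary_b": seen_by_b}

view = query_via_relay("198.51.100.5", b"certificate bytes")
assert "cert_sha256" not in view["notary_a"]  # relay never sees the cert
assert "client_ip" not in view["notary_b"]    # validator never sees the IP
```

Neither party alone can reconstruct the pair (who, which site), which is the privacy property the design aims for.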

Google researchers have proposed their own fixes (PDF) for the ailing SSL system. Under their new system, CAs would be required to publish the cryptographic details of every credential they sign to a publicly accessible log that's also been cryptographically signed to guarantee its accuracy. Some CAs have baulked at the proposal, saying it would require them to part with proprietary customer data. The Google plan would also place technical burdens on websites and browser makers, these critics have said.

The latest proposal comes a day after Ivan Ristic of Qualys released a set of SSL/TLS deployment best practices (PDF) that administrators can follow to avoid common configuration mistakes. He said that his company has conducted surveys and found that two-thirds of all SSL servers are badly set up and that of the remaining third "many have application-level issues that fully compromise SSL."

"The truth is that most experts are attracted to the CA trust problem, but, in reality, most SSL installations fail because of configuration and implementation errors," he added.

"Like speaking with a corpse in your mouth"

The changes envisioned by Engert are in many ways similar to Convergence, except that notaries would be limited to existing CAs and would be chosen randomly by the client software rather than by the end user. Marlinspike characterized the difference as a major shortcoming.

"This is just Convergence without the good parts," he wrote in an email. "The problem we need to solve is the lack of trust agility in the CA system. Speaking about solutions to the CA system which don't provide trust agility is like speaking with a corpse in your mouth."

The proposed fix is also receiving a chilly reception from some CAs. Comodo Senior Scientist Phillip Hallam-Baker wrote: "It might help if implemented. But probably not very much. Having two parties do essentially the same check in the same way is not likely to result in much reduction in risk."

In his own email to Ars, Engert said the proposal is an update to one he first floated (PDF) at a security conference late last year.

"The document v2 is the result of thinking about the initial ideas more, taking into consideration the thoughts and feedback that I had received from various sources," he wrote. "I'm hoping my proposal can be helpful inspiration for finding a solution for the trust problem."

30 Reader Comments

There is also the SSH model, where you prompt and remember the first time you encounter a site, and thereafter only complain if the certificate has changed. Unfortunately, it is hard to make the UI useful to most users. It is useful, though, since you go to the same secured sites far more often than you go to random new ones. (You could also hook into this Convergence model only when the certificate changes, combining the approaches.)
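The trust-on-first-use model the comment describes can be sketched in a few lines. This is a minimal in-memory version for illustration; a real client would persist pins to disk and present a proper warning UI.

```python
import hashlib

# Pin store mapping hostname -> certificate fingerprint.
pins: dict = {}

def check_pin(host: str, cert_der: bytes) -> str:
    """SSH-style check: remember a cert on first contact, flag any change."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fingerprint   # first contact: remember the cert
        return "new"
    if pins[host] == fingerprint:
        return "ok"                # same cert as last time
    return "CHANGED"               # warn loudly: possible MITM or rekey

assert check_pin("mail.example.com", b"cert-v1") == "new"
assert check_pin("mail.example.com", b"cert-v1") == "ok"
assert check_pin("mail.example.com", b"cert-v2") == "CHANGED"
```

Note that "CHANGED" is ambiguous by design: it fires equally for a legitimate rekey and for an attack, which is exactly the UI problem the comment raises.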

For Firefox and Thunderbird you can get an extension implementing that model, named "Certificate Patrol". Sadly, Chrome's developers won't provide the hooks so that a similar extension could be written. Even more annoying, Chrome won't even let you delete or disable the certificate authorities it ships with. (I really wouldn't trust either AOL or Hong Kong Post.)

A Man-in-the-Middle, with sufficient technical capability, can intercept (the various proposed) additional lookups to other security/authentication authorities.

I would tend to agree that flawed web server configurations are a much more prevalent issue needing resolution, and this requires no mass client browser changes (or any client changes for that matter).

The fix calls for another network connection; the cost isn't processing-dependent.

Whether the performance hit of ssl is negligible depends on the application as well. In addition, 'modern systems' is a relatively meaningless term when applied to performance levels as there are newly produced systems in use that run the gamut as far as performance is concerned, unless you limit yourself to a particularly well defined platform.

This doesn't really help, since ultimately the VA will be automated by the original CA: if you manage to get in to request a cert, the voucher will be created. So you are back to the same problem: compromise any one CA and you have broken the whole system.

The first thing we need to do is get DNSSEC; then we at least know the DNS record is controlled by the owner of the requested domain. Once that is in place, we can store certificate fingerprints in the DNS record to provide a quick way to verify the certificate. It still doesn't fix the problem entirely, but in the end I don't think that is possible without pre-shared keys or one-time pads. This would even provide security if you only stored the authorised CAs for a given domain; doing so would allow you to change/update your cert without changing DNS records, since a MITM on your domain would require a specific attack on your CA rather than on any one of the 600 out there.

Apart from the multitude of minor problems that can be foreseen with this, there is one major problem that will break it fairly fast. Cost.

Basically, I would expect that it could be implemented as described. A few years down the track, though, the architecture will have been through several rounds of implementing "efficiencies". Those "efficiencies" will undermine the infrastructure, whether by notaries ending up being combined or through some other means. I'm not enough of a nerd to be able to provide the technical details of how it'll be undermined, but as soon as I look at the diagram up top I can see the business manager saying "we can make a couple of savings here".

There is also the SSH model where you prompt and remember the first time on encountering a site, and thereafter only complain if the certificate has changed.

I 100% agree the primary weakness in the current system is that it insists we trust the CA every time we contact a site, and the easiest fix is to store the cert issued by the site.

However, the comparison with SSH is a little off. CAs and SSH's prompt are two ways of solving the same problem: how to get a copy of that cert without being subject to an invisible MITM attack.

SSH's solution to the problem is to just warn you that you are getting a new copy of the cert and let you deal with the problem. A good SSH user will ensure he only gets a copy of the cert over a trusted connection he _knows_ won't be subject to a MITM attack. This works wonderfully well for SSH users, who are a sophisticated bunch, but it has no hope of working for non-experts.

SSL uses a different mechanism that does work for non-experts: PKI. Rather than putting your trust in a connection, you use the magic of public key crypto to enable a trusted third party (the CAs) to vouch for the certificate you receive. This works well for non-experts, and even though it isn't perfect I can't think of a better solution. However, as you observe, the implementation has a flaw. You are always better off trusting as few things as possible, and that implies you should be trusting the CAs for as short a time as possible.

When you are connecting to "https://www.a-shop.com/" for the first time you are forced to trust two things: a-shop.com (which is unavoidable) and the CA that certified www.a-shop.com is indeed owned by a-shop.com. You could, as you observe, drop the CA from that trust relationship after the first contact by simply negotiating a new cert for www.a-shop.com over the now-trusted connection, and thereafter relying on that only. That cert could be expanded to perform mutual authentication, i.e. not only verify that a-shop.com is whoever you contacted last time, but also signal to a-shop.com who you are so you don't have to log in. The downloaded cert could be thought of as a smart bookmark which the user clicks on to log into the site.

grotgrot wrote:

Unfortunately it is hard to make the UI useful to most users.

Well, maybe not. If the downloaded cert did do mutual authentication, then you could push the UI onto the site. If you contact the site using a CA's cert, then it makes you jump through a few hoops to identify yourself. E.g., it would ask you to set up a new account, or, if you want to use an existing account, enter passwords, answer security questions, and maybe respond to an email. But if you contact a-shop.com using their smart bookmark, you get logged in with minimal security double-checks.

This approach has other benefits I won't go into here, but its end result is that it solves most of the problems in the current system. It is even compatible with the changes described in the article. The downside is it requires major changes to the browser and to the way sites work now. If the current system weren't broken, such a change wouldn't have much of a hope of seeing the light of day. Now that people in Iran have lost their lives to broken CAs, and other CAs have shown their willingness to sell MITM certs that put swathes of the Internet at risk, it might be time to contemplate something more radical.

I'm not seeing how this would help much either. Whether operated by your employer at their internet gateway, police at your ISP, or a nation state at their border firewall; the big brotherware will simply intercept all three communication channels with CAs in the same way they currently snoop just one.

At a minimum, the vouchers would contain a cryptographic hash of the certificate the end user wants to access, a single IP address used by the site, a timestamp recording when the data was collected, and a digital signature using the underlying VA's private key. It might also include data concerning intermediate certificates used by the SSL certificate, recent OCSP—or online certificate status protocol—responses for the certificate and intermediate certificates, and proof that the VA signing certificate hasn't been revoked.

Only including a single IP is fairly useless for most high profile SSL sites; they're available through multiple IPs for load balancing, failover, etc.

Blacken00100 wrote:

What? SSL isn't significantly slower than HTTP. The performance hit is negligible on modern systems.

Modern systems don't make network latency go away; the network round trips to set up SSL are a major portion of the time required to fetch an HTTPS resource.

The first thing we need to do is get DNSSEC; then we at least know the DNS record is controlled by the owner of the requested domain. Once that is in place, we can store certificate fingerprints in the DNS record to provide a quick way to verify the certificate. It still doesn't fix the problem entirely, but in the end I don't think that is possible without pre-shared keys or one-time pads. This would even provide security if you only stored the authorised CAs for a given domain; doing so would allow you to change/update your cert without changing DNS records, since a MITM on your domain would require a specific attack on your CA rather than on any one of the 600 out there.

Sure, DNSSEC is awesome, as long as you trust the DNS infrastructure. Pretty much every TLD is controlled directly or indirectly by a government, and if you know how DNS queries work, there is nothing to stop them from sending you whatever records they want if they choose to subvert the system. And it's not like the US or any other government ever wanted to interfere with DNS, right? (cough, PIPA)

This works wonderfully well for SSH users who are a sophisticated bunch, but it has no hope of working for non-experts.

Funny anecdote time; a friend told me this one:

A company the friend works for used internal hostnames for all of its development machines and end users directly off its main domain, with no subdomain: User1.XXXX.com, etc. Someone typosquatted the domain and would redirect all incoming SSH connections to their own system, probably to steal the credentials.

One employee opened an internal bug report (the company also wrote the OS that its employees used, and that included an SSH client) because he said he would often typo his internal SSH connections, and since *.XXYX.com (the typosquatted domain) always answered, he'd get the SSH warning about a new or changed key. He said this warning wasn't OBVIOUS enough that something malicious could be happening, and it was up to the company to patch the SSH client to "fix this problem".

This would even provide security if you only stored the authorised CAs for a given domain; doing so would allow you to change/update your cert without changing DNS records, since a MITM on your domain would require a specific attack on your CA rather than on any one of the 600 out there.

This is the best suggestion I have seen. With all these CA compromises in the news, we must not forget that an attack on a single server to obtain its private key is just as likely as one on a CA (if not more so). So any suggestion that makes it harder for a server operator to rekey quickly really isn't helpful. Long-term caching of server certs is not helpful for this reason: you may be caching a vulnerable cert when the new one is actually the one you need to use. But if you check with the CA each time you connect (do we really think SSH-style prompts are going to be dealt with correctly by even a fraction of users?), we are back to square one. Remember, an attacker will always focus on the weak link in the chain. CAs have got sloppy, but if/when they up their game, attacks will focus elsewhere; we don't want to fix this issue only to make others worse. Convergence is a great idea, but if users just use the default notaries then it doesn't add anything.

The first thing that often occurs to me about security is that it doesn't exist. Not really. At most it just makes work for someone to breach it. And funnily enough, this is good enough, for the same reason that makes security worthless really: humans. You see, most people won't have the motivation to put in the effort to breach security. Those who are really motivated to breach specific security, the really clever ones, know that the best way of doing this is to attack humans.

All that's really on the table with this suggestion is that you now need two CAs to verify a cert. That's not really that much more effort for a determined group. Sure, they have to target two organisations. But if you have the skill set and resources to attack one, then it just adds a little delay, as you now have to attack another. That doesn't make it any more secure in the slightest.

The second thing that often occurs to me, and which really worries me lately, is government. Especially Western governments, who have taken this word and use it against, for want of a better phrase, heretics and blasphemers: people who dare to speak out against the government's wanton reduction of civil liberties under the banner of "security". It harks back to the dark ages. Instead of "think of God", it's "think of the children/terrorists".

It would not surprise me in the least if some savvy spook is just waiting for SSL to fall apart, so that the government can step in with the claim: "we need to monitor your account, to make sure you're not a [insert boogie man of the week here]. After all, you want secure banking, don't you? You don't want to be a [boogie man], do you? Can we just have a word? Because only a [boogie man] would disagree with us. It's just a quiet word; we don't sponsor torture, we outsource that these days."

At most it just makes work for someone to breach it. And funnily enough, this is good enough, for the same reason that makes security worthless really: humans. You see, most people won't have the motivation to put in the effort to breach security. Those who are really motivated to breach specific security, the really clever ones, know that the best way of doing this is to attack humans.

That is incorrect. Study game theory a bit and you'll realize a few things: (a) security is about enforcing the rules, and (b) there are plenty of real-life situations where breaking the rules (defecting) is actually a worthwhile personal strategy.

The whole idea behind security is to make the price of defecting higher than the value of the asset you protect. That way defecting is, at best, not worth the effort and, at worst, too expensive for the majority of the participants to consider.

That being said, this has little to do with the subject at hand, except for the fact that it's security-related.

Quote:

All that's really on the table with this suggestion is that you now need two CAs to verify a cert. That's not really that much more effort for a determined group. Sure, they have to target two organisations. But if you have the skill set and resources to attack one, then it just adds a little delay, as you now have to attack another. That doesn't make it any more secure in the slightest.

Your description is, I'm afraid, too general to apply to the proposal. The real issue is that SSL (and the whole X.509 certificate-signing industry) is mostly snake oil: it's expensive and doesn't provide much in the way of real protection, simply because it doesn't scale to the level we need and because it's too inflexible.

The proposal is, on paper, a way to improve on both these aspects. It's not a perfect fix, and it also has several obvious and not-so-obvious disadvantages, but it seems clear it makes defecting more difficult. Does it improve things enough to be worth implementing? I don't know. But nobody is pretending it's perfect: we're simply discussing whether or not the new balance between usability, security, and price is better than the current one.

Quote:

The second thing that often occurs to me, and which really worries me lately, is government.

There, I agree with you. Governments are the parties for whom breaking SSL is both the simplest and the most worthwhile.

The rest of your post is more a political view than a technical assessment so I'll refrain from commenting.

When you are connecting to "https://www.a-shop.com/" for the first time you are forced to trust two things: a-shop.com (which is unavoidable) and the CA that certified www.a-shop.com is indeed owned by a-shop.com. You could, as you observe, drop the CA from that trust relationship after the first contact by simply negotiating a new cert for www.a-shop.com over the now-trusted connection, and thereafter relying on that only. [...] The downloaded cert could be thought of as a smart bookmark which the user clicks on to log into the site.

I'm no crypto expert, but this seems to be a reasonable way to deal with the issue. I'd put up with the hassle of creating another site-specific certificate for the augmented security. I already type in my credit card each time rather than have it saved at a site.

Only including a single IP is fairly useless for most high profile SSL sites; they're available through multiple IPs for load balancing, failover, etc.

Doesn't a DNS request resolve to one IP address anyway? Couldn't you just hash that? And couldn't the company have the server at the end of that IP do the load balancing etc.?

No, it resolves to a list of zero or more addresses, and can include both IPv4 and IPv6 addresses. There is also information on how long it can be cached, which for popular sites will be a few seconds to maybe a few hours. DNS servers will typically return the list randomly sorted each time. At a command prompt, try a lookup (e.g. with nslookup) a few times in a row; also try www.microsoft.com, amazon, etc.
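The multi-address behavior described above can also be seen from Python's resolver. "localhost" is used here only so the lookup works without network access; a busy site typically returns several public addresses, often in a different order on each query.

```python
import socket

# getaddrinfo returns every (family, type, proto, canonname, sockaddr)
# tuple for a name; a single name can map to many IPv4/IPv6 addresses.
results = socket.getaddrinfo("localhost", 443, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in results})
print(addresses)  # e.g. both an IPv4 and an IPv6 loopback, depending on the host
```

This is why a voucher pinning a single IP, as the comment notes, is a poor fit for load-balanced sites.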

Only including a single IP is fairly useless for most high profile SSL sites; they're available through multiple IPs for load balancing, failover, etc.

Doesn't a DNS request resolve to one IP address anyway? Couldn't you just hash that? And couldn't the company have the server at the end of that IP do the load balancing etc.?

grotgrot is spot on; if you want to have even more fun, try that from different networks that are physically far away and compare the IPs you get back. There are plenty of reasons to use multiple IPs: maybe you don't want an expensive load balancer, maybe one load balancer can't handle all the traffic, maybe you want to send users to different datacenters (possibly geographic load balancing), which involves responding with different IPs to different groups of users. (You can do anycast for low-data stateless protocols like UDP, but for anything TCP you really need different IPs per datacenter.)

What? SSL isn't significantly slower than HTTP. The performance hit is negligible on modern systems.

Ummmm...okay. Perhaps you should research that comment first. The initial handshake introduces significant latency over HTTP (as much as 3.5x, based on fairly current research), so adding additional checks is only going to make it worse.

Now, after the initial handshake, you are correct, because the endpoints have moved to a block cipher; but by then they have also completed validating the cert, so the comparison doesn't apply.
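The round-trip cost the comment is pointing at can be made concrete with back-of-the-envelope arithmetic. The RTT value and handshake round-trip counts below are illustrative assumptions (a classic full TLS handshake adds two round trips before application data), not measurements.

```python
# Assumed network round-trip time, in milliseconds.
RTT_MS = 50

tcp_handshake = 1 * RTT_MS   # TCP SYN / SYN-ACK before any data
tls_handshake = 2 * RTT_MS   # full TLS handshake: two extra round trips
request = 1 * RTT_MS         # the GET and its response

http_total = tcp_handshake + request                    # plain HTTP
https_total = tcp_handshake + tls_handshake + request   # HTTPS

print(http_total, https_total)  # 100 200
```

Under these assumptions the first HTTPS fetch takes twice as long as HTTP regardless of CPU speed, which is why extra notary lookups on the critical path are a real cost.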

The problem with x509v3 is deep: the centralized trust model it uses. The reason for this trust model is twofold:

1. it is created by entities who want to control others
2. it is mostly used by end users who don't care enough that they are controlled

Plus, add the fact that the vast majority of even IT professionals do not understand the whole thing. You would be shocked how many people also send you the private key when you ask for the certificate.

We already have a public key infrastructure with an "agile trust model": it's called PGP. The reasons it is not widely used have less to do with the mere fact that it isn't usable for web sites: you have to understand its concepts and define your trust preferences yourself, and this is a problem.

Moreover, trust is a more complex thing than knowing that someone you are dealing with is who they claim to be.

The good thing is that any public key infrastructure can be used to entirely rebuild its trust model. You always have the identity of the key to start with. And you have the open source community to start with.

I am thinking about a combination of "trust providers", a "trust exchange", and a "trust client" (e.g. a browser/email client plugin). Trust providers could rank keys, the trust exchange would exchange the ranks, and the trust client would be used to act upon the rank (display trust level, show appropriate warnings, block operations).

The trust providers would be ranked along a system similar to cacert.org's assurer system. A trust level would be a scalar, with zero having the special meaning of a compromised key. Trust providers would rank the keys with a trust level (also using the trust client). For the simple user, the trust exchange would provide a single scalar computed from the provided trust levels (also taking the age of the ranks into account), and the trust client would show warnings and block operations based on predefined levels.

For advanced users, the trust exchange could pass on the individual rankings; the trust client could compute a trust level for itself based on user-defined rules (by re-ranking trust providers and individual keys, and redefining the trust-level expression) and tune the settings for warnings and blocking. And of course there could be more trust exchanges.

Now we just need the exact rules, a trust exchange protocol, some infrastructure, an incentive system and some PR to make it work :)
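The aggregation step the comment outlines could be prototyped along these lines. This is a sketch under invented assumptions: the month-scale half-life, the veto rule for level zero, and the weighted average are all illustrative choices, not part of any specified protocol.

```python
import time

# Rankings lose half their weight roughly every 30 days (assumed decay).
HALF_LIFE_S = 30 * 24 * 3600

def aggregate_trust(rankings, now=None):
    """Combine (trust_level, timestamp) reports from trust providers.

    Level 0 means "compromised key" and vetoes everything; otherwise
    return an age-weighted average of the reported levels.
    """
    now = now or time.time()
    if any(level == 0 for level, _ in rankings):
        return 0.0
    total = weight_sum = 0.0
    for level, ts in rankings:
        weight = 0.5 ** ((now - ts) / HALF_LIFE_S)  # older reports count less
        total += level * weight
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

now = time.time()
assert abs(aggregate_trust([(8, now), (6, now)], now) - 7.0) < 1e-9
assert aggregate_trust([(9, now), (0, now)], now) == 0.0  # veto wins
```

A trust client would then compare the resulting scalar against user-configurable thresholds to decide whether to warn or block, as the comment proposes.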

Ummmm...okay. Perhaps you need to research that comment first. The initial handshake introduces significant latency over http (as much as 3.5x based on fairly current research), so adding additional checks is only going to make it worse.

Now, after the initial handshake, you are correct because the endpoints have moved to a block cipher, but they have also completed validating the cert, so doesn't apply.

The best solution is False Start, which is still a hack. :-/

But it depends: SPDY also uses SSL, like HTTPS (it is a layer between HTTP and SSL), and it is still faster than HTTP.

The first thing we need to do is get DNSSEC then we at least know the dns record is controlled by the owner of the requested domain. Once that is in place we can store certificate fingerprints in the DNS record to provide a quick way to verify the certificate.

What you're talking about is called DANE: SSL certificate fingerprints stored in DNS, signed with DNSSEC.
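The check DANE performs on the client side amounts to comparing the presented certificate against a fingerprint from a signed DNS record. The sketch below hard-codes the record as a stand-in; real code would perform a DNSSEC-validated TLSA lookup, and the SHA-256-over-full-certificate comparison shown is just one of DANE's matching modes.

```python
import hashlib

def tlsa_matches(cert_der: bytes, record_fingerprint: str) -> bool:
    """Compare a presented cert against a fingerprint from a DNS record."""
    # SHA-256 over the full certificate, hex-encoded.
    return hashlib.sha256(cert_der).hexdigest() == record_fingerprint

cert = b"server certificate bytes"
record = hashlib.sha256(cert).hexdigest()  # what a signed TLSA record would carry
assert tlsa_matches(cert, record)
assert not tlsa_matches(b"attacker certificate", record)
```

The security of the comparison rests entirely on the DNS answer being DNSSEC-validated; without that, a MITM could substitute both the certificate and the fingerprint.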

The problem is, there are many deployment issues which prevent deployment of DNSSEC.

They range from mistakes in the implementations of DSL routers to browser and operating-system vendors that would need to add a lot of code to their systems to support it. There is a whole list.

The other problem is the usual politics of things like PIPA and DNS and so on, but that can be dealt with: the TLD at the end of the URL tells you exactly who you are trusting.

On the whole topic of SSL and MITM attacks: they are fairly simple to avoid. If you are prompted with an untrusted cert, every major browser will let you know it is untrusted; do not accept it, it's as simple as that. My view is that these proposed notary servers (which are going to be existing CAs) could be worked into a MITM attack just as SSL certs are, and we are back to the same problem.

On the whole topic of SSL and MITM attacks: they are fairly simple to avoid. If you are prompted with an untrusted cert, every major browser will let you know it is untrusted; do not accept it, it's as simple as that. My view is that these proposed notary servers (which are going to be existing CAs) could be worked into a MITM attack just as SSL certs are, and we are back to the same problem.

The problem you're not addressing is malicious trusted certificates: for example, a compromised CA, or a CA colluding with a hostile organization (be it a government or an IT organization). You'll get no warning when you're being MITM'd. The notary proposal addresses this by requiring multiple compromised or colluding CAs, not just one.

However, there are questions as to how effective this is, since a compromised or colluding CA could in theory present notaries of its choice to the victim. Either way, there is no browser warning.