
An anonymous reader writes "Right now, in an email list excluded from the public eye, some bright people are discussing the future of SSL. Under debate: (a) Do they allow DV (domain-only validation) certificates to continue to exist (for e-commerce use? only for encryption use?), or do they require a higher degree of certificate validation? (b) Do they allow certificates to be issued with non-unique common names (certificates used on internal networks, think your Exchange server), or do they ban the practice? If this were 'hypothetically' a heated debate going on right now and you could chime in, what would you say?"

1) Stop selling the idea that certificates "verify" who you're talking to. They don't. They never did. As soon as I compromise your server -- easily done, as history shows -- I have your certificate. If the certificate lives on another machine across your network, it's a little more work, but soon I'll have it anyway. You still have encryption of the intermediate channel, but the wrong person is catching the data.

2) Tell the truth for once, and let people know that certificates provide encryption of the intermediate channel, hardening ONLY that channel against interception (but NOT authenticating its endpoints). ID is NOT provided, only an invalid assumption of ID built out of the lies of Verisign and its co-scammers.

3) Stop selling certificates at all. We can easily make them at zero cost, and we should. The whole "Verisign" thing is a complete and utter scam, and always has been, one run with the collusion of the browser makers and their fake warnings and "scare the user" policies. Giving ownership of the encrypted data channel to profit-making operations was a stupid, stupid move, and has served only to cripple e-commerce from the day it began -- it's one more useless and endless cost for the small entrepreneur to absorb, and therefore, in the end, for the consumer. Further, it has evolved into a higher-stakes, higher-cost game of buying that little green verification bar in some browsers. Scams upon scams.

...but of course, this will never be fixed, because the whole "that's who you're talking to" scam is big, big money (extorted from merchants and others who want to provide encryption to the general public), and big money wins out over reality every bloody time.

Doesn't matter how "smart" the people are working on this. They'll go with the money.

1) Yes, certificates can validate your identity, provided the roots and intermediates are kept secure. You should never be issuing client certs from the local server cert, which many people do. Only an idiot keeps the root cert on an online server. Smartcards can provide security for the end user's private certs. It all boils down to a secure implementation. The flaws you describe result from an insecure implementation.

2) Yes, encryption is one use of SSL. The question was about SSL validation.

Which you cannot guarantee, therefore you cannot use them to validate identity.

The entire industry -- from scamming fees out of site owners to fooling the consumer and coercing and co-opting the browser authors -- is predicated upon the single critical idea that certs imbue a transaction with safety because you know who you're talking to. But the fact is, you don't have any idea who you're talking to; furthermore, you cannot; and further still, the cert couldn't tell the user or the browser or the source site whether the folks at both ends were the "right ones" even if that were true. All the cert does is implement intermediate communications-line security -- as far as we know, presuming the NSA hasn't done what we all know it would most like to do, and is either in the process of doing or has already done.

The flaws you describe result from an insecure implementation.

If there were such a thing as a "secure implementation" (which there isn't -- you have no idea where the hack will come from... a business associate? The cert authority? A lover? An intrusion? Use of force? Installation error? Stray gamma ray? Bad chip? Browser vulnerability? Language vulnerability? Worm? Virus? Some combination of the foregoing? Or etc., ad infinitum), certificates still wouldn't assure you it was in place. Claiming that they in any way validate identity is purest scamming.

2) Yes, encryption is one use of SSL. The question was about SSL validation.

No. The fact is that, as far as we outside the government know, the SSL mechanism presently legitimately encrypts between points, i.e. the intermediate channel. The next fact is that certificates cannot, period, end of story, provide validation, nor have they ever done so. It's a scam to say that they do if you understand them; if you don't understand them -- and by that I don't mean just the mechanism, I mean the environment they exist in and are expected to function in, the lengths people are known to go to in order to get around such efforts, and the immense benefits available when they are circumvented -- then you're simply ill-informed and wrong.

Again, we're talking about crappy implementation here.

Again, we are not. We are talking about the impossibility of implementation, in response to the bogus claim that identification and authentication are possible with the certificate mechanism. Verisign (used here as a placeholder for every CA) knows this, which is why Verisign et al. don't seriously try to do it. They know perfectly well it's a scam. If they give you something to point at, like the hilarious methods they claim establish your identity (never mind the site's identity), then you'll be misled into trying to address the wrong issue. The actual issue is that this is a scam and cannot work at all, not just that the CAs have no serious knowledge of who the certificate holders really are. It's not about identification and authentication. It's about the illusion of identification and authentication. All they have to do is put up a solid enough false front to make it look like they're trying, then misdirect the tech types into tech issues instead of thinking about how the whole system works, and they're golden.

1) Stop selling the idea that certificates "verify" who you're talking to. They don't. They never did. As soon as I compromise your server -- easily done, as history shows -- I have your certificate. If the certificate lives on another machine across your network, it's a little more work, but soon I'll have it anyway. You still have encryption of the intermediate channel, but the wrong person is catching the data.

... and certificates don't cure AIDS either. But that's not what they're supposed to be used for.

Certificates are meant to secure the communication channel, in order to make sure no unauthorized third party taps in the middle. If the end points are compromised (server or client workstation), all bets are off.

That's why organizations that care (such as some banks) make damn sure that their servers are secure and cannot be compromised. A bank, however, has no jurisdiction over the path from its customers to its servers.

A big improvement would be to require e-commerce servers to protect their private key in a hardware accelerator that won't give up the key. This would protect the certificate if the server is compromised. Someone might be able to use the accelerator, via some type of proxy hack, but the certificate would be safe after a compromised server is reloaded.
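The interface such a device presents can be sketched in a few lines. This is only an illustration of the idea, not any real HSM's API: the "device" exposes a signing operation but no way to read the key out, and HMAC stands in for the real RSA private-key operation. (In Python the class boundary only gestures at what real hardware enforces.)

```python
import hmac, hashlib

class SigningOracle:
    """Sketch of a hardware accelerator that signs but never exports its key."""

    def __init__(self, key: bytes):
        self._key = key  # in real hardware, this never leaves the device

    def sign(self, data: bytes) -> str:
        # Stand-in for the private-key operation done inside the device.
        return hmac.new(self._key, data, hashlib.sha256).hexdigest()

    # Deliberately no export_key(): an attacker who roots the host can
    # *use* the oracle while the server is owned, but cannot walk off
    # with the key, so reloading the server ends the compromise.

hsm = SigningOracle(b"server-private-key")
sig = hsm.sign(b"ClientHello...")
print(len(sig))  # 64 hex characters; the key itself never left the oracle
```

This is exactly the proxy-hack trade-off the comment describes: the attacker can proxy signing requests while in control, but the key survives the cleanup.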

Maybe the "scam" factor could be reduced if the certificates were signed by two or more entities in different jurisdictions.

What's disturbing is that whoever is allowed on this mailing list imagines that they can make decisions out of the public eye and in secret. I call for them to make their discussions public immediately, with their list open to subscriptions and posting, and all past messages archived on the web for all to read. Failing that, we must ensure that no one respects the decisions of any committee operating in secret, for if they hide from the public, they don't have our interests at heart.

Well... The fact that it became known does not say much for their secrecy, and secrecy in this regard is a very relative term, even if the group ever intended to be a secret-society Illuminati. Sometimes (and I've seen it happen all too often) someone accuses people of discussing things "in secret" only because they weren't a member and the membership signup was not obvious to a 3-year-old. Without knowing more about the specific list and group, it is impossible to judge their motives based on an unsubstantiated claim of a "secret mailing list".

I've been a member of "closed" mailing lists before and continue to be to this day. It's generally a question of someone vouching for you. Example: in the dark early days of the Internet and the Robert Morris worm incident, we had two parallel security lists. To get on the Zardoz list, you merely had to sign up. To get on the ISIS list, you had to have someone vouch for you in the "bang path" (uucp notation) between you and them.

More recently, certain mailing lists, such as the recently defunct VendorSec list, required a discussion amongst the members for you to join. Especially in security circles, there's a matter of trust and reputation, and the very real problem of disruptors, some of whom are "state sponsored" (the government really doesn't like it when you can protect your privacy and your security -- you should depend on them for that, right? They long for the good old days of ITAR). Sometimes (SERIOUSLY) some of those lists are discussing things of a serious enough nature that we don't want the "bad guys" to have a heads up. Some of us have to collaborate in a trusted manner somehow, and, yes, we're going to get accused of "operating in secret". But it's just a matter of knowing who you are communicating with and whether you can trust them. This doesn't sound like that kind of list, but I would love to know which list it was. There are probably a dozen or more lists on the net right now discussing this very issue, probably including one or more IETF lists. It's generally not a "cabal", and I've never found it hard to join one if you have the reputation to be trusted.

The few industry experts who are thinking of how their CA could sell more signatures will propose the solution where you need to buy a cert from their service for every subdomain -- and, if they could have it, per port.

Because it's pretty easy to order a cert. All you need is a phone number (and Photoshop).

This also applies to signed programs, btw -- a huge scam at the security level. It's just about money: money to CAs to wash some hands.

The economics of certificates is fundamentally broken. The person who uses the certificate is not the person that pays for it. Users should pay for access to a CA's certificate, making the CA responsible to the user for the certificates they issue.

Hopefully, when DNSSEC is rolled out, SSL can go away. With DNSSEC, you can be certain that the DNS records you are receiving are not spoofed. This includes A/AAAA records for looking up the host, but it also includes IPSECKEY records. You can then use this data -- an IPSEC public key -- to establish an end-to-end encrypted connection at the IP layer, below TCP where SSL lives.

This provides the same level of trust that SSL actually provides, i.e. a guarantee that you are talking to the computer the domain's records point to.

As you pointed out, it's not a fault of TLS as a protocol. TLS is a decent protocol, but the trusted-roots part is not the best approach. I really have much better trust in DNSSEC as an approach. I just wish there were a generic way of publishing all keys over DNS (instead of LDAP) for SSH, PGP, S/MIME, SSL and anything else.

Yeah, especially when you have clowns like OpenDNS saying they won't support, or even pass through, DNSSEC because they like DNSCurve better. The two standards (and I say that loosely, because DNSCurve is NOT a standard and nowhere close) solve different problem sets, but OpenDNS is too dense to realize that.

There is nothing that says you can't use DNSSEC for any clients that support it and certificates signed by traditional CAs for those that don't, until such time as there are so few non-DNSSEC supporting clients that you can do away with the CAs.

You can even put a scary message on web pages for non-DNSSEC supporting clients saying (truthfully) how their computer is insecure and pointing them to a place where they can update their software to support DNSSEC.

There are a lot of things that I'm waiting for "until such time as", but I don't foresee "such time" happening within one investment horizon.

Just because you can't do it immediately doesn't mean it isn't worth pursuing.

They won't follow that link; they'll just visit the site's competitor.

I don't see how that would drive traffic to competitors. It doesn't make your service unavailable, it only reminds your customers that they should update their software. For example, try installing the latest version of Firefox while you have an out of date Flash player: As soon as the install finishes you get a page telling you that your Flash player needs an update and supplying a link to Adobe's website to download the update.

This is true especially in cases where no update to support DNSSEC is available at all for a given platform.

Provided all clients support DNSSEC, which probably won't happen for several years.

Then we could introduce the concept of a public notary, which would be a DNSSEC-enabled server that will vouch for the server.

For example... we could eliminate all trusted CA certificates, and replace them with trusted notary certificates.

The rules regarding notaries issuing certificates for domain names or any subdomain could be something like:
(1) any notary issued certificate must be valid for no longer than 1 hour.

(2) a notary certificate can only be issued after comparing the details of the CSR presented against a valid DNSSEC-distributed public key, and finding the Subject name and public-key details identical.

(3) a certificate for an e-mail address (e.g. for S/MIME), for code signing, or for AD/LDAP directory authentication may be issued for a longer period, but must not be valid after the current expiration date of the domain name. For issuance, SSL keys must be distributed under a DNSSEC-validated subdomain that [1] identifies the desired subject of the certificate, [2] identifies the desired duration of the certificate, and [3] identifies the public key; all CSR details must be mirrored in the DNS record, and all signed using DNSSEC.
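Rules (1) and (2) above could be sketched roughly as follows. The field names (`subject`, `public_key`) are hypothetical stand-ins for real CSR and DNSSEC record structures, not any actual format:

```python
# A notary signs only a short-lived cert whose CSR matches the
# DNSSEC-published key material for the domain.

MAX_VALIDITY_SECONDS = 3600  # rule (1): valid for no longer than 1 hour

def notary_may_sign(csr, dnssec_record, not_before, not_after):
    """Rule (2): the subject and public key in the CSR must be identical
    to what the domain owner published (and signed) via DNSSEC."""
    return (not_after - not_before <= MAX_VALIDITY_SECONDS
            and csr["subject"] == dnssec_record["subject"]
            and csr["public_key"] == dnssec_record["public_key"])

record   = {"subject": "www.example.com", "public_key": "ab:cd:ef"}
good_csr = {"subject": "www.example.com", "public_key": "ab:cd:ef"}
evil_csr = {"subject": "www.example.com", "public_key": "00:11:22"}

print(notary_may_sign(good_csr, record, 0, 3600))  # True
print(notary_may_sign(evil_csr, record, 0, 3600))  # False: key mismatch
print(notary_may_sign(good_csr, record, 0, 7200))  # False: too long-lived
```

The point of the one-hour cap is that a leaked notary signature goes stale almost immediately, so the notary never needs a revocation infrastructure.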

As I see it, notaries acting as short-term CAs to proxy DNSSEC would have to prove themselves to the browser makers just as any other CA would. Such an audit costs a substantial chunk of change; how is such a notary supposed to afford it? And how is such a notary supposed to afford servers, bandwidth, etc. to stay in operation? With a lot of sites using a notary and signing a new certificate every hour, I imagine that'd be a lot of CPU and bandwidth. Web site operators would end up paying for the notary service.

SSL has numerous applications and needs that it serves. What we really need is a graduated system of "validity" which allows for things that don't need the "uber-valid" level of certs to operate.

Secondly, the long-standing ripoff in terms of costs extracted from this system is a symptom of this problem, creating and maintaining a monopoly-level stranglehold on doing things that don't need to cost nearly as much as they do.

Personally, I would prefer a decentralized web-of-trust kind of system for all but the highest level of confidence (maybe even for that, too, but I can envision a necessity to still centralize the absolute top layer).

Personally, I would prefer a decentralized web-of-trust kind of system

Which means that instead of CAs making money, the airlines will, as people will have to fly to key signing parties in order to get their public keys into the global web of trust as opposed to a local one.

A handful of people between nearby cities can extend the range of the web of trust.

As I understand the web of trust, a path from one party to another with more edges provides a weaker assurance than a path with fewer edges. A longer path increases the chance that a confused node [advogato.org] is in the path. Thus, having to go through a couple dozen nodes to reach the other party dilutes the trust. And if only a tiny subset of people travel, these people will be bottlenecks in the trust graph.

Eventually, the web could spread between all cities, just by spreading between people at nearby cities.

I'm no expert on the subject, but intuitively it seems like a system could work fairly well where the trust of X is calculated as some kind of aggregation (a simple sum?) of the products of the trust factors along each chain between you and X (how much you trust your closest friend, how much he/she trusts the next friend in the chain, and so on), including negative trust (banning).
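A toy version of that suggested metric (this is an assumption for illustration, not a standard web-of-trust algorithm): per-chain trust is the product of edge factors in [-1, 1], and overall trust in X is the clamped sum over all chains.

```python
def chain_trust(edges):
    """Product of the trust factors along one chain of introductions."""
    t = 1.0
    for factor in edges:
        t *= factor
    return t

def trust(chains):
    """Aggregate over all chains to X; clamp the sum to [-1, 1]."""
    total = sum(chain_trust(c) for c in chains)
    return max(-1.0, min(1.0, total))

print(round(trust([[0.9, 0.9]]), 2))   # short chain: strong trust
print(round(trust([[0.9] * 10]), 2))   # long chain: diluted trust
print(round(trust([[0.9, -1.0]]), 2))  # a ban (negative edge) propagates
```

Note how the ten-edge chain decays to roughly a third of full trust, which matches the dilution argument above about long paths through the trust graph.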

Won't work, as long as spammers and scammers can cheaply create phony entities in the web of trust. It's exactly the same problem as link farms.

Could you please elaborate on this?

It takes only one member of the web of trust who either inadvertently or deliberately signs a bad certificate to make the whole thing collapse. Effectively, all members of the web-of-trust are not only certified identities, but also certification authorities (as each member has the power to vouch for further members).

In the "real world", certification authorities must pass very stringent audits in order to make sure that they know and correctly implement their responsibilities. With a web of trust, anybody can act as a certification authority, with no audit at all.

I want my free encryption because I don't trust some 3rd party to tell me whether I should trust the web site that I am visiting. Encryption and identity should never have been tied together in the first place. It's unfortunate that this business method has succeeded as long as it has.

Encryption and identity have to be tied together. It's a fundamental aspect of the mathematics. If you can't verify identity on an insecure channel, encryption is useless, as you could be talking to a man-in-the-middle who just takes the traffic from each end, decrypts it, snoops, re-encrypts with another key and sends it on. The only way to ensure non-modification without a cryptographically authenticated identity is with quantum encryption, and that can only be done if you've got a single continuous strand of fiber from one end to the other. Good for inter-office links, but not for e-commerce.
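That man-in-the-middle can be shown with a toy Diffie-Hellman exchange (deliberately tiny numbers; real DH uses primes of 2048 bits or more). Without authentication, Mallory simply completes one exchange with each side, and both "encrypted" channels terminate at her:

```python
P, G = 2087, 5  # small public prime and generator, for illustration only

def dh_public(secret):
    return pow(G, secret, P)             # g^secret mod p

def dh_shared(their_public, secret):
    return pow(their_public, secret, P)  # (g^theirs)^mine mod p

alice_sec, bob_sec, mallory_sec = 77, 129, 555

# Mallory intercepts each side's public value and substitutes her own.
key_alice_side = dh_shared(dh_public(mallory_sec), alice_sec)  # Alice computes
key_with_alice = dh_shared(dh_public(alice_sec), mallory_sec)  # Mallory computes
key_bob_side   = dh_shared(dh_public(mallory_sec), bob_sec)    # Bob computes
key_with_bob   = dh_shared(dh_public(bob_sec), mallory_sec)    # Mallory computes

print(key_alice_side == key_with_alice)  # True: Mallory shares Alice's key
print(key_bob_side == key_with_bob)      # True: ...and Bob's key too
```

Each side negotiated a perfectly good encrypted channel; the only thing missing was any proof of *whose* key was on the other end.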

By the same token, if I don't personally know the CA, then nothing they say about identity is at all meaningful. I care very little what random XYZ Corp in China says about the holder of key 0xFD5645EB78. What I care about is that it's the same entity I successfully did business with before, whatever their name is.

Frankly, it's just not all that helpful to know that some random entity had the wherewithal to send Verisign a fax with a letterhead (perhaps theirs, perhaps not) on it.

Possibly nobody (leap of faith, start with a small transaction). Possibly a friend who has done business with them. Perhaps some other entity that has a good track record for correctly identifying others. Not some company I've never heard of before in a country I know little about.

It might even be a business that a friend vouches for that itself cannot afford to lose face by referring people to scammers.

Perhaps some other entity that has a good track record for correctly identifying others.

I have a name for such an entity: a "certificate authority".

Not some company I've never heard of before in a country I know little about.

Then your problem is with browsers accepting certificates from too many CAs and providing no way to restrict which countries' businesses a given CA is allowed to certify. For example, a CNNIC cert on a business serving the U.S. market should raise a red flag.

Part of the problem is the browsers accepting far too many CAs I know nothing of. Another is that it doesn't allow me to set a chain of trust or multiple signatures.

In other words, the problem is that the idea of a CA is burned into the software and the protocol rather than a web of trust. If it's set up as a web of trust, you are perfectly free to choose the current CA model for yourself if you like -- just add the CAs as trusted by you.

Encryption and identity have to be tied together. It's a fundamental aspect of the mathematics. If you can't verify identity on an insecure channel, encryption is useless

No. Your base assumptions are wrong. The thing is you can't verify identity. I break into your server. I take your cert. I throw it in my apache directory. I pwn your nameserver. Now, I am you. None of these steps take a rocket scientist.

Or, I break into the user's computer -- even less to do. Now I compromise the browser end. It says it's talking to the right site, but it isn't.

It shouldn't come as a surprise to anyone with more than 3 brain cells that if someone breaks into your server, then you're no longer secure. The identity function of certs is 100% bullshit -- and it's always been bullshit.

If by "identity" you mean that, with a moderate degree of reliability, the cert claiming to come from www.johnsonandjohnsoncorp.com was actually issued at some point to the legit owner of www.johnsonandjohnsoncorp.com, then I have to disagree with you.

No. Your base assumptions are wrong. The thing is you can't verify identity. I break into your server. I take your cert. I throw it in my apache directory. I pwn your nameserver. Now, I am you. None of these steps take a rocket scientist.

Why do you think we have to type in a 64-character passphrase every time we start Apache on the secure servers, before they can unlock the secret RSA key, get added to the load balancer, and start answering requests?

Why do you think we have to type in a 64-character passphrase every time we start Apache on the secure servers, before they can unlock the secret RSA key, get added to the load balancer, and start answering requests?

No... breaking into my server does not let you pretend to be me.

If somebody rooted your server, they could attach gdb to the already-running Apache process, in order to grab the already-decrypted private RSA key from memory.

All this passphrase protects against is exploits which require rebooting the server, or maybe physically stealing the server. However, most exploits don't depend on rebooting the server.

If you can't verify identity on an insecure channel, encryption is useless, as you could be talking to a man-in-the-middle who just takes the traffic from each end

Useless is a strong word for it. In practice, performing the man-in-the-middle attack is far more difficult than simple passive listening on traffic. So I'd say even an unauthenticated encrypted channel is preferable to one in the clear. Hardly useless.

To perform a MITM attack you have to be able to send out data on the channel, and not just be passively listening. This isn't always possible, and represents a higher degree of risk of being caught than a passive listening attack.

You can also buy automated lock pickers that make lock picking relatively easy. Battering rams, crowbars, and bricks through windows are cheap too. Does that mean doors and locks are useless?

Encryption and identity have to be tied together. It's a fundamental aspect of the mathematics.

That's an urban myth that has perhaps been popularized by government employees who have an interest in limiting the use of encryption on the Net. In practice, only a limited number of people can successfully launch a man-in-the-middle attack and an SSL encrypted connection without authentication is more secure than a completely unencrypted connection in almost all usage scenarios.

Perspectives fails if the server's only connection to the Internet backbone is through a MITM. In this situation, all notaries would see the same MITM'd certificate. It also fails if you're trying to host more than one unrelated HTTPS site on port 443 of the same IP address, as the server won't know which hostname's certificate to present to clients running SNI-less browsers (Internet Explorer on Windows XP or Android Browser on Android 2.x).

Maybe except for the part where site owners have to pay a fee for the privilege of encrypting their users' communications with them, which is a barrier that means a lot of web site owners, in particular people who aren't running their sites for a living, just won't bother.

Even if add-ons like Perspectives make use of self-signed certificates practical, there's also the problem that Internet Explorer on Windows XP and Android Browser on Android 2.x don't support more than one server certificate per IP address. This lack of SNI means each domain needs its own IP address, and now that all/8s have been allocated, such hosting is substantially more expensive than bargain basement name-based virtual hosting.

I want my free encryption because I don't trust some 3rd party to tell me whether I should trust the web site that I am visiting. Encryption and identity should never have been tied together in the first place. It's unfortunate that this business method has succeeded as long as it has.

I don't see how you can separate encryption and identity -- if you do not know who you are talking to, then encryption has no value.

Let's say a man in the middle says, "I'm your bank, please talk encrypted with me." Then the man in the middle just repeats what you say to your real bank until you are logged in.

Downloading the certificate on first contact might provide some security, but it would make it difficult to do business with unknown entities.

Authentication is not identification. In order to trust me to withdraw funds my bank need only know that I'm the guy who opened the account. They have no need to be able to connect my account with my entire life history and all my other accounts. It's the government that wants that.

Certificates in NO way prevent the problem you describe. They simply provide encryption.

The "middle" is anywhere between your fingers and the desired target. I can get in the "middle" with a brain-dead keyboard scanner, and steal your stuff, fake your browser, etc, etc. Or at the other end, I can simply steal the cert from the target server and then spoof the DNS, either in your machine or elsewhere. Certs do NOT provide assurance that you are only (or at all) speaking to who you think you are. They provide, I repeat, only encryption.

SSL is useful for defense in depth, but should not be used as a catch-all.

What *should* happen is that a minimum level of certificate should be available, and cheap, that allows secure connections to and from a particular site. A medium level of certificate should be available for e-commerce, cheap enough for mom-and-pop e-commerce, and should require that all information necessary to identify and report fraud or theft be displayed -- even if in small writing -- together with a link to reporting instructions.

What *should* happen is that a minimum level of certificate should be available, and cheap, that allows secure connections to and from a particular site.

Such a certificate is already available. It's called "Go Daddy Standard SSL with a promo code". The problem here is web browsers that don't support more than one distinct certificate on port 443 of a single IP address. These non-SNI-savvy clients include Internet Explorer on Windows XP and Android Browser on Android phones and pre-Honeycomb tablets.

should require that all information necessary to identify and report fraud or theft be displayed

Allow a single site to have either multiple certificates or multiple signatures from different providers. This means that important sites could be signed by e.g. both Verisign and Globalsign, so it would be possible for users to remove trust of a provider without losing the TLS protection. Without this, there will never be a free market for certificates and browser makers will have to include root certificates from even the least trustable providers, so it simply HAS to happen. Fortunately it is relatively easy to implement.

For extra points, implement support for off-site certificate stores so third parties can attest to the validity of a particular key. Groups of users could collaborate to verify the certificates of sites, which could create a level of protection against fraudulent certificates from the primary providers. This proposal is much harder to implement securely.
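The multiple-signature acceptance rule above can be sketched in a few lines. The CA names are illustrative, and a real implementation would verify cryptographic signatures rather than compare name strings:

```python
def cert_trusted(cert_signers, trust_store):
    """Accept a cert if at least one of its signers is still trusted."""
    return bool(set(cert_signers) & set(trust_store))

signers = ["Verisign", "Globalsign"]  # the same cert, signed twice
store   = {"Verisign", "Globalsign", "SomeOtherCA"}

print(cert_trusted(signers, store))   # True

# The user revokes trust in one provider; TLS protection survives:
store.discard("Verisign")
print(cert_trusted(signers, store))   # True

# Only when every signer is distrusted does validation fail:
store.discard("Globalsign")
print(cert_trusted(signers, store))   # False
```

This is what makes removing a misbehaving root a realistic option for users: distrusting one CA no longer breaks every site that bought from it.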

Just because you don't trust a CA doesn't mean you can't build up a TLS/SSL connection. It will just throw a tantrum that the CA is not trusted and ask you if you want to continue. But for most sites that is simply unacceptable so they'll just go ahead and buy a cert from a generally trusted CA.

SSL certificates are not meant to verify identities. They're meant to authenticate hostnames and secure links. If somebody hacks the server and gets the keys to the certificate, or replaces the hosted content with something else, that's a failure of server security, not of the certificate.

These are organizations that we already deal with and that are in the business of establishing our identities and securing transactions.

When you're paying money on-line, you (or your browser) should sign the payment authorization using your own bank account's private key (provided for free with your account), then encrypt it with the public key for the destination account. The recipient will submit the signed authorization to his bank for payment.
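A minimal sketch of that sign-then-verify flow. All names and fields here are illustrative, not any bank's actual protocol, and HMAC with a shared per-account secret stands in for a real public-key signature:

```python
import hashlib
import hmac
import json

account_secret = b"issued-by-your-bank"  # hypothetical per-account key

def sign_authorization(auth, secret):
    """Canonicalize the authorization and sign it."""
    payload = json.dumps(auth, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def bank_verifies(auth, signature, secret):
    """The bank recomputes the signature over what it was asked to pay."""
    return hmac.compare_digest(sign_authorization(auth, secret), signature)

auth = {"pay_to": "ACCT-123", "amount": "19.99", "nonce": "8f2c"}
sig = sign_authorization(auth, account_secret)

print(bank_verifies(auth, sig, account_secret))       # True
tampered = dict(auth, amount="1999.99")
print(bank_verifies(tampered, sig, account_secret))   # False: amount altered
```

The nonce is there so a merchant (or eavesdropper) can't replay the same signed authorization twice; with a real public-key scheme, the recipient's bank could verify without ever holding the payer's secret.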

A recent evaluation showed that 80% of sites with certificates did not have them set up properly anyway.

As someone else already pointed out, browsers by default do not even warn you if a site's cert is invalid. Why? Because so many sites had invalid certs that people became intolerably annoyed at the constant warnings and just shut them off anyway.

That same study concluded that there are too many Certificate Authorities today, and they do an inadequate job of validating their customers before issuing certificates. Some CAs issued multiple certs to the same party, others actually issued the same certs to multiple parties! (Definitely a no-no.)

It's a broken system. Not because of bad design, necessarily, but because of the failures of people who administer it.

I use self-signed certificates on pages hosted on my intranet, and all the major browsers throw a major fit about them. If I ever have guests over who want to use my intranet web apps, they have to approve and add exceptions for my self-signed certs. The browsers act like my certs are shady or suspicious, and if I didn't reassure my guests they wouldn't add the exceptions.

I haven't tried going to a site with a domain mismatch or expired cert, but I would assume browsers throw a fit about those, too.

As someone else already pointed out, browsers by default do not even warn you if a site's cert is invalid.

Completely wrong. Browsers have by default warned about invalid certs for years. Versions of Firefox and IE made in the last several years have actually gotten scarier warning messages, and made it more difficult to get to the website without going through a few steps other than just a simple "click here to continue". Expired certs also give warning messages.

And I repeat: that's human error. And I dispute whether we could have a "system of trust" that is very much superior, because every time we have tried to set up a "secure" system, people have ALWAYS been the point of failure.

You can't make something foolproof, because fools are so ingenious. They will find ways to break nearly anything.

And I repeat: that's human error. And I dispute whether we could have a "system of trust" that is very much superior, because every time we have tried to set up a "secure" system, people have ALWAYS been the point of failure.

The point you're missing: nobody is claiming that all failures are equal, or that the goal of designing any system is to make it perfect, or foolproof.

That's never the goal of security. The goal is always to reduce risk. No security is perfect. All security can be broken with sufficient resources.

StartSSL certificates are valid only for individuals, not businesses. For an individual's web site run as a hobby, the price of a domain-validated cert isn't a barrier as much as the price of hosting that includes a dedicated IP address [pineight.com]. A lot of clients still lack SNI, which means they can't see more than one certificate on port 443 of a given IP address, and that isn't very compatible with shared hosting providers' common practice of putting upwards of a thousand different domains on a single IPv4 address.

Hypothetically, I'd tell them to stop worrying about SSL and get busy rolling out IPv6, where this problem (along with several other pressing issues) is solved. But that's because I have an engineering mindset, not a committee one. The answer to "Is this technology out of date or poorly implemented?" is universally yes in my world. Nobody gets it right, and it's a bloody miracle the internet continues to work in spite of its own massive structural deficits.

A lot of deployed clients don't support SNI, which means they support only one certificate per (address, port) pair. This means only one SSL site can run on port 443 of a given IP address. And with hosting providers such as Go Daddy using name-based virtual hosting to pack up to 1,000 different web sites onto a single IPv4 address, SSL on all sites becomes impractical. But the larger address space of IPv6 lets hosting providers go back to address-based virtual hosting.
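The SNI mechanism is visible from the client side in any modern TLS stack; a minimal sketch using Python's stdlib `ssl` module (the host name is hypothetical, and no network connection is needed to check the support flag):

```python
import socket
import ssl

# Modern Python builds support SNI; a client without it can only see
# whatever single certificate is bound to the (address, port) pair.
print(ssl.HAS_SNI)

ctx = ssl.create_default_context()

def connect_with_sni(host, port=443):
    """Hypothetical helper: the server_hostname argument puts the
    requested name into the TLS ClientHello, which is what lets one
    IP address serve many differently-certified sites."""
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)
```

Clients that omit `server_hostname` (or predate SNI entirely) get the server's default certificate, which is why shared hosting and SSL mixed so badly before SNI was widely deployed.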

Domain Validation (DV) certs are not the same as Organizational Validation (OV) or Extended Validation (EV) certs. Web SSL certs are OV or EV. DV certs are intended only to validate that the FQDN is legitimate (i.e. correctly owned by the domain), which is the job that DNSsec is meant to address in many ways.

There's already been public discussion on some of the crypto forums such as mozilla-crypto (ok, for some value of "public" — but it's not a closed list). The DNSsec crowd have asked about putting certificate signatures in DNSsec, and the entrenched CA crowd got all up in arms and huffy about it. But DV certs would just tie the certs to the domain owners, and that's all, which is exactly what can be done in DNSsec. And, yes, we all know, the domain could be faked, but that's not the point. The point is to tie a certificate back to the domain owner or not. The OV/EV certs are what validate the organization claiming to own the domain/FQDN. The CA crowd doesn't like the fact that DNSsec can do for free what they can charge money for. DNSsec puts the power totally in the hands of the domain owners (where it bloody well belongs).

Now if we could just get certain bloody registrars, like Network Solutions, to let us register our key signing keys, we could get on with things. The root zone (.) is signed. The .org, .net, .com, .edu, and .gov zones are all signed, and numerous other ccTLDs are signed. GoDaddy and others are reported to be accepting DNSsec registrations. Where is Network Solutions? Asleep at the switch, last I looked. And OpenDNS continues to pout, whining "I donwanna... Use DNSCurve or I'm gonna cry." DV certs are a solution in search of a problem, and DNSsec is a better solution.
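The "certificate signatures in DNSsec" idea described above is roughly what was standardized as DANE (TLSA records, RFC 6698). A minimal sketch of the record data, assuming a DER-encoded certificate as input; the parameter choices and zone name are illustrative:

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage=3, selector=0, matching_type=1):
    """Sketch of DANE TLSA record data (RFC 6698). Usage 3, selector
    0, matching type 1 means: match the SHA-256 hash of the full
    end-entity certificate. The domain owner publishes this in their
    signed zone, and clients compare it against the cert the server
    actually presents -- no third-party CA required."""
    assoc = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {matching_type} {assoc}"

# A record like this would live at e.g. _443._tcp.www.example.com
# (hypothetical zone; the input here is dummy bytes, not a real cert):
print(tlsa_rdata(b"dummy-der-bytes"))
```

Because the record sits in a DNSsec-signed zone, the same signature chain that proves domain ownership also vouches for the certificate, which is exactly the DV-for-free argument the post makes.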

VeriSign sold its registrar business (NetSol) to Pivotal Equity in 2003 and its CA business to Symantec in 2010. It's down to running two root servers and acting as the back-end registry for .com, .net, .name, .cc, and .tv.

This essentially makes EV certificates meaningless. The whole point of an EV certificate is to unambiguously identify the business owning the certificate. So if you need to sue, file criminal charges, or send in a collection agency, you know where to send the process server, cops, or collection agents.

(At SiteTruth, our system considers SSL certificates without a business name and address to have no value in establishing the legitimacy of a company. We've always done this for "domain controlled only" certs, and will now do it for EV certs missing a business name or address.)

I am under the impression that these certificates are supposed to verify the authenticity of the host you're connected to. Yet when I asked a major bank who issued their certificate (never mind verifying the authenticity of the certificate itself), they didn't have a clue what I was talking about. In that case, what's the point of them? I am still subject to man-in-the-middle attacks after all.

We do need a class of certificate that simply verifies that we're talking to the host we expect to be talking to (ie. that the name of the host using the certificate matches the name of the host in the URL). That's sufficient for encryption-only purposes, where I'm using SSL not to validate the remote end's identity in any way but merely to prevent eavesdropping on the data stream by third parties. DNSSec addresses some of that, but it doesn't provide the encryption layer that SSL does.
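The name check being asked for can be sketched in a few lines, simplified from the RFC 6125-style rules real clients apply (this ignores subjectAltName lists, internationalized names, and other corner cases):

```python
def hostname_matches(cert_name: str, url_host: str) -> bool:
    """Minimal sketch: the cert's name must equal the host in the
    URL, with a wildcard permitted only as the entire leftmost label
    (so "*.example.com" matches "www.example.com" but not
    "example.com" or "a.b.example.com")."""
    cert_labels = cert_name.lower().split(".")
    host_labels = url_host.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    if cert_labels[0] == "*":
        return cert_labels[1:] == host_labels[1:]
    return cert_labels == host_labels

print(hostname_matches("*.example.com", "www.example.com"))  # True
print(hostname_matches("*.example.com", "example.com"))      # False
```

This check is all a domain-validated cert really attests to, which is the point: it's enough to defeat a third-party eavesdropper, and nothing more.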

Bluntly put, there's a lot of cases where I don't care about the absolute, real-world identity of the entity I'm talking to, only that I'm talking to the same entity every time. Think the Dread Pirate Roberts.
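The "same entity every time" model is essentially trust-on-first-use, the same scheme SSH uses with known_hosts. A minimal sketch with hypothetical host names and an in-memory pin store (a real client would persist it):

```python
import hashlib

# Trust-on-first-use: we don't care who the peer "really" is, only
# that it's the same peer each time. Pin the certificate fingerprint
# the first time we see it, and alarm if it ever changes.
pins = {}  # host -> hex SHA-256 fingerprint of the first cert seen

def check_pin(host: str, cert_der: bytes) -> bool:
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fp        # first use: trust and remember
        return True
    return pins[host] == fp    # later: same entity, or raise the alarm

print(check_pin("example.com", b"cert-A"))  # True  (first use)
print(check_pin("example.com", b"cert-A"))  # True  (unchanged)
print(check_pin("example.com", b"cert-B"))  # False (cert changed!)
```

Note the trade-off: a legitimate certificate renewal looks identical to an attack, so real pinning schemes pin keys or CAs rather than whole certs, or allow a grace mechanism.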

Certificates are good for encryption. That's it. With the insane number of "trusted CAs" that come pre-trusted with every browser nowadays, that's all that is possible. Hoping to achieve anything beyond that is naive.

She sees the little "lock" icon, doesn't get a confusing certificate warning message, and is happy to make her purchase on the scams-r-us website because, "Golly-gee! It's $800 less on THIS website!"

SSL certificates aren't intended to ensure that you're running a legitimate business. How could they? The only function of an SSL certificate is to provide a decent amount of assurance that the cert being presented to you is actually coming from the website displayed in your browser's address window. That in turn says nothing about whether the business behind that website is legitimate.