
An anonymous reader writes "The makers of two major mobile apps, Fandango and Credit Karma, have settled with the Federal Trade Commission after the commission charged that they deliberately misrepresented the security of their apps and failed to validate SSL certificates. The apps promised users that their data was being sent over secure SSL connections, but the apps had disabled the validation process. The settlements with the FTC don't include any monetary penalties, but both companies have been ordered to submit to independent security audits every other year for the next 20 years and to put together comprehensive security programs."

This should be a lesson: if somebody is having trouble connecting with you, or you're under some kind of deadline pressure and you can't connect to them, don't turn off SSL validation. Get your connection working properly before going live, because once you go live you may not want to, or be able to, set up SSL properly.

you're under some kind of deadline pressure and you can't connect to them, don't turn off SSL validation.

OR: Always turn off SSL validation, because it's totally worthless.

The problem is CAs get subverted all the time into issuing certs they shouldn't issue.

In general, the validation provided by a certificate doesn't work, and many developers and security professionals alike mistake the theoretical security benefits of validation for what actually happens in reality.

What a sad state of affairs. The CA-signed certificate, far from being the key to browsing security, is the Maginot Line that preserves the masses in a state of blissful ignorance.
It works perfectly against the attacks conceived and theorised as the dramatic threat to mankind, commerce and the Internet, a decade ago. Problem is, the attackers bypassed it, with as much disdain as any invading army against the last war's dug-in defence.
Problem is, the security model had unreasonable expectations. Problem is, the users didn't subscribe to their part of the protocol. (To be fair, it's hard to communicate to users that they are even expected to be part of anything.)

Problem is, the browser manufacturers that were sold on the need for the certs also got sold on the convenience of click and launch. So, they turned around and sold the security model down the river faster than one can say "check the URL..."

The frequency of a true MITM - one defined above where someone has the ability to control an intermediate node at low level and take central position - is so low as to be difficult to measure. Using risk analysis, there is no economically viable support for mandating protection, so the deployment of a cert should be optional if there is any cost involved.

What about the spoof? In total contrast to the MITM, spoofs are common. As common as dirt, and as equally unclean.
E-commerce sites with real value for thieving suffer spoofing attacks
Does the Cert stop the Spoof? Nope. Well, of course not - not as described above. Obviously the user is at fault for entering - clicking - the wrong address, and not checking.......

Why would they need to compromise your CAs? They can compromise any CA, because unless the client uses a tighter-than-normal designated requirement, it will trust any cert for your domain as long as it is signed by any of dozens of CAs. That's what makes TLS so flawed.

MITMs happen all the time in workplaces. They're called proxies. Thing is, if you're not using SSL they can be literally undetectable on your end. With SSL, they have to modify your trusted CA list and present obviously forged certs.

I believe there are firefox addons which detect SSL MITMs immediately.

There exists an extremely widely-used crypto protocol which uses no certificate validation and yet prevents almost all MITM attacks. It's called SSH. In fact SSH has done something that SSL will never do: it has completely replaced the corresponding unencrypted protocol, to the point where no one, I mean no one, uses telnet anymore.

How does this magic work? SSH performs key validation. It performs this validation without requiring certificates. The validation model is very simple: trust on first use (TOFU). Although TOFU on paper is theoretically inferior to CA validation in every way, real life does not take place on paper. In the real world, TOFU is far superior to CA validation. It prevents the kinds of attacks that actually matter, while ignoring the kinds of attacks that look great on paper but aren't really a big deal in practice.
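As a rough illustration (not SSH's actual implementation), the TOFU logic amounts to only a few lines. The names `known_hosts`, `tofu_check`, and the return values here are invented for the sketch:

```python
import hashlib

def tofu_check(known_hosts, host, der_key):
    """Trust-on-first-use check, modelled loosely on SSH's known_hosts.

    known_hosts: dict mapping hostname -> hex SHA-256 fingerprint.
    der_key:     the peer's public key (or cert) as DER bytes.
    Returns "first-use", "match", or "MISMATCH".
    """
    fp = hashlib.sha256(der_key).hexdigest()
    if host not in known_hosts:
        known_hosts[host] = fp  # remember the key on first contact
        return "first-use"
    return "match" if known_hosts[host] == fp else "MISMATCH"
```

On "MISMATCH" a client should refuse to connect, exactly as ssh does with its "REMOTE HOST IDENTIFICATION HAS CHANGED!" warning; the first-use case is where TOFU accepts risk that CA validation, on paper, does not.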

There exists an extremely widely-used crypto protocol which uses no certificate validation and yet prevents almost all MITM attacks. It's called SSH. In fact SSH has done something that SSL will never do: it has completely replaced the corresponding unencrypted protocol, to the point where no one, I mean no one, uses telnet anymore.

Or, for that matter, rsh. You know, the corresponding unencrypted protocol.

Telnet was an abomination, whose continued use was only necessary because there were so many crap terminals out there in the world, and systems without a POSIX environment :D

> There exists an extremely widely-used crypto protocol which uses no certificate validation and yet prevents almost all MITM attacks.

Nonsense. Ownership of the host private keys, stolen from the target SSH server, allows quite effective MITM: see http://www.gremwell.com/ssh-mi... [gremwell.com] and http://www.snailbook.com/docs/... [snailbook.com]. Moreover, there is no reliable ownership or timestamp on SSH private keys. And worse, there is no working signature authority _available_ for SSH host keys. This makes spoofing an SSH server for new users much simpler.

And worse, there is no working signature authority _available_ for SSH host keys. This makes spoofing an SSH server for new users much simpler.

In many cases communicating the host public key out-of-band is simpler and more secure than using a certificate. Consider what happens in those cases where an SSL certificate is considered too much work or too expensive: sites go with http instead. If the out-of-band communication of an ssh host key is too much work, you still go with ssh, you just trust the key exchange.

> Also it isn't entirely true that there isn't any authority available for ssh. You could make use of RFC 4255 or RFC 6187.

Neither were part of the original RFC specifications, and neither works in most production SSH clients. I'm afraid that I've not seen anyone actually use the DNS-published signatures for SSH keys in the 8 years since RFC 4255 was published, and most clients have no capability for it. RFC 6187 seems to have been roundly rejected by the OpenSSH developers, who came up with their own signature scheme.

Yeah great. This kind of SSH compromise requires a targeted attack, and will only work on that one server. By contrast, with SSL, a single DigiNotar stunt allows you to attack thousands of servers and millions of users all at once. See the difference? SSL is great in theory, horrible in practice. Anyone claiming otherwise is willfully blind to real-world considerations. This includes most cryptography researchers.

Nice name calling. It doesn't support your argument, though. Let me go back to your original statement.

> > There exists an extremely widely-used crypto protocol which uses no certificate validation and yet prevents almost all MITM attacks.

"Almost all MITM attacks" is the phrase you used. Many MITM attacks do, indeed, rely on stolen or legitimately obtained copies of the server encryption keys, so please don't claim that SSH is immune from "almost all MITM attacks". And I just showed where the current

Your argument remains completely nonsensical for one very basic and unavoidable reason: SSL is also equally vulnerable to stolen keys. There is no way in which SSH is worse than SSL.

Of the MITM attacks against SSL actually deployed in the wild, what proportion rely on stolen keys compared to compromised certs? Answer that question, and you'll see that my "most attacks" claim is fully valid.

Another point that you missed completely is that your targeting assumption is wrong. If you're doing a MITM against a banking site, you DON'T need to target them. Not with SSL. You can compromise instead any one of the thousands of certificate authorities in the world. Any single successful compromise of any of these unrelated third parties gives you free rein to MITM any banking site in the world. From the point of view of the server administrator, this is absolutely insane. No matter how good my own security is, it can be undermined by the compromise of a third party I have never even dealt with.

In fact SSH has done something that SSL will never do: it has completely replaced the corresponding unencrypted protocol

You surely know the reasons ssh was able to achieve this and SSL isn't. But for the benefit of others it is worthwhile spelling out the reasons. First of all, SSL certificates mean there is some additional difficulty to getting started with SSL, which isn't there for ssh. Switching from telnet, rsh and rcp to ssh really was as simple as installing the server and client and then starting to use them.

[TOFU] prevents the kinds of attacks that actually matter, while ignoring the kinds of attacks that look great on paper but aren't really a big deal in practice.

If HTTPS used TOFU, it would mean that if I wanted to connect to some high-value site on a device that hadn't visited it before, I couldn't do it on a network that I didn't at least kind of trust. Traveling? Sucks to be you if you need to contact your bank on a relatively new laptop.

We monitored SSH logs to analyze user behavior when our system administrators changed the SSH host key on a popular server within our department. The server's public key had remained static for over two years and thus was expected to be installed on most users' machines. Over 70 users attempted to log in to the server after the key change during the monitored period. We found that fewer than 10% of the users asked the administrators if there had actually been a key change.

For all we know by now it's possible and not implausible to assume that MITM attacks are conducted routinely by various intelligence agencies across the world. SSL is broken. You should not rely solely on CAs anymore. Use physically delivered security tokens (such as encrypted random data on a USB stick) and/or the trust model of ssh instead.

And the "man in the middle" is often actually at one end, on the local router or on the local network switch, with simple packet sniffing in place. It's not rare; it is _ubiquitous_ in many educational and corporate environments.

Not incidentally, I'll also point out that the linked article was written before wi-fi was common. At that point, it was perhaps much more reasonable. But nowadays, when people think nothing of connecting to public wi-fi networks, MITM protection is critical.

Yeah, but cert problems are so common that people routinely just click "yes" to anything that is cert related. The biggest problem with SSL is that it is difficult enough to get right that people are used to it not working right. When I was a teenager I worked in a department store where the alarm at the door would go off about once an hour... needless to say, we never caught a single shoplifter.

Well, you can give people all the security tools in the world and if they are careless they will still leave the door unlocked and the alarm disarmed. There will always come a point where a human being has to make a security decision, because fundamentally all these machines exist in service of us humans in the first place.

Security is about risk: if someone does not perceive the probability times the cost of potential losses from an attack to be worth investing in learning enough about SSL/TLS, X.509 certificates, and the rest, they won't make that investment.

This is an app, not a browser. There *IS* no "yes" to click in an app. Cert not valid? Connection closed, user gets "connection is probably being tampered with" error message. No "shoot self in foot" option is needed, because the same developer owns both the client and the server.
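A fail-closed client along those lines might look like this Python 3.7+ sketch; `fetch_over_tls` is a hypothetical name, and the error message mirrors the one described above:

```python
import socket
import ssl

def fetch_over_tls(host, port=443):
    """Fail-closed TLS connection: any validation failure raises, and the
    app reports an error instead of offering a 'proceed anyway' button."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()    # connected and verified
    except ssl.SSLCertVerificationError:
        # No user-facing override: just close and report.
        raise ConnectionError("connection is probably being tampered with")
```

The key property is that there is no code path that continues after a failed verification; the only "option" is an error.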

Sigh... the sheer amount of stupidity (mostly in the form of people trying to act like they know what they're talking about) in this thread is painful!

OK, I'm a little late to the party here. The issue with the apps isn't that "SSL is insecure" or that "SSH is better". The problem is that security stacks are built from multiple layers of APIs that all have to work correctly, and each layer is hard to get right and easy to get wrong.

Worse, a substantial number of apps will turn off one level or another "for debugging" and then not turn them back on for their release version.

If you don't trust your CA chain then do cert pinning. Either way you need to know you're talking to the right server, pretending that's impossible so it's not worth trying is a cop out.

Certificate pinning is not possible in any real-world scenario. The problem is that certificates change too often. Certificate authorities are part of the problem: they encourage high turnover, because it increases their profits. Certificate pinning only works in situations where you have inside knowledge of a company's certificate policies. Google implements certificate pinning on their own Google properties in Chrome in this way. There is nothing in SSL that technically prevents certificate pinning, but the practical realities of certificate turnover make it unworkable outside cases like that.

SSH has no certificates, and yet has a higher market share in the shell connection market than SSL has in the http connection market

This is not a particularly fair comparison.

I would say that almost all traffic that goes via SSH/telnet/whatever is reasonably private. In most cases, even if the traffic isn't so private, you're getting a shell connection to another computer! There's only one place I can think of that I've SSH'd/telnetted to that wasn't private like that, and that's towel.blinkenlights.nl.

You realise you can do key pinning in SSL as well, right? The point of certificates is to let you find out a public key when you don't have advance knowledge. Certainly for mobile apps, there's no requirement that the app trusts any CA.

So... mobile apps aren't a real-world scenario? You know, where cert pinning is used *all the time* because the same developer provides both client and server? You know, the entire context of this discussion?

Do you even *try* and think before posting bullshit on the Internet?

By the way, it is entirely possible to implement public key TOFU in browsers. It's called HTTP Public Key Pinning [ietf.org]. Chrome is already supporting it, I believe. It could also be achieved through browser extension/plugin pretty easily.
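For reference, HPKP pins are delivered as an HTTP response header as specified in RFC 7469; the values below are placeholders, not real SPKI hashes, and the spec requires at least one backup pin:

```http
Public-Key-Pins: pin-sha256="base64PrimarySpkiHashPlaceholder=";
                 pin-sha256="base64BackupSpkiHashPlaceholder=";
                 max-age=5184000; includeSubDomains
```

On first visit the browser records the pins (a TOFU-style step); on later visits it rejects any chain whose keys don't match, for `max-age` seconds.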

Mobile apps can and do use key pinning, but certificates are not necessary for that. They can just pin individual self-signed public keys. For that matter, they don't even need SSL; they could just use SSH.

It's possible, but useless, to implement public-key TOFU in web browsers. Almost all web sites rotate keys too fast for the pin to be useful.

Certificate pinning is not possible in any real-world scenario. The problem is that certificates change too often.

There is a fairly simple fix for that, but it requires a bit of standardization. The idea is simply to not only have a certificate chain from a CA to the server certificate, but also have a secondary chain going through the server certificates over time. If the client has already stored a previous certificate, the server need to provide a chain from that old certificate to the new certificate.
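A toy model of that continuity idea, with hash links standing in for real signatures (all names here are invented for the sketch):

```python
import hashlib

def cert_id(cert):
    # Stable identifier for a (simplified) certificate record.
    return hashlib.sha256(repr(sorted(cert.items())).encode()).hexdigest()

def continuity_ok(pinned_id, history):
    """Walk a continuity chain: each cert in `history` names the id of its
    predecessor, so a client pinned to an old cert can accept the newest one.
    `history` is ordered oldest-to-newest. A real deployment would verify
    actual signatures by the predecessor's key, not these hash links."""
    expected = pinned_id
    for cert in history:
        if cert["prev"] != expected:
            return False
        expected = cert_id(cert)
    return True
```

The client updates its stored pin to the newest cert whenever the chain checks out, so pins survive routine key rotation.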

SSL certs do what theyre supposed to. There can be trust issues, but the whole "fake CA" thing isnt as big a deal as youre saying for a few reasons:

1) Such attacks are INCREDIBLY obvious. You may not notice right away, and I may not notice right away, but SOMEONE is going to notice that the thumbprint, issuing CA, etc. for a prolific website all just changed. Gee, the Google SSL cert just changed to an issuance date of today, and the issuer changed from "Google Internet Authority G2" to "DigiNotar". Golly.

You are an idiot, and dragging down the collective intelligence of this entire thread. Just for your information, in case you weren't yet aware...

IT IS A MOBILE APP! The developers *DO* control what CAs are trusted by the app. In fact, they do it through *exactly* the same mechanism as turning off cert validation entirely: they implement a custom validator function, instead of having the app use the platform's built-in validation. They can control the CA that is trusted. They can pin the certificates (or just the public keys).

You are just an imbecile who can't even spell Ad Hominem, let alone remember that an attack against the person does not form a sound argument.

They don't even need to check the CA chain of trust

Then they are not implementing SSL certificate validation. They are implementing a custom validation scheme.
If they are implementing a custom validation scheme, they may as well implement one that is as SIMPLE as possible, and as SPECIFIC to their application as possible, so it is least likely to be vulnerable.

The problem is CAs get subverted all the time into issuing certs they shouldn't issue.

Can you please prove this? Unless you're using a very flexible definition of "all the time", there is no publicly known evidence for this point of view. There are millions of certificates in the world and the number of bad certs is low enough that people can enumerate all the compromises on wiki pages.

He can't prove it, no. Because it's bullshit. Don't get me wrong, CAs as a single point of failure is stupid. Trusting *all* CAs for any given connection is also stupid. On the other hand, trusting any random certificate is much, much stupider. There are solutions for the problems with the CA system...

The obvious one, in the case of mobile apps, is certificate pinning. That's not a new idea, or even terribly hard to implement. Make sure you pin a backup as well, and if you want to you can pin at the intermediate CA.

Yeah, because if a sufficiently motivated person can always pick a lock, we should just remove all locks?

With certificate validation, someone will have to compromise a CA (admittedly, any trusted CA will do) and do a MITM to get your data.
Without certificate validation, anyone who can do a MITM can get your data.

And you seem to think that the difficulty of pulling off a MITM attack is about the same as compromising a CA.
It is not: just set up a rogue Wi-Fi hotspot in a cafe or other public place and wait for victims to connect.

Yeah, because if a sufficiently motivated person can always pick a lock, we should just remove all locks?

No... SSL encryption is the lock. Authenticity is supposed to be the anti-pick mechanism.
I suggest not using the anti-pick mechanism called "CA based certificate validation" that comes with SSL.
I mentioned there are better options available, such as hard coding good public keys on the client side.

But you have to throw away SSL certificate validation before you can implement good crypto practice.
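A sketch of the hard-coded-public-key option mentioned above; the hostname and pin value are placeholders (the pin below is just the SHA-256 of the string "test", chosen so the sketch is self-checking):

```python
import hashlib

PINNED_SPKI_SHA256 = {
    # Hypothetical hard-coded pins: hostname -> SHA-256 of the server's
    # DER-encoded public key, baked into the client at build time.
    "api.example.com":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def pin_ok(host, der_pubkey):
    """Accept the connection only if the server's public key hashes to the
    value compiled into the client; no CA is consulted at all."""
    digest = hashlib.sha256(der_pubkey).hexdigest()
    return PINNED_SPKI_SHA256.get(host) == digest
```

Pinning the public key rather than the whole certificate lets the server renew its cert freely, as long as it keeps the same key pair.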

you're under some kind of deadline pressure and you can't connect to them, don't turn off SSL validation.

OR: Always turn off SSL validation, because it's totally worthless.

The problem is CAs get subverted all the time into issuing certs they shouldn't issue.

You're assuming that they're using a third-party CA, and using the same pool of CAs browsers use to validate.

In truth, when developing applications, you don't need that. If I were to make an application and server right now, I'd use my own CA certificate. I'd then bundle it with my application, and sign the server certificate with it. TLS validation will mean TRUE security in this case.
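With Python's standard library, that bundled-private-CA approach might look like this sketch; `bundled_ca_path` names a hypothetical PEM file shipped inside the app, and because the context never loads the system CA store, only certificates chaining to the bundled CA are accepted:

```python
import ssl

def private_ca_context(bundled_ca_path):
    """TLS client context that trusts ONLY the app's own bundled CA.

    Certificates signed by public CAs are rejected outright, since the
    system trust store is never loaded into this context."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # hostname check + CERT_REQUIRED
    ctx.load_verify_locations(cafile=bundled_ca_path)
    return ctx
```

Pass the returned context to `wrap_socket` (or an HTTP client that accepts an `ssl.SSLContext`) and a DigiNotar-style mis-issuance by any public CA becomes irrelevant to this app.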

This one is real justice, particularly if they have to foot the bill for the regular audits, something they should have been doing in the first place. Just because a PHB makes the moronic decision to "bend the truth" does not mean everyone else in the company should suffer a loss of employment.

PHB - Actually it sounds more like a geek "technical argument" to me - "Supports SSL" is open to creative interpretation when a deadline is looming.

Hi, you would be right except there is definitely something punitive in the settlement. Both formal security audits and formal certification procedures are very expensive for a small business. If you have only a handful of developers and the audit or certification takes one of them out of circulation for 3-6 months, that's very expensive. Even having your developers distracted by the necessarily niggling and picky auditors is expensive, even if they aren't on it full time.

I have a hard time believing the FTC will follow through with reviewing and verifying the contents of these security audits. This is a non-punishment, not even a slap on the wrist. They should have gone for a stiff monetary fine. That said, I don't know how likely such an outcome would have been for the FTC. However, fining until it hurts is the only thing I am certain businesses will respond to.

I have a hard time believing the FTC will follow through with reviewing and verifying the contents of these security audits.

They probably aren't planning to, and won't need to. Credit Karma will set up a new corporate entity like "Karma New Holdings LLC," transfer all assets including the domain, customers, and brand, and keep on truckin'. Hell it's probably already been done. Assuming the FTC ever does call them up two years from now, the entity which received sanctions will conveniently no longer exist.

"However, the app didn’t validate those connections, so users’ financial information was exposed during transmission." - This is false; the channel was still encrypted, but it is possible for a MITM attack to occur. Now if the client knows who it is talking to (IP address), with some messages exchanged in the application layer, then SSL verification may not be needed. The real purpose of SSL cert validation is to authenticate who you are talking to - if you know you want to talk to server 10.1

"However, the app didn’t validate those connections, so users’ financial information was exposed during transmission." - This is false; the channel was still encrypted, but it is possible for a MITM attack to occur. Now if the client knows who it is talking to (IP address), with some messages exchanged in the application layer, then SSL verification may not be needed.

No. If your verification is done on a separate layer from your encryption, then you are doing it wrong. This in no way prevents a MITM attack at all. All you need to do is get between them (wifi, arp poison, dns redirect, etc), MITM the ssl, then you can pass the "verification" right through while reading all of the "encrypted" information as it passes. You can also wait for the verification to succeed, then start screwing with the data because the verification is not connected to the encryption (which is a

There are so, so many ways around this. A simple one, for example, is to only perform certificate validation past a date in the future when you will have the cert ready for testing (ideally shortly before publication). That's an easy check to perform in a custom validation function (which happens to be the same way you turn validation *off*, in most cases, so it's a truly trivial amount of extra effort). Or you can have the validation disabled in debug builds, but enabled for any "release" build (including
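The date-gated check described above can be sketched as a tiny helper; the cutoff date and flag names are invented for illustration:

```python
import datetime

# Hypothetical cutoff: the real server cert is expected to be in place by
# this date, so validation is enforced from then on even in debug builds.
ENFORCE_AFTER = datetime.date(2014, 6, 1)
DEBUG_BUILD = False  # flipped on only in local/debug builds

def must_validate(today, debug=DEBUG_BUILD):
    """Certificate validation may be skipped only in a debug build AND
    only before the cutoff date; release builds always validate."""
    if not debug:
        return True
    return today >= ENFORCE_AFTER
```

The point is that the "skip validation" path expires on its own, so a forgotten debug flag cannot ship insecure behavior indefinitely.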