Researchers have uncovered defects in a wide range of applications running on computers, smartphones, and Web servers that could make them susceptible to attacks exposing passwords, credit card numbers, and other sensitive data.

The Trillian and AIM instant messaging apps and an Android app offered by Chase Bank are three of the applications identified as vulnerable to so-called man-in-the-middle attacks. As with the other dozen or so applications identified, the threat stemmed from weak implementations of the secure sockets layer (SSL) and transport layer security (TLS) protocols. Together, the technologies are designed to guarantee the confidentiality and authenticity of communications between end users and servers connected over the Internet.

The weak implementations caused the programs to initiate encrypted communications without first assessing the validity of the digital certificates on the other end. As a result, one of the fundamental guarantees of SSL—that the computer on the other end of the connection belongs to the party claiming ownership—was compromised. Instead, the apps trusted imposter certificates that were signed by attackers or that failed established validity tests for a variety of other reasons.

"Our main conclusion is that SSL certificate validation is completely broken in many critical software applications and libraries," a team of researchers wrote in a paper titled The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software. "When presented with self-signed and third-party certificates—including a certificate issued by a legitimate authority to a domain called AllYourSSLAreBelongTo.us—they establish SSL connections and send their secrets to a man-in-the-middle attacker."

Instant messaging clients Trillian and AIM are among the apps that fail to properly validate SSL certificates before establishing a secure connection, according to the researchers. Man-in-the-middle attacks on Trillian, depending on the specific setup, can yield login credentials for a variety of third-party services (including Google Talk, AIM, Yahoo!, and Windows Live services). The AIM client version 1.0.1.2 on Windows also accepts certificates signed by untrusted parties and fails to verify that the host name on the certificate matches the Internet address the app is connected to.

Similar weaknesses in the Chase mobile banking app for Google's Android operating system also put users at risk, the researchers said. "Even a primitive network attacker—for example, someone in control of a malicious Wi-Fi access point—can exploit this vulnerability to harvest the login credentials of Chase mobile banking customers," the paper warned.

The researchers attributed weaknesses to the "terrible design" of the programming interfaces provided in widely used code libraries that implement SSL. In some cases, the libraries leave it up to individual apps to validate the certificates presented when they connect to a server. In other cases, options chosen by app developers inadvertently turn off validation routines that by default are supposed to run.
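To illustrate how easily an option can switch validation off, here is a minimal sketch using Python's standard-library `ssl` module (not one of the libraries named in the paper); the point is generic, not specific to any vendor's API:

```python
import ssl

# Strict client defaults: chain validation and hostname checking are on.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# Two innocuous-looking assignments silently disable both checks, so the
# client will now accept any certificate an attacker presents.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
print(ctx.verify_mode == ssl.CERT_NONE)  # True: validation is now off
```

Nothing about the second half of this snippet looks alarming at the call site, which is exactly the design problem the researchers describe.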

"These APIs are extremely confusing," said Moxie Marlinspike, the pseudonymous researcher who has repeatedly exposed vulnerabilities in SSL. "They're very easy to get wrong and people do get them wrong all the time. But some of the cases that [the researchers] outline don't spell certain death."

One such case, Marlinspike said, was a weakness described in the code libraries for the Amazon Flexible Payments Service used to process online payments. Invoking the library from the PHP language turns off domain-name checking in FPS, allowing the program to establish connections with unauthorized servers, the paper said. Marlinspike didn't dispute that finding but said that FPS provides its own signature-based authentication protocol and doesn't transmit client credentials, credit card numbers, or bank account information. Under those circumstances, the lack of SSL validation doesn't necessarily indicate a significant loss of security, he said.

Marlinspike made a similar observation about a finding involving TextSecure, an Android app he developed to encrypt cellphone messages that use the SMS and MMS protocols. The carrier servers that phones connect to don't present SSL certificates that conform to Internet standards, so TextSecure wouldn't work correctly if it ran validation routines. More importantly, messages are encrypted using the Off-the-Record (OTR) protocol, so SSL is never relied on to keep the content private.

Nonetheless, the paper cites a litany of widely used apps and code libraries that fail to properly vet servers before establishing connections that are supposed to be secure. This authentication is supposed to take place during a "handshake" mandated by the SSL protocol, when a server presents its public-key certificate to the end-user computer. For the connection to be considered secure, the client must first confirm the certificate has been issued by a valid certificate authority, has not expired or been revoked, and carries the domain name(s) the client is connecting to.
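The checks described above are what a correctly configured client performs automatically during the handshake. A Python sketch with the standard library (the host name is illustrative only):

```python
import ssl

# create_default_context() enables the handshake-time checks: the chain
# must lead to a trusted CA, the certificate must be within its validity
# window, and the server's hostname must appear in the certificate.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain is validated
print(ctx.check_hostname)                    # hostname must match

# Wrapping a real socket then looks like (host is illustrative):
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.getpeercert()["subject"])
```

If any of those checks fails, the handshake aborts with an exception rather than silently proceeding, which is the behavior the vulnerable apps lacked.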

Other apps and libraries mentioned in the paper include Amazon’s EC2 Java library and all cloud clients based on it; Amazon’s and PayPal’s merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; and Java Web-services middleware—including Apache Axis, Axis 2, Codehaus XFire, and the Pusher library for Android—plus all applications employing this middleware.

The researchers said they uncovered the faulty validation by running the programs on a network that used DNS cache poisoning and a real-time certificate impersonator to simulate a real-world man-in-the-middle environment. They called on developers to subject their apps to similar tests.

"The state of adversarial testing appears to be exceptionally poor even for critical software such as mobile banking apps and merchant SDKs responsible for managing secure connections to payment processors," the researchers wrote. "Most of the vulnerabilities we found should have been discovered during development with proper unit testing."

35 Reader Comments

It's nice to see how civilized the world is becoming after the invention of this whole 'internet' thing. Myself, I feel much safer having my money stolen electronically as opposed to the more traditional method, mugging. /s

Great... every time I express the slightest bit of skepticism about banking from a mobile device (due to security questions), there is undoubtedly some snotty commenter to chide me for being timid or skeptical.

It's nice to see how civilized the world is becoming after the invention of this whole 'internet' thing. Myself, I feel much safer having my money stolen electronically as opposed to the more traditional method, mugging. /s

Yes, but in the physical world, a mugging just rid you of the money in your pocket. Online, it can rid you of all current and future money as well as your credit rating necessary to borrow more.

Myself, I feel much safer having my money stolen electronically as opposed to the more traditional method, mugging. /s

Yes, but in the physical world, a mugging just rid you of the money in your pocket. Online, it can rid you of all current and future money as well as your credit rating necessary to borrow more.

Usually when I tell someone that they missed the sarcasm tag, I'm also being sarcastic, since there isn't one. In this case, however, there actually *was* a sarcasm tag, and you still missed it. Delicious, really...

Great... every time I express the slightest bit of skepticism about banking from a mobile device (due to security questions), there is undoubtedly some snotty commenter to chide me for being timid or skeptical.

Well, basically, if you were banking from your mobile device using the browser, you'd be fine. (At least, I imagine if mobile chrome, mobile firefox, 'android browser' (chromium?), mobile safari, or mobile IE were vulnerable, they'd be mentioned in the article, because that would be HUGE news.)

Basically, I don't use those kinds of boutique apps partially for that reason - there is no user-visible way to verify that it's actually a secure connection, you're trusting brand x to properly vet their brand y programmers to make one little web-front-end app.. or you could trust a large security team at apple/google, microsoft, or mozilla, who, particularly in the case of microsoft and mozilla, have a long history of security stress-testing their browser code, and talking about it publicly (and google has a record, but it only goes back a few years instead of over a decade..).

So, if I go to my bank's website on my phone using mobile chrome, and I see the ssl symbol, I'm pretty sure that my data is safe in transit. At the ends is an entirely different story.. and sure, a few people might fall victim to a man-in-the-middle attack at a shady coffee shop... but millions of people get their financial details stolen from payment processors every year...

(Plus, at least with most GSM-based phones in the US, if you're not on wifi, you probably have a fairly secure connection to 'the internet', so I've been known to turn off wifi when I'm on a public hotspot and doing something sensitive)

The worst part of this (or, at least a really bad part of this) is that all the top people have been, rightly, saying for years that you should not implement your own SSL solution. Use the prebuilt ones and leave the security up to the experts.

I expect that advice will be a bit harder of a sell to anyone who got burned by one of these broken SSL libraries.

The worst part of this (or, at least a really bad part of this) is that all the top people have been, rightly, saying for years that you should not implement your own SSL solution. Use the prebuilt ones and leave the security up to the experts.

I expect that advice will be a bit harder of a sell to anyone who got burned by one of these broken SSL libraries.

It sounds more like people are using prebuilt ones, just not using them PROPERLY.

So stupid.... why people do not test the SSL portion of their apps is beyond me... This is such a glaring error that these Apps should have never been released.

Misuse of libraries is one thing, but I also have to wonder if part of the problem stems from the difficulty of developing and debugging an app that uses SSL/TLS, without using the real, live server(s) at the real, live domain(s). A banking app, for example; you think they actually connected to the live BoA servers during dev and debug of the thing? Man, I hope not! But that leads to ... "oh, damn, it doesn't like the self-signed cert on my dev server! Guess I'll have to disable that. For now. Yeah, I'll have to come back to that later." Later is sometimes never. So, when they do go live, nothing ever turns red, nothing ever flashes a warning. The app is by then so forgiving that it'll gladly connect to damned near anything - just like the developer made it do.

Maybe an edge case and an exaggeration, and not a valid excuse for the developers or the people who hired them - but I'm willing to bet that something like that has happened, at least with some of the smaller players.

The worst part of this (or, at least a really bad part of this) is that all the top people have been, rightly, saying for years that you should not implement your own SSL solution. Use the prebuilt ones and leave the security up to the experts.

I expect that advice will be a bit harder of a sell to anyone who got burned by one of these broken SSL libraries.

It sounds more like people are using prebuilt ones, just not using them PROPERLY.

In order to do the certificate verification, the library needs the public keys for the CA(s). These are usually stored in a file. That file has to "live" somewhere, and the library needs to be told where it is. This file might also need to be periodically updated if the regular HTTPS CAs are used.

None of this is very difficult, but it is more work than just turning off verification.
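In Python's standard library, for example, both the default location and the override amount to a file path (the custom bundle path below is hypothetical):

```python
import ssl

# Where the platform's trusted CA file/directory lives by default.
paths = ssl.get_default_verify_paths()
print(paths.cafile or paths.capath)

# An app that ships its own CA bundle points the context at it instead
# (path is hypothetical), and must keep that file updated:
# ctx = ssl.create_default_context(cafile="/etc/myapp/ca-bundle.pem")
```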

There is a more general problem at work here. Most developers try to make their code work, and see that as the goal. Trial-and-error debugging for example, instead of actually understanding the code and the APIs it uses. If things don't work because of certificate validation failures, then they turn off certificate validation (perhaps on the advice of someone who "solved" the same problem before), and voila, it "works", problem solved, on to the next bug.

It's not enough for software to do what it is supposed to do. It also needs to be impervious to attempts to make it do things that it shouldn't. Most developers struggle so much with the former that they can't bear to think about the latter.

Try reading the whitepaper. On Chase for instance, they decompiled and poked through the app, and found code that disregards certificates. There is no mention of whether they even performed a successful attack. The code they found could be completely unreachable.

The attack works in practice. It is not just a random "maybe" attack. To verify, get a copy of the Chase mobile app on Android dating from before April 2012 and try the attack yourself. Then you can see that you can capture the login credentials.

Yes, but in the physical world, a mugging just rid you of the money in your pocket. Online, it can rid you of all current and future money as well as your credit rating necessary to borrow more.

In the physical world, a mugging can rid you of your life too. There goes your everything then, not just future money and credit rating.

Quote:

In some cases, the libraries leave it up to individual apps to validate the certificates presented when they connect to a server.

Of course they do -- it is not the job of an SSL library to compare the host FQDN to the certificate CN field or to ensure that the certificate is not self-signed. Certificate validation is left to developers for the sake of flexibility (not everyone needs all the checks) -- developers should know what they need to validate and how to handle each specific case of validation failure. If they don't, then they should not be trusted to write communication code of any sort, much less SSL.

SSL itself is not broken as the article claims; it's the idea of certificate authorities which is broken -- it requires implicit trust in 3rd parties (i.e. all intermediate certificate authorities) in addition to the entity you are dealing with, and thus reduces the security of the chain to the strength of its weakest link.

In my opinion, trusting more people can only reduce your security, not increase it.
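One mitigation along these lines is certificate pinning: rather than trusting any chain ending at any CA, the client compares the server's certificate against a fingerprint obtained out of band. A minimal sketch (the certificate bytes and pin here are stand-ins for illustration):

```python
import hashlib

def sha256_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, expected: str) -> bool:
    # Reject the connection unless the presented cert matches the pin.
    return sha256_fingerprint(der_cert) == expected

# In practice der_cert would come from tls.getpeercert(binary_form=True)
# after the handshake; a stand-in byte string demonstrates the flow.
demo_cert = b"not-a-real-certificate"
pin = sha256_fingerprint(demo_cert)
print(check_pin(demo_cert, pin))  # True: fingerprints match
```

The trade-off is operational: a pinned certificate that is rotated without updating clients breaks connectivity, which is one reason pinning is used alongside, not instead of, CA validation.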

So stupid.... why people do not test the SSL portion of their apps is beyond me... This is such a glaring error that these Apps should have never been released.

Misuse of libraries is one thing, but I also have to wonder if part of the problem stems from the difficulty of developing and debugging an app that uses SSL/TLS, without using the real, live server(s) at the real, live domain(s). A banking app, for example; you think they actually connected to the live BoA servers during dev and debug of the thing? Man, I hope not! But that leads to ... "oh, damn, it doesn't like the self-signed cert on my dev server! Guess I'll have to disable that. For now. Yeah, I'll have to come back to that later." Later is sometimes never. So, when they do go live, nothing ever turns red, nothing ever flashes a warning. The app is by then so forgiving that it'll gladly connect to damned near anything - just like the developer made it do.

Maybe an edge case and an exaggeration, and not a valid excuse for the developers or the people who hired them - but I'm willing to bet that something like that has happened, at least with some of the smaller players.

I doubt this is an edge case. I've caught colleagues of mine doing exactly this, thankfully before it went live in production. I wouldn't be surprised if this were not frequently the case.

So if I understand this correctly, there's not a world-ending problem with SSL. The problem is with the app developers implementation of SSL? Well, then I guess we should post them all on some wall-of-shame.

So stupid.... why people do not test the SSL portion of their apps is beyond me... This is such a glaring error that these Apps should have never been released.

Misuse of libraries is one thing, but I also have to wonder if part of the problem stems from the difficulty of developing and debugging an app that uses SSL/TLS, without using the real, live server(s) at the real, live domain(s). A banking app, for example; you think they actually connected to the live BoA servers during dev and debug of the thing? Man, I hope not! But that leads to ... "oh, damn, it doesn't like the self-signed cert on my dev server! Guess I'll have to disable that. For now. Yeah, I'll have to come back to that later." Later is sometimes never. So, when they do go live, nothing ever turns red, nothing ever flashes a warning. The app is by then so forgiving that it'll gladly connect to damned near anything - just like the developer made it do.

Maybe an edge case and an exaggeration, and not a valid excuse for the developers or the people who hired them - but I'm willing to bet that something like that has happened, at least with some of the smaller players.

I doubt this is an edge case. I've caught colleagues of mine doing exactly this, thankfully before it went live in production. I wouldn't be surprised if this were not frequently the case.

Edge case or not, it is just called bad programming. If you are implementing SSL in any application, part of the QA/testing should be around actually testing the SSL validation.

I wonder what would happen if you test a jet before you check the aerodynamics?

Myself, I feel much safer having my money stolen electronically as opposed to the more traditional method, mugging. /s

Yes, but in the physical world, a mugging just rid you of the money in your pocket. Online, it can rid you of all current and future money as well as your credit rating necessary to borrow more.

Usually when I tell someone that they missed the sarcasm tag, I'm also being sarcastic, since there isn't one. In this case, however, there actually *was* a sarcasm tag, and you still missed it. Delicious, really...

I'm sorry, there are so many other things that "/s" could have meant. In HTML that's an end tag for strikethrough. Maybe you should use a more standard DTD.

The whole thing is designed to be fragile and strict. If anything is not exactly right and working up to protocol, it breaks and pukes errors onto your shoes. That's always the case when you deal with security-related things, as any deviation from the defined process is likely to be exploitable, and stopping immediately upon that is the Right Thing. But most programmers aren't used to that level of strictness, and the whole SSL thing is not too easy to grasp. So to get it right, most would need to read up a bit, and it would also require a bit of external test setup that most developers aren't used to building.

So what is the result? Cutting corners and making it work just somehow. Mainly because there is not much of an incentive to do it any better, but a lot of trouble and work. Consumers and customers are unable to reveal underlying problems; they can only check if there's something turning green. If you tell them that the communication gets encrypted, they're happy, because that implies security to them, in most cases even if you outright tell them there is no authentication. If there's something that does not work or turns red, they're upset. They get the impression that this is bad, which they don't get if it works or falsely displays something green and flashes an animated "secure" badge.

The advice to use a well known and tested SSL library instead of rolling your own absolutely stands. As we see here, there's dozens of cases where it's not working even without the much harder task of handling the core SSL things. Imagine what would be the result if the people making mistakes using a library would try to write it themselves. I doubt that it would be anything but worse.

The problem is that many major players violate this basic rule and don't use certs that match their servers/domains. Like the Trillian app mentioned: Yahoo for the longest time didn't even have a matching cert for their mail server (not sure if it's been fixed yet), and I had no choice but to choose "confirm exception" for the cert www.yahoo.com on website mail.123.abc.yahoo.com, so I doubt their IM servers were any better. And if Trillian wants to support Yahoo Messenger, they have no choice but to disable this check. Of course, that also means Yahoo Messenger itself suffers from this same risk ... or doesn't use any encryption at all.

I write mobile banking apps for a living (on both iOS and Android). There's been a lot of speculation in this thread that it's hard to properly implement SSL on these platforms. I can tell you that's simply not the case. You have to actually write additional code to disable or alter certificate validation.

I can only speculate why developers would do so, but some of the reasons may be:

1. There are many terrible code examples on StackOverflow that recommend disabling certificate validation (here's one I found after 10 seconds of searching). Developers may be getting SSLExceptions and not understand why, so they simply copy and paste the "fix" without considering the ramifications. Stupid.

2. Android 2.2 and below were missing many commonly used certificate authorities in the trusted keystore. If the institution's site was using a certificate signed by one of these CAs, it's possible the developer didn't want to take time to work with their IT department to resolve the issue (by adding a cross-root CA into the server-side certificate chain, or replacing the certificate). So instead the developer bypasses certificate validation and the problem goes away. Stupid.

3. Android 2.2 and below were also VERY picky about the order of the server-side certificate chain (all certs are now chained). If the chain wasn't sent down to the SSL client in the right order, you'd get an SSLException because it couldn't follow the chain up to a trusted CA. A lazy developer would bypass certificate validation to "fix" the issue. Stupid.

4. It's possible the developer was trying to increase security by implementing their own certificate validation, and only allowing certificates signed by a particular CA to be trusted. But maybe they did this improperly. Stupid.

5. As speculated by others, it's possible the developer disabled validation during development and forgot to enable in production because they were using a self-signed certificate that's not trusted by the SSL library. That's a poor excuse as it's quite easy to add a CA to an iOS device, and you can create an SSLSocketFactory during development that trusts your self-signed cert on Android.

So a developer working on these types of applications has to be pretty negligent in order to allow something like this to occur in their apps. I've had to work with many of our clients to ensure that they have their SSL certificates set up properly (just because it works in IE does not mean it will work on a mobile device!). There is no excuse for completely destroying the security of your app out of laziness or ignorance.
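Point 5 has a straightforward alternative to disabling validation: trust only the development CA in dev builds while keeping every check on. A hedged sketch (the PEM path is hypothetical; on Android the analogous object is a custom SSLSocketFactory, as the comment notes):

```python
import ssl

def dev_context(dev_ca_pem: str) -> ssl.SSLContext:
    # Strict defaults: hostname checking and chain validation stay on.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    # Trust only the team's self-signed dev CA, nothing else.
    ctx.load_verify_locations(cafile=dev_ca_pem)
    return ctx

# Usage in a dev build (path is hypothetical):
# ctx = dev_context("/home/dev/certs/dev-ca.pem")
```

Because the production build simply omits the dev CA, there is no validation-disabling code path to forget to remove before release.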

(Plus, at least with most GSM-based phones in the US, if you're not on wifi, you probably have a fairly secure connection to 'the internet', so I've been known to turn off wifi when I'm on a public hotspot and doing something sensitive)

No, you have a secure connection to "the cell tower". WiFi and GSM encryption won't protect you from a man-in-the-middle attack.

This is one of the unfortunate side effects that tablets and mobile devices have had on software. In the last decade software has tended to converge towards a single platform, the web browser. This had the advantage that such things as SSL validation were written once, and every website visited would use that same method.

Now instead of using a web browser, each "site" is releasing their own custom apps for each device. Some will be well written, others not so much.

I think that as mobile web browsers become more "full featured" it may serve us well to move back toward a single viewer model via a browser instead of lots of custom apps.

So stupid.... why people do not test the SSL portion of their apps is beyond me... This is such a glaring error that these Apps should have never been released.

Misuse of libraries is one thing, but I also have to wonder if part of the problem stems from the difficulty of developing and debugging an app that uses SSL/TLS, without using the real, live server(s) at the real, live domain(s). A banking app, for example; you think they actually connected to the live BoA servers during dev and debug of the thing? Man, I hope not! But that leads to ... "oh, damn, it doesn't like the self-signed cert on my dev server! Guess I'll have to disable that. For now. Yeah, I'll have to come back to that later." Later is sometimes never. So, when they do go live, nothing ever turns red, nothing ever flashes a warning. The app is by then so forgiving that it'll gladly connect to damned near anything - just like the developer made it do.

Maybe an edge case and an exaggeration, and not a valid excuse for the developers or the people who hired them - but I'm willing to bet that something like that has happened, at least with some of the smaller players.

I'd actually put it down to poor testing. It isn't at all hard to set up a "bench-tester" using virtual machines. It isn't even very hard to rig it for packet monitoring to help on the debugging end if you know what's required. I've been doing it for a dozen years now and this type of scenario is one of the first I came up with. Hell, I keep old client virtual machines around (e.g. Win'XP/IE6) as they are still common out in the Real World so I'm adequately covering all the testing bases (pun intended).

Any chance we can get the full list of apps these guys found vulnerable? Some of them are probably open source so it would be interesting to see exactly what they are doing wrong.

I'm a PHP programmer and security advocate. Kevin McArthur has spent a lot of time highlighting this issue in PHP, and was graciously referenced by the study authors. You can find his own list of open source software (in PHP) containing this vulnerability at http://www.unrest.ca/peerjacking. The list may be incomplete due to responsible disclosure requirements when reporting security vulnerabilities to vendors.

The researchers said they uncovered the faulty validation by running the programs on a network that used DNS cache poisoning and a real-time certificate impersonator

Doing it on wifi is one thing; doing it over a cellular network is quite another level of sophistication. If you're using a cell phone, there's NO excuse for doing sensitive web or banking operations on untrusted wifi.

(Plus, at least with most GSM-based phones in the US, if you're not on wifi, you probably have a fairly secure connection to 'the internet', so I've been known to turn off wifi when I'm on a public hotspot and doing something sensitive)

No, you have a secure connection to "the cell tower". WiFi and GSM encryption won't protect you from a man in the middle attack

True, my bad. Your GSM/LTE connection is only strongly encrypted between your phone and the carrier's public IP network; once you hit their public IP space, you're back to standard internet levels of security... which is still higher than random coffee shop hotspot levels. (Though I think you're also right: in the case of GSM, you're only encrypted 'to the tower', not all the way back to the core cell network. The tower-to-core connection is encrypted, but in bulk, just one encrypted connection carrying all the different device streams, so a compromised tower could leak them all. With LTE, each device has a strongly encrypted connection to the core, from what I understand, so a compromised tower can't do much.)

But I didn't mean it made you immune to a man-in-the-middle attack.. I just take steps to make it less likely (not using unknown networks to get 'to the internet').

If you're using a cell phone, there's NO excuse for doing sensitive web or banking operations on untrusted wifi.

The phone companies push customers to use wifi whenever it's available. (Especially in the face of that) I don't think we should be pushing these kinds of security issues all the way down to the end user, the apps and the phone need to be making some automatic decisions. Or, to invert your statement - what non-sensitive operations can the typical person feel safe doing on an untrusted wifi, and what are your honest expectations for the typical person to get it right?

Any chance we can get the full list of apps these guys found vulnerable? Some of them are probably open source so it would be interesting to see exactly what they are doing wrong.

I'm a PHP programmer and security advocate. Kevin McArthur has spent a lot of time highlighting this issue in PHP, and was graciously referenced by the study authors. You can find his own list of open source software (in PHP) containing this vulnerability at http://www.unrest.ca/peerjacking. The list may be incomplete due to responsible disclosure requirements when reporting security vulnerabilities to vendors.

The tl;dr: Kevin has been prevented from publicizing this research for responsible disclosure reasons. The paper's authors were made aware but have never been willing to communicate with him, despite having been contacted. He argues this raises issues for the ethics of the academy.

So, if you implement SSL without checking cert validity, then (shock, horror, gasp) it's vulnerable to exploit.

Unbelievably slack by the devs involved. As posters above say, it was probably some hack doing a copy-paste job without understanding the ramifications. The fact that a banking app was vulnerable says a lot about how seriously they take this sort of thing...

The web was built on the principle of good enough, and now that everything we do is mediated through this thing (including private conversation and financial transactions) it's coming back to bite us in the ass.