Titles from security firms Lavasoft and Comodo leave users open to easier attacks.

Two more software makers have been caught adding dangerous, Superfish-style man-in-the-middle code to the applications they publish. The development is significant because it involves antivirus provider Lavasoft and Comodo, which issues roughly one-third of the Internet's Transport Layer Security (TLS) certificates, making it the world's biggest certificate authority.

The Lavasoft and Comodo cases came to light just as researchers were discovering simpler, more potent ways to exploit this class of vulnerability.

Late last week came word that self-signed Secure Sockets Layer (SSL) root certificates installed by software from a company called Komodia caused most browsers to trust any certificate signed with the same easily extracted private key. That was bad, but now researchers have discovered further vulnerabilities in the closely related proxy software of interception applications from Komodia and Comodo. The new insight makes it even easier for attackers to forge trusted credentials that impersonate Bank of America, Google, or any other HTTPS-protected destination on the Internet.
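
The shared-key flaw is easy to model. Because every install of the interception software ships an identical root key, extracting the key from any one copy lets an attacker sign credentials that every other affected machine will accept. The following Python sketch is purely illustrative: an HMAC stands in for real RSA certificate signing, and the key value and names are made up.

```python
import hashlib
import hmac

# Every install of the interception software ships the SAME root key.
# The key value here is made up for illustration.
SHARED_ROOT_KEY = b"komodia-style-key-identical-on-every-install"

def sign_cert(subject: str, key: bytes) -> bytes:
    """Toy 'certificate signature': an HMAC of the subject name."""
    return hmac.new(key, subject.encode(), hashlib.sha256).digest()

def browser_trusts(subject: str, signature: bytes, root_key: bytes) -> bool:
    """An affected machine trusts anything signed by its installed root."""
    return hmac.compare_digest(signature, sign_cert(subject, root_key))

# An attacker extracts the key from their own copy of the software...
attacker_key = SHARED_ROOT_KEY
# ...and forges a credential for any HTTPS site they want to impersonate.
forged = sign_cert("www.bankofamerica.com", attacker_key)

# Every other machine with the software installed accepts the forgery.
print(browser_trusts("www.bankofamerica.com", forged, SHARED_ROOT_KEY))  # True
```

Real certificate validation involves signature chains rather than HMACs, but the trust failure is the same: a signing key that was supposed to stay secret is present, identically, on every affected machine.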

Lavasoft Ad-aware Web Companion is free privacy software Lavasoft markets as a companion to regular antivirus protection. Lavasoft appears to have licensed the Komodia engine and put it into the Companion product for inspecting SSL traffic. Most other AV products use similar self-signed certificates to detect threats in SSL traffic, but so far there are no reports of other AV companies using such vulnerable implementations. At the time this post was being written, Lavasoft was unable to confirm whether the vulnerable Komodia code had been fully removed from the latest version of Companion. The company is prepared to issue a new version on Monday, if necessary.

The second security-marketed application is "PrivDog," the creation of Comodo CEO Melih Abdulhayoglu. Researcher Filippo Valsorda told Ars that the stand-alone version of PrivDog will cause most browsers to trust any self-signed certificate, a breathtaking vulnerability that leaves users wide open to easily executed man-in-the-middle attacks that completely bypass HTTPS protections.

Besides its ties to Comodo—a certificate authority that's trusted by all major operating systems—PrivDog is notable for not containing any traces of Komodia technology. The version of PrivDog that's bundled with Comodo Internet Security does not contain the same critical weakness, Valsorda said.

PrivDog bills itself as software that enhances security and privacy by replacing ads in Web pages with ads from trusted sources. Presumably, the vulnerable version of PrivDog is using the man-in-the-middle proxy and certificate to replace ads in HTTPS-protected sites. Abdulhayoglu and other Comodo officials didn't respond to e-mail seeking comment for this post. Update: On Monday, PrivDog issued a statement reiterating what Ars already reported, that the vulnerability resides in the stand-alone version only. Fewer than 58,000 users are affected. Remarkably, PrivDog rated the threat as "low." That seems to be a massive understatement, given the harm that can be done, no matter how many users are affected. Readers with either Lavasoft Ad-aware Web Companion or the stand-alone version of PrivDog should err on the side of caution and uninstall both the app and the underlying root certificate as soon as possible.
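
For readers who want to check whether one of these roots is present, Python's Windows-only `ssl.enum_certificates` can dump the system's trusted-root store, and a crude substring scan of the raw DER bytes is enough to spot the known names, since subject strings appear verbatim in the encoding. This is a hypothetical helper, not official removal tooling (removal should still be done through the Windows certificate manager), and the name list is an illustrative guess:

```python
import ssl
import sys

# Names associated with known vulnerable interception roots.
# This list is an illustrative guess, not an authoritative blocklist.
SUSPECT_NAMES = [b"Superfish", b"Komodia", b"PrivDog"]

def find_suspect_roots(der_certs):
    """Return indices of certs whose raw DER bytes contain a suspect name.

    Subject strings are embedded verbatim in DER, so a plain substring
    scan is a quick (if crude) first-pass check.
    """
    return [i for i, der in enumerate(der_certs)
            if any(name in der for name in SUSPECT_NAMES)]

if sys.platform == "win32":
    # ssl.enum_certificates is Windows-only; "ROOT" is the trusted-root store.
    roots = [der for der, encoding, trust in ssl.enum_certificates("ROOT")]
    print("suspect root indices:", find_suspect_roots(roots))
```

A hit only flags a candidate; confirm the certificate's details in the OS certificate manager before deleting it.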

Post updated to add statement from PrivDog.

Promoted Comments

Some of you sound like congress, coming up with magical fixes for the Internet.

They should just engage the IP deflector shields and bounce the packets on a tangent, knocking off hackers in nearby attack zones. If they flip the packets 90 degrees they can create a Möbius loop, causing the hackers to enter a self-inflicted death spiral. It's easy; I don't know why browsers don't already do this.

Something about the way security certificates work, are stored, and are handled on users' PCs is completely wrong, or typical programmers-never-thought-this-out BS, for things like this to be so simple to implement.

I've got to wonder, did the engineers who actually wrote all this software know what they were doing? Were they aware that they were damaging the protection offered by SSL and just didn't care? Or was it just simple ignorance? I'd like to think it was the latter, but it's difficult to believe that people smart enough to subvert the certificate trust system would also not be smart enough to recognise the consequences of doing so.

When I was at university my CS course included an entire semester-long module on ethics. At the time I considered it a waste of time because I'm ethical anyway, but now I'm beginning to realise that maybe I'm not the rule but the exception.

My guess is that their thinking never went beyond, "Hey, look! I figgered out how to jam more ads in people's faces, even when we're blocked!" and called it good.

That's the attitude I'd expect from the management team but like I said, an engineer who knows how to do this should be bright enough to know why doing it is a bad idea and ethical enough to not do it.

I don't think there's anything particularly wrong with the way certificates are stored and handled on systems. One could argue that the whole system of relying on certain entities that simply authenticate other entities that pay enough money, and thus claim them to be "trusted," is flawed, but that's neither here nor there atm. I don't even know if I would attribute these kinds of things to malice, since most devs aren't security researchers and it's very easy to just think that Komodia's folks know their shit, so there's no need to delve deeper into their implementations. And even if they did delve into it, I wouldn't assume most devs would understand the ramifications of all the computers using the exact same root certificate with a weakly protected private key.

Now, as for Komodia themselves... If they have, indeed, worked with some actual CAs to develop the toolkit, they totally should realize the implications of weak key protection and of using identical certificates everywhere. Ignorance of the possible security issues is likely not the culprit there.

I know that mentioning Linux on Ars is a sin, but should I assume that none of the exploits are included in any Linux programs?

I looked at Komodia's website and it seems all their products are solely for Windows systems. That doesn't mean there aren't similar exploits for Linux, but it does at least look like none of the exploits mentioned here would work on it.

There should be no way on my system for piece of software C to insert itself into the certificate system for certificates for sites A and B in such a way that the utility of the certificate system is invalidated.

It's NOBODY'S business what's in an SSL tunnel other than the user's and the content creator's. If AV modules need access to the payload, there should be a browser plugin API rather than undermining the transport and X.509 layers.

Why does the browser even LET it get undermined in the first place? Shouldn't there be some kind of check or assurance that it hasn't been undermined, and if it is the entire browser won't even try and make an HTTPS connection in the first place?

What is the 2#$@#@%@#$%@# point of even having SSL in a browser if any other piece of software on the PC can defeat it trivially?

When will Software Engineers start acting like actual Engineers and stop these crappy half-assed solutions to things that just put a ton of users at great security risk?

readers with either Lavasoft Ad-aware Web Companion or the stand-alone version of PrivDog should err on the side of caution and uninstall both the app and the underlying root certificate as soon as possible.

With no link to instructions on how to delete root certificates? With Ars' current broad popularity, it's an oversight to think that all readers will know how to delete certificates.

How? The browser needs some kind of mechanism to determine what can and can't be trusted. And whatever that mechanism is, there's simply no magical way to definitively protect it from manipulation if someone has access to your computer. You can throw up some roadblocks and make it harder, but there's no way to prevent it.

The main problem is that these vendors are doing things the wrong way. It's easy for them to inject themselves into the connection itself, as it gives them predictable, raw access to everything. But then they are bypassing things like validity and revocation checking.

If some company wants a product to block ads in the name of privacy, they should do what AdBlock does, which is tackle the problem after the data reaches the browser, not before. That's harder to do since each browser vendor has different APIs for addons, so they instead go with the sledgehammer approach and smash up security in the process.

Now, if these software vendors were required to also make sure they don't compromise the integrity of SSL, then it becomes easier to do it via a browser addon and let the browser handle all the UI and complexity of trust. But there's no such requirement (and people haven't noticed until now), so they take the screw-your-customer-over shortcut.
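
The point about bypassed validity checking is concrete: once a local proxy terminates TLS, the browser only ever sees the proxy's own certificate, and verification of the real upstream server is whatever the proxy bothers to implement. In Python's `ssl` module, the difference between a careful and a careless client comes down to two settings. This is a sketch of the general pattern, not any vendor's actual code:

```python
import ssl

# What a browser (or any careful TLS client) does by default:
# require a valid chain to a trusted root AND a matching hostname.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname

# What a sloppy interception proxy effectively does on its upstream
# connection to the real server: accept ANY certificate, forged or not.
# (check_hostname must be disabled before verify_mode can be relaxed.)
sloppy = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
sloppy.check_hostname = False
sloppy.verify_mode = ssl.CERT_NONE
```

A proxy that uses the second configuration on its upstream leg will happily relay a forged certificate to the user as if it were genuine.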

Then the requirements for the software implementation of SSL in my browser and the requirements for the worldwide certificate system that backs up the use of SSL were not properly laid out, AND/OR the requirements are correctly written and the current implementation fails to meet them.

So somewhere a bunch of Software Engineers need to go sit down and DO BETTER.

I am so tired of the low quality of software in general. If it compiles, ship it, seems to be the mantra. No one actually cares if software does what is advertised on the tin, as long as everyone gets paid and Apple/Google/Microsoft/Red Hat/Canonical/Mozilla Foundation/etc make their millions and millions of dollars.

Between the fact that it is apparently trivial for *software on my PC that is NOT MY BROWSER to implement a man-in-the-middle attack* and the fact that *hardware man-in-the-middle boxes are readily available* (http://www.wired.com/2010/03/packet-forensics/) - that green bar that appears on my browser saying "this site is secure"? That bar is totally, completely, and utterly useless. It is a fake, a scam, a load of BS.

Edit: I tell my parents, my younger sisters, all my less-tech-savvy family and friends to watch for that green bar (among other things, like never clicking on any links in emails asking them to log into accounts, etc.) to know that they are "safe" on the internet. The fact that (a) browsers will just allow other software to add a Root CA without raising holy hell to where any reasonable user will end up saying "no" and not letting it happen, AND (b) a software proxy can be added into the OS to redirect HTTPS connections into/out of a browser to another piece of software before they reach the network adapter, without the OS somehow raising holy hell that this is happening: the fact that EITHER OF THOSE TWO THINGS CAN HAPPEN means that all of the focus on "browser security" and "operating system security" since the days of XP being worm-infected within minutes of plugging into my school's campus network in 2003 has been complete self-serving BS covering up the fact that the people designing these systems JUST DON'T THINK.

Even worse to me than broken security is the illusion of fake security, which is what that green bar represents unless issues (a) and (b) are both resolved.

It's easy to say that SSL has been badly designed, but every software developer knows that designing security is very, very hard. I really don't see how this kind of problem could have been avoided:

- You need some kind of chain-of-trust mechanism, because the vast number of servers that support SSL makes it infeasible for the browser to know the full list of all valid certificates.
- You need some way for a system administrator to add their own trusted certificate to the top of the chain, so companies can secure their own intranets.
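
Those two requirements can be modeled in a few lines. This toy Python walk follows issuer links until it reaches a trusted anchor; all names are illustrative, and real validation additionally checks cryptographic signatures, expiry dates, hostnames, and revocation:

```python
# Toy model of the chain-of-trust walk described above; all names are
# illustrative. Each entry maps a certificate subject to its issuer.
CERTS = {
    "www.example.com": "Example Intermediate CA",
    "Example Intermediate CA": "Example Root CA",
    "intranet.corp.local": "Corp Private Root",  # admin-added root use case
}

# The browser/OS only has to know the short list of trust anchors.
TRUSTED_ROOTS = {"Example Root CA", "Corp Private Root"}

def chain_is_trusted(subject: str, max_depth: int = 5) -> bool:
    """Follow issuer links upward until a trusted root is reached."""
    for _ in range(max_depth):
        issuer = CERTS.get(subject)
        if issuer is None:
            return False          # broken chain: issuer unknown
        if issuer in TRUSTED_ROOTS:
            return True           # anchored in a trusted root
        subject = issuer          # climb one level up the chain
    return False

print(chain_is_trusted("www.example.com"))      # True
print(chain_is_trusted("intranet.corp.local"))  # True
print(chain_is_trusted("evil.example"))         # False
```

The second requirement is exactly the hook the interception products abuse: an installer with admin rights can add its own anchor to the trusted set, and every chain it signs then validates.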

If you make assumptions like, "people like me are ethical, other people are bad," you still need to study ethics.

Data comes into a network stack via frames that identify where the data came from (IP and MAC addresses). Any frame that doesn't come from the IP address that requested a secure HTTPS connection should just be rejected outright by the browser (when rendering a HTTPS session) and let the user know there is a problem they need to address, not hide the problem/injection!

So you think they won't be able to fake or keep the original IP when modifying these packets? That's like relying solely on the written return address on snail mail for validating the sender; anyone can write anyone else's address there instead of their own.

And MAC addresses are only usable until you get to your router; anything beyond the router "gets" the MAC address of the router.

RE: a) The browser trusts the certificates that the OS says can be trusted. It's not the browser's responsibility to start second-guessing that trust (other than maybe checking the CRL). This is also why you don't want to trust just any cert.
b) Again, having that software installed implies you trust it. Trust that the software will not do bad stuff. But the OS cannot decide this for you. Also there are legitimate reasons for software to do what you described; take Fiddler for example.

Superfish 'purports' to only be a trojan doing image recognition on your content.

GIVEN that this specific version of MITM HTTPS redirection has been occurring since Komodia COMMERCIALIZED it in 2009**, and Komodia has now been found in over a dozen apps, and NO ONE CAUGHT/REPORTED THE IP REDIRECTION till now, it's safe to say... the public is vulnerable to IP redirection and surreptitious monitoring by commercial programs whose purpose is to do so.

The problem is the IP redirect going unnoticed by the security community, which also failed to comprehend the client-side SSL problem. Obviously, no one was packet-sniffing an affected machine for IP redirects? And if IP redirects were intentionally hidden and the offending code encrypted... without reverse engineering every app that inserts itself into the SSL stream, there is no assurance?

From past experience, there are developers who don't look beyond the challenge of making something work. I think there's a feeling that others won't do it simply because they aren't clever enough. There's also the temptation to be able to say, "hey, look at this cool thing we're doing that no one else can do".

It's more tempting (and commonplace) than you might think – consider the following hacks:
1) Using undocumented APIs
2) Patching hooks into processes that don't have an API in order to modify/override their behaviour
3) Jailbreaking devices

My job often involves cleaning up messes that other developers have left behind, and nobody wins when you go down this path. The hack eventually breaks or (as in this case) leaves you wide open to serious failures. It's surprisingly difficult to get the message through because some significant party (possibly everyone involved) is losing a feature they valued highly. Nobody warms to the message that, "this was only possible in the past because we did a Bad Thing and we can't keep doing it". Although it's right, ethical and ultimately practical, you won't be popular for voicing this simple truth.

As widespread and simple as this type of vulnerability is, it's astonishing that it took this long for people to pay attention to it.

Not really, if you think about it. Look at, for example, Heartbleed. Or GoToFail. This isn't something that typically comes up in the conversation. And it should be.

Whilst I know saying "this is different" is often a fool's game: those are different beasts that require you to at the very least try a specific kind of attack on a system, if not have access to the actual source code. Something you might expect security researchers - i.e., for the most part, "other people" - to be doing, but not an average, if careful, user.

This was found by someone using SSL pretty much as intended.

That said, you're very right that problems seem to take a very long time to find.