"DressCode" poses a major risk, because it opens a direct connection to infected phones.


In 2016, researchers uncovered a botnet that turned infected Android phones into covert listening posts that could siphon sensitive data out of protected networks. Google at the time said it removed the 400 Google Play apps that installed the malicious botnet code and took other, unspecified "necessary actions" to protect infected users.


Now, roughly 16 months later, a hacker has provided evidence that the so-called DressCode botnet continues to flourish and may currently enslave as many as four million devices. The infections pose a significant risk because they cause phones to use the SOCKS protocol to open a direct connection to attacker servers. Attackers can then tunnel into the home or corporate networks to which the phones are connected in an attempt to steal router passwords and probe connected computers for vulnerabilities or unsecured data.

Even worse, a programming interface that the attacker's command and control server uses to establish the connection is unencrypted and requires no authentication, a weakness that allows other attackers to independently abuse the infected phones.

"Since the device actively opens the connection to the C2 server, the connection will usually pass firewalls such as those found in home and SMB routers," Christoph Hebeisen, a researcher at mobile security firm Lookout, said after reviewing the evidence. Hebeisen continued:

Once the connection is open, whoever controls the other end of it can now tunnel through the mobile device into the network to which the device is currently connected. Given the unprotected API [the hacker] found, it may well be possible for anybody with that information to access devices and services that are supposed to be limited to such private networks if a device with [malicious apps] on it is inside the network. Imagine a user using a device running one of these apps on the corporate Wi-Fi of their employer. The attacker might now have direct access to any resources that are usually protected by a firewall or an IPS (intrusion prevention system).
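Hebeisen's point about outbound connections defeating firewalls can be sketched in a few lines. The following is a hypothetical, localhost-only illustration, not DressCode's actual protocol; the commands, replies, and addresses are invented for the demo. The key is that the "device" dials out, so a firewall that only blocks inbound connections never sees anything unusual:

```python
import socket
import threading

def attacker_server(listener, results):
    """'C2' side: accept the device's outbound connection, then use it."""
    conn, _ = listener.accept()
    conn.sendall(b"PROBE 10.0.0.1:80")      # command travels attacker -> device
    results.append(conn.recv(1024))          # reply travels device -> attacker
    conn.close()

def infected_device(port):
    """'Device' side: dial OUT to the C2, then obey what comes back."""
    sock = socket.create_connection(("127.0.0.1", port))
    command = sock.recv(1024)                # firewall saw only our outbound dial
    sock.sendall(b"RESULT for " + command)   # pretend we probed the internal target
    sock.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

results = []
t = threading.Thread(target=attacker_server, args=(listener, results))
t.start()
infected_device(port)
t.join()
listener.close()
print(results[0])  # b'RESULT for PROBE 10.0.0.1:80'
```

In the real botnet the relay speaks SOCKS, so the attacker can forward arbitrary TCP traffic through the phone rather than a single scripted exchange.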

Evidence of the still-thriving botnet raises important questions about the effectiveness of Google's incident responses to reports of malicious Android apps that wrangle phones into botnets. The evidence—which was provided by someone who claimed to have thoroughly hacked the C2 server and a private GitHub account that hosted C2 source code—suggests that code hidden deep inside the malicious titles continues to run on a significant number of devices despite repeated private notifications to Google from security researchers. It's not clear if Google remotely removed the DressCode and Sockbot apps from infected phones and attackers managed to compromise a new set of devices or if Google allowed phones to remain infected.

The evidence also demonstrates a failure to dismantle an infrastructure that researchers documented more than 16 months ago and that the hacker says has been in operation for five years. A common industry practice is for security companies or affected software companies to seize control of Internet domains and servers used to run botnets in a process known as sinkholing. It's not clear what steps, if any, Google took to take down DressCode. The C2 server and two public APIs remained active at the time this post went live.

In an email, a Google spokesman wrote: "We've protected our users from DressCode and its variants since 2016. We are constantly monitoring this malware family, and will continue to take the appropriate actions to help secure Android users." The statement didn't address questions about whether Google was working to sinkhole the C2.

5,000 headless browsers

The hacker said the purpose of the botnet is to generate fraudulent ad revenue by causing the infected phones to collectively access thousands of ads every second. Here's how it works: an attacker-controlled server runs huge numbers of headless browsers that click on webpages containing ads that pay commissions for referrals. To prevent advertisers from detecting the fake traffic, the server uses the SOCKS proxies to route traffic through the compromised devices, which are rotated every five seconds.
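The rotation scheme described above amounts to cycling through a pool of device proxies. A minimal standard-library sketch of that bookkeeping follows; the proxy addresses are placeholders from a reserved documentation range, and real fraud traffic would be sent through each device's SOCKS endpoint with a SOCKS-capable HTTP client:

```python
import itertools

# Placeholder proxy pool; 203.0.113.0/24 is a reserved documentation range.
DEVICE_PROXIES = ["203.0.113.10:1080", "203.0.113.11:1080", "203.0.113.12:1080"]

def rotating_proxies(proxies):
    """Cycle through the infected-device SOCKS proxies forever; the fraud
    server would route each batch of ad requests through the next one."""
    yield from itertools.cycle(proxies)

pool = rotating_proxies(DEVICE_PROXIES)
batch = [next(pool) for _ in range(5)]
print(batch)  # wraps back to the first proxy after the third entry
```

Rotating the exit point every few seconds means no single device IP generates enough traffic to trip an advertiser's fraud filters.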

The hacker said his compromise of the C2 and his subsequent theft of the underlying source code showed that DressCode relies on five servers, each running 1,000 threads. As a result, it uses 5,000 proxied devices at any given moment, and each for only five seconds, before refreshing the pool with 5,000 new infected devices.
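Those figures are easy to sanity-check. Assuming the numbers the hacker reported (five servers, 1,000 threads each, five-second rotation, and roughly four million bots), the implied churn works out as follows:

```python
# Back-of-the-envelope check of the reported botnet numbers.
SERVERS = 5
THREADS_PER_SERVER = 1_000
ROTATION_SECONDS = 5
TOTAL_DEVICES = 4_000_000          # the hacker's rough estimate

concurrent = SERVERS * THREADS_PER_SERVER            # devices proxying at once
devices_per_second = concurrent / ROTATION_SECONDS   # pool refresh rate
full_cycle_minutes = TOTAL_DEVICES / devices_per_second / 60

print(concurrent)                    # 5000
print(devices_per_second)            # 1000.0
print(round(full_cycle_minutes, 1))  # 66.7 -- about an hour to cycle every bot
```

At that rate, each infected phone would carry fraud traffic for only five seconds roughly once an hour, which helps the traffic blend in with a device's normal activity.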

After spending months scouring source code and other private data used in the botnet, the hacker estimated the botnet has—or at least at one point had—about four million devices reporting to it. The hacker, citing detailed performance charts of more than 300 Android apps used to infect phones, also estimated the botnet has generated $20 million in fraudulent ad revenues in the past few years. He said the programming interfaces and the C2 source code show that one or more people with control over the adecosystems.com domain are actively maintaining the botnet.

Lookout's Hebeisen said he was able to confirm the hacker's claims that the C2 server is the one used by both DressCode and Sockbot and that it calls at least two public programming interfaces, including the one that establishes a SOCKS connection on infected devices. The APIs, Hebeisen confirmed, are hosted on servers belonging to adecosystems.com, a domain used by a provider of mobile services. He also confirmed that the second interface is used to provide user agents for use in click fraud. (Ars is declining to link to the APIs to prevent further abuse of them.) He said he also saw a "strong correlation" between the adecosystems.com servers and servers referenced in DressCode and Sockbot code. Because the Lookout researcher didn't access private portions of the servers, he was unable to confirm that the SOCKS proxy was tied to the user agent interface, to specify the number of infected devices reporting to the C2, or to determine the amount of revenue the botnet has generated over the years.

Officials with Adeco Systems said that their company has no connection to the botnet and that they're investigating how their servers were used to host the APIs.

By using a browser to visit the adecosystems.com links that hosted the APIs, it was possible to get snapshots of infected devices that included their IP address and geographic location. Refreshing the link would quickly provide the same details for a different compromised phone. Because the data isn't protected by a password, it's likely that anyone who knows the links can establish their own SOCKS connection with the devices, Hebeisen said.
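An unauthenticated JSON endpoint like the one described can be polled by anyone who knows the URL. The sketch below is hypothetical: the URL, field names, and sample payload are invented for illustration, and the real adecosystems.com endpoints (which Ars is withholding) may differ:

```python
import json

def parse_device_snapshot(payload: bytes) -> dict:
    """Pull the externally visible details out of one API response."""
    record = json.loads(payload)
    return {"ip": record.get("ip"), "country": record.get("country")}

# Polling loop (not run here): each request returns a different phone.
# import urllib.request
# while True:
#     with urllib.request.urlopen("https://api.example.invalid/device") as r:
#         print(parse_device_snapshot(r.read()))

sample = b'{"ip": "198.51.100.7", "country": "US", "id": "abc123"}'
print(parse_device_snapshot(sample))  # {'ip': '198.51.100.7', 'country': 'US'}
```

Because no credentials gate the endpoint, the only secret protecting the botnet's device pool is the URL itself.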

[Screenshots: API 1]

[Screenshots: API 2]

The hacker also accessed a database containing the unique hardware identifier, carrier, MAC address, and device ID for each infected device. He provided a single screenshot that appeared consistent with what he had described.

Many of the malicious apps remain available in third-party marketplaces such as APKPure. Neither Hebeisen nor the hacker said they have any evidence Google Play has hosted DressCode or Sockbot apps in recent months.

While Google has said it has the ability to remotely uninstall malicious apps from Android devices, some critics have argued that this level of control, particularly without end-user consent ahead of time, oversteps a red line. Google may therefore be reluctant to use it. Even if using the remote capability would be heavy-handed, the significant threat posed by the ease of establishing SOCKS connections with potentially millions of devices is arguably precisely the kind of outlier case that would justify Google using the tool. If possible, Google should additionally take steps to take down the C2 server and the adecosystems.com APIs it relies on.

At the moment, there is no known list of apps that install the DressCode and Sockbot code. People who think their phone may be infected should install an antivirus app from Check Point, Symantec, or Lookout and scan for malicious apps. (Each can initially be used for free.) To prevent devices from being compromised in the first place, people should be highly selective about the apps they install on their Android devices. They should download apps only from Play and even then only after doing research on both the app and the developer.

Promoted Comments

I'm personally not cool with the quantity and frequency of data Google harvests from Android but they have it and know which devices have the malicious apps installed. Why wouldn't they use this knowledge to actually benefit the end user and remove it remotely?

36 Reader Comments

Putting Android in the news for bad security and pissing off advertisers. Google must love these guys! Kudos to DressCode's masters for doing something more creative than mining crypto on the infected phones though ;-)


Hardly more creative. Bots were doing this and sending spam long before crypto mining was a thing.

I'm personally not cool with the quantity and frequency of data Google harvests from Android but they have it and know which devices have the malicious apps installed. Why wouldn't they use this knowledge to actually benefit the end user and remove it remotely?

I have the feeling that Google auto-removes the known bad apps from people's phones. Is this correct? Are the only people at risk of one of these viruses people who root their phones?

edit: the article says this: " It's not clear if Google remotely removed the DressCode and Sockbot apps from infected phones and attackers managed to compromise a new set of devices or if Google allowed phones to remain infected."

I wouldn't want to run antivirus on my phone as it is sure to burn up your battery's run time while slowing your phone down. Your best bet is to be picky about what you install.

Edit: Apparently there are good phone antiviruses as per sUrfNmADNESS' comment.

I'm personally not cool with the quantity and frequency of data Google harvests from Android but they have it and know which devices have the malicious apps installed. Why wouldn't they use this knowledge to actually benefit the end user and remove it remotely?

Because as powerful as Google is, the fuckin telcos are the ones who really own your device.

"While Google has said it has the ability to remotely uninstall malicious apps from Android devices, some critics have argued that this level of control, particularly without end-user consent ahead of time, oversteps a red line. Google may therefore be reluctant to use it."

While I was testing the app designed to test whether bit-flipping RAM could enable root on several devices (it worked on none of them, btw), each one warned me that I had a malicious app installed, stated that via a persistent notification, and egged me to uninstall it.

Now if they already know this app, they should do the same thing, although there is no indication it detects by code; it's probably more by app fingerprint, which is probably useless.

I'm personally not cool with the quantity and frequency of data Google harvests from Android but they have it and know which devices have the malicious apps installed. Why wouldn't they use this knowledge to actually benefit the end user and remove it remotely?

"At the moment, there is no known list of apps that install the DressCode and Sockbot code. "


I wouldn't want to run antivirus on my phone as it is sure to burn up your battery's run time while slowing your phone down. Your best bet is to be picky about what you install.

While this might have been the rule in Android 1 and 2.0, it has long evolved along with the batteries in the phones. Lookout and others run quite well, pulling very little power.

Better power off your phone. Apple iOS has had and still has its own security issues.

There has never been a botnet made up of iOS devices. Nor any kind of malware like this installed on millions of devices (or even tens of thousands of iOS devices). This whitewashing where people try to claim iOS is just as bad as Android gets a little thin. In this case the apps were in the Google Play store for a significant time so it's not even as clear cut as "Blame the carriers" or even "Blame the OEMs".

I'm personally not cool with the quantity and frequency of data Google harvests from Android but they have it and know which devices have the malicious apps installed. Why wouldn't they use this knowledge to actually benefit the end user and remove it remotely?

The reason why is this:

Quote:

While Google has said it has the ability to remotely uninstall malicious apps from Android devices, some critics have argued that this level of control, particularly without end-user consent ahead of time, oversteps a red line.

Microsoft uses innocuous telemetry data to improve its operating system, and look at the conspiracy theories and privacy scaremongering that have occurred. Imagine Google doing the above, with the accusations of 'big brother' and 'monitoring what applications you're using' rhetoric that would fill up Twitter, Facebook, and Reddit, and some family member who is 'good with computers' going around telling friends and family not to trust Android and/or to disable the feature via some hack he found off a random APK website. Long story short, companies avoid bad publicity, and this is why we can't have nice things. That being said, keep in mind the following:

Quote:

Neither Hebeisen nor the hacker said they have any evidence Google Play has hosted DressCode or Sockbot apps in recent months.

So for the vast majority it is once again a non-issue. The problem sits with people who have knowledge - just enough to be dangerous to themselves and others but not enough knowledge to realise they're doing something that is irresponsible.


So if I understand your argument: it's OK for Google to collect as much telemetry as Microsoft (if not more) as long as it's not obvious to the public, but doing something beneficial that users would notice would make them look bad?

(Ignoring the fact that Windows 10 doesn't seem to have fewer issues than previous versions, which only had minimal, opt-in telemetry during actual problems.)



Now imagine if Google upon a .1 or upgrade "checked" the box saying it was ok to do location data, OS data, and others *without* your permission.

AND/OR they ignored your privacy settings and sent the OS data anyhow.


Better power off your phone. Apple iOS has had and still has its own security issues.

There has never been a botnet made up of iOS devices. Nor any kind of malware like this installed on millions of devices (or even tens of thousands of iOS devices). This whitewashing where people try to claim iOS is just as bad as Android gets a little thin. In this case the apps were in the Google Play store for a significant time so it's not even as clear cut as "Blame the carriers" or even "Blame the OEMs".

I'm not sure you read my post. Spear phishing a few individuals is a whole different class of exploit than the kind of drive-by botnet factories this article is talking about.

You asked for "a security incident," you got one. One that was in use for years, which required three vulnerabilities (in the kernel and browser) to defeat iOS protections. And we don't really know how many were infected. As already stated, malware writers don't waste iOS exploits on peons.

Nothing in this article says Android users got this via a "drive by." They installed apps that had this code in them.

I stopped overly stressing after Equifax got hacked and leaked every possible bit of info needed to steal identities of every adult in the US. Like what can you do now? It's over. They've won.

About sums up how my wife and I feel. We did, though, freeze our credit after Equifax hit. We just decided enough is enough. Freeze it. We'll do temp unfreezes when we need to make a large purchase and then re-freeze. Worth the cost.


You asked for "a security incident," you got one. One that was in use for years, which required three vulnerabilities (in the kernel and browser) to defeat iOS protections. And we don't really know how many were infected. As already stated, malware writers don't waste iOS exploits on peons.

Nothing in this article says Android users got this via a "drive by." They installed apps that had this code in them.

Actually he said, "Just point out a single security incident LIKE THIS on top of iOS. " (My caps.)

Your example, while proving iOS is not immune to security issues, is nothing like the known scale of this Android issue, even if you attempt to paint it as though it is by use of "we don't really know how many were infected".


If you allow people to install whatever they want, this is what you get. While it is true that Google can do a better job vetting apps, the only way to prevent this is to take user freedom away. It is true that you can side-load on iOS, but Apple makes it painful and imposes so many limitations that it's useless except as a bullet point. As stated already, this botnet was created by users installing bad apps, not by a "drive by." I don't need you to argue semantics with me.


These weren't side-loaded apps, they were in Google's store. If it was about side-loaded apps or if it were in some third party app store then you might have a point, but this is a failure on Google's part end-to-end. They failed to catch it in the Play Store, their on device security failed to prevent it from gaining privileges, and their after-incident response to this is a complete failure.


Now imagine if Google upon a .1 or upgrade "checked" the box saying it was ok to do location data, OS data, and others *without* your permission.

AND/OR they ignored your privacy settings and sent the OS data anyhow.

The thrust of your argument is good and correct: Android will retain your settings when getting an OS upgrade, but...

Android still sends some telemetry regardless of settings, plus any carrier/manufacturer app telemetry added on top of AOSP. It is quite an effort to prevent this altogether.


These weren't side-loaded apps, they were in Google's store. If it was about side-loaded apps or if it were in some third party app store then you might have a point, but this is a failure on Google's part end-to-end. They failed to catch it in the Play Store, their on device security failed to prevent it from gaining privileges, and their after-incident response to this is a complete failure.

Yeah, did you read that part in my last post about Google not doing a good enough job vetting apps? Network stuff isn't something that needs gaining privileges - do you know how Android works?


Your ability to nit-pick and argue about minor/ side points is amazing.
