In May of 2017, the WannaCry attack (a file-encrypting ransomware knock-off attributed by the US to North Korea) raised the urgency of patching Windows vulnerabilities that had been exposed by a leak of National Security Agency exploits. WannaCry used an exploit called EternalBlue, which abused Windows' Server Message Block (SMB) network file-sharing protocol to move from machine to machine, wreaking havoc as it spread quickly across affected networks.

The core exploit used by WannaCry has been leveraged by other malware, including NotPetya, which hit companies worldwide a month later, and Adylkuzz, a cryptocurrency-mining worm that began to spread even before WannaCry. Other cryptocurrency-mining worms followed, including WannaMine, a fileless, all-PowerShell-based, Monero-mining attack that threat researchers have been tracking since at least last October. The addresses of the servers behind the attack were widely published, and some of them went away.

But a year later, WannaMine is still spreading. Amit Serper, head of security research at Cybereason, has just published research into a recent attack on one of his company's clients—a Fortune 500 company that Serper told Ars was heavily hit by WannaMine. The malware affected "dozens of domain controllers and about 2,000 endpoints," Serper said, after gaining access through an unpatched SMB server.

WannaMine is "fileless," sort of. It uses PowerShell scripts pulled from remote servers to establish a foothold on computers and run all of its components. But WannaMine isn't purely fileless by any means—the PowerShell script that establishes its foothold downloads a huge file full of base64-encoded text. "In fact, the downloaded payload is so large (thanks to all of the obfuscation) that it makes most of the text editors hang and it’s quite impossible to load the entire base64’d string into an interactive ipython session," Serper wrote in his post.

Inside that file is more PowerShell code, including a PowerShell version of the Mimikatz credential-stealing tool copied directly from a GitHub repository. There's also a huge binary blob—a Windows .NET compiler—which the malware uses to compile a dynamic-link library version of the PingCastle network scanning tool for locating potentially vulnerable targets elsewhere on the network. The harvested credentials and network data are then used to attempt to connect to other computers and install more copies of the malware. The DLL is given a random name, so it's different on every infected system.

WannaMine's PowerShell code does a number of things to make itself at home. It uses Windows Management Instrumentation (WMI) to detect whether it has landed on a 32-bit or 64-bit system and picks the matching version of its payload to download. It registers itself as a scheduled task to ensure it persists across reboots, and it changes the infected computer's power management settings so the machine doesn't go to sleep and interrupt its mining. The code also shuts down any process using TCP ports associated with cryptocurrency-mining pools (3333, 5555, and 7777), and then it runs PowerShell-based miners of its own, connecting to mining pools on port 14444.
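
None of this behavior is exotic, and the port numbers alone make a handy hunting indicator. Below is a minimal, hypothetical detection sketch using the third-party psutil library (my own choice of tooling, not something WannaMine or Cybereason uses); it flags processes with connections to the mining-pool ports named above, and on most systems it needs administrator rights to see every process.

```python
import psutil  # third-party: pip install psutil

# Ports the article associates with mining pools: 3333/5555/7777 (processes on
# these get killed by the malware) and 14444 (used by its own miners).
MINING_PORTS = {3333, 5555, 7777, 14444}

def mining_pool_connections():
    """Yield (pid, process name, remote address) for connections to mining-pool ports."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.raddr.port in MINING_PORTS:
            try:
                name = psutil.Process(conn.pid).name() if conn.pid else "?"
            except psutil.NoSuchProcess:
                name = "?"
            yield conn.pid, name, f"{conn.raddr.ip}:{conn.raddr.port}"

if __name__ == "__main__":
    for pid, name, remote in mining_pool_connections():
        print(f"PID {pid} ({name}) -> {remote}")
```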

Perhaps the most aggravating thing about WannaMine's continued spread is that the malware still uses some of the same servers that were originally reported to be associated with it. Serper reached out to all of the hosting providers he could identify from the addresses and got no response. The command-and-control servers are:

104.148.42.153 and 107.179.67.243, both hosted by the DDoS mitigation hosting company Global Frag Servers in Los Angeles (though the company also appears to be a Chinese network operator).

172.247.116.8 and 172.247.166.87, both hosted by CloudRadium L.L.C., a company with a disconnected phone number and a Los Angeles address shared with a number of other hosting and co-location service providers.

45.199.154.141, hosted in the US by CloudInnovation, which claims to be based in South Africa but gives a Seychelles address in its network registration.

None of these organizations responded to requests for comment from Ars.
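
Since these addresses have been public for months and are apparently still in use, defenders can at least watch for them in their own telemetry. Below is a minimal sketch that checks a firewall or proxy log for the five addresses listed above; the log path and format are placeholders, not anything from Cybereason's report.

```python
# Flag log lines that mention any of the WannaMine command-and-control
# addresses listed above. "traffic.log" is a placeholder path.
C2_ADDRESSES = {
    "104.148.42.153",
    "107.179.67.243",
    "172.247.116.8",
    "172.247.166.87",
    "45.199.154.141",
}

with open("traffic.log") as log:
    for lineno, line in enumerate(log, 1):
        if any(ip in line for ip in C2_ADDRESSES):
            print(f"line {lineno}: possible WannaMine C2 traffic: {line.rstrip()}")
```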

Sean Gallagher
Sean is Ars Technica's IT and National Security Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland. Email: sean.gallagher@arstechnica.com | Twitter: @thepacketrat

When will they start patching their darned systems? I mean, really, how could they possibly not know what to do by now?! More importantly, when will we start seeing legal consequences for this sort of negligence?

And how many years do we have to wait before it's appropriate to just blackhole those servers and block all network communication in and out of them?

It's just mind-boggling that people haven't applied the patch for this... heck, Microsoft even made patches for Server 2003 and XP!

Selling people on the "yes, we do have to poke the legacy box that's as crufty as it is vital and nobody properly understands just to patch an issue you have convinced yourself is 'theoretical' or 'minimal real-world risk'" position is a deeply unpleasant, and by no means universally successful, endeavor.

It's my understanding that Microsoft is rather zealous in using the courts in seizing control of domains that look like they're about to be used to launch phishing attacks. Couldn't they produce a trove of data to justify why they should be able to seize the US-based servers that are being contacted by new instances of this infection?

Even if they haven't convinced themselves that it's a non-issue, there's still a tendency for the non-technical people making those decisions to look at them backwards. The first question is almost always "who's going to pay to fix this?", and unless the answer is "someone else" it often won't get done.

Add to that that I.T. staff are often the first culled when money needs to be saved, and it really is no wonder we see the monumental data & security breaches we do.

Given the downright recklessness of so many in charge I am amazed anything works anywhere, ever.

We patched immediately (this was early 2017, right?), and then security scanned for unpatched hosts. They contacted the owners first; anyone who didn't patch after being contacted had their machines blackhole-routed by security.

A Fortune 500 company should have at least one sysadmin who realizes how important patching this is. It's not like this was some small exploit; WannaCry was worldwide news. A different EternalBlue-based attack cost Maersk $250+ million, and Merck and FedEx $300+ million each.

Inertia can be a huge thing at larger companies. One place I worked for had a "No Linux" policy for years, because it was open source, and who would you pay for support? They eventually moved to "You can use Linux on appliances only." A co-worker joked about forming a company that installed Red Hat on Dell servers and called them appliances.

Not that I'm saying this is acceptable; it's more that I really wish I could be surprised it's happening.

They probably do have one, who has been emitting pitifully dutiful summaries of CVEs and vendor update guidance based on an ongoing comparison of available patches against what has actually been applied, with those plaintive requests either ignored or, if management wants a veneer of care and seriousness, mired in a "change management process" that operates on rigidly codified procedural hurdles and an implicit model of risk that treats the possibility of breaking something by changing it as somewhere between "catastrophic" and "existential" while more or less ignoring the risk that not changing something will let any random script kiddie break it.

Inertia can be a huge thing at larger companies. One place I worked for had a "No Linux" policy for years, because t was open source, and who do you pay for support. They eventually went to "You can use Linux on appliances only" A co-worker joked about forming a company which installed redhat on Dell servers, and calling them appliances.

Not that I'm saying that this is acceptable, more like I really wish I could be surprised it's happening.

The irony is that Linux on normal computers gives you a number of options to pay for support, while appliances have a nasty habit of being where abused and neglected 2.6-era kernels get institutionalized.

"Serper reached out to all of the hosting providers he could identify from the addresses and got no response. "

Uh, hello and welcome to the club. Most people who have operated their own web-facing servers, even just applications such as online forum software, have experienced the dead silence on the other end when trying to notify hosting companies and pipeline operators that they have a problem on their assets.

My last one was a medical clinic whose computerized phone software was hijacked to send out fake bank phishing campaigns. I set up a dummy machine and was able to capture traffic between my dummy system and the host server. It enabled me to find the login credentials to get root access to the phone system, complete with admin file access.

No matter how I tried to notify the office personnel at the clinic, even playing some individuals copies of their own audio voicemail messages, I was hung up on, blocked on social media and threatened with legal action. I finally notified the FBI cyber crimes division, along with all documentation and threats I had received.

The list is almost endless of the number of hacking and intrusion attempts I tracked down when my leased server was online. Hosting companies, *if* they even bothered to read the messages and respond, took weeks and sometimes months to shut down throwaway accounts set up just for hosting exploits. But this is not new.

When AOL's chat system hit its stride many many moons ago, phishing was rampant on the system. When a fake "AOL Administration needs your account name and password" message would show up in chat, the mods were dead from the neck up on booting the offenders. Sometimes the trolls would park themselves in chat harvesting names and passwords for hours before a mod or support would boot them. So Serper's experience is not unique and definitely not new.

Heh. Yes. 2.6 kernel, ancient Apache build, and a vulnerable SSH server you can't turn off. Also, the annual support agreement cost goes up 3x year over year. Accounting stopped paying that bill, so now you've got no support or updates.

Yeah... if there is so much bureaucracy that you can't patch a critical vulnerability over a year after it came out, it might be time to find a new job... which could explain the quality of their system admins...

I think the real crime is CloudInnovation's website. Misaligned elements, a random power button icon that does nothing, images that aren't unified in design, and some other layout problems.

Oh, they profit off of the criminal activity of others you say? That sucks too.

Ditto on CloudInnovation. They are one of the hosts for the Hakai malware. Very shifty.

If you are the type that reviews server logs, it is best to block the hosts serving such payloads. The ones I've seen load the malware with wget. The requests nearly always come from ISPs, which are useless to block. Note that Nginx has flagged every Hakai attempt with return code 400. I just block the hosts, since datacenters don't have eyes to browse a website.

If the malware host looks legit, I report the IP address. Say what you want about GoDaddy, they do shut down these servers when you report them. Needless to say, I wouldn't contact CloudInnovation.
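
For anyone who wants to do the same kind of triage from their own logs, here's a rough sketch that pulls out clients whose requests drew a 400 or mention wget. It assumes Python and nginx's default "combined" log format, neither of which the commenter specified, and the log path is just the usual default.

```python
import re
from collections import Counter

# Rough pass over an nginx access log in the default "combined" format:
# count client IPs whose requests returned 400 or whose request line /
# user agent mentions wget, then print the noisiest ones for manual review.
LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

hits = Counter()
with open("/var/log/nginx/access.log") as log:  # default path; adjust as needed
    for line in log:
        m = LINE.match(line)
        if not m:
            continue
        if m.group("status") == "400" or "wget" in (m.group("request") + m.group("agent")).lower():
            hits[m.group("ip")] += 1

for ip, count in hits.most_common(20):
    print(f"{ip}\t{count}")
```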

"Serper reached out to all of the hosting providers he could identify from the addresses and got no response. "

Uh, hello and welcome to the club. Most who have operated their own web facing servers, even applications such as online forum software, have experienced the dead silence on the other end when trying to notify hosting companies and pipeline operators that they have a problem on their assets.

My last one was a medical clinic whose computerized phone software was hijacked to send out fake bank phishing campaigns. I set up a dummy machine and was able to capture traffic between my dummy system and the host server. It enabled me to find the login credentials to get root access to the phone system, complete with admin file access.

No matter how I tried to notify the office personnel at the clinic, even playing some individuals copies of their own audio voicemail messages, I was hung up on, blocked on social media and threatened with legal action. I finally notified the FBI cyber crimes division, along with all documentation and threats I had received.

The list is almost endless of the number of hacking and intrusion attempts I tracked down when my leased server was online. Hosting companies, *if* they even bothered to read the messages and respond, took weeks and sometimes months to shut down throwaway accounts set up just for hosting exploits. But this is not new.

When AOL's chat system hit its stride many many moons ago, phishing was rampant on the system. When a fake "AOL Administration needs your account name and password" message would show up in chat, the mods were dead from the neck up on booting the offenders. Sometimes the trolls would park themselves in chat harvesting names and passwords for hours before a mod or support would boot them. So Serper's experience is not unique and definitely not new.

Years ago a local biomedical firm was hammering my home IP. I called them up and asked to speak to IT. No can do. The only entity the general public could talk to was personnel. You can guess the rest. Personnel informed me that their company would never do a denial of service attack. The result was I had to get a new IP address.

The vast majority of the hacking I see in server logs I just ignore. If it's from a hosting company, I block them. If I detect hacking that looks like some sort of e-commerce attack, I notify the e-commerce company. I also report to financial institutions any hacking coming from their IP space. Not that I'm concerned about the hacking itself, but rather I'm concerned some bank is compromised.

Does anyone know of a trusted source for an aggregated list of IP addresses related to security threats like this that could be used by routers or hosts files?

I'm of the opinion that IP space without eyeballs should be blocked from ports 80, 443, and probably 587. I am in the minority. So be it. AWS provides a JSON file listing its IP space. The rest you can dig up on bgp.he.net. Block the big ones first: Rackspace, GoDaddy, etc. Next get the VPS providers like Linode, DigitalOcean, Vultr, etc.

Half your web traffic is probably useless bots scraping in hopes of being the next Google.

For a list of bad actors, there are blocking lists, but you will essentially be yielding some control of your server to an outside entity. For email, it is worth using RBLs. For a web server, I don't believe things have become that bad that you need to use an outside list.
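
For what it's worth, the AWS list mentioned a couple of posts up is published as JSON at a well-known URL, so pulling the IPv4 prefixes takes only the standard library; how you feed them into a firewall or web-server deny list is up to you.

```python
import json
import urllib.request

# AWS publishes its address space as JSON; each entry in "prefixes" carries an
# IPv4 CIDR ("ip_prefix") plus region/service metadata.
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

cidrs = sorted({entry["ip_prefix"] for entry in data["prefixes"]})
print(f"{len(cidrs)} IPv4 prefixes")
for cidr in cidrs:
    print(cidr)  # feed these into whatever blocklist mechanism you use
```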

As long as corporations continue to see doing systems maintenance and patching as an expense instead of a necessity, stuff like this will keep happening.

Then there's the "all of our systems are behind firewalls, so we can rely on our layered security... they can't get to SMB" mindset. That works until someone does something stupid behind the firewall and you realize that, by not patching, you've reduced your layered security to one very faulty layer.

At this point, I’m not surprised, since most companies do not treat their IT like a serious thing, or they just do not want to pay for an extra admin or two so that the department functions like it should.

With Windows 10 zooming past Java in CVE counts and setting new records in vulnerabilities, it's really no surprise that businesses continue to get hammered by the world's most insecure OS. People are even tweeting 0-days. Makes you wonder why businesses even continue to use this OS.

IT is almost always a cost center and we're a long way from changing that mentality.

Came here to say almost the same thing. The people running the show never realize how behind the game they are, because the ideas are so elusive and abstract to them. But labor and hardware costs are one thing they do understand as a negative. It is frustrating to try to tell these idiots how bad, and how present, the holes are and to get them to do anything about it.

Edit: Come to think of it, one such team was cut by over half its employees and the budget dropped by over $1 million. And you wonder why China has the plans to all your products. Lol.

Ah! And tell me it was mining Monero, the only purpose this crypto seems to have. Even then, these attacks are nowhere close to sophisticated; it's all off-the-shelf scripts and exploits. Monero has made some of the skiddies really rich indeed, along with the cybercriminals behind its development team.

"In fact, the downloaded payload is so large (thanks to all of the obfuscation) that it makes most of the text editors hang ..."

Use vim or gVim. There's a version for pretty much every operating system in existence (including Windows), and it easily opens files even when every other text editor hangs permanently. I've used it to open 500GB+ database log files where even Notepad++ hung terminally.

Perhaps the best legal consequence at this point is to change the hacking law. If a patch to prevent it has been available for more than 12 months, and you aren't a hospital or other critical infrastructure, then hacking your systems using that vulnerability isn't criminal and doesn't open the hacker up to civil liability, even if they use it to get into critical information such as bank accounts or customer data. If they steal from you, they get to keep it.

Yes, it might be chaos for a year or two, but any company that didn't get their act together in that time would be out of business.

Since this article has been illustrated with the German miners' greeting "Glück auf," and 2018 marks the year coal mining in Germany is about to end with the closing of the last mine, I can't help but leave you with a reference to the German miners' hymn, "Das Steigerlied / Glück auf, der Steiger kommt!":
https://youtu.be/rnwg8YM56Fw
https://youtu.be/lru_dGIHLhY

In the coal mining and industrial area "Ruhrgebiet" we learned this quintessential song in primary school. As children we loved it because the miner is wearing "his leather on his ass," so using the word "ass" was now officially sanctioned. I guess most people of my age from this area know the lyrics.

Listening to it still gives me goosebumps and wet eyes; it means "home" to me, even if it's now the hymn of a competing football team. So much for today's off-topic lesson in cryptic German culture.