Exploits that abuse memcached servers threaten the stability of the Internet.


It just got much easier to wage distributed denial-of-service attacks of once-unthinkable sizes, thanks to the public release of two ready-to-run exploits that abuse poorly secured memcached servers to flood targets with record amounts of junk traffic.

As Ars reported last week, DDoSers last month started bouncing specially crafted traffic off of so-called memcached servers, which respond by bombarding a targeted third party with a malicious flood thousands of times the size of the original payload. Attackers have long used such amplification techniques to magnify the damage the computers they control can inflict. What's special about memcached-based attacks is the size of the amplification: as much as 51,000-fold, compared with about 50- to 60-fold for previously seen techniques. The attacks work by sending requests to servers that leave port 11211 open and adding spoofed packet headers that cause the responses to be sent to the target.
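For a sense of the mechanics, here is a minimal Python sketch that builds the standard memcached UDP "stats" request (the same tiny probe reflection attacks rely on) and computes an amplification ratio. The frame-header layout follows memcached's documented UDP protocol; the response size is illustrative. Nothing here spoofs anything, and pointing even this benign probe at servers you don't own is a bad idea.

```python
import struct

# memcached's UDP protocol prepends an 8-byte frame header to every
# datagram: request ID, sequence number, total datagram count, and a
# reserved field -- two bytes each, in network byte order.
def build_stats_probe(request_id: int = 1) -> bytes:
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + b"stats\r\n"

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of the reflected response size to the request that triggered it."""
    return response_bytes / request_bytes

probe = build_stats_probe()
print(len(probe))  # the whole request is 15 bytes
# A server that answers this with, say, a 750 KB response yields a
# 50,000x amplification, which is why spoofing the source address
# turns open servers into traffic cannons.
print(amplification_factor(len(probe), 750_000))
```

The asymmetry between a 15-byte request and a response measured in kilobytes or megabytes is the entire attack.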

Now, two separate exploits are available that greatly lower the bar for waging these new types of attacks. The first one, called Memcrashed, prompts a user to enter the IP address to be targeted. It then automatically uses the Shodan search engine to locate unsecured memcached servers and abuses them to flood the target.

Drawing attention

Memcrashed has a two-second delay built into the code that prevents it from being used to wage the types of record-breaking attacks seen recently in the wild. But it wouldn't be hard for someone to bypass that rate limit if she wanted to use the code to deliver malicious attacks, exploit writer Amir Khashayar Mohammadi told Ars.

"That, however, is not my intent," Mohammadi wrote in an email. "My intent is to draw more attention to the problem here. That way vendors that own these servers can take them off the Internet or at least close port 11211 or disable UDP. My intention was never to code something to be used maliciously, hence why I released the code a week after the vulnerability was spotted (a grace period in my books)."

Memcached is an open source, distributed memory-caching system designed to speed up websites and cloud networks. Security professionals have warned since last week that administrators' failure to properly secure roughly 93,000 such servers poses a threat to the stability of the entire Internet. Much of the inertia in fixing the problem stems from the comparatively small harm done to the memcached servers themselves. While the attacks can consume the bandwidth and computing resources of open servers, those effects are far milder than those visited on the targets at the receiving end.


Service providers that still permit spoofed data packets to traverse their networks also play a crucial role in allowing amplification attacks, but once again the providers aren't significantly harmed, either. Without a strong incentive to change, these practices may continue unabated for weeks or months.

Promoted Comments

So why aren't ISPs cutting the servers' internet connections until they're properly configured?

Not their problem. They have nothing to gain by doing that because they get paid either way. And they have a lot to lose if they tried: getting sued for breach of contract or inability to fulfill SLAs, or having their action seen as tacit acknowledgement that this is their problem to solve.


It is their problem when others are getting napalmed because their customer doesn't believe in security.


You aren't thinking like they are thinking. They are a business and getting paid is all that matters. If they are still getting paid then it is not their problem.


Unfortunately, true. Until someone gets sued and it costs them more than they make by doing nothing, the problem will continue.

About the only way the server owners/ISPs will care about this is if someone points these programs at their payment-processing servers.

Another way to make them care would be for someone to modify this software to spoof the source IPs of the memcached servers themselves... not that this would be a good idea, but it would solve the problem in short order.


They could still send out an e-mail or letter or something warning users who may be affected of the issue. My ISP has done this in the past; of course, it was actually for a flaw built into their own router that I had already disabled, so it was utterly redundant, but at least it means they can send out messages when there's a reasonable reason to do so.

What boggles my mind is that these types of vulnerabilities are still being found on a regular basis; who in their right mind has a server without an active firewall on every port except the handful they definitely, absolutely need to have open? Maybe I'm weird, but I feel actual anxiety at the thought of a server without a proper firewall as the barest minimum of protection.

The recent raw-sockets exploit followed a similar pattern, not of unprotected ports but of someone being foolish enough to use a bad password (or a password at all) for remote SSH access. Who does something like that, and can we have them euthanised?

At least most cloud-based services now seem to follow some of the best practices by default, i.e. your newly spun-up instance will usually only allow SSH access, and at least the services I've used set you up with certificate-based login as standard, with no password to worry about. You then have to go into some security panel and explicitly open individual ports if you need them. But these kinds of precautions have been best practice for years (decades?), so why are so many servers still vulnerable to this crap?

Nevermind warning these people or blocking their ports for them; I'd much prefer they lose the right to own or operate servers until they can prove they've learned their lesson, like we do with irresponsible car drivers.


I haven't seen a bunch of SLAs, but the ones I have seen specifically carve out a right of way for the service provider to take servers/whatever offline if they're participating in some kind of illegal act OR traffic that can harm the network.

And I've seen those clauses used to good effect. Typically after a call to the owner, unless there's an exigent problem and they have to take it down without prior notice.

You may ask: what if the offending servers support some kind of emergency service that is offline and someone dies because of it? That's on the server owner. They didn't patch or didn't police their usage and so through their own negligence caused the harm. The service provider acted in good faith and according to a clause agreed to by the server owner.

ISPs have always had this right of way with consumer users. And again, in my semi-limited experience, they've also had this right of way with respect to business users. If you know of an SLA that doesn't include that caveat, please let us know.

How the fuck did you have an unsecured server in the first place? Does that thing not have proper secure defaults?

It's not a matter of defaults. Securing it means not putting it on the internet. It doesn't have built-in authentication; it's just supposed to sit on your website's private network and hand off data as your various servers ask for it. Anyone "fixing" this problem by disabling UDP still has the problem that their server is available to the internet; whatever information is in there can be retrieved and/or changed by anyone. The big deal for others is that UDP makes it possible to send that information elsewhere, hence the amplification: add a bunch of data, then request it while pretending to be the target. But even with UDP disabled by default, you STILL shouldn't have the server available to the internet. That's not what it's intended for.


It could still be configured to only listen on local connections instead of external ones.
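For anyone in that position, locking memcached down comes down to two startup flags (versions 1.5.6 and later already ship with UDP disabled by default; the interface address below is an example, and -u/-m are ordinary deployment choices, not security settings):

```shell
# memcached startup flags (or the equivalent lines in /etc/memcached.conf):
#   -l 127.0.0.1  listen only on loopback (or a private interface address)
#   -U 0          disable the UDP listener entirely
memcached -l 127.0.0.1 -U 0 -u memcache -m 64
```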

Memcached has been dealing with issues for a couple of years now. Look at Talos' scathing audit from July of last year. Between flawed software and increasingly lazy IT departments that rely on it and won't even patch it, I am truly amazed that anything on the internet functions other than exploits. UDP in 2018? C'mon, we know better... right?

AFAIK memcached is inherently insecure because it was never intended to face the public internet. So I think one solution is to implement a 'kill' command that simply shuts the server down. We could have crawlers that automatically shut down any public-facing memcached servers. On private networks, well, you shouldn't have anybody auto-killing it...


There's a "shutdown" command... which, sadly, is not enabled by default. Though most services would just restart it if it stopped.


Well, they do have to carry the bandwidth and possibly pay third parties for it, but as the article says, it's not nearly as much bandwidth on their end as on the target's end.

Why don't we DDoS those 93,000 unsecured servers by turning them against each other? If we keep them busy attacking themselves, we'd spare the rest of the internet!

Because that would again be tons of traffic that the backbone has to deal with. If it's busy DDoSing all over the place there is less capacity for actual data.

Informing the people running the servers, and getting ISPs to block invalid packets coming from their networks, will be much more effective in the end.
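The ISP-side fix alluded to here is egress filtering (BCP 38): drop any packet leaving your network whose source address isn't from your own assigned ranges, which removes the spoofing these reflection attacks depend on. A toy Python illustration of the rule, using a documentation prefix as the hypothetical assigned range:

```python
import ipaddress

# Hypothetical prefix assigned to this network (TEST-NET-3, a
# documentation range; a real network would list its own allocations).
OWN_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]

def should_forward(src_ip: str) -> bool:
    """BCP 38 in one rule: forward outbound traffic only if its source
    address actually belongs to one of our own prefixes."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in OWN_PREFIXES)

print(should_forward("203.0.113.5"))   # legitimate source, forward
print(should_forward("198.51.100.7"))  # spoofed source, drop
```

In practice this is a router ACL or uRPF check rather than application code, but the decision it encodes is exactly this one.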

Mobile networks in Germany used to be utterly insecure, but a few talks about it and some outreach to the people working on them (on the technical end, not the idiots in management) solved the majority of the issues. Most of those left are unsolvable due to legacy demands and standards that are insecure in themselves.