Krebs on Security

In-depth security news and investigation

DDoS on Dyn Impacts Twitter, Spotify, Reddit

Criminals this morning massively attacked Dyn, a company that provides core Internet services for Twitter, SoundCloud, Spotify, Reddit and a host of other sites, causing outages and slowness for many of Dyn’s customers.

Twitter is experiencing problems, as seen through the social media platform Hootsuite.

In a statement, Dyn said that it came under a global distributed denial-of-service (DDoS) attack against its DNS infrastructure on the east coast this morning, October 21, starting at around 7:10 a.m. ET (11:10 UTC).

“DNS traffic resolved from east coast name server locations are experiencing a service interruption during this time. Updates will be posted as information becomes available,” the company wrote.

Dyn encouraged customers with concerns to check the company’s status page for updates and to reach out to its technical support team.

A DDoS attack is when crooks use a large number of hacked or misconfigured systems to flood a target site with so much junk traffic that it can no longer serve legitimate visitors.

DNS refers to Domain Name System services. DNS is an essential component of all Web sites, responsible for translating human-friendly Web site names like “example.com” into numeric, machine-readable Internet addresses. Anytime you send an e-mail or browse a Web site, your machine is sending a DNS look-up request to your Internet service provider to help route the traffic.
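The delegation chain behind that look-up can be sketched as a toy Python resolver. Everything below is illustrative: the zone data is hard-coded in dictionaries, and the server names and IP address are stand-ins, whereas a real resolver sends an actual network query at each step.

```python
# Toy sketch of the DNS lookup chain described above. The zone data and
# IP address are made up for illustration; a real resolver sends network
# queries at each delegation step instead of doing dictionary lookups.

ROOT = {"com.": "a.gtld-servers.net"}              # root delegates .com
TLD = {"example.com.": "ns1.example-dns.net"}      # .com delegates the domain
AUTHORITATIVE = {"example.com.": "93.184.216.34"}  # the final A record

def resolve(name: str) -> str:
    """Walk root -> TLD -> authoritative and return the IP for `name`."""
    tld = name.split(".", 1)[1]       # "example.com." -> "com."
    tld_server = ROOT[tld]            # step 1: a root server names the .com servers
    auth_server = TLD[name]           # step 2: the TLD server names the authoritative server
    return AUTHORITATIVE[name]        # step 3: the authoritative server answers with the IP

print(resolve("example.com."))  # -> 93.184.216.34
```

When an attacker knocks out the servers behind step 3, as happened to Dyn's customers, the chain breaks and the name simply stops resolving.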

ANALYSIS

The attack on Dyn comes just hours after Dyn researcher Doug Madory presented a talk on DDoS attacks in Dallas, Texas at a meeting of the North American Network Operators Group (NANOG). Madory’s talk — available here on Youtube.com — delved deeper into research that he and I teamed up on to produce the data behind the story DDoS Mitigation Firm Has History of Hijacks.

The record-sized attack that hit my site last month was quickly superseded by a DDoS against OVH, a French hosting firm that reported being targeted by a DDoS that was roughly twice the size of the assault on KrebsOnSecurity. As I noted in The Democratization of Censorship — the first story published after bringing my site back up under the protection of Google’s Project Shield — DDoS mitigation firms simply did not count on the size of these attacks increasing so quickly overnight, and are now scrambling to secure far greater capacity to handle much larger attacks concurrently.

The size of these DDoS attacks has increased so much lately thanks largely to the broad availability of tools for compromising and leveraging the collective firepower of so-called Internet of Things devices — poorly secured Internet-based security cameras, digital video recorders (DVRs) and Internet routers. Last month, a hacker by the name of Anna_Senpai released the source code for Mirai, a crime machine that enslaves IoT devices for use in large DDoS attacks. The 620 Gbps attack that hit my site last month was launched by a botnet built on Mirai, for example.

Interestingly, someone is now targeting infrastructure providers with extortion attacks and invoking the name Anna_senpai. According to a discussion thread started Wednesday on Web Hosting Talk, criminals are now invoking the Mirai author’s nickname in a bid to extort Bitcoins from targeted hosting providers.

“If you will not pay in time, DDoS attack will start, your web-services will
go down permanently. After that, price to stop will be increased to 5 BTC
with further increment of 5 BTC for every day of attack.

NOTE, i’m not joking.

My attack are extremely powerful now – now average 700-800Gbps, sometimes over 1 Tbps per second. It will pass any remote protections, no current protection systems can help.”

Let me be clear: I have no data to indicate that the attack on Dyn is related to extortion, to Mirai or to any of the companies or individuals Madory referenced in his talk this week in Dallas. But Dyn is known for publishing detailed writeups on outages at other major Internet service providers. Here’s hoping the company does not deviate from that practice and soon publishes a postmortem on its own attack.

Update, 3:50 p.m. ET: Security firm Flashpoint is now reporting that they have seen indications that a Mirai-based botnet is indeed involved in the attack on Dyn today. Separately, I have heard from a trusted source who’s been tracking this activity and saw chatter in the cybercrime underground yesterday discussing a plan to attack Dyn.

Update, 10:22 a.m. ET: Dyn’s status page reports that all services are back to normal as of 13:20 UTC (9:20 a.m. ET). Fixed the link to Doug Madory’s talk on Youtube, to remove the URL shortener (which isn’t working because of this attack).

Update, 1:01 p.m. ET: Looks like the attacks on Dyn have resumed and this event is ongoing. This, from the Dyn status page:

This DDoS attack may also be impacting Dyn Managed DNS advanced services with possible delays in monitoring. Our Engineers are continuing to work on mitigating this issue. Oct 21, 16:48 UTC

As of 15:52 UTC, we have begun monitoring and mitigating a DDoS attack against our Dyn Managed DNS infrastructure. Our Engineers are continuing to work on mitigating this issue. Oct 21, 16:06 UTC

This entry was posted on Friday, October 21st, 2016 at 9:59 am and is filed under Other.

There is a good chance your route is being remembered, which is why the sites are still down. Each domain has a time-to-live (TTL) value that controls how long your ISP’s recursive DNS servers remember the answer to a lookup. Even if a name didn’t resolve to an IP along that path, the resolver will remember the result for a certain period of time.
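That TTL logic can be sketched as a minimal resolver cache. This is illustrative only: the domain name, IP, and the injectable clock are assumptions made for the demo, and real caches also store record types, negative answers, and more.

```python
# Minimal sketch of a recursive resolver's TTL-bound cache, as described
# in the comment above. The clock is injectable so the expiry behavior
# is easy to demonstrate without actually waiting.

class DnsCache:
    def __init__(self, clock):
        self.clock = clock       # callable returning the current time in seconds
        self.records = {}        # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        self.records[name] = (ip, self.clock() + ttl)

    def get(self, name):
        entry = self.records.get(name)
        if entry is None:
            return None          # never seen: must ask upstream
        ip, expires_at = entry
        if self.clock() >= expires_at:
            return None          # TTL elapsed: must re-resolve
        return ip

now = [0]                                        # fake clock, in seconds
cache = DnsCache(lambda: now[0])
cache.put("twitter.com", "104.244.42.1", ttl=300)
print(cache.get("twitter.com"))   # within TTL -> 104.244.42.1
now[0] = 301
print(cache.get("twitter.com"))   # TTL expired -> None
```

Once the TTL runs out, a strictly standards-following resolver has to go back to the authoritative servers, which is exactly what fails during an attack like this one.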

What if Google/Microsoft/network providers etc. joined together to scan the internet for ISPs that allow IP spoofing, and other misconfigured routers that lend themselves to amplification attacks, and injected warnings for all IPs from those networks anytime they do a Google search, letting them know their ISP is vulnerable? If the ISP does not fix it within a month of discovery, they block access to Google, Bing, Wikipedia… for that entire network, until they properly configure their routers. Could even do it on a smaller level for IoT devices that use default U/P that are open on the web, or are already bots. Fix your stuff, or get blacklisted.

It seems to me that a re-work of DNS may be a better solution. One where either a security layer is invoked or perhaps a more robust failover design to virtualize target name resolvers and make them more dynamic. I understand the significance of modifying a core service like DNS but we should address the target of the problem, no?

Looks like they are testing their new IoT bot farm. I believe that we are the front lines and it’s up to us to prevent a dystopian future from happening. The future is what we make it. I choose to make it Star Trek. Quote from Carl Sagan’s Contact:

David Drumlin: I know you must think this is all very unfair. Maybe that’s an understatement. What you don’t know is I agree. I wish the world was a place where fair was the bottom line, where the kind of idealism you showed at the hearing was rewarded, not taken advantage of. Unfortunately, we don’t live in that world.

Ellie Arroway: Funny, I’ve always believed that the world is what we make of it.

On a side note: I’ve almost started calling all the self-starting videos on sites DDoS attacks themselves. They pull down an increasingly large, maybe even huge (dunno yet), amount of data and prevent me from seeing or otherwise getting to the article I may have come to the site for. Also, increasingly there is no way to 1) prevent the download regardless of whether you play the video and/or 2) really shut them off; even if you can hit the pause button (no stop buttons are on these), there is no way to prevent any further loading of video in the background.

And I do video, but not these. The irony is the idea that the more media, and in particular moving or animated video, the more you get your message out. That is simply not so. The more you shove at anyone, the more our brains shut down on input. And the more movement you have in the corners, the less you can focus on 1) the article and even 2) the ad contents if you want. I will also add in the constantly shifting menus in fast food places, which make it irritating to try to order.

Anyway, I hope I am not too far off the subject of this particular article (my apologies if so) but I was just on a couple of news sites which were pushing all this junk down my pipe (junk I don’t want to pay for but get stuck for), when I came over here hoping for the most recent info on Dyn’s DDOS attack. I couldn’t help make the comparison and think about the slippery slope in terms of DDOS (in effect) coming from the main site.

I had to add Adblock Plus a week ago for the very same reason on a desktop. This same 3 to 6 gigabyte an hour download, plus a nice big upload, made even scrolling through some sites impossible. It started happening on a few more but not all sites. netstat is about as far as my troubleshooting knowledge goes, so I could only guess whether it is adware gone mad, or a hacking exploit that used large downloads, perhaps just fishing in memory. Would be nice if someone dropped a clue. Thanks.

Problem solved for those in life-altering situations. I hope those in vulnerable positions have good IT folks on hand who understand you can get around a majority of issues.

Also, this site even went down for me after I refreshed and before I switched my DNS server. Many, many more people need to learn basic concepts about how networking works in this world to mitigate these things. Far too many people are obsessed with their touch screen pretty GUI’s to do things online, without putting in the effort to learn WHY it works.

Dude, if the domain’s DNS is down, switching your DNS server will do nothing. Only if your new DNS server still has the query cached will you be able to reach the site. If not, when your DNS server tries to ask the domain’s authoritative server, it will get no answer.

You need to learn the basics of how networks work, and how DNS works.

If the authoritative DNS server for a domain is on Dyn, changing the DNS server you are using does nothing. All DNS servers have to be able to reach the authoritative DNS server for a domain to be able to resolve a request. The only exception is when the request is cached on the server already or you create your own authoritative zone for the domain. If using cached queries, the TTL of a record usually determines how long the cache sticks around.

Again, all authoritative servers are not down, that’s why it works. Outage maps show the regions that are hit, and the Midwest US hasn’t been hit with anything. Neither have other parts of the world.

I mean, YES, obviously if every single redundancy of an authoritative name server for a site is down, then you’re working off the cache of a recursive one. But these are large companies, and both Dyn and the websites affected surely have redundancies in place to be operational in at least SOME areas.

So, you are saying nothing. You don’t know who/what is caching, and for how long. Hint: were I running a DNS server (such as Cisco OpenDNS), there is nothing to prevent me from using a cached record if there was a problem reaching an authoritative server. You seem to understand all this, by reading your response. It’s supposed to be this way, but if it isn’t … Obviously, you don’t understand the difference between regular operations and emergency operations, and you don’t know the difference between RFCs and “implementation variations based on experience.”

Well I apologize, I was just trying to provide information, and then I get a lecture about what my suggestion was (that worked) to give people a workaround and how some should understand possible workarounds.

You’re right, I don’t know exactly if it’s a cache from Cisco, or if it’s redundancies. I did a little more research with nslookup, and the primary Dyn server for Twitter is down. OpenDNS’s expiry was 7 days when I ran the -type=soa switch.

Just, nevermind. This is why I barely post on the internet. I just don’t want anyone to die, because I’ve read doctor’s offices can’t get to their records and other things in some places, and it would be horrible if someone was given the wrong medicine or something when there’s an option out there to change DNS settings to a workable setting temporarily to mitigate that.

I’m done, you won over whatever it is why I’m being argued with over trying to help.

Shane, the fix you had suggested also worked for me. I learned my lesson after the big Comcast DNS outage, which I think was in 2015. I changed my config to use OpenDNS servers and then had no trouble accessing Twitter and other sites for which I had received DNS errors when using Comcast’s default DNS server. Not sure why people think it’s appropriate to flame you for a great and useful suggestion. Competitive douchery; some people don’t like it when you know something they don’t.

It’s okay, thanks for the encouragement. I have my Networking and Security certifications and am still in school, so I’m no expert. I assumed changing the DNS server was because it had access to another Dyn server that was still up in another area. It is actually due to the smart cache Cisco implements.

I had another post trying to explain my thoughts, but it didn’t go through due to all the traffic. Basically, I just think everyone should be equipped with some basic knowledge of certain things in school. Survival skills, computer skills like changing DNS servers and pinging or building PCs, auto mechanic skills, and legal skills.

It just sucks so many people are in a bad position because it’s just not taught.

The problem is in how DNS works. Say the nameserver for a domain says the TTL is 7200 (2 hours). 2 hours is the expiration date for the cached DNS record. What is supposed to happen is that every 2 hours, your DNS server checks ITS DNS server (which checks its DNS server, all the way back to the DNS root servers) to update its cached record. If the nameserver is unreachable for a few hours, all records will expire.

What SOME DNS servers do is keep cached DNS records for longer than TTL, and sometimes will keep the expired cached record if they can’t get an update. DNS servers that behave in this non-standard, “broken” way might still be able to resolve domain names to IPs despite the nameservers being down.

I personally think it should be standard to keep serving the last known record when an update is unavailable. This would mitigate both attacks on nameservers, and on root or intermediary DNS servers.
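That "keep the stale record when the refresh fails" behavior, which was later standardized as DNS serve-stale, might be sketched like this. It is illustrative only: `lookup_upstream` is a hypothetical placeholder for a real upstream query, and the names, IP, and timestamps are made up.

```python
import time

def lookup_upstream(name):
    """Hypothetical placeholder for a real upstream query. Here it always
    raises, simulating authoritative servers that are unreachable (as
    during the Dyn attack). A real one would return (ip, ttl)."""
    raise TimeoutError("authoritative server unreachable")

class ServeStaleCache:
    def __init__(self):
        self.records = {}   # name -> (ip, expires_at)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        cached = self.records.get(name)
        if cached and now < cached[1]:
            return cached[0]                  # fresh record: the normal case
        try:
            ip, ttl = lookup_upstream(name)   # expired: try to refresh it
            self.records[name] = (ip, now + ttl)
            return ip
        except (TimeoutError, OSError):
            if cached:
                return cached[0]              # upstream down: serve the stale answer
            raise                             # nothing cached at all: give up

cache = ServeStaleCache()
cache.records["twitter.com"] = ("104.244.42.1", 100)  # record expired long ago
print(cache.resolve("twitter.com", now=200))  # upstream down -> stale answer
```

This is essentially the trade-off the thread is debating: a strictly compliant resolver returns nothing once the TTL expires, while a serve-stale resolver keeps sites reachable through an outage at the cost of possibly returning outdated addresses.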

Thanks for the response. I’m trying to get a handle on this and explain my confusion here.

The thing I was deducing was that not all the authoritative DNS servers were down for say, Twitter. Is there a rule that a company can only have one authoritative server? I thought possibly either Dyn had backup servers in certain regions that weren’t affected, which is why some areas in the Midwest could still access it. Either that, or another provider like AWS or someone else was used as a redundancy in case all of Dyn was down.

So when my ISP here didn’t work, and Cisco’s did, I assumed Cisco just had a line to a working authoritative DNS server, as I just mentioned. Especially after I tracert’ed the path to the OpenDNS server to the Midwest.

How can someone tell, when they’re able to access the internet through a new DNS server, whether it’s only because the cache is still active, or because one of the secondary DNS servers is better positioned to reach the working backup authoritative server?

That’s my main question, because it seems to be both of those could be equally valid assumptions. Thanks for the time and thoughtful response.


Christine; I’m sorry you don’t understand. Will it help you understand if I told you that I subsequently found out that most DNS providers (perhaps even DYN) ignore TTL on authoritative DNS records? How about if I also mention that the longer you specify for TTL, the longer it will take you to have your DNS records change when you later change DNS settings?
I never said don’t use OpenDNS. I tend to use Google DNS, but even it was having trouble on this day.

But dude, many were, and the outage(s) lasted so long that many DNS resolvers that follow the TTL rules WERE down due to over-reliance on DYN. AWS N.Virginia was down because AWS (but not necessarily their customers) use DYN.

Dude, if the domain DNS is down, switching your DNS to OpenDNS will actually work because of their SmartCache feature.

“SmartCache uses the intelligence of the OpenDNS network at large, providing DNS service to tens of millions of people around the world, to locate the last known correct address for a Web site when its authoritative nameserver is offline or otherwise failing.”

You need to learn the basics of how networks work, and how OpenDNS works.

I know what DNS does and how it works. My logic in this was that not all of Dyn’s servers were down, because reports were that the Midwest wasn’t affected. My local non-authoritative DNS server through my ISP in the area of outage wasn’t working, but when I used OpenDNS it did work.

I assumed the reason is that OpenDNS is routed somewhere in the Midwest (tracert’s last hops show Chicago), so I assumed it went to a working authoritative server, or a backup, to get that information. I wasn’t aware it had a SmartCache feature, because when I used Google’s 8.8.8.8 I still had problems, so I assumed the route from the non-authoritative to the authoritative server was why the other DNS didn’t work but OpenDNS did.

I mean, if you want to take shots at me to feel better about yourself for me using that kind of logic and being mistaken, then fine. I knew enough to at least attempt a DNS workaround, knew enough to flush my DNS cache, knew enough to know what authoritative and non authoritative are and how that’s passed along. I would say that’s a decent amount, and I don’t see why my assumption would be so dumb knowing all of that.

But by all means, criticize and trash other people. Hope it’s worth it for you.

So, pray tell, instead of using news reports, tell us the DNS daisy chain that worked for you without changing DNS settings. Using terms like “trolling” when you don’t like what someone says isn’t a good trait.

I wasn’t trolling, my reply wasn’t even directed at you. Why do you assume that anything contrarian or correcting someone is a troll?

I was actually showing how Daniel’s initial response was incorrect by using (mostly) his own words. He assumed that the person suggesting using OpenDNS as a workaround didn’t know how DNS worked. When in fact OpenDNS has a feature to continue to resolve known good names/IPs even when the authoritative server can’t.

If you want to talk about trolling don’t act like a white knight and end your comment with a passive aggressive, “But by all means, criticize and trash other people. Hope it’s worth it for you.”

Hours after Dyn hosted a talk about BackConnect’s poor presence in the net space, and this happens? This is more than enough smell for anyone to ponder at this point. It’s time to investigate the company and the individuals that are in any way familiar or connected to this Marshal Webb and Tucker guy. I encourage everyone to investigate these individuals’ pasts; this is all too frequent surrounding their company. I also believe these actors even attacked your site, Krebs, and used vDOS as a riot shield to get away with it. Krebs, after all this, are you still not questioning whether the attacks may have something to do with them after all?

You’re doing a great job! You are modest to imply Doug Madory’s talk may have been more of a factor than the publication of Spreading the DDoS Disease and Selling the Cure. Today’s attack is surely no coincidence!

I bet you get some help in your efforts now. This attack is getting a lot of attention. And a lot of attention means a lot of complaints. And a lot of complaints, especially from large companies, get a law enforcement response.

You don’t have the power of subpoena. And you can’t search and seize computers for evidence. Many of the people you wrote about reside in the US. They will be investigated.

We formerly used PowerDNS (not the software, the service) in Europe, until it became a proxy battleground for Ukraine-Russia disputes and hence unfortunately useless for those of us without a dog in that fight. PowerDNS is now out of business– once customers have to move, there’s no going back. In our case we moved to AWS but w/TB-level hosing on tap there’s not really anywhere safe.

I do have to wonder if this is an escalation of saber-rattling with regard to the US election and Russian meddling, the freezing of RT bank accounts, etc. A message, perhaps?

From Brian’s article, the implication is that it’s folks butthurt over the exposure of the DDoS-for-hire types. So right now I imagine that DHS and the FBI are looking very carefully at the feeds coming out of Clemson U, given that the biggest drama queen exposed so far seems to be attending college over there.

To me it feels like we’ve let the massively distributed Internet become a bit too centralized and now there are single points of failure. Perhaps this is due to cost or performance reasons, maybe both. Nonetheless, even a DDoS attack of this magnitude should not be able to take out top-tier websites. I haven’t noticed Google being offline today, but that may just be my circumstance.

Here’s a trick, by the way. If there’s an article/website you want to read but the site won’t respond, copy/paste the URL of the non-responding webpage into Google search. When the results are returned, click on the down arrow symbol next to the URL shown in the results. You’ll then be able to load a Google cache of the webpage. I’ve been using this technique for years and it’s also been working perfectly throughout this DDoS attack.

DNS does kind of work that way. But that assumes “secondaries” are acting as DNS slaves, and not other copies of a primary DB. Between providers that usually doesn’t work, they’re usually not really slaves, though some providers can act as slaves to another provider’s primary service (or your own).

Eventually the data expires, though, and if the primaries are still down (and not repointed) it goes stale.

So yes, ideally you could do that. Practically most people don’t bother because it’s hard to make that work.

There’s no real concept of ‘primary’ and ‘secondary’ with DNS name server records. Resolvers will often select the lowest-latency of the list, or just a random one, but they are definitely not tried ‘in order’, both should get approximately equal load.

The behaviour in case of a timeout or SERVFAIL is usually to try a different server, but I believe this is entirely up to the resolver how ‘hard’ to try.
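That try-the-next-server-on-failure behavior might be sketched roughly like this. It is a sketch under assumptions: `query` is a hypothetical placeholder for a single DNS query, and real resolvers weigh latency and retry policy in ways this random-shuffle version does not.

```python
import random

def query(server, name):
    """Hypothetical single DNS query: returns an IP string, or raises
    TimeoutError / ConnectionError on timeout or SERVFAIL."""
    raise NotImplementedError  # stand-in; a real resolver sends a UDP query

def resolve_with_failover(name, nameservers, query=query):
    """Try the domain's NS records in random order, moving on after a
    failure -- roughly the resolver behavior described above."""
    servers = list(nameservers)
    random.shuffle(servers)          # no fixed 'primary': load is spread around
    last_error = None
    for server in servers:
        try:
            return query(server, name)
        except (TimeoutError, ConnectionError) as exc:
            last_error = exc         # timeout/SERVFAIL: try the next server
    raise RuntimeError(f"all nameservers failed for {name}") from last_error
```

How hard to retry, and whether to prefer low-latency servers over random selection, is left to each resolver implementation, which is part of why the same attack affected users so unevenly.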

DNS is fairly flexible. But mostly because not all implementations are identical, a given attack may impact clients in different ways.

Some DNS clients will only look at the first record. Some might pick randomly or consider latency to the DNS as noted above.

Of course, using multiple DNS providers requires more effort, and possibly more money. There’s also an increased attack surface — if someone can hijack your DNS account at any of your providers (by hacking or through social engineering), then they are effectively you.

What moron puts all their DNS servers on one network? This has been a known bad practice for decades, and the consequences are as old as the first time microsoft.com vanished from the Internet in the 1990s for the same reason.

One thing I don’t get is how so many DVRs and other IoT devices got compromised. Just about all of them (afaik) are behind a router, so unless someone went out of their way to setup port forwarding to their device(s), I don’t see how it could happen.

The malware was delivered by email. When the victim opens the email attachment, all devices on that user’s local network are infected. That’s how border access controls (if you can call a router that) were bypassed.

Brian Krebs explains: “But as several readers already commented in my previous story on the Mirai source code leak, many IoT devices will use a technology called Universal Plug and Play (UPnP) that will automatically open specific virtual portholes or ‘ports,’ essentially poking a hole in the router’s shield for that device that allows it to be communicated with from the wider Internet.”

Patrick, no email is needed to do this. The user is not part of the problem.

Those of you who do informal “tech support” for friends/family could sure help by tweeting to the largest group you can, advising people to get these devices off of the net a.s.a.p. There are a lot of people who never hear the cause of these attacks… let them know that if they are using these crappy devices, they ARE the problem. Not bloody Russians.

Hacked baby monitors, thermostats and DVRs are threatening the internet? Made with an impossible-to-secure crappy part from a Chinese company, to the tune of millions? Not to worry, the Libertarian solution is that earth will be fried when the sun goes red giant in 5 billion years… (snark).

For everyone else, perhaps the super-rich private-sector tech companies with the smartest folks in the room should have been proactive and ensured that laws were in place to avoid this very event: certifying security measures on all internet-connected ‘things’. BRIAN WARNED US!

With this becoming an increasing problem, should pressure be placed on the FCC to pass regulations requiring ISPs to follow BCP 38 or similar? If the reason they won’t implement these controls is financial, then fines large enough to reverse the equation could help the situation quickly.

That’s a good start for addressing the spoofing issue with DNS-based reflection attacks. But I don’t believe these Mirai based attacks using insecure IoT devices employed spoofing. To fix that problem requires a whole other set of solutions.