The ISPs of the world keep letting this kind of crap happen.... It should be pretty obvious when someone is trying to DDoS a server. Even if they don't want to lose a "paying customer", simply cutting access to that server for x amount of time for that IP would be more than enough.

The ISPs of the world keep letting this kind of crap happen.... It should be pretty obvious when someone is trying to DDoS a server. Even if they don't want to lose a "paying customer", simply cutting access to that server for x amount of time for that IP would be more than enough.

I understand where you're coming from but I think that may be a premature observation. I doubt this is just an attack against a single IP address. You should also remember that there comes a point where the incoming volume of traffic destined for the IP address(es) under attack overwhelms the upstream carriers prior to the null-routing of said addresses. The lower the null-route is set, the greater the chance for upstream impact. Mitigating heavy DDoS isn't always just a simple matter.

These computers are parts of botnets that have existed for a long time. Send the infected customers an email about their infection, containing an offer to fix it (for a certain price) and a deadline after which they will be cut off if they do not get it fixed.

Which is going to be a great explanation to talk about on TV talk shows. Alongside why ISPs cut innocent people who are victims of a crime off the internet as an additional punishment, and what should be done about those evil ISPs.

All the while the person dumb enough to actually make that career-ending call enjoys his new career at a local fast food restaurant.

Which is going to be a great explanation to talk about on TV talk shows. Alongside why ISPs cut innocent people who are victims of a crime off the internet as an additional punishment, and what should be done about those evil ISPs.

I do see (but don't particularly care) about the ISPs side of things.

So, don't "cut [granny] off from the internet"; set up router rules so that all data emanating from that particular (or those thousand) IP addresses always gets sent the same package of data - an informational page explaining the infection and how to get it fixed.

I get to choose those who I work with, and if they're incompetent, they don't get to come back for a second piece of work.

No, I don't work in a "public facing role". And nor would I want to.

Pretty much the whole thing about viruses, malware and fucked-up computers is largely down to people who aren't capable of following technical rules. They'll disappear. One funeral and/or one personal education at a time.

On the other hand, I have done network maintenance work for a major ISP. It's highly technical work. It's also work that requires understanding of more than just technical side of it. People like you never get that kind of work, because they are incapable of performing it - even if they are perfect on the technical side, they utterly fail the test of understanding the human side of the issue. As you have shown very clearly in your last post.

These computers are parts of botnets that have existed for a long time. Send the infected customers an email about their infection, containing an offer to fix it (for a certain price) and a deadline after which they will be cut off if they do not get it fixed.

Because the offending packets are UDP, they can (and do) employ bogus response IPs. The IPs of their victims, in fact - which is how the reflection occurs. The botnet machines send out small judas packets to DNS servers all over the world. The DNS servers think that these are legitimate queries from the victim machine and send out large quantities of DNS data to the victims. Hence, the other name: amplification.
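The asymmetry described above is easy to see in code. A minimal sketch (Python, with an illustrative query ID and name) builds the kind of tiny UDP payload the botnet spoofs; a DNS server's answer to an ANY query for the same name can be orders of magnitude larger:

```python
import struct

def build_dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query packet (qtype 255 = ANY, the classic
    amplification query)."""
    # Header: id, flags (recursion desired), 1 question, 0 answer/auth/extra
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # qtype, class IN
    return header + question

query = build_dns_query("example.com")
print(len(query))  # 29 bytes on the wire; an ANY response can run to kilobytes
```

The attacker pays for 29 bytes per spoofed packet; the victim receives the full multi-kilobyte answer, hence the amplification.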

The problem is, the fix I had to employ was to physically replace the co-opted DNS servers with more advanced equipment, because the system software that was on them had no throttling capabilities, nor was it capable of recognizing and rejecting suspicious queries.

Question: is there any mechanism by which you can push that back up the line? As in, "Hey, I'm getting bogus requests from you. Can you see which of your users are sending vast quantities of DNS (or NTP) requests aimed at me, and perhaps inform the users that they are violating your terms of service?"

I assume that hosting such an attack is a TOS violation from most ISPs, though I've certainly heard from Slashdotters who feel that their packets are their own business, and that their ISP should be required to carry them.

Question: is there any mechanism by which you can push that back up the line? As in, "Hey, I'm getting bogus requests from you. Can you see which of your users are sending vast quantities of DNS (or NTP) requests aimed at me, and perhaps inform the users that they are violating your terms of service?"

I assume that hosting such an attack is a TOS violation from most ISPs, though I've certainly heard from Slashdotters who feel that their packets are their own business, and that their ISP should be required to carry them.

Sadly, no. In the case of UDP, only the first router in the chain can do anything. The other routers cannot really tell what the upstream path of a UDP packet was.

And since government agencies have been reported to participate in DDOS attacks, I would not be surprised to learn that some of those agencies had even activated exploits in other people's router microcode to press-gang them into participating, unbeknownst to their owners.

Not that you have to be a government agency to do that. It's mainly a difference of scale.

15-20 years ago, sure. Today, you're lucky if you can get anyone with half a clue at your own ISP to even "look into it." Chasing this rabbit more than one or two hops just doesn't happen. And if you could, the chase ends with ISPs that cannot be bothered to stop their customers from spoofing traffic. Maybe if you could get enough operators to disconnect these "lame" ISPs, but you're not going to because that'd be dropping paying customers.

The problem is, the fix I had to employ was to physically replace the co-opted DNS servers with more advanced equipment, because the system software that was on them had no throttling capabilities, nor was it capable of recognizing and rejecting suspicious queries.

Protecting against DDoS reflection attacks is very easy, but it requires all 1st-tier ISPs to perform egress IP validation, so packets coming from end users trying to get onto the internet are checked to confirm the source IP address is correct. Filtering anywhere else is impossible because of transit routes, so by the time the second AS gets to inspect the packet, it could legitimately be from anywhere.

The problem is that this costs money to implement and isn't in the interest of 1st-tier ISPs, so it is unlikely to ever happen.
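The validation itself is cheap; conceptually it is just a prefix-membership check at the edge. A minimal sketch (Python, with hypothetical customer prefixes) of what egress IP validation does:

```python
import ipaddress

# Hypothetical prefixes this ISP has assigned to its customers.
ASSIGNED = [ipaddress.ip_network(p) for p in ("203.0.113.0/24", "198.51.100.0/24")]

def egress_ok(src_ip: str) -> bool:
    """Let a packet out onto the internet only if its source address
    falls inside a prefix we actually assigned."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ASSIGNED)

egress_ok("203.0.113.7")  # True: legitimate customer source
egress_ok("192.0.2.99")   # False: spoofed source, drop at the edge
```

In real routers this is a line-rate ACL or uRPF feature rather than per-packet software, but the decision being made is exactly this one.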

Customer ingress validation: check the traffic as it comes in from your customer (i.e. drop packets as soon as possible, as close as possible to where they arrived). Often enough the router's built-in reverse path filter will do this for you, nearly free of processing cost.
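The reverse path filter mentioned above boils down to one lookup: does the best route back to the packet's source point out the interface it arrived on? A toy strict-RPF sketch (Python, hypothetical routing table and interface names):

```python
import ipaddress

# Hypothetical FIB: prefix -> interface holding the best route.
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "cust0",
    ipaddress.ip_network("0.0.0.0/0"): "uplink0",
}

def rpf_pass(src_ip: str, in_iface: str) -> bool:
    """Strict reverse path filter: accept a packet only if the route
    back to its source points out the interface it arrived on."""
    addr = ipaddress.ip_address(src_ip)
    routes = [n for n in FIB if addr in n]
    best = max(routes, key=lambda n: n.prefixlen)  # longest-prefix match
    return FIB[best] == in_iface

rpf_pass("203.0.113.5", "cust0")  # True: source routes back to the customer port
rpf_pass("8.8.8.8", "cust0")      # False: spoofed source, drop
```

Since the router already does this longest-prefix lookup to forward packets, the check really is nearly free.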

And just how do we find these tens of thousands of zombies? They're spoofing traffic and their ISPs are allowing it. They aren't sending out gigabits of traffic, so there's no abnormal amount of traffic to look for. (Perhaps abnormal for a single, specific connection, but no one is applying heuristics individually to millions of connections. They look for the far easier to spot road flare... a link that's 90+% utilized for several hours.)

How, exactly, would you propose that this is done by carriers? You say that it would be obvious if someone were attempting a DDoS attack, but that may not be true. One of the major issues with DDoS is that it doesn't require tremendous bandwidth on the client side. There could be millions of clients, and (given that everyone thinks they need 50Mbps home internet for their web surfing) there's plenty of bandwidth available that can be used while still looking like legitimate traffic.
It has been my experience that the best attacks against things involve greater quantities of remote hosts and less bandwidth than fewer hosts with more bandwidth.

Except in this case (or other reflection attacks, i.e. anything involving source address spoofing), RPF [wikipedia.org] on customer-facing interfaces should prevent the attack from leaving the ISP's network in the first place. Note that I'm talking about the ISP of the original machine performing the request with the spoofed source IP here, not even the ISP of the server that's being used for the reflection & amplification (which in this case is a vulnerable or misconfigured NTP server).
The affected NTP servers need to be cleaned up as well, but the sources of the original packets also should be preventing the spoofed traffic from leaving their networks.

Well, yes and no. There really aren't that many vulnerable NTP servers out there, and those which exist rarely have much bandwidth to do much damage. HOWEVER, there are many, many, many shitty little firewalls (I'm looking at you, SonicWall, among others) which for some FUCKING RETARDED reason default to responding to unsolicited NTP packets with a "reject" or "bad request" packet, instead of just dropping them into the bitbucket. So for the cost of sending a malformed 8-byte UDP packet, you can get the amplifier to respond with a full-size "bad request" or "service denied" response.

Verifying source IPs is, as you stated, the real root of the issue. But it's not nearly so easy as you might think to blacklist a rogue ASN, at least not without blacklisting entire regions of the world at the same time. You need to get ALL the ASNs which have ANY kind of path to the rogue one in on the blacklisting, and even if you got it done they'd already have a contingency plan: change the company name, transfer the IPs to a "new" company with a new ASN, and boom, you're back in business. It really is trying to shoot at a moving target, and in the process you end up hitting a lot of people who aren't guilty of anything.

There really aren't that many vulnerable NTP servers out there, and those which exist rarely have much bandwidth to do much damage.

I disagree. JunOS, for instance, runs a version of ntpd that's vulnerable to this, and by default adding any NTP servers to its config (to turn it into an NTP client) also makes it respond to NTP queries from any source IP with no auth. You're fine if it's an SRX or something in full flow mode, as in that case you need to explicitly tell the box to host NTP in a particular security zone or on a particular interface, but if it's a packet mode device (e.g. an EX switch or even an SRX in selective or full packet mode), it will answer queries from anyone who can reach it.

How, exactly, would you propose that this is done by carriers? You say that it would be obvious if someone were attempting a DDoS attack, but that may not be true. One of the major issues with DDoS is that it doesn't require tremendous bandwidth on the client side. There could be millions of clients, and (given that everyone thinks they need 50Mbps home internet for their web surfing) there's plenty of bandwidth available that can be used while still looking like legitimate traffic.

It has been my experience that the best attacks against things involve greater quantities of remote hosts and less bandwidth than fewer hosts with more bandwidth.

It is obvious when you see hundreds of connections per minute from each of hundreds of sources to one single target with no meaningful response back. It's even more obvious when your peers call you up, say that the target is being DDoSd, and ask you to stop it. It's even more obvious when the attackers are spoofing IPs.

Identifying the zombies is never the issue. It always comes down to ISPs simply not having the balls to do something about it.
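The "many sources, one silent target" pattern described above is trivial to spot in flow data. A minimal sketch (Python, with hypothetical flow records and a hypothetical threshold):

```python
from collections import defaultdict

# Hypothetical flow records: (source IP, destination IP, bytes the
# destination sent back on the flow).
flows = [
    ("10.0.0.1", "203.0.113.9", 0),
    ("10.0.0.2", "203.0.113.9", 0),
    ("10.0.0.3", "203.0.113.9", 0),
    ("10.9.9.9", "198.51.100.4", 1400),
]

def suspected_targets(flows, min_sources=3):
    """Flag destinations hit by many distinct sources that send back
    no meaningful response traffic."""
    sources_per_dst = defaultdict(set)
    for src, dst, resp_bytes in flows:
        if resp_bytes == 0:
            sources_per_dst[dst].add(src)
    return [d for d, srcs in sources_per_dst.items() if len(srcs) >= min_sources]

suspected_targets(flows)  # ['203.0.113.9']
```

Real deployments work from sampled NetFlow/sFlow rather than full flow logs, but the aggregation is the same.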

Specifically, he noted in http://archive.icann.org/en/co... [icann.org] that it's a simple task to check outbound packets and drop them if the return address is for someone else.

The open question is ISP motivations: I used to work for Canada's first big ISP, and my management would have freaked out if they thought they were frivolously enabling a DDOS attack. See the queue article and comments for more info.

The Vixie article describes doing it at the edge, where one only has one or two uplinks from the local ISP and where the cost is trivial. One wouldn't want to do it closer to the backbone, for the reasons you noted.

Actually, you'd do it at the customer ingress point. So *your* ISP would say, "that's not your address, moron" and ignore the packet. It would never make it past your ISP, much less to AT&T or Cogent. HOWEVER, there are too many ISPs that don't do that, so in your example, AT&T would have to validate the traffic from your ISP. And that's where it starts to fall apart, as AT&T doesn't necessarily know with whom your ISP peers, or what their local preferences are; legitimate traffic from you could arrive from an unexpected direction.

I went to firstlook.org this morning to see Glenn Greenwald's latest NSA story, and was surprised to first see a page from cloudflare claiming to be checking if I was a legit visitor. Could this be related? Have the spooks struck again?

It's not always laziness. I added outgoing filters to my routers so that they only allowed source addresses from my network. That was great at stopping DOS attacks, but as I found out the hard way, several of my customers were sending outbound traffic with source addresses not on my network. That was in 1997. For the next several years, it was a huge hassle to keep adding additional source address ranges for customers. An ISP selling a high speed connection has to allow outgoing traffic from addresses it doesn't own. That's the entire point of selling transit.

Because people who can update Cisco IOS configs are cheap to hire, and the updates are risk-free. Also, customers are very understanding and patient when they can't send traffic after they change addresses. The GP is right that it is a huge hassle.

I would not call that lazy. Altruism only goes so far in the REAL world. If someone pays you for the effort, or legislation demands it of everybody, you will probably keep doing it. But if it's only out of the goodness of your heart, eventually everybody will say fuck it.

It's not always laziness. I added outgoing filters to my routers so that they only allowed source addresses from my network. That was great at stopping DOS attacks, but as I found out the hard way, several of my customers were sending outbound traffic with source addresses not on my network.

"I found-out the hard way, several of my customers were sending outbound traffic with source addresses not on my network."

You should lose those customers! Really. No-one, I repeat no-one, has any business sending packets with forged source addresses. Refer them to a book on policy routing if they don't know how to route in a multihomed environment.

He didn't say spoofing, he said transiting, so they are people who have their own IP blocks assigned and are using those. The advantage is that you can have multiple uplinks and use the second as backup if your primary goes down and all of the ips never change.

Someone has to be running BGP, but not necessarily the customer... I've set up several customers that had PI space but didn't run BGP themselves. It's set up on the ISP side just like the ISP's RIR-assigned space... announce it like you own it. :-) (They can multihome that, too, as long as everybody knows that's the plan. Some ISPs get upset when they see "their" space being announced by someone else.)

Users of the internet should send traffic from their assigned address. When they have multiple addresses, they should use the address that belongs to the interface they send it on. Either they route the traffic to the interface that belongs to an address, or they assign the source address depending on the interface they want to route on. Don't adhere to this rule and you face blacklisting of your traffic.

It is similar to open SMTP servers. It used to be no problem, used to be common practice, and is not acceptable anymore.
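The "use the address of the interface you send on" rule above is exactly what the sockets API supports: an application binds its socket to the chosen interface's address before transmitting. A minimal sketch (Python, using loopback addresses so it runs anywhere):

```python
import socket

# Bind the socket to the address of the interface we intend to send on,
# so outgoing packets carry the matching source IP.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))             # the chosen interface's own address
sock.sendto(b"ping", ("127.0.0.1", 9))  # port 9 is the "discard" service
src = sock.getsockname()[0]             # source address stamped on the packet
sock.close()
```

On a real multihomed host, you would bind to the address assigned to the uplink you intend to use instead of 127.0.0.1; the kernel then stamps that address as the source, and an egress filter on that uplink sees a legitimate packet.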

It's not spoofing if you have your own AS and are sending traffic from your allocated IPs in that AS via multiple transit providers. When OP said:

...several of my customers were sending outbound traffic with source addresses not on my network

...he could have been indicating that he was providing transit to customers that had their own IP allocations, ergo the source addresses were not in his network.

Multi-homing doesn't necessarily mean "getting an account from two different retail ISPs and then NAT-ing differently depending on which interface you're leaving on"; we could be talking about BGP multihoming [wikipedia.org].

It's not always laziness. I added outgoing filters to my routers so that they only allowed source addresses from my network. That was great at stopping DOS attacks, but as I found out the hard way, several of my customers were sending outbound traffic with source addresses not on my network.

I'm not a networking wizard so I ask...why did the customers need to send outbound traffic using modified source addresses? Why should that be allowed as part of your service?

He was selling transit. It was customers of his customers. The customer of the customer had a valid source address in the customer of the customer's assigned netblock. The customer of the customer's netblock isn't one of the customer's netblocks.

Even if those addresses didn't belong to your network, you should have had a route to them via your inside network. If you allow outbound packets from addresses in your own network AND from addresses for which routes exist on your inside network, that should cover 99.99% of valid situations.

It's not always laziness. I added outgoing filters to my routers so that they only allowed source addresses from my network. That was great at stopping DOS attacks, but as I found out the hard way, several of my customers were sending outbound traffic with source addresses not on my network. That was in 1997. For the next several years, it was a huge hassle to keep adding additional source address ranges for customers. An ISP selling a high speed connection has to allow outgoing traffic from addresses it doesn't own. That's the entire point of selling transit.

The way I read that... your customers were multihomed and sending you traffic that belonged on another ISP's network? That's the definition of spoofing! :-) You're perfectly correct to drop that crap. If they aren't using address space you assigned them, or PI space they own -- and told you about -- it's not your problem.

I've had several clients bring their own address space (PI) and a few even had their own ASN. In **VERY** rare cases, we'd allow a client to announce a non-portable assignment to another ISP.

A better question is why ISPs are allowing forged traffic to ENTER their network from end users. If they drop grandma's traffic that doesn't have grandma's srcip, then grandma won't complain and the WWW would be a little safer. Of course there will always be end users who transit legit traffic.

Because it is VERY difficult to ascertain whether the source of an inbound packet is forged unless it is very obvious (like an IP that should be inside your network or on a private subnet). Outbound traffic on the other hand should almost always have a source IP that belongs to your assigned ranges (or configured private subnets).

Because it is VERY difficult to ascertain whether the source of an inbound packet is forged unless it is very obvious (like an IP that should be inside your network or on a private subnet). Outbound traffic on the other hand should almost always have a source IP that belongs to your assigned ranges (or configured private subnets).

On the Internet there should also never be traffic with RFC1918 source IP addressing, which is easily filtered on ingress.
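That bogon check is a one-liner with Python's ipaddress module, which already knows which ranges can never legitimately appear on the public internet (addresses in the sketch are illustrative):

```python
import ipaddress

def bogon_source(src_ip: str) -> bool:
    """True if the source address can never legitimately appear on the
    public internet (RFC 1918 private space, loopback, link-local,
    documentation ranges, etc.)."""
    return not ipaddress.ip_address(src_ip).is_global

bogon_source("192.168.1.10")  # True: RFC 1918, drop at the border
bogon_source("8.8.8.8")       # False: routable public address
```

A border router implements the same test as a handful of static ACL entries, which is why this particular filter is considered table stakes.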

Filtering ingress packets with RFC1918 source IPs may be useful in some circumstances, but it doesn't help in amplified attacks.

The source in these cases will always be a legitimate uninfected machine that is just doing its job (such as a DNS or NTP server). The source IP will be whatever IP the requester expects to see, such as the destination IP of the initial request.

In amplified attacks, the forgery occurs in the initial request packets, all of which have the source IP of the DoS target, which must always be an actual external IP. This is where egress filtering is useful, because none of these requests should have an IP outside of the subnet serviced by the egress filter.

Filtering ingress packets with RFC1918 source IPs may be useful in some circumstances, but it doesn't help in amplified attacks.

The source in these cases will always be a legitimate uninfected machine that is just doing its job (such as a DNS or NTP server). The source IP will be whatever IP the requester expects to see, such as the destination IP of the initial request.

In amplified attacks, the forgery occurs in the initial request packets, all of which have the source IP of the DoS target, which must always be an actual external IP. This is where egress filtering is useful, because none of these requests should have an IP outside of the subnet serviced by the egress filter.

Agreed except that egress filtering is not practical for inter provider transit traffic and not all SPs filter their customers' traffic.

I had that attack in my network last week. It's not based on IP spoofing. It simply exploits open ntpd servers and sends them a payload which targets many more servers (reflection and amplification). I had to mitigate the attack by filtering the NTP port to just a few credible servers from pool.ntp.org.

That is called IP spoofing. They send a request with a sender address of a victim, and the server sends the reply to the victim. This would not be possible if the attacker's ISP did not allow source address spoofing.

You are right. But I can't believe that there are still ISPs out there which do not put filters based on their routing objects on their border routers. It's insane. And on the other hand, their upstream providers allow it. What is BGP good for then? Are network guys that lazy?

I did have one site I normally visit (DPReview) get really slow, and then eventually go offline for a short time, I wonder if that was due to the attack.

At this point it seems like even massive attacks are not really doing much of a job of slowing down companies using something like CloudFlare or other distributed CDNs. I wonder how much longer it is before people give up on DDOS attacks as ineffective.

Our last inbound attack appeared to come from over 50 million very well spread out different IPs. Of course those are all spoofed IPs but either way you can't effectively block that many without blocking larger amounts of legit traffic.

Still not as bad as when they explained what whitelisting is [slashdot.org] a few days ago. Since this is starting to look like a trend, I'm beginning to suspect that the roadblock over the UI and UX stuff with the beta has led them to try rolling out their new, "more accessible" content independently of the redesign.

It's more that any community, even Slashdot's, has a variety of interests and areas of expertise. You can be very educated and technically-minded, but still not know anything about NTP, in the same way a network engineer may not know offhand what solid rocket grain geometry is, or Sanger sequencing.

It's a bit of a catch-22 -- when we post more explanatory summaries, people say that we're dumbing it down. When we post more complicated ones, people say they shouldn't have to turn to Google to figure out what the story is about.

I understand that, and I also know you guys are in the unenviable position of getting unfairly dinged no matter what you do, particularly right now with the beta and everything related to that situation. At this point, you'll be taking flak no matter what you do.

That said, isn't this exactly the sort of thing that links are for? Whitelisting shouldn't have needed an explanation, given that the concept can be inferred easily from the name itself (not to mention that it's a common topic here on Slashdot), but

I've been a member of the NTP Hackers team for more than a decade; the mechanism that is being abused for these attacks is in fact a very useful debugging/monitoring facility:

You can ask an ntpd server how many clients it has and how often each of them has been accessing the server. On old/stable ntpd versions this facility was accessed using a single pure UDP packet (ntpdc -c monlist), and in reply you got back information about up to 602 clients (the size of the monlist buffer), sent as a big burst of UDP packets.

Researchers have developed maps of the entire publicly accessible NTP networks using this facility, I have personally used it to map the status of our fairly big corporate network. I.e. it can be extremely useful!

A few years ago the development version of ntpd switched to a different protocol and method to query this information, using a nonce, which means that you can no longer spoof the source address: (ntpq -c mrulist). Since the mrulist buffer is configurable, I have set up my public ipv6 pool server (ntp2.tmsw.no [2001:16d8:ee97::1]) to keep monitoring info for the last 10K clients.
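The reason the nonce defeats spoofing is worth spelling out: the server mails the nonce to the claimed source address, so only the true holder of that address can echo it back. A toy sketch of the idea (Python, illustrative IPs; not the actual ntpd wire protocol):

```python
import secrets

# Toy version of the nonce handshake: the server only answers the
# expensive query when the client echoes back a random nonce, which a
# spoofed source address can never receive.
issued = {}

def request_nonce(client_ip: str) -> str:
    nonce = secrets.token_hex(16)
    issued[client_ip] = nonce  # the nonce is sent to client_ip itself
    return nonce

def handle_mrulist(client_ip: str, nonce: str) -> bool:
    """Serve the monitoring data only for a valid (ip, nonce) pair."""
    return issued.get(client_ip) == nonce

n = request_nonce("203.0.113.5")
handle_mrulist("203.0.113.5", n)         # True: the real client saw the nonce
handle_mrulist("203.0.113.5", "forged")  # False: a spoofer is guessing blindly
```

An attacker spoofing a victim's address gets the nonce delivered to the victim, not to himself, so the amplifying reply is never triggered.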

Today we recommend that you either upgrade to ntpd 4.2.7, or if you really cannot do this, insert a 'restrict default noquery' option in the ntp.conf configuration file. The 'noquery' means that clients can still use the server for regular time requests, but the monitoring facility is disabled.
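For reference, the mitigation described above amounts to a one-line change; a minimal ntp.conf sketch (server name illustrative):

```
# /etc/ntp.conf -- keep serving time, but refuse monitoring queries
restrict default noquery
restrict 127.0.0.1            # leave local monitoring unrestricted
server 0.pool.ntp.org
```

With this in place, ordinary clients still get time service, but monlist-style queries from the outside are ignored instead of answered.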

Thank you for pointing that out. It would be great if sysadmins and vendors fixed their NTP config. Unfortunately it's not only NTP that gets abused. The script kiddies also use open DNS servers that do recursive searches. And I'm sure there are more ways kindly offered by ignorant sysadmins and vendors who just don't care. Just google for "TP-Link recursive DNS" to get an idea. The solution is to force vendors to fix recursive DNS and NTP on their Internet facing boxes (why stop there, just "disallow anything from WAN" by default) and make them liable for the default config. Educate and poke sysadmins to fix their badly configured crap if they do not want to get blocked by their ISP or upstream. Force local ISPs to drop packets with a non-local src IP address and block the idiot that sends those packets. And finally add to Spamhaus the IP addresses/ranges of idiots who just don't care. Let's see how quickly they fix their crap once their boss figures out he can no longer send email to the cute-cat-pic mailing list.

This case mentions the use of NTP, but the idea of reflection attacks has by now propagated to TCP as well; even without amplification it seems worthwhile. Right now an attack is running on many webservers that sends SYN packets with source ports 80 and 443 and destination port 80 from spoofed source addresses. Apparently they want to overwhelm the victim with SYN-ACK packets from reflectors. However, those are the same size as the SYN packets sent by the attackers. Probably no issue; those attacks are likely sent from compromised systems and botnets as well.

It is about time that a blacklisting system is setup for providers that allow source address spoofing, similar to how providers running open SMTP servers were tarred and feathered until they fixed it.

Source-address spoofing just shouldn't be happening. Whether on the smallest or largest networks, why would you let someone fabricate any IP address and pass it along as if it were part of your network?

First rule on almost all firewalls is to block all such "foreign" packets.

The big carriers are really the problem here - they should just turn off network access to anyone who provides traffic to/from systems that are not registered in their AS. After an hour of being offline, they'll soon push the message to clean up what IPs are talking out from your networks, all the way down to individual leased line customers.

As someone above pointed out, load balancing and redundancy are valid reasons to send packets with source IPs not in the originating AS. That mostly doesn't apply to residential subnets where the zombies are, but at least one reason does. I sometimes use LTE tethering and my home internet connection simultaneously, because the LTE is as fast or faster than my home connection during non-peak hours. I don't know if it is doing load balancing between the two uplinks, but why shouldn't it?

The connection teaming/bonding firewall code should be able to mangle the source ip to match the outgoing connection's expected source IP. There should be no requirement to spoof the ip to go across a different network.
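On a Linux box doing the teaming, that rewrite is a stock NAT rule rather than anything exotic; a sketch with illustrative interface names:

```
# Rewrite the source address to match whichever uplink the packet
# actually leaves on, instead of emitting the other ISP's address.
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wan1 -j MASQUERADE
```

With masquerading per uplink, each packet leaves with the source address the receiving ISP expects, so strict ingress filtering on either link never sees a "foreign" source.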

That's not Dice, it's the regular people who hate beta who are sick of the never-ending toddler temper tantrum of comments about it. The comments were read, possibly ignored and possibly considered, and nothing else is going to happen from continual screaming. Get over it and either leave, or switch off the beta and wait it out like the rest of us.