
An anonymous reader writes "Google has an interesting idea on how to take the edge off denial of service attacks. The latest developer builds of Chrome 12 have an option called 'http throttling,' which will simply deny a user access to a website once the browser has received error messages from the URL. Chrome will react with a 'back-off interval' that will increase the time between requests to the website. If there are enough Chrome requests flooding a website under attack, this could give webmasters some room to recover from a nasty DDoS attack."
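
The "back-off interval" described in the summary is essentially classic exponential back-off. A minimal sketch of the idea (the function and parameter names here are illustrative, not Chrome's actual implementation):

```python
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=1.0):
    """Retry a request, doubling the wait after each server error.

    `fetch` is any callable returning an HTTP status code; this is a
    simplified sketch of the back-off behavior, not Chrome's code.
    """
    delay = base_delay
    status = None
    for attempt in range(max_retries):
        status = fetch()
        if status < 500:       # success or client error: stop retrying
            return status
        time.sleep(delay)      # "back-off interval" between requests
        delay *= 2             # grows exponentially on repeated errors
    return status
```

The point is that a site returning errors sees the request rate from each client fall off geometrically instead of staying constant.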

That's also relatively easy to block on the server side, since all of the requests will have the same referrer. Plus with some framebusting code, you can really screw with the website that's being used as the attack vector.
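
The server-side filtering mentioned above could be as simple as matching the Referer header against known attack pages. A hypothetical sketch (header parsing simplified; `blocked_referrers` would be maintained by the site operator):

```python
def is_blocked(headers, blocked_referrers):
    """Drop requests whose Referer points at a known attack page.

    `headers` is a dict of request headers; returns True when the
    request originated from a blocked page. Illustrative only.
    """
    ref = headers.get("Referer", "")
    return any(ref.startswith(bad) for bad in blocked_referrers)
```

In practice you'd do this in the web server or load balancer config rather than application code, but the check is the same.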

It's meaningless. Browsers don't really participate in DDoS attacks; the attacks come from software that uses DNS reflection techniques to saturate TCP and other socket connections until load balancers fail, the servers are saturated, and everything has to time-out.

Protections really don't involve browser back-offs, they relate to parsing source address data, then filtering those out so genuine traffic gets through, rather than traffic that saturates the sockets.

Exactly.

And that can't really be done at EITHER end, (browser or web server), but cries out for a middle-ground approach that can detect DDOS attack signatures and kill them off close to the source rather than forwarding them all to the target's ISP to handle.

The single ping flood is not the issue, and easily killed.

The request that appears once every two minutes from hundreds of thousands or millions of bots is very hard to distinguish from real traffic, other than the bots don't want the traffic either, a

Well, this depends: are they talking about their browser (in which case, yes, this would be nothing but a nuisance), or are they talking about some type of Apache-like server-side software under the name of Chrome? If the latter, then no, this is actually a pretty standard technique that should exist in servers these days.

He's right... originally there was no way to turn it off until web developers bitched, me included, about how it was slowing down development. The problem was, as a developer I may reload a page often, or make a tweak, reload, etc. Waiting for this to clear was a bitch, so they put in the command-line switch for us.

You'd be surprised when tweaking code or css how often you reload a page.

Actually, I think the point is that they want to help prevent XSS DDoS: go to a high-volume forum, put an <img src="http://target.example.com/asdf"/> in your signature, and wait until your friend's home server dies.

Not how you take down Amazon, but that was never the subject. I think this has a nice, if small, benefit. And no real downsides... I like it. :)

(I used to have a server like this and when somebody hotlinked an image on a busy for

Since dedicated DDoS programs like LOIC are readily available, nobody performs actual DDoS attacks with a browser. Hell, ping floods are more effective than a bunch of people pressing refresh too often.

That's because /. submitters usually link to their crappy little blogs instead of the original data sources, which are often running on proper infrastructure. Slashdot has about one tenth the traffic that sites like Digg or del.icio.us have. The "Slashdot Effect" went away years ago...

And this helps how? If a site is overloaded, the service is denied to me. If *my* browser starts to "back off", it exacerbates the problem by increasing the outage I experience. A site is placed on the net to serve users content, and if a user can't access it, then that person is by definition subject to denial of service. A browser constructed with the described mechanism has a defect built in by design.

"nobody performs actual DDoS attacks with a browser." Now, this might reduce the Slashdot Effect, but not a DDoS.

Exactly.

I seriously doubt Google designed this for what TFA says it does. TFA is too busy raking Google over the coals for not building in Do-Not-Track to even understand why this may be needed by legitimate sites who just happen to get slashdotted due to massive publicity or disasters.

DDoS campaigns have been launched by telling people "go to this web page and leave it open." The page is just a bunch of iframes reloading the target over and over.

Just because Anonymous can rally the troops with the LOIC doesn't mean that's how everyone else (or even anyone else) does it. Seriously, when was the last time you heard of the LOIC being used by a non-Anonymous group?

The web server is way too high up the stack, and having it do the work is how the DDoS wants to hamstring you anyway.

I was thinking that it should be distributed.

See, in order to block incoming traffic, you have to accept the connection at the lower layers so you can decode it to determine that it's from the offending IP address. DoS long ago devolved to just doing SYN floods, since it's impossible to stop a SYN because you don't look at its contents before it's tied up your hardware almost as much as it c

Distributed means from many sources. Attacks of this nature will not be affected by Chrome's mechanism; Chrome's feature will only prevent repeated requests from the same user. DoS attacks are blunted, not DDoS.

Many people run Chrome, right? It might not make much of a difference if a small percentage of a website's users are running Chrome but I wouldn't be surprised to see the other major browsers implement something similar.

I was thinking something similar. If Google could somehow convince Joe Sixpack that Firefox and IE are missing some valuable DDoS protection feature, then it would eventually be added to other browsers.

No. A ping request only requires the server you're attacking to send a small packet back. For a DoS attack, you want to make the victim send a lot of bytes back to you, so a small script that repeatedly asks for a whole page, especially images, is the better way to go.

No, you don't use ICMP echo requests (or most other forms of ICMP); it's too easy to filter upstream, since it can safely be ruled out of the normal flow of traffic.

While many ICMP packets are indeed useful, and blocking ICMP wholesale is a really boneheaded thing that some less-than-clueful people like to do on firewalls (seen often here on Slashdot), blocking ICMP echo requests/replies upstream during a DDoS will in general not screw up proper traffic too much.

If you want to do a proper DDoS, you have to make the traffic look legitimate, so it's indistinguishable from traffic the site actually wants and they can't easily block it.

If you just try to ping -f me, I'll just call my upstream, tell them what's going on, and ask them to drop it upstream to my address space until further notice.

UDP DNS queries are a good one to use, as they can be spoofed and are pretty much impossible to block to a legitimate DNS server. TCP-based connections like an HTTP request are more effective in terms of the amount of traffic generated, but are effectively unspoofable if you want to do more than a SYN flood. If you can't spoof them, you become traceable and can be blocked, since you're going to come from a specific address for each request, which can then be filtered, even if it's a DDoS. Building a table of IPs to blackhole doesn't take long in most cases and can be pretty effective, assuming your upstream firewalls/routers can handle the size of the blacklist. That may not be all that easy depending on the size and load of your upstream routers, but it's still far easier than dealing with a flood of legitimate-looking UDP packets.
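
Building that blackhole table is straightforward to sketch: count requests per source IP and flag the outliers. The names and threshold below are illustrative; real filtering would happen upstream in a router ACL, not in application code:

```python
from collections import Counter

def build_blacklist(source_ips, threshold):
    """Flag source IPs whose request count exceeds a threshold.

    `source_ips` is an iterable of source-IP strings, e.g. parsed
    from access logs over some time window. Sketch only: a real
    setup would rate-limit per window and age entries out.
    """
    counts = Counter(source_ips)
    return {ip for ip, n in counts.items() if n > threshold}
```

The output set would then be pushed to whatever device does the actual dropping.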

I haven't seen an effective ping flood since 1998-99 on anything but some little tiny sites that simply don't know what they are doing.

I personally hate this 'feature'. I don't understand what it defends against, because someone hitting refresh a few times in a browser is hardly a serious DoS attack. And it got in the way of me (and many others) the first time they rolled it out because the "DoS" it was defending against was me hitting my local test webserver which was returning a 500 because the page code was broken.

What the hell? When Anonymous fires the low-orbit ion cannon, it comes down hard on evildoers. Why the fuck is Google on the other side of the fence now? I thought their motto is "don't be evil"? Why isn't Google offering LOIC as a feature in Chrome?

It's hard to see this being much of an impact, even for stressed sites with a lot of Chrome users; people don't usually sit there mashing the refresh button when their page won't load. Most folk will actually implement their own "back-off" feature. Sure, there are outliers, but this is a game of big numbers and average statistics.

Where this can help is with automated page loading. Your saved session has twenty tabs with pages from a single site? That's all loaded at once, in parallel in the browsers

And it's not a feature, it's a bug. It's been in Chrome for a while now, suddenly popped up overnight, and made life more complicated for all developers. Do you have any idea how hard it is to test a webapp if you can only get an error message once?

It's a real piece of shit. I found a way to disable it, but it still pisses me off that Google suddenly decided to implement such a stupid feature overnight, without warning and without informing users of a way to disable it.

Someone else posted a link to the bug report on this. It looks like the feature was disabled at the end of January. Why are you still so angry about it? It appears they have taken all the previous complaints into account for this new release.