Q&A on cross-site request forgery and session attacks. CSRF is one of the attacks that XSS enables, and arguably the attack of the future. Covers session fixation, hijacking, lockout, replay, session riding, etc.

Say you plant an XSS payload on a popular site (say Digg or /.) which does something to tinysite.com. I imagine the best request to make would be to a dynamic page (.php) which requires a lot of processing power, such as a registration or search page.

Well, the request method you use is largely irrelevant. DDoS works in two ways (often really a combination of both):

Webservers have two resources: processing resources (CPU/RAM) and bandwidth resources (speed/max transfer). If you max either of those out, the site should go offline, either temporarily or until rebooted. If it's a small site, it may have a low monthly transfer cap that you could use up, leaving it offline for the rest of the month.

For big sites, though, say a myspace.com, the weakest link is often SQL database calls and CPU-intensive PHP/Perl/ASP scripts. So browse the site and try to find the pages that probably return the most data from an SQL query, or the scripts that take the longest to finish processing.

So get the clients to view as many of those as they can: two at a time, adding two more as the previous ones finish.

Images and static pages get cached by the client, so they aren't as effective, since they only get fetched once. But if there are some large ones (250 KB+), call each of them once too; they exhaust bandwidth, though not much processing.
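To put the monthly-transfer point above in rough numbers (the 10 GB cap and the 250 KB asset size are assumed figures for illustration, not from any real host):

```python
# Back-of-envelope: how many 250 KB fetches exhaust a 10 GB monthly transfer cap?
# Both figures are assumptions chosen only to illustrate the point.
cap_bytes = 10 * 10**9     # assumed 10 GB monthly transfer cap (decimal units)
asset_bytes = 250 * 10**3  # assumed 250 KB static asset

requests_to_exhaust = cap_bytes // asset_bytes
print(requests_to_exhaust)  # 40000 fetches would use the whole month's quota
```

Forty thousand fetches is small for a popular site's traffic, which is why a low cap alone can take a small host offline for the rest of the billing period.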

The best choice is often the search function, with random queries that still return a lot of results. Throw them in two <iframe security=restricted onload="newSearchRequest(this)"> elements and fire off a new search each time one finishes.

The most effective ways I've found to consume large amounts of resources on a machine are to do something insanely CPU-intensive or bandwidth-intensive. Bandwidth only works if your bandwidth is greater than theirs; that's why DDoS is the most common method for GET-request floods. For a CPU hog, you want to maximize the number of requests for the least possible bandwidth cost. Using a GET request but closing the connection as soon as possible (before letting the server return data to you) is ideal.

That leaves an open process waiting to send more data to you. It requires some custom programming, but it's pretty effective. Requesting a very resource-intensive script on top of that adds to the effect. As I wrote in DoSing Search Engines, crafting a very complex AND query often does the trick, since that requires multiple selects against the dataset.

Ultimately, they can still block your IP address, so you'll probably end up going a hybrid route: a DDoS GET flood and CPU exhaustion at the same time. It's pretty easy to defend against this stuff, though, if you know what you're doing. These attacks have been around (though not very well disclosed) for nearly a decade; I remember seeing them on my servers in 1997-98.

Yeah, the basic principle is to send UDP-based DNS requests to the nameserver using a spoofed source IP address. This in turn causes the nameserver to return (much larger) response packets to the spoofed IP address. The beauty of this type of attack is that you get more bang for your buck, and most ISPs etc. won't blacklist nameserver IPs.

Quote
DDoS attacks using recursive name servers can create an amplification effect. The amplification effect in a recursive DNS attack is based on the fact that small queries can generate larger UDP packets in response. In the initial DNS specification, UDP packets were limited to 512 bytes. At most, a 60-byte query could generate a 512-byte response, for an amplification factor of 8.5. The current DNS extension, EDNS0, allows for much larger responses, resulting in amplification factors of over 70.
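The amplification factors in the quote are just the ratio of response size to query size. A quick check of the arithmetic (the 512- and 60-byte figures come from the quote; the 4096-byte EDNS0 response is an assumed typical buffer size, which is why it lands a bit under the quoted 70+):

```python
# Amplification factor = bytes the nameserver sends to the victim
# divided by bytes the attacker spends on the spoofed query.
def amplification(response_bytes, query_bytes):
    return response_bytes / query_bytes

# Classic 512-byte UDP response limit vs. a 60-byte query, per the quote:
print(round(amplification(512, 60), 1))   # 8.5

# With an assumed 4 KB EDNS0 response, the factor approaches the quoted 70+:
print(round(amplification(4096, 60), 1))  # 68.3
```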

Most web or application servers I've seen are configured with a TCP timeout of 5 minutes (the default). So simply write a script that opens connections and does nothing. After the remote site closes the connection (idle timeout), open it again and loop. There are roughly 65,000 ports ... I guess you know what I mean ...
Better if you have some real zombies handy ;-)

Quote
If I wanted to take down a website, what JS code would be the most effective in order to do that?

POST or GET, mass iframes or maybe new Image()'s ?

And forgive me if I don't understand it properly, but:
DDoS is all about bandwidth; if you have less bandwidth than the given server (highly likely!), nothing happens. That's why zombies are used, to attack in parallel. And JS is still client-side. You could write a script to send packets, but that also requires more than one PC. 'Huge image/file sourcing' is a real bugger though; it has a sort of Slashdot effect.

Did you ever think about DDoS attacks caused by, e.g., some kind of "social" site or community which has XSS issues? Build a small XSS worm which auto-spreads and have the affected users abuse SQL injection flaws (huge queries, ' OR '1'='1).

A little self-defeating unless the SQL server is not on the same machine as the web host (likely for large sites, I suppose). But I still think it would be a tad self-defeating regardless, especially if the worm was persistent: it would probably be stored in the very same SQL server, and if the SQL server were under stress, it couldn't serve the JS as fast.

Kyran Wrote:
-------------------------------------------------------
> A little self-defeating unless the SQL server is
> not on the same machine as the web host (likely
> for large sites, I suppose). But I still think it
> would be a tad self-defeating regardless,
> especially if the worm was persistent: it would
> probably be stored in the very same SQL server,
> and if the SQL server were under stress, it
> couldn't serve the JS as fast.