Here is a slightly different take on DDoS attacks. Rather than a server with dynamic content being attacked, I was curious how to deal with attacks on servers with STATIC CONTENT. This means CPU tends not to be an issue; it's either bandwidth or connection problems.

How would I mitigate a DDoS attack knowing nothing about the attacker (for example country, IP address or anything else)? I was wondering if shortening the timeout and increasing the number of connections is an acceptable solution, or is that completely useless? I would also limit the number of connections per IP address.

Would the above help or be pointless? Keeping in mind that everything is static, checking for multiple requests for the same page (HTML, CSS, JS, etc.) could be a sign of an attack.
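As a rough sketch of what that kind of check could look like against an access log (the log path, log format and threshold below are just placeholders, not recommendations):

```python
# Rough sketch: count how often each client IP re-requests the same static
# file in a combined-format access log. Path and threshold are placeholders.
import re
from collections import Counter

LOG = "/var/log/nginx/access.log"   # placeholder path
THRESHOLD = 100                     # placeholder "suspicious" repeat count

# client IP ... [timestamp] "GET /path ..."
line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')
hits = Counter()

with open(LOG) as f:
    for line in f:
        m = line_re.match(line)
        if m:
            hits[(m.group(1), m.group(2))] += 1   # (client IP, URL path)

for (ip, path), count in hits.most_common(20):
    if count >= THRESHOLD:
        print(f"{ip} requested {path} {count} times")
```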



Why would you not know anything about the attacker? Your logs would surely be full of information. In a true DDoS attack everything you have facing the attack would be swamped, and your only recourse is to block them at your perimeter and/or get your upstream to do it.
– Iain, Jun 3 '12 at 6:13

3 Answers

DDoS attacks rely in large part on sheer volume of bandwidth. If your underlying infrastructure is being swamped by massive numbers of false connections, then it doesn't really matter what sorts of limits are placed on connections to your servers. Limiting by IP address doesn't address the problem of many hundreds of thousands of sources (i.e. botnets), much less the possibility of spoofed sources.

Your best bets are generally going to be solid detection mechanisms and working with your upstream carrier(s) to limit the damage (and traffic) before it ever reaches you.

Denial of service attacks on static web servers (they don't even have to be distributed) hammer the server in one of three areas:

1. Overwhelming the server through number of connections.

2. Overwhelming the bandwidth available.

3. Exploiting misconfiguration or actual vulnerabilities to force a DoS with fewer resources.

The first point is the easiest of the three to handle. You profile your server and set the max-connections count appropriately for your hardware, so that you don't end up DoSing yourself if your site suddenly gets popular. Setting per-IP connection limits helps defend against not-very-distributed DoS attacks, but risks blocking legitimate users behind NAT gateways. But if a large enough connection flood comes, you're still offline.
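Real servers already implement per-IP limits for you (e.g. nginx's limit_conn, or Apache modules), so the sketch below is only meant to show the bookkeeping involved; the cap of 20 concurrent connections per address is an arbitrary assumption:

```python
# Sketch of per-IP connection limiting in front of a static responder.
# The cap is arbitrary; profile your own hardware to pick real numbers.
import asyncio
from collections import defaultdict

MAX_CONN_PER_IP = 20               # assumed cap, not a recommendation
active = defaultdict(int)          # client IP -> currently open connections

async def handle(reader, writer):
    ip = writer.get_extra_info("peername")[0]
    if active[ip] >= MAX_CONN_PER_IP:
        writer.close()             # over the cap: drop without serving
        await writer.wait_closed()
        return
    active[ip] += 1
    try:
        await reader.read(4096)    # read (and ignore) the request
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        await writer.drain()
    finally:
        active[ip] -= 1
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```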

The second point is impossible to handle before the actual attack commences.

The third point requires paying attention to security vulnerabilities in your webserver and updating appropriately, and making sure you haven't enabled any dynamic-serving features by mistake that might spike CPU even though they do nothing useful. Maybe you haven't banned POST in some areas, and it still costs CPU for your web server to spit out the 403 error.
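A quick way to check is to probe the server with the methods you expect to be blocked and see what comes back; the host and path here are placeholders:

```python
# Probe which HTTP methods a static site answers, and with what status.
# Host and path are placeholders; a static server should normally reject
# anything other than GET/HEAD.
import http.client

HOST = "example.com"      # placeholder, point it at your own server
PATH = "/index.html"

for method in ("GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS"):
    conn = http.client.HTTPConnection(HOST, timeout=5)
    conn.request(method, PATH)
    resp = conn.getresponse()
    print(f"{method:8s} -> {resp.status} {resp.reason}")
    conn.close()
```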

But the fact remains that there isn't much you can do to harden a static webserver against a DoS attack. They're pretty well hardened out of the box as it is.

Once an attack starts, though, then the response options open up. But that's for another question.

The most elaborate attacks will not show up in the Web server's log files (incomplete connection attempts, incomplete requests, corrupted requests that try to take advantage of the HTTP protocol, etc.).

Acting at the router or firewall (in front of the Web server) sometimes helps.

That's the main point to keep in mind when your site is receiving more attention than the usual coordinated ApacheBench shots from tens of different locations (usually hosted servers; check the standard ports: 80, 21, etc.).

From there, creativity comes into play, and some Web servers make it easier than others to trace what garbage they receive, when, how and from whom.

Some servers make it possible to log whole requests (in hexdump form if they contain binary data) BEFORE they get parsed (that is, before any harm at the application level can take place).

When this feature is coupled with the ability to write handlers that let you change the behavior (filter, test, redirect, log) then life becomes much easier.
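A minimal sketch of that idea, not tied to any particular server: dump the raw bytes of each request before anything parses them, and run a simple filter handler first (the GET/HEAD-only filter is just an illustrative assumption):

```python
# Sketch only: log raw request bytes (hex-dumped, so binary garbage survives)
# before any HTTP parsing, and let a handler filter what gets through.
import binascii
import socket
import time

def log_raw(data: bytes, ip: str):
    print(time.strftime("%Y-%m-%d %H:%M:%S"), ip, binascii.hexlify(data).decode())

def looks_like_http(data: bytes) -> bool:
    # Illustrative filter: only plain GET/HEAD requests get through.
    return data.startswith((b"GET ", b"HEAD "))

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(64)

while True:
    conn, (ip, _port) = srv.accept()
    data = conn.recv(8192)            # raw, unparsed request bytes
    log_raw(data, ip)
    if not looks_like_http(data):
        conn.close()                  # filtered before any parsing happens
        continue
    # ...hand off to the real parser / static file serving here...
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()
```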

When both functions are supported, the question becomes how easy it is to invoke them, and how much traffic the server can take while doing the extra work to protect against and log all the attacks.

And the number of Web servers that meet these requirements is a very small club.

Some attacks are more clever than others. You will learn a lot by looking at how 'creative' people can be in this area.