How do you properly implement defenses against brute forcing? Is it best to store how many times someone tried to log in and block them after X attempts? And how would "someone" be identified? With the session? An IP?

8 Answers

Account lockout is one option, where the account is locked either permanently or for a period of time after X unsuccessful logins. The problems here are the denial-of-service risk if an attacker deliberately guesses passwords to lock accounts, and the administrative overhead of resetting accounts (assuming an automated reset isn't available).

Another option is to introduce CAPTCHAs into the login process. This is intended to limit automated attacks and avoid the denial of service that account lockouts can cause. The problems with it are accessibility requirements and automated CAPTCHA cracking. From what I've seen, some CAPTCHAs are easier for software to crack than others.

A third option is to introduce a progressive delay into the login process, so for each wrong login you cause the process to become ever slower. This has a limited effect on real users but makes automated brute forcing much harder.
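A minimal sketch of that idea (the in-memory store, the exponential schedule, and the 30-second cap are illustrative assumptions, not from the answer):

```python
import time

# Hypothetical in-memory store: username -> consecutive failed attempts.
# A real deployment would use a database or shared cache.
failed_attempts = {}

def progressive_delay(username):
    """Sleep longer for each consecutive failure: 0s, 1s, 3s, 7s, ... capped at 30s."""
    failures = failed_attempts.get(username, 0)
    delay = min(2 ** failures - 1, 30)
    if delay:
        time.sleep(delay)

def record_login(username, success):
    """Update the failure counter after each login attempt."""
    if success:
        failed_attempts.pop(username, None)  # reset on success
    else:
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
```

Call `progressive_delay(username)` before checking the credentials, and `record_login` afterwards; a real user who mistypes once barely notices, while an automated guesser quickly hits the cap on every attempt.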

In terms of the second part of your question about identifying the "someone" doing the attack, it depends on what they're doing. If they're just attacking a single user, then the username is the identifier and actions are tied to that account. If they're brute forcing across a wide range of users, then the source IP (perhaps combined with the browser agent) is an option. It's not perfect (e.g. use of botnets, agent spoofing) but is probably the only information you'd have to go on.

Using a progressive delay provides a simple method for attackers to DoS your site by hogging all the webserver processes - and these methods presuppose you can consistently differentiate valid from invalid requests.
– symcbean, Dec 3 '10 at 13:21

Well, in terms of whether a login will be successful or not, it's determined by whether the password (or other credential) is valid. On the DoS, yep, a lot of protections against brute force carry some level of DoS risk, so to an extent it's a balance of risks that needs to be considered.
– Rоry McCune, Dec 3 '10 at 13:41

@symcbean Looking at it from a theoretical perspective, hogging processes is about an implementation or design error in the server. I think some existing implementations do not have that problem. That's not practically helpful of course.
– Volker Siegel, May 9 at 8:54

The answer to denying brute force is to deny a large number of attempts, where large is defined by your key space and risk acceptance.

Each situation is going to have different solutions.

Meat Space

Doors are reinforced against brute force damage.

Crashing through a door is mitigated by the chance of alerting the police.

Keys only have around five variables, but it takes resources to have one of each possible key (it's easier to attack the lock in other ways that aren't brute force).

Computers

Automating brute-force attacks means you need to rate-limit in some way. With a rate limit, you know the absolute upper bound of attempts over a certain amount of time.

Monitoring: If you can limit brute force attempts to a certain rate (500 / day perhaps), then monitoring and alerting can let a person know to take a closer look at what is going on. How many attempts will it take to reach a certain level of unacceptable risk?
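A sketch of such a rate limit with a monitoring hook (the 500/day figure is the answer's example; the per-source counter and the `alert` placeholder are assumptions):

```python
from collections import defaultdict

DAILY_LIMIT = 500  # illustrative budget from the answer

# Hypothetical per-source counter, reset by a daily job in a real system.
attempts_today = defaultdict(int)

def alert(source):
    """Placeholder: page an operator or write to a monitoring system."""
    print(f"possible brute force from {source}")

def allow_attempt(source):
    """Deny further attempts (and alert) once a source exceeds its daily budget."""
    attempts_today[source] += 1
    if attempts_today[source] > DAILY_LIMIT:
        alert(source)
        return False
    return True
```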

Large Key space: Having complex password requirements can greatly increase the number of attempts required for a certain probability of finding the password.
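To put rough numbers on this, the key space grows exponentially with alphabet size and length. A back-of-the-envelope calculation (the alphabet sizes chosen are illustrative):

```python
import math

def keyspace(alphabet_size, length):
    """Number of possible passwords for a given alphabet and length."""
    return alphabet_size ** length

weak = keyspace(26, 8)     # 8 lowercase letters: ~2.1e11 (~37.6 bits)
strong = keyspace(94, 10)  # 10 printable-ASCII chars: ~5.4e19 (~65.5 bits)

# Expected guesses to find a password is about half the key space;
# at a rate-limited 500 attempts/day, the strong key space is out of reach.
days_expected = strong / 2 / 500
```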

Two-factor authentication: brute-forcing the password isn't enough in this case.

All of the solutions also have trade-offs like everything else in life.

Rate limiting: can DoS legitimate users; not effective if the attack is breadth-first - one attempt against each account before trying a second.

Rate limiting by IP: Not effective if it is a distributed attack

Password complexity: usually a usability trade-off, meaning users will write down their passwords - which can decrease security by more than the brute-force threat it mitigates.

+1 for a good answer - although 2-factor authentication need not be expensive - there's an outline on my blog (symcbean.blogspot.com ) of how to implement a simple paper based system
– symcbean, Dec 6 '10 at 11:54

@symcbean I didn't see the 2-factor article on your blog. I mentioned it as a perception issue, because companies have a hard time calculating the monetary value of the risks they are taking.
– rox0r, Dec 6 '10 at 17:12

I assume you mean brute-forcing a login/password in a web application.

No one brute-forces manually. So, if there is doubt that it is a human trying to authenticate, show a CAPTCHA, say, from the third failed login attempt. You can also put a timeout between login attempts - but this will not work for distributed attacks from different IPs. Next, there are two types of brute-forcing: blind, just trying random login/password combinations, and an attack against a specific user. For the latter, it is effective to implement a feature like blocking the account for some time and notifying the user by e-mail. Blind brute-force mitigation can be harder to implement, especially if the site is popular. In this case you can track user information like IP, country, browser, etc., and by examining combinations of these you can make assumptions about whether this is a valid user. As an addendum, you can play with long-lived cookies: once the user successfully logs in, save a cookie, and later check for it.
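A sketch of the CAPTCHA trigger described above (the threshold matches the answer's "third failed attempt"; the store is a hypothetical stand-in for a database):

```python
CAPTCHA_AFTER = 3  # show a CAPTCHA from the third failed attempt

failures = {}  # hypothetical store: username -> consecutive failed attempts

def record_failure(username):
    failures[username] = failures.get(username, 0) + 1

def record_success(username):
    failures.pop(username, None)

def needs_captcha(username):
    """True once the account has reached the failure threshold."""
    return failures.get(username, 0) >= CAPTCHA_AFTER
```

The login handler would check `needs_captcha` before validating credentials and reject the attempt outright if the CAPTCHA answer is missing or wrong.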

Besides application-level solutions, it is possible to implement server-level ones like Apache's mod_evasive, or even to write your own web-server module specifically for this purpose.

Sure, there are many other ways to harden brute-force mitigation, but they often turn out to be noisy, rather useless, and nasty. Try to stick with KISS.

You can't do it by IP address - many users may appear to have the same client address (and this is going to become even more frequent with the continuing problem of IPv4 address exhaustion).

One user may appear to connect from multiple client addresses - e.g. using AOL's load-balanced proxies or, less legitimately, via Tor.

Anything else is data supplied by the client - so they can potentially modify any cookies you set, the user-agent string etc.

The only practical approach is to apply heuristic scoring to each request to try to match it against previous attempts - but it's still going to be impossible to differentiate between sophisticated attacks and legitimate users.

Set a score threshold at which you consider the client to be trying to brute-force your site, say -20.

Requiring a current valid session referenced by a session cookie (which is not set on the page prompting for a username/password) is a start - if the cookie is not presented with authentication details, then redirect to a separate page which drops the cookie and initializes the session with a score of -10, and adds a score of (say) -2 to the client IP address. You could use a dynamic scoring mechanism when you start seeing multiple valid users from the same address. Similarly you could maintain dynamic scoring by user-agent and by username.

When cookie+auth are presented for a non-existent user, add a score of -5 to the session, -1 to the client address, -5 to the username.

When cookie+auth are presented with an invalid password, add a score of -3 to the session, -1 to the client address, -4 to the username.

When cookie+auth+valid password are presented, add +4 to the score for the client address, +3 to the username, +2 to the user-agent.

NB: you also need to set a decay to allow any score held against a client address/user-agent/username to recover, say +2/hour.

You need to think about what happens when the score crosses the threshold. Have you provided a mechanism for anyone to block access to your site?

If you've blocked access due to a high score for a particular username, send an email with a reset URL to the registered user for that account so they can reset the score held for their username and reduce the score held for their user-agent/address.
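The scheme above can be sketched as a small decaying score table (the deltas, threshold, and +2/hour decay are the answer's numbers; the in-memory storage and the recovery cap at zero are assumptions):

```python
import time

THRESHOLD = -20     # block at or below this score
DECAY_PER_HOUR = 2  # scores recover toward zero over time

class ScoreBoard:
    """Decaying suspicion scores keyed by session, client IP, username, or agent."""

    def __init__(self):
        self.scores = {}  # key -> (score, last_update_time)

    def add(self, key, delta, now=None):
        now = time.time() if now is None else now
        score, last = self.scores.get(key, (0.0, now))
        # Recover at +2/hour, capped at zero, then apply the new delta.
        score = min(0.0, score + DECAY_PER_HOUR * (now - last) / 3600)
        self.scores[key] = (score + delta, now)

    def blocked(self, key):
        score, _ = self.scores.get(key, (0.0, 0.0))
        return score <= THRESHOLD

def bad_password(board, session, ip, username, now=None):
    """Cookie+auth presented with an invalid password: -3 / -1 / -4."""
    board.add(session, -3, now)
    board.add(ip, -1, now)
    board.add(username, -4, now)
```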

Also, none of these answers takes into account the brute-forcing method where an attacker uses one password and tries it against all accounts - a reversal of the usual process, and one which isn't generally protected against.

The control would of course be to have a process monitoring for multiple attempts against different accounts with the same password, but again this is very tricky to block even if they come from a single address. Do you block that IP? It could be used by valid users as well. Or do you just drop logins with that password - with the risk of locking out a valid user who has that password?

And of course if an attacker runs this sort of thing from a distributed network, or over a long timeframe, how do you correlate the incidents into an attack profile?

Each time a login request fails, log this to the database. Make sure you write the username, IP address, time, etc.

If numerous failed requests are received for a specific username, mark that user in the database as locked for a short time. Also encourage the user to use the "forgot password" feature, if this is in the same session.

How many failed requests? It depends. In short, whatever makes sense for your site, the exact number is not critical.

How long should the user be locked for? Short intervals. Mathematically it doesn't really matter, as long as you have a strong password policy. Realistically, start with a very short interval of a few minutes, then if it continues, make it incrementally longer. E.g. after 5 bad tries, lock for 5 minutes; after another 3 bad tries, lock for 15; after another 2, lock for 30; etc.
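The escalation above can be sketched as a lookup over cumulative failures (the schedule mirrors the answer's example numbers; the table representation is an assumption):

```python
# (cumulative failed attempts, lock duration in minutes), per the example above:
# 5 failures -> 5 min, another 3 (8 total) -> 15 min, another 2 (10 total) -> 30 min.
SCHEDULE = [(5, 5), (8, 15), (10, 30)]

def lock_minutes(total_failures):
    """Return the current lock duration for this many cumulative failures."""
    minutes = 0
    for threshold, duration in SCHEDULE:
        if total_failures >= threshold:
            minutes = duration
    return minutes
```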

Do not lock user accounts permanently. This leads to account DoS, and can also cost your support personnel a lot of time and money.

Despite the previous points, if your site is very, very sensitive, e.g. a banking app, you might want to consider locking permanently until further notice, e.g. requiring the customer to come into their branch.

Locks should be by username, on the database. Not by sessionId, and not in the webserver session.

If you receive many failed requests with different usernames, but from the same IP address - implement incremental locking like above, but with wider grace and shorter intervals.

Provide a "forgot username" feature, in addition to the forgot password.

Administrator accounts should have shorter grace, longer intervals, and never permanently lock them out.

In any event, when locking a user or IP, send an alert to administrators. Not for repeated locking, though - you don't want to flood the admin's inbox. But if it does continue, then elevate the alert level after a few times.

Don't use CAPTCHA - the minimal added difficulty is trivial, in relation to the value of accessing the user's password. (There are many ways around it, CAPTCHA is fundamentally broken, regardless of implementation).

Of course, as @rox0r stated, multi-factor authentication might be appropriate for you.

Another alternative is what's known as "adaptive authentication" - if the user fails the login, ask for additional information (that was pre-registered). Depending on additional risk factors (e.g. location, time patterns different from usual, etc.), escalate the information required to successfully authenticate. RSA's Adaptive Authentication product does this, for example.

If it is a business website, use certificate-plus-password authentication. Without a valid certificate, attackers never get to the point of trying to guess the password. You could also use something like RSA SecurID in place of the certificate.