On the surface, bcrypt, an 11-year-old password hashing algorithm designed by Niels Provos and David Mazières and based on the expensive key schedule of the Blowfish block cipher, seems almost too good to be true. It is not vulnerable to rainbow tables (each hash is salted, so precomputed tables are useless) and it is highly resistant to brute force attacks, since every guess is deliberately expensive to compute.

However, 11 years later, bcrypt is still not widely adopted: many systems continue to store password hashes as salted SHA-2.

What is the NIST recommendation with regards to bcrypt (and password hashing in general)?

What do prominent security experts (such as Arjen Lenstra and so on) say about using bcrypt for password hashing?

4 Answers

Bcrypt has the best kind of repute that can be achieved for a cryptographic algorithm: it has been around for quite some time, used quite widely, "attracted attention", and yet remains unbroken to date.

Why bcrypt is somewhat better than PBKDF2

If you look at the situation in detail, you can actually see some points where bcrypt is better than, say, PBKDF2. Bcrypt is a password hashing function which aims at being slow. To be precise, we want the password hashing function to be as slow as possible for the attacker while not being intolerably slow for the honest systems. Since honest systems tend to use off-the-shelf generic hardware (i.e. "a PC") which is also available to the attacker, the best that we can hope for is to make password hashing N times slower for both the attacker and for us. We then adjust N so as not to exceed our resources (foremost of which is the user's patience, which is really limited).
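To make the "N times slower" idea concrete, here is a deliberately naive sketch (illustration only; use bcrypt or PBKDF2 in practice, not this) where the cost parameter `n` is simply the number of hash iterations:

```python
import hashlib

def slow_hash(password: bytes, salt: bytes, n: int) -> bytes:
    """Iterate SHA-256 n times, so every password guess costs n hash
    computations. Doubling n doubles both our cost and the attacker's."""
    h = salt + password
    for _ in range(n):
        h = hashlib.sha256(h).digest()
    return h

digest = slow_hash(b"hunter2", b"per-user-random-salt", 100_000)
```

Tuning N then becomes a question of how long one call may take on your production hardware.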

What we want to avoid is that an attacker might use some non-PC hardware which would allow him to suffer less than us from the extra work implied by bcrypt or PBKDF2. In particular, an industrious attacker may want to use a GPU or an FPGA. SHA-256, for instance, can be implemented very efficiently on a GPU, since it uses only 32-bit logic and arithmetic operations that GPUs are very good at. Hence, an attacker with $500 worth of GPU will be able to "try" many more passwords per hour than he could with $500 worth of PC (the ratio depends on the type of GPU, but a 10x or 20x ratio would be typical).

Bcrypt happens to heavily rely on accesses to a table which is constantly altered throughout the algorithm execution. This is very fast on a PC, much less so on a GPU, where memory is shared and all cores compete for control of the internal memory bus. Thus, the boost that an attacker can get from using GPU is quite reduced, compared to what the attacker gets with PBKDF2 or similar designs.

The designers of bcrypt were quite aware of the issue, which is why they designed bcrypt out of the block cipher Blowfish and not a SHA-* function. They note in their article the following:

That means one should make any password function as efficient as possible for the setting in which it will operate. The designers of crypt failed to do this. They based crypt on DES, a particularly inefficient algorithm to implement in software because of many bit transpositions. They discounted hardware attacks, in part because crypt cannot be calculated with stock DES hardware. Unfortunately, Biham later discovered a software technique known as bitslicing that eliminates the cost of bit transpositions in computing many simultaneous DES encryptions. While bitslicing won't help anyone log in faster, it offers a staggering speedup to brute force password searches.

which shows that the hardware and the way it can be used is important. Even with the same PC as the honest system, an attacker can use bitslicing to try several passwords in parallel and get a boost out of it, because the attacker has several passwords to try, while the honest system has only one at a time.

Why bcrypt is not optimally secure

The bcrypt authors were working in 1999. At that time, the threat was custom ASICs with very low gate counts. Times have changed; now, the sophisticated attacker will use big FPGAs, and the newer models (e.g. the Virtex family from Xilinx) have embedded RAM blocks, which allow them to implement Blowfish and bcrypt very efficiently. Bcrypt needs only 4 kB of fast RAM. While bcrypt does a decent job of making life difficult for a GPU-enhanced attacker, it does little against an FPGA-wielding attacker.

This prompted Colin Percival to invent scrypt in 2009; it is a bcrypt-like function which requires much more RAM. It is still a new design (only two years old) and nowhere nearly as widespread as bcrypt; I deem it too new to be recommended on a general basis. But its career should be followed.
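For what it's worth, recent Python ships scrypt in the standard library (`hashlib.scrypt`, available since Python 3.6 when built against OpenSSL 1.1+). The parameters below are illustrative, but they show how memory cost is an explicit knob:

```python
import hashlib, os

salt = os.urandom(16)
# n (CPU/memory cost), r (block size), p (parallelism): with n=2**14 and r=8,
# scrypt needs roughly 128 * n * r bytes = 16 MiB of RAM per hash, far beyond
# bcrypt's 4 kB, which is what frustrates FPGA- and GPU-based attackers.
key = hashlib.scrypt(b"correct horse", salt=salt, n=2**14, r=8, p=1, dklen=32)
```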

What NIST recommends

NIST has issued Special Publication SP 800-132 on the subject of storing hashed passwords. Basically they recommend PBKDF2. This does not mean that they deem bcrypt insecure; they say nothing at all about bcrypt. It just means that NIST deems PBKDF2 "secure enough" (and it certainly is much better than a simple hash!). Also, NIST is an administrative organization, so they are bound to just love anything which builds on already "Approved" algorithms like SHA-256. On the other hand, bcrypt comes from Blowfish which has never received any kind of NIST blessing (or curse).

While I recommend bcrypt, I still follow NIST in that if you implement PBKDF2 and use it properly (with a "high" iteration count), then it is quite probable that password storage is no longer the worst of your security issues.
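A minimal PBKDF2 sketch using Python's standard library (the iteration count and salt size here are illustrative, not a NIST prescription):

```python
import hashlib, hmac, os

def hash_password(password: str, iterations: int = 100_000):
    """Derive a PBKDF2-HMAC-SHA256 hash with a fresh random salt."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify_password(password: str, salt: bytes, iterations: int,
                    expected: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)  # constant-time comparison

salt, it, dk = hash_password("s3cret")
```

Storing the salt and iteration count alongside the hash lets you raise the count later without breaking old records.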

TL;DR: bcrypt is better than PBKDF2 because PBKDF2 can be better accelerated with GPUs. As such, PBKDF2 is easier to brute force offline with consumer hardware.
– Mikko Rantalainen, Jul 5 '12 at 11:56


@ExplosionPills My rule of thumb for a "high" iteration count is: as large as possible without causing too many users to complain about the delay. I generally tune bcrypt / pbkdf2 to take around 250-500ms (1-2s for admin accounts) on my production server while it's under light load. This policy seems to track w/ OpenBSD's recommended bcrypt settings, and any other context-dependent recommendations I've found.
– Eli Collins, Aug 6 '12 at 17:16


@EliCollins: I'm rather curious here - could using a high iteration count introduce the possibility of DDOS attacks against the server? For example, if I were to POST hundreds of login attempts to the server simultaneously, the server would immediately become overwhelmed with attempting to hash all of the passwords. Are there ways to avoid that potential problem?
– Nathan Osman, Apr 7 '13 at 3:35


@GeorgeEdison to avoid DDOS check the username before checking the password. Then if there have been multiple failed login attempts in a row for that user (say, 10 of them) reject the password without even checking it unless the account has "cooled off" for a few minutes. This way an account can be DDOS'd but not the whole server, and it makes even horribly weak passwords impossible to brute force without an offline attack.
– Abhi Beckert, Jul 22 '13 at 18:56


@TerisRiel I don't mind if a user's account goes offline due to a DOS attack. In the real world it doesn't actually happen, and if it does... well that's better than the account being hacked.
– Abhi Beckert, Sep 30 '13 at 15:12

bcrypt has a significant advantage over a simply salted SHA-256 hash: bcrypt uses a modified key setup algorithm which is computationally quite expensive. This is called key stretching (also known as key strengthening), and it makes a password more secure against brute force attacks, since the attacker now needs a lot more time to test each candidate password.

Personally, I've been looking at PBKDF2 lately, which is a key derivation function that applies a pseudo-random function (e.g. cryptographic hash) to the input password along with a salt, and then derives a key by repeating the process as many times as specified. Although it's a key derivation function, it uses the principle of key strengthening at its core, which is one of many things you should strive for when deciding on how to securely generate a hash of a password.
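To illustrate "repeating the process as many times as specified", here is a textbook single-block PBKDF2-HMAC-SHA256, checked against Python's stdlib implementation (a pedagogical sketch, not production code):

```python
import hashlib, hmac

def pbkdf2_one_block(password: bytes, salt: bytes, iterations: int) -> bytes:
    """First output block of PBKDF2-HMAC-SHA256: the PRF is chained, and
    all intermediate results are XORed together `iterations` times."""
    u = hmac.new(password, salt + (1).to_bytes(4, "big"), hashlib.sha256).digest()
    result = u
    for _ in range(iterations - 1):
        u = hmac.new(password, u, hashlib.sha256).digest()
        result = bytes(a ^ b for a, b in zip(result, u))
    return result

# Matches the stdlib implementation:
assert pbkdf2_one_block(b"pw", b"salt", 1000) == hashlib.pbkdf2_hmac(
    "sha256", b"pw", b"salt", 1000, dklen=32)
```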

+1 for mentioning HMAC, which I completely forgot to mention as the more secure alternative to a simply salted SHA-256 hash. Length extension attacks are well known; the Flickr API was once compromised because it used to sign API calls using MD5(secret + argument list) (see "Flickr's API Signature Forgery Vulnerability" [PDF] for further information)
– Giu, Sep 18 '10 at 10:50

@curiousguy (Better late than never...) Storing a password hashed with a salt can be done by calculating an HMAC, using the password as the HMAC key and the salt as the HMAC message. This is more secure than naively calculating H(salt || password), though an actual password hashing function like bcrypt is preferable.
– Søren Løvborg, Jun 12 '13 at 12:51


@curiousguy HMAC is the PRF most commonly used in PBKDF2. So don't use bare HMAC either. Use PBKDF2 (with HMAC-SHA256 or HMAC-SHA512), bcrypt, or scrypt. That's it. Don't invent or use anything else. Custom schemes are bound to be wrong. These three are well-vetted and easy to use. Realize that PBKDF2 is the most vulnerable to hardware accelerated dictionary attacks and scrypt is the least vulnerable.
– Teris Riel, Sep 30 '13 at 13:50

NIST is a United States government organization and thus follows FIPS (United States) standards, which do not include Blowfish but do include SHA-256 and SHA-512 (and even SHA-1 for non-digital-signature applications; NIST SP 800-131A delineates how long each older algorithm can be used for which purpose).

For any business required to comply with U.S. NIST or FIPS standards, bcrypt is not a valid option. Check every nation's laws and regulations separately if you do business there, of course.

PBKDF2 is fine; the real trick is to get Tesla (GPU based) cards in the honest servers so the iterations can be made high enough to compete with GPU based crackers. For PBKDF2 in 2012, OWASP recommends at least 64,000 iterations on their Password Storage Cheat Sheet, doubled every 2 years.
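That recommendation is easy to turn into a rule-of-thumb formula (a sketch of the 2012 guidance as stated above; OWASP's current advice may differ):

```python
def owasp_iterations(year, base=64_000, base_year=2012):
    """64,000 PBKDF2 iterations as of 2012, doubled every 2 years."""
    return base * 2 ** max(0, (year - base_year) // 2)
```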

It seems to me that using scrypt or bcrypt (changing the software) is easier than adding expensive (in terms of up front costs and energy costs) hardware to millions of servers. At a higher level, any key stretching function is just an attempt to add more entropy to the password. If (and only if) the password is strong enough to begin with (e.g. a 128-bit random number encoded as 10 diceware words), a single iteration of PBKDF2 is adequate protection.
– Teris Riel, Sep 30 '13 at 14:06

The purpose of *crypt functions or PBKDF2 is to store the password in such a way that it's very hard to recover the original password. They do not make the password itself any stronger. A single iteration of PBKDF2 makes it easy to brute force guesses at the password; if each guess takes a second of CPU time, brute forcing becomes unrealistic.
– mcfedr, Jun 3 '14 at 14:11