Cisco switches to weaker hashing scheme, passwords cracked wide open

Crypto technique requires little time and computing resources to crack.

Password cracking experts have reversed a secret cryptographic formula recently added to Cisco devices. Ironically, the new Type 4 hashing algorithm leaves users considerably more susceptible to password cracking than an older alternative, even though the new routine was intended to enhance the protections already in place.

It turns out that Cisco's new method for converting passwords into one-way hashes uses a single iteration of the SHA256 function with no cryptographic salt. The revelation came as a shock to many security experts because the technique requires so little time and computing resources. As a result, relatively inexpensive computers used by crackers can try a dizzying number of guesses when attempting to recover the corresponding plaintext password. For instance, a system outfitted with two AMD Radeon 6990 graphics cards running a soon-to-be-released version of the Hashcat password cracking program can cycle through more than 2.8 billion candidate passwords each second.
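A minimal Python sketch makes the problem concrete. This models only the scheme as described (one unsalted SHA256 pass); the function name is ours, and real Type 4 hashes use a base64-style encoding rather than hex:

```python
import hashlib

def type4_style_hash(password: str) -> str:
    # One unsalted SHA256 pass over the password -- nothing else.
    return hashlib.sha256(password.encode()).hexdigest()

# Identical passwords always produce identical hashes, and every guess
# costs exactly one cheap hash computation, which is why GPU rigs can
# test billions of candidates per second against an entire hash dump.
assert type4_style_hash("secret") == type4_style_hash("secret")
```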

By contrast, the type 5 algorithm the new scheme was intended to replace used 1,000 iterations of the MD5 hash function. The large number of repetitions forces cracking programs to work more slowly and makes the process more costly to attackers. Even more important, the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once.
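The difference can be sketched in Python. This is a simplified illustration of the two defenses (a per-password salt plus many iterations), not the exact md5crypt construction the Type 5 scheme uses:

```python
import hashlib
import os

def slow_salted_hash(password: str, salt: bytes = None, iterations: int = 1000):
    # Simplified sketch: real md5crypt interleaves salt and password in a
    # more convoluted loop, but the defenses are the same ones described
    # above -- a random per-password salt plus 1,000 hash iterations.
    if salt is None:
        salt = os.urandom(8)
    digest = salt + password.encode()
    for _ in range(iterations):
        digest = hashlib.md5(digest).digest()
    return salt, digest

# The same password under two random salts yields two unrelated hashes,
# so an attacker must attack each hash in a dump separately.
s1, h1 = slow_salted_hash("secret")
s2, h2 = slow_salted_hash("secret")
assert h1 != h2
```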

"In my eyes, for such an important company, this is a big fail," Jens Steube, the creator of ocl-Hashcat-plus said of the discovery he and beta tester Philipp Schmidt made last week. "Nowadays everyone in the security/crypto/hash scene knows that password hashes should be salted, at least. By not salting the hashes we can crack all the hashes at once with full speed."

Cisco officials acknowledged the password weakness in an advisory published Monday. The bulletin didn't name the specific Cisco products that use the new algorithm, except to say that they ran "Cisco IOS and Cisco IOS XE releases based on the Cisco IOS 15 code base." It warned that devices that support Type 4 passwords lose the capacity to create more secure Type 5 passwords. It also said "backward compatibility problems may arise when downgrading from a device running" the latest version.

"Due to an implementation issue, the Type 4 password algorithm does not use PBKDF2 and does not use a salt, but instead performs a single iteration of SHA256 over the user-provided plaintext password," the Cisco advisory stated. "This approach causes a Type 4 password to be less resilient to brute-force attacks than a Type 5 password of equivalent complexity."

The weakness threatens anyone whose router configuration data may be exposed in an online breach. Rather than store passwords in clear text, the algorithm is intended to store passwords as a one-way hash that can only be reversed by guessing the plaintext that generated it. The risk is exacerbated by the growing practice of posting configuration data to online forums. Steube found the hash "luSeObEBqS7m7Ux97dU4qPfW4iArF8KZI2sQnuwGcoU" posted here and had little trouble cracking it. (Ars isn't publishing the password in case it's still being used to secure the Cisco gear.)

While Steube and Schmidt reversed the Type 4 scheme, word of the weakness they uncovered recently leaked into other password cracking forums. An e-mail posted on Saturday to a group dedicated to the John the Ripper password cracker, for instance, noted that the secret to the Type 4 password scheme "is it's base64 SHA256 with character set './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'." Armed with this knowledge, crackers have everything they need to crack hundreds of thousands, or even millions, of hashes in a matter of hours.
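Under that description, a candidate Type 4 hash can be reconstructed in a few lines of Python. The alphabet translation below is an assumption about the encoding (the exact bit ordering Cisco uses may differ), but it shows the shape of the scheme:

```python
import base64
import hashlib

# The crypt-style character set quoted in the mailing-list post above.
CRYPT64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
STD64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
TO_CRYPT = str.maketrans(STD64, CRYPT64)

def type4_candidate(password: str) -> str:
    # Assumed reconstruction: plain SHA256, then base64 with the crypt
    # alphabet swapped in and the "=" padding stripped.
    digest = hashlib.sha256(password.encode()).digest()
    return base64.b64encode(digest).decode().rstrip("=").translate(TO_CRYPT)

# A 256-bit digest always encodes to 43 characters from that set,
# matching the shape of the hash quoted in the article.
assert len(type4_candidate("test")) == 43
```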

It's hard to fathom an implementation error of this magnitude being discovered only after the new hashing mechanism went live. The good news is that Cisco is openly disclosing the weakness early in its life cycle. Ars strongly recommends that users consider the pros and cons before upgrading their Cisco gear.

Promoted Comments

The article states that "the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once." Which I feel doesn't really get at the reason for adding salt. A salt value prevents "rainbow table"/dictionary like attacks. i.e. without a salt, you can hash common passwords beforehand and have them ready. "password" will always hash to the same value. With a salt two different users with the same password will have different hashes.

Rainbow table resistance is only a *minor* benefit of salt which protects the weakest users. It is easy to understand and it's what people remember, and in the case of Unix servers where the hashes are visible the need for this use case is obvious, but the *far more* critical use of salt in the era of GPU cracking is to prevent optimizations which permit mass parallel computation on readily available hardware.

Rainbow tables have practical limitations on size and these days you can actually get faster results with pure cracking than rainbow tables anyway.


How do salts appreciably slow down brute-force cracking of a single password? You can hash many password guesses along with the salt in roughly the same time that you can hash the guesses alone.

GPUs are really, really good at doing this: aaa, aab, aac, aad, aae... ... zzz. They can do it vastly faster than a conventional CPU because of their parallel operation design.

If you have a hundred thousand unsalted hashes to crack, you can just start at aaa and compare every calculated hash to the set of a hundred thousand.

If you have a hundred thousand salted hashes, each one has a large, unique prefix and will be the ONLY password to be found within the aaa ... zzz search space underneath that prefix. (If you take the aaa zzz thing literally I will beat you into a pulsating mass of sorrow with a cluebat. You know I mean the space of all possible passwords.) This slows down the parallelization because it must be redone for each hash. It's still faster than conventional CPU cracking but nowhere near as fast as unsalted gpu cracking.

It is nice if the attackers do not have the salt, but it's considered within the threat model that they probably do. Salts still work to slow down the process.
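The work multiplication described above can be counted directly. A toy Python sketch, with hypothetical three-letter passwords standing in for the full search space:

```python
import hashlib
import os

passwords = ["aaa", "abc", "zzz"]
guesses = ["aaa", "aab", "abc", "zzz"]  # the attacker's candidate list

# Unsalted: hash each guess once, compare it against ALL targets at once.
unsalted_targets = {hashlib.sha256(p.encode()).digest() for p in passwords}
unsalted_work = 0
cracked_unsalted = []
for g in guesses:
    unsalted_work += 1
    if hashlib.sha256(g.encode()).digest() in unsalted_targets:
        cracked_unsalted.append(g)

# Salted: each guess must be re-hashed under every unique salt.
salted_targets = [(os.urandom(8), p) for p in passwords]
salted_targets = [(s, hashlib.sha256(s + p.encode()).digest())
                  for s, p in salted_targets]
salted_work = 0
cracked_salted = []
for g in guesses:
    for s, h in salted_targets:
        salted_work += 1
        if hashlib.sha256(s + g.encode()).digest() == h:
            cracked_salted.append(g)

# Same passwords recovered either way, but salting multiplied the work
# by the number of unique salts.
assert sorted(cracked_unsalted) == sorted(cracked_salted)
assert salted_work == unsalted_work * len(passwords)
```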

What is preventing the use of custom hashing algorithms which aren't divulged to the public? Couldn't they add a layer of obscurity that way? Like, for example, SHA-256 applied an arbitrary number of times. I'm probably missing something...

The usual problem, if history is any guide, is that cryptography is hard. The additional security you get by forcing your opponent to reverse-engineer your algorithm is usually considerably less than the additional vulnerability you incur by having your in-house guy (a pretty decent coder who took a class on cryptography his senior year in college) roll his own crypto system, rather than using a publicly known algorithm designed and peer-reviewed by a substantial number of the world's non-clandestine cryptography specialists.

Even the ones designed by experts often end up having vulnerabilities of varying severity discovered later; but the world is littered with the corpses of pitifully weak in-house efforts.


Unless you really, really know what you are doing, any standard scheme is going to be better than anything you could think of. Many years ago when I worked at Digital, there was an internal news group dedicated to security. Many employees would propose security schemes - and the company experts would, usually no later than the next day, post the methods to break it.

More to the point, when evaluating products as a strategic planner, any company that would not divulge the cryptographic methods used in the products to me or my team was instantly removed from consideration for ALL future purchases. Security by obscurity = no security.

When working in cryptography, it is assumed that an attacker has full knowledge of the algorithms used and operating processes and procedures around the use of the algorithms - and has sample plain and cypher text. Only the key is unknown. If the system remains secure under those circumstances, then you have a good system. Only a wide, public review of a proposed system is sufficient to identify potential issues.

[...] The new one, despite using the superior SHA256, only uses one iteration of it and no salt.

in terms of password storage, sha256 is not superior to md5 -- they are equally bad.

the cryptographic weaknesses of md5 have nothing to do with why md5 is a poor choice for password storage. but the reason that md5 is a poor choice for password storage has everything to do with why sha256 is a poor choice for password storage.

The article states that "the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once." Which I feel doesn't really get at the reason for adding salt. A salt value prevents "rainbow table"/dictionary like attacks. i.e. without a salt, you can hash common passwords beforehand and have them ready. "password" will always hash to the same value. With a salt two different users with the same password will have different hashes.

it's both, but much more so the former.

nobody uses rainbow tables anymore since they're large in size, limited in keyspace, severely limited in flexibility, and don't scale worth a crud. gpus have completely deprecated rainbow tables, so the old rainbow tables argument for salting doesn't hold much water today.

salts certainly do not prevent dictionary attacks. i don't know where this misconception originated, but i've seen it repeated way too many times.

today's primary reason for salting is exactly as the article states: to significantly slow down attacks against multiple hashes. you are correct in stating that the same password hashed with different salts will result in different hashes. but what this means from a massively-parallel perspective is that each plaintext candidate needs to be computed with each unique salt.

let's say we have 10 hashes of some algorithm, and we can compute that algorithm at 100 MH/s. without salting, we can crack at the full 100 MH/s, since each plaintext candidate only needs to be hashed once, then compared to all 10 hashes simultaneously. now let's say we have 10 hashes with 10 unique salts. because we have to hash each plaintext candidate with each unique salt and compare each hash individually, our effective rate becomes 10 MH/s -- exactly 10 times slower.

so you can see how with larger hash lists this can have a profound effect. let's say we now have 8 million unique salts, and we can tear through this algorithm on GPU at 15 billion guesses per second. because of the salting, the effective rate against all 8 million salts becomes a mere 1875 guesses per second.
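The arithmetic in both examples is just division of the raw hash rate by the number of unique salts:

```python
def effective_rate(raw_hashes_per_sec: float, unique_salts: int) -> float:
    # Every candidate must be hashed once per unique salt, so the
    # effective guessing rate divides by the salt count.
    return raw_hashes_per_sec / unique_salts

assert effective_rate(100e6, 10) == 10e6          # the 10-hash example above
assert effective_rate(15e9, 8_000_000) == 1875.0  # the 8-million-salt example
```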

but this is only an effective defense against multiple hashes. if we have a single hash that we are focusing on that belongs to a high-value target (admin/root account, famous person, whatever) then salting alone will not help us. thus, salting alone is insufficient, so we have to use salting in conjunction with some other technique to slow down the attacker. that's where multiple iterations, memory-hardness, anti-gpu techniques, etc come into play.

Speed of the hashing algorithm is not a weakness unless you have bad passwords.

The sheer amount of energy required to brute-force something like a strong 16-character password would consume the Milky Way, and a 20-character password would consume the observable universe. And that's assuming you have a 100% efficient computer.

When in doubt, scrypt.

sorry, certainly not true.

your observable universe scenario only applies to brute force. fast algorithms provide us the bandwidth necessary to run very complex and creative attacks, enabling us to crack perfectly reasonable passwords in a perfectly reasonable amount of time.

and let's be realistic here, there is such a thing as a perfectly reasonable password. just because your password isn't random doesn't mean it's a bad password. and due to improper storage we crack plenty of perfectly reasonable and sufficiently long passwords that, if protected just a little bit better, would be impossible to find.

the only way you would force me to incrementally search for a 16 character password is if you used a 16 character random password, and nobody is using 16 character random passwords. you may /say/ that you do on the forums to increase your e-peen, but statistics clearly show that 91%+ of you are not.

and while we're on the subject of realism: scrypt is a great proof of concept, but it is not something your average developer can just up and implement. don't overthink it, crypt(3) is your friend. if you're developing on windows and desperately wish you knew what crypt(3) was, shoot for pbkdf2 instead -- keeping in mind today's lessons, please. and there's lots of good bcrypt implementations out there now, too. these are what you're stuck with until we come up with a standard via the Password Hashing Competition.
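For the PBKDF2 route, Python's standard library already does the right thing. A minimal sketch (the 100,000 iteration count here is an arbitrary illustrative choice, not a recommendation from the comment):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    # Random per-password salt plus many iterations -- today's lessons.
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, dk

def verify_password(password: str, salt: bytes, iterations: int, dk: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, dk)  # constant-time comparison

salt, n, dk = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, n, dk)
assert not verify_password("wrong guess", salt, n, dk)
```

Storing the salt and iteration count alongside the derived key lets you raise the work factor later without invalidating old records.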

The weird part is that Cisco doesn't seem to be really scrambling to fix it. They are just like, "Well, think about the fact that you may be hosed before you upgrade. Good luck!"

They are working on fixing it, but it will take time -- from the link:

Quote:

The Future of Type 4 Passwords on Cisco IOS and Cisco IOS XE

Because of the issues discussed in this Security Response, Cisco is taking the following actions for future Cisco IOS and Cisco IOS XE releases:

Type 4 passwords will be deprecated: Future Cisco IOS and Cisco IOS XE releases will not generate Type 4 passwords. However, to maintain backward compatibility, existing Type 4 passwords will be parsed and accepted. Customers will need to manually remove the existing Type 4 passwords from their configuration.

The enable secret password and username username secret password commands will revert to their original behavior: Both commands, when provided with a plaintext password, will generate a Type 5 password. This will be the same behavior as before the introduction of Type 4 passwords. This step is being taken to preserve backward compatibility.

Type 5 passwords will not be deprecated: This will be done to preserve backward compatibility. The deprecation warning for Type 5 passwords will be removed.

A new password type will be introduced: This new password type will implement the original design intended for Type 4 passwords, which is PBKDF2 with SHA-256, an 80-bit salt, and 1,000 iterations. The exact type is yet to be determined.

New command-line interface commands will be introduced: The new commands will allow Cisco customers to configure the new password type for both enable secret password and username username secret password. This will allow Cisco customers to gradually migrate to the new password type, while allowing them to use the existing syntax to preserve backward compatibility. The exact syntax for the new commands is yet to be determined.

I'm also assuming that they will be issuing a firmware upgrade, or a patch, to fix existing products.
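Cisco's stated original design for Type 4 (PBKDF2 with SHA-256, an 80-bit salt, and 1,000 iterations) maps almost directly onto a standard library call. This is a sketch of those published parameters, not Cisco's actual code:

```python
import hashlib
import os

def intended_type4(password: str):
    # Parameters straight from the advisory quoted above:
    # PBKDF2-HMAC-SHA256, 80-bit (10-byte) salt, 1,000 iterations.
    salt = os.urandom(10)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1000)
    return salt, dk

salt, dk = intended_type4("enable-secret")
assert len(salt) == 10 and len(dk) == 32
```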

Incompetence on multiple levels. First the original implementer clearly didn't follow the specs (or the specs were wrong to begin with and Cisco is covering their ass after the fact), second the code clearly wasn't reviewed by someone who knows security as it should never have been allowed to ship.


This reminds me a bit of Sony's "captcha", which instead of generating an image to be read, just uses JavaScript to generate plain text and disables the right-click menu and mouse-based text selection. Google has zero trouble reading the resultant text (seriously, it comes up when you search for "sony captcha").

That "captcha" is actually harder for humans to read than for machines, which is the exact opposite of the intended functionality. The Cisco mistake isn't quite that bad, but it's close.

How in the name of holiness can you have an "implementation issue" which completely omits the key requirement/point of the new feature? How can QA at a major vendor screw up that badly?

It's truly staggering - if we presume that they do develop software in a respectable fashion then this means the error happened early enough that all the test cases, etc, were done to the wrong design. Which in turn means that a largish set of engineers saw this neolithic approach to security and nobody shouted in horror (too sleepy? corporate culture of keeping head down?). Microsoft used to cock up like this but these days with the SDL culture embedded I'd be dismayed if a howler like this got dumped into customer laps.

Or maybe their software practices really are mid-90s and your big bucks go on a bedroom full of undisciplined code monkeys and a nice bridge logo on the box?

It's a pity the software industry doesn't have the safety-reinforcement of aviation, where accidents get a public post mortem so that everyone can learn how to do it better.

The article states that "the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once." Which I feel doesn't really get at the reason for adding salt. A salt value prevents "rainbow table"/dictionary like attacks. i.e. without a salt, you can hash common passwords beforehand and have them ready. "password" will always hash to the same value. With a salt two different users with the same password will have different hashes.

The older one also performed a 1,000 iterations of MD5. I think that's what the author was getting at in regards to making it harder to bruteforce. The new one, despite using the superior SHA256, only uses one iteration of it and no salt.

I admit there are screw ups, and then there are SCREW UPS. This ranks up there with the ALL CAPS RAGE variety. It makes me think of how generic passwords are set so they can be remembered/easily accessed, all the while basically setting a precedent of "You got pwned" stupidity all over the place, especially since Cisco is so prevalent.

That dent in my forehead is not that attractive anymore and my desk is out of warranty...


The article states that "the older function added randomly generated cryptographic "salt" to each password, preventing crackers from tackling large numbers of hashes at once." Which I feel doesn't really get at the reason for adding salt. A salt value prevents "rainbow table"/dictionary like attacks. i.e. without a salt, you can hash common passwords beforehand and have them ready. "password" will always hash to the same value. With a salt two different users with the same password will have different hashes.

If everybody with a certain password has the same hash, cracking that hash reveals all those people's passwords (with high probability), so it allows many passwords to be revealed at once. You can start with the most common hash and be pretty sure that the password is very common too. But technically, you're "tackling" only one hash at a time that way.


In September 2010, ElcomSoft announced a password cracking utility for Research In Motion BlackBerry device backups that takes advantage of what Vladimir Katalov, ElcomSoft's CEO, described as the "very strange way, to say the least" in which the BlackBerry uses PBKDF2. The BlackBerry encrypts backup files with AES-256. In turn, the AES key is derived from the user's password using PBKDF2. However, the BlackBerry software uses only one PBKDF2 iteration, thus not taking advantage of the key security features of PBKDF2. By contrast, according to Katalov, Apple's iOS 3 uses 2,000 iterations and iOS 4 uses 10,000.


How do salts appreciably slow down brute-force cracking of a single password? You can hash many password guesses along with the salt in roughly the same time that you can hash the guesses alone.

If you have the salt, then logically it wouldn't slow down the attack at all. However, if you don't have the salt, then it increases the search space over and above the original password, and as the salt is a random number expressed as bytes, it also makes the password immune to precomputed dictionary attacks.


The salt is typically stored alongside the hash so if the hash is known, the salt is too. I suppose the salt could be encrypted with a fixed key or stored separately or something but I don't think that's standard.

I may be very naive here, but couldn't much of this be solved if the devices only allowed 1 authentication request every 5 seconds or something like that? Then a brute force attack is going to take infinitely longer than if the device will respond to thousands or millions of requests a second.

I can see issues with this if the device is logged into by several people, but there should be a compromise available.

Though since this solution seems so simple, I'm sure I am missing some fundamental part about how this works.


The salt doesn't in itself make a single hash more secure. It increases the search space for attacks, and is intended to make precomputed hash (rainbow table) attacks less effective.
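The standard layout can be sketched in Python: the salt sits in the clear next to the digest, and verification just replays the hash. The record format and field separator here are our own illustrative choices:

```python
import hashlib
import hmac
import os

def store(password: str) -> str:
    # The salt is not a secret -- it is stored in the clear beside the
    # digest, purely to make each record unique.
    salt = os.urandom(8)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt.hex() + "$" + digest

def check(password: str, record: str) -> bool:
    salt_hex, digest = record.split("$")
    candidate = hashlib.sha256(bytes.fromhex(salt_hex) + password.encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)

record = store("hunter2")
assert check("hunter2", record)
assert not check("hunter3", record)
```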

Incompetence on multiple levels. First the original implementer clearly didn't follow the specs (or the specs were wrong to begin with and Cisco is covering their ass after the fact), second the code clearly wasn't reviewed by someone who knows security as it should never have been allowed to ship.

And as has been mentioned, insufficient Testing/QA.

The project manager (or the person dictating the project's overall development process) for this effort should lose their job. So many different places to catch this were ignored.

Not sure who is driving the boat at Cisco in recent times; it seems like the Boss died and his snot-nosed Son who knows jack took over and is just out joyriding for giggles. They've gone from being a powerhouse in reliability to tripping up over some major points in the past many months.

I may be very naive here, but couldn't much of this be solved if the devices only allowed 1 authentication request every 5 seconds or something like that? Then a brute force attack is going to take infinitely longer than if the device will respond to thousands or millions of requests a second.

I can see issues with this if the device is logged into by several people, but there should be a compromise available.

Though since this solution seems so simple, I'm sure I am missing some fundamental part about how this works.

Uhm, this has nothing to do with brute-force login attempts. You have to be able to get the config (where the hashes for these passwords are stored) to even be able to do anything with this. Again, if that happens you have a bigger issue anyway. Not to mention most people use the local passwords as a backup to RADIUS/TACACS. Which means you'd have to knock out the RADIUS/TACACS servers to have a working local login in the first place.


There are two issues at play:

1. DoS: If you rate-limit logins (especially if you rate-limit per account rather than per host, which is tempting because per-host limiting allows an attacker who controls multiple hosts to make more attempts per unit time), you open a convenient avenue for somebody who knows only a username to lock out the legitimate user just by bouncing random guesses off the system, absolutely no knowledge required.

2. Online vs. offline attacks: Against online attacks (where you are probing a remote host, and the remote host's OS and software are operating as designed), rate limiting can stop, or at least severely slow, attackers. Against an offline attack (where the attacker has access to the hashes, either by compromising the device's OS/software or by physically reading out its local storage), you don't get to impose rate limiting artificially, since the attempts are going on entirely on a system controlled by the attacker; your only chance to slow things down was when you first generated the hashes (as in this case, where they should have used 1,000 rounds, salted, but ended up using 1 round, unsalted). If an offline attack is happening, the individual device is already doomed; but people have a nasty habit of reusing passwords, so you want to limit the attacker's ability to recover the plaintext passwords from the hashes, because it is virtually certain that those passwords will be in use somewhere else, probably somewhere dangerously stupid but very convenient...
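The online side of that tradeoff can be sketched as a naive per-account limiter. Note the comment's own caveats: this does nothing against offline cracking of stolen hashes, and per-account limiting can itself be abused for lockout DoS. The class and interval below are illustrative:

```python
import time

class PerAccountLimiter:
    # Naive per-account limiter for ONLINE guessing only. It is useless
    # against offline cracking of stolen hashes, and per-account limiting
    # can itself be abused to lock out the legitimate user (the DoS above).
    def __init__(self, min_interval: float = 5.0):
        self.min_interval = min_interval
        self.last_attempt = {}

    def allow(self, account: str) -> bool:
        now = time.monotonic()
        last = self.last_attempt.get(account)
        if last is not None and now - last < self.min_interval:
            return False  # too soon after the previous attempt
        self.last_attempt[account] = now
        return True

limiter = PerAccountLimiter()
assert limiter.allow("admin")        # first attempt goes through
assert not limiter.allow("admin")    # immediate retry is refused
assert limiter.allow("operator")     # other accounts are unaffected
```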


What is preventing the use of custom hashing algorithms which aren't divulged to the public? Couldn't they add a layer of obscurity that way? Like, for example, SHA-256 applied an arbitrary number of times. I'm probably missing something...

Here is a way to fix this problem in general: Take gmail for example: Why not have it, that when you enter a password in the gmail login box, that your browser applies its own hashing before sending your password to gmail. That way Google will receive the hashed password as if it is the original password and apply its own hashing once again.

This would obviously have to be done when creating a new password as well, so that Google gets the hashed version of your new password as if it were your new password.

One drawback is that it will not work so well for anyone working on multiple machines, unless each browser can be configured once with the type of hashing it should use. (I am assuming that a person who cares about hashing is also someone who does not work on IE6.)

The settings should make the number of hashing rounds proportional to the year the password was saved; that way it is partially future-proofed (otherwise, when you want something more secure, you would have to go through the settings of every browser you use).

There are several other tweaks that would have to be applied to this idea to get it to work, but they are minor/obvious and I do not want to turn this into any more of a wall of text.
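The client-side pre-hashing idea above can be sketched roughly as follows. All names and parameters here are hypothetical, and real systems solve this differently (TLS plus server-side salted hashing, or password-authenticated key exchange), but the two-layer structure looks like this:

```python
import hashlib

def client_prehash(password: str, site: str, rounds: int = 10_000) -> str:
    # Browser-side step: stretch the password, bound to the site name so the
    # value sent to one site is useless at another.
    data = (site + ":" + password).encode()
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data.hex()

def server_store(prehashed: str, salt: bytes) -> str:
    # Server-side step: treat the prehash as "the password" and hash it
    # again, salted and iterated, before storing.
    return hashlib.pbkdf2_hmac("sha256", prehashed.encode(), salt, 100_000).hex()

# Fixed salt here only for reproducibility; a real server would use os.urandom.
sent = client_prehash("hunter2", "gmail.example")
record = server_store(sent, salt=b"per-user-random-salt")
```

Note that the site-binding also addresses password reuse: a database leak at one site yields prehashes that do not log in anywhere else.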

I understand all this, but your previous post was unclear: it implied that salts prevent some sort of algorithmic optimization within the hashing itself, whereas really they only stop you from reusing the work of cracking one password to crack another (work which, without salt, can even be done in advance).

I may be very naive here, but couldn't much of this be solved if the devices only allowed one authentication request every 5 seconds or something like that? Then a brute-force attack would take vastly longer than if the device responds to thousands or millions of requests a second.

I can see issues with this if the device is logged into by several people, but there should be a compromise available.

Though since this solution seems so simple, I'm sure I am missing some fundamental part about how this works.

There are two issues at play:

1. DoS: If you rate-limit logins (especially if you rate-limit per-account rather than per-host, which is tempting because per-host allows an attacker who controls multiple hosts to make more attempts per unit time), you open a convenient avenue for somebody who knows only a username to lock out the legitimate user just by bouncing random guesses off the system, absolutely no knowledge required.

2. Online vs. offline attacks: against online attacks (where you are probing a remote host, and the remote host's OS and software are operating as designed), rate limiting can stop, or at least severely slow, attackers. Against an offline attack (where the attacker has access to the hashes, either by compromising the device's OS/software or by physically reading out its local storage), you don't get to impose rate limiting artificially, since the attempts happen entirely on a system the attacker controls; your only leverage was how expensive you made the hashing when you first generated the hashes (as in this case, where they should have used 1,000 rounds, salted, but ended up using 1 round, unsalted). If an offline attack is happening, the individual device is already doomed; but people have a nasty habit of reusing passwords, so you want to limit the attacker's ability to recover the plaintext passwords from the hashes, because it is virtually certain that those passwords are in use somewhere else, probably somewhere dangerously stupid but very convenient...
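The "1,000 rounds, salted" fix mentioned above is exactly what a standard key-derivation function provides. A minimal sketch using Python's stdlib, with illustrative parameters:

```python
import hashlib
import os

password = b"correct horse battery staple"

# What Type 4 effectively did: one unsalted SHA-256 round. Cheap for the
# defender, and just as cheap per guess for a GPU cracker.
weak = hashlib.sha256(password).hexdigest()

# What it should look like: a random per-password salt plus many iterations,
# so every single guess costs the attacker thousands of hash operations.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000)

def verify(guess: bytes, salt: bytes, stored: bytes) -> bool:
    # Verification just repeats the derivation with the stored salt.
    # (Production code should compare with hmac.compare_digest.)
    return hashlib.pbkdf2_hmac("sha256", guess, salt, iterations=100_000) == stored
```

The iteration count is the knob you turn as hardware gets faster; the salt is what prevents cracking all hashes in one sweep.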

Thank you. Skimming the article, I missed the local/offline part. The points raised in #1 are quite valid. Maybe some sort of system where it only allows 1 login attempt per x seconds per IP address might work. Might not help if the attacker has access to a botnet with millions of computers in it though.

Regarding pure cracking beating rainbow tables: no. If the password is chosen by a human, you can create a table in memory and break upwards of 90% of them. If passwords are chosen randomly you can forget about that; but then you are effectively generating the rainbow table on the fly, and salt still costs you 2**n times more operations for n bits of salt regardless of how the password was chosen.

Also, it isn't clear what "one round of SHA256" means. I'm guessing it is the full SHA256 done a single time, but it could be a much simplified (and breakable) variant. Since it isn't worth breaking the hash when it is trivial to brute-force it, I expect no one to care. (Such a botched design, not implementation, keeps me from assuming it is the full SHA256 the way I normally would.)