Wow, you really know nothing about encryption. Sigh... everyone is an expert. ZDNet looked at ighashgpu, which cracks unsalted password hashes where a precomputed hash table already exists. TH looked at salted password cracking, where you have to perform thousands of SHA-1 transformations for every password attempt.
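The distinction matters because salting plus iteration changes the per-guess cost completely. A rough sketch in Python (the 5,000-round count and helper names are illustrative, not WinRAR's actual scheme):

```python
import hashlib
import os

def unsalted_hash(password: bytes) -> str:
    # One SHA-1 call; identical passwords always produce identical
    # hashes, so an attacker can precompute a lookup table once.
    return hashlib.sha1(password).hexdigest()

def salted_iterated_hash(password: bytes, salt: bytes, rounds: int = 5000) -> str:
    # A random per-file salt defeats precomputed tables, and thousands
    # of SHA-1 rounds make every individual guess expensive.
    digest = salt + password
    for _ in range(rounds):
        digest = hashlib.sha1(digest).digest()
    return digest.hex()

salt = os.urandom(16)
print(unsalted_hash(b"hunter2"))               # identical on every run
print(salted_iterated_hash(b"hunter2", salt))  # differs with each salt
```

With the unsalted scheme one table cracks every database; with the salted scheme every guess costs thousands of hash invocations, which is exactly why the two benchmarks aren't comparable.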

Even though it's a dupe, why are GPUs so much faster than CPUs at this? It doesn't seem like they have any more power, is the architecture that different from CPUs? Is it an issue where you can basically dedicate all resources (GPUs plus VRAM) to the one task?

Depends on what type of number crunching you're doing. If every step depends on the previous step, then you won't be able to exploit the hundreds of cores that make up even a cheap GPU these days by running them in parallel. If, on the other hand, your task consists of subtasks which can easily be done simultaneously, like password hashing (just send a separate password to be hashed to all cores at once) or n-body calculations, you will see huge speedups. If you do a lot of matrix (vector) operations
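A minimal illustration of why candidate hashing is embarrassingly parallel (threads stand in for GPU cores here, and the candidate list is made up):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_candidate(password: str) -> str:
    # Each candidate hashes independently; no step depends on any
    # other candidate's result, so the work splits cleanly.
    return hashlib.sha1(password.encode()).hexdigest()

candidates = [f"password{i}" for i in range(1000)]

# Serial and parallel runs give identical results; a GPU performs the
# same split across hundreds of cores instead of a few threads.
serial = [hash_candidate(p) for p in candidates]
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(hash_candidate, candidates))
assert serial == parallel
```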

Does anyone remember when you could come to this website, have a discussion, and learn about new concepts and ideas? Don't be so bitter; even you were a noob at some point in your life.

CPUs and GPUs have very different focuses. A CPU is designed to take a single piece of data, run an operation on it, then grab a different piece of data, and run another operation on it. (There's a whole bunch of optimizations for running the same operation on different bits of data, and different operations on the same bit of data, but those are largely optimizations, and only apply to relatively small scales). A GPU is designed to take a butt-load (technical term) of data, and perform the same operation on all that data, followed by another operation on that same butt-load of data.

When you are cracking passwords, you have a bunch of potential passwords you want to try. On a CPU, you are stuck with hashing between 1 and maybe a dozen simultaneously. On a GPU, you could potentially run a few million simultaneously. Each step on the GPU would be slower, but your total output of hashed passwords would be much higher.

Thanks. Would the GPU have a single, multi-stage pipeline (with apparently a lot more stages than a CPU), or does it actually have multiple pipelines? Or maybe just a ton of small cores? I'm confused about where the parallelity (technical term) actually gets implemented in the hardware.

It's multiple cores. Usually in the low hundreds, but some have over a thousand: a Radeon HD 6990 has, if my data is correct, 3072 cores running at 830 MHz, combined with 4GB of memory optimized for parallel access. Although, to be fair, it does so with about triple the power consumption of a high-end desktop processor. So compare it to, say, three i7 CPUs: you'd get 18 cores (that act like 36 because of SMT), clocked at 3,466 MHz.

Of course, a GPU would fail spectacularly at a lot of things. They'd probably

GPUs are much more specialized than CPUs. CPUs can only do a few things in parallel, depending on the number of cores available in the CPU chip (e.g., 4). GPUs have orders of magnitude more processing paths than CPUs; the GTX 570 mentioned has 480 cores. That's what's being leveraged here: it's not the resources or power, it's the number of parallel processing paths.

It was a slightly different entry in /.'s series on "passwords are dead! oh noes!", except it was on brute forcing hashed passwords. It makes the same fundamental mistake that comments on that post pointed out _repeatedly_.

This is on brute forcing data encrypted with a symmetric cipher whose key is derived from a password. Yes, if you naively translate the password into a key, you go from a 128 or 256-bit keyspace to about the size of the dictionary.

No. ZDNet used ighashgpu. That's a hash cracker. WinZip and WinRAR encryption is different because it isn't based on precomputed password hashes. It looks like TH used AccentZip and AccentWinRAR to crack the passwords. All three programs are created by Ivan Golubev. His blog is full of posts on cryptography performance.

The password-cracking process could be used to create completely new content, even data like images. If you can make billions of attempts per second, the amount of coherent information must eventually be high enough.

Alphanumeric would be 62 different characters. Going from seconds to minutes to hours to days to years for a 7-character password: 62 ** 7 / 14605.0 / 60.0 / 60.0 / 24.0 / 365.0 results in 7.6459888127245952.

So it's actually around 7.7 years, but that's with just one computer. If you had 7.7 computers, it would take a year. If you got a shitload of time on Amazon (which I hear has GPU rigs available for rent) you could shrink that time way down.
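The arithmetic above, spelled out (14,605 guesses per second is the rate taken from the comment, not a measured figure):

```python
# 7-character alphanumeric password: 26 lower + 26 upper + 10 digits
keyspace = 62 ** 7
seconds = keyspace / 14605.0          # at 14,605 guesses/second
years = seconds / 60 / 60 / 24 / 365
print(round(years, 2))                # 7.65 years on one machine
print(round(years / 7.7, 2))          # ~1 year across 7.7 machines
```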

I've been told before that WinRar's encryption wasn't much to crow about, but this article says it's AES-128. So... which is it? Is it fairly secure (provided it is used properly) or does it still have a major weakness that makes it easy to get into?

Right, but that's not what I'm asking. I'll be more specific: is using WinRar with an 8+ character password reasonably secure, or does it have another vulnerability that weakens it? This article suggests that it's pretty darned secure. BUT... this is a benchmark article, not a strength-of-security exercise.

WinZip and WinRAR have effective encryption, but one needs an effective passphrase to go with it.

Ideally, the best way to encrypt stuff is not with just a passphrase: use either a random keyfile for symmetric encryption, or public-key crypto (although PK crypto has its own caveats). That way there is no brute-forceable passphrase to guess, so an attacker has to deal with the complete keyspace of the encryption algorithm, and not just what people can type in.

There are multiple ways to brute force. One is just scanning the known subset of the keyspace (what people can type in). Another is using dictionaries (ye old Crack). Of course, using dictionaries and then scanning the keyspace is useful too.

Dictionaries are still useful even though people's passwords tend to be more than just a word these days. A lot of people use two words and a character, so that is far more guessable than trying to just brute force every single option in a 10-12 character keyspace.

Specifically: 10,000 * 10,000 * 90, assuming that both words are in the set of 10,000 commonly used words. If you assume uncommon words are in use, you may have to look at 200-300k words for each position unless you know that they stuck to the sh

Brute force is synonymous with checking the keyspace against a dictionary.

Architecture allows the GPU to do this much more quickly, with the correct software. It is, however, still checking against a dictionary. Parent mentions encrypting what has already been encrypted (note the use of the word symmetric), so that you must crack the whole keyspace, as opposed to the area of the encrypted file that you know contains the reversible hash (or, as opposed to just getti

I was under the impression that brute forcing did exactly that. They're not using a dictionary. They're taking advantage of the GPU processing power.

For this kind of encryption, the archive password is converted into a key. This is done because remembering a large key is hard, but remembering a password is not.

However, this kind of conversion is not remotely secure. With around 70 typable characters ("a-z", "A-Z", "0-9", a few symbols, etc.), the number of possible passwords of length l is around 70^l. If we use a secure crypto algorithm, say, AES-256, then we would encrypt the archive with a 256-bit key. Something that uses a password for encryption does so by permuting the password into a key, typically through some combination of hashing, concatenation, and salting. This process deterministically maps the relatively small ASCII password space to a 256-bit key space. So even though you're using a secure-sized 256-bit key, there are still only (at most) 70^l possible keys, since each key must be generated from a password.

Now, with AES-256, there are 2^256 possible keys. While brute-forcing the 256-bit keyspace is considered hard (that works out to about 1 * 10^77 possible keys), brute-forcing the possible plaintext passwords that could have generated the key is significantly easier (a 10-character password has only 2 * 10^18 possibilities).

So back to what the OP said, while the crypto and keysize of the underlying cryptography are secure (in this example, AES-256), the keyspace is inherently limited since it has to be derived from a much-smaller set of passwords. The OP is spot-on... if you really want to encrypt something securely, you have to use a much larger keyspace, which, in this case, means generating a complete 256-bit key rather than deriving one from an ASCII password. This article shows that password-derived keys are not secure.
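A sketch of that derivation step using Python's standard library (PBKDF2-HMAC-SHA256 here; WinZip's AES mode does use PBKDF2, while classic WinRAR uses its own iterated-SHA-1 scheme, but the principle is identical):

```python
import hashlib
import os

salt = os.urandom(16)
# Deterministically map an ASCII password to a 256-bit key. The key
# is the right size for AES-256, but there are only as many distinct
# keys as there are distinct (password, salt) inputs.
key = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
print(len(key) * 8)   # 256
```

The iteration count (100,000 here) is what makes each brute-force guess expensive; the key size alone says nothing about the effective keyspace.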

I find it strange that you discount using a longer passphrase without bothering to calculate how long it would need to be. Assuming 70 typeable characters, getting 2^256 possible keys only requires 256*ln(2)/ln(70) ≈ 42 characters, assuming a uniform distribution. (English text actually has much less entropy than an ideal uniform distribution of characters, but we'll ignore that for the time being.)

As an example, "This is a fourty two character passphrase!" is a fourty two character passphrase. It's not unreasonable to blind-type something like that into a password field for someone with a reasonable amount of typing skill.
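The length calculation above, for reference (uniform random selection from 70 characters is the stated assumption; real English prose carries closer to 1-2 bits per character, which would push the required length into the hundreds):

```python
import math

# Characters needed to cover a 256-bit keyspace with 70 symbols
chars_needed = 256 * math.log(2) / math.log(70)
print(math.ceil(chars_needed))   # 42
```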

Except that your passphrase does not have 80^42 (8.5e79, or about 265 bits) possibilities. It's more like (10,000^7) * 2 * 12 (2.4e29, or 97.6 bits), because all of the words are fairly common ones and probably exist in a shortened dictionary of the 10,000 most common words. They're also all

So even though you're using a secure-sized 256-bit key, there are still only (at most) 70^l possible keys, since each key must be generated from a password.

Maybe I'm just not thinking hard enough about this, but it seems very trivial to me to create a password/passphrase hashing algorithm that wouldn't be limited as you defined. Yes, there may be about 70 possible characters that can be typed, but their placement in the password (1st, 4th, or 17th character) can also be used to increase the possible keyspace.

Additionally, the length of the key (256-bits) and the length of the passwords do not have to be identical - the password can be much longer and thus can ha

Also: what stops a hacker from trying out passwords on the keyfile instead of the encrypted file?

It's an example to demonstrate how much more limited the typable keyspace is than an unconstrained binary keyspace, nothing more. I think you're quite out of line throwing words like "egocentric" around because an arbitrary example used QWERTY/ASCII.

However, say you did have 1024 typable characters. A random 10-character password with such a keyboard layout would yield only about (at most) 1024^10 ≈ 1.27e30 possible combinations, still well short of the example binary keyspace.

Rainbow tables only work on unsalted password hashes. These were used by Microsoft for LM ("LAN Manager") style passwords. IIRC, Vista and Win7 don't use these, and if you use a password longer than 14 characters, even Windows XP stops storing the LM hash.
Your rainbow tables are effectively useless against AES-128, AES-256, and even DES. They simply precompute password hashes, and generating the tables takes quite a long time.
Using rainbow tables has nothing to do with GPU acceleration.

If they're brute forcing against a hash, attempts/failures is irrelevant. If they're brute forcing file encryption, once again, system lockout attempts are irrelevant unless it's integrated into the operating system.

For most intents and purposes this is not that newsworthy. To get cracking rates like this, you also need a setup that can answer billions of password guesses per second, which no online service will do. So, keeping it simple, you need to get hold of said password database (or encrypted file) and run it in an environment that can handle, and will allow, that much guessing activity.

ergo, someone has to jack yo shit before they can start guessing your password which may be more difficult than just trying to guess that password

I gotta ask: why are GPUs faster? And since they are faster, why aren't CPUs using methods and techniques similar to GPUs for getting certain things done? I remember the days of the "math coprocessor", which sped things up by performing math on-chip rather than through software subroutines.

I was always under the impression that GPU means graphics processor unit, not "Guessing Passwords Unit."

These days there are GPGPUs with GP standing for "General Purpose". They're not only used for displaying graphics anymore but for general-purpose vector calculations. GPUs are faster *for vector calculations* because most of the chip consists of arithmetic units. In return, GPUs are much, much worse at pretty much everything else, such as branching. Don't try to run if-statements on GPUs.

In effect, having a good, recent GPU is the equivalent of having hundreds of CPUs in one single chip. Since password guessing is a task that can be divided easily, and the GPU is very efficient at number-crunching, it can be 100x faster than a regular CPU here.

Since you don't know the plaintext, it's impossible, even by brute force.

Not really. For example, we might assume that the original file is a .txt file and that the document is in English. Based on this we could perform a sample decrypt followed by a quick (relatively speaking) analysis looking for characters in the ASCII printable range. If the key is wrong, the result should appear random. At the same time we can check for non-varying bytes that occur at certain positions in Word documents, or for image file format headers. Most file formats don't try to disguise themselves.
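That plausibility test can be sketched without any real decryption. Assume `looks_like_plaintext` receives the output of a trial decrypt; the magic-byte list and the 95% threshold are illustrative choices, not anything a specific cracker uses:

```python
# File-format headers that survive in the first bytes of a correct decrypt
MAGIC = (b"\x89PNG", b"%PDF", b"PK\x03\x04", b"\xff\xd8\xff")

def looks_like_plaintext(data: bytes) -> bool:
    # A known header means the candidate key is almost certainly right.
    if any(data.startswith(m) for m in MAGIC):
        return True
    # Mostly printable ASCII plausibly means an English .txt file;
    # a wrong key should produce roughly uniform random bytes.
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
    return printable / max(len(data), 1) > 0.95

print(looks_like_plaintext(b"Hello, this is an ordinary text file.\n"))  # True
print(looks_like_plaintext(bytes(range(256))))                           # False
```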

GPUs are not really that much faster. GPUs are essentially tiny CPUs with very limited (I mean extremely limited) caches, registers, and instruction sets, and a card throws hundreds or even thousands of them together. Essentially, GPUs are a large number of weak processors. Some problems are "embarrassingly parallel", like painting a 3D scene onto a 2D screen or testing guessed passwords against an encrypted file. Other problems are quite parallel but require aggregating the data at the end, like multiplying a ma

Cryptography is one of just a handful of embarrassingly parallel computing problems (3D graphics is the most notable one). For the vast majority of things people use computers for, a not-so-parallel general purpose CPU is better.

"Omg, what am I going to do about my eight char password I use half across the Internets?"

Well... one could print out a passwordcard [passwordcard.org]. Then one might start using passwordmaker [passwordmaker.org] on whatever phone/OS one fancies. By which time one (sh/c)ould check whether one's passwords are long enough [lockdown.co.uk] and, while one is at it, have a look at these tricks [passwordmaker.org] from an almost tl;dr-ish list. Now apply elbow grease and a bit of "go figure". Problem solved? Moving on?

A 15-character passphrase is much more secure than an 8-character password. All it takes is a nonsensical passphrase and a light peppering of special characters, and no one could break it without a few dedicated power plants supplying electricity to their GPU cluster.

With the recent MTGox compromise, I've been looking for a better password system. It looks like one way to go is to use a program like Password Safe or KeePass to generate unique passwords per website. However, I'm curious how resistant these master files are to GPU attacks. GPUs basically sliced through the MTGox MD5 hashes like butter. How long would it take a higher-end distributed cluster to break a Password Safe master file? It's Blowfish-encrypted, I believe.

When you design a security system that relies on passwords - you need to make the assumption that the attacker has either the password hash or the binary file that is being protected. In which case, they are not subject to any delays or lockouts and they can ramp up the brute-force rate to whatever they can afford. They may even have access to a 10k machine botnet, in which case their resources will far exceed your own. So you should also make the assumption that the

Using GPUs to crack passwords isn't going about it the same way that you are thinking. There are no network connections to a server as the GPUs wouldn't be any faster at that than a normal CPU. What they are doing is getting a copy of the encrypted passwords in some way. Either from a workstation with cached passwords or gaining some amount of access to system to get a hold of the encrypted passwords. Then they run the cracking software against that local file using the GPUs to do the heavy lifting.

Things to remember: password difficulty is based on x^y, where x is the number of possible characters and y is the password length. Increasing password length is *always* going to be more effective than increasing the mix of characters (indeed, a dictionary attack can be thought of as reducing 96^8 eight-character passwords to a mere 250,000^1).

Each additional alphanumeric character increases the search space by a factor of 62; a two-word password is still only 250,000^2, while a password of ten random lowercase characters is 26^10, a *much* larger number.
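Plugging in the numbers from the comment above:

```python
two_dictionary_words = 250_000 ** 2   # 6.25e10 possibilities
ten_random_lowercase = 26 ** 10       # ~1.4e14 possibilities
# Ten random lowercase letters beat two dictionary words by >2000x
print(ten_random_lowercase // two_dictionary_words)
```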

Moore's law says processing power doubles roughly every 18 months. Each extra lowercase character multiplies the search space by 26, which is about log2(26) ≈ 4.7 doublings, so roughly seven years before new hardware can crack your lengthened password as quickly as today's hardware cracks the shorter one; about eight and a half years if you use upper and lowercase.
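Under the 18-months-per-doubling assumption, the figure works out as follows (a back-of-envelope sketch that ignores parallelism and algorithmic improvements):

```python
import math

def years_per_extra_char(alphabet_size: int, months_per_doubling: float = 18) -> float:
    # One extra character multiplies the keyspace by alphabet_size,
    # i.e. log2(alphabet_size) doublings of required compute.
    return math.log2(alphabet_size) * months_per_doubling / 12

print(round(years_per_extra_char(26), 1))   # 7.1 for lowercase only
print(round(years_per_extra_char(52), 1))   # 8.6 with mixed case
```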

Assuming an alphanumeric search space is naive. There are actually (26*2) + (10*2) + (11*2) = 94 characters that can be typed on a US PC keyboard, far more if you add all the composable characters (over 30k). For example, the default settings of the (not-quite-)standard mkpasswd utility generate a password with four lowercase characters, two uppercase characters, two numbers, and one typable symbol. That's not quite 94^9 = 572,994,802,228,616,704 (5.73e17) permutations; not quite, because you can

Yes, but the point is that (barring quantum machines) adding length (y) grows the number of permutations (and thus the size of the search space) far faster than adding characters (x). It also happens to be much easier for a human being to do than pulling out esoteric Unicode characters that may or may not be legal characters in a given system.

My estimates have always been: 10,000 commonly used words, 200-300k in a large dictionary.

Most people tend to pick words that are familiar to them, that they probably use in everyday speech or hear regularly. Not many will crack open a random dictionary or book and pull a word out at random from the other 97% of the word list.

Now, identifying the 3% most commonly used words? That's a bit more of a guessing game.

...really? Some of us play games on the GPUs, too. Or *gasp* do actual work with them! And the more expensive ones are faster. Because that's how technology works. No, not everyone needs an F1 car. Most people would be fine with a Corolla. And the cheaper cards generally give you much more value for your money. But if you can't see where some people would be more concerned about absolute performance than pure economy, then you're stupid.

This is not a slashvertisement. This is simply tech journalism. Or is C