pureh@te, how would it be impossible? If you recoded cowpatty to utilize CUDA, it could crack the hashes quite a bit faster. As it stands now it can't, but you could recode it to make it faster, right?

OP, as it stands, the answer to your question is no, unless you wanted to learn C for CUDA or Brook+ (for ATI Stream) and recode the entire application to run on the GPU. (Unless I'm missing something very drastic...)

pureh@te, how would it be impossible? If you recode cowpatty to utilize CUDA, it could crack the hashes quite a bit faster. As it is, it would be impossible, but you could recode it to make it faster, right?
...snip...

Well, here's what pyrit and CUDA get you: the ability to quickly grind through the thousands of rounds of SHA-1 that encode each candidate passphrase. This is how the WPA passphrase is bound into the 4-way handshake. Now, since we don't actually know the correct passphrase (actually we should, as we only test our own networks, right?), pyrit creates the hash tables when given a dictionary list.

Next, we allow cowpatty to do the simple math. In my testing cowpatty rips thru a typical hash table at about 120K passphrases per second. That's way faster than even pyrit can generate the tables, so cowpatty is not the slow poke.
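For comparison, the "simple math" left to cowpatty amounts to expanding a candidate PMK into a PTK and checking the handshake MIC. A rough Python sketch, assuming WPA2 (HMAC-SHA1 MIC); the function names are illustrative, not cowpatty's actual code:

```python
import hashlib
import hmac

def prf_512(pmk: bytes, aa: bytes, spa: bytes, anonce: bytes, snonce: bytes) -> bytes:
    # IEEE 802.11i PRF-512: expand the PMK into the 64-byte PTK using
    # the two MAC addresses and the two handshake nonces.
    data = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    out = b""
    for i in range(4):  # 4 x 20-byte HMAC-SHA1 outputs >= 64 bytes
        out += hmac.new(pmk, b"Pairwise key expansion\x00" + data + bytes([i]),
                        hashlib.sha1).digest()
    return out[:64]

def mic_matches(pmk: bytes, aa: bytes, spa: bytes, anonce: bytes, snonce: bytes,
                eapol: bytes, mic: bytes) -> bool:
    # A handful of HMAC calls per candidate -- cheap next to the 4096
    # PBKDF2 rounds, which is why cowpatty can burn through ~120K
    # precomputed entries per second.
    kck = prf_512(pmk, aa, spa, anonce, snonce)[:16]  # first 16 bytes = KCK
    # WPA2 uses HMAC-SHA1 truncated to 16 bytes; WPA/TKIP uses HMAC-MD5.
    return hmac.new(kck, eapol, hashlib.sha1).digest()[:16] == mic
```

Five short HMACs per table entry versus thousands of iterations per entry to build the table: that ratio is the whole argument in this thread.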

It's all about reducing the compute bottleneck. So while yes technically you could re-write cowpatty to use CUDA, why would you want to? Pyrit does the heavy-lifting, cowpatty does the easy part.

I mean, I'm not sure it can get any faster. Cowpatty is only single-threaded in the first place; you would be better off writing a whole new tool. It sounds good in theory, but you are still limited by certain hardware, your front-side bus for example. The OP is asking if we can use CUDA/pyrit to make a hash table under acceleration and then accelerate cowpatty in the same way to double the speed. This sounds good but is not really possible.

Ok, so it's a limitation in cowpatty itself that makes it impractical. Without a complete rewrite from the ground up, or a completely new tool (which it would be), you wouldn't really be able to get any benefit from CUDA.

If you had pre-computed hash tables, and all you needed to do was run cowpatty, then accelerating cowpatty (or a similar tool) wouldn't be a bad idea, right? I mean, if computing the hash tables was no longer a bottleneck, accelerating the simple math would make it considerably faster.

With the information you gave, pureh@te, it does seem unnecessary, though it would be fun to be able to rip through tables at 4 or 5 times that speed. As you said, though, you might run into hardware bottlenecks at that point, probably hard-drive throughput, unless you could afford enough high-end SSDs to hold all the hash tables you would have to pre-compute to make a new tool in place of cowpatty worth it.

I see what you mean, pureh@te. It's not that it's impossible, it's that there's no value in it.
