4. Reduce the 78 rules down to 64 by removing the least efficient rules

It is possible that two or more different rules crack the same hash, simply because our dictionary contains so many words. So what we focus on are the hashes that have been cracked by only one single rule. This will happen more or less often for each rule, so we add a counter per rule and increase it with each such hit. When we are finished, we simply sort out the rules with the lowest counters.
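The counting step above can be sketched in Python. This is only an illustration of the pruning idea, not atom's actual tooling; the rule names and cracked-hash sets in the example are made up.

```python
from collections import Counter

def prune_rules(cracks_per_rule, keep=64):
    """cracks_per_rule maps rule -> set of hashes that rule cracked.
    For each rule, count the hashes that ONLY that rule cracked, then
    keep the `keep` rules with the highest unique-crack counters."""
    # How many rules cracked each hash
    coverage = Counter(h for hashes in cracks_per_rule.values() for h in hashes)
    # Unique-crack counter per rule (hits no other rule achieved)
    unique = {
        rule: sum(1 for h in hashes if coverage[h] == 1)
        for rule, hashes in cracks_per_rule.items()
    }
    # Sort out the rules with the lowest counters
    ranked = sorted(unique, key=unique.get, reverse=True)
    return ranked[:keep]

# Hypothetical example: rule "l" uniquely cracks two hashes,
# "c" uniquely cracks one, "u" cracks nothing on its own.
cracks = {"l": {"h1", "h2", "h3"}, "u": {"h3"}, "c": {"h3", "h4"}}
print(prune_rules(cracks, keep=2))  # -> ['l', 'c']
```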

I have experienced the same issue as described by phish (but the original post seems to be deleted). I tried to simplify rules by doing some simple substitutions:

sXY@Y -> @X@Y

iNX@X -> @X

^X@X -> @X

$X@X -> @X
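To show why these substitutions look safe on paper, here is a naive Python reimplementation of the rule primitives involved. This is a sketch for illustration only, not hashcat's engine; as the thread goes on to show, the real engine has extra behavior (e.g. around out-of-range positions) that this model does not capture.

```python
def apply_rules(word, rules):
    """Apply a minimal subset of hashcat-style rule primitives.
    Supported: sXY (replace X with Y), @X (purge X), iNX (insert X
    at position N), ^X (prepend X), $X (append X)."""
    i = 0
    while i < len(rules):
        op = rules[i]
        if op == "s":                       # sXY: replace all X with Y
            word = word.replace(rules[i + 1], rules[i + 2])
            i += 3
        elif op == "@":                     # @X: purge all X
            word = word.replace(rules[i + 1], "")
            i += 2
        elif op == "i":                     # iNX: insert X at position N
            n = int(rules[i + 1])
            word = word[:n] + rules[i + 2] + word[n:]
            i += 3
        elif op == "^":                     # ^X: prepend X
            word = rules[i + 1] + word
            i += 2
        elif op == "$":                     # $X: append X
            word = word + rules[i + 1]
            i += 2
        else:
            raise ValueError(f"unsupported op {op!r}")
    return word

# Under this naive model the substitutions hold, e.g. sXY@Y == @X@Y:
word = "paXsYword"
print(apply_rules(word, "sXY@Y"))  # -> pasword
print(apply_rules(word, "@X@Y"))   # -> pasword
```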

But somehow this makes the rules less effective. For example, take the rules i42@2 and @2: they should be identical, but they're not. When I test them separately against the phpbb hashes, i42@2 gives me 7520 hits while @2 gives me only 264.

Oh and thanks to atom for compiling the new best64.rule! I have tested it against some random hashes with some random wordlists and got some very good results! I did spot a minor simplification you could do. It turns out that "}}D1{" can be simplified to a single "]".
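The "}}D1{" → "]" simplification can be sanity-checked by reimplementing the four primitives (} rotate right, { rotate left, DN delete at position N, ] delete last character) and comparing outputs. Again a sketch, not hashcat's engine, and it assumes words of length at least 2 so the position-1 delete is in range:

```python
def rotate_right(w): return w[-1] + w[:-1] if w else w   # } rule
def rotate_left(w):  return w[1:] + w[:1]                # { rule
def delete_at(w, n): return w[:n] + w[n + 1:]            # DN rule
def delete_last(w):  return w[:-1]                       # ] rule

def chain(w):
    """}}D1{ : rotate right twice, delete position 1, rotate left."""
    w = rotate_right(rotate_right(w))
    w = delete_at(w, 1)
    return rotate_left(w)

for word in ["abcd", "password", "xy"]:
    assert chain(word) == delete_last(word)
print("}}D1{ == ] for all tested words")
```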

Anyway thanks a lot! It is truly amazing how much the original rules could be improved upon!

The post was not deleted, just split into a separate thread, since this thread is about the challenge, not about strange rule-engine behaviors. Please read http://hashcat.net/forum/thread-1027.html, then you will hopefully understand why it works the way it works...