In all cases above, the two "identical" hashes were arrived at through different sequences of operations; and that difference in construction history manifests itself as a different iteration order.

But that has always been the case!

The above is 5.10; but the same is also true going right back to my involvement with perl: 5.6.1.

The order returned by 5.12.4 is what you should see on pretty much every modernish perl that has been released, with the exception of 5.8.1 and of 5.17.6 and later. And obviously in 5.17.6 the order changes on pretty much every run.

What we discover when we randomize the keys per process is that people actually depend on the key order more than they realize. When we make it random, these dependencies become visible as bugs. I tend to consider such code buggy to begin with, as minor changes to the history of the hash will produce roughly the same effect as per-process randomization.
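A minimal sketch of the kind of history dependence being described (variable names are illustrative; the exact behavior varies with Perl version and build): two hashes that end up holding the same keys can still iterate in different orders, because deletes do not shrink the internal bucket array, so the grown-then-shrunk hash keeps a larger one.

```perl
use strict;
use warnings;

my %a;
@a{ 'a' .. 'e' } = ();      # 5 keys, built directly: small bucket array

my %b;
@b{ 'a' .. 'z' } = ();      # 26 keys force the bucket array to grow...
delete @b{ 'f' .. 'z' };    # ...then shrink back to the same 5 keys

# Identical key sets:
print join( ',', sort keys %a ), "\n";   # a,b,c,d,e
print join( ',', sort keys %b ), "\n";   # a,b,c,d,e

# But (keys %a) and (keys %b) may list those keys in different
# orders, because the two hashes have different histories.
```

Code that compares `keys %a` to `keys %b` positionally, or stores the unsorted order somewhere, works by accident here, which is exactly the class of bug per-process randomization surfaces deliberately.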

BTW, you *did* see that I said "none of this is new" right? So why the emphasis on "But that has always been the case"?

Because, until the simple example in your latest post, all the previous examples demonstrated things that have always been true. Thus, they do not demonstrate what changed. Which, when combined with the phrasing of the OP ...

But never mind. I'm not trying to get on your case here; just trying to work out what has actually changed, and a) how it might affect my existing code; and, more importantly, b) how it might affect how I think of and use hashes.

My conclusion so far -- for me personally; not the world in general you are addressing -- is that I have assumed the "new" constraints as a matter of course ever since the randomisation fix for the Algorithmic Complexity Attack that was (briefly???) implemented in 5.8.1.

However, what would be most useful to me -- and others I'm sure -- is a description of what has actually changed internally; and why it has been changed. Are you up for providing that description?

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.

"Science is about questioning the status quo. Questioning authority".

In the absence of evidence, opinion is indistinguishable from prejudice.

that I have assumed the "new" constraints as a matter of course ever since the randomisation fix for the Algorithmic Complexity Attack that was (briefly???) implemented in 5.8.1.

Alas not everyone has been as diligent as you. :-) It is surprising how many real bugs this found.

what has actually changed internally

Ok, first some history. In 5.8.1 a patch very similar to the one I have been working on was implemented. It broke lots of stuff, which was considered unacceptable for a minor release. So a new implementation was done. This implementation actually supported two types of hash and two seeds: one constant, determined at build time, and one random, per process. By default hashes would use the constant seed, but when Perl noticed too many collisions in a bucket it would trigger a "rehash" using the random per-process seed, which would cause the hash values of all of its keys to be recalculated and would, as a byproduct, cause the hash's keys to be removed from the shared string table.

All of this consumed processing time, and added code complexity.

5.17.6 returned things to roughly where they were in 5.8.1. The rehash mechanism and all overheads associated with it are removed. The hash seed is randomly initialized per process. etc.
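A tiny illustration of what that per-process seed means in practice (the hash contents here are arbitrary): the same program can print its keys in a different order on almost every run, so any code that relies on a stable order must impose one explicitly.

```perl
use strict;
use warnings;

my %h = ( one => 1, two => 2, three => 3, four => 4 );

# Under a per-process random seed (5.17.6+, and briefly 5.8.1),
# this order can change from one run of the program to the next:
print join( ' ', keys %h ), "\n";

# To get a deterministic order, sort explicitly:
print join( ' ', sort keys %h ), "\n";   # four one three two
```

This is the behavior that turns latent order dependencies into visible, per-run-varying bugs.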

Somewhat related: the actual hash function in 5.17.6 is different from the one in 5.17.5, and we will probably use yet another hash function in 5.18.

And if I have my way, hashes will be randomized at a per-hash level as well. (So every hash would have its own order, regardless of what keys it stores or the history of the hash.)