jim_neophyte has asked for the
wisdom of the Perl Monks concerning the following question:

i am reading the "Perl Cookbook", at the bottom of page 449, and i am confused about the use of the values and keys functions with respect to the order in which their results are returned.

i thought the ordering of keys/values is, or will be, random, i.e. the values function gathers the values and returns them in random order, and then the keys function gathers the keys and returns them in random order.

my understanding is that the order in which keys and values are returned is affected by insertion order. i also thought that, soon if not already, the returned order would be further randomized for some sort of security reason.

if i understand the following code, the keys function and the values function have to operate on the hash at the same time. will the following really work?

The values are returned in an apparently random order. The actual random order is subject to change in future versions of perl, but it is guaranteed to be the same order as either the "keys" or "each" function would produce on the same (unmodified) hash.

keys, values and each will all return values in bucket order. That order is not predictable by you, but it is not truly random. Your example won't work because it's a syntax error (and if the syntax error were fixed, it would be trying to assign things to where they already are). But yes, keys and values will return corresponding lists.
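That guaranteed correspondence is what makes it safe to pair the two lists up element for element, for instance to invert a hash. A minimal sketch with made-up data (not the code from the original question, which isn't shown here):

```perl
use strict;
use warnings;

my %color_of = (apple => "red", banana => "yellow", plum => "purple");

# keys and values traverse the same unmodified hash in the same order,
# so the n-th value belongs to the n-th key. That lets us build the
# reverse mapping with a single hash-slice assignment:
my %fruit_of;
@fruit_of{ values %color_of } = keys %color_of;

print "$fruit_of{red}\n";   # apple
```

This only holds if the hash is not modified between the two calls; insert or delete a key in between and all bets are off.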

To understand the answers, you need to know how a hash works internally. Internally a hash has a set of buckets. There is a function (aka a hash function) that decides what bucket each key should go into. Ideally the assignment of keys to buckets will look random, so if you have enough buckets for your keys, then no bucket has very many keys. But in fact it is deterministic. That means that inserting/retrieving/deleting are always fast, because you only have to work with the handful of keys in a bucket. (Technical note, Perl changes the number of buckets if the hash gets too many keys, thereby keeping the number of keys/bucket down. This operation is known as a "hash split" and is expensive. But it is also rare, and the cost of this operation averages out to a constant per insert. Perl does not try to reclaim memory if a hash shrinks after having grown.)

"i thought the ordering of keys/values are or will be random"

No. Perl walks the buckets in order, and for each bucket walks the contents in order. Since Perl does this the same way for both keys and values, the order will match between them.
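A quick way to convince yourself that the two traversals line up, using throwaway data:

```perl
use strict;
use warnings;

my %h = map { $_ => uc $_ } 'a' .. 'e';

my @k = keys %h;     # some bucket order
my @v = values %h;   # the same traversal, so the same order

# Each value should sit at the same position as its key.
for my $i (0 .. $#k) {
    die "order mismatch at $i" unless $h{ $k[$i] } eq $v[$i];
}
print "keys and values line up\n";
```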

"my understanding is that the order of keys and values being returned is affected by insertion order."

Yes. The assignment of keys to buckets is not affected by insertion order, but the order of keys within a bucket can be. (OK, I lied there. In at least some versions of Perl, the order in which keys are added can determine whether or not a hash split has happened.)

"i also thought that soon if not already the order returned is further randomized for some sort of security thing."

Yes. In recent versions of Perl, the hashing function that is used changes every time you run Perl. This is to prevent people from sending you carefully constructed data that forces all of your keys into one bucket. Since they can't know which hashing function you're using, they have no way to construct a malicious dataset except by accident (and the odds against that are high).
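On perls with this randomization (5.18 and later), you can see it by fixing the seed: with the PERL_HASH_SEED environment variable set to 0 (documented in perlrun), two separate perl processes should traverse the same hash in the same order. A small sketch, assuming a perl recent enough to honor these variables:

```perl
use strict;
use warnings;

# Fix the hash seed and disable key-order perturbation in child perls.
$ENV{PERL_HASH_SEED}    = "0";
$ENV{PERL_PERTURB_KEYS} = "0";

my $prog = 'my %h = map { $_ => 1 } "a".."j"; print join(",", keys %h), "\n";';

# Run the same program in two fresh interpreters; normally each run
# would pick its own seed and the orders would usually differ.
my $one = `$^X -e '$prog'`;
my $two = `$^X -e '$prog'`;

print $one eq $two ? "same order\n" : "different order\n";
```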

"...will the following really work?"

Yes. That is because Perl actually runs values first to generate the list of values, then keys to generate the list of variables to assign to, and then proceeds to assign the one to the other.
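That evaluation order is also what makes rename-all-keys idioms work: the values are flattened into the assignment's right-hand list before any new slice entries are created. A hedged sketch with made-up data (again, not the original poster's snippet):

```perl
use strict;
use warnings;

my %h = (alpha => 1, beta => 2);

# values %h is materialized first, so the old values survive even
# though the slice on the left creates brand-new uppercase keys.
@h{ map uc, keys %h } = values %h;

# Then drop the old lowercase keys.
delete @h{ grep /[a-z]/, keys %h };

print "$h{ALPHA} $h{BETA}\n";   # 1 2
```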

I've wondered about the security aspects of changing the hashing function.

Whilst I acknowledge that the order of items retrieved from a hash possibly poses a security risk, I can't actually think of any practical area where randomising the hash function actually helps.

Surely if someone is putting together a hash that is at risk of attack then they should filter the data somehow?

Wouldn't a more fixed hashing function be of greater benefit? Are there any programmers today who take advantage of the hash output order being consistent across executions?

Note that filtering against this attack is virtually impossible: without extensive analysis you won't know what could possibly be a problem, and it can affect any hash at all that receives lots of data. Hashes are documented to be fast, and it is Perl's job to make them work out that way.

As for people relying on the order from the hash, I'd consider breaking that to mostly be a benefit. Anyone who relied on hash order being consistent was guaranteeing that their code would break when you change versions of Perl. (Perl's hash function changed fairly frequently, though admittedly not as often as it does now.) With the new change, people catch their mistake earlier. A real example of this mistake that I believe bit Ovid was a poorly written test that assumed the order in which keys came back out from a hash.

Though, admittedly, it did cause a few problems for people who checked whether they had the same hash as before by using Storable to stringify the hash and then doing a string compare with the old result. However, you can fix that by setting $Storable::canonical to a true value.
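With $Storable::canonical set, Storable sorts the keys before serializing, so equal hashes always freeze to identical strings regardless of bucket order. A small demonstration with invented data:

```perl
use strict;
use warnings;
use Storable qw(freeze);

# Two hashes with the same contents, built in different insertion orders.
my %a = (x => 1, y => 2, z => 3);
my %b = (z => 3, x => 1, y => 2);

# Without canonical ordering the frozen strings may differ from run
# to run; with it, they are byte-for-byte identical.
$Storable::canonical = 1;

print freeze(\%a) eq freeze(\%b) ? "match\n" : "differ\n";   # match
```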

I think I understand what you're asking regarding the apparent same-time operation of keys and values in your line of code.
What actually happens with keys or values is that the full list of the relevant info (keys or values) is created all at once. keys and values are not really iterators; they can appear to be, but the list is built in one go.

On the other hand, each is a true iterator, and uses the hash's internal position iterator (can't think of the real name) to keep track of where it was.

This results in an infinite loop because each time through the while loop, each gives back the next key-value pair, where "next" means consulting the hash's iterator. But the inner foreach loop calls keys on the same hash as the outer loop. keys and values both automatically reset the iterator, build their return list by iterating through the whole hash, and, having reached the end, reset the iterator again for next time. So when each is called again, the iterator says to return the first key-value pair.
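The loop being described isn't shown above, so this is a reconstructed guess at its shape, along with a safe rewrite:

```perl
use strict;
use warnings;

my %h = (a => 1, b => 2, c => 3);

# Buggy shape (do not run): keys %h inside the loop body resets the
# very iterator that each %h is using, so each always restarts at the
# first pair and the while loop never terminates.
#
#   while ( my ($k, $v) = each %h ) {
#       for my $k2 (keys %h) { ... }   # resets %h's iterator!
#   }

# Safe rewrite: the outer loop takes a snapshot of the keys up front,
# so the hash's internal iterator is no longer involved at all.
my $pairs = 0;
for my $k (keys %h) {
    for my $k2 (keys %h) { }   # harmless now; outer loop owns its own list
    $pairs++;
}
print "visited $pairs pairs\n";   # visited 3 pairs
```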