An implementation of extensible hash tables, as described in
Per-Åke Larson, Dynamic Hash Tables, CACM 31(4), April 1988,
pp. 446--457. The implementation is also derived from the one
in GHC's runtime system (ghc/rts/Hash.{c,h}).

Note that insert doesn't remove any old entry for the same key -
the behaviour is like an association list: lookup returns the
most-recently-inserted mapping for a key. This keeps insert as
efficient as possible. If you need to replace an existing mapping,
use update instead.
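The association-list behaviour can be pictured with a pure sketch.
The names insertAL, lookupAL and updateAL (and the list-of-pairs
bucket) are illustrative, not the library's API; the real table keeps
its buckets in mutable state:

```haskell
-- A bucket modelled as an association list (a sketch, not the
-- library's representation).
type Bucket k v = [(k, v)]

-- insert conses the new pair on the front and never scans for an
-- old entry, so it is O(1).
insertAL :: k -> v -> Bucket k v -> Bucket k v
insertAL k v b = (k, v) : b

-- lookup returns the first match, i.e. the most recent insert.
lookupAL :: Eq k => k -> Bucket k v -> Maybe v
lookupAL = lookup

-- update replaces any existing mapping instead of shadowing it.
updateAL :: Eq k => k -> v -> Bucket k v -> Bucket k v
updateAL k v b = (k, v) : filter ((/= k) . fst) b

main :: IO ()
main = do
  let b = insertAL "x" (2 :: Int) (insertAL "x" 1 [])
  print (lookupAL "x" b)  -- prints Just 2: the newer mapping shadows the older
```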

This implementation of hash tables uses the low-order n bits of a
key's hash value to choose a bucket, where n grows with the table.
A good hash function should therefore give an even distribution
regardless of n.

If your keys are integral and their low-order bits vary widely
between keys, you may be able to use fromIntegral directly as the
hash function.
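The indexing scheme can be sketched as follows; bucketIndex is an
assumed name for illustration, not part of the library's interface:

```haskell
import Data.Bits (shiftL, (.&.))
import Data.Int (Int32)

-- Keep only the low-order n bits of the hash, giving an index into
-- a table of 2^n buckets (a sketch of the scheme described above).
bucketIndex :: Int -> Int32 -> Int32
bucketIndex n h = h .&. ((1 `shiftL` n) - 1)

main :: IO ()
main = do
  -- For integral keys with variable low-order bits, fromIntegral
  -- alone can serve as the hash function:
  let hash = fromIntegral :: Int -> Int32
  print (map (bucketIndex 3 . hash) [0 .. 9])  -- prints [0,1,2,3,4,5,6,7,0,1]
```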

A sample (and useful) hash function for Int and Int32 can be
implemented by extracting the uppermost 32 bits of the 64-bit
result of multiplying the key by a 33-bit constant. The constant is
from Knuth, derived from the golden ratio.

Knuth argues that repeated multiplication by the golden ratio
will minimize gaps in the hash space, which also makes it a good
choice for combining multiple keys into one.
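That multiply-and-extract step can be reconstructed from the
description above. This is a sketch under those assumptions, not the
library's exact code, which may mix the result further:

```haskell
import Data.Bits (shiftR)
import Data.Int (Int32, Int64)

-- 33-bit constant derived from the golden ratio, per the
-- description above; numerically this is 5308871539.
golden :: Int64
golden = round ((sqrt 5 - 1) * 2 ^ (32 :: Int) :: Double)

-- Multiply in 64 bits and keep the uppermost 32 bits of the product.
hashInt :: Int32 -> Int32
hashInt x = fromIntegral ((fromIntegral x * golden) `shiftR` 32)

main :: IO ()
main = print (map hashInt [1, 10, 100, 1000])
```

Note that for small keys this alone gives roughly 1.236 times the key;
it is the combination with the growing low-order-bit mask, and further
mixing, that makes the scheme useful in practice.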

For strings, individual character codes are often small, so using
ord c alone produces frequent collisions; short strings of low
ASCII and ISO-8859-1 characters are a particular problem. We
pre-multiply by a magic twiddle factor to obtain a good
distribution.
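A string hash along those lines can be sketched as below. The
twiddle factor 0xdeadbeef and the accumulator multiplier 31 are
illustrative choices, not the library's exact constants:

```haskell
import Data.Char (ord)
import Data.Int (Int32)
import Data.List (foldl')
import Data.Word (Word32)

-- Fold the characters together, pre-multiplying each code point by
-- a twiddle factor so that small ord values still spread across the
-- 32-bit range (a sketch; constants are assumptions).
hashString :: String -> Int32
hashString = foldl' step 0
  where
    magic = fromIntegral (0xdeadbeef :: Word32) :: Int32
    step h c = fromIntegral (ord c) * magic + h * 31

main :: IO ()
main = print (map hashString ["a", "b", "ab", "ba"])
```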

This function is useful for checking whether your hash function
works well for your data set. It returns the longest chain of
key/value pairs in the table whose keys all hash to the same
bucket. If this chain is particularly long (say, longer than 14
elements or so), it might be worth trying a different hash
function.
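The diagnostic can be reimplemented as a pure sketch: bucket each
pair by the low-order n bits of its key's hash and return the largest
bucket. This is a standalone reconstruction, not the library's code,
which walks the table's real buckets:

```haskell
import Data.Bits (shiftL, (.&.))
import Data.Function (on)
import Data.Int (Int32)
import Data.List (groupBy, maximumBy, sortOn)
import Data.Ord (comparing)

-- Longest chain for a table of 2^n buckets (illustrative sketch).
longestChain :: (k -> Int32) -> Int -> [(k, v)] -> [(k, v)]
longestChain hash n pairs =
  case chains of
    [] -> []
    _  -> maximumBy (comparing length) chains
  where
    mask   = (1 `shiftL` n) - 1 :: Int32
    tagged = [ (hash k .&. mask, (k, v)) | (k, v) <- pairs ]
    chains = map (map snd) (groupBy ((==) `on` fst) (sortOn fst tagged))

main :: IO ()
main =
  -- With a 2-bucket table, keys 0 and 2 collide in bucket 0:
  print (longestChain id 1 [(0 :: Int32, 'a'), (2, 'b'), (3, 'c')])
```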