Why do hash functions use prime numbers?

In a previous post I pointed out how questions posted on reward-based discussion sites like stackoverflow.com never get answered satisfactorily. This post is a look at one such feeble answer, and an effort to explain a basic question about hashes in more detail.

The question, taken from the documentation of Java's String.hashCode, goes: the hash is computed as s[0]*31^(n-1) + s[1]*31^(n-2) + … + s[n-1] using int arithmetic, where s[i] is the ith character of the string, n is the length of the string, and ^ indicates exponentiation. Why is 31 used as a multiplier? So why not 29, or 37, or even 97?

The partially wrong answer

The value 31 was chosen because it is an odd prime. If it were even and the multiplication overflowed, information would be lost, as multiplication by 2 is equivalent to shifting. The advantage of using a prime is less clear, but it is traditional. A nice property of 31 is that the multiplication can be replaced by a shift and a subtraction for better performance: 31 * i == (i << 5) - i. Modern VMs do this sort of optimization automatically.
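The shift-and-subtract identity quoted in that answer is easy to verify for yourself. A quick check (the class name is mine, and the range of values tested is arbitrary):

```java
public class ShiftCheck {
    public static void main(String[] args) {
        // 31 * i == i*32 - i == (i << 5) - i. This holds even under overflow,
        // because int multiplication and left shifts both wrap modulo 2^32.
        for (int i = -100_000; i <= 100_000; i++) {
            if (31 * i != (i << 5) - i)
                throw new AssertionError("mismatch at " + i);
        }
        System.out.println("31 * i == (i << 5) - i verified");
    }
}
```

As the answer says, you rarely need to write this by hand; the JIT compiler performs strength reduction of this kind automatically.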

NOTE: Primes are part of an older technique for hashing, and there are supposedly better hashes that use more sophisticated techniques. I do not know the mathematical foundations for these, so I can't vouch for them, but it's true that the prime-number technique is quite old.

The correct but longer explanation

The reason 31 is chosen is that it is prime, not because it is odd. A prime is divisible only by 1 and itself; since that rules out 2 as a divisor, every prime greater than 2 is odd as well. But the real reason is its primeness, plus one other factor which we shall go into shortly.
So why a prime?

Hashing works by allocating a value into one of the many storage slots a hash table has, for fast retrieval later on. These storage slots are also termed buckets in computer-science literature, for reasons which will become clear later on.

Now, how does the hash identify which bucket it needs to store the value in? This is an important question, because the defining property of a hash is that it must be able to tell you in constant time (which is hopefully fast) which bucket a value is stored in.

The slooow hash find logic

// Sequential scan over every stored key – O(n) in the number of keys
int findKey(String[] keys, int totalKeys, String target)
{
    for (int i = 0; i < totalKeys; i++)
    {
        if (keys[i].equals(target))
            return i;
    }
    return -1; // not found
}
This sort of sequential search causes hash performance to worsen in direct proportion to the number of values it contains. In other words, you get a linear cost, O(n), which becomes progressively worse as the number of keys n grows. The other complication is the actual type of the value you are dealing with: if you are dealing with strings and other such complex types, each equality comparison itself becomes prohibitively expensive.

Now we have 2 problems:

1. Complex values which are difficult to compare

2. Sequential searches are not fast and cannot give constant-time lookups

Solving the complex value problem

The easy way out for this issue is to derive a way to decompose complex values into a numeric key, or hash, that is easy to work with. Ideally we would generate a UNIQUE number from each value, since we want to distinguish one value from another; in practice we settle for numbers that are as close to unique as we can get. This UNIQUE side of things is where primes come in handy.

Primes

Primes are special numbers. A product involving a prime can be broken down in fewer ways than a product of composite numbers: 15 = 3 * 5 can only be factored as 1 * 15 or 3 * 5, whereas 32 = 4 * 8 can arise as 1 * 32, 2 * 16, or 4 * 8. So the product of a prime with another number has a better chance of being distinct from other products (not as unique as the prime itself, of course). This property is used in hashing functions.

Given a string “Samuel”, you can generate a reasonably distinct hash by multiplying each of the constituent characters by powers of a prime number and adding them up. This is why primes are used.
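A minimal sketch of such a hash, following the same polynomial shape as Java's own String.hashCode (the method name primeHash is my own):

```java
public class PrimeHash {
    // Computes s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1],
    // the same polynomial Java's String.hashCode uses.
    static int primeHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // Horner's rule: multiply by the prime, then add
        }
        return h;
    }

    public static void main(String[] args) {
        // Should agree with the built-in implementation
        System.out.println(primeHash("Samuel") == "Samuel".hashCode());
    }
}
```

Note that Horner's rule lets us accumulate the powers of 31 with one multiplication per character instead of computing 31^k explicitly.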

However, using primes is an old technique. The key here is to understand that as long as you can generate a sufficiently unique key, you can move to other hashing techniques too. Go here for more on this topic of hashes without primes.

Now why is 31 used?

OK, so now you have your unique identifier generated for each value. How do you allocate this value to one of the buckets or containers you have?

Let's say your container is a fixed array of 16 slots. The easiest way to decide where to stick a value is to use its generated unique number, from now on referred to as the hash key, directly as the location.

So if your hash key is 3, you stick the value in location number 3. But of course, what happens when the hash key is bigger than your container size? For this reason, the very first hashing algorithms used to divide the generated hash key by the number of storage locations and stick the value in the location pointed to by the remainder. (This in turn favored making the number of storage locations in a hash table a prime number, so as to spread the remainder values well.)
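The remainder-based placement above can be sketched as follows (the class and method names are mine, and the bucket count of 16 is just the example size from the text):

```java
public class BucketDemo {
    // Maps an arbitrary hash key into one of numBuckets slots.
    static int bucketFor(int hash, int numBuckets) {
        // Math.floorMod, unlike %, never returns a negative index,
        // which matters because Java hash codes can be negative
        return Math.floorMod(hash, numBuckets);
    }

    public static void main(String[] args) {
        int buckets = 16;
        System.out.println(bucketFor(3, buckets));                 // small keys map directly
        System.out.println(bucketFor("Samuel".hashCode(), buckets)); // big keys wrap around
    }
}
```

A key of 3 lands in slot 3 as described, while any key of 16 or more wraps back into the 0–15 range.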

Well, when you stick your keys into the containers, you get a different hash key depending on the prime used, and therefore a different distribution of the keys in your array.

So the same string “abc” would hash to a*31^2 + b*31 + c with prime 31, and it would generate a different key with 29: a*29^2 + b*29 + c.

Since the key produced is different, the data would go into a different location depending on the prime used.

Researchers found that using a prime of 31 gives a better distribution of the keys and fewer collisions. No one knows exactly why, as far as I know; I had this question answered by Chris Torek himself, who is generally credited with coming up with the 31 hash, on the C++ or C mailing list a while back.

Collisions

There is a chance that certain strings will cause the same key to be generated. In such cases, the individual hash slot is turned into a linked list, or some other kind of storage that can hold all the keys that collided. This is the reason why an individual hash slot is called a bucket.
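A toy illustration of this bucket-per-slot idea (separate chaining); the class is my own sketch, not a production map. The strings "Aa" and "BB" happen to produce the same Java hash code, so they land in the same bucket:

```java
import java.util.ArrayList;
import java.util.List;

public class ChainedSet {
    private final List<List<String>> buckets;

    ChainedSet(int size) {
        buckets = new ArrayList<>();
        for (int i = 0; i < size; i++) buckets.add(new ArrayList<>());
    }

    private List<String> bucketOf(String key) {
        return buckets.get(Math.floorMod(key.hashCode(), buckets.size()));
    }

    void add(String key) {
        List<String> bucket = bucketOf(key);
        if (!bucket.contains(key)) bucket.add(key); // colliding keys chain up in one list
    }

    boolean contains(String key) {
        // Only the one bucket is scanned, not the whole table
        return bucketOf(key).contains(key);
    }
}
```

Lookup still degrades to a linear scan inside a single bucket, which is why a good key distribution (few collisions) matters so much.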

Better hashing algorithms

But the modulo hashing algorithm we just looked at is extremely simplistic. There are much more complex and much faster hash algorithms that can be used. There are even tools that will generate a hash function for you, given the type of keys you use.

Here are some fast and better distributing hash functions / modules if you wish to improve on the ones you have –

Comments

“Primes are unique numbers. They are unique in that, the product of a prime with any other number is never the same when you change the number you multiple it with.”

Is “multiple” a typo (multiply instead)?
Anyway, I think the sentence is true for any 2 given numbers (neither of them has to be a prime), i.e. given a fixed number A (not necessarily a prime, but not 0) and another number B, their product A*B will always be different from A*C, where C is not B

The key idea is “Primes are unique numbers”. For example, 4 * 8 has more possibilities of colliding than a product of primes like 3 * 5: 32 can be reached as 1*32 or 2*16 or 4*8 or 2^5, etc., but you can get 15 only as 1*15 or 3*5.

I think that’s why weighting was mentioned right at the beginning: multiply the first character by 1, the second by 31, the third by 31^2, and so on. However, this might result in an overflow, as the numbers can get very large. From experience in Java, when integers overflow they become negative, so I just test whether my hash key is negative and, if so, multiply it by -1.

[…] Because you want the number you are multiplying by and the number of buckets you are inserting into to have orthogonal prime factorizations. Suppose there are 8 buckets to insert into. If the number you are using to multiply by is some multiple of 8, then the bucket inserted into will only be determined by the least significant entry (the one not multiplied at all). Similar entries will collide. Not good for a hash function. 31 is a large enough prime that the number of buckets is unlikely to be divisible by it (and in fact, modern java HashMap implementations keep the number of buckets to a power of 2). https://computinglife.wordpress.com/2008/11/20/why-do-hash-functions-use-prime-numbers/ […]