I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. More specifically, what is the best hash algorithm for this purpose? For example, I was thinking of having it compute the hash of each file's contents and then group together the files whose hashes match. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

You don't say what the files are. If the files you are looking at are of varying length, then you START by partitioning on length, since two files cannot be identical if their lengths are different.
– John R. Strohm, Jun 25 '13 at 6:20


It's ironic that a question about duplicates is duplicated.
– user61852, Jun 25 '13 at 19:15

2 Answers

The fastest way is just to compare the hash codes of files that have the same size.
This is the idea of this answer on SO (see the second command line and its explanations).
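A minimal sketch of that idea in Python, assuming a flat list of file paths as input (the function and variable names here are mine, not from the linked answer):

    import hashlib
    from collections import defaultdict
    from os.path import getsize

    def duplicate_groups(paths):
        # Only files of equal size can possibly be duplicates.
        by_size = defaultdict(list)
        for path in paths:
            by_size[getsize(path)].append(path)

        duplicates = []
        for same_size in by_size.values():
            if len(same_size) < 2:
                continue  # a unique size cannot have a duplicate
            by_hash = defaultdict(list)
            for path in same_size:
                h = hashlib.md5()
                with open(path, "rb") as f:
                    # Read in 1 MiB chunks so large files do not fill RAM.
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                by_hash[h.hexdigest()].append(path)
            duplicates.extend(g for g in by_hash.values() if len(g) > 1)
        return duplicates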

There is no security issue in detecting duplicate files, so I would recommend a fast hash algorithm. For instance, the project ccache uses MD4:

ccache uses MD4, a very fast cryptographic hash algorithm, for the hashing.
(MD4 is nowadays too weak to be useful in cryptographic contexts,
but it should be safe enough to be used to identify recompilations.)
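If you want to try MD4 from Python, hashlib can expose it through OpenSSL, but that is an assumption about your build: many recent OpenSSL configurations disable MD4, so a sketch should include a fallback:

    import hashlib

    def fast_hash(data: bytes) -> str:
        try:
            # MD4 is provided by OpenSSL on some builds only.
            h = hashlib.new("md4")
        except ValueError:
            # Fall back to MD5, which is also fast and always available.
            h = hashlib.md5()
        h.update(data)
        return h.hexdigest()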

If two files have the same size and the same hash code, they are probably equal. But there is still a small chance that the two files are different (except if the file size is less than the hash code size).

As you imply in your question, false positives can happen more frequently as files get larger.
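To put a rough number on that "small chance": for an ideal n-bit hash, two specific distinct files collide with probability about 2^-n, and the birthday bound covers a whole collection. A back-of-the-envelope calculation (the one-million file count is just an assumed example):

    # Approximate collision odds for an ideal 128-bit hash such as MD4/MD5.
    n_bits = 128
    pair_collision = 2.0 ** -n_bits              # two specific files
    n_files = 1_000_000
    pairs = n_files * (n_files - 1) / 2
    any_collision = pairs * pair_collision       # birthday bound
    print(pair_collision)   # ~2.9e-39
    print(any_collision)    # ~1.5e-27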

There are two options to fix the large-file issue:

Use a second hash code for large files (e.g. MD4 and MD5).

Use a dynamic-length hash code.

The size threshold above which a file is large enough to require a second check depends on how critical your application is.
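A minimal sketch of the first option: the answer suggests MD4 plus MD5, but since MD4 is not reliably available in Python's hashlib, this sketch pairs MD5 with SHA-1 instead, and the 1 GiB threshold is an arbitrary assumption:

    import hashlib
    import os

    LARGE_FILE_THRESHOLD = 1 << 30  # 1 GiB; tune to your application

    def content_hashes(path):
        """Return one hash for small files, two for large ones."""
        hashes = [hashlib.md5()]
        if os.path.getsize(path) >= LARGE_FILE_THRESHOLD:
            hashes.append(hashlib.sha1())  # independent second check
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                for h in hashes:
                    h.update(chunk)
        return tuple(h.hexdigest() for h in hashes)

Two large files are then treated as duplicates only if both digests match, which makes a simultaneous collision in both algorithms vanishingly unlikely.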

If you are optimising for developer time, you are on the right track; if you choose a decent hashing algorithm, collisions should be extremely unlikely (see Yanis' link). Beyond those, people typically use MD5 or SHA1 for hashing, although MD5 is not recommended if you are security conscious. I would go with something that is available out of the box in your programming environment, since implementing and maintaining a hashing algorithm yourself is unlikely to be worthwhile.

If you are worried about runtime performance, there are some things you can do to optimise the process. There are likely to be two slow areas: reading in all the data, and the hashing itself. To give you an idea, most hash algorithms (even the slower, cryptographic ones) can typically process a few hundred MB per second. So unless you are using a (very fast) SSD, the bottleneck is more likely to be disk IO, and you should try to minimise that first.

One idea would be to group files by size first and exclude any file with a unique size. Then hash the first few kB of each remaining file and use that to produce a list of potential matches (again, only comparing files of exactly the same size). You would then only need to compute the full hash of these potential matches, rather than of every file on the drive. Depending on the exact characteristics of the drive, this may be faster than simply reading everything in (unless there are a very large number of duplicates and we waste our time trying to exclude them, the worst-case scenario). This should work fairly well for typical workloads; with more knowledge of the actual environment, you could probably tune it much further.
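A minimal sketch of that three-stage filter, assuming a directory tree as input; the 4 kB prefix size and the choice of SHA-1 are illustrative assumptions:

    import hashlib
    import os
    from collections import defaultdict

    PREFIX_BYTES = 4096  # "first few kB"; tune for your drive

    def file_hash(path, limit=None):
        """Hash the whole file, or only its first `limit` bytes."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            if limit is not None:
                h.update(f.read(limit))
            else:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
        return h.hexdigest()

    def find_duplicates(root):
        # Stage 1: group by size and drop files with a unique size.
        by_size = defaultdict(list)
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    by_size[os.path.getsize(path)].append(path)
                except OSError:
                    pass  # unreadable file; skip it
        duplicates = []
        for paths in by_size.values():
            if len(paths) < 2:
                continue
            # Stage 2: cheap prefix hash to weed out near-misses.
            by_prefix = defaultdict(list)
            for path in paths:
                by_prefix[file_hash(path, PREFIX_BYTES)].append(path)
            # Stage 3: full hash only for the surviving candidates.
            for candidates in by_prefix.values():
                if len(candidates) < 2:
                    continue
                by_full = defaultdict(list)
                for path in candidates:
                    by_full[file_hash(path)].append(path)
                duplicates.extend(g for g in by_full.values() if len(g) > 1)
        return duplicates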