After determining whether a document matches a given query, a score must be calculated that indicates how well the document matches the query. The Similarity class is used to judge how "similar" the query and the document are to each other; the closer the resemblance, the higher the document scores.

After a field is broken up into terms at index time, each term must be assigned a weight. One factor in calculating this weight is the number of tokens the original field was broken into.

Typically, we assume that the more tokens a field contains, the less important any one of them is -- so that, e.g., 5 mentions of "Kafka" in a short article carry more heft than 5 mentions of "Kafka" in an entire book. The default implementation of length_norm expresses this with an inverse square root: the norm is 1 divided by the square root of the field's token count.
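To make the effect concrete, here is a minimal sketch of that inverse-square-root norm in Python. The name length_norm comes from the text above; the exact signature, the token counts, and the surrounding scoring arithmetic are assumptions for illustration only.

```python
import math

def length_norm(num_tokens: int) -> float:
    """Inverse square root of the field's token count (assumed signature).

    The more tokens a field contains, the smaller the norm, so each
    individual term occurrence carries less weight.
    """
    return 1.0 / math.sqrt(num_tokens)

# Five mentions of "Kafka" in a short article vs. an entire book
# (field lengths are made up for the example):
article_weight = 5 * length_norm(500)       # 5 / sqrt(500)    ~= 0.224
book_weight    = 5 * length_norm(100_000)   # 5 / sqrt(100000) ~= 0.016
```

Under this sketch, the same five mentions contribute roughly fourteen times more in the short article than in the book.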

However, the inverse square root tends to reward very short fields heavily, which isn't always appropriate for fields that you expect to contain many tokens on average.
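One way to dampen that effect is to clamp the token count to a minimum before taking the square root, so every field shorter than some pivot length receives the same norm. This is an illustrative alternative, not the library's built-in behavior; the flattened_length_norm name and the pivot parameter are hypothetical.

```python
import math

def flattened_length_norm(num_tokens: int, pivot: int = 300) -> float:
    """Treat any field shorter than `pivot` tokens as if it had exactly
    `pivot` tokens, removing the bonus for very short fields while keeping
    the inverse-square-root decay for longer ones. (Hypothetical example;
    `pivot` is not part of the original API.)
    """
    return 1.0 / math.sqrt(max(num_tokens, pivot))
```

With this variant, a 10-token title and a 250-token abstract receive the same norm, while a 10,000-token body is still penalized relative to both.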