The ngram tokenizer first breaks text down into words whenever it encounters
one of a list of specified characters, then it emits
N-grams of each word of the specified
length.

N-grams are like a sliding window that moves across the word: a continuous
sequence of characters of the specified length. They are useful for querying
languages that don’t use spaces or that have long compound words, like German.
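To see the sliding window in action, you can run the tokenizer through the
_analyze API. With the defaults (min_gram: 1, max_gram: 2), the text
"Quick Fox" is sliced into one- and two-character grams:

```
POST _analyze
{
  "tokenizer": "ngram",
  "text": "Quick Fox"
}
```

This produces the terms [ Q, Qu, u, ui, i, ic, c, ck, k, "k ", " ", " F", F,
Fo, o, ox, x ]. Note that the grams run straight across the space, because by
default no character classes are excluded from tokens (see below).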

The token_chars parameter lists the character classes that should be included
in a token. Elasticsearch will split on characters that don’t belong to the
classes specified. It defaults to [] (keep all characters).

Character classes may be any of the following:

letter — for example a, b, ï or 京

digit — for example 3 or 7

whitespace — for example " " or "\n"

punctuation — for example ! or "

symbol — for example $ or √
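For example, the following sketch configures a custom ngram tokenizer that
keeps only letters and digits in tokens and emits tri-grams (the names
my_index, my_analyzer and my_tokenizer are placeholders):

```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3,
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  }
}
```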

It usually makes sense to set min_gram and max_gram to the same
value. The smaller the length, the more documents will match but the lower
the quality of the matches. The longer the length, the more specific the
matches. A tri-gram (length 3) is a good place to start.
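Running the analyzer configured above shows the combined effect of tri-grams
and token_chars:

```
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
```

This yields [ Qui, uic, ick, Fox, oxe, xes ]. The digit 2 is shorter than
min_gram, so it produces no gram at all, and the space and period are dropped
because whitespace and punctuation are not listed in token_chars.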