kneser_ney: smooths based on Kneser-Ney estimation (Kneser and Ney, 1995), a variant of absolute discounting (see the formula sketch following this list).

presmoothed: normalizes at each state based on the n-gram count of the history.

unsmoothed: normalizes the model but provides no smoothing.

See Chen and Goodman (1998) for a discussion of these smoothing methods.
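For orientation, here is a sketch of the interpolated Kneser-Ney estimate in the notation of Chen and Goodman (1998); the single fixed discount D is an expository assumption (with the --bins option described below, the discount can depend on the n-gram count):

  P_{KN}(w \mid h) = \frac{\max(c(hw) - D,\, 0)}{c(h)} + \frac{D \cdot N_{1+}(h\,\cdot)}{c(h)} \, P_{KN}(w \mid h')

Here h' is the history h with its first word removed, and N_{1+}(h\,\cdot) = |\{w : c(hw) > 0\}| is the number of distinct words observed after h. The Kneser-Ney variant differs from plain absolute discounting in that the lower-order distributions are estimated from continuation counts N_{1+}(\cdot\, w) rather than from raw counts.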

All of the smoothing methods can be used to build either a mixture model, in which higher-order n-gram distributions are interpolated with lower-order n-gram distributions, or a backoff model (using the --backoff option), in which lower-order n-gram distributions are used only if the higher-order n-gram was unobserved in the corpus. Even though some of the methods are typically associated with one regime or the other (e.g., Katz with backoff), in this library each can be used with either. Note that mixture models are converted to a backoff topology by pre-summing the mixtures and placing the mixed probability on the highest-order transition.
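In generic notation (the symbols below are expository, not the library's internal names), with \hat{P} the discounted higher-order estimate, h' the backed-off history, and \alpha(h), \gamma(h) weights chosen so that each distribution normalizes, the two topologies correspond to:

  P_{backoff}(w \mid h) = \begin{cases} \hat{P}(w \mid h) & \text{if } c(hw) > 0 \\ \alpha(h)\, P_{backoff}(w \mid h') & \text{otherwise} \end{cases}

  P_{mix}(w \mid h) = \hat{P}(w \mid h) + \gamma(h)\, P_{mix}(w \mid h')

Pre-summing evaluates P_{mix}(w \mid h) once for each observed n-gram hw and stores that value on the corresponding highest-order transition, so that only \gamma(h) remains on the backoff arc for unobserved events.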

If the --bins option is left at its default (-1), then the number of bins for the discounting methods (katz, absolute, kneser_ney) is set to a method-appropriate default (5 for katz, 1 for absolute).
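A minimal command-line sketch of selecting a method and making the bin count explicit (the filenames are placeholders, and in.cnts is assumed to be a counts FST produced by ngramcount):

  # Katz backoff model, with its default of 5 bins made explicit:
  ngrammake --method=katz --backoff --bins=5 in.cnts > katz.mod

  # Absolute-discounting mixture model, with its default single bin:
  ngrammake --method=absolute --bins=1 in.cnts > absolute.mod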

Caveats

The presmoothed method normalizes at each state based on the n-gram count of the history, which is appropriate only under specialized circumstances, such as when the counts have been derived from strings in which backoff transitions are explicitly indicated.

References

Carpenter, B., 2005. Scaling high-order character language models to gigabytes. In Proceedings of the ACL Workshop on Software, pages 86–99.