Note that, unlike our earlier example merging unnormalized counts, merging two smoothed models each built from half of a corpus yields a different model than one built from the corpus as a whole, due to the smoothing and mixing.

Each of the two model or count FSTs can be weighted, using the --alpha switch for the first input FST, and the --beta switch for the second input FST. These weights are interpreted in the real semiring and both default to one, meaning that by default the original counts or probabilities are not scaled. For an n-gram w1 ... wk, the default count merging approach will yield

C(w1 ... wk) = alpha * C1(w1 ... wk) + beta * C2(w1 ... wk)
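The default count-merging formula can be illustrated with a small Python sketch. This is not the ngrammerge implementation, and the n-grams and counts below are invented for illustration; it only shows the arithmetic of weighting in the real semiring.

```python
def merge_counts(c1, c2, alpha=1.0, beta=1.0):
    """Merge two n-gram count dictionaries with real-semiring weights:
    C(ngram) = alpha * C1(ngram) + beta * C2(ngram).
    Missing n-grams contribute a count of zero."""
    merged = {}
    for ngram in set(c1) | set(c2):
        merged[ngram] = alpha * c1.get(ngram, 0) + beta * c2.get(ngram, 0)
    return merged

# Invented example counts for two half-corpora.
c1 = {("the", "cat"): 3, ("a", "dog"): 1}
c2 = {("the", "cat"): 2, ("the", "dog"): 4}

# With the default weights (alpha = beta = 1), counts are simply summed.
merged = merge_counts(c1, c2)
print(merged[("the", "cat")])  # 5.0
```

With non-default weights, each input's counts are scaled before summing, e.g. `merge_counts(c1, c2, alpha=3, beta=2)` gives the ("the", "cat") n-gram a merged count of 3*3 + 2*2 = 13.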

To merge two smoothed models, the --use_smoothing=true option uses the smoothed probabilities of each input language model, so that every in-vocabulary n-gram receives non-zero probability from both models; and the --normalize=true option ensures that the resulting model is fully normalized. For example, to produce a merged model that weights the contribution of the first model by a factor of 3 and the contribution of the second model by a factor of 2:
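The effect of such a weighted, normalized merge can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the ngrammerge implementation: the two conditional distributions below are invented, and each plays the role of a smoothed model's probabilities for one history.

```python
def merge_models(p1, p2, alpha, beta):
    """Mix two (already smoothed) distributions over the same vocabulary
    with weights alpha and beta, then renormalize so the result sums to
    one, analogous to merging with --normalize=true."""
    raw = {w: alpha * p1[w] + beta * p2[w] for w in p1}
    total = sum(raw.values())
    return {w: v / total for w, v in raw.items()}

# Invented smoothed probabilities for one shared history.
p1 = {"a": 0.5, "b": 0.3, "c": 0.2}
p2 = {"a": 0.2, "b": 0.2, "c": 0.6}

# Weight the first model by 3 and the second by 2, as in the example.
merged = merge_models(p1, p2, alpha=3, beta=2)
print(merged["a"])                 # 0.38
print(abs(sum(merged.values()) - 1.0) < 1e-12)  # True
```

Because both input models are smoothed, every in-vocabulary word has a non-zero probability in both distributions, and the final division by the total restores normalization after the weighted mixing.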