Learning word categories is a fundamental task in language acquisition. Previous studies show
that co-occurrence patterns of preceding and following words are essential to group words into
categories. However, the neighboring words, or frames, are rarely repeated exactly in the data.
This creates data sparsity and hampers learning for frame-based models. In this work, we propose
a paradigmatic representation of word context that uses probable substitutes instead of frames.
Our experiments on child-directed speech show that models based on probable substitutes learn
more accurate categories from fewer examples than frame-based models.