Syllable systems across languages share a number of common patterns. A particularly compelling explanation for these patterns is that they originate in constraints imposed by the perceptual and articulatory systems of language users. In this research, we use genetic algorithms to examine how a small set of experimentally defined perceptual and articulatory constraints on syllables interact to produce different relative distributions of syllable types in evolved vocabularies. The goal is to show how both cross-linguistic regularity and variation can arise from optimizing a sound system under these constraints.
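To make the approach concrete, the following is a minimal sketch of a genetic algorithm of the kind described: a population of vocabularies (lists of syllables) is evolved under a fitness function that encodes constraint weights. The segment inventory, the two constraints (rewarding onsets, penalizing codas), and their weights are illustrative assumptions, not the constraints or values used in this research.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical segment inventory: optional onset, vowel nucleus, optional coda.
ONSETS = ["", "p", "t", "k", "s"]
NUCLEI = ["a", "i", "u"]
CODAS = ["", "n", "s"]

def random_syllable():
    return (random.choice(ONSETS), random.choice(NUCLEI), random.choice(CODAS))

def syllable_type(syl):
    onset, _, coda = syl
    return ("C" if onset else "") + "V" + ("C" if coda else "")

def fitness(vocab):
    # Illustrative constraint weights: reward syllables with an onset
    # (perceptual salience) and penalize syllables with a coda
    # (articulatory cost). Real weights would come from experiments.
    score = 0.0
    for onset, _, coda in vocab:
        score += 1.0 if onset else 0.0   # ONSET constraint
        score -= 0.5 if coda else 0.0    # NOCODA constraint
    return score

def mutate(vocab, rate=0.1):
    # Replace each syllable with a random one at the given mutation rate.
    return [random_syllable() if random.random() < rate else s for s in vocab]

def evolve(pop_size=20, vocab_size=30, generations=50):
    # Truncation selection: keep the fitter half, refill with mutated copies.
    pop = [[random_syllable() for _ in range(vocab_size)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
dist = Counter(syllable_type(s) for s in best)
print(dist)
```

Under these weights, evolved vocabularies drift toward onset-bearing, coda-less (CV) syllables; changing the weights shifts the relative distribution of syllable types, which is the kind of regularity-with-variation the constraint interaction is meant to produce.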