Vocal EEG sonification is presented as a method for the sonification of complex time series, tailored to address both the articulatory and the auditory competences of humans in order to improve the understanding and communication of the underlying data. In Vocal EEG sonification, EEG data are represented in real time by synthesized sound in a systematic, reproducible, task-centered way, using an articulatory sound synthesizer capable of creating vowel transitions. Patterns such as 'EEG at rest', epileptic EEG, and sleep EEG are thereby turned into characteristically different sonic gestalts that human listeners can discern when listening to the 'data babble'. In this paper, we emphasize the design of sonifications particularly for the purpose of enhancing communication about sonic patterns, and we conduct a preliminary study of the human ability to use one's own vocal tract to mimic or imitate patterns heard in the sonification. Our study shows to what degree humans are capable of correctly recognizing signal types, both from the original sonifications and from vocal imitations performed by trained sonification users and by naive users without extended previous experience in sonification.