Elhilali earned her Ph.D. in Electrical Engineering in 2004. She was advised by Professor Shihab Shamma (ECE/ISR). Currently she is an assistant professor of electrical and computer engineering at Johns Hopkins University, where she also is active in the Center for Language and Speech Processing.

Abstract

Performance of hearing systems and speech technologies can benefit greatly from a deeper appreciation and knowledge of how the brain processes and perceives sounds. While most current systems invoke operations akin to the peripheral auditory system, they stop shy of incorporating promising capabilities of the central auditory system, most importantly its ability to adapt to the demands of an ever-changing acoustic environment.

Recent physiological findings are amending existing dogmas of processing in the auditory cortex, replacing conventional views of "static" processing in sensory cortex with a more "active" and malleable mapping that rapidly adapts to behavioral tasks and listening conditions. Hence, a new architecture for sound processing based on cognitive and adaptive processes promises to open a revolutionary frontier for hearing and speech technologies.

This research will develop effective algorithmic implementations to tackle challenging sound and speech processing problems in real ecological environments. It will provide a rigorous framework for designing experiments that test the role and mechanisms of active and cognitive adaptation in the auditory system. This interdisciplinary effort will integrate techniques from neurophysiology, psychophysics, computational neuroscience, and engineering.

The NSF CAREER program fosters the career development of outstanding junior faculty, combining the support of research and education of the highest quality and in the broadest sense.