Abstract

The use of speech recognition in automotive environments
has received increased attention in recent years.
Unfortunately, evaluations of algorithms designed to improve
recognition performance in this environment have been performed
on differing data collections, making results difficult to
compare. Recently, the University of Illinois released a
large in-car audio and visual data collection known as AVICAR
("audio-visual speech in a car") [1]. The AVICAR database is
freely available, but to date no uniform evaluation protocol on
which to perform experiments has been reported. This paper
introduces a speaker-independent, continuous speech recognition
evaluation protocol for the audio data of the AVICAR database.
It is designed to allow for model adaptation, evaluation, and
testing using native English speakers. Baseline recognition results
obtained using this protocol are also presented.
