
COMPRESSION ALGORITHMS FOR DISTRIBUTED CLASSIFICATION WITH
APPLICATIONS TO DISTRIBUTED SPEECH RECOGNITION
by
Naveen Srinivasamurthy
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)
May 2007
Copyright 2007 Naveen Srinivasamurthy

With the wide proliferation of mobile devices and the explosion of new multimedia applications, there is a need for client-server architectures that enable low-complexity/low-memory clients to support complex multimedia applications. In these client-server systems, compression is vital to minimize transmission bandwidth requirements. Compression techniques have traditionally been designed to minimize perceptual distortion; however, applications have recently emerged in which an algorithm processes the compressed data. For best system performance, the compression algorithm should then be optimized to have the least effect on that algorithm's estimation/classification capability. In this work, novel compression techniques optimized for classification are proposed.

The first application is remote speech recognition, where the speech recognizer recognizes spoken utterances using compressed data. A scalable encoder designed to maximize recognition performance is proposed and shown to have superior rate-recognition performance compared to conventional speech encoders. A scalable recognition system capable of trading off recognition performance for reduced complexity is also proposed. These techniques are useful in distributed speech recognition systems, where several clients access a single server and efficient server design is required to reduce computational complexity and bandwidth requirements.

The second application is distributed classification, where the classifier operates on compressed data. A novel algorithm is proposed that is shown to significantly reduce the misclassification penalty. The algorithm is extended to improve the performance of table-lookup encoders, in which product vector quantizers are designed to approximate a higher-dimension vector quantizer. Significant improvements in PSNR performance over conventional designs are demonstrated with minimal increase in encoding time.

Finally, a new distortion metric, mutual information loss (MIL), is proposed for designing quantizers in distributed classification applications. It is shown that quantizers optimized for MI loss provide significant improvements in classification performance when compared to quantizers optimized for mean square error. Empirical quantizer design and rate allocation algorithms are provided to optimize quantizers for minimizing MI loss. Additionally, it is shown that the MI loss metric can be used to design quantizers operating on low-dimension vectors. This is a vital requirement in classification systems employing high-dimension classifiers, as it enables the design of optimal yet practical minimum-MI-loss quantizers implementable on low-complexity/low-memory clients.
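To make the MI loss idea concrete, the following is a minimal toy sketch (not the dissertation's algorithm): for a discrete source with class label C and observation X, the MI loss of a quantizer Q is taken as I(C; X) - I(C; Q(X)), and two hypothetical two-level quantizers are compared. The source distribution, partitions, and function names here are illustrative assumptions only.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information (in bits) between the two
    coordinates of a list of (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    # sum over p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in joint.items())

# Toy source (assumed for illustration): class 0 emits x in {0, 1},
# class 1 emits x in {2, 3}, all equally likely.
samples = [(0, 0), (0, 1), (1, 2), (1, 3)]   # (class, x) pairs
i_cx = mutual_information(samples)           # I(C; X) = 1 bit here

# Two hypothetical 2-level quantizers over the same source:
q_a = lambda x: 0 if x < 2 else 1   # partition aligned with the classes
q_b = lambda x: x % 2               # partition that mixes the classes

# MI loss of each quantizer: I(C; X) - I(C; Q(X)).
loss_a = i_cx - mutual_information([(c, q_a(x)) for c, x in samples])
loss_b = i_cx - mutual_information([(c, q_b(x)) for c, x in samples])
print(loss_a, loss_b)   # q_a loses 0 bits; q_b loses the full 1 bit
```

Both quantizers have the same output rate (1 bit per sample), but only the class-aligned partition preserves the information a downstream classifier needs, which is what a minimum-MI-loss design criterion selects for.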
