Emotion classification is essential for understanding human
interactions and hence is a vital component of behavioral
studies. Although numerous algorithms have been developed,
emotion classification accuracy still falls short of what
real-world systems require. In this
paper, we evaluate an approach where basic acoustic features
are extracted from speech samples, and the One-Against-All
(OAA) Support Vector Machine (SVM) learning algorithm
is used. We propose a novel hybrid kernel, in which the
optimal kernel function is chosen separately for each individual
OAA classifier.
Outputs from the OAA classifiers are normalized and
combined using a thresholding fusion mechanism to finally
classify the emotion. Samples with low ‘relative confidence’
are left as ‘unclassified’ to further improve the classification
accuracy. Results show that the decision-level recall of
our approach for six-class emotion classification is 80.5%,
outperforming a state-of-the-art approach that uses the same
dataset.
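
The pipeline described above can be sketched as follows: one binary SVM per emotion class with its own kernel, classifier scores normalized to a common range, and a thresholding fusion step that rejects low-confidence samples. This is an illustrative sketch only; the class names, kernel assignments, min-max normalization scheme, and threshold value are assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC


class OAAHybridSVM:
    """One-Against-All SVMs with per-class kernels and thresholding fusion.

    Hypothetical sketch: kernel choices and the relative-confidence
    threshold are illustrative placeholders.
    """

    def __init__(self, kernels, threshold=0.1):
        self.kernels = kernels      # one kernel name per emotion class
        self.threshold = threshold  # minimum 'relative confidence' to accept

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # Train one binary (class vs. rest) SVM per class, each with
        # its own kernel function.
        self.classifiers_ = []
        for cls, kernel in zip(self.classes_, self.kernels):
            clf = SVC(kernel=kernel)
            clf.fit(X, (y == cls).astype(int))
            self.classifiers_.append(clf)
        return self

    def predict(self, X):
        # Raw decision scores from each OAA classifier, one column per class.
        scores = np.column_stack(
            [clf.decision_function(X) for clf in self.classifiers_]
        )
        # Min-max normalize each classifier's scores to [0, 1]
        # (one simple choice of normalization).
        rng = scores.max(axis=0) - scores.min(axis=0)
        rng[rng == 0] = 1.0
        norm = (scores - scores.min(axis=0)) / rng
        # Thresholding fusion: 'relative confidence' here is the gap
        # between the top two normalized scores; small gaps -> unclassified.
        ordered = np.sort(norm, axis=1)
        rel_conf = ordered[:, -1] - ordered[:, -2]
        pred = self.classes_[np.argmax(norm, axis=1)]
        return np.where(rel_conf >= self.threshold, pred, -1)  # -1 = unclassified


# Toy demo: synthetic 2-D clusters stand in for acoustic feature vectors.
X, y = make_blobs(n_samples=60, centers=3, random_state=0)
model = OAAHybridSVM(kernels=["linear", "rbf", "rbf"], threshold=0.05).fit(X, y)
pred = model.predict(X)
```

Keeping a per-sample rejection option trades coverage for precision: samples whose top two class scores are nearly tied are left unclassified rather than guessed, which is what lifts the recall on the samples that are classified.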