Summary: Multi-modal Face Image Fusion using
Empirical Mode Decomposition
Harishwaran Hariharan, Andrei Gribok, Besma Abidi and Mongi Abidi
Imaging Robotics and Intelligent Systems Laboratory
The University of Tennessee, Knoxville, Tennessee, 37996-2100
{hari,agribok,besma,abidi}@utk.edu
1. Introduction
Using multiple modalities for face recognition has been shown to increase recognition rates compared to
conventional single-modality systems, especially under challenging illumination conditions and in the case of
disguised individuals. Images from the multiple modalities are fused to increase the information content of the
resultant fused image. The fused image contains enhanced information that is more readily interpretable by face
recognition applications. In this effort, we describe a novel technique for image fusion and enhancement using
Empirical Mode Decomposition (EMD). EMD is a non-parametric, data-driven analysis tool that decomposes
nonlinear, non-stationary signals into Intrinsic Mode Functions (IMFs). In this method, we decompose images from
different imaging modalities (visible-color and thermal) into their IMFs. Fusion is performed at the decomposition
level, and the fused IMFs are reconstructed to realize the fused image. Based on an empirical understanding of the
nature of the IMFs, we have devised weighting schemes that emphasize features from both modalities, thereby
increasing the information and visual content of the fused image.
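The fusion step described above can be sketched as a weighted combination of corresponding IMFs from the two modalities, followed by reconstruction via summation. A minimal illustration follows; note that the function name `fuse_imfs`, the use of fixed per-level weights, and the toy data are all hypothetical choices for exposition, since this summary does not specify the authors' actual weighting scheme, and the decomposition itself (computing the IMFs) is assumed to have been done beforehand.

```python
# Sketch of IMF-level fusion, assuming the IMFs of each modality have
# already been computed by EMD. The per-level weights are illustrative
# placeholders, not the weighting scheme from the paper.

def fuse_imfs(imfs_visible, imfs_thermal, weights_vis, weights_th):
    """Combine corresponding IMFs from two modalities with per-level
    weights, then reconstruct the fused signal by sample-wise summation."""
    assert len(imfs_visible) == len(imfs_thermal) == len(weights_vis) == len(weights_th)
    fused_levels = []
    for k, (imf_v, imf_t) in enumerate(zip(imfs_visible, imfs_thermal)):
        # Weighted combination of the k-th IMF from each modality.
        fused_levels.append([weights_vis[k] * v + weights_th[k] * t
                             for v, t in zip(imf_v, imf_t)])
    # Reconstruction: sum the fused IMFs at each sample position.
    n = len(fused_levels[0])
    return [sum(level[i] for level in fused_levels) for i in range(n)]

# Toy example: two IMF levels per modality, three samples each.
vis = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]]
th = [[2.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
fused = fuse_imfs(vis, th, weights_vis=[0.7, 0.5], weights_th=[0.3, 0.5])
```

In practice each "signal" would be a row, column, or vectorized form of an image, and the weights would be chosen per IMF level to emphasize the salient features of each modality.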
2. Image Fusion using Empirical Mode Decomposition
2.1 Empirical Mode Decomposition