A Framework for Quality-based Biometric Classifier Selection
Himanshu S. Bhatt, Samarth Bharadwaj, Mayank Vatsa, Richa Singh
IIIT Delhi, India
{himanshub, samarthb, mayank, rsingh}@iiitd.ac.in
Arun Ross, Afzel Noore
West Virginia University, USA
{arun.ross, afzel.noore}@mail.wvu.edu
Abstract
Multibiometric systems fuse the evidence (e.g., match scores) pertaining to multiple biometric modalities or classifiers. Most score-level fusion schemes discussed in the literature require the processing (i.e., feature extraction and matching) of every modality prior to invoking the fusion scheme. This paper presents a framework for dynamic classifier selection and fusion based on the quality of the gallery and probe images associated with each modality with multiple classifiers. The quality assessment algorithm for each biometric modality computes a quality vector for the gallery and probe images that is used for classifier selection. These vectors are used to train Support Vector Machines (SVMs) for decision making. In the proposed framework, the biometric modalities are arranged sequentially such that the stronger biometric modality has higher priority for being processed. Since fusion is required only when all unimodal classifiers are rejected by the SVM classifiers, the average computational time of the proposed framework is significantly reduced. Experimental results on different multimodal databases involving face and fingerprint show that the proposed quality-based classifier selection framework yields good performance even when the quality of the biometric sample is sub-optimal.
1. Introduction
Multibiometrics-based verification systems use two or more classifiers pertaining to the same biometric modality or different biometric modalities. As discussed by Woods et al. [19], there are two general approaches to fusion: (1) classifier fusion and (2) dynamic classifier selection. In classifier fusion, all constituent classifiers are used and their decisions are combined using fusion rules [10], [14]. On the other hand, in dynamic selection, the most appropriate classifier or a subset of specific classifiers is selected [8], [16] for decision making. In the biometrics literature, classifier fusion has been extensively studied [14], whereas dynamic classifier selection has been relatively less explored. Marcialis et al. [11] designed a serial fusion scheme for combining face and fingerprint classifiers and achieved significant reduction in verification time and the required degree of user cooperation. Alonso-Fernandez et al. [3] proposed a method where quality information was used to switch between different system modules depending on the data source. Veeramachaneni et al. [17] proposed a Bayesian framework to fuse decisions pertaining to multiple biometric sensors. Particle Swarm Optimization (PSO) was used to determine the "optimal" sensor operating points in order to achieve the desired security level by switching between different fusion rules. Vatsa et al. [15] proposed a case-based context switching framework for incorporating biometric image quality. Further, they proposed a sequential match score fusion and quality-based dynamic selection algorithm to optimize both verification accuracy and computational cost [16]. Recently, a sequential score fusion strategy was designed using the sequential probability ratio test [2]. Though existing approaches improve performance, in general, it is necessary to capture all biometric modalities prior to processing them.

This research focuses on developing a dynamic selection approach for a multi-classifier biometric system that can yield high verification performance even when operating on moderate-to-poor quality probe images. The case study considered in this work has two biometric modalities (face and fingerprint) and two classifiers per modality. It is generally accepted that the quality of a biometric sample is an important factor that can affect matching performance. Therefore, the proposed approach utilizes image quality to dynamically select one or more classifiers for verifying whether a given gallery-probe pair belongs to the genuine class or the impostor class. Experiments on a multimodal database involving face and fingerprint, with variations in probe quality, suggest that the proposed approach provides significant improvements in recognition accuracy compared to individual classifiers and the classical sum-rule fusion scheme.

Proc. of International Joint Conference on Biometrics (IJCB), (Washington DC, USA), October 2011
2. Quality Assessment Algorithm
In the proposed approach, different quality assessment techniques are used to generate a composite quality vector for a given biometric sample. The quality vector used in this study comprises four quality attributes (scores): no-reference quality, edge spread, spectral energy, and modality-specific image quality. Details of each quality attribute are provided below:
∙ No-reference quality: Wang et al. [18] used blockiness and activity estimation in both horizontal and vertical directions in an image to compute a no-reference quality score. Blockiness is estimated by the average intensity difference between block boundaries in the image. Activity is used to measure the effect of compression and blur on the image. These individual estimates are combined to give a composite no-reference quality score.
∙ Edge spread: Marziliano et al. [7] used edge spread to estimate motion and off-focus blurriness in images based on edges and adjacent regions. Their technique computes the effect of blur in an image based on the difference in image intensity with respect to the local maxima and minima of pixel intensity in every row of the image.
∙ Spectral energy: It describes abrupt changes in illumination and specular reflection [13]. The image is tessellated into several non-overlapping blocks and the spectral energy is computed for each block. The value is computed as the magnitude of Fourier transform components in both horizontal and vertical directions.
∙ Modality-specific image quality: Along with the above-mentioned general image quality attributes, the quality assessment algorithm also computes "usability" quality measures specific to each biometric modality.

Face quality: For face images, pose is a major covariate that determines the usability of the face image. Even a good quality face image may not be useful during recognition due to pose variations. Pose is estimated based on the geometric relationship between face, eyes, and mouth. Depending upon the yaw, pitch, and roll values of the estimated pose, a composite score is computed for denoting face quality.
Fingerprint quality: For fingerprint images, Chen et al. [5] measured the quality of ridge samples by computing the Fourier energy spectral density concentration in particular frequency bands. Such a measure is global in nature and encodes the overall quality of fingerprint ridges. This quality measure, referred to as global entropy, is used in this work.

Table 1. Range of quality attributes over the images used in this research.

Face images
Quality attribute      Range
Spectral Energy        [1.09, 1.34]
No-reference quality   [12.43, 13.50]
Edge spread            [8.51, 16.88]
Pose                   [302.31, 466.12]

Fingerprint images
Quality attribute      Range
Spectral Energy        [0.96, 1.15]
No-reference quality   [8.10, 11.50]
Edge spread            [3.94, 6.68]
Global entropy         [0.91, 1.16]

For a given image, a quality vector comprising the four aforementioned quality scores is generated. Table 1 shows the range of values obtained by the quality attributes over the face and fingerprint images used in this research (details are available in Section 4.2). The spectral energy is considered good if its value is close to 1. For no-reference quality, the higher the value, the better the quality of the image. For a frontal face image, the value of the pose attribute is 400; a face is right-aligned if pose is less than 400, otherwise the face is aligned to the left. For edge spread, the lower the value, the better the quality of the image. For global entropy, the higher the value, the better the quality of the fingerprint image. For a given gallery-probe pair, the quality vectors of the gallery and probe images are concatenated to form a quality vector of eight quality scores, represented as Q = [Q_g, Q_p], where Q_g and Q_p are the quality vectors of the gallery and probe images, respectively.
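For illustration, the assembly of the eight-dimensional gallery-probe quality vector Q = [Q_g, Q_p] can be sketched in Python. The four attribute extractors below are crude stand-ins, not the actual measures of [18], [7], [13] or the modality-specific pose/entropy estimators; all function names and formulas here are assumptions.

```python
import numpy as np

def spectral_energy(img):
    # Stand-in: mean Fourier magnitude over the whole image (the paper
    # computes block-wise spectral energy [13]).
    return float(np.abs(np.fft.fft2(img)).mean() / img.size)

def no_reference_quality(img):
    # Stand-in for the blockiness/activity measure of Wang et al. [18].
    return float(1.0 / (1.0 + np.abs(np.diff(img, axis=1)).mean()))

def edge_spread(img):
    # Stand-in for the blur measure of Marziliano et al. [7].
    return float(np.abs(np.gradient(img.astype(float), axis=0)).mean())

def modality_quality(img):
    # Stand-in for pose (face) or global entropy (fingerprint): a simple
    # intensity-histogram entropy.
    hist, _ = np.histogram(img, bins=16)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

def quality_vector(img):
    # Composite four-attribute quality vector for one image.
    return np.array([no_reference_quality(img), edge_spread(img),
                     spectral_energy(img), modality_quality(img)])

def pair_quality_vector(gallery, probe):
    # Q = [Q_g, Q_p]: eight quality scores for a gallery-probe pair.
    return np.concatenate([quality_vector(gallery), quality_vector(probe)])
```

Only the concatenation structure mirrors the paper; a real implementation would substitute the cited quality measures.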
3. Quality Driven Classiﬁer Selection Frame-work
The proposed framework utilizes the quality vector for classifier selection. As shown in Figure 1, in a face-fingerprint bimodal setting, the individual modalities are processed sequentially, starting from the strongest modality so that the system has a higher chance of correctly classifying the gallery-probe pair using the first biometric modality, obviating the need to process the second modality. Since classifier selection can also be posed as a classification problem, a Support Vector Machine (SVM) is used for classification. One SVM is trained for each biometric modality to select the best classifier for that modality using quality vectors. In this paper, the classifier selection framework is presented for a two-classifier two-modality setting involving face and fingerprint. However, the framework can be easily extended to accommodate more choices, as it provides the flexibility to add new biometric modalities and to add/remove classifiers for each modality. The framework is divided into two stages: (1) training the SVMs and (2) dynamic classifier selection for probe verification.

Figure 1. Illustrating the proposed quality-based classifier selection framework for face-fingerprint biometrics.
3.1. SVM Training
The SVM corresponding to each biometric modality is trained independently using a labeled training database.
Training SVM for Fingerprints: SVM1 is trained for three classes using the labeled training data {x_i^1, y_i^1}. Here, the input x_i^1 = [Q_g, Q_p] is the quality vector of the i-th gallery-probe fingerprint image pair in the training set and the output y_i^1 ∈ {−1, 0, +1}. The labels are assigned based on the match score distributions of genuine and impostor scores and the likelihood ratio of the two fingerprint classifiers. As shown in Figure 2, for each modality, distance scores are computed using the training data and the two fingerprint verification algorithms. If the impostor score computed using classifier1 is greater than the maximum genuine score (confidently classified as impostor), or if the genuine score computed using classifier1 is less than the minimum impostor score (confidently classified as genuine), the {−1} label is assigned to indicate that classifier1 can correctly classify the gallery-probe pair. The {0} label is assigned when the impostor score computed using classifier2 is greater than the maximum genuine score (confidently classified as impostor) or when the genuine score computed using classifier2 is less than the minimum impostor score (confidently classified as genuine). If the score lies within the conflicting region for both verification algorithms, the {+1} label is assigned, which signifies that the individual fingerprint classifiers are not able to classify the given gallery-probe pair and that another modality, i.e., face, is required. If both verification algorithms correctly classify the gallery-probe pair based on the score distributions, then the likelihood ratio is used to make a decision (genuine or impostor): the quality vector of the gallery-probe pair is assigned the label corresponding to the verification algorithm that classifies it with higher confidence (based on the accuracy computed using training samples). Under a Gaussian assumption, the likelihood ratio is computed from the estimated genuine and impostor score densities f_gen(s) and f_imp(s) as LR(s) = f_gen(s) / f_imp(s).

Figure 2. Illustrating the process of assigning labels: the genuine-impostor match score distributions are used to assign labels to the input gallery-probe quality vector Q = [Q_g, Q_p] during SVM training.
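The label-assignment rule described above can be sketched as follows. The variable names, the Gaussian density fits, and the tie-breaking rule when both classifiers are confident are assumptions made for illustration (the paper breaks that tie using the training accuracy of each algorithm).

```python
import numpy as np

def gauss_pdf(s, mu, sd):
    # Gaussian density, used to estimate f_gen(s) and f_imp(s).
    sd = max(sd, 1e-12)
    return np.exp(-0.5 * ((s - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def assign_label(s1, s2, truth, dist1, dist2):
    """Assign an SVM1 training label in {-1, 0, +1} to a gallery-probe pair.

    s1, s2       : distance scores from fingerprint classifier 1 / 2
    truth        : 'genuine' or 'impostor' (training ground truth)
    dist1, dist2 : dicts with arrays 'gen' and 'imp' of training scores
    """
    def confident(s, d):
        if truth == 'impostor':
            return s > d['gen'].max()   # above every genuine score
        return s < d['imp'].min()       # below every impostor score

    c1, c2 = confident(s1, dist1), confident(s2, dist2)
    if c1 and c2:
        # Both classifiers succeed: break the tie with the Gaussian
        # likelihood ratio LR(s) = f_gen(s) / f_imp(s) (tie-break assumed).
        def log_lr(s, d):
            f_gen = gauss_pdf(s, d['gen'].mean(), d['gen'].std())
            f_imp = gauss_pdf(s, d['imp'].mean(), d['imp'].std())
            return np.log(max(f_gen, 1e-300)) - np.log(max(f_imp, 1e-300))
        return -1 if abs(log_lr(s1, dist1)) >= abs(log_lr(s2, dist2)) else 0
    if c1:
        return -1   # classifier 1 alone is sufficient
    if c2:
        return 0    # classifier 2 alone is sufficient
    return +1       # conflicting region: defer to the face modality
```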
Training SVM for Face: Similar to SVM1, SVM2 is also a three-class SVM trained using the labeled training data {x_i^2, y_i^2}, where x_i^2 = [Q_g, Q_p] is the quality vector of the i-th gallery-probe face image pair in the training set. The labels are assigned in the same manner as for SVM1. The only variation is with the {+1} label: if the score lies within the conflicting region for both face verification algorithms, the {+1} label is assigned, which signifies that the individual classifiers are not able to classify the given gallery-probe pair and that match score fusion is required.
3.2. Classiﬁer Selection for Veriﬁcation
During verification, the trained SVMs are used to select the most appropriate classifier for each modality based only on quality. The biometric modalities are used one at a time, and the second modality is selected only when the individual classifiers pertaining to the first modality are not able to classify the given gallery-probe pair. The quality vector of the gallery-probe pair for the first modality is computed and provided as input to the trained SVM1, which makes a prediction based on the quality vector. If SVM1 predicts that one of the classifiers of the first modality can correctly classify the given gallery-probe pair, then the framework selects the classifier predicted by SVM1. Otherwise, the quality vector for the gallery-probe pair corresponding to the second modality is computed and provided as input to SVM2. If SVM2 predicts that one of the classifiers of the second modality can correctly classify the gallery-probe pair, then the framework selects the classifier predicted by SVM2. Otherwise, if both SVMs predict that the individual classifiers of both modalities are unable to classify the gallery-probe pair, sum rule-based score-level fusion of the classifiers across both modalities is used to generate the final score. It should be noted that since the SVM decisions are based only on the quality of the gallery-probe pair, the framework does not require computing the scores for all the modalities and classifiers.
4. Experimental Results
To evaluate the effectiveness of the proposed framework, experiments are performed on two different multimodal databases using two face classifiers and two fingerprint classifiers. Details about the feature extractors and matchers used for each modality, the databases, the experimental protocol, and key observations are presented in this section.
4.1. Unimodal Algorithms
Fingerprint: The two fingerprint classifiers used in this study are the NIST Biometric Image Software (NBIS)¹ and a commercial² fingerprint matching software. NBIS consists of a minutiae detector called MINDTCT and a fingerprint matching algorithm known as BOZORTH3. The second classifier, a commercial fingerprint matching software, is also based on extracting and matching minutiae points.
Face: The two face classifiers used in this research are Uniform Circular Local Binary Pattern (UCLBP) [1] and Speeded Up Robust Features (SURF) [4]. UCLBP is a widely used texture-based operator, whereas SURF is a point-based descriptor that is invariant to scale and rotation. The χ² distance measure is used to compare two UCLBP feature histograms and two SURF descriptors.
4.2. Database
The evaluation is performed on two different databases. The first is the WVU multimodal database [6], from which 270 subjects that have at least 6 fingerprint and face images each are selected. For each modality, two images per subject are placed in the gallery and the remaining images are used as probes.

To evaluate the scalability of the proposed approach, a large multimodal (chimeric) database is used. The WVU multimodal database consists of fingerprint images from four fingers per subject. Assuming that the four fingers are independent, a database of 1068 virtual subjects with six or more samples per subject is prepared. For associating face with fingerprint images, a face database of 1068 subjects is created containing 446 subjects from the MBGC Version 2 database³, 270 subjects from the WVU database [6], 233 from the CMU MultiPIE database [9], and 119 subjects from the AR face database [12].

¹ http://www.nist.gov/itl/iad/ig/nbis.cfm
² The license agreement does not allow us to name the software in any comparative study.

Table 2. Parameters of noise and blur kernels used to create the synthetic degraded database.
Type                  Parameter
Gaussian noise        σ = 0.05
Poisson noise         λ = 1
Salt & pepper noise   d = 0.05
Speckle noise         v = 0.05
Gaussian blur         σ = 1
Motion blur           angle 5°, length 1-10 pixels
Unsharp blur          α = 0.1 to 1
4.3. Experimental Protocol
In all the experiments, 40% of the subjects in the database are used for training and the remaining 60% are used for performance evaluation. During training, the SVMs are trained as explained in Section 3.1. The 40%-60% partitioning was done five times (repeated random sub-sampling validation) and verification accuracies are computed at 0.01% false accept rate (FAR). Two experiments are performed as explained below:
Experiment 1: In this experiment, with two biometric modalities (face and fingerprint) and four classifiers, the proposed quality-based classifier selection framework selects the most appropriate unimodal classifier to process the gallery-probe pair based on quality. In this experiment, both gallery and probe images are of good quality (unaltered/original images).
Experiment 2: In this experiment, the quality of probe images is synthetically degraded. A synthetic poor-quality database is prepared where probe images are corrupted by adding different types of noise and blur, as shown in Figure 3. Table 2 shows the parameters of the noise and blur kernels used to create the synthetic database. Experiments are performed for each type of degradation introduced in both fingerprint and face images. It should be noted that for experiment 2, training is done on good-quality gallery-probe pairs and performance is evaluated on non-overlapping subjects from the synthetically corrupted database.
³ http://www.nist.gov/itl/iad/ig/mbgc-presentations.cfm
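A few of the Table 2 degradations can be approximated with NumPy as below; the parameter values come from Table 2, while the implementation details (clipping to [0, 1], kernel radius, zero-padded convolution) are assumptions.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    # Additive zero-mean Gaussian noise; img assumed in [0, 1].
    if rng is None:
        rng = np.random.default_rng()
    out = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(out, 0.0, 1.0)

def add_salt_pepper(img, d=0.05, rng=None):
    # Corrupt a fraction d of the pixels, half salt and half pepper.
    if rng is None:
        rng = np.random.default_rng()
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < d / 2] = 0.0          # pepper
    out[mask > 1 - d / 2] = 1.0      # salt
    return out

def gaussian_blur(img, sigma=1.0, radius=3):
    # Separable Gaussian blur via 1-D convolutions along rows and columns.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, rows)
```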
Figure 3. Sample images from the database that are degraded using different types of noise and blur.
Figure 4. Sample decisions of the proposed algorithm when (a) Fingerprint classifier 1 is selected, (b) Fingerprint classifier 2 is selected, and (c) Face classifier 2 is selected.
4.4. Results and Analysis
Figure 4 illustrates sample decisions of the proposed algorithm. Figures 5 and 6 show the Receiver Operating Characteristic (ROC) curves for experiment 1. Table 3 summarizes the verification accuracy for the different types of degradation introduced in the probe set. The key results are listed below:
∙ ROC curves in Figures 5 and 6 show that for experiment 1, the proposed quality-based classifier selection framework outperforms the unimodal classifiers and sum-rule fusion by at least 1.05% and 1.57% on the WVU multimodal database and the large-scale chimeric database, respectively.

Figure 5. ROC curves of the individual classifiers, sum-rule fusion, and the proposed quality-based classifier selection framework on the WVU multimodal database with good gallery-probe quality.
Figure 6. ROC curves of the individual classifiers, sum-rule fusion, and the proposed quality-based classifier selection framework on the large-scale chimeric database with good gallery-probe quality.
∙
It is observed that when the quality of probe images is degraded, the performance of the individual classifiers is affected. However, the quality-based classifier selection framework still performs better than the individual classifiers and sum-rule fusion. This improvement is attributed to the fact that the proposed framework can dynamically determine when to use the most appropriate single classifier and when to perform fusion based on the quality of gallery-probe image pairs. Table 3 reports the performance of all the algorithms when probe images are of sub-optimal quality.
∙
In experiment 1 with the WVU database, 27.95% of gallery-probe pairs were processed by fingerprint classifier 1 (NBIS), 25.33% of pairs by fingerprint classifier 2 (commercial matcher), 18.99% by face classifier 1 (UCLBP), and 15.51% by face classifier 2 (SURF). The remaining 12.19% of pairs were processed using weighted sum-rule fusion. Similarly, for the