10:30 – 11:00 A.M.: Coffee

11:00 A.M. – 12:00 P.M.: Panel Session on “Attacks on Face Recognition”

Panel Chair: Ajay Kumar (HKPU)

12:00 – 1:30 P.M.: Lunch

1:30 – 2:30 P.M.: Invited Speaker-2

Richard W. Vorder Bruegge, FBI

Title: Human Identification from Images When Faces Are Disguised

Identification of humans from images most typically relies upon facial comparison analysis. Facial comparison experts use a morphological approach to conduct such examinations, and recent black-box studies demonstrate the effectiveness of this approach. In the absence of a full facial image, examiners leverage as many visible characteristics as possible to conduct a morphological analysis, taking care to exercise additional caution in the ultimate conclusion. Typical efforts to disguise identity encountered in casework include covering the ocular region (e.g., sunglasses) or the nose and mouth region (e.g., balaclavas). Ears and tattoos (whether on the head or elsewhere) are also intrinsic components of the approach used in casework, so when visible they offer important features to examine. This presentation will provide more insight into the approach forensic examiners use to leverage such characteristics, and will also make reference to secondary analyses that may be useful in identification scenarios, including height determination and clothing and footwear comparisons.

3:30 – 4:00 P.M.: Invited Speaker-3

Gerard Medioni, USC

Title: On Face Segmentation, Face Swapping and Face Perception

This talk discusses face swapping under the most extreme viewing settings and its implications for face identification. We will show that even when face images are unconstrained and arbitrarily paired, face swapping between them is actually quite simple. In particular, we will (a) explain how a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations when provided with rich example sets. For this purpose, we describe novel data collection and generation routines that provide challenging segmented face examples at little cost. We will then (b) show how the segmentations obtained by our system enable robust face swapping under unprecedented conditions. Finally, (c) unlike previous work, these swapped faces are robust enough to allow for extensive quantitative tests. To this end, we will present results obtained on the Labeled Faces in the Wild (LFW) benchmark, measuring the effect of intra- and inter-subject face swapping on recognition. These results show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of the swapping method. In line with well-known perceptual studies, we further show that better face swapping produces less recognizable inter-subject results. This is the first time this effect has been quantitatively demonstrated for machine vision systems.