A New Algorithm to Detect Occluded Face from a Head Viewpoint using Hough Transform and Skin Ratio

Theekapun CHAROENPONG, Patarida SANITTHAI

Abstract

The performance of current occluded-face detection algorithms for surveillance systems is limited when the face is covered by an obstacle or viewed from a non-frontal angle. A method able to capture a face from any viewpoint is therefore necessary. In this paper, we propose a new algorithm that uses two subdivided regions and the skin ratio to detect occluded faces from any head viewpoint between +90 and -90 degrees around the yaw axis. The algorithm consists of three steps: head region identification, skin extraction, and occluded face detection. First, the system is fed an image sequence capturing the whole target body, from which the head region is defined. The head region is detected using a blob technique under an experimental condition. Second, skin data is extracted to compute the skin ratio. Skin color is considered in multiple color spaces and compared with a database using the Mahalanobis distance. Third, for occluded face detection, the head area is divided equally into two vertical regions, and the skin ratio of each region is used as the criterion for occlusion. To evaluate the proposed algorithm, data from 35 subjects is used; each subject is captured from head viewpoints varying from +90 to -90 degrees. As this paper aims at surveillance systems, we focus on obstacles covering the whole face, such as helmets and masks. The accuracy rates for non-occluded and occluded face detection are 98.81% and 94.90%, respectively, and the average accuracy rate is 95.39%. The advantage of this method over recent research is that it is the first to detect an occluded face from any head viewpoint varying from +90 to -90 degrees.
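The core detection step described above (Mahalanobis-distance skin classification followed by a per-half skin-ratio test) can be sketched as follows. The skin-model mean, covariance, color-space channels, and both thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical skin model: mean and covariance of skin pixels in a chosen
# color space (e.g. two chrominance channels), as would be estimated from
# a training database. These numbers are illustrative only.
SKIN_MEAN = np.array([140.0, 110.0])
SKIN_COV_INV = np.linalg.inv(np.array([[50.0, 10.0],
                                       [10.0, 40.0]]))
MAHALANOBIS_THRESHOLD = 3.0   # assumed cutoff for "skin" classification
SKIN_RATIO_THRESHOLD = 0.35   # assumed minimum skin ratio per half-region

def skin_mask(pixels: np.ndarray) -> np.ndarray:
    """Classify each pixel of an (H, W, 2) chrominance image as skin
    using the Mahalanobis distance to the skin model."""
    diff = pixels - SKIN_MEAN                                   # (H, W, 2)
    d2 = np.einsum('...i,ij,...j->...', diff, SKIN_COV_INV, diff)
    return np.sqrt(d2) < MAHALANOBIS_THRESHOLD

def is_occluded(head_pixels: np.ndarray) -> bool:
    """Split the detected head region into two equal vertical halves and
    flag occlusion when either half's skin ratio drops below the
    threshold (an assumed decision rule standing in for the paper's
    per-region criterion)."""
    mask = skin_mask(head_pixels)
    mid = mask.shape[1] // 2
    left_ratio = mask[:, :mid].mean()
    right_ratio = mask[:, mid:].mean()
    return min(left_ratio, right_ratio) < SKIN_RATIO_THRESHOLD
```

With this rule, a fully visible face yields a high skin ratio in both halves, while a helmet or mask suppresses the ratio in at least one half.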