
Method for finding a most likely matching of a target facial image in a
data base of facial images

Abstract

A method of finding a most likely match for a target facial image within a data base of stored facial images comprising determining a potential value score for each data base image as a function of closeness of a quantization of selected facial features between each data base image and the target image, ordering the data base for sequential processing according to the potential value score in descending order, sequentially processing each data base image starting from the highest potential value score by an image comparison process to establish a correlation score for each comparison, and applying one or more decision rules to each comparison to reach a decision.

1. A method of finding a most likely match for a target facial image in a data base of facial images comprising:

creating a data base of facial images in digitized form stored in a digital storage apparatus,

digitizing a target facial image from a video image into a digital memory apparatus,

establishing a value system to provide a value within the system which is a function of quantization of facial feature measurements,

selecting a set of facial features to be subjected to quantization on the digitized target facial image and on the digitized data base facial images,

locating the set of facial features on the target image and on each of the data base images by placing a window in a determined position for a selected first facial feature and within said window locating a point or points defining the first facial feature location and locating other facial features directly or indirectly by determined relationship to said point or points defining said first facial feature,

quantizing the set of facial features of each facial image in the data base,

finding the value in the value system which corresponds to the quantization of each facial image in the data base,

quantizing the target facial image,

finding the value in the value system which corresponds to the quantization of the target image,

comparing the value in the value system which corresponds to the quantization of the target facial image to each value in the value system which corresponds to the quantization of each of the facial images in the data base,

ordering the facial images in the data base according to closeness of the value in the value system for the data base facial image to that of the target facial image.

2. The method of claim 1 wherein the value system is an N-dimensional space system and each quantization has a value in the system defined as a point in N-dimensional space where N is the number of quantities taken from the measurements of
facial features and where the distance between said points is defined as the P-norm distance.

3. The method of claim 2 wherein N is 6.

4. The method of claim 3 wherein the measurements are:

eyebetween

eyelength

eyeball

noseline

mouthline

ratio of intensity of one feature divided by another.

5. The method of claim 4 wherein the quantization is

eyebetween/eyeline

eyelength/eyeline

eyeball/eyeline

noseline/eyeline

mouthline/eyeline

ratio of intensity of one feature divided by another.

6. The method of claim 2 wherein the point in N-dimensional space for each data base facial image is compared to the point in N-dimensional space for the target image by finding the P-norm distance between them and the data base facial images are ordered from the smallest P-norm distance.

7. The method of claim 2 wherein for said P-norm distance the variable P is equal to 2 whereby said P-norm distance is defined as the 2-norm distance.

8. The method of claim 7 wherein the 2-norm distance is normalized to give a value between 0 and 1 where the closest distance is closest to 1.

9. The method of claim 1 further comprising:

defining a correlation scoring area to be applied to each facial image,

dividing the scoring area into discrete correlation areas to be applied consistently to each facial image,

successively correlating the target facial image to each data base facial image starting from the highest in the order to determine a correlation score for each data base facial image.

10. The method of claim 9 further comprising:

choosing one or more decision rules for stopping the successive correlation and identifying at least one selected data base facial image.

11. The method of claim 10 wherein said successive correlation further comprises successively adjusting the data base images for dimensional and intensity equalization to the target image.

12. A method of finding a most likely match for a target facial image within a data base of stored facial images comprising:

digitizing each data base facial image from a video source,

digitizing the target facial image from a video source,

ordering the data base facial images according to closeness of value to a value of the target facial image based on similarity of measurements chosen from the group consisting of distances between selected facial features, and intensity ratio of
selected facial features,

choosing a correlation score as a threshold to select one or more data base facial images chosen from the group consisting of:

(a) the only data base facial image which exceeds the threshold

(b) all data base facial images which exceed the threshold

defining at least one window area containing specified facial features to be applied to the target image and the data base image or images selected by the preceding step,

intensity correlating the target facial image successively to each data base facial image starting from the data base facial image which is highest in the order to provide the correlation score, said correlating being limited to pixel by pixel correlation within the defined window area.

13. The method of claim 12 further comprising:

stopping the process after a predetermined number of data base images have been correlated with no data base facial image exceeding the threshold,

applying a second, lower threshold of correlation score to select one or more data base facial images chosen from the group consisting of:

(a) the only data base facial image which exceeds the second threshold

(b) all data base facial images which exceed the second threshold.

14. The method of claim 12 comprising:

presenting the one or more data base facial images chosen to a human.

15. The method of claim 13 comprising:

presenting the one or more data base facial images chosen to a human.

16. A method of finding a most likely match for a target facial image within a data base of stored facial images comprising:

determining a potential value score for each data base image as a function of closeness of a quantization of at least one of the selected facial features between each data base image and the target image,

ordering the data base for sequential processing according to potential value score in a descending order,

sequentially processing each data base image starting from the highest potential value score by an image comparison process to establish a correlation score for each comparison,

said image comparison process comprising defining at least one window area containing specified facial features to be applied to the target image and the data base images and correlating the image portions within said window area,

applying one or more decision rules to each comparison to reach a decision.

17. A method of finding a most likely match for a target facial image within a data base of stored facial images comprising:

measuring preselected facial features for each data base image;

measuring the same preselected facial features from the target image;

establishing a measuring system to provide a value within the measuring system for the feature measurements of the data base images and the target image;

ordering the data base images according to nearness of the value to that of the target image in descending order;

selecting a portion of the ordered data base images for further processing;

defining at least one window area containing specified facial features to be applied to the target image and the data base image or images selected by the preceding step;

sequentially correlating the target image to each data base image according to the order, said correlating being limited to said window area;

scoring each correlation to define a correlation score within a correlation measuring system for closeness of correlation;

comparing a correlation to a selected threshold;

reporting as a conclusion according to the following rules:

(i) identity of a single data base image which passes the threshold;

(ii) no data base image passes the threshold; or

(iii) identity of more than one data base image which passes the threshold.

18. The method of claim 17 wherein the measuring system value comprises:

for each data base image defining n.sub.r as a point in n-dimensional space, where n is the number of measurements and the coordinates of the point are the quantization of the measurements for each data base image;

for the target image defining n.sub.t as a point in n-dimensional space, where n is the number of measurements and the coordinates of the point are the quantization of the measurements for the target image;

and the nearness is measured as the P-norm distance between the point in n-dimensional space for the target image and for each of the data base images.

19. The method of claim 18 wherein for said P-norm distance the variable P is equal to 2 whereby said P-norm distance is defined as the 2-norm distance.

20. A method of finding a most likely match for a target facial image with a facial image in a data base of digitally stored facial images comprising:

selecting a set of feature measurements to be applied to each image in the data base to extract feature measurements for each image;

extracting the feature measurements of each data base image by placing a window in a determined position for a selected first facial feature and within said window locating a point or points defining the feature location and locating other facial features by direct or indirect relationship to said first facial feature;

quantizing the feature measurements of each image in the data base according to a predetermined quantization scheme having a range;

digitizing the target image;

selecting the same set of feature measurements to be applied to the target image to extract feature measurements for the target image;

extracting the feature measurements of the target image by placing a window in a determined position for a selected first facial feature and within said window locating a point or points defining the feature location and locating other facial features by direct or indirect relationship to said first facial feature;

quantizing the feature measurements of the target image according to the same quantization scheme;

comparing the quantization of each image in the data base to the quantization of the target image;

identifying an order for the data base images wherein the quantization closest to that of the target image is highest and descending sequentially;

correlating the target image to data base images starting with the highest in the order and descending sequentially;

reporting the degree of correlation for each data base image.

21. The method of claim 20 wherein said correlating comprises comparing a preselected pattern of stored video pixels of the target image to the same preselected pattern of stored video pixels for the data base images; and

establishing a correlation for each data base image to the target image based on closeness of the comparison.

22. The method of claim 21 comprising:

preselecting a threshold for said correlation score as a function of probability of at least one of the following:

(a) maximum probability of a correct identification

(b) minimum probability of an incorrect identification.

23. The method of claim 22 comprising:

comparing the correlation score of the first data base image in the order to the threshold and then performing a step selected from one of the following:

(a) reporting the first data base image in the order for which the correlation score is greater than the threshold, and

(b) terminating the comparison when a preselected amount of the data base has been compared without a report under (a), and

(c) establishing a second threshold.

24. A method of finding a match for a target facial image within a stored data base of known facial images comprising:

extracting feature data elements for each facial image in the stored data base;

converting each of said feature data elements to a quantization which defines for each face a point in n-dimensional space, n being the number of feature data elements;

establishing a score for each data base facial image, defined as the P-norm distance in n-dimensional space between that data base facial image and the target image;

establishing a potential value score for each image in the data base relative to the target face, defined as the P-norm distance in the n-dimensional space;

identifying an order of potential value scores for the data base images from the lowest to higher P-norm distances;

comparing one or more of the data base images to the target image sequentially, commencing with the data base image having the highest potential value score;

said comparing comprising defining at least one window area containing specified facial features to be applied to the target image and the data base image or images and correlating said target image sequentially with said data base image or images, said correlating being limited to the defined window area.

25. A method of finding a most likely match for a target facial image within a data base of stored facial images comprising:

establishing a set of facial feature measurements for application to the target facial image and to facial images in the data base;

determining a portion for each image to be used in locating the set of facial features to be measured;

finding the set of facial features to be measured in the target facial image and in the data base images by scanning each of the images in the determined portion of the image and locating within each scanned portion a predetermined degree of
intensity variation detail as being the location of a preselected facial feature;

determining the set of facial feature measurements from the locations of the preselected facial feature;

applying the facial feature measurements to the target facial image to create a numerical quantization of n quantities representing the target facial image, where n is 1 or more;

applying the same feature measurements to the facial images in the data base to create a numerical quantization of n quantities;

defining n.sub.t for the target facial image as a point in n dimensional space having n coordinates;

defining n.sub.r for each facial image in the data base as a point in n dimensional space having n coordinates;

ordering the images in the data base to create a list in order of their P-norm distances from the lowest P-norm distance to progressively higher P-norm distances;

video image adjusting the data base image having the lowest P-norm distance for dimensional and lighting intensity equalization to the target image;

comparing the equalized data base image to the target image;

correlating the target image digital data sequentially to each of the data base images digital data beginning with the highest ordered image; and

reporting the degree of correlation as a correlation score.

Description

FIELD OF THE INVENTION

This invention relates to the field of methods and apparatus for matching a target facial image to a most likely reference facial image in a collection of facial images.

BACKGROUND OF THE INVENTION

There has been some work done in the area of facial recognition.

U.S. Pat. No. 4,975,969 uses facial measurements to calculate facial parameter ratio information for a particular user. This information is placed on a card. The person stands in front of a camera which digitizes his image and calculates the
specified parameters. If the data on the card, which is read by the machine, matches that seen by the camera, confirmation of identity is available.

U.S. Pat. No. 4,754,487 discloses a system for storing an image of a person to be identified to enable the image to be retrieved for display.

U.S. Pat. No. 4,449,189 provides for identification by a combination of speech and face recognition.

U.S. Pat. No. 4,811,408 compares stored pixel reference data to pixel data on a card to determine a degree of coincidence.

U.S. Pat. No. 4,975,960 locates and tracks a facial feature on a person's face in order to locate the mouth and thereby recognize speech. A gray scale encoding of the image which is smoothed to eliminate noise is used to form a contour map of
the image.

U.S. Pat. No. 5,012,522 discloses a face recognition system for finding a human face in a video scene with random content.

SUMMARY OF THE INVENTION

This invention was made in the course of a thesis project by the inventor as a student at California State University, Northridge.

The thesis portion of the disclosed document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and
Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.

In one aspect the invention is a method of determining a most likely match of a target facial image to a facial image within a collection or data base of facial images. The data base of facial images is stored in digital data form. A target image is scanned and digitized. Then a series of processes are conducted. First, measurement of selected features is performed on the target image. These measurements are converted into quantities; that is, the image is quantized. Similarly, the same process will be performed for each facial image in the data base, that is, the same measurements are used to find the equivalent quantities. Preferably 6 quantities are selected to be used later to define a point in 6 dimensional space. This process is called the Feature Extraction Process (FEP). The process is performed in advance for each image in the data base.

Next a process is performed to find a Potential Value Score (PVS). This is then used to establish a potential value function data base sort. In this process the quantities found in the FEP are compared. That is, the quantization of the target
image is compared to the quantization of each of the images in the data base. The comparison is accomplished through a value system which uses the quantization. In particular the comparison defines the quantities found in the FEP as a point in n
dimensional space. The point in n dimensional space of the target image is compared with the point in n dimensional space of the data base images to find the P-norm distance between these points. That result is preferably normalized to give a value
from 0-1 where a higher value indicates a closer P-norm distance. Preferably 6 quantities are used to quantize each image as a point in 6-dimensional space. Also, the comparison is preferably made by computing the 2-norm distance between points. This
result is called the Potential Value Score or PVS, being a value between 0-1. The data base images are ordered or sorted according to their PVS in descending order to create a search list.

The third process is called the Image Correlation Process (ICP). In this process the images of the data base, starting with the image having the highest PVS, are geometrically and intensity correlated to the target image. The correlation is performed on selected portions of the face. Then a comparison correlation is performed to establish a correlation score.

The fourth process is called the decision process (DP). In this process, starting with the data base image having the highest correlation score, a set of decision rules is applied to the sequential comparison of the target image to the data base images, ordered from highest correlation score. The results of the decision rules will be to declare a match, call for human intervention, or some other predetermined consequence.

The ICP and the DP are interdependent. That is, the data base image having the highest PVS will be processed through ICP and then through DP. Assuming a match is not declared, the image having the next highest PVS will then be processed through
ICP and DP, and so on.

In another aspect the invention is a system. In the system a computer receives as digitized data a video scanned target image of a person. The computer operates on the digitized image data, performing the FEP, to measure the selected facial
features, and to quantize them to define a point in n-dimensional space. Preferably 6 quantities are selected, defining a point in 6 dimensional space. The data base consisting of many facial images is stored in a digital memory system coupled to a
computer. Likewise each facial image in the data base is quantized in the same manner to define a point in n or, preferably 6 dimensional space. The system will send the target image 6 dimensional point to the data base computer and a comparison will
be executed to find the 2-norm distance between the target image and sequentially those of the data base images. The data base images are ordered according to their PVS. The computer then selects the highest scoring data base image and allows the third
step, the ICP, to proceed on that image. Here by a correlation function the data base image data is processed to be geometrically transformed and intensity matched and then correlated with the target image. Finally on that image the computer will apply
the decision rules in comparing the target image to the subject image to determine if a match is to be declared. If not, the ICP and DP will be successively repeated through the potential value data base sort until a match is declared or some other
conclusion is reached.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a schematic diagram of a hypothetical known face indicating the location of the 11 facial key points used for the Feature Extraction Process.

FIG. 2 shows the same schematic diagram as in FIG. 1 indicating the key feature measurements used in the Feature Extraction Process.

FIG. 3 shows the key feature measurements as in FIG. 2, but superimposed on a facial image.

FIG. 4 shows a print out of a sample screen image after edge filtering and also showing the six pixel lines having the greatest detail.

FIG. 5 is a diagram showing the Eye Edge, Nose and Mouth Search Windows used in the Feature Extraction Process.

FIG. 6 is a diagram showing the head tilt compensation method.

FIG. 7 is a diagram showing the 12 key points used for geometrical transformation.

FIG. 8 is a print-out of an exemplary target facial image before geometrical transformation.

FIG. 9 is a print-out of an exemplary reference or data base image before geometrical transformation.

FIG. 10 is a print-out of the image of FIG. 9 geometrically transformed and superimposed on the target image of FIG. 8.

FIG. 11 is a print-out of an exemplary reference or data base image to be used for intensity transformation.

FIG. 12 is a print-out of an exemplary target image to be used for intensity transformation.

FIG. 13 is a print-out of the image of FIG. 11 geometrically transformed on the image of FIG. 12.

FIG. 14 is a print-out of the image of FIG. 13 after intensity transformation.

FIG. 15 is a diagram showing the four correlation scoring areas.

FIG. 16 is a flow chart for the entire process.

FIG. 17 is a table showing the results of a sample test entitled System Test Number 1.

FIG. 18 is a table showing the results of a sample test entitled System Test Number 2.

FIG. 19 is a table showing correlation scores of a sample test entitled System Test Number 3.

FIG. 20A is a table showing S1 correlation scores of a sample test.

FIG. 20B is a table showing S2 correlation scores of a sample test.

FIG. 20C is a table showing S3 correlation scores of a sample test.

FIG. 20D is a table showing S4 correlation scores of a sample test.

FIG. 21 is a system hardware diagram.

DESCRIPTION OF THE INVENTION

For this description the invention will first be described in its method aspect. The invention can be most easily described and understood by recognizing four separate but interrelated processes--the Feature Extraction Process (FEP), the Potential Value Process (PVP) to find the Potential Value Score (PVS) and to establish the potential data base sort or order, the Image Correlation Process (ICP) and the Decision Process (DP). These will now be described in detail. In order to
implement certain aspects of this invention, the ability to digitize each image in the data base and to store the image in digital form is fundamental. Similarly, each target image is digitized and stored in digital form. All of the operations can be
performed on the images in their digital form. Some of these operations can be observed or controlled through a screen display of the image. Also, human intervention can be implemented as to certain steps. The primary points for human intervention are
in the FEP and the DP, as will be described below. Theoretically some of the steps could be performed in analogue form and method. But, for practical application of the correlation step, digitization is essential.

Feature Extraction Process

Primary features of human faces such as eyes, nose, mouth and face profile provide means by which a person can recognize faces of other persons. Locating these features and extracting numerical values from them by use of a computer is a
difficult process. As well, determining those features which will give a reliable discrimination between persons is a difficult process. In the present process features to be used are selected for their discrimination value and speed of determination
value, particularly speed of determination by computers, as well as for fault tolerance for image mismatches when combined with the succeeding steps. Described below are features associated with the face and means for extracting those features which provide highly reliable discrimination and which are suitable for use with video and digitized images.

The method and system described herein were developed and tested using the following system hardware. A Sony video camera was used to obtain images. The camera was connected by means of its video output (NTSC) to an IDEC Frame Grabber digitizer installed in a PC type computer. The digitizer operates through a key function to digitize a single frame of video. The digitized frame is stored on the computer hard disc as a Tagged Image File Format (TIFF) file.

The first step for processing an image for use in the method is the feature extracting process or FEP. The FEP identifies, measures and quantifies facial features. Preferably 6 features are used. More or fewer may be used, but it has been found
that 6 is an optimal number. The preferred facial features or measurements are derived from facial key points. With reference to the numerals in FIG. 1 (in this text the key points are designated with a "P", but in FIG. 1 the "P" is omitted), the 11
facial key points are:

P1. Center of the right eye

P2. Center of the left eye

P3. Outer edge of the right eye

P4. Outer edge of the left eye

P5. Inner edge of the right eye

P6. Inner edge of the left eye

P7. Point located centrally between the eyes

P8. Point located centrally at the tip of the nose

P9. Point located centrally at the mouth

P10. Point located half way between P7 and P8

P11. Point located half way between P8 and P9

From these 11 facial key points, 6 lengths or measurements are taken, as shown in FIGS. 2 and 3; these are:

L1. Horizontal distance between the two eye outer edges: nomenclature--"eyeline"

L2. Horizontal distance between the two eye inner edges: nomenclature--"eyebetween"

L3. Average horizontal length of the eyes: nomenclature--"eyelength"

L4. Horizontal distance between the two eye centers: nomenclature--"eyeball"

L5. Vertical distance from the eyeline to the nose tip: nomenclature--"noseline"

L6. Vertical distance from the eyeline to the mouth center: nomenclature--"mouthline"

In order to compensate for different distances away from the camera, these six lengths are converted into ratios to form 5 numerical indicators. The eyeline distance is used as the common denominator to compute the ratios because it is usually the longest dimension, which therefore limits the range of the numerical indicators.

The 6 preferred numerical indicators are:

F1. eyebetween/eyeline

F2. eyelength/eyeline

F3. eyeball/eyeline

F4. noseline/eyeline

F5. mouthline/eyeline

A sixth numerical indicator is computed by measuring the intensity of point P11 relative to point P10. This indicator will be used to indicate the presence or absence of a mustache; therefore:

F6. intensity of P11/intensity of P10

This last indicator is designated as the intensity ratio. An intensity ratio of other areas may be taken: for example, the intensity of a point on the chin divided by a reference point intensity would indicate the presence or absence of a beard; for example, intensity of chin/intensity of P10.
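
Assembled from the definitions above, the computation of the 6 numerical indicators is mechanical once the 11 key points are known. The following is a minimal sketch in Python, assuming the key points are available as (x, y) pixel coordinates keyed by the names P1 through P11 and that grey-level intensities can be sampled at P10 and P11; the function and data-structure names are illustrative, not from the patent.

```python
import numpy as np

def feature_vector(p, intensity):
    """Compute the 6 numerical indicators F1-F6 from the 11 facial key
    points.  `p` maps "P1".."P11" to (x, y) pixel coordinates; `intensity`
    maps point names to sampled grey-level values."""
    eyeline    = abs(p["P4"][0] - p["P3"][0])        # L1: outer edge to outer edge
    eyebetween = abs(p["P6"][0] - p["P5"][0])        # L2: inner edge to inner edge
    eyelength  = (abs(p["P5"][0] - p["P3"][0]) +     # L3: average eye length
                  abs(p["P4"][0] - p["P6"][0])) / 2.0
    eyeball    = abs(p["P2"][0] - p["P1"][0])        # L4: eye center to eye center
    noseline   = abs(p["P8"][1] - p["P7"][1])        # L5: eyeline down to nose tip
    mouthline  = abs(p["P9"][1] - p["P7"][1])        # L6: eyeline down to mouth center
    return np.array([
        eyebetween / eyeline,                        # F1
        eyelength  / eyeline,                        # F2
        eyeball    / eyeline,                        # F3
        noseline   / eyeline,                        # F4
        mouthline  / eyeline,                        # F5
        intensity["P11"] / intensity["P10"],         # F6: mustache indicator
    ])
```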

The 11 facial key points could be retained to be available for use for computing geometrical transform matrices when the later Intensity Comparison Process is employed for comparing the target images with a particular data base image. However,
as will be seen, some new points will be added, and some points will be dropped in the Intensity Comparison Process.

While the preferred points, measurements and distance compensation ratio are set out above, others may also be used. However, these preferred points, measurements and ratios have been found to give a high degree of success.

The most important features on the face, in the present method, are the eyes. This is believed to be the case in terms of identifying discriminating features and measurements; but it is also the case in the following steps, which establish the locations of the points in the digital image, due to the high contrast between the eye whites and eye pupils and between the eye inner and outer edges. It is necessary to be able to locate the facial key points in a manner which is consistent from face to face and which is readily extractable from a digitized image.

Therefore, a critical step is to locate the points in the face by a method which can be consistently applied to all faces. Following is a description of those steps for data base images stored in TIFF, which begins with the critical step of
locating the eyes.

Step 1 Copy the TIFF image into video memory.

Step 2 Perform average filtering on the image. A 2 by 2 low pass averaging filter is used to remove small amounts of noise.

Step 3 Perform edge filtering on the image, converting it to 2 shades of grey. This is a high pass filtering process using a variant of a Roberts gradient with a threshold of 10. That is, all point to point differences of less than 10 shades of grey are treated as equal and are designated zero. All point to point differences of 10 or greater are also treated as equal and are designated 1. A binary line drawing, such as shown in FIG. 4, is the result of the edge filtering process. All ones are given a grey level of 32 and all zeros a grey level of 0 to create a binary image. FIG. 4 shows the image data resulting from this step.
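
Step 3 can be realized with a few lines of array code. Below is a sketch in Python with NumPy; the patent states only that a variant of the Roberts gradient with a threshold of 10 is used, so the standard cross-difference form of the gradient shown here is an assumption.

```python
import numpy as np

def edge_filter(img, threshold=10):
    """Binary edge image via a Roberts-style cross gradient (Step 3 sketch).
    The standard cross-difference form is assumed; the patent uses an
    unspecified variant of the Roberts gradient."""
    f = img.astype(np.int32)
    # Absolute differences along the two diagonals of each 2x2 neighborhood.
    g = (np.abs(f[:-1, :-1] - f[1:, 1:]) +
         np.abs(f[1:, :-1] - f[:-1, 1:]))
    return (g >= threshold).astype(np.uint8)   # 1 = edge point, 0 = flat
```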

Step 4 Locate 6 horizontal lines in the binary image which have the most detail. This is done by using a long and narrow window, which is the length of the image horizontally (200 pixels in a typical display) and which is 4 pixels high. The window is scanned across the entire image by moving it along the vertical axis one pixel at a time. The score is the sum of all the 1s (high contrast points) within the window at that position. Position is identified by the uppermost y coordinate. After scanning, all the scores are sorted and the highest 6 are used in the next step. FIG. 4 shows the 6 highest contrast lines.
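
The window scan of Step 4 amounts to scoring every 4-pixel-high, full-width band of the binary image. A sketch under the same assumptions as the previous fragment:

```python
import numpy as np

def detail_lines(binary, height=4, top_k=6):
    """Return the y coordinates (uppermost row of the window) of the
    `top_k` highest-detail horizontal bands, scored by summing the 1s
    inside a full-width window `height` pixels high (Step 4 sketch)."""
    row_sums = binary.sum(axis=1)                      # count of 1s per pixel row
    scores = [(int(row_sums[y:y + height].sum()), y)   # window score at position y
              for y in range(binary.shape[0] - height + 1)]
    scores.sort(reverse=True)                          # highest detail first
    return [y for _, y in scores[:top_k]]
```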

Step 5 The six highest scores are rescored as follows. Search each of the six high scoring windows. A new score is then generated by summing up all the gradients in the window above a certain threshold, say 10. This new score is proportional to the number of large gradients and their value. Resort the 6 windows based on their new score. The highest scoring window will be closest to the center of the eyes. Hence the y position of the eyes is known, subject to a possible error of a few vertical pixels.

Step 6 Since the pupils of the eyes reflect back the most camera light, their x position is easily found. Using the highest scoring window, search for intensity peaks along the x axis. This will locate two very large peaks, corresponding to the centers of the two eyes.

Step 7 Now the x, y coordinates of the centers of both eyes can be determined by looking for the two maxima of the window and taking the centroid of the 2 bright spots or peaks. This centroid is taken as the eye center for each eye.

Step 8 Once the two eye centers are known, points P1 and P2 have been located and stored as x, y coordinates. Then the horizontal distance between them--the eyeball dimension--can be computed by counting the number of pixels between the eye centers.

Step 9 Referring to FIG. 5, a window is established as having a horizontal outer limit located a distance from the eye center equal to 25% of the eyeball distance and an inner limit equal to half that distance, giving a horizontal length of 25% of the eyeball distance. The window vertical height is also 25% of the eyeball distance and is vertically centered at the eye center. This window will be used to look for the right eye outer edge. Since we are looking for an edge which goes from white (eye white) to dark (eyelashes) in the direction from nose to ear, a 1 dimension light to dark edge filter is applied to the window along the x axis. This will produce a collection of points. Then, the centroid of these points is computed. A Cartesian coordinate system is overlaid with its origin on the centroid. Points which are either in quadrants 2 or 3 of the coordinate system are eliminated and any isolated points are also eliminated. Of the remaining points, the point nearest the centroid is selected as the right eye outer edge.

Step 10 The remaining three eye edges are found in a similar manner.

Step 11 Next the point P7 centrally between the eyes is located by finding the midpoint of the eyeline.

Step 12 Next the location of the nose tip point, P8, is found. Referring to FIG. 5, this is also done by using a window. The top of the window is defined as a point centrally below the eye line by a number of pixels equal to 40% of the eyeline length. The bottom of the window is defined as a distance below the face center point, P7, which is equal to 60% of the eyeline. The window width will be equal to one eye width, centered. Then, on the binary line image created in Step 4, count all the 1s in each y line of pixels in the window. Using the y lines which have the highest count, take the average of their y coordinate values. Define this as the y coordinate for the nose tip position, P8. The x coordinate position is directly under the face center point P7.

Step 13 The position of the mouth, P9, is determined in a similar manner, see FIG. 5.

Step 14 Because a slight tilt in the head could cause a large error in locating the nose and mouth, the head tilt must be computed to compensate. Referring to FIG. 6, this is done by using the four eye edge points to generate an equation of a line which passes closest to the four points. Then measure the angle this line makes with the horizontal axis. This is the angle of head tilt. If the face were perfectly centered, the nose tip and center of the mouth would have the same x value as the face center, as seen in FIG. 5. However, if the head is tilted these x values would be different, as shown by the dotted lines in FIG. 6. Therefore, the correct horizontal position for these points, perpendicular to the tilted eyeline, is computed.
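
The tilt computation of Step 14 reduces to a least-squares line fit. A sketch, assuming the four eye edge points (P3 through P6) are available as (x, y) pairs:

```python
import numpy as np

def head_tilt_degrees(eye_edges):
    """Fit a least-squares line through the four eye edge points and return
    the angle the line makes with the horizontal axis (Step 14 sketch)."""
    pts = np.asarray(eye_edges, dtype=float)        # shape (4, 2): x, y pairs
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)  # best-fit line y = m*x + b
    return np.degrees(np.arctan(slope))             # head tilt angle
```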

For the above Step 4, the edge filter used is a variation of the Roberts gradient, whose standard cross-difference form for an image intensity f(x, y) is |f(x, y) - f(x+1, y+1)| + |f(x+1, y) - f(x, y+1)|.

The feature extraction method is applied to all facial images in the data base. In conducting a comparison search for a match it is applied to the target image. If desired, such as where the image quality is poor, some or all of the points on a
particular image can be located by a human operator.

Potential Value Process

After the Feature Extraction Method has been applied to a target image to determine the 6 numerical quantization characteristics, the result must then be compared to each of the images in the data base. Of course the numerical data for each data
base image has been determined in advance in order to be ready for use in a comparison search. This process, called the Potential Value Process (PVP), is an important aspect of the invention. Specifically the result of the PVP is to sort or order all
images in the data base in decreasing order of likelihood of match thereby readying the data base for the next steps. Importantly, the time for executing the PVP, recalling that quantization of the data base images has already been done, is quite short,
compared to the time used for the later steps on each image. Therefore, by ordering the images, to ensure that a match, if there is one, will be high in the order, the more time consuming steps can be limited to the higher probability images. The
process establishes a value system in which the numerical quantizations are used.

The PVP is performed as follows.

Step 1 For each image use the 6 numerical quantizations of characteristics found in the FEP to define a point in 6 dimensional space. This will be done in advance for each image in the data base. Do this for the target image.

Step 2 Then find the 2-norm distance between that point for the target image and that point for each of the data base images. Normalize the resulting 2-norm distances to give a number between 0 and 1. This is defined as the Potential Value
Score (PVS).

Step 3 Sort the images or otherwise have them available in order from the highest PVS successively to lower PVS.
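
The three steps compress into a few lines of Python with NumPy. In this sketch the quantizations are 6-element arrays; the patent does not spell out the normalization that maps 2-norm distances into a score between 0 and 1, so the linear rescaling below (closest distance scoring nearest 1) is an assumption:

```python
import numpy as np

def potential_value_sort(target_q, database):
    """Order data base images by Potential Value Score.  `target_q` is the
    target's 6-dimensional quantization; `database` maps an image id to its
    precomputed 6-dimensional quantization."""
    ids = list(database)
    dists = np.array([np.linalg.norm(target_q - database[i]) for i in ids])
    pvs = 1.0 - dists / (dists.max() + 1e-9)   # normalize: 1 = closest match
    order = np.argsort(-pvs)                   # descending PVS = search list
    return [(ids[k], float(pvs[k])) for k in order]
```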

This procedure organizes the data base so that a match will be near the top of the PVS succession and therefore organizes the rest of the process for speed. It also beneficially influences the DP. This ordering is done for speed and to place the likely matches high in order; it is not intended to provide a match.

The Potential Value Process will order the data base images for likely fit to the target image through the PVS. Also, the PVP coupled with the DP makes the system highly fault tolerant. In particular the PVP, in tests performed, is shown with a high degree of confidence to place the correct match image in the upper 50% of the collection of data base images. As will be seen, it is not always the case that the highest PVS data base image will be declared a match; however, it is important that the PVP places the match image high in the order.

Image Correlation Process

This portion of the process does a pixel by pixel comparison of the data base image to the target image, transforms it geometrically and in intensity and establishes a correlation score. The ICP and the DP operate together on the data base image
when comparing it to the target image. This process is performed successively on the Potential Value Sort or order, starting with the highest PVS. The ICP could be applied to data base images by some other chosen primary selector such as gender, race,
hair color, etc. Such a primary selection could be applied to the data base before application of the PVP, or it could be applied after the PVP but before the slower ICP process.

The ICP has three sub-methods within it: Geometrical Transformation, Intensity Transformation and Correlation Scoring. These are now explained.

Geometrical Transformation

Geometrically transform the highest PVS data base image, which is first in the Potential Value order, into the target image's spatial coordinates. In order to successfully correlate two images, they must first be geometrically aligned, so that the salient features of one image are in the same geometrical grid locations as the salient features of the other image. This geometrical transformation is done by a process of polynomial estimate. The process uses the 12 facial key points shown in FIG. 7 to define a scoring window. The scoring window is outlined by the dotted lines.

This allows a point to point transformation using polynomial estimate, limited to the relevant area--the scoring window--in which all the facial key points are contained.

Therefore, in this face matching method the 12 facial key points, located by the computer method or by a human operator, are located in both the target image and the reference data base image. Of course, the data base images are prepared in advance. From these 12 facial key points the constant polynomial coefficients for use in the polynomial estimate calculation are found. Then the entire reference image from the data base is geometrically transformed into the coordinate system of the target image.
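
The polynomial estimate can be sketched as a least-squares fit of transform coefficients to the 12 key point pairs. The patent does not state the polynomial order, so the first-order (affine) fit below is an assumption; higher orders are fitted the same way with more columns in the design matrix:

```python
import numpy as np

def fit_polynomial_transform(ref_pts, tgt_pts):
    """Least-squares first-order polynomial mapping the 12 reference key
    points onto the 12 target key points (the order is an assumption)."""
    ref = np.asarray(ref_pts, dtype=float)            # shape (12, 2)
    tgt = np.asarray(tgt_pts, dtype=float)            # shape (12, 2)
    A = np.column_stack([ref, np.ones(len(ref))])     # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, tgt, rcond=None)  # 3x2 coefficient matrix
    return coeffs

def warp_point(coeffs, x, y):
    """Map a reference-image point into target-image coordinates."""
    return np.array([x, y, 1.0]) @ coeffs
```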

In general by this process, the reference image is made to look as if it were taken by the same camera, at the same camera distance, with the same head size and orientation as the target image. FIG. 8 shows a target image, FIG. 9 shows a
reference image and FIG. 10 shows the reference image of FIG. 9 geometrically transformed and overlaid onto the target image of FIG. 8.

Intensity Transformation

The first step of the intensity transformation involves rescaling the intensity of one image into the range of the intensity values occupied by the other image. As with geometrical transformation, it is preferred that the selected data base
image be rescaled in intensity to the target image.

In the second step following the rescaling, a transformation with an exponential transfer function is used to shape the intensity of both of the images. This transformation maps the original intensity into a new exponentially modified intensity
value. The transformation is performed so that the probability distribution of the original image follows some desired form for a given distribution of intensities. The purpose of this second step is to eliminate some of the detail in the images,
thereby emphasizing the more important features used in the method, which improves correlation score, as described below. That is, a matching face will have a higher correlation score, and a non-matching face will have a lower correlation score.

The intensity transformation can be performed using only the first step; but omitting the second step gives lower correlation scores.
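
A sketch of the two-step transformation follows, assuming grey-level windows as NumPy arrays. The rescaling of the first step follows the text directly; the patent does not give the exact exponential transfer function, so the particular map and the constant `a` below are assumptions:

```python
import numpy as np

def intensity_transform(ref_win, tgt_win, a=2.0):
    """Step 1: rescale the reference window into the target window's
    intensity range.  Step 2: shape both windows with an (assumed)
    exponential transfer function."""
    lo, hi = float(tgt_win.min()), float(tgt_win.max())
    r = ref_win.astype(float)
    r = lo + (r - r.min()) / (r.max() - r.min() + 1e-9) * (hi - lo)  # step 1

    def shape(img):                                    # step 2
        x = (img - lo) / (hi - lo + 1e-9)              # normalize to [0, 1]
        y = (np.exp(a * x) - 1.0) / (np.exp(a) - 1.0)  # exponential transfer
        return lo + y * (hi - lo)

    return shape(r), shape(tgt_win.astype(float))
```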

FIG. 13 shows the target image on which has been superimposed (in the scoring box) an area of the reference image of FIG. 11 which has been only geometrically transformed. FIG. 14 shows the same superimposition, but the images have been both
geometrically and intensity transformed.

In FIG. 14, in the scoring box, the upper area of the eyes is the target image (that is, the same image as outside the scoring box), while the lower two areas, the nose and mouth, are the reference image. The upper area has been subject to both exponential modification and rescaling of the target image eye area. The lower area has been subject to both rescaling and exponential modification of the reference image nose and mouth areas. The difference of intensity inside the scoring box between the target and reference image portions has been dramatically reduced. There is seen a small intensity gradient, that is, a closeness of intensity within the box.

Correlation Scoring Method

Correlation scoring is a measure of similarity between two images. The correlation score is arrived at by a pixel by pixel comparison of the target and data base image portions in the scoring window. A normalized correlation function is used
which yields a value, from zero to one, with one being a perfect match.
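
One common realization of such a normalized correlation function is zero-mean normalized cross-correlation folded onto the 0-to-1 range; the patent does not give its exact function, so the sketch below is an assumption:

```python
import numpy as np

def correlation_score(a, b):
    """Normalized correlation of two equal-sized scoring windows, yielding
    a value from zero to one with one being a perfect match."""
    a = a.astype(float).ravel()
    a -= a.mean()
    b = b.astype(float).ravel()
    b -= b.mean()
    ncc = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return (ncc + 1.0) / 2.0               # map [-1, 1] onto [0, 1]
```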

The correlation of two faces (the target image to a data base image) is slightly different from the correlation process used to find the PVS. In this case the precise location of all pertinent features within each image is already known and
stored as digital information. Therefore the role of the correlation function is not to locate an object within the image. Rather, in this case, the role of correlation is to compare the entire scoring box images of the two faces.

The Correlation Scoring Method makes four potential scores available, using four different areas of the face. Refer to FIG. 15. A first correlation score called a composite score, S1, is defined which comprises the portion of the face from just
above the eyes to just below the mouth, that is, the entire scoring box. A second correlation score S2 is defined which comprises the eyes only. A third correlation score S3 is defined which comprises the nose area only. A fourth correlation score S4
is defined which comprises the mouth area only.

The purpose of multiple correlation scoring is to allow any process which relies on the correlation results to use decision rules which are fault tolerant. One area of fault, for example, is errors in the feature extraction process. Also
geometrical transformation non-linearities can be tolerated.

Another advantage and example of fault tolerance available from multiple correlation scoring is to identify individuals who have changed their physical appearance slightly, in a way that would affect one score, but not another. For example, with
or without a mustache; or smiling in one image, but frowning in the other. The use of multiple correlation scores in conjunction with decision rules will be discussed below.

Thus, if a fault degrades correlation score in one of the scoring areas, say S4, the mouth, S2 and S3 will be unaffected by that fault. That fault can be isolated and excluded.

Therefore, at the end of the Image Correlation Process, a correlation score is available for the subject data base image as it relates to the target image. Of course, as noted above, where there are several scoring areas, there is a correlation
score for each area.

Decision Process

The next step is the application of Decision Rules (DR) through a Decision Process (DP).

A decision process should be matched to the application. A set of useful variables are as follows:

Time. This asks how much time will be allowed to run the process to conclusion. The ICP is the biggest time consumer. The time element is measured as a percentage of the data base to be searched. Time has an impact on the quality of the result. That is, if only a short time is allowed, so that, using the Potential Value sort, only a smaller percentage of the data base is searched, the quality of the result will be lower.

False Alarm Error. This is the situation of a match declared which is wrong. This is measured as the probability that it will occur. For example, a particular application may allow this to occur liberally. Or, the application may require great precision in this conclusion, being relatively intolerant of a false alarm.

Non-detection Error. This is the situation of a match being available but which is missed. This is also measured as the probability that it will occur.

Ambiguity Resolution/Operator Intervention. Ambiguity is the situation where more than one match could be declared based on the rules chosen. Normally this would call for operator intervention to resolve the ambiguity. In some applications, it
is merely desired to reduce a large data base to a more convenient number of images, therefore, a lot of ambiguity would be called for. Operator Intervention is the situation where the process will present information to a human operator. Normally this
will occur as part of resolving ambiguity. This too is measured as a probability.

All of these criteria are interrelated.

Decision rules found useful employ the correlation scores for the data base images to a target image. Specifically, a threshold or a series of thresholds are established to limit searching to data base images which exceed the threshold.

In order to implement decision rules in a particular data base certain information about that data base must be determined. This process can be called "qualifying the data base". The ideal situation is to have two different images of each person in the data base. This will allow generation of statistical distribution data. The data can be obtained with less than a complete set of two images. For example, in a data base of 1000 images, the data could be assembled with 100 evenly distributed second images. The data can be evaluated by use of a statistical classifier, which is one of the simpler classifiers available, and is well suited to take advantage of the available statistical information. A Gaussian distribution has been used in the examples to follow.

Then, using the selected values of the decision criteria applied to the statistical information, threshold (T) values for the correlation score are determined. The threshold is a correlation score at or above which a match is to be declared. In effect, by choosing a threshold, a portion of the data base above the threshold will be processed. But processing will stop at the threshold correlation score. Thus, time and quality of result are closely related.

While for a given data base a threshold can be selected, it is also possible that the threshold can be varied based on external information and the needs of the particular application.

As will be seen, it is also useful to have a two-tiered system which has a first threshold T1, and a second, lower threshold, T2.

The statistical approach can be expanded to include all four correlation scores (S1, S2, S3, S4) in a multivariant distribution. Alternatively, each of the four correlation scores can be treated separately.

A two tiered system of decision rules is based on the statistical data from both the potential function sort and the correlation scores. In this system two correlation thresholds T1 and T2 are chosen. The first threshold is chosen to take advantage of high matching scores from the qualifying process to minimize the probability of False Alarm Error and to expedite the process. The predominant goal is to minimize the probability of false alarm error in the attempt to find a single correct match within the application specifications.

In the examples to follow the first threshold, T1, is the score where the probability of False Alarm Error is equal to the probability of a matching score being below the first threshold. Of course since the Potential Value sort has placed the
match, if there is one, high in the order, there is a very high likelihood of finding the match above the threshold before the time cut-off (percentage of data base) is reached.

The second threshold is used when processing with the first threshold has not found a match within the predetermined percentage of the data base to be searched. The function of the second threshold, T2, is to capture all those matching scores below T1. The second threshold is computed based on the detection statistical data only. The threshold is selected to allow the total probability of a matching score to be very large, ignoring the false alarm error probability, which will increase. Therefore, the false alarms will be treated as ambiguities. As a result, since there are likely to be a number of "matches", these are all treated as ambiguities for either operator intervention or some other selection process. For example, if a given search requires a false alarm error probability of 5%, then T1 is selected to yield that requirement. Then, for the same case, if the non-detection error probability is 10%, T2 is selected to meet that requirement.

Therefore, it is apparent that, due to the potential value sort, each successive comparison has a lower probability of being a correct match than the prior comparison. At some point, it can be concluded that further search is not useful. Thus a percentage of the data base is selected as a stop point for T1. Then T2 is introduced, applied to all the images whose correlation scores were determined in the T1 step. All those correlation scores greater than T2 are declared ambiguities. Taking the foregoing into consideration, FIG. 16 shows a System Flow Chart.

Example No. 1--System Test #1

In this example 10 people were involved.

Data Base Creation A data base was prepared from one video frame picture of each of the 10 people; that is, a data base of 10 images. The 10 images were designated Tom 7, Kev 5, Jim 9, Mark 3, Tricia 2, Dad 12, Lina 1C, Enza 1, Ron 3, and Frank 1. These pictures were close-up, clear and highly controlled frontal poses. These pictures are referred to as the reference or data base images. The video frames were digitized by means of the Frame Grabber video digitizer and stored on the PC hard drive as TIFFs. Each reference image was processed through the FEP to yield the 6 dimensional point in space which defines that image. Also the 12 facial key points to be used for geometrical transformation were determined and stored. Therefore each element in the data base comprises three parts: the TIFF which is the facial image, the six quantities which define the 6 dimensional point in space, and the location of the 12 facial key points for use in geometrical transformation.

Processing Target Images In this example 31 tests were conducted. The 31 target images were video frame pictures of the above 10 people, but not using any of the frames which were used to create the data base. Typically the target images were
2-3 times more distant from the camera than the data base image, the poses were frontal, but were only loosely controlled relative to the data base image posing. From 2 to 4 images for each person were digitized, for a total of 31 target images. Dad,
Jim and Lina are father, son and daughter respectively.

The following is a typical process for the target images.

The target image is processed through the FEP to yield its 6 dimensional point in space. Also the 12 facial key points to be used for geometrical transformation are estimated and stored.

Next a process was performed for qualification of the data base and for threshold selection. All 31 target images were used and all 10 data base images were used. Every target image was compared to every data base image to yield the four correlation scores, S1, S2, S3 and S4. All this data is shown in FIGS. 20A, 20B, 20C and 20D.

Then the matching scores were segregated. Matching and mismatching statistics were generated as follows:

A statistical classifier was used for the S1, S2, S3 and S4 correlations. A two-tier system of decision rules was employed based on the statistical data from both the potential value sort and the S1 correlation scores. As a result, two correlation thresholds were chosen. The first was chosen to take advantage of the high matching scores, to minimize the probability of declaring a wrong match, and for speed. The first threshold is selected as the score where the probability of erroneously declaring a match is equal to the probability of a matching score being below the first threshold. This point, T1, is computed to be 9930 for S1; that is, any correlation yielding a composite correlation score equal to or greater than 9930 will be declared a match. The probability of erroneously declaring a match at this point is 14.5% and the probability of a matching score being less than T1 is 14.5%.

The second threshold, T2, is selected to allow for the total probability of a matching score to be greater than T2, to be 99%. T2 is computed to be 9900.

Also, a procedure was implemented such that if, after performing the ICP on the top 5 reference images, that is, 50% of the data base, no match was declared, then the threshold was changed to 9900 and all of those reference scores which were originally calculated were compared to 9900. This could result in more than one candidate. In this test of 31 targets, re-thresholding took place 5 times.

Taking into account the performance of the potential function sort, this all but ensures that the matching reference image will be located within the top 50% of the search list. Therefore in this example the time element is controlled to 50% of the search list. The decision algorithm in this example is as follows.

1. For the first and each sequential data base image, is the composite correlation score S1 greater than T1? If the answer is yes, the process is halted and a match is declared. Otherwise the process continues to check the next reference image in the
search list.

2. If 50% of the data base has been checked and still no match is found, the second threshold is applied to all the data base images already checked.

3. All data base images with composite correlation scores greater than T2 are considered likely candidates.

4. If only one data base image passes the second threshold, a match is declared.

5. If more than one data base image passes the second threshold then those data base images are displayed with the target image for human intervention to resolve the ambiguity. This last step is in place to minimize the probability of declaring
a match in error. Also the other three correlation scores can be assessed in resolving the ambiguity.
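The five rules above can be read as the following procedure. This Python sketch is illustrative, with hypothetical names (search_data_base, correlate), and assumes the search list has already been ordered by potential value score:

    def search_data_base(search_list, correlate, t1=9930, t2=9900):
        """Illustrative sketch of the decision algorithm above.

        search_list : reference images ordered by descending PVS
        correlate   : returns the composite correlation score S1 for a
                      reference image against the (bound-in) target image
        Returns ('match', image), ('ambiguous', images), or ('none', None).
        """
        half = max(1, len(search_list) // 2)  # the 50% mark of the data base
        scored = []

        # Rule 1: walk the search list, halting on the first score at or above T1.
        for image in search_list[:half]:
            s1 = correlate(image)
            scored.append((image, s1))
            if s1 >= t1:
                return ('match', image)

        # Rules 2 and 3: re-threshold the scores already computed against T2.
        candidates = [image for image, s1 in scored if s1 >= t2]

        # Rule 4: a single candidate over T2 is declared a match.
        if len(candidates) == 1:
            return ('match', candidates[0])

        # Rule 5: several candidates are displayed for human resolution.
        if candidates:
            return ('ambiguous', candidates)
        return ('none', None)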

Target Image Comparison

Next the PVP is performed. The 6 dimensional point for the target image is compared to the 6 dimensional points for each of the 10 data base images to find, and then normalize, the 2-norm distance to determine their PVS.
Then the data base images were ordered according to rank, that is, the highest PVS first. This ordering for each of the 31 target images is shown in FIG. 17. The sorted list of reference images (1 through 10) for each test is a "search list" for that
target image.
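A sketch of this potential value ordering follows. It reuses the ReferenceRecord layout sketched earlier and assumes a simple normalization of the 2-norm distance into a score; the exact normalization is not spelled out here, so this is an assumption:

    import numpy as np

    def build_search_list(target_point, references):
        """Illustrative PVP: rank data base images by potential value score,
        where a smaller 2-norm distance in the 6 dimensional space yields a
        higher score (assumed normalization)."""
        target = np.asarray(target_point, dtype=float)
        dists = np.array([np.linalg.norm(target - np.asarray(r.point_6d, dtype=float))
                          for r in references])
        # Map distances to scores in [0, 1], highest for the closest point.
        max_d = dists.max()
        pvs = np.ones_like(dists) if max_d == 0 else 1.0 - dists / max_d
        order = np.argsort(-pvs)  # highest PVS first, as in FIG. 17
        return [references[i] for i in order]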

Starting at the top of the search list (to the left in FIG. 17) with the most likely candidate determined by the PVS, the ICP is executed for each image in turn. This results in a Correlation Score for the selected reference image as compared to
the target image. This was done one image at a time. Of course this had already been done in this example because all the target images were used for Qualifying the Data Base and Threshold Selection. The Correlation Score was compared to the
predetermined T1 threshold of 9930. In this test, as explained above, T1 at 9930 was chosen from a study of a matrix of all scores to minimize the required number of data base images to be compared, to minimize the probability of a target image being
erroneously declared a match to the wrong reference image, and to minimize the probability of not identifying a target image at all when a match is available. This procedure could be used for any data base. In actual applications the threshold
will be chosen from experience or from an index of thresholds based on external parameters, themselves initially selected from experience. If the compared reference image has a Correlation Score equal to or higher than the threshold, a match is
declared.

This was done for each of the 31 test target images.

Referring to FIG. 17, a solid box indicates a match found, that is, S1 is greater than T1 and the process halts. A broken box indicates most likely candidates or ambiguity, that is, S1 is less than T1 but equal to or greater than T2. In many
cases, 14 of the 31, a match was found on the first comparison. In 4 more cases a match was found on the second comparison. Two more matches were found on the third comparison, 5 more on the fourth comparison, and 1 more on the fifth comparison. In 26
cases a match was found at the T1 threshold. As for ambiguities, i.e., S1 less than T1 but greater than T2, in 3 cases there were ambiguities of two possibilities and in 2 cases there was only a single ambiguity. The following data is the result of
this test:

______________________________________
First set of decision rules, using S1 correlation only:
______________________________________
1. Average length of data base search until match declared (including rethresholding): 25.00% of the data base (10)
2. Targets matched at ≥ .9930 (T1 threshold), numbered 26: 83.80% of the targets (31)
3. Targets matched at ≥ .9900 (T2 threshold) which yielded only a single candidate, 2 occurrences: 6.45% of the targets (31)
4. Percentage of targets isolated to one candidate (2 + 3): 90.25% of the targets (31)
5. Percentage of targets isolated to two candidates, i.e. over .9900, 3 occurrences or ambiguities: 9.75% of the targets (31)
6. Percentage of targets misclassified (counting only those over .9930): 0%
______________________________________
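The percentages in this table can be tallied mechanically from per-target outcomes. The following sketch assumes a hypothetical record layout and is not taken from the test records themselves:

    def summarize(outcomes, n_db=10):
        """Tally the table's statistics from per-target outcome records, e.g.
        {'searched': 3, 'matched_t1': True, 'single_t2': False, 'ambiguous': False}
        (a hypothetical layout; 'searched' counts ICP comparisons performed)."""
        n = len(outcomes)
        pct = lambda key: 100.0 * sum(o[key] for o in outcomes) / n
        avg_search = 100.0 * sum(o['searched'] for o in outcomes) / n / n_db
        return {
            'avg search (% of data base)': avg_search,
            'matched at T1 (%)': pct('matched_t1'),
            'single candidate at T2 (%)': pct('single_t2'),
            'isolated to one candidate (%)': pct('matched_t1') + pct('single_t2'),
            'ambiguities (%)': pct('ambiguous'),
        }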

Example No. 2--System Test #2

This test is identical to Example 1 except that it employs a second set of decision rules using all four correlation scores, S1, S2, S3 and S4. In this case the process stops when all four correlations are greater than T1. If a match was not
found by the 50% mark, all candidates with at least three of the correlation scores greater than T2 are declared ambiguities:

______________________________________
1. Average length of data base search until match declared (including rethresholding): 20.00% of the data base (10)
2. Targets matched at ≥ .9930 (T1 threshold), numbered 29: 93.50% of the targets (31)
3. Targets matched at ≥ .9900 (T2 threshold) which yielded only a single candidate: 0.00% of the targets (31)
4. Percentage of targets isolated to one candidate (2 + 3): 93.50% of the targets (31)
5. Percentage of targets isolated to two candidates, i.e. over .9900, or ambiguities: 0.00% of the targets (31)
6. Percentage of targets misclassified (counting only those over .9930), 2 occurrences: 6.45%
______________________________________

For this test, FIG. 18 shows the results, with solid boxes showing matches and broken boxes showing mismatches. In this test there were no ambiguities; however, there were two mismatches, that is, a match was declared (S1 equal to or
greater than T1) which was wrong.

Example No. 3--System Test #3

This test is identical to Example 2, except that a third set of decision rules, again using all four correlation scores, was employed: the process was stopped when any three of the four correlation scores were greater than the T1 threshold:

______________________________________
1. Average length of data base search until match declared (including rethresholding): 30.00% of the data base (10)
2. Targets matched at ≥ .9930 (T1 threshold), numbered 23: 74.19% of the targets (31)
3. Targets matched at ≥ .9900 (T2 threshold) which yielded only a single candidate, 4 occurrences: 12.90% of the targets (31)
4. Percentage of targets isolated to one candidate (2 + 3): 87.09% of the targets (31)
5. Percentage of targets isolated to two candidates, i.e. over .9900, 4 occurrences or ambiguities: 12.90% of the targets (31)
6. Percentage of targets misclassified (counting only those over .9930): 0.00%
______________________________________

For this test, whose results are shown in FIG. 19, a solid box is used to indicate a match as before and a broken box indicates ambiguity.

The results from the three tests indicate similar performance between the first and third sets. However, the performance using all four correlation scores is not as good as the performance using only the S1 correlation score. The plural
correlation score sets' performance would improve greatly in situations where the appearance of the individual had been modified (e.g., a mustache grown after the reference image was taken). The second set is quicker and has greater matching performance
based on T1; however, the price to be paid is the unnoticed erroneous declaration of a match.
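The difference between the second and third sets reduces to how many of the four correlation scores must clear the threshold before the search halts. The following one-function generalization is illustrative only, not language from the disclosure:

    def k_of_four(scores, threshold, k):
        """True when at least k of the four correlation scores S1..S4
        meet or exceed the given threshold."""
        return sum(s >= threshold for s in scores) >= k

    # Example 2 halts when all four scores clear T1 (k = 4);
    # Example 3 halts when any three do (k = 3).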

The three sets of decision rules all show acceptable performance. As in most pattern recognition systems, it is the application which will dictate acceptable levels of performance. In situations where speed is essential and the system must
make a decision with no human in the loop, the second set is more applicable. Of course, the thresholds would have to be recomputed to meet the acceptable probabilities of detection and false alarm for the application. In other applications requiring humans to
make all final decisions, the first and third sets would apply. In such cases the thresholds can be selected so that the human operator is presented with a small list of ordered possible candidates.

The three tests performed were designed to demonstrate the performance of the processes disclosed herein. The process can be modified to meet the requirements of different applications. For example, more than one reference image for an
individual can be stored. The level of human interaction is also a variable which can be modified to suit the application; for example, the entire feature extraction algorithm can be replaced with an operator-oriented program in which the operator
selects the key facial points, or a predetermined set of key facial points which appears best suited to the instant situation can be used.

FIGS. 20A, 20B, 20C, and 20D show the collected and organized correlation scores which were used in Examples 1, 2 and 3.

System Hardware Description

Referring to FIG. 21 the hardware used for this system consists of two different computers, a video capture card and a video camera.

I. The primary computer was used for all software development and contained the video capture card. The computer description is as follows:

III. The video capture card is a Super Vision image capture board with the following specifications:

A. Real time frame capture able to digitize a frame of video data within 1/6 of a second.

1. 1/2 size card which can use either an 8 or 16 bit standard slot.

2. Accepts any standard NTSC black and white or color video input.

3. Resolution of 256×256 with 64 shades of gray.

4. Includes capture software which supports both PCX and TIFF image file formats.

IV. The camera used is a Sony 8 mm with 8 to 1 zoom and standard NTSC outputs.

Although particular embodiments of the invention have been described and illustrated herein, it is recognized that modifications and variations may readily occur to those skilled in the art, and consequently it is intended that the claims be
interpreted to cover such modifications and equivalents.