Tech Overview: Facial Recognition

Technology Overview Blog

Today’s post covers facial recognition technologies. It is not a complete survey of facial recognition and identification; rather, it is a high-level overview of some common facial recognition methodologies.

Image-based recognition 

An image is stored in computer memory as an array of numbers, usually ranging from 0 to 255.1 Color images, for example, are often stored using an indexed palette, where each pixel value refers to a palette color.2

Figure 1: Indexed color  (Image adapted from Wikipedia)3
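The indexed-color scheme above can be sketched in a few lines of Python. This is a minimal illustration, with a made-up palette and image rather than values from any real file:

```python
# A minimal sketch of indexed color: each pixel stores a palette index,
# and the palette maps indices to RGB values. (Values are illustrative.)

palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

# A tiny 2x4 indexed image: each entry is a palette index in 0-255.
indexed_image = [
    [0, 1, 1, 3],
    [2, 2, 0, 3],
]

def to_rgb(indexed, pal):
    """Expand an indexed image into full RGB pixel values."""
    return [[pal[i] for i in row] for row in indexed]

rgb = to_rgb(indexed_image, palette)
print(rgb[0][1])  # the pixel at row 0, column 1 is red: (255, 0, 0)
```

Storing one small index per pixel instead of three color channels is what makes indexed color a compact representation.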

Similarly, an integral image is often used in computer vision applications such as facial recognition. It is a summed-area table in which each entry holds the sum of the pixel values above and to the left of it, inclusive.4 For example, in Figure 2, the value of 101 in the third row of the second column of the integral image (2) represents the sum of all the values in the first three rows of the first and second columns of the original image (1): 31+2+12+26+13+17 = 101.5

Figure 2: Integral Image example (Image adapted from Wikipedia)6
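The integral image is straightforward to compute in a single pass. The sketch below uses the same pixel values as the worked example above (the first three rows of the first two columns of Figure 2):

```python
def integral_image(img):
    """Summed-area table: entry (r, c) is the sum of all pixels
    at rows <= r and columns <= c of the input image."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            # Running row sum plus the entry directly above.
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

# Pixel values from the Figure 2 example.
img = [
    [31, 2],
    [12, 26],
    [13, 17],
]
ii = integral_image(img)
print(ii[2][1])  # 31+2+12+26+13+17 = 101
```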

Image Recognition Algorithms

Recognition algorithms are often divided into two main approaches: geometric and photometric.7

Geometric Approach

Geometric approaches are feature-based; they perform recognition by identifying useful facial features.8 A feature-based approach matches nodal points within the image to a template database in order to perform identification.9 Nodal points can include the relative positions of features such as the eyes and nose, and/or the size and shape of those features.10

Geometric, feature-based recognition typically follows a series of steps:11

  • Detection
  • Alignment
  • Feature Extraction
  • Matching
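The four steps above can be sketched as a schematic pipeline. Every function body here is a placeholder of my own invention (the boxes, landmark names, and template values are all assumptions), meant only to show how the stages fit together, not any real library's API:

```python
# Schematic sketch of the detect -> align -> extract -> match pipeline.
# All stage implementations are hypothetical placeholders.

def detect(image):
    """Detection: locate candidate face regions in the image."""
    return [{"box": (10, 10, 60, 60)}]  # hypothetical bounding box

def align(image, face):
    """Alignment: rotate/scale the region to a canonical pose."""
    return {"aligned": True, **face}

def extract_features(face):
    """Feature extraction: nodal points such as eye spacing."""
    return {"eye_distance": 24.0, "nose_width": 11.0}  # illustrative

def match(features, template_db):
    """Matching: return the template closest to the extracted features."""
    def distance(template):
        return sum(abs(features[k] - template[k]) for k in features)
    return min(template_db, key=lambda name: distance(template_db[name]))

template_db = {
    "alice": {"eye_distance": 24.5, "nose_width": 11.2},
    "bob": {"eye_distance": 30.0, "nose_width": 14.0},
}
image = object()  # stand-in for pixel data
for face in detect(image):
    features = extract_features(align(image, face))
    print(match(features, template_db))  # "alice" is the nearer template
```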

Video-chat filters, such as those one might use to turn oneself into a cat, likely rely on a facial detection method known as the Viola-Jones framework.12

Figure 3: Lawyer Cat13

The Viola-Jones framework performs real-time scans of image data in the form of integral images.14 The use of integral images allows for quick identification of areas of contrast using the four types of rectangular features employed by the framework.15 The features are shown in Figure 4, below. These features often correspond to regions such as the nose bridge and eye sockets; the nose bridge is lighter than the surrounding area, and the eye sockets are darker.16 On its own, no single feature is a reliable indicator that a face is present.17 However, if there are enough matches in one region of the image, the framework can reasonably conclude that a face is present.18 Note that this method is most accurate for images in which the person is directly facing the camera.19

Figure 4: a) Features used by the Viola-Jones Framework; b) Application of Viola Jones Features in Facial recognition (Image Adapted from Wikipedia)20
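The reason the integral image makes these rectangular features fast is that the sum over any rectangle, whatever its size, needs only four table lookups. The sketch below evaluates a simple two-rectangle (bright-vs-dark) feature this way; the pixel values are illustrative:

```python
def integral_image(img):
    """Summed-area table of the input image."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in rows top..bottom, columns left..right,
    using only four lookups regardless of rectangle size."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

# A two-rectangle feature: bright left half versus dark right half.
img = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 1, 1) - rect_sum(ii, 0, 2, 1, 3)
print(feature)  # 800 - 40 = 760: strong contrast across the boundary
```

A large feature value signals an area of contrast; the real framework combines many such weak signals before concluding a face is present.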

As a side note, a Snapchat filter is mapped onto your face using an “active shape model”: a model of an “average face” generated through machine learning on a training set of possibly thousands of images in which the facial features were manually marked.21 This “average face” is then scaled and aligned with what has been detected as your face.22
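The “scaled and aligned” step can be sketched as a simple similarity transform that maps the average face’s landmarks onto the detected landmarks. This is a toy version under stated assumptions (rotation omitted, only two landmarks, made-up coordinates), not the actual Snapchat algorithm:

```python
# Hedged sketch: scale + translate the average face's landmarks onto the
# detected face. Landmark coordinates are illustrative, not from a real model.

average_face = {"left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0)}
detected = {"left_eye": (110.0, 90.0), "right_eye": (190.0, 90.0)}

def eye_distance(shape):
    (x1, y1), (x2, y2) = shape["left_eye"], shape["right_eye"]
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

# Scale: ratio of inter-eye distances (80 / 40 = 2.0 here).
scale = eye_distance(detected) / eye_distance(average_face)

# Translation: after scaling, shift the left eye onto its detected position.
sx, sy = average_face["left_eye"]
dx = detected["left_eye"][0] - sx * scale
dy = detected["left_eye"][1] - sy * scale

def transform(point):
    """Map a point on the average face into the detected face's frame."""
    x, y = point
    return (x * scale + dx, y * scale + dy)

print(transform(average_face["right_eye"]))  # lands on the detected right eye
```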

Photometric Approach

Photometric approaches operate directly on the pixel values, making matches based on the intensity values of the picture and the stored template.23 This sort of approach to facial recognition requires a database normalized to a compressed face representation, and it attempts to recognize the face in its entirety.24

An example of a photometric approach is eigenfaces.25 Eigenfaces are generated from a database of images of human faces.26 Performing statistical analysis on these images produces the eigenfaces.27 Eigenfaces appear as patterns of light and dark areas, as shown in Figure 5, below.28 A new face can be projected onto a set of eigenfaces, and how the new face differs from those eigenfaces is recorded.29 To identify the new face, its recorded variations are compared against “gallery images”: the recorded variations of facial images that have been processed using the same set of eigenfaces.30 Note that while an eigenface approach uses features, no structure of the face with links between facial features is created, as would be the case in a geometric approach.31

Figure 5: Some Eigenfaces from AT&T Labs Cambridge32
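The whole eigenface cycle, which is deriving the components, recording how each face differs from them, and matching a probe against the gallery, fits in a short NumPy sketch. The 4-pixel “faces” below are synthetic toy data (real systems use thousands of normalized face images):

```python
import numpy as np

# Toy training set: four 4-pixel "faces" (purely illustrative values).
faces = np.array([
    [9., 7., 3., 1.],
    [5., 7., 3., 5.],
    [1., 3., 7., 9.],
    [5., 3., 7., 5.],
])

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Eigenfaces are the principal components of the centered training set.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:2]  # keep the top two components

def project(face):
    """Record how a face differs from the mean along each eigenface."""
    return eigenfaces @ (face - mean_face)

# "Gallery images": stored weight vectors for the known faces.
gallery = {i: project(f) for i, f in enumerate(faces)}

# Identify a probe by finding the gallery entry with the nearest weights.
probe = project(np.array([9.1, 6.9, 3.0, 1.0]))  # a near-copy of face 0
best = min(gallery, key=lambda i: np.linalg.norm(gallery[i] - probe))
print(best)  # face 0 is the closest match
```

Note that matching happens entirely in the small weight space, not on raw pixels, which is the compression the photometric approach relies on.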

3D Facial recognition

3D facial recognition using infrared or near-infrared emitters and receivers has been employed as a security measure in systems such as Face ID and the Xbox Kinect.33 These ID-verification methods work by projecting structured light, essentially an array of infrared dots, onto a person’s face.34 Figure 6, below, shows an example of structured light.35

Figure 6: Structured Light36

Where the dots contact an object, they are distorted relative to the locations expected from the system’s initial calibration.37 Figure 7, below, shows an example of how the Kinect interprets the reflected infrared feedback to determine the depth of objects in its field of view.38

Figure 7: Creation of a Depth image using a Kinect39

By interpreting these distortions, such systems can build a 3D map of an individual’s face for use in future ID verification.40
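The core geometry behind this is triangulation: a dot that lands on a nearer or farther surface appears shifted from its calibrated position, and the size of that shift (the disparity) determines depth. The sketch below uses made-up calibration numbers, not real Kinect or Face ID values:

```python
# Hedged sketch of structured-light depth recovery via triangulation.
# Baseline and focal length are assumed, illustrative values.

BASELINE_M = 0.075  # emitter-to-camera separation in meters (assumed)
FOCAL_PX = 580.0    # camera focal length in pixels (assumed)

def depth_from_disparity(disparity_px):
    """Triangulation: depth = baseline * focal_length / disparity."""
    return BASELINE_M * FOCAL_PX / disparity_px

# A dot calibrated to appear at x = 400 px is observed at x = 429 px.
expected_x, observed_x = 400.0, 429.0
depth = depth_from_disparity(observed_x - expected_x)
print(round(depth, 2))  # 0.075 * 580 / 29 = 1.5 meters
```

Repeating this for every dot in the projected array yields a depth value per dot, which is the 3D face map used for verification.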


    1. David Capello, Color Mode, ASEPRITE; Indexed Color, WIKIPEDIA.
    2. Id.
    3. Indexed Color, WIKIPEDIA.
    4. Integral Image, MATHWORKS; Summed-Area Table, WIKIPEDIA.
    5. Id.
    6. Summed-Area Table, WIKIPEDIA.
    7. Facial Recognition System, WIKIPEDIA.
    8. Id.
    9. Rean Neil Luces, Template-based versus Feature-based Template Matching, MEDIUM; Facial Recognition System, WIKIPEDIA.
    10. Facial Recognition System, WIKIPEDIA.
    11. Id.
    12. Vox, How Snapchat’s filters work, YOUTUBE (Jun. 28, 2016); Thomas Smith, Here Is the Exact Video Filter That Created Lawyer Cat, DEBUGGER.
    13. 394th District Court of Texas – Live Stream, Kitten Zoom Filter Mishap, YOUTUBE (Feb. 9, 2021).
    14. Viola-Jones Object Detection Framework, WIKIPEDIA; Summed-Area Table, WIKIPEDIA.
    15. Vox, How Snapchat’s filters work, YOUTUBE (Jun. 28, 2016); Viola-Jones Object Detection Framework, WIKIPEDIA.
    16. Id.
    17. Id.
    18. Id.
    19. Id.
    20. Viola-Jones Object Detection Framework, WIKIPEDIA.
    21. Vox, How Snapchat’s filters work, YOUTUBE (Jun. 28, 2016); U.S. Patent Publication No. 2015/0221118 A1 (published Aug. 6, 2015) (Elena Shaburova, applicant).
    22. Id.
    23. Facial Recognition System, WIKIPEDIA.
    24. Id.
    25. Id.
    26. Maryna Longnickel, Eigenfaces, MEDIUM; Eigenface, WIKIPEDIA.
    27. Id.
    28. Id.
    29. Id.
    30. Id.
    31. Facial Recognition System, WIKIPEDIA.
    32. Id.
    33. Facial Recognition System, WIKIPEDIA.
    34. Kinect, WIKIPEDIA.
    35. Structured Light, WIKIPEDIA.
    36. Id.
    37. Id.
    38. Kinect, WIKIPEDIA.
    39. Id.
    40. Structured Light, WIKIPEDIA; Kinect, WIKIPEDIA.
