Nervous System: Teaching Machines to See Faces
This month's lesson on the history of cybersecurity and legal technology examines how facial recognition developed by looking at a 3-D problem in a 2-D way.
June 05, 2020 at 08:00 AM
With the aggressive pace of technological change and the onslaught of news regarding data breaches, cyber-attacks, and technological threats to privacy and security, it is easy to assume these are fundamentally new threats. The pace of technological change is slower than it feels, and many seemingly new categories of threats have been with us longer than we remember. Nervous System is a monthly series that approaches issues of data privacy and cybersecurity from the context of history—to look to the past for clues about how to interpret the present and prepare for the future.
People increasingly use facial recognition systems to access smartphones and bank accounts, to assist with policing and border crossings, to organize photo libraries, and for other applications. As facial recognition systems become more common, that familiarity can obscure the fact that training a computer to recognize a face is a complex computational challenge. Humans take this natural ability for granted (it is a facility so powerful that we can even "see" faces in shadows on Mars or in burn marks on toast). For a computer, however, the task must be reduced to a mathematical process.
Researchers first started trying to teach computers to recognize human faces in the 1960s, but modern facial recognition technology began with a landmark paper published 29 years ago this month. Two researchers at the Massachusetts Institute of Technology turned what had previously depended on manual labor by computer programmers into a mostly automated process.
Matthew Turk and Alex Pentland's "eigenface" method is not about teaching a computer to recognize a person as a living three-dimensional being that occupies space. The premise is to simplify the task by approaching it as a two-dimensional project, recognizing a face inside a photographic image. That image is a grid of pixels, each of which has a certain value of brightness or darkness. The entire photograph can be represented by a matrix of data points—each pixel's grid coordinates and luminosity.
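To make the "grid of pixels" idea concrete, the sketch below (illustrative NumPy code, not drawn from the original paper) shows how a tiny grayscale image becomes the kind of flat vector of data points the eigenface math operates on:

```python
import numpy as np

# A toy 3x3 grayscale "photograph": each entry is one pixel's
# luminosity on a 0-255 scale (values here are purely illustrative).
image = np.array([
    [ 52, 110, 101],
    [ 60, 200, 190],
    [ 55, 180, 170],
])

# Flattening the grid row by row turns the picture into a single
# vector; a pixel's grid coordinates are recoverable from its index.
vector = image.flatten()
print(vector.shape)  # (9,)
```

A real photograph works the same way, just with far more entries: a 256x256 image becomes a vector of 65,536 luminosity values.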
For discussion's sake, imagine a sample collection of one hundred distinct images that will be used to train the system. They might not represent one hundred distinct faces, because there may be multiple shots of the same face, but they are one hundred different pictures. Each is formatted the same way, with the same dimensions and resolution, so that every pixel in every picture has a corresponding pixel in every other picture.
The next step is to take the average value of each pixel. In other words, for pixel 1, sum up the luminosity of all one hundred variations and divide that by a hundred. Do the same for pixel 2, and so on to the end. The resulting picture is a blurry ghostlike representation of the average of every face in the sample set.
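The pixel-by-pixel averaging described above amounts to a column-wise mean over the training set. A minimal sketch, using synthetic stand-in data rather than real photographs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: 100 "photographs", each already flattened
# to a vector of 64x64 = 4096 pixel luminosities (synthetic data).
num_images, num_pixels = 100, 64 * 64
faces = rng.integers(0, 256, size=(num_images, num_pixels)).astype(float)

# For each pixel position, sum the luminosity across all one hundred
# images and divide by one hundred -- the column-wise mean.
mean_face = faces.sum(axis=0) / num_images

print(mean_face.shape)  # (4096,)
```

Reshaped back to 64x64 and displayed, `mean_face` is the blurry, ghostlike average face the article describes.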
Every actual picture in the sample set can be recreated by taking this ghostly average and applying a series of transformations. This is where something seemingly magical happens. From a machine's point of view, the transformations are just dumb math—but a human watching this process unfold would describe the results in an entirely different way. The transformations have the effect of mapping certain facial features—make the eyes more almond-shaped, lengthen the hair, widen the nostrils, make the smile more lopsided—but nothing in the algorithm maps any such thing. It simply happens that the kinds of differences that separate individual faces in the sample from the average tend to correlate to the kinds of things a witness might tell a police sketch artist when trying to reconstruct a given face.
The system compares images against this base set, subtracts the common elements they share, homes in on the distinctive features that make a given image different, and assigns mathematical weights to how a given image compares to the base set. It turns out to be a procedure that neural networks can learn to perform. The approach greatly reduces the processing time needed to compare a given image against a large database of source images.
These transformations are called "eigenfaces," after the concept of "eigenvectors" in linear algebra. The concept is that certain essential transformations from some idealized norm characterize each individual. Apply the right eigenfaces to the average, and it is possible to restore any of the original samples.
Crucially, though, it is also possible to apply a combination of eigenfaces to the average to construct an image that was not part of the starting sample set. If a certain characteristic combination of eigenfaces is needed to restore a known facial image, and a substantially similar combination of eigenfaces defines a new image, then there is a mathematical basis to conclude the two images are visually similar.
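The whole pipeline—subtract the average, extract eigenfaces, summarize each image by its weights, and match a new image by comparing weight vectors—can be sketched in a few lines of NumPy. This is a simplified illustration on synthetic data, not the authors' implementation; modern treatments compute the eigenfaces via a singular value decomposition, which yields the same principal components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the training set: 100 flattened face images.
num_images, num_pixels = 100, 32 * 32
faces = rng.normal(size=(num_images, num_pixels))

# 1. Subtract the average face from every image.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. The eigenfaces are the principal components of the centered set;
#    the SVD's right-singular vectors give them directly.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
num_components = 20
eigenfaces = vt[:num_components]           # shape (20, num_pixels)

# 3. Each image is summarized by its weights on the eigenfaces.
weights = centered @ eigenfaces.T          # shape (100, 20)

# 4. A new image is recognized by projecting it the same way and
#    finding the training image with the nearest weight vector.
new_image = faces[42] + rng.normal(scale=0.01, size=num_pixels)
new_weights = (new_image - mean_face) @ eigenfaces.T
distances = np.linalg.norm(weights - new_weights, axis=1)
best_match = int(np.argmin(distances))
print(best_match)  # 42: the noisy copy matches its source image
```

The payoff is the compression the article goes on to describe: each face is reduced from thousands of pixel values to a handful of weights, and two images are "similar" when their weight vectors are close.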
Early researchers in facial recognition technology had to spend grueling hours hand coding the critical facial features on a batch of photographs to establish the mathematical basis for the machine algorithms used in pattern recognition. Those early experiments were promising, but they depended on humans to identify the key facial features like eyes, nose, and mouth on the base sample set.
By contrast, the eigenface method involves calculations that can be performed quickly and reliably by a machine that has no concept of "eyes," "nose," or "mouth."
In "Face Recognition Using Eigenfaces," Turk and Pentland condensed the highly complex multidimensional features of a human face into a simple two-dimensional matrix. In addition to allowing for computerized face detection and face recognition, the technology drastically reduces the data storage and transmission needs associated with facial images. In the three decades since its publication, other technologies have been developed, but many modern face recognition systems still use a version of this technique.
David Kalat is Director, Global Investigations + Strategic Intelligence at Berkeley Research Group. David is a computer forensic investigator and e-discovery project manager. Disclaimer for commentary: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions, position, or policy of Berkeley Research Group, LLC or its other employees and affiliates.