Bias in Facial Recognition: Renewed Scrutiny of an Old Problem
Facial recognition offers some benefits, but poses certain risks that should be addressed as the technology is more widely adopted.
July 08, 2020 at 07:00 AM
Following the introduction of the Justice in Policing Act of 2020 on June 8, a number of tech companies have imposed restrictions on their own facial recognition technology due to concerns over bias, particularly in the context of law enforcement. These companies have also voiced support for legislation addressing and limiting the use of facial recognition by law enforcement.
In a letter to Congress, IBM declared that it no longer offers general purpose facial recognition or analysis software, citing concerns of misuse including mass surveillance, racial profiling, and violations of basic human rights and freedoms. The next day, Amazon announced a one-year moratorium on police use of its facial recognition technology. Microsoft followed suit, stating that it will not sell facial recognition technology to police departments in the United States until a national law is developed to govern the technology.
Despite this recent surge in attention, bias in facial recognition has been the subject of extensive research for many years, often leading to significant improvements. Recent scrutiny has seized on findings of bias to argue for harsh regulation. However, ongoing efforts to reduce and eliminate bias suggest that harsh regulation may not be necessary for all facial recognition in all contexts.
Algorithmic Bias
Bias and the potential for discrimination have long been areas of concern for facial recognition and other machine learning technologies. Even in the early 2010s, as big data and machine learning were becoming mainstream, researchers were aware that those technologies were particularly susceptible to bias.
The Obama Administration spent several years researching such concerns and issued a report on algorithmic systems, opportunity, and civil rights in 2016. This report identified a range of factors—both intentional and unintentional—that could result in discriminatory outputs from algorithms. Despite these risks, the report concluded that it "is essential that the public and private sectors continue to have collaborative conversations about how to achieve the most out of big data technologies while deliberately applying these tools to avoid—and when appropriate, address—discrimination."
Corrective Efforts
Companies, governments, and researchers have devoted considerable efforts to identify bias and understand how to eliminate it, often with special attention to facial recognition algorithms. The National Institute of Standards and Technology (NIST), supported by industry participation, has been carrying out its ongoing Face Recognition Vendor Test (FRVT) since 2016 and has released several reports investigating aspects of facial recognition technology.
In its most recent FRVT report, NIST evaluated accuracy variations across demographic groups and found demographic differentials in the majority of contemporary face recognition algorithms. But not all algorithms exhibited such differentials; some were accurate enough that no false positives were detected in NIST's testing.
Researchers have also carried out independent studies showing that some commercial facial recognition technologies exhibit significantly higher error rates when identifying females and darker-skinned people than when identifying lighter-skinned males, noting "that darker-skinned females are the most misclassified group (with error rates of up to 34.7%)," which is over forty times the maximum error rate for lighter-skinned males (0.8%). In response to this problematic result, one of the companies involved engaged in additional research to understand the issue and significantly improved its algorithm. Facial recognition companies continue to work to improve their algorithms to reduce and eliminate bias.
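The "over forty times" figure above follows directly from the two reported error rates; a quick sketch of the arithmetic (variable names are illustrative, figures are the ones cited in the studies):

```python
# Error rates reported in the independent studies cited above.
darker_female_error = 0.347  # up to 34.7% for darker-skinned females
lighter_male_error = 0.008   # maximum 0.8% for lighter-skinned males

# Ratio between the worst-case and best-case demographic groups.
ratio = darker_female_error / lighter_male_error
print(f"{ratio:.1f}x")  # roughly 43x, i.e. over forty times
```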
Heightened Scrutiny
Despite ongoing efforts to understand and correct bias in facial recognition, risks of discriminatory effects and consequences remain. This is particularly crucial when civil rights and civil liberties are at stake, for example when the technology is used by the government. Cities, states, and companies have opposed facial recognition in police body camera systems. The AI Ethics Board of a major manufacturer of body-worn cameras determined that, in the context of body cameras, facial recognition technology is not currently ethically justifiable and "should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups."
As California stated in its 2019 Body Camera Accountability Act:
The use of facial recognition and other biometric surveillance would disproportionately impact the civil rights and civil liberties of persons who live in highly policed communities. Its use would also diminish effective policing and public safety by discouraging people in these communities, including victims of crime, undocumented persons, people with unpaid fines and fees, and those with prior criminal history from seeking police assistance or from assisting the police.
Congress showed interest in the subject earlier this year, and the initial version of the Justice in Policing Act of 2020 generally prohibits the use of facial recognition on body cameras. Another draft bill goes further and essentially proposes to ban facial recognition altogether. Beyond body cameras, facial-recognition-assisted surveillance is of growing concern in today's context, especially when used on large groups without consent. Government uses of facial recognition are often opaque to, and poorly understood by, the public, heightening concerns over unknown risks.
Similarly, the European Commission has been evaluating biometric technologies and the risks they pose to fundamental rights. At one point, the Commission was considering a five-year ban on facial recognition in order to study the technology and thoroughly assess its risks. Although the Commission ultimately rejected a ban in light of the technology's possible benefits, it emphasized that it would continue to review facial recognition as the technology develops.
Similar concerns arise in nongovernment contexts, where the focus is often on privacy rather than discrimination. Most consumer-focused facial recognition (for example, unlocking one's phone or verifying one's identity) is performed at the customer's request and is limited in scope. Relatively few consumer applications act broadly without a specific customer request. Even so, such applications are not free from concerns over bias and discrimination.
Moving Forward: Regulations and Responsibility
Although some companies are taking steps to limit unregulated use of facial recognition, these companies are not calling for an outright ban. In conjunction with legislation, IBM emphasizes the need for responsible technology policies, stating "now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies." Microsoft conditioned potential future sale of facial recognition to police on the presence of laws "grounded in human rights" and has called for federal facial recognition regulation. Microsoft's president rejected the idea of a binary choice between banning the technology or not, asking instead "what is the right way to regulate it?" Amazon's moratorium contains exceptions for organizations that help rescue human trafficking victims or locate missing children. In its statement, Amazon advocates that "governments should put in place stronger regulations to govern the ethical use of facial recognition technology."
Like other technologies, facial recognition offers some benefits but poses certain risks that should be addressed as the technology is more widely adopted. There may be, and for government uses almost certainly is, a need for regulation and transparency. Such regulation, however, should focus on ethical and responsible use. For harmful uses, this may entail banning facial recognition, while for many other uses much less stringent approaches are appropriate.
Bias reduction and elimination has been a focus of facial recognition development—and will continue to be a key metric for any successful applications. These efforts should not be overlooked when determining the path forward. Facial recognition technologies promise many benefits, which can be achieved responsibly through thoughtful legislation and ongoing efforts to improve the technology's shortcomings.
Maureen K. Ohlhausen chairs Baker Botts' Global Antitrust and Competition practice. Her practice focuses on antitrust, privacy and data security and consumer protection investigations and litigation both in the U.S. and abroad. She advises top-tier clients across a wide variety of industries including technology, retail, telecommunications, social media, and life sciences.
Cynthia J. Cole is currently Special Counsel at Baker Botts in Palo Alto, California and formerly CEO and General Counsel in public and private companies, particularly related to technology, corporate transactional and data privacy issues such as the California Consumer Privacy Act of 2018 (CCPA) and the EU's General Data Protection Regulation (GDPR).
Ryan Dowel is an associate in the Baker Botts Intellectual Property Practice. His practice encompasses a range of intellectual property matters, including patent litigation, patent preparation and prosecution, and worldwide portfolio management across a variety of technological fields.