
Following the introduction of the Justice in Policing Act of 2020 on June 8, a number of tech companies have imposed restrictions on their own facial recognition technologies due to concerns over bias, particularly in the context of law enforcement. Beyond these self-imposed restrictions, the companies have also voiced support for legislation addressing and limiting the use of facial recognition by law enforcement.

In a letter to Congress, IBM declared that it no longer offers general-purpose facial recognition or analysis software, citing concerns over misuse, including mass surveillance, racial profiling, and violations of basic human rights and freedoms. The next day, Amazon announced a one-year moratorium on police use of its facial recognition technology. Microsoft followed suit, stating that it will not sell facial recognition technology to police departments in the United States until a national law is developed to govern the technology.

Despite this surge in attention, bias in facial recognition has been the focus of much research for many years, and that research has often led to significant improvements. Recent scrutiny tends to cite findings of bias to justify harsh regulation. However, efforts to reduce and eliminate bias demonstrate that harsh regulation may not be necessary for all facial recognition in all contexts.


Algorithmic Bias

Bias and the potential for discrimination have long been areas of concern for facial recognition and other machine learning technologies. Even in the early 2010s, as big data and machine learning were becoming mainstream, researchers were aware that those technologies were particularly susceptible to bias.

The Obama Administration spent several years researching such concerns and issued a report on algorithmic systems, opportunity, and civil rights in 2016. This report identified a range of factors—both intentional and unintentional—that could result in discriminatory outputs from algorithms. Despite these risks, the report concluded that it "is essential that the public and private sectors continue to have collaborative conversations about how to achieve the most out of big data technologies while deliberately applying these tools to avoid—and when appropriate, address—discrimination."


Corrective Efforts

Companies, governments, and researchers have devoted considerable effort to identifying bias and understanding how to eliminate it, often with special attention to facial recognition algorithms. The National Institute of Standards and Technology (NIST), supported by industry participation, has been carrying out its ongoing Face Recognition Vendor Test (FRVT) since 2016 and has released several reports investigating aspects of facial recognition technology.

In its most recent FRVT report, NIST evaluated accuracy variations across demographic groups and found demographic differentials in the majority of contemporary face recognition algorithms. But not all algorithms exhibited demographic differentials; some were accurate enough that no false positives were detected in NIST's testing.
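In this context, a demographic differential simply means that an error metric, such as the false positive (false match) rate, differs between demographic groups. As a minimal illustration of how such per-group rates could be tabulated (the record format below is hypothetical and assumed purely for the sketch, not NIST's actual data format):

from collections import defaultdict

def false_positive_rates(comparisons):
    """Tabulate per-group false positive (false match) rates.

    `comparisons` is an iterable of (group, same_person, matched) records,
    a hypothetical audit format assumed for illustration: `same_person`
    indicates whether the two compared faces truly belong to one person,
    and `matched` is the algorithm's decision.
    """
    impostor_trials = defaultdict(int)  # comparisons of two different people
    false_matches = defaultdict(int)    # impostor pairs the algorithm matched

    for group, same_person, matched in comparisons:
        if not same_person:
            impostor_trials[group] += 1
            if matched:
                false_matches[group] += 1

    # A sizable gap between groups' rates is a demographic differential.
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

example = [
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
]
print(false_positive_rates(example))  # {'group_a': 0.5, 'group_b': 0.0}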

Researchers have carried out independent studies showing that some commercial facial recognition technologies exhibit significantly higher error rates when identifying females and darker-skinned people than when identifying lighter-skinned males, noting "that darker-skinned females are the most misclassified group (with error rates of up to 34.7%)", which is over forty times the maximum error rate for lighter-skinned males (0.8%). In response to this problematic result, one of the companies involved undertook additional research to understand the issue and significantly improved its algorithm. Facial recognition companies continue to work to improve their algorithms to reduce and eliminate bias.
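As a quick arithmetic check of that comparison, using only the error rates reported in the study and no other data:

# Published error rates quoted above, restated only to verify the
# "over forty times" comparison in the text.
darker_skinned_female_error = 0.347  # up to 34.7% misclassification
lighter_skinned_male_error = 0.008   # maximum 0.8% misclassification

ratio = darker_skinned_female_error / lighter_skinned_male_error
print(f"{ratio:.1f}x")  # prints 43.4x, i.e., over forty times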


Heightened Scrutiny

Despite ongoing efforts to understand and correct bias in facial recognition, risks of discriminatory effects and consequences remain. This is particularly crucial when civil rights and civil liberties are at stake, for example, when the technology is used by the government. Cities, states, and companies have opposed facial recognition in police body camera systems. The AI Ethics Board of a major manufacturer of body-worn cameras determined that, in the context of body cameras, facial recognition technology is not currently ethically justifiable and "should not be deployed until the technology performs with far greater accuracy and performs equally well across races, ethnicities, genders, and other identity groups."

As California's 2019 Body Camera Accountability Act states:

The use of facial recognition and other biometric surveillance would disproportionately impact the civil rights and civil liberties of persons who live in highly policed communities. Its use would also diminish effective policing and public safety by discouraging people in these communities, including victims of crime, undocumented persons, people with unpaid fines and fees, and those with prior criminal history from seeking police assistance or from assisting the police.

Congress showed interest in the subject earlier this year, and the initial version of the Justice in Policing Act of 2020 generally prohibits the use of facial recognition on body cameras. Another draft bill goes further, essentially proposing to ban facial recognition altogether. Beyond body cameras, facial-recognition-assisted surveillance is of growing concern in today's context, especially when used on large groups without consent. Government uses of facial recognition are often opaque to and poorly understood by the public, heightening concerns over unknown risks.

Similarly, the European Commission has been evaluating biometric technologies and the risks they pose to fundamental rights. At one point, the European Commission was considering a five-year ban on facial recognition in order to study the technology and thoroughly assess its risks. Although a ban was ultimately rejected in recognition of the technology's possible benefits, the Commission emphasized that it would continue to review facial recognition as the technology develops.

Such concerns are also present in nongovernment contexts, where the focus is often on privacy rather than discrimination. Most consumer-focused facial recognition (for example, unlocking one's phone or verifying identity) operates at the customer's request for a service, and its scope is limited. Relatively few consumer applications act broadly without a specific customer request. However, such applications are not free from concerns over bias and discrimination.


Moving Forward: Regulations and Responsibility

Although some companies are taking steps to limit unregulated use of facial recognition, these companies are not calling for an outright ban. In conjunction with legislation, IBM emphasizes the need for responsible technology policies, stating that "now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies." Microsoft conditioned potential future sales of facial recognition to police on the presence of laws "grounded in human rights" and has called for federal facial recognition regulation. Microsoft's president rejected the idea of a binary choice between banning the technology or not, querying instead "what is the right way to regulate it?" Amazon's moratorium contains exceptions for organizations that help rescue human trafficking victims or locate missing children. In its statement, Amazon advocates that "governments should put in place stronger regulations to govern the ethical use of facial recognition technology."

Like other technologies, facial recognition offers some benefits but poses certain risks that should be addressed as the technology is more widely adopted. There may be, and for government uses almost certainly is, a need for regulation and transparency. Such regulation, however, should focus on ethical and responsible use. For harmful uses, this may entail banning facial recognition; for many other uses, much less stringent approaches are appropriate.

Reducing and eliminating bias has been a focus of facial recognition development and will continue to be a key metric for any successful application. These efforts should not be overlooked when determining the path forward. Facial recognition technologies promise many benefits, which can be achieved responsibly through thoughtful legislation and ongoing efforts to improve the technology's shortcomings.

 

Maureen K. Ohlhausen chairs Baker Botts' Global Antitrust and Competition practice. Her practice focuses on antitrust, privacy and data security, and consumer protection investigations and litigation, both in the U.S. and abroad. She advises top-tier clients across a wide variety of industries, including technology, retail, telecommunications, social media, and life sciences.

Cynthia J. Cole is currently Special Counsel at Baker Botts in Palo Alto, California, and formerly served as CEO and General Counsel of public and private companies, with particular experience in technology, corporate transactional, and data privacy issues such as the California Consumer Privacy Act of 2018 (CCPA) and the EU's General Data Protection Regulation (GDPR).

Ryan Dowel is an associate in the Baker Botts Intellectual Property Practice. His practice encompasses a range of intellectual property matters, including patent litigation, patent preparation and prosecution, and worldwide portfolio management across various technological fields.