The risk/reward analysis for AI tools that use facial recognition technology is by now fairly well known: On the one hand, we have the benefits of quick and easy unlocking of cellphones, access to numerous apps without the need to enter a password, and control of building entry to prevent unauthorized access, to name a few. On the other hand, facial recognition is also a core component of many surveillance tools, with broad social implications.

Numerous articles discuss the biases that can be embedded in facial recognition tools when they are trained on narrow datasets. For instance, a facial recognition tool trained on a dataset consisting mostly of men has a higher error rate when used on women; a tool trained on subjects with lighter complexions has higher error rates when used to identify people with darker complexions; and age and ethnic limitations in a training dataset can present additional issues. These disparities are commonly surfaced by measuring the same model's error rate separately for each demographic group, as sketched below.
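The following is a minimal sketch in Python of that per-group evaluation, using entirely invented data: it computes, for each demographic group, the rate at which genuine matches are wrongly rejected. The group labels, records, and helper function are all hypothetical; in practice the records would come from running a trained model against a labeled benchmark.

```python
from collections import defaultdict

# Hypothetical verification results: (group, predicted_match, actual_match).
# These values are invented for illustration only; real audits would draw
# them from a labeled benchmark dataset.
results = [
    ("lighter_complexion", True, True),
    ("lighter_complexion", True, True),
    ("lighter_complexion", False, True),   # one genuine match missed
    ("darker_complexion", False, True),    # misses more frequent here
    ("darker_complexion", False, True),
    ("darker_complexion", True, True),
]

def false_non_match_rates(records):
    """Per-group rate at which genuine matches are wrongly rejected."""
    misses = defaultdict(int)
    genuine = defaultdict(int)
    for group, predicted, actual in records:
        if actual:  # only genuine (true-match) pairs count toward this rate
            genuine[group] += 1
            if not predicted:
                misses[group] += 1
    return {group: misses[group] / genuine[group] for group in genuine}

for group, rate in false_non_match_rates(results).items():
    print(f"{group}: false non-match rate = {rate:.0%}")
```

On the invented data above, the script reports a rate of 33% for one group and 67% for the other; it is exactly this kind of gap, reported across gender, complexion, age, and ethnicity, that the studies in question document.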
