A study released in November 2018 examined how algorithms are used to decide loan approvals, a task that can be laden with bias. Companies that rely on algorithms can't turn a blind eye to the results their software produces; instead, they should understand how the algorithm works and what data it draws on, and monitor its results, a Big Law attorney said.

The paper, "Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved," written by Cornell University professors, a Ph.D. student and Capital One staffers, identified potential pitfalls in auditing lending algorithms for bias when applicants do not disclose protected-class attributes such as gender or race.
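To make the underlying problem concrete: when race or gender is not recorded, auditors often fall back on proxy probabilities (for example, estimates derived from surname and geography) to measure approval-rate gaps. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual method; the data and the `p_group` proxy values are invented for demonstration.

```python
# Illustrative sketch (hypothetical, not the paper's code): estimating the
# approval-rate gap between two groups when group membership is unobserved,
# using soft proxy probabilities p_group = P(applicant belongs to group A).

def weighted_disparity(approved, p_group):
    """Probability-weighted approval-rate gap (group A minus group B)."""
    # Weighted approval rate for group A, using proxy probabilities as weights
    rate_a = sum(p * a for p, a in zip(p_group, approved)) / sum(p_group)
    # Weighted approval rate for group B, using the complementary weights
    rate_b = (sum((1 - p) * a for p, a in zip(p_group, approved))
              / sum(1 - p for p in p_group))
    return rate_a - rate_b

# Synthetic example: six applicants, their loan decisions (1 = approved),
# and invented proxy probabilities of belonging to group A
approved = [1, 0, 1, 1, 0, 1]
p_group = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3]

print(round(weighted_disparity(approved, p_group), 3))  # → -0.333
```

A negative gap here would suggest group A is approved less often than group B under the proxy. The paper's central caution applies to exactly this kind of estimate: when the proxy is noisy or correlated with the outcome, the measured disparity can over- or understate the true gap, which is why monitoring the data an algorithm draws on matters.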
