
A study released in November 2018 examined how algorithms are used to decide loan approvals, a task that can be laden with bias. Companies that leverage algorithms can't turn a blind eye to the results their software produces; instead, they should understand how the algorithm works and what data it pulls from, and monitor its results, a Big Law attorney said.

The paper, “Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved,” written by Cornell University professors, a Ph.D. student and Capital One staffers, found potential pitfalls when algorithms are applied to loan applicants who haven't disclosed a protected class, such as gender or race.

Nathan Kallus, a Cornell Tech professor and co-author of the paper, said that when an applicant doesn't include their protected class, regulators may be overestimating disparities by inferring race from zip codes or other factors. For example, when an applicant doesn't list their race in a loan application, institutions use “proxy variables” such as zip codes or surnames listed on the application to predict the applicant's race, Kallus said.
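As a rough illustration of that proxy approach, the sketch below combines surname and geography statistics into a race probability in a naive-Bayes style. It is a hypothetical example with invented probability tables, not the method used by any particular institution or by the paper.

```python
# Hypothetical sketch of proxy-based race inference, loosely in the spirit of
# surname-and-geography proxies. The probability tables below are made-up
# illustrative numbers, not real census or surname data.

# P(race | surname), e.g., derived from a surname frequency list
P_RACE_GIVEN_SURNAME = {
    "garcia": {"white": 0.05, "black": 0.01, "hispanic": 0.92, "other": 0.02},
    "smith":  {"white": 0.70, "black": 0.23, "hispanic": 0.02, "other": 0.05},
}

# P(race | zip code), e.g., derived from neighborhood demographics
P_RACE_GIVEN_ZIP = {
    "10001": {"white": 0.55, "black": 0.15, "hispanic": 0.20, "other": 0.10},
    "33125": {"white": 0.10, "black": 0.05, "hispanic": 0.80, "other": 0.05},
}


def proxy_race_probabilities(surname: str, zip_code: str) -> dict:
    """Combine surname and geography proxies into a guess about race.

    A naive-Bayes-style combination: multiply the two conditional
    distributions and renormalize so the result sums to one.
    """
    surname_probs = P_RACE_GIVEN_SURNAME[surname.lower()]
    zip_probs = P_RACE_GIVEN_ZIP[zip_code]

    combined = {race: surname_probs[race] * zip_probs[race] for race in surname_probs}
    total = sum(combined.values())
    return {race: p / total for race, p in combined.items()}


if __name__ == "__main__":
    # An applicant who left race blank: the proxy yields only a probabilistic guess.
    print(proxy_race_probabilities("Garcia", "33125"))
```

The key point of the sketch is that the output is a guess, not the applicant's actual race, and any disparity measured against that guess inherits its errors.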

“We wanted to investigate this approach by proxy and assess if it works,” Kallus said. “Obviously in such high-stakes domains, you really want to make sure you are doing the right thing. We really wanted to dig deep.”

The study reviewed multiple algorithms used when protected classes aren't definitively known and found that they can both over- and underestimate disparity.
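A minimal simulation, using invented numbers rather than figures from the study, shows one way a proxy can distort the measured gap: when group labels are guessed with some error, the apparent approval-rate gap in this setup shrinks relative to the true one, while other proxy structures can instead inflate it. That is the kind of over- and underestimation the paper examines.

```python
import random

# Toy simulation (invented parameters, not the study's data or models):
# compare the true approval-rate gap between two groups with the gap
# measured when group membership is guessed from a noisy proxy.

random.seed(0)
N = 100_000

TRUE_APPROVAL = {"A": 0.80, "B": 0.70}   # true approval rates by group
PROXY_ACCURACY = 0.75                    # proxy guesses the group correctly 75% of the time

true_counts = {"A": [0, 0], "B": [0, 0]}   # [approved, total] by true group
proxy_counts = {"A": [0, 0], "B": [0, 0]}  # [approved, total] by proxy-predicted group

for _ in range(N):
    group = random.choice(["A", "B"])
    approved = random.random() < TRUE_APPROVAL[group]

    # The proxy sometimes flips the group label.
    guess = group if random.random() < PROXY_ACCURACY else ("B" if group == "A" else "A")

    true_counts[group][1] += 1
    proxy_counts[guess][1] += 1
    if approved:
        true_counts[group][0] += 1
        proxy_counts[guess][0] += 1


def gap(counts):
    """Approval-rate difference between group A and group B."""
    rate = {g: approved / total for g, (approved, total) in counts.items()}
    return rate["A"] - rate["B"]


print(f"true disparity:  {gap(true_counts):.3f}")   # roughly 0.10
print(f"proxy disparity: {gap(proxy_counts):.3f}")  # attenuated, roughly 0.05
```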

Finding the fairest algorithm is difficult, Kallus said. While the paper doesn't recommend policy, Kallus suggested that collecting an applicant's protected class may prove the easiest way to detect discriminatory institutional practices.

“What one might infer from these results is that maybe if you want to be fair, it's better to know who these people are. Maybe it's better to be aware,” Kallus said. However, such data can be misused or pose a privacy concern, he added.

The “garbage in, garbage out” adage can also apply to biased algorithms that fuel the machine learning or artificial intelligence leveraged by court systems, regulators or financial institutions, where decisions carry significant consequences.

“People create algorithms, and if the people that create them live in a racist world of systemic discrimination, they reproduce that discrimination in their computer code. And I'm not sure how to guard against that,” said Andrea Freeman, a University of Denver Sturm College of Law professor and author of the North Carolina Law Review article “Racism in the Credit Card Industry.”

However, regulators would hold companies accountable if they used software that discriminates against protected classes, said Kevin Petrasic, White & Case's global financial institutions advisory practice chair.

“The regulator can go after the vendor and institution, especially if it's patently discriminatory or lending that has a disparate impact against a protected class,” Petrasic said. If a company doesn't have controls in place to monitor its algorithms, it is “not going to be given too much empathy from the regulator's perspective,” he added.

Petrasic, a White & Case partner and former special counsel to the U.S. Treasury Department's Office of Thrift Supervision, said problems can arise from how an algorithm was structured and trained, leading to potential inherent biases. “All of those issues suggest that there's a tremendous amount of awareness that needs to be had to the use of algorithms,” he explained.

Petrasic added that financial institutions should have an “explainability factor” for their algorithms: they should be able to explain how an algorithm works and what controls are in place to monitor its results.