When you fill out a job application, the person reading your career and education history may not be anyone from human resources. Instead, it could be software scanning applications for the best candidates. But alongside artificial intelligence's reported efficiency are growing reports, and criticism, that such algorithms can be built on biased data, fostering discrimination in housing, loan approval and employment.

Notably, federal courts have split over whether disparate impact, the theory that a facially neutral policy or procedure can unlawfully disadvantage members of a protected group, applies to job applicants. But even without disparate impact coverage, AI-backed decisions can't be allowed to run amok, management-side lawyers warn.

Circuit court rulings aside, the threat of state court litigation and negative publicity should push companies to review AI-based results for potential discrimination anyway, lawyers said.
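As an illustration only, one common screening heuristic for such a review is the EEOC's "four-fifths rule": if a protected group's selection rate falls below 80% of the highest group's rate, the outcome warrants closer scrutiny. The sketch below applies that heuristic to hypothetical output from an AI resume filter; the group labels, data and function names are invented for the example, and passing the check is not legal advice or proof of compliance.

```python
# Minimal adverse-impact screening sketch using the EEOC four-fifths rule.
# All data here is hypothetical; a real review would use actual hiring
# outcomes and involve counsel.

from collections import Counter

FOUR_FIFTHS = 0.8  # EEOC rule-of-thumb threshold

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, e.g. ("over_40", True)."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, ok in outcomes if ok)
    return {g: selected[g] / applied[g] for g in applied}

def adverse_impact_flags(outcomes):
    """Flag groups whose selection rate is below four-fifths of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < FOUR_FIFTHS for g, rate in rates.items()}

# Hypothetical screening results from an AI resume filter
results = ([("over_40", False)] * 70 + [("over_40", True)] * 30
           + [("under_40", False)] * 40 + [("under_40", True)] * 60)

print(selection_rates(results))       # {'over_40': 0.3, 'under_40': 0.6}
print(adverse_impact_flags(results))  # {'over_40': True, 'under_40': False}
```

Here the over-40 group's rate (30%) is only half the under-40 group's (60%), well under the four-fifths threshold, so the result would be flagged for a closer look at the parameters driving the decisions.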

Whether job applicants can bring disparate impact claims remains unsettled after the U.S. Supreme Court declined to hear Kleber v. CareFusion. The court let stand the U.S. Court of Appeals for the Seventh Circuit's ruling that a job applicant can't bring a disparate impact claim under the Age Discrimination in Employment Act (ADEA). In 2016, the Eleventh Circuit reached a similar conclusion in Villarreal v. R.J. Reynolds Tobacco, holding that the ADEA's disparate impact provision protects workers, not job applicants, against age discrimination.

However, Littler Mendelson robotics, AI and automation practice group co-chair Natalie Pierce noted that the U.S. District Court for the Northern District of California ruled in Rabin v. PricewaterhouseCoopers that disparate impact claims are available to applicants as well as employees, because Section 623(a)(2) of the ADEA uses the phrase "any individual" rather than "employee" in identifying those the statute protects.

Notwithstanding the disagreement in the federal courts, Pierce said companies need to stay aware of the outcomes of software-backed decision-making.

"I would say even if we have some circuit courts that have held that there is no disparate impact that can be brought by applicants, we should still be making sure that the decisions we are making, and the parameters we are making around potential applicants, are based on legitimate business needs," she said.

Bias and discrimination cases also aren't limited to federal courts, Pierce noted, and state courts could become the go-to jurisdictions for discrimination claims.

"What we might see, because of what we've seen in the Eleventh and Seventh Circuit, is more of these claims filed in state courts," she said. "State courts tend to be more plaintiff-friendly, and not only do we have potentially more employee-friendly courts, there is potentially damages that could exceed the damages under the ADEA if it's brought before a state court jury."

Tulane University Law School financial risk management and corporate law professor Kristin Johnson noted disparate impact liability empowers not only litigants and courts, but also state regulators to protect citizens. "If we have robust anti-discrimination enforcement and if disparate impact liability is a tool in the toolbox of regulators or state agencies, then the use of a platform that integrates biased data or algorithms is curtailed," she explained.

But Johnson also noted that CareFusion is one recent example of how "the theory of disparate impact has been under attack."

"It may be difficult to challenge the use of those platforms at the outset if disparate impact liability is not a pathway that litigants and regulators could use," she warned.

But Pierce argued plaintiffs have state courts and public perception on their side when alleging discrimination. She cited Facebook's recent housing ads settlement and a recent Department of Housing and Urban Development discrimination charge as examples of the financial and public relations costs most companies, including employers, will seek to avoid.

"I think we are seeing more of a call for greater transparency and greater accountability in terms of what the application of machine learning really means for applicants or employees," Pierce said. "I think we are going to see more calls to really look at what the outputs are and make sure there aren't unintended consequences of machine learning."