While artificial intelligence-based personality assessment and job interview tools are becoming increasingly common for job seekers in the United States, they remain quite rare in the legal industry.

Observers of AI and legal hiring say the industry's hesitancy to adopt these tools may stem from its smaller hiring pool and from uncertainty over whether implicit bias could be coded into the software.

To be sure, not all law firms have kept AI hiring tools on the shelf. In 2018, O'Melveny & Myers announced it was leveraging an AI-based personality assessment tool developed by Pymetrics for law school students interested in the firm's summer associate program. At the time, the firm said the tool was an effort to reach more diverse students.

In a recent interview with Legaltech News, O'Melveny & Myers diversity and inclusion partner Darin Snyder said the assessment was so well received that the firm decided to also use the tool during the initial assessment phase of lateral hiring. He said the expansion was driven by the value of having additional objective data in the lateral hiring process.

Snyder noted there was reluctance when Pymetrics was first introduced to the firm's partners and leadership. Convincing partners and colleagues why Pymetrics mattered and how it could help the firm was the biggest hurdle, he said.

"There was an education process. It required time and it was a real challenge," Snyder said.

O'Melveny was one of the first firms to announce it would use such a tool during the hiring process. Jamy Sullivan, executive director of legal recruitment at consulting firm Robert Half Legal, said most firms aren't leveraging advanced AI tools in hiring. Instead, midsize and large law firms are more likely to use a linguistic analysis tool to scan an applicant's resume or cover letter, she said.
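Sullivan didn't name a specific product, but linguistic analysis of this kind can be as simple as weighted keyword matching against a job posting. The Python sketch below is purely illustrative; the keywords, weights, and scoring scheme are hypothetical and not drawn from any vendor's tool.

```python
import re

# Purely illustrative keyword screen of the kind Sullivan describes.
# The keywords and weights here are hypothetical.
KEYWORDS = {"litigation": 3, "e-discovery": 2, "westlaw": 1, "deposition": 2}

def score_resume(text: str) -> int:
    """Sum the weights of each keyword that appears in the resume text."""
    words = set(re.findall(r"[a-z\-]+", text.lower()))
    return sum(weight for kw, weight in KEYWORDS.items() if kw in words)

resume = "Five years of litigation experience, including deposition prep and Westlaw research."
print(score_resume(resume))  # 3 + 2 + 1 = 6
```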

The slower adoption of more advanced technology is rooted in the smaller number of applications a firm receives compared with large institutional organizations, and in an unease with the unknowns of such consequential technology, she said.

"There are benefits, but there's also still some unknowns as to the development of the platform," Sullivan said. "How do you set up the platform, [and] how do you ensure it's an unbiased platform?"

Sullivan and others said law firms' caution toward AI-powered hiring tools is warranted.

"A lot of companies are nervous," said Eric Sydell, executive vice president of innovation at Modern Hire, a platform that utilizes technology for clients analyzing potential hires. "That is appropriate, I think, to be cautious of how they use AI. We are sort of in an interesting time when the regulations and guidelines governing hiring are pretty dated."

While federal and state laws have been outpaced by the technology, some employment laws do govern how it is used during the hiring process.

The U.S. Equal Employment Opportunity Commission (EEOC) has issued guidelines governing how employers can use pre-employment tools to prevent disparate impact based on race and gender, explained Mark Girouard, a labor and employment shareholder and vice chair at Nilan Johnson Lewis. The EEOC guidelines also require that any procedure used during the hiring process be job-related, he added.
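Disparate impact under the Uniform Guidelines on Employee Selection Procedures, which the EEOC enforces, is commonly screened with the "four-fifths rule": a group whose selection rate falls below 80% of the highest group's rate is generally treated as showing evidence of adverse impact. The sketch below illustrates that arithmetic with hypothetical applicant counts; it does not model any particular hiring tool.

```python
# A minimal sketch of the four-fifths rule (29 C.F.R. 1607.4(D)).
# Group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    A ratio below 0.8 is generally treated as evidence of adverse impact.
    """
    rates = {name: selection_rate(sel, apps) for name, (sel, apps) in groups.items()}
    best = max(rates.values())
    return {name: rate / best for name, rate in rates.items()}

# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {"group_a": (48, 120), "group_b": (30, 100)}
for name, ratio in adverse_impact_ratios(outcomes).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{name}: impact ratio {ratio:.2f} ({flag})")
```

Here group_a's selection rate is 0.40 and group_b's is 0.30, so group_b's impact ratio is 0.75 and falls below the four-fifths threshold.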

Girouard also noted that Illinois' newly enacted Artificial Intelligence Video Interview Act places new consent, transparency and destruction requirements on companies that use AI to scan a job applicant's recorded interview.

Lastly, Girouard highlighted the OECD AI Principles, nonbinding international guidelines intended to promote innovative and trustworthy AI. He noted the Electronic Privacy Information Center (EPIC) in September filed a complaint with the Federal Trade Commission alleging that video interview technology company HireVue failed to show it meets the OECD's baseline standards.

The case underscores how difficult it can be to create an AI hiring tool that stands up to scrutiny.

"There are efficiencies, if you build an AI system correctly they do have the potential to have less implicit bias than a human decision-maker but the real crux of that statement is you have to build them correctly," Girouard said. "If you don't build them correctly you can build bias into the system."