On Feb. 19, the European Commission released a white paper proposing a new regulatory approach for high-risk and low-risk artificial intelligence systems. Included in that proposal is an "explainability" requirement aimed at making the mystery of AI's "black box" easier to understand.

The legal tech industry is split over how such a requirement will affect its technology. While some legal tech companies say the explainability requirement will "dumb down" future legal tech features, others argue legal tech cleared the explainability hurdle nearly 10 years ago.

"For high-risk cases, such as in health, policing, or transport, AI systems should be transparent, traceable and guarantee human oversight," the European Commission wrote in its press release announcing the final version of its proposal.

To be sure, there's also disagreement over whether legal tech would fall under the "high-risk" category and be subject to the proposed law.

Despite the proposal's scant details, Kira Systems chief technology officer and co-founder Alexander Hudek said he was fairly confident that his contract analysis and due diligence platform wouldn't be deemed high risk under the EU's AI regulation. His reasoning: the platform combines human and machine decision-making, and the contract clauses it analyzes don't involve personal data.

Meanwhile, Relativity discovery counsel and legal content director David Horrigan wasn't entirely sure Relativity and other AI-powered e-discovery and compliance software would evade the proposed regulation's grasp. He noted that the commission's press release cites health care institutions' use of AI systems as a process that should be transparent. Notably, the health care industry not only leverages e-discovery and compliance platforms, it also inputs highly personal data into them, Horrigan said.

Still, if legal tech is held to the high-risk standard in the EU, Horrigan said, the industry has faced the explainability question before.

Horrigan noted the transparency language in the European Commission's proposal is similar to the transparency principles outlined in the EU's General Data Protection Regulation (GDPR). While the European Commission is still drafting its AI regulations, legal tech companies have fallen under the scope of the GDPR since mid-2018.

Legal tech companies have also fielded questions about the accuracy and transparency of predictive coding in technology-assisted review (TAR), Horrigan added.

TAR has become increasingly accepted by courts since then-U.S. Magistrate Judge Andrew Peck of the Southern District of New York granted the first judicial approval of TAR in Da Silva Moore v. Publicis Groupe in 2012. In his order, Peck discussed how predictive coding's transparency can shed light on AI-powered software's "black box."

"We've addressed the black box before with technology-assisted review and we will do it again with other forms of artificial intelligence. The black box issue can be overcome," Horrigan said.

However, Hudek disagreed. While the proposed regulation doesn't make him hesitant to develop new AI-powered features for his platform, he said, it does make that development more challenging. Specifically, ensuring that an algorithm is explainable is time-consuming, and AI that is easier to explain comes at a price.

"Oftentimes, the simpler models don't have the accuracy of a complex model," he said.

He also noted that a stringent explainability requirement could undermine privacy.

"They have an obligation to not expose and delete their client data after their deals are finished so if we're retaining that data for reasons for explainability that would be catastrophic," he said.

Hudek said a full explanation of how software came to a conclusion isn't the only, or even the best, way to ensure AI-powered tech is safe. He pointed to testing the predictability of a model's outputs, along with other thorough testing, as alternative safety measures.
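What such output testing might look like is sketched below. This is one plausible approach, assuming scikit-learn and synthetic data, not a description of Kira's actual test suite: the checks gate a model on held-out benchmark accuracy and on deterministic outputs, without ever explaining its internals.

```python
# Behavioral output checks: assert a trained model's predictions meet
# fixed expectations, treating the model itself as a black box.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Gate 1: accuracy on a held-out benchmark must clear an agreed floor
# (0.8 here is an illustrative threshold).
assert model.score(X_test, y_test) >= 0.8, "benchmark accuracy regressed"

# Gate 2: identical inputs must yield identical predictions run to run,
# the "output predictability" idea in miniature.
assert (model.predict(X_test) == model.predict(X_test)).all()

print("output tests passed")
```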

While the proposed regulation is aimed largely at high-risk AI applications, the European Commission also wants to encourage standards for all AI-powered tools. Indeed, even companies offering low-risk AI can volunteer to abide by the requirements, though the commission wrote that it isn't yet clear which aspects of the requirements that would entail. Companies that successfully fulfill those voluntary requirements would earn a quality label signaling to consumers that their AI product is trustworthy, the commission wrote in its white paper.

Horrigan and Hudek agreed that the quality label bestowed on low-risk AI platforms that volunteer could be a worthwhile marketing tactic.

"In principle, voluntary labeling could be enticing for legal tech companies as it could increase trust with clients and be a differentiation," Hudek said.