Leveraging artificial intelligence can take companies' compliance efforts to new heights, but lawyers warn that frameworks are needed to understand how an AI tool's decision-making aligns with the public's and regulators' concerns.

"The potential for AI technology is quite significant and the ability of the technology to make decisions, make recommendations, predictions or take actions is really revolutionizing a lot of sectors, but at the same time it presents questions about ethics," said Davis Wright Tremaine partner K.C. Halm.

However, when most companies deploy new AI tools, ethics is rarely a mainstay of the preparation conversation.

In Deloitte's "AI Ethics: An Emerging Imperative for the Board and C-Suite" survey of more than 600 executives at organizations that use AI, only 21% of respondents said they had a framework for the ethical use of AI in risk-management and compliance efforts. An additional 26% said they didn't but planned to develop one within the next 12 months.

Halm said companies' AI frameworks vary, but "some define the standard by which you work on bias, transparency and explainability. My view is, it is useful to have some type of AI framework in place to explain those issues."

As some organizations move toward implementing AI ethics frameworks, nearly half of respondents also said they are planning to leverage more AI in their risk management and compliance efforts during the next 12 months, according to the survey.

Deloitte risk and financial advisory principal Maureen Mohlenkamp said the uptick could be inspired by organizations attempting to increase automation, improve corporate culture or spot risky employee behavior. AI, she added, is "creating opportunities to really sift through massive data."

Still, Halm cautioned that a proactive, ongoing review of an AI tool before deployment is useful in any sector, especially where the technology could exclude members of a protected group.

"Health care, finance and housing—each of those sectors are pretty heavily regulated for good reasons, and many of those regulations focus on potential disparate impact that could be made if an AI tool is being used and if the training data is not representative of people being affected. It's a potential for bias," he said.

Phillips Nizer partner and former New York state Department of Financial Services deputy superintendent Patrick Burke added that beyond the risk that an AI-powered decision is discriminatory, regulators may also ask questions if a company is "relying on [the AI tool] in lieu of more conventional compliance tactics."

However, for the time being, AI presents more ethical concerns than legal concerns. "At this point, it's more ethical concerns because the law hasn't caught up yet," Burke explained. Still, those ethical concerns could lead to public outcry and potential pushback from peers.

Companies leveraging AI will have to balance what's good for the bottom line with public perception.

"So much data is in big companies that they can get to the point where they can affect your life without you seeing it coming," Burke said. "I think Americans are becoming increasingly anxious about that, that ties into ethics. Do you do it anyway?"