The age of artificial intelligence (AI) is here. As Mayer Brown partner Rebecca Eisner puts it, “Our clients, across literally every industry and field of use, including financial institutions, are either already using AI or are planning to build it or buy it very soon.”

But there's just one problem—despite some legislation covering one-offs like autonomous vehicles and even the 2017 creation of a Congressional Artificial Intelligence Caucus, the age of artificial intelligence law is still forthcoming.

“Despite these early reactions to AI, our current laws and regulations really do not provide sufficient principles and frameworks for the wide adoption and use of AI,” Eisner explained, adding, “Given the rapid development and deployment of AI, it's very possible that our courts will be addressing and making law about artificial intelligence long before our legislatures will.”

So for an industry as tightly regulated as financial services, the initial inclination may be to forgo AI altogether until some of those frameworks are in place. But according to the panelists on Mayer Brown's “The Reality of Artificial Intelligence in Financial Services” webinar Aug. 16, that doesn't necessarily need to be the case.

Mayer Brown partner David Beam laid out four questions for financial services counsel to ponder if they do implement AI, lest they be left without answers if a regulator comes knocking.

1. What regulations accommodate AI?

In Beam's telling, current financial industry regulations don't mesh well with AI for one main reason: “Simply, regulations often presuppose a human actor.” As an example, he pointed to Office of the Comptroller of the Currency (OCC) regulations governing credit decisions. Often, if there's a question about a subjective decision made as part of the process, the governing laws will be based on where the decision is made. Inherent in that rule, though, is the assumption that a person with a physical location is making the decision. But what if there's no person at all?

“There you have an ambiguity, because the OCC rules and guidelines don't address what happens in there,” Beam said. “They don't even consider the possibility that it's a machine creating these underwriting standards.”

Tackling decisions made by AI, then, is often a case-by-case determination for counsel, one that turns on whether they want to interpret a regulation literally (meaning don't use AI) or according to the principle behind the law.

2. Do you need to program compliance into the system, or does an overlay on top suffice?

Take a look at fair lending and anti-discrimination laws: if a lender applies seemingly neutral factors in underwriting decisions, but those factors have a disproportionate effect on certain groups, a vendor can be held liable. Naturally, this means that companies implementing AI want to ensure that the machine learning application defining underwriting criteria isn't using factors that correlate with prohibited characteristics.

“The question is, at what point do you need to program compliance into the system?” Beam asked. “And can you just create an overlay algorithm, if you will, that watches what the system is doing to make sure it's not adopting factors that have potential discriminatory impact?”

Again, it's a case-by-case determination, he said, but one that should be assessed at the beginning of the AI implementation rather than after a violation occurs. The easiest way to ensure compliance is to ask the system's developers which approach would be most accurate, and to “get out in front” of the problem.
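For counsel who want to picture what that overlay could look like, below is a minimal sketch. It assumes a pandas DataFrame of application data with numerically encoded attributes; the column names and correlation threshold are hypothetical, and a flag here would prompt fair lending review rather than prove a violation.

```python
# A minimal sketch of an "overlay" check in the sense Beam describes: it leaves
# the underwriting model alone and simply watches the factors the model uses,
# flagging any that track a protected characteristic closely enough to warrant
# fair lending review. Column names and the threshold are hypothetical, and the
# protected attributes are assumed to be numerically encoded.
import pandas as pd

CORRELATION_THRESHOLD = 0.3   # hypothetical cutoff for "worth a closer look"

def flag_proxy_features(applications: pd.DataFrame,
                        model_features: list[str],
                        protected_attributes: list[str]) -> pd.DataFrame:
    """Flag model features whose correlation with a protected attribute
    exceeds the review threshold."""
    findings = []
    for feature in model_features:
        for attribute in protected_attributes:
            corr = applications[feature].corr(applications[attribute])
            if abs(corr) >= CORRELATION_THRESHOLD:
                findings.append({
                    "feature": feature,
                    "protected_attribute": attribute,
                    "correlation": round(corr, 3),
                })
    return pd.DataFrame(findings)

# Example (hypothetical data and column names): flag underwriting factors that
# move with age or a binary group indicator before the model goes into production.
# report = flag_proxy_features(loan_applications,
#                              model_features=["zip_income_index", "utilization"],
#                              protected_attributes=["age", "group_indicator"])
```

In practice, the threshold and the statistical test would be chosen with counsel and model risk teams, which is the “get out in front” work Beam is pointing to.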

3. Who's going to be responsible for violations of law?

Many times, an AI system will involve some sort of licensed software or technology, and the license will often include a warranty that the software doesn't violate laws like fair lending requirements.

However, those warranties are often based on the fact that the software has very discrete, and known, actions associated with it. “With AI of course, the machine itself may be developing standards, practices, protocols, algorithms that we don't even know about yet,” Beam said. “So the question is, how are you going to allocate liability when the machine does something wrong?”

If the machine does, for example, discriminate, liability could fall either on the party that licensed the technology or software, or on the party that used it. In his experience, Beam said, it's usually the party that used it that bears responsibility. But either way, it's important to address the issue up front in a licensing agreement so both sides have an accurate assessment of risk down the road.

4. To what extent can you explain the “what” and the “why”?

A number of regulations require counsel to know why a decision was made, such as fair lending laws requiring explanations for why an adverse action was taken. “I'm going to venture that a regulator will not be satisfied with the answer, 'Because the computer said no.' You're going to have to be a bit more specific,” Beam added.

As a result, any AI system needs to include the capability to look under the hood if necessary, and to do so very dynamically, Beam added. Because a number of these requests could be coming from regulators, engineers can't spend a week or two coming up with an answer.

A lot of people actually think this is the fatal flaw in using advanced machine learning in loan underwriting, Beam explained, but he doesn't agree. Instead, he said, “it is something you want to think about early on, but you usually can, to the degree of specificity required by [the Equal Credit Opportunity Act], help the system or program the system to describe why it did what it did.”
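As a rough illustration of what “describing why it did what it did” could look like for a simple scoring model, the sketch below ranks the factors that most reduced a hypothetical applicant's score relative to a baseline profile and maps the worst offenders to adverse action reasons. The weights, feature names, and reason text are invented for illustration, not drawn from the webinar, and actual adverse action notices under the Equal Credit Opportunity Act go through prescribed disclosures and legal review.

```python
# A rough sketch, not Beam's method: for a simple linear scoring model, rank the
# factors that most reduced an applicant's score relative to a baseline profile
# and report them as candidate adverse action reasons. All weights, feature
# names, and reason text here are hypothetical.
import numpy as np

FEATURES = ["credit_utilization", "delinquencies", "income_to_debt", "account_age"]
WEIGHTS = np.array([-1.2, -0.8, 0.9, 0.4])    # hypothetical model coefficients
BASELINE = np.array([0.30, 0.0, 0.35, 7.0])   # hypothetical approved-applicant profile

REASON_TEXT = {
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Delinquency on accounts",
    "income_to_debt": "Income insufficient for amount of credit requested",
    "account_age": "Length of credit history",
}

def adverse_action_reasons(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Return the factors that most hurt this applicant's score
    relative to the baseline profile."""
    contributions = WEIGHTS * (applicant - BASELINE)   # negative values hurt the score
    ranked = np.argsort(contributions)                 # most negative first
    worst = [i for i in ranked if contributions[i] < 0][:top_n]
    return [REASON_TEXT[FEATURES[i]] for i in worst]

# Example: an applicant with high utilization, a past delinquency, and a short credit history.
print(adverse_action_reasons(np.array([0.85, 2.0, 0.30, 3.0])))
```

This kind of decomposition only works when the model's factor contributions can be read off directly; for more complex machine learning models, the same idea typically runs through a post hoc attribution method, which is part of why Beam says the capability should be designed in early rather than bolted on later.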