4 Questions Financial Services Counsel Should Consider Before Implementing AI
According to the panelists on a Mayer Brown webinar, financial services firms don't need to fear implementing artificial intelligence, as long as they're smart about the risks.
August 17, 2018 at 12:36 PM
The age of artificial intelligence (AI) is here. As Mayer Brown partner Rebecca Eisner puts it, “Our clients, across literally every industry and field of use, including financial institutions, are either already using AI or are planning to build it or buy it very soon.”
But there's just one problem: despite some legislation covering one-offs like autonomous vehicles, and even the 2017 creation of a Congressional Artificial Intelligence Caucus, a body of artificial intelligence law has yet to arrive.
“Despite these early reactions to AI, our current laws and regulations really do not provide sufficient principles and frameworks for the wide adoption and use of AI,” Eisner explained, adding, “Given the rapid development and deployment of AI, it's very possible that our courts will be addressing and making law about artificial intelligence long before our legislatures will.”
So for an industry as tightly regulated as financial services, the initial inclination may be to forgo AI altogether until some of those frameworks are in place. But according to the panelists on Mayer Brown's “The Reality of Artificial Intelligence in Financial Services” webinar Aug. 16, that doesn't need to be the case.
Mayer Brown partner David Beam laid out four questions for financial services counsel to ponder if they implement AI, lest they be left without answers when a regulator comes knocking.
1. What regulations accommodate AI?
In Beam's telling, current financial industry regulations don't mesh well with AI for one main reason: “Simply, regulations often presuppose a human actor.” As an example, he pointed to Office of the Comptroller of the Currency (OCC) regulations governing credit decisions. Often, if there's a question about a subjective decision made as part of the process, the governing law is determined by where the decision is made. Inherent in that rule, though, is the assumption that a person with a physical location is making the decision. But what if there's no person at all?
“There you have an ambiguity, because the OCC rules and guidelines don't address what happens in there,” Beam said. “They don't even consider the possibility that it's a machine creating these underwriting standards.”
Tackling decisions made by AI, then, is a case-by-case determination for counsel, often turning on whether they interpret a regulation literally (meaning don't use AI) or according to the principle behind the law.
2. Do you need to program compliance into the system, or does an overlay on top suffice?
Take fair lending and anti-discrimination laws: if a lender applies seemingly neutral factors in underwriting decisions, but those factors have a disproportionate effect on certain groups, the lender can be held liable. Naturally, this means that companies implementing AI want to ensure that the machine learning application defining underwriting criteria isn't using factors that correlate with prohibited characteristics.
“The question is, at what point do you need to program compliance into the system?” Beam asked. “And can you just create an overlay algorithm, if you will, that watches what the system is doing to make sure it's not adopting factors that have potential discriminatory impact?”
Again, it's a case-by-case determination, he said, but one that should be made at the beginning of the AI implementation rather than after a violation occurs. The easiest way to ensure compliance is to ask the system's developers which approach would be most accurate, and to “get out in front” of the problem.
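As a rough illustration of the overlay approach, a monitoring layer can sit outside the underwriting model entirely and screen its decisions for disparate impact after the fact. The Python sketch below is hypothetical: the group labels and decision format are invented, and the four-fifths ratio it applies is a common regulatory screening heuristic, not a legal test of discrimination.

```python
# Minimal, hypothetical sketch of a compliance "overlay": rather than
# building fairness constraints into the underwriting model itself, a
# separate layer watches the model's approval decisions and flags
# groups whose approval rate falls well below the best-treated group's.
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    highest group's rate (the "four-fifths" screening heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example: audit a batch of model decisions after the fact.
batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(disparate_impact_flags(batch))  # e.g. {'B': 0.5} -> investigate
```

An overlay like this only detects problems; whether detection suffices, or whether compliance must be programmed into the model itself, is exactly the question Beam says counsel should settle up front.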
3. Who's going to be responsible for violations of law?
An AI system will often involve some sort of licensed software or technology, and the license frequently includes a warranty that the software doesn't violate laws like fair lending laws.
However, those warranties are often based on the fact that the software has very discrete, and known, actions associated with it. “With AI of course, the machine itself may be developing standards, practices, protocols, algorithms that we don't even know about yet,” Beam said. “So the question is, how are you going to allocate liability when the machine does something wrong?”
If the machine does, for example, discriminate, liability could fall either on the party that licensed the technology or software, or on the party that used it. In his experience, it's usually the party that used the technology that bears responsibility, Beam said. But either way, it's important to address liability up front in the licensing agreement so both sides can have an accurate assessment of risk down the road.
4. To what extent can you explain the “what” and the “why”?
A number of regulations require counsel to know why a decision was made, such as fair lending laws requiring explanations for why an adverse action was taken. Beam added, “I'm going to venture that a regulator will not be satisfied with the answer, 'Because the computer said no.' You're going to have to be a bit more specific.”
As a result, any AI system needs to include the capability to look under the hood if necessary, and to do so dynamically, Beam added. Especially when requests are coming from regulators, engineers can't spend a week or two coming up with an answer.
A lot of people actually think this is the fatal flaw in using advanced machine learning in loan underwriting, Beam explained, but he doesn't agree. Instead, he said, “it is something you want to think about early on, but you usually can, to the degree of specificity required by [the Equal Credit Opportunity Act], help the system or program the system to describe why it did what it did.”
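By way of illustration, the sketch below shows one simple form that “describing why it did what it did” can take: for a linear scoring model, compute each feature's contribution to the applicant's score relative to a baseline and surface the most negative contributors as reason codes. Everything here is invented for illustration (the weights, feature names, and baseline values), and real adverse action notices under the Equal Credit Opportunity Act require counsel-reviewed language.

```python
# Hypothetical sketch of adverse-action reason codes for a simple
# linear credit-scoring model. For models where per-feature
# contributions aren't directly readable, attribution tooling would be
# needed, but the reporting idea is the same.
WEIGHTS = {"credit_utilization": -2.0, "delinquencies": -1.5, "income": 0.05}
BASELINE = {"credit_utilization": 0.3, "delinquencies": 0.0, "income": 60.0}

def adverse_action_reasons(applicant, top_n=2):
    """Return the features that drag the score down most, relative to
    a baseline applicant, as plain-language reason codes."""
    contributions = {
        name: WEIGHTS[name] * (applicant[name] - BASELINE[name])
        for name in WEIGHTS
    }
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [f"{name} (impact {value:+.2f})" for name, value in worst if value < 0]

applicant = {"credit_utilization": 0.9, "delinquencies": 2.0, "income": 45.0}
print(adverse_action_reasons(applicant))
# e.g. ['delinquencies (impact -3.00)', 'credit_utilization (impact -1.20)']
```

The point of a toy like this is speed: if contribution data is captured when the decision is made, an explanation can be produced on demand rather than reconstructed by engineers weeks later.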