AI Resists Ethics and Explanation. So Why Is It Still Used?
In his latest book, "Online Courts and The Future of Justice," Richard Susskind discusses the problems plaguing artificial intelligence solutions, yet still advocates for the technology's use in courts and legal services. Legaltech News dives into the discussion of why AI is here to stay, despite its seemingly insolvable limitations.
December 13, 2019 at 11:45 AM
8 minute read
This autumn sees the release of Richard Susskind's new book, "Online Courts and The Future of Justice," from Oxford University Press. The book argues that to address the current access to justice gap, there needs to be transformation of courts through the leveraging of technology. Specifically, Susskind calls for "online courts," which provide two distinct services: "online judging," where human judges receive evidence and arguments, and deliver decisions through online platforms; and the "extended court," where technology helps court users understand their rights and available legal options.
Legaltech News is publishing excerpts from the book that uniquely highlight the pressing and complex issues facing the legal industry as it leverages technology to make legal services more widely available, efficient, and equitable. In order to spark thought and conversation about technology in the courts and the broader legal industry, Legaltech News reporters have solicited insights and reactions to these excerpts from a variety of legal and tech professionals.
Excerpt: Chapter 27: The Computer Judge
Another problem is that acceptance of decisions based on such prediction machines might lead to substantively unfair outcomes, because they might be based on data or algorithms that suffer from bias.
I would need another few chapters to address these issues fully. Sadly, some of the responses in the popular literature have been superficial, even if well-intentioned. For example, it is not at all clear, either technically or philosophically, what is meant when it is proposed, as many people suggest, that we should 'build ethics into AI'. Nor is it obvious what is meant when people demand that software engineers should 'program' their machine learning systems to provide intelligible explanations. This is to misunderstand the difference between the inductive processes that underlie machine learning and the deductive form of argument that we expect when we ask for an explanation.
It always amuses me in this connection when antagonists dismissively go on to say that the problem with these systems is that they are greatly constrained by only being able to operate on past data. I often challenge this and ask about the data upon which human beings rely—are they gifted with data from the future? Past data and experience are all we can have, except that machines have access to bodies of data that can be many orders of magnitude larger than those available or analyzable by humans. Besides, as long as hard cases are still being settled by machines, these bodies of data will be regularly refreshed.
Analysis
The AI problems Susskind highlights are inherent in the technology: they stem from how it fundamentally works and cannot, at present, be fully eradicated from a technical standpoint. But they do not represent a prohibitive roadblock to using AI in legal services.
While legal teams can't change the way AI technology essentially functions, they can account for its limitations and work around them. So long as AI operates with human oversight and is used exclusively for the specific tasks for which it was trained and tested, such problems can be mitigated.
Take, for instance, the notion of building ethics into AI. To be sure, there are certain limited circumstances where AI can be said to have ethics, especially when operating as the technology behind autonomous vehicles.
"For example, imagine the scenario where for some reason a traffic signal fails, and a bus and a car pull out in front of an autonomous vehicle and there is no room to stop," said Jonathan Reed, CEO of AdvoLogix. "The autonomous vehicle will need to determine how to minimize potential loss of life and property damage."
But outside of these limited safety-related circumstances, the malleable nature of a broader ethics system, which constantly evolves alongside changing societal norms and attitudes, isn't something for which AI can easily account.
"To expect that we can write ethical considerations into the coding of AI solutions—that is simply unrealistic," said Roger Bickerstaff, a partner at Bird & Bird who focuses on tech infrastructure and software, and an honorary professor in law at Nottingham University.
Bickerstaff called the ethics of AI in driverless cars an outlier and explained that "in most instances, ethics are relating to the wider political, economic and social considerations of circumstances, and an AI solution is simply not going to have the data available to take those considerations into account."
To be sure, even if an AI system were able to obtain all of that data, there's no guarantee it would come to the same "ethical decisions" as others. In fact, given the nature of how AI works, it's hard to explain exactly how the technology comes to one conclusion or another.
"Machine learning systems work in an entirely different way [than humans]. They don't work from general premises to particular conclusions, they don't work deductively," Susskind told Legaltech News. He explained that to make decisions, AI draws "on huge amounts of data … [and] there are certain techniques it uses, whether it's a form of computational statistics or some form of regression, etc."
What this means is that when using AI, "we're going to have to accept probabilistic type outcomes and sort of faulty logic," Bickerstaff said. "There will be situations where the AI solutions might make a mistake, and it's not necessarily clear why it's made that mistake."
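The point about probabilistic outcomes can be made concrete with a toy sketch. A learned model does not deduce a verdict from rules; it maps case features to a probability through arithmetic over learned weights. The feature names and weight values below are invented purely for illustration and are not drawn from any real system:

```python
import math

# Hypothetical weights a model might have learned from past case data.
# These numbers are illustrative only, not from any real legal system.
WEIGHTS = {"claim_amount_log": 0.8, "prior_rulings_for": 1.2, "bias": -2.0}

def predict_success_probability(claim_amount_log, prior_rulings_for):
    """Return a probability, not a yes/no answer. The model's 'reasoning'
    is just a weighted sum passed through a logistic function -- there is
    no deductive chain of argument to inspect or explain."""
    score = (WEIGHTS["claim_amount_log"] * claim_amount_log
             + WEIGHTS["prior_rulings_for"] * prior_rulings_for
             + WEIGHTS["bias"])
    return 1 / (1 + math.exp(-score))  # squash the score into (0, 1)

# A hypothetical case: the output is a likelihood, never a certainty.
p = predict_success_probability(claim_amount_log=1.5, prior_rulings_for=2)
```

Even when such a model is wrong, there is no intermediate "reason" to point to, only the weights and the data that produced them, which is exactly the opacity Bickerstaff describes.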
Yet this isn't to say AI can't be tamed. If one cannot fully apply ethics and reasoning to how the technology inherently works, they can still apply them to the broader context in which AI is developed and deployed.
Lee Tiedrich, partner at Covington & Burling, noted that companies could put in "good governance processes to manage the [AI] product design, development and implementation in accordance with their ethical principles."
She said this includes observing how an AI system works to know how and when it should—and shouldn't—be used: "I think, in terms of the implementation, that a lot can be learned through testing and monitoring a system before it is ever put into use, and making changes to the system as needed—even as it is being used, defining what the use cases are."
For instance, if an AI tool is constantly making mistakes in reviewing M&A contracts—whether one understands why or not—there can be rules ensuring the tool is never used for M&A work.
In addition to observing AI, governance controls can also include auditing and accounting for bias in AI's algorithms and the data entered into the system.
In fact, there has been some progress in the tech industry to better detect biases. "We're developing techniques across AI to audit the bias of our algorithms, to audit the bias of our data, [and] to audit the bias of the software developers who write the algorithms in the first place," Susskind said.
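Such audits often start with simple statistics over historical outcomes. A minimal sketch of one common check, comparing favorable-outcome rates between two groups (a "disparate impact" ratio); the groups and decisions below are invented for illustration:

```python
# Invented historical decisions: (group label, favorable outcome?)
past_decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def favorable_rate(decisions, group):
    """Fraction of decisions in `group` with a favorable outcome."""
    outcomes = [fav for g, fav in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, group_a, group_b):
    """Ratio of group B's favorable rate to group A's.
    A rough rule of thumb (the 'four-fifths rule' used in U.S.
    employment-discrimination analysis) flags ratios below 0.8."""
    return favorable_rate(decisions, group_b) / favorable_rate(decisions, group_a)

ratio = disparate_impact(past_decisions, "A", "B")
```

In this invented data set, group A succeeds 75 percent of the time and group B only 25 percent, a ratio well below the 0.8 threshold, which is the kind of disparity an audit is designed to surface before a model trained on that data is deployed.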
But auditing is just one part of the equation. Tiedrich noted that to limit bias, it is also necessary to leverage "data management systems to manage the data you are using for training, and be aware of the data."
Indeed, having a human continually select and manage the right data on which an AI system is trained can make a huge difference. "I think that moderated AI can be quite powerful, and one of the things we're seeing now is how the combination of human intelligence and AI can give much better results than either on their own," Bickerstaff said.
Still, no matter how much oversight one has, there remains the critique that bias will be a problem insofar as AI uses past data to make its decisions, including those of a predictive nature. Susskind, however, pushes back on this argument, noting that past data already informs the decisions judges and lawyers currently make.
"Is past data biased? Almost certainly … [but the question is] not are these systems biased, but whether or not the bias is so great that their introduction would be a step back from the stage we're in today," he explained.
And in Susskind's view, the answer is clear. "Yes, AI systems are often opaque. Yes, they often suffer from bias. And yes, we haven't built ethics in them, but … these systems would nonetheless be an improvement over what we have today, which is 54 percent of human beings in our world have no access to courts at all," he said.