The Dark Side of AI in Law
Panelists at the College of Law Practice Management's Futures Conference discussed the potential ethical concerns posed by AI.
October 31, 2017 at 02:16 PM
9 minute read
Pictured, from left, are John Simek, VP of Sensei Enterprises; Ed Walters, CEO of Fastcase; Sharon Nelson, president of Sensei Enterprises; and moderator, John Mitchell. Photo by Gabrielle Hernandez/ALM.
Before artificial intelligence was the legal technology buzzword of the season, it was the stuff of science-fiction nightmares. Stories about artificial intelligence “gone rogue” proliferate in popular culture, from “Terminator” to “I, Robot” to “2001: A Space Odyssey.”
Although these stories are all fictional and most have no basis in reality, significant ethical considerations around AI remain, and perhaps deserve greater space in the conversation around the use of AI in the legal industry. Panelists John Simek and Sharon Nelson, respectively the vice president and president of Sensei Enterprises Inc., and Ed Walters, CEO of legal research company Fastcase, closed out the College of Law Practice Management's Futures Conference, "Running With the Machines: Artificial Intelligence in the Practice of Law," with a discussion of some of these concerns.
Walters drew on the ethical dilemmas raised by self-driving cars to illustrate what role law could potentially play in regulating AI. Self-driving cars could theoretically at some point be forced to evaluate whose safety to prioritize in a collision—do you crash into a pedestrian, or into a building that could have people inside?—which raises questions about how law could and should regulate those decisions.
What's more, AI derives its decision-making capacity from past data, meaning that pre-existing biases in law or data sets could simply be reproduced in new ways. Echoing the "garbage in, garbage out" concern that technologists often raise about "training" data sets, Walters said, "Data validity and authentication is extremely, extremely important."
That said, cleaning the data that AI draws from isn't always clear-cut. “Algorithmic bias is not a binary thing. You'll always be trading off completeness versus bias,” Walters said.
Simek addressed how these concerns may play out in cybersecurity, especially in parsing through the vast amount of data needed to successfully diagnose a particular cybersecurity vulnerability. “What AI is helping with in that arena is to sort of collaborate that information and then bubble it up to the surface so that human beings can deal with it,” he said. Allowing AI to automatically respond to a cyberevent or create its own fix could create a whole other set of issues, Simek cautioned.
Establishing policy or law around AI, especially when it has tangible consequences around human safety, is more easily said than done. “There's very little law that governs what that machine has to do. It's very hard to choose the right legal regime for these algorithms,” Walters said. “The laws that regulate us don't map very well onto these systems,” he later added.
As with many issues in technology and law, the technology has outstripped regulation, something Walters thinks is likely cause for concern. “We're nowhere near ready to deal with these issues, but the cars are here.”
Nelson raised concerns about the ways in which AI systems potentially make their decisions outside the purview of human oversight. Citing a recent incident in which Facebook pulled the plug on an AI project that had created its own language unintelligible to humans, Nelson cautioned that oversight begins with understanding the internal thinking of machines. "Transparency is the one thing we must have. If they are hiding from us what they're doing, I begin to worry."
The panel also raised recent examples from Saudi Arabia and Estonia of granting machines some version of citizenship or personhood as a means of regulation. While it can sound absurd to give machines rights designed for humans, Walters said it may not be too far outside the purview of existing law. "There is a long strain of American law about artificial persons. We grant corporations all kinds of rights all the time," Walters explained. "There are artificial peoples we grant rights in order to do things more efficiently. Machines can do things a certain way if you grant them certain rights."
Nelson pointed to the Trump administration's current tendency toward deregulation to offer a reality check around regulatory possibilities for AI. “Regulation itself is a possible solution, but we live in an environment today where deregulation is the rule,” she said.
Walters attempted to find middle ground on AI between the alarmism of science fiction and the excitement from technologists by urging safe practice. “We're playing with fire. When you're playing with fire, when you're working with fire, you have to be humble. You have to understand it; you have to be careful. You have to understand the consequences as well,” he said.