AI-Enabled Processes: And You Thought E-Discovery Was a Headache!
As AI development intensifies and the courts begin to create new and perhaps enduring case law about its use and admissibility, the next decade will require increased vigilance on the part of legal professionals.
January 31, 2020 at 02:20 PM
We enter a new decade in the thrall of technological wizardry and artificial intelligence. The related challenges coming our way, both practical and ethical, deserve our best thinking.
Much of the hype surrounding AI in the legal profession is unvalidated, portraying savings and solutions that may or may not be achieved and proposing uses for which the technology may not be valid. Be that as it may, this technology is creeping into our legal system and legal practice, and it is often operated without appropriate expertise. As AI development intensifies and the courts begin to create new and perhaps enduring case law about its use and admissibility, the next decade will require increased vigilance on the part of legal professionals.
ABA Resolution 112
Recognizing the challenges ahead, in August 2019 the ABA adopted Resolution 112, which "urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (AI) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI."
This admonition is well conceived. As advanced technologies for legal endeavors become more widely available, our excitement over output may blind us to the considerations of effectiveness and suitability that are core to meeting our ethical obligations of communication, candor, fairness, trustworthiness, and avoidance of discrimination.
In e-discovery and other records management and retrieval efforts, the well-understood information retrieval metrics of "precision" (the proportion of retrieved data that is on point) and "recall" (the proportion of existing relevant data that has been retrieved) are the accepted go-to measures of search efficacy; for many other AI-enabled processes, no comparably accepted measures are yet established. This is a problem, because knowing what an AI system is accomplishing (and how fairly) is core to ethical use, legal and otherwise.
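For readers less familiar with these two metrics, here is a minimal sketch in Python (using hypothetical document IDs) of how precision and recall are computed from a set of retrieved documents and the set of documents that are actually relevant:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for a retrieval effort.

    Precision: the share of retrieved documents that are relevant.
    Recall: the share of relevant documents that were retrieved.
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents the search actually found
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the tool returned four documents, three of which are on
# point, out of six documents that are actually relevant.
p, r = precision_recall(
    retrieved=["doc1", "doc2", "doc3", "doc7"],
    relevant=["doc1", "doc2", "doc3", "doc4", "doc5", "doc6"],
)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.50
```

A tool can score well on one metric and poorly on the other, which is why both are reported together when assessing search efficacy.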
Bias and Beyond
Recent studies are revealing more and more biases in heralded algorithmic systems—not surprising when you consider how AI actually works. In one fashion or another, AI systems search for and report or act upon patterns in data. The quality of the results depends on both the quality of the algorithms in the AI system and the data on which it is trained. But algorithms are built by humans, who determine what information to weight heavily in data sets and what to weight lightly or ignore—possibly introducing weakness or bias at inception. Then, the algorithms are trained on data that may itself be skewed or even reflect inequities based on gender, wealth, ethnicity, race, or sexual orientation.
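To see how skewed training data carries through to a model's behavior, consider the following deliberately simplified, hypothetical sketch in Python: a toy "model" that learns nothing but historical approval rates per group will faithfully reproduce whatever disparity those historical decisions contain.

```python
from collections import defaultdict

# Hypothetical historical decisions used as training data. The labels already
# reflect a disparity: group A was approved far more often than group B.
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: approved 3 of 4 times
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: approved 1 of 4 times
]

# A toy "model": learn the approval rate observed for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in training_data:
    counts[group][0] += approved
    counts[group][1] += 1

def predicted_approval_rate(group):
    approvals, total = counts[group]
    return approvals / total

for group in ("A", "B"):
    print(group, predicted_approval_rate(group))
# Prints A 0.75 and B 0.25: the "model" reproduces the historical disparity
# even though no one programmed the bias explicitly.
```

Real systems are far more sophisticated, but the underlying dynamic is the same: a model optimized to match past decisions will inherit the patterns, good and bad, embedded in those decisions.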
We have seen enough of these biases come to light to be on high alert. Consider COMPAS, for example, a criminal risk assessment system that has been shown to misrepresent the recidivism risk of certain convicts due to systemic racial bias identified in the software and the data sets used for training. Or facial recognition software, rapidly being adopted by police and government agencies (not to mention self-driving car software), which has been shown to have high error rates in recognizing or detecting dark-skinned faces. Hiring algorithms have been shown to target ethnic and gender groups in advertising jobs, promote resumes that mirror perceived recruiter preferences, discriminate in surfacing "passive" candidates for recruiter outreach, perpetuate bias by screening promising candidates based on a company's past hiring decisions and promotion history, and predict acceptable compensation offers in ways that perpetuate pay disparity.
The issue of bias is only one among many considerations as AI proliferates. Due process is another. In Wisconsin v. Loomis, the criminal defendant facing sentencing was not permitted to inquire into the workings of the COMPAS AI system on the grounds that it was a "trade secret"; the Wisconsin Supreme Court nevertheless upheld consideration of the COMPAS ranking submitted at sentencing. Admissibility will be yet another. Without appropriate standards and methodologies for making accurate and consistent determinations of an AI system's accuracy, with due regard to the competence of the system's operator, the "evidence" it provides will be suspect. Poor and untested performance has been reported in uses of AI for health care as well.
Standards for Effectiveness
What does this mean for lawyers? The intent of Resolution 112 is clearly laudable. Now, however, with the help of experts, we must forge a path to viable standards, require methods to assess efficacy in operation, and understand the expertise needed for effective deployment of AI tools and solutions.
Help is on the way. The National Institute of Standards and Technology (NIST), for example, has developed a roadmap for engagement, noting that "AI standards that articulate requirements, specifications, guidelines, or characteristics can help to ensure that AI technologies and systems meet critical objectives for functionality, interoperability, and trustworthiness—and that they perform accurately, reliably, and safely." Just prior to the adoption of Resolution 112, the ABA Section of Science & Technology Law issued a response (albeit one not formally approved by the ABA's House of Delegates or Board of Governors) to a NIST-issued RFI on artificial intelligence standards; the response spells out the need for AI standards that provide insight into the trustworthiness of AI and calls for transparency of the information by which accuracy can be assessed.
Other organizations with no less heft, among them the OECD, the G20, the Council of Europe, and the IEEE, have met over the past few years to consider the impact of AI, proffering papers, principles, standards, and resolutions regarding its appropriate and ethical uses. The IEEE, in particular, is focused on ways to evaluate the trustworthiness of AI for use in legal systems and has developed comprehensive principles for assessing the trustworthiness of individual AI-enabled processes: accountability, effectiveness, transparency, and competence. These principles align with those proffered by the other organizations, and the IEEE has turned this year to moving them into practice.
Investment in Expertise
While we await certification standards to help us accurately assess the capability and functioning of different AI-enabled processes (both in theory and in our actual use), we need to recognize that this is an area requiring expertise. As much as we may try to learn on our own, this is a complex field demanding specific knowledge that extends beyond legal competence. We may well need to bring data scientists in-house or partner with outside experts who can provide insight into what is actually being achieved by any AI tool we use or consider using in practice.
We need both expertise in the metrics that show how well AI-enabled processes are working and expertise to ensure their competent deployment to achieve useful results. (Think of machine-learning tools used for e-discovery or defensible deletion, for example.) Such expertise will most often come from areas outside the law and should be engaged at deployment, not only in later testimony about the accuracy of the results.
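As one concrete illustration of such measurement, here is a minimal, hypothetical sketch of a validation step an expert might run on a machine-learning review tool: estimating the tool's recall from a random sample of human-reviewed documents, with a simple normal-approximation confidence interval assumed for the sake of brevity.

```python
import math

def estimate_recall(sample, z=1.96):
    """Estimate a review tool's recall from a random validation sample.

    `sample` is a list of (human_says_relevant, tool_retrieved) pairs for
    documents drawn at random and reviewed by humans. Recall is the share of
    human-relevant documents that the tool also retrieved; the interval is a
    simple normal approximation.
    """
    relevant = [tool for human, tool in sample if human]
    if not relevant:
        raise ValueError("no relevant documents in the sample")
    n = len(relevant)
    recall = sum(relevant) / n
    margin = z * math.sqrt(recall * (1 - recall) / n)
    return recall, max(0.0, recall - margin), min(1.0, recall + margin)

# Hypothetical sample: humans found 200 relevant documents, 150 of which the
# tool also retrieved; the remaining 800 sampled documents were not relevant.
sample = [(True, True)] * 150 + [(True, False)] * 50 + [(False, False)] * 800
recall, low, high = estimate_recall(sample)
print(f"estimated recall {recall:.2f} (roughly {low:.2f} to {high:.2f})")
```

Whether that level of recall is defensible in a given matter is a judgment call, but it is one that can be made only if someone with the right expertise has actually measured it.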
Ironically, at the same time that we may require such expertise, we are responsible for its oversight. The language of supervisory Rule 5.3 ("Responsibilities Regarding Nonlawyer Assistance") suggests that we need to be sufficiently in the know to ensure that no ethical rules are violated as the experts (and AI systems) do their work, another responsibility heightened by the challenges of AI.
Conclusion
There is no doubt that in the next decade AI will continue to introduce complex challenges for us to work our way through. Assessing the efficacy and impact of the tools we use and the consequences they create—intended or not—is our charge for the future. As legal professionals, and as citizens, it is up to us to pay close attention and engage in efforts to help develop the standards that will keep us on the right course.
Julia Brickell is executive managing director and general counsel at H5.