AI-Enabled Processes: And You Thought E-Discovery Was a Headache!
As AI development intensifies and the courts begin to create new and perhaps enduring case law about its use and admissibility, the next decade will require increased vigilance on the part of legal professionals.
January 31, 2020 at 02:20 PM
7 minute read
We enter a new decade in the thrall of technological wizardry and artificial intelligence. The related challenges coming our way, both practical and ethical, deserve our best thinking.
Much of the hype surrounding AI in the legal profession is unvalidated, portraying savings and solutions that may or may not be achieved and proposing uses for which the technology may be ill-suited. Be that as it may, this technology is creeping into our legal system and legal practice, and it is often operated without appropriate expertise. As AI development intensifies and the courts begin to create new and perhaps enduring case law about its use and admissibility, the next decade will require increased vigilance on the part of legal professionals.
ABA Resolution 112
Recognizing the challenges ahead, in August 2019 the ABA adopted Resolution 112, which "urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (AI) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI."
This admonition is well conceived. As advanced technologies for legal endeavors become more widely available, our excitement over output may blind us to the considerations of effectiveness and suitability that are core to meeting our ethical obligations of communication, candor, fairness, trustworthiness, and avoidance of discrimination.
While the well-understood information retrieval metrics of "precision" and "recall" (the proportion of retrieved data that is on point, and the proportion of existing relevant data that has been retrieved) are the accepted go-to metrics for search efficacy in e-discovery and other records management and retrieval efforts, acceptable measures are not yet established for many other AI-enabled processes. This is a problem, because knowledge of what AI is accomplishing (and how fairly) is core to ethical use, legal and otherwise.
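To make those two metrics concrete, here is a minimal sketch in Python of how precision and recall would be computed for a single document review run. The document identifiers and relevance labels are hypothetical, invented purely for illustration and not drawn from any actual matter.

# Hypothetical review set: which documents are actually relevant,
# and which documents the search tool returned.
relevant = {"doc1", "doc2", "doc3", "doc5"}
retrieved = {"doc1", "doc2", "doc4", "doc5"}

# Documents that were both retrieved and actually relevant.
true_positives = relevant & retrieved

# Precision: share of retrieved documents that are on point.
precision = len(true_positives) / len(retrieved)
# Recall: share of relevant documents that were actually retrieved.
recall = len(true_positives) / len(relevant)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.75, 0.75

A tool can score well on one of these measures while failing badly on the other, which is why both are typically reported when search efficacy is assessed in e-discovery.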
Bias and Beyond
Recent studies are revealing more and more biases in heralded algorithmic systems—not surprising when you consider how AI actually works. In one fashion or another, AI systems search for and report or act upon patterns in data. The quality of the results depends on both the quality of the algorithms in the AI system and the data on which it is trained. But algorithms are built by humans, who determine what information to weight heavily in data sets and what to weight lightly or ignore—possibly introducing weakness or bias at inception. Then, the algorithms are trained on data that may itself be skewed or even reflect inequities based on gender, wealth, ethnicity, race, or sexual orientation.
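The mechanism is easy to demonstrate. Below is a minimal, purely hypothetical sketch in Python: a toy "model" that does nothing more than learn historical hire rates per group from invented data and reuse them as its recommendations, carrying the past disparity forward unchanged. None of the data or group labels come from the article; they are placeholders for illustration.

from collections import defaultdict

# Invented historical hiring decisions (group label, hired?), reflecting a past disparity.
history = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 30 + [("B", False)] * 70

# "Training": tally hires and totals for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

hire_rate = {g: hires / total for g, (hires, total) in counts.items()}

# "Prediction": recommend candidates at the historical rate,
# so the model simply reproduces the skew it was trained on.
print(hire_rate)  # {'A': 0.8, 'B': 0.3}

Real systems are far more elaborate, but the underlying point is the same: whatever regularities sit in the training data, fair or not, are what the system learns to reproduce.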
We have seen enough of these biases come to light to be on high alert. Consider COMPAS, for example, a criminal risk assessment system that has been shown to misrepresent the recidivism risk for certain convicts due to a systemic racial bias identified in the software and the data sets used for training. Or facial recognition software, rapidly being adopted by police and government agencies (not to mention self-driving car software), which has been exposed as having high error rates in recognizing or detecting dark-skinned faces. Hiring algorithms have been shown to target ethnic and gender groups in advertising jobs, promote resumes that mirror perceived recruiter preferences, discriminate in surfacing "passive" candidates for recruiter outreach, perpetuate bias by screening promising candidates based on a company's past hiring decisions and promotion history, and predict acceptable compensation offers in ways that perpetuate pay disparity.
The issue of bias is only one among many considerations as AI proliferates. Due process is another. In Wisconsin v. Loomis, the criminal defendant facing sentencing was not permitted to inquire into the workings of the COMPAS AI system on the grounds that it was a "trade secret"; nevertheless, the Wisconsin Supreme Court upheld the prosecution's submission of its ranking in the sentencing memo. Admissibility will be another. Without appropriate standards and methodologies to make accurate and consistent determinations of the accuracy of an AI system, with due regard to the competencies of the operator of the system, the "evidence" it provides will be suspect. Poor and untested performance has also been reported in the use of AI in health care.
Standards for Effectiveness
What does this mean for lawyers? The intent of Resolution 112 is clearly laudable. Now, however, with the help of experts, we must forge a path to viable standards, require methods to assess efficacy in operation, and understand the expertise needed for effective deployment of AI tools and solutions.
Help is on the way. The National Institute of Standards and Technology (NIST), for example, has developed a roadmap for engagement, noting that "AI standards that articulate requirements, specifications, guidelines, or characteristics can help to ensure that AI technologies and systems meet critical objectives for functionality, interoperability, and trustworthiness—and that they perform accurately, reliably, and safely." Just prior to the adoption of Resolution 112, the ABA Section of Science & Technology Law issued a response (albeit not formally approved by the House of Delegates or the Board of Governors of the American Bar Association) to a NIST-issued RFI on artificial intelligence standards; it spells out the need for AI standards that provide insight into the trustworthiness of AI, calling for transparency of information by which accuracy can be assessed.
Other organizations with no less heft, among them the OECD, the G20, the Council of Europe, and the IEEE, have met over the past few years to consider the impact of AI, proffering papers, principles, standards, and resolutions regarding AI's appropriate and ethical uses. IEEE, in particular, is focused on means to evaluate the trustworthiness of AI for use in legal systems and has developed comprehensive principles for assessing the trustworthiness of individual AI-enabled processes: accountability, effectiveness, transparency, and competence. These principles align with those proffered by the other organizations. IEEE has turned this year to moving these principles into practice.
Investment in Expertise
While we await certification standards to help us accurately assess the capability and functioning of different AI-enabled processes (both in theory and in our actual use), we need to recognize that this is an area that requires expertise. As much as we may try to learn on our own, this is a complex field requiring specific knowledge that extends beyond legal competence. We may well need to bring data scientists in-house or partner with outside experts who can provide insight into what is actually being achieved by any AI tool we use or consider using in practice.
We need both expertise in the metrics that show how well AI-enabled processes are working and expertise to ensure their competent deployment to achieve useful results. (Think of machine-learning tools used for e-discovery or defensible deletion, for example.) Such expertise will most often come from areas outside the law and should be engaged at deployment, not only in later testimony about the accuracy of the results.
Ironically, even as we may require such expertise, we are responsible for its oversight. The language in supervisory Rule 5.3 ("Responsibilities Regarding Nonlawyer Assistance") suggests that we need to be sufficiently in the know to ensure that there is no violation of ethical rules as the experts (and AI systems) do their work, another responsibility heightened by the challenges of AI.
Conclusion
There is no doubt that in the next decade AI will continue to introduce complex challenges for us to work our way through. Assessing the efficacy and impact of the tools we use and the consequences they create—intended or not—is our charge for the future. As legal professionals, and as citizens, it is up to us to pay close attention and engage in efforts to help develop the standards that will keep us on the right course.
Julia Brickell is executive managing director and general counsel at H5.