We enter a new decade in the thrall of technological wizardry and artificial intelligence. The related challenges coming our way, both practical and ethical, deserve our best thinking.

Much of the hype surrounding AI in the legal profession is unvalidated, portraying savings and solutions that may or may not be achieved and proposing uses for which the technology may be ill-suited. Be that as it may, this technology is creeping into our legal system and legal practice, and is often operated without appropriate expertise. As AI development intensifies and the courts begin to create new and perhaps enduring case law about its use and admissibility, the next decade will require increased vigilance on the part of legal professionals.


ABA Resolution 112

Recognizing the challenges ahead, in August 2019 the ABA adopted Resolution 112, which "urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (AI) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI."

This admonition is well conceived. As advanced technologies for legal endeavors become more widely available, our excitement over output may blind us to the considerations of effectiveness and suitability that are core to meeting our ethical obligations of communication, candor, fairness, trustworthiness, and avoidance of discrimination.

While the well-understood information retrieval measures of "precision" and "recall" (the proportion of retrieved data that is on point, and the proportion of existing relevant data that has been retrieved) are the accepted metrics for search efficacy in e-discovery and other records management and retrieval efforts, acceptable measures are not yet established for many other AI-enabled processes. This is a problem, because knowledge of what AI is accomplishing (and how fairly) is core to ethical use, legal and otherwise.
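
By way of illustration, and using purely hypothetical document counts, the two measures capture different things; a review tool can score well on one and poorly on the other:

```python
# Minimal sketch of precision and recall for a document review tool.
# The counts below are hypothetical, purely for illustration.

retrieved_relevant = 80      # responsive documents the tool returned
retrieved_irrelevant = 20    # non-responsive documents it also returned
missed_relevant = 40         # responsive documents it failed to return

precision = retrieved_relevant / (retrieved_relevant + retrieved_irrelevant)
recall = retrieved_relevant / (retrieved_relevant + missed_relevant)

print(f"Precision: {precision:.0%}")  # 80% of what was retrieved is on point
print(f"Recall:    {recall:.0%}")     # 67% of the relevant material was found
```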


Bias and Beyond

Recent studies are revealing more and more biases in heralded algorithmic systems—not surprising when you consider how AI actually works. In one fashion or another, AI systems search for and report or act upon patterns in data. The quality of the results depends on both the quality of the algorithms in the AI system and the data on which it is trained. But algorithms are built by humans, who determine what information to weight heavily in data sets and what to weight lightly or ignore—possibly introducing weakness or bias at inception. Then, the algorithms are trained on data that may itself be skewed or even reflect inequities based on gender, wealth, ethnicity, race, or sexual orientation.
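
A minimal sketch, using invented hiring records solely for illustration, shows how a system that simply learns patterns from a skewed historical record will reproduce that skew in its own scoring:

```python
# Minimal sketch (hypothetical data) of how a pattern-learning system
# absorbs bias that is already present in its training records.

# Imagined historical hiring records: (years_experience, group, was_hired).
# Group "B" candidates were historically hired less often at comparable
# experience levels -- an inequity baked into the record itself.
history = [
    (5, "A", True), (5, "B", False),
    (7, "A", True), (7, "B", False),
    (3, "A", False), (9, "B", True),
]

def learned_hire_rate(group):
    outcomes = [hired for _, g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that replays historical patterns scores group B lower,
# not because of merit but because the record it learned from was skewed.
for group in ("A", "B"):
    print(f"Learned hire rate for group {group}: {learned_hire_rate(group):.0%}")
```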

We have seen enough of these biases come to light to be on high alert. Consider COMPAS, for example, a criminal risk assessment system that has been shown to misrepresent the recidivism risk for certain convicts due to a systemic racial bias identified in the software and the data sets used for training. Or consider facial recognition software, rapidly being adopted by police and government agencies (not to mention in self-driving car software), which has been shown to have high error rates in recognizing or detecting dark-skinned faces. Hiring algorithms have been shown to target ethnic and gender groups in advertising jobs, promote resumes that mirror perceived recruiter preferences, discriminate in surfacing "passive" candidates for recruiter outreach, perpetuate bias by screening promising candidates based on a company's past hiring decisions and promotion history, and predict acceptable compensation offers in ways that perpetuate pay disparity.

The issue of bias is only one among many considerations as AI proliferates. Due process is another. In Wisconsin v. Loomis, the criminal defendant facing sentencing was not permitted to inquire into the workings of the COMPAS AI system on the grounds that they were a "trade secret." Nevertheless, the Wisconsin Supreme Court upheld the prosecution's submission of its ranking in the sentencing memo. Admissibility will be another. Without appropriate standards and methodologies to make accurate and consistent determinations of the accuracy of an AI system, with due regard to the competencies of the operator of the system, the "evidence" it provides will be suspect. Poor and untested performance has also been reported in the use of AI for health care.


Standards for Effectiveness

What does this mean for lawyers? The intent of Resolution 112 is clearly laudable. Now, however, with the help of experts, we must forge a path to viable standards, require methods to assess efficacy in operation, and understand the expertise needed for effective deployment of AI tools and solutions.

Help is on the way. The National Institute of Standards and Technology (NIST), for example, has developed a roadmap for engagement, noting that "AI standards that articulate requirements, specifications, guidelines, or characteristics can help to ensure that AI technologies and systems meet critical objectives for functionality, interoperability, and trustworthiness—and that they perform accurately, reliably, and safely." Just prior to the adoption of Resolution 112, the ABA Section of Science & Technology Law issued a response (albeit not formally approved by the House of Delegates or the Board of Governors of the American Bar Association) to a NIST-issued RFI on artificial intelligence standards. The response spells out the need for AI standards that provide insight into the trustworthiness of AI, calling for transparency of the information by which accuracy can be assessed.

Other organizations with no less heft than the OECD, the G20, the Council of Europe, and the IEEE have met over the past few years to consider the impact of AI, proffering papers, principles, standards, and resolutions regarding AI's appropriate and ethical uses. IEEE, in particular, is focused on means to evaluate the trustworthiness of AI for use in legal systems and has developed comprehensive principles for assessing individual AI-enabled processes: accountability, effectiveness, transparency, and competence. These principles align with those proffered by the other organizations. IEEE has turned this year to moving these principles into practice.


Investment in Expertise

While we await certification standards to help us assess accurately the capability and functioning of different AI-enabled processes (both in theory and in our actual use), we need to recognize this is an area that requires expertise. As much as we may try to learn on our own, this is a complex field requiring specific knowledge that extends beyond legal competence. We may well need to bring data scientists in-house or partner with outside experts who can provide insight into what is actually being achieved by any AI tool we use or consider using in practice.

We need both expertise in the metrics that show how well AI-enabled processes are working and expertise to ensure their competent deployment to achieve useful results. (Think of machine-learning tools used for e-discovery or defensible deletion, for example.) Such expertise will most often come from areas outside the law and should be engaged at deployment, not only in later testimony about the accuracy of the results.
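
One concrete form such metrics can take is a breakdown of error rates by category or group, measured before a tool's output is relied upon. A minimal sketch, again with invented figures, might look like this:

```python
# Minimal sketch (hypothetical data): checking how an AI-enabled review
# process performs overall and for each subgroup before relying on it.

# Each record: (predicted_responsive, actually_responsive, group)
results = [
    (True, True, "A"), (True, False, "A"), (False, False, "A"), (True, True, "A"),
    (True, True, "B"), (False, True, "B"), (False, True, "B"), (True, False, "B"),
]

def error_rate(rows):
    errors = sum(1 for predicted, actual, _ in rows if predicted != actual)
    return errors / len(rows)

print(f"Overall error rate: {error_rate(results):.0%}")

# Break the same measurement out by group to surface disparate performance.
for group in ("A", "B"):
    subset = [row for row in results if row[2] == group]
    print(f"Error rate for group {group}: {error_rate(subset):.0%}")
```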

Ironically, at the same time as we may require such expertise, we are responsible for its oversight. The language of supervisory Rule 5.3 ("Responsibilities Regarding Nonlawyer Assistance") suggests that we need to be sufficiently in the know to ensure that there is no violation of ethical rules as the experts (and AI systems) do their work, another responsibility heightened by the challenges of AI.


Conclusion

There is no doubt that in the next decade AI will continue to introduce complex challenges for us to work our way through. Assessing the efficacy and impact of the tools we use and the consequences they create—intended or not—is our charge for the future. As legal professionals, and as citizens, it is up to us to pay close attention and engage in efforts to help develop the standards that will keep us on the right course.

Julia Brickell is executive managing director and general counsel at H5.