At its 2019 annual meeting, the ABA adopted Resolution No. 112, urging courts and lawyers to address the emerging ethical and legal issues related to the use of artificial intelligence (AI) in the practice of law, including: bias, explainability, and transparency of automated decisions made by AI; ethical and beneficial uses of AI; and controls and oversight of AI and the vendors that provide it.

The ABA’s AI resolution reflects growing anxiety over how AI will shape the legal profession, with an emphasis on the perceived risks of the technology. The concerns highlighted in the resolution mirror those raised by the many institutions studying AI, such as the Algorithmic Justice League and the AI Now Institute, which have focused their attention on the ethical problems these new technologies raise. But while the risks of integrating AI into the practice of law are important to consider, the ABA’s resolution raises another equally important, though less frequently discussed, question: Could it be unethical not to use AI?
