When an Avianca serving cart allegedly bumped into the knee of Roberto Mata on a flight to New York four summers ago, few could have predicted that the lawsuit that ensued would become the prime example of how generative artificial intelligence tools can hallucinate.

Now, six months after the New York lawyer involved in the Mata v. Avianca case submitted a ChatGPT-written brief with fake case citations to the court, the list of risks that the legal industry is worried about has grown, with considerations around bias and cybersecurity threats inching their way to the top.
