By now, most of us have heard of Mata v. Avianca, 22-cv-1461 (PKC), out of the U.S. District Court for the Southern District of New York. Unfortunately for the attorneys involved, Mata is less known for the case facts (a fairly unremarkable personal injury case sprinkled with a little international law on top—the plaintiff alleged his knee was hurt by a metal serving cart on a flight from El Salvador to New York City), and more known as the case involving the attorneys (whose names I will not re-state here, as they have already been through enough) who were caught using ChatGPT to write their legal brief.

Sadly, most articles on the case limited their dive into the details to those recited above, focusing more on the fact that the attorneys got caught using generative artificial intelligence in this manner. However, for this article, I (a human, just wanted to make that clear) wanted to explore the case further, as it is a cautionary tale involving snowballing mistruths and the siren call of technological shortcuts—a call which, admittedly, we all hear during those lonely nights furiously banging away at our keyboards trying to meet a deadline.
