In recent years, some have likened the mainstream rise of generative artificial intelligence (AI) to a modern-day space race. The legal profession, and litigators in particular, has not been immune. Yet even as generative AI becomes more commonplace in the practice of law, courts have been hesitant to regulate its use in litigation. A recent decision from the Surrogate’s Court in Saratoga County, however, suggests that trend may be about to change.

For better or worse, many of the headlines about generative AI in litigation have focused on the negative. The case of Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), for example, drew widespread attention after Judge Kevin Castel of the Southern District of New York imposed a $5,000 sanction on two attorneys appearing before him for filing a motion citing fictitious cases. Specifically, the court found that the attorneys violated Rule 11 of the Federal Rules of Civil Procedure by failing in their gatekeeping roles: they cited multiple fictitious cases generated by ChatGPT without authenticating them, and then attempted to justify the citations rather than coming clean about their provenance.