Artificial Intelligence

As is often the case with disruptive technologies, the use of generative artificial intelligence (AI) has outpaced our ability to regulate it. Predictably, this has led to some cautionary examples of ill-considered use of AI.

For instance, a Colorado attorney's use of generative AI to prepare a motion to set aside summary judgment led to an embarrassing—and potentially career-threatening—result. The attorney did not cite-check the cases ChatGPT provided, and it turned out that many of them either did not exist or did not stand for the propositions for which they were cited. Ultimately, the judge denied the motion and threatened to file a complaint against the attorney for making material misrepresentations of law.

This example is not unique; similar errors have surfaced in other AI-generated legal motions filed without appropriate attorney review. Sensing a potential tidal wave of AI-generated motions and briefs containing egregious errors, some judges have taken proactive measures to stem the tide. Brantley Starr, a federal judge in the Northern District of Texas, now requires all attorneys appearing before him to file on the docket a certificate attesting either that no portion of any filing will be drafted by generative AI or that any language drafted by generative AI will be checked for accuracy—using print reporters or traditional legal databases—by a human being.