By now, litigators appreciate that a degree of technological expertise is needed to practice law effectively. Everyone has heard about the unfortunate attorney in Texas who appeared at a Zoom hearing as a worried kitten. But in the past year, attorneys have become more attuned to both the potential and the risks of artificial intelligence (AI). Last June, lawyers in New York made headlines after relying on a chatbot for legal research and were sanctioned for unknowingly submitting fictitious caselaw. One journalist even found himself in a love triangle with a chatbot bent on ending his marriage. Despite these cautionary tales, the use of AI in the legal profession is on the rise as trusted legal research services like LexisNexis and Westlaw roll out AI-assisted research functions and major tech companies integrate AI into their products.

Faced with a seismic shift in available technology, litigators appearing in Pennsylvania's federal courts must be mindful of when and how they turn to AI so as not to run afoul of court orders or policies, procedural rules, or ethical obligations. Judges who opt to regulate the use of AI in cases before them should be equally careful not to craft orders that unintentionally chill the use of new technologies, especially when existing procedural and ethical rules may already suffice.