The introduction and integration of generative artificial intelligence (AI) into everyday legal practice, particularly litigation, remains a topic of considerable interest across the legal community. For many, there are more questions than answers: How can litigators use AI to serve clients most efficiently and effectively? What are the ethical implications of using AI? Will it really save time and money? If so, how? Despite all of these questions, one thing is certain: AI is here to stay.

As AI technologies continue to evolve, their potential to transform traditional litigation practice cannot be overstated. Like any technological advancement, however, the use of AI in litigation brings its own set of advantages and challenges.

Advocates of AI will readily recite the reasons why litigators should embrace AI and incorporate it into everyday practice. Perhaps the most obvious advantage is the promise of enhanced efficiency and productivity. AI can process and analyze vast amounts of data far faster than any human practitioner. This capability significantly reduces the time required for tasks like document review and evidence gathering, allowing attorneys to focus on the strategic aspects of a case.

On the other hand, the human lawyer will eventually need to review those documents or data at some point. For the litigator, there is no substitute for reviewing the documents before taking a deposition and knowing the ins and outs of the file. Can AI really replace that? Do clients want to pay lawyers to take twice as long to review documents when a computer can complete the task in seconds? These are all questions the authors continue to evaluate.

Proponents of AI will also stress that AI systems can help minimize human error in document review and case analysis. This, of course, is still up for debate. The information AI generates is only as good as the information provided by the practitioner and how well the AI model is trained. But assuming the inputs are correct, tools such as machine learning can assist in identifying the documents that are actually relevant, or in analyzing results in similar cases, potentially leading to more predictable outcomes.

Predictive analytics can be particularly useful for in-house attorneys. For example, in the context of personal injury suits, if a large corporation frequently subject to litigation tracks the outcomes of its various matters and correctly inputs that data into its AI platform, AI can help identify trends in litigation timelines and outcomes on a global scale.

One of the oft-discussed advantages of using AI, of course, is cost reduction. Proponents argue that, over time, the use of AI in litigation may yield cost savings for both law firms and their clients. Take the example of a deposition summary, a task well suited to a junior lawyer or, today, to AI software. On average, it may take a junior lawyer four to five hours to review a transcript and prepare a deposition summary (depending, of course, on factors such as the length of the transcript and the skill level of the lawyer). AI, by contrast, is likely to churn out such a summary almost instantaneously. That saves many hours in fees and presumably allows junior lawyers to focus their skills on more strategy-oriented tasks.

There are, of course, drawbacks to the use of AI, particularly in the context of the deposition summary example. While the client may be happy, the junior lawyer has lost a valuable training opportunity. Reviewing deposition transcripts not only gives a junior lawyer an essential assignment, but also provides examples of deposition style, format, and content, which no robot can replicate (at least none that we have seen yet).

Some may say that implementing AI gives junior lawyers greater opportunity to be involved in strategic, rather than administrative, tasks. While that is desirable, performing tasks like deposition summaries, legal research, and reviewing and organizing evidence trains junior lawyers in ways AI never could.

With that, one of the biggest fears surrounding the use of AI in litigation is the potential for job displacement. The automation of tasks typically performed by legal professionals, such as in the deposition example above, raises these concerns. While AI can certainly enhance the efficiency of litigation practice, will it eventually require a shift in the roles and skills expected of legal practitioners?

This leads us to another potential challenge: overreliance on AI. There are stories of attorneys relying on AI to prepare briefs without thoroughly reviewing the citations supplied by the robot. Citations generated by AI can be inaccurate, false, or simply nonexistent. To the extent attorneys rely on AI to perform tasks like research and brief writing (or, truly, any task), the work product must be thoroughly reviewed by the supervising attorney.

There are also obvious concerns about data privacy. As litigators well know, we are often entrusted with highly sensitive and proprietary information. This demands stringent security parameters around the use of AI within a law firm. There is also the question of whether clients want their litigators entering sensitive information into AI platforms, and practitioners should be sure to obtain consent before inputting client information into AI. Some AI platforms make clear that they retain inputs and/or outputs to further train the platform; others make clear that inputs are used solely to process a request and are then deleted. Technical diligence is necessary on any AI tool to understand how it works and whether it is appropriate to use.

Finally, litigators should consider the ethical issues that may arise when deciding to implement and use AI. Fortunately, the ABA and local bar associations across the country are issuing general guidance on the use of AI by lawyers. See, e.g., ABA Comm. on Ethics & Pro. Resp., Formal Op. 512 (2024). The New York City Bar Association recently issued general guidance for lawyers wishing to use generative AI tools. On the issue of privacy, the New York guidance stressed that information provided to generative AI may be shared with third parties or used for other purposes, which is why it is essential to obtain client consent before inputting confidential information into any AI system that will share that information with third parties. See N.Y.C. Bar Ass'n Comm. on Pro. Ethics, Formal Op. 2024-5: Ethical Obligations of Lawyers and Law Firms Relating to the Use of Generative Artificial Intelligence in the Practice of Law (Aug. 2024).

As set forth in the guidance from the New York City Bar Association, AI "may be used as a starting point, but must be carefully scrutinized…. A lawyer must ensure that the input is correct and then critically review, validate, and correct the output of Generative AI…" Considering all of the pros and cons, together with the changing landscape of AI, this is great advice: use AI as a place to get started, but remember, you are the litigator, the one who must cite accurate case law to a judge, take the deposition, and communicate with the client. AI is not capable of those tasks. At least not yet.

Ira M. Schulman is a partner and co-leader of Sheppard Mullin’s construction practice, based in the New York office. Sophia L. Cahill is an associate in the business trial practice and member of the construction team, and is based in New York.