The first part of this two-part article on TAR covered why you should use it, what it is and some basics of how it works. This second part will cover some of the TAR case law, how to use TAR for more than simply determining whether documents are relevant and some best practice considerations for using TAR.

The judiciary first approved the use of TAR in 2012, in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012). Many judicial decisions regarding TAR have followed, and several courts have made their views on its capabilities clear: "Predictive coding or TAR has emerged as a far more accurate means of producing responsive ESI in discovery than manual human review or keyword searches." Youngevity International Corp. v. Smith, No. 16-cv-00704-BTM, 2019 U.S. Dist. LEXIS 60907, at *38 (S.D. Cal. Apr. 9, 2019) (citation omitted). Indeed, it "is now black letter law that where the producing party wants to utilize TAR for document review, courts will permit it." Entrata, Inc. v. Yardi Sys., No. 2:15-cv-00102, 2018 U.S. Dist. LEXIS 185744, at *20 (D. Utah Oct. 29, 2018) (citation omitted).

Most judicial decisions on this topic, beyond generally accepting the use of TAR, have focused on how TAR is implemented. Cases have addressed, among other topics: whether a party can be required to use TAR (generally, no); whether search-term culling can be applied before TAR (it depends, but allowing it is not considered a best practice); how transparent the parties must be in disclosing their methodology (it depends, but some transparency is generally a good idea); and what recall threshold is sufficient (75% has been accepted).
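Recall, the metric behind that 75% figure, is the fraction of all truly relevant documents in the collection that the review actually found. A minimal illustration of the arithmetic, using invented counts:

```python
# Recall measures the share of truly relevant documents that a review
# actually found. All counts below are hypothetical.

relevant_found = 9_000    # relevant documents the TAR process identified
relevant_total = 11_500   # estimated relevant documents in the collection,
                          # typically estimated from a human-coded random sample

recall = relevant_found / relevant_total
print(f"Recall: {recall:.1%}")  # -> Recall: 78.3%

# Courts have accepted 75% recall as a reasonable threshold.
if recall >= 0.75:
    print("Clears the 75% recall threshold courts have accepted.")
```

Because the true number of relevant documents is never known exactly, the denominator is itself an estimate, which is why validation sampling (discussed below) matters.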

While the judiciary has focused on TAR used for outgoing productions, there are many uses for TAR beyond assembling relevant documents for production, and there is no reason to expect judicial scrutiny when TAR is implemented for internal tasks. TAR can be used on incoming productions, for example, and it will provide the same time-saving, money-saving and accuracy benefits as it does on outgoing productions. Suppose you have more documents for a particular deponent than you can review: TAR, particularly a tool using active learning, can prioritize the documents most relevant to that deponent. With the algorithm surfacing the most relevant documents first, the limited time and resources available for reviewing that deponent's documents are put to their best use. In this situation the coding decision is not simply relevant or irrelevant, but relevant or irrelevant to the particular deponent.
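To make the prioritization idea concrete, here is a minimal sketch of what an active-learning tool does under the hood: train a classifier on the reviewer's coding decisions, then score the unreviewed documents and serve the likeliest-relevant ones first. The documents, labels and library choices here are hypothetical illustrations, not any vendor's actual implementation; commercial TAR platforms handle all of this internally.

```python
# A minimal sketch of active-learning prioritization: fit a simple
# classifier on the reviewer's coding decisions, then rank unreviewed
# documents by predicted relevance. All documents here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents the human reviewer has already coded for this deponent.
coded_texts = [
    "email from deponent approving the disputed change order",
    "deponent's notes on delay damages",
    "office holiday party invitation",
    "cafeteria menu for March",
]
coded_labels = [1, 1, 0, 0]  # 1 = relevant to this deponent, 0 = not

# The remaining, unreviewed documents.
unreviewed_texts = [
    "deponent's reply about change order pricing",
    "IT notice about a printer outage",
]

vectorizer = TfidfVectorizer()
X_coded = vectorizer.fit_transform(coded_texts)
X_unreviewed = vectorizer.transform(unreviewed_texts)

model = LogisticRegression()
model.fit(X_coded, coded_labels)

# Rank unreviewed documents so the likeliest-relevant are reviewed first.
scores = model.predict_proba(X_unreviewed)[:, 1]
for score, text in sorted(zip(scores, unreviewed_texts), reverse=True):
    print(f"{score:.2f}  {text}")
```

In a real active-learning loop, each new human coding decision is fed back into the model, so the ranking keeps improving as the review proceeds.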

Another important point about TAR is that you can set up multiple TAR projects in the same case. After using one project to determine relevance for your outgoing productions, you can set up as many deponent-specific TAR projects as you need. Additional TAR projects can be created for just about any task that involves more documents than you have time to review.

It is also important to note that the coding decisions from one TAR project can be applied to another set of documents, or each TAR project can start from scratch. If you are using TAR for deposition prep and two deponents covered largely the same topics, you can apply the results of the first review to the second deponent's documents and immediately surface some of the most relevant documents. If the two deponents have little in common, however, you are better off starting from scratch for each.
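Continuing the earlier sketch, here is what that reuse might look like in miniature: fit a model on the first deponent's coding decisions, then score the second deponent's documents with it. The documents, labels and pipeline are all hypothetical; an actual review platform manages this behind a project-level setting.

```python
# A hypothetical sketch of reusing one deponent's coding decisions on a
# second deponent's documents. Everything here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Coding decisions from deponent A's completed review.
deponent_a_docs = [
    "email approving the disputed change order",
    "schedule analysis of the foundation delay",
    "company softball league standings",
    "cafeteria menu for March",
]
deponent_a_labels = [1, 1, 0, 0]  # 1 = relevant, 0 = irrelevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(deponent_a_docs, deponent_a_labels)

# Deponent B covered largely the same topics, so score B's documents
# with A's model to get an immediate head start on the review.
deponent_b_docs = [
    "deponent B's memo on change order pricing",
    "parking garage access request form",
]
scores = model.predict_proba(deponent_b_docs)[:, 1]
for score, doc in sorted(zip(scores, deponent_b_docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

The design choice mirrors the advice in the text: reuse pays off only when the two document sets genuinely overlap; otherwise a fresh model avoids importing irrelevant signals.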

I conclude this series with a list of the components of a TAR workflow to consider, so that you can be sure you are following best practices when implementing your next TAR project. "A defensible TAR workflow addresses the following components:

  1. Identify the team to finalize and engage in the workflow;
  2. Select the software;
  3. Identify, analyze and prepare the TAR set;
  4. Develop project schedule and deadlines;
  5. Human reviewer prepares for engaging in TAR;
  6. Human reviewer trains the computer to detect relevancy, and the computer classifies the set documents;
  7. Implement review quality control measures during training;
  8. Determine when computer training is complete and validate; and
  9. Final identification, review and production of the predicted relevant set."

These workflow components come from EDRM's TAR Guidelines, a must-read for anyone considering a TAR project.
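One common way to handle step 8, determining that computer training is complete, is an elusion test: draw a random sample from the documents the model classified as non-relevant, have humans code it, and see how much relevant material slipped through. A minimal sketch of the arithmetic, with invented counts:

```python
# A hypothetical sketch of an "elusion" check used to validate that
# training is complete. All counts below are invented for illustration.

discard_pile_size = 80_000   # documents the model classified non-relevant
sample_size = 1_200          # random sample from that pile, coded by humans
relevant_in_sample = 18      # sample documents the humans coded relevant

elusion_rate = relevant_in_sample / sample_size
missed_estimate = elusion_rate * discard_pile_size

print(f"Elusion rate: {elusion_rate:.2%}")                         # -> 1.50%
print(f"Estimated relevant documents missed: {missed_estimate:,.0f}")  # -> 1,200

# A low elusion rate supports the conclusion that training is complete;
# a high one means the model needs more training rounds.
```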

Finally, it should be obvious at this point that this two-part series has merely scratched the surface of TAR, but hopefully it has provided enough information that you will consider it for your next large document review. If so, beyond the EDRM guidelines mentioned above, another great place to get more information is The Sedona Conference TAR Case Law Primer, 18 Sedona Conf. J. 1 (2017).

Todd Heffner is a construction litigator and eDiscovery specialist with Jones Walker in Atlanta.