It's now common that a case involves more documents than attorneys can lay eyes on, or should lay eyes on—for cost, time, or other reasons. If you cannot or should not look at every single document but still need to be confident all of the important documents have been found, Technology Assisted Review, or TAR, is a large part of the solution. In this first of a two-part presentation, I will explore why you should use TAR, what it is and how to use it. Part 2 will cover the judicial acceptance of TAR, additional uses of TAR and some best practice considerations.

Two of the best reasons to use TAR are (1) the cost associated with linear review; and (2) that it is more accurate than search terms at identifying relevant documents. First, linear review, which is a document-by-document review of a defined set of documents, is extremely expensive. A common industry benchmark is a document review rate of 40-50 documents per hour. If 100,000 documents need review, that is 2,000 to 2,500 hours—a whole work year's worth of time! And, depending on the experience level of the people reviewing the documents, this could cost $200,000, $400,000 or even $600,000 for reviewers with hourly rates of $100, $200 and $300, respectively (using the 2,000-hour figure). These numbers don't contemplate any QA/QC review, privilege review or second-pass review. No matter how you slice the numbers or what factors you build into an estimate, reviewing a large volume of documents is expensive. TAR allows you to review fewer documents, thereby reducing costs, and it is more effective than other methods.
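The back-of-the-envelope math above is easy to reproduce. The sketch below uses only the figures cited in this paragraph (40-50 documents per hour, $100-$300 hourly rates); it is an illustration of the arithmetic, not a pricing model.

```python
# Linear review cost estimate using the benchmark figures cited above:
# 40-50 documents reviewed per hour, reviewer rates of $100-$300 per hour.
DOCS = 100_000

for pace in (40, 50):                     # documents reviewed per hour
    hours = DOCS / pace
    print(f"At {pace} docs/hour: {hours:,.0f} review hours")
    for hourly_fee in (100, 200, 300):    # reviewer billing rate, $/hour
        print(f"  ${hourly_fee}/hour -> ${hours * hourly_fee:,.0f}")
```

Even before layering on QA/QC, privilege, or second-pass review, the low end of this range is a six-figure line item.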

TAR has repeatedly been shown to be more effective at identifying relevant documents than using search terms (even when given the chance to refine the search terms). As one judge wrote: "While some lawyers still consider manual review the 'gold standard,' that is a myth," and there are case studies that show TAR "can (and does) yield more accurate results than exhaustive manual review, with much lower effort." Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 190 (S.D.N.Y. 2012) (citations omitted). Additionally, Bill Dimm regularly conducts tests comparing search terms and TAR, the results of which can be found on his Cluster Text blog. Spoiler alert: TAR always wins.

So what is this magic that costs less, beats the conventional method and has approval from the judiciary (more on that last point later)? At its most basic, TAR is an algorithm that takes a series of yes or no decisions—made by a human, typically a lawyer—and uses those "coding" decisions to predict how a lawyer would code the remaining documents. The yes or no decisions are most often relevant or irrelevant, but could theoretically be just about anything you want them to be. The intricacies (or guts) of each algorithm will vary, but just about every modern document review platform is going to have this capability. And while some TAR algorithms are based in part on open-source software, even if a company were to show you the exact computer code behind its TAR algorithm, it would be gibberish to you and me. So to better understand how TAR works, it is easier to explain how you use it.
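To make the "coding decisions train a prediction" idea concrete, here is a deliberately toy sketch. It is not any vendor's actual algorithm—real TAR engines use far more sophisticated classifiers—and the document texts and scoring rule are invented for the example. The core loop is the same, though: lawyer decisions in, predictions about uncoded documents out.

```python
from collections import Counter

# Lawyer coding decisions: (document text, relevant?). Invented examples.
coded = [
    ("change order delay claim on the project", True),
    ("schedule impact and delay damages", True),
    ("holiday party catering menu", False),
    ("fantasy football league standings", False),
]

# Tally which words appear in relevant vs. irrelevant documents.
rel_words, irr_words = Counter(), Counter()
for text, relevant in coded:
    (rel_words if relevant else irr_words).update(text.split())

def score(text):
    """Positive score = looks relevant; negative = looks irrelevant."""
    return sum(rel_words[w] - irr_words[w] for w in text.split())

# Predict how the lawyer would code documents no one has looked at.
for doc in ["delay claim spreadsheet", "party standings update"]:
    label = "relevant" if score(doc) > 0 else "irrelevant"
    print(f"{doc!r}: predicted {label} (score {score(doc)})")
```

Swap the word-count scoring for a modern machine-learning model and scale to hundreds of thousands of documents, and you have the basic shape of what a review platform is doing under the hood.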

The easiest way to explain how you use TAR is to walk through its two "generations," 1.0 and 2.0, in that order. Understanding the iterative process of TAR 1.0 makes it easier to understand what is happening during TAR 2.0, which is most often a continuous process, oftentimes referred to as CAL (Continuous Active Learning).

TAR 1.0 required review to progress in batches. After each batch was reviewed and coded, the relevant or irrelevant decisions made by a lawyer would be fed into the algorithm. The algorithm could then place the documents that had not been coded by a lawyer into one of three buckets: relevant, irrelevant or unknown. After enough batches had been reviewed and coded, the algorithm would be in a position to predict whether every uncoded document was relevant. Along the way, quality control would take place. For example, batches of documents that the algorithm deemed irrelevant would need to be confirmed as irrelevant by a lawyer, and the same process would take place for the documents deemed relevant. It is the review of these quality control batches (among other methods)—and a correspondingly low rate of overturned algorithm decisions—that would build enough confidence to stop the review once it reached a certain recall percentage. How many documents this would take, and what recall percentage is required, would vary, but suffice it to say it was far fewer than all of the documents.
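The stopping decision turns on an estimate of recall: of all the relevant documents believed to exist in the collection, what share has the review actually found? One common way to estimate that denominator is a randomly sampled "control set" that lawyers code in full. The numbers below are invented for illustration; in practice the sample size, confidence level, and target recall are negotiated or set case by case.

```python
# Recall estimate from a random control set, of the kind used to decide
# when a TAR review can defensibly stop. All figures are invented.
control_set_size = 1_000      # randomly sampled documents, all coded by lawyers
relevant_in_control = 80      # lawyers found 80 relevant documents in the sample
population = 100_000          # total documents in the collection

# Estimated richness (prevalence) and estimated total relevant documents.
richness = relevant_in_control / control_set_size      # 0.08
est_total_relevant = richness * population             # ~8,000 documents

found_relevant = 6_400        # relevant documents the review has located so far
recall = found_relevant / est_total_relevant
print(f"Estimated recall: {recall:.0%}")
```

If the parties agreed that, say, 75% recall was sufficient, this hypothetical review could stop—having looked at far fewer than all 100,000 documents.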

TAR 2.0 has come along and streamlined the whole process. Instead of needing to wait for the algorithm to spit out new batches, the entire process is continuous, and the algorithm provides the lawyer with a mix of documents necessary to achieve the same results as TAR 1.0. Depending on the particular algorithm, the predictions can be updated as frequently as every 20 documents, instead of after a batch of several hundred. Additionally, instead of placing the uncoded documents into one of three buckets, the algorithm ranks all of the uncoded documents by how likely they are to be relevant.
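The shift from buckets to a ranking is simple to picture. In the toy sketch below, each uncoded document has a model-assigned relevance score (the scores are invented for illustration), and the review simply works from the top of the ranked list down, with the scores refreshed continuously as new coding decisions come in.

```python
# TAR 2.0-style ranking: every uncoded document gets a relevance score and
# the reviewer is served the most-likely-relevant documents first.
# Document IDs and scores are invented for illustration.
uncoded = {
    "doc_017": 0.92,   # hypothetical model score: probability of relevance
    "doc_203": 0.15,
    "doc_088": 0.71,
    "doc_640": 0.04,
}

# Highest-scoring documents go to the front of the review queue.
ranked = sorted(uncoded, key=uncoded.get, reverse=True)
print(ranked)   # doc_017 first, doc_640 last
```

Because the most likely relevant documents are reviewed first, the rate of new relevant documents naturally tails off, which itself becomes evidence that the review is approaching its target recall.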

Now that you know what TAR is, the second part of this commentary, coming out the next business day, will cover some of the TAR case law, how to use TAR for more than simply determining whether documents are relevant and some best practice considerations when using TAR.

Todd Heffner is a construction litigator and e-discovery specialist with Jones Walker in Atlanta.