Behind the Magic of Technology-Assisted Review, Part 1
If you cannot or should not look at every single document but still need to be confident all of the important documents have been found, Technology Assisted Review, or TAR, is a large part of the solution.
January 02, 2020 at 01:47 PM
5 minute read
It's now common that a case involves more documents than attorneys can lay eyes on, or should lay eyes on—for cost, time, or other reasons. If you cannot or should not look at every single document but still need to be confident all of the important documents have been found, Technology Assisted Review, or TAR, is a large part of the solution. In this first of a two-part presentation, I will explore why you should use TAR, what it is and how to use it. Part 2 will cover the judicial acceptance of TAR, additional uses of TAR and some best practice considerations.
Two of the best reasons to use TAR are (1) the cost of linear review; and (2) its accuracy relative to search terms in identifying relevant documents. First, linear review, a document-by-document review of a defined set of documents, is extremely expensive. A common industry benchmark is a review rate of 40-50 documents per hour. At 50 documents per hour, 100,000 documents would take 2,000 hours, a whole year's worth of working time. And, depending on the experience level of the reviewers, that could cost $200,000, $400,000 or even $600,000 at hourly rates of $100, $200 and $300, respectively. These numbers don't contemplate any QA/QC review, privilege review or second-pass review. No matter how you slice the numbers or what factors you build into an estimate, reviewing a lot of documents is expensive. TAR allows you to review fewer documents, thereby reducing costs, and it is more effective than the alternatives.
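The arithmetic above is simple enough to sketch in a few lines. This is just a back-of-the-envelope calculator built from the benchmark figures in this article (the function name `review_estimate` is mine, not any e-discovery tool's):

```python
# Back-of-the-envelope linear-review math using the article's benchmark
# figures: 40-50 documents reviewed per hour, $100-$300 hourly rates.
def review_estimate(num_docs, docs_per_hour, hourly_rate):
    hours = num_docs / docs_per_hour
    return hours, hours * hourly_rate

hours, cost = review_estimate(100_000, 50, 100)
print(f"{hours:,.0f} hours, ${cost:,.0f}")  # 2,000 hours, $200,000
```

Swapping in a $300 reviewer rate yields the $600,000 figure; dropping the pace to 40 documents per hour pushes the hours, and the cost, even higher.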
TAR has repeatedly been shown to be more effective than search terms at identifying relevant documents (even when the searching party is given the chance to refine its terms). As one judge wrote: "While some lawyers still consider manual review the 'gold standard,' that is a myth," and there are case studies showing TAR "can (and does) yield more accurate results than exhaustive manual review, with much lower effort." Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 190 (S.D.N.Y. 2012) (citations omitted). Additionally, Bill Dimm regularly conducts tests comparing search terms and TAR, the results of which can be found on his Cluster Text blog. Spoiler alert: TAR always wins.
So what is this magic that costs less, beats the conventional method and has approval from the judiciary (more on that last point later)? At its most basic, TAR is an algorithm that takes a series of yes or no decisions, made by a human, typically a lawyer, and uses those "coding" decisions to predict how the lawyer would code the remaining documents. The yes or no decisions are most often relevant or irrelevant, but could in theory be just about anything you want them to be. The intricacies (or guts) of each algorithm vary, but just about every modern document review platform has this capability. And while some TAR algorithms are based in part on open-source software, even if a company showed you the exact computer code behind its TAR algorithm, it would be gibberish to you and me. So to understand how TAR works, it's easier to explain how you use it.
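To make "learn from coding decisions, predict the rest" concrete, here is a deliberately toy illustration; no vendor's actual algorithm looks like this. It scores an uncoded document by its word overlap with the documents a lawyer coded relevant versus those coded irrelevant (all names and sample documents are invented for the example):

```python
from collections import Counter

def tokens(doc):
    # crude tokenization: lowercase words with their counts
    return Counter(doc.lower().split())

def score(doc, coded):
    # word overlap with relevant decisions minus overlap with irrelevant ones
    t = tokens(doc)
    rel = sum(sum((tokens(d) & t).values()) for d, label in coded if label)
    irr = sum(sum((tokens(d) & t).values()) for d, label in coded if not label)
    return rel - irr

# a lawyer's yes/no ("relevant"/"irrelevant") coding decisions
coded = [
    ("crane rental invoice for project site", True),
    ("schedule delay claim on project", True),
    ("office holiday party menu", False),
]

# predict how the lawyer would code the remaining documents
for doc in ("invoice for crane delay claim", "menu for party"):
    print(doc, "->", "relevant" if score(doc, coded) > 0 else "irrelevant")
```

Real TAR engines use far more sophisticated statistical models, but the workflow is the same: human decisions in, predictions about the uncoded documents out.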
To explain how you use TAR, it is easiest to walk through the two "generations" of TAR, 1.0 and 2.0, in that order. Understanding the iterative process of TAR 1.0 makes it easier to understand what happens during TAR 2.0, which is most often a continuous process, commonly referred to as CAL (Continuous Active Learning).
TAR 1.0 required review to progress in batches. After each batch was reviewed and coded, the relevant-or-irrelevant decisions made by a lawyer would be fed into the algorithm. The algorithm could then place each document not yet coded by a lawyer into one of three buckets: relevant, irrelevant or unknown. After enough batches had been reviewed and coded, the algorithm would be in a position to predict whether every uncoded document was relevant. Along the way, quality control would take place: for example, batches of documents the algorithm deemed irrelevant would need to be confirmed as irrelevant by a lawyer, and the same would happen for documents deemed relevant. It is the review of these quality control batches (among other methods), and a correspondingly low rate of overturned algorithm decisions, that would build enough confidence to stop the review once it reached a certain recall percentage, i.e., the share of all relevant documents actually found. How many documents this would take, and what recall percentage is required, would vary, but suffice it to say it was far less than all of the documents.
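The batch-and-stop workflow can be simulated in miniature. This sketch is purely illustrative (real platforms' ranking models, QC protocols and stopping criteria are far more rigorous); it assumes an imperfect model score that merely tends to rank relevant documents higher, and stops once 80% recall is reached:

```python
import random

def tar_10_loop(docs, is_relevant, model_score, batch_size=100, target_recall=0.80):
    # review in model-ranked batches; stop once the recall target is reached
    ranked = sorted(docs, key=model_score, reverse=True)
    total = sum(1 for d in docs if is_relevant(d))
    found = reviewed = 0
    for start in range(0, len(ranked), batch_size):
        for d in ranked[start:start + batch_size]:  # the lawyer codes this batch
            reviewed += 1
            found += is_relevant(d)
        if found / total >= target_recall:          # QC / recall check passes
            break
    return reviewed, found

random.seed(1)
docs = list(range(1000))
relevant = set(random.sample(docs, 100))            # 100 of 1,000 docs matter
noisy_score = lambda d: (d in relevant) + 2 * random.random()  # imperfect model
reviewed, found = tar_10_loop(docs, lambda d: d in relevant, noisy_score)
print(f"reviewed {reviewed} of {len(docs)} documents to reach {found}/100 relevant")
```

The point of the simulation matches the point of the paragraph: the review stops well short of all 1,000 documents while still capturing the required share of the relevant ones.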
TAR 2.0 streamlines the whole process. Instead of waiting for the algorithm to spit out new batches, the entire process is continuous, and the algorithm serves the lawyer a mix of documents calculated to achieve the same results as TAR 1.0. Depending on the particular algorithm, predictions can be updated as frequently as every 20 documents, rather than after a batch of several hundred. Additionally, instead of placing the uncoded documents into one of three buckets, the algorithm ranks all the uncoded documents by how likely they are to be relevant.
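A minimal sketch of that continuous loop, again purely illustrative and not any platform's actual implementation: the reviewer always sees the highest-ranked uncoded document, the ranking refreshes every 20 documents, and (one common heuristic among several) the review winds down after a long run of irrelevant documents. The perfectly accurate `model` below is an assumption made to keep the example deterministic:

```python
def cal_review(docs, is_relevant, score, refresh_every=20, stop_after_misses=40):
    # always serve the highest-ranked uncoded document; re-rank periodically
    uncoded = list(docs)
    found = reviewed = misses = 0
    while uncoded and misses < stop_after_misses:
        if reviewed % refresh_every == 0:
            uncoded.sort(key=score, reverse=True)  # re-rank all uncoded docs
        d = uncoded.pop(0)                         # most-likely-relevant next
        reviewed += 1
        if is_relevant(d):
            found, misses = found + 1, 0
        else:
            misses += 1                            # a long dry spell ends review
    return reviewed, found

docs = list(range(1000))
is_rel = lambda d: d % 10 == 0                 # pretend every 10th doc is relevant
model = lambda d: 1.0 if d % 10 == 0 else 0.0  # (unrealistically) perfect ranking
reviewed, found = cal_review(docs, is_rel, model)
print(f"reviewed {reviewed} documents, found {found} relevant")
```

Because the toy model ranks perfectly, all 100 relevant documents surface first and the review ends after 140 documents; with a realistic model the relevant documents merely cluster toward the front, but the continuous re-ranking is what distinguishes CAL from the batch cycle of TAR 1.0.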
Now that you know what TAR is, the second part of this commentary, coming out on the next business day, will cover some of the TAR case law, how to use TAR for more than simply determining whether documents are relevant, and some best practice considerations when using TAR.
Todd Heffner is a construction litigator and e-discovery specialist with Jones Walker in Atlanta.