The following article is written by the winners of the “Call for Papers” associated with the Seventh Annual ASU-Arkfeld eDiscovery and Digital Evidence Conference. The conference hosts a competitive annual “Call for Papers” for submissions that address the progress, challenges, and future of e-discovery, digital evidence, and data analytics. The article below was accepted as 2018's winning paper.

The authors of this winning paper will be presenters at the conference on March 6-8, 2018 in Phoenix, Arizona, at ASU's Sandra Day O'Connor College of Law. Those interested may register for the conference at discounted early bird rates until Friday, Feb. 9, and use discount code LTNArkfeld2018 for an extra 15% off: http://events.asucollegeoflaw.com/ediscovery/register/

One of the most common things we hear as Predictive Coding specialists is “We want to use Continuous Active Learning (CAL) on this project.” The term “CAL” has come to signify the ultimate method of TAR training, and also implies that other methods, such as Simple Active Learning (SAL), are things of the past that should be immediately discarded. While we agree CAL can be a very useful training strategy to employ, it is helpful to remember the purpose for which you are using Predictive Coding on your particular matter, and to choose a learning strategy that fits that goal.

The term “Continuous Active Learning,” or “CAL,” derives from a study by Gordon Cormack and Maura Grossman demonstrating its superiority over other training methods at improving recall. Since then, CAL has developed a reputation as the most effective method of training Predictive Coding. For that reason, it has been heralded as “TAR 2.0,” among a slew of other marketing-driven terms singing its praises. But despite being billed by vendor marketing departments as “the next big thing,” it has actually been around since the onset of machine learning years ago.

The logic of a CAL training strategy is very simple: continue to prioritize high-scoring documents for training until no more relevant documents remain. Every time system learning occurs, the system refreshes the document rankings, so the documents you train are the highest scoring available at that time and, therefore, the most likely to be relevant.
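To make the mechanics concrete, here is a minimal sketch of that loop in Python. Everything in it is illustrative: “features” stands in for a document-feature matrix, “human_review” for attorney coding decisions, and scikit-learn's logistic regression for whatever classifier a given platform actually uses; no vendor implements CAL exactly this way, and the stopping rule shown (an entire batch coming back not relevant) is only one of several in use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cal_review(features, seed_labels, human_review, batch_size=50):
    """Illustrative Continuous Active Learning loop: retrain, re-rank,
    review the top batch, stop when a full batch yields nothing relevant.
    `seed_labels` must contain at least one relevant and one not-relevant
    example so the classifier has both classes to learn from."""
    labeled = dict(seed_labels)                 # doc_id -> 1 (relevant) / 0 (not)
    unreviewed = set(range(len(features))) - set(labeled)
    while unreviewed:
        ids = sorted(labeled)
        model = LogisticRegression(max_iter=1000)
        model.fit(features[ids], [labeled[i] for i in ids])
        pool = sorted(unreviewed)
        scores = model.predict_proba(features[pool])[:, 1]
        # Queue the highest-RANKED documents: the rank order, not the
        # absolute score value, drives what gets reviewed next.
        batch = [pool[i] for i in np.argsort(-scores)[:batch_size]]
        decisions = {d: human_review(d) for d in batch}
        labeled.update(decisions)
        unreviewed.difference_update(batch)
        if not any(decisions.values()):         # an all-irrelevant batch: stop
            break
    return labeled
```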

We do not dispute that CAL is an efficient way to train Predictive Coding. In fact, we agree wholeheartedly, and use it often. However, it is important to remember that this method emphasizes a more ordinal approach to review; documents are trained and reviewed in descending order based on rank, and it is the rank (not necessarily the score value) that matters. CAL has a tendency to collaterally elevate the scores of irrelevant documents whose content is similar to relevant ones (increasing the volume of false positives and therefore reducing precision). Those erroneously escalated documents are trained as irrelevant only as they come up in the review.

While CAL may improve the ordering of documents by likely relevance relative to one another, looking at the database by absolute scores, it is hard to tell what is relevant, what is not, and where to draw that line. Unfortunately, that means that unless you spend the time to train everything down to the point of considering ending review, it is difficult to estimate how many documents you will need to review. Initial sampling can help in this regard, but it is still difficult to plan staffing and deadlines when the location of the finish line is uncertain. From a practical standpoint, this can be frustrating, particularly since many prefer to use Predictive Coding as a culling tool that determines what to send to review or production, rather than as a feature designed to enhance the review and make it more efficient. If you are using Predictive Coding to rely on system suggestions prospectively, such as in high-volume, tight-deadline situations (second requests, large multi-jurisdiction litigation, etc.) where you cannot review or train every document, you may instead have to work on improving the system's suggestions more broadly. That use case calls for the more traditional Predictive Coding approach, since a CAL review method may be impractical.

The traditional Predictive Coding training method is often referred to as Simple Active Learning (SAL) or “TAR 1.0,” but can also be described as “training the unknowns,” “focus document training,” or “uncertainty sampling.” SAL is about building a model instead of an order. With this method, the system is trying to find the cut-off between relevant and not relevant and sharpen that threshold. Accordingly, we train the documents near the suggestion threshold, as these documents relate to the concepts that are most questionable to the machine learning. Thus, SAL tries to carve out false positives pre-emptively, instead of at the point of review based on rank order, as CAL does. Viewed from the perspective of the scores in the database: whereas CAL drags relevant (and similar irrelevant) documents upward in score, increasing recall to the initial detriment of precision, SAL separates the irrelevant from the relevant, increasing precision while maintaining recall.
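For contrast, here is the same hypothetical setup trained SAL-style. Instead of pulling from the top of the ranking, each round selects the documents whose scores sit closest to the suggestion threshold (assumed here to be 0.5, where the model is least certain), and the deliverable is the fitted model rather than a review queue. As before, every name and the choice of classifier are illustrative assumptions, not any platform's actual design:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sal_training(features, seed_labels, human_review,
                 rounds=10, batch_size=50, threshold=0.5):
    """Illustrative Simple Active Learning via uncertainty sampling: each
    round trains the documents closest to the suggestion threshold,
    sharpening the relevant/not-relevant boundary rather than building
    a review order."""
    labeled = dict(seed_labels)                 # doc_id -> 1 (relevant) / 0 (not)
    unreviewed = set(range(len(features))) - set(labeled)
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):                     # fixed training budget, not exhaustion
        ids = sorted(labeled)
        model.fit(features[ids], [labeled[i] for i in ids])
        pool = sorted(unreviewed)
        scores = model.predict_proba(features[pool])[:, 1]
        # Select the "unknowns": documents nearest the threshold.
        nearest = np.argsort(np.abs(scores - threshold))[:batch_size]
        batch = [pool[i] for i in nearest]
        labeled.update({d: human_review(d) for d in batch})
        unreviewed.difference_update(batch)
    return model, labeled                       # the model itself is the deliverable
```

The returned model can then be used to score the full database and set a culling threshold for review or production, which is the prospective use case described above.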

Considering the positive attributes of both methods, we tend to utilize SAL earlier in the discovery process, on very large projects to help determine the review or production set, or in situations where project duration or training resources are very limited. We tend to see CAL used more often in a generic review setting, where eyes-on review training is more feasible.

This does not preclude applying either model on a limited basis to improve results in a particular use case; using a limited CAL approach on a second request to help improve recall, or using SAL on an ongoing review to help improve precision, can, does, and should happen when warranted. Understanding how best to utilize each goes a long way toward understanding that a one-size-fits-all solution (as CAL is often heralded to be) is not always the best solution.

In conclusion, both CAL and SAL training strategies can be helpful in a TAR review. As skilled e-discovery practitioners understand, the choice between them should depend on the circumstances of the project, not on the marketing.