
Within the past few years alone, data analytics capabilities for law have exploded, and so have the software offerings providing those analytics. So how does a law firm or academic law library decide between platforms?

A group of law librarians at the American Association of Law Libraries (AALL) ventured to find out with a study of their own. At Monday's “The Federal and State Court Analytics Market—Should the Buyer Beware? What's on the Horizon?” session, four law firm library directors walked a packed room through the results of the study, one that found no winner but a number of issues and potential improvements for current analytics platforms.

Diana Koppang, director of research and competitive intelligence at Neal, Gerber & Eisenberg, explained that the study analyzed seven platforms focused on federal courts: Bloomberg Law, Docket Alarm Analytics Workbench, Docket Navigator, Lex Machina, Lexis Context, Monitor Suite, and Westlaw Edge. The study included 27 librarians from law firms and academic law libraries, with each tester analyzing two platforms over the course of one month.

Each tester approached the platforms with a common set of 16 “real world” questions, developed by a panel of four librarians in conjunction with attorneys. One example is the first question, which asked, “In how many cases has Irell & Manella appeared in front of Judge Richard Andrews in the District of Delaware?” with further constraints limiting results to intellectual property and dockets. The full set of questions can be found on the AALL website.

And speaking of that first question, the answer revealed a key problem with comparing the tools: there was little consistency between the platforms. The correct answer was 13 cases, explained Tanya Livshits, director of research services at Irell & Manella, yet not a single platform returned 13 cases.

Statistics courtesy of AALL. Photo by Zach Warren/ALM.

The problems with each of the platforms differed: Bloomberg had issues with the IP aspects of the search, Docket Navigator and Lex Machina returned false hits for attorneys who had left the firm, Monitor Suite focused more on opinions than on dockets, and Westlaw Edge automatically filtered for the top 100 results, of which Irell & Manella was not one.

This is not to say that analytics programs aren't helpful. In fact, identifying some cases up front can be a good way to kick-start the search. It's just that these platforms shouldn't be the end-all of research, Livshits said.

“This just proves that while analytics are a great place to start, you need to dive deeper if you want to do something exact,” she explained. “A manual review is still so necessary.”

The study also sorted the companies by both functionality and complexity.

Results courtesy of AALL. Photo by Zach Warren/ALM.

And herein lies the second issue: In many cases, the librarians felt they weren't comparing apples to apples. Jeremy Sullivan, manager of competitive intelligence and analytics at DLA Piper, noted that Context is his go-to for expert witnesses, while Monitor Suite has focused more on competitive intelligence, with granular tagging and exhaustive filters and lists.

These differences are why the testers would not, and could not, declare that one platform is better than another, Koppang said. “So much depends on your use case; so much depends on your organization size. … There's so many factors that nobody can declare a winner,” she explained.

Kevin Miles, manager of library services at Norton Rose Fulbright, agreed, adding, “This is just our assumption, you may have a different idea. … What we're trying to do here is have all of the vendors have some improvement. We're not trying to burn anybody, we're trying to get people to improve.”

Ultimately, the goal is to have the assembled law librarians conduct their own similar tests, the panel said. Koppang said that those conducting a test should:

  • Think about your use case (practice area, key users) prior to deciding what and how to test;
  • Record date and time of search, which is key for comparing results;
  • Use real-world examples;
  • Detail search strategy (date ranges, steps taken, outside resources); and
  • Remember to capture images and export data.

Conducting these tests may not only provide key insights into selecting a platform, but also serve another purpose: helping the law librarians understand the full capabilities of the platforms. “A lot of the testers emailed me and said, 'This is a really great training exercise,' which we didn't really think of it like that,” Koppang added.

Similarly, the testers offered a few insights for the platforms they tested, among them greater flexibility and a need to train those who are training others. Miles offered a few proposals to improve learning, including additional short videos on Vimeo or YouTube, additional short PDF training documents, pre-set searches with buttons or check boxes to combine features, and the ability to mouse over specific words to reveal search strategy reminders.

Koppang added, “They're trying to explain every single piece and module they have, and they're overwhelmed.”

The panel also hammered home one point throughout the presentation: the need for transparency. Especially considering the issues with specific searches such as those in Question 1, the panelists (and those asking questions in the audience) reiterated that they need to know the platform's limitations in order to properly inform the attorney.

If something goes wrong without that information, Livshits said, “You're going to lose the trust of both the librarian and the attorney, and it's tough to get the trust of the attorney back.”

Finally, Sullivan suggested that for smaller firms, analytics platforms can do a better job of combining and offering features. In particular, he pointed to companies that said they don't offer a feature because it's in a “sister platform.”

“Many of you are content to say you can't be all things to all people,” Sullivan said. “Well, I would say you're not trying.”