During the Great Analytics ‘Shoot-Out’ at AALL, law librarians tested and compared the results of seven federal litigation analytics platforms.

Analytics tools enable lawyers to ask completely new questions and gain insights that were virtually unavailable in a text-based research world. It takes a special skill set to ask the right “data quality” questions when firms are assessing the dozens of analytics products competing for a share of lawyers’ desktops or an organization’s information resource budget.

Use cases for analytics include: pitch strategy, AFA responses, litigation strategy, deal negotiation strategy, managing client expectations, driving process efficiency, internal benchmarking and developing peer metrics.

Law librarians have been quietly driving the adoption of analytics in the business and practice of law. Hundreds of librarians, knowledge managers and legal publishing executives jammed into a meeting room at the 2019 American Association of Law Libraries Annual Meeting and Conference in Washington, DC on July 15th to attend a two-and-a-half-hour “super-session,” “The Federal and State Analytics Market: Should the Buyer Beware?” exploring the state of litigation analytics products. The bottom line is that the market is quite complex and changing rapidly. Diana Koppang and I co-chaired the program, which was broken into three separate sessions: federal analytics, state analytics, and a Q&A on the future of legal analytics moderated by Bob Ambrogi. This post will explore the results of the federal litigation product tests.

Defining Litigation Analytics: Comparing Apples, Oranges and Kiwis

Litigation analytics is in its infancy. Thomson Reuters launched the first litigation analytics platform, Intelligence Suite (Firm360), in 2005 for librarians and marketers. The 2010 launch of Lex Machina triggered a virtual “space race” in which large legal publishers and startups are competing to deliver analytics directly to attorneys’ desktops. This crowded market presents a “brain-numbing” challenge to tease out the varieties of data analyzed and insights delivered by each product.

There are many flavors of litigation analytics. Some are based on docket data; others are based on textual analysis of case law. There are products that specialize in specific areas of law. There are federal analytics products that exclude all bankruptcy court data. Nonetheless, the AALL test panel delivered important insights into the functionality of products and caveats for librarians and lawyers looking to purchase analytics products.

The Product Comparison. Jeremy Sullivan, Manager of Competitive Intelligence and Analytics; Kevin Miles, Manager of Library Services, Norton Rose Fulbright; and Tanya Livshits, Director of Research Services, Irell & Manella, reported on a recent study that engaged 27 librarians in testing and comparing the results of seven federal litigation analytics platforms: Lex Machina, Westlaw Edge, Fastcase/Docket Alarm Workbench, Thomson Reuters Monitor Suite, Bloomberg Law, Lexis Context and Docket Navigator.

The test parameters. The testers limited their searches to federal district court data, used only docket analytics features and asked real-world questions. The testers were astonished by the wide range of results from each system and even compared the results with a manual search.

The PACER Problem – The Big Caveat

All the federal products rely heavily on PACER data. Even though PACER data is more consistent than state court data, it is riddled with problems. Each vendor deals with data gaps and data normalization differently. One vendor created its own topics by analyzing complaints. Here are just some of the issues that each of the vendors of federal litigation analytics is tackling.

  • PACER has an inflexible data input form which is out of date and limits what can be entered.
  • PACER is full of typos.
  • Lawyers can’t identify more than one nature of suit (NOS) code.
  • “Other” natures of suit – many cases are thrown into a generic “catch-all” category.
  • PACER does not normalize names of parties, law firms or attorneys. Some law firms have over 1,000 name variations in PACER.
  • PACER does not correct misattribution, e.g., when attorneys change firms.

Here is a sample question and results: In how many cases has Irell & Manella appeared in front of Judge Richard Andrews in the District of Delaware?

The session illustrated how painfully challenging it is to compare products and to understand why various products deliver different results. In one case, the results even varied within the same system depending on how the search was crafted.

Ease of Use vs. Advanced Functionality. Currently, “ease of use” and “advanced functionality” are tradeoffs. You have to balance those factors in selecting any product. This, of course, will change as products improve. Here is how the products were rated in each category by the 27 testers.

The takeaways from the test:

  • There is no winner.
  • The PACER Problems (highlighted above) impact all vendors. Each vendor addresses the data challenges differently or chooses to ignore some data problems.
  • The best platform for each organization depends on the firm’s “use case” and budget.
  • Analytics platforms generate better results when combined with research through other platforms (company research, docket searches).
  • There are major differences between platforms in terms of how they code data and report results.
  • It requires a high level of expertise to get some answers. There may be hidden features and advanced techniques that will not be apparent to the casual user.
  • It is critical to understand the content that you need to search (e.g., opinions vs. actual docket filings).
  • If librarians don’t know about key features and nuances, can we expect attorneys to?

 Advice for Vendors:

  • Improve flexibility in searching
  • Improve transparency so subscribers know what they are and are not getting, and what they can and cannot do.
  • Provide strong training documentation – example searches, explanations of limitations and capabilities, coverage details – and publish everything publicly.
  • Test platforms with librarians, not just attorneys.
  • At this point, AI is not sufficient to address all litigation analytics issues.
  • Enhance learning by adding short training videos on platforms like Vimeo or YouTube.
  • Add short PDF training documents.
  • Create “canned” searches with radio buttons or check boxes to combine features.
  • Add mouse-overs to specific words to reveal search strategy reminders.
  • Add a Training Blog to remind librarians how a difficult problem was solved.
  • A daily report could pose a research question with a solution.

The three big takeaways are:

  1. Have librarians manage the product evaluation and selection process. Few lawyers have the time or patience to examine all the issues. It is rare for other law firm administrators to have the content expertise to take on the evaluation.
  2. Vendor Transparency Will Make or Break Litigation Analytics Products. It is imperative that vendors disclose the limits of their content and functionality. This market segment is too complex for vague documentation or hyped up marketing brochures.
  3. Everything Will Be Different Tomorrow—Hopefully Better. We are at the dawn of “the age of analytics.” If vendors listen to and collaborate with their potential customers, their products will improve. Some combination of human and AI interaction will optimize the role of analytics in delivering insights to lawyers and legal administrators. But until then, let the buyer beware.

Note:  This post was originally published on Above the Law.