The first email comment I received regarding yesterday’s blog post on the Casetext / National Legal Research Group report was from longtime legal publishing and technology veteran Richard Reiben. Reiben observed that, just as the Eskimos are reported to have 50 words for snow, we may need “50 words for AI.” I agree: it may be time for legal tech to develop a more nuanced and precise taxonomy for the host of “smart” technologies and systems hidden beneath the AI cliché. Gartner famously defined the “hype cycle,” which diagrams how technologies fall from the “peak of inflated expectations” into the “trough of disillusionment.” A new language of AI might accelerate our rebound to the “plateau of productivity.”

The report issued by Casetext and the National Legal Research Group continued to stir controversy. At Lexis’ request, NLRG issued a statement clarifying its involvement in the AI study. Below is the response that Lexis received from an executive at NLRG.

NLRG Response: LexisNexis sent an inquiry about the study directly to NLRG and received the following response: “Our participation in the study primarily involved providing attorneys as participants in a study that was initially designed by Casetext. We did not compile the results or prepare the report on the study—that was done by Casetext.”

Lexis Response: Lexis also offered an official response to the study from Jeff Pfeifer, VP of Product Research at LexisNexis:

LexisNexis has reviewed the referenced ‘study’ conducted by National Legal Research Group, Inc. and we have significant concerns with the methodology and sponsored nature of the project.  

First and foremost, the relationship between National Legal Research Group, Inc. and Casetext for the work should have been disclosed. Second, the methods used are far removed from those employed in an independent lab study. In the survey in question, Casetext directly framed the research approach and methodology, including hand-picking the litigation materials the participants were to use.

In response to an inquiry from LexisNexis, John Buckley, President of National Legal Research Group, responded: “Our participation in the study primarily involved providing attorneys as participants in a study that was initially designed by Casetext. We did not compile the results or prepare the report on the study—that was done by Casetext.” Nowhere is this relationship disclosed in the report, nor is the report labeled as work product of Casetext.

Third, NLRG participants were ‘trained’ in the use of Casetext prior to the test, but received only a brief introduction to Lexis Advance; it was presumed that all participants already had a basic familiarity with Lexis Advance and all of its AI-enabled search features.

From the limited information presented in the paper, the actual search methods used by study participants do not appear to be in line with user activity on Lexis Advance. References to ‘Boolean’ search are not representative of results generated by machine learning-infused search on Lexis Advance.

We are confident that users of Lexis Advance and its advanced search capabilities, Lexis Answers service, Ravel View data visualization and other AI-infused solutions are well-served by our platform.

Casetext Response to Lexis: I received the following comment from Casetext CEO Jake Heller regarding the Lexis criticism of the report: “We stand by the study; we appreciate that Lexis wishes the results were otherwise. Attorneys can judge the benefits of CARA A.I. for themselves by signing up for a free trial of Casetext. If Lexis would like to conduct their own study, we will gladly make a free Casetext account available for the duration of the experiment.”

The Challenge of Measuring Comparative Value: On the one hand, I hate to vilify Casetext for trying to measure the impact of their CARA product on research efficiency. At least they tried; most vendors never get that far. On the other hand, if you are going to undertake a study, make it transparent and appeal to the experts, not the novices who can be confused by the hype. In early 2017, Ross commissioned a legal research study by Blue Hill Research, “Artificial Intelligence in Legal Research,” which in my opinion was full of laughable inconsistencies and improbable conclusions. The “whopper” that comes to mind is that the study used “expert” legal researchers who had never used the Lexis or Westlaw research systems. When I saw that the Casetext study focused only on Lexis and lacked comparisons with other research systems, my brain went on high alert. Casetext co-founder Jake Heller explained that they focused on one system in order to reduce the cost of the study, but unfortunately the narrow focus only reduced the report’s credibility. I saw another “red flag” embedded in the last survey question, which described Casetext as an adjunct to a “primary research tool” (Lexis, Westlaw, Bloomberg Law, Fastcase).

In other words, Casetext was not actually suggesting that CARA could replace a major legal research system, although someone unfamiliar with the legal research ecosystem might conclude that from the title and text of the report. There was no need to suggest otherwise, even by omission. CARA offers a unique “brief as query” research solution that delivers real efficiencies in specific research scenarios.