Today Casetext released a study it commissioned, conducted by attorneys at the National Legal Research Group Inc. The study is titled “The Real Impact of Using Artificial Intelligence in Legal Research.” But is that really what the study is measuring? I think not. While some of the conclusions may be valid, I have a recoil reflex when I smell “the fog of hype” which sadly hovers around so many discussions of legal AI.
I spoke to Casetext co-founders Jake Heller, CEO, and Pablo Arredondo, Chief Legal Research Officer. Throughout the conversation Heller repeatedly referred to the Casetext CARA product offering as “contextualized research.” While I am not surprised that CARA can deliver research efficiencies, I am disappointed that they are promoting a report with such a misleading title. My conclusion is that this research study could benefit from some contextualizing.
AI is Not the Real Basis of Comparison. All the major online research systems from Lexis, Thomson Reuters, Bloomberg and Fastcase use some form of a proprietary AI system in generating search results. So the study isn’t comparing AI versus non-AI; it is comparing the outcomes of two completely different approaches to legal research using AI.
The study is comparing what I would call “search statement” research vs. “document based” research. CARA is famous for pioneering a method which they call “document as query” research. In the CARA system the algorithm examines an entire document, such as a brief or a complaint, and extracts and weights key legal, factual, jurisdictional and procedural elements. These results can be refined and focused with additional keywords.
All of the other major research systems currently rely primarily on some form of search query, whether it is a natural language question or a Boolean query. But those results are enhanced with proprietary algorithms, citation systems and, increasingly, analytics.
Does Legal Research = Caselaw Research?
One more caveat. The study seems to assume that lawyers only do caselaw research, not statutory or regulatory research, and that they always have a document in hand outlining the issues. Do lawyers never do research starting from scratch? In those alternate scenarios CARA would not deliver the efficiencies described in the report.
Now that that’s out of the way, here are the highlights.
Attorneys using Casetext CARA reported finishing their research projects on average 24.5% faster than attorneys using Lexis. The study says that the average attorney would save 132 to 210 hours of legal research per year. As I pointed out above, I suspect that savings figure is based on the assumptions that lawyers only do case law research and never have to conduct original research from scratch. So the actual hours saved using CARA may be lower when adjustments are made for other types of research.
Attorneys using CARA rated the results as being on average 21% more relevant when compared to the results of using Lexis. This clearly is one of CARA’s strengths, because their “document as query” approach automatically extracts and weights factual issues such as the types of parties, the jurisdiction and the procedural posture.
45% of the attorneys believe they would’ve missed important or critical precedents if they had only done traditional legal research.
75% of the attorneys preferred their research experience on Casetext over LexisNexis®.
Keep the Studies Coming. I do applaud Casetext’s effort to quantify research outcomes in real life. For too long, lawyers and information professionals have selected systems based on price and content rather than demonstrated efficiencies. Professor Susan Nevelow Mart’s recent studies on search algorithms have no doubt incentivized vendors to conduct controlled studies. I would like to see more studies like the Casetext/NLRG study – but they should avoid the lure of simplistic AI hype and contextualize the outcomes.