Since the release of ChatGPT last November, the buzz around artificial intelligence (AI) solutions and the related implications for legal professionals has grown to a fever pitch. It feels like hardly a day goes by without another product release announcement or reports of large investments in tools that are not yet ready for commercial use.
This is an extremely exciting time for those of us who work in legal technology. Given time and thoughtful development, the impact of these rapidly evolving solutions will be unprecedented. That said, low-risk, high-value, intuitive solutions to the challenges in the business and practice of law aren't going to appear overnight.
As your partner focused on the business and practice of law, we're dedicated to delivering the best solutions by working with and listening to our clients, including 99 of the Am Law 100 and 90% of the world's top law firms. After 25 years in the business, including the last 10 as a pioneer in legal AI, we're prepared to usher in a new generation of legal solutions that amplify the impact of legal teams and the professionals who support them.
With all the buzz, you may feel uncertain about your firm's ability to choose a technology that sets you apart while avoiding the potential embarrassment of generative AI (GenAI) hallucinations or the accidental exposure of IP through a less-than-secure open AI infrastructure.
When Can Intuitive GenAI-Powered Technology Be Put to Work in Legal Without Substantial Risk?
First, let's look at the capabilities GenAI makes available to us to drive that impact. (*Editor's note: To deepen your understanding beyond this article, we recommend checking out Stanford's HELM results. This study benchmarks well-known large language models across a range of scenarios, including legal tasks.)
ChatGPT has shown that, when software is designed the right way, lawyers will adopt it very quickly. The question we now ask ourselves as legal technologists is how we can improve upon that exciting user experience, and tailor it to the specific challenges in legal. Where does it make sense to replace clicks with chat? How can we make redlining or proofreading as delightful as ChatGPT? What other potential solutions does this technology unlock that have so far been left undiscovered?
The Potential of Structured Data and LLMs
The reality is, beneath the chatbot that inspired so much hype sit capabilities in natural-language search and retrieval, text classification, and text extraction that have been overlooked and underused. Our industry has long tried to get insight out of large volumes of unstructured data in documents for knowledge management. What we have now is the opportunity to give structure to that information and make it easily accessible to lawyers.
This means that, before long, lawyers should be able to access any firm information available to them by asking a simple question, without navigating siloed dashboards or sending request emails. The same could apply to structured data sources such as billing systems, CRMs, HR systems, and intranets.
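To make the idea concrete, here is a minimal, hypothetical sketch of the pattern described above: extraction turns unstructured contract text into structured records, and a plain question then becomes a simple lookup over those records. The field names, regex patterns, and sample documents are all illustrative assumptions, not a real Litera API.

```python
import re

def extract_metadata(doc_id: str, text: str) -> dict:
    """Pull two illustrative fields (governing law, effective date)
    out of raw contract text. Patterns are toy examples."""
    law = re.search(r"governed by the laws of ([A-Z][\w ]+?)[.,]", text)
    date = re.search(r"effective as of ([\w ,]+\d{4})", text, re.I)
    return {
        "doc_id": doc_id,
        "governing_law": law.group(1) if law else None,
        "effective_date": date.group(1) if date else None,
    }

def answer(records: list[dict], governing_law: str) -> list[str]:
    """Answer 'Which agreements are governed by X law?' against the
    structured records rather than the documents themselves."""
    return [r["doc_id"] for r in records if r["governing_law"] == governing_law]

docs = {
    "MSA-001": "This Agreement, effective as of January 5, 2023, "
               "shall be governed by the laws of New York.",
    "NDA-007": "Effective as of March 1, 2022, this NDA is "
               "governed by the laws of Delaware.",
}
records = [extract_metadata(d, t) for d, t in docs.items()]
print(answer(records, "New York"))  # -> ['MSA-001']
```

In practice the extraction step would be done by an LLM or a trained classifier rather than regexes, but the payoff is the same: once the information is structured, the lawyer's question no longer requires reading every document.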
You Get What You Give
The more context you provide an LLM about the input source, the better the model can navigate it, and the better its output will be. In the legal context, that means metadata, both about a document's contents and the circumstances in which it was written. It may seem like common sense, but if the model is looking at five documents and you have told it what they are and what they contain, it will do a better job than if you point it at an undifferentiated pile of 5,000 documents. When querying an LLM today, prompt quality makes a big difference. Refining your approach to prompts is called prompt engineering, and it has fascinating implications for IP law in its own right.
We have seen a 15-20% improvement in quality on diligence tasks when a prompter moves from a simple one-line prompt to one that includes examples and context. Again, the more context you can provide a model, the better it does.
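The contrast between a one-line prompt and one enriched with context and examples can be sketched as follows. This is a hypothetical few-shot prompt structure for illustration only; the question, document type, excerpt, and example clauses are invented, and none of this reflects an actual Litera prompt template.

```python
# A bare, one-line prompt: no document context, no examples.
BARE_PROMPT = "Is there a change-of-control clause in this contract?"

def build_contextual_prompt(question: str, doc_type: str,
                            excerpt: str,
                            examples: list[tuple[str, str]]) -> str:
    """Wrap the same question with document context and worked
    examples (a simple few-shot prompt)."""
    shots = "\n".join(f"Clause: {c}\nAnswer: {a}" for c, a in examples)
    return (
        f"You are reviewing a {doc_type}.\n"
        f"Here are examples of the analysis we expect:\n{shots}\n\n"
        f"Document excerpt:\n{excerpt}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_contextual_prompt(
    question=BARE_PROMPT,
    doc_type="share purchase agreement",
    excerpt="Section 9.2: This Agreement may not be assigned upon a "
            "change of control without prior written consent...",
    examples=[("Assignment requires consent of both parties.",
               "Anti-assignment clause; consent required.")],
)
print(prompt)
```

The second prompt carries the same question, but the model now knows what kind of document it is reading, what the relevant text says, and what a good answer looks like, which is exactly the kind of added context behind the quality improvement described above.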
Because these models are not capable of critical thinking, ChatGPT and the other LLMs being experimented with in the legal space may not always live up to expectations. The Stanford HELM scores show poor results on legal reasoning tasks. If a lawyer expects a Google-like interaction, where they can toss in a question and get the answer, but actually needs to spend a good chunk of time structuring a 100-word request, there's an opportunity for Litera to step in and improve that experience.
Looking Forward
All of this adds up to exciting changes in the future of legal work. We're looking to introduce some of these generative capabilities, including summarization and Q&A, in ways that remove the reliance on the end user to feed the model sufficient contextual information. Whether it's matter metadata, narratives, and profile information in Foundation, or pre-structuring the document with Kira's engine, the more we can passively contextualize the input, the safer usage will be.
To bolster your understanding of these hot-topic solutions and to get a better sense of what’s coming next in our extensive portfolio, check out our on-demand webinar, Evaluating Artificial Intelligence for Legal Work: From the Basics to LLMs, now.