Author: Jonny is the Senior Vice President, Chief Legal Engineer at SYKE

In part 1 of this blog series for in-house lawyers, I covered an introduction to generative AI, commenting on why it’s all the rage and why it’s relevant to an in-house lawyer.

Once again, I’ll re-affirm my view that generative AI is a useful tool in the kit: it will help solve some problems, but certainly not all of them. It can be incredibly powerful when used for the right use cases, and we’ll explore that a bit below – but it’s just one of several tools available – other technologies, processes and methodologies are available!

In this part, I’m going to talk about real-life use cases, while also addressing the associated challenges and risks.

Practical use cases

So let’s start with some practical use cases for which in-house legal teams are already using generative AI.

Contract Review, Markup and Negotiation

Generative AI is powerful for supporting the review and analysis of redlines in contracts throughout the negotiation process. It’s useful for summarising changes made to drafting, as it can significantly help users understand the meaning and impact of those changes. Gen AI is also a powerful tool for drafting tweaks and amendments into contracts through natural-language commands. For instance, you could input the prompt, “Ensure the confidentiality restrictions apply to both parties”. Some Contract Lifecycle Management (CLM) vendors have great demos of this available on their websites. This capability enables some activities to be pushed back to “business users” instead of the legal function (with certain guardrails), and in other instances it simply speeds up lawyers doing their work.

Legal Research / Knowledge

There are a couple of key use cases here. Firstly, many of the leading legal research platforms are now incorporating generative AI as a means of enhancing search results and finding more relevant case law, statutes and legal opinions. Purists will rightly argue that the basis for this is effective search, but gen AI is enhancing how lawyers interact with search tools and databases in a more natural way – eliminating the need to know the right search syntax.

I’ve helped customers implement gen AI to improve the success and usability of pre-existing Q&A databases they have previously curated. These legal teams have reasonably large Q&A databases answering all types of common questions coming into the legal function about laws, policies, processes etc. Generative AI provides a powerful overlay to match the user’s “question” with the right “answer” to get them to the right place – and we’ve measured significant improvements in matching, and therefore in usability and adoption.
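For the technically curious, the matching idea can be sketched in a few lines of code. This is a minimal illustration only, not any vendor’s implementation: the Q&A entries are invented, and the bag-of-words similarity function is a toy stand-in for the embedding models a real deployment would use.

```python
from collections import Counter
import math

def vectorise(text):
    # Toy stand-in for an embedding model: simple bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(user_question, qa_database):
    # Return the curated entry whose stored question is most similar to the user's.
    q_vec = vectorise(user_question)
    return max(qa_database, key=lambda entry: cosine(q_vec, vectorise(entry["question"])))

# Invented entries, purely for illustration.
qa_database = [
    {"question": "what is our policy on signing NDAs",
     "answer": "Use the standard mutual NDA template."},
    {"question": "who approves contracts over 100k",
     "answer": "General Counsel sign-off is required."},
]

match = best_match("who needs to approve a contract above 100k?", qa_database)
print(match["answer"])
```

The point is simply that the user’s phrasing no longer has to match the stored question word-for-word – a production system swaps the toy similarity for a proper language model, but the matching loop looks much the same.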

Content generation

Whilst document automation tools are typically more useful for generating entire new contracts, generative AI can be very powerful for generating specific clauses and wording needed in a contract. It doesn’t stop at documents, though: it’s really powerful for producing things like meeting notes, summaries from bullet points and even first-draft job specs for recruitment. Microsoft’s Copilot will also shortly enable us to create high-quality presentation decks too.

‘Translation’ of legalese

I find this use case exciting. One thing many lawyers find challenging is writing advice in a way that’s easy to consume for someone who is not a lawyer. We’ve had some impressive success with lawyers using generative AI to produce easy-to-understand advice based on a lawyer’s first draft. Of course, it needs a lot of sanity checking by the lawyer, but we’ve found it to be both a time saver and a real value generator.

Navigating the challenges & risks

As I’ve said time and time again, generative AI isn’t going to solve all our problems, and even where it can, there are challenges and risks.

Quality Assurance: We hear a lot about generative AI ‘hallucinations’, which are essentially ‘confident’ AI responses that are factually incorrect. Let’s not forget, generative AI is only as good as the data it’s trained on. The key therefore lies in using generative AI with the appropriate guardrails and in the right context. Lawyers should absolutely be sanity-checking the output – it should be seen as an ‘assistant’ and not a ‘replacer’. A lot of focus was placed on the Mata v Avianca case in the US, where a lawyer relied on ChatGPT to produce his case submissions, which contained numerous errors and entirely fictional case citations. The key question for me is: would that lawyer have trusted a summer intern to prepare those submissions without checking the output? I doubt it. Generative AI needs to be treated in the same way. Of course, as we continue to train the models on curated data for specific use cases, accuracy will continue to improve.

Data Privacy: Again, we’ve had public examples where employees have used ChatGPT in relation to sensitive commercial matters, inadvertently leading to information leaks. As with any technology, there is a real risk of data sabotage and leakage of private information – no system is impenetrable. That being said, people also need to be aware of which organisations they are sharing data with, and ensure that these technologies have been approved by their IT and Information Security teams. Microsoft has responded to this concern by establishing Azure environments which allow organisations to access OpenAI’s GPT models within their own secure domains. You should not be putting confidential or sensitive information into the publicly available version of ChatGPT, and you should speak to your IT team if in doubt.

Bias and Fairness: It’s a well-known fact that technology, and particularly AI models, are only as good as the data they are trained on – rubbish in, rubbish out. Large language models inherit biases from their training data, which can lead to biased content generation. It’s important that anyone using AI-generated content considers the basis on which it was produced before relying on it. Again, training custom models on carefully curated data will significantly reduce bias moving forwards.

Black Box: In many instances, it is not possible to see the rationale, reasoning or logic behind the content AI has generated. It’s therefore really important that people sanity-check and understand the content before they use it. As I mentioned above, treat it like a useful intern who gets you to the answer more quickly but whose work still needs fact-checking.

While I’m trying to keep this blog relatively punchy and easy to digest, there are obviously a whole host of other use cases, risks and challenges I could talk about. So do reach out to me directly if you’d like to discuss this in more detail.

This wraps up part 2 of this series. In part 3, I’ll be sharing practical guidance for the next steps, and also considering the future of legal careers in light of technologies like generative AI.