
Commercial Question

AI and the law

updated on 01 August 2023

Question

How might generative AI (eg, ChatGPT) impact the future of the legal industry?

Answer

For the latter half of the past century, the legal industry (and its clients) has been moving towards greater automation of its processes. Since ChatGPT’s launch in November 2022, concerns regarding the use of AI from a regulatory and ethical perspective have moved rapidly to the forefront. Governments are still catching up with ever-advancing technologies, with many countries still only at the committee or proposal stage of law-making in this area.

What’s clear is that AI will continue to be a challenging area for governments to regulate, likely because there’s no clear line lawmakers should take. The risk of curtailing innovation and technology is as unattractive to governments as abuses of AI, and there’s no clear path to achieving a balance between these two concerns. Answers regarding the ethics of using AI are even less clear.

The impact that AI will have on the future of the legal industry will therefore depend on the regulatory framework at a macro level and on the educational framework surrounding personal use at a micro level.

1. The regulatory framework

It’s interesting to note the different regulatory positions that are being proposed in the UK compared to the EU.

A new UK white paper, titled A pro-innovation approach to AI regulation, was put forward in March 2023. It sets out the government’s view that AI will play a central role in delivering better public services, higher-quality jobs and greater opportunities to learn the skills that’ll power the future. These priorities are seen as key if the UK wishes to reach its goal of becoming a superpower in science and technology by 2030. The UK does seem to be on its way, placing third in the world for AI research and development and being home to a third of Europe’s total AI companies (twice as many as any other European country).

The white paper further argues that if the UK wants to become and remain an AI superpower, it’s critical that the government does what it can to create the right regulatory framework: one that harnesses the benefits of AI while regulating the risks the technology poses. However, what approach should countries take in the face of new technology?

The UK, in contrast to the EU, has initially chosen a collaborative and flexible framework. The UK doesn’t intend to introduce any new legislation to regulate AI technology, whereas the EU proposed dedicated legislation in 2021 in the form of its AI Act. The UK warns against rushing towards legislation too early and placing undue burdens on businesses. The white paper states that the UK seeks to empower regulators, such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority, to develop their own approaches to the risks AI presents in each of their specific areas. It’s proposed that regulators will use existing laws to regulate and manage AI risks. This means it’ll be up to the regulators to manage the risks from AI (eg, physical harm, risks to mental health and risks to national security).

There are arguments on both sides: some consider the UK approach to be too light a touch, whereas others support the government’s pro-innovation stance. Contrast this with the proposed EU AI Act, which splits AI usage into three risk categories: unacceptable risk, high risk and other.

Applications that fall within the first category (ie, unacceptable risk) will be banned. Examples include applications involving cognitive behavioural manipulation of people or of specific vulnerable groups. Another example is social scoring, where AI classifies people based on behaviour, socio-economic status or personal characteristics; critics believe this will lead to discrimination.

Applications that fall within the second category (ie, high risk) are those that negatively affect fundamental rights, and these will be divided into two sub-categories. The first comprises products falling under the EU’s product safety legislation (eg, toys, aviation, cars, medical devices and lifts). The second comprises AI systems falling into eight specific areas that’ll have to be registered in an EU database: for example, biometric identification and categorisation of natural persons, management and operation of critical infrastructure, education, employment and law enforcement. All high-risk AI systems will be assessed before being put on the market and throughout the application’s life cycle. Applications that pose only limited risk must comply with transparency requirements but will otherwise remain largely unregulated. The EU aims to reach agreement on the finalised draft of the legislation by the end of the year.

2. Educational framework regarding the use of AI products

The other factor that’ll affect the legal industry is the educational approach to how individuals are allowed to use AI at a micro level. As stated above, where to draw the line remains unclear.

Taking the example of ChatGPT: although many other AI products were prevalent before its launch (eg, Jasper and Grammarly), there have been far more issues surrounding plagiarism with ChatGPT than with any other AI application. This is perhaps because ChatGPT can produce long-form, detailed written work, the quality of which depends on the quality of the prompt it’s given. It’s free and anybody can set up an account. It can also complete, in mere seconds, pieces of work that might take a human brain more than 10 hours.

Most argue that banning its use completely is unrealistic. AI and applications such as ChatGPT are here to stay. Some schools take a collaborative approach and encourage students to use ChatGPT for guidance only (on form, structure and prompts) rather than depending on it to churn out assignments that the student never edits or reviews.

Within the legal industry, there’s been a rise in self-employed business owners using ChatGPT to draft commercial contracts for their small businesses. This carries all the advantages and disadvantages of a business owner opting to draft such contracts themselves. The quality of the contract will depend on the information the owner inputs, and it’s unlikely they’ll know what to input to get the best contract for their purposes. There’s yet to be a case involving a dispute over a contract drafted with ChatGPT, but many lawyers believe this is on the horizon.

Recently, an American radio host, Mark Walters, began a claim against OpenAI (the creator of ChatGPT) for defamation. Walters alleges that ChatGPT generated misinformation about his personal life, falsely stating that he had been accused of fraud and embezzlement. AI experts have explained that this is a phenomenon known as a “hallucination”: the language model generates plausible-sounding information and presents it as true when it’s not. On ChatGPT’s website there’s a disclaimer that it “may occasionally generate incorrect information” or “produce harmful instructions or biased content”. When ChatGPT was asked what AI hallucination was, it responded with a description, ending on “it is important to note that AI hallucinations are not actual perceptions experienced by the AI system itself… these hallucinations refer to the content generated by the AI system that may resemble human perceptions, but it is entirely generated by the AI’s computational processes”. Confusing, right?

Lawyers aren’t banned from using ChatGPT, but it should be used only in an assistive way, and the lawyer will still need to review and check the end product carefully. Recently, a Manhattan lawyer was ordered to pay $5,000 in court sanctions after using ChatGPT to draft a court filing in a personal injury matter: the filing cited six cases that didn’t exist and included fake quotes in support.

Because the model that sits behind ChatGPT depends on the information available on the Internet, two issues arise. First, that information isn’t always accurate; second, the problem is self-perpetuating. Some experts believe that as individuals use AI more and more to create content online, the data that future systems learn from becomes more AI-generated and less human. Some experts fear that AI will replace human intelligence rather than complement or augment it.

The calculator helped mathematicians and scientists make tremendous discoveries by outsourcing complex calculations rather than relying solely on the human brain, and applications like ChatGPT should probably be relied upon in the same way. The calculator never replaced the need for humans to learn the basic mathematics that sits behind complex calculations; similarly, ChatGPT should never replace an individual (especially a lawyer) learning the skills of research and drafting, although it can be used to assist lawyers in their work. ChatGPT has great time-saving benefits that are appealing to an industry that revolves around chargeable hours; however, it should never be relied upon to replace entirely the skillset required by the legal profession.

Conclusions

Ultimately, there’s a lot of uncertainty in this area. Whether you subscribe to a total ban on the use of AI, or to a heavy-touch or light-touch regulatory framework, is wholly subjective.

Nobody, at this stage, has the answers.

Dianne Worthington is a trainee solicitor at Clyde & Co LLP.