John MacKenzie
12/06/2025
Reading time: four minutes
You’ve slogged through university, attended more networking events than you care to admit, run the gauntlet of applications and assessment centres, and finally landed your training contract. You were ready for a bit of photocopying, some polite panic in front of partners and the occasional foray into riveting due diligence. But what’s this? A chatbot might now do it faster – and without coffee breaks!
Generative AI tools like ChatGPT, Copilot and Luminance have barged into the profession faster than you can say “it depends”. For example, A&O Shearman has fully integrated Harvey – a generative AI tool built on OpenAI’s models – across its global offices. First adopted in 2022, Harvey is now used by more than 3,500 lawyers at A&O Shearman for contract analysis, compliance checks and drafting support – tasks that would’ve sat squarely in the trainee’s inbox. According to a 2024 LexisNexis report, AI adoption in UK law firms has more than doubled in recent years, with some firms now allocating specific budget lines to generative AI tools.
But where does that leave actual human trainees? Fret not: AI isn’t replacing us but rather redefining what we do. Some law firms are actively using trainees to help develop and apply AI tools to legal practice. One firm tasked trainees with identifying particularly tedious work that AI could handle, in turn freeing them up for more valuable work. Far from becoming redundant, trainees could take on novel roles as ‘pioneers’ of AI and innovative legal work.
Generative AI may seem smart at a glance – it can review documents at speed and blurt out passages in seconds – but it falls short when it comes to judgement, empathy and nuance. It can’t earnestly weigh commercial risk against relational dynamics. It can’t navigate the delicate politics of a boardroom. It can’t write an advice letter reassuring a frantic client without sounding a bit off.
At present, generative AI outputs all have some sort of ‘tell’ – without extensive rewriting, they often just feel wrong. The irreplaceable parts of legal work are still squarely human: persuasion, empathy, negotiation and ethical discretion.
More objectively, generative AI can ‘hallucinate’: it might return a very confident-sounding answer that is in fact utterly spurious, perhaps citing case law that doesn’t even exist. When your chatbot insists the Consumer Rights Act 2015 was passed in 1982, you’ll still need a trainee to politely tell it to calm down.
I should caveat most of this passage by saying “not yet”. It’s still relatively early days for things like large language models. Developments in retrieval-augmented generation, for example, may help generative AI ‘confabulate’ less often. So, I wouldn’t stress too much about AI coming for your trainee seat.
With that said, these tools are probably going to stick around and, with them, the concept of the ‘competent’ trainee may be changing. You’ll still need strong legal skills, emotional intelligence and commercial awareness, but firms are increasingly looking for trainees to display tech literacy. Some firms now ask candidates to demonstrate AI competence in their applications – not only showing that they can use these tools, but that they understand the risks and limitations, and can use them with authenticity and thoughtfulness. Firms are also investing in training programmes to help their lawyers get to grips with these new technologies and ways of working. The profession is reaching the point where knowing the law may no longer be enough on its own.
So, should law students and trainees be worried about AI? In brief, no; particularly not if we evolve with it. The future of legal training won’t be about outcompeting AI but rather adapting to it, as with any new technology. Trainees and lawyers should know how to manage AI, interpret its outputs, challenge its assumptions and know when not to use it at all.
In my view, it’s unlikely that AI will ever be much more than just another tool, and there are immutably human characteristics AI will never possess. Firms (and clients) don’t want robots – they want people who can work with them. And until AI can bring biscuits to a client meeting, your job is secure.