Fake case law citations: a growing risk for litigants in person

Updated on 15 July 2025

Reading time: two minutes

Concern over reliance on generative AI has escalated after fake case-law citations appeared in an Intellectual Property Office appeal. The litigant in person (LiP) admitted to using ChatGPT to help draft his case, and Phillip Johnson (the appointed person hearing the appeal) highlighted various mistakes in the LiP’s argument in his ruling.

The case is the latest in a growing number involving made-up legal citations, demonstrating the risks of relying on AI in legal preparation.

Johnson said: “It is important that all litigants before the registrar... and during any appeal to the appointed person are made aware of the risks of using artificial intelligence.” He acknowledged that many LiPs know “little” about trademark law, explaining that “a very clear warning needs to be given to make even the most nervous litigant aware of the risks they are taking”.

Earlier in the year, barrister William Rees-Mogg took to LinkedIn to describe a similar experience, in which another LiP produced a skeleton argument containing significant inaccuracies. He said: “The errors in that skeleton would be immediately obvious to any lawyer (or anyone with the knowledge base to find a copy of the relevant case and read it for themselves) but would be hard to spot for someone without training. This is arguably much more dangerous than generative AI producing an obviously wrong answer which can be spotted by a lay person immediately.”

In a recent article, European law firm Fieldfisher suggested that the court’s decision in Ayinde v The London Borough of Haringey [2025] and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC (heard together) served as a warning to the legal profession: “Generative AI cannot be relied upon without proper verification and the full weight of professional regulation applies to any material a lawyer endorses, regardless of whether it was drafted by a person or a machine.”

The Bar Council’s previous AI guidance reiterated that while the use of AI was not “inherently improper”, responsibility for ensuring that correct and accurate information was used lay with the individual making the legal submission.

Find out whether you should be using AI in law firm applications – and how not to do it.
