updated on 03 March 2026
Question
Is the court system a laggard or just being cautious?

AI is reshaping civil litigation in England and Wales, promising efficiency gains and improved client service. From document review to predictive analytics, AI tools are becoming essential for lawyers. Yet the pace of adoption within the court system raises a critical question: is the judiciary falling behind or exercising prudent caution?
Document review and disclosure
Technology-assisted review (TAR) remains the most established use of AI. Since Pyrrho Investments Ltd v MWB Property Ltd [2016], predictive coding has been judicially endorsed as a fair and proportionate method for managing large datasets. Today, platforms such as RelativityOne and Everlaw are standard in complex litigation, enabling rapid identification of relevant documents and automated redaction to comply with GDPR and confidentiality obligations.
Legal research and drafting
AI research tools like LexisNexis Context and Westlaw Edge UK provide judge analytics and efficient queries. Generative AI, such as A&O Shearman’s Harvey, helps with drafting and precedent searches. However, courts have warned against blind reliance: fabricated citations and inaccurate reasoning (‘hallucinations’) remain risks, and lawyers face disciplinary consequences and professional negligence claims for failing to verify AI outputs.
Litigation analytics and case prediction
Platforms like Solomonic offer data-driven insights into judicial behaviour and case results, supporting strategic planning. While these tools improve risk assessments, there are concerns about fairness if only well-funded parties can access advanced analytics.
Client engagement
AI chatbots are now managing client intake and routine questions. HM Courts & Tribunals Service is piloting AI tools to assist litigants in person, aiming to make justice more accessible.
Civil Procedure Rules
The Civil Procedure Rules (CPR) and Practice Direction 57AD allow the use of technology in disclosure, as long as fairness and proportionality are upheld. The October 2025 Guidance for Judicial Office Holders notes that judges may use AI for administrative tasks but must not delegate judicial reasoning to it. There is no formal requirement to disclose AI use during case preparation, although courts may explore this if concerns arise.
Professional duties
The Solicitors Regulation Authority and Bar Standards Board state that core duties – competence, confidentiality and honesty – still apply in an AI context. Lawyers must verify outputs and avoid sharing sensitive information with unauthorised tools. The Law Society of England and Wales advises that the training data for AI models may amplify social biases, leading to unfair or discriminatory results. It recommends providing staff training and developing closed systems so that bias in training data can be limited.
Professional liability
The court’s findings in the Ayinde and Al-Haroun cases suggested that failure to verify AI-generated material can also give rise to professional negligence claims against solicitors, alongside referrals for criminal investigation and contempt proceedings. It remains unclear whether such claims will arise in practice and how liability would be assigned, but it is now well established that legal professionals owe a duty of care to ensure that all legal submissions are accurate, properly sourced and based on genuine authorities.
Broader compliance
AI implementation must follow GDPR and the Equality Act 2010, ensuring transparency and protection against algorithmic bias. The UK government’s AI white paper suggests a principles-based regulatory approach, with specific rules expected as AI adoption increases.
Strengths
AI is making civil litigation in the UK more efficient and cost-effective. It streamlines document review, legal research and client management, allowing lawyers to focus on higher-value tasks. Regulatory bodies and the CPR support the use of AI, providing a framework for its responsible adoption.
Weaknesses
Despite its benefits, AI can produce unreliable outputs, such as errors or fabricated legal citations, which require careful human oversight. There is a risk that lawyers may become too dependent on technology, potentially losing essential legal skills. Access to advanced AI tools may also be limited to larger firms, raising concerns about fairness.
Opportunities
AI offers the potential to improve access to justice by automating routine tasks and supporting litigants in person. The legal sector is seeing rapid innovation, with new specialist tools and platforms emerging. Regulatory developments and a growing emphasis on technological competence are creating new opportunities for lawyers.
Threats
AI systems can introduce bias and lack transparency, which may undermine fairness and accountability in litigation. Data security and confidentiality are ongoing concerns, especially when using unsanctioned tools. The evolving regulatory landscape poses compliance challenges and public confidence in the legal system could be affected if AI is not managed responsibly.
Laggard or cautious?
The UK courts' careful approach reflects caution rather than a lack of progress. Judicial support for TAR and ongoing pilots demonstrate a willingness to innovate, even if slowly. This caution helps maintain fairness and transparency, ensuring that technology supports, rather than replaces, human judgment.
For practitioners, the message is clear: being skilled with technology is now essential. The future of civil litigation will belong to those who responsibly use AI, balancing innovation with ethics and regulatory standards.
Arunima Shrikhande is a trainee solicitor at Shoosmiths.