updated on 07 October 2019
Does artificial intelligence pose an existential threat to the upper echelons of the legal profession?
Barely a day passes without a proclamation in the media about disruptive developments in artificial intelligence (AI) that will soon render hordes of white-collar professionals redundant, as machines are developed that can perform tasks faster and more accurately than their human counterparts, at a fraction of the cost. The legal industry is no different. AI is already being used by law firms to transform the most repetitive and mundane tasks, traditionally the preserve of trainees and junior lawyers. This trend was even portrayed in the hit TV show The Good Wife, with litigation funders punching information into a computer to calculate a dispute’s likelihood of success and its amenability to funding.
Current advances in AI have largely focused on low-hanging fruit such as document review, but recent developments are starting to affect the most senior level of the profession – the judiciary. Jurisdictions such as the US and China are already experimenting with AI to facilitate judicial decision-making. Does the future herald a fully automated justice system in which the courts are not just assisted, but presided over, by dispassionate AI judges? Or are human empathy and compassion fundamental elements of justice, intrinsic to society’s trust in its legal system?
This article explores the feasibility of AI judges, first by looking at current uses of, and recent developments in, AI in the field of disputes, before examining the technological, legal and ethical hurdles presented by automated justice.
The current state of AI in the world of disputes
In March 2019, the Lord Chief Justice, Lord Burnett, set up an advisory group to ensure that the judiciary was fully up to speed with developments in AI, as to date “insufficient attention has been paid by judges to its impact on the work of the courts”. The group is chaired by Professor Richard Susskind, a legal AI specialist, and includes Lord Neuberger, the former president of the Supreme Court, along with other prominent individuals in law and AI.
While English judges are on the lookout for developments in AI that could make their lives easier, AI systems are already having an impact in the sphere of commercial litigation. For example, the computer programme ‘Case Crunch’ specialises in predicting legal decisions: it identifies, extracts and analyses court documents to assess the likely outcome of other cases. Impressively, when tested against a group of over 100 commercial lawyers, the software predicted the outcomes of PPI mis-selling claims with an accuracy of 86%, while the lawyers averaged 62%. Similar programmes have predicted the outcomes of past European Court of Human Rights and US Supreme Court cases with an accuracy of 79% and 70% respectively.
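Systems like Case Crunch do not publish their internals, but the core idea the article describes – learning outcome statistics from features extracted from past case documents, then predicting the most likely result for a similar case – can be sketched in a few lines of Python. Everything below (the feature tuples, the outcomes and the `train`/`predict` helpers) is invented purely for illustration and is not Case Crunch’s actual method.

```python
from collections import Counter

def train(past_cases):
    """Tally historical outcomes for each combination of case features."""
    stats = {}
    for features, outcome in past_cases:
        stats.setdefault(features, Counter())[outcome] += 1
    return stats

def predict(stats, features):
    """Return the most common historical outcome for these features."""
    counts = stats.get(features)
    if counts is None:
        return "unknown"  # no comparable past cases
    return counts.most_common(1)[0][0]

# Toy data: (claim type, bank responded within the deadline) -> outcome
past = [
    (("ppi", True), "dismissed"),
    (("ppi", False), "upheld"),
    (("ppi", False), "upheld"),
    (("ppi", True), "dismissed"),
]

model = train(past)
print(predict(model, ("ppi", False)))  # similar past claims were upheld
```

Real systems replace this frequency count with statistical models over thousands of extracted features, but the principle is the same: the ‘decision’ is a pattern drawn from past data, not legal reasoning.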
In other jurisdictions, AI is being used to facilitate judicial decision-making, potentially paving the way for AI judges. China already has more than 100 robots in courts across the country, reviewing documents and identifying problems with cases. A Beijing Internet Court statement recently announced the introduction of a virtual judge to “help the court’s judges complete repetitive basic work, including litigation reception, thus enabling professional practitioners to focus better on their trial work”. The virtual judge does not make any final decisions, but according to President of the Beijing Internet Court, Zhang Wen, “we are heading toward a future when we can see an AI judge sitting at the podium”.
Meanwhile, the Estonian Ministry of Justice has announced plans to expand its use of AI by prototyping a system for small money claims (those under €7,000). The system will analyse legal documents and other relevant information before reaching a decision, which the parties may then appeal to a human judge. The project is expected to start this year and is likely to focus on contractual disputes.
The United States has also introduced predictive AI models. In at least 10 states (Arizona, Colorado, Delaware, Florida, Kentucky, Oklahoma, Virginia, Washington, Wisconsin and Alaska), algorithms guide judges when deciding bail, sentencing and parole by estimating a defendant’s risk of re-offending and the likelihood that they will appear at court dates.
Are AI judges just around the corner?
While some of the rhetoric surrounding AI gives the impression that the arrival of AI judges is imminent, considerable technological, legal and ethical hurdles must first be overcome before fully autonomous AI judges become a reality.
The computer programmes described above rely on comprehensive datasets and raw processing power to reach ‘decisions’ on cases, but the process in no way resembles human judicial decision-making, where the reasons behind a verdict can be comprehensively explained and subsequently scrutinised. This is particularly problematic in a precedent-based legal system such as that of England and Wales, where judges are required to interpret and apply case law, often in relation to highly fact-specific situations. While a machine may be able to give the right answer, how it arrived at that answer, and whether it did so for the right reasons, is just as important.
Further, just because machines are automated does not mean that they are free from bias, or that they make better decisions than humans. A report by ProPublica on the use of computerised risk assessment to determine the likelihood of defendants re-offending in Florida found that African American defendants were 77% more likely than their white counterparts to be ascribed a higher risk of committing future violent crime. The report suggests that bias in the underlying historical dataset prejudiced the programme’s decisions against African American defendants.
Fairness is essential to society’s notion of justice and its trust in the legal system. In England, an individual’s right to trial by a jury of their peers when facing imprisonment is a fundamental right, enshrined in Magna Carta since 1215. Is society prepared to hand over important decisions, such as criminal sentencing or the custody of a child, to an AI judge? It is difficult to see such a radical shift happening any time soon, and it may be that such decisions will always require some degree of human oversight to make exceptions and show compassion where necessary.
For now, and for the foreseeable future, AI appears to be confined to making the lives of judges easier. Technology may increasingly be used to narrow the parameters of a dispute, allowing judges to focus on the pertinent issues, but it is difficult to envisage a fully automated justice system with no element of human oversight. The current crop of judges is certainly not under threat.
Will Obree and Harriet Baldwin are, respectively, associate and trainee solicitors at White & Case.