Can algorithms be biased? Government investigates risks of AI in the justice system

Updated on 20 March 2019

The government is investigating concerns about the quality and accountability of decisions made by algorithms, including the risk that automated decision-making software used by police forces could replicate human biases such as racial prejudice.

In addition to crime and justice, the new inquiry will examine financial services, local government and recruitment. As the Law Gazette reports, it follows separate research by the Law Society into the potential effects of algorithms on the justice system, which is due to be published on 4 June.

Concerns about the reliability and transparency of algorithms, particularly in the criminal justice system, are outlined in this Law Gazette report.

The Centre for Data Ethics and Innovation will carry out the investigation on the government’s behalf. Its chair, Roger Taylor, said: “We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.”

Go to the Commercial Question section to learn more about artificial intelligence and the law.