Can algorithms be biased? Government investigates risks of AI in the justice system

The government is investigating concerns about the quality and accountability of decisions made by algorithms, including the risk that automated decision-making software used by police forces could emulate human biases such as racial prejudice.

In addition to crime and justice, the inquiry will look at financial services, local government and recruitment. As the Law Gazette reports, it follows separate research by the Law Society into the potential effects of algorithms on the justice system, which is due to be published on 4 June.

Concerns about the reliability and transparency of algorithms, particularly in the criminal justice system, are outlined in this Law Gazette report.

The Centre for Data Ethics and Innovation will carry out the investigation on the government’s behalf. Its chair, Roger Taylor, said: “We want to work with organisations so they can maximise the benefits of data-driven technology and use it to ensure the decisions they make are fair. As a first step we will be exploring the potential for bias in key sectors where the decisions made by algorithms can have a big impact on people’s lives.”

Go to the Commercial Question section to learn more about artificial intelligence and the law.
