LCN Blogs

Regulating AI

Neide Lemos

23/05/2023

Reading time: two minutes

Disruptive technologies are becoming increasingly prevalent in our lives and in the law. The growing presence of AI in society led the UK government to publish its much-anticipated white paper on AI, ‘AI Regulation: A Pro-Innovation Approach’, on 29 March 2023, outlining its proposals to regulate AI.

Defining AI

Although AI has no single, widely accepted definition, an earlier government white paper, ‘Industrial Strategy: Building a Britain fit for the future’, defines AI as “technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. Further, some AI models, such as ChatGPT, use a form of machine learning trained on extensive data to generate outputs. The AI regulation white paper focuses on two characteristics of AI – adaptivity and autonomy:

  • adaptivity (as an AI system is trained, it can be challenging to explain the intent or logic behind its decisions and outcomes); and
  • autonomy (autonomous decision-making can make it difficult to assign responsibility for outcomes).

The government also explained that it doesn’t plan to adopt new legislation to regulate AI, noting that it’ll “avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI”. Instead, existing regulators will be empowered to address and prepare for the challenges created by these two characteristics of AI.

In contrast, the EU Parliament proposed an Artificial Intelligence Regulation (AI Act) on 6 December 2022 and defined AI as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments”. It’s anticipated that the new EU legislation will be adopted by the end of 2023.

Five principles for regulating AI in the UK

The white paper outlines that existing regulators, including the Competition and Markets Authority, the Financial Conduct Authority and the Equality and Human Rights Commission, will be required to drive innovation in the use of AI and provide clarity for it, guided by the following five principles:

  • safety, security and robustness;
  • transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

What’s next for AI regulation?

Overall, the white paper signals a positive step in the UK’s intention to regulate disruptive technologies while creating an environment that encourages innovation. The government’s next step is to gather responses to the white paper, due by 21 June 2023, and to publish its own response by September 2023, with the ambition of implementing some of its proposals by spring 2024.