
Commercial Question

From innovation to regulation: understanding the EU AI Act

updated on 09 September 2025

Question

What’s the EU AI Act? 

Answer

AI is disrupting sectors from life sciences to payment services, and automotive to fashion. Governments and legislators are attempting to deal with a range of legal, commercial and ethical issues raised by advances in AI, while encouraging innovation and uptake. While some have decided no AI-specific legislation is required, the European Union has taken a different approach. 

The EU AI Act is the world’s first comprehensive legislation on AI. It came into force on 1 August 2024, establishing a legal framework for the development, deployment, marketing and use of AI in EU member states. It aims to encourage innovation while managing the associated risks, with implementation and enforcement overseen by a newly created European AI Office. The AI Act has a phased implementation: currently, the rules on prohibited AI systems and general-purpose AI (GPAI) models apply, and the bulk of the remaining legislation will apply from 2 August 2026, subject to exceptions.

This article explains the objectives and key features of the AI Act, outlines practical steps businesses can consider for compliance, and explores some sector-specific implications.

What does the AI Act do?

The AI Act has several objectives. It facilitates ethical and sustainable innovation by increasing the transparency of AI systems, harmonising standards and codes of conduct, and safeguarding the rights of individuals across the EU. It seeks to create a more level playing field for those involved with AI; for example, by allowing small and medium-sized enterprises to test their AI systems without immediate scrutiny, while imposing stringent requirements on high-risk AI systems and prohibiting AI systems that pose unacceptable risk.

A key feature of the AI Act is its risk-based framework for classifying AI systems. ‘AI systems’ are defined in the AI Act as machine-based systems with varying levels of autonomy that infer how to generate outputs from the inputs they receive. Those outputs could be predictions, recommendations or decisions. The systems covered include (but aren’t limited to) those used in biometric identification, critical infrastructure, human resources, law enforcement and advertising. The AI Act classifies AI systems into four risk categories:

Unacceptable risk

These are prohibited entirely. Examples include government-run social scoring, manipulative or exploitative AI, predictive policing based solely on profiling, real-time facial recognition in public spaces by law enforcement (subject to narrow exceptions), and emotion recognition in the workplace or schools.

High risk

These are subject to extensive safety, transparency and accountability requirements, and would include CV-scanning tools that rank job applicants.

Limited risk

These are subject to lighter transparency obligations; for example, AI providers and users must inform individuals when they’re interacting with AI systems, such as when viewing deepfakes or communicating with chatbots.

Minimal risk

These are systems that pose no risk of manipulation or deception and have no direct contact with humans.

The AI Act also addresses GPAI models, which can perform a wide range of general tasks. GPAI models are classified as either posing systemic risk (subject to extensive regulation) or not (subject to fewer obligations).

Who’s affected by the AI Act?

Although the AI Act is EU-focused, it applies to entities outside the EU if their AI systems are placed on the EU market or their outputs are used within the EU. Within the EU, it applies to developers, providers, deployers, importers, distributors and operators of AI. Member states are affected as they’re required to establish national monitoring authorities, which the European AI Office will support with guidance. The AI Act also aims to protect the rights and interests of consumers across the EU, as mentioned above.

Compliance obligations for high-risk AI systems

Providers, deployers, importers and distributors of high-risk AI systems have a range of obligations, with the most onerous applying to providers (those that develop an AI system, or have one developed, and place it on the market or put it into service under their own name or trademark). These systems must meet minimum safety, transparency and ethical standards. Requirements include using high-quality data, maintaining clear documentation and ensuring human oversight, so that the systems remain accountable and transparent. The systems must also undergo conformity assessments and certification. Different obligations apply at each point in the supply chain.

Sanctions for non-compliance

Organisations that breach the AI Act face significant sanctions, including fines of up to:

  • €35 million, or 7% of global annual turnover (whichever is higher), for the use of prohibited AI systems; or
  • €15 million, or 3% of global annual turnover (whichever is higher), for most other violations.

Commercial application 

Most businesses using or placing AI on the market in the EU will need to implement procedures to comply with the AI Act and related digital regulations. Key steps could include:

  • identifying where AI is being used across an organisation (eg, operations, HR, cybersecurity);
  • reviewing existing policies to assess which ones are already compliant and where updates are needed;
  • evaluating the risks of each AI system and implementing risk-mitigation strategies; and
  • budgeting for the cost and time to achieve compliance.

Article 4 of the AI Act requires providers and deployers of AI systems to ensure an appropriate level of AI literacy among their staff and anyone else dealing with the operation and use of AI systems on their behalf. The level of AI literacy required will depend on the role of the person. This means that many businesses will be required to implement AI literacy programmes.

Sector-specific implications

Life sciences

Medical devices with clinical applications to individuals are already subject to rigorous EU regulation. Where those devices are, or incorporate, AI systems, the AI Act introduces an additional layer of regulation.

Key considerations will include:

  • classifying the risk level of the device;
  • considering how the AI Act overlaps with existing frameworks and updating technical documentation to include AI components; and
  • managing the costs, procedures or delays involved in obtaining certain certifications before a product can be brought to market.

Media and content

One of the most pressing issues in the development and use of GPAI is the protection of intellectual property rights. In particular, rights holders have accused AI developers of using copyrighted materials without permission to train their AI, and there are open questions around ownership of AI-generated content and whether outputs infringe existing intellectual property rights. The AI Act’s new transparency obligations are likely to make it easier for rights holders to identify when their work has been used for AI training, and then either to enter into licence agreements or to enforce their rights through the courts.

Media rights holders using AI in their own businesses are also likely to face transparency requirements as providers and deployers of AI systems. 

Fashion

The fashion industry was, as you might expect, an early adopter of AI, which is used in customer support services, personalised recommendations and design processes, among other things. Most fashion brands won’t be using high-risk AI but they’ll still have obligations under the AI Act. These range from being transparent when using AI-generated content to complying with rules on biometric categorisation for virtual try-ons.

Payments

AI enhances payment services through faster processing, behavioural personalisation and fraud detection. Like the medical devices sector, the payments market is already subject to a wide range of legal obligations. The AI Act will add to the existing framework, so the key for stakeholders will be to understand the extent to which the AI Act applies, the role they play in the supply chain (and consequently which AI Act provisions apply to them), and how to minimise the administrative burden of complying with another set of legislative requirements.

Human resources

The AI Act prohibits AI-based emotion recognition systems in the workplace unless they’re installed for medical or safety reasons; this prohibition has applied since 2 February 2025, so complying with it is essential. More commonly, however, AI systems used in HR are likely to be classified as high risk. High-risk systems in this context include:

  • recruitment and candidate assessment tools;
  • systems influencing decisions on promotion or termination;
  • tools that assign tasks based on behaviour or personality; and
  • performance monitoring tools.

While the most onerous AI Act obligations in this context apply to providers of the AI systems, deployers (users) must also ensure they comply with their own requirements by:

  • using appropriate technical and organisational measures to ensure the AI system is used in accordance with instructions;
  • developing an AI literacy programme;
  • ensuring human oversight; and
  • ensuring high-quality data, which prevents bias, is used to train the AI.

Deployers are also required to tell people where they use high-risk AI to make or help with decisions involving individuals.

A watershed moment?

The AI Act marks a defining moment in the governance of AI globally. It introduces a structured, risk-based framework that seeks to balance innovation with the protection of fundamental rights and public safety. For businesses, the AI Act demands a proactive and strategic approach: understanding how AI is used within their operations, mapping compliance requirements and staying ahead of legal developments are all essential. While this may involve investment in the short term, early preparation will mitigate legal risk and help facilitate successful long-term innovation.

Global impact and international response

While the AI Act is an EU regulation, its influence is likely to extend far beyond Europe in the same way that the GDPR became a global benchmark for data privacy. Some non-EU companies that operate internationally are already adapting their AI strategies to align with the AI Act’s requirements. Canada, Brazil and the UK have begun developing or updating their own AI governance frameworks, many of which reflect similar principles of risk-based classification, transparency and accountability.

It remains unclear whether the UK will follow the EU’s approach, but it’s moved from a position of not planning to legislate on AI to planning some form of legislation. Whether this will focus on AI safety, copyright or innovation remains to be seen.

The US has taken a more fragmented approach, with executive orders and state-level initiatives, rather than a unified federal law. However, the Trump administration has focused very much on enabling AI innovation in the US under its July 2025 AI Action Plan, and revoked former President Biden's AI Safety Executive Order.

Major tech firms are taking proactive steps to comply with the AI Act. Compliance will be essential to avoid financial penalties and maintain access to the European market, meaning that a company’s ability to navigate the AI Act may well serve as a competitive advantage.

For aspiring lawyers, this is a fast-evolving area that sits at the intersection of technology, innovation, regulation and commercial law, making it a key area to watch as the legal landscape evolves.

Grace Taylor is a trainee solicitor at Taylor Wessing.