
Commercial Question

Ethical guidelines on artificial intelligence

updated on 28 May 2019

Question

How can AI systems be regulated to ensure ethical compliance?

Answer

On 8 April 2019, the European Commission (EC) published its ‘Ethics Guidelines for Trustworthy AI’ (the Guidelines). The Guidelines were developed by the EU High-Level Expert Group on Artificial Intelligence, a group of 52 experts and stakeholders involved in AI, following its prior consultation on AI.

The Guidelines are intended to act as a starting point for the development and operation of AI systems by stakeholders, in order to support the emergence of trustworthy AI. Against the backdrop of accelerating innovation in, and use of, AI systems, various parties have highlighted public policy concerns and challenges, particularly around: (i) the lack of regulation in the AI space; and (ii) the autonomous nature of AI systems, which can remove human influence from their decision-making. In the legal context, problems may arise where the data provided to, or the decisions made by, an AI system rest on inaccurate or partial data, which could result in flawed algorithmic decision-making (for example, in a due diligence exercise in a corporate M&A deal).

It is worth noting that the Guidelines are not legally binding, but they will nonetheless have significant influence on EU policymakers as they consider how to regulate AI in future legislation. It is also worth highlighting that AI is already subject to a number of existing laws, such as:

  1. the General Data Protection Regulation;
  2. the Regulation on the free flow of non-personal data;
  3. consumer law;
  4. anti-discrimination laws; and
  5. various other sector-specific laws.

The Guidelines acknowledge the existence of such laws and emphasise that stakeholders should comply with all applicable laws while also taking into account the guidance in the Guidelines, as the two are not mutually exclusive.

According to the EC, trustworthy AI should have the following three components:

  1. compliance with relevant laws and regulations;
  2. compliance with ethical values and principles; and
  3. robustness from both a technical and social perspective. 

The first part of the Guidelines then lays down the following four core ethical principles:

  1. respect for human autonomy;
  2. prevention of harm;
  3. fairness; and
  4. explicability.   

These largely reflect the fundamental rights contained in the Charter of Fundamental Rights of the European Union.

The Guidelines then set out seven requirements which should be met from the inception, and throughout the lifecycle, of any AI system in order to achieve “trustworthy AI”. A brief summary of each requirement is provided below.

Human agency and oversight

Human agency and oversight should be present throughout the development and operation of AI systems, to ensure that human autonomy is not impeded in any way and that an AI system’s decision-making can always be overridden by human intervention.

Technical robustness and safety

This requirement seeks to ensure that AI systems are developed with a preventative approach to security and external risks (for example, cyber-attacks) and that they have a fallback mechanism in case their software is compromised by external sources. Overall, the aim is for AI systems to be secure, accurate and reliable.

Privacy and data governance

This requirement provides that any personal data collected by AI systems must be kept secure and private, and must not be misused in ways that harm the public.

Transparency

The data and algorithms through which an AI system is developed, and through which its decisions are made, should be transparent (ie, accessible), so that the system’s decision-making processes can be explained by its developers.

Diversity, non-discrimination and fairness

From the outset, AI systems should be developed with the aim of preventing unfair bias against particular groups of people and of ensuring that the services they provide are equally available to all.

Societal and environmental wellbeing

The environmental impact of AI systems must be taken into account during their development. AI systems should be sustainable and environmentally friendly, and should have a wider positive social impact, including on democratic and political processes.

Accountability

Mechanisms should be put in place to ensure accountability and responsibility for AI systems and the outcomes of their decisions, including the auditability of those systems and the reporting of any negative impacts.

Notably, the four ethical principles and the seven requirements remain quite abstract and do not resolve the tensions that can arise between them: for example, where respect for human autonomy clashes with the prevention of harm, or where what counts as a positive social impact is largely subjective.

The final part of the Guidelines provides a Trustworthy AI Assessment List (the Assessment List), a practical checklist for experts and stakeholders to use when developing AI software in line with the Guidelines. The Assessment List contains practical tips and instructions for its implementation, both in practice and within an organisation. In particular, it focuses on including and engaging employees from all levels of an organisation (not only management), to ensure that a diverse range of views is taken into account during the development and operation of AI systems.

The EC is now rolling out a piloting phase of the Guidelines and the Assessment List, with an emphasis on the practical use of the Assessment List by relevant stakeholders and on their feedback on its usefulness, with a view to updating the Assessment List by the end of 2020.

In the longer term, the plan is to develop a range of international AI guidelines and to cooperate with various stakeholders at an international level: the AI market is dominated by the US and China, and their input will be vital going forward. The most important question to consider in the long run is how to transform ethical guidelines on AI into mandatory legal rules.

Dominika Javornicka is a trainee associate at Weil, Gotshal & Manges.