Regulatory breakthrough: How the Artificial Intelligence Act may change the game in Europe

17.5.2023

These days, it seems you can hardly open the fridge without it saying something about artificial intelligence. And whatever appears on the market is, sooner or later, regulated. Since artificial intelligence really is everywhere, it is worth keeping an eye on the legislative efforts under way in Europe. That is what this article is about.

The Artificial Intelligence Act will bring harmonised rules for the development and use of artificial intelligence (AI) systems in Europe. However, as the first comprehensive rules regulating AI, they may also affect the rest of the world (the so-called Brussels effect[1]).

The draft regulation prepared by the European Commission has been available since April 2021. On Thursday, May 11, 2023, the Committee on the Internal Market and Consumer Protection adopted an amendment to the Commission's proposal[2], primarily responding to the rapid technological development of language models such as ChatGPT. Members of the European Parliament will vote on the final version of the regulation during the plenary session scheduled for June 12, 2023[3].

If this wording of the Regulation is approved, the final stage of the legislative process, known as trilogues, will follow, bringing the European Commission and the Council of the EU into the negotiations.

The Regulation aims to promote innovation and the deployment of trustworthy and secure AI systems that humans can control and explain. According to the official materials, this is intended to ensure a high level of protection of health, safety, the rule of law and other fundamental human rights and freedoms against the harmful effects of AI.

The regulation defines an AI system as a system that is designed to operate at various levels of autonomy and is capable of generating outputs (e.g. predictions, recommendations or decisions) that affect the physical or virtual environment[4].

Classification of AI systems according to the level of risk

The Regulation classifies AI systems according to the level of risk they pose: systems with low or minimal risk, high-risk systems, and systems with an unacceptable level of risk.

AI systems with an unacceptable level of risk[5] are strictly prohibited, as they contradict the values of the European Union. For instance, it is impermissible to use AI systems that employ subliminal, manipulative or deceptive techniques, or that exploit the vulnerabilities of a particular group of individuals, in order to significantly influence those persons' behaviour and decision-making. The Regulation also prohibits the use of AI systems for social credit allocation (social scoring), emotion recognition or biometric identification.

High-risk AI systems are systems that pose a high risk to the life, health, safety or other fundamental rights of individuals. They may be placed on the European market only if they meet a set of requirements.

Examples of such systems include those designed for use as safety components in the management and operation of road transport, water, gas, heat, and electricity supply. Additionally, they encompass systems utilized in education or employment, particularly in the selection, assessment, and determination of rights and obligations.

For high-risk systems, the Regulation sets out, for example, the following requirements:

1. Establishment and implementation of a risk management system which includes, inter alia, the adoption of appropriate risk management measures
2. Preparation of technical documentation before the system is placed on the market[6] and its continuous updating
3. Designing systems to enable automatic recording of events (logs) during their operation[7], which must be kept for at least 6 months[8]
4. Transparent functioning of the system[9]
5. Provision of the system with instructions for use containing information relevant to users, such as the identity and contact details of the provider and the features, capabilities and performance limitations of the system, including its intended purpose, level of accuracy, reliability and cyber security
6. Human supervision of the use of these systems[10]
7. If the provider is based outside the EU, the provider is required to appoint an authorised representative based in the EU by means of a written mandate
8. Obligation to register in an EU database before placing on the market or putting into service[11]
9. Obligation to report any serious incident or malfunctioning of these systems constituting a breach of obligations under Union law without delay or at the latest within 15 days after the provider becomes aware of the serious incident or malfunctioning[12]

Users of high-risk systems are obliged to use them in accordance with the instructions for use[13] and to monitor their operation. If they believe that an AI system poses a risk, they must notify the provider or distributor and suspend use of the system.

Foundation models

In response to the rapid technological development of large language models such as ChatGPT, a category of so-called foundation models, which are a subcategory of general-purpose models, has been introduced. These are models that are trained on a huge variety of data, are designed to provide general outputs, and can be adapted to a wide range of tasks.

Providers of these models are subject to a set of obligations under the Regulation, such as the obligation to register them in a foundation model database and to produce technical documentation and instructions for use[14].

In addition, providers of foundation models used to generate content such as text, images, audio or video (generative AI systems) are obliged to disclose that the content was generated by AI, and to design and train the system so as to prevent the generation of illegal content.

The most controversial and debated obligation is the duty to disclose a summary of the copyright-protected data used to train the system. This will significantly improve the position of artists defending themselves against infringement of their copyright by an AI system: because they have no insight into how the system was created and trained, it would otherwise be very difficult for them to prove that the system learned from their copyrighted works.

European Artificial Intelligence Office

The Regulation establishes the European Artificial Intelligence Office (AI Office)[15], which will be an independent body with legal personality and headquarters in Brussels. The structure of the Office will consist of a Management Board, a Secretariat and an Advisory Forum.

At national level, Member States will be required to designate a national supervisory authority to oversee the application and implementation of the Regulation.

Measures to promote innovation

Member States will be required to establish a controlled environment for the safe testing of AI systems (the so-called AI regulatory sandbox) by the date of entry into force of this Regulation at the latest.

Remedies

The Regulation newly provides for specific remedies, in particular the right of any individual or group of individuals to lodge a complaint with a national supervisory authority and the right to appeal against a decision of that authority. It also provides for the right to an explanation of a decision taken on the basis of the output of a high-risk system.

Conclusion

In addition to great opportunities, artificial intelligence also brings substantial risks and impacts on people's daily lives. There is no doubt that regulating AI is necessary.

The European regulation of AI would be one of the first of its kind worldwide, potentially inspiring other countries to introduce their own AI rules and stimulating debate at the global level. However, there is a risk that if other countries adopt regulations that impose fewer requirements on AI system providers, those providers may prefer to develop and market their systems there.

The authors of the amendment to the Regulation believe that the proposal balances the protection of fundamental rights with the need to provide legal certainty for businesses and to encourage innovation in Europe[16].

Vendula Marxová contributed to this article.

[1] CHAN, Kelvin. European Union Set to Be Trailblazer in Global Rush to Regulate Artificial Intelligence. time.com [online] 9. 5. 2023 [viewed 13. 5. 2023]. Available from: European Union Attempts First Significant AI Regulation | Time
[2] AI Act: a step closer to the first rules on Artificial Intelligence. Press Releases. 11. 5. 2023 [viewed 13. 5. 2023]. Available from: AI Act: a step closer to the first rules on Artificial Intelligence | News | European Parliament (europa.eu)
[3] Procedure File: 2021/0106(COD) | Legislative Observatory | European Parliament (europa.eu)
[4] Proposal for regulation of the European Parliament and of the Council on harmonised rules of Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts. Article 3(1)
[5] Ibid, chapter 2.
[6] Ibid, article 11.
[7] Ibid, article 12.
[8] Ibid, article 20.
[9] Ibid, article 13.
[10] Ibid, article 14.
[11] Ibid, article 51, 60.
[12] Ibid, article 62.
[13] Ibid, article 29.
[14] Ibid, article 28b.
[15] Ibid, article 56.
[16] AI Act: a step closer to the first rules on Artificial Intelligence. Press Releases. 11. 5. 2023 [viewed 13. 5. 2023]. Available from: AI Act: a step closer to the first rules on Artificial Intelligence | News | European Parliament (europa.eu)