Artificial Intelligence ("AI") has become a global phenomenon and the Artificial Intelligence Act ("AI ACT") comes as the first comprehensive AI regulation in the world, bringing unified rules and requirements across the European Union, both for providers and operators of AI services. The AI ACT was approved by a unanimous majority of the European Parliament on 13 March.
The text is now being finalised by lawyers and linguists, with formal approval expected in late April or May. As a provider or operator of AI systems, what must you comply with, and what sanctions do you face? That is the subject of this article.
The European Union has adopted the recently updated definition of AI used by the OECD (Organisation for Economic Co-operation and Development), in whose work the EU participates alongside members such as the US and Canada.
"An AI system is a machine-based system that, based on explicit or implicit goals, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations or decisions that can affect the physical or virtual environment. Different AI systems vary in their level of autonomy and adaptability once deployed."
As mentioned above, the AI Act applies to both providers and operators of AI systems.
The AI Act is based on categorising AI systems according to the risk that each system poses or may pose; the requirements for providers and operators are then derived from these categories. The categories are: unacceptable risk, high risk, limited risk and minimal risk.
The first category, unacceptably risky systems, is banned outright; the second is subject to strong regulation; the third is subject "only" to transparency requirements; and the last is not regulated at all.
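Purely as an illustration of this four-tier logic (the code, including all names and structure, is our hypothetical sketch, not anything the AI Act prescribes), the categories and their regulatory treatment can be modelled as a simple mapping:

```python
from enum import Enum

class RiskCategory(Enum):
    """The AI Act's four risk tiers (illustrative model, not official text)."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical mapping of each tier to its regulatory treatment,
# paraphrasing the article's summary of the AI Act.
REGULATORY_TREATMENT = {
    RiskCategory.UNACCEPTABLE: "prohibited outright",
    RiskCategory.HIGH: "strong regulation: strict requirements on providers and operators",
    RiskCategory.LIMITED: "transparency requirements only",
    RiskCategory.MINIMAL: "no obligations under the AI Act",
}

for category in RiskCategory:
    print(f"{category.value}: {REGULATORY_TREATMENT[category]}")
```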
The first category mentioned above comprises unacceptably risky AI systems, which are prohibited outright because they interfere unacceptably with rights guaranteed by the Charter of Fundamental Rights of the European Union (e.g. the right to privacy, freedom of expression and assembly, and non-discrimination, among many others). Which AI systems are therefore among the prohibited? They include, for example, social scoring systems, systems using manipulative techniques or exploiting people's vulnerabilities, untargeted scraping of facial images, emotion recognition in the workplace and in schools, and real-time remote biometric identification ("RBI") in publicly accessible spaces by law enforcement, which is permitted only under narrow exceptions.
The exceptions allowing use of the above-mentioned RBI apply only where not deploying the system would cause significant harm, and even then the fundamental rights and freedoms of the persons concerned must be respected. The system must be registered in the EU database (in urgent cases deployment may begin before registration, provided the system is registered without delay and the urgency is duly justified), and the police may deploy it only after completing an assessment of its impact on fundamental rights and freedoms.
Permission from a judicial authority or an independent administrative authority is also required before deployment (in urgent cases the system may be deployed without permission if permission is requested within 24 hours; if permission is refused, deployment must be terminated immediately and all outputs and data deleted).
The next category comprises AI systems that the European Union has assessed as high-risk on the grounds that they pose a high risk to health, safety or the fundamental rights and freedoms of individuals. The EU sets out this category in Annexes II and III.
According to Annex II of the AI Act, high-risk AI systems are those that are a safety component of a product, or a product in themselves, covered by the EU harmonisation legislation listed in that Annex. Examples are systems used in autonomous or semi-autonomous vehicles or medical AI systems intended for diagnosis or treatment.
Annex III sets out the areas in which AI systems are high-risk: biometrics, critical infrastructure, education and vocational training, employment and workers' management, access to essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.
However, even within this category there are exceptions, namely AI systems that perform only a narrow procedural task, that improve the result of previously completed human work, or that perform a preparatory task for an assessment relevant to the use cases listed in Annex III.
AI systems falling into this category are subject to strict regulation, with demanding requirements placed on their providers and operators.
Various provisions of the AI Act also address so-called general-purpose artificial intelligence ("GPAI"), such as the globally known ChatGPT, the image generator Stable Diffusion or the DeepL translator.
Requirements for GPAI providers
Different requirements apply to "regular" GPAI models, to GPAI models released under a free and open-source licence, to GPAI models posing "systemic risk", and to GPAI models that can be used as, or integrated into, high-risk AI systems. The latter are additionally obliged to cooperate with downstream providers to enable their compliance.
The requirements are as follows: drawing up and keeping up to date technical documentation of the model; providing information and documentation to downstream providers who integrate the model into their own AI systems; putting in place a policy to comply with EU copyright law; and publishing a sufficiently detailed summary of the content used to train the model.
Additional obligations apply to providers whose model is categorised as a GPAI with "systemic risk". This is judged by computing power: the cumulative amount of compute used for training must exceed 10^25 floating point operations (FLOPs). However, the AI Act also provides for exceptions under which a model exceeding this threshold will not be classified as posing "systemic risk".
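To give a sense of scale, the following sketch compares a hypothetical model against the 10^25 FLOP threshold using the common "compute ≈ 6 × parameters × training tokens" rule of thumb from the scaling-laws literature; both the heuristic and the example figures are illustrative assumptions, not part of the AI Act:

```python
# Minimal sketch: estimating cumulative training compute and comparing it
# against the AI Act's 10**25 FLOP threshold for "systemic risk" GPAI models.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25  # cumulative training compute (AI Act)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training FLOPs with the common 6 * N * D heuristic.

    The heuristic is a rule of thumb from the scaling-laws literature,
    not a method prescribed by the AI Act.
    """
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the Act's threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
params, tokens = 70e9, 15e12
print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
print(f"Presumed systemic risk: {presumed_systemic_risk(params, tokens)}")
```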
The regulation imposes additional obligations on these models because high computing power increases the risk of potential misuse and makes such models more difficult to control and audit.
In addition to the four requirements mentioned above, these providers must: perform model evaluations, including adversarial testing; assess and mitigate possible systemic risks; track, document and report serious incidents to the AI Office; and ensure an adequate level of cybersecurity protection.
The last category, minimal-risk AI systems, includes those used, for example, for spam filtering in email inboxes or in video games; these are not regulated by the AI Act at all.
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU, but what matters is when its individual provisions become applicable, which is staggered as follows: the prohibitions of unacceptable-risk systems apply after 6 months, the rules for GPAI after 12 months, most of the remaining provisions after 24 months, and the obligations for high-risk systems under Annex II after 36 months.
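To illustrate the staggered timetable, the sketch below derives the milestone dates from an assumed publication date (a placeholder, since the Act had not yet been published in the Official Journal at the time of writing):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (no month-end clamping needed for days <= 28)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Placeholder assumption: publication in the Official Journal on 1 June 2024.
publication = date(2024, 6, 1)
entry_into_force = date.fromordinal(publication.toordinal() + 20)

# Months after entry into force at which each group of provisions applies.
milestones = {
    "prohibitions of unacceptable-risk systems": 6,
    "GPAI rules": 12,
    "most remaining provisions": 24,
    "Annex II high-risk obligations": 36,
}
for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")
```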
In addition to great opportunities, AI also brings great risks and impacts on people's daily lives. There is therefore no doubt that it needs to be regulated, but is over-regulation not the bane of progress?