REGULATION OF ARTIFICIAL INTELLIGENCE IN THE EU - WHAT LIES AHEAD?

24.4.2024

Artificial Intelligence ("AI") has become a global phenomenon and the Artificial Intelligence Act ("AI ACT") comes as the first comprehensive AI regulation in the world, bringing unified rules and requirements across the European Union, both for providers and operators of AI services. The AI ACT was approved by a unanimous majority of the European Parliament on 13 March.

The text is now being finalised by lawyers and linguists, with formal approval expected in late April or May. What do you have to comply with as a provider or operator of such systems, and what sanctions do you face? That is the subject of this article.

I. Background information

The European Union has taken its definition of AI from the recently updated definition of the OECD (Organisation for Economic Co-operation and Development), whose members include the US, Canada and most EU Member States:

"An AI system is a machine-based system that, based on explicit or implicit goals, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations or decisions that can affect the physical or virtual environment. Different AI systems vary in their level of autonomy and adaptability once deployed."

As mentioned above, the AI ACT affects both providers and operators of AI systems.

  1. The AI system provider is the entity that develops the system and places it on the EU market under its own name; it may be a natural or legal person or a public authority.
  2. The operator of an AI system, on the other hand, is the entity that uses the system under its own authority, except where the system is used for personal rather than professional purposes. Note: a change has been made here, as the original proposal used the term 'user', which was confusing because it referred not to the end user but to the operator.

II. Categorisation of AI systems according to risk

The AI ACT is built on the categorisation of AI systems according to the risk that each system poses or may pose; the requirements for providers and operators are then derived from these categories. The categories are:

  1. Unacceptable-risk AI systems (prohibited)
  2. High-risk AI systems
  3. Limited-risk AI systems (GPAI)
  4. Minimal-risk AI systems

The first category, unacceptable-risk systems, is banned outright. The second is subject to strong regulation, the third "only" to transparency requirements, and the last to no regulation at all.
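
As a rough illustration for the technically minded, this four-tier logic can be sketched in a few lines of Python. The category names and the mapping to consequences below are a simplified rendering of this article's summary, not the AI ACT's own wording:

    from enum import Enum

    class RiskCategory(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # strict requirements
        LIMITED = "limited"            # transparency obligations (GPAI)
        MINIMAL = "minimal"            # no specific obligations

    # Simplified mapping of risk category to regulatory consequence,
    # as summarised in this article (illustrative only).
    CONSEQUENCES = {
        RiskCategory.UNACCEPTABLE: "banned from the EU market",
        RiskCategory.HIGH: "strict obligations for providers and operators",
        RiskCategory.LIMITED: "transparency requirements",
        RiskCategory.MINIMAL: "no obligations under the AI ACT",
    }

    for category in RiskCategory:
        print(f"{category.value}: {CONSEQUENCES[category]}")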

III. Unacceptably risky AI systems

The first category comprises unacceptable-risk AI systems, which are completely prohibited because they interfere unacceptably with the rights guaranteed by the Charter of Fundamental Rights of the European Union (e.g. the right to privacy, freedom of expression and assembly, non-discrimination, and many others). Which AI systems are therefore prohibited?

  1. AI exploiting human vulnerabilities (e.g. age, disability);
  2. biometric categorisation systems using sensitive characteristics (race, sexual orientation, political or religious views, etc.), except:
    • labelling or filtering of lawfully acquired biometric datasets, or categorisation of biometric data by law enforcement authorities;
  3. the use of subliminal, manipulative or deceptive techniques that distort behaviour and circumvent free will;
  4. social credit scoring systems;
  5. untargeted scraping of facial images (e.g. from the internet or CCTV) to create facial recognition databases;
  6. specific applications of predictive policing, i.e. assessing the risk of a person committing a crime based on profiling or assessment of personality traits, except:
    • where this supplements a human assessment based on objective, verifiable facts directly related to criminal activity;
  7. emotion recognition in the workplace or in educational institutions, except for:
    • health or safety reasons;
  8. use of RBI (real-time remote biometric identification) in public places for law enforcement purposes, except for:
    • preventing a serious and imminent threat to life or a foreseeable terrorist attack;
    • searching for missing persons, victims of kidnapping and persons who have been trafficked or sexually exploited;
    • identification of suspects in serious crimes (murder, armed robbery, organised crime, etc.).

These exceptions to the RBI prohibition may only be used where non-use of the system would cause significant harm, and the fundamental rights and freedoms of the persons concerned must be respected at all times. The system must be registered in the EU database (in urgent, duly justified cases deployment may begin before registration, provided the system is registered without undue delay), and law enforcement may only deploy it after completing an assessment of its impact on fundamental rights and freedoms.

Authorisation from a judicial authority or an independent administrative authority is also required before deployment (in urgent cases the system may be deployed without authorisation, provided authorisation is requested within 24 hours; if it is refused, deployment must be terminated immediately and all outputs and data deleted).

IV. High risk AI systems

The next category comprises AI systems that the European Union has assessed as high risk because they pose a high risk to health, safety or the fundamental rights and freedoms of individuals. The EU sets out this category in Annexes II and III.

According to Annex II of the AI ACT, high-risk AI systems are those that are a safety component of a product, or the product itself, covered by the EU harmonisation rules listed in that Annex. Examples include systems used in autonomous or semi-autonomous vehicles, or medical AI systems intended for diagnosis or treatment.

Annex III sets out the areas in which AI systems are considered high risk:

    1. Biometric identification, where it is not prohibited;
    2. Education (systems designed to assess students of educational or training institutions and to assess participants in examinations required for admission to study);
    3. Recruitment and employee management (systems that evaluate candidates during pre-employment interviews and systems that evaluate the performance and conduct of persons in the course of employment);
    4. Administration of justice (AI systems designed to assist the judiciary in examining and interpreting facts and law and in applying the law to a particular set of facts);
    5. Critical infrastructure management (water, gas, electricity, etc.);
    6. Law enforcement, border control, migration and asylum (lie detectors, assessment of irregular migration or health risks, etc.);
    7. Access to services (banking, insurance, social benefits, or systems that assess creditworthiness in this area);
    8. Specific products and/or safety components of specific products.

However, even within this category there are exceptions: AI systems that only perform a narrow procedural task, that improve the result of previously completed human work, or that perform a preparatory task for an assessment relevant to the use cases listed in Annex III.

AI systems that are included in this category are subject to strict regulation and strict requirements are placed on their operators and providers.

Requirements for providers of high-risk AI systems:

  1. Risk management system (implementing a system that continuously, throughout the life cycle of the AI system, assesses the risks that may arise and takes appropriate action);
  2. Data and data governance (checking the data used to train models for shortcomings, errors and risk of bias);
  3. Technical documentation and record-keeping (demonstrating that the AI system has the characteristics required by the AI ACT);
  4. Transparency and provision of information to AI operators (the obligation to provide operators with the transparent information necessary to use the AI system, including its purpose, accuracy, robustness and cybersecurity, its performance and its input data specifications);
  5. Human oversight;
  6. Accuracy, robustness and cybersecurity.

Requirements for operators of high-risk AI systems:

  1. Using input data that is relevant and representative with respect to the purpose of the AI system;
  2. Monitoring the operation of the high-risk AI system to prevent risks and detect incidents, and informing the AI system provider where appropriate;
  3. Retaining the logs automatically generated by the high-risk AI system for at least six months;
  4. Carrying out a data protection impact assessment under the GDPR, where applicable.

V. Limited risk AI systems - GPAI

Separate provisions of the AI ACT address so-called GPAI (General Purpose Artificial Intelligence), i.e. general-purpose artificial intelligence such as the globally known ChatGPT, the image generator Stable Diffusion or the DeepL translator.

  1. By GPAI model, we mean an AI model that is trained on large amounts of data, exhibits considerable generality and is capable of competently performing a wide variety of tasks, and that can be integrated into a variety of downstream systems or applications (GPAI models used purely for research and development are an exception).
  2. A GPAI system is a system based on such a GPAI model that is capable of serving a variety of purposes, whether used directly or integrated into other AI systems.

Requirements for GPAI providers

Different requirements apply to "regular" GPAIs, to GPAIs released under a "free and open licence", to GPAIs posing a "systemic risk", and to GPAI systems that can be used as, or integrated into, high-risk AI systems. The latter are additionally obliged to cooperate with downstream providers to enable their compliance.

The requirements are as follows:

  1. Technical documentation, including the process for training, testing and evaluation of results (this point does not apply to free and open-licence GPAIs);
  2. Information and documentation to be provided to providers who will integrate the GPAI model into their own AI systems, so that these providers are aware of the model's capabilities and limitations (this point does not apply to free and open-licence GPAIs);
  3. Respect for copyright law;
  4. A sufficiently detailed summary of the content used to train the GPAI model.

Additional obligations apply to providers whose model is categorised as GPAI with "systemic risk". This is judged by the computing power used for training: a model is presumed to pose systemic risk where the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs). However, the AI ACT also provides for exceptions, so a model exceeding this threshold will not necessarily be classified as posing "systemic risk".

The regulation imposes additional obligations on these models because high computing power increases the risk of potential misuse and makes such models more difficult to control and audit.
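
As a back-of-the-envelope illustration, the widely used "6 x parameters x training tokens" approximation of training compute can be checked against the 10^25 FLOP threshold in a few lines of Python. Both the heuristic and the example figures below are illustrative assumptions, not anything prescribed by the AI ACT:

    # Rough estimate of cumulative training compute against the AI ACT's
    # 10^25 FLOP threshold for "systemic risk" GPAI models.
    # The 6 * parameters * tokens rule of thumb is a common engineering
    # approximation, not something prescribed by the AI ACT.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(parameters: float, tokens: float) -> float:
        """Approximate total floating-point operations for one training run."""
        return 6 * parameters * tokens

    # Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
    flops = estimated_training_flops(parameters=1e11, tokens=1e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)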

In addition to the four requirements mentioned above, these providers are subject to the following requirements:

  1. Performing and documenting model evaluations, including adversarial testing, to identify systemic risks;
  2. Assessing systemic risks with a view to mitigating them;
  3. Monitoring, documenting and reporting serious incidents and possible corrective measures to the AI Office;
  4. Ensuring an appropriate level of cybersecurity.

VI. Minimum Risk AI Systems

This category includes AI systems used, for example, for spam filtering in email inboxes or in video games. These systems are not regulated by the Regulation in any way.

VII. When will the AI ACT "start to apply"?

The AI ACT will enter into force 20 days after its publication in the Official Journal of the EU, but what matters in practice is when its individual provisions become applicable. This is staggered, counted from entry into force, as follows (a short worked example follows the list):

  1. 6 months - unacceptable-risk AI systems (prohibitions)
  2. 12 months - limited-risk AI systems (GPAI)
  3. 24 months - high-risk AI systems as defined in Annex III
  4. 36 months - high-risk AI systems as per Annex II
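
Because the exact publication date was not yet known at the time of writing, the short Python sketch below uses a hypothetical publication date purely to show how the deadlines cascade; only the 20-day and 6/12/24/36-month intervals come from the AI ACT itself:

    from datetime import date, timedelta

    # Hypothetical publication date in the Official Journal (an assumption,
    # not yet known when this article was written).
    publication = date(2024, 6, 1)
    entry_into_force = publication + timedelta(days=20)

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole calendar months."""
        month_index = d.month - 1 + months
        return d.replace(year=d.year + month_index // 12,
                         month=month_index % 12 + 1)

    milestones = {
        "Prohibitions (unacceptable risk)": 6,
        "Limited risk / GPAI obligations": 12,
        "High-risk systems under Annex III": 24,
        "High-risk systems under Annex II": 36,
    }
    for label, months in milestones.items():
        print(f"{label}: applicable from {add_months(entry_into_force, months)}")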

VIII. Conclusion

In addition to the great opportunities, AI also brings great risks and impacts on people's daily lives. There is therefore no doubt that it needs to be regulated, but is not over-regulation the bane of progress?

If you are unsure about anything in this area or need legal advice, please do not hesitate to contact us and we will be happy to help.
