Regulating Artificial Intelligence (AI) in the EU

A practical guide for businesses

28.2.2025

The European Union has adopted the first ever comprehensive regulation of artificial intelligence - the so-called AI Act (Regulation (EU) 2024/1689). This Regulation establishes harmonised rules for the development, marketing and use of AI across Member States and pursues the objective of promoting innovation while ensuring the protection of health, safety and fundamental rights.

The AI Act takes a risk-based approach - differentiating AI systems according to the level of risk they pose. It sets out categories of minimal risk, limited risk (subject to transparency obligations), high-risk AI systems (subject to strict requirements) and unacceptable risk, i.e. AI practices that are prohibited.

This article focuses on these prohibited AI practices and their practical implications, including the European Commission's latest guidance on how the prohibitions are to be interpreted.

Author of the article: JUDr. Jakub Dohnal, Ph.D., LL.M., ARROWS (office@arws.cz, +420 245 007 740)

The AI Act was enacted in June 2024 and the ban on prohibited practices took effect as early as 2 February 2025. This means that as of that date, there is an outright ban on the use of certain unsafe AI applications, regardless of industry. Violations of these prohibitions can be punished with very high fines - up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher.

It is therefore essential that companies and organisations familiarise themselves with which AI practices are unacceptable in the EU, how to avoid breaching the prohibitions and how to ensure compliance with the new rules. In the following, we provide an overview of the prohibited AI practices under the AI Act, a more detailed analysis of selected prohibitions (in particular manipulative techniques and emotion recognition) with references to specific articles of the AI Act, a summary of the European Commission's guidance on these prohibitions, and recommendations for corporate practice. Finally, we outline how a law firm can help clients in the area of AI regulation.

Prohibited AI practices under the AI Act

Article 5 of the AI Act prohibits eight types of AI practices that are considered to pose an "unacceptable risk" - i.e. they are contrary to EU values and fundamental rights and are therefore completely impermissible. These prohibited AI practices are:

  • Subliminal and manipulative techniques (Article 5(1)(a) of the AI Act): AI systems using subliminal, deliberately manipulative or deceptive techniques with the purpose or effect of materially distorting the behaviour of a person (or group of persons) beyond their conscious control, thereby causing them to make a decision they would not otherwise have made and causing them harm. (We discuss this prohibition and examples of manipulative AI techniques in more detail below.)
  • Exploitation of vulnerability (Art. 5(1)(b)): AI systems that exploit the vulnerabilities of individuals or groups due to their age, disability or socio-economic situation in order to substantially influence their behaviour in a way that may cause them harm. Typically, this includes AI practices targeting children, the elderly or otherwise vulnerable persons and attempting to manipulate them into harmful behaviour (e.g. scams targeting trusting elderly people - see below).
  • Social scoring (Art. 5(1)(c)): AI systems for rating or classifying people on the basis of their social behaviour or personal characteristics, leading to so-called social credit and consequent adverse treatment. In particular, the Act targets blanket social evaluation of people along the lines of some social credit systems - that is, detrimental treatment of people outside the original context of the data or disproportionate to their behaviour. Such "citizen scoring" is prohibited in the EU.
  • Predictive policing systems (Art. 5(1)(d)): AI used to assess the risk of a person committing a crime based on profiling or an assessment of their personality traits and characteristics. This is effectively the "predictive policing" or "pre-crime" analysis known from fiction - the AI Act prohibits it as it may lead to discrimination. (The prohibition does not apply to support systems that merely assist a human assessment based on objective, verifiable facts.)
  • Mass biometric data mining (scraping) for facial recognition (Art. 5(1)(e)): AI systems that create or expand facial recognition databases through the untargeted downloading (scraping) of facial images from the internet or camera footage. This provision targets practices such as the bulk collection of photographs from the web or social networks without consent for the purpose of training AI for facial recognition (the Clearview AI case is an example). Such untargeted, covert collection of biometric data is prohibited.
  • Emotion recognition in the workplace and education (Article 5(1)(f)): AI systems designed to infer the emotions of persons in the workplace or in educational institutions (e.g. analysing the expressions of employees or pupils to determine their emotional state), except where the system is used for medical or safety reasons. In other words, AI monitoring of employees' and students' emotions is prohibited in the EU (see below for details).
  • Biometric categorisation according to sensitive traits (Article 5(1)(g)): AI systems that biometrically identify and categorise individuals according to racial or ethnic origin, political opinions, religion, trade union membership, health status, sex life or sexual orientation, etc. These are AI systems that attempt to infer sensitive characteristics of a person from biometric data (e.g. a facial photograph or voice recording). Such practices are prohibited as they constitute an invasion of human dignity and privacy. (The prohibition does not apply to the mere tagging or filtering of lawfully obtained biometric data or to the categorisation of biometric data for law enforcement purposes - so, for example, police forensic fingerprint classification is not affected.)
  • Real-time biometric identification of persons in public (Article 5(1)(h)): use of AI for real-time biometric identification (e.g. facial recognition) in public places by law enforcement authorities. This prohibition is aimed at blanket policing - e.g. facial recognition cameras monitoring all persons passing by would be inadmissible. There are only narrowly defined exceptions when such a system can be used (and only by the police): searching for victims of crime (e.g. kidnapping or human trafficking), preventing imminent terrorist threats or finding a specific suspect of a serious crime. These exceptions are very restrictive and subject to, inter alia, prior authorisation by a judicial authority and an impact assessment on fundamental rights. In practice, therefore, the general use of face recognition technologies by police on the streets is not allowed.

As can be seen, the prohibited AI practices mainly include manipulation, surveillance and profiling that undermine human autonomy or may lead to discrimination and harm. Below, we focus in more detail on two areas that the AI Act explicitly prohibits and that are also the focus of the Commission's guidance - manipulative techniques and emotion recognition.

Manipulative and deceptive AI techniques (Art. 5(1)(a) AI Act)

The AI Act prohibits AI systems that use subliminal, manipulative or deceptive techniques to influence people in a way that seriously impairs their ability to make decisions and causes them harm.

These are practices of manipulation "outside the consciousness" of humans - for example, covert subliminal messages, unnoticed influencing of emotions, or AI that deliberately deceives people about its nature or intentions. The key point is that such techniques materially distort the behaviour of the person, who then makes decisions that they would not normally make, and harm occurs.

Various forms of deceptive or covert manipulation using AI fall into this category. It is irrelevant whether the AI developer intended to deceive someone - the prohibition already applies if the system objectively uses these manipulative techniques and is likely to cause significant harm.

A typical example is an AI chatbot that pretends to be a real person (e.g., a friend or relative) and convinces the victim to take some action to his or her detriment (e.g., extorting money) - such deceptive AI causing harm is clearly prohibited.

Another example would be AI generating visual or audiovisual content (so-called deepfake) without warning that it is an artificial work if it is deceiving a human. Subliminal techniques can then take the form of, for example, inserting extremely fast flashing text or images into a video that are not consciously perceptible, but are subconsciously processed by the brain and affect the viewer. This would also fall under this prohibited practice.

In its Guidelines on Prohibited AI Practices (see below), the European Commission explains in detail what can be considered a manipulative or deceptive technique.

For example, the Commission interprets the term "material distortion of human behaviour" in line with consumer protection law - similar to the "material distortion of economic behaviour" in unfair commercial practices under Directive 2005/29/EC.

An important distinguishing criterion is whether the AI goes beyond 'permissible persuasion'. Legitimate persuasion (e.g. conventional advertising) is that which transparently presents information or arguments, even appeals to emotions, but does not resort to covert manipulation.

In contrast, AI practices that exploit a person's cognitive biases, emotional weaknesses or other vulnerabilities without their knowledge fall under prohibited manipulation.

The Commission's guidance provides several specific scenarios. In addition to the aforementioned chatbot, it mentions, for example, AI-generated deepfake videos: if these are clearly labelled as artificial (e.g. with a watermark or warning text), the risk of deliberate deception and harmful manipulation is significantly reduced. This is important advice for practice: consistent transparency (labelling AI content, informing users that they are interacting with a machine, etc.) can prevent the elements of this prohibition from being fulfilled.

For companies developing AI systems that interact with humans, this means that they should always inform users that they are interacting with AI (which is, incidentally, a separate transparency obligation under the AI Act for all AI systems that interact with humans). Similarly, any hidden "tools of persuasion" that the user cannot consciously detect should be avoided.

In summary, any AI that covertly manipulates the user to their detriment has no place in the EU. Companies should assess their AI applications - for example, some aggressive marketing algorithms or AI-driven "dark patterns" could fall into this category if they induce consumers to make purchasing decisions or take actions leading to harm that they would not normally take. If an AI project even remotely borders on this definition, it needs to be redesigned or modified to ensure that it does not amount to subliminal manipulation (e.g. by bringing more user awareness into the process, inserting genuine choice, ensuring consent, etc.). If in doubt, it is advisable to take a conservative approach or seek legal advice.

Recognition of emotions in the workplace and in education (Article 5(1)(f))

The second specific prohibited practice is AI for emotion recognition in employment and education. Article 5(1)(f) of the AI Act prohibits the marketing or use of AI systems to infer an individual's emotions in the workplace and in educational institutions (schools).

The only exception is for systems designed for health or safety reasons. In practice, this means that, for example, an AI camera tracking the facial expression of an employee or student to assess their emotional state (whether they are angry, bored, sad, etc.) is not allowed. Therefore, companies must not deploy AI to monitor the mood or emotions of their employees at work, nor must schools monitor the emotional expressions of students using AI.

The motivation behind this prohibition is to protect human privacy and dignity - emotional expressions are a highly intimate domain, and automated assessment of them in an employment or educational context may lead to coercion or discrimination. The European Commission's guidelines stress that this prohibition applies to any AI systems that identify or interpret a person's emotions or intentions in these environments. The term "emotions and intentions" is interpreted broadly and should not be narrowed to a few basic emotions. Significantly, this will typically involve AI analysing biometric data (facial expressions, tone of voice, gestures, physiological signals).

If the AI inferred an emotional state from text alone (e.g., analyzing words in an email), it would not fall under this prohibition, as textual data is not biometric. However, any sensing and evaluation of facial expressions, voice intonation, body movements, heart rate, etc. for the purpose of detecting emotions at work or school is prohibited.

Practical implications of this prohibition: Employers need to rethink the potential deployment of AI monitoring of employees. For example, apps assessing the "smile rate" of salespeople when interacting with customers or AI monitoring employee stress or satisfaction via webcam would fall under the ban and must not be used.

The only exceptions are systems deployed for safety or health - for example, AI monitoring signs of fatigue or anxiety in drivers or operators of dangerous machinery to prevent accidents. Similarly, for example, an AI assistant monitoring a worker's vocal expressions for health reasons (detecting signs of stress leading to a heart attack) could fall within the permitted health exception. However, it is always the case that the purpose must be genuinely safety or medical and not, for example, productivity enhancement or loyalty checks - such purposes are outside the exception.

Companies should be very cautious about any technology that "reads" the emotions of employees or students. For example, if an HR department was considering deploying AI to analyse video interviews of candidates (so-called AI interviews that monitor a candidate's facial expressions for personality estimates), it should be noted that in the EU, such a practice could breach Article 5(1)(f) if it involved assessing a candidate's emotions as a potential employee.

Similarly, any "emotion AI" tools for assessing the mood of teams, customer satisfaction through emotional reactions of employees, etc. are problematic. The alternative may be to focus on objective performance or satisfaction metrics (e.g. anonymous questionnaires) instead of covert emotion sensing, or to use other technologies that fall outside the AI Act definition (as long as they are deterministic software without machine learning - but even then, GDPR privacy protections apply).

From a regulatory perspective, emotion recognition outside these prohibited contexts (e.g. AI that analyses customer emotions in a store for marketing purposes) falls into a lower risk category - specifically, emotion recognition systems based on biometric data are classified as high-risk AI systems under Annex III of the AI Act.

These cases would then require strict requirements to be met, but are not absolutely prohibited. However, the most important thing for a regular company to know is that any intention to use AI to monitor the emotional state of its own employees runs into a strict prohibition and cannot be implemented.

European Commission guidance on prohibited AI practices

On 4 February 2025, shortly after the prohibitions came into force, the European Commission issued Guidelines on Prohibited AI Practices under the AI Act (document C(2025) 884 final). The Guidelines have been approved by the Commission, although they will only be formally adopted once they have been translated into all official languages. Their purpose is to provide interpretations and practical examples of the various prohibitions under Article 5 of the AI Act and thus help to ensure consistent and effective application across the EU.

The guidelines are not legally binding in themselves (binding interpretation is a matter for the CJEU), but they are of significant indicative value and can be expected to be taken into account by supervisory authorities and courts. They are therefore a valuable guide for companies and organisations on how the Commission understands the various prohibitions.

What do the Guidelines contain? For each of the prohibited practices (a) to (h), the Guidelines discuss the main elements of the definition, provide practical examples, and identify what is not covered by the prohibition. They also offer recommendations on how to avoid providing or using AI systems in ways that might be prohibited.

For example, for manipulative techniques (a), the Guidelines explain the concepts of "subliminal" vs. "manipulative" vs. "deceptive", distinguish between what is still lawful persuasion and what is prohibited manipulation, and recommend measures such as explicit labeling of AI content that can mitigate the risk of manipulation.

On the prohibition of emotion recognition, the Guidance clarifies that it does not apply to customer or private use (it is only for workplaces and schools) and includes scenarios of what is still allowed - e.g. systems to detect stress levels for safety reasons. Similarly, for other categories (exploitation of vulnerable persons, social scoring, biometric identification, etc.) the guidance offers a detailed discussion.

For companies, it is essential that the Commission's Guidelines also include concrete examples from practice. We have mentioned many of them above: for example, a manipulative chatbot posing as a friend, AI giving imperceptible suggestions, or marketing AI exploiting minors' addiction to an app - all of these are listed in the Guidelines as typical prohibited practices.

Conversely, they also provide examples of systems that do not fall into the prohibited categories to draw the line. For example, in the case of manipulation, it is stressed that ordinary advertising or persuasion that does not use hidden deceptive tricks is not affected by the prohibition.

For emotional analysis, in turn, it is mentioned that if AI does not meet all the characteristics of a prohibited practice (e.g. not used on employees), it will fall into a lower risk category - typically high-risk AI with obligations instead of an absolute prohibition.

Summary of the importance of the Guidance: for businesses and organisations working with AI, the Guidance is valuable as it increases legal certainty. The AI Act is relatively general and technology-neutral, so specifically assessing whether a particular AI application falls under the ban can be difficult. Thanks to the Guidance, companies now have the Commission's interpretation, including practical scenarios. Although the Guidelines are not legally binding, it would be risky to ignore them - a deviation from the Commission's interpretation would have to be defended in court if necessary. We therefore recommend that companies familiarize themselves with the Guidelines (they are publicly available) and, where appropriate, consult with a lawyer in disputed cases to ensure that their AI activities comply with the spirit and letter of the regulation.

Impact on companies and organisations: how to avoid breaches and ensure compliance

Immediate bans and high penalties

As already mentioned, the Article 5 prohibitions have been effective since February 2025. Any company or organisation using AI should therefore immediately take stock of its systems and applications and check whether any of them constitute a prohibited practice under Article 5 of the AI Act. Unlike the obligations for so-called high-risk AI (which have a longer transition period for implementation), no grace period has been given here - the ban is already in place and enforceable now. In each Member State, compliance will be supervised by market surveillance authorities and possibly other regulators (e.g. data protection authorities for biometric AI). They can carry out checks and impose sanctions. As mentioned above, the penalties are not negligible - up to €35 million or 7% of a company's worldwide annual turnover for a breach of the Article 5 prohibitions.

By comparison, this even exceeds the maximum fines for the most serious breaches of the GDPR. The financial, legal and reputational risk is therefore enormous should a company operate prohibited AI even unknowingly.

Practical steps for compliance

1. Identify AI systems and assess the risk: the first step is to map all AI systems and projects in the company. The AI Act has a fairly broad definition of "AI system" that includes not only machine learning but also logic- and knowledge-based (expert) systems, Bayesian models, etc. - in essence, any software that operates with a degree of autonomy and infers outputs for a given task. For each such system, it is necessary to evaluate which risk category it falls into.

Should it fall into the category of prohibited practices (see the list and detailed description above), action should be taken immediately - either fundamentally modify the system or stop using it. In contrast, AI systems falling into the high-risk category will need to undergo a conformity assessment and meet a number of technical and organisational requirements by the time the relevant obligations apply (for most high-risk systems in August 2026), but they do not need to be shut down. The prohibited ones, however, are treated uncompromisingly - they must not be operated at all.
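
As a purely illustrative aid (not something the AI Act prescribes), the mapping exercise from step 1 can be recorded as simple structured data. In the Python sketch below, the risk tiers, field names and triage rules are simplifying assumptions made for illustration; the actual legal classification must always be made case by case.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Simplified risk tiers loosely mirroring the AI Act's approach."""
    PROHIBITED = "prohibited (Article 5) - must not be operated"
    HIGH_RISK = "high-risk (Annex III) - strict requirements apply"
    TRANSPARENCY = "limited risk - transparency obligations"
    MINIMAL = "minimal risk - no specific obligations"


@dataclass
class AISystemRecord:
    """One row of an internal AI inventory (fields are illustrative only)."""
    name: str
    purpose: str
    uses_biometric_data: bool
    used_on_employees_or_students: bool
    infers_emotions: bool


def triage(system: AISystemRecord) -> RiskCategory:
    """Very rough first-pass triage; it only flags candidates for legal review."""
    # Article 5(1)(f): emotion recognition at work or in education is banned
    # (outside the narrow medical/safety exception discussed above).
    if system.infers_emotions and system.used_on_employees_or_students:
        return RiskCategory.PROHIBITED
    # Other biometric or emotion-related uses typically end up in the high-risk tier.
    if system.infers_emotions or system.uses_biometric_data:
        return RiskCategory.HIGH_RISK
    return RiskCategory.MINIMAL


if __name__ == "__main__":
    demo = AISystemRecord(
        name="Webcam mood tracker",
        purpose="Measure employee satisfaction during video calls",
        uses_biometric_data=True,
        used_on_employees_or_students=True,
        infers_emotions=True,
    )
    print(demo.name, "->", triage(demo).value)
```

Even a rough triage like this only highlights systems that need attention; whether a system is actually prohibited or high-risk is a legal question that an inventory cannot answer by itself.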

2. Focus on critical areas - typical questions to ask include:

  • Is our marketing AI tool trying to subliminally manipulate customers? (E.g., is it using neuromarketing tricks without the user's knowledge?)
  • Do our AI-driven advertising or offers specifically target vulnerable groups in a way that may harm them? (E.g. misuse of data about the elderly to put pressure on them, misleading offers of investments or medical products targeting the sick, etc. - this could meet the prohibition under Article 5(1)(b).)
  • Do we use AI monitoring tools to monitor employees or students that could track their emotions or otherwise unduly intrude on their privacy? (See prohibition in Article 5(1)(f) - if so, stop immediately.)
  • Do we plan to deploy AI to assess the trustworthiness of individuals based on big data (e.g., scoring customers beyond creditworthiness)? (Could resemble social scoring and run afoul of Article 5(1)(c).)
  • Do we happen to be using some public database of faces or biometric profiles created by bulk "scraping" from the internet? (If so, this is highly problematic - better to verify the origin of the training data and follow the legal procedures for obtaining consent, otherwise a breach of Art 5(1)(e).)
  • Are we developing AI that tries to guess people's sensitive characteristics (e.g. ethnicity, sexual orientation, health status) from their photos or voice? (Such a project would fall under the prohibition of Article 5(1)(g) - it must be stopped or its terms of reference changed.)
  • In the security field: are we planning to implement facial recognition cameras for general surveillance of public spaces? (For private entities this is already generally prohibited for other reasons - the GDPR - and for the police it is permitted only within the narrow limits of Art. 5(1)(h). Businesses can use biometric identification internally for authentication in a limited range - e.g. employee facial entry into a building - which is neither a "public space" nor a police purpose, so the prohibition does not apply.)

3. Modification or termination of risky AI activities: If you identify an AI system that might fall into a prohibited category, you need to act immediately. Options:

  • Discontinue use - the surest route if the AI is not business critical. E.g. you decide not to use an experimental AI tool rather than risk breaking the law.
  • Technical and procedural risk mitigation measures - sometimes AI can be modified so that it falls outside the prohibited definition. Example: if you have AI that generates very realistic video avatars for customer communications, ensure that these always clearly state that they are a virtual character (the "I am a virtual assistant" notice; a minimal sketch of such a disclosure wrapper follows this list) - this will prevent deception and ensure the AI is not "manipulative". Or if you are developing AI to assess employee emotions for safety reasons, document that purpose and don't use the output for any other purpose (to fit within the exception).
  • Legal assessment of borderline cases - if you're not sure if your AI system crosses the line, consult with a lawyer or compliance expert. For example, the distinction between permitted content personalization and prohibited manipulation can be subtle, and the degree and type of user influence is critical. The Commission's Guidelines can help in this regard - legal counsel can review them with you and evaluate your particular use-case.
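
As a minimal sketch of the disclosure measure mentioned in the second bullet above: the wrapper below simply guarantees that the first message a user sees identifies the assistant as AI. The generate_reply function is a hypothetical placeholder for whatever model or vendor API you actually use.

```python
DISCLOSURE = "I am a virtual AI assistant, not a human."


def generate_reply(user_message: str) -> str:
    """Hypothetical placeholder for the actual model or vendor API call."""
    return f"Thank you for your message: {user_message!r}"


class DisclosedChatbot:
    """Wraps a reply function so the very first response always discloses AI use."""

    def __init__(self) -> None:
        self._disclosed = False

    def reply(self, user_message: str) -> str:
        answer = generate_reply(user_message)
        if not self._disclosed:
            # Prepend the disclosure exactly once, at the start of the conversation.
            self._disclosed = True
            return f"{DISCLOSURE}\n\n{answer}"
        return answer


if __name__ == "__main__":
    bot = DisclosedChatbot()
    print(bot.reply("Can you help me with my order?"))
    print(bot.reply("Great, thank you."))
```

A notice like this addresses only the transparency aspect; it does not by itself legalise a system that otherwise manipulates or deceives users.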

4. Training and internal guidelines: put internal processes in place to ensure that new AI projects are assessed for compliance with the AI Act before deployment (similar to how projects are assessed for privacy today - DPIA). Train your developers, product managers and other relevant staff on prohibited practices. It's important that people on the team know that, for example, the idea of "let's analyze a customer's facial expression with a webcam and change the offer accordingly" is legally unacceptable. Create codes and guidelines for AI development in the company that include a checklist of forbidden areas - this way developers can know early on that certain directions must not be taken.

5. Monitor future regulatory developments: keep an eye on updates to the legislation and to official interpretations. The Commission is already issuing guidance on the definition of AI systems and other methodologies in addition to the guidance on prohibited practices. A law firm can help you with ongoing monitoring (see below).

Practical examples: how to avoid infringements

To illustrate practical measures, here are some scenarios and recommendations on how to prevent breaches of prohibitions:

  • Transparency as deception prevention: if you use chatbots or virtual assistants, always clearly inform users that they are interacting with AI. This will prevent unintentional deception. Similarly, label AI-generated content (text, images, videos) so that the observer knows it is not a real person or real footage (see the labelling sketch after this list). This will minimize the risk of the AI output deceiving or covertly influencing someone.
  • Caution with targeting vulnerable groups: in advertising and offers, avoid AI segmentation that explicitly targets people by vulnerability (age, illness, poverty) with the intention of exploiting them. For example, if AI has identified a group of older people who respond to a particular stimulus, do not exploit this to pressure them (e.g. by aggressively selling a financially burdensome product of dubious value). Such a situation could be assessed as prohibited exploitation of vulnerability.
  • No monitoring of employees' emotions outside of security: Implement an internal ban on the use of any employee emotion recognition technology unless there is an explicit security or health reason to do so. Have exceptions (e.g., monitoring a truck driver's attention with a camera to prevent microsleep) approved in writing by management and the legal department to ensure they are legal and serve safety. On the other hand, for example, categorically reject the idea of evaluating whether employees are "positive enough" based on video calls - it is not legal.
  • Correct use of biometric technology: If you use biometrics (fingerprint, facial recognition) to authenticate people at entry, make sure you are collecting the data legally and purposefully for this function only. Don't create your own large biometric databases by secretly downloading photos from online sources - you are in violation of both the AI Act (Article 5(1)(e)) and the GDPR. If necessary, use officially available datasets or purchase a licence from a provider who has obtained the data lawfully.
  • Beware of "AI scoring" of humans: If you use AI to evaluate customers or users (e.g. scoring for trustworthiness, creditworthiness, riskiness), stick to established and legitimate criteria. Don't invent a broader "social score" that combines a plethora of behavioural data and labels people as "problematic citizens" - exactly what the EU has banned. Any system of scoring people should have a clear specific purpose (e.g. assessing solvency for credit) and not transfer negative consequences to completely different areas of a person's life.
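
As a purely illustrative sketch of the labelling measure in the first bullet above: the snippet below stamps a visible notice onto a generated image. The use of the Pillow library, the banner layout and the output file name are assumptions of this sketch; for video or audio content an equivalent on-screen or spoken notice would be needed.

```python
from PIL import Image, ImageDraw


def label_ai_image(image: Image.Image, notice: str = "AI-generated content") -> Image.Image:
    """Return a copy of the image with a clearly visible AI-content notice."""
    labelled = image.copy()
    draw = ImageDraw.Draw(labelled)
    width, height = labelled.size
    # Draw a dark banner along the bottom edge so the notice stays readable
    # regardless of what the generated picture looks like.
    banner_height = max(20, height // 12)
    draw.rectangle([(0, height - banner_height), (width, height)], fill=(0, 0, 0))
    draw.text((10, height - banner_height + 4), notice, fill=(255, 255, 255))
    return labelled


if __name__ == "__main__":
    # Stand-in for a generated picture; in practice you would load your model's output.
    generated = Image.new("RGB", (640, 360), color=(120, 150, 180))
    label_ai_image(generated).save("labelled_output.png")
```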

By following the above principles and recommendations, companies can minimise the risk of unwittingly committing prohibited practices. In addition, they will gain the trust of customers and regulators that they are using AI ethically and responsibly - which is, after all, one of the goals of the AI Act.

How a law firm can help with AI regulation

AI regulation in the EU is new and complex. Companies often don't have in-house experts to cover all the legal requirements (AI Act, GDPR, cybersecurity regulations, etc.) while understanding the technical side of AI systems. In this situation, a law firm specializing in technology law can play a key role as a partner to help clients prevent problems and use AI safely. Specific ways to help include:

  • Audit and assessment of AI systems: lawyers can conduct a comprehensive audit of your AI tools and projects and identify potential risks. For each system, they will assess whether it falls into a prohibited category or a high-risk system with requirements. Particularly for borderline cases (e.g. content personalisation systems that could be considered manipulative), they will provide an expert opinion backed by both the AI Act and the Commission Guidelines.
  • Suggesting compliance measures: if a potential problem is identified, the law firm will suggest a tailored solution - for example, modifying AI functionality (inserting user alerts, changing the target selection algorithm, etc.), implementing internal policies or other measures to ensure compliance. The goal is often to find a way to achieve the client's business objectives in an alternative way that is no longer illegal.
  • Legal coverage of AI projects from the beginning: ideally, lawyers should be involved early in the development or purchase of an AI solution. Attorneys can help set up contracts with AI vendors to include guarantees regarding compliance with the AI Act. They can also review the license and data - e.g., verify that training data for AI was not obtained in a way that violates the law (scraping personal information, etc.). This will prevent disputes later.
  • Training and preparation of internal guidelines: a law firm can train your teams on the new obligations and help draft internal guidelines for AI development and use - a practical document that summarizes what employees can and cannot do with AI, how to assess risks, and when to contact the legal department. This will translate legal requirements into the company's day-to-day practices.
  • Addressing cross-border and regulatory issues. A law firm can coordinate strategy across EU offices, communicate with regulators in the event of questions or inspections, and represent the client in proceedings should there be any litigation or allegations of infringement (e.g. defending that a given AI system does not actually fall under the ban because... etc.).
  • Updates and continuous monitoring: AI law will continue to evolve - attorneys will monitor AI Act amendments, new guidance, court decisions and keep clients informed of what this means for them. This will give them early warning of, for example, the expansion of the list of high-risk applications or new interpretations of prohibited practices.

In conclusion, the regulation of AI in the EU (the AI Act) is a challenge for companies, but with expert help it can be navigated successfully. The prohibited AI practices defined in the AI Act protect us all from the most dangerous misuses of AI - and it is in your business's own interest to comply with them (violations would lead to severe penalties and loss of reputation).

Our law firm can help you set up compliance processes so you can take advantage of innovative AI technologies while fully complying with legal regulations. This will allow you to focus on developing your AI projects knowing that they are legal, ethical and safe.