The European Union has adopted the first ever comprehensive regulation of artificial intelligence - the so-called AI Act (Regulation (EU) 2024/1689). This Regulation establishes harmonised rules for the development, marketing and use of AI across Member States and pursues the objective of promoting innovation while ensuring the protection of health, safety and fundamental rights.
The AI Act takes a risk-based approach, differentiating AI systems according to the level of risk they pose. It sets out categories of minimal risk, limited risk (subject to transparency obligations), high-risk AI systems (subject to strict requirements) and unacceptable risk, i.e. AI practices that are prohibited outright.
It is these prohibited AI practices and their practical implications that the article focuses on, including the European Commission's latest guidance on the interpretation of these prohibitions.
Author of the article: JUDr. Jakub Dohnal, Ph.D., LL.M., ARROWS (office@arws.cz, +420 245 007 740)
The AI Act was enacted in June 2024 and the ban on prohibited practices took effect as early as 2 February 2025. This means that as of that date, there is an outright ban on the use of certain unsafe AI applications, regardless of industry. Violations of these prohibitions can be punished with very high fines - up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher.
It is therefore essential that companies and organisations familiarise themselves with which AI practices are unacceptable in the EU, how to avoid breaching the prohibitions and how to ensure compliance with the new rules. In the following, we provide an overview of the prohibited AI practices under the AI Act and a more detailed analysis of selected prohibitions (in particular manipulative techniques and emotion recognition), with references to the specific articles of the AI Act. We then summarise the European Commission's guidance on these prohibitions and offer recommendations for corporate practice. Finally, we outline how our firm can help clients in the area of AI regulation.
In Article 5, the AI Act prohibits eight types of AI practices that are considered to pose an "unacceptable risk" - i.e. to be contrary to EU values and fundamental rights, and therefore completely impermissible. In brief, these prohibited AI practices are: (a) subliminal, manipulative or deceptive techniques that materially distort behaviour and cause harm; (b) exploitation of the vulnerabilities of persons due to their age, disability or social or economic situation; (c) social scoring leading to unjustified or disproportionate detrimental treatment; (d) assessing the risk that a person will commit a criminal offence based solely on profiling or personality traits; (e) untargeted scraping of facial images from the internet or CCTV to build facial recognition databases; (f) emotion recognition in the workplace and in educational institutions; (g) biometric categorisation to infer sensitive characteristics such as race, political opinions, religious beliefs or sexual orientation; and (h) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
As can be seen, the prohibited AI practices mainly include manipulation, surveillance and profiling that undermine human autonomy or may lead to discrimination and harm. Below, we focus in more detail on two areas that the AI Act explicitly prohibits and that are also the focus of the Commission's guidance - manipulative techniques and emotion recognition.
The AI Act prohibits AI systems that use subliminal, manipulative or deceptive techniques to influence people in a way that seriously impairs their ability to make decisions and causes them harm.
These are practices of manipulation "outside the consciousness" of humans - for example, covert subliminal messages, hidden influencing of emotions, or AI that deliberately deceives people about its nature or intentions. The key point is that such techniques materially distort the behaviour of the person, who then makes decisions that they would not otherwise make, and harm occurs as a result.
Various forms of deceptive or covert manipulation using AI fall into this category. It is irrelevant whether the AI developer intended to deceive someone - the prohibition already applies if the system objectively uses these manipulative techniques and is likely to cause significant harm.
A typical example is an AI chatbot that pretends to be a real person (e.g. a friend or relative) and convinces the victim to take some action to their detriment (e.g. being defrauded of money) - such deceptive AI causing harm is clearly prohibited.
Another example would be AI generating visual or audiovisual content (a so-called deepfake) without any indication that it is artificially generated, where this deceives the viewer. Subliminal techniques can then take the form of, for example, inserting extremely fast flashing text or images into a video that are not consciously perceptible but are processed subconsciously by the brain and influence the viewer. This, too, would fall under this prohibited practice.
In its Guidelines on Prohibited AI Practices (see below), the European Commission explains in detail what can be considered manipulative or deceptive techniques.
For example, the Commission interprets the term "material distortion of human behaviour" in line with consumer protection law - similar to the "material distortion of economic behaviour" in unfair commercial practices under Directive 2005/29/EC.
An important distinguishing criterion is whether the AI goes beyond 'permissible persuasion'. Legitimate persuasion (e.g. conventional advertising) is that which transparently presents information or arguments, even appeals to emotions, but does not resort to covert manipulation.
In contrast, AI practices that exploit a person's cognitive biases, emotional weaknesses or other vulnerabilities without their knowledge fall under prohibited manipulation.
The Commission's guidance provides several specific scenarios. In addition to the aforementioned chatbot, it mentions, for example, AI-generated deepfake videos: if these are clearly labelled as artificial (e.g. with a watermark or warning text), the risk of deliberate deception and harmful manipulation is significantly reduced. This is important advice for practice: consistent transparency (labelling AI content, informing users that they are interacting with a machine, etc.) can prevent the elements of this prohibition from being fulfilled.
For companies developing AI systems that interact with humans, this means that they should always inform users that they are interacting with AI (which is, incidentally, a separate transparency obligation under the AI Act for all AI systems that interact with humans). Similarly, any hidden "tools of persuasion" that the user cannot consciously detect should be avoided.
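For illustration only, the sketch below shows one simple way such transparency measures might be wired into a product. The function and field names (wrap_chatbot_reply, label_generated_media, ai_generated) are hypothetical examples invented here, not requirements or mechanisms taken from the AI Act or the Commission's Guidelines, and actual disclosure wording should be reviewed by a lawyer.

```python
# Illustrative sketch of two transparency measures: telling users they are talking to AI,
# and labelling AI-generated media both visibly and in machine-readable metadata.

AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def wrap_chatbot_reply(reply_text: str, first_message: bool) -> str:
    """Prepend an explicit AI disclosure to the first reply in a conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply_text}"
    return reply_text

def label_generated_media(metadata: dict) -> dict:
    """Attach an 'AI-generated' flag and a visible label to a generated asset's metadata."""
    labelled = dict(metadata)
    labelled["ai_generated"] = True                       # machine-readable flag
    labelled["visible_label"] = "AI-generated content"    # to be rendered on the asset itself
    return labelled
```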
In summary, any AI that covertly manipulates the user to their detriment has no place in the EU. Companies should assess their AI applications - for example, some aggressive marketing algorithms or AI-driven "dark patterns" could fall into this category if they induce consumers to make purchasing decisions, or take other actions leading to harm, that they would not normally take. If an AI project even remotely borders on this definition, it needs to be redesigned or modified to ensure that it does not amount to subliminal manipulation (e.g. by bringing more user awareness into the process, offering genuine choice, obtaining consent, etc.). If in doubt, it is advisable to take a conservative approach or seek legal advice.
The second specific prohibited practice is AI for emotion recognition in employment and education. Article 5(1)(f) of the AI Act prohibits the placing on the market or use of AI systems to infer an individual's emotions in the workplace and in educational institutions (schools).
The only exception is for systems designed for health or safety reasons. In practice, this means that, for example, an AI camera tracking the facial expression of an employee or student to assess their emotional state (whether they are angry, bored, sad, etc.) is not allowed. Therefore, companies must not deploy AI to monitor the mood or emotions of their employees at work, nor must schools monitor the emotional expressions of students using AI.
The motivation behind this prohibition is to protect human privacy and dignity - emotional expressions are a highly intimate domain, and automated assessment of them in an employment or educational context may lead to coercion or discrimination. The European Commission's guidelines stress that this prohibition applies to any AI systems that identify or interpret a person's emotions or intentions in these environments. The term "emotions and intentions" is interpreted broadly and should not be narrowed to a few basic emotions. In practice, this will typically involve AI analysing biometric data (facial expressions, tone of voice, gestures, physiological signals).
If the AI inferred an emotional state from text alone (e.g., analyzing words in an email), it would not fall under this prohibition, as textual data is not biometric. However, any sensing and evaluation of facial expressions, voice intonation, body movements, heart rate, etc. for the purpose of detecting emotions at work or school is prohibited.
Practical implications of this prohibition: Employers need to rethink the potential deployment of AI monitoring of employees. For example, apps assessing the "smile rate" of salespeople when interacting with customers or AI monitoring employee stress or satisfaction via webcam would fall under the ban and must not be used.
The only exceptions are systems deployed for safety or health reasons - for example, AI monitoring signs of fatigue or anxiety in drivers or operators of dangerous machinery to prevent accidents. Similarly, an AI assistant monitoring a worker's vocal expressions for health reasons (e.g. detecting signs of stress that could lead to a heart attack) could fall within the permitted health exception. In all cases, however, the purpose must genuinely be safety or health, not, for example, productivity enhancement or loyalty checks - such purposes fall outside the exception.
Companies should be very cautious about any technology that "reads" the emotions of employees or students. For example, if an HR department were considering deploying AI to analyse video interviews of candidates (so-called AI interviews that monitor a candidate's facial expressions to estimate personality traits), it should be noted that in the EU such a practice could breach Article 5(1)(f) if it involved assessing the emotions of a candidate, i.e. a prospective employee.
Similarly, any "emotion AI" tools for assessing the mood of teams, customer satisfaction through emotional reactions of employees, etc. are problematic. The alternative may be to focus on objective performance or satisfaction metrics (e.g. anonymous questionnaires) instead of covert emotion sensing, or to use other technologies that fall outside the AI Act definition (as long as they are deterministic software without machine learning - but even then, GDPR privacy protections apply).
From a regulatory perspective, emotion recognition outside of these prohibited contexts (e.g. AI that analyses customer emotions in a store for marketing purposes) falls into a lower risk category - specifically, emotion recognition systems based on biometric data are classified as high-risk AI systems under Annex III of the AI Act.
These cases require strict requirements to be met, but are not absolutely prohibited. For an ordinary company, however, the key takeaway is that any plan to use AI to monitor the emotional state of its own employees runs into a strict prohibition and cannot be implemented.
On 4 February 2025, shortly after the prohibitions came into force, the European Commission issued its Guidelines on Prohibited AI Practices under the AI Act (document C(2025) 884 final). The Commission has approved the content of the Guidelines, although they will be formally adopted only once they have been translated into all official EU languages. Their purpose is to provide interpretations and practical examples of the various prohibitions under Article 5 of the AI Act and thus help to ensure their consistent and effective application across the EU.
The guidelines are not legally binding in themselves (binding interpretation is a matter for the CJEU), but they are of significant indicative value and can be expected to be taken into account by supervisory authorities and courts. They are therefore a valuable guide for companies and organisations on how the Commission understands the various prohibitions.
What do the Guidelines contain? For each of the prohibited practices (a) to (h), the Guidelines discuss the main elements of the definition, provide practical examples, and identify what is not covered by the prohibition. They also offer recommendations on how to avoid providing or using AI systems in ways that might be prohibited.
For example, for manipulative techniques under point (a), the Guidelines explain the concepts of "subliminal", "manipulative" and "deceptive" techniques, distinguish between what is still lawful persuasion and what is prohibited manipulation, and recommend measures, such as explicit labelling of AI-generated content, that can mitigate the risk of manipulation.
On the prohibition of emotion recognition, the Guidelines clarify that it does not apply to customer-facing or private use (it covers only workplaces and schools) and include scenarios of what is still allowed - e.g. systems detecting stress levels for safety reasons. Similarly, for the other categories (exploitation of vulnerable persons, social scoring, biometric identification, etc.), the Guidelines offer a detailed discussion.
For companies, it is essential that the Commission's Guidelines also include concrete examples from practice. We have mentioned many of them above: for example, a manipulative chatbot posing as a friend, AI giving imperceptible suggestions, or marketing AI exploiting minors' addiction to an app - all of these are listed in the Guidelines as typical prohibited practices.
Conversely, they also provide examples of systems that do not fall into the prohibited categories to draw the line. For example, in the case of manipulation, it is stressed that ordinary advertising or persuasion that does not use hidden deceptive tricks is not affected by the prohibition.
For emotional analysis, in turn, it is mentioned that if AI does not meet all the characteristics of a prohibited practice (e.g. not used on employees), it will fall into a lower risk category - typically high-risk AI with obligations instead of an absolute prohibition.
Summary of the importance of the Guidelines: for businesses and organisations working with AI, the Guidelines are valuable because they increase legal certainty. The AI Act is relatively general and technology-neutral, so assessing whether a particular AI application falls under a prohibition can be difficult. Thanks to the Guidelines, companies now have the Commission's interpretation, including practical scenarios. Although the Guidelines are not legally binding, it would be risky to ignore them - a deviation from the Commission's interpretation would have to be defended in court if necessary. We therefore recommend that companies familiarise themselves with the Guidelines (they are publicly available) and, in disputed cases, consult a lawyer to ensure that their AI activities comply with the spirit and the letter of the regulation.
As already mentioned, the prohibitions on the AI practices listed in Article 5 have been effective since February 2025. Any company or organisation using AI should therefore immediately take stock of its systems and applications and check whether any of them constitute a prohibited practice under Article 5 of the AI Act. Unlike the obligations for so-called high-risk AI (which have a longer transition period for implementation), no grace period has been given here - the prohibitions are already in place and enforceable. In each Member State, compliance will be supervised by market surveillance authorities and possibly other regulators (e.g. data protection authorities for biometric AI), which can carry out checks and impose sanctions.
As we mentioned, the penalties are not negligible - up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher, in the case of a breach of the Article 5 prohibitions. By comparison, this is the same level of sanctions as for the most serious breaches of the GDPR. The financial, legal and reputational risk is therefore enormous should a company operate prohibited AI even unknowingly.
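For illustration, the "whichever is higher" cap can be expressed as a simple calculation; the turnover figure in the example below is invented for demonstration and is not taken from the AI Act or from this article.

```python
# Illustrative sketch: the fine cap for breaches of the Article 5 prohibitions is the higher
# of EUR 35 million or 7% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion in worldwide turnover faces a cap of EUR 70 million,
# because 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```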
1. Identify AI systems and assess the risk: the first step is to map all AI systems and projects in the company. The AI Act has a fairly broad definition of an "AI system" that covers not only machine learning but also logic- and knowledge-based systems, Bayesian models, etc. - in essence, software that operates with a degree of autonomy and infers from its inputs how to generate outputs such as predictions, recommendations or decisions. For each such system, it is necessary to evaluate which risk category it falls into.
Should a system fall into the category of prohibited practices (see the list and detailed description above), action must be taken immediately - either fundamentally modify the system or stop using it. By contrast, AI systems falling into the high-risk category will need to undergo a conformity assessment and meet a number of technical and organisational requirements by the time the relevant obligations apply (for most systems, August 2026), but they do not need to be shut down. The prohibited ones, however, are uncompromising - they must not be operated at all.
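As a purely illustrative sketch of how such an internal inventory and first-pass triage might be structured, the snippet below mirrors the AI Act's risk tiers; the record fields and the simplistic triage logic are assumptions made for the example and are no substitute for a legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Article 5) - must not be operated"
    HIGH_RISK = "high-risk (Annex III) - conformity assessment required"
    TRANSPARENCY = "limited risk - transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    uses_biometric_data: bool
    infers_emotions_of_employees_or_students: bool
    uses_covert_manipulative_techniques: bool

def triage(system: AISystemRecord) -> RiskTier:
    """Crude first-pass triage; any PROHIBITED or HIGH_RISK hit should go to legal review.
    This sketch does not attempt to detect transparency-only cases."""
    if (system.infers_emotions_of_employees_or_students
            or system.uses_covert_manipulative_techniques):
        return RiskTier.PROHIBITED
    if system.uses_biometric_data:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL
```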
2. Targeting critical areas:
3. Modification or termination of risky AI activities: If you identify an AI system that might fall into a prohibited category, you need to act immediately. Options:
4. Training and internal guidelines: put internal processes in place to ensure that new AI projects are assessed for compliance with the AI Act before deployment (similar to how projects are assessed today for privacy impact - DPIA). Train your developers, product managers and other relevant staff on the prohibited practices. It is important that people on the team know that, for example, the idea "let's analyse a customer's facial expression with a webcam and change the offer accordingly" is legally highly risky - at best a high-risk system under the AI Act, at worst a prohibited manipulative practice. Create internal codes and guidelines for AI development that include a checklist of prohibited areas (see the sketch below) - this way developers know early on that certain directions must not be taken.
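As an illustration of what such a checklist might look like when built into an internal review gate, the sketch below paraphrases the Article 5 themes in plain language; the questions and the gating function are hypothetical and simplified, not an exhaustive legal test.

```python
# Hypothetical pre-deployment gate: every new AI project answers these questions before
# approval. A single "yes" routes the project to legal review instead of deployment.
PROHIBITED_PRACTICE_CHECKLIST = [
    "Does the system use subliminal, manipulative or deceptive techniques?",
    "Does it exploit vulnerabilities linked to age, disability or social/economic situation?",
    "Does it score or classify people in a way resembling social scoring?",
    "Does it infer the emotions of employees or students from biometric data?",
    "Does it scrape facial images untargeted, or perform remote biometric identification?",
]

def requires_legal_review(answers: list[bool]) -> bool:
    """True if any checklist question was answered 'yes'."""
    return any(answers)
```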
5. Monitor future regulatory developments: keep an eye on updates to the legislation and to official interpretations. In addition to the guidance on prohibited practices, the Commission is already issuing further guidance and methodologies, for example on the definition of an AI system. A law firm can help you with ongoing monitoring (see below).
To illustrate practical measures, here are some scenarios and recommendations on how to prevent breaches of prohibitions:
By following the above principles and recommendations, companies can minimise the risk of unwittingly committing prohibited practices. In addition, they will gain the trust of customers and regulators that they are using AI ethically and responsibly - which is, after all, one of the goals of the AI Act.
AI regulation in the EU is new and complex. Companies often don't have in-house experts to cover all the legal requirements (AI Act, GDPR, cybersecurity regulations, etc.) while understanding the technical side of AI systems. In this situation, a law firm specializing in technology law can play a key role as a partner to help clients prevent problems and use AI safely. Specific ways to help include:
In conclusion, the regulation of AI in the EU (the AI Act) is a challenge for companies, but with expert help it can be navigated successfully. The prohibited AI practices defined in the AI Act protect us all from the most dangerous misuses of AI - and it is in the interests of both society and your business to comply with them (violations would lead to severe penalties and loss of reputation).
Our law firm can help you set up compliance processes so you can take advantage of innovative AI technologies while fully complying with legal regulations. This will allow you to focus on developing your AI projects knowing that they are legal, ethical and safe.