The European Union has adopted the first ever comprehensive legislation on artificial intelligence - the AI Act (Regulation (EU) 2024/1689). This Regulation establishes harmonised rules for the development, marketing and use of AI across Member States, with the aim of promoting innovation while ensuring the protection of health, safety and fundamental rights.
The AI Act takes a risk-based approach, differentiating AI systems according to the level of risk they pose. It establishes four categories: minimal risk, limited risk (subject to transparency obligations), high-risk AI systems (subject to strict requirements) and unacceptable risk, i.e. AI practices that are prohibited outright.
This article focuses on these prohibited AI practices and their practical implications, including the European Commission's latest guidance on the interpretation of the prohibitions.
Author of the article: JUDr. Jakub Dohnal, Ph.D., LL.M., ARROWS advokátní kancelář (office@arws.cz, +420 245 007 740)
The AI Act was adopted in June 2024, and the prohibitions on these practices took effect as early as 2 February 2025. This means that as of that date, an outright ban on the use of certain unsafe AI applications applies regardless of industry. Violations of these prohibitions can be sanctioned with very high fines - up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher.
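To make the "whichever is higher" rule concrete, here is a trivial illustrative calculation (the turnover figure is hypothetical; the actual fine imposed depends on the circumstances of the breach and may be lower than the cap):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited AI practices: the higher of the two caps."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical example: a group with €2 billion worldwide turnover faces a cap of €140 million.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```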
It is therefore essential that companies and organisations familiarise themselves with which AI practices are unacceptable in the EU, how to avoid breaching them and how to ensure compliance with the new rules. In the following, we therefore provide an overview of the prohibited AI practices under the AI Act, a more detailed analysis of selected prohibitions (in particular, manipulative techniques and emotion recognition) with links to specific articles of the AI Act, a closer look at the European Commission's guidance on these prohibitions, and recommendations for corporate practice. Finally, we outline how the firm can help clients in the area of AI regulation.
Article 5 of the AI Act prohibits eight types of AI practices that are considered to pose an "unacceptable risk" - they are contrary to EU values and fundamental rights and therefore completely unacceptable. In brief, these prohibited AI practices are:
(a) AI using subliminal, manipulative or deceptive techniques that materially distort a person's behaviour and cause, or are likely to cause, significant harm;
(b) AI exploiting the vulnerabilities of persons due to their age, disability or social or economic situation;
(c) social scoring, i.e. evaluating or classifying people on the basis of their social behaviour or personal characteristics, leading to unjustified or disproportionate detrimental treatment;
(d) assessing or predicting the risk that a person will commit a criminal offence based solely on profiling or personality traits;
(e) untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases;
(f) emotion recognition in the workplace and in educational institutions (except for medical or safety reasons);
(g) biometric categorisation to infer sensitive characteristics such as race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation;
(h) real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
As can be seen, prohibited AI practices primarily include manipulation, surveillance and profiling that undermine human autonomy or may lead to discrimination and harm. Below, we focus in more detail on two areas that the AI Act explicitly prohibits and on which the Commission's guidance also focuses - manipulative techniques and emotion recognition.
The AI Act prohibits AI systems that use subliminal, manipulative or deceptive techniques to influence people in a way that seriously impairs their ability to make decisions and causes them harm.
These are practices of manipulation "outside the consciousness" of the person - for example, hidden subliminal messages, covert influencing of emotions, or AI that deliberately deceives people about its nature or intentions. The key point is that such techniques materially distort the behaviour of the person, who then makes decisions they would not otherwise make, and harm occurs.
Various forms of deceptive or covert manipulation using AI fall into this category. It is irrelevant whether the AI developer intended to deceive anyone - the prohibition already applies if the system objectively uses these manipulative techniques and is likely to cause significant harm.
A typical example is an AI chatbot that pretends to be a real person (e.g., a friend or relative) and convinces the victim to take some action to his or her detriment (e.g., extorting money) - such harm-causing deceptive AI is clearly prohibited.
Another example could be AI generating visual or audiovisual content (so-called deepfakes) without warning that it is an artificial work, if it deceives people. Subliminal techniques can take the form of, for example, inserting extremely fast flashing text or images into a video that are not consciously perceptible but are processed subconsciously by the brain and influence the viewer. This would also fall under this prohibited practice.
In its Guidelines on Prohibited AI Practices (see below), the European Commission explains in detail what can be considered manipulative or deceptive techniques.
For example, the Commission interprets the concept of "material distortion of human behaviour" in line with consumer protection law - similar to the "material distortion of economic behaviour" in unfair commercial practices under Directive 2005/29/EC.
An important distinguishing criterion is whether the AI goes beyond "permissible persuasion". Legitimate influence (e.g. conventional advertising) is that which transparently presents information or arguments, perhaps appealing to emotions, but does not resort to covert manipulation.
In contrast, AI practices that exploit a person's cognitive biases, emotional weaknesses, or other vulnerabilities without their knowledge fall under prohibited manipulation.
The Commission's guidance provides several specific scenarios. In addition to the aforementioned chatbot, it mentions, for example, AI-generated deepfake videos: if these are clearly marked as artificial (e.g. with a watermark or warning text), the risk of intentional deception and malicious manipulation is significantly reduced. This is important advice for practice: consistent transparency (marking AI content, informing users that they are communicating with a machine, etc.) can prevent the elements of this prohibition from being fulfilled.
For companies developing AI systems that interact with humans, this means that they should always inform users that they are dealing with AI (which is, incidentally, a separate transparency obligation under the AI Act for all AI systems that interact with humans). Similarly, any hidden "tools of persuasion" that the user cannot consciously detect should be avoided.
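Purely by way of illustration, the following minimal sketch shows one way a visible disclosure could be stamped onto AI-generated images before publication. It uses the Pillow imaging library and hypothetical file names; the AI Act does not prescribe any particular labelling technique, so treat this as one possible transparency measure rather than a statement of what the law requires.

```python
from PIL import Image, ImageDraw, ImageFont

def label_ai_image(input_path: str, output_path: str,
                   notice: str = "AI-generated content") -> None:
    """Stamp a visible disclosure onto an AI-generated image (illustrative only)."""
    img = Image.open(input_path).convert("RGBA")

    # Draw the notice on a transparent overlay so the label stays legible.
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    draw.text((10, img.height - 20), notice, font=font, fill=(255, 255, 255, 220))

    labeled = Image.alpha_composite(img, overlay)
    labeled.convert("RGB").save(output_path)

# Hypothetical usage: label_ai_image("generated.png", "generated_labeled.png")
```

Note that the AI Act's separate transparency rules also envisage machine-readable marking of synthetic content, so a visible label like this would typically be combined with other measures.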
In summary, any AI that secretly manipulates the user to their detriment has no place in the EU. Companies should assess their AI applications - for example, some aggressive marketing algorithms or "dark patterns" using AI could fall into this category if they push consumers into purchasing decisions or actions leading to harm that they would not otherwise take. If an AI project even remotely borders on this definition, it needs to be redesigned or modified to ensure that it does not amount to subliminal manipulation (e.g. by making the user more aware of what is happening, offering explicit choices, obtaining consent, etc.). If in doubt, it is advisable to take a conservative approach or seek legal advice.
The second specific prohibited practice is AI for emotion recognition in employment and education. Article 5(1)(f) of the AI Act prohibits the marketing or use of AI systems to infer an individual's emotions in the workplace and in educational institutions (schools).
The only exceptions are systems designed for health or safety reasons. In practice, this means that, for example, an AI camera monitoring the facial expression of an employee or student to assess their emotional state (whether they are angry, bored, sad, etc.) is not allowed. Therefore, companies must not deploy AI to monitor the mood or emotions of their employees at work, nor must schools monitor the emotional expressions of students using AI.
The motivation for this prohibition is to protect human privacy and dignity - emotional expressions are a highly intimate domain, and automated evaluation of them in an employment or educational context can lead to coercion or discrimination. The European Commission's guidelines stress that this prohibition applies to any AI systems that identify or interpret a person's emotions or intentions in these environments. The term "emotions and intentions" is interpreted broadly and should not be narrowed to a few basic emotions. Significantly, it will typically be AI analysing biometric data (facial expressions, tone of voice, gestures, physiological expressions).
If AI inferred emotional state from text alone (e.g., parsing words in an email), it would not fall under this prohibition, as text data is not biometric. However, any sensing and evaluation of facial expressions, voice intonation, body movements, heart rate, etc. for the purpose of detecting emotions at work or school is prohibited.
Practical implications of this ban: Employers need to rethink the potential deployment of AI monitoring of employees. For example, apps assessing the "smile rate" of salespeople when interacting with customers or AI monitoring employee stress or satisfaction via webcam would fall under the ban and must not be used.
The only exceptions are systems deployed for safety or health - for example, AI monitoring for signs of fatigue or anxiety in drivers or operators of dangerous machinery to prevent injury. Similarly, for example, an AI assistant monitoring a worker's vocal expressions for health reasons (detecting signs of stress leading to a heart attack) could fall within the permitted health exception. However, it is always the case that the purpose must be genuinely safety or medical and not, for example, productivity enhancement or loyalty checks - such purposes are outside the exception.
Companies should be very cautious about any technology that "reads" the emotions of employees or students. For example, if an HR department were considering deploying AI to analyze video interviews of applicants (so-called AI interviews that track a candidate's facial expressions to estimate personality), it should be noted that in the EU such a practice could violate Article 5(1)(f), since it involves assessing the emotions of a candidate, i.e. a prospective employee, in a workplace context.
Similarly, any "emotion AI" tools for assessing team mood, customer satisfaction via employee emotional reactions, etc. are problematic. The alternative may be to focus on objective performance or satisfaction metrics (e.g., anonymous questionnaires) instead of covert emotion sensing, or to use other technologies that fall outside the AI Act definition (as long as they are deterministic software without machine learning - but even then, GDPR privacy protections apply).
From a regulatory perspective, emotion recognition outside of these prohibited contexts (e.g., AI that analyses customer emotions in commerce for marketing) falls into a lower risk category - specifically, emotion recognition systems based on biometric data are classified as high-risk AI systems under Annex III of the AI Act.
These cases would then require strict requirements to be met, but are not absolutely prohibited. However, the most important thing for a regular company to know is that any intention to use AI to monitor the emotional state of its own employees runs into a strict prohibition and cannot be implemented.
On 4 February 2025, shortly after the prohibitions came into force, the European Commission issued Guidelines on Prohibited AI Practices under the AI Act (document C(2025) 884 final). The Commission has approved the Guidelines, although they will only be formally adopted once they have been translated into all official languages. Their purpose is to provide interpretations and practical examples of the various prohibitions under Article 5 of the AI Act and thus help to ensure consistent and effective application across the EU.
The guidelines are not legally binding in themselves (binding interpretation is a matter for the CJEU), but they are of significant indicative value and can be expected to be taken into account by supervisory authorities and courts. They are therefore a valuable guide for companies and organisations as to how the Commission understands the various prohibitions.
What do the Guidelines contain? For each of the prohibited practices (a) to (h), the Guidance discusses the main elements of the definition, gives practical examples, and identifies what is not covered by the prohibition. They also offer recommendations on how to avoid providing or using AI systems in ways that might be prohibited.
For example, for manipulative techniques (point a), the Guidelines explain the concepts of "subliminal" vs. "manipulative" vs. "deceptive", distinguish between what is still permissible persuasion (lawful persuasion) and what is already prohibited manipulation, and recommend measures such as explicit AI content warnings (labeling) that can mitigate the risk of manipulation.
For the ban on emotion recognition, the Guidance clarifies that it does not apply to customer or private use (it is only for workplaces and schools) and includes scenarios of what else is allowed - e.g. systems to detect stress levels for safety reasons. Similarly, for other categories (exploitation of vulnerable persons, social scoring, biometric identification, etc.) the guidance offers a detailed discussion.
For companies, it is important that the Commission's Guidelines also contain concrete examples from practice. We have mentioned many of them above: for example, a manipulative chatbot posing as a friend, AI giving imperceptible suggestions, or marketing AI exploiting minors' addiction to an app - all of these are listed in the Guidelines as typical prohibited practices.
Conversely, they also give examples of systems that do not fall into the prohibited categories in order to draw the line. For example, in the case of manipulation, it is stressed that ordinary advertising or persuasion that does not use hidden deceptive tricks is not affected by the prohibition.
For emotion analysis, on the other hand, the Guidelines note that if an AI system does not meet all the elements of the prohibited practice (e.g. it is not used on employees), it will fall into a lower risk category - typically high-risk AI subject to obligations instead of an absolute prohibition.
Summary of the Guidance's importance: for businesses and organisations working with AI, the Guidance is valuable as it increases legal certainty. The AI Act is relatively general and technology-neutral, so specifically assessing whether a particular AI application falls under the ban can be difficult. Thanks to the Guidance, companies now have the Commission's interpretation, including practical scenarios. Although the Guidelines are not legally binding, it would be risky to ignore them - a deviation from the Commission's interpretation would have to be defended in court if necessary. We therefore recommend that companies familiarize themselves with the Guidelines (they are publicly available) and, where appropriate, consult a lawyer in disputed cases to ensure that their AI activities are consistent with the spirit and letter of the regulation.
As already mentioned, the prohibitions under Article 5 of the AI Act have been in effect since February 2025. Therefore, any company or organization using AI should immediately take an inventory of its systems and applications and check whether any of them happen to constitute a prohibited practice. Unlike the obligations for so-called high-risk AI (which have a longer transition period for implementation), no grace period has been given here - the prohibitions already apply. In each Member State, compliance will be supervised by market surveillance authorities and possibly other regulators (e.g. data protection authorities for biometric AI). They can carry out checks and impose sanctions. As we mentioned, the penalties for breaching the Article 5 prohibitions are not insignificant - up to €35 million or 7% of a company's worldwide annual turnover, whichever is higher.
By comparison, this even exceeds the maximum fines for the most serious breaches of the GDPR. The financial, legal and reputational risk is therefore enormous should a company even unknowingly operate prohibited AI.
1. Identifying AI systems and risk assessment: the first step is to map all AI systems and projects in the company. The AI Act has a fairly broad definition of "AI system" that includes not only machine learning but also logic- and knowledge-based approaches (expert systems), statistical and Bayesian models, etc. - in essence, software that operates with a degree of autonomy and infers how to generate outputs for a specific task. For each such system, it is necessary to evaluate which risk category it falls into.
Should it fall into the category of prohibited practices (see the list above and the detailed description), action must be taken immediately - either to fundamentally modify the system or to stop using it (a simplified triage sketch illustrating this first-pass classification follows this list). In contrast, AI systems falling into the high-risk category will have to undergo a conformity assessment and meet a number of technical and organisational requirements by the time the relevant AI Act obligations apply (for most high-risk systems, from August 2026), but they do not have to be shut down. The prohibited ones, however, are uncompromising - they must not be operated at all.
2. Targeting critical areas: pay particular attention to the areas discussed above - HR and the monitoring of employees or students (emotion recognition), marketing and user-interface personalisation (manipulative "dark patterns"), applications aimed at children or other vulnerable groups, and any processing of biometric data.
3. Modification or termination of risky AI activities: If you identify an AI system that might fall into a prohibited category, you need to act immediately. The options are essentially two: modify the system so that it no longer fulfils the elements of the prohibition (e.g. add transparency, remove the emotion-recognition function, change the context of use), or stop using the system altogether.
4. Training and internal guidelines: put internal processes in place to ensure that new AI projects are assessed for compliance with the AI Act before deployment (similar to how projects are assessed for privacy today - DPIA). Train your developers, product managers and other relevant staff on the prohibited practices. It is important that people on the team know that, for example, the idea of "let's analyze a customer's facial expressions with a webcam and change the offer accordingly" is, at the very least, legally risky and may be outright prohibited. Create codes and guidelines for AI development in the company that include a checklist of forbidden areas - this way developers know early on that certain directions must not be taken.
5. Monitoring future regulatory developments: the AI field is dynamic - the AI Act requires the Commission to regularly assess whether the list of prohibited practices needs to be amended, and empowers it to adjust other details (such as the list of high-risk use cases) in the future. So keep an eye on updates to the legislation and its interpretation. The Commission has already issued guidance on the definition of an AI system in addition to the guidance on prohibited practices. A law firm can help you with ongoing monitoring (see below).
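Purely as an illustration of how points 1 and 4 above could be operationalised internally, the following sketch shows a simple triage helper: every proposed AI use case is described with a few yes/no flags and assigned a provisional risk category before deployment. The flags, category names and decision logic are simplified assumptions for demonstration only - they are not a legal test, and borderline cases still require legal review.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited (Article 5) - must not be deployed"
    HIGH_RISK = "high-risk (Annex III) - conformity assessment required"
    REVIEW = "unclear - escalate to legal review"
    MINIMAL = "minimal/limited risk - standard transparency duties"

@dataclass
class AIUseCase:
    name: str
    uses_subliminal_or_deceptive_techniques: bool = False
    infers_emotions_of_employees_or_students: bool = False
    exploits_vulnerable_groups: bool = False
    social_scoring: bool = False
    uses_biometric_data: bool = False
    interacts_with_consumers: bool = False

def triage(use_case: AIUseCase) -> RiskCategory:
    """Very rough first-pass triage; NOT a substitute for legal assessment."""
    if (use_case.uses_subliminal_or_deceptive_techniques
            or use_case.infers_emotions_of_employees_or_students
            or use_case.exploits_vulnerable_groups
            or use_case.social_scoring):
        return RiskCategory.PROHIBITED
    if use_case.uses_biometric_data:
        # Biometric categorisation / emotion recognition outside work and school
        # is typically high-risk rather than prohibited (see Annex III).
        return RiskCategory.HIGH_RISK
    if use_case.interacts_with_consumers:
        # Consumer-facing personalisation deserves a closer look for
        # manipulative "dark patterns" before sign-off.
        return RiskCategory.REVIEW
    return RiskCategory.MINIMAL

# Hypothetical example: webcam-based mood monitoring of employees -> PROHIBITED.
print(triage(AIUseCase("employee mood monitor",
                       infers_emotions_of_employees_or_students=True)))
```

The value of such a gate is procedural rather than technical: it forces every new AI project to be checked against the forbidden areas before any development or procurement decision is taken.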
To illustrate practical measures, here are some scenarios and recommendations on how to prevent breaches of the prohibitions:
By following the above principles and recommendations, companies can minimise the risk of unknowingly committing prohibited practices. In addition, they will gain the trust of customers and regulators that they are using AI ethically and responsibly - which is, after all, one of the goals of the AI Act.
AI regulation in the EU is new and complex. Companies often don't have in-house experts to cover all the legal requirements (AI Act, GDPR, cybersecurity regulations, etc.) while understanding the technical side of AI systems. In this situation, a law firm specializing in technology law can play a key role as a partner to help clients prevent problems and use AI safely. Specific ways to help include:
AI regulation in the EU (AI Act) is a challenge for companies, but with expert help it is possible to navigate it successfully. The prohibited AI practices defined in the AI Act protect us all from the most dangerous ways AI can be misused - and it is in the interests of both society and your business to comply with them (breaches would lead to severe penalties and loss of reputation). Our law firm can help you set up compliance processes so you can take advantage of innovative AI technologies while fully complying with legal regulations. This will allow you to focus on developing your AI projects knowing that they are legal, ethical and safe.