Target Recognition Systems Based on Machine Learning

10.11.2025

Are you considering using biometric or facial recognition AI for your business in the Czech Republic? This article provides specific answers on how to comply with the new EU AI Act and the GDPR. As a leading Czech law firm in Prague, EU, we specialize in helping foreign clients navigate these complex rules and avoid massive fines. We are your English-speaking lawyers in a safe European harbour.

Need advice on this topic? Contact the ARROWS law firm by email at office@arws.cz or by phone at +420 245 007 740. Your question will be answered by JUDr. Jakub Dohnal, Ph.D., LL.M., an expert on the subject.

What is 'Prohibited' AI? The Red Lines for Business

The EU AI Act takes a risk-based approach, and the first category is "Unacceptable Risk." These AI practices are completely banned in the European Union under Article 5 of the Act. If your business model relies on these, you cannot legally operate in the EU.


Prohibited practices relevant to target recognition include:

  • 'Real-time' remote biometric identification (RBI) in publicly accessible spaces (like scanning a shopping centre crowd to identify people "live").
  • Creating facial recognition databases by "untargeted scraping of facial images from the internet or CCTV footage".
  • Biometric categorization systems that infer sensitive attributes like political opinions, religious beliefs, or sexual orientation from biometric data.
  • Emotion recognition systems used in the workplace or in educational settings.

The ban on "untargeted scraping" shows regulators are moving from simply fining a practice to banning it outright. If your technology borders on these categories, stop and seek immediate legal advice. For immediate assistance, write to us at office@arws.cz.

When is Target Recognition 'High-Risk'?

If your system is not "prohibited," it is most likely "High-Risk." This is the category that will apply to most legitimate businesses. AI systems listed in Annex III of the Act are classified as 'high-risk'.

Understanding this category requires knowing two key legal terms:

1. Biometric Identification (1-to-many): The AI scans a face and compares it against a database to answer, "Who is this person?" While 'real-time' identification is banned, 'post-remote' identification (e.g., reviewing security footage hours later to identify a shoplifter) is permitted but classified as high-risk.

2. Biometric Verification (1-to-1): The AI scans a face and compares it to a single, pre-stored record to answer, "Are you who you claim to be?" This is less invasive but is still high-risk in many contexts.

Practical examples of high-risk (Annex III) use cases include:

  • Employment: Using AI to screen CVs, monitor employee performance, or make recruitment decisions.
  • Access to Services: Using facial recognition to grant employees access to a building or a secure digital service.
  • Critical Infrastructure: Systems used for the safety and operation of transport, water, or electricity.

High-Risk AI System Compliance (EU AI Act)

Risks and penalties

How ARROWS helps

Risk: Using a "Prohibited" AI system (Art. 5) by mistake (e.g., a "security" tool that performs real-time RBI). Penalty: Fines up to €35,000,000 or 7% of global turnover.

Legal Opinion: We analyze your proposed AI tool before deployment to determine its legal risk classification under the AI Act. Get a legal opinion by writing to office@arws.cz.

Risk: Failing to register your high-risk system in the new EU database. Penalty: Fines up to €15,000,000 or 3% of global turnover.

Compliance Documentation: We prepare the necessary legal and technical documentation for compliance and registration. Need help with AI Act paperwork? Contact us at office@arws.cz.

Risk: Your US-based AI provider gives you false compliance assurances. Penalty: As the 'deployer', you are liable for non-compliance in the EU.

Contract Review: We review your third-party vendor contracts to ensure liability is properly allocated and compliance is guaranteed. For a contract review, email us at office@arws.cz.

Risk: Failing to perform a mandatory Conformity Assessment for your high-risk system. Penalty: The system is illegal on the market; fines for non-compliance with high-risk obligations.

Legal Consultations: We guide you through the complex self-assessment or third-party 'conformity assessment' process. Start your legal consultation by writing to office@arws.cz.

How Can You Lawfully Process Biometric Data Under GDPR?

Even if your AI system is compliant with the AI Act, it still processes personal data. This means you must also comply with the GDPR. This "dual compliance" is a major trap for foreign companies.

"Biometric data for the purpose of uniquely identifying a natural person" is classified as "special category data" under Article 9 of the GDPR.

Under Article 9, processing this data is prohibited by default. To make it lawful, you must find a specific, narrow exception. For most businesses, the only available exception is "explicit consent".

However, "explicit consent" is difficult to obtain correctly:

  • The Employment Trap: Consent must be freely given. In an employer-employee relationship, consent is rarely considered "freely given" because of the imbalance of power. An employee who feels "forced" to use a biometric scanner to enter the office has not given valid consent.
  • The Public Space Trap: You cannot get valid, explicit consent from every person walking past a camera in a public space or building lobby.

Other legal bases, such as "substantial public interest," are available, but these are defined by national law. What constitutes 'substantial public interest' in the Czech Republic? A foreign company cannot know this without local expert counsel. ARROWS can provide this specific analysis.

FAQ – Legal tips about AI & Data

1. What is a 'deployer' vs. a 'provider' under the AI Act?
A 'provider' is the company that creates or places the AI system on the EU market (e.g., the software developer). A 'deployer' is the company that uses the AI system under its own authority (e.g., your company using it in your Prague office). As a deployer, you have your own set of legal obligations. For an analysis of your specific obligations, email us at office@arws.cz.

2. Does the EU AI Act replace the GDPR?
No. This is a critical legal trap. The AI Act supplements the GDPR, it does not replace it. You must comply with the AI Act's rules for the system and the GDPR's rules for the personal data it processes. Need help with this dual compliance? Contact our lawyers at office@arws.cz.

3. What does 'extraterritorial reach' mean for my company?
It means that even if your company is based in the US or Asia, the EU AI Act and GDPR apply to you if your AI system's 'output' is used in the EU or if you process the data of people in the EU. Non-EU companies are regularly fined. To understand your company's exposure, write to us at office@arws.cz.

What Do You Need to Do Before Launching?

If you believe your system is not 'prohibited' and you have a potential legal basis under GDPR, your work is just beginning. You have mandatory obligations to fulfil before you can launch.

Your First Step: The Data Protection Impact Assessment (DPIA)

This is your most critical, non-negotiable first step. Under GDPR Article 35, a Data Protection Impact Assessment (DPIA) is mandatory for any processing that is "likely to result in a high risk".

The regulation explicitly names two triggers that require a DPIA:

1. "Processing on a large scale of special categories of data" (i.e., your biometric system).

2. "A systematic monitoring of a publicly accessible area on a large scale".

A DPIA is not just paperwork. It is your formal legal analysis of necessity and proportionality. It forces you to document, in legal terms, the answer to the question: "Do we really need to use facial recognition, or can we achieve the same security goal with a less intrusive method, like a keycard?"

ARROWS lawyers are experts in conducting DPIAs and drafting this legally required documentation to protect your project. Do not hesitate to contact our firm – office@arws.cz.

Your New Obligations as a 'Deployer' Under the AI Act

As the 'deployer' (the user) of a high-risk system, you have your own specific legal obligations under Article 26 of the AI Act.

Your key obligations include:

1. Assign Human Oversight: You must appoint specific, trained individuals who have the competence and authority to oversee the AI's operation and intervene or stop it.

2. Follow Instructions: You must use the system only in accordance with the provider's technical instructions.

3. Check Your Data: If you control the input data, you must ensure it is "relevant and sufficiently representative" for the purpose.

4. Keep Logs: You must keep the system's automatically generated logs for at least six months.

5. Inform People: You must clearly inform natural persons (e.g., your employees) that they are subject to a high-risk AI system.

Certain deployers (such as public bodies or those in finance) must also conduct a Fundamental Rights Impact Assessment (FRIA). The 'human oversight' requirement is a legal mandate. ARROWS provides professional training for management and staff, complete with certificates, to ensure your team is competent to perform this role.

GDPR & Biometric Data Risks

Risks and penalties

How ARROWS helps

Risk: Processing biometric data without a valid legal basis under GDPR Article 9. Penalty: Fines up to €20,000,000 or 4% of global turnover.

Legal Analysis: We determine the correct legal basis for your processing in the Czech Republic. Want to understand your legal options? Email us at office@arws.cz.

Risk: Claiming "consent" from employees, which is later found to be invalid due to power imbalance. Penalty: Processing becomes illegal, fines, and orders to delete all data.

Preparation of Internal Policies: We draft robust internal data protection policies and consent mechanisms that can withstand regulatory scrutiny. Get tailored legal solutions by writing to office@arws.cz.

Risk: Launching a high-risk biometric project without first conducting a mandatory DPIA. Penalty: Significant fines for non-compliance; project can be shut down by regulators.

Drafting Legally Required Documentation: A DPIA is a complex legal document. We conduct the assessment and draft the DPIA for you. Need legal help? Contact us at office@arws.cz.

Risk: Employees are not properly trained for "human oversight" or are not informed of the system's use. Penalty: Violation of AI Act deployer obligations (Art. 26). Fines up to €15,000,000 or 3% of global turnover.

Professional Training: We offer certified training for your managers and staff on their legal responsibilities under the AI Act and GDPR. To schedule a training, email office@arws.cz.

What Are the Financial Consequences of Non-Compliance?

The penalties for failing to comply with this "dual" framework are severe and deliberately dissuasive. You face two separate sets of fines:

  • EU AI Act Fines:
      • Tier 1 (Prohibited AI): Up to €35 Million or 7% of global annual turnover, whichever is higher.
      • Tier 2 (High-Risk Violations): Up to €15 Million or 3% of global annual turnover.
  • GDPR Fines:
      • Higher Tier (for Art. 9 violations): Up to €20 Million or 4% of global annual turnover.

These are not theoretical. The Clearview AI case proves that non-EU companies are high-value targets. Clearview AI was fined €20 Million by Italy's data protection authority for processing biometrics without a legal basis. The company's defense—that it was American and not subject to EU law—failed. Your company's location does not protect you if your service is used in the EU.

A Local Case Study: Why a Prague Law Firm is Essential

Enforcement of these EU laws is handled by national authorities. The legal arguments and regulatory environment in Prague are what will matter most to your business.

A recent case at Prague's Václav Havel Airport proves this. For years, the Czech Police operated a facial recognition system at the airport. Following a complaint, the Czech Data Protection Office (ÚOOÚ) investigated and confirmed that the system violated data protection laws.

The system was shut down in August 2025. A key legal argument for its illegality was the new EU AI Act. Since February 2025, the Act requires judicial approval for such systems—approval the police did not have.

This case demonstrates three critical facts:

1. Enforcement is not abstract; it is local and actively carried out by the Czech ÚOOÚ.

2. The EU AI Act is already being used as a powerful legal tool to shut down biometric systems in the Czech Republic.

3. You need a law firm that understands the specific Czech bodies implementing the AI Act, including the Ministry of Industry and Trade (MIT) and the Office for Technical Standardization (ÚNMZ).

As a law firm based in Prague, European Union, we have deep experience with the Czech legal environment and the specific interpretations of the ÚOOÚ.

Enforcement & Litigation Risks

Risks and penalties

How ARROWS helps

Risk: A complaint is filed against your system, triggering an investigation by the Czech Data Protection Office (ÚOOÚ). Penalty: Forced shutdown of your system (like the Prague Airport), massive fines.

Representation Before Public Authorities: We have on-the-ground experience with Czech regulators. We can manage the investigation and represent your interests. Need representation? Write to office@arws.cz.

Risk: Private litigation from individuals claiming damages for illegal surveillance or data processing. Penalty: Costly litigation, damages, and severe reputational harm.

Representation in Court: Our litigation team is prepared to defend you in court against data-related civil claims. For immediate assistance, write to us at office@arws.cz.

Risk: The AI Act or GDPR rules are implemented differently in the Czech Republic. Penalty: Relying on generic EU advice fails to protect you from specific local laws.

Local Legal Opinions: We are experts in both EU regulations and Czech national law. We provide opinions you can rely on. Get tailored legal solutions by writing to office@arws.cz.

How EU Rules Differ From Your Home Market (US & UK)

For foreign clients, the biggest risk is assuming that your "home" experience with AI governance applies in the EU. This "mindset gap" is a major liability.

The US 'Patchwork' vs. the EU 'Mandate'

The US has no single federal AI law. Instead, it has a "patchwork" of state laws like the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA/CPRA).

The primary US federal tool, the NIST AI Risk Management Framework (AI RMF), is a voluntary set of guidelines. The EU AI Act, by contrast, is a legally binding regulation with severe penalties for non-compliance.

The UK 'Principles' vs. the EU 'Law'

The UK, post-Brexit, has taken a "pro-innovation" and "light-touch" path. This framework relies on five core principles (e.g., Fairness, Accountability, Contestability) that are not yet law. They are implemented by existing regulators like the ICO. It is a "sector-led" approach, not a horizontal, binding law like the EU's.

As an international law firm operating from Prague, European Union, we specialize in bridging these legal-cultural gaps. Our ARROWS International network, built over 10 years, gives us the fluency to understand your home market's perspective and translate your business needs into a compliant Czech/EU legal strategy.

Do not let your home-market experience become a compliance blind spot in the EU. Get legal help by contacting us at office@arws.cz.

How ARROWS Guides You Through AI Compliance

Navigating this landscape is complex, but our process is clear. We provide end-to-end legal support for companies deploying AI in the Czech Republic.

  • Step 1: The AI Legal Audit: We start with a legal consultation to analyze your technology. We issue a legal opinion on your system's risk level (Prohibited, High-Risk, etc.) under the AI Act and GDPR.
  • Step 2: Pre-Launch Documentation: We conduct and draft the legally required documentation, including the GDPR DPIA and, if needed, the AI Act FRIA, to get your project approved.
  • Step 3: Policy and Contracts: We prepare internal company policies for 'human oversight' and data governance. We review your contracts with AI 'providers' to protect you from liability.
  • Step 4: Training and Defense: We provide professional training for your management on their compliance duties. And if the ÚOOÚ or another body investigates, we provide full representation before public authorities and in court.

Your Next Step

The rules for AI and biometrics in the EU are not a future problem; they are a present-day reality. A compliant strategy is essential for protecting your investment and avoiding multi-million euro fines.

Do not risk your project on guesswork. Contact our expert team of English-speaking lawyers in Prague today for a confidential consultation.

Our lawyers are ready to assist you – email us at office@arws.cz.

FAQ – Most common legal questions about AI and Biometrics in the Czech Republic

1. Who is responsible for the AI Act in the Czech Republic?
Several bodies are responsible. The Czech Data Protection Office (ÚOOÚ) handles data protection aspects. For the AI Act itself, implementation is led by the Ministry of Industry and Trade (MIT), with the Office for Technical Standardization (ÚNMZ) acting as the 'notifying authority' to approve 'conformity assessment bodies'. ARROWS can navigate these specific Czech authorities for you. For help, contact office@arws.cz.

2. Can I use facial recognition for employee attendance?
This is extremely high-risk. Under the AI Act, it's a 'high-risk' use in employment. Under GDPR, you must prove "explicit consent," which is very difficult in an employment context. A regulator would likely rule that a less invasive method (like a keycard) is sufficient and proportionate. Before you invest, get a legal opinion from us at office@arws.cz.

3. What is a 'Conformity Assessment' (CA)?
A Conformity Assessment is a mandatory process for 'high-risk' AI systems. It's the process of verifying and documenting that your system complies with all AI Act requirements (e.g., risk management, data governance, human oversight). Some CAs can be done internally; others require a third-party 'notified body'. We can guide you through this complex assessment. Write to office@arws.cz.

4. What is the difference between 'biometric identification' and 'biometric verification'?
This legal distinction is critical. Identification (1-to-many) answers "Who is this?" by comparing a person to a large database. Verification (1-to-1) answers "Are you who you say you are?" by comparing a person to their own stored file. While verification is less invasive, both are high-risk under the AI Act in most business contexts. Need to classify your system? Contact us at office@arws.cz.

5. Our AI vendor is in the US and says they are 'NIST compliant'. Is that enough for the EU?
No. This is a common and dangerous misconception. The US NIST AI RMF is a voluntary framework. The EU AI Act is a legally binding law with severe penalties. Relying on a US standard will leave you non-compliant in the EU. Our firm's international network specializes in resolving this exact gap. Get your EU compliance check at office@arws.cz.

6. What is a 'regulatory sandbox' in the Czech Republic?
A regulatory sandbox is a program, managed by agencies like the Czech Agency for Standardization (ČAS), that allows companies to test innovative AI products in a controlled, live environment with regulatory supervision. It's designed to foster innovation safely. ARROWS welcomes innovative business ideas and can help you explore entry into such programs. Discuss your innovative project with us at office@arws.cz.

Don't want to deal with this problem yourself? More than 2,000 clients trust us, and we have been named Law Firm of the Year 2024. Take a look HERE at our references.