Human-in-the-Loop vs. Human-on-the-Loop Control in Military AI
This article provides specific answers for foreign companies developing or deploying military or dual-use Artificial Intelligence (AI) in the Czech Republic. We will explain the critical legal differences between "Human-in-the-Loop" (HITL) and "Human-on-the-Loop" (HOTL) and why this distinction is key to liability. For any company seeking an English-speaking lawyer in Prague to navigate the EU AI Act and Czech defense regulations, this guide clarifies your immediate risks and compliance obligations.

Need advice on this topic? Contact the ARROWS law firm by email office@arws.cz or phone +420 245 007 740. Your question will be answered by JUDr. Jakub Dohnal, Ph.D., LL.M., an expert on the subject.
"Human-in-the-Loop" vs. "Human-on-the-Loop": Why this is a legal question, not a technical one
As a technology leader, you likely use terms like "Human-in-the-Loop" to describe your AI systems. However, in the European Union, these terms are not a reliable legal defense. Regulators and courts look past marketing labels to assess your system's true legal compliance, and a misunderstanding here can expose your company to significant liability.
What do these terms actually mean?
First, let's clarify the technical definitions. The distinctions are critical because they have direct implications for accountability and compliance with international humanitarian law.
- Human-in-the-Loop (HITL): The system cannot act without explicit human approval. A human operator is a mandatory part of the decision-making chain and must approve an action before the system executes it.
- Human-on-the-Loop (HOTL): The system can act autonomously, but a human supervisor has the ability to intervene and override its actions. The human is a supervisor, not a mandatory gatekeeper.
- Human-out-of-the-Loop (HOOTL): The system operates fully autonomously without any human oversight or intervention capability.
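The structural difference between these oversight models can be sketched as control flow. The following Python snippet is purely illustrative (the function and class names are our own, not drawn from any regulation or standard): in HITL the human is a mandatory gate before execution, while in HOTL the system acts by default and the human can only veto.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A proposed system action, e.g. a targeting or classification decision."""
    description: str


def hitl_execute(action: Action, human_approves) -> str:
    """Human-in-the-Loop: the system CANNOT act without explicit approval."""
    if human_approves(action):
        return f"executed: {action.description}"
    # Default outcome is inaction -- the human is a mandatory gatekeeper.
    return "blocked: awaiting human approval"


def hotl_execute(action: Action, human_vetoes) -> str:
    """Human-on-the-Loop: the system acts autonomously unless overridden."""
    if human_vetoes(action):
        return "overridden by human supervisor"
    # Default outcome is execution -- the human is only a supervisor.
    return f"executed: {action.description}"
```

Note how the legal risk discussed below maps onto the defaults: in the HOTL sketch, a supervisor who lacks the time or information to veto changes nothing about the outcome, which is precisely why the label alone does not establish "Meaningful Human Control."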
Why your "Human-in-the-Loop" marketing is not a legal defense
Many foreign companies, especially those from the US, use "human-in-the-loop" as a reassuring marketing term to signal safety and oversight. However, this "pro-innovation" slogan is legally irrelevant in an EU court or regulatory inspection.
Even the U.S. Department of Defense (DoD) has noted that "human-in-the-loop" is not its actual policy. The official DoD policy avoids this vague term, instead requiring that systems be "designed to allow commanders and operators to exercise appropriate levels of human judgment".
This distinction is crucial. EU regulators will not ask if you market a "human-in-the-loop" system. They will demand to know if your system complies with the EU's specific legal and ethical standards: Meaningful Human Control (MHC) and the rigorous demands of the EU AI Act.
The EU standard: What is "Meaningful Human Control" (MHC)?
The term that matters in international and EU legal debates is Meaningful Human Control (MHC). This is the standard you must meet.
MHC is not just about a human "pushing a button." It is a comprehensive legal and structural framework ensuring that human agency is embedded in the system's design, development, and oversight.
MHC is the legal basis for ensuring your system can comply with International Humanitarian Law (IHL) and, critically, for assigning criminal responsibility. If an AI system commits an unlawful act, regulators will ask who was in meaningful control. If your answer is "the algorithm," your company and its leadership could be held directly liable for the failure.
This is where ARROWS' legal experts provide legal opinions on whether your system's design and operational protocols meet the "Meaningful Human Control" standard required by EU and international law.
FAQ – Legal tips about AI Accountability
1. Is our "Human-in-the-Loop" (HITL) system automatically compliant with EU law?
No. Compliance is not determined by this label but by meeting the rigorous standards of "Meaningful Human Control" (MHC) and the technical documentation, risk management, and human oversight requirements of the EU AI Act. For a legal analysis of your system, email us at office@arws.cz.
2. What is the legal difference between HITL and "Meaningful Human Control" (MHC)?
HITL is a technical design (human must approve). MHC is a legal standard ensuring human agency, judgment, and the ability to assign accountability. A HITL system could fail the MHC test if the human is merely "rubber-stamping" AI decisions without the time, information, or ability to make a meaningful judgment. Need to review your liability? Contact us at office@arws.cz.
The great misconception: The EU AI Act's "Military Exemption" trap
The new EU AI Act is the world's first comprehensive law for artificial intelligence, and it applies across the 27-nation bloc, including the Czech Republic. It has one rule that causes the most confusion and creates the biggest risk for defense contractors.
What the EU AI Act's "Military Exemption" really says
The Act contains an explicit exemption that "does not apply to AI systems that are exclusively for military, defence or national security purposes".
This sounds like great news for any company in the defense sector. It is, in fact, a significant legal trap.
The Dual-Use pincer: Why the exemption won't protect you
The exemption only applies to systems used exclusively for a military purpose. The reality of the tech industry is that almost no advanced AI is "exclusively" military.
The technology is inherently dual-use. Your core technology—whether for image recognition, navigation, data analysis, or generative AI—is likely developed commercially first and then adapted for a defense client.
This means your commercial AI system (e.g., an advanced data-processing API) will be regulated by the AI Act. Under the Act's risk-based approach, any dual-use AI system that could be used in critical infrastructure, transport, or product safety will almost certainly be classified as "high-risk".
As a "provider" of a high-risk AI system, you now face massive compliance obligations, including establishing a risk management system, ensuring data governance, and drafting extensive technical documentation. ARROWS lawyers specialize in drafting documentation to prevent fines and ensure your products are fully compliant with the EU AI Act.
What are the penalties for getting this wrong?
If you assume you are exempt but your dual-use system is found to be non-compliant, the penalties are severe and based on your company's global revenue.
- Non-compliance with high-risk obligations: Fines up to €15,000,000 or 3% of your company's total worldwide annual turnover, whichever is higher.
- If your system violates rules on prohibited practices, fines rise to €35,000,000 or 7% of global turnover.
This is not a "cost of doing business." It is a company-ending financial risk.
EU AI Act Compliance risks for Dual-Use technology
| Risks and penalties | How ARROWS helps |
| --- | --- |
| Risk: Wrongly classifying your dual-use AI as "military only" to avoid the AI Act. Penalty: Fines up to €35 million or 7% of global turnover for non-compliance with prohibited/high-risk rules. | Legal Opinions: We provide a formal legal opinion on your AI system's classification under the EU AI Act. Need legal certainty? Write to office@arws.cz. |
| Risk: Failing to provide complete technical documentation for your "high-risk" AI system. Penalty: Fines up to €7.5 million or 1% of global turnover for providing incorrect information. | Drafting Legally Required Documentation: Our team drafts the technical documentation and risk assessments required by the Act. Prepare your documentation by contacting us at office@arws.cz. |
| Risk: Your "high-risk" AI system fails its conformity assessment. Penalty: Fines up to €15 million or 3% of global turnover; product is banned from the EU market. | Representation Before Public Authorities: We represent you before EU and Czech market surveillance authorities. For immediate assistance, write to us at office@arws.cz. |
Who pays when your AI fails? The new EU product liability rules
For any company developing software or AI, the liability landscape in the EU has just changed dramatically. This change creates a new and urgent risk that many foreign executives and in-house counsel are not yet aware of.
A critical update: The AI Liability Directive (AILD) is gone
For several years, the EU was developing a specific AI Liability Directive (AILD) to manage claims for harm caused by AI.
In a major policy shift, the European Commission withdrew this proposal in 2024/2025.
This was not a victory for the tech industry. It did not create a regulation-free "Wild West". Instead, it left a more dangerous and complex situation for developers. All liability claims now fall to two, much harsher, legal regimes:
1. A fragmented patchwork of 27 different national civil liability laws, creating massive uncertainty.
2. The newly revised Product Liability Directive (PLD), which has become the primary weapon for AI liability claims.
Why the revised Product Liability Directive (PLD) is now the biggest threat
The revised PLD is now the most significant legal threat to AI developers in Europe.
It makes a radical legal shift: the PLD explicitly defines software, including AI systems and firmware, as a "product".
This change means your code now carries the same legal liability as a physical machine. The directive applies strict liability. A person harmed by your AI does not need to prove you were negligent. They only need to prove that your AI "product" was defective and that the defect caused their harm.
This shift means your sales contracts, terms of service, and supply chain agreements are now critical lines of defense. ARROWS provides expert contract drafting and review to help shield you from this new, expanded liability.
Your AI's "learning" is now your liability
The new PLD was specifically designed to target the unique nature of AI. A product can be considered "defective" due to:
1. "Continuous learning": If your AI's self-learning leads to harmful behavior after it has been sold, you, the manufacturer, can be held liable.
2. Cybersecurity flaws: A product's cybersecurity is now a key part of its safety. You are liable for damage caused by a vulnerability you failed to patch.
3. Faulty software updates: You can be held liable for an update that introduces a defect, or for failing to provide an update that was necessary for safety.
This is a completely different legal world from the U.S. framework, which often relies on proving negligence. In the EU, the developer is now subject to a strict liability regime where the claimant does not need to prove fault at all.
Product liability risks for AI software developers
| Risks and penalties | How ARROWS helps |
| --- | --- |
| Risk: Your AI system's "continuous learning" causes it to malfunction and injure a user or destroy data. Penalty: Strict (no-fault) liability. You must pay compensation for all damages, including data corruption. | Legal Consultations: We analyze your product's liability exposure under the new PLD. Get tailored legal solutions by writing to office@arws.cz. |
| Risk: A third-party hacker exploits a cybersecurity vulnerability in your AI, causing harm. Penalty: You are held liable as the manufacturer for the "defect" (the vulnerability) that allowed the attack. | Contract Drafting: We draft robust supply chain contracts and end-user agreements to limit your liability exposure. Do you need a contract prepared? Contact us at office@arws.cz. |
| Risk: Your supply chain partner (e.g., a data provider) gives you biased data, which makes your "high-risk" AI system defective. Penalty: As the final "provider", you are liable for the damage. You are now in a legal fight with your own supplier. | Representation in Court: We defend you in product liability litigation across the EU. Need legal representation? Write to office@arws.cz. |
The Czech hurdle: Why EU-Level compliance is not enough
For foreign companies, it is a critical mistake to believe that EU "harmonised rules" mean compliance is "one-and-done."
EU regulations set the baseline, but it is the member states—like the Czech Republic—that implement and enforce them. More importantly, member states have their own, additional laws that create a national-level "gate" for foreign investors.
As an international law firm operating from Prague, European Union, we see foreign companies make this error frequently. For high-tech, defense, and AI sectors, the Czech Republic has two major gates you must pass.
Gate 1: Czech Foreign Direct Investment (FDI) Screening
The Czech Republic is a prime destination for technology and defense investment. To protect its strategic interests, the government screens foreign investments under the Czech FDI Act (Act No. 34/2021 Coll.).
This screening is mandatory for non-EU investments into companies involved in:
- Manufacturing or R&D of military material.
- Operating critical infrastructure.
- Developing or manufacturing dual-use items.
- Access to key technologies like Artificial Intelligence, robotics, and cybersecurity.
If you, as a US, UK, or other non-EU investor, try to acquire a promising Czech AI startup without approval, the Ministry of Industry and Trade can block the transaction or even force you to divest after the sale. ARROWS provides help with obtaining regulatory approvals for FDI, ensuring your investment is secure from day one.
Gate 2: EU Dual-Use Export controls & Czech criminal law
This is the most severe and overlooked risk for foreign tech companies. The EU controls the export of dual-use items via Regulation (EU) 2021/821. This list explicitly includes advanced AI, quantum computing, and cyber-surveillance technology.
There is also a "catch-all" control for non-listed AI items if they could be used for human rights violations.
This is not just an administrative rule. In the Czech Republic, violating this EU regulation is a criminal offense.
Under Czech law, a "Breach of International Sanctions" can lead to:
1. For an individual (like your Czech-based manager): Imprisonment up to 8 years.
2. For the company: Corporate criminal liability, fines up to €2,000,000 or 10% of annual turnover, and dissolution of the company.
This is not theoretical. Czech authorities are actively investigating companies for this right now. ARROWS provides urgent legal consultations to prevent inspections or penalties and offers professional training for employees or management (with certificates) on export control compliance.
FAQ – Legal tips about Czech Investment Screening
1. Does the Czech FDI screening apply to a minority investment in a Czech AI startup?
Yes, it can. The screening is triggered by acquiring "effective control" or a significant interest, which can be less than 50%. It's critical to get a legal assessment before you invest. For a consultation, write to office@arws.cz.
2. We are a US company. Don't our "pro-innovation" standards apply?
No. When operating in the Czech Republic, you are subject to the EU's "risk-based" rules and Czech national security laws. US standards are not a defense. To align your US and EU compliance, contact us at office@arws.cz.
Czech National Law Risks for Foreign Investors
| Risks and penalties | How ARROWS helps |
| --- | --- |
| Risk: Acquiring a Czech AI/defense technology company without mandatory FDI screening approval. Penalty: Transaction is declared void; forced divestment; significant fines. | Help with Obtaining Licenses: We manage the entire FDI approval process with the Czech Ministry of Industry and Trade. Secure your investment by emailing office@arws.cz. |
| Risk: Your Czech subsidiary exports your dual-use AI technology to a non-allied country without an export license. Penalty: Criminal prosecution under Czech law. Up to 8 years in prison for management; corporate fines up to 10% of turnover. | Representation Before Public Authorities: We represent you in all dealings with Czech authorities, including export control and sanctions bodies. Do not hesitate to contact our firm – office@arws.cz. |
| Risk: Your company's internal compliance policies are US-based and fail to address specific Czech/EU rules. Penalty: You are left completely exposed to Czech government inspections, penalties, and criminal liability. | Preparation of Internal Company Policies: We draft and update your internal policies to be fully compliant with both EU and Czech law. Get tailored legal solutions by writing to office@arws.cz. |
How ARROWS creates your legal safe harbour in Prague
The legal landscape for AI, defense, and technology in Europe is a minefield of overlapping EU regulations (AI Act, PLD), conflicting international norms (EU vs. US), and severe national laws (Czech FDI, Criminal Code).
You do not just need a lawyer. You need a strategic partner who understands this entire ecosystem from a global, EU, and local Czech perspective.
ARROWS is a leading Czech law firm based in Prague, European Union. Our expertise is built on combining deep knowledge of local Czech law with the global perspective required for cross-border business. Our ARROWS International network, built over 10 years and operating in 90 countries, gives us the unique ability to advise foreign clients on how EU and Czech rules interact with their home jurisdictions.
We support over 150 joint-stock companies and 250 limited liability companies, many of them foreign-owned enterprises just like yours. We know the problems you face, and we are known for speed, high quality, and a practical, business-focused approach.
What is the next step?
Do not wait for a regulator's inspection, a product liability lawsuit, or a blocked investment. The time to manage these risks is before they happen.
Our lawyers are ready to help you with:
- Contract drafting or review to manage the new strict liability of the PLD.
- Drafting all legally required documentation for the EU AI Act.
- Representation in court or before public authorities (FDI, export controls).
- Professional training for your management on criminal compliance for dual-use exports.
Our lawyers are ready to assist you – email us at office@arws.cz.
FAQ – Most common legal questions about Military & Dual-Use AI Regulation in the EU
1. We are a US company supplying AI to a NATO ally in the EU. Are we really subject to the EU AI Act?
Yes, almost certainly. The Act has "extra-territorial" reach. If you are a "provider" placing a "high-risk" AI system on the EU market (even if sold to a government), you are subject to the Act's obligations. For an analysis of your specific exposure, email us at office@arws.cz.
2. My in-house counsel is worried about the EU AI Act overlapping with GDPR. Is that a valid concern?
Yes, it is a major concern. Training your AI on data may involve processing sensitive personal data, triggering both GDPR and AI Act compliance. This creates complex tensions, for example, between the "right to explanation" and protecting your IP. Our firm has experts in both data protection and AI. Get legal help at office@arws.cz.
3. We are an investor, not a tech developer. Do these rules apply to us?
Absolutely. The Czech FDI screening is aimed directly at you. If you invest in a Czech company touching AI or defense tech without prior approval, your investment can be nullified. Protect your capital by contacting our M&A team at office@arws.cz.
4. Our company only makes "Human-on-the-Loop" (HOTL) systems. Is that lower risk than "Human-in-the-Loop" (HITL)?
Not necessarily. Legally, the risk is about "Meaningful Human Control" (MHC). A HOTL system where the human supervisor cannot realistically intervene (e.g., due to speed) would be considered less compliant than a well-designed HITL system. Your liability depends on the quality of human judgment, not a label. Need legal help? Contact us at office@arws.cz.
5. The EU AI Liability Directive (AILD) was withdrawn. Does that mean our liability risk is lower?
No, your risk is higher and more complex. The AILD's withdrawal shifts all civil claims to the revised Product Liability Directive (PLD), which imposes strict (no-fault) liability on software and AI as "products." This is a more severe standard. To review your exposure, write to us at office@arws.cz.
6. What is the single biggest risk for our foreign-owned Czech subsidiary?
The risk of criminal liability for violating EU dual-use export controls. A manager exporting your AI software without a license faces prison time. This risk is immediate and severe. We provide professional training for management to prevent this. For immediate assistance, write to us at office@arws.cz.