Liability for autonomous weapons: who answers for algorithmic errors?
Autonomous weapon systems that decide on the use of force without human intervention pose a fundamental legal challenge. Who bears liability for a fatal algorithmic error—the state, the manufacturer, or the commander? In 2026, legal uncertainty is growing with every new conflict. Understanding this fragmented framework is key to safeguarding your business.

Quick summary
- Regulation exists, but it is fragmented. Autonomous weapons are subject to general international law, but a specific globally binding convention is still missing. While the European Union’s Artificial Intelligence Act (EU AI Act) exempts military systems, dual-use technologies remain subject to strict controls.
- Responsibility primarily lies with the state and commanders. Under international law, the state that deployed the weapon is responsible for violations of IHL. The criminal liability of individuals (commanders, operators) is governed by the principles of international criminal law.
- Risks for suppliers and investors are increasing. Even though weapons manufacturers enjoy protection in some jurisdictions, in 2026 they face heightened risk in the area of export controls.
- Without a compliance strategy, sanctions are a real risk. Companies in the supply chain that lack a proactive legal strategy face not only reputational damage but also direct financial penalties, contract invalidation, and exclusion from investment portfolios due to ESG criteria.
Autonomous weapons systems: how technology became a legal issue
Imagine the following situation: a loitering munition is operating above a battlefield. Based on its training data, an algorithm identifies a target and independently decides to strike. A second later, it turns out the target was not a combatant but a civilian carrying an object resembling a weapon. Who is responsible for the death? The programmer? The commander who launched the drone? The state?
This is the core issue of lethal autonomous weapons systems (LAWS). These are platforms that, once activated by a human, select and engage targets without further human intervention.
The difference from earlier systems lies in cognitive data processing. Whereas guidance systems used to track a heat signature, today neural networks classify objects and assess the tactical situation in milliseconds.
The legal team at ARROWS, a Prague-based law firm, is dealing with these issues increasingly often—not only for defence manufacturers, but also for software developers whose vision or navigation algorithms may be classified as dual-use items.
International humanitarian law and the legality of autonomous weapons
As of 2026, there is no globally binding ban on autonomous weapons. The legal framework is a mosaic of customary law, the Geneva Conventions, and national regulations.
The Geneva Conventions and Additional Protocols
The cornerstone remains the 1949 Geneva Conventions together with Additional Protocol I of 1977. These set out the fundamental principles of IHL:
- Distinction: The weapon must be capable of distinguishing between civilians and combatants.
- Proportionality: Expected incidental civilian harm must not be excessive in relation to the concrete and direct military advantage anticipated.
- Precaution: The attacker must take all feasible precautions to minimise civilian casualties.
If an autonomous system cannot reliably distinguish a civilian from a soldier in a dynamic environment, its use is per se unlawful. Responsibility for deploying such a system lies with the state.
A key instrument is Article 36 of Additional Protocol I, which obliges states to conduct a legal review of every new weapon: before fielding it, the state must determine whether its use would comply with international law. For AI systems that can “learn” or change their parameters after review, however, this assessment is extremely complex.
Legal development stages and compliance
From the perspective of Czech and international law, the development process must undergo strict scrutiny. ARROWS, a Prague-based law firm, supports these processes for its clients:
- Stage 1: Development and dual-use classification. Already at the design stage, it is necessary to determine whether the software falls under dual-use regulation. In the USA, the relevant guidance is the Department of Defense Directive 3000.09, which requires systems to be designed to allow for “appropriate human judgment”.
- Stage 2: Legal review (Article 36 Review). Testing must be carried out before the system is fielded. Lawyers assess the error rate (false positive rate), the existence of a safeguard (kill switch), and the response to loss of connection; a sketch of such automated checks follows this list.
- Stage 3: Deployment and Rules of Engagement (ROE). Even a lawful weapon can be used unlawfully. Responsibility shifts to the commander, who must understand the system’s limitations.
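For illustration only, the kind of automated checks a Stage 2 review might run could look like the following minimal Python sketch. The thresholds, field names, and helper function are our illustrative assumptions, not prescribed legal standards or any state's actual review procedure.

```python
# Hypothetical sketch of pre-deployment (Article 36) compliance checks.
# All thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvaluationReport:
    false_positive_rate: float     # share of non-combatant objects misclassified as targets
    has_kill_switch: bool          # operator can abort an engagement at any time
    fails_safe_on_link_loss: bool  # system holds fire if the control link drops

MAX_FALSE_POSITIVE_RATE = 0.001    # illustrative threshold set by the reviewing state

def article36_checklist(report: EvaluationReport) -> list[str]:
    """Return the list of findings that would block fielding the system."""
    findings = []
    if report.false_positive_rate > MAX_FALSE_POSITIVE_RATE:
        findings.append("Misclassification rate exceeds the reviewed threshold.")
    if not report.has_kill_switch:
        findings.append("No operator abort (kill switch) is available.")
    if not report.fails_safe_on_link_loss:
        findings.append("System does not hold fire on loss of connection.")
    return findings

# Example: a system with a working kill switch but unsafe link-loss behaviour.
issues = article36_checklist(EvaluationReport(0.0004, True, False))
print(issues)  # ['System does not hold fire on loss of connection.']
```

In practice, any non-empty list of findings would feed back into Stage 1 before the system could proceed to fielding.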
The Convention on Certain Conventional Weapons
Discussions within the UN (the Group of Governmental Experts, GGE, under the CCW) in Geneva have been ongoing for more than ten years. As of 2026, no binding protocol banning LAWS has been adopted under the Convention on Certain Conventional Weapons (CCW), primarily due to resistance from military powers, which prefer non-binding codes of conduct over a strict ban.
By contrast, the European Parliament and the UN Secretary-General have long called for a ban on systems that lack “meaningful human control”.
Related questions on legal responsibility for autonomous weapons
1. How should “meaningful human control” be defined in practice? It is not merely a human being present at a button, but the operator’s ability to understand in real time why the algorithm selected the target, and to have sufficient time and information to cancel the strike if necessary.
2. What happens if an AI weapon “re-trains” itself in the field? If the system changes its parameters after the official review (the so-called Article 36 review), a legal vacuum arises: the weapon may begin to act contrary to international law, for which the state bears responsibility, exposing both the manufacturer and the state.
3. What role does cybersecurity play in the legal assessment? Vulnerability to hacking or sensor deception (adversarial attacks) is now considered a design defect; if the manufacturer underestimates security, it may be liable for subsequent incidents due to negligence.
4. Is it possible to insure against the risks associated with AI weapons? Standard insurance policies often do not cover an algorithm’s “autonomous decision-making”, which is why companies in the defence industry require specific AI liability clauses to cover the extreme financial risks of international disputes.
Liability for algorithm errors
From a legal perspective, it is necessary to distinguish between state responsibility, an individual’s criminal liability, and the manufacturer’s civil liability.
State responsibility
Under the Draft Articles on Responsibility of States for Internationally Wrongful Acts, a state answers for the acts of its organs, including the armed forces. If an autonomous drone commits what amounts to a war crime, the state is responsible regardless of whether the cause was a software error or an operator error, and it is obliged to provide reparations.
Individual criminal liability
The situation is more complex for individuals. Under the Rome Statute (ICC), a commander may be prosecuted if they knew the system was failing and deployed it anyway, or if they neglected oversight. So-called command responsibility applies even where a commander failed to intervene against an autonomous system committing crimes.
The argument “the computer decided” will not stand up in court. A commander has a duty to use only such means whose effects they can control.
Liability of the manufacturer and developer
In military procurement, the manufacturer’s liability is often contractually limited. However, in 2026 the legal environment in the EU is changing.
- Product Liability Directive: Under the revised Product Liability Directive (EU) 2024/2853, civil liability for a software defect caused by gross negligence in development cannot be fully excluded, especially where the damage occurs outside an armed conflict (e.g., during training).
- Contractual liability: States increasingly shift risks to suppliers through recourse claims in public procurement contracts.
Practical legal risks for your company
If your company is part of the supply chain in the defence industry or develops AI technologies, the following regulations apply to you.
Export control and sanctions
Exports of technologies related to autonomy are subject to strict regimes. These include the international control regime of the Wassenaar Arrangement and EU Regulation 2021/821. Image-recognition software or neural networks may require an export licence even if they are not primarily intended for the military. Breaches are punished by high fines and, in the Czech Republic, also by criminal prosecution.
| Risk | How ARROWS helps (office@arws.cz) |
| --- | --- |
| Export licence refusal | We will classify the goods and represent you in proceedings before the Licensing Administration of the Ministry of Industry and Trade of the Czech Republic. |
| Sanctions breach | We will set up business partner screening (sanctions screening) in line with current EU and US (OFAC) lists. |
| Criminal liability of management | We will implement an Internal Compliance Programme (ICP), which is mandatory for exporters of sensitive material. |
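To make the screening row above concrete, here is a deliberately minimal Python sketch of a business-partner lookup. Real screening relies on the official EU consolidated sanctions list and the US OFAC SDN list, with fuzzy name matching and alias handling; the list entries and helper below are purely illustrative.

```python
# Minimal sketch of a business-partner screening step.
# The blocklist contents are placeholders, not real sanctions-list data.
SANCTIONED_PARTIES = {
    "example sanctioned entity a.s.",  # illustrative entry only
    "example blocked person",
}

def screen_partner(name: str) -> bool:
    """Return True if the partner appears on the (illustrative) blocklist."""
    return name.strip().lower() in SANCTIONED_PARTIES

if screen_partner("Example Sanctioned Entity a.s."):
    print("Hit: escalate to compliance before contracting.")
```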
ESG and the CSDDD Directive
In 2026, the CSDDD (Corporate Sustainability Due Diligence Directive) is being phased in. Large companies must identify and address the impacts of their activities on human rights. Developing autonomous weapons without ethical safeguards may be assessed as a breach of due diligence standards and may lead to exclusion from the supply chains of multinational corporations.
AI Act
Although the AI Regulation contains an exemption for systems intended exclusively for military purposes, the boundary is thin.
If you develop navigation AI that you sell to the military as well as to the civilian sector, you fall under the full regulation for high-risk systems. This includes CE certification, risk management, and data governance.
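As a rough illustration of this scope question, the decision logic can be reduced to a short sketch. This is a simplification under our own assumptions, not the Regulation's text, and it ignores the many further criteria that determine high-risk classification.

```python
# Hypothetical first-pass scope test for the EU AI Act, as described above:
# exclusively military systems are out of scope; dual-use systems are not.
def ai_act_in_scope(exclusively_military: bool, placed_on_eu_market: bool) -> bool:
    """Simplified illustration, not legal advice or the Regulation's wording."""
    if exclusively_military:
        return False            # military-only exemption applies
    return placed_on_eu_market  # dual-use products on the EU market are in scope

# Navigation AI sold to both the army and civilian shipping operators:
print(ai_act_in_scope(exclusively_military=False, placed_on_eu_market=True))  # True
```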
The human element and meaningful control
The legal standard the international community is converging on is Meaningful Human Control (MHC).
- Human-in-the-Loop (HITL): The system selects the target, but a human must actively approve the firing. This is the legally safest mode.
- Human-on-the-Loop (HOTL): The system attacks on its own; a human supervises and can abort the attack (veto). The risk here is so-called automation bias, where the human trusts the machine too much and does not intervene.
- Human-out-of-the-Loop: Full autonomy. Legally the most problematic and, for many states, an ethically unacceptable category for lethal force.
For manufacturers, it is crucial that their systems technically allow the implementation of HITL or HOTL modes according to the customer’s (state’s) requirements and the rules of engagement (ROE).
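The difference between these modes can be captured in a small sketch of an engagement gate. The mode names follow the taxonomy above; the function and its parameters are our illustrative assumptions, not any state's or manufacturer's actual architecture.

```python
# Hypothetical sketch of an engagement gate enforcing the configured
# control mode. Everything beyond the HITL/HOTL taxonomy is illustrative.
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "HITL"     # a human must actively approve each engagement
    HUMAN_ON_THE_LOOP = "HOTL"     # the system may fire unless a human vetoes in time
    FULL_AUTONOMY = "OUT_OF_LOOP"  # no human gate; legally the most problematic

def may_engage(mode: ControlMode, operator_approved: bool, operator_vetoed: bool) -> bool:
    """Decide whether the system is allowed to release a weapon."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return operator_approved   # silence means no engagement
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoed # silence means the engagement proceeds
    return True                    # full autonomy: no human gate at all

# Under HITL, a target without explicit approval is never engaged.
assert may_engage(ControlMode.HUMAN_IN_THE_LOOP, operator_approved=False, operator_vetoed=False) is False
```

Note the legally significant asymmetry: HITL defaults to no engagement when the operator is silent, while HOTL defaults to engagement, which is precisely where automation bias becomes relevant.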
International developments and outlook
Although the UN Secretary-General called in 2023 for an agreement banning LAWS to be concluded by 2026, the reality is different. Instead of a global ban, we see fragmentation:
- The “Regulation” bloc: The EU and some states are pushing for strict rules and human control.
- The “Status Quo” bloc: Major powers (the USA, Russia) reject new binding treaties, arguing that existing IHL is sufficient.
For your company, this means the need to monitor legislation in each export country separately. What is legal to export to the USA may be problematic under EU human rights rules.
Recommendations
In 2026, we are operating in an environment of legal uncertainty, but not in a legal vacuum. Existing regulations (IHL, export control, liability directives) apply to the entire life cycle of autonomous weapons.
Recommendations for management:
- Product portfolio audit: Do your products fall under dual-use or military-material regimes? Do you have the correct licences?
- Contractual protection: Do your contracts address liability for damage caused by an AI “decision”?
- Code of ethics and ESG: Do you have rules in place for AI development in line with international standards?
The team at ARROWS, a Prague-based law firm, includes experts in international law, defence-industry compliance, and AI regulation. We will help you navigate this complex legal environment safely.
FAQ – Legal questions on autonomous weapons
1. Is the development of autonomous weapons legal in the Czech Republic?
Yes, development is not prohibited, but it is subject to Act No. 38/1994 Coll., on Foreign Trade in Military Material, and other regulations. You must have a permit and a licence. As regards AI, it is necessary to assess the impacts of the EU AI Act.
2. Who bears responsibility if an autonomous drone mistakenly hits civilians?
Primarily the state, under public international law. From a criminal-law perspective, the operation commander may be prosecuted. The manufacturer may face the state’s recourse claims or severe reputational harm; direct criminal liability of the manufacturer is less likely unless there was fraud.
3. Does the new EU AI Regulation apply to military AI systems?
Systems developed and used exclusively for military purposes are excluded from the scope of the EU AI Act. However, be mindful of dual-use systems (e.g., logistics software, simulators), which may fall under the regulation.
4. What is “Article 36”?
This refers to Article 36 of Additional Protocol I to the Geneva Conventions. It requires states to conduct a legal review of every new weapon to ensure that its use is not contrary to international law. In the Czech Republic, these reviews are carried out by the relevant departments of the Ministry of Defence.
Notice: The information contained in this article is of a general informational nature only and is intended to provide basic guidance on the topic based on the legal framework as of 2026. Although we take the utmost care to ensure accuracy, legislation and its interpretation evolve over time. We are ARROWS, a Prague-based law firm registered with the Czech Bar Association (our supervisory authority), and for maximum client protection we maintain professional liability insurance with a limit of CZK 400,000,000. To verify the current wording of the regulations and their application to your specific situation, it is necessary to contact ARROWS directly (office@arws.cz). We accept no liability for any damages arising from the independent use of the information in this article without prior individual legal consultation.
Read also:
- Legal Checklist for SaaS Platforms in the EU: GDPR, AI Act and Licensing
- Carbon Audits in 2026: EU Compliance, Risks and Practical Preparation
- Legal Advice vs Lobbying in Czech Law: Risks, Registration and Liability
- How to Protect Yourself as a Company Executive in Czechia (Before It’s Too Late)
- How to structure a Czech investment fund under ČNB supervision