Predictive Defense Systems and Pre-Emptive Strikes under International Law
In today's rapidly evolving security landscape, governments and corporations face increasingly complex questions about when military action is legally justified. Whether your organization operates across borders, manages defense contracts, or invests in security infrastructure, understanding the legal framework governing predictive defense systems is essential to avoiding costly disputes, criminal liability, and reputational damage.

Article contents
- Understanding pre-emptive strikes and international legal framework
- The Caroline doctrine and the modern standard for anticipatory self-defense
- Predictive defense systems and artificial intelligence in military applications
- Article 36 reviews and legal compliance for new weapons systems
- Distinguishing imminence from speculation
- The role of human judgment and control in AI-assisted targeting
- Proportionality analysis and the challenge of predicting collateral damage
- Cross-border complications and international dimensions
Quick summary
- "Imminence" is a strict legal standard. The difference between a lawful pre-emptive strike and unlawful aggression lies in the definition of "imminence." It requires an immediate, overwhelming threat with no other recourse. Preventive strikes against distant threats are generally illegal.
- AI is a tool, not a decision-maker. Under International Humanitarian Law, only humans can be held accountable. Utilizing AI for threat detection is lawful, but delegating the decision to strike to an algorithm without meaningful human control invites criminal liability for the human operators and commanders.
- Article 36 review is mandatory and ongoing. Legal review of new weapons is a treaty obligation. For AI systems, this review must be continuous throughout the system's lifecycle, not just at procurement. Non-compliance creates significant liability for defense contractors.
- Proportionality precedes action. The legal validity of a strike depends on the anticipated military advantage vs. anticipated civilian harm at the time of decision. Documenting this analysis is critical for legal defense against future war crimes allegations.
- Sovereign borders matter. Striking non-state actors in third countries requires complex legal justification. This is a high-risk legal area requiring specialized counsel.
Understanding pre-emptive strikes and international legal framework
Pre-emptive strikes represent one of the most contested areas of modern international law. A pre-emptive strike is military action initiated based on the belief that an adversary is planning an imminent offensive. However, the question of whether and when such action is lawful remains fundamentally unresolved in treaty law.
While corporate decision-makers often assume that governments operate within clear legal boundaries, the reality is considerably more complex. The distinction between a lawful pre-emptive strike (anticipatory self-defense) and unlawful aggression depends on strict legal criteria: the imminence of the threat, the necessity of the response, and proportionality.
The UN Charter establishes the foundational rule: Article 2(4) prohibits all member states from the threat or use of force against the territorial integrity or political independence of any state. Apart from authorization by the UN Security Council under Chapter VII, the sole recognized exception for unilateral action is found in Article 51.
Article 51 recognizes "the inherent right of individual or collective self-defence if an armed attack occurs." Notice the critical language—the Charter’s text arguably requires that an armed attack actually occurs before the right to use force is triggered.
This textual requirement means that any military action taken before an actual attack technically falls outside the explicit protections of Article 51, creating a presumption of illegality under the strict reading of the Charter. However, international law has evolved since 1945 through state practice and customary law.
States have increasingly interpreted Article 51 to encompass anticipatory self-defense—the right to act in self-defense against threats that are "imminent," even if an actual attack has not yet begun. This interpretive shift reflects the reality of modern warfare, particularly weapons of mass destruction and cyber capabilities.
The Caroline doctrine and the modern standard for anticipatory self-defense
To understand the current customary international law standard, one must examine the Caroline case (1837), which established the legal test known as the Caroline doctrine. Following a British attack on a U.S. vessel supporting Canadian rebels, the diplomatic exchange between the U.S. and Britain established a new standard.
For anticipatory self-defense to be lawful, the necessity of that self-defense must be "instant, overwhelming, leaving no choice of means, and no moment for deliberation." The Caroline doctrine is widely accepted as customary international law—binding even on states that have not formally agreed to it in writing.
It sets the high threshold for lawful pre-emption: the threat must be imminent, overwhelming, and leave no alternative but to use force. Yet, applying this 19th-century doctrine to 21st-century technology presents serious practical complications.
What makes a cyber-threat "imminent"? How should decision-makers assess "overwhelming" danger when intelligence derived from AI is probabilistic? The ARROWS Law Firm regularly advises international organizations, defense contractors, and corporations on navigating these uncertainties.
The interpretation of "imminence" has evolved, particularly regarding terrorism and WMDs, but the core requirement remains: the window of opportunity to act in self-defense must be effectively closing. The distinction between anticipatory (pre-emptive) self-defense and preventive war is crucial legal terminology.
Pre-emptive action targets an imminent threat (consistent with customary law and the Caroline test). Preventive action addresses potential future threats that are not yet imminent. International law treats these differently.
While pre-emptive self-defense has achieved acceptance in state practice, preventive war—striking to eliminate a threat that may materialize in the future—lacks valid legal foundation under the UN Charter and is widely considered an act of aggression.
Predictive defense systems and artificial intelligence in military applications
The emergence of artificial intelligence (AI) and predictive algorithms in military decision-making introduces new liability risks. Predictive defense systems use machine learning to analyze vast quantities of data—satellite imagery, communications intercepts, behavioral patterns—to forecast enemy movements and recommend military action.
However, the use of AI-enabled decision support systems (AI-DSS) in recent conflicts has revealed gaps between legal theory and operational reality. AI systems designed to improve targeting accuracy can, if unchecked, exacerbate unlawful civilian harm.
Reports indicate that accelerated targeting cycles can lead to "automation bias," where operators over-rely on unverified algorithmic outputs, compressing human judgment and overwhelming the legal review processes mandated by International Humanitarian Law (IHL).
This raises a fundamental legal question for defense contractors and military procurement officers: Can an AI system make the legal determinations required for self-defense? The answer under current international law is no. A system that "predicts" a threat detects patterns, but it cannot make the normative legal judgment of imminence.
Imminence requires a complex assessment of intent, capability, and the lack of peaceful alternatives. Furthermore, any recommended response must comply with the principles of distinction (targeting only combatants) and proportionality (ensuring collateral damage is not excessive).
Legal tips on predictive defense systems and threat assessment
1. Can artificial intelligence systems independently determine if a threat is "imminent" under international law?
No. Imminence is a qualitative legal judgment requiring human interpretation of context and applicable law. While AI can process data, the final determination of whether a threat meets the Article 51 threshold must involve human legal judgment. Over-reliance on AI predictions without meaningful human review creates significant legal liability for the state and potentially the developers.
2. What happens if a predictive system recommends action based on faulty data?
This creates direct legal exposure. If a state acts on AI-generated predictions that prove incorrect, resulting in civilian harm or violation of sovereignty, legal responsibility falls on the human decision-makers. In the context of the EU and NATO, reliance on a "black box" algorithm is generally not a valid legal defense against charges of negligence or war crimes.
3. Who bears legal responsibility when a predictive system contributes to an unlawful military action?
Under the doctrine of Command Responsibility, military commanders and civilian superiors are liable if they knew or should have known their forces were committing violations and failed to prevent them. Similarly, those responsible for designing and deploying systems without adequate safeguards may face liability for aiding and abetting violations.
Article 36 reviews and legal compliance for new weapons systems
International law imposes a strict obligation on states to conduct legal reviews of new weapons, means, and methods of warfare. This obligation is codified in Article 36 of Additional Protocol I to the Geneva Conventions (1977) and is recognized as customary international law binding on all states. Article 36 reviews must determine whether the employment of a new weapon would, in some or all circumstances, be prohibited by international law.
Article 36 reviews are often inadequate for AI-enabled systems. Traditional reviews focus on the weapon's physical effects.
However, for AI systems, the legal risk lies in the process of target selection. An AI-DSS might technically be capable of distinction, but if its operational speed outpaces human legal review, it may function unlawfully in practice.
The ARROWS Law Firm advises clients on Article 36 compliance, which involves not just military law but also contract law, export control (dual-use goods), and liability mitigation. A compliant review for 2026 standards must address the "integration challenge": ensuring that the system's speed does not bypass the mandatory legal checks.
This requires embedding legal review throughout the system's lifecycle—from design (legal-by-design) to deployment and periodic re-evaluation.
Legal tips on Article 36 reviews and weapons system procurement
1. Is a one-time Article 36 review sufficient?
No. Especially for software-based systems that receive updates or learn (machine learning), the legal review must be iterative. A system approved in 2024 may operate differently in 2026 due to software patches or changes in the operational environment, requiring a new legal assessment.
2. What are the consequences of deploying a system without a proper Article 36 review?
Deployment without review is a breach of international obligations. If the system subsequently commits violations (e.g., indiscriminate attacks), the lack of a review serves as evidence of negligence or recklessness in international criminal proceedings or civil liability claims against manufacturers.
3. How should defense contractors approach Article 36 compliance?
Contractors should integrate "legal compliance files" into their deliverables. ARROWS Law Firm advises contractors to contractually stipulate that the end-user must conduct their own operational legal reviews, clarifying where the manufacturer's liability ends and the operator's begins.
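The iterative-review principle described above can be made concrete with a simple compliance check: a completed legal review is tied to a specific software version and model state, and any divergence flags the system for re-review. The following Python sketch is illustrative only; the record fields, names, and version-tracking approach are our hypothetical assumptions, not a prescribed legal or technical standard.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """Last completed Article 36 legal review of a weapon system."""
    system_id: str
    reviewed_version: str  # software version covered by the review
    model_hash: str        # fingerprint of the ML model at review time

def needs_new_review(record: ReviewRecord,
                     deployed_version: str,
                     deployed_model_hash: str) -> bool:
    """A review is stale if either the software version or the learned
    model has changed since the last legal assessment."""
    return (record.reviewed_version != deployed_version
            or record.model_hash != deployed_model_hash)

# Example: a model retrained after the original review triggers a re-review flag.
last_review = ReviewRecord("ai-dss-01", "2.3.0", "a1b2c3")
print(needs_new_review(last_review, "2.3.0", "ffee99"))  # True
```

The design point is that the trigger for re-review is mechanical and auditable: no human has to notice that the system "seems different" before the legal obligation is activated.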
Distinguishing imminence from speculation
The legality of a pre-emptive strike often hinges on the factual assessment of imminence. For instance, the 1967 Six-Day War is often cited as a case where Israel argued Arab armies were mobilizing for an immediate attack, justifying anticipatory self-defense.
Conversely, the 2003 U.S. invasion of Iraq is widely regarded by legal scholars as an unlawful preventive war, as the threat (WMD) was not imminent, and in fact, did not exist. For modern defense planners, relying on predictive AI creates a "black box" problem.
If an AI predicts an attack with 85% probability, does that constitute imminence? Legally, if a state acts on a prediction that turns out to be a "hallucination" or data error, the state bears full responsibility for the aggression.
International law evaluates the "reasonableness" of the decision based on information available at the time, but reckless reliance on unverified automated systems is unlikely to be deemed "reasonable" by international courts.
The role of human judgment and control in AI-assisted targeting
International Humanitarian Law (IHL) requires that all attacks comply with distinction, proportionality, and precaution. These are qualitative judgments.
The emerging international standard is "Meaningful Human Control" (MHC). This means humans must not just "press the button" but must have the cognitive capacity and situational awareness to understand the target, the weapon's effects, and the legal context before authorization.
Systems that automate the target selection and engagement cycle to a speed where human intervention is impossible risk being deemed illegal per se under IHL principles, or at minimum, shifting massive criminal liability to the commanders who deployed them.
Proportionality analysis and the challenge of predicting collateral damage
The principle of proportionality prohibits attacks where the expected civilian casualties are "excessive in relation to the concrete and direct military advantage anticipated." This is a forward-looking assessment.
AI systems complicate this by generating targets faster than humans can verify the "collateral damage estimation" (CDE). If an organization automates CDE without human oversight, it risks systematic violations of proportionality.
ARROWS Law Firm advises on the design of Rules of Engagement (ROE) that mandate specific human authorization levels based on the CDE value, ensuring that legal proportionality checks are not bypassed by algorithmic speed.
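A rule of this kind can be sketched in a few lines. The thresholds, authorization levels, and function name below are purely hypothetical illustrations chosen by us, not drawn from any actual ROE, CDE methodology, or national doctrine.

```python
def required_authorization(cde_estimate: int) -> str:
    """Map a collateral damage estimate (expected civilian casualties)
    to the minimum human authorization level the ROE require.
    Thresholds and role names are illustrative only."""
    if cde_estimate == 0:
        return "unit commander"
    if cde_estimate <= 5:
        return "senior commander with legal advisor review"
    return "national-level authority"

# The higher the estimated harm, the higher the mandatory human sign-off.
print(required_authorization(3))  # senior commander with legal advisor review
```

The structural point is that the estimated civilian harm, not the algorithm's operating tempo, dictates who must authorize the strike, so the proportionality check cannot be bypassed by speed.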
Cross-border complications and international dimensions
Using force in the territory of another state against non-state actors (e.g., terrorist groups) involves the "unwilling or unable" doctrine. This controversial legal theory suggests a state can use force in a third state's territory if that host state is unwilling or unable to suppress the threat.
While accepted by some major powers, this doctrine is not universally accepted in international law and is frequently challenged. Corporations and contractors supporting such operations must be aware that personnel could face legal risks in jurisdictions that do not recognize this doctrine.
| Risks and Sanctions | How ARROWS Helps (office@arws.cz) |
| --- | --- |
| Unlawful Aggression/Crime of Aggression: Military action based on faulty threat assessment or preventive (not pre-emptive) logic can constitute a Crime of Aggression under the Rome Statute (ICC) and customary law, leading to individual criminal responsibility for leadership. | Legal analysis and threat assessment review: ARROWS Law Firm conducts independent legal analysis of threat assessments (Jus ad Bellum analysis). |
| War Crimes (Disproportionate Attack): Violation of proportionality requirements resulting in excessive civilian death is a war crime. Liability extends to commanders and potentially corporate actors knowingly assisting such operations. | Proportionality assessment protocols: We advise on the design of proportionality assessment procedures and review Collateral Damage Estimation (CDE) methodologies. |
| Deployment of Illegal Weapons (Art. 36 Breach): Deploying systems that have not undergone rigorous Article 36 review, or that cannot distinguish targets reliably, violates international law and exposes manufacturers to civil and criminal liability. | Article 36 compliance & procurement: We guide defense contractors and states through the Article 36 legal review process, ensuring documentation covers the system's lifecycle. |
| Command Responsibility for AI Errors: Commanders relying on "black box" AI without Meaningful Human Control are liable for the system's violations. "The computer made a mistake" is not a valid legal defense. | Human control framework design: We assist in drafting Rules of Engagement (ROE) and Standard Operating Procedures (SOPs). |
| Violation of Sovereignty: Cross-border operations without valid self-defense justification or UNSC mandate violate the UN Charter, leading to state liability and diplomatic sanctions. | Cross-border operations analysis: We provide legal opinions on the "unwilling or unable" standard and jurisdiction-specific risks. |
Conclusion
The legal landscape governing predictive defense systems is unforgiving. While Article 51 of the UN Charter preserves the right to self-defense, this right is not unlimited. The integration of AI into military decision-making accelerates the tempo of operations but does not relax the legal requirements of distinction, proportionality, and precaution.
In fact, it heightens the need for rigorous legal oversight. The lawyers at ARROWS Law Firm regularly advise international clients on these complex issues, from Article 36 reviews to operational legal support.
We understand that in 2026, legal compliance is not just a bureaucratic hurdle—it is a strategic necessity to prevent criminal liability and ensure the legitimacy of security operations. Whether you are a defense contractor, a military organization, or a corporation in the security sector, expert guidance is essential. Contact ARROWS Law Firm at office@arws.cz to discuss your specific legal challenges.
FAQ – Frequently asked legal questions
1. Under international law, can a state use force to address a threat that is not yet imminent?
Generally, no. Customary international law requires a threat to be "imminent" for self-defense to be lawful without UN Security Council authorization. Acting against a non-imminent potential threat constitutes "preventive war," which is widely considered illegal under the UN Charter.
2. What legal responsibility does a defense contractor bear if a weapons system causes unlawful harm?
Contractors can face civil liability and, in some jurisdictions, criminal liability for aiding and abetting war crimes if they knowingly supply systems that cannot distinguish between civilians and combatants, or if they fail to disclose known defects that lead to unlawful targeting. Strict adherence to Article 36 reviews is the primary defense.
3. If an AI targeting system makes an autonomous error resulting in civilian casualties, who is responsible?
The human operators and commanders. International law does not recognize AI as a legal entity. Under the doctrine of "Command Responsibility," a commander who deploys a system they do not understand or cannot control is criminally liable for the resulting violations.
4. Must proportionality analysis occur before or after military action?
It must be ex ante (before the attack). The legality is judged based on the information available to the commander at the time of the decision. Post-strike analysis (Battle Damage Assessment) is used to verify accuracy and inform future decisions, but the legal obligation applies to the planning phase.
5. Can Article 36 review be a one-time event?
No. Best practice and emerging norms dictate that Article 36 reviews must be iterative, especially for AI systems that evolve via machine learning or software updates. A significant update to the system's logic requires a new legal review.
6. What should an organization do if it suspects a violation has occurred?
International law requires the immediate investigation of suspected war crimes. Failure to investigate and prosecute is itself a violation. Organizations should immediately secure all data logs and seek independent legal counsel to manage the investigation and potential disclosure obligations. Contact office@arws.cz for crisis management.
Disclaimer: The information contained in this article is for general informational purposes only and serves as a basic guide to the issue as of 2026. Although we strive for maximum accuracy, laws and their interpretation evolve over time. We are ARROWS Law Firm, a member of the Czech Bar Association (our supervisory authority), and for the maximum security of our clients, we are insured for professional liability with a limit of CZK 400,000,000. To verify the current wording of the regulations and their application to your specific situation, it is necessary to contact ARROWS Law Firm directly (office@arws.cz). We are not liable for any damages arising from the independent use of the information in this article without prior individual legal consultation.
JUDr. Ondřej Stehlík, LL.M., MBA
INTERNATIONAL LAW