Artificial intelligence (AI) is penetrating defence technologies at an unprecedented rate, opening up new opportunities and risks in the field of weapon systems. With the development of so-called autonomous weapon systems - capable of autonomously searching for and destroying targets - major legal issues arise. In 2025, there is no single global regulation of AI in weapons, but a patchwork of partial legal frameworks and policies in different jurisdictions is emerging. This article provides an analytical overview of current legal considerations in the European Union, the United States and within NATO, with a focus on developers, businesses and governments. It omits ethical considerations and focuses on the specific legislation, constraints, responsibilities and challenges associated with the deployment of AI in defence systems.
Author of the article: ARROWS (JUDr. Jakub Dohnal, Ph.D., LL.M., office@arws.cz, +420 245 007 740)
Defence policy is largely a national competence, so the EU has no direct legislative competence to regulate the military use of AI. Nevertheless, the EU influences this area indirectly - for example through funding defence research or supporting international initiatives. Crucially, the AI Act, the first comprehensive legal framework for AI in the EU (adopted in 2024), excludes military, defence and national security uses from its scope.
This means that AI Act obligations and restrictions (aimed at civilian AI) do not apply to weapon systems designed solely for military purposes. However, member states still have to comply with International Humanitarian Law (IHL) and ensure that new weapons comply with it. The European External Action Service, for example, emphasises the EU's clear position that international law, including humanitarian law, applies to all weapon systems and that decisions to use lethal force must always be taken by a human who remains accountable for them.
Moreover, the European Parliament has taken a stance calling for preventive regulation - as early as September 2018, it adopted a resolution calling for an international ban on lethal autonomous weapons without human control. However, these resolutions are not binding on member states, and the position of the EU as a whole is gradually evolving towards a more cautious, less categorical approach.
In 2021, the Parliament adopted another resolution on AI in defence, which was more moderate on this issue. Overall, both the European Council and the European Commission have taken a rather restrained stance - recognising the importance of AI for defence and avoiding outright bans that could limit technological development. Instead of hard regulation, the EU supports the continuation of debates at the UN (Convention on Certain Conventional Weapons - CCW) and promotes the principle of 'meaningful human control' as a guide for the development of these weapons.
The United States still has no ban on the development or deployment of lethal autonomous weapons - indeed, US policy does not explicitly rule out such development. Instead of bans, the Department of Defense (DoD) has established internal policies and standards to ensure that any AI weapon systems are used responsibly and lawfully.
The seminal document is DoD Directive 3000.09 of 2012 (last updated in January 2023), which sets policy for autonomy in weapon systems. This directive distinguishes categories of autonomous weapons based on the degree of human involvement (human-in-the-loop, on-the-loop, out-of-the-loop) and requires that commanders and operators always exercise "appropriate levels of human judgment over the use of force". In practice, this means that a fully autonomous weapon (LAWS), which would select and engage targets on its own without further operator intervention, is subject to strict conditions.
All autonomous weapon systems must have built-in safeguards and be robust enough to minimise the likelihood of failure. If the system is unable to meet the specified conditions (e.g., adherence to a defined area or target of operation), it must terminate the attack or request operator intervention. The directive also requires iterative testing and evaluation whenever the system learns and changes its behaviour (machine learning), to ensure that it remains safe and functions as intended.
From an organisational perspective, the US has established a multi-tiered approval process: each new weapon system with advanced autonomy requires a high-level review by the Department's leadership before development can begin and another before it can be fielded. This review involves the Under Secretaries of Defense for Policy, for Research and Engineering, and for Acquisition and Sustainment, the Vice Chairman of the Joint Chiefs of Staff, and legal advisors. The aim is to assess the legality, ethical issues and risks before such a weapon is developed or deployed. In addition, Congress is strengthening oversight - the 2024 National Defense Authorization Act (NDAA) required the Secretary of Defense to notify Congress of any changes to this directive, and the 2025 NDAA requires periodic reports on the development and deployment of autonomous weapons through 2029.
In the international context, the U.S. actively participates in the UN CCW negotiations on LAWS, but does not support a binding ban on these systems. Instead, the U.S. government argues that the existing laws of war are sufficient and that autonomous technologies may even have benefits (e.g., more accurate targeting with less risk to civilians). For U.S. defense contractors, this means that AI weapons development is possible, but under the DoD's strict regulatory regime - requiring compliance with technical standards, legal reviews and ethical principles (in 2020 the DoD adopted a set of Ethical Principles for AI - responsible, equitable, traceable, reliable and governable - which also apply to weapon systems).
The North Atlantic Treaty Organization as a whole does not issue binding legislation for its members in the field of weapons development, but it does provide a strategic framework and recommendations for the use of AI in defence. In October 2021, NATO approved its first AI Strategy, which includes six principles for the responsible use of AI in defence. These principles - lawfulness, responsibility and accountability, explainability and traceability, reliability, governability, and bias mitigation - are intended to guide member states in the development and deployment of AI technologies. Furthermore, NATO established the Data and AI Review Board (DARB) in late 2022, which began work on a certification standard for AI in line with Alliance values. The aim of this standard is to translate general principles into specific control mechanisms to verify that new AI projects respect international law and NATO standards.
Checks are to focus in particular on the governability, traceability and reliability of AI systems. In practice, this means that the Alliance supports, for example, the implementation of logging systems for the traceability of AI decision-making and the ability of a human to take control of or disable the system at any time (human override). NATO does not produce any weapon systems itself, but as a collaborative platform it ensures the exchange of best practices between allies - lawyers, military officers and technical experts from NATO member countries discuss standards together to ensure that new technologies are developed safely and lawfully.
Crucially, NATO reaffirms the full validity of international humanitarian law for all AI weapons and emphasises the maintenance of human responsibility for the decision to use force, in line with principles shared by the EU. There is thus consensus within the NATO environment on the need for 'meaningful human control' of weapon systems - but specific implementation remains a matter for individual states.
Neither the EU nor NATO has yet adopted specific legislation that would explicitly regulate or restrict military AI, relying rather on the application of existing rules (especially international law) and on political declarations and strategies. The US, on the other hand, has internally developed a detailed regulatory framework within the Department of Defense and has established technical and legal requirements for developers. No NATO country has yet officially deployed a fully autonomous lethal weapon, but research and experimentation are ongoing in many countries. Pressure for an international treaty is growing at the UN, culminating in late 2024 in a UN General Assembly resolution supporting the development of legally binding regulation of autonomous weapons. The legal environment is therefore dynamic - national and international frameworks can be expected to continue to evolve in the coming years.
In developing and deploying AI weapon systems, both EU and US entities must comply with a number of legal constraints and procedural requirements. The international law of armed conflict plays a crucial role: States Parties to Additional Protocol I to the Geneva Conventions are obliged to review new weapons for compliance with applicable law (the Article 36 review). This legal requirement means that already at the development or acquisition stage, an assessment must be made as to whether the intended weapon system is prohibited or violates IHL principles (e.g. the principles of distinction and proportionality). Testing and evaluation are therefore not only a technical necessity but also a legal imperative. The deployment of an untested or unreviewed autonomous weapon could itself constitute a violation of law and establish state responsibility. In practice, the armed forces implement multi-tiered control mechanisms.
For example, in the US, DoD Directive 3000.09 prescribes that each autonomous weapon system undergoes a standard weapons review process, plus a special high-level review by DoD leadership prior to development and deployment. This process includes the involvement of attorneys (e.g., the DoD General Counsel) and weapons test and evaluation experts to assess whether the system meets legal requirements (especially IHL compliance). Similar procedures exist in other NATO countries - e.g., the UK and Germany declare that they consistently apply new weapons review and do not develop or deploy systems that cannot be used in accordance with international law.
Specific technical constraints are often embedded in regulatory or internal standards. One of the key requirements is that autonomous systems allow for human supervision and intervention. U.S. guidelines require that an autonomous weapon automatically abort an attack or request operator confirmation if situations outside established parameters occur (e.g., loss of communications, unidentified target, etc.).
It must also be ensured that the probability of system failure is extremely low and that the system has built-in features to minimise the consequences of any failure. In practice, this implies an obligation to implement robust test scenarios, software safeguards (e.g. a kill-switch) and redundancy of critical components. If a system changes as a result of machine learning, it must undergo new testing and approval before it can be re-deployed. Even in peacetime operations (e.g. during training), such systems are subject to strict oversight - militaries typically allow experimental use of AI weapons only on dedicated test ranges and with the participation of experts, to minimise the risk of unforeseen behaviour.
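To make these requirements concrete, the following minimal Python sketch illustrates the general software pattern described in the preceding paragraphs - an "operational envelope" guard that aborts or hands the decision back to a human operator when pre-approved conditions are not met. It is purely illustrative: the names, parameters and thresholds (SystemState, comms_link_ok, min_confidence, etc.) are assumptions made for the example and are not drawn from DoD Directive 3000.09 or any fielded system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Possible outcomes of the autonomy guard (illustrative only)."""
    PROCEED = auto()           # all pre-approved conditions hold
    REQUEST_OPERATOR = auto()  # escalate to a human for confirmation
    ABORT = auto()             # terminate the engagement entirely


@dataclass
class SystemState:
    """Hypothetical runtime parameters monitored by the guard."""
    comms_link_ok: bool         # communications with the operator intact
    inside_approved_area: bool  # operating within the authorised area
    target_confidence: float    # classifier confidence, 0.0-1.0
    kill_switch_engaged: bool   # manual human override flag


def envelope_guard(state: SystemState, min_confidence: float = 0.99) -> Action:
    """Return the action mandated by the (assumed) operational envelope.

    The ordering encodes the conservative approach discussed in the text:
    a human override always wins, loss of communications or leaving the
    approved area forces an abort, and low identification confidence
    escalates to the operator rather than proceeding automatically.
    """
    if state.kill_switch_engaged:
        return Action.ABORT
    if not state.comms_link_ok or not state.inside_approved_area:
        return Action.ABORT
    if state.target_confidence < min_confidence:
        return Action.REQUEST_OPERATOR
    return Action.PROCEED


if __name__ == "__main__":
    # Degraded situation: communications lost, so the guard aborts.
    degraded = SystemState(comms_link_ok=False, inside_approved_area=True,
                           target_confidence=0.97, kill_switch_engaged=False)
    print(envelope_guard(degraded))  # Action.ABORT
```

The design point is simply that the human override and the out-of-envelope checks take precedence over the system's own confidence, which is how "meaningful human control" is typically translated into control logic.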
The regulation of the deployment of AI in defence systems also includes other aspects: for example, the issue of target identification. International humanitarian law prohibits attacks on civilians and requires reliable identification of military targets. This implies that AI used in weapon guidance systems must be sufficiently accurate and vetted not to mistakenly mark civilian objects as targets. Developers must build a conservative approach into their algorithms - when in doubt, the system should refrain from marking an object as a target rather than risk misidentifying it. Legal frameworks in the US and Europe therefore effectively limit the deployment of fully autonomous weapon systems in environments where reliable discrimination cannot be assured. The precautionary principle applies here: if there is no certainty that an autonomous system can discriminate legitimate targets in a given environment, it should not be used. For example, the U.S. DoD considers deploying lethal autonomous systems only in situations such as electronically jammed battlefields where traditional weapons fail, and even then would require the ability to retain additional human control.
Another area of regulation is testing and certification. While civilian AI systems may be subject to certification under the AI Act (in the EU) or ISO standards, such general mechanisms are absent in defence. NATO is therefore working on a specific certification standard for military AI, intended to be user-friendly for industry and institutions within the alliance. This standard, a first version of which was expected in late 2023, should help verify that new AI projects comply with international law and NATO standards from the development phase onwards. Developers may thus face a requirement in the near future to have their systems independently assessed against these criteria (similar to how weapons are now certified for ammunition safety, etc.).
Export controls and international agreements also influence the development of AI in weapons. AI software and hardware with military applications may fall under strategic export control regimes (e.g. the EU's Dual-Use Export Controls Regulation or the US ITAR). This means that companies developing advanced algorithms for weapons systems must obtain a license for possible export and may not freely transfer such technologies abroad, especially not to embargoed countries. Multilateral regimes such as the Wassenaar Arrangement have already placed some AI technologies (e.g. neural network management software) on the list of controlled items, which imposes notification and licensing obligations on developers.
In short, the development, testing and deployment of AI in defence systems are subject to a combination of hard law (particularly international humanitarian law) and soft internal regulations. Developers and operators must ensure that their systems are legally reviewed, technically fail-safe, and operated under adequate human oversight or risk violating legal obligations.
The deployment of AI in weapons raises the question of who will be held responsible if an autonomous system causes harm in an unintended or unlawful manner. In traditional conflicts, the state is responsible for the conduct of its armed forces, and individuals (commanders, soldiers) may be held individually criminally liable for war crimes. These principles also apply to the use of autonomous weapons - it cannot be argued that "the robot caused the damage, so no one is to blame". Under the law of state responsibility, the state is responsible for violations of international humanitarian law that occur through the use of an autonomous weapon system.
For example, if an autonomous drone without human intervention accidentally attacks a civilian target in violation of IHL, the state that deployed it is liable, just as it would be liable for a soldier who committed a similar act. The state cannot absolve itself of liability by claiming that it was an algorithmic error. Moreover, if the state deployed an AI weapon without sufficient testing or review and a violation occurred, that in itself is a further violation for which the state is liable. This underscores the importance of the aforementioned mandatory weapons reviews and tests - failure to carry them out could have legal consequences.
From the perspective of individual responsibility, there may be an "accountability gap" - the fear that no specific culprit can be identified once decisions are made fully autonomously. In legal terms, however, it is assumed that there is always a human link to which responsibility for the deployment of a weapon can be attributed.
Typically, this will be the commander who decided to use the system in a combat situation, or the operator who activated the device. If the autonomous weapon causes unlawful damage, it may be investigated whether the commander/operator was negligent (e.g., deployed the system in inappropriate conditions) or intentionally violated regulations. At the extreme, a war crime could be involved (e.g., if someone knowingly used an AI weapon to attack civilians).
International criminal law already recognizes the concept of command responsibility, under which a commander is criminally responsible for the crimes of subordinates if he knew or should have known about them and failed to prevent or punish them - by analogy, he could be held responsible for the acts of a "subordinate" weapon if he knew of the risks and took no action. In practice, however, proving this will be difficult: the defence may argue that it was an inscrutable machine error beyond the control of the operator. For this reason, there are voices calling for the duty of human control to be clearly enshrined in law - so that there is always someone to whom legal responsibility can be assigned. After all, the basic position of NATO and EU states that "a human remains responsible for life-and-death decisions" is also intended to prevent the creation of a liability vacuum.
Another level is the responsibility of the manufacturer or developer of the weapon system. There may be situations outside an armed conflict - e.g. during testing, or when a weapon is used by security forces in peacetime - where civil liability will be at issue. Under a product liability regime, manufacturers or software suppliers could be held liable for programming errors or malfunctions in an autonomous weapon system.
This means that if the AI failure was caused by a demonstrable defect in the system, the victims (or the state that had to compensate them) can recover damages from the supplier. Military contracts often contain specific liability and warranty provisions - some may limit the manufacturer's liability (governments sometimes assume the risks for experimental technologies), while in other cases the state will retain a right of recourse against the contractor in the event of a serious defect. It is therefore critical for companies to address contractually what liabilities they bear in the event of unintended consequences of AI weapons deployment. Given the increased risk, some insurers are considering liability insurance specifically for AI systems in defence.
The legal implications of the misuse of AI can also have political implications. If an autonomous weapon causes an international incident (e.g., accidentally hits a neutral state or kills civilians), the state in question may face international legal claims (diplomatic protests, arbitration for damages) and a reputation as an "irresponsible user." For this reason, states take great care to investigate each AI incident transparently and to be able to demonstrate that they have acted lawfully. At the extreme, a massive AI weapon failure could lead to updated legislation or pressure for a moratorium, posing a regulatory risk to all developers of these technologies.
The legal approaches of world powers and organisations to autonomous weapons differ, which complicates global harmonisation. The US and other technologically advanced militaries (e.g. Israel, Russia) take the position that existing law is sufficient and that what matters is its proper implementation. The United States openly disagrees with a pre-emptive ban on autonomous weapons and instead advocates not enacting new international restrictions until it is clear what these systems will look like.
US officials argue that any weapon - with or without AI - must abide by the applicable law of war, and prefer the development of ethical frameworks and technical measures to ensure lawful use rather than a blanket ban. In practice, this means that the US is investing in AI in weapons, but cautiously (with human oversight), and is blocking efforts at the UN to immediately negotiate a binding treaty banning "killer robots". The UK, for example, takes a similar view, saying that it has no plans to deploy fully autonomous lethal systems without human oversight, while at the same time rejecting a pre-emptive ban - preferring to emphasise "responsible AI" and the importance of existing rules.
On the other hand, many countries, including a number of EU members, are pushing for a stricter approach. Austria, Spain, Ireland, Malta and others have for several years been calling for an international treaty banning autonomous weapons without human control. Similarly, the European Parliament (as mentioned) has taken a tough stance in its resolutions. Even large states like Germany and France officially support the idea of "strong regulation" - for example, within the CCW they are pushing for a binding political declaration or code of conduct stipulating that humans must have the final say in the use of lethal force. These states are trying to find a compromise between security concerns (not falling behind in the development of military AI) and humanitarian concerns.
As a result, the EU as a whole speaks out in international forums calling for rules on LAWS, but internally does not prevent its members from researching these weapons (as long as it is within the bounds of the law). It can be said that the European approach is more "legally cautious" and human rights oriented - emphasizing preventive rules (human-in-the-loop principle) and international cooperation, while the American approach is more "technologically pragmatic" - leaving more room for innovation, with specific deployments to be assessed on a case-by-case basis.
NATO must balance between these approaches of its members. As such, the Alliance supports the responsible use of AI and the sharing of common rules (such as the six principles mentioned above). Its documents state that IHL fully applies and that human responsibility is indispensable - this reflects the consensus across member states. On the other hand, NATO has not imposed any prohibition on its members' development of autonomous weapons; on the contrary, the Alliance strategy stresses the need to keep pace with adversaries in the field of new technologies.
Russia and China, as major non-NATO military powers, also take different positions: Russia opposes any new regulation (as evidenced by its vote against the aforementioned UN General Assembly resolution), while China declares its support for a ban on autonomous weapons but continues to invest in their development (and thus often faces accusations of playing a double game). These differences make it difficult to reach a consensus at international level.
In terms of specific legal issues, jurisdictions differ, for example, on the definition of an autonomous weapon. The U.S. has internally defined the terms within DoD 3000.09, where LAWS means a system capable of selecting and engaging a target without further human intervention. In contrast, there is still no uniform definition in the UN - different states understand it differently, which makes lawmaking difficult. EU institutions often use the term 'systems without significant human control over critical functions', which is the definition used by the European Parliament.
Different definitions can lead to different scopes of regulation. For example, the U.S. may consider a particular semi-autonomous weapon (a loitering munition with pre-selected targets) to be "human in the loop" and therefore outside the contested category, while another state would consider it unacceptably autonomous - a dispute then arises over the legal status of such a weapon.
Similarly, approaches to development differ: in the US and Russia, active state-funded research is ongoing, while some smaller countries opt for moratoria. International humanitarian organisations (such as Human Rights Watch and the ICRC) are pressing for precautionary measures, pointing to the 'accountability gap' and the risk of violating the basic principles of IHL if a machine were to make life-and-death decisions.
These views influence European and smaller states more than the superpowers. The technological lead of some states then poses a legal and strategic dilemma: states that lead in AI weapons development (the US, China, Russia) do not want to limit their capabilities unilaterally, while states without these capabilities call for restrictions to prevent an arms race.
Overall, legal approaches to autonomous weapons in 2025 range from voluntary internal controls to demands for an international treaty. All NATO democracies recognize the binding nature of existing laws of war and the need for human accountability; however, they differ on whether further explicit legal anchoring or prohibition is necessary. Businesses must also take this difference into account - for example, a US contractor may be developing an AI drone for the Pentagon in accordance with DoD guidelines, but the same product could face resistance in Europe if it lacks certain safety features or is perceived as a step towards "killer robots". It is therefore important for businesses and governments to monitor both national regulations and international debates to reconcile technological developments with differing legal requirements.
The legal environment around AI in weapon systems poses several major challenges in 2025 for the companies developing these technologies and for the state institutions that procure and regulate them:
Unclear definitions and standards
AI in defence is a difficult concept to define and there is a lack of settled legal definitions of autonomous weapons. Legislation struggles to keep pace with rapid technological developments, which means that regulations can be vague or outdated. It is difficult for companies to design systems according to rules that are not clear - for example, the term "significant human control" can be interpreted in different ways. Governments, for their part, face the challenge of creating effective standards for technology that is constantly changing.
Compliance with diverse regulations
A company operating internationally must take into account the different requirements in the US, EU, NATO and possibly other regions. For example, in the EU (especially for projects funded by the European Defence Fund), systems are de facto expected to have human control, whereas US standards emphasise technical safety mechanisms. Reconciling these requirements in a single development program requires careful legal analysis. Interoperability is also a challenge for government - how to ensure that allied militaries can work together when their legal limits on autonomy differ.
Technological predictability and testing
AI systems, especially with machine learning features, can change their behaviour over time, making traditional certification difficult. Regulations require repeated testing and revalidation for software updates, which lengthens development cycles and increases costs. However, ensuring sufficiently comprehensive testing for all use scenarios is necessary, otherwise there is a legal risk (the state would be liable for an insufficiently tested weapon). Thus, the challenge for companies is to build robust verification processes into development and for states to have the capacity to verify these tests (e.g. through independent authorities or military testing facilities).
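As an illustration of how such a revalidation gate might be operationalised in software, the sketch below ties deployment clearance to an explicit evaluation record for each model version, so that a retrained model cannot be cleared until it has passed the approved test suite again. The field names, pass-rate threshold and sign-off flags are hypothetical assumptions, not requirements drawn from any specific regulation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EvaluationRecord:
    """Hypothetical summary of a completed test-and-evaluation campaign."""
    model_version: str
    scenarios_passed: int
    scenarios_total: int
    reviewed_by_legal: bool   # weapons-review (Article 36) sign-off recorded
    reviewed_by_safety: bool  # independent safety assessment recorded


def clear_for_deployment(record: EvaluationRecord,
                         required_pass_rate: float = 1.0) -> bool:
    """Return True only if this exact model version has been fully revalidated.

    Any change to the model produces a new version string with no evaluation
    record yet, so it cannot be cleared - mirroring the requirement that
    learning systems be re-tested and re-approved before re-deployment.
    """
    if record.scenarios_total == 0:
        return False
    pass_rate = record.scenarios_passed / record.scenarios_total
    return (pass_rate >= required_pass_rate
            and record.reviewed_by_legal
            and record.reviewed_by_safety)


if __name__ == "__main__":
    updated_model = EvaluationRecord(model_version="2.4.1",
                                     scenarios_passed=118,
                                     scenarios_total=120,
                                     reviewed_by_legal=True,
                                     reviewed_by_safety=True)
    print(clear_for_deployment(updated_model))  # False: two scenarios failed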
Transparency and auditability of AI
Lawyers and military commanders will require the ability to explain AI decisions after an attack occurs, especially if there is doubt about the legality of the target. NATO therefore emphasises the traceability and explainability of algorithms. For developers, this means incorporating elements such as data logging, sensor record keeping and recording of the AI's decision-making process, to allow ex post analysis of why the system chose a given target. However, this "explainability" sometimes runs up against the proprietary nature of AI models or trade secrets, which is another challenge - how to meet the transparency requirement without requiring companies to reveal their know-how. Governments may need to introduce pre-deployment audits of algorithms, which will require new professional capacity (AI experts able to assess models for risk).
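A minimal sketch of what such a traceability mechanism could look like in code is shown below, assuming a simple append-only log whose records are chained by hashes so that later tampering is detectable. All field names and the storage format are illustrative assumptions, not an actual NATO or DoD specification.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionLogEntry:
    """Hypothetical audit record written for every AI recommendation."""
    timestamp: str          # UTC time of the recommendation
    model_version: str      # exact model build that produced the output
    sensor_inputs_ref: str  # pointer to archived raw sensor data
    recommendation: str     # what the system proposed
    confidence: float       # model confidence for the recommendation
    human_decision: str     # what the operator actually decided
    operator_id: str        # who made or confirmed the decision


def append_entry(log_path: str, entry: DecisionLogEntry, prev_hash: str) -> str:
    """Append an entry chained to the previous record's hash.

    Chaining each record to its predecessor makes later tampering
    detectable, which supports the ex post legal review described above.
    """
    payload = {"prev_hash": prev_hash, **asdict(entry)}
    line = json.dumps(payload, sort_keys=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode("utf-8")).hexdigest()


if __name__ == "__main__":
    entry = DecisionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="2.4.1",
        sensor_inputs_ref="archive://mission-042/frame-1187",
        recommendation="flag object as possible military vehicle",
        confidence=0.87,
        human_decision="no engagement - identity not confirmed",
        operator_id="op-7731",
    )
    last_hash = append_entry("decision_audit.log", entry, prev_hash="GENESIS")
    print(last_hash)
```

Note that the record deliberately captures both the system's recommendation and the human decision, which is what allows an after-the-fact review to reconstruct where responsibility lay.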
Liability and insurance risk
As the previous section has shown, it is not yet entirely clear who will bear responsibility in different AI failure scenarios. Businesses fear that they could face lawsuits or sanctions if their product causes an accident, even if it was used by the military. They may therefore be hesitant to invest in developing truly autonomous capabilities without a clear legal framework for liability. The challenge for states is to set up a regime that incentivises innovation but also ensures that victims are compensated in the event of damage. Liability insurance in this area is still in its infancy and prices can be high due to the uncertainty of the risk.
Maintaining compliance with IHL in practice
In theory, all NATO armies declare that they will use AI in accordance with international law. The challenge is to ensure this in practice under combat conditions - personnel will need to be trained, rules of engagement (ROE) will need to be modified and new operational procedures will need to be introduced. For example, commanders need to understand the limits of AI tools so that they do not use them inappropriately (which could lead to legal action). Companies, in turn, need to provide customers (armed forces) with sufficient information about the capabilities and limitations of the system so that it is not deployed outside the conditions for which it was designed.
Dynamic development of regulation
The legal framework may change fundamentally in the next few years - for example, if the UN adopts a new treaty on autonomous weapons, companies will have to react quickly (perhaps scaling back or ending some projects) and states will have to update their laws. Already, some states are considering moratoria or national bans on certain AI applications (e.g. banning fully autonomous drones for police purposes). Uncertainty about future regulation is therefore a challenge in itself - it requires scenario planning and flexibility on the part of companies and governments.
The use of AI in weapon systems brings significant legal risk, but also a competitive advantage to those who can combine cutting-edge innovation with regulatory compliance. In 2025, no business or state can ignore this area - the legal aspects of AI in defence have become an integral part of the decision-making process for research, development and deployment of new technologies. It is critical for lawyers and decision-makers to continuously monitor developments at the national level (legislation, DoD policies, EU fund directives, etc.) and in the international arena (UN negotiations, NATO principles) to ensure that their organisations comply with all applicable regulations while not falling behind in technological advancement.
Given the rapid evolution of AI and the security situation, it is to be expected that the legal framework will continue to be refined - the aim of all stakeholders should be to set clear rules that allow the benefits of AI to be exploited in defence while preventing its misuse or illegal use.