Use of AI-Driven Weapons in Armed Conflicts: Analysis of IHL Principles in International Law
Keywords:
Autonomy, Algorithmic warfare, Civilian protection, Targeting decisions, Human control, Legal accountability, Customary law, Military technology

Abstract
The increasing deployment of artificial intelligence (AI)-driven weapons systems in contemporary armed conflicts has raised profound legal and ethical concerns under international humanitarian law (IHL). This article examines whether existing IHL principles are adequate to regulate the use of AI-enabled weapons, particularly autonomous and semi-autonomous systems, in international and non-international armed conflicts. The study analyses the compatibility of AI-driven weapons with core IHL principles, including distinction, proportionality, military necessity, and precautions in attack, as well as the question of accountability for unlawful harm. Employing a doctrinal and analytical research design, the article critically assesses treaty law, customary IHL, international jurisprudence, and authoritative interpretations by international bodies. The findings indicate that while current IHL provides a normative framework applicable to AI-driven weapons, significant challenges arise in the practical application of these principles owing to algorithmic decision-making, unpredictability, and diminished human control. The article concludes that existing legal norms are strained by emerging technologies and underscores the need for clearer interpretative guidance, enhanced accountability mechanisms, and potential normative developments to ensure meaningful human control and compliance with IHL in future armed conflicts.