Military decisions made in the blink of an eye and combat missions executed with pinpoint accuracy: this is the scenario painted by artificial intelligence (AI), a technology that has become a focal point of interest for military planners and futurists alike. AI is a branch of computer science that aims to develop systems capable of simulating human mental capabilities such as learning, reasoning, and problem-solving. On the battlefield, AI can be applied across a wide array of tasks, from analyzing intelligence data to directing weapons.
Imagine drones that can identify and destroy their targets without human intervention, or combat robots capable of adapting to changing conditions on the battlefield and making instant combat decisions. These are not mere science fiction; they are technologies already under development.
AI can analyze vast amounts of intelligence data in record time, allowing military leaders to make more informed decisions. Intelligent systems can discover patterns in data that human analysts might overlook, potentially leading to significant intelligence advantages. AI can also guide weapons with extraordinary precision, reducing civilian casualties and improving the effectiveness of attacks. Moreover, intelligent systems can track moving targets and predict their trajectories, making it harder for the enemy to hide.
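To make the trajectory-prediction claim concrete, here is a minimal sketch of the simplest possible approach, constant-velocity extrapolation from two position fixes. The coordinates, time step, and motion model are illustrative assumptions; fielded tracking systems use far richer models such as Kalman filters with maneuver handling.

```python
# Minimal sketch: constant-velocity target prediction from two noisy
# position fixes. All numbers and the motion model are illustrative
# assumptions, not a description of any real tracking system.

def predict_position(fixes, dt_ahead):
    """Extrapolate a future (x, y) from the last two (t, x, y) fixes."""
    (t0, x0, y0), (t1, x1, y1) = fixes[-2], fixes[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # estimated velocity
    return x1 + vx * dt_ahead, y1 + vy * dt_ahead

# Two fixes one second apart; predict the position 3 seconds ahead.
fixes = [(0.0, 100.0, 200.0), (1.0, 104.0, 197.0)]
print(predict_position(fixes, dt_ahead=3.0))  # -> (116.0, 188.0)
```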
Despite the potential benefits of AI in the military domain, there are many challenges and risks that must be taken into account. It is crucial for concerned nations and international organizations to study how to ensure that intelligent systems are not used for harmful purposes. The recent proliferation of autonomous weapons raises the specter of a new arms race, increasing the risk of large-scale conflicts.
AI: A Double-Edged Sword in Battlefields
The world is witnessing rapid developments in technology, especially in AI, which has begun to penetrate every aspect of our lives. Today, AI plays a pivotal role in changing the game, particularly in military contexts. For decades, films and novels have depicted combat robots capable of making independent military decisions, and today that science fiction is drawing closer to reality. Thanks to AI, it is now possible to develop autonomous weapons capable of identifying and destroying their targets without human involvement, and to analyze vast quantities of intelligence data to anticipate enemy movements and make swift, effective decisions.
However, like any innovation, AI in the military has its bright and dark sides. On the one hand, AI can reduce human casualties in wars and enhance the precision of attacks. On the other, this development raises significant ethical and legal concerns. Who is responsible for the errors that intelligent systems might commit? How can we ensure that these systems are not used for harmful purposes?
Arming robots with the capacity to make life-or-death decisions for humans opens the door to many difficult questions. Can we trust that these systems will always act ethically? How can we protect civilians from the collateral damage of these technologies?
The development of AI in the military requires serious consideration of the implications and the establishment of a clear ethical and legal framework to regulate the use of these technologies. We must ensure that the use of AI in warfare is under full human control and that its aim is to protect lives rather than destroy them.
AI is a double-edged sword, and it can be a great force for good or evil. It all depends on how we choose to use it. We must invest our efforts in developing this technology responsibly and ethically, ensuring it serves all of humanity.
Ukraine Develops “Dog Robots” to Protect Army Forces
However, the most striking recent news reported by the media concerns the significant Ukrainian incursion into Russian territory, particularly in the “Kursk” region. Ukraine is developing dog-shaped robots to compensate for troop shortages along the front lines of its conflict with Russia and to carry out dangerous missions such as scouting Russian trenches and detecting mines.
Robot Dog Trials
Trials of the robotic dog known as BAD One have been conducted at an undisclosed location in Ukraine, where the robot demonstrated its ability to run, crouch, and jump on commands from its operator, according to “Techxplore.” The manufacturers explained that the robot is agile and capable of stealth, making it a “valuable ally” on the battlefield for the Ukrainian army.
Capabilities of Robotic Dogs
Robotic dogs are designed to be low to the ground, making them hard to detect. They can use thermal imaging to scan trenches and buildings in combat zones. The robot is equipped with a battery that enables it to operate for approximately two hours, and it can detect mines or improvised explosive devices. The robotic dog can carry about 7 kilograms of ammunition or medical supplies to hot spots on the battlefield.
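As a quick sanity check on those published figures, the sketch below tests whether a resupply run fits within the stated two-hour battery. Only the endurance and the 7-kilogram payload come from the reporting; the walking speed and battery reserve are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope mission check against the stated ~2 h battery.
# Only BATTERY_HOURS and PAYLOAD_KG come from the reporting; the speed
# and reserve margin are illustrative assumptions.

BATTERY_HOURS = 2.0  # stated endurance
PAYLOAD_KG = 7.0     # stated payload (ammunition or medical supplies)
SPEED_KMH = 5.0      # assumed average walking speed
RESERVE = 0.20       # assumed fraction of battery kept in reserve

def round_trip_feasible(distance_km: float) -> bool:
    """Can the robot reach a point and return on the usable battery?"""
    usable_hours = BATTERY_HOURS * (1 - RESERVE)
    return (2 * distance_km) / SPEED_KMH <= usable_hours

print(round_trip_feasible(3.5))  # 7 km round trip in 1.4 h -> True
print(round_trip_feasible(4.5))  # 9 km round trip in 1.8 h -> False
```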
A soldier responsible for operating the dog stated: “We have soldiers on watch who are always at risk, and this dog mitigates those risks and enhances operational capabilities.”
Definition of Military Robots and Their Types
Before delving into the legal analysis, it is essential to define the concept of a military robot and its types. A military robot can be defined as a mechanical system capable of performing military tasks either partially or fully independently, based on its programming or the machine learning algorithms it is supplied with.
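The phrase “partially or fully independently” is often unpacked using the human-in-the-loop, on-the-loop, and out-of-the-loop taxonomy. The sketch below models that framing in code, with the caveat that it is a widely used convention rather than a definition drawn from this text or from any treaty.

```python
# A common way to formalize "partially or fully independent": the
# human-in/on/out-of-the-loop taxonomy. The framing is a widely used
# convention, not a definition taken from any treaty text.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1      # a human must approve each action
    HUMAN_ON_THE_LOOP = 2      # the system acts; a human may intervene
    HUMAN_OUT_OF_THE_LOOP = 3  # the system acts without supervision

def requires_human_approval(level: AutonomyLevel) -> bool:
    """Only in-the-loop systems need per-action human sign-off."""
    return level is AutonomyLevel.HUMAN_IN_THE_LOOP

print(requires_human_approval(AutonomyLevel.HUMAN_ON_THE_LOOP))  # False
```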
A robot is an intelligent machine designed to perform a wide range of tasks, from simple industrial jobs to complex tasks requiring high cognitive abilities. Robots are characterized by their ability to learn and adapt to their environment, making them a powerful tool in many fields. Recent years have witnessed incredible developments in robotics, with the emergence of robots capable of social interaction and continuous learning, such as Sophia the robot, which represents a significant leap in this field.
Military robots represent a technological revolution that is reshaping modern warfare. These complex systems, ranging from unarmed drones to autonomous combat robots, raise profound legal and ethical questions about the use of armed force.
Legal Challenges of Using Robots in Warfare
The use of military robots in warfare faces numerous legal challenges, the most important of which include:
- Criminal Responsibility: Who bears criminal responsibility for the actions committed by autonomous robots? Is it the manufacturer, the state using them, or the human operator (if any)?
- Distinction and Proportionality: Can robots differentiate between combatants and civilians, and can their attacks be kept proportionate to the anticipated military advantage?
- Protection from Attacks: What status do robots have under international humanitarian law, and may they be attacked as legitimate military targets?
- Human Oversight: What is the minimum level of human oversight required to ensure compliance with international humanitarian law? (A minimal software sketch of such an oversight gate follows this list.)
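As one concrete reading of the human-oversight question, the hypothetical sketch below gates every engagement behind an upstream classification and an explicit, logged human authorization. The names, types, and logging scheme are invented for illustration and do not describe any real system.

```python
# Hypothetical human-in-the-loop engagement gate: nothing proceeds
# without an upstream military-objective classification AND a named
# human operator's explicit approval. All names and interfaces are
# illustrative assumptions, not a description of any real system.
import logging
from dataclasses import dataclass
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("engagement-gate")

@dataclass
class EngagementRequest:
    target_id: str
    classified_as_military: bool       # upstream classifier's verdict
    operator_id: Optional[str] = None  # set when a human reviews it

def authorize(req: EngagementRequest, human_approved: bool) -> bool:
    """Approve only if the target is a military objective and a named
    human operator has explicitly signed off; log every decision."""
    if not req.classified_as_military:
        log.info("Denied %s: not a military objective", req.target_id)
        return False
    if not human_approved or req.operator_id is None:
        log.info("Denied %s: no human authorization", req.target_id)
        return False
    log.info("Authorized %s by operator %s", req.target_id, req.operator_id)
    return True

req = EngagementRequest("T-017", classified_as_military=True, operator_id="op-42")
print(authorize(req, human_approved=True))   # True: classified and approved
print(authorize(req, human_approved=False))  # False: no human sign-off
```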
Robots and International Humanitarian Law
Despite the challenges posed by the use of robots, international humanitarian law remains the reference legal framework governing the behavior of parties to armed conflicts. Military robots, like any other weapon, must adhere to its basic principles. These include the principle of distinction, which requires that combatants be distinguished from civilians and that attacks be directed only against legitimate military objectives, and the principle prohibiting excessive injury or unjustifiable suffering.
Even a weapon that is not prohibited in itself may still be used in violation of the principles and rules of warfare and of international humanitarian law; its use must therefore comply with the rules governing the use of means of combat during military operations. These rules are referred to as standards, and we clarify them in two parts: first, the standard of excessive injury or unjustifiable suffering, and second, the standard of indiscriminate effect.
1. Standard of Excessive Injury or Unjustifiable Suffering
The principle of unjustifiable suffering faces new challenges due to rapid technological developments. Modern weapons, such as precision weapons and cluster munitions, allow for greater targeting accuracy but simultaneously raise questions regarding potential collateral damage. Furthermore, lethal autonomous weapons pose entirely new challenges, as it may be difficult to determine responsibility for the damage caused by these systems.
The principle of unjustifiable suffering is an integral part of the set of principles governing the conduct of parties in armed conflicts, closely linked to the principle of distinction, which aims to protect civilians, and the principle of proportionality, which requires that the expected harm to civilians be proportional to the military advantage anticipated. Respecting these principles collectively is essential to ensuring civilian protection and reducing human suffering caused by armed conflicts.
Rules of international humanitarian law prohibit the use of weapons that cause excessive injury or unjustifiable suffering, as well as indiscriminate weapons and weapons that inflict severe and lasting damage on the natural environment. This law governs the conduct of hostilities and regulates combat methods and means. It applies to autonomous military robots as one of the means of combat, and their legality is subject to the customary and written rules of international humanitarian law. Consequently, where these weapons cannot be reconciled with international humanitarian law, they must be prohibited, as is the case with conventional weapons.
The 1980 Convention on Certain Conventional Weapons provides a suitable framework for addressing the issue of emerging technologies in autonomous military systems, considering the goals and objectives of the convention, which aims to strike a balance between military necessity and humanitarian considerations.
Thus, the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects can be read, as its preamble indicates, as not limited to existing conventional weapons but as extending to weapons that may arise in the future, which opens the possibility of bringing weapons reliant on artificial intelligence technologies within the convention’s scope of application.
2. Standard of Indiscriminate Effect
Lethal autonomous robots raise profound questions about ethical and legal responsibility. The inability of these systems to accurately distinguish between military and civilian targets makes them prone to committing serious violations of international humanitarian law. Who is responsible for the actions of these robots? Is it the manufacturers, the states using them, or the programmers who designed their algorithms?
The development of lethal autonomous robots faces significant technical challenges. Even with rapid advances in artificial intelligence, it remains difficult to model the full complexity of reality and the battlefield. Factors such as smoke, adverse weather conditions, and deception can degrade a robot’s ability to identify targets accurately. Moreover, human behavior on the battlefield is often unpredictable, making it hard for robots to anticipate how people will act.
Article 51(4) of Additional Protocol I defines indiscriminate attacks as those that are not directed at a specific military objective, that employ methods or means of combat which cannot be directed at a specific military objective, or that employ methods or means of combat whose effects cannot be limited as required by international humanitarian law. Consequently, such attacks may strike military and civilian targets without distinction. The failure of autonomous weapons to distinguish between military and civilian targets threatens grave violations of international humanitarian law: their use could lead to widespread civilian casualties and the destruction of civilian infrastructure, exacerbating human suffering and undermining trust in the international system.
Given the technical, legal, and ethical challenges posed by the use of autonomous weapons, there are increasing calls for a complete ban on these weapons. Even with technological advancements, ensuring that these systems will always comply with the principles of international humanitarian law remains difficult. A complete ban on these weapons is the most effective way to protect civilians and ensure the security and stability of the international community.
The growing reliance on autonomous robots in the military necessitates the development of an international legal framework governing the use of these technologies. This framework should focus on establishing accountability, ensuring compliance with international humanitarian law, and protecting civilians. It should also include mechanisms for accountability and compensation in the event of violations, as the absence of such a legal framework could lead to an arms race in robotics and undermine international security and stability.
Conclusion
Military robots and other AI weapons pose a new challenge to both international humanitarian law and technology. From a technical perspective, these systems raise questions about how to ensure they operate reliably and accurately and can distinguish between military and civilian targets. Legally, clear rules must be established to define the responsibility of states, manufacturers, and users for any damages resulting from the use of these systems. Additionally, adequate safeguards must be put in place to prevent these systems from falling into the wrong hands or being used for unlawful purposes.
Addressing the challenges posed by the use of robots in combat requires extensive international cooperation. UN member states and other international institutions must work together to develop common standards governing the use of these systems. Dialogue among legal, ethical, and technical experts should also be encouraged to develop a comprehensive framework that ensures the responsible and safe use of robots.