“Algorithmic Wars”: The Strategic Implications of Military Use of Artificial Intelligence

The development, integration, and use of artificial intelligence (AI) for military purposes carry profound implications for the future of warfare and for peace and international security more broadly. The technology presents both opportunities and risks at the tactical, operational, and strategic levels, prompting its incorporation into intelligence analysis, command and control, targeting, firepower, training, simulation, equipment monitoring, and logistics.

This has prompted extensive debate about the effects of algorithmic warfare, sometimes described as “autonomous wars,” the pathways it may take, and its immediate military, ethical, and legal ramifications. At the same time, there is growing acknowledgment of AI’s broader strategic implications for power competition between states and for the escalation of conflicts, up to and including the risk of nuclear war.

In this context, the RAND Corporation published a study in September 2024 by Emily Benson, Katherine Moridian, and others, titled “Strategic Competition in the Age of AI: Emerging Risks and Opportunities from the Military Use of Artificial Intelligence.” The study sought to determine the extent to which state militaries and non-state armed groups rely on AI and how the nature of competition and conflict is changing as a result.

Strategic Implications:

The study highlighted the need for a multidisciplinary approach to planning for the strategic risks and opportunities arising from the military use of AI, noting significant gaps in theoretical understanding and empirical data regarding those benefits and risks. These gaps have fueled heated, at times ideologically rigid, debates among AI experts and policymakers, who often grapple with high levels of uncertainty about the pace and direction of future developments in AI applications.

To fully grasp the potential implications of AI, it should not be viewed merely from a technical perspective; rather, it should be understood as a complex socio-technical system shaped by human, organizational, and cultural factors. This complexity is especially pronounced in military applications, where technology alone does not determine outcomes but interacts with the contexts in which it is deployed. Even so, despite the limited evidence, policymakers cannot defer decisions about deploying AI and governing its use, which led the study to examine the strategic implications of military AI across several levels:

  • National Level: These implications concern the risks and opportunities that AI poses to strategic actors, particularly nation-states. By focusing on military rather than civilian applications of AI, the discussion is framed through the lens of ongoing competition among actors for strategic advantage, which in turn affects the balance of power and the degree of peace and stability at the international level. AI could drive widespread economic change or be used as a weapon to wage economic warfare, as well as for financial settlements in defense; it enables information manipulation (such as highly convincing deepfake videos); and it could transform the productivity of the defense industry, supporting the development and fielding of military capabilities at increasing scale.

The study then analyzed how AI shapes actors’ ability to identify opportunities and translate them into tangible outcomes, and how they can leverage their capabilities and available resources to achieve their strategic goals. AI can play a central role throughout the strategy cycle, from the collection and analysis of big data, through decision support, to the evaluation of alternative options.

Despite AI’s promising potential, further research and development are needed to understand how to harness it effectively; studies point to concerns about bias and over-reliance on these systems. They also emphasize the need to strike a balance between human and machine capabilities, given the limitations and challenges that AI systems face, especially in the military domain.

  • International Level: The study examined how military AI could transform the concept of power in international relations, analyzing how AI affects competition among states, whether through the development of the technology itself or through its use to achieve strategic objectives. Understanding how AI reshapes power dynamics and competition makes it possible to anticipate how the technology could produce unforeseen outcomes that threaten global stability.
  • Nature of Competition: The study calls for moving beyond current discussions focused on specific examples of military AI applications, such as airstrikes or nuclear weapons, toward a broader understanding of how AI affects all aspects of competition and conflict. Notable examples include countering or conducting operations below the threshold of open warfare, escalation dynamics, and the effectiveness of deterrence in a multipolar world. AI could pose a direct threat to nuclear command, control, and communication systems, potentially increasing the risk of unintended nuclear conflict or escalating existing conflicts.
  • Type of Actor: This focuses on the varying risks and opportunities AI may present to different categories of actors according to their size or relative power, including major powers, middle powers, small states, different types of governance (democracies and authoritarian regimes), non-state actors, private companies, violent extremist organizations, and others.

The risks and opportunities associated with AI vary by country: while the United States and China lead in investment and technology, other countries strive to catch up. However, these states have limited capacity to influence the trajectory of military AI development, especially compared with major corporations and the leading powers.

AI is likely to intensify strategic, technological, and military competition between the major powers (the United States and China), jeopardizing international stability. Multilateral initiatives to mitigate the risks of military AI are unlikely to succeed or endure without the buy-in of both powers, meaning that other countries will seek to influence both sides. Moreover, AI has the potential to undermine the credibility of global governance and associated institutions such as the United Nations, which have long underpinned global peace, prosperity, and stability.

Key Takeaways:

Governments can learn from past experience in managing other destabilizing technologies by analyzing successes and failures in other fields, helping to establish mechanisms and tools that ensure the ethical use of AI despite uncertainty about its future military applications. The study therefore reviewed these experiences, identified lessons learned, and distilled a set of tools and mechanisms that defense institutions and governments can employ to mitigate the emerging risks of military AI and maximize its opportunities.

The study suggested a range of practical actions to guide the global development of military AI, including:

  • Accelerating and expanding investment in AI at the levels of defense, government, and society as a whole, focusing on enhancing national resilience and preparedness to face potential threats arising from the misuse of AI.
  • Taking preventive actions to deter, inhibit, or increase the costs of military AI use by non-governmental entities, terrorist groups, and hostile states.
  • Raising awareness and identifying challenges while sharing experiences regarding the risks of AI in military contexts.
  • Promoting a comprehensive, participatory approach to building an emerging global consensus on responsible conduct regarding military AI as a precursor to more concrete agreements in the future.
  • Supporting the development of flexible and rapidly adaptable multilateral mechanisms to mitigate nuclear and biological risks associated with AI through smaller, more specialized dialogue platforms.

In conclusion, the study made clear that the development and adoption of AI by militaries and armed movements are ushering in a fundamental shift in the nature of conflict and competition, paving the way for a new era of warfare characterized by unprecedented capabilities. It also noted that the research findings are preliminary and require further validation and deepening, necessitating systematic mapping of how the risks and opportunities associated with AI intersect, drawing on insights from diverse expert perspectives.

This also calls for advanced techniques such as hierarchical clustering and trend forecasting, which help explore a range of potential scenarios for the development and governance of military AI and identify the possible implications of each. War-gaming techniques can then be used to simulate scenarios for the use of AI in future conflicts and to assess the responses of governments and other actors, whether allies or adversaries.
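To make the methodological point more concrete, the sketch below shows one way hierarchical clustering might be applied: grouping candidate military-AI futures by analyst-assigned characteristics so that distinct scenario families can be compared or fed into a war game. This is only a minimal illustration in Python using SciPy; the scenario names, feature dimensions, and scores are invented assumptions, not data from the study.

```python
# Illustrative sketch only: the scenario names, feature dimensions, and scores
# below are hypothetical placeholders, not taken from the RAND report.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical scenarios scored by analysts on three 0-1 dimensions:
# [escalation risk, pace of military AI adoption, strength of governance]
scenarios = {
    "unregulated_arms_race":             [0.9, 0.9, 0.1],
    "managed_competition":               [0.5, 0.7, 0.6],
    "multilateral_governance":           [0.2, 0.5, 0.9],
    "proliferation_to_non_state_actors": [0.8, 0.6, 0.2],
    "stalled_adoption":                  [0.3, 0.2, 0.5],
}
names = list(scenarios)
features = np.array([scenarios[n] for n in names])

# Agglomerative (hierarchical) clustering with Ward linkage on the feature vectors.
tree = linkage(features, method="ward")

# Cut the dendrogram into two scenario families, e.g. to structure a war game.
labels = fcluster(tree, t=2, criterion="maxclust")
for name, label in sorted(zip(names, labels), key=lambda pair: pair[1]):
    print(f"cluster {label}: {name}")
```

Cutting the resulting dendrogram at different depths yields coarser or finer scenario families, which analysts could then stress-test through war gaming or trend forecasting.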

Source:

RAND Europe, Strategic Competition in the Age of AI: Emerging Risks and Opportunities from Military Use of Artificial Intelligence, RAND Corporation, 2024.
