Artificial Intelligence and the Defense of Fundamental European Interests: A Reading through Dependency Theory

  • By Stéphane Mortier (Revue Défense Nationale 2024/8 n° 873)
  • Translated by Mohamed SAKHRI

The ability of countries to develop AI companies is essential to their competitiveness: these companies provide the tools and services for the growing number of businesses adopting AI. Indeed, the share of large enterprises worldwide using AI in at least one function or business unit is increasing year after year; in 2023, 83% of companies reported that AI was a top priority for the years ahead. Since power now operates on an informational basis, the determinants of success or advantage depend less on the information one possesses, and the knowledge that can be derived from it for strategic purposes, than on the management of increasingly automated information (Atif et al., 2022). In this sense, AI is the object of a power struggle and falls squarely within the scope of economic intelligence.

Clearly, the powers are now engaged in a race for AI, and only two countries appear capable of winning it: the United States and China (Thibout, 2018). Europe, for its part, is striving to regain a leading position ahead of other players such as Russia and Korea (Atif et al., 2022). The risk lies not in the use of AI itself but in the algorithms and tools that enable its application. Most of these algorithms and tools are developed outside the European Union, placing its member states in a form of dependency on foreign powers. This inevitably generates risks for national security and for the protection of the fundamental interests of European states.

This situation of dependency is reminiscent of theories developed during the decolonization process, in particular the dependency theory based on the work of Theotonio Dos Santos (1970). That theory critiques structuralism, a theoretical corpus developed within the United Nations Economic Commission for Latin America (ECLAC) in the 1950s and 1960s. It nonetheless has particular relevance in the context of AI dependencies. In an unusual but, we argue, not trivial exercise, we will attempt to analyze the European situation through this theoretical framework. The point is not to treat Europe as a developing region, but to show that in certain sensitive sectors the old Western powers may themselves find a state of dependency, with all the societal consequences that may entail.

Dependency Theory as a Conceptual Framework
First and foremost, it is important to note that there has never really been a consensus on the notion of dependency. A look at the works of theorists in this field will confirm this. We choose to rely on the works of T. Dos Santos and use his definition of dependency as a reference.

By dependency, he means a situation in which the economy of certain countries is conditioned by the development and expansion of another economy to which it is subjected. The relationship of interdependence between two or more economies, and between them and the global market, takes the form of dependency when some countries (the dominant ones) can expand and be self-sustaining, while others (the dependent ones) can do so only as a reflection of that expansion, which can have either a positive or a negative effect on their immediate development.

Transposed to artificial intelligence, this reads as follows: dependency is a situation in which the development of AI in one state is conditioned by its development and expansion in other powers to which that state is subjected. The relationship of interdependence between two or more states developing AI, and between them and each one’s strategic needs, takes the form of dependency when some of these states (the dominant ones) hold the data and algorithms needed to develop AI tools, while others (the dependent ones) rely heavily on tools developed by the dominant states, which can have a positive or negative impact (depending on their geopolitical alliances) on their national security.

In the post-war period, a new type of dependency emerged, based on multinational companies that began investing in industries directed toward the domestic markets of underdeveloped countries. This form of dependency is essentially technological and industrial. In the case of Latin America, the degree of success of national economies in the decolonized situation depended economically on the following elements (Cardoso, 1967):

  • The availability of a primary product capable of supporting, transforming, and developing the export sector inherited from the colonial period;
  • An abundant labor force;
  • The availability of arable land or highly profitable mineral deposits.

These three points characterize Europe’s current dilemma concerning AI: the first factor corresponds to the current industrial fabric, the second to the capacity for producing personal or other data, and the third concerns the pool of data generated that escapes European actors in favor of foreign players (United States and GAFAM, China and BATX [1]).

The post-colonial period was marked by investment from (or the continued presence of) economic actors from the former colonizing power, local entrepreneurs having neither the scale nor sufficient capital. This is, ceteris paribus, the current situation in Europe, where many strategic companies (linked to national security or to fundamental national interests) are being acquired by American or Chinese economic operators through foreign investment. This situation has prompted the EU to legislate to protect its strategic economic assets (notably the regulation on the screening of foreign direct investments), and certain member states, such as France, to implement public policies of economic security.

Sovereignty, Fundamental Interests of the Nation or the Union
Here, sovereignty must be understood as digital sovereignty, as it is this type that is being discussed in relation to AI. However, achieving digital sovereignty at the European level seems particularly complex, as each member state is sovereign in this regard… Nevertheless, common interests do indeed exist.

The concept of digital sovereignty refers to the capacity of a given entity (a nation, a company, an individual) to control the digital attributes (data, information, knowledge, algorithms) of objects it claims to observe or even control. “Control” does not necessarily mean that the entity owns (in the sense of full property) the objects in question nor, by extension, their digital attributes, notably their data (Ganascia et al., 2018). This is how digital sovereignty is conceived in France and, more broadly, in Europe. Data (intangible assets) are the stakes of this sovereignty, and the European Union possesses a legislative arsenal in this area (GDPR, AI Act, DORA, the European Chips Act, the Digital Markets Act, the Digital Services Act, etc.).

Thus, a state is obligated to safeguard its sovereignty by defending its fundamental interests (which are the national security concern). These interests are described as follows in the French Penal Code (art. 410-1): “Its independence, (…) the integrity of its territory, (…) its security, (…) the republican form of its institutions, [the] means of its defense and (…) of its diplomacy, (…) the safeguarding of its population in France and abroad, (…) the balance of its natural environment and [the] essential elements of its scientific and economic potential, and of its cultural heritage.”

The main risk to national security in the use of AI is dependence on foreign technologies. There is a genuine technological race between the United States and China, and these two powers’ investments in European companies are fully part of that competition. It is particularly difficult for European states to coordinate against this type of economic predation, despite the European mechanism for screening foreign investments. European states are therefore increasingly reliant on imported technologies, and thus dependent on them. The use of algorithms developed and controlled by these foreign powers creates not only dependency but also a risk of interference. Moreover, AI controlled by other sovereign actors opens the door to cyberattacks, content manipulation, or the diversion of strategic information. A foreign power could even seize a form of social governance over our populations and carry out disinformation campaigns aimed at destabilizing the social order.

Beyond the major state powers, it is necessary to consider the economic environment of AI. The primary actors in its development globally are the largest technology companies. The GAFAM (Google, Apple, Facebook, Amazon, Microsoft) are economic giants of unprecedented size, often in a quasi-monopoly position, which gives them enormous power (Cazals and Cazals, 2020). Although they are economic players, they represent foreign forces, and their modus operandi is as follows (DeCloquement and Luttrin, 2023):

  • Analyzing targets (psychological vulnerabilities, economic and social weaknesses, operational modes, networks, family and professional environments, identification of local needs);
  • Exploiting psychological vulnerabilities and fulfilling a specific local need (short-term strategy);
  • Penetrating territories, impoverishing them (medium- and long-term) to better absorb the state.

Private strategies then align with the strategic interests of governments or even replace them. This is particularly true in Europe, where there is a form of dependency on the United States, but also elsewhere in the world, where Chinese actors are playing an increasingly significant role in this field. For the United States and the West in general, China represents the main ideological adversary and the largest economic and technological competitor (including in fields such as microelectronics, wireless technology, and AI), and it is the most important player in the global economy. Indeed, beyond AI itself, it is the entire environment and value chain that can create dependency situations. This value chain encompasses all the hardware necessary for AI operation: chips, servers, storage, fast networks, low-latency and high-performance systems, 5G, Edge Computing, operations, software, and data collection (Harry, 2023). All this indispensable equipment is strategic and thus falls within the realm of national security or the fundamental economic, industrial, or scientific interests of the nation and more broadly of the Union.

Dependency…
Recently, one of the leading AI organizations, the Centre for the Governance of AI in Oxford, published an intriguing white paper by Toby Ord (2022), “Lessons from the Development of the Atomic Bomb.” This highly relevant text indirectly compares the resources required for a state wishing to possess an atomic bomb with those needed to develop its own operational AI system. Developing an atomic bomb, or a complete sovereign AI system, requires four main conditions:

  • Possessing the raw materials (which Europe does not have sufficiently in terms of materials);
  • Having unwavering political will in the short, medium, and long term (impossible in the current state of the EU and national prerogatives);
  • Having access to the necessary human, technological, and financial resources for decades (Europe has never had a program budgeted in the trillions of euros, the order of magnitude needed for such an AI system; everything implemented to date, across all domains, does not exceed a few billion);
  • Actively spying on those who already possess the bomb (in this case, AI) and keeping strict secrecy regarding the development of the weapon itself (its own AI).

For years, the GAFAM have amassed data without users fully realizing the extent of what was happening. Since the system sustains itself, it is utopian to imagine that companies in European democracies, which cannot deploy the same means as China, could close the gap. The current situation poses two problems for Europe (Atif et al., 2022):

  1. From a purely economic standpoint, AI regulation could hinder the development of European companies compared to their foreign competitors, which is a dilemma.
  2. The risk of losing mastery of technology and making Europe a “de facto” vassal of the United States and China in the field of AI, in other words, finding itself in a situation of dependency…

In dependency theory, countries are integrated into the global economy but are structurally placed in a state of ongoing dependency. This was the case in the 1950s. In the context of AI developments, the power wielded by foreign powers (the United States and China) over the development capacities of European actors, whether public or private, places them in a similar situation. Given the budgets allocated and the centralization of projects by the two major powers in AI development, this European dependency seems set to persist. However, the fundamental interests of European states or, more generally, of the European Union are increasingly tied to mastery of AI tools. Dependency on these tools consequently leads to dependency on foreign powers. This is where the danger lies for the fundamental interests of European states.

However, this is not a matter of unequal exchange between actors, but of unequal access to the data necessary for developing AI tools, unequal budgets devoted to them, and controlled access to the tools necessary for protecting European fundamental interests. In response, Europeans are enacting numerous legal provisions to shield themselves from this dependency. Yet, as described above, these legal constraints are difficult to implement and to coordinate with one another, and they are swiftly undermined by the major foreign powers’ own legislation (such as the US Cloud Act).

The aim is not to demonstrate that an interpretation based on dependency theories would place Europe and Europeans in a situation similar to that of colonized countries in the last century. However, technological (and not economic) dependency in AI creates difficulties in defending the fundamental interests of dependent states. Relying on foreign powers (dominant ones) to defend one’s own interests inevitably creates a state of dependency.

Conclusion
It is challenging to contextualize a recent phenomenon, artificial intelligence, with a theoretical reading based on the concept of dependency stemming from decolonization. It may have been somewhat bold to embark on such an analysis, but dependency situations evolve and can thus take on new forms. From the economic dependency of newly decolonized countries to the technological dependency of smaller nations, there inevitably exists a corollary impact on power and the defense of national interests. This precisely encapsulates the heart of our analysis: dependency exacerbates the loss of power through the difficulty of autonomously managing one’s fundamental interests.

The race for AI has been underway for several years, and a few giants have arrogated to themselves, and continue to do so, the capturing of the data necessary for developing their tools. The United States and China appear to be waging a technological and economic war in which the rest of the world struggles, with little success, to maintain an acceptable position.

References

  • Atif Jamal, Burgess J. Peter et Ryl Isabelle, Géopolitique de l’IA : Les relations internationales à l’ère de la mise en données du monde, Le Cavalier Bleu, 2022, 150 pages.
  • Castro Daniel et McLaughlin Michael, « Who Is Winning the AI Race: China, the EU, or the United States? », Center for Data Innovation, janvier 2021 (https://www2.datainnovation.org/2021-china-eu-us-ai.pdf).
  • Cazals François et Cazals Chantal, « GAFAM et BATX contre le reste du monde », in Cazals F. et Cazals C., Intelligence artificielle. L’intelligence amplifiée par la technologie, De Boeck Supérieur, 2020, 320 pages.
  • DeCloquement Franck et Luttrin Aurélie, « La souveraineté numérique au fondement de notre performance nationale », in CERCLE K2, Les enjeux du Big Data, Cercle K2, 2023, p. 138-149 (https://cercle-k2.fr/).
  • Dos Santos Theotonio, « The Structure of Dependence », The American Economic Review, vol. 60, n° 2, 1970, p. 231-236.
  • Harry Jean-Baptiste, « Comment l’utilisation d’un matériel dédié au rendu graphique a permis la troisième vague de l’intelligence artificielle et ses applications concrètes ? », in CERCLE K2, Les enjeux du Big Data, op. cit., p. 128-133.
  • Klossa Guillaume, « Pour garantir notre souveraineté industrielle, l’urgence est de développer une stratégie européenne systémique pour l’IA », La Revue du Trombinocope, n° 283, juillet-août 2023, p. 20.
  • Mortier Stéphane, « IA et cybersécurité, les instruments de conquête d’un espace non territorialisé », Droit & Patrimoine, n° 298, janvier 2020.
  • Nour Mohamed Rida, « Géopolitique de l’Intelligence Artificielle : Les enjeux de la rivalité sino-américaine », Paix et Sécurité Internationales–EuroMediterranean Journal of International Law and International Relations, n° 7, 2019, p. 231-259.
  • Ord Toby, Lessons from the Development of the Atomic Bomb, Oxford, Centre for Governance of Artificial Intelligence, novembre 2022 (https://cdn.governance.ai/Ord_lessons_atomic_bomb_2022.pdf).
  • Thibout Charles, « L’intelligence artificielle, une géopolitique des fantasmes », Études digitales, vol. 2018-1, n° 5, p. 105-115.