
Like many others, I find myself deeply immersed in the ongoing debate surrounding artificial intelligence, particularly generative AI. The discussions among technology experts, philosophers, official institutions, and even the general public are often perplexing. This bewilderment is especially evident for the average person on the street, or even for someone like me, who holds advanced degrees in physics and mathematics and has extensive experience in public service, where governance and national security were my primary focus.
Contrasting Views:
Some consider artificial intelligence—or more precisely, “algorithmic decision-making”—to be merely a new phase of the industrial revolution stemming from the widespread use of computers. Algorithms are simply defined sets of rules designed to solve specific problems.
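An algorithm, in this plain sense, is a fixed recipe rather than anything resembling cognition. A brief illustrative sketch in Python (binary search, a textbook example chosen here for illustration, not drawn from the essay itself):

```python
# A minimal example of an algorithm as a defined set of rules:
# binary search solves one specific problem -- locating a value
# in a sorted list -- by following fixed, predetermined steps.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2        # inspect the middle element
        if sorted_items[mid] == target:
            return mid                 # rule: found, so stop
        elif sorted_items[mid] < target:
            low = mid + 1              # rule: search the upper half
        else:
            high = mid - 1             # rule: search the lower half
    return -1                          # rule: exhausted, so not found
```

Every step is prescribed in advance; the machine exercises no judgment, which is precisely the contrast the next view draws with generative AI.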
Proponents of generative AI assert that it represents a natural and transformative advancement in technology and modernity, even though some call for its regulation, warning of potential negative implications, particularly for freedom of expression and creativity.
However, others firmly believe that generative AI differs fundamentally from earlier transformative technologies: it can generate and present new information through cognitive capabilities that rival human abilities across various domains, albeit without direct instruction. This stands in stark contrast to previous technologies, which merely enhanced machine efficiency through algorithms.
Discussion on Regulatory Frameworks:
The world today acknowledges that generative AI has the potential to offer significant advancements, but it also presents extremely complex challenges, with many openly admitting they do not fully grasp either its positive or its negative impacts.
One camp argues that, as was the case with previous technologies, regulating the use of generative AI is crucial, especially given the current unpredictability of its capabilities and the scope of its potential. Another camp believes that this new technology is so revolutionary and transformative that its capabilities cannot be predicted, particularly since they have already been unleashed, making real-world regulation nearly impossible. This camp also contends that any attempt to regulate generative AI may be futile, fostering only a false sense of security and complacency.
Ethical and Geopolitical Implications:
Generative AI raises highly challenging technical questions that involve numerous critical ethical issues, with experts themselves differing on the answers. As the field of AI applications evolves, the scope of these issues and concerns grows. If we examine the effects of generative AI from an international or geopolitical perspective—or through the lens of national security—we confront variables that can indeed send shivers down our spines.
With the advent of generative AI, are we witnessing a shift in how populations and nations are governed, transitioning from human governance to AI governance, especially considering the significant increase in this technology's influence on human interactions? Will technology, particularly generative AI, become an added value or a new burden in this context? Will it evolve into an ally or an adversary? Is the traditional division of the world into developed and developing countries giving way to a new model that separates countries with generative AI capabilities from those without? Will this widen the gap between the two?
These questions are frequently raised due to the immense potential of AI, its ever-changing nature, and the unpredictability of its capabilities and associated risks.
Many technological advancements, including generative AI, are funded by the military; thus, the potential risks of militarizing AI should not be underestimated. We must seriously consider the possibility of an arms race involving generative AI applications. How do national security experts assess the security implications of using generative AI when its scope of activity is not predefined and is not fully controlled or managed by its users, allies, or adversaries? How will we interpret potential AI errors, especially given the technology's opaque, self-learning nature and the diminishing role of humans in military decision-making?
It is worth noting that the most advanced military nations have attempted to mitigate the repercussions of irrational or incorrect use of nuclear weapons by implementing a "dual-key" system, which requires authorization from more than one person before these devastating weapons can be launched.
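The logic of the dual-key rule can be sketched in a few lines. This is a hypothetical illustration of the principle only; the officer names, the roster, and the function are all invented, and real launch-control systems are of course vastly more elaborate:

```python
# Hypothetical sketch of a "dual-key" (two-person) rule: an
# irreversible action proceeds only when two *distinct* authorized
# officers both consent. All names here are invented.

AUTHORIZED_OFFICERS = {"officer_a", "officer_b", "officer_c"}

def dual_key_authorized(approver_1, approver_2):
    """Return True only if two different authorized officers approve."""
    return (
        approver_1 in AUTHORIZED_OFFICERS
        and approver_2 in AUTHORIZED_OFFICERS
        and approver_1 != approver_2   # one person cannot turn both keys
    )
```

The essential safeguard is the inequality check: no single individual, however senior, can satisfy the rule alone, which is exactly the property the essay suggests is worth preserving as decision-making authority migrates toward AI systems.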
Everyone today, regardless of their stance, agrees that generative AI exists and is spreading rapidly and broadly. Most opinions lean towards the notion that regulating its use would be beneficial. However, even within this majority, some recognize that effective regulation may be technically impossible, at least for now.
Necessities of Regulation:
I concur with the views asserting that generative AI is here to stay, and I support the position that establishing some forms of international regulation for its use is essential. While some aspects of AI may currently lie beyond our regulatory reach, establishing whatever frameworks we can is certainly better than leaving matters entirely to chaos or chance. I do not advocate hindering innovation or the free flow of information; rather, I assert that we need a degree of regulation, even an incomplete one, to mitigate the risks associated with AI use.
Some regulation—provided it is based on reliable scientific knowledge—is, in essence, better than no regulation at all. At a minimum, such regulation can make misuse more difficult, reduce the margin for potential errors, and ensure that innovation and freedom of expression are not unjustifiably hindered.
There remains much to understand about generative AI for the general public—and possibly for experts too—and it is crucial to bridge the gap between science and science fiction regarding the benefits and risks of this new technology. While the United Nations is studying artificial intelligence through various initiatives, I urge reputable scientific institutions to collaborate on preparing a guideline document regarding generative AI, focusing on what we know and what we do not know about it.
It would be beneficial—to build upon existing scientific knowledge—to establish voluntary cooperative foundations that optimally harness generative AI for social and economic development while guarding against its negative repercussions. Similarly, cooperative foundations should be established to maximize national security benefits while reducing the negative security implications of this technology.
Among the valuable attributes of generative AI are transparency, ease of communication, and rapid response. Past abuses of lower-level technologies have been partially contained by building virtual firewalls. Developing comparable safeguards against generative AI actions that lack full authorization from senior authorities would, in the most sensitive national security applications, serve even the most advanced militaries, especially amid the rising tensions of our increasingly polarized world.
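The firewall-style safeguard described above amounts to an allowlist with a human in the loop. A minimal sketch of the idea, under the assumption of a system where AI-proposed actions are screened before execution; the action names and the gate function are invented for illustration:

```python
# Hypothetical safety gate: every action proposed by an AI system is
# checked against an explicit allowlist; anything outside it is
# blocked pending human authorization. Action names are invented.

ALLOWED_ACTIONS = {"generate_report", "summarize_briefing"}

def gate_action(proposed_action, human_approved=False):
    """Permit allowlisted actions; everything else needs human sign-off."""
    if proposed_action in ALLOWED_ACTIONS:
        return "executed"              # routine, pre-authorized action
    if human_approved:
        return "executed"              # sensitive action, explicitly cleared
    return "blocked"                   # default: do nothing without approval
```

The design choice worth noting is the default: an unrecognized action is blocked, not executed, so the burden of authorization always falls on a human rather than on the AI system itself.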
These initial suggestions are not a cure-all for regulating artificial intelligence; rather, they aim to deepen understanding of its capabilities and support efforts to harness its potential and manage its risks. In a global system rife with inequality and polarization, we must seize every opportunity for progress while guarding against uses of technology that amplify destructive capabilities and raise the risks of miscalculation on national security issues.