
Can Countries Manage “End-of-the-World” Threats?

Existential threats facing humanity, often referred to as “end-of-the-world” scenarios, are multiplying. Among the most pressing are nuclear weapons, the risks posed by emerging technologies, engineered pandemics, and other perilous developments. Addressing these challenges requires international cooperation to curb their detrimental impacts and safeguard human existence. In this context, the magazine Foreign Affairs published a study by William MacAskill, an associate professor of philosophy at the University of Oxford, titled “The Beginning of History: Surviving the Era of Catastrophic Risk.” Key highlights from the study are as follows.

The study outlines various catastrophic threats that could jeopardize humanity’s future:

First, it emphasizes the tremendous destructive capacity of nuclear weapons. According to MacAskill, nuclear weapons exhibit many of the critical characteristics that may define future technological threats. He notes that while destructive power grew only incrementally in the pre-nuclear era, the advent of nuclear weapons compressed what the study likens to 10,000 years’ worth of such growth into just a few decades. Catastrophic destruction could come about deliberately, as when American generals advocated a first nuclear strike against China during the 1958 Taiwan Strait Crisis, or by accident, as evidenced by the alarming record of errors in early warning systems.

Second, controlling future technologies is fraught with challenges. The author points out that many emerging technologies may become even more destructive than nuclear arsenals, could be accessed more easily by a broader range of actors, and raise concerns about dual-use applications. With these technologies, fewer mistakes might be enough to bring about human extinction, making their regulation significantly more challenging. A recent report from the U.S. National Intelligence Council identified threats such as runaway artificial intelligence, engineered pandemics, and nanotechnology weapons, alongside nuclear warfare, as sources of existential risk capable of inflicting harm on a global scale.

MacAskill also highlights the growing threat of engineered pandemics. The rapid progress in biotechnology, coupled with significantly reduced costs, presents both opportunities and dangers. While advancements promise numerous benefits, such as gene therapies for incurable diseases, the shadow of dual-use concerns looms large. Techniques used in medical research could potentially modify or create highly transmissible and lethal pathogens. Such work might take place within open scientific institutions, where scientists sometimes alter pathogens to learn how to combat them, or, with less noble intent, in programs developing biological weapons for terrorism or state use. A 2021 report from the U.S. Department of State concluded that North Korea and Russia maintain offensive biological weapons programs.

The study also points out the self-replicating nature of bacteria and viruses, in contrast to nuclear weapons, evidenced by the difficulties in containing the COVID-19 pandemic. While only nine countries possess nuclear weapons, thousands of biological labs operate worldwide, including dozens licensed to experiment with the most hazardous pathogens across five continents.

Regarding the obstacles to managing these threats, the author notes that humanity has so far done little to protect its future, hindered by several significant challenges.

One major issue is the inadequate budget for the Biological Weapons Convention (BWC), which prohibits the development, storage, and possession of biological weapons. Security expert Daniel Gerstein has described it as “the most important arms control treaty of the 21st century,” yet it lacks a verification mechanism and operates on a minimal budget. The study indicates that the BWC struggles even to collect the meager contributions owed by member states, with the conference president lamenting the “unstable and deteriorating financial situation due to long-standing arrears from some state parties.”

Moreover, the study notes the failure of states to coordinate effectively in addressing these threats. Research aimed at preventing loss of control over artificial intelligence systems constitutes a tiny fraction of overall research in the field. The article points out that military forces already deploy lethal autonomous weapons in combat, while efforts to limit such weapons have foundered for years in United Nations halls. At the level of individual countries, the situation is similarly grim: the author states that less than 1% of the U.S. defense budget is allocated to biological defense, with most of that amount earmarked for countering well-known agents such as anthrax.

The study also emphasizes the negative implications of geopolitical tensions. The author argues that the resurgence of great power competition casts doubt on the likelihood of global cooperation in mitigating these threats. Worse still, geopolitical tensions could push states to accept ever-greater risks to themselves and the world if doing so seems a worthwhile gamble for their security interests. According to the study, great powers, in their pursuit of global dominance, could even resort to overt warfare. The study stresses that avoiding the risks of a Third World War while achieving unprecedented innovations in international governance is a difficult challenge—yet, whether we like it or not, it remains a pressing issue for the world.

In response to these threats and obstacles, the study proposes several strategies for handling future risks:

First, it underscores the need for a global consensus on confronting these threats. Some argue that if controlling emerging technologies safely is exceedingly difficult, why not refrain from inventing them in the first place? This line of thinking criticizes economic growth and technological advancement as primary culprits of environmental destruction and other harms. In 2019, 11,000 scientists from over 150 countries signed an open letter urging nations to stabilize or gradually reduce global population numbers and to shift priorities away from GDP growth. While these ideas hold appeal, the author contends that such a response is unrealistic and dangerous. He asserts that even if countries temporarily came together to halt innovation, someone would inevitably resume the pursuit of advanced technologies sooner or later.

Second, the study highlights the pressing need to harness new technologies for managing threats. The author argues that technological stagnation is undesirable. While new technologies could exacerbate risks, they could also mitigate them. As seen with the emergence of nuclear weapons, governments may require additional technologies to manage these risks. If societies halt all technological progress, fresh technological threats may arise that cannot be contained due to a lack of appropriate defenses. For instance, various actors may possess the capability to create unprecedentedly dangerous pathogens at a time when little progress has been made in the early detection and eradication of new diseases.

Some experts refer to “differential technological development,” meaning that if people cannot avoid destructive technologies or accidents, they should at least strive to develop useful and protective technologies first. For example, an early warning system for new diseases would benefit the entire world; with such a system, an outbreak like COVID-19 might have been contained to a small number of countries and stamped out.

Third, the study emphasizes the importance of collecting and analyzing data to address “existential threats.” Intelligence gathering and analysis regarding the sources of large-scale threats will be critical. Although complete certainty is unattainable, surveying and predicting what lies ahead can assist in identifying new concerns. In this context, a recent “Global Trends” report from the National Intelligence Council called for “development of resilient strategies for survival.”

Fourth, the establishment of “clubs” among countries for security cooperation is suggested. The author proposes that, to maintain security, nations could enter into agreements to collectively refrain from developing particularly dangerous technologies, such as biological weapons. An alliance of willing states could unite to form what economist William Nordhaus labels as a “club” that collectively promotes the global public good that the club was established to support. Simultaneously, member states would commit to providing mutual benefits (such as economic growth or peace) while imposing costs (through measures like tariffs) on non-members to incentivize them to join. For instance, “clubs” could be based on safety standards for artificial intelligence systems or a moratorium on hazardous biological research.

Lastly, the study underlines the necessity of regulatory reform to curb potential threats. In his book “Averting Catastrophe,” Cass Sunstein—former administrator of the White House Office of Information and Regulatory Affairs—illustrated how the current governmental approach to cost-benefit analysis fails to adequately account for the risks of potential catastrophes. Sunstein has argued for what he calls the “maximin principle”: in the face of significant risks, governments should focus on eliminating the worst possible outcomes. The author notes that the White House is currently updating its framework for reviewing regulatory measures; he encourages U.S. officials to seize this opportunity to adapt their approach to managing low-likelihood but high-impact risks for the 21st century, whether by adopting Sunstein’s maximin principle or a similar approach that takes global catastrophic risks seriously.

In conclusion, the study asserts that powerful and destructive technologies pose an unprecedented challenge to the current political system. Advanced artificial intelligence could undermine the existing balance of power between individuals and states; dictatorial governments employing AI in their military and police forces may suppress the potential for uprisings or coups. Additionally, governments could use the prospect of a Third World War as justification for curbing individual liberties, such as freedom of expression, under the guise of protecting national security. They may also invoke the likelihood of easy access to biological weapons to justify widespread surveillance. Therefore, the study concludes that humanity must resist these pressures for the sake of its future, emphasizing that the cultural shift towards liberalism over the past three centuries has engendered a driving force for moral progress, the spread of democracy, the abolition of slavery, and the expansion of rights for women and racial minorities. To build upon this progress, we must ensure that global cooperation reduces the risks of worldwide catastrophe while preserving and fostering freedom of thought and diversity, thereby securing a brighter future for ourselves and generations to come.

Source:

William MacAskill, “The Beginning of History: Surviving the Era of Catastrophic Risk,” Foreign Affairs, September/October 2022.

Mohamed SAKHRI

