LegalPolitics

The Revolution of Language Models: The Dangerous Applications of AI in Biological Terrorism

In the summer of 1990, three trucks sprayed a yellow liquid on several locations in Tokyo and its suburbs, including two U.S. Navy bases, Narita Airport, and the Imperial Palace. These attacks were carried out by the Japanese cult “Aum Shinrikyo” (Aum Supreme Truth), which aimed to cause a total collapse of modern civilization to pave the way for a new society based on its religious principles. Five years later, the group gained widespread notoriety for its sarin gas attack on the Tokyo subway, which killed 13 people and injured thousands.

The goal of Aum Shinrikyo in the 1990 summer attack was for the yellow liquid to contain botulinum toxin, one of the most biologically toxic substances; however, no fatalities were recorded as a result of these attacks. Among the possible reasons for the failure of the attacks was Aum’s lack of critical knowledge, particularly regarding the difference between spreading Clostridium botulinum bacteria and the deadly botulinum toxin produced by those bacteria. It is also uncertain whether the group managed to obtain the toxic form of the product, and other factors may have contributed to the attack’s failure.

If Aum Shinrikyo or any similar malicious group had access to modern AI tools, such as ChatGPT, they might have avoided this and other mistakes. ChatGPT excels at answering questions and providing knowledge, including topics like the production of botulinum toxin. If Aum Shinrikyo had access to this program, would people remember the 1990 summer attacks as the worst biological terrorist incident in history?

AI advancements hold great potential for improving fields like science and health, transforming work and education, and contributing to biology by solving problems like protein folding and drug development. However, the proliferation of AI applications in bioengineering raises concerns, as malicious actors could use them to cause devastating effects. As literature suggests, large language models (LLMs) like ChatGPT, along with AI-powered bio-design tools, could significantly increase the risks associated with biological weapons and bioterrorism.

The Revolution of Large Language Models

Large language models could particularly contribute to increasing access to biological weapons. In a recent exercise at the Massachusetts Institute of Technology, ChatGPT took just one hour to guide non-science students on potential pandemic pathogens, explaining the options available for acquiring them by individuals lacking the necessary laboratory skills.

The story of Aum Shinrikyo’s lack of knowledge about the difference between Clostridium botulinum and botulinum toxin is not unique. Previous biological weapons programs faced challenges due to a lack of qualified personnel. For example, Rauf Ahmed, who began his studies in microbiology, led Al-Qaeda’s efforts in biological terrorism. In 2001, he used his scientific expertise to attempt to obtain anthrax, but he was arrested in December of the same year, with the extent of his progress remaining unknown. With the development of AI chatbots, these technologies could inadvertently help malicious individuals improve their skills to cause harm.

But how much can one learn from an AI-powered lab assistant? Ultimately, creating pathogens or biological weapons requires not only theoretical knowledge, which large language models can provide, but also practical, implicit knowledge. It is difficult to determine the extent of this “implicit knowledge barrier” and how much programs like ChatGPT can reduce it. However, one clear fact remains: if AI chatbots and assistants make the process of creating and modifying biological agents easier, more individuals are likely to attempt it. And the more attempts there are, the greater the likelihood that some will eventually succeed.

ChatGPT is just the beginning of language models and AI revolutionizing how scientists guide laboratory robots. Soon, AI systems will be able to execute ideas and design strategies independently, accelerating the automation of science and reducing the need for scientists in large projects, thereby facilitating the rapid development of biological weapons.

AI-Enhanced Terrorism

With the rapid advancement of AI technologies, biological and chemical terrorism is no longer just a theoretical threat but an increasingly likely possibility. The availability of advanced AI models could open the door for non-state actors, including terrorist groups, to exploit these technologies in developing deadly weapons with unprecedented ease.

  1. Democratization of Technology Access: Experimental scientific materials could become powerful tools in the hands of malicious actors, providing a means to transfer hard-to-find information that is publicly available at the click of a button. This “democratization” of access to scientific knowledge related to the manufacture of nuclear, chemical, and biological weapons could significantly enhance the effectiveness of terrorist activities. This makes it easier for terrorists to better understand scientific research and potentially exploit the necessary technical expertise.

Additionally, experimental scientific materials could reduce terrorists’ reliance, especially lone actors, on “intermediaries” who transfer information, and on online groups that share links to tutorials or journals containing instructions on how to manufacture chemical and biological agents. These materials could also create “do-it-yourself” opportunities, posing additional challenges for law enforcement in detecting terrorist activities, making it easier for terrorists to acquire dual-use scientific knowledge.

Despite the opportunities large language models may provide for terrorists, their direct impact on chemical and biological security remains limited due to the complexities of mastering chemical processes and life sciences. However, technological advancements over time could facilitate access to scientific experiments for individuals with resources and knowledge, helping them develop the expertise needed to design chemical and biological agents.

  1. The Risk of Chemical Language Models: Chemical language models have become effective tools for generating molecules with desired properties, such as designing toxic molecules used in the manufacture of chemical agents. Studies have shown that these models can be directed to design analogs of the nerve gas VX, which was used by terrorist groups like Aum Shinrikyo in 1994. This raises the possibility that other terrorist groups may seek to use such agents, potentially helping chemical language models develop the necessary knowledge.

Similarly, these models can support the acquisition of knowledge about biological agents, such as providing information on laboratory equipment or reverse genetics of the influenza virus. Programs like ProtGPT2 and ProGen can also be misused to design proteins and toxins like ricin to evade detection technologies, which is concerning given the previous interest of groups like Al-Qaeda and ISIS in weaponizing ricin.

  1. DNA Synthesis Devices and Emerging Innovative Technologies: Advances in synthetic DNA production demonstrate how emerging technologies continue to change the risk landscape. Currently, most DNA synthesis providers voluntarily review their customers and synthesis orders, but the new generation of desktop synthesis devices could change this. These devices enable laboratories to print DNA without relying on commercial providers, making production harder to monitor. As technology improves, individuals or small groups may gain access to capabilities previously limited to governments and advanced laboratories, lowering barriers for actors engineering pathogens and increasing the risk of pandemics.

Potential risks include leading tech companies failing to implement necessary measures, such as insufficiently screening DNA orders or not monitoring AI model training. Current guidelines and oversight are inadequate, creating a worrying situation where humanity’s future may depend on a few advanced laboratories voluntarily adhering to best practices, even though these practices are not clearly defined.

  1. Bio-Design Tools: While large language models may enhance the boundaries of bio-design capabilities in the future, specialized AI tools are already achieving this. These tools include protein-folding models like AlphaFold2 and protein-design tools like RFdiffusion. These tools are typically trained on biological data, such as genetic sequences, and have been developed by many companies and academic researchers to address major challenges in bio-design, such as developing therapeutic antibodies. As these tools become more powerful, they will enable many beneficial achievements, such as creating new drugs based on innovative proteins or designing custom viruses.

However, such advanced design capabilities could also increase biological risks. In extreme cases, bio-design tools could enable the creation of biological agents with unprecedented properties. Some have suggested that natural pathogens are balanced between transmissibility and lethality, while designed pathogens may not be subject to these evolutionary constraints. Terrorist groups could potentially design a pandemic virus far more lethal than anything nature could produce, raising the possibility of bio-design tools transforming from catastrophic threats to existential ones. Additionally, these tools could enable the creation of biological agents targeting specific geographic regions or populations.

In the near term, new design capabilities may challenge current measures aimed at controlling access to dangerous toxins and pathogens. Current security measures often focus on banned lists of dangerous organisms or screening known genetic sequences that pose a threat; however, design tools could create other agents with similarly dangerous properties that these measures cannot recognize or detect.

The good news is that the advanced capabilities provided by bio-design tools, at least initially, are likely to remain in the hands of a limited number of current experts who will use these tools for legitimate and beneficial purposes. However, this access barrier may erode as bio-design tools become more efficient, allowing their outputs to be obtained with minimal additional laboratory testing, especially as AI language models improve their ability to interact effectively with these tools. Already, language models are being linked to specialized scientific tools to assist in performing specific tasks, automatically applying the most appropriate tool to the required task. As a result, bio-design capabilities may quickly become available to a large number of individuals, including those with malicious intent.

Mechanisms for Risk Mitigation

What can be done to mitigate the risks arising from the intersection of AI and biology? There are two main areas to focus on: enhancing general biosecurity measures and developing strategies to mitigate risks associated with new AI systems.

  1. Biosecurity Measures: In the face of increasing bio-design capabilities, comprehensive gene synthesis screening is a fundamental step for biosecurity. The challenge lies in producing the basic genetic building blocks that turn digital designs into physical agents, and there are companies specialized in this. Since 2010, the U.S. government has recommended screening customer orders to ensure that genetic materials are obtained by legitimate researchers. Although some companies voluntarily conduct this screening, many still do not. An exercise at the Massachusetts Institute of Technology showed that ChatGPT is capable of identifying these gaps and guiding how to exploit supply chain vulnerabilities.

We need a mandatory framework for screening synthetic DNA products, which does not conflict with corporate interests, as leading companies in the U.S. and U.K. have been doing this voluntarily and advocating for regulatory standards to ensure safety. This framework should include widespread gene synthesis devices and be flexible enough to cover screening for concerning agents. Similar rules should apply to customer screening by other service providers playing a crucial role in the transition from digital to physical, such as contract research organizations.

  1. Strengthening AI Governance: In addition to general biosecurity measures, there is a need for AI-specific interventions, focusing on mitigating risks associated with large language models. These models could lower barriers to biological misuse, and their capabilities are evolving rapidly. A major challenge is that dangerous capabilities may only emerge after a model’s release; therefore, preliminary assessments are essential to ensure that models do not contain dangerous capabilities upon release, and these assessments should be conducted by an independent body to ensure companies take necessary measures. Releasing models through organized interfaces like ChatGPT allows for updating safeguards, while open-source model releases pose significant risks, as fine-tuning can be easily removed.

Overall, the potential impact of AI tools on the risks of biological misuse raises a profound question: Who should have the right to access dual-use scientific capabilities? For policymakers seeking to answer this question, it is essential to consider diverse perspectives from various disciplines, populations, and geographic regions. This will require difficult trade-offs between opening scientific fields related to pathogens, enforcing laws, and monitoring data flows related to illicit activities, while increasing the risk of misuse.

It is logical that language models like ChatGPT do not provide detailed guidance on creating dangerous strains of pandemic influenza; therefore, it may be best for public versions of these models not to provide detailed answers on dual-use topics. For example, Anthropic’s Claude 2 model offers a higher barrier compared to GPT-4 in this context. At the same time, these tools should provide appropriate training for scientists to support the development of new drugs and vaccines. This requires differentiated access mechanisms, such as verifying scientists’ identities online to access tools related to biosecurity and vaccine design.

In conclusion, as reliance on AI increases across various fields, it becomes essential to address the challenges that may arise from the misuse of these technologies, especially in areas like bioengineering and chemical weapons. Although recent advancements in large language models and bio-design tools have opened new horizons for science and health, they also pose existential threats if they fall into the wrong hands. Swift action by policymakers can enhance safety and allow us to reap the benefits of AI.

Mohamed SAKHRI

I’m Mohamed Sakhri, the founder of World Policy Hub. I hold a Bachelor’s degree in Political Science and International Relations and a Master’s in International Security Studies. My academic journey has given me a strong foundation in political theory, global affairs, and strategic studies, allowing me to analyze the complex challenges that confront nations and political institutions today.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *


Back to top button