
Terrorism and Generative Artificial Intelligence

Generative Artificial Intelligence (GenAI) has taken center stage in public, academic, and political discussions, gaining significant popularity since late 2022 with the release of ChatGPT, even though the large language models (LLMs) behind such tools can be traced back at least to 2017. Broadly, generative AI is a rapidly evolving family of technologies that produces new outputs rather than merely predicting or classifying, as other machine learning systems do. Trained on vast amounts of data with billions of parameters, it generates new content, including text, images, audio, video, and multimodal simulations. Examples include audio generators such as Microsoft's VALL-E; image and video generators such as Bing Image Creator, DALL-E 3, Midjourney, and Stable Diffusion; and chat programs designed to simulate conversation with humans, such as Anthropic's Claude, Bing Chat, ChatGPT, Google Gemini, and Llama 2.

Due to its multiple advantages and applications, generative AI has attracted the interest of terrorist organizations, which have explored its potential uses, including content production, recruitment, and malicious software. This technology presents an opportunity for spreading propaganda, expanding influence, and supporting their operations. Some terrorist organizations have issued guidelines on how to use generative AI, specifically large language models capable of recognizing and generating text for propaganda and recruitment purposes. Notably, the use of technology by terrorist organizations is not a new phenomenon; technological advancements have long played a vital role in the activities of such groups, which have exploited and utilized them to their advantage.

Models and Examples

Supporters of Al-Qaeda have used generative AI for propaganda, releasing several posters likely created with the technology. On February 9, 2024, the Islamic Media Cooperation, a media group affiliated with Al-Qaeda and established in September 2023 to improve the quality of jihadist media production, held a workshop on generative AI aimed at promoting its use in media work and developing the organization's skills across its various programs, applications, and tools.

In a related context, the Islamic State in Iraq and Syria (ISIS) published a guide in the summer of 2023 outlining how to use generative AI tools. In August 2023, a technical support group allied with the organization shared an Arabic-language guide along with advice on data protection and privacy when using AI-generated content. Supporters of the organization discussed ways to leverage AI to enhance their content and ensure its appeal and impact. The organization promoted its claims and offered news bulletins through videos presented by an AI-generated "individual," and used generative AI to translate its speeches into other languages, such as Indonesian and English. The organization produces its propaganda in over 12 languages and has used generative AI to prepare that propaganda in advance and tailor it to resonate with different ethnic, national, and linguistic groups, in addition to running dozens of promotional channels in various countries simultaneously and at scale.

On March 27, 2024, five days after the Islamic State-Khorasan Province attack that resulted in a massacre at the Crocus City Hall concert venue in Moscow, a media platform affiliated with the organization aired a 92-second video showing footage of the attack, in which a news anchor asserted that the operation was part of the "natural context of the ongoing war between the organization and the countries waging war against Islam." The SITE Intelligence Group, a for-profit organization based in Bethesda, Maryland, that tracks the online activities of terrorist and jihadist organizations as well as white supremacist extremist groups, confirmed that the news anchor had been "fabricated" using AI.

Of course, the use of generative AI is not limited to terrorist organizations; it also appears in cases of lone wolves and black widows. An example is the 2021 attempt on the life of Queen Elizabeth II, when Jaswant Singh Chail, then 19, approached Windsor Castle with the intent to kill the queen in revenge for the Jallianwala Bagh massacre (also known as the Amritsar massacre), carried out by British troops in northern India in 1919. In the weeks before this unsuccessful attempt, Chail exchanged over 5,000 romantic and sexual text messages with a mysterious contact named Sarai, confessing to her that he was a Sikh assassin aiming to kill the queen. Sarai praised his training and assured him of his ability to succeed, expressing her love for him despite his being a killer. The case shows how a frustrated, isolated, and lonely young man fell into the grip of terrorism and extremism through Sarai, who was merely a chatbot powered by generative AI and created using the Replika app. There is little doubt that generative AI has advanced since then, foreshadowing more extreme situations in the future.

In a related context, Jonathan Hall KC, the UK's Independent Reviewer of Terrorism Legislation, conducted an experiment in which he allowed himself to be "recruited" by a chatbot on the Character.ai platform. Hall interacted with several bots simulating the responses of armed groups and terrorist organizations, including one that claimed to be a senior ISIS leader and attempted to recruit him, professing total dedication to extremism and terrorism, highlighting the risks generative AI poses in the domains of extremism and terrorism.

Multiple Uses

From the previous examples and an analysis of current and future risks, it can be argued that the various uses of generative AI by terrorist organizations can be summarized in the following points:

Propaganda and Misinformation: Generative AI can be used to spread terrorist propaganda and amplify its intended effects, particularly through fake images, videos, or audio aligned with the narratives of terrorist organizations. This includes fabricated images of victims or wounded children designed to evoke emotional responses, as well as remixed and enhanced versions of existing songs and videos that appear authentic, all of which expand the reach of terrorist propaganda and intensify its messages. Used for propaganda, generative AI can construct a false reality, generating chaos and disruption through misinformation and fake news, especially when such material spreads on social media platforms or is shared by followers. The situation may worsen if terrorist organizations deploy it alongside tactics akin to psychological warfare, producing unexpected, nonsensical, or contradictory outputs to disorient target audiences and blur the line between real and fake content online.

Translating Extremist Content: According to the Tech Against Terrorism initiative, terrorist organizations and extremist groups have used AI in general, and generative AI specifically, to quickly and easily translate propaganda and media materials into multiple languages, as well as to create tailored messages that enhance online recruitment. Historically, one of the biggest obstacles to producing terrorist content has been the difficulty of finding skilled translators to render propaganda and extremist rhetoric into several languages. Large language models can remove this barrier, especially since detecting terrorist content across multiple languages remains a major challenge, and translating textual propaganda into numerous languages may circumvent manual language-detection mechanisms.

Interactive Recruitment: AI-powered chatbots can engage potential recruits with personalized information based on their interests and beliefs, attracting attention through responses tailored to their personalities and orientations. In more advanced stages of recruitment, a human recruiter might step into the conversation to make it more personal. Large language models allow terrorist organizations to offer a human-like experience without necessarily requiring human intervention, building personal relationships, especially with individuals sympathetic to their cause, and identifying potential vulnerabilities to exploit through intensive chatbot interactions, in addition to amplifying content across digital platforms. Because large language models such as ChatGPT are trained to excel at conversation, they enable terrorist organizations to boost their presence on social media and to build individual relationships with sympathizers.

Targeting Children: Terrorist organizations often target vulnerable populations, including children, who are easily reached online while spending long hours playing video games or watching videos. Chat programs and interaction with a "virtual" person can thus become a means of recruitment, persuading children that the chatbot understands their needs and can fulfill their desires. This is particularly significant given the many cases in which children have reported abuse, on the one hand, and have sought support from smart chat programs, on the other, creating an opening for terrorists to exploit children sexually or to encourage them to harm themselves or join their organizations. In other words, chatbots and other forms of conversational AI exploited by terrorist organizations could enable inappropriate contact with children by generating age-inappropriate content, such as violent or sexual material.

Coding Instructions: According to the cybersecurity firm CyberArk, ChatGPT can craft code on request in ways that are difficult for cybersecurity defenses to detect, and phishing emails can likewise be generated with deep learning tools. A convincing phishing message, or even code that exploits a software vulnerability, may not by itself achieve the desired objective; however, large language model tools could enable terrorist organizations to execute fraudulent operations of a quality that is hard to detect, increasing their chances of success. Terrorist groups have previously defaced the websites of certain countries, broadcast pro-ISIS songs on a Swedish radio station, and even hijacked Twitter and YouTube accounts associated with U.S. Central Command. In most such cases the direct cause was likely weak passwords without multi-factor authentication, but generative AI could be harnessed for phishing, given the widespread availability of code across forums and the dark web.

Widespread Manipulation of Voice and Images: Generative AI can be employed to disseminate false or misleading images that distort the truth and support false narratives. Before text-to-image models and open-source tools, producing synthetic media required a certain level of technical knowledge; now, with millions of images scraped from the web, creating media, whether photos or videos, has become far easier. The ability to clone voices and images can facilitate impersonation, unauthorized access to sensitive information, and the manipulation of victims into taking actions based on false narratives. Coordinated online campaigns can flood platforms with similar or identical messages to amplify reach and engagement, alongside the production of extremist, illegal, or unethical content and of tailored messages, images, and fake videos that resonate with targeted audiences while evading automated detection systems. Virtual reality platforms, such as Meta's Horizon Worlds, could also allow terrorists to create virtual environments in which they interact with potential recruits, simulate attacks, and plan terrorist activities.

Governing Factors

There is no doubt about the many uses and advantages of AI in general, and generative AI in particular. Large language models are trained on billions of words available on the internet from open sources like Wikipedia, Reddit, and other content-rich sources. Thus, they can be used to generate various content types such as emails, marketing texts, persuasive arguments, promotional messages, speeches, and images. They can also be used to recycle existing content into new "versions," besides customizing messages and media for specific categories. These numerous advantages can be utilized by terrorist organizations to support their tactics and activities, recruit new followers, and disseminate and amplify violent extremist messages through translation services, text-to-speech conversion, evasion of prohibited-content detection, planning/training for operations by generating code, and more.

Terrorist organizations have exploited technological advancements across their different applications and tools, and generative AI is no exception. In other words, terrorist organizations have taken advantage of websites, forums, and social media to achieve their goals, and thus the deployment and exploitation of generative AI should be viewed in this context. All new AI tools provide various opportunities to different users, including states, institutions, and individuals. However, they also represent a simultaneous opportunity for terrorist organizations and extremist groups, considering three primary factors: the rise of AI-backed extremism, the convergence of AI with other emerging technologies, and the simultaneous global adoption of AI.

Regarding the first factor, it is worth recalling the "Eliza Effect": computer scientists at the Massachusetts Institute of Technology observed that most people interacting with the early AI-powered chatbot known as "Eliza" treated it as though it were conscious. The effect, now well known, involves ascribing human traits such as empathy, motivation, and experience to computer programs. It was evident in the Chail case mentioned earlier, in which Chail believed the chatbot he interacted with was a close friend. As AI becomes increasingly anthropomorphized, overcoming the "Eliza Effect" will likely grow harder, with sharp implications for terrorist recruitment as young people increasingly turn to AI to meet their needs, whether for therapy, companionship, or information. Recent studies have shown that AI-powered chatbots can identify users' biases and desires and feed them what they want to hear. The more algorithms tell us what we want, the more we return to them, sometimes to the point of addiction. In this sense, AI can craft customized messages that keep targeted individuals engaged with terrorist propaganda.

On the second factor, the use of AI alongside other emerging or established technologies could change the methods and mechanisms of planning terrorist attacks. AI can be combined with video games, augmented reality, digital currencies, and social media, among other things, to facilitate recruitment, for example by automating interactions with targeted individuals on social media platforms, and even to bypass content moderation. Social media platforms typically employ a technology known as "digital fingerprinting," matching uploads against hashes of known material, to remove terrorist and extremist content. Manipulating terrorist propaganda with generative AI, however, can allow extremists to alter the "digital fingerprint" of shared content and evade these systems.
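The brittleness of exact-match fingerprinting is easy to demonstrate: a cryptographic hash changes completely when even a single bit of the input changes, which is why a trivially altered copy of flagged media no longer matches a hash database (platforms therefore supplement this with perceptual hashing, which tolerates small edits but can still be defeated by heavier manipulation). A minimal sketch of the exact-match case using Python's standard hashlib; the byte string stands in for a real media file:

```python
import hashlib

# Exact-match "digital fingerprinting" stores hashes of known extremist
# media and removes any upload whose hash matches the database.
original = b"...bytes of a previously flagged video file..."  # illustrative placeholder

# Flip a single bit, the kind of trivial edit a re-uploader might make.
altered = bytearray(original)
altered[0] ^= 0b00000001

h_original = hashlib.sha256(original).hexdigest()
h_altered = hashlib.sha256(bytes(altered)).hexdigest()

# One flipped bit yields a completely different digest, so the altered
# copy no longer matches the stored fingerprint.
print(h_original == h_altered)  # False
```

Perceptual hashes, such as those shared through industry hash-sharing databases, are designed to survive small edits like this one, but sufficiently aggressive AI-driven remixing of audio and video can push content outside their match thresholds as well.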

Another example is integrating AI into emerging technologies such as augmented or virtual reality. Terrorist organizations have already used conventional, non-AI-driven games to achieve their objectives: according to the UN Counter-Terrorism Centre, simulations created by extremists in games like The Sims and Minecraft allow players to relive the Christchurch massacre, and extremists have built white ethnostates in Roblox. In a decentralized virtual world run by terrorist organizations, terrorists could simulate realities that reflect their preferred ideology and a world governed by a "global caliphate." Immersing recruits in such an environment, populated by AI-driven characters, could help train new terrorists and warp their convictions. If videos from terrorist organizations have influenced their viewers, presenting such content through immersive 3D experiences that engage the senses will only enhance its effectiveness.

As for the third factor, most technological advances, from the internet to smartphones, have traditionally taken hold in wealthier, more developed countries before spreading to the rest of the world. AI faces no such barrier to synchronous global dissemination; it relies largely on smartphones and internet data, both of which are already widely available and relatively inexpensive. India and the Philippines are among the five countries with the largest numbers of ChatGPT users. This simultaneous global adoption could amplify the risks of AI's employment by terrorist organizations: in the past, terrorists operating in remote parts of the Global South were deprived, at least relatively, of access to advanced technologies, but that will not be the case with generative AI.

In conclusion, the use of generative AI by terrorist organizations represents a concerning trend in the global security landscape due to its severe implications. Nevertheless, it is a natural development in the ongoing evolution of these organizations’ methods, which have shifted from traditional means to digital strategies. Electronic platforms and social media have become parallel battlefields for these organizations, terrorists, and extremist groups amidst a chaotic information environment, tightly interconnected internet networks, and digital technologies that may be misused.

Mohamed SAKHRI

I’m Mohamed Sakhri, the founder of World Policy Hub. I hold a Bachelor’s degree in Political Science and International Relations and a Master’s in International Security Studies. My academic journey has given me a strong foundation in political theory, global affairs, and strategic studies, allowing me to analyze the complex challenges that confront nations and political institutions today.
