In recent years, the use of deepfakes has noticeably increased across various fields, including politics and international conflict, and predictions indicate that this trend will escalate in the coming years. Deepfakes are audio and video content manipulated or generated by artificial intelligence, making it difficult for humans to distinguish what is real from what is fake. As AI has spread, deepfake technology has evolved from rudimentary to sophisticated, making it even harder to detect. This creates opportunities for governments to utilize deepfakes, but only under regulations designed to guard against their harmful uses.
In this context, the Center for Strategic and International Studies (CSIS) released a research paper in 2024 titled “Government Use of Deepfakes: The Questions to Ask,” authored by Daniel Byman, Daniel W. Linna Jr., and V. S. Subrahmanian. The paper explores the major benefits and harms of deepfakes and how nations and individuals can avoid the harms while leveraging the benefits.
Questions Raised:
The authors point out the increasing use of deepfake technology in politics and international conflict in recent years. In May 2023, deepfake videos were alleged to show the Kurdistan Workers’ Party (PKK) endorsing Kemal Kılıçdaroğlu, the main opposition candidate in the Turkish presidential elections. Another deepfake video containing explicit content led to the withdrawal of an opposition candidate from the presidential race. In January 2024, a fake automated call imitating President Joe Biden’s voice advised New Hampshire residents not to vote in the primary elections. Reports also suggest that the Venezuelan government used deepfakes to create news anchors who portrayed the Venezuelan economy in glowing terms, and that deepfake videos were produced in support of the 2022 coup in Burkina Faso.
Conversely, the paper discusses “beneficial deepfakes,” which serve legitimate purposes. One example is a campaign featuring football star David Beckham, who was shown delivering a malaria prevention message in multiple foreign languages: the visual content was real, the voice was synthetic, and the messages were created with his consent. Similarly, an Indian politician used deepfake videos to render his speeches in about 20 languages, allowing him to communicate his political message more effectively.
The authors raise an important question: Will the allure of deepfakes, whether beneficial or harmful, prove irresistible to democratic governments? Based on historical precedent, the answer is likely yes. It will not be long before major democracies, including the United States, begin using deepfakes to achieve their objectives, if they have not already done so. But will they do so under a clear set of regulations that weighs the benefits and potential risks of any government operation utilizing this technology?
In reality, an unbiased assessment is needed to weigh the short-term benefits against the long-term risks of deepfake technology used by a democratic state. To this end, it is important to address two fundamental questions: First, what are the right questions to ask when a government considers using deepfakes to advance its mission? Second, who should ask these questions and approve or reject the use of deepfakes? To answer these questions, the authors interviewed six national security leaders with specialized knowledge in AI technology and national defense.
Risks and Scenarios:
Several noteworthy risks are associated with deepfake use by democratic governments. The first is the potential loss of trust that citizens of a democratic state place in media and news, whether online or offline. This erosion makes it easier for adversaries to spread misleading information that divides targeted populations and allows opposition elements to use fake videos to undermine public trust in the government. The second, longer-term risk is the loss of credibility for the government employing deepfakes: once a government is credibly accused of using lies as a tool of policy, the value of any statements it makes may be diminished for years, if not decades, both domestically and internationally. The third risk is that adversaries could benefit from the “liar’s dividend,” dismissing real evidence of corruption or misconduct by claiming the information is fake.
In this context, the authors proposed five scenarios, shared with six national security leaders from three regions: the United States, the European Union, and Asia. These leaders represented four types of stakeholders: intelligence, military, industry, and non-governmental organizations interested in the issue. The scenarios are as follows:
Election Scenario: This addresses a common challenge in foreign policy, where a hostile government also exhibits repression towards its people. Such governments often threaten the interests of the United States and its allies and harm their own citizens, though they rarely pose a serious threat.
Genocide Scenario: Designed to test responses in one of the worst imaginable situations—the deliberate mass killing of civilians. Historically, the United States and other Western governments have not intervened effectively in such cases, but advances in deepfake video might offer a means of intervention that leaders could resort to in such crises.
Invasion Scenario: In this case, deepfakes are used in response to other fake videos—in an attempt to fight fire with fire—amid a potential war and human rights violations.
Stock Market Scenario: To determine whether government use of deepfakes is appropriate outside traditional security contexts, this scenario involves a private citizen and a company seeking government assistance. Notably, the information in this deepfake video is at least partially accurate.
Intellectual Property Theft Scenario: This final scenario explores another commercial use of deepfakes; here, deepfakes are defensive, creating forged materials (text documents, images, videos, presentations) that are close enough to the real design to appear highly credible, even to experts.
Questions and Regulations:
The authors note that all the interviewed experts expressed deep concern about government use of deepfakes. They were united in the view that deepfakes should be used only in limited contexts and with the authorization of appropriate authorities. They differed, however, on when deepfakes should be used, which scenarios are appropriate, how they should be deployed, and who should authorize their use.
Based on the interviews, experts agreed that when governments consider using deepfake videos, it is essential to ask a series of critical questions to ensure responsible and ethical use. The questions governments should consider are:
- Effectiveness: How effective is the potential deepfake video in achieving the desired outcome compared to other available methods?
- Audience: Who is the intended audience for the deepfake video? Will it be visible to a narrow or broad audience? How will different audiences react to it?
- Potential Harm: Could the deepfake video cause direct or indirect harm to individuals, organizations, or society at large? What measures can be taken to mitigate any potential harm?
- Legal and Ethical Considerations: Does the proposed use of the deepfake video comply with relevant laws and ethical standards? Are there any potential legal or ethical implications that need to be addressed?
- Target: Who is the intended target of the deepfake? Is it a head of state or a prominent individual like a political or religious leader? Is the target a living person?
- Purpose: What is the purpose of the deepfake? Is it to protect citizens from direct harm? Is it a reciprocal response to another government’s use of deepfakes? Is it to inform a specific group of people?
- Traceability: What is the likelihood of tracing the deepfake back to its creator? What would be the negative reaction if it were attributed to the government that created it?
By carefully considering these questions and conducting a thorough assessment, governments can make more informed and responsible decisions about the use of deepfake videos.
In conclusion, the authors emphasize the importance of considering the consequences of deepfake use now, rather than waiting to develop policies that might instead be shaped by ad hoc use by U.S. government agencies or allied entities. This process is particularly important because deepfakes carry severe long-term implications for the credibility of any government and could shape, and possibly exacerbate, domestic political discourse.
Although it might seem ideal to simply declare that the government will never use deepfakes, the power and prevalence of the technology make it an appealing tool, and governments may act without weighing the long-term effects of its use. Deepfake technology should therefore be governed by a set of rules and standards that ensures governments consider all relevant factors, from long-term perspectives as well as short-term ones.
For these rules to be effective in practice, governments should rely on a range of perspectives, including those beyond traditional national security, when discussing the use of deepfakes. While the research paper provides some initial criteria, discussions and rich observations about the use of deepfakes in practice by other governments will clarify how to assess these factors and what additional criteria may be needed.
Sources
Daniel Byman, Daniel W. Linna Jr., and V. S. Subrahmanian, “Government Use of Deepfakes: The Questions to Ask,” Center for Strategic and International Studies (CSIS), 2024.
https://www.csis.org/analysis/government-use-deepfakes