
Social Media Strategies to Combat Electoral Misinformation

The year 2024 is shaping up to be a pivotal election year, with 64 countries holding elections involving nearly half of the world’s population. At this scale, voter misinformation and the deepening of political polarization through the creation and spread of false content pose serious challenges. These challenges have been exacerbated by artificial intelligence (AI) applications that can replicate a candidate’s voice, fabricate videos and narratives, and design propaganda engineered for psychological manipulation, drawing on the analysis of vast amounts of personal data and on sentiment analysis performed with natural language processing models. Such tools can distort public opinion and sway voters, ultimately undermining the integrity of the electoral process.

These challenges are amplified when social media networks serve as resources, targets, and intermediaries for such practices, greatly magnifying their impact. This has placed enormous pressure on these networks to take responsibility for regulating and confronting such content, under the threat of severe penalties should they fail.

Propaganda Challenges

Following U.S. President Joe Biden’s announcement of his re-election bid, the Republican National Committee released a short video on YouTube titled: “What Happens If the Weakest President We’ve Ever Had Is Re-elected.” The video included AI-generated fictional scenes of domestic and global upheaval, such as the invasion of Taiwan, financial system collapse, and border breaches. This sparked concern about the use of “fake content” in political propaganda and how it could blur the lines between disinformation and campaign advertising, especially when spread through social media. The most significant challenges include:

Failure to Curb Misinformation in Electoral Content: Although AI image-generation applications have adopted policies prohibiting the use of their tools to create fake images, a March 2024 report by the Center for Countering Digital Hate showed that AI tools could still generate election disinformation. When researchers tested four major AI tools, all with explicit anti-misinformation policies, using 40 election-related prompts, 41% of the resulting test runs produced misleading images. These included fake images of Biden in the hospital and Trump in a prison cell, as well as images of discarded ballot boxes. The findings reveal that such applications can still produce fake content supporting false claims about candidates or election fraud, despite their stated policies. The risks are compounded when these images are shared on social media platforms.

Amplifying AI-Generated Fakes via Social Media: Social media platforms act as megaphones for AI-generated fake content, not only through user sharing but also via paid advertisements targeted at specific users. Early in 2024, as the UK prepared for its general election, a report from the communications firm Fenimore Harper found 143 AI-generated fake video ads of Prime Minister Rishi Sunak circulating on Facebook alone. Approximately £13,000 was spent promoting these videos as paid ads over one month, reaching more than 400,000 people. The funds originated from 23 countries, including Turkey, Malaysia, the Philippines, and the U.S., in violation of Facebook’s ad policies.

Algorithmic Campaigns: These campaigns, expected to emerge soon, aim to influence electoral behavior at scale. Harvard’s Lawrence Lessig and Archon Fung outlined a hypothetical system they named “Clogger,” which combines three key technologies. First, language models would generate personalized messages from social media posts, images, videos, texts, and emails. Second, reinforcement learning would improve the machine’s performance by analyzing responses and determining which strategies are most likely to change a voter’s decision. Third, dynamic conversations would reach millions via social media, blending truth with falsehood, mixing political and non-political messages, and convincing voters that a particular candidate is the most popular within their social circles. The danger is that victory would go to whoever deploys the most effective machine rather than to a candidate or set of ideas, threatening the very concept of democracy.
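To make the second component concrete, the following is a minimal, self-contained simulation of the kind of reinforcement-learning feedback loop Lessig and Fung describe: an epsilon-greedy bandit that learns, from simulated responses only, which message variant draws the most engagement. The variant names and response rates are invented for illustration; nothing here touches real voter data or any platform.

```python
import random

# Hypothetical message variants and their hidden "true" response rates
# (invented numbers; a real system would observe live reactions instead).
MESSAGE_VARIANTS = ["variant_a", "variant_b", "variant_c"]
TRUE_RESPONSE_RATE = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

counts = {m: 0 for m in MESSAGE_VARIANTS}     # deliveries per variant
rewards = {m: 0.0 for m in MESSAGE_VARIANTS}  # engagements per variant

def pick_variant(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-observed variant; occasionally explore."""
    if random.random() < epsilon or all(c == 0 for c in counts.values()):
        return random.choice(MESSAGE_VARIANTS)
    return max(MESSAGE_VARIANTS, key=lambda m: rewards[m] / max(counts[m], 1))

for _ in range(10_000):  # each iteration simulates one message delivery
    m = pick_variant()
    engaged = random.random() < TRUE_RESPONSE_RATE[m]  # simulated reaction
    counts[m] += 1
    rewards[m] += engaged

# The loop converges on the most persuasive variant automatically, which is
# precisely the dynamic the authors warn could scale to millions of voters.
print({m: round(rewards[m] / max(counts[m], 1), 4) for m in MESSAGE_VARIANTS})
```

Even this toy loop needs no knowledge of why a message works; it simply reinforces whatever gets a response, which is what makes the full-scale version so difficult to audit.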

Astroturfing Strategies: These strategies hide the source or sponsor of content, making it appear to come from ordinary users. This creates the illusion of grassroots support for a candidate or issue and misleads voters about the true state of public opinion. One example is the wide circulation on social media of AI-generated images showing Trump with Black voters. While no direct link was established between Trump’s campaign and these images, they falsely suggested that Trump enjoyed significant support within the African-American community.

Mounting Pressure

American and European institutions have begun tightening legal frameworks around social media companies to force them into adopting stricter policies to combat AI-generated misinformation during election periods. This has included the passing of laws, conducting hearings, and threatening sanctions.

On March 13, 2024, the European Parliament passed the AI Act, which includes provisions for protecting fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI systems. This legislation covers systems related to democratic processes, such as elections, requiring risk assessment, record-keeping, transparency, accuracy, and human oversight. Citizens will also have the right to file complaints and receive explanations for AI-driven decisions that affect their rights.

The European Commission also invoked the Digital Services Act on March 14, 2024, summoning major social media companies to account for how they are addressing AI risks, including the spread of deepfakes, ahead of the European Parliament elections. Under this law, platforms with more than 45 million monthly active users in the EU are required to combat disinformation effectively or risk fines of up to 6% of global annual turnover, or even a ban.

In the U.S., lawmakers in 43 states have introduced at least 70 bills regulating AI use in political campaigns, seven of which have been enacted. Some require disclosure when AI-generated media is used in political ads, while others criminalize deepfakes designed to harm candidates. Members of Congress have also sent letters to the CEOs of Meta and X, expressing “serious concerns” over the rise of AI-generated political ads and requesting details about the platforms’ rules for curbing harmful content.

Strategies to Counter Misinformation

Faced with these challenges and mounting pressure, global social media companies have adopted three main strategies to limit the exploitation of their platforms for spreading AI-generated misinformation: updating content policies, investing in smart detection technologies, and fostering collaboration:

Updating Content Policies: Meta released a statement titled “How Meta Is Preparing for Elections in 2024,” requiring advertisers to disclose their use of AI or other digital technologies to create or alter political or social issue ads. This includes whether an ad features AI-generated or digitally modified realistic images, videos, or audio depicting real people saying or doing things they never did, or events that never occurred. Meta’s Oversight Board has recommended that the company’s “manipulated media” policy include AI-generated content that may interfere with voting rights.

TikTok has also made “protecting election integrity” a core commitment, pledging not to allow misinformation about electoral processes. It has introduced safeguards against AI-generated misleading content during elections and mandates clear labeling for AI-created content that portrays realistic scenes or resembles real people.

Detection and Classification Technologies: Meta is building tools to detect AI-generated content at scale in order to identify misinformation and deepfakes. It is also labeling AI-generated content from companies such as Google, Microsoft, Shutterstock, OpenAI, and Midjourney, with penalties for users who fail to disclose AI-generated videos or audio. Google has similarly announced AI policies across its platforms, including YouTube, to support upcoming elections in India.
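As an illustration of how such labeling can work at the file level, here is a minimal sketch that checks a JPEG for the IPTC “trainedAlgorithmicMedia” provenance value, one of the industry-standard markers platforms reportedly look for (alongside C2PA manifests and invisible watermarks, which this sketch omits). It assumes the Pillow library and a hypothetical example.jpg with an embedded XMP packet; it is not Meta’s actual pipeline.

```python
from PIL import Image

# IPTC NewsCodes value that some generators embed in XMP metadata
# to declare that an image is fully AI-generated.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def carries_ai_marker(path: str) -> bool:
    """Return True if the image's XMP metadata declares AI provenance.

    Absence of the marker proves nothing: metadata is trivially stripped,
    so real systems pair checks like this with classifiers and watermarks.
    """
    with Image.open(path) as img:
        xmp_packet = img.info.get("xmp", b"")  # raw XMP bytes, if present
    return AI_SOURCE_TYPE in xmp_packet

if __name__ == "__main__":
    print(carries_ai_marker("example.jpg"))  # hypothetical test file
```

The asymmetry in the docstring is the core design problem for platforms: a positive marker is reliable evidence, but a clean file tells them nothing, which is why detection cannot rest on metadata alone.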

TikTok is also planning to launch “Election Centers” in the app, available in the local language of each of the 27 EU countries, to help users “separate fact from fiction” and limit the spread of AI-generated misinformation during election periods.

Collaborative Efforts: In February 2024, twenty tech companies, including Meta, Google, TikTok, X, and Snapchat, signed an agreement to combat the deceptive use of AI in elections. The companies pledged to make investments, ensure transparency, educate the public, and collaborate across industries and with civil society to tackle AI-driven electoral disinformation on their platforms.

These strategies mark positive steps toward combating misinformation, above all the broad agreement among major tech companies. That accord reflects the complexity of the challenges posed by AI-driven electoral misinformation and underscores the need for cross-border, cross-sector collaboration to preserve the “humanity” and value of democratic elections.

Mohamed SAKHRI

