This may be the most detailed work so far: “The Impact of Generative Artificial Intelligence on Elections Across the Globe”

The Center for Media Engagement at UT Austin has released a vital report. The executive summary is below; the full report is available on the Center's website.

Generative AI (GenAI) has emerged as a transformative force in elections across the world. In a series of reports, the Center for Media Engagement investigates GenAI’s role before, during, and after several key global elections in 2024.

The reports examine the potential impacts of GenAI on key democratic processes in the U.S., Europe, India, Mexico, and South Africa. These insights are critical to groups working to sustain and advance democracies in the face of constant transformation of the digital environment and associated communication processes.

Below we share the emerging trends developing around elections and AI in each of these regions.

The U.S.:

  • Extensions of mis- and disinformation strategies. Many uses of generative AI during the 2024 U.S. election stem from strategies employed in previous elections.
  • Manipulation of information related to electoral processes. Both proposed and actual uses of GenAI during the U.S. election rely on the manipulation of information related to electoral processes, offices, and vendors via various media formats (video, image, audio, and text).
  • Leveraging of trusted messengers and messages. Actors behind these use cases work to leverage trusted messengers and messages shared via a trusted mode of communication — i.e., in a particular language or cultural vernacular.
  • Erosion of public trust in institutions. Moreover, as with prior digital propaganda campaigns, such AI-driven efforts treat eroding public trust in institutions as a goal in and of itself.
  • Taking advantage of Big Tech. These efforts take advantage of Big Tech’s ineffective mitigation measures and lax government regulation.

Europe:

  • Unreliability of chatbots. Several research institutions and news organizations in Europe tested popular chatbots and were underwhelmed; the general consensus was that trustworthy answers are hard to come by and some answers include made-up information.
  • Creation of AI personas. Political parties (especially right-wing parties like the Alternative for Germany and the National Rally in France) created AI personas online and relied on them for (fake) support.
  • High-profile politicians were prominently targeted with deepfakes. Several prominent politicians, including German Chancellor Scholz, UK Prime Minister Starmer, and Marine Le Pen, the parliamentary party leader of the French National Rally, were the targets of deepfakes. Some of these deepfakes had more satirical undertones than others.
  • Foreign interference relied on LLMs. Russian actors relied on large language models (LLMs) to promote pro-Russia content in attempts to influence public opinion.
  • AI weakened the belief in and practice of democracy generally. Instead of fears about imminent electoral impact, there is a broader skepticism towards AI and democracy. For example, the Alan Turing Institute wrote that “the current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system.”

India:

  • Widespread usage. AI was used across the board, by political parties big and small, for a variety of tasks, including content creation and replacement of human survey callers.
  • Appeal of AI’s translation capabilities. GenAI’s translation capabilities make it particularly useful to political strategists, who craft entire campaigns in local Indian languages.
  • AI voice clones. Voice cloning was used widely and was viewed as more authentic/convincing than other types of AI, such as deepfakes.
  • Satire. Official parties’ social media handles used AI content to openly parody their rivals.
  • Plausible deniability. As increasingly convincing AI content emerged, politicians began to use GenAI as an excuse to dismiss genuine videos as deepfakes in the hope of distancing themselves from unflattering content.
  • Lack of intervention by global companies. At least eight chatbots focused on elections in India were publicly accessible in the GPT Store, flouting OpenAI’s policy prohibiting the use of its tech for political campaigns. Most Meta platform content analyzed for this report had no disclaimers about AI usage.
  • Half-hearted local regulatory efforts. It was not until the election was already underway that the Election Commission of India (ECI) notified national and state political parties that AI content was disallowed; their first letter instructed parties to take down any deepfakes within three hours, but follow-ups were patchy and had little impact.

Mexico:

  • GenAI has permeated political campaigns. Manipulation of both video and audio content via AI has found its way into the Mexican campaign toolkit.
  • Local and regional fact-checking organizations are crucial for countering manipulative content. Networks such as Latam Chequea, which includes over 30 fact-checking organizations from 15 countries, have emerged to coordinate efforts and share best practices about countering manipulative content, including content generated by AI.
  • Prevalence of Mexican high-profile politicians in deepfakes. The primary targets of AI-manipulated content tended to be the leading political candidates in the elections.
  • Creation of fake associations or endorsements. Fabricated images of international stars featured artists like Lady Gaga and Dua Lipa wearing attire promoting local political candidates or parties.
  • Fun or manipulation? The presence of AI-generated satire makes it difficult to distinguish between genuine attempts at humor and malicious disinformation relying on humor – especially when such content is shared across different platforms or communities without context.
  • X as an important political platform. X played a large role in the elections as a political platform on which AI content flourished.

South Africa:

  • Reliance on older technologies rather than cutting-edge GenAI. Most of the misleading content involved more traditional forms of mis- and disinformation, such as false headlines, allegations of voter fraud, and out-of-context images.
  • Necessity of cross-sector collaborations to counter misinformation. The Electoral Commission of South Africa (IEC) signed a framework of cooperation with Google, Meta, and TikTok, as well as with local civic partner Media Monitoring Africa (MMA), to combat disinformation ahead of the 2024 election. This framework helped to counter the spread of false and misleading content on the participating platforms.
  • Deliberate vagueness on AI as a political tactic. The abovementioned cross-sector counter-disinformation framework does not include any AI-specific measures. South Africa currently has no specific regulations governing the use of artificial intelligence for political purposes. Existing codes of conduct are outdated and provide little guidance on whether a politician may use GenAI to create content.
  • Liar’s dividend. Similar to the tactic of plausible deniability in India, politicians capitalized on the spread of AI-generated content by creating uncertainty and confusion around what is true and what is false. In South Africa, politicians did not have to use GenAI themselves in order to benefit from it; invoking GenAI allowed politicians to call any unflattering information into question by raising doubt about the veracity of the video or image.
  • X (previously Twitter) as a proliferator of AI-manipulated content. X played a central role in both the spread and, more importantly, the longevity of misleading AI content. X was notably absent from the framework of cooperation mentioned above, and the platform removed little to no disinformation.
