Unraveling the Role of AI-Enabled Disinformation in an Election Year

The latest instalment of our AI and Elections Discussion series took place on November 25 and was organised in collaboration with the European Digital Media Observatory. This session looked back at the 2024 global election cycle and examined the role of generative AI and disinformation in various political landscapes. How much did AI interfere? Are we only dealing with buzzwords and fearmongering, or with serious global threats to democracy? And how could AI be utilised to improve democratic structures?

We were joined by experts Mark Scott (Atlantic Council), Mato Brautović (University of Dubrovnik), and Taberez A. Neyazi (National University of Singapore). The session was chaired by Claes de Vreese (University of Amsterdam) and Sophie Morosoli (University of Amsterdam). 

Mark Scott, Taberez A. Neyazi, Claes de Vreese, and Mato Brautović

Mark Scott, Senior Resident Fellow at the Digital Forensic Research Lab of the Atlantic Council, kicked off the session by adding nuance to the kinds of AI used in elections, specifically in a US context. Rather than the flashy, intricate deepfakes often mentioned in discussions about AI and elections, he pointed to the use of large language models (LLMs) to analyse voter data and target specific audiences, describing AI as an effective campaigning tool that is likely to grow in importance in the coming years. While AI was employed in harmful ways in some cases, especially in instances of foreign election interference, Mark also brought up positive examples: in Belarus, a chatbot was created to answer questions about the opposition without putting anyone in direct harm. 

Taberez A. Neyazi, Associate Professor of New Media and Political Communication and Director of the Digital Campaign Asia project at the National University of Singapore, followed up from a South Asian perspective, with a specific focus on India and Indonesia. He said that AI did play a role in redefining how voters were engaged and manipulated in this year’s general elections in India, both positively and negatively. A deepfake video of a politician making highly controversial statements caused an uproar, even after it was revealed to be fake. Whether this had a direct impact on voting is unclear, but possible. In a more positive use of AI, voice cloning was used tactically to call voters and address specific regional problems. Given India’s high linguistic diversity, this allowed for broader and more direct communication between politicians and the people. Taberez argued that AI has the potential to close linguistic gaps and thereby make democracy more accessible, inclusive and engaging. 

Mato Brautović, Professor and Head of the Department of Mass Communication at the University of Dubrovnik, highlighted his research on the use of AI around the parliamentary elections in Croatia. He confirmed that genAI was used during the campaign, specifically on social media platforms. Deepfakes and AI-based disinformation were used to target major political figures, but they were generally poorly made and could be recognised by a knowledgeable user. Alarmingly, none of the identified AI-generated disinformation was labelled as such by the platforms, and most fact-checking organisations failed to detect it. At this point, it is impossible to judge the impact of genAI on the Croatian elections. 

Kickstarting the discussion, our postdoctoral researcher Sophie Morosoli raised the problem of declining trust in democracy. Is generative AI not just exacerbating existing issues that threaten democracy, like disinformation and polarisation? Or should AI be considered a separate, new threat?

All three panellists agreed that AI is not an entirely separate threat but rather a tool that amplifies pre-existing issues. Mato pointed out that the danger lies in the speed at which AI helps disinformation spread, and in how it makes disinformation more linguistically accessible, which is especially relevant when foreign actors come into play. Mark suggested that demystifying AI is an important step in combating the risk, while Taberez highlighted the importance of learning from each other: “Some of the things that happened with Trump could have been addressed if we would have looked at India before.”

What about future initiatives? Mark pointed to watermarking AI-generated content, but only as a short-term solution; instead, he stressed the importance of fostering digital literacy. Mato suggested that transparency obligations would be beneficial, but pointed out the root issue: technologies are developed first, and their impact is considered only as an afterthought. Taberez, in agreement, argued for stronger responsibility obligations and accountability on the side of developers.

We would like to thank the panellists for joining us in this edition of the AI and Elections Discussion Series, and the European Digital Media Observatory, specifically Aqsa Farooq, for co-organising the session with us.