Following the workshop on Large Language Models for Media and Democracy, the CWI Semester Programme continued with the two-day event on AI’s Impact on Media and Democracy, where national and international experts came together to discuss the influence of AI on media and its consequences for the public’s democratic participation.


Day one started with introductory remarks from Ton de Kok, director of the CWI, and the organizing team, with an overview of the program. The event aimed to address and discuss some of the following problems:
- Transparency and explainability of AI in media to improve trust and acceptance
- Computational techniques to automatically gauge the quality and objectivity of information and information sources
- Elucidating the dynamics of disinformation diffusion and the emergence of polarisation in social networks
- AI safety and algorithmic risk in decision-making
- Legal, ethical and policy challenges associated with the use of AI in the media
- AI-mediated communication and the psychology of human-AI interaction
- Human-computer interaction and responsible AI
The first group of presenters focused on AI’s role in mediating human-to-human interaction in various ways. Takayuki Ito from Kyoto University introduced an AI-empowered consensus support system for crowds on social networking services, elaborating on the process of developing the tool over the last decade. Mark Klein from MIT confronted a similar problem from a different perspective: he proposed investigating the use of LLMs for crowd-scale deliberation, structuring large-scale discussions effectively in the form of maps. Although both approaches come with their own challenges, each uses AI in an attempt to facilitate conversation between humans, a key mechanism for a functioning democracy. Rafik Hadfi from Kyoto University took this further and questioned whether it is possible to codify democracy in the first place. Emulations are commonplace in democratic systems, such as voting predictions, trends on social media, and the behaviour of citizens, but to what extent can we use them to evaluate governing systems, and where does generative AI fit into this?

Natali Helberger, co-founder of the AI, Media and Democracy Lab, offered a legal perspective on AI and its impact on media and democracy with an overview of the European digital framework on AI in the media. AI is increasingly used across the board in news production and acquisition, the divide is growing between companies that have access to AI and those that do not, and trust in the media is in general decline; even so, the responsibility for risk monitoring and mitigation still rests largely with the platforms themselves.
Tackling platform power, especially the asymmetries in the global market, is a major topic of discussion. Could requiring platforms to contact news organizations before removing a news item be a viable strategy to counter this power and prevent censorship? Natali highlighted the importance of seeing digital platforms as vertically integrated AI companies and not just as social distribution networks.
Generally, the emerging frameworks seek to create more accountability for broader societal implications, a fairer level playing field, and special protections for the media. Systemic risk monitoring provisions could develop into powerful governance mechanisms, but much still depends on the effectiveness of rule enforcement and the ability to address emerging concerns.


On day two, Abdallah El Ali and Karthikeya Venkatraj hosted an interactive session in which they conducted a live experiment with the event attendees. The session presented participants with news items — a headline and either an image or an article paragraph — which were labelled as AI-generated or human-generated. However, those labels were not always correctly assigned. Abdallah and Karthikeya asked participants to vote on whether they believed the news piece was correctly labelled, whether they found it intriguing, and whether they believed it was authentic.
They had conducted a similar experiment at the CWI Large Language Models for Media and Democracy Workshop a month prior, and although the core concept of the session was the same, the stimuli and set-up varied, making a direct comparison unfair. Interestingly, the two groups differed in how accurately they detected mislabelled items, which may be due to the different stimuli used or the demographics of the audience. Furthermore, the discussion in the previous session focused more closely on the AI detection process, whereas participants in this session discussed the pros and cons of AI applications in journalism.
Multiple attendees pointed out that AI is useful as a writing aid, as long as the content itself is not entirely AI-generated. This was in line with the general attitude that AI-generated content did not seem to be as engaging as human-authored content.
Like the previous one, the interactive session was engaging and led to interesting insights, especially as part of the diverse and thought-provoking program of the AI’s Impact on Media and Democracy event.


We thank the organizers of the CWI for making this event possible, especially Abdallah El Ali, Eric Pauwels, Pablo Cesar, Davide Ceolin, Laura Hollink, and Valentin Robu.
