
AI’s Impact on Society, Media, and Democracy
How is AI impacting the media and our democratic processes? What can be expected in the near future, and what measures need to be taken to ensure that society at large benefits from these rapid developments? Powerful AI systems such as large language models are rapidly pervading our digital society and have the potential to profoundly transform people’s public and private lives. There is therefore an urgent need to explore how we can ensure that these developments are enriching, beneficial, and fair for society as a whole.
These topics will be discussed in the Research Semester Programme on AI’s Impact on Society, Media, and Democracy at Centrum Wiskunde & Informatica (CWI), a joint effort by the Intelligent and Autonomous Systems group, the Human-Centered Data Analytics group, and the Distributed & Interactive Systems group. The programme is supported by the larger AI, Media, and Democracy (AIMD) Lab initiative.
Join us for this open, free two-day event at CWI on May 27-28, 2024, together with national and international experts in this area. Slots are limited!
Speakers
Scope
The introduction of AI in the media is fundamentally changing the relationship between content providers and influencers on the one hand and their public on the other, and it can therefore have a profound effect on democratic participation. There is a pressing need to consider the following:
- Transparency and explainability of AI in media to improve trust and acceptance
- Computational techniques to automatically gauge the quality and objectivity of information and information sources
- Elucidating the dynamics of disinformation diffusion and the emergence of polarisation in social networks
- AI safety and algorithmic risk in decision making
- Legal, ethical and policy challenges associated with the use of AI in the media
- AI-mediated communication and the psychology of human-AI interaction
- Human-computer interaction and responsible AI
Tackling the above requires the concerted insights and efforts of experts from various scientific disciplines such as mathematics and computer science, but also from media technologies and emerging legal frameworks. We hope these two days mark the start of an important conversation and of subsequent collaborations that urgently examine the social and political impact of AI.
Schedule
Detailed Schedule
Day 1: Monday, 27 May 2024
All times shown are Central European Summer Time (CEST)
09:00 – 09:45
Registration
09:50 – 10:00
Ton de Kok – Director CWI
Welcome
Morning: Computational Approaches
10:00 – 10:50
Takayuki Ito – Kyoto University
Towards Hyperdemocracy: AI-powered Crowd-scale Discussion Support System
Large-scale online discussion platforms are receiving great attention as potential next-generation methods for smart democratic citizen platforms. Earlier studies have clarified a critical problem faced by human facilitators: the difficulty of facilitating large-scale online discussions. In this talk, I present D-agree, a crowd-scale discussion support system based on an AI facilitation agent, and the large-scale social experiments we conducted in Japan and Afghanistan. I will also present a vision for a hyperdemocracy platform in which multiple AI agents participate in human discussions to achieve more creative and innovative agreements and consensus.
10:50 – 11:40
Mark Klein – MIT
The Deliberative Survey: An AI-Powered Approach to Complex Deliberation at Scale
This talk will describe the latest iteration of my career-long effort to develop software tools that enable more efficient and productive deliberation around complex and controversial topics at scale. The talk will cover the limitations of current deliberation-support technology and provide an introduction to deliberation mapping as a group boundary object. I will also comment on lessons learned and the next steps to be taken.
11:40 – 12:30
Rafik Hadfi – Kyoto University
Rethinking Democratic Governance Through Generative AI
Emerging technologies are transforming institutional decision-making by reducing costs and boosting efficiency. This technological shift is reshaping democratic processes beyond mere digitization. For instance, the internet can improve participatory governance in direct democracies through online deliberation and electronic voting. The adoption of such technological solutions is becoming crucial as numerous long-standing democracies face challenges from increasing populist rhetoric and deepening polarization. In this talk, I will introduce an innovative computational framework that demonstrates how generative AI could transform the way we model democratic decision-making. Generative agents, in particular, offer new opportunities to simulate citizens’ behaviors as they engage in collective decision-making activities, including conversations, deliberations, voting, and participation in polls. These simulations can serve as a testing ground for social choice aggregation procedures before their introduction into society. Finally, I will discuss the factors that set this framework apart as a potential governance model and examine its implications for the evolution of democratic processes.
12:30 – 13:30
Lunch
Afternoon I: Pervasive AI – Ethical and Legal Aspects
13:30 – 14:15
Natali Helberger – University of Amsterdam
Saving democracy? What the European Digital Framework has to say about AI in the media.
Over the past years, the European institutions have rolled out a hyper-ambitious new governance framework for AI, including new instruments such as the Digital Services Act, the Digital Markets Act and the AI Act. Some of these frameworks directly address the media; others may be relevant indirectly. Common to all is the attempt to exercise more democratic control over AI companies, AI and its uses in order to protect fundamental rights and public values. But what exactly are the potential implications of the framework for the media, and for journalism in particular? What concerns about the transformative power of AI for the sector are addressed, and what questions are still left open? And how do the different regulatory initiatives relate to each other? In this presentation, I will provide a first synthesis and critical analysis, and attempt to glance into the future.
14:15 – 15:00
Michiel Bakker – DeepMind (remote)
AI can help humans find common ground in democratic deliberation
Large language models (LLMs) are often associated with deepening political divides. In this talk, I will show how we trained an LLM-based ‘deliberative assistant’ with the objective of generating statements that reflect a consensus view among a group. Human participants preferred the LLM-generated statements to statements written by humans playing the role of mediator, and rated them as more informative, clear and logical. After critiquing these ‘group statements’, discussants tended to update their views and converge on a common position on the issue. Text embeddings suggested that the LLM responded to the critiques by incorporating dissenting voices while not alienating the majority. To test for external validity, we mounted a virtual citizens’ assembly with a demographically representative sample of UK residents and found that the AI-mediated process allowed people to find agreement on controversial political issues.
15:00 – 15:30
Coffee / Tea break
Afternoon II: Pervasive Human-AI Interaction: Societal Implications
15:30 – 16:10
Linda Kool – Rathenau Instituut
Staying human in a world of robots. Opportunities and risks of generative AI.
In November 2022, OpenAI surprised the world with the public launch of the chatbot ChatGPT. Suddenly, it seemed, artificial intelligence was able to master language, a long-standing challenge in AI. Expectations for these new systems are therefore very high: businesses, policy makers and tech experts predict transformative changes. At the same time, there is criticism: how powerful can these systems become? What are the risks? And how should we handle current problems such as bias, unreliable results or the use of these systems to create disinformation? In this lecture, I will show the implications of generative AI for our economy, society and democracy, both positive and negative. I will highlight the opportunities of generative AI to address societal challenges and point to the fundamental ethical and societal questions it poses. Finally, I will outline the main responses from Dutch and European policy makers to these issues.
16:10 – 16:50
Bennie Mols – Author / Speaker / Science Journalist
The art of human-AI collaboration
AI writes texts at lightning speed, recognizes faces, supports doctors and makes cars partially self-driving. But no matter how much progress AI has made in recent years, humans still outperform AI in numerous cognitive skills. In some areas even toddlers outperform AI, such as in understanding what another person wants or feels. I will discuss the relationship between human and artificial intelligence. In what ways is AI better than humans? In what ways are humans better than AI? How can humans and AI work together successfully? And what are the societal implications of AI’s ever-increasing role?
16:50 – 17:30
Mor Naaman – Cornell Tech (remote)
Potential risks and harms of AI-Mediated Communication
From autocomplete and smart replies to video filters and deepfakes, we increasingly live in a world where communication between humans is augmented by artificial intelligence: AI-Mediated Communication (or AI-MC). My talk will briefly outline some of the potential impacts AI-MC might have on both senders and receivers of communication, based on experimental research in my lab over the last seven years. For example, our research shows that AI-MC involvement can affect how we evaluate the trustworthiness of others, and shift not only what we write but even our expressed attitudes. Overall, AI-MC raises significant practical and ethical concerns as it reshapes human communication, calling for new approaches to the development and regulation of these technologies.
17:30
Drinks
Day 2: Tuesday, 28 May 2024
Morning: Trustworthy AI for Journalism and Democracy
09:00 – 09:45
Registration
09:50 – 10:00
Welcome
10:00 – 10:50
Mathias Felipe De Lima Santos – Macquarie University (remote)
AI to Empower Media and Democracy in the Global South
The rapid advancement of artificial intelligence (AI) has profoundly impacted various sectors, including media and journalism. However, the influence of AI on media and democracy in the Global South remains a critical yet under-explored area. The Global South, which encompasses Africa, Asia, Latin America and the Caribbean, faces unique challenges in terms of media freedom, access to reliable information, and the prevalence of mis- and disinformation. This talk examines the potential of leveraging trustworthy AI to strengthen media ecosystems and democratic processes in the developing regions of the world. It will explore case studies and best practices from various regions, highlighting how AI-driven solutions can be tailored to the local contexts and needs of the Global South. It will also address the ethical considerations and potential risks associated with deploying AI in these regions, ensuring that the technology is developed and implemented in a manner that respects human rights, privacy, and the principles of inclusive and equitable access.
10:50 – 11:40
Interactive session by Abdallah El Ali + Karthikeya Puttur Venkatraj (CWI)
Generative AI Disclosures
Advances in generative artificial intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector. While the currently discussed European AI Act aims to address these risks through Article 52’s AI transparency obligations, its interpretation, implications and subsequent enforcement remain unclear. In this playful, interactive session, we will collectively explore how we perceive human- versus AI-generated news media content. By exploring the issues surrounding AI detection in journalistic publications, we can take steps toward more sensible transparency disclosures.
11:40 – 12:30
S. Shyam Sundar – Penn State University
Interactivity and Democracy: Media Effects in the Age of AI
This talk will discuss the psychology of human-AI interaction in the contexts of online political expression, fake news and content moderation, guided by the speaker’s theory of interactive media effects (TIME), and will draw implications for socially responsible AI media for the future of democracy.
12:30 – 13:30
Lunch
Afternoon I: Responsible AI
13:30 – 14:15
Cynthia Liem – Delft University of Technology
On incentives and perceptions of success
Having originally practiced and researched questions of taste expansion in music and cultural heritage, and having since evolved into the supervisor of an interdisciplinary lab and a publicly sought-after expert on responsible AI, I have frequently ‘resided’ in zones where what I thought important was not trivially incentivized. In this talk, I would like to reflect on this, arguing that both responsible AI practice and responsible digital information landscapes require transdisciplinary perspectives, which however come with serious and non-trivial transaction costs. With this, I would like to trigger a discussion on how this can be better facilitated, and how best practices can be more proactively exchanged between relevant (but differing) stakeholders and disciplinary perspectives.
14:15 – 15:00
Pascal Wiggers – Amsterdam University of Applied Sciences
Putting responsible AI into practice
Interest in the ethics of AI has been steadily increasing over the past years. This has led, among other things, to a large number of ethics guidelines, which often stress value-driven AI development, and to AI impact assessments that help to think through the potential ethical, social and legal impact of AI systems. However, practitioners who want to develop or deploy responsible AI struggle with the question of how to translate ethical principles into their day-to-day work. In this talk, I will share examples of ethical tools that help developers and users of AI put ethics into practice.
15:00 – 15:30
Coffee / Tea break
Afternoon II: Algorithmic Responsibility and Explainable AI
15:30 – 16:10
Henriette Cramer – PaperMoon.AI
So, now what? Practical routes -and detours- in (relatively) safe(r) & quality AI
Decades of work are available on both AI and the impact of technology. However, there are no clear, readily applicable standards on exactly how to build and test for both positive and negative impact. In parallel, AI development has sped up considerably. New incentives are necessary to address the wide gap between impact debates and industry practice. Using examples from conversational bots, recommender systems and search, this talk will reflect on lessons learnt when trying to address algorithmic responsibility, as well as on translation between different AI and data science communities.
16:10 – 16:50
Nava Tintarev – Maastricht University (remote)
Interactive explanations for media
Public opinion is increasingly informed by online content and spread via social media. This information is curated by algorithms that can amplify certain viewpoints. This viewpoint bias is further amplified when users select information that confirms their pre-existing beliefs. My aim is therefore to increase opinionated people’s awareness of viewpoints other than their own. However, simply showing challenging viewpoints to people with strong opinions is likely to backfire. I propose solutions for how to display or explain the selection of content (e.g., in online search or news). The only way to create explanations that truly support people is to account for our reasoning flaws, such as the confirmation bias that makes us ignore information contrary to our expectations. This talk will describe some state-of-the-art explanations in social media and news that can help mitigate the biases of both systems and people.
16:50
Drinks
Organisation
Support
Practical Information
The event will be held at CWI (Turing room), Science Park 125, Amsterdam, on May 27-28, between 10:00 and 18:00.
Learn about the CWI Research Semester Programme: