Dates:
27 May 2024 – 28 May 2024

How is AI impacting the media and our democratic processes? What can be expected in the near future, and what measures need to be taken to ensure society at large benefits from these rapid developments? Powerful AI (such as Large Language Models) is rapidly pervading our digital society and has the potential to profoundly transform people’s public and private lives. There is therefore an urgent need to explore how we can ensure that these developments are enriching, beneficial and fair for society as a whole.

These topics will be discussed in the Research Semester Programme on AI’s Impact on Society, Media, and Democracy at Centrum Wiskunde & Informatica (CWI). The programme is a joint effort of the Intelligent and Autonomous Systems group, the Human-Centered Data Analytics group, and the Distributed & Interactive Systems group, and is supported by the larger AI, Media, and Democracy (AIMD) Lab initiative.

Join us on 27–28 May 2024 for this free, open two-day event at CWI (slots are limited!), together with national and international experts in this area.

Speakers’ affiliations: DeepMind, PaperMoon.AI, Macquarie University (Remote), Kyoto University, University of Amsterdam, Kyoto University, Rathenau Instituut, Delft University of Technology, Author / Speaker / Science Journalist, Cornell Tech (Remote), Penn State University, Maastricht University, and Amsterdam University of Applied Sciences.

The introduction of AI in the media is fundamentally changing the relationship between content providers and influencers on the one hand and their public on the other, and can therefore have a profound effect on democratic participation. There is thus a pressing need to consider the following:

  • Transparency and explainability of AI in media to improve trust and acceptance
  • Computational techniques to automatically gauge the quality and objectivity of information and information sources
  • Elucidating the dynamics of disinformation diffusion and the emergence of polarisation in social networks  
  • AI safety and algorithmic risk in decision making
  • Legal, ethical and policy challenges associated with the use of AI in the media
  • AI-mediated communication and the psychology of human-AI interaction
  • Human-computer interaction and responsible AI

Tackling the above requires the concerted insights and efforts of experts from various scientific disciplines, such as mathematics and computer science, but also from media technologies and emerging legal frameworks. We hope these two days provide the start of an important conversation, and of subsequent collaborations, to urgently examine the social and political impact of AI.

Click on a slot to see more details

All times shown are Central European Time (CET)

09:00 – 09:45

09:50 – 10:00


Morning: Computational Approaches


10:00 – 10:50

Towards Hyperdemocracy: AI-powered Crowd-scale Discussion Support System

Large-scale online discussion platforms are receiving great attention as potential next-generation methods for smart democratic citizen platforms. One of our studies identified a critical problem faced by human facilitators: the difficulty of facilitating large-scale online discussions. In this talk, I present D-agree, a crowd-scale discussion support system based on an AI facilitation agent, and report on large-scale social experiments we conducted in Japan and Afghanistan. I will also present a future vision of a hyperdemocracy platform in which multiple AI agents participate in human discussions to achieve more creative and innovative agreements and consensus.

10:50 – 11:40

The Deliberative Survey: An AI-Powered Approach to Complex Deliberation at Scale

This talk will describe the latest iteration of my career-long effort to develop software tools that enable more efficient and productive deliberation on complex and controversial topics at scale. The talk will cover the limitations of current deliberation-support technology and provide an introduction to deliberation mapping as a group boundary object. I will also comment on lessons learned and the next steps to be taken.

11:40 – 12:30

Rethinking Democratic Governance Through Generative AI

Emerging technologies are transforming institutional decision-making by reducing costs and boosting efficiency. This technological shift is reshaping democratic processes beyond mere digitization. For instance, the internet can improve participatory governance in direct democracies through online deliberation and electronic voting. The adoption of such technological solutions is becoming crucial as numerous long-standing democracies face challenges from increasing populist rhetoric and deepening polarization. In this talk, I will introduce an innovative computational framework that demonstrates how generative AI could transform the way we model democratic decision-making. Generative agents, in particular, offer new opportunities to simulate citizens’ behaviors as they engage in collective decision-making activities, including conversations, deliberations, voting, and participation in polls. These simulations can serve as a testing ground for social choice aggregation procedures before their introduction into society. Finally, I will discuss the factors that set this framework apart as a potential governance model and examine its implications for the evolution of democratic processes.


12:30 – 13:30


Afternoon I: Pervasive AI – Ethical and Legal Aspects


13:30 – 14:15

Saving democracy? What the European Digital Framework has to say about AI in the media.

Abstract (TBA)

14:15 – 15:00


15:00 – 15:30


Afternoon II: Pervasive Human-AI Interaction: Societal Implications


15:30 – 16:10

Staying human in a world of robots. Opportunities and risks of generative AI.

In November 2022, OpenAI surprised the world with the public launch of the chatbot ChatGPT. Suddenly, it seemed, artificial intelligence was able to master language, long a significant challenge in AI. Expectations for these new systems are therefore very high: businesses, policy makers and tech experts predict transformative changes. At the same time, there is criticism: how powerful can these systems become? What are the risks? And how should we handle current problems, such as bias, unreliable results or the use of these systems to create disinformation? In this lecture, I will show the implications of generative AI for our economy, society and democracy, both positive and negative. I will highlight the opportunities of generative AI to address societal challenges, and point to the fundamental ethical and societal questions it poses. Finally, I will outline the main responses from Dutch and European policy makers to address these issues.

16:10 – 17:00

The art of human-AI collaboration

AI writes texts at lightning speed, recognizes faces, supports doctors and makes cars partially self-driving. But however much progress AI has made in recent years, humans still outperform it in numerous cognitive skills. In some areas, such as understanding what another person wants or feels, even toddlers still beat AI. I will discuss the relationship between human and artificial intelligence: where is AI better than humans, and where are humans better than AI? How can humans and AI work together successfully? And what are the societal implications of AI’s ever-increasing role?

17:00 – 17:40

Potential risks and harms of AI-Mediated Communication

From autocomplete and smart replies to video filters and deepfakes, we increasingly live in a world where communication between humans is augmented by artificial intelligence: AI-Mediated Communication (or AI-MC). My talk will briefly outline some of the potential impact AI-MC might have on both senders and receivers of communications, based on experimental research in my lab over the last seven years. For example, our research shows that AI-MC involvement can affect how we evaluate the trustworthiness of others, and shift not only what we write but even our expressed attitudes. Overall, AI-MC raises significant practical and ethical concerns as it reshapes human communication, calling for new approaches to the development and regulation of these technologies.


17:40



Morning: Trustworthy AI for Journalism and Democracy


10:00 – 11:00

AI to Empower Media and Democracy in the Global South

The rapid advancement of artificial intelligence (AI) has profoundly impacted various sectors, including media and journalism. However, the influence of AI on media and democracy in the Global South remains a critical yet under-explored area. The Global South, which encompasses Africa, Asia, Latin America, and the Caribbean, faces unique challenges in terms of media freedom, access to reliable information, and the prevalence of mis- and disinformation. This talk examines the potential of leveraging trustworthy AI to strengthen media ecosystems and democratic processes in the developing regions of the world. It will explore case studies and best practices from various regions, highlighting how AI-driven solutions can be tailored to the local contexts and needs of the Global South. It will also address the ethical considerations and potential risks associated with the deployment of AI in these regions, ensuring that the technology is developed and implemented in a manner that respects human rights, privacy, and the principles of inclusive and equitable access.

11:30 – 12:30

Interactivity and Democracy: Media Effects in the Age of AI

This talk will discuss the psychology of human-AI interaction in the contexts of online political expression, fake news, and content moderation, guided by the speaker’s theory of interactive media effects (TIME), and will draw out implications for socially responsible AI media for the future of democracy.


12:30 – 13:30


Afternoon I: Responsible AI


13:30 – 14:25

On incentives and perceptions of success

Having originally practiced and researched questions of taste expansion in music and cultural heritage, and having since evolved into the supervisor of an interdisciplinary lab and a publicly sought-after expert on responsible AI, I have frequently ‘resided’ in zones where what I thought important was not trivially incentivized. In this talk, I would like to reflect on this, arguing that responsible AI practice and responsible digital information landscapes both require transdisciplinary perspectives, which, however, come with serious and non-trivial transaction costs. With this, I would like to trigger a discussion on how this can be better facilitated, and how best practices can be exchanged more proactively between relevant (but differing) stakeholders and disciplinary perspectives.


15:00 – 15:30


Afternoon II: Algorithmic Responsibility and Explainable AI


15:30 – 16:10

So, now what? Practical routes – and detours – in (relatively) safe(r) & quality AI

Decades of work are available on both AI and the impact of technology. However, there are no clear, readily applicable standards on how exactly to build and test for both positive and negative impact. In parallel, AI development has sped up considerably. New incentives are necessary to address the wide gap between impact debates and industry practice. Using examples from conversational bots, recommender systems and search, this talk will reflect on some lessons learnt when trying to address algorithmic responsibility, as well as on translation between different AI and data science communities.

16:10 – 17:00

Interactive explanations for media

Public opinion is increasingly informed by online content spread via social media. This information is curated by algorithms that can amplify certain viewpoints. This viewpoint bias is further amplified when users select information that confirms their pre-existing beliefs. My aim is therefore to increase opinionated people’s awareness of viewpoints other than their own. However, simply showing challenging viewpoints to people with strong opinions is likely to backfire. I propose solutions for how to display or explain the selection of content (e.g., in online search or news). The only way to create explanations that truly support people is to consider our reasoning flaws, such as the way confirmation bias makes us ignore information contrary to our expectations. This talk will describe some of the state-of-the-art explanations in social media and news that can help mitigate the biases of both systems and people.


17:00


The event will be held at CWI (Turing room), Science Park 125, on 27–28 May, between 10:00 and 18:00.


Learn about the CWI Research Semester Programme:

