One year of ELSA: A report of our first annual Community Meeting

Last year we started our trajectory as a Dutch Research Council-funded Ethical, Legal, and Societal Aspects (ELSA) Lab. Our overarching goal: to develop and test value-driven, human-centered AI applications and ethical and legal frameworks for responsible use of AI in the media. We do so in close collaboration with our project partners, which include journalists, media professionals, designers, citizens, researchers, and public and societal partners.

Much to look back on

Over the past year, we have been actively involved in a range of research initiatives, outreach events, and conferences, underlining our commitment to exploring the intersection of artificial intelligence, media, and democracy. From provotyping AI tools to participating in conferences (on AI and Democracy, the Future of Journalism, and more), our researchers have been at the forefront of discussions on the impact of AI on our society. We contributed to the regulatory debate on generative AI by co-organizing a workshop with AlgorithmWatch and providing recommendations for the AI Act. Earlier this year, we committed to demystifying generative AI by organizing a discussion series called ‘The Impact We Generate’, covering legal aspects, disinformation, and news media. Together with the Institute for Advanced Study, our host on Tuesdays, we organized a series of meetings called ‘Meet New Methods’, fostering discussions on methodologies that contextualise the impact of new algorithmic technologies.

Tapping into our community with annual meetings

As part of our ambition to proactively gather input from our project partners, we aim to hold annual community meetings to present ongoing research, hear from our partners about pressing matters in their organisations, and, essentially, discuss our research agenda. On Tuesday, November 7th, we held the first of these annual meetings at the Institute for Advanced Study.

Introductory remarks by Claes de Vreese
Sara Spaargaren on milestones of the AIMD Lab

Sharing research insights from year one

To summarize the past year and give our partners an overview of what the AI, Media and Democracy Lab has achieved, we asked four of our researchers to present their projects and explain their work. 

Postdoctoral researcher Laurens Naudts highlighted the significance of studying both the short-term and long-term effects of AI on democracy, given citizens’ increasing exposure to artificially mediated content. In light of the AI Act’s transparency and authenticity provisions, he challenged the assumption that informed citizens alone make for a healthy democracy of protected and empowered citizens. While transparency is a crucial component of understanding and governing AI, he argued that merely knowing that one is interacting with artificial agents is not enough to protect citizens. 

With AI applications on the rise in journalism, postdoctoral researcher Sophie Morosoli conducted a survey to scope individuals’ attitudes towards AI in journalism, measuring their acts of resistance, risk perception, and trust. The study reveals that participants are both impressed by the possibilities of AI and concerned about it, and especially distrustful of its use in “life or death” situations. Generally, the more benefits people see in the application of AI in journalism, the more they trust it and the more credible they find AI-written news. Although some resistance strategies were observed, no direct correlation was found between distrust of AI in journalism and resistance to AI applications. 

Laurens Naudts – A right to authentic communication? A critical perspective on the AI Act’s transparency provisions
Sophie Morosoli – A Survey on Individuals’ Attitudes towards AI in Journalism

Capturing the other side of AI in journalism, postdoctoral researcher Hannes Cools considers how journalists perceive language models and generative AI, and how they evaluate them on accuracy and believability. To scope how journalistic institutions position themselves towards generative AI in the newsroom, Hannes analysed 21 guidelines on responsible AI use published by news organisations. While the guidelines share many values, such as the human-in-the-loop principle, transparency, and a willingness to adapt, they also reveal blind spots: bias, privacy, and sustainability are rarely mentioned. Research projects such as this one also allow news organisations to collaborate and communicate with the lab, creating opportunities for advisory exchange and future partnership. 

Design researcher Simone Ooms aims to bridge academia and industry by making technological research more accessible through comprehensible portfolios, which can then be used in focus group workshops with partners. By involving partners in the selection and scoping of case studies, we gain insight into AI-related needs and challenges in the media ecosystem. A key insight from this practice is the importance of balancing long-term research on technology with its short-term implications, which makes collaboration with partners’ innovation departments crucial to finding the right middle ground. 

Hannes Cools – Responsible AI and Journalism
Simone Ooms – Identifying Case Study Opportunities: Bridging Academia and Industry

Partner presentations

Just as important as our researchers are the partners of the AI, Media and Democracy Lab. We are grateful for the opportunities to exchange and collaborate with both policymakers and the industry, and highly value this partnership. We invited our partners to present their AI-related projects as well as their questions and concerns, to invite discussion and further fruitful collaboration. 

We were joined by Jorien Scholtens and Sela Kooter from the Dutch Media Authority (Commissariaat voor de Media), the institution that supervises compliance with the 2008 Media Act. They are part of the working group on AI and Media, whose efforts include keeping track of AI, which is not yet covered by the Media Act. The group discusses legal developments, specifies core values, and builds connections with companies and other research institutions, a network we are happy to be part of. Its central concern is how to support a democracy in which citizens can form their own opinions; its core values include ensuring diversity in recommender systems, safeguarding users’ privacy and safety, and encouraging the potential of AI to improve accessibility. 

We were happy to welcome Roza Dorresteijn, representing the News Analytics Team (NAT) of DPG Media, a media group spanning Belgium, Denmark, and the Netherlands. NAT values the collaboration with the AI, Media and Democracy Lab for its academic and interdisciplinary perspectives from outside the industry. Our joint work focuses on measuring diversity and inclusion in articles and representation in the news. Future topics of interest to NAT include the role of news media in polarization, measuring objectivity, and the overall question of how to serve everyone with news.

Jorien Scholtens and Sela Kooter from the Dutch Media Authority
Roza Dorresteijn from DPG Media

Opening the floor for discussion, we asked partners and researchers alike what they value in future collaborations and knowledge sharing, and which topics they would like to see on our research agenda for the next six months. Laurens stressed the importance of clarifying terms and definitions to improve communication, as these tend to vary across disciplines and industries. Frank Visser from Media Perspectives highlighted the benefits of bringing academia and industry together through workshops, while Willem Pleiter from DPG Media expressed the ambition to tackle internal guidelines on AI use, especially drawing on the potential of using AI sustainably. 

Concluding thoughts and discussion

Much to look forward to

For the year ahead, we are dedicated to continuing our close collaboration with partners and exploring new research opportunities. We look forward to working more hands-on with technology and testing AI tools in a sandbox setting. 

So far, we have been very lucky with the group of dedicated researchers who have joined us and contributed their expertise and passion to our lab. We are also thankful to our lab partners for welcoming our researchers into their work environments, whether newsrooms, cultural institutions, or regulatory bodies, and for allowing us to bridge the gap between theory and practice. Our lab community’s involvement has been pivotal in shaping our research agenda and fostering an exchange of ideas. We look forward to upcoming innovative collaborations and to strides towards developing a societal vision on the use of AI in the media. 

For more information or to receive the slides from our annual community meeting, feel free to contact us.

Community drinks