Events
Discussion series on Generative AI: ‘The impact we generate’
21 March, 18 April & 30 May 2023 at 16:00

In recent months, increased media attention has been given to AI-driven applications such as Stable Diffusion, DALL-E, GPT-3, ChatGPT and Bard. These programmes are commonly referred to as Generative AI: technologies that learn from existing data in order to produce new content, including audio, (realistic) images and art, chat, text and code. Though these technological innovations have sparked renewed enthusiasm and interest in the field of AI, their potential to disrupt and transform has also been met with concern.
In an effort to demystify the use of Generative AI, the AI, Media and Democracy Lab is organising a series of events exploring the pitfalls and possibilities of the technology’s transformative impact on the media and journalism industries: how can the sector reap the benefits of artificially created content in line with the industry’s values, such as personal creativity, autonomy, user agency, transparency, objectivity, diversity, trust and authenticity?
Discussion #2 | News Media and Generative AI
Date and time: Tuesday 18 April, 16.00-17.30
Location: UvA Institute for Advanced Study (IAS) or online
For our second session, we will explore the practical challenges of Generative AI for news media. How does the integration of Generative AI benefit and challenge the various phases of news reporting, including news gathering, production, distribution and interaction? Will the technology drastically alter both the process and the product of news media and journalism, and if so, which technological characteristics play a central role in this transformation? The media landscape is no stranger to digital change and disruption. The arrival of the Internet, the growth and popularity of crowd-sourcing websites and social media platforms, and the emergence of data analytics have equally challenged the way news is provided and consumed. Indeed, the concerns news media face today (e.g., the need to verify the reliability of AI-generated information and the creative relationship between journalists and AI) sound strikingly similar to those experienced before. Even if the issues raised by Generative AI are truly distinctive (are they?), the industry may nonetheless draw on the lessons it has learned from the past.
Discussion #3 | The Social Impact of Generative AI
Date and time: Tuesday 30 May, 16.00-17.30
Location: UvA Institute for Advanced Study (IAS) or online
In this final session, we take a bird’s-eye perspective and consider the wider social impact that the use of Generative AI in news media and journalism may have. News media and journalism perform a key democratic function: they act as a public watchdog and offer citizens a platform to impart, seek and receive information and to engage in public and political dialogue. At the same time, news media are also a commercial product and a source of entertainment, artistic and cultural expression. To what extent does the integration of Generative AI in the media value chain affect the various societal functions news media and journalists perform, and could this, in turn, alter social dynamics, including the way people and groups participate, engage and interact with one another? And in a situation where anyone can produce high-quality texts with the help of Generative AI, what exactly will the role of journalism be?
We explore how media actors can provide responsible organisational and technological answers to the social challenges of Generative AI that are sufficiently robust, i.e., developed with a future-oriented rather than short-term mindset. To truly understand the social impact of these systems, one needs to understand the underlying technological processes. Hence, proper attention must also be paid to the various design choices made in the run-up to Generative AI’s development and introduction in the media landscape, and to how the envisaged social impact can be traced back to these choices, in order to adequately assess and address the technology’s main risks and maximise its positive effects.
Previous sessions
Discussion #1 | Legal Aspects of Generative AI
Date and time: Tuesday 21 March, 16.00-17.30
Location: Institute for Information Law (IViR) or online
In this first panel discussion, we will dive into legal and governance aspects of the technology’s proliferation and its impact on media industries. Drawing from a variety of legal fields, we will attempt to formulate answers to pressing industry, social and public policy questions, including: How can the rights and interests of creators be protected, and should AI-generated content itself be protected (intellectual property)? How can fair access, competition and choice be ensured in the market for Generative AI (competition law)? Is it permissible to train large language models on personal information collected from publicly accessible (online) sources or during user interaction (privacy and data protection law)? How should the presence of bias and prejudice in AI-produced content be addressed (equality and non-discrimination law)? When is Generative AI high-risk and subject to more stringent regulation (AI law)? At the same time, the question must be raised: are these technologies all that revolutionary, or do they present familiar questions and dilemmas in renewed ways? Through an open dialogue, this discussion hopes to further practical and academic debate on the responsible deployment of Generative AI in media and journalism and to help establish a legal research agenda for doing so.
The AI, Media and Democracy Lab aims to bring together different legal and societal perspectives, from both research and practice. The format is interactive, with short pitches and plenty of room for questions and discussion. The event will be organised in a hybrid format. Physical attendance is limited to 20 participants and will be allocated on a first-come, first-served basis.
Panellists
Ot van Daalen
Ot van Daalen is a researcher and lecturer in the field of privacy and security at the Institute for Information Law. He is also an attorney at Root Legal. Previously, he worked at the Dutch Data Protection Authority and founded the Dutch digital rights movement Bits of Freedom.
Viktorija Morozovaite
Viktorija Morozovaite is a PhD candidate at Utrecht University School of Law, a member of the Renforce research group and of the Governing the Digital Society focus area. Her research examines user-influencing practices, such as hypernudging, from the perspective of European competition law and the EU’s emerging digital policy on regulating digital markets. Her research is part of the Modern Bigness ERC project, led by Anna Gerbrandy. Viktorija is a former Wirtschaftskammer Steiermark Fellow at the University of Graz and a former visiting scholar at the Annenberg School for Communication at the University of Pennsylvania.
Natali Helberger
Natali Helberger, KNAW member, is Distinguished University Professor of Law and Digital Technology with a special focus on AI at the University of Amsterdam and a member of the board of directors of the Institute for Information Law (IViR). She co-founded two Research Priority Areas at the UvA: Information, Communication, and the Data Society and Human(e) AI – university-wide research programs and hubs for researchers from the social sciences, humanities, and computer science to advance a societal perspective on AI. In 2021, Natali co-founded the AI, Media & Democracy Lab.
Martin Senftleben
Martin Senftleben is Professor of Intellectual Property Law and Director of the Institute for Information Law (IViR) at the Amsterdam Law School. His activities focus on the reconciliation of private intellectual property rights with competing public interests of a social, cultural or economic nature. Current research topics include institutionalized algorithmic copyright enforcement in the EU, the interplay between robot creativity and human literary and artistic productions, the preservation of the public domain of cultural expressions, and the impact of targeted advertising on supply and demand in market economies.
Naomi Appelman
Naomi Appelman is a PhD researcher at the Institute for Information Law (IViR) interested in the role of law in online exclusion, speech governance, and platform power. Her interdisciplinary research combines information law, specifically online speech and platform regulation, with (agonistic) political philosophy. More concretely, her research asks how European law should facilitate contestation of the content moderation systems governing online speech. The aim of facilitating this contestation is to minimise undue exclusion, often of already marginalised groups, from online spaces and to democratise the power over how online speech is governed. Her PhD is part of the Digital Transformation of Decision-making project and the Digital Legal Studies sector plan. She was a visiting researcher at the Humboldt Institute for Internet and Society. Connected to her PhD research, she has co-authored several reports and papers on online speech regulation and automated decision-making. Finally, Naomi has previously volunteered at the Dutch digital rights NGO Bits of Freedom and is one of the founders of the Racism and Technology Center.
Philipp Hacker
Prof. Dr. Philipp Hacker, LL.M. (Yale), holds the Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies (ENS) at European University Viadrina Frankfurt (Oder). In 2021, he was a Research Fellow at the Weizenbaum Institute Berlin. Prior to joining ENS, he served as an AXA Postdoctoral Fellow at the Faculty of Law of Humboldt University of Berlin, a Max Weber Fellow at the European University Institute, and an A.SK Fellow at the WZB Berlin Social Science Center. His research focuses on the intersection of law and technology. In particular, he analyses the impact of AI and the IoT on consumer, privacy, anti-discrimination, and general regulatory law. He often cooperates with computer scientists and mathematicians, especially on questions of explainable AI and algorithmic fairness.