Public Spaces Conference: Media Content for Responsible AI


On June 6th, the AI, Media, and Democracy Lab held a panel at the 2024 Public Spaces Conference in collaboration with the Open Future Foundation. The discussion focused on the relationship between media content and responsible AI development. Professor Natali Helberger, co-founder of the AI, Media and Democracy Lab, moderated the talk and opened with a question: how can we balance the legitimate moral and economic rights of media organizations against the need to supply responsible European AI models with high-quality data? As Professor Helberger pointed out, high-quality content is the “pixie dust” of the AI industry. But media players are split on whether to license their content to technology giants like Microsoft, whose synthetic content might ultimately displace the media’s own work.

Daan Odijk, head of AI and data at RTL, outlined the precarious position of media outlets. Media groups are beginning to deploy generative AI systems in content production, yet they are simultaneously the rights holders of content that those same systems appropriate without compensation. RTL’s in-house policy is to respect the rights of other organizations and to use AI only as a supplement to human creativity, but Odijk argued that such individual norms are not enough: the EU must establish stronger transparency requirements and firmer rules on the permissible use of data.

While adhering to data protection norms is central to the responsible development of AI models, doing so puts public-interest AI developers at a technical and economic disadvantage. Dayana Spagnuelo (Netherlands Organisation for Applied Scientific Research) described her experience working on GPT.NL, an LLM trained on native, rather than translated, Dutch content and designed to reflect Dutch cultural characteristics. The group uses data only in ways that comply with copyright law, respect privacy, and support transparency, but it has hit development bottlenecks during extensive negotiations with media rights holders over the use of their content.

For individual content creators, the trade-offs are just as fraught. Hanneke Holthuis, general counsel of Picture Rights, a collective management organization representing visual artists, described her experience fighting for stronger protections of proprietary creative work. Some artists see the advent of AI as a “beautiful opportunity” to develop new modes and styles of art; others worry that it will ultimately replace them, as it has begun to do with voice actors. Like media organizations, many artists are split on whether to license their content to AI companies: is doing so a practical necessity, or does it only hasten their eventual displacement? And while the AI Act offers creatives an “opt-out” to protect content from data scraping, the provision does not cover content appropriated before it was clear that the opt-out applied to AI training.

Yet the protection of creative rights and the establishment of stronger data regulations must avoid inadvertently entrenching the power of Big Tech. Allowing large companies to freely appropriate public data, the current paradigm, is not working. But overly restrictive data rights in the public sphere could make things even worse by advantaging big companies that can afford to strike licensing agreements. Paul Keller (Open Future Foundation) argued that Europe will need to learn how to build a common European data space, and to govern that space effectively, in order to support home-grown initiatives like GPT.NL.

The talk touched on a number of issues that will be central to debates over media rights and AI regulation in the coming years. How should European regulators weigh the interests of creators against the need to support endogenous AI development? How can we support creators without reinforcing the market dominance of Big Tech? We would like to thank our speakers Daan Odijk, Dayana Spagnuelo, Hanneke Holthuis, and Paul Keller for their contributions to this important discussion, and our moderator Professor Helberger for guiding the conversation.

Watch the full panel here: