Responsible AI at the BBC – A Collaborative Research Project

“We need to intentionally do good, not simply focus on not doing harm.”
– Rumman Chowdhury at the BRAID launch

Like most media organisations, the BBC has questioned its positionality regarding rapidly evolving AI technology. As part of our agenda to contextualise AI research, we make a point of rooting our research practices in the real and current experiences of media organisations and of collaborating with them on navigating the responsible introduction of (generative) AI into their daily operations.

Over the past year, our researchers Hannes Cools and Anna Schjøtt Hansen have been embedded in the BBC’s Responsible Innovation team, researching how the BBC, as a public broadcaster, navigates the development and use of AI technologies such as large language models. In two complementary projects, Cools and Hansen have been granted the opportunity to learn from the BBC’s practices and, in turn, to share our research findings in applicable ways.

The project ‘Towards Responsible Recommender Systems at BBC?’ aimed to further evaluate how transparency is understood, how it is translated into practice across teams, and what its main challenges are. The central question was how the transparency principles of the BBC’s Machine Learning Engine Principles (MLEP) can effectively be translated and operationalised across teams at the BBC.

‘Exploring AI design processes and decisions as moments of responsible intervention’ aimed to explore the gap between principles and practice in the context of the BBC through an ethnographic enquiry into the ways in which responsible decision-making unfolds in practice, an area of study that remains underexplored. To do so, the project followed and observed ongoing projects in the ‘Personalisation Team’, with the aim of exploring the overarching question: how could responsible AI practices guided by the MLEP principles or editorial values be better integrated into the design process of AI systems within the BBC?

This research cooperation has brought multiple opportunities for the BBC and the AI, Media and Democracy Lab to collaborate and share insights.

Six months ago, we participated in a Responsible AI symposium at BBC Broadcasting House in London. The symposium focused on identifying and addressing industry challenges and on establishing collaborative research agendas for the foreseeable future. How can media organisations move beyond blanket statements on transparency, human oversight, privacy, and similar principles when developing and using AI technologies? As our co-director Natali Helberger put it in her contribution to the symposium: ‘We need to see AI guidelines as a living document, an actionable inclusive dialogue, something that we live, with room for experimentation and failures.’

In July, we were happy to host Rhianne Jones and Bronwyn Jones from the Bridging Responsible AI Divides (BRAID) initiative. BRAID is a partner of the BBC Research and Development department and brings together expertise in human-computer interaction, moral philosophy, arts, design, law, the social sciences, journalism, and AI. Like the AI, Media and Democracy Lab, BRAID works with an interdisciplinary team of researchers and collaborates with organisations to navigate the responsible use of AI in broadcast media.

As this project comes to an end, we thank the BBC and BRAID teams for the collaboration and for the opportunity to learn from each other’s challenges and insights. We look forward to more research collaborations to come.