To disclose or not to disclose the use of AI to news readers? This is one of the most pressing questions news organisations currently face when working with (generative) AI tools. AI is here to stay, in journalism as everywhere else. News organisations are experimenting with or have already implemented generative AI technologies in their news production processes and are creating internal guidelines as we speak. Some organisations inform their readers that they are using AI, and some don’t. Is this ethical? What do readers want? And what should useful disclosures even look like?
Within our lab, a highly interdisciplinary group of researchers is working to contextualise these questions. To share our insights in a timely manner, the group recently wrote an article for Generative AI in the Newsroom titled ‘Tackling the Transparency Puzzle‘ and organised an online panel discussion on Navigating AI Disclosures in the News.
In the panel discussion moderated by Teresa Elena Weikmann, our researchers Sophie Morosoli, Abdallah El Ali, Laurens Naudts, and Hannes Cools shared our latest empirical insights and ethical/legal considerations around AI disclosures in the news. We were also joined by Katharina Schell, Deputy Editor-in-Chief of the APA – Austria Presse Agentur, who contributed valuable insights from her work on practical frameworks for AI disclosures in journalism.

Laurens Naudts kicked off the discussion by observing that trust is a vital aspect of democracy. If people cannot distinguish between artificial and non-artificial content in the news, they can lose trust. This can reduce their confidence in their capacity to engage in democratic procedures, with harmful effects on the political climate. He stressed that it is important for citizens to have agency when engaging with information, which, in the context of AI disclosures, could take the form of an option to filter AI-generated content out from human-generated content.
Speaking from first-hand experience researching the ins and outs of newsrooms, Hannes Cools offered insights into how news organisations deal with AI disclosures. He observed that, although most news organisations have guidelines on AI, the difficulty lies in translating them into practice. Principles alone cannot guarantee ethical applications of AI if they are not acted upon. Hannes further noted that, although disclosures are desired, labelling can also be ineffective and lead audiences to distrust media organisations in general.
Sophie Morosoli followed up on this observation with complementary results from a recent focus group with Dutch citizens (more information here). Although citizens strongly desire transparency, their answers about what AI disclosures should look like in practice differed. A hierarchy emerged: most participants saw few issues with using AI for spelling checks, and could even imagine AI-generated articles about sports or entertainment, but firmly rejected it for sensitive topics such as politics.
Referring to a recent study on how AI-generated content affects consumers, Abdallah El Ali noted that, generally, humans could not reliably distinguish between human- and AI-generated content. Still, if the suspicion arose that a text was AI-generated, participants perceived its quality to be lower. He highlighted that disclosures do have an effect on consumers, and warned that too many of them can have problematic effects on trust.
Journalism is narration. If we delegate the role of authors to machines, who does the storytelling? – Katharina Schell
Speaking from a hands-on industry perspective, Katharina Schell stressed that editorial decision-making is a crucial moment to consider when talking about the use of generative AI in the newsroom. It is important to be clear about how much agency is given to semi-autonomous actors.
In a discussion round following the presentations, the projects of Abdallah and Sophie were examined for potential overlaps and for clearer indications of what disclosures should look like in practice. Both projects found severe concern about AI creating false information and about people’s inability to recognise it. Both also found that people are sceptical when they come across AI-generated news and often take it with a grain of salt. Abdallah suggested that something beyond a simple label may be necessary in an environment where AI is increasingly present.
Hannes called for a nuanced understanding of trust in news organisations. He argued that, if a news organisation is already perceived as reliable and trustworthy, audiences may be more willing to trust it to use AI responsibly. Here, Hannes sees an opportunity for the sector to take initiatives that showcase responsible AI use, for instance by considering local models instead of commercial LLMs like ChatGPT.
In imagining alternative models for disclosures and content distribution, Katharina warned against establishing paywalls that separate human-generated content from AI-generated content: making human-edited information less accessible threatens democratic structures. Hannes agreed that, from a societal perspective, we do not want purely AI-generated content, but noted that it might nonetheless become a business model in the future.
It is crucial to consider various approaches to disclosing AI in the news, and to understand the needs of both the industry and the audience.
Do you want to learn more about our research around AI disclosures? Then make sure to read our article for Generative AI in the Newsroom below.
