The Audience Perspective on How and Why News Organizations Should Disclose Their Use of AI
Written by Sophie Morosoli, Hannes Cools, Karthikeya Venkatraj & Laurens Naudts
Although the AI Act does not require it, we know that individuals desire to be informed about the use of AI in journalism. So far, little research has been done to investigate their specific information needs. Our researchers explore this question in a piece published in Generative AI in the Newsroom.

Research on generative AI (GenAI) and journalism often focuses on the uses and perceptions of the technology, with comparatively less attention given to audiences’ needs when it comes to disclosing the use of AI. This blog post explores the complex issue of transparency in AI-driven journalism, drawing on recent research at the AI, Media and Democracy Lab of the University of Amsterdam into Dutch citizens’ views and needs around the disclosure of AI use in news. First, we present the legal perspective and summarize recent scholarly work on individuals’ desire for AI disclosures. Then we highlight key concerns and wishes that came out of our focus group research. Finally, we reflect on the implications and propose some sound practices for AI disclosures. These findings offer news organizations insight into citizens’ societal concerns, their desire for transparency, and the kinds of disclosure that may help rebuild trust in an AI-augmented media landscape.
In recent years, the media sector has been actively experimenting with (generative) AI across the entire journalistic value chain. Research by The Associated Press has shown that AI is currently used most prominently for content production, including generating social media posts, news headlines, and drafts of stories. Similarly, other research has found that translation and information gathering are among the main uses of AI in newsrooms. With rapid advancements in generative AI, it is likely only a matter of time before most newsrooms are leveraging these powerful tools, often provided by major tech companies like OpenAI, Google, and Microsoft. This growing dependency on big tech, as well as the increasing use of GenAI by journalists, forces news organizations to think about how to deploy these technologies responsibly, sparking questions like: “Which provider should we use?”, “Which uses are banned and which are allowed?” and “Should we communicate such uses to our audiences or not?”
To help address some of these questions, regulatory frameworks and guidelines could establish forward-facing accountability measures concerning the use of AI throughout the entire journalistic value chain. Both lawmakers and the news industry, however, are cautious about regulating journalistic best practices too sharply, as this can clash with free speech interests. For example, in the European Union, the recently adopted AI Act requires disclosure statements when content is created and/or manipulated using artificial intelligence. News media organizations, however, appear to enjoy an exemption. Most notably, in the case of artificially generated and/or manipulated text content (audiovisual deepfakes remain subject to heightened scrutiny), disclosure duties only apply when the content is “published with the purpose of informing the public on matters of public interest”. Depending on one’s interpretation of the law, this obligation might therefore only cover a minimal set of news topics. Yet even in the public interest scenario, no disclosure obligation exists where the content “has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content”. Furthermore, as the AI Act does not label the media sector as high-risk, media organizations are under no immediate and clear legal duty to adopt responsible design, development, and deployment strategies.
Individuals’ Desire for Disclosure: What Current Research Says
Even though news organizations do not have to disclose the use of AI under the AI Act, the question arises whether individuals perceive news that has been produced by or with the help of artificial intelligence differently at all. Do they trust the information? Do they think this type of news is credible?
Previous research shows mixed findings. On the one hand, research shows that news labeled as AI-generated is perceived as more accurate, objective, and impartial because it lacks basic human characteristics such as beliefs and emotions. In this context, empirical evidence has suggested that AI-written news is evaluated as more credible, and as reflecting greater journalistic expertise, than human-generated news. On the other hand, scholars have also argued that AI as a news source is perceived as less credible than human journalists, and that individuals have a general tendency to reject artificial intelligence. Other research has linked this AI aversion to reasons such as general skepticism toward technological innovation or overarching concerns about AI systems being a “black box” lacking essential journalistic values. Recent research has also identified a so-called “disclosure paradox”: audiences have a strong desire to be informed through disclosure statements about the use of AI in journalism, but at the same time, exposure to these statements can also increase readers’ distrust.
The current research thus appears contradictory and inconclusive about how individuals feel about AI-generated news content, and little research has investigated what information needs individuals might have.
Dutch Citizens’ Attitudes towards AI in News: Concerns, Needs, and the Appearance of Disclosures
Knowing that individuals desire to be informed about the use of AI in journalism, but that results on the perceptions of AI-generated news are inconclusive, we set out to empirically investigate why news organizations should or should not disclose their use of AI and what AI disclosures should look like according to individuals’ wishes. To get to the bottom of this, we conducted three sets of focus group interviews with Dutch citizens (N = 21), which allowed us to get in-depth and dynamic responses from the participants. (For more context on the methods, see the expanded explanation in the appendix.)
The focus groups gave us valuable insights into the concerns people have regarding AI-generated news and their desires regarding AI disclosures. First of all, we see that the use of AI in news invokes worries and concerns among citizens. For instance, our participants were generally worried that AI might create false and misleading information. They also stressed more far-reaching concerns such as polarization, job displacement, and the fact that AI can be used to manipulate individuals and their beliefs (e.g., through deepfakes and conspiracy theories), which can lead to an erosion of trust. As one person said, “There is a risk that the whole population becomes distrustful. Now we can still pretty much all trust each other but if at some point you can no longer tell whether something is real or fake. I am a little afraid that we will all be very distrustful of each other and of everything we send” (Anna, 23).
Given these concerns, in a second step, we asked citizens what they need in order to recognize that the content they are seeing was AI-generated and how this information could be useful to them. Overall, we observe a pronounced desire for disclosure, in the sense that news organizations should communicate their use of AI no matter what. As one participant put it: “I personally would have no problem with an article being written by AI. I just would like to know, but I think it can be very useful and indeed also to put more than one kind of information together. And yes, I would still just read that and make use of articles if indeed there were proper source citations” (Marie, 25). Connected to disclosure, we find a specific emphasis on clear labeling and source referencing. The interviewees underlined that if a news organization uses AI to generate content, the organization should make that very clear to the audience in the form of a distinct label. For instance: “As long as it is mentioned somewhere and that can be through an icon, that can be through text, but that plain and simple, it is clear that it is indeed AI generated” (Marc, 56). Additionally, news organizations should provide disclosure statements so that individuals have the option to trace back the sources used by the AI system to generate content: “Maybe also where they get the information from? Of course, it could be a combination of articles, but whether something is really news, or whether it is rewritten old news” (Helena, 23).
Lastly, and in the absence of both regulatory and industry-driven guidelines, we asked citizens what effective disclosure labels should ideally look like. Across the three focus groups, there was striking agreement that disclosure labels should stand out visually, whether in size, color, or placement: “Shouldn’t you make the statements bigger? […] Because a lot of times you don’t look at who wrote it. And so, if you do a little caption “created by AI” I don’t think people are going to see that […]” (Margot, 41). “I think at the top. Then you know right away if you want to read it” (Theo, 32).
Beyond these concrete visual characteristics, some individuals also desire a watermark. One person pointed out the following: “For me then, the preference would be for a watermark which is not removable, and also indeed, at the time you print it out still visible” (Ratna, 25). Some individuals also stated that they would like labels that disclose different levels of AI use, because they were more accepting of certain uses than of others. Labels could specify why generative AI was used in specific cases. For example, disclosure statements could differ if AI was used for a headline, to summarize the article, or to write a first draft. Likewise, labels could indicate news organizations’ dependency on AI: “I would maybe like more options, in the sense of ‘fully generated’, ‘partially generated’, or ‘checked by AI’” (Erik, 32). This finding is consistent with the arguments the Trusting News project brings forward. A majority of the participants were in favor of a logo that is easily recognizable and connected to a certain sense of (institutional) accountability: “Of course, it also has to be a recognized logo. You just see that in the normal world that there are a lot of fake logos, and that people are being misled anyway. So it has to be something, something official” (Bart, 63).
All in all, we conclude that if news organizations consider disclosure statements, these should visually stand out so that individuals’ attention is drawn to the label before they look at the content. Furthermore, labels in the form of a logo or watermark are most desired when connected to an independent institution or to certain (journalistic) values. Such disclosures enable individuals to make an informed decision to opt in to, or out of, AI-generated content.
Three Key Learnings for News Organizations
Our findings highlight that the individuals who participated in our study want to be informed about the use of AI in journalism. We identified three key learnings for news organizations based on individual preferences:
- Disclosure is expected, but find a balance: People want to know when AI is used in news production. News organizations should clearly disclose AI involvement through labels, watermarks, or other visual indicators. These disclosures can help mitigate the perceived risks and concerns people hold when it comes to AI-generated news content. Highlighting the benefits AI can have for journalism might be another way to reduce risk perceptions and could be included in disclosure statements. At the same time, news organizations should try to strike a balance in these disclosure statements, as too much disclosure might also lead to information overload.
- Detailed information matters: There is no such thing as “the audience”, which means that different audiences require different forms and levels of disclosure about how AI was used (e.g., for headlines or grammar checks), based on their individual needs. News organizations could try to map these different audience needs, which can help establish credibility and potentially reduce concerns about news organizations manipulating content.
- Disclosure alone is not enough: While important, disclosure should be combined with efforts to improve digital literacy and critical thinking. This more comprehensive approach can better empower citizens to navigate AI-generated content. News organizations could consider involving their audiences more in conversations about how specific uses of AI should be disclosed, and engage in user evaluations, such as A/B testing, once valuable disclosures have been identified.
Keeping these key learnings in mind, we conclude that being open about using AI in news is a good idea, but we believe it won’t solve all the trust issues the industry is facing. As AI adoption keeps evolving, determining the who, what, where, when, and how of disclosing specific information to audiences remains vital. Furthermore, more research is needed into the potential backfire effects of disclosures, to establish whether transparency labels are even the right way to go about this. At the AI, Media and Democracy Lab, we will continue studying the impact of AI disclosures through different methods to get an even better understanding of the citizen perspective on AI and journalism.
DISCLOSURE: This article was written by the human co-authors. Some sentences in the introduction and in the results sections were adjusted and made more understandable with the help of Perplexity.AI. The prompt used was: “Please make the following sentence(s) more understandable”. The picture was generated by DALL-E using the prompt: “Create an image that visually represents the concept of focus groups discussing artificial intelligence (AI) and the importance of disclosure and transparency. The scene should show a diverse group of people in a professional setting, seated around a large table.”
Appendix: Expanded methodology
Three sets of focus group interviews with Dutch citizens (N = 21) were conducted in June 2024. We aimed for diversity in each focus group in terms of age, gender, and education level. The participants were asked a set of predetermined questions that included, but were not limited to, their general attitudes towards AI in journalism, their understanding of AI’s role in news production, and their expectations regarding disclosure. Each focus group lasted about one hour, and the transcripts of the audio recordings were translated from Dutch to English. After that, a group of four coders (all authors) coded the transcripts in NVivo, identifying the key themes of the interviews. The names connected to the quotes are pseudonyms. You can send us an email if you want to know more about the methodology.