The AI Disclosures Project: Transparency How?

Project Overview

As generative AI advances, it is becoming increasingly difficult to distinguish human-created media from AI-generated output. This has raised concerns about misinformation, which can have serious consequences for users and the media sector alike. How can we best deal with AI-generated content in the media, and to what extent must the use of AI be declared?

The AI Disclosures Project aims to address these concerns from various angles. The team is made up of researchers from diverse disciplinary backgrounds, including computer science, law, and social science. Combining their expertise, they approach the question through a range of research initiatives.

Benefits of AI Disclosures

In a survey of citizens of the Netherlands, Sophie Morosoli investigated the public’s general attitude toward the use of generative AI in journalism. The results showed that the majority of participants value being informed about the use of AI. Morosoli also observed that participants felt entitled to know whether they were communicating with AI, and felt manipulated when that information was withheld. Overall, the Dutch citizens surveyed trusted AI less, but their attitude depended in part on the benefits they perceived AI to offer. The study found a correlation between perceived benefits and trust, which suggests that transparency can not only demonstrate the positive aspects of AI but also secure trust between organizations and users.
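To make the reported benefit-trust relationship concrete, here is a minimal sketch of how such a correlation might be computed from Likert-scale survey responses. The variable names and response values are hypothetical illustrations, not the study’s actual data or analysis code.

```python
import numpy as np

# Hypothetical 7-point Likert responses (1 = low, 7 = high) from ten
# illustrative participants; not data from the actual survey.
perceived_benefit = np.array([2, 5, 3, 6, 4, 1, 7, 5, 3, 6])
trust_in_ai = np.array([1, 4, 3, 5, 4, 2, 6, 4, 2, 5])

# Pearson correlation: a positive coefficient indicates that participants
# who perceive more benefits from AI also tend to report more trust in it.
r = np.corrcoef(perceived_benefit, trust_in_ai)[0, 1]
print(f"Pearson r = {r:.2f}")
```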

In response to these results, Morosoli is questioning the effectiveness of transparency labels and how they affect citizens’ trust in AI. For the media sector in particular, it is important to consider that users may have different transparency needs and to explore how the use of AI can best be communicated.

Designing AI Disclosures

How AI disclosures are communicated in practice is another key concern of the AI Disclosures Project. Karthikeya Venkatraj uses a mixed-methods approach to design disclosures that are transparent, informative, and minimally distracting. Previously, he conducted interactive experiments at events together with Abdallah El Ali. To gauge attendees’ opinions and their ability to detect AI in journalistic publications, they presented participants with news pieces that were either human- or AI-generated, each carrying a label, although some labels were deliberately incorrect. Participants were asked whether they believed the label matched the example, whether they found the headline appealing, and whether they found the headline authentic.
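As a rough sketch of the experimental setup described above, the following enumerates the resulting stimulus conditions. The condition names and question wording are our own hypothetical labels, assumed for illustration rather than taken from the researchers’ actual materials.

```python
from itertools import product

# Hypothetical enumeration of the 2x2 stimulus design described above:
# content is human- or AI-generated, and the shown label is either
# correct or deliberately incorrect.
sources = ["human", "ai"]
label_is_correct = [True, False]

for source, correct in product(sources, label_is_correct):
    shown_label = source if correct else ("ai" if source == "human" else "human")
    print(f"content={source:<5}  shown_label={shown_label:<5}  correct={correct}")

# For each stimulus, participants answered three questions:
#   1. Does the label match the content?
#   2. Is the headline appealing?
#   3. Is the headline authentic?
```

Crossing content source with label correctness in this way is one means of separating participants’ actual detection ability from their reliance on the label itself.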

The outcome of this experiment varied from group to group and from stimulus to stimulus, and the AI-generated content was not detected with full accuracy. However, there seemed to be a general consensus that AI-generated content was less engaging than human-authored content. Currently, focus groups with citizens are being conducted to understand their information needs and concerns regarding AI disclosures; these will inform future co-creation sessions with media partners to design and test prototypes.

Who, What, When, Where, Why, How?

The preliminary findings of the AI Disclosures Project are presented in the paper “Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How”, co-authored by Laurens Naudts, Natali Helberger, and Pablo Cesar along with the previously mentioned researchers.

Concluding Observations

  • AI disclosures should be integrated into the interface from the start, not merely added as an afterthought
  • Responsibility should not be left solely to providers, especially in cases of sensitive data
  • Users should have the agency and power to dig deeper
  • AI-generated content should be made detectable and disclosures should enable communication in meaningful ways

Publication

Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How

CHI EA ’24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems

Abdallah El Ali, Karthikeya Puttur Venkatraj, Sophie Morosoli, Laurens Naudts, Natali Helberger, Pablo Cesar
2024