Out now: ‘Generative AI in Journalism’ report in collaboration with The Associated Press

(Associate) lab members Natali Helberger, Hannes Cools and Nicholas Diakopoulos have published a new report in collaboration with The Associated Press, co-authored with Charlotte Li, Ernest Kung, and Aimee Rinehart. The team surveyed individuals in the news industry, asking how they use and want to use generative AI, and what they see as the main ethical and practical issues in developing responsible usage.

The introduction of ChatGPT by OpenAI in late 2022 captured the imagination of the public—and the news industry—with the potential of generative AI to upend how people create and consume media. Generative AI is a type of artificial intelligence technology that can create new content, such as text, images, audio, video, or other media, based on the data it has been trained on and according to written prompts provided by users. ChatGPT is the chat-based user interface that made the power and potential of generative AI salient to a wide audience, reaching 100 million users within two months of its launch.

Although similar technology had been available for years, by late 2022 it was suddenly working well enough to spur integration into various products, presenting not only a host of opportunities for productivity and new experiences but also serious concerns about accuracy, the provenance and attribution of source information, and the increased potential for creating misinformation.

This report serves as a snapshot of how the news industry has grappled with the initial promises and challenges of generative AI towards the end of 2023. The sample of participants reflects how some of the more savvy and experienced members of the profession are reacting to the technology.

Based on participants’ responses, the authors found that generative AI is already changing work structure and organization, even as it triggers ethical concerns around its use. Here are some key takeaways:

  • Applications in News Production. The predominant current use cases for generative AI include various forms of textual content production, information gathering and sensemaking, multimedia content production, and business uses.
  • Changing Work Structure and Organization. A host of new roles are emerging to grapple with the changes introduced by generative AI, including leadership, editorial, product, legal, and engineering positions.
  • Work Redesign. There is an unmet opportunity to design new interfaces that support journalistic work with generative AI, in particular to enable the human oversight needed for efficient and confident checking and verification of outputs.
  • Ethical Concerns and Responsibility. Ethical considerations are paramount, with concerns about human oversight, accuracy, and bias most prominent. The industry is grappling with how to balance the benefits of generative AI with the need for ethical journalism practices, including banning or limiting its use for particular cases, such as generating entire pieces of published content.
  • Strategies for Responsible Use. While many organizations are developing or following guidelines for the ethical use of generative AI, there is a call for clearer, more concrete guidelines, training, and enforcement to navigate the ethical landscape effectively.
  • Ambivalence in Content Rights. Respondents expressed uncertainty about whether tech companies should be allowed to train models on news organizations’ content, with some emphasizing the negative commercial impacts and others arguing that such training could advance the accuracy and reliability of models in ways that benefit society.

Read the full report below.