Advancing a Material and Epistemological Turn in the Study of AI: A Review and New Directions for Journalism Research
In this newly published review paper, Anna Schjøtt Hansen traces the current state of journalistic scholarship on AI from 2010 onwards. The paper identifies an analytical gap in the current literature and argues that future studies should:
- Pay analytical attention to the materiality of AI, not only to the human subjects involved,
- Consider the epistemologies of AI as opposed to only the epistemology of journalism,
- Move outside the newsroom to study the actors and processes that increasingly govern the sociotechnical development of AI in newsrooms.

To the publication
More Than Justifications: An Analysis of Information Needs in Explanations and Motivations to Disable Personalization
By Valeria Resendez, Kimon Kieslich, Natali Helberger, and Claes de Vreese
This article explores what users want from explanations in the context of news recommender systems, based on a survey of user preferences.
Key takeaways:
- Users have diverse informational needs. Some care about who the news comes from, others about why it was recommended, often wanting to compare different recommendation logics.
- These differences show there’s no one-size-fits-all approach to explainability. Building responsible, trustworthy AI means designing with varied user expectations in mind.
- The findings suggest a strategic angle: Organizations developing recommender systems should rethink how they invest in personalization, as audience openness to personalization shapes adoption.

To the publication
From Automation to Transformation with AI-Tools: Exploring the Professional Norms and the Perceptions of Responsible AI in a News Organization
By Hannes Cools and Claes de Vreese
This article is based on in-depth interviews and workshops across four media titles at Berlingske Media, with the goal of mapping how journalists imagined ‘responsible AI’ before clear policies or best-practice playbooks existed.
Key takeaways:
- Invisibility of AI feeds folk stories: AI was already shaping story selection & wording, yet most staff couldn’t see where or how. That opacity sparked “folk theories” about what the tech could (or would) do.
- Jobs vs. tasks: When a round of layoffs was announced at an internal town hall meeting, an editor-in-chief added that “some of the work can now be done by AI”. The remark triggered a two-day strike – the first at the company since 2008.
- Responsible AI is culture, not just code: Transparency and continuous dialogue mattered as much as any technical safeguard. This is an inherently dynamic process that requires a ‘new’ culture across newsrooms.

