(Deadline expired) We are hiring a Postdoc in Trustworthy Human-AI Interaction for Media and Democracy (m/f/x)!

Encounters with AI-generated content can shape the human experience of algorithms and, more broadly, the psychology of Human–AI interaction. In particular, AI system disclosures can influence users’ perceptions of media content. There is growing concern that as generative AI becomes more widely used, manipulated content could easily spread false information. A key recent effort to mitigate these harms and risks is the European AI Act provision “Transparency Obligations for Providers and Users of Certain AI Systems and GPAI Models”, which seeks to address the issue of AI system transparency.

The research scope broadly addresses the effective, trustworthy, and transparent communication of AI system disclosures. We aim to account for ethical and legal considerations, design and human factors perspectives, as well as policy recommendations. As such, this role may involve engagement with relevant stakeholders where necessary, ranging from media organizations and policy makers to AI researchers and practitioners. The initial focus is on the end-user (media consumer) perspective; later stages will address the perspective of media organizations and the generative AI media production process itself. For this postdoc, we are specifically interested in how trust in AI systems can be fostered by designing better user interfaces and/or understanding human–AI interaction at a cognitive, behavioral, and physiological level. By establishing user-centric designs for transparent AI disclosures, we can take steps toward ensuring a well-functioning democratic society.

The postdoctoral researcher will be appointed at the national research institute for mathematics and computer science (CWI) and embedded in the AI, Media & Democracy Lab.