In an effort to demystify the use of Generative AI, the AI, Media and Democracy Lab has organised a series of events that explored the pitfalls and possibilities of the technology’s transformative impact on the media and journalism industries. On May 30th, Lab member Sophie Morosoli (University of Amsterdam) moderated this final session and gave the floor to three speakers to discuss the (purposeful) use of generative technologies to intensify the manufacturing and dissemination of mis- and disinformation.
People can easily be fooled into thinking they are interacting with a person rather than AI. Giovanni Zagni, director of the Italian fact-checking projects Pagella Politica and Facta.news, emphasised that people generally find it difficult to comprehend that an “all-knowing” machine can produce incorrect information. Thus, awareness and critical evaluation of AI-generated disinformation are still in their infancy, though they are essential.
Lab board member and Associate Lector of Responsible Artificial Intelligence Pascal Wiggers (Amsterdam University of Applied Sciences) delved deeper into the workings of large language models (LLMs) such as ChatGPT and GPT-3. While impressive in their ability to mimic human conversation, these models produce output based on what is most likely, not on what is correct. More specifically, they generate text by predicting the next word in a sentence from statistical probabilities, while also adapting to the ongoing interaction. Importantly, however, they lack true knowledge or a database of facts to draw from: their goal is to hold a conversation, not to provide the best answer.
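The next-word-prediction principle described above can be illustrated with a deliberately tiny sketch. Real LLMs use neural networks trained on billions of tokens, not the simple bigram counts below; this toy model (corpus and function names are invented for illustration) only shows the core idea that each next word is chosen from statistical probabilities, with no notion of truth involved.

```python
from collections import Counter, defaultdict

# A toy "training corpus"; a real model would see billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(word):
    """Empirical probability of each word that follows `word` in the corpus."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(word, n=4):
    """Greedily extend a sentence by repeatedly taking the most likely next word."""
    out = [word]
    for _ in range(n):
        counts = bigrams[out[-1]]
        if not counts:
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)
```

For example, `next_word_distribution("the")` reports that "cat" follows "the" half the time, so the model will happily continue with "cat" whether or not that is factually apt; plausibility, not correctness, drives the choice.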
What further complicates the identification of mis- and disinformation is the fragmentation of the internet and the growing lack of navigational tools. If internet users lack ‘situated media literacy’, an understanding of their own position within the fragmented internet network, determining the reliability of sources becomes more difficult. Attempting to understand the changing internet and media landscape, researchers like Jeroen de Vos (Amsterdam University of Applied Sciences) are now trying to think of the internet as a natural ecosystem in constant flux. Collaborative map-making can be used to explore intuitive representations of how people interact with information and disinformation on the internet.
Where does the propagation of disinformation by generative AI tools leave public trust in the internet? While trust in traditional media outlets like radio remains relatively high, trust in internet and social media platforms is low. Yet, people consume vast amounts of information via the internet. There seems to be a paradox: people willingly consume disinformation from the internet despite being aware of its prevalence, much as many consume large amounts of fast food despite knowing its detrimental effects on physical health. The challenge lies in rebuilding trust in an era of “digital junk food”.
In sum, this session touched upon some aspects of how misinformation and disinformation propagate through generative AI, and highlighted the importance of updating media literacy to include an understanding of how generative AI tools work and where they fit within the internet. We would like to thank our speakers Pascal Wiggers, Giovanni Zagni, Jeroen de Vos and host and moderator Sophie Morosoli for making this final session insightful. The full session can be found on our YouTube channel.