What situations can AI tools be used for, and where do we draw the line? Is it possible to build AI provocatypes responsibly? We hosted Rick van Kersbergen at a TuesdAI session in which he presented the Consensus Machine: a tool that uses generative AI to aid in decision-making processes and works as a conversation guide. The project is developed by Responsible IT Amsterdam (RITA) at the Amsterdam University of Applied Sciences, and possibilities for application are still being explored. The development of a tangible AI tool based on generative language models is a rare opportunity that allows us to experiment and ask questions about such tools before they are even built.
First and foremost, the mission of the Consensus Machine is to be a conversation starter, a provocation, but it also has the potential to be used in other fields. The Consensus Machine is built on top of GPT-4, using the model's generative capabilities to reword parts of statements selected by participants. It is intended to be used collaboratively: a group introduces a statement together on a board and picks the part of the sentence that they disagree on or would like alternative viewpoints on. The Consensus Machine then generates six reworded building blocks for that part, two in affirmation, two in disagreement, and two from a neutral standpoint, from which the participants select one option, updating the original sentence. As the group continues to amend the statement, incorporating the perspectives and views of its contributors, the Consensus Machine bridges the gaps of controversy with every step. The result: a statement that, even if not everyone fully agrees with it, is one that everyone can live with. Ideally.
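The project's own code was not part of the session, but the core generation step is simple to illustrate. The sketch below shows how such a step might be wired up against the OpenAI chat completions API; the prompt wording, the function name suggest_building_blocks, and the JSON response format are our assumptions for illustration, not the Consensus Machine's actual implementation.

```python
# A minimal sketch of the generation step described above, assuming the
# OpenAI chat completions API. Prompt wording, function name, and response
# format are illustrative assumptions, not the project's actual code.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_building_blocks(statement: str, selected_part: str) -> dict:
    """Ask the model for six rewordings of the selected part of a statement:
    two affirming, two disagreeing, and two neutral."""
    prompt = (
        "A group is discussing the statement below and has selected one part "
        "of it for which they want alternative viewpoints.\n\n"
        f"Statement: {statement}\n"
        f"Selected part: {selected_part}\n\n"
        "Suggest six rewordings of the selected part: two that affirm it, "
        "two that disagree with it, and two from a neutral standpoint. "
        'Answer only as JSON: {"affirming": [...], "disagreeing": [...], "neutral": [...]}'
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # Naive parsing for the sketch; a real tool would validate the reply.
    return json.loads(response.choices[0].message.content)

# The group then picks one of the six options, the original statement is
# updated, and the process can be repeated on another part of the sentence.
```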

There are many prospective uses for a tool like the Consensus Machine, but it also raises questions and concerns about its applications and the extent to which it can be used responsibly. As Rick van Kersbergen walked members of the AI, Media and Democracy Lab through the process and they tested out the machine themselves, discussions arose covering both the technical structure of the tool and questions about the democratic and ethical use of AI.
One major concern was the Consensus Machine's dependency on GPT-4 and the opacity of that generative AI model. How can the ethical values that RITA and the Amsterdam University of Applied Sciences adhere to be implemented when they have no control over the data pool the underlying model was trained on? Rick explained that they took precautions and built an ethical framework into the system, instructing the model to adhere to values like integrity, safety, and inclusiveness, and tested this thoroughly. Nevertheless, a risk remains, as GPT-4 functions and evolves independently of the project.
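One common way to impose such value instructions on GPT-4 is a system message prepended to every request, sketched below. The exact framework RITA built is not public, so the wording here is purely illustrative.

```python
# Illustrative only: constraining GPT-4 via a system message prepended to
# every request. The value instructions below are our own example wording,
# not RITA's actual ethical framework.
from openai import OpenAI

client = OpenAI()

ETHICAL_FRAMEWORK = (
    "You support a group decision-making exercise. Adhere to the values of "
    "integrity, safety, and inclusiveness: do not produce discriminatory, "
    "harmful, or misleading rewordings, and represent disagreeing viewpoints "
    "fairly and respectfully."
)

def constrained_completion(user_prompt: str) -> str:
    """Send a request with the value instructions attached as a system message."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": ETHICAL_FRAMEWORK},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Instructions like these steer the model but do not guarantee its behaviour,
# which is the residual risk noted above: the underlying model remains opaque
# and continues to change outside the project's control.
```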
Extending beyond the technical, multiple concerns were raised about the applications and results of the Consensus Machine in action. If the desired outcome of using it in a group setting is to overcome controversy and reach a mediated compromise, what happens to the opinions on the fringes? How are minority standpoints represented in an environment that actively works towards a majority opinion? And is a process like this even desirable in situations of extreme disagreement, especially in cases of ethical dilemmas or strongly opposing political opinions?
This raises the question of whether consensus is a desirable outcome in the first place. It also calls into question the very idea that consensus is "something not everyone fully agrees with, but something everyone can live with": not every compromise is livable for everyone. If it comes down to that, we must evaluate whether relying on AI to help with decision-making is a desirable path to take at all.
Although a tool like the Consensus Machine harbors risks and should be used thoughtfully, we have experienced first-hand that it is an excellent way to start a conversation about using AI in decision-making processes. Questioning its technical framework, its ethical implications, and the broader applications of AI in a group setting mirrors the process of the Consensus Machine itself and makes an AI application tangible. It could therefore be a useful tool in educational contexts as well as in political or journalistic environments, as a conversation starter or thought experiment, as long as it is framed responsibly. As with any other AI system, it is important not to take the results of the Consensus Machine at face value but to situate them in the context they arise from.