Tue Oct 22 2024

"Will LLMs Become the Ultimate Mediators, for Better or Worse? DeepMind Researchers and Reddit Users Seem to Agree."

Some artificial intelligence experts believe that taking humans out of the loop could be an effective way to reach an amicable agreement.

Artificial intelligence experts believe that large language models (LLMs) could serve as mediators in situations where the parties struggle to reach an agreement. A recent study by a team at Google DeepMind examined the feasibility of using LLMs in this role, particularly for resolving disputes in a highly polarized global political environment.

According to the authors of the study, “achieving agreements through a free exchange of opinions is often complicated.” Collective deliberation can be slow, difficult to scale, and not always equitable in how it attends to different voices.

As part of the project, the team developed LLMs dubbed “Habermas Machines” (HMs), specifically trained to identify common, overlapping beliefs among individuals with opposing political positions. The topics these models addressed included divisive issues such as immigration, Brexit, the minimum wage, universal early childhood education, and climate change.

The AI mediator, according to the authors, iteratively generates and refines statements that reflect the group's common ground on social or political issues, based on the opinions and critiques of the participants. During the project, volunteers interacted with the model, which drew on their individual perspectives on selected political topics.
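The loop described above, with draft, critique, and revise stages, can be sketched in a few lines of Python. This is a hypothetical illustration only: `call_llm` is a stand-in function (not part of the study's published code), and a real system would query an actual language model at each step.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call: a working system would send the
    # prompt to a model and return its completion. Here we simply echo
    # a canned draft so the sketch runs end to end.
    return "Draft statement reflecting: " + prompt


def mediate(opinions: list[str], rounds: int = 2) -> str:
    """Iteratively draft and refine a common-ground group statement."""
    # Stage 1: draft an initial statement from everyone's opinions.
    statement = call_llm("Summarize shared ground in: " + "; ".join(opinions))
    for _ in range(rounds):
        # Stage 2: gather a critique of the draft from each perspective.
        critiques = [
            call_llm(f"Critique '{statement}' from the viewpoint: {op}")
            for op in opinions
        ]
        # Stage 3: revise the statement to address the critiques.
        statement = call_llm(
            "Revise the statement to address these critiques: "
            + " | ".join(critiques)
        )
    return statement
```

The sketch makes the structure explicit: each round feeds participant critiques back into the next revision, which is how the statement converges toward common ground.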

The model compiled summary documents of the volunteers' political opinions, providing additional context to help bridge divisions. The results were very promising: volunteers rated the statements generated by the HMs more positively than those written by human mediators on the same topics.

After group discussion, participants were less divided after reading the HM statements than after reading those from human mediators. “The group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators, and received higher ratings from external judges on quality, clarity, informativeness, and perceived fairness,” the researchers concluded.

The study also noted that “support for the majority position” on certain issues increased after AI-assisted deliberation, even though the HMs incorporated minority critiques into their revised statements. This suggests that during AI-mediated deliberation, group opinions tended to converge on controversial topics. Because that shift could not be attributed to AI bias, the researchers argue the process genuinely facilitated the emergence of shared perspectives on potentially polarizing social and political issues.

There are already real-life examples of LLMs being used to resolve disputes, especially in personal relationships. Some Reddit users have commented on their use of ChatGPT to mediate disagreements. One user shared that their partner turned to the chatbot "every time" they had a disagreement, which created some friction in their relationship.

The user said that their partner would return with well-structured arguments after consulting ChatGPT, which heightened tensions. “I explained to her that I don’t like her doing that, as it feels like I’m being attacked by a robot’s opinions,” they recounted. Often, when they voiced their discomfort, they were told that “ChatGPT says you are insecure” or “ChatGPT says you don’t have the emotional capacity to understand what I’m saying.”