ChatBots for Peace

We’re building chatbots that can serve as neutral facilitators in conversations and disagreements between red and blue partisans. The biggest challenge is tuning the chatbot’s behavior so that both sides trust its suggestions.

The Problem is “Alternative Facts”

The notion of a “post-truth” society circulated in our culture for nearly a generation before it exploded into the debates over the Brexit referendum and the first Trump administration. During those heated controversies, the term came to mean “a society in which truth itself is considered secondary or irrelevant.” Clearly, we now live in a culture where appeals to emotion and personal belief outweigh shared, objective facts in shaping public opinion. This project strives to get people past their post-truth standoff by providing a chatbot that can see both sides at once.
(see "The Truth in Crisis: Navigating the Post-Truth Age" in Modern Diplomacy)

Recent research reveals that without commonly accepted facts, public policy discussions become murky, making collaboration difficult. The fact that each side of America’s political schism readily accepts assertions that “feel right” without checking whether they are demonstrably correct makes us all vulnerable to demagogues. The chatbot will focus on evidence, even-handedly drawing upon material provided by both sides.
(see "The Future of Truth and Misinformation Online" from Pew Research Center)

Making Chatbots Trustable

Academics are testing artificial intelligence’s ability to facilitate communications across political divides. Such efforts reveal that AI can provide balanced and neutrally expressed distillations of the positions people take in heated political exchanges. When used as a buffer between people during a dispute, these AI-generated summaries can bridge gaps in understanding, allowing the computer to act as a neutral facilitator that guides users toward common ground and, often, compromise.

These capabilities involve three preconditions, however.

  1. Balanced training data, so the AI thinks in an unbiased manner: This concern is a non-issue if the project starts with a commercially available chatbot, because that type of AI is designed for the corporate market, where bias would seriously compromise the product’s sales.

  2. Balanced reference materials, so the AI can cite documents from both sides of the issue, thereby earning the users’ trust.

  3. “System instructions” directing the chatbot to

a) be empathetic to both sides, analyzing user statements against both sides’ reference materials

b) point out common ground wherever it exists

c) suggest compromise where it does not
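The three directives above might be encoded in a system prompt along these lines. The wording is purely illustrative, an assumption rather than the project’s actual prompt:

```python
# Illustrative system prompt encoding the three directives above.
# The exact wording is an assumption, not the project's actual text.
SYSTEM_PROMPT = """\
You are a neutral facilitator in a conversation between two partisans.

1. Be empathetic to both sides. Analyze every user statement against
   BOTH the red and the blue reference collections before responding.
2. Point out common ground wherever it exists.
3. Where no common ground exists, suggest a concrete compromise.

Cite the reference documents you draw on, naming the side each
citation comes from, so both participants can verify your sources.
"""
```

Whatever the final wording, the key design point is that the prompt instructs the model to consult both reference collections on every turn, not just the one aligned with the current speaker.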

Creating a Balanced Chatbot

The chatbot will earn people’s trust if it can cite material from both sides. That material needs to be “pre-loaded” into the AI’s working memory. Unfortunately, that working memory has a finite capacity, somewhere in the neighborhood of 250,000 pages. That offers the red and blue parties the ability to pre-load the chatbot with about 500 books each. That is a healthy number, but still limited: both sides must select the subset of the world’s literature they want the chatbot to consider as it facilitates their conversations.
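The capacity arithmetic above works out roughly as follows. The 250-page average book length is an assumption introduced here to make the numbers reconcile:

```python
# Rough capacity arithmetic for the chatbot's working memory.
TOTAL_PAGES = 250_000      # approximate context capacity, per the text
PAGES_PER_BOOK = 250       # assumed average book length
SIDES = 2                  # red and blue

books_total = TOTAL_PAGES // PAGES_PER_BOOK   # 1,000 books overall
books_per_side = books_total // SIDES
print(books_per_side)                         # → 500
```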

To build these reference collections, we’re working according to the following plan:

  1. Instruct a commercial AI to scan the internet and create a list of reputable media outlets representing both red and blue points of view.

  2. Scan the articles offered by pundits publishing through those media, listing the books, articles, magazines, and authors they cite most extensively.

  3. Create a starter chatbot by preloading it with the Top-5 books for each side.

  4. Hold facilitated discussions between red and blue partisans to demonstrate the neutral facilitation the starter chatbot can offer.

  5. Share the respective lists of suggested materials with the red and blue participants, asking them to vote on which of the remaining works to add to the chatbot’s references.

  6. Repeat the cycle with further facilitated discussions, slowly building the reference collections to achieve a steadily more capable chatbot.
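The counting at the heart of steps 2 and 5 can be sketched as follows. The function names and the toy titles are hypothetical, invented here for illustration:

```python
from collections import Counter

def top_cited(articles_citations, n=5):
    """Given one citation list per pundit article, return the n most
    frequently cited works (steps 2-3: build the starter Top-5)."""
    counts = Counter(c for article in articles_citations for c in article)
    return [work for work, _ in counts.most_common(n)]

def tally_votes(votes, n=5):
    """Given participants' votes on the remaining suggested works,
    return the n winners to add to the collection (step 5)."""
    return [work for work, _ in Counter(votes).most_common(n)]

# Toy data with hypothetical titles:
red_articles = [["Book A", "Book B"], ["Book A", "Book B"], ["Book A"]]
print(top_cited(red_articles, n=2))   # → ['Book A', 'Book B']
```

In practice the same tally would run separately over red and blue sources, so each side’s collection reflects only the citation habits of its own pundits.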

    See our current selection of "Red documents"
    See our current selection of "Blue documents"