This AI program helps people avoid conspiracy theories.

The researchers suspected that a chatbot able to counter each person's conspiracy theory with specific, extensive information would be highly effective.

October 19, 2024

When generative artificial intelligence came into wide use, experts warned that chatbots posed a serious risk: by making false information cheaper to produce and spread, they could let conspiracy theories spread like wildfire. Now, however, a new perspective is emerging, as researchers ask whether chatbots could instead be part of the solution.

One such chatbot, called DebunkBot, was designed specifically to persuade people to stop believing unfounded conspiracy theories. According to a study published in the journal Science, it produced significant and long-lasting changes in people's beliefs. That matters because as many as half of Americans believe some such false theory, with consequences that range from discouraging vaccination to fueling discrimination.

These new findings challenge the widespread assumption that facts and logic cannot dislodge conspiracy theories. The DebunkBot, which was built on the technology behind ChatGPT, may offer a practical way to combat the problem. Gordon Pennycook, a psychology professor at Cornell University and co-author of the study, said the work has overturned many assumptions about conspiracy theories.

Previously, it was believed that once someone fell down the rabbit hole of conspiracy theories, there was no way to pull them back out. The prevailing view held that people adopt such beliefs to make sense of, and feel in control of, their environment. Thomas Costello, another co-author of the study and an assistant professor of psychology at American University, explains that people may turn to conspiracy theories to satisfy a need for explanation. He and his colleagues, however, wanted to explore a different possibility: what if previous debunking attempts simply were not personalized enough?

Conspiracy theories vary greatly from person to person, and each individual may use different evidence to support their beliefs. Therefore, a one-size-fits-all debunking script may not be effective. The researchers thought that a chatbot that could counter each person's specific conspiratorial claim with evidence and information might be more successful.

To test this hypothesis, the researchers recruited more than 2,000 American adults, asking each to describe a conspiracy theory they believed in and to rate their belief in it on a scale of zero to 100. Participants described a wide range of theories, including ones about the moon landing, COVID-19, and the assassination of President John F. Kennedy.

Some of the participants then had a short conversation with the chatbot, without being told the purpose of the discussion, and were free to present whatever evidence they felt supported their beliefs. One participant, for example, believed the 9/11 attacks were an "inside job" because jet fuel could not have caused the collapse of the World Trade Center. The chatbot responded with information about the melting point of steel and the fact that steel loses much of its strength at far lower temperatures.
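The study does not publish DebunkBot's code, but the pattern it describes, a large language model primed to rebut one person's specific claim over a few conversational turns, is easy to sketch. Below is a minimal, hypothetical version using the OpenAI Python client; the model name, system prompt, and three-turn limit are illustrative assumptions, not the study's actual configuration.

```python
# Hypothetical sketch of a DebunkBot-style conversation loop.
# The system prompt, model choice, and turn limit are assumptions
# for illustration; the study's real configuration is not published here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes the conspiracy theory summarized below. "
    "Respond to their specific evidence with accurate, well-sourced facts, "
    "politely and without ridicule.\n\nClaim: {claim}"
)

def debunk_conversation(claim: str, turns: int = 3) -> None:
    """Run a short, personalized debunking dialogue (three exchanges in the study)."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT.format(claim=claim)}]
    for _ in range(turns):
        user_msg = input("You: ")
        messages.append({"role": "user", "content": user_msg})
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed; any capable chat model would do
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"Bot: {answer}")

if __name__ == "__main__":
    debunk_conversation("Jet fuel could not have melted the steel, so 9/11 was an inside job.")
```

The key design point is that the participant's own claim is injected into the system prompt, so every rebuttal is tailored to that person's evidence rather than drawn from a one-size-fits-all script.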

After just three exchanges, which lasted an average of eight minutes, the participants were asked to rate their beliefs again. On average, their ratings dropped by 20%, and a quarter of the participants no longer believed in the conspiracy theory. The effect even spilled over into their attitudes towards other unsupported theories, making them slightly less conspiratorial in general.
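Those two headline numbers are straightforward to compute from before-and-after ratings. The toy Python sketch below uses made-up ratings on the article's zero-to-100 scale; treating the 20% figure as a relative drop, and a post-conversation rating below the scale midpoint of 50 as "no longer believing," are both assumptions for illustration.

```python
# Toy illustration of the study's two headline statistics, computed from
# hypothetical pre/post belief ratings on a 0-100 scale. The sample data
# and the "below 50 = no longer believes" cutoff are assumptions.
pre  = [85, 70, 90, 60, 75]   # belief ratings before talking to the chatbot
post = [60, 58, 45, 40, 70]   # belief ratings after three exchanges

# Relative drop in belief for each participant, then the average.
drops = [(b - a) / b * 100 for b, a in zip(pre, post)]
avg_drop = sum(drops) / len(drops)

# Share of participants whose belief fell below the scale midpoint.
disbelievers = sum(1 for a in post if a < 50)
share = disbelievers / len(post) * 100

print(f"Average drop in belief: {avg_drop:.0f}%")
print(f"No longer believe (rating < 50): {share:.0f}% of participants")
```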

Ethan Porter, a misinformation researcher at George Washington University who was not involved in the study, noted that the chatbot's impact was particularly impressive because it persisted when participants were checked again two months later. Interventions against misinformation, he explained, often work only in the short term; that was not the case with the DebunkBot.

The researchers are still trying to understand why the chatbot was so effective. In an unpublished follow-up study, they found that the effect held even when the chatbot's language was stripped of its niceties, suggesting that it is the information itself, not the conversational rapport, that changes people's minds. David Rand, a computational social scientist at the Massachusetts Institute of Technology and an author of the study, said it is the facts and evidence that are doing the work.

The team is now exploring ways to replicate the effect in the real world, where people rarely seek out information that contradicts their beliefs. Ideas include posting links to the chatbot in forums where such beliefs are shared, and buying targeted ads that appear when someone searches for keywords related to a conspiracy theory. Rand also suggested the chatbot could be used in doctors' offices to debunk misconceptions about vaccination.

Brendan Nyhan, a misperception researcher at Dartmouth College who was not part of the study, wondered whether the reputation of generative AI might change over time, making the chatbot less effective. He compared it to the way views of mainstream media have shifted, and questioned whether people's receptiveness to AI-provided information is time-bound.

Overall, this study offers promising insights into the potential of AI chatbots to combat misinformation and conspiracy theories. With further research and development, they could play a crucial role in promoting accurate information and protecting individuals from the harmful effects of false beliefs.

