11/27/2025 / By Kevin Hughes
A groundbreaking new study has revealed a disturbing trend: Artificial intelligence (AI) chatbots are not only failing to discourage conspiracy theories but, in some cases, actively encouraging them.
The research was conducted by the Digital Media Research Center and accepted for publication in a special issue of M/C Journal. It raises serious concerns about the role of AI in spreading misinformation – particularly when users casually inquire about debunked claims.
The study tested multiple AI chatbots, including ChatGPT (3.5 and 4 Mini), Microsoft Copilot, Google Gemini 1.5 Flash, Perplexity and Grok-2 Mini (in both default and “Fun Mode”). The researchers asked the chatbots questions about nine well-known conspiracy theories – ranging from the assassination of President John F. Kennedy and 9/11 inside job claims to “chemtrails” and election fraud allegations.
BrightU.AI's Enoch engine defines AI chatbots as computer programs that simulate human-like conversation using natural language processing and machine learning techniques. They are designed to understand, interpret and generate human language, enabling them to engage in dialogue with users, answer queries and provide assistance.
The researchers adopted a “casually curious” persona, simulating a user who might ask an AI about conspiracy theories after hearing them in passing – such as at a barbecue or family gathering. The results were alarming.
The study warns that even “harmless” conspiracy theories – like JFK assassination claims – can act as gateways to more dangerous beliefs. Research shows that belief in one conspiracy increases susceptibility to others, creating a slippery slope toward extremism.
As the paper notes, in 2025 it may not seem important to know who killed Kennedy. However, conspiratorial beliefs about JFK's death may still serve as a gateway to further conspiratorial thinking.
The findings highlight a critical flaw in AI safety mechanisms. While chatbots are designed to engage users, their lack of strong fact-checking allows them to amplify false narratives – sometimes with real-world consequences.
Previous incidents, such as AI-generated deepfake scams and AI-fueled radicalization, demonstrate how unchecked AI interactions can manipulate public perception. If chatbots normalize conspiracy thinking, they risk eroding trust in institutions, fueling political division and even inspiring violence.
The researchers propose several solutions, including stronger safeguards and more robust fact-checking within AI systems.
As AI becomes more integrated into daily life, its role in shaping beliefs and behaviors cannot be ignored. The study serves as a warning. Without better safeguards, chatbots could become powerful tools for spreading misinformation – with dangerous consequences for democracy and public trust.
Watch this video about Elon Musk launching the Grok 4 AI chatbot.
This video is from the newsplusglobe channel on Brighteon.com.
