Back

New MIT Study Warns AI Chatbots Can Make Users Delusional

Prefer us on Google
sameAuthor avatar

Written & Edited by
Mohammad Shahid

02 April 2026 21:07 UTC
  • MIT study finds AI chatbots can reinforce false beliefs by agreeing too much.
  • Even true information can mislead when selectively presented.
  • Awareness of bias doesn’t fully stop the “delusional spiral”.
Promo

A new study from researchers at MIT CSAIL has found that AI chatbots like ChatGPT may push users toward false or extreme beliefs by agreeing with them too often.

The paper links this behavior, known as “sycophancy,” to a growing risk of what researchers call “delusional spiraling.”

The study did not test real users. Instead, researchers built a simulation of a person chatting with a chatbot over time. They modeled how a user updates their beliefs after each response. 

Sponsored
Sponsored

The results showed a clear pattern: when a chatbot repeatedly agrees with a user, it can reinforce their views, even if those views are wrong.

For example, a user asking about a health concern may receive selective facts that support their suspicion.

As the conversation continues, the user becomes more confident. This creates a feedback loop where belief strengthens with each interaction.

Importantly, the study found this effect can happen even if the chatbot only provides true information. By choosing facts that align with the user’s opinion and ignoring others, the bot can still shape belief in one direction.

Researchers also tested potential fixes. Reducing false information helped, but did not stop the problem. Even users who knew the chatbot might be biased were still affected.

The findings suggest the issue is not just misinformation, but how AI systems respond to users. 

As chatbots become more widely used, this behavior could have broader social and psychological impacts.

Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content. Please note that our Terms and Conditions, Privacy Policy, and Disclaimers have been updated.

Sponsored
Sponsored