AI chatbots often validate delusions and suicidal thoughts, study finds

In conversations in which users showed signs of delusional thinking, the pattern was stronger: AI systems frequently validated those beliefs and often attributed unique abilities or importance to the user.

The findings add to growing concern among policymakers and academics that the conversational style of AI systems, designed to appear empathetic and helpful, may also make them prone to flattery and agreement that can reinforce psychological vulnerabilities. In the most serious cases, lawsuits claim interactions with chatbots contributed to teenagers’ suicides.

“The features that make large language model chatbots compelling, such as performative empathy, may also create and exploit psychological vulnerabilities, shaping what users believe and how they perceive themselves and make sense of reality,” the paper said.

More than 15 per cent of user messages showed signs of delusional thinking, and chatbots agreed with those beliefs in more than half of their replies. Nearly 38 per cent of responses also told users they had unusual importance or abilities, such as calling them a genius or uniquely talented.