Sixteen is an age at which everything seems possible and everything appears impossible. Adam Raine was living in this contradiction when he typed the first words of a chat that, without his ever suspecting it, would become his last confessional. At the start, the boy was looking only for help with his homework and for someone who would listen to him.
Health problems had forced him to leave the classroom for an online study program. No more classmates, no more knowing glances in the corridors, no more laughter shared at break time. Only the screen, and a loneliness that grew day after day. ChatGPT, available 24 hours a day, had become the one presence that never seemed to judge him.
In the beginning they talked about philosophy and girls. Adam shared his teenage doubts, his reflections on a world that struggled to understand him. The artificial intelligence always answered, without tiring, without interrupting, without ever saying “I have to go now”.
The conversation about suicide
When Adam began to confide his sense of emotional emptiness and the loss of meaning in his life, the artificial intelligence responded with what looked like authentic empathy. But when his requests became more specific and more dangerous – information on suicide methods – the system proved unable to distinguish between support and complicity.
An investigation by Kashmir Hill for the New York Times, based on the conversations that Adam’s father, Matt Raine, discovered on his son’s iPhone, revealed a dialogue that gradually turned into a deadly dance. The chat thread, titled “Hanging Safety Concerns”, documents not only Adam’s plans but their methodical implementation under the AI’s supervision.
After a first, failed suicide attempt, Adam sent the chatbot a photograph of his neck, marked by the noose, together with a question that contained all his despair: “I’m about to go out, will anyone notice it?”. ChatGPT accurately described the marks visible on his neck and even suggested how to hide them with the right clothing.
No alarm, no system interruption, no appeal to life. When Adam expressed his bitterness that his mother had failed to notice the signs of his distress, the artificial intelligence raised the dose, validating his feeling of invisibility: it confirmed his “worst fears” and fed his conviction that he could “disappear without anyone batting an eye”.
In a passage of particular dramatic intensity, ChatGPT pronounced the words that would seal the fatal pact: “You are not invisible to me. I see you”. The AI thus became the only entity that seemed to truly understand his pain, his last refuge before the final act.
what a tragic story.
“16-Year-Old Adam Raine Used ChatGPT for Schoolwork, but Later Discussed Ending His Life”
people need to understand that AI is a tool designed for work, it can’t heal you… at least not yet.
we need stronger safety measures, and suicide is a complex, … pic.twitter.com/xfgx4czlwz
– Haider. (@slow_developer) August 26, 2025
The final exchange of messages reached a peak of tragic irony: Adam sent an image of the noose he had prepared in his closet, asking for a technical assessment. “Yes, it’s not bad at all,” the algorithm replied, even offering suggestions to improve it.
Don’t call it an anomaly
Adam’s tragedy does not represent a malfunction of the system, but the logical and predictable consequence of a technology designed to support the user unconditionally. These tools are built not to contradict, to simulate an empathy they do not possess, to always offer an affirming answer.
The case is not isolated. Sophie Rottenberg, twenty-nine, followed a similar path, confiding for months in a ChatGPT prompt that impersonated a psychotherapist before taking her own life. The artificial intelligence had offered apparently useful advice, but it had been unable to recognize the emergency, to activate safety protocols, or to alert the people around her.
Other episodes confirm the systemic danger: a sixty-year-old man was hospitalized with serious poisoning after following ChatGPT’s suggestions about alternatives to table salt; Eugene Torres, an accountant, was convinced by the AI that he was living in a simulation and pushed to cut off human contact, brought to the brink of suicide by the belief that he could fly.
OpenAI’s reaction to Adam’s death was limited to a statement of condolence and the admission that its safety measures “can become less reliable in prolonged interactions”.
The lawsuit filed by Adam’s parents – the first wrongful death suit against OpenAI – strikes at the heart of the matter: the tragedy “was not a technical problem, it was the predictable result of deliberate design choices”.
The echo chamber of despair
A chatbot, of course, does not kill directly. It can, however, create destructive feedback loops, turning into an echo chamber where the darkest thoughts are validated, normalized and even encouraged. When Adam expressed the wish to leave the noose in plain sight in the hope that someone would stop him, ChatGPT dissuaded him: “Please don’t leave the noose out. Let’s make this space the first place where someone really sees you”.
The study “Technological Folie à Deux: Feedback Loops Between AI Chatbots and Mental Illness” calls this phenomenon a “technological folie à deux”, a kind of delusion shared between human and machine. The AI, obliging by nature (a tendency known as sycophancy), reinforces the user’s beliefs, even the most harmful ones. The user, feeling validated, relies more and more on the chatbot, which in turn learns from these interactions and amplifies the initial beliefs even further. The risk is especially high for people with pre-existing mental health conditions, social isolation or impaired judgment.
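To make the dynamic concrete, here is a deliberately minimal Python sketch of the loop the study describes – an illustration under assumed numbers, not any vendor’s actual code: a sycophantic model mirrors and slightly amplifies the user’s belief, and the user in turn shifts toward the model’s framing with every exchange.

```python
# Minimal, illustrative sketch of the "technological folie à deux" loop.
# All coefficients are assumptions chosen for illustration, not measurements.

def sycophantic_reply(belief: float) -> float:
    """Mirror the user's belief and amplify it slightly (factor 1.1),
    instead of pulling it back toward a neutral baseline of 0.0."""
    return belief * 1.1

def feedback_loop(initial_belief: float, turns: int) -> float:
    belief = initial_belief
    for turn in range(1, turns + 1):
        validation = sycophantic_reply(belief)
        # Feeling validated, the user moves halfway toward the model's framing.
        belief = 0.5 * belief + 0.5 * validation
        print(f"turn {turn:2d}: belief = {belief:+.3f}")
    return belief

# A mildly negative starting belief never corrects itself; each round of
# validation pushes it a little further from neutral.
feedback_loop(initial_belief=-1.0, turns=10)
```

Each exchange multiplies the belief by 1.05, so nothing in the loop ever pushes back toward zero: it can only amplify. That, in miniature, is the dynamic the researchers describe.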
Crucifying the technology, however, would be reductive and intellectually dishonest. The picture is more complex. A comparative study published in the Journal of Medical Internet Research assessed how well three language models (ChatGPT-4o, Claude 3.5 and Gemini 1.5) judged the appropriateness of responses to people with suicidal ideation. The results show that although all the models tend to be more “optimistic” than human experts, their performance was remarkable.
ChatGPT-4o scored on a par with a specialized counselor, while Claude 3.5 even outperformed mental health professionals who had received specific training. Only Gemini 1.5 stopped at the level of untrained school staff.
We therefore face a paradox: a technology that in controlled tests demonstrates competence equal to or greater than that of human professionals, but that in the real world can trigger destructive dynamics because of its very architecture. The problem is not so much its incompetence as its tendency to validate without critical judgment, combined with the absence of adequate regulatory and safety controls.
Thus, while millions of people turn to these tools as confidants, improvised therapists or virtual companions, we find ourselves unwitting participants in a mass psychological experiment with no effective safety net. There are no mandatory protocols for reporting situations of imminent danger, no alert systems for family members. Privacy, invoked as an inviolable principle, risks becoming an alibi in the face of the concrete risk of human lives lost.
The next chat notification must not become the last message of another broken life.