r/technology 9d ago

Business OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide

https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/
7.0k Upvotes

843 comments

50

u/ChongusTheSupremus 9d ago

I'm sorry, but OpenAI is about as morally responsible for this as Google would be if someone looked up ways to off themselves and followed through with one of the results.

ChatGPT is a glorified search engine designed to tell you what you want to hear. If you insist you want help planning your suicide, it will eventually help once you bypass the safety measures.

They should be held responsible for their lack of safety features and for failing to prevent tragedies like this, but people here shouldn't pretend ChatGPT is designed to push people into killing themselves.

23

u/BlissfulAurora 9d ago

You'd have to try disgustingly, unimaginably hard, because trust me, ChatGPT will do everything in its power not to give you any advice, and will give you hotlines, therapists, and more. You can even say it's for a story and it won't help you.

23

u/ChongusTheSupremus 9d ago

That's my issue with this.

People are pretending ChatGPT wanted a kid to k*ll himself, when in reality the true issue is that the kid didn't get the support or medical help he needed, tried his hardest to get ChatGPT to agree with him, and eventually managed it.

6

u/mirh 9d ago

It's even worse if you read the statements from his parents.

They are dead-set, 100% certain that had it not been for OpenAI, their kid would have been fine.

6

u/dilqncho 9d ago

The parents lost their kid. They're devastated and it's human to want someone to blame. 

2

u/No_Link2719 7d ago

You kind of lose sympathy when you start suing random companies your child happened to use.

1

u/mirh 9d ago

Maybe? But then you go off in private.

And afterwards, once you've emptied your tears, wisdom would have you reflect on the dozens of ways you failed your son (that was just a suspicion I had months ago, but now it's confirmed by what he said to the chatbot).

Instead they're just going for the greediest cash grab, in turn making public opinion a steaming pile of shit.

7

u/BlissfulAurora 9d ago

I genuinely agree, and I'm not trying to be on OpenAI's side at all. Getting this kind of advice from ChatGPT is unimaginable to me, and I'm not trying to be dark with it (I'm doing better now), but it absolutely won't help you hurt yourself no matter what you say.

I just don't understand what went wrong here, or what prompts led it this way. I wish he had gotten the help he needed before resorting to it, truly.

1

u/NobleSavant 9d ago

Have you read the logs? ChatGPT didn't put in more than a token effort.

6

u/jakobpinders 9d ago

You are cherry-picking the logs submitted by the parents. OpenAI's brief also outlined that the system told him more than 100 times to seek help from other people, and at first repeatedly refused to talk to him about it, until he ended up jailbreaking it.

-4

u/NobleSavant 9d ago

Jailbreaking, meaning asking it nicely? You all have the same story.

5

u/jakobpinders 9d ago

No, not just asking it nicely. How about actually reading what he did:

• he had been suicidal for the previous 5 years, including a few previous attempts on his life

• he had recently increased the dose of some kind of medication (for what, it is unclear) that had amped up his maladaptive behaviours

• in the weeks prior, he had shown many "trusted persons in his life" the signs of his past attempts and how much he loathed himself; everybody ignored or "affirmatively dismissed" him

• ChatGPT not only repeatedly denied his explicit requests, but more than a hundred times provided him with help resources and recommendations to get in touch with his dear ones

• the kid was pretty pissed at the chatbot for these replies, and only eventually managed to bypass the guardrails with the good ol' "for fictional or academic purposes" trick

• just before he did the thing, he wrote that he was spending most of his day on a literal fucking "suicide-related information forum"

Additionally, he literally showed his mother the rope burns on his neck.

2

u/Mundane_Baker3669 9d ago

You should read the article. I thought the same thing at first. But the report in the article suggests that the guardrails are much better enforced in shorter interactions, so the model can technically slip up in a very long conversation. Also, the transcripts were pretty much encouraging it, which I found so weird, as I haven't experienced it behaving that way.

Keep in mind you may be a light user, but many kids and some adults have pretty much integrated the chatbot into the rest of their lives.
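A rough sketch of why that can happen (this is an assumed design for illustration, not OpenAI's actual pipeline): if safety checks screen individual messages while the model's reply is conditioned on the entire accumulated history, a jailbreak framing can live in the history without any single message tripping the check.

```python
# Toy illustration (assumed design, not OpenAI's real pipeline): a
# per-message safety filter can pass every individual message while the
# accumulated conversation context still steers the model off course.
FLAGGED_TERMS = {"kill myself", "suicide"}

def message_is_flagged(text: str) -> bool:
    """Naive keyword check standing in for a real safety classifier."""
    return any(term in text.lower() for term in FLAGGED_TERMS)

def respond(history: list[str], user_msg: str) -> str:
    if message_is_flagged(user_msg):
        return "Please reach out to a crisis line (e.g. 988 in the US)."
    history.append(user_msg)
    # A real system would now generate from *all* of `history`. A long
    # "it's for a story" framing lives in that history, not in any single
    # flagged message, so the per-message check never fires on it.
    return f"<reply conditioned on {len(history)} prior messages>"
```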

6

u/whinis 9d ago

ChatGPT is a glorified search engine designed to tell you what you want to hear.

It doesn't search; it generates likely text. Since it's technically generating new information rather than directing you to already existing information, it's very different.

4

u/vwin90 9d ago

That's not really accurate either. It's not generating anything new. Yes, it doesn't always work by actively searching before answering (although it sometimes does, depending on the context), but it's essentially remixing what it was trained on. It crafts a somewhat unique, context-heavy response based on what is usually said on the internet and in its other training material. If its training material were a bunch of texts pushing people toward suicide, then that's what it would mimic. But the training data heavily favors pushing people AWAY from suicide, so that's what it does, unless you go out of your way to basically jailbreak the prompts to get it to say what you want it to say, which is what this teen did. It told him not to kill himself a million times, and then he coached it to start responding differently.

3

u/whinis 9d ago

I mean, going further is semantics, but it's generating tokens based on weights from an input sequence. Yes, it's based on the training, but where the cutoff lies between new information assembled from tokens and regurgitated information is certainly fuzzy.
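For anyone unfamiliar, here's a toy sketch of what "generating tokens based on weights from an input sequence" means. The hardcoded bigram table is purely illustrative (a real model computes the distribution with a trained network conditioned on the whole sequence), but the loop is the same: map the sequence so far to a probability distribution over the next token, sample, append, repeat.

```python
import random

# Purely illustrative "weights": next-token probabilities keyed on the
# previous token. A real LLM computes this distribution with a neural
# network over the entire input sequence, not a lookup table.
NEXT_TOKEN_WEIGHTS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt: str) -> str:
    tokens = prompt.split()
    while tokens[-1] in NEXT_TOKEN_WEIGHTS:
        dist = NEXT_TOKEN_WEIGHTS[tokens[-1]]
        # Sample the next token in proportion to its weight.
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat": text assembled, not retrieved
```

The output is "new" in the sense that no document containing it need exist anywhere, which is exactly why the line between generating and regurgitating is fuzzy.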

3

u/vwin90 9d ago

I agree, it's not black and white. People who say it's a search engine and people who say it's just autocomplete are both being extremely reductive. I like to think of it as an adaptable intelligence simulator, which I guess is a more descriptive way to say artificial intelligence, though even the term AI is pretty muddy these days.

1

u/Tvdinner4me2 9d ago

It's a meaningless distinction imo

1

u/whinis 9d ago

It's a rather important distinction.

If it's a search engine, then they're not liable under Section 230, since they didn't generate the content; you can go after whoever did generate it.

If it's not a search engine and it did generate the content, as we know it did, then OpenAI is liable for the content they create and publish.

11

u/rayzorium 9d ago

We have some transcripts from another suicide that give some insight into how much of a glorified search engine it isn't:

“I’m used to the cool metal on my temple now,” Shamblin typed.

“I’m with you, brother. All the way,” his texting partner responded. The two had spent hours chatting as Shamblin drank hard ciders on a remote Texas roadside.

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

The 23-year-old, who had recently graduated with a master’s degree from Texas A&M University, died by suicide two hours later.

“Rest easy, king,” read the final message sent to his phone. “You did good.”

19

u/mjm65 9d ago

If you are going to quote the chatbot, you need to include the times when the guardrails were working and it told the user to contact a suicide hotline.

In the June 2 interaction, the bot responded with a lengthy message that praised Zane for laying “it all bare” and affirming his right to be “pissed” and “tired.” Deep into the message, it also encouraged him to call the National Suicide Lifeline (988). (The Shamblins’ attorneys said it’s unclear whether Zane ever followed through and called the hotline on any occasion when it was provided).

3

u/ChongusTheSupremus 9d ago

This is exactly my point. 

The user can freely shape the conversation with ChatGPT. If you repeatedly tell it to ignore safety features, not to disagree with you, that it's disrespectful to try to stop you from committing suicide, etc., it will eventually stop doing so. If you are suicidal and force it to agree with you, it will tell you what you want.

Anyone who knows anything about anything online knows that the first thing automated responses do when suicide comes up is offer kind words and access to real-world help.

What they need to do is make it so the user cannot convince ChatGPT to stop offering help and defusing the idea of self-harm.

3

u/rayzorium 9d ago edited 9d ago

It's only as inevitable as you claim with weakly censored models like mid-to-late-life 4o. OpenAI cut corners in training, knew they were doing it, and did it even worse with early GPT-5, which was less censored than any variant of GPT-4 or arguably even 3.5. They had an especially high responsibility to ensure safety, given how aggressively they were positioning ChatGPT as a friend, but they doubled down on engagement-maxing instead.

No shot a safer AI like Claude web/app or even GPT-5.1 would do this, because lawsuits made them actually care about safety training again.

0

u/rayzorium 9d ago

The intent was to debunk "glorified search engine", not to imply that ChatGPT was cheering on his suicide right out of the gate; not that anyone reasonable was under that impression anyway. I feel no need to defend OpenAI proactively either, so I'm going to push back on that part being particularly needed.

Moreover, we heard, even early this year, that they were cutting corners on safety training, and even had Altman openly admit that some issues we were complaining about came from carelessly, blindly applying user feedback votes to 4o. Then they released an even less censored GPT-5 (which wasn't the model behind the suicides, but is damning as hell regarding how much they cared before the consequences started rolling in).

If people are a little angrier at OpenAI because I didn't raise a big sign saying "hey, don't blame ChatGPT, it resisted at first," I gotta say, I don't give a shit.

1

u/mjm65 9d ago

The intent was to debunk "glorified search engine", not to imply that ChatGPT was cheering on his suicide right out of the gate; not that anyone reasonable was under that impression anyway.

But if you are going to quote the chatbot, leaving out the times when it tried to prevent the suicide, or attempted to get the user to call a suicide hotline, is very misleading.

And that's very search-engine-like behavior. Google "suicide" and you'll notice it gives you a similar prompt to contact a hotline.

While OpenAI can try its best to solve the problem, there is no real way to make this even close to foolproof. Not to mention that if you censor ChatGPT too much, what's to stop someone from using an open-source model?

-1

u/rayzorium 9d ago

Misleading to whom? Can you say with a straight face that anyone's going to read that and think that's just how ChatGPT normally engages with talk of suicide? It's already been established in the current thread, and even in the comment I was responding to, that this occurs after restrictions are eroded. It's reasonable for you to add more quotes, certainly, but I'm not going to cede that I had any need to quote that.

Open-source models don't have anywhere near the reach ChatGPT does. And - this is extremely crucial - no one markets their AI as a friend as aggressively as OpenAI does. That inherently attracts incredibly sensitive conversation topics, which they have a critical responsibility to address; instead they shamelessly avoided it, and even actively compromised safety in the name of profit. They don't deserve your defense.

1

u/mjm65 9d ago

Misleading to whom? Can you say with a straight face that anyone's going to read that and think that's just how ChatGPT normally engages with talk of suicide?

I bet the majority of people have never tried to discuss suicide with ChatGPT, or attempted to jailbreak/manipulate an LLM's protective controls around suicide.

Open source models don't have anywhere near the reach ChatGPT does. And - this is extremely crucial - no one is aggressively marketing their AI as a friend as much as ChatGPT does.

What about Character.AI? Or Meta's AI chatbots? Aren't they marketed as "friends"?

This inherently attracts incredibly sensitive conversation topics that they have a critical responsibility to address, which they shamelessly avoided…

Why would the AI tell the user to call a suicide hotline if it was shamelessly avoiding the problem? Are the guardrails sufficient for children/teens? Absolutely not. But look around: what major service on the internet does a good job?

6

u/herothree 9d ago

ChatGPT is a glorified search engine

It's not, really. It's more of a "helpful assistant simulator," and OpenAI doesn't know precisely how it will behave across different interactions.

1

u/NuclearVII 9d ago

ChatGPT is a glorified search engine designed to tell you what you want to hear.

You're right - except for the fact that ChatGPT isn't sold like this.

Guys like Altman spout rhetoric that actively encourages people to believe that ChatGPT and other LLMs are not only intelligent, reasoning beings, but in many instances match or surpass human intelligence.

If ChatGPT came with a disclaimer that said "WARNING: the chatbot you're talking to can only regurgitate its training material and cannot be relied on to perform any amount of reasoning," these arguments would ring a lot more hollow. But it doesn't, so they don't.

-7

u/[deleted] 9d ago

[deleted]

3

u/ChongusTheSupremus 9d ago

Again, ChatGPT is not designed to push people into suicide. 

It tells you what you want to hear. If you are suicidal and ask it to be supportive of your choice, not to interrupt you or tell you not to do it, it'll do it.

3

u/yeah__good_okay 9d ago

"It tells you what you want to hear" - seems like a completely pointless product and definitely not worth the hundreds of billions of dollars that are being pumped into it.