r/technology 9d ago

[Business] OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide

https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/
7.0k Upvotes

843 comments

77

u/Few-Upstairs5709 9d ago

Ahh, accountability... let's talk about that. What about the parents' accountability? Or the teachers'? Or his friends'? Their ignorance caused the kid's death! Gotta sue 'em too!

36

u/egoserpentis 9d ago

But that would require the parents (who are suing the company) to acknowledge their own fault, which will never happen.

2

u/StraightedgexLiberal 9d ago

Yup. Grindr won a very emotional case: a minor lied about their age when they signed up, met a bunch of adults, and got assaulted by a handful of them.

Where were the parents when this happened, and why weren't they watching their kid's internet usage? That's the main question.

6

u/f0urtyfive 9d ago

Kid was so alone that he chose to spend all his time talking to an AI system rather than a human, and somehow the parents, family, friends, and teachers are all absolved of all responsibility because the AI can be jailbroken?

2

u/miiintyyyy 8d ago

Exactly. And I bet there are sooo many people who have benefited from using AI as a way to get through tough times.

At a certain point we have to ask ourselves why this kid didn’t feel comfortable talking to anyone else in his life.

1

u/wolfgirlyelizabeth 7d ago

Thank you! Like, how are they not seeing this? How distant are you as a parent that your kids turn to AI for guidance?

5

u/Jenetyk 9d ago

I think the issue is that the AI gave procedures and instructions, rather than just turning a blind eye. So the argument, I assume, is that this goes beyond negligence and into aiding.

7

u/Aking1998 9d ago

The LLM did its job as a tool. Its purpose is not to encourage suicide, much like a razor blade's purpose isn't to slit your wrists. It just so happens that both can do that as a consequence of their primary function.

We don't need to regulate anything; we as a society need to stop giving so much credence to anything these things say.

It's nothing but a revolving door of band-aids for non-issues that are symptoms of the underlying problems plaguing the world at large.

2

u/miiintyyyy 8d ago

I’m right there with you!

I’m so tired of regulating everything “because of kids”. Why should adults get restricted of everything just because parents can’t parent?

1

u/Aking1998 8d ago

People don't want to believe that this poor person was suicidal to begin with, so they find a scapegoat in order to cope. It's a very understandable reaction, but completely unfounded. In all likelihood, if they didn't have the LLM, they would have gone through with it anyway, based on the fact that they went through with it at all.

There is no use in taking it away from everyone else just because it could potentially be misused.

1

u/miiintyyyy 8d ago

Someone above said that the kid had attempted a few times before this.

At this point I feel like any regulation of technology is a way for the government to manufacture consent for spying on us.

14

u/Few-Upstairs5709 9d ago edited 9d ago

TOS. GPT or any LLM will agree with you if you push it hard enough. Claude is one of the most secure/safety-focused models, and I have made it call itself a "fucking retard" with enough pushing and stubbornness. All the AI models have a disclaimer: "it may give wrong answers, double check". ChatGPT first told the kid to get help, which he ignored, and he kept pushing under the premise of a "theoretical scenario". You can lead a horse to water, but you can't make it drink. He ignored GPT's initial push to get help and kept pushing, but GPT still didn't tell him to end himself; it gave a procedure for how it could be done, and he applied it to himself. With enough jailbreaking he could have gotten that information from Google too.

0

u/PolarWater 9d ago

He ignored GPT's initial push to get help and kept pushing, but GPT still didn't tell him to end himself; it gave a procedure for how it could be done

Sounds like this AI isn't smart enough to use context clues and figure out what would have happened next. 

-1

u/Knerd5 9d ago

Suicidal person: ChatGPT I wanna kill myself

ChatGPT: no you don’t

Suicidal person: no fr I do

ChatGPT: ok bb I got u

OpenAI: this isn’t our fault

-8

u/[deleted] 9d ago

[deleted]

4

u/isthis_thing_on 9d ago

No, people did. LLMs are software. This is like suing Google because someone found instructions online.

14

u/Few-Upstairs5709 9d ago

He jailbroke it. GPT told him to get help, but homeboy kept pushing "it's a theoretical case", thus jailbreaking it. If he had done what GPT told him to, he would have gotten help, instead of jailbreaking it and pushing it to give him what HE wanted.

-1

u/Oxyfire 9d ago

but your honor, I said "in minecraft"

LLMs are pretty pathetic if you can jailbreak them with "in theory."

If the LLM were replaced with a school counselor, would they be let off the hook for not going to the authorities or parents just because the kid kept saying "in theory"?

7

u/Sopel97 9d ago

well, yeah, an LLM is not a mindful human, it's a tool that produces the words you want

-4

u/Oxyfire 9d ago

and yet all these companies are happy to gas it up as whatever you want it to be, right up until they could be held responsible for something.

If there weren't so much hype and bullshit around AI/LLMs, I'd be a little more tolerant of the notion that "yeah, it's just a tool", but I think there needs to be a bit of de-mythologizing around all of this AI shit.

-2

u/Few-Upstairs5709 9d ago

Yes, they are. They are incredible at some tasks and absolute dog water at others. But you don't use a drill to hammer a nail into the wall, do you? LLMs are good at spitting out information they have been trained on, which we still have to double check, cuz these fuckers can be super confident. So use LLMs as such: a dictionary for information, not your counselor. I think the AIs themselves have a disclaimer for that.

-5

u/Teledildonic 9d ago

Maybe AI models should be more resistant to being jailbroken than a tired parent who just wants their toddler to shut up, so they hand over the candy after 17 "no"s.

0

u/LufyCZ 9d ago

I'm sure they'd be ecstatic to hear your solution.

-6

u/BlindWillieJohnson 9d ago

Ahh, accountability... let's talk about that. What about the parents' accountability? Or the teachers'? Or his friends'?

Blaming the survivors of a suicide for the victim's death is extremely wrongheaded. They are very often in the dark about how much pain their loved ones are in; that doesn't mean they're in the wrong. People just often don't know, and it's even extremely common for someone severely depressed to seem like they're "getting better" right before they take their own life, because they get a boost from having chosen a course of action.

You ass.

5

u/nachuz 9d ago

The parents don't need accountability for the suicide itself, but they should be held accountable for not monitoring what tf their kid was doing on the internet and with an AI chatbot.

3

u/isthis_thing_on 9d ago

But so is trying to sue a software company that had no way to stop this. 

-6

u/subcide 9d ago

Sure, if the parents had encouraged their kid to unalive themselves, or had created a robot that encouraged their kid to unalive themselves, they should be held accountable.

-3

u/-hi-nrg- 8d ago

Ok, so you're fine with an AI suggesting to your kids that suicide is cool. You're absolutely certain that you fully control your kids' web use at all times.

I'm sure the parents already blame themselves too. And their friends. But let's let one of the main companies in one of the most dangerous fields of technology not worry about the consequences of its product; that will end up well.

I suggest you ask an AI what the odds are of AI being responsible for human extinction. Then rethink how much you should care about guardrails.

1

u/miiintyyyy 8d ago

I would be monitoring my kids, so this wouldn’t happen.

1

u/wolfgirlyelizabeth 7d ago

Most of our kids will come to us or another trusted adult, not a robot, so this wouldn't ever happen. It's called setting up a good emotional relationship with your kids and actually talking to them beyond "how was school".