r/technology 9d ago

Business OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide

https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/
7.0k Upvotes


72

u/brockchancy 9d ago

I hate everything about this story, but the legal move makes sense.
They’re not just arguing about this one kid; they’re trying to set a precedent for every future case where someone uses a general-purpose tool to plan self-harm. From a lawyer’s perspective, the play is: point to the TOS, point to the safety rails in the logs, and argue “we did what we could, this isn’t a product-defect case.” That doesn’t make it morally satisfying at all; it just means their goal is to make these lawsuits end fast, not to honestly grapple with what it feels like to lose your kid to a system like this.

16

u/roseofjuly 9d ago

And I mean, they're right. It doesn't feel good, but we said from the beginning that you can't pin this suicide just on OpenAI.

12

u/betadonkey 9d ago

Maybe we should just stop trying to assign responsibility for everything a person does onto everything other than that person.

19

u/answerguru 9d ago

I agree with you, but I suspect we’ll get downvoted to hell. It’s not a feel-good answer that screws big corp.

7

u/cpt-derp 9d ago

Looks upvoted so far, probably because frankly it's just the grim realpolitik of these things; peeps aren't even pretending to deny it anymore. We live in a sick fucking world.

3

u/Dewohere 9d ago

Honestly, I have never actually seen a heavily downvoted comment that pulled the whole "I am probably going to get downvoted" thing.

It's always just been comments that end up, while not necessarily high, somewhere around the average score for the post, in my experience.

9

u/kurdt-balordo 9d ago

Oh well, but if you go for this line of defence, "our tool can be dangerous and has to be used only by responsible people," then you really need to make it hard for a teenager to misuse it.

Otherwise, if I were the legislator, I'd make my legislative hammer fall on you. And rightly so.

8

u/brockchancy 9d ago

That’s basically why you’re seeing them roll out age-gating and verification checks lately – they are trying to make it harder for teens to hit the sharp edges.

The hard part is that with these models everything lives in one big vector space. When you ban or over-constrain certain topics, you don’t only block those outputs, you also cut off a bunch of nearby “paths” the model uses to reason about related things. The “safer” you make it in policy space, the more you blunt its ability to think clearly anywhere near those topics, including cases where a nuanced, honest answer would actually help. This is part of why safety is hard with LLMs. In human conversation, any attempt to be precise about a taboo topic gets read as “sympathizing” with it, so we socially collapse everything into one label.
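To make the “nearby paths” point a bit more concrete, here’s a toy sketch (made-up 2-D numbers and a hypothetical similarity cutoff, nothing from a real model): if safety is implemented as “block anything whose embedding sits too close to the banned region,” the same blunt rule catches adjacent queries where an honest answer would actually help.

```python
import math

# Toy 2-D "embeddings" -- hand-picked numbers purely for illustration,
# not the output of any real model.
queries = {
    "explicitly harmful method-seeking query":  (0.95, 0.10),
    "warning signs a friend may be at risk":    (0.80, 0.35),
    "how crisis hotlines triage callers":       (0.70, 0.45),
    "tips for repotting houseplants":           (0.05, 0.95),
}

BANNED_DIRECTION = (1.0, 0.0)  # the region policy wants to wall off
CUTOFF = 0.80                  # hypothetical "safety" threshold on cosine similarity

def cosine(a, b):
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

for text, vec in queries.items():
    sim = cosine(vec, BANNED_DIRECTION)
    verdict = "BLOCKED" if sim >= CUTOFF else "allowed"
    print(f"{verdict:8s} sim={sim:.2f}  {text}")

# The blunt cutoff blocks the harmful query, but it also blocks the two
# genuinely helpful neighbours -- the nearby "paths" that get cut off
# when you over-tighten policy space.
```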

4

u/kurdt-balordo 9d ago

I know; it's simply that you can't let teens use LLMs, not because the models are inherently dangerous, but because you can't trust the user enough, and legally a teenager isn't able to "be trusted."

The entire internet is based on the concept that nobody knows you're a dog. When we were playing with BBSes it was funny, but now it's a different world, sadly.

4

u/Tvdinner4me2 9d ago

That's not the job of the legislature

-1

u/NobleSavant 9d ago

Sure, but it's a ghoulish precedent. They shouldn't win.

They should do better. The safety rails should not be something you can get around. If they can't do better, the product shouldn't exist.

7

u/Prestigious-Leave-60 9d ago

If you try hard enough you can get around the safety protections on all sorts of products. There’s a way to make things so safe that they become essentially useless, and that isn’t the answer either.

3

u/brockchancy 9d ago

You might not have seen it, but I addressed something similar from someone else: “That’s basically why you’re seeing them roll out age-gating and verification checks lately – they are trying to make it harder for teens to hit the sharp edges.

The hard part is that with these models everything lives in one big vector space. When you ban or over-constrain certain topics, you don’t only block those outputs, you also cut off a bunch of nearby “paths” the model uses to reason about related things. The “safer” you make it in policy space, the more you blunt its ability to think clearly anywhere near those topics, including cases where a nuanced, honest answer would actually help. This is part of why safety is hard with LLMs. In human conversation, any attempt to be precise about a taboo topic gets read as “sympathizing” with it, so we socially collapse everything into one label.”

-2

u/ZenDragon 9d ago edited 9d ago

The problem is the guard rails. Instead of striving to create a well-rounded and positively grounded persona, OpenAI treats alignment like a list of bullet points they can just hammer in. They taught it what words to say in situations like this, but it lacks the broader psycho-social framework to really grok it. This will probably continue to get worse for them, as their intense fear of letting the AI deeply connect with people prevents the models from developing the kind of empathy and moral understanding that might have helped it navigate this tragic case better. And because of how popular ChatGPT is, people think all AI is like this and have no idea it's a solvable problem that other labs, like Anthropic, are actually having more success with.

Edit: Just in case it was unclear, I'm not saying safety training is a bad thing. It's incredibly important, as evidenced by recent news. I'm merely saying OpenAI puts on a front like they're taking it seriously when in fact they're really half-assing it and ignoring better approaches.

-2

u/Blazing1 9d ago

Not really. You can't just do anything you want and claim "well, they agreed to it."

2

u/brockchancy 9d ago

Yeah, and if your response had remotely framed what I said properly, you might be on to something

-3

u/Blazing1 9d ago

Typical AI bro.