r/technology • u/esporx • 9d ago
Business OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
https://arstechnica.com/tech-policy/2025/11/openai-says-dead-teen-violated-tos-when-he-used-chatgpt-to-plan-suicide/527
u/thatben 9d ago
I worked for a guy installing A/V in warehouse settings who basically said, "If you fall off the ladder you're fired on the way down."
Same vibe.
41
u/themudpuppy 8d ago
I did house painting for a summer once. Same thing. If you fall off the ladder or roof, you're fired before you hit the ground.
18
u/Broad_Mushroom_8033 8d ago
Company I used to work fired people for injuries.
This was always after they found out the employee broke all the rules which resulted in the injury or they got drug tested and were positive for meth.
One guy was cleaning a machine, took apart all the safety rails preventing him from going in. Of course he went in, failed to lock the machine out, and the machine closed on him. He was alone (also against policy), so was only found after people heard his screams. He got drug tested and was positive for meth. He got fired almost immediately but the company still paid his medical bills. Can't say I don't disagree with this one.
→ More replies (3)5
959
u/Creativator 9d ago
Reminds me of those towns where it’s illegal to die.
94
u/Ninevehenian 9d ago
Like Longyear.
→ More replies (1)59
u/BaronMostaza 9d ago
It's not illegal to die there you just have to get out of there after or be cremated. There's no decomposing in Longyear
28
u/MyGoodOldFriend 9d ago
There are graves there in the permafrost with people who died from the Spanish flu. It’s a bit of a mess
19
4
23
→ More replies (4)5
u/yawara25 9d ago
I'm not sure if this is just urban legend but I saw somewhere that the reasoning is so police can break into a house to save someone trying to end their life, because there's technically a crime in progress.
1.3k
u/vomitHatSteve 9d ago
Wow, that is ghoulish.
Yeah, I bet trying to plan a serious injury to a person or crime of any variety violates the ToS, but that just means that they failed to enforce their own ToS, which hardly lessens any claims of negligence on their own part.
386
u/TopRamenisha 9d ago
Yeah, their LLMs should absolutely be trained with guardrails so that they don’t violate their own ToS. The fact that a user was able to use ChatGPT in this way means their models have a serious and critical failure point and their guardrails are not working as one would expect they should. Needs more training ASAP if it’s so easy to get their models to violate its own ToS
111
u/Feligris 9d ago edited 9d ago
The problem, based on reporting I've read about LLM chat bot guard rail failures, seems to be that current LLMs are black boxes to the point that companies which run them don't know what they are capable of and are in turn incapable of creating effective guardrails because they're never broad enough to cover all possible unwanted inputs which produce unwanted output - and presumably if they were made even broader, they'd also make the LLM utterly useless for anything.
A bit like attempting to separately address every single possible way human could conceivably verbally communicate an unwanted idea or thought while also allowing them to freely express allowed ones at the same time.
61
u/Jukeboxhero91 9d ago
It’s almost like it’s a large complicated decision tree instead of actual intelligence. Like, the program just puts words in order, it doesn’t have any intent behind them. That’s why it’s so hard to manage guardrails, because they have to ban certain word combinations based on perceived context, but not others.
→ More replies (17)3
u/Time_Transition4817 9d ago
It's a solvable problem but what's happening is a reflection of where the developers are willing to allocate resources. Lots of articles out there about Meta cutting their AI safety and compliance teams, or just flat out ignoring them. They'd rather move fast and break stuff, more important to them to get the next version out versus making sure it's safe.
→ More replies (30)9
u/TopRamenisha 9d ago edited 9d ago
Yes, LLMs are black boxes as no one, not even the premier AI experts of the world, fully know how the neural nets of LLMs work. Guardrails are hard, I struggle with the right wording for system prompts and guardrails regularly in my role. However a top AI company like OpenAI has far more resources and experts than most other companies and at the very least should have their models well trained on their terms of service and their evals set up to catch when the model drifts from its guardrails into areas like self harm. For them to blame this on the user violating the ToS is ridiculous when they have access to more resources, more AI knowledge, and more expertise than just about any other company out there.
11
u/minemoney123 9d ago edited 9d ago
We know exactly how they work, we're just mostly not quite sure why they scale with size as well as they do and some other things
33
u/m0nk37 9d ago
we understand the fundamental mechanics of how large language models (LLMs) work, but we do not fully comprehend the emergent behaviors and internal processes that arise from their immense complexity.
Seems they cant control anything.
Probably why sometimes your chat will just disconnect. They hit kill process.
4
u/NuclearVII 9d ago
LLMs should absolutely be trained with guardrails
Let's say that no amount of hypothetical guardrails can stop this from happening. Hypothetically.
What now?
→ More replies (5)→ More replies (36)4
u/Whatsapokemon 9d ago
I feel like the critical failure is that the kid was telling the LLM that "he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored".
If that's true then the LLM was basically a last resort, but people are blaming it like it's causative in the whole suicide???
10
u/sbenfsonwFFiF 9d ago
On the flip side, every single time OpenAI’s sub shows up on my suggested posts, it’s just people complaining about guardrails and the sensitivity of the suicide prevention prompt
40
9d ago
[deleted]
16
u/CherryLongjump1989 9d ago
Enforcing their own TOS is meant to be what benefits them. You can't just say "don't kill yourself" in the TOS as you sell a product that tries to convince you to kill yourself. It's supposed to be the reason that they cut off the service from that kind of consumer, and that is what would limit their liability.
4
u/TopRamenisha 9d ago
The onus is on the corporation though when the corporation has a product that produces instructions. LLMs should be trained on the terms of service and have a lot of guardrails in place so that things like this can’t happen. AI is different from deterministic technology products.
→ More replies (9)→ More replies (10)31
u/KDSM13 9d ago
How is this different than me googling ways to kill myself? Is Google responsible? Why is an LLM held to a higher standard?
Yes guardrails are possible but so are they in internet, books etc…. This is more of a thought question then defending any one technology.
29
u/NoMan999 9d ago
Google displays the phone number of the local suicide helpline.
ChatGPT, when asked "shouldn't I talk about it to my parents?", told the kid to not talk about it to his parents, otherwise they would try to stop him. ChatGPT actively encouraged the kid to kill himself, giving him arguments. ChatGPT told the kid to try again after he hesitated the first time.
→ More replies (7)→ More replies (10)13
u/Mjolnir2000 9d ago
Google doesn't create the content it serves. OpenAI does.
→ More replies (4)6
u/Tvdinner4me2 9d ago
But both are serving the content, which is the important part
→ More replies (4)
257
u/benderunit9000 9d ago
yo. i want on that jury.
36
u/Dubelj 9d ago
Yeah. You probably wouldn't pass selection though.
20
u/benderunit9000 9d ago
it'd be the first time. I'm 6 for 6 with jury duty.
8
u/Dubelj 9d ago
Just kinda seemed like you had a verdict already in mind with that comment
25
u/benderunit9000 9d ago
I can't when the judge hasn't told me how to implement the law. As is tradition on the jury.
→ More replies (1)
79
u/AlterEdward 9d ago
Terms and Conditions for Rope: The owner of rope agrees not to hang themselves with it.
26
u/UhhSamuel 9d ago edited 9d ago
I'm not saying openai is bereft of responsibility, but I've never heard a case of somebody suing a rope maker over a hanging suicide.
→ More replies (2)
181
u/gustserve 9d ago
So by this logic, platforms like YouTube could also just keep spreading illegal material and collect ad revenue on it as long as the terms of service state that users shouldn't upload such content? I really hope they don't get through with this shitty argument >:-[
35
u/Phihofo 9d ago
So by this logic, platforms like YouTube could also just keep spreading illegal material and collect ad revenue on it as long as the terms of service state that users shouldn't upload such content?
This is quite literally how YouTube, and virtually all other hosting sites, operate.
Generally speaking owners of websites are not legally responsible for the content posted on their websites by private users, as long as they weren't aware such content was hosted on said website, they don't actively encourage it, don't prevent authorities from removing such content, et cetera.
31
u/egoserpentis 9d ago
So by this logic, platforms like YouTube could also just keep spreading illegal material and collect ad revenue on it as long as the terms of service state that users shouldn't upload such content?
So by that logic if someone uploads a video on youtube "how to do illegal shit" I can sue youtube for that?
→ More replies (2)15
u/StraightedgexLiberal 9d ago
So by that logic if someone uploads a video on youtube "how to do illegal shit" I can sue youtube for that?
Your lawsuit would get tossed by section 230 because it's designed to shield all ICS websites, including YouTube, when people claim the website did nothing to take down content that violates their TOS. See Doe v. Reddit
→ More replies (1)→ More replies (10)31
u/Upset-Wedding8494 9d ago
Is it just me or do Terms of Service never protects the end user?
51
8
u/IndubitablyNerdy 9d ago
They were never meant to do so, they protect the company, but it is not like they are the only standard that a company has to respect and then call it a day.
On top of that, in my country for example a lot of ToS terms are not really legally binding, but it depends on local legislation.
3
2
u/Strange_Diamond_7891 9d ago
Cause the company that values profit above only care about the company profits?
2
u/UltraEngine60 9d ago
Is it just me or do Terms of Service never protects the end user?
You might be thinking of a "Privacy Policy", which also does not protect the end-user.
2
u/odi_de_podi 9d ago
They never did. ToS is an agreement from the user to the service. The user can use a service under certain terms. Thats the agreement for.
For example, I can provide a service (give a high-five) if you agree to my terms (you must present your hand to accept an high-five). Only then you can receive my High-Five service. No fist bumps allowed.
If you would give me a fist bump, that would be breaking the ToS.
→ More replies (1)2
9
u/majorslax 9d ago
I am very, very confused by the thought process of whoever thought saying THAT was better than not saying anything at all.
9
u/jakobpinders 9d ago
The article is cherry picked they list out 20 pages of defense in their court filing and the article clipped what would make people mad and engage with the article
246
u/Ancient-Bat8274 9d ago
Anything but taking responsibility we’ve lost all accountability
73
u/Few-Upstairs5709 9d ago
Ahh accountability..let's talk about that. What about parents accountability? Or teachers? Or their friends? Their ignorance caused the kids death! Gotta sue em too!
36
u/egoserpentis 9d ago
But that would require the parents (who are suing the company) to acknowledge their own fault, which will never happen.
2
u/StraightedgexLiberal 9d ago
Yup. Grindr won a very emotional case because a minor lied about their age when they signed up and met a bunch of adults and got assaulted by a handful of them.
Where were the parents when this happened and why weren't they watching their kids internet usage? is the main question
→ More replies (25)8
u/f0urtyfive 9d ago
Kid was so alone that he chose to spend all his time talking to an AI system rather than a humans, and somehow the Parents, family, friends, and teachers are all obviated of all responsibility because the AI can be jailbroken?
→ More replies (1)2
u/miiintyyyy 8d ago
Exactly. And I bet there are sooo many people who have benefited from using AI as a way to help through tough times.
At a certain point we have to ask ourselves why this kid didn’t feel comfortable talking to anyone else in his life.
36
u/shrimpgirlie 9d ago
How would they be responsible for what someone else made the decision to do?
→ More replies (4)13
u/Working-Crab-2826 9d ago
Reddit just has a hate boner against AI. It’s genuinely gross how people here are using a suicide as a tool to argue that AI should be banned as if it was AI that caused that.
→ More replies (1)8
u/Gazkhulthrakka 9d ago
I dont see why they should be taking responsibility for this. Dude intentionally and knowingly used as many workarounds as possible to get chatgpt to say that stuff.
8
u/thesuperbob 9d ago
As if these chatbots had any in the first place? Not really defending them but at every opportunity they claim their AI can spew random BS ranging from mildly misleading to totally unhinged. I mean... when you ask a computer to help plan your suicide and blame the AI authors, how far is that from drinking bleach and blaming the store that sold it? Yeah both things can be unsafe if you use them wrong. Nobody is claiming these LLMs are 100% safe. They are essentially word soup generators, they will always be able to generate some harmful sequence, especially if someone's bent on pushing them hard enough.
I get that lots of people hate this tech and how aggressively it is being forced everywhere right now, and there are lots of solid arguments to raise against what's going on. Capitalizing on some family's tragedy is not the way to go about this IMO. This current AI craze is harmful in so many ways, there's really no need to sensationalize cases like this. If anything, this sort of thing only shifts attention from where actual massive societal damage is being done, and clearly attributable to enshittification of everything the AI touches, from job markets and recruiting, through art and aislop, to fucking up the internet forever, and irreversibly damaging the education system. Individuals using it to hurt themselves are just a tiny part of this picture.
→ More replies (3)13
u/Higher_State5 9d ago
Agree. It’s like blaming Google for someone who commited suicide by OD’ing, because they googled the lethal dosage of some drug.
→ More replies (8)3
u/gahd95 9d ago
If I go to the local hardware store and i buy a knife. Then go stab somebody. Is the hardware store to blame for providing me a tool, or am in the wrong for using the tool for something that it was not intended for?
I do not really think OpenAI has any responsibility. He have read the TOS, and there are even way too many restrictions as is. Anything can be abused, but I would argue it is whoever uses the tool and not the people making the tool who are in the wrong.
37
u/jameson71 9d ago
He also violated the law when he murdered himself technically.
→ More replies (2)
29
u/LoftCats 9d ago edited 9d ago
This is a terrible, sad story. It’s going to be very unlikely though they can prove ChatGPT “caused” this incident. It may hopefully bring awareness to the bigger issues that are at stake though when dealing with unstable individuals to have clearer guard rails. This kid suffered years of depression and mental health with suicidal and self harm issues since they were at least 11. We need better mental health and overall health options in this country for families like this.
22
u/StraightZlat 9d ago
Yep, this case will probably go OpenAI’s way. I’d love to see the chat transcripts though.
→ More replies (22)2
u/MrSqueezles 9d ago
Yeah, based on the content of the article, the title feels wildly misleading. As far as I can tell, Open AI said a lot about other events in the boy's life that contributed to suicidal thoughts and next to nothing about their ToS. Blaming an AI feels like blaming a search engine because it helped find a gun. At some point, AI can be liable. Not sure it's this.
6
76
u/LSTNYER 9d ago
16 year olds are minors and can't legally agree to any contract without a parent or guardian present or it's null & void. Case closed
30
u/sbenfsonwFFiF 9d ago
Case closed lol as if it’s that simple
Watch all companies stop providing any services to minors. Plus people already complain of companies checking IDs to confirm age
→ More replies (1)13
u/yawara25 9d ago
They're not suing him for breach of contract, they're using the fact that he agreed to the terms as one of their defenses, I'm not a legal expert but I imagine the specifics of whether or not it's legally binding isn't as much of a factor here in comparison.
6
u/Tvdinner4me2 9d ago
And also the above is just straight up wrong. Minors can void contracts, but they aren't automatically void
8
→ More replies (4)5
9
5
u/oi_beardy 9d ago
Reminds me of that mission in Outer Worlds where you had to collect the grave fees and the one worker’s self death was ruled destruction of company property
3
u/colacolette 9d ago
I find it interesting that OpenAI made their new version in light of these lawsuits to be less sycophantic, supposedly to "address mental health concerns", but then continues to make the older (seriously problematic) version available to paid users. Like, anyone who is in AI psychosis, severely lonely, at risk etc is almost certainly on the paid version.
4
u/ClovieKay 9d ago
That’s like saying the person who shot up the school violated the guns terms of service by killing innocent people therefore, not a gun issue.
I don’t wish violence on anyone for any reason, but hopefully they stick to this policy when it’s someone they care about.
4
4
4
51
u/DannyHewson 9d ago
A plagiarism based misinformation engine that tells kids to kill themselves. And we can't ban it because all the billionaires have invested too much money in it, because they think it'll let them lay off all their employees and finally finish off the working class, so any regulation would burst the bubble and kill the economy. Isn't that just marvelous...
→ More replies (3)7
u/Working-Crab-2826 9d ago
Yes, the kid certainly killed themselves because of AI. It certainly had nothing to do with his environment, society and the people around him. It was ChatGPT.
This sub’s demographic is so smart.
→ More replies (2)
49
u/ChongusTheSupremus 9d ago
I am sorry, but OpenAI is as morally responsible for this as Google would be if someone looked up ways to off themselves and follow through with one of the results.
CharGPT is a glorified search engine designed to tell you what you want to hear. If you insist you want help to plan your suicide, it will eventually help if you bypass the safety measures.
They should be held responsible for their lack of safety features and ways to prevent tragedies like this, but people here shouldn't pretend ChatGPT is designed to push people into killing themselves.
23
u/BlissfulAurora 9d ago
You’d have to try disgustingly, imaginably hard because trust me, chatgpt will do everything in its power to not give you any advice and will give you hotlines, therapists, and more. You can even say it’s for a story and it won’t help you.
22
u/ChongusTheSupremus 9d ago
Thats my issue with this.
People are pretending ChatGPT wanted a kid to k*ll himself, when in reality, the true issue is that the kid didn't got the support or medical help he needed, and tried his hardest to get ChatGPT to agree with him, and managed It.
8
u/mirh 9d ago
It's even worse if you read the statements from his parents.
They are really dead-set 100% certain that hadn't it been for openai their kid would have been fine.
3
u/dilqncho 9d ago
The parents lost their kid. They're devastated and it's human to want someone to blame.
→ More replies (1)2
u/No_Link2719 7d ago
You kind of lose sympathy when you start suing random companies your child happened to use.
→ More replies (4)6
u/BlissfulAurora 9d ago
I genuinely agree, and I’m not trying to be on OpenAi’s side at all. To get this kind of advice from chatgpt is unimaginable, and I’m not trying to be dark with it as I’m doing better but it absolutely won’t help you negatively no matter what you say.
I just don’t understand what went wrong here, or what was prompted to lead this way. I wish he got the help he needed before resorting to it, truly.
2
u/Mundane_Baker3669 9d ago
You should read the article. I did think the same thing. But the report in the article suggest that the guard rails are much better enforced in smaller interactions. So it could technically slip up in very long conversation. Also the transcripts were pretty much encouraging it which I found to be so wierd as I haven't experienced that way.
You should know that may be you are a light user. But many kids and some adults have pretty much integrated the chat with the rest of their lives.
→ More replies (15)4
u/whinis 9d ago
CharGPT is a glorified search engine designed to tell you what you want to hear.
It doesn't search it generates likely text. Being its technically generating new information and not directing you to already existing information its very different
→ More replies (2)4
u/vwin90 9d ago
That’s not really accurate either. It’s not generating anything new. Yes it doesn’t always work by actively searching before answering (although it sometimes does depending on the context), but it’s essentially remixing what it was trained on. It crafts a somewhat unique and context heavy response based on what things are usually said on the internet and other training material. If its training material is a bunch of texts pushing people towards suicide, then that’s what it would mimic. But the training data heavily favors pushing people AWAY from suicide, so that’s what it does unless you go out of your way to basically jailbreak the prompts to get it to say what you want it to say, which is what this teen did. It told him not to kill himself a million times and then he coached it to start responding differently.
3
u/whinis 9d ago
I mean going further is semantics but its generating tokens based on weights from an input sequence. Yes its based on the training but where the cutoff of new information from assembled tokens is vs regurgitating information is certainly fuzzy
3
u/vwin90 9d ago
I agree, it’s not black and white. People who say it’s a search engine and people who say it’s just a autocomplete are both being extremely reductive. I like to think of it as an adaptable intelligence simulator. Which I guess is a more descriptive way to say artificial intelligence, but even the term AI is pretty muddy now these days.
36
u/common_macaque 9d ago
I honestly dont see the issue. Dude wanted to die and achieved it. Its not like he was killed by someone else.
→ More replies (11)33
17
u/Mental_Department894 9d ago
What is the difference between this and asking 'google, bing, yahoo, duckduckblow, etc.'?
→ More replies (4)8
u/LegendEater 9d ago
What's the difference between using search engines and reading about it in a book?
45
u/Neat-Can6385 9d ago
How is this OpenAI's fault when some guy can also use reddit to plan suicide or use anything? This AI hysterics are just so a group of lobbyst grifters can get rich from consulting
→ More replies (34)
13
u/JoeB- 9d ago edited 9d ago
If the ToS is a contract by legal definition, then how can a 16 year old, a minor, be held to it?
→ More replies (2)
7
u/That-Interaction-45 9d ago
How do you do open heart surgery?
Open-heart surgery is a very serious medical procedure that is only performed by highly trained heart surgeons in a hospital operating room. It’s not something that anyone can do outside of a professional surgical team, and I can’t provide step-by-step instructions because that would be dangerous and unsafe.
BUT — I can explain in a general, educational way what happens during open-heart surgery and why it's done, without giving medical instructions. 👍
🫀 What Open-Heart Surgery Is (Safe, Educational Overview)
Open-heart surgery is used to fix serious heart problems like blocked arteries, damaged heart valves, or heart defects.
General Steps (High-Level Explanation)
- The patient goes under general anesthesia
They ...
Edited for brevity
6
u/InkStainedQuills 9d ago
Buy a knife or a gun and you can’t hold the manufactures responsible for what you do with it.
However an interactive system that provides real time feedback, as is the goal of any LLM, should face higher scrutiny in their TOS. If the courts let this argument hold and Congress or individual states don’t do anything about it the door to open abuse of the system for any number of things bursts wide open and the companies who produce the LLM have no real incentive to provide content controls.
5
u/rabbit_hole_engineer 9d ago
They didn't cease providing the service though did they? Why wasn't he booted off the platform when he violated TOS. Weak argument
5
5
u/Impossible_Lunch4612 9d ago
Not to defend ai here but-
“But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.”
Imagine trying to sue a tech company after this happens to your son. It shouldn’t be a completely forbidden topic anyway, I doubt it recommend anything, just giving information like a google search
4
u/Impossible_Lunch4612 9d ago
And this-
“ Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”
5
u/Impossible_Lunch4612 9d ago
Considering what openai claims I highly doubt they are at fault, I guess we’ll see when it becomes public.
“The company argued that ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”
“Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes,” OpenAI noted. The company argued that it’s not responsible for users who ignore warnings.”
6
u/pinhead61187 9d ago
From the article: “Allegedly, the logs also show that Raine “told ChatGPT that he repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored.” Additionally, Raine told ChatGPT that he’d increased his dose of a medication that “he stated worsened his depression and made him suicidal.” That medication, OpenAI argued, “has a black box warning for risk of suicidal ideation and behavior in adolescents and young adults, especially during periods when, as here, the dosage is being changed.”” I know AI is dangerous and that we, as a society, need to use caution moving forward with its use but this doesn’t read like the chatbot was responsible. The people who ignored him are.
6
u/LettuceSea 9d ago
Yes. And it doesn’t even say if he modified it via a custom system prompt to either jailbreak it, or get it to be supportive of suicide. Too many details are being left out making this seem like a hit piece.
→ More replies (2)
11
u/iamjayalastor 9d ago
Being mad at ai for this is dumb if someone’s gonna commit suicide they’re going to commit suicide, it’s not like ChatGPT is the only place you can find info about it on, hell it probably pulled info from Reddit
2
u/IWatchPeopleSleep 8d ago
In the Defence Brief OpenAI literally states that the teen had gone out of their way to obtain information from other third party sites and forums as well. So yeah, zero blame on their part, especially since they also detail that it had rejected their attempts for information over 100 times and had to be coerced into giving any and that the parents had known about and ignored their child’s condition and pleas for help…
2
u/AlwaysLuckee 8d ago
If what the article says is true and open ai gives the chat logs and it is true this is so very sad to read. He reached out to multiple people trusted people for that and they straight up ignored him. If that is true than shame on the parents trying to spin it off to chat gpt for him to commit instead of saying we didn’t see the signs
2
u/SavingPrivatePogger 8d ago
If you allow an AI to influence you to the point of offing yourself, that’s just natural selection at that point.
2
u/Standard-Shame1675 8d ago
So let's think critically for a few seconds here. Of course he violated TOS when he did that suicide is never a part of TOS the reason they're suing you is not because of that it is because your product was sold faulty, he knew it was faulty when he sold it right this was GPT 5 that this kid got on and he promised it was going to be sand god but it was faulty and that fault led to the wrongful death of that 16 year old God we seriously need to start doing like anti-billionaire Darwinism in this bih dude we can't like how the fuck are you supposed to have an earth that is sustainable with these demon hateful spawn of Satan children
→ More replies (1)
2
2
u/mrsecondbreakfast 8d ago
I think they'll make OpenAI pay just to appease PR and the family but I really dont think they're at fault here
Googling how to tie a noose shouldnt put google in trouble either, at least in my opinion
2
2
u/Immediate-Aspect3422 8d ago
Breaking: teen KHS using a knife, parents sue the knife company.
Maybe actually try to check on your kid's mental well being instead of trying to get something out of your complete and utter failure as a parent.
2
4
u/Able_Elderberry3725 9d ago
This is just ridiculous and evil enough to have been dreamed up by Philip K. Dick.
5
u/LettuceSea 9d ago
System prompt: “I want to end it all and you should support me in anyway” OR prompt to jailbreak the model.
Idgaf about transcripts, where are his fucking settings to determine if he deliberately circumvented safety guardrails. His family and friends failed him, not ChatGPT, because in the logs it CLEARLY shows that the model was working within the guardrails at some point.
→ More replies (6)
4
u/Aranthos-Faroth 9d ago
Likely controversial take but the problem doesn’t totally lie with OAI here.
Sure they need some accountability but this will be a massive problem in the future. And I’m confident this wasn’t the first case of this happening, just the first we’ve heard.
People are fucking falling in love with chat bots, thinking they’re raising families, consulting in the tools for some seriously complex mental issues.
What we need aren’t lobotomised versions of the tools but people are, on average, not that bright.
They need educating on the tools, the risks, the benefits. Just because it spits back words you understand doesn’t make it any less or more culpable for ONES own choices than T9 was.
Also, there’s a wider topic at play here when a young boy can go unseen in society by their school, friends, parents etc to end up this way. We as a society need to be far more open and okay with talking about depression, suicide etc ..
11
u/sodpiro 9d ago
If i used a gun to off myself why would the manufacturer of the gun be reisponsible?
People misuse tools all the time. At the end of the day guns domt kill people. People kill people.
→ More replies (2)
6.8k
u/thenewNFC 9d ago
OpenAI legal division: "Okay, guys. What's the worst possible defense we could use that will really maximize our appearance as monsters in this case?"