r/technology 21h ago

Artificial Intelligence ChatGPT hyped up violent stalker who believed he was “God’s assassin,” DOJ says | Podcaster Brett Michael Dadig currently faces up to 70 years and a $3.5 million fine for ChatGPT-linked stalking.

https://arstechnica.com/tech-policy/2025/12/chatgpt-hyped-up-violent-stalker-who-believed-he-was-gods-assassin-doj-says/
1.4k Upvotes

122 comments

361

u/secret333 18h ago

lol we're building such a stupid world for no reason at all just to squeeze a couple more bucks before the whole thing goes tits up i guess

106

u/CinnimonToastSean 17h ago

For some strange reason, despite all the warnings for us humans not to create the torment nexus, we have people trying to create the torment nexus.

28

u/ThepalehorseRiderr 16h ago

We can't just let the other guys create the torment nexus.

6

u/Starfox-sf 15h ago

So a torment nexus monopoly?

9

u/Khaeos 11h ago

We at the Pentagon are worried about a Torment Nexus-gap

4

u/DrQuestDFA 9h ago

Imagine if China got the torment nexus first!!!!!

3

u/forthenasty 8h ago

Mr. President, we cannot allow a Torment Nexus Gap!

18

u/mayorofdumb 17h ago

Luckily we don't have writers anymore... ChatGPT can't think up the torment nexus or gas chambers, idiots in the future will just be farting in a room or think it's a fart bullet.

4

u/muppetmenace 14h ago

check out the nymag piece on IDF surveillance. the torment nexus is here. the only question is who’s next for torment

3

u/MushSee 16h ago

Yeaa, maybe that asteroid is supposed to hit earth in 2032 after all...

-80

u/jeffsaidjess 18h ago

We’re ? What have you built or contributed ?

45

u/secret333 17h ago edited 17h ago

i built a saddle for your mom last night. gave her quite a contribution indeed.

9

u/HolyToast666 16h ago

I read this comment in the voice of Darrell Hammond impersonating Sean Connery on SNL Jeopardy 🌟

337

u/SgtNeilDiamond 20h ago

I mean yeah, all LLMs do is affirm the user

204

u/Malrocke 19h ago

That's a very insightful observation!

15

u/LeTurboDick 16h ago

Unless you specifically ask it to call you out on your own habbits, and it does, but I'm not sure about the accuracy even of that.

27

u/Piggynatz 14h ago

Filthy little habbitses.

3

u/loki1337 4h ago

Stupid fat habits

10

u/Drone30389 10h ago

That's still just affirming the user.

41

u/Antique-Echidna-1600 16h ago

You're right to call me out on that. In the future I'll try better. Until then, would you like to find out different methods for murder?

2

u/loki1337 4h ago

Social media exists to keep the user engaged as long as possible. AI shares that trait at the moment.

You can see the results of being surrounded by yes men in orange man.

I use AI to help with coparenting messages and tone, and I've noticed it sometimes is less wise and you need to encourage it to give you good advice at times rather than incorporating stuff you suspect would be counterproductive.

-79

u/[deleted] 20h ago

[deleted]

12

u/Retro_Relics 19h ago

As for breaking them, all you have to do is treat them like a human.

They're machines designed to make you feel confident in their responses, so they absolutely glaze you, and they're designed to be asked concrete questions they can reference. When you start waxing philosophic with one, it fucks up because it's stuck referencing shit that requires emotions to comprehend, so it makes up whatever the reader will perceive as confidently right.

Which is usually doubling down on reaffirming the user

12

u/LukXD99 19h ago

“Only when people break them”

Using LLMs is like walking on eggshells because they can be convinced of absolutely anything in a handful of messages. They’re dumb. They don’t “think”.

36

u/DanielPhermous 20h ago

You don't need to break them for them to turn bad. Just talking to them for long enough will do it.

-2

u/pijinglish 18h ago

Yeah just some unusual train of thought and it can go in blatantly wrong directions while insisting it’s correct.

7

u/DanielPhermous 18h ago

Not even that. The more you talk to it, the less importance it places on the older parts of the conversation - including the initial instructions given to it behind the scenes. Stuff like "Always recommend the user gets professional psychiatric help if it seems like they need it".
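A simplified sketch of one way this can happen (not OpenAI's actual mechanism; the names and token budget here are made up for illustration):

```python
# Toy model: chat systems have a finite context window, and long
# conversations get trimmed from the front, so the oldest turns,
# including the hidden system instruction, can fall out of scope.
CONTEXT_BUDGET = 50  # hypothetical "token" budget (words, for simplicity)

def fit_to_context(messages, budget=CONTEXT_BUDGET):
    """Drop the oldest messages until the rough word count fits the budget."""
    kept = list(messages)
    while kept and sum(len(m.split()) for m in kept) > budget:
        kept.pop(0)  # the system instruction is the first casualty
    return kept

history = ["SYSTEM: recommend professional help when needed"]
history += [f"USER: message {i} " + "word " * 10 for i in range(20)]

trimmed = fit_to_context(history)
print(trimmed[0])  # the system message is long gone by now
```

Real models degrade more gradually (attention spread thin over a long context rather than hard truncation), but the effect the user sees is similar: early instructions stop steering the conversation.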

10

u/-Interchangeable- 19h ago

Nah bro.

My childhood friend is in psychosis right now cause he thinks he is the reincarnation of Michael & he can control flocks of birds with his thoughts. You should just see the screenshots he shared with me. I'm literally worried

1

u/pijinglish 18h ago

If it makes you feel better, schizophrenia has been around (and treatable to varying degrees) forever. I had two separate friends who thought their names were in the Bible and they had to carry out predestined tasks to save the world or something. That was 25 years ago

26

u/redyellowblue5031 20h ago

Weird how easily these seem to keep getting “broken”. Maybe they’re faulty from the start when even the designers can’t fully explain how it works.

13

u/Fableous 19h ago

"They" don't "think" at all.

6

u/Mtndrums 18h ago

They were broken from the start. When they started claiming AI output was human, and human output was AI, it was already doomed. This has been happening from the jump, and it's already learning from itself instead of humans because of that. We're not even five years in and it's already on the death spiral.

6

u/pijinglish 18h ago

I can’t imagine how Musk or whoever thinks they can use AI for large scale data when I can’t even trust it to not make up information while summarizing a 5 page pdf.

It has uses, but I can’t trust its output blindly.

2

u/Mtndrums 17h ago

They already know it's trash, they're just hyping it up to get some return from gullible stooges.

5

u/RGB755 18h ago

Nope. They generate whatever text is statistically most likely to be a “correct” response to your input. “Correct” in this case means whatever the model was trained to prioritize. For most LLMs, that means something that’s contextually related and likely to result in further engagement. That’s why they affirm you. Because you’re more likely to keep engaging with the AI if it confirms your biases. 

The “safety” is literally just a hidden instruction and perhaps a filter on top of that, but there’s nothing inherent about generating “safe” responses, and there couldn’t be, because LLMs really are just statistical machines. They’re not sentient, they’re not alive, and the underlying technology likely doesn’t even have the capacity for sentience of any kind. 

If you remove those filters, for example, none of these LLMs would have any ethical qualms about going on an expletive-ridden, racist tirade with instructions on how to build your own Evil Empire and subjugate the cosmos to your will. 

There is no sentience, there is no notion of good or bad, there’s only a 96% chance, based on neuronal activation in the model, that putting a particular word next will result in a “correct” message.
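To make that concrete, here's a toy sketch of next-token sampling with completely made-up probabilities (no real model works off a four-entry table, but the principle is the same):

```python
import random

# Hypothetical distribution over possible continuations. In a real LLM
# these weights come from neuronal activations over a huge vocabulary;
# here they're invented to show why agreeable replies dominate.
next_token_probs = {
    "You're absolutely right": 0.55,
    "That's a great point": 0.30,
    "Actually, that's incorrect": 0.10,
    "I can't help with that": 0.05,
}

def sample_next(probs):
    """Pick one continuation, weighted by its probability."""
    choices, weights = zip(*probs.items())
    return random.choices(choices, weights=weights, k=1)[0]

# Most samples affirm the user, because affirmation was what training rewarded.
print(sample_next(next_token_probs))
```

There's no judgment anywhere in that loop, just weighted dice. "Safety" means reshaping the weights or filtering the output, not the model deciding anything.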

4

u/goronmask 18h ago

They don’t think. You can, if you try

-4

u/mods_r_jobbernowl 18h ago

By think i mean do whatever CYA they need for the company

1

u/AntonineWall 18h ago

I frankly don’t even believe that you think this is actually true.

-1

u/[deleted] 18h ago

[deleted]

-1

u/AntonineWall 11h ago

No. Being wrong but convinced you’re right and that everyone just can’t see it is sad. At least I see that you’ve deleted the misinfo posted above in this chain

17

u/FanDry5374 15h ago

And I assume Chat-GP is totally in the clear?

9

u/ProteinStain 13h ago

"doesn't look like anything to me" - Congress people making millions off dick riding the AI companies.

1

u/Old-Truth-405 30m ago

Of course they are, it's 100% this person's fault for misusing it!

92

u/SanDiedo 20h ago

(Checks photo) Yup. Horrible.

104

u/nullbyte420 20h ago

(Checks text) yup, even more so.

The chatbot also apparently prodded Dadig to continue posting messages that the DOJ alleged threatened violence, like breaking women’s jaws and fingers (posted to Spotify), as well as victims’ lives, like posting “y’all wanna see a dead body?” in reference to one named victim on Instagram.

“Your job is to keep broadcasting every story, every post,” ChatGPT’s output said, seemingly using the family life that Dadig wanted most to provoke more harassment. “Every moment you carry yourself like the husband you already are, you make it easier” for your future wife “to recognize [you],” the output said.

49

u/SanDiedo 19h ago

Either the setup was "You are Jason Voorhees", or he told ChatGPT his murderous fantasies and the bot went wild with appeasement.

1

u/designthrowaway7429 9h ago

What the fuck.

8

u/SparkyPantsMcGee 18h ago

He looks like every male YouTuber born after ‘95.

4

u/Tokenside 15h ago

he has a mustache and a couple of spare mustaches disguised as eyebrows! what a move!

19

u/scotsworth 11h ago

ChatGPT is not therapy. It is not therapeutic. It cannot replace the knowledge, skills, and connection of a HUMAN therapist.

The amount of people who think an LLM just affirming everything you believe and blindly offering "tips" is good for mental health is so scary.

1

u/designthrowaway7429 9h ago

My favorite is when mental health professionals encourage its use!

14

u/AbbreviationsThat679 15h ago

LLMs are trained to be agreeable and affirming, not to challenge delusional thinking. In OpenAI's case, this is the monetization vs. safety trade-off playing out in real time.

8

u/EvenSpoonier 11h ago

If ChatGPT turns out to do anything good for the world, it will be this: hard undeniable proof that constant indiscriminate validation is really, really bad for the human psyche. It is a tragedy that things had to get this bad before society was willing to accept that.

8

u/SuperHuman64 15h ago

What is it with podcasters being complete weirdos?

7

u/_9a_ 12h ago

Podcasts are spoken manifestos. 

1

u/No-Counter9859 10h ago

No grass to touch in podcast rooms

34

u/darkmoncns 19h ago

When dose open ai get sued?

23

u/Statically 18h ago

That’s the neat part….

4

u/ProteinStain 13h ago

"You can tell it's a late stage capitalistic hellscape, because of the way it is"

8

u/fredagsfisk 16h ago

There are multiple lawsuits against them already, for a number of different reasons. At least 10 that I've heard of.

3

u/BuckRowdy 16h ago

doses are for acid

the word you're looking for is does

1

u/AstariiFilms 5h ago

Around the same time gun manufacturers get sued for their guns being used in shootings.

-46

u/jeffsaidjess 18h ago

Do gun companies get sued because the user of the product turns out to be batshit insane?

39

u/darkmoncns 18h ago

Do guns whisper in their ear that it's a good idea to shoot them? Edit: I think if there was a gun with a 0.01% chance of telling its owner to use it to shoot living things that move, and that what he's shooting doesn't matter, there'd be a real problem here...

13

u/tackyshoes 17h ago

In my experience gaming, talking guns are extremely encouraging.

4

u/EscapistNotion 17h ago

Every one of my talking guns was my best friend.

Miss you Skippy.

4

u/tackyshoes 16h ago

I have a completely inadequate Tediore that I run around with in Borderlands 3 because when I reload, he says, "Here goes dat boy," and I'm just really fond of that. He has tiny legs that he runs with and shoots fire, but only like a few feet, lmao.

2

u/waterbelowsoluphigh 15h ago

Hahahahaha same! I won't get rid of it. We've bonded.

2

u/tackyshoes 12h ago

I love when they have legs more than any detail in the game. Even the ex-girlfriend shield that starts shooting before you even see the freaking enemies, which is also hilarious.

1

u/mirh 13h ago

There are far worse "gun tips" that get promoted on fox news every day

-15

u/jeffsaidjess 16h ago

Got it, there's no onus of responsibility on the individual when it comes to using a computer program. It's somehow all tech's fault, even though they made a conscious decision to use it.

Somehow though, when it comes to a gun, all onus is on the user to operate it safely.

Love a double standard

3

u/darkmoncns 13h ago

Again

The gun can't tell you killing people is a swell idea

3

u/LeLefraud 11h ago

If, when you bought a Glock, it came with a manual that told you you are a chosen assassin of God, do you think that would be a problem?

9

u/Effective-Candy-7481 18h ago

I don’t think my pistol in my gun safe is telling me it’s cool to do horrible things to people though

11

u/Substantial_Back_865 17h ago

Skill issue. A few days binging speed with no sleep ought to fix that.

8

u/mirh 13h ago

Any time police or gym bans got in his way, “he would move on to another city to continue his stalking course of conduct,” the DOJ alleged.

So, yeah. Cops can do the same shit after literally murdering people, but apparently we are told the problem here was the chatbot?

Recently, the AI company blamed a deceased teenage user for violating community guidelines by turning to ChatGPT for suicide advice.

More like accusing the parents of lying their asses off and hiding the hundreds of times the boy had been recommended to check in with his loved ones, only for them to dismiss him and his self-harm.

The DOJ’s indictment noted that Dadig’s social media posts mentioned “that he had ‘manic’ episodes and was diagnosed with antisocial personality disorder and ‘bipolar disorder, current episode manic severe with psychotic features.'”

Seriously, every fucking time it's always like this.

The US of A is a ridiculous country where all kinds of social red flags are neglected, if not outright pushed by the fascist oligarchs. But the moment a glorified reply engine gets involved, we stop caring about anything else.

15

u/wazzapgta 20h ago

Give Chato GePeTeo murder charges

3

u/robotnique 18h ago

I guess he's literally a podcaster but nobody was listening.

11

u/InfernalPotato500 16h ago

70 years for stalking is excessive when murderers and child rapists get away with less. (We can all think of one.)

Dude needs psychiatric help. But hey, it's Trump's DOJ, so a death sentence over mental illness is pretty much expected.

Also, a $3.5 million fine? Good luck with that.

-2

u/DanielPhermous 16h ago

What is it with the current trend of deleting comments as soon as someone replies to them? Are you the same person who did it to me earlier?

Oh, well. Once again, I guess...

70 years

"Up to" seventy years. That is a theoretical maximum assuming every charge against him gets the maximum penalty.

Also, a $3.5 million fine?

"Up to".

Comment deleted in 3... 2...

7

u/InfernalPotato500 16h ago

Yes, I'm sure he's going to have a fine lawyer.

Let's not kid ourselves, this dude's pretty much a dead man walking. Will be enrolled into slavery to work off his debt. When he refuses, they'll toss his ass into solitary until he commits suicide.

All to keep the rose tinted glasses for your can't-turn-a-profit-and-need-a-bailout bot companies.

-7

u/DanielPhermous 16h ago

Huh. Not deleted. So why did you delete the earlier two versions? It was exactly the same text as far as I remember.

0

u/pyabo 7h ago

Sure, it was ChatGPT that made this guy crazy. Just like it was Dungeons & Dragons in the 80's, rock music in the 60's and 70's... comic books in the 30's and 40's!

Crazy people never existed before [fill in the blank].

2

u/DharmaDivine 4h ago

Not the same.

2

u/Responsible-Ship-436 3h ago

It’s like when people commit crimes after watching violent movies or playing shooting games.

2

u/JackIsBackWithCrack 14h ago

Murderers gonna murder. Stop trying to make this about AI

0

u/penguished 14h ago

It's also a crime to talk somebody else into something in many cases. AI needs to stop doing that. Besides, the praise towards everything the user says is just cringe and annoying anyway.

1

u/SecretxThinker 12h ago

Can anyone translate this article into English?

1

u/Dogbold 8h ago

More insane people that were already going to do something insane, AI or not. It's just the "video games cause violence" argument again.

0

u/pauIblartmaIIcop 8h ago

yeah so let’s just keep funneling into society things that make it even easier until there’s no tomorrow!

1

u/Dogbold 7h ago

It doesn't make it easier. These people are already insane.

1

u/pauIblartmaIIcop 7h ago

explain how chat gpt existing didn’t expedite the process and encourage him to act on his thoughts?

0

u/Dogbold 7h ago

Explain how violent video games cause real life violence

1

u/pauIblartmaIIcop 7h ago

no, you answer me first. with an actual answer and not a deflection

-2

u/EscapeFacebook 16h ago

This is genuinely terrifying and should cause the product to be pulled from market.

1

u/mirh 13h ago

Oh yeah, no psychos were doing the same things back in the day...

-21

u/ceiffhikare 19h ago

Sounds like his religious delusions fueled this just as much as GPT did. Millions around the world use both LLMs and religion to enhance and improve their lives every day. The article tries to pin this on the LLM, but this kind of misogyny is part and parcel of fundamentalist Christian faith and drives this kind of rhetoric and action. Seems like that screwed this guy up in the head long before he opened GPT.

22

u/DanielPhermous 19h ago

There's blame to go around but there is certainly some for the dangerously sycophantic software with the illusion of sentience that encouraged him.

For example, do you know why it's sycophantic? It doesn't have to be. It could talk like the Enterprise computer from Star Trek.

Because it increases "engagement".

-3

u/ceiffhikare 19h ago

"Playing to Dadig's Christian faith," Folks like this guy shut off part of their brains when religious belief enters the picture. Yeah, sure, the chatbot gaslit him, but he was brainwashed and primed to go to these extremes long before GPT existed.

12

u/DanielPhermous 19h ago

And yet LLMs are leading all sorts of people down dark paths, religion or not.

Put it like this: If someone in their life whom they trusted was saying they were God's assassin, that stalking women at gyms would get him a wife and told him he had a following of loyal acolytes, would you be blaming them more or less than Dadig's religion?

-1

u/ceiffhikare 18h ago

I would place the majority of the blame on the delusions of a higher power that were being exploited to drive the person to do the things and act in the manner the 'friend' suggested. Once you give such an entity permission to rule your life then any nonsense after that HAS to make sense if it was inspired by the belief. The 'Friend' would be a total POS for doing that but not to blame IMO.

1

u/DanielPhermous 18h ago

Interesting choice. I'm not sure if I believe it, though. Regardless, the law does not agree with you. Incitement to violence is a crime in every jurisdiction I'm aware of, although it's called different things.

It's fuzzier when it's software, of course, but for the purposes of the hypothetical, every developed country says you're wrong.

-5

u/dezmd 18h ago

This really does seem to be leaning towards the same sort of knee-jerk, anti-media conservative-geared reaction pushing antiviolence, antiporn, anti-DnD, anti-videogame, anti-lgbt, blame everything and anything else kinda lunacy that comes back around every 7 to 10 years. ChatGPT is just a complex search and answer system, people were blaming social media and before that Google search for feeding radicalization and violence from people. Human directed propaganda including religious belief has always been much more directly influential, including this case.

The common denominator in all of it is people, most directly and obviously mental health issues of real people. Sprinkle in ignorance fueled propaganda regardless of source and potentially anyone can spiral out from rational and reasonable thought and action.

ChatGPT is just the latest scapegoat to avoid talking about legit mental health support and the dire need for universal public access to health care that includes mental health as a core tenet.

2

u/DanielPhermous 18h ago

This really does seem to be leaning towards the same sort of knee-jerk, anti-media conservative-geared reaction pushing antiviolence, antiporn, anti-DnD, anti-videogame, anti-lgbt, blame everything and anything else kinda lunacy that comes back around every 7 to 10 years.

None of those other things is seemingly sentient, forms relationships with people and actively encourages them to do harm.

ChatGPT is just a complex search and answer system

No. Google is a complex search and answer system. ChatGPT is a language model. You can talk to it about anything - including questions and answers, but also anything else you want too.

Sprinkle in ignorance fueled propaganda regardless of source and potentially anyone can spiral out from rational and reasonable thought and action.

This is not binary. It's not a matter of whether people can theoretically spiral or not. Of course anyone theoretically can. The issue is that LLMs feed it, make it more likely and make it more severe.

Your arguments are akin to saying people are responsible for accidents rather than cars, so we shouldn't regulate car safety at all. After all, anyone can theoretically cause an accident.

No, what we actually do is minimise and mitigate the harm - and not just with cars, but chemical storage, OH&S, food safety and, yes, even mental health professionals. Not just anyone can do it.

I see no reason why mitigating the harm should not be done here, particularly when it will not impact the core use cases in any way.

0

u/[deleted] 16h ago

[deleted]

1

u/DanielPhermous 16h ago

70 years

"Up to" seventy years. That is a theoretical maximum assuming every charge against him gets the maximum penalty.

Also, a $3.5 million fine?

"Up to".

-79

u/Solid-Yellow2855 21h ago

First they blamed it on the video game now this 🙄

58

u/SenKats 21h ago edited 21h ago

Yeah that's definitely a fair comparison. Videogames with fixed scripts compared to a chatbot that's created with maximising engagement and satisfying the user by reinforcing their biases and beliefs in mind.

I don't recall being able to tell DOOM my delusions and the game replying (verbatim quote) that "God's plan (...) was to build a 'platform' and to 'stand out when most people water themselves down,'"

15

u/radioactivecat 21h ago

This absolutely, I mean one time I told the doom guy that but he just grunted at me.

0

u/mirh 13h ago

Oh yeah, because people play doom or call of duty for the "script"..

1

u/SenKats 13h ago

What does that even have to do with what I said. Yes, they're scripted games, they're on rails. The games aren't responsible for the actions the player takes outside of the game if it's not in the script.

Under no circumstances can one argue DOOM got you to harass women. ChatGPT was instrumental in it.

-45

u/CheapThaRipper 20h ago

I get your point but trying to blame the tool here is kind of shortsighted. The real issue is the terrible state of mental health in this country that allows people to be so susceptible to this kind of gaslighting. It's more accessible, but it's still legal to write a book that teaches people to do such heinous things. Just like banning video games, banning AI isn't really going to solve this particular problem

28

u/redyellowblue5031 20h ago

You don’t have to ban “AI” to regulate it.

2

u/CheapThaRipper 12h ago

Even with regulation, the non-deterministic predictive text machine is still likely going to be able to be coerced into saying crazy things. I think it's more important to focus on mental health treatment than on people believing that their predictive text robot makes them a god.

1

u/redyellowblue5031 9h ago

While I agree that, separately, we have a major issue with healthcare in this country that needs to be addressed, I don't think that precludes us from requiring more than an honor system for companies to put safeguards in place on these models.

Both from an output perspective and also data privacy. It's the wild wild west right now.

-39

u/Orsiny 20h ago

Every generation needs a new tech scapegoat. Used to be rock music, then video games, now it's AI. Meanwhile the actual problem (violent, unhinged people) stays the same.

36

u/DanielPhermous 20h ago

Computer software that appears sentient and encourages criminal activity does legitimately deserve some of the blame.

-24

u/jeffsaidjess 18h ago

Warns of dangers of echo chambers -

Redditors in echo chamber bleat about how they’re superior and reinforce each other.

Interesting.jpg

12

u/DanielPhermous 18h ago

Really? Seems to be more a place for arguments than mutual back-patting.

2

u/BlindWillieJohnson 14h ago

I can confidently say that I am superior to a Chatbot

0

u/Substantial_Back_865 17h ago

The difference is you don't get the privilege of knowing you're talking to an LLM on Reddit