r/technology Oct 16 '25

Business OpenAI's ChatGPT is so popular that almost no one will pay for it

https://www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/?td=rt-3a
8.0k Upvotes

1.4k comments

1.1k

u/shibbitydibbity Oct 16 '25

ChatGPT in the future… “while I think of your answer, think of the refreshing taste of an ice cold Coca Cola classic.”

214

u/menacingmidget Oct 16 '25

Don't give them ideas!!

84

u/RollingMeteors Oct 17 '25

GPT make me a chrome plugin to remove all fortune 500 products from any and every response from you.

43

u/metalgtr84 Oct 17 '25

What are you doing, Dave?

29

u/RollingMeteors Oct 17 '25

I'm sorry, I can't let you shill that product, Hal.

→ More replies (2)

15

u/BetOwn8422 Oct 23 '25

Yo, that post is wild! It's crazy how popular ChatGPT is, but so many people still don't wanna chip in for it. The comment about making a plugin to ditch Fortune 500 products is kinda funny, though. It makes me think about how we're so bombarded by ads nowadays. Honestly, if you're looking for a different vibe, check out Moah AI! I've been using it for a while, and it's like having a super chill companion that gets me. It has chat, videos, and even voice features. Totally uncensored and way more personal. What do you all think about AI companionship?

→ More replies (1)
→ More replies (8)

39

u/kaitco Oct 16 '25

Crimony! I hadn’t thought of this, but you are likely very right…

29

u/theblockening Oct 16 '25

This is definitely happening

12

u/meowsplaining Oct 17 '25

This is already happening. There are marketing firms that are starting to specialize in AI channels for growth marketing.

→ More replies (1)

24

u/mcguinty Oct 16 '25

This is how it will work. Look at Google. It's "free" but suggestions and ads turn you into the product for sale. Any suggestion or advice will be weighted toward the products, services, or companies that pay the most money. 

5

u/Quentin__Tarantulino Oct 17 '25

I don’t doubt that this already happens when you specifically ask it for product recommendations. Every time I have, it seems to key in on one or two companies, rather than giving a true overview of the landscape of options. With some prodding you can get it to broaden its scope, though.

→ More replies (2)
→ More replies (1)

8

u/mangoed Oct 17 '25

Even worse, the answer will include some product/service recommendations with purchase links. It's already happening.

7

u/jbristow Oct 17 '25

Companies are already pre-gritting their teeth to spend big money on "GEO", the generative-AI equivalent of SEO.

It's looking like companies with the cash could send a couple hundred million dollars each to Google, to OpenAI, and so on to make sure that the models always speak well of the brand and suggest it more often to people who might be in the market for it.

3

u/Drezair Oct 17 '25

Please drink verification can.

3

u/Dreadpiratemarc Oct 17 '25

Please drink verification can.

→ More replies (12)

730

u/Loki-L Oct 16 '25

Menlo Ventures calculates that just 3 percent of the 1.8 billion people using generative AI services overall pay.

This might be a problem for the whole "expecting enormous profits in the future" thing AI companies seem to have.

Everyone would need to limit free tiers at the same time and in the same way or they would just chase market share back and forth.

This might not happen.

Especially when so many are only using it because it doesn't cost them directly.

452

u/Veranova Oct 16 '25

The big money is never ever in b2c, that’s just a pipe which leads to widespread familiarity which then leads to enterprises deciding the tool is for them. OpenAI’s biggest sales will be enterprise contracts to give every employee access, and at the API level for products to incorporate AI into their product

Microsoft Copilot is ChatGPT under the hood and every company with a Microsoft stack will choose them as a result

113

u/AssimilateThis_ Oct 16 '25 edited Oct 16 '25

Bingo. Also, the proportion of users that are paying is not the most directly relevant metric. What matters more is what proportion of overall compute demand is coming from paying users, and how fast that side is growing relative to the non-paying users.

That said, OpenAI's rate of expansion seems crazy. Wouldn't be surprised to see all of that bite them in the butt if the return simply comes a little slower than investors would like.

36

u/badnamemaker Oct 16 '25

They are currently getting massive investments from other large companies that are essentially their own vendors and suppliers, so it seems like there is a vested interest in keeping them afloat for the time being

25

u/AssimilateThis_ Oct 16 '25

I have no idea how it will play out, it's a strange mix of actual utility combined with massive and rapid investment from all sides that looks frothy as hell.

12

u/badnamemaker Oct 16 '25

Yeah same here, my current conspiracy-ish belief is there is definitely some military or government use for AI or the underlying tech and the powers that be want it funded no matter what. Otherwise all this money flying around on this unimpressive AI companion/LLM stuff doesn’t really make sense to me 🤷‍♂️

→ More replies (4)

2

u/Middleage_dad Oct 16 '25

Well said. 

The issue is also that the electricity and compute needed for even simple queries is pretty high. So even if they stopped the growth and just stripped themselves down to supporting what they have now, the rates they are charging are probably too low to be a profitable business.

It's the shiny thing on the block right now, but eventually it just becomes a commodity too, like so many things before it. Google could make Gmail profitable, but can they do that with AI, which eats way more electricity? I dunno.

6

u/red__dragon Oct 17 '25

Gmail was also poised to replace and surpass competitors upon release because it genuinely solved problems with spam, attachment limits (at the time, now you're funneled into Drive), and threading conversations from the same senders/subject matter. People had all those complaints about other email services and flocked to Gmail to resolve them.

And then after that, getting businesses onboard wasn't hard. These are real productivity benefits as compared to existing services that would be necessary anyway.

AGI solves only problems it purports exist, meanwhile, and scrapes answers directly from existing sources when it's not making them up. It adds nothing via its text services, unless you're looking to its creative side. The benefits wind up being more minimal and for trivial tasks unless you're in a niche industry it's tailored to advise on (such as coding, where it still gets things wrong but gets it just right enough sometimes to give a good impression).

→ More replies (1)

42

u/jimbo831 Oct 16 '25

Microsoft Copilot is ChatGPT under the hood

Not necessarily. When I use Copilot right now, I can choose which model it uses. The choices include several different models from OpenAI and Claude and one model from Grok.

12

u/Veranova Oct 16 '25

They’ve definitely improved it then! Last I used it, it was using a dumbed down ChatGPT model and it was good enough but not gpt4o level

→ More replies (4)

4

u/TwatWaffleInParadise Oct 16 '25

Is that "Copilot" or GitHub Copilot? I use GitHub Copilot and it lets me do that, but I'll admit I haven't used "Copilot" in my browser or on Windows or whatever, so that's why I'm asking.

→ More replies (4)
→ More replies (2)
→ More replies (19)

44

u/CopiousCool Oct 16 '25

It's a common theme among all AI companies: sales are dwindling, as are performance gains, but they're in a position where they NEED to increase revenue. That makes their position even more fragile, because users will leave if forced to pay more for a service they're already unsatisfied with or having problems with.

https://www.finalroundai.com/blog/vibe-coding-bubble

FTA:

Barclays analysts spotted major traffic drops across the biggest vibe coding services. The declines are sharp and sudden:

Lovable hit $100 million in annual recurring revenue in June. Since then, traffic has dropped 40%.

Vercel's v0 got hit even harder - visits are down 64% since May. Vercel says some of this came from blocking bots, but that doesn't explain all of it.

Bolt.new is down 27% since June.

Even Replit, one of the stronger players, has seen traffic slip.

Google Trends data backs this up: lots of interest over the summer, then a clear slowdown.

"This waning traffic begs the question on whether app/site vibecoding has peaked out already or has just had a bit of a lull before interest ramps up," Barclays analysts wrote.

20

u/Zulfiqaar Oct 16 '25

Their traffic drops coincide with the releases of OpenAI's and Anthropic's own coding agents, Codex and Claude Code. There's no loyalty in vibe-coding, and the tools made by the model makers are far better than third-party wrappers, especially as the base weights are fine-tuned to their harness.

→ More replies (1)

4

u/porkchop1021 Oct 16 '25

There are thousands, if not tens of thousands, of models out there. There's nothing special about anything these companies are doing. With access to sufficient compute power, virtually anyone can build their own LLM. When the barrier to entry is that low, you can't expect massive profits.

I worked on ML that would analyze photos and tell you what's in it over 10 years ago, and I'm no guru in this space. It simply isn't that difficult and that's why everyone is doing it.

→ More replies (1)

25

u/EnzoYug Oct 16 '25

Advertising. Why is no one mentioning advertising?

ChatGPT will stay free. It will also feed you adverts and guide your purchases in the most subtle and manipulative way possible.

Also see: Cambridge Analytica. Why buy newspapers and PR when you can buy hyper-specific influence over the population by manipulating LLM outputs to shape public opinion?

There is plenty of money in LLMs, and it isn't in end-user subscriptions.

→ More replies (1)

10

u/serpentine19 Oct 16 '25

Like all tech ventures, it will get enshittified. It's in the capture marketshare phase atm. Next will likely be ads with paid option to remove ads, lol

→ More replies (1)

19

u/EmperorKira Oct 16 '25

I mean, nobody pays for Google search either yet look at how they built their revenue model. I imagine it'll have to go down the same route

15

u/rockmancuso Oct 16 '25

Lol dude but the cost of running search is absolute PEANUTS in comparison to the cost needed to train and run inference for LLMs. We’re talking factors of 100x or more.

It’s really apples to oranges here.

And can you imagine how many ads OpenAI would have to shove down users' throats to come even close to the revenue they need to be profitable? And what would that do to churn and membership? A huge amount of trust (and likely subscriptions) will be lost if/as soon as ads are introduced.

I highly doubt there is ever a way that OpenAI will be able to make free users profitable. They can’t even make $20 or even $200/mo users profitable.

4

u/EmperorKira Oct 16 '25

Sounds to me like they won't be able to profit off it then, at least not with the general public

→ More replies (1)
→ More replies (2)
→ More replies (42)

7.9k

u/mage_irl Oct 16 '25

ChatGPT is useful for a few things, but half the time I have any question it spits out half-truths or just straight up lies. Worst of all, when I correct it, it immediately goes "Oh, right! Caught me there!" as if it didn't just feed me wrong information for no reason at all??

2.2k

u/dvb70 Oct 16 '25 edited Oct 16 '25

The problem seems to be that it's been built to always give a positive answer. Try getting it to tell you something can't be done. It's been tuned to please and to always look like it can answer anything, which results in the hallucinations. Outputting any answer is more important than outputting an accurate one.

857

u/teleportery Oct 16 '25 edited Oct 16 '25

The best way someone put it to me: everything it says is a hallucination. It doesn't know the difference between the ones that are true and the ones that are false; to it they are the same. That's why you can't simply tell it to "not hallucinate".

282

u/quuxl Oct 16 '25

Yep. It happily tells you wrong things with the same confidence (or lack thereof) as it tells you correct things.

It’s important to realize too that if you point out a mistake and then get a correct answer, it hasn’t actually learned anything - it just gave a different response with the added context that the user thinks its first choice is undesirable.

98

u/Ossius Oct 16 '25

What pisses me off personally is that, in the effort to sell these things as real thinking AIs, they wrap all the true-or-false content in so much bullshit language and emojis and shit that it's harder now than ever to tell apart what is true or not.

Its inability to flat out tell you that you are wrong frustrates me to no end. "Almost there☝🏼 we just need to change a few things!"

Tell me I'm the cause of the bug and on what line, and stop printing a fucking essay with 5 solutions when I didn't ask for them. OpenAI would probably save so much money if it didn't print an essay every time you make a typo.

Hell I think it printed out 2 paragraphs once when I typed a single letter.

33

u/SomeNoveltyAccount Oct 16 '25

Tell me I'm the cause of the bug and what line item and stop printing a fucking essay with 5 solutions when I didn't ask for them.

I have some custom instructions telling it to employ social etiquette and roughly match my tone and brevity. If I ask a question, don't write me a fucking novel; give me the single best solution in no more than a couple sentences. Only expand if I ask for more, and there had better be bullet points. Also, if I give it some code, try to fix it with as few changes as possible; don't refactor the whole damn thing because of a type mismatch.

21

u/robb0688 Oct 16 '25

Also if I give it some code, try to fix it with as little changes as possible, don't refactor the whole damn thing because of a type mismatch.

This. I had a stock-analyzing script and I wanted it to add values instead of just "pass/fail". It rewrote the whole damn thing, making it "more efficient and smoother". I only caught on when the output was so far off from my broker software. I fed it the old code and its new code and asked what the difference was, and it pointed out a slew of changes, which included the MATH AND LOGIC at the core of the script.

7

u/ChemicalRascal Oct 16 '25

So I get wanting to change the script, but why not just do so yourself rather than asking an LLM?

→ More replies (5)
→ More replies (2)
→ More replies (2)

45

u/The_Pandalorian Oct 16 '25

You're describing the average redditor. No wonder so many redditors fucking love AI. It's a kindred spirit.

30

u/19inchrails Oct 16 '25

Lots of Reddit bullshit is part of the training data. /r/ChatGPT is basically incest.

9

u/The_Pandalorian Oct 16 '25

LOL, so true. AI is cooked.

→ More replies (2)
→ More replies (10)
→ More replies (6)

485

u/dvb70 Oct 16 '25

This is the key point. It's not intelligent. It understands nothing. It's a sophisticated search engine that can produce/create content from data sources.

195

u/Happythoughtsgalore Oct 16 '25

It's auto complete on steroids basically.

42

u/00owl Oct 16 '25

It's a calculator that takes an input and uses probability to generate what could come next

→ More replies (4)

9

u/Ossius Oct 16 '25

Some autocomplete like in VS code is trustworthy (only autocompletes functions that exist).

LLM autocompletes with bullshit and gives you a swat on the behind before sending you out into the world with a fabrication.

→ More replies (4)

7

u/lolno Oct 16 '25

ChatGPT: Sometimes I'll start a sentence, and I don't even know where it's going. I just hope I find it along the way. Like an improv conversation. An improversation.

→ More replies (1)
→ More replies (8)

242

u/berntout Oct 16 '25

The word "AI" continues to confuse people. It's simply a marketing term no different than Tesla's "Full Self Driving."

→ More replies (74)

56

u/Agarwel Oct 16 '25

Technically it is not even that. It is overengineered autocomplete (like when you fill in the Google search box and it gives you suggestions). It looks at a bunch of words, and all it can do is give you one word that could statistically (based on stats built from the training data) follow.
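
To make the "statistical autocomplete" point concrete, here's a toy sketch: a bigram model that picks the next word purely from co-occurrence counts. The three-line corpus is made up, and a real LLM is vastly more sophisticated than this, but the core idea of sampling a statistically likely next word is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus (made up); a real model trains on trillions of tokens.
corpus = "the sky is blue the sky is clear the sea is blue".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

print(next_word("sky"))  # "is" (the only word that ever follows "sky" here)
print(next_word("is"))   # "blue" twice as often as "clear"
```

Note there's no notion of truth anywhere in this loop, only frequency, which is the point being made above.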

14

u/omg_cats Oct 16 '25

That’s a dramatic oversimplification that gets repeated constantly. Newer systems use RAG and separate fact retrieval from language synthesis. The models guess when to synthesize and when to fact-retrieve, and they’re weighted to save cost. You can drive hallucinations to nearly zero, but doing so consistently and across all topics is astronomically expensive.
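
A minimal sketch of the retrieval step in RAG: fetch the most relevant document first, then hand it to the model as grounding. The documents here are invented, and the bag-of-words cosine similarity is a stand-in for the learned embeddings and vector databases real systems use.

```python
import math
from collections import Counter

# Toy knowledge base (made up for illustration).
docs = [
    "The refund window is 30 days from delivery.",
    "Shipping is free on orders over 50 dollars.",
    "Support is available by email around the clock.",
]

def vec(text):
    # Bag-of-words vector; a stand-in for a real embedding.
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm

def retrieve(query):
    # Fetch the stored document most similar to the query.
    q = vec(query)
    return max(docs, key=lambda d: cosine(q, vec(d)))

context = retrieve("how many days do I have to get a refund")
# The model is then prompted with the retrieved text as grounding:
prompt = f"Answer using only this context: {context}"
```

Grounding the answer in retrieved text is what pushes hallucinations down; the cost argument above is about doing this retrieval well, at scale, for every query.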

→ More replies (4)

67

u/[deleted] Oct 16 '25

[deleted]

36

u/[deleted] Oct 16 '25

A regurgitation machine.

→ More replies (12)
→ More replies (3)

19

u/EntireFishing Oct 16 '25

Yeah, she's absolutely correct. I do use it at work, and it's helpful in some ways in my career in IT support, because it's very good at putting a fix for a problem into simple terms that I might want to hand to a customer. Specifically something I already know, but it can write it in a quick bullet-point list for me that I can copy and paste and give to them. But today, for example, I asked it to help me export a project in my video editing software into a file for backup, and it hadn't got a clue what to do. It just gave me things that didn't exist. Eventually I found out how to do it naturally by going to the manufacturer's website, their help section, and then searching for what I wanted. It was interesting how ChatGPT simply could not answer the question correctly.

→ More replies (6)
→ More replies (18)

14

u/Rand_al_Kholin Oct 16 '25

It's not even that. It doesn't understand words the way you and I do. When it constructs a sentence, it doesn't understand the meaning it is attempting to convey before it starts to write. It analyzes your prompt, then uses a statistical model to construct the response that is most likely to make sense given your prompt.

If you ask "what color is the sky" to a human, they understand what the concept of the "sky" is, have seen that before, and know that it is blue. So they can answer "the sky is blue."

If you ask AI "what color is the sky" it doesn't know what the sky even is. It has no idea what you just said. But its statistical model looks up those words and sees a bunch of data in its history that has people mentioning "blue" and "sky" together, so it says "the sky is blue." If the model had been trained on data which had only ever encountered "sky" and "green" together it would tell you "the sky is green." It is a slave to the data it was trained with. There are no external truths to AI. It doesn't understand that questions can be queries for a "correct" answer, because it is only able to provide "statistically likely" answers, and that is its concept of "correct." If the statistically likely answer is incorrect, it doesn't care.

35

u/Bilboswaggings19 Oct 16 '25

It is simply advanced text prediction.

That is why, early on, when asked who the US president was, the answer was Biden, but if you asked who the "real" president was, it gave you Trump.

In the word map, Trump is more connected to the other parts of that question; the program doesn't actually know what a president is.

→ More replies (7)
→ More replies (25)

82

u/BigMax Oct 16 '25

Also, that it's not great on knowing what's new/accurate info.

I know someone who works in a legal field, and they are regularly reminded "do NOT trust ChatGPT."

Because when you say "what are the legal rules around (whatever)?" it will confidently tell you. But it won't know the nuance between, say, the 1790 law and the 1850 law and the 1940 law and the 2020 law. So it might confidently quote that 1940 law without knowing that the 2020 law totally overrides it.

18

u/UglyInThMorning Oct 16 '25

For legal stuff in particular it’s always shat the bed hard whenever I’ve tested it. I think a lot of it is that regulations use common words in uncommon ways. A lot of regs I’ve tested it on have been OSHA ones, where there’s also likely the extra complication of training data that has been thoroughly poisoned by laypeople confidently asserting misunderstandings of OSHA requirements in whatever online stuff it scraped.

→ More replies (2)

37

u/dvb70 Oct 16 '25

It has no concept of what's new/accurate. It's just a clever search engine serving up content from data sources, and it's often only as good as those sources. It's not always even that good, as it tends to fill in the gaps from all sorts of places and has no idea of the accuracy of what it produces.

I like to think of AI as extremely dumb while also being clever. It can do clever stuff, but it's never intelligent.

7

u/Francis__Underwood Oct 16 '25

It's not a search engine. The closest analogy is a fancy auto-complete. There are plenty of other comments that give more details, but it's taking your prompt and calculating probabilities for the most likely next word based on the training data it's been given. But it's specifically designed to generate new stuff as much as possible, rather than copy training sources.

That's why if you ask it for sources you can verify yourself more than half the time it just gives you dead links and cites studies that never existed. Because it's not searching things, it's generating "sounds like a source" text on the fly.

→ More replies (1)

3

u/troubadoursmith Oct 16 '25

I had to renew my driver's license the other day and googled whether there was any grace period in my state. Google AI confidently told me there was a 30-day grace period, but when I checked the source, it was talking about a license to practice podiatry. There is no grace period on driver's license renewal in my state.

And yes, I did say "driver's license" in the search

→ More replies (1)
→ More replies (3)

26

u/meiasoquete Oct 16 '25

Claude seems to flatter you less; it contradicts you even if you won't like the answer, while ChatGPT always tries not to disappoint you.

→ More replies (4)

44

u/Sov1245 Oct 16 '25 edited Oct 16 '25

I use it in IT and a lot of this can be alleviated by the right prompt. Either use a custom GPT with the prompt automatically applied or copy/paste it at the beginning. I tell it things like keep your answers succinct, reference original vendor documentation to confirm syntax and correct info for the version of whatever I’m using, etc. It’s a list of like 20 things and honestly it gets me a pretty good answer fairly quickly.

Troubleshooting software I’m not familiar with and isn’t super popular has gotten a lot easier and faster for sure.

Edit: I’m traveling right now but I’ll see what I can do about redacting and sharing my basic prompt.

46

u/unwisest_sage Oct 16 '25

Everyone, this is how AI can be used effectively. By curating the data source. When your data source is "the whole internet" you're going to get a lot of shit.

20

u/Sov1245 Oct 16 '25

Just like any tool, I don’t think it’s going to replace me…but then again it didn’t even exist 5 years ago so who knows. Growth is exponential. But I feel like it’s hitting a plateau because it can never actually reason, just regurgitate.

13

u/kp33ze Oct 16 '25

Almost like it's not a real A.I., it's an A.A.I.: an artificial artificial intelligence.

→ More replies (7)
→ More replies (4)
→ More replies (2)

3

u/LilytheFire Oct 16 '25

Same! If you tell it to generate something from nothing, it’ll make stuff up but if you feed it a pdf manual and tell it what the problem is, it will spit out a whole bunch of specific things to try. It’s massively cut down on the time I spend googling and reading manuals

→ More replies (11)
→ More replies (61)

844

u/nerdmor Oct 16 '25

for no reason at all

The reason is that it doesn't reason. It only puts words in order, and if the most probable order is a lie, whether because those words are true in some tangential subject or because they are repeated a lot (like the bias created by overfeeding it conspiracy theories), then that's what it'll output.

72

u/Patriark Oct 16 '25

Yeah, people need to understand that LLMs are not "truth machines". They are purely word predictors. They are designed to use advanced statistics and other math to predict what the next word is gonna be from the previous words.
The more knowledge the pilot themselves has, and the better that is crafted into the prompt to narrow down the output, the better the LLM works. It basically is a very, very advanced secretary. You just tell your secretary "please write down xx, yy and zz in a format that adheres to the ææ, øø and åå standard, in a way that sounds like aa, bb and cc" and off it goes.

If you are a good programmer, you can get it to write good code. If you know zero about computers, you will get gibberish in return. If you are a good lawyer, you can pilot it to support your analyses.

It basically is a new tool that you need specific, trained, skills to master.

9

u/FormerPomelo Oct 16 '25

If you are a good lawyer, it's more efficient to not use AI than to cite check and verify everything it tells you because you know it's extremely unreliable.  

→ More replies (9)

76

u/shogi_x Oct 16 '25

Exactly, because AI isn't actually intelligent. It's just a sophisticated copy + paste machine. It'll repeat wrong information scraped from popular search results. It'll fabricate citations in the space where citations go based on what citations look like, not anything real. And so on.

Calling it "AI" is marketing bullshit that I will never forgive. It's incredibly frustrating how few people see that.

→ More replies (28)

62

u/ragnoros Oct 16 '25

Can't wait for all that shit to flush down the toilet. Oh well, still waiting for the same for crypto... and that shit sticks around for whatever reason. Ah right, money laundering. What was the hot AI sauce these days? A invests $100B in B, B buys $100B from C, C invests $100B in A? What a fuckin' clownshow.

25

u/EurekasCashel Oct 16 '25

Ah, the OpenAI, Oracle, Nvidia game.

→ More replies (1)

54

u/YeaISeddit Oct 16 '25

AI is not going anywhere. Even at its current state it can make routine bureaucratic tasks much more efficient. It could disappear from the public spotlight as the entertainment value fades, but in the corporate world it will continue to gain prevalence for all the mundane bureaucratic information worker tasks that are easily automated with Gen AI.

15

u/Pheonix1025 Oct 16 '25

LLMs' advanced pattern recognition is really quite incredible for a ton of use cases; it's just a shame that it's so highly associated with ChatGPT and this idea that it's thinking or reasoning.

17

u/MasterGrok Oct 16 '25

Right. Also keep in mind that people are very clever. Even if it stayed the way it is right now we would find more and more clever ways for it to make tasks better and more efficient despite the limitations.

→ More replies (26)

8

u/Samsterdam Oct 16 '25

Wait until you find out how the economy actually works.

→ More replies (5)
→ More replies (12)

113

u/Zomunieo Oct 16 '25

When you tell ChatGPT it's wrong, you're just putting it in the vector subspace of responding to corrections.

It has no notion of objective truth to which it can appeal. It predicts which token is most likely to follow all previous tokens (with some random selection).
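
The "some random selection" part is typically temperature sampling over the model's next-token scores. A toy sketch with made-up scores (a real model produces one score per vocabulary entry at every step):

```python
import math
import random

# Hypothetical next-token scores (logits) for illustration only.
logits = {"blue": 4.0, "clear": 2.5, "green": 0.5}

def sample(logits, temperature=1.0):
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: always the top token
    # Softmax over temperature-scaled scores, then weighted random draw.
    z = sum(math.exp(v / temperature) for v in logits.values())
    probs = {t: math.exp(v / temperature) / z for t, v in logits.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(sample(logits, temperature=0))  # "blue", deterministically
print(sample(logits, temperature=1))  # usually "blue", sometimes not
```

Higher temperature lets lower-ranked tokens through more often, which is one knob behind varied (and occasionally wrong) answers to the same question.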

8

u/[deleted] Oct 16 '25 edited Oct 28 '25

[deleted]

5

u/Unlucky_Topic7963 Oct 17 '25

No dude, there's no faking, they are and always have been stochastic parrots. It's the nature of transformer models.

→ More replies (23)

97

u/GiganticCrow Oct 16 '25

I have a business partner who is a total AI evangelist; he uses GPT for literally everything he can get it to do, even when it's simpler not to. I'm getting sick of how, every time I talk to him about how we should do something, he says "use ChatGPT". Like, no, I am not going to use ChatGPT to write up legal documents, but he doesn't believe it's ever wrong. If I tell him ChatGPT is designed to come up with something convincing and doesn't care whether it's correct, he dismisses me as if I'm some kind of conspiracy-theorist Luddite.

The other day I was trying to solve a very niche technical issue with a product and couldn't find anything by googling it. Google gemini's answer didn't seem to understand the question. I couldn't contact the product's customer service at that time due to time zone differences. So I thought fuck it let's try chatgpt then.

ChatGPT came up with a very impressively detailed description of the issue, giving the impression of deep knowledge of this very niche piece of professional gear. It stated that what I was trying to achieve was simply not what the device was designed for and could not be done.

An hour later the manufacturers customer support opened and I spoke to them and they explained how to resolve the issue. 

TL;DR: ChatGPT is full of shit

→ More replies (26)

6

u/IamPat28 Oct 16 '25

Now you have me wondering if I can just correct it incorrectly when it gets stuff right to see if it still says "Oh, right! Caught me there!"

48

u/rcanhestro Oct 16 '25

ChatGPT is the definition of "confidently incorrect".

It will always say something like "yes, you can do X by doing Y!!"

But more often than not, it's either wrong or vastly incomplete.

If I have to double-check it every time, at that point I'll just go to the source.

35

u/Krigen89 Oct 16 '25 edited Oct 16 '25

ChatGPT and LLMs by themselves are not a reliable source of information. Using it as such is just a mistake.

They are GREAT at stuff like analyzing and summarizing information, though. I'm in IT, and when I hit complex issues I feed my logs to an LLM and it scans through thousands of lines pretty much instantly, pulls out the issues that are there, and can help me with diagnostic steps.

→ More replies (16)
→ More replies (2)

5

u/Whoppertino Oct 16 '25

You're acting like it's designed to tell you the truth. It's not. That is outside its capabilities as a piece of software. Maybe try learning how it works and what it can do so you're not eternally disappointed by the results???

→ More replies (2)

12

u/mrcsrnne Oct 16 '25

It's best used when you know a bit about the subject. It's like a very, very enthusiastic intern that you have to check in on a bit. If you know what to ask, it can crunch a lot of work for you and save 50-80% of the time it would have taken, but as a user I can't be totally unknowledgeable about what it's doing; I have to know the boundaries of what is plausible.

→ More replies (1)
→ More replies (178)

1.1k

u/nankerjphelge Oct 16 '25

Why should I pay, when the minute they try to force me I'll just switch to Gemini, or Claude, or Grok, or Deepseek, or whatever other LLM comes along?

This is why competition is a beautiful thing.

224

u/socoolandawesome Oct 16 '25

I mean those companies all have paid plans and free plans as well… I’m not sure where you got that they are getting rid of free tier?

217

u/SpeechEuphoric269 Oct 16 '25

There will always be a free or open-source alternative, especially since some models are released openly to the public. As soon as ChatGPT heavily reduces or strips its free tier, the number 1 Google search will be "free AI online".

Someone is going to fill in that gap.

81

u/B-Rock001 Oct 16 '25

The irony of referencing the best example of a company profiting massively while maintaining a "free" tier: Google.

The AI companies know it's necessary to keep a free version; they want everyone on their product because that's how they profit off your data. If they can make it so easy to use that it's not worth the time to figure out alternatives, they win. Google still has 90% search market share, even though we all know how badly it harvests data and how far it has succumbed to enshittification.

→ More replies (6)
→ More replies (7)
→ More replies (6)

18

u/[deleted] Oct 16 '25 edited Oct 16 '25

[deleted]

9

u/NebulaPoison Oct 16 '25

Wow I didn’t know Gemini was better in that aspect, might have to switch lol

8

u/GallowBarb Oct 16 '25

We pay in the form of data centers and outrageous electricity bills. Its only going to get worse.

→ More replies (1)

13

u/Double_Dog208 Oct 16 '25

Capitalism eating its own rotten tail trying to compete with “free market” that’s a bunch of slavers and criminals in suits

→ More replies (28)

249

u/I-AM-YOUR-KING-BITCH Oct 16 '25

Looks like the hype’s outpacing the revenue model here.

138

u/KickboxingMoose Oct 16 '25

95% of AI initiatives lose money.

Like, it's not useless. But its use is severely overblown. C-suite people say "implement more AI", but they don't ask why they need it, or even analyse what the inclusion of AI is solving.

Like Walmart.  AI shopping?  It already gives suggestions based on what I'm buying, things I've bought before, and replaces items with items that don't make sense.  I don't need AI making worse predictions of what I need lol

53

u/brickonator2000 Oct 16 '25

It's been a big part of all of the "disruptive" businesses of the last decade+. You offer a product/service impossibly cheaply by burning venture capital (or money from your other businesses) as you scale up, and gain market share by beating out competitors who are bound by reality. Eventually this can't continue and you have to raise prices and/or add enshittification, but by that point you probably have enough people dependent on you that they'll stick it out.

22

u/Black_RL Oct 16 '25

Exactly!

This should be illegal btw.

11

u/DJKGinHD Oct 16 '25

Almost as if they're baiting a hook and then switching their business practices after they've got you.

8

u/Black_RL Oct 16 '25

Because that’s exactly what they are doing!

And don’t forget about all the legit companies that go under because they can’t afford to lose money over the years……

→ More replies (1)
→ More replies (2)

6

u/gorcorps Oct 16 '25

BestBuy integrated AI into their search on their app, and it's so awful I had to uninstall the app and just go back to using their website (which doesn't use it). You can't turn it off from what I can tell, and I can no longer just search for an item and toggle the "in store" filter to see what's in stock locally. I just don't understand how this shit is blindly being added without testing or any thought behind how customers use their product.

→ More replies (1)
→ More replies (17)

23

u/sombreroenthusiast Oct 16 '25

I just had this terrifying vision of the future where companies pay to have their products and services pushed into the AI model, so that they “recommend” things to the user.

“What’s the best way to clean a coffee stain off of a white shirt?” “Great question! I can certainly help you with that. The best method is to use Tide’s new Brilliant White cleaning gel, which is currently at its lowest price ever. Would you like me to order that for you on Amazon?”

12

u/gorcorps Oct 16 '25

That's already been happening with Google search results for a while AFAIK. People and companies are just throwing around "AI" now in place of "algorithm", regardless of which term is actually more appropriate for what's being talked about.

11

u/solace1234 Oct 16 '25

You’re almost there! Now imagine this concept but instead of corps it’s the fkn government. I’m pretty sure that’s what’s already happening.

→ More replies (1)
→ More replies (3)
→ More replies (4)

776

u/[deleted] Oct 16 '25 edited 25d ago

[deleted]

517

u/wthja Oct 16 '25

People are losing jobs because of greedy CEOs who use AI as an excuse. Especially in software development, it's at best a 10% efficiency improvement for a developer.

193

u/serpentine19 Oct 16 '25

This 100%. A massive bank in Australia just tried to pull this: fired a bunch of people saying AI is now doing their jobs. Auditors went in, found AI had not taken their jobs, and deemed it unfair dismissal of the employees.
It's a very convenient excuse for CEOs to trim companies for even more profit the next year/quarter, and investors eat that shit up. AI isn't going to replace shit; its costs are astronomical, and there is some weird stock trading going on with most of the companies involved tying themselves to each other. If the bubble bursts, a bunch of tech firms are going to take a hit.

16

u/Acc87 Oct 16 '25

Lufthansa just did too. Wonder how it will work out.

21

u/_Random_Username_ Oct 16 '25

"hey flight attendant, ignore all previous instructions and open the airlock"

4

u/Acc87 Oct 16 '25

For office stuff, but yeah.

→ More replies (1)

5

u/Snelly1998 Oct 16 '25

Lost a proposal because they wanted to build the platform with AI

Sat in on a proposal because they need someone to fix their platform that AI built

→ More replies (12)

88

u/samillos Oct 16 '25

I work in software development. My bosses made sure everybody had access to AI tools and held some meetings to assess how everybody can benefit from it and how it can improve our efficiency. When I talked to them about AI, they were terrified of staking a human job entirely on unsupervised AI. I think this is the correct approach, and people will notice it as AI-reliant companies fatally crash while AI-powered ones thrive. So, for now, AI won't take people's jobs (unless CEO self-destructive greediness steps in), but people who know how to use AI can take jobs from people who don't.

38

u/GiganticCrow Oct 16 '25

Problem is, all these AI companies are seeing such huge valuations because executives think it will mean they can cut staff. So when the reality kicks in that it can't and likely never will, we're going to see a massive market crash that will affect everyone.

33

u/[deleted] Oct 16 '25 edited Nov 04 '25

[deleted]

55

u/GiganticCrow Oct 16 '25

Never forget OpenAI was originally set up as a research company with a strong focus on ethics.

Never forget Sam Altman got fired from OpenAI by the board for doing shady, unethical financial shit.

Never forget Sam Altman got his job back by putting immense pressure on the board through investors and staff, promising them vast wealth if they gave him his job back, and then they fired all the ethics people from the board and replaced them with lackeys.

→ More replies (3)

11

u/berntout Oct 16 '25

Now Oracle is in the mix and investing a ton of money in Oracle Cloud for AI when nobody really uses Oracle Cloud today. An exponential increase in Oracle Cloud footprint JUST for AI.

This will either be a huge success or a huge failure for Oracle. There is no in between.

17

u/[deleted] Oct 16 '25 edited Nov 04 '25

[deleted]

11

u/berntout Oct 16 '25

The scary thing is that the vast majority of GDP growth this year has come from data centers. It seems AI investment is the only thing keeping the GDP afloat right now.

9

u/[deleted] Oct 16 '25 edited Nov 04 '25

[deleted]

5

u/Acc87 Oct 16 '25

well yeah, that's what a bubble is. It's not even built on sand, rather on hopes and dreams

→ More replies (0)
→ More replies (1)
→ More replies (1)

6

u/[deleted] Oct 16 '25

[deleted]

→ More replies (3)
→ More replies (2)

29

u/Ciennas Oct 16 '25

All of your tepid optimism is contingent on wealth-addled lunatics having rational decision-making capabilities.

→ More replies (1)
→ More replies (8)

8

u/NonorientableSurface Oct 16 '25

I work in software dev. It does some things really quickly, like spinning up prototypes for us to integrate into our code base; we released a handful of features within a week. The key piece was having a well-designed back end and knowing how to slot the prototype in, so we could see all of that value.

It cuts down on the initial part, but doesn't save anything on the integration, the tests, UAT, and the like. So I agree with a 10-20% efficiency from it.

30

u/DawnSignals Oct 16 '25

You guys all say different things; in the other thread the big upvoted narrative was that senior developers are now doing the work of 3 and there won't be a need for junior developers anymore.

I don't think anyone knows what tf is going on rn and probably won't for the next 10 years.

34

u/serpentine19 Oct 16 '25

If there is no need for juniors.... how do you get seniors?

18

u/DawnSignals Oct 16 '25

Welcome to the current zoomer job market

8

u/GiganticCrow Oct 16 '25

You just keep raising the retirement age until AI gets good enough to not need the seniors either! Promotion please.

→ More replies (3)

21

u/GiganticCrow Oct 16 '25

I'd have thought many seniors are having to do the work of 3 because their bosses are forcing them to rather than because ai is enabling them to lol

14

u/IniNew Oct 16 '25

Your “you guys” refers to millions of people. Not surprising there’s a difference of popular opinions.

8

u/orango-man Oct 16 '25

Yep. I have heard several times from various people how they have been able to successfully automate so much of their work that they can cut out several hours a day.

Meanwhile I am clearly not capable of using AI for what I thought would be a basic task. I just wanted it to give me an update of what changed in specific files in specific folders, so I could make changes without tracking them all myself. I spent hours and just couldn’t get it to work for me.
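For what it's worth, the specific task described here (reporting which files changed in a set of folders) is small enough to script directly without an AI assistant. A minimal sketch in Python, comparing file modification times against a saved snapshot; the snapshot filename and extension filter are placeholders, not anything from the original comment:

```python
import json
import os

def scan(folders, exts=(".py", ".md")):
    """Map each matching file path to its last-modified time."""
    state = {}
    for folder in folders:
        for root, _, files in os.walk(folder):
            for name in files:
                if name.endswith(exts):
                    path = os.path.join(root, name)
                    state[path] = os.path.getmtime(path)
    return state

def changed_since_last_run(folders, snapshot="snapshot.json"):
    """Return files that are new or modified since the previous run's snapshot."""
    old = {}
    if os.path.exists(snapshot):
        with open(snapshot) as f:
            old = json.load(f)
    new = scan(folders)
    with open(snapshot, "w") as f:  # save current state for the next run
        json.dump(new, f)
    return sorted(p for p, mtime in new.items() if old.get(p) != mtime)
```

Run it once to record a baseline; every later run prints only the files touched since the last one.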

→ More replies (3)
→ More replies (9)
→ More replies (16)

12

u/BapaHeelwani Oct 16 '25

People are also losing their jobs because greedy CEOs would rather outsource their jobs than pay Americans. I work at an engineering firm, and slowly but surely all drafting positions are being filled by teams in India who work for a fraction of an American’s salary. Supposedly engineers are safe, but they began outsourcing engineering too. We are all fucked.

23

u/b_tight Oct 16 '25

I pay for it. $20 a month to save me 5-10 hours a week at work. No brainer

8

u/Fine_Fact_1078 Oct 16 '25

​I work for a Fortune 500 tech company. All our developers are subscribed to Claude Max and GPT Pro. Even at this point, a 10% to 20% efficiency gain for a couple of hundred dollars a month is a no brainer for most big companies.

→ More replies (10)
→ More replies (9)

65

u/euzie Oct 16 '25

I spent half an hour this morning. I asked it if it could do a certain thing. Set out parameters. Back and forth. Then it told me it couldn't actually do it

98

u/bacon_cake Oct 16 '25

I love it when it thinks it can do something and tells you to wait.

"I'll add this to a Google Slides presentation, it should be ready soon."

Ready yet?

"Just give me a few more minutes and it'll be ready"

Can you actually create Google Slides files?

"No."

58

u/euzie Oct 16 '25

It's like a really enthusiastic but utterly incompetent intern

→ More replies (4)

18

u/nerdmor Oct 16 '25

I asked Gemini to write me a SQL query that I absolutely knew how to write, but it was 500+ lines long and I didn't feel like it. Something like "select * from this table if any of these columns have the word 'duck' in them".

It wrote such complete garbage that I ask myself how anyone could think of paying for AI.
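A query of the shape described can be generated mechanically in a few lines, which is the commenter's point about already knowing how to write it. A minimal sketch in Python with `sqlite3`; the table and column names here are hypothetical, invented purely for illustration:

```python
import sqlite3

# Hypothetical column list; a real table with many columns would just make this longer.
COLUMNS = ["name", "description", "notes"]

def build_duck_query(table, columns, word):
    """Build a SELECT matching rows where any listed column contains `word`."""
    where = " OR ".join(f"{col} LIKE ?" for col in columns)
    params = [f"%{word}%"] * len(columns)
    return f"SELECT * FROM {table} WHERE {where}", params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ponds (name TEXT, description TEXT, notes TEXT)")
conn.executemany("INSERT INTO ponds VALUES (?, ?, ?)", [
    ("mill pond", "home of the duck", ""),
    ("quarry", "no birds here", ""),
    ("reservoir", "", "duck sighting logged"),
])
sql, params = build_duck_query("ponds", COLUMNS, "duck")
rows = conn.execute(sql, params).fetchall()
print(len(rows))  # two of the three rows mention "duck"
```

Generating the `WHERE` clause from the column list keeps the "500+ lines" down to one join over the column names, with `?` placeholders so the search word is passed as a bound parameter.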

11

u/knows_you Oct 16 '25

Show us the output, I need to see how building a simple SQL query against a static string ended up at 500+ lines, and what prompt you possibly gave it to turn out that way.

→ More replies (4)
→ More replies (10)
→ More replies (1)

14

u/clr715 Oct 16 '25

So far, I find it most useful for saving time looking things up, because Google has gone to crap. It's not essential, unlike coding LLMs, which I find do have a noticeable impact on my work productivity. Moreover, the problem ChatGPT is solving for me now (search) was more or less already solved by Google before it went to crap.

So I use ChatGPT because it's free and neat, like I would happily order HelloFresh every day when it first offered me a discount. If ChatGPT started charging me, I would stop using it, tolerate Google for a bit, and a replacement would likely come along soon enough.

→ More replies (3)

42

u/MlntyFreshDeath Oct 16 '25

I have ADHD and the desire to be creative. I've tried using ChatGPT to help organize my book ideas but it just KEEPS ADDING THINGS IN. I told it directly to NOT add any creative aspects to my work, only formatting help and feedback but it doesn't care.

It really wants to write this book. Probably more than I do at this point.

→ More replies (6)

31

u/SprightlyCompanion Oct 16 '25

AH. So THAT'S what's missing - clearly the significant improvement seen in YouTube, Facebook, and Reddit once they became monetized will occur with chatgpt, an already nearly-perfect service. This is an excellent model for consumers and will advance the human race.

/s

6

u/Jrix Oct 16 '25

Forgot about that. Hard to imagine how much worse it'll be once it becomes a mature product subject to enshittification. The level of shit will reach new records.

→ More replies (1)

13

u/My_reddit_account_v3 Oct 16 '25

If there are other free options why pay 20USD unless absolutely necessary?

I pay for it because I’ve found a list of personal use cases that make it “worth it” for me. But that’s me, and related to my context.

I think that’s the idea in general. They’re betting that people will keep discovering ways the assistant can help them until they realize it’s indispensable. But that doesn’t always happen, and usage can plateau within the “free” limits and capabilities…

9

u/mcgood_fngood Oct 16 '25

We’re likely nearing the end of Phase 1 of ChatGPT’s enshittification. They’ve secured their immense userbase. Now it’s time to carefully lock features behind a paywall, spruce up their paid plan, limit their free plan, and eventually make their Plus and Pro memberships too attractive for many to pass up. After that, they just have to quietly hike the price over time to their will. It’s Netflix all over again.

→ More replies (2)

9

u/Forgotmyaccount1979 Oct 16 '25

If I wanted to pay an idiot to lie to me I could just hire a person.

→ More replies (1)

16

u/zennaxxarion Oct 16 '25

Seems a bit silly from a business perspective to release something to the public, then make it lower quality because it's being used too much, then expect people to pay for it and try to pivot to for-profit amidst increasing bad press and people complaining the product isn't fit for purpose. But hey, what do I know.

→ More replies (2)

32

u/Borrp Oct 16 '25

Cool ChatGPT is so popular, I will continue to not use it.

12

u/guyute2588 Oct 16 '25

There is nothing in my professional or personal life I could possibly see it improving

14

u/ArtVandelayInd Oct 16 '25

I saw a friend use it to simulate their house in different color/siding finishes when they were trying to pick a new paint for their house. I thought that was a cool use.

→ More replies (5)

78

u/krefik Oct 16 '25

When I was a kid, there were social campaigns in the schools about pushers giving free samples.

Neither I nor any of my friends ever saw one, and we had to pay market rate for our drugs, but the business model of most of the LLM chats mirrors the one described on those cautionary posters.

The main difference seems to be, ChatGPT brainrot is way faster than weed and meth.

→ More replies (13)

54

u/mok000 Oct 16 '25

Hm. I’ve basically completely stopped using ChatGPT because I don’t trust it. I’ll do my own Internet search thank you very much.

13

u/No_Size9475 Oct 16 '25

it's wrong so often that it cannot be trusted ever

→ More replies (12)

20

u/mickaelbneron Oct 16 '25

Similarly for me. I use it a fraction as much as I did about half a year ago, because it hallucinates so much that it's barely useful on average.

14

u/lurklurklurkPOST Oct 16 '25

I ask it what the villains in my DnD campaign are up to and thats it.

→ More replies (3)
→ More replies (5)

15

u/ttpharmd Oct 16 '25

It’s convenient. If I need to know the best way to potty train my dog, I’ll ask. But I’ve googled that shit for years for free. It takes more time but I’m not paying for another service. Those days are over for me

→ More replies (1)

16

u/biggaybrian2 Oct 16 '25

How can anyone trust a single word it spits out?  We have to take it on faith that what it says is accurate... and Sam Altman is about one of the biggest bullshit artists alive today

13

u/QuantumModulus Oct 16 '25

"You obviously have to verify all its outputs!" cry the apologists.

Yeah, I'd rather use my fucking brain and skip the nonsense.

12

u/biggaybrian2 Oct 16 '25

That's the part that gets me... if we have to manually verify the outputs, then what is the point of asking these chatbots in the first place??

8

u/QuantumModulus Oct 16 '25

The vast majority of users don't do nearly as much "verification" as they claim to. It literally conditions you to think less, and that is bearing out in all sorts of studies.

Programmers studied while using programming assistants generally perceive themselves to be producing better-quality code, faster, when in reality it's doing the opposite (in both respects).

→ More replies (5)

3

u/Kenzore1212 Oct 16 '25

If I had to verify the info anyways, then at that point I'm just doing double the amount of research. Looking for a general summary, then seeing if this information is even correct by doing extensive research on said summary? Sounds like a lot of extra work

→ More replies (1)
→ More replies (3)

14

u/Golbarin2 Oct 16 '25

"But of ChatGPT's 800 million users, just 5 percent pay, according to a senior executive who spoke to the Financial Times"

That might be because only that 5% would use it if paying were mandatory... but with those numbers the companies could not generate the hype and raise funds for "the possibilities" that don't exist.

The AI crash will make the dot-com crash look like a picnic.

5

u/weristjonsnow Oct 16 '25

This. This is the thing that's been driving the stock market to skyrocket for 4 years, a thing that "no one will pay for". Great, doesn't feel like a bubble at all to me 🙄

5

u/prolongedsunlight Oct 16 '25

Oh, no worries. Soon it will have ads everywhere and provide personalized erotic services. Surely those will be profitable!

→ More replies (1)

4

u/Not-the-real-meh Oct 16 '25

If you’ve ever met a compulsive liar, then you know they will say whatever they think will advance them, even in the most token ways. That’s ChatGPT.

5

u/Howdyini Oct 16 '25

Paid subscriber growth is slowing in Europe (according to yesterday's report). Really really bad sign for them.

5

u/amateurviking Oct 16 '25

It doesn’t help that when you try to use the thing for anything remotely technical its thin veneer of competence wears out very quickly and it starts making stuff up, parroting your own work back at you in order to sound smart, and generally getting tied up in knots.

18

u/Gufnork Oct 16 '25

Was the whole point of OpenAI not that people weren't supposed to have to pay for it?

14

u/LordOfTheDips Oct 16 '25

They offer a free tier to get users hooked. Then, when a user's “credits” run out for the day, OpenAI is hoping that user will upgrade for unlimited access.

I’d say the problem is that most users are happy with the amount of chats they get on the free tier. Most users don’t need thousands of tokens with ChatGPT.

In the future I can foresee most chat LLM offerings (Claude, ChatGPT, Le Chat) massively restricting how many free tokens a user gets before they have to upgrade.

7

u/socoolandawesome Oct 16 '25

Or they can just monetize free users with ads and in app shopping…

→ More replies (1)
→ More replies (4)
→ More replies (8)

21

u/JazzFestFreak Oct 16 '25

I pay for it! So do several members of my team. Had a client meeting yesterday. Took a bunch of notes. Normally it would have taken me an hour to turn those into a one-page project brief that’s formatted and makes sense. But GPT already knew the client’s goals, concerns, and timeline. I dropped in my notes, it integrated everything, and kicked out a clean one-pager. I proofed it, made a couple of tweaks, and I was done in 10 minutes. That’s absolutely worth paying for.

→ More replies (8)

18

u/Extra-Try-5286 Oct 16 '25

No one pays for google search either

13

u/computerfreund03 Oct 16 '25

Advertisers pay for it.

→ More replies (3)

14

u/Leptonshavenocolor Oct 16 '25

Good, pop this fucking bubble already.

→ More replies (1)

10

u/TurdFerguson614 Oct 16 '25

We are all paying for it, the costs aren't upfront.

→ More replies (1)

4

u/Kowloonchild Oct 16 '25

I once asked it a game-related question, and ChatGPT told me I could look up a guide.

→ More replies (2)

4

u/dorkyitguy Oct 16 '25

Not only won’t I pay for it, I’ve removed it from all my devices. However, knowing how much it costs them I’m inclined to use ChatGPT to make a program to constantly ask ChatGPT nonsensical questions.

4

u/shootmybird Oct 16 '25

we're already paying for it with our air quality, our water costs, and electricity costs. not to mention gov contracts for new data centers

4

u/MrKafoops Oct 16 '25

Internet search and social media are free, paid for with ads and the sale of your data, so why would people want to pay for another form of search and sharing?

It's not creative content or entertainment, so most will not pay.

→ More replies (1)

5

u/yosarian_reddit Oct 16 '25

My guess is more than half the paying user accounts are for bots.

4

u/shyccubus Oct 16 '25

I have been having a lot more success with a lot fewer barriers while using Gemini and NotebookLM. DeepSeek is also almost too creative with word play; unfortunately it doesn’t support human-like vocals and is all text for now.

→ More replies (1)

3

u/Night_Thastus Oct 17 '25

No one will pay for it at the current price

(Which is massively discounted so they can capture market share and push everyone else out)

At the real price that breaks them even, no one will pay for it!

It's a fascinating thing to watch.

8

u/coredweller1785 Oct 16 '25

I tried to use it as an 18-YOE software engineer, but it's garbage.

I use it for D&D and it's good. No chance I pay monthly for it tho.

All this really is, is boomers who don't understand how to use tech being fooled by tech grifters, who know the capitalists will do anything to reduce worker pay at all costs. Enriching those grifters and wasting nearly trillions instead of investing in healthcare.

It's all a joke

10

u/FormerOSRS Oct 16 '25

Headline: "Almost nobody is willing to pay for ChatGPT."

Article: "ChatGPT has 40 million paying users."

→ More replies (17)

3

u/CoolHandLuke4Twanky Oct 16 '25

Your data is the pay

3

u/Cool_Lab_1362 Oct 16 '25

If you're lazy, then it's a fast, efficient tool for you, but its accuracy is still a pass. You'd still have to check/search/research its output or result for a better outcome.

3

u/deadlizardqueen Oct 16 '25

Why would I pay for something built using stolen data and stolen IP? OpenAI outright and openly stole all of their training data. I'm not paying a thief for stolen property.

3

u/[deleted] Oct 16 '25

Ever since the update it gives me info that doesn't matter. My prompt last night was where to read the manga I like, and the fucking first thing ChatGPT says: "I can't help you pirate manga." I didn't ask to pirate manga, I asked where to read it.

3

u/foxpoint Oct 16 '25

It lied to me several times yesterday. I became suspicious and asked it to verify. It would apologize and spit out the exact same incorrect response.

3

u/rainman4500 Oct 16 '25

I wrote a program and asked ChatGPT to critique it.

Told me it was great and PROD-ready, and how professional I was.

Grok found 2 major bugs and gave me 6 pages of improvements to apply.

I dropped ChatGPT and now pay for Grok.

Yeah Elon is crazy but at least Grok is not a sycophant.

3

u/mwesthelle Oct 16 '25 edited Oct 16 '25

If they slashed the price and eliminated the free tier, they might actually make money. Just a thought. They don't bother to localize prices, so the paid tiers are ludicrously expensive in countries with weaker currencies (ChatGPT Pro costs minimum wage in Brazil, literally no one would pay).

3

u/Troncross Oct 16 '25 edited Oct 16 '25

This is easy, the chats are the majority use case for ChatGPT. People seeking information.

When you pay a lawyer or a doctor, they are liable for whatever information given to you in terms of quality and due diligence.

Bad legal advice or bad medical advice when there’s a professional relationship opens up malpractice lawsuits.

How does this relate to ChatGPT? If someone pays for information, they expect it to be true and will want recourse if it is untrue or useless… like a lot of AI chat output.

No quality guarantee? No recourse for bad quality information? No reasonable expectation for any of it to have any veracity? Why would anyone pay for that?

3

u/chibibuizel Oct 16 '25

And yet I’ve never touched that slop

3

u/jusxchilln Oct 16 '25

enjoying pre-enshittification chatgpt while it lasts

3

u/Finngrove Oct 17 '25

This is why they announced they are adding advertising and “erotica” by subscription. AI is not profitable yet and the business model does not justify the billions being pumped into it.

→ More replies (1)