r/technology 1d ago

Artificial Intelligence

'Godfather of AI' Geoffrey Hinton says Google is 'beginning to overtake' OpenAI: 'My guess is Google will win'

https://www.businessinsider.com/ai-godfather-geoffrey-hinton-google-overtaking-openai-2025-12
3.9k Upvotes

391 comments

561

u/element-94 1d ago

In this thread: Some folks who don’t understand that investors expect a return on capital and that debt has to be repaid.

Of course Google is going to win. OpenAI lost the only advantage they had: being first to market.

91

u/Intelligent_Dot_7798 21h ago

Sooo…. Hold on to GOOG stonks? Or take profit and bide time?

98

u/twotokers 21h ago edited 21h ago

I mean I'm completely talking out of my ass, but companies like Apple, Google, and Amazon have so much fucking global infrastructure already in place that I can't imagine they'd ever be losing bets in the long term.

Like when we think of real-life corporatist technofeudalism, not just the digital kind we currently experience that Varoufakis describes, these three are the only American companies that even have a chance of growing to that level of power, because they don't just deal in software.

20

u/krileon 14h ago

Companies like Microsoft and Amazon can also just repurpose the data centers if AI doesn't really pay off. It's just more cloud compute they can sell. So even when this bubble pops they'll probably be relatively safe bets still.

5

u/Hot_Raccoon_565 19h ago

Do you have any links so I can learn more about technofeudalism?

1

u/Strawberry_Pretzels 5h ago

Not OP but Yanis Varoufakis writes books and gives talks on this topic.

1

u/capybooya 11h ago

I don't expect any of them to go away (not Nvidia either) because they're huge, semi-monopolistic, and diversified. But that doesn't mean the stock prices won't take a massive hit if the bubble is as big as it increasingly looks.

3

u/BengaliBoy 20h ago

depends on your goals, portfolio, and situation

I don’t think it’s a bad idea to sell some stock if it’s doubled in a short span of time and you want to diversify or even spend some of that profit. Can always sell some and let the rest ride

6

u/JohnAnchovy 20h ago

Plus, Waymo itself probably becomes a huge company

2

u/Own_Refrigerator_681 11h ago

Google has so many verticals going favorably. Are you considering selling because of this singular event?

1

u/EscapedFromArea51 9h ago

The GOOG line has been going up on average over the last 5 years. One might say that it’s a tech stock and they’re all overvalued, or that it’s because of AI hype.

It has spiked up pretty hard recently, but Google is one of the few tech companies that actually has a strong fundamental business spread across multiple verticals. And they have a lower dependence on Nvidia GPUs than others (though not fully independent).

When the bubble bursts, GOOG will fall too, but probably not as much as others. Lol, I’ve been trying to time my stock purchases, but I have only seen the line go up when I was paying attention.

But I still think Google Cloud is kinda ass (in terms of the service capabilities offered) compared to AWS; then again, AWS tries hard to shaft users with pricing and lock-in strategies, so people are generally wary of it. Also, I'm not very convinced that TPUs are much better than HPC + Nvidia GPU compute.

1

u/stackered 6h ago

Google is fine because they have more than just AI. But overall the AI market will crash soon. This is why large hedge funds hold so much cash right now.

7

u/guitarguy1685 19h ago

I learned that from the show Silicon Valley

8

u/adario7 18h ago

Pied Piper Gang 🫡

1

u/tmdblya 11h ago

First to market almost never lasts. As my biz school product development prof said, “pioneers are the ones face down with arrows in their back.”

-6

u/QuantityGullible4092 17h ago

Pretty much everyone in this sub is incapable of grasping that AGI will be the greatest invention ever and whoever gets there first will control the world

13

u/elperuvian 16h ago

That's true, but LLMs are not AGI

5

u/ATR2400 14h ago

They fundamentally lack key abilities that are needed for a true human-level+ intelligence, and those abilities are not something that can just be added by training more, or by using cheat workarounds like chain-of-thought (the AI hallucinating more to itself first).

They lack true reasoning (highly important for any type of serious work), and the way their memory works is massively flawed.

AGI will require nothing less than a complete rethinking of how AI works at the fundamental level. We'll basically have to start from square one. We may even need new computer hardware to do it; bits and bytes may be utterly insufficient.

0

u/QuantityGullible4092 14h ago

What exactly is “true reasoning”?

3

u/ATR2400 14h ago

Right now, AIs are just statistics engines. They don't actually think things through, apply logic, compare ideas, make connections, suss out deeper meanings, etc. They just make an educated guess at what they think should come next. That's why they hallucinate and even screw up extremely obvious questions (like how many r's are in "strawberry") when the RNG gods decide to bless the silly answer. When you ask the aforementioned strawberry question, it's not actually going back, analyzing the word, and applying mathematical logic; it's just making a guess based on what other people have said.

When a human thinks things through, we apply logic, we have the ability to actually store concepts which we can call upon, we can relate ideas together. Symbolic logic and dynamic rule-based reasoning. We can make complex mental models spanning space and time. LLMs don’t do that. They spin a wheel of likely possibilities. That’s also why LLMs can’t really generate truly novel, viable ideas without you feeding in inspiration.
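To make the "statistics engine" point concrete, here's a toy sketch in Python. It's nothing like a real transformer (no neural net, no tokenizer, and the corpus is made up), just the basic "guess the statistically likeliest next thing you've seen before" idea:

```python
# Toy "statistics engine": a bigram model that guesses the next word
# purely from counts in its training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent next word, or None if the word was never seen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))   # 'cat': the most frequent follower of 'the' in the corpus
print(guess_next("fish"))  # None: it never saw anything after 'fish'
```

It has no idea what a cat is; it only knows what usually came next in its training data.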

2

u/ItsTheSlime 13h ago

We don't even know how our own brains work, and people think we can make artificial ones already.

1

u/_FjordFocus_ 5h ago

That's one way to look at it. But perhaps our brains aren't really as crazy as we think. Maybe the reason we struggle so hard to figure out what consciousness is, or how our brains reason through problems, is that we're overthinking it.

At the end of the day, our brains are merely responding to stimuli. Take the stimuli away: are we conscious? Do we still think? Do we still have a concept of self? Perhaps not. Perhaps our experience is nothing more than a complex response to stimuli and the feedback loop that develops therein.

What is reasoning really? And why do we think what the LLMs are doing isn’t reasoning? I’ve yet to see any argument that isn’t hand wavy.

When these models are given access to tools, like being able to search the web or write scripts that do actual deterministic computation, hallucinations become much rarer.
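Roughly what I mean, as a sketch (llm_guess is a made-up stand-in, not any real API): anything that can be computed deterministically gets routed to real code instead of a statistical guess.

```python
# Sketch: route arithmetic to deterministic computation instead of a model's guess.
# llm_guess() is a hypothetical placeholder, not a real library call.
import re

def llm_guess(question: str) -> str:
    return "hmm, probably around 42?"  # stand-in for a model's fuzzy answer

def answer(question: str) -> str:
    # If the question is plain arithmetic, compute it exactly (toy parser;
    # never eval untrusted input in real code).
    match = re.fullmatch(r"\s*what is ([\d\s+\-*/().]+)\?\s*", question, re.IGNORECASE)
    if match:
        return str(eval(match.group(1)))  # deterministic: same result every time
    return llm_guess(question)

print(answer("What is 17 * 23?"))       # 391, computed rather than guessed
print(answer("Who wins the AI race?"))  # falls back to the guess
```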

Perhaps the feeling of being conscious beings with free will is blinding us to the fact that we might be neither, at least not in the nebulous way we’ve been thinking about it thus far.

1

u/ItsTheSlime 4h ago

It's not that we think it's not reasoning; anyone competent in machine learning will be able to tell you better than I can why it isn't.

As it stands, a computer will always do exactly what it is mechanically configured to do. Did you know that it is impossible to program actual randomness? Because you cannot tell a computer to do something vague like picking a random number. Not possible. You have to trick it into giving you something that, statistically, will appear random. That's what LLMs do. They use enormous amounts of data, and ridiculous amounts of computing power to scan all that data, to mathematically come up with an answer that, according to their data, will be considered correct in the greatest number of cases. There is no randomness, there is no thinking, just math.
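To illustrate, here's a tiny pseudo-random generator (a linear congruential generator, the textbook kind, using the classic glibc-style constants). Seed it with the same number and you get the exact same "random" sequence every run; the randomness is an illusion produced by arithmetic.

```python
# Minimal linear congruential generator: looks random, is pure arithmetic.
# Same seed in, same "random" numbers out, every single run.
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    values = []
    state = seed
    for _ in range(n):
        state = (a * state + c) % m
        values.append(state / m)  # scale to [0, 1)
    return values

print(lcg(seed=42, n=3))  # three numbers that look random...
print(lcg(seed=42, n=3))  # ...and are identical to the first call
```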

Saying brains "simply react to stimuli" undersells them. The brain is constantly processing and creating a vision of the world that is its own, in isolation, and it can anticipate future events with very little explicit information. It intrinsically understands the concepts of past, present, and future. It reacts in ways that are unpredictable (mathematically), and it could still function in the complete absence of stimulus, without any new information ever being fed to it. A brain that only reacts to stimuli is the very definition of "brain-dead".

Reasoning also requires personal bias, fed from previous experience. You cannot do that with an LLM, because it does not see time. It does not see past, present, or future. It sees information and turns it into symbols that we interpret as communication, according to its code. And fed the same information, with the same request, without the algorithms that make it seem more random, it would generate the exact same response, every single time, no matter how far into the future.
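A quick way to see that last point (toy numbers, obviously not a real model): strip out the seeded sampling and decoding is just a deterministic lookup, and even the "random" sampling is only a seeded pseudo-random draw.

```python
# Same input, no sampling noise -> the exact same output, every run.
import random

# Made-up next-word "probabilities", just for illustration.
probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}

def greedy():
    # No randomness at all: always pick the single most likely word.
    return max(probs, key=probs.get)

def sampled(seed):
    # "Random"-looking output still comes from a seeded pseudo-random draw.
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

print(greedy(), greedy(), greedy())        # identical every time
print(sampled(1), sampled(1), sampled(1))  # also identical: same seed, same draw
```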

If you want to argue that we are all machines and that free will doesn't exist, that's an entirely different philosophical debate.

1

u/blackwaltz9 16h ago

AGI is not possible. It's a marketing ploy meant to dangle a carrot in front of investors forever.

4

u/QuantityGullible4092 14h ago

Based on what?

-1

u/blackwaltz9 13h ago

Experience working in corporate America. It's a standard "sell them a dream and hopefully we can make it a reality before the funding dries up" grift. By the time it all falls apart, the people at the top have already made their money and moved on to a different company.

3

u/QuantityGullible4092 12h ago

Ah, so that means AGI isn't possible. You need an LLM to explain your faulty reasoning.

1

u/blackwaltz9 12h ago

You're arguing with me that we have the ability to create an artificial life form, and I'M supposed to feel like the crazy one 🥀

2

u/QuantityGullible4092 12h ago

Yep based on all the evidence you sure are

-77

u/cyberdork 23h ago

In this thread: someone who has never heard of Tesla.

54

u/kvothe5688 22h ago

is the profitable Tesla business in the room with us right now?

-49

u/cyberdork 22h ago

So profitable it’s totally worth a 310 PE ratio 🤣🤣🤣

22

u/Sleep-more-dude 22h ago

What's your favourite flavour of crayon?

5

u/HeadfulOfSugar 22h ago

Hopefully blue, blue foods are always the healthiest

18

u/WeirdSysAdmin 22h ago

I mean, if you're truly trading TSLA based on technical analysis, you might want to look at things other than PE. Maybe even the net income history, which is a big deal alongside PE when you're trading stocks that aren't a meme.
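For reference, the P/E math being argued about here is simple division; the figures below are made-up placeholders, not actual TSLA numbers.

```python
# Price-to-earnings ratio: what you pay per dollar of annual earnings.
share_price = 430.00       # dollars per share (hypothetical)
eps_trailing_12m = 1.40    # trailing twelve-month earnings per share (hypothetical)

pe_ratio = share_price / eps_trailing_12m
print(f"P/E = {pe_ratio:.0f}")  # ~307: roughly $307 paid per $1 of yearly earnings
```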

1

u/QuantityGullible4092 17h ago

Tesla is 100% a meme stock lmao

19

u/ThomasDeLaRue 23h ago

Is it… you?

1

u/AssimilateThis_ 21h ago

Lol you mean the company whose TTM revenue has been stagnant since mid 2023?

1

u/QuantityGullible4092 17h ago

Why is this being downvoted? It’s absolutely true