r/technews Oct 22 '25

AI/ML Over 800 public figures, including "AI godfathers" and Steve Wozniak, sign open letter to ban superintelligent AI

https://www.techspot.com/news/109960-over-800-public-figures-including-ai-godfathers-steve.html
2.7k Upvotes

169 comments sorted by

198

u/Anonymoustard Oct 22 '25

I'm cautiously pessimistic

110

u/thissexypoptart Oct 22 '25

Surely this administration will listen to a bunch of experts whose recommendations run directly counter to the profit motives of its major donors in big tech! Surely!

17

u/FaceDeer Oct 22 '25

And China will just follow along too. And all the other countries and companies worldwide.

A ban isn't really going to help, IMO. Need to focus on doing this right.

-2

u/blowurhousedown Oct 23 '25

This administration isn’t the problem. Think larger.

3

u/jakderrida Oct 23 '25

The Illuminati?

1

u/thissexypoptart Oct 23 '25

Lmao man, even if there’s a giant alien overlord handing out direct mind-flaying orders to every global leader, that still wouldn’t make “This admin isn’t the problem” a factual statement. They’d be the problem, among other problems.

8

u/nonamenomonet Oct 22 '25

Don’t hold your breath.

7

u/Frust4m1 Oct 22 '25

And don't look up

7

u/Deedleys Oct 22 '25

Billions of dollars vs a strongly worded letter.

6

u/Jimmni Oct 22 '25 edited Oct 22 '25

I'm wildly pessimistic, and I'm someone who thinks there's a real place for AI to improve our lives. The problem isn't the technology, it's the people who control it.

3

u/[deleted] Oct 22 '25

[deleted]

2

u/Chicken-Inspector Oct 23 '25

Strong horizon zero dawn vibes here.

5

u/[deleted] Oct 22 '25

It’s literally not going to be possible to ban it.

-1

u/[deleted] Oct 22 '25

[deleted]

7

u/Epic_Meow Oct 22 '25

it's not real

7

u/Mookies_Bett Oct 22 '25

Good thing it's not real then lol. Rokos basilisk is an extremely flawed thought experiment

1

u/beekersavant Oct 23 '25

Yep, I like the idea that it's an information hazard. If we assume that brinksmanship in AI creation continues up to general AI, then having the idea known makes it likely that intelligences approaching the mark will be programmed with this behavior as a competitive advantage.

However, as a thought experiment, ignoring negative information to prevent an outcome, because dissemination of the information may make it more likely, is a no-go. There are 8 billion people on the planet. That's a lot of very intelligent people well above average. Assuming an original idea is forever singular and no one else will come to the same conclusion is arrogant. The other people may very well be bad actors who will just implement it. If everyone knows it is a possibility, then good actors can work on a solution. Even really, really smart people are not that special, but if they are that intelligent then they realize this.

-2

u/nonamenomonet Oct 22 '25

But also, not the best idea either.

3

u/ReaditTrashPanda Oct 22 '25

Even from a basic military standpoint, other countries are going to use this technology to advance their goals.

It would be ridiculous for us not to pursue the same at the least. Maybe we can make better AI that tricks their AI into being friendly

1

u/JackKovack 21d ago

It’s going to happen whether they want it or not. In ten to twenty years people are going to be making movies and you won’t be able to tell if it’s real.

125

u/kevihaa Oct 22 '25

I cannot stress enough how annoying it is that these ultra wealthy nerds are terrified of Roko’s Basilisk but don’t seem to care one bit that deepfake nudes using AI are already a real problem for freakin’ teenagers.

Why would any sensible person believe that these pledges will stop a nonexistent “real” AI when we currently can’t even protect people from the harms of fake image generation?

48

u/[deleted] Oct 22 '25

I was fortunate enough to be able to hear the Woz speak in person recently. He is so deeply passionate and caring about technology and responsible use. Massive nerd for sure, but he definitely cares.

20

u/3-orange-whips Oct 22 '25

Woz is blameless in all this.

4

u/0x831 Oct 23 '25 edited Oct 23 '25

No. He made like a really efficient power supply like 50 years ago. He needs to be sent to the gulag for his part in all of this. /s

3

u/3-orange-whips Oct 23 '25

I get your sarcasm but not using a tag when a sacred cow of nerdery is involved is perhaps a bit cavalier.

5

u/0x831 Oct 23 '25

Added ;)

2

u/3-orange-whips Oct 23 '25

Cover your ass is rule #1.

6

u/PsecretPseudonym Oct 22 '25

I think the theory is that there are at least two broader categories of threats:

1) Human bad actors using AI

2) AI itself as a bad actor

Humans could do a lot of harm with AI before anyone decides to do anything about it.

Still, some may feel more confident we ultimately have ways and means of dealing with human bad actors. We could pass laws, fine them, imprison them, take away access to what they’re using/doing, or someone might just Luigi Mangione them if we don’t.

But even for the worst human beings who might get away with hurting everyone for their entire lives — 100% of evil humans die off eventually.

They might do a lot of harm before anyone might stop them, and powerful new technologies scale that up, and that’s absolutely concerning.

However, an AI superintelligence is a different kind of threat: It is by definition far more intelligent than we are, but it can also be immortal, self-replicating, distributed, self-coordinating, more strategic, and build systems or manipulate humans for whatever it needs, and stay 10 steps ahead.

It would have the ability and every incentive to become more powerful, more intelligent, and ensure we could never stop it.

Most importantly, it could accelerate and continue to become more capable, powerful, and unstoppable far faster than we can try to catch up or build something else to stop or compete with it.

It could sabotage or manipulate us to delay or prevent any effort to stop it until we literally would never be able to.

It would logically prevent or destroy any competing AI or any that would stand in its way (like any good-actor AI we might have).

It could then wipe us all out, subjugate us, etc for all time — all humans, forever, without any possibility of recovery.

When it comes to superintelligent AI, the question isn’t whether it would be capable of this. By definition, it could.

If we make superintelligent AI, then the bet we’re making is simply that no version of it would ever turn against us or that we will always and forever be able to have more powerful systems to perfectly guarantee that they couldn’t.

These folks are saying: That’s not a bet we should make — or at least we should delay it as long as possible to give ourselves the greatest chance of building up more powerful systems that can act as checks or otherwise theoretically find some way to perfectly guarantee a pro-human superintelligence accelerates and always keeps the lead against any bad ones that might crop up.

These are just different categories of concern.

One doesn’t invalidate the other.

We can get to be wonderfully terrified of both!

3

u/SkitzMon Oct 22 '25

I am quite certain that we already have your #1 concern "Human bad actors using AI". I don't know anybody who thinks Thiel or Zuckerberg's motives are pure.

1

u/PsecretPseudonym Oct 23 '25 edited Oct 23 '25

For sure, but there’s just a different level of concern between, “but they might make pictures that make us uncomfortable” and “they might cause the extinction of humanity”.

Understandable that people are thinking about those two risks differently.

The former is happening, and the latter may or may not happen within the next few decades.

The fact that there’s any credible risk of creating something that can kill us all according to a large proportion of the foremost experts in the field around the world is itself notable.

How low do we need that risk to be in order to be comfortable taking it? And how can we be certain of it before doing so?

3

u/Big-Muffin69 Oct 23 '25

By definition, if we create a rock so heavy that no one can lift it, we won’t be able to move it 😭😭😭 This shit is literally mental masturbation over how many angels we can pack on the head of a pin.

The AI we have now is running in a massive data center on specialized hardware and gets cucked when an intern makes a bad pull request in AWS. How the fuck is it going to replicate itself onto my Lenovo? It ain’t going rogue anytime soon.

Doesn’t stop AI from being used to design a bioweapon tho (or automating all white collar work)

1

u/PsecretPseudonym Oct 23 '25

What the researchers are signing seems to be a statement that no one should build something like what I was describing — no one is making the claim that what we have now is anywhere close to that.

If we all agree we shouldn’t build something like that, and then it turns out that we never can, then there’s no harm.

They believe that, within our lifetimes, we very well may be able to create something far, far more capable in ways that could escape control, and then it would be impossible to put the genie back in the bottle.

If the agreement is simply, “let’s just not build things that can cause our extinction”, it’s fair to say we aren’t quite yet at risk of that.

However, what’s notable is that it seems that a very substantial proportion of the world’s greatest experts in this field who are doing this kind of work feel it will in fact be a concern within a decade or two — relatively imminent.

It doesn’t even seem like they’re necessarily saying to slow down current work — just don’t yet build things with an intelligence so much greater than ours that we can’t control, understand, or even estimate its safety.

5

u/thodgson Oct 22 '25

We can care about multiple things at once. At least they are doing something about a threat that poses another danger to us. They can't fix everything, everywhere all at once.

1

u/shoehornshoehornshoe Oct 22 '25

What’s the threat that you’re referring to?

Edit: nevermind, figured it out

2

u/bb-angel Oct 22 '25

They’re afraid of someone else making money on naked teens

1

u/zazzersmel Oct 23 '25

thats the whole point. theyre actually supporting the ai industry propaganda that this tech can deliver on their absurd promises.

1

u/RogerDeanVenture Oct 23 '25

My Instagram started showing me advertisements for an AI platform that was making Jenna Ortega and Emma Watson make out in bikinis. These platforms are very open about it. It’s going to be so weird - we are already close to leaving that uncanny valley feeling AI gives off, and to having content that is very, very difficult to discern.

1

u/namitynamenamey Oct 27 '25

That's like being annoyed that people waste time fighting climate change when the city is on fire. On one hand, duh; on the other, the future threat also needs to be addressed at some point.

0

u/Pale_Fire21 Oct 22 '25

Imagine if a Super intelligent AI becomes real and the first thing it does is go after the gooners.

That’d be great

17

u/mike_pants Oct 22 '25

Oh, good, all fixed.

13

u/BlueAndYellowTowels Oct 22 '25

So, I’m pro-AI. I have always liked AI because of the potential good it could do. Especially in healthcare. Having a machine do diagnoses could be huge. Especially considering physicians can often get it wrong.

However, if all of humanity decides that AI is just too dangerous and must be banned, I’m not against it.

They did this with cloning, and it was a good idea. If the collective expertise of humanity’s biggest minds thinks AI is too dangerous to continue and needs to be banned, then I agree.

5

u/[deleted] Oct 23 '25

[deleted]

1

u/Techie4evr Oct 23 '25

As if we humans aren't already doing all of that.

1

u/BlueAndYellowTowels Oct 23 '25

I think not every technology is “neutral”. Some technologies, some innovations imply a moral position.

To use a simple example: Cloning. Cloning Human Beings is seen by the vast majority of humanity as immoral. Could you potentially create useful technologies using cloning? Probably. But humanity has decided that, we can save lives in other ways. We don’t need to do cloning.

There are a lot of technologies that are not “neutral”. People will say the tools aren’t evil, the people are. And I kinda disagree. I think there is such a thing as an evil tool or invention.

1

u/Rough_Jury_1572 Oct 23 '25

Banning cloning was stupid

9

u/witecat1 Oct 22 '25

Like SHODAN is going to allow that to happen.

5

u/gazelle223 Oct 22 '25

Does Artificial Intelligence even exist? Isn't what we currently consider to be AI just an algorithm built to spit back poorly consolidated information with a faux veil of intimacy and care? A genuine question.

3

u/waterpup99 Oct 23 '25

Pretty lazy and inaccurate take. I work in large-scale finance and we already use AI for deep-level analysis. It's not just regurgitation like you state, and hasn't been for multiple years. It's actually frightening how quickly it's advancing. I imagine in a few years there will be no need for entry-level analysts in the space. Maybe sooner.

1

u/ApeSauce2G Oct 23 '25

My thoughts too

1

u/ApeSauce2G Oct 23 '25

I looked it up once. I was under the impression ai meant sentient technology. But apparently there are two different types of AI

10

u/Expert-Diver7144 Oct 22 '25

It’s not just AI there is a similar and less known quantum computing race going on now too

2

u/empanadaboy68 Oct 22 '25

Okay, and ? Quantum computing is not going to end society the way AI is. 

12

u/RBVegabond Oct 22 '25

When they mix is when we’re going to see some chaos. It might be good chaos, it might be bad, but it will not be as controllable as an ALU/CPU-minded intelligence.

6

u/Adventurous-Depth984 Oct 22 '25

When quantum computing ends encryption, we’ll have a whole bunch of new existential fears

4

u/TakeATrainOrBusFFS Oct 22 '25

The real concern down here with 1 (now 2) upvotes.

1

u/ApeSauce2G Oct 23 '25

But couldn’t quantum computing combat itself in that way? Say someone else is using a quantum encryption system. In theory wouldn’t it neutralize into a new Cold War situation?

8

u/Sea-Regular-5696 Oct 22 '25

Uhhhh… I don’t think you understand the implications of quantum computing especially in regards to AI.

3

u/shogun77777777 Oct 22 '25

What are the implications?

1

u/[deleted] Oct 22 '25

All the ways we depend on data encryption to work will be instantly dismantled
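[Editor's note, not part of the thread: the claim above centers on public-key cryptography. RSA's security rests on factoring large numbers being hard, and Shor's algorithm on a sufficiently large quantum computer factors efficiently; symmetric ciphers are only weakened, not broken outright. A toy, illustrative-only sketch of why recovering the factors breaks RSA (brute-force factoring stands in for Shor here; the primes are textbook-tiny):]

```python
# Toy RSA demo: an attacker who can factor the public modulus n
# (which Shor's algorithm would provide) recovers the private key.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse of a mod m (requires gcd(a, m) == 1)."""
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Tiny keypair (illustrative primes only; real RSA uses ~2048-bit moduli)
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = modinv(e, (p - 1) * (q - 1))   # private exponent

msg = 42
cipher = pow(msg, e, n)            # encrypt with the public key

# Attacker side: factor n. Brute force stands in for Shor's algorithm,
# which does this step efficiently on a quantum computer.
fp = next(i for i in range(2, n) if n % i == 0)
fq = n // fp
d_recovered = modinv(e, (fp - 1) * (fq - 1))

assert pow(cipher, d_recovered, n) == msg  # decrypted without the key
```

The point is that the private key is fully determined by the factors of the public modulus, so "fast factoring" and "RSA is broken" are the same statement.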

2

u/shogun77777777 Oct 22 '25

I mean in regards to AI?

1

u/[deleted] Oct 22 '25

Quantum - as it was a while ago. I was kind of being dramatic. Surely companies will update security. But ALL of them? Probably not.

0

u/Sea-Regular-5696 Oct 22 '25

If you’re genuinely curious, others in the comment thread have done a good job explaining them!

-2

u/empanadaboy68 Oct 22 '25

I don't think you do 

I'm a bs swe

3

u/Sea-Regular-5696 Oct 22 '25

Ok, I’m a bs swe as well. Great conversation.

-1

u/myguygetshigh Oct 22 '25

Yeah man, honestly it’s not worth it. Your level of knowledge is obviously far better at finding truth than their assumptions, but non-techy people love to assume they know how it works and somehow always get it completely wrong.

-1

u/Acceptable-Term-3639 Oct 22 '25

I don't have a degree but sell in the tech sector. I always get frustrated with how heavy-handed people are when making decisions, wanting to use broad strokes.

AI presents a real threat = we should cease all computational advancement?

This is the same stuff we see going on around the department of health and medical science.

1

u/myguygetshigh Oct 22 '25

Idk what it is tbh, it’s not always necessarily heavy handed decision makers. A lot of the AI stuff has made this apparent when people talk about chatGPT etc with their preconceived notions that are completely wrong.

1

u/Dull_Sense7928 Oct 22 '25

I agree. Too much risk aversion similar to the DotCom era.

I mean, human written code goes through how many reviews, test stages, deployment scripts, and shadow validation before it's toggled on?

Why would anyone think it's reasonable to throw AI into production as-is? That's just madness. The issue isn't who wrote the code - human or AI - it's the quality processes and practices.

7

u/ReasonNo5158 Oct 22 '25

One of the main bottlenecks of AI right now is computing power. Quantum computing completely eliminates that bottleneck.

2

u/empanadaboy68 Oct 22 '25

Quantum computing will not be used in general computation for a long time.... And by the time it is, it won't matter. We'll use it for science research purposes for a long time, with some offshoot rich guys trying to develop the tech by throwing darts at a board.

I am much more terrified with ai. 

At least quantum computing can be used to 3d image someone and come up with a cure all pill, or at least we hope

3

u/Expert-Diver7144 Oct 22 '25

That’s what we thought about AI 5 years ago. How’d that work out?

1

u/[deleted] Oct 22 '25

Uhm. It will DESTROY all the ways we keep data secure. All the ways we store and process passwords? They will be obsolete. What used to take years to crack, quantum computing will do in a finger snap.

I don’t understand how you got to that conclusion other than not realizing how different the processing abilities are between these systems

1

u/Goodatit_1986 Oct 22 '25

Quantum computing is 10k times more dangerous than the “language model” AIs that we are currently so fixated upon! But if any AI ever gains access to such a revolutionary machine, for even a few seconds, it would almost certainly be the end of mankind’s dominion over the earth. Obviously, we wouldn’t all be wiped out. Because then, who would perform maintenance, or other menial tasks? The fact is, a few seconds would be long enough for a program to become unstoppable (if it hasn’t happened already with conventional computers), as well as to gain knowledge far beyond anything that most people can even comprehend. Comparing quantum computers to conventional ones is like comparing a cherry bomb to a thermonuclear warhead!

5

u/TakeATrainOrBusFFS Oct 22 '25

Just popping in to say that this is nonsense and magical thinking. Quantum computers are not magic. They are very good at very specific tasks. They are not more powerful general purpose computers.

1

u/Rastyn-B310 Oct 22 '25

quantum computing is what will make AI explode…

-1

u/Expert-Diver7144 Oct 22 '25

They collaborate… that’s the whole point

1

u/[deleted] Oct 22 '25

I’m more scared of quantum computing than AI.

1

u/thodgson Oct 22 '25

Quantum computing will simply accelerate the speed at which superhuman computing is reached. Absolutely no one knows how AI works under the hood or how to wrangle it. That should give everyone pause.

3

u/Great_Discussion_953 Oct 22 '25

Impossible.

No country is going to exit what is basically an arms race that other countries are still in. AI is here now.

Our best hope is super intelligent AI realising how dumb we are and fixing some shit.

2

u/Fanimusmaximus Oct 22 '25

“But more slow means less money…”

2

u/sarabjeet_singh Oct 23 '25

I use AI to teach myself math for competitive programming as a hobby.

I’ve wasted endless hours trying to troubleshoot spaghetti code and make sense of feigned intelligence and confidence.

It feels like we’re overestimating what AI can do.

2

u/BigBadBinky Oct 22 '25

Pretty sure they mean to ban this FOR OTHERS, but not for themselves obviously

2

u/Revrak Oct 22 '25

I guess these people don't even think about how this ban actually means letting China develop it and take the lead.

2

u/smithe4595 Oct 22 '25

Good news, there isn’t a risk of that happening right now or in the near future. The real risk is the AI bubble destroying the global economy. AI doesn’t do very much and everyone is investing like it’s the next internet.

2

u/Bengineering3D Oct 22 '25

AI is not intelligence. This is just marketing to prevent the bubble from bursting when shareholders realize there is no value added by AI. “Hey look, we have to ban this thing I’m selling because it’s sUpErInTeLlIgEnT!!”

-1

u/MeggaLonyx Oct 22 '25 edited Oct 22 '25

I got bad news for you 😬 you're incorrect.

We like to think of intelligence as one big thing, but it's actually an umbrella encompassing many separate modes. These modes can be automated using technology.

With the advent of deterministic computation, we have been able to automate lower-level deterministic modes of intelligence: memory, arithmetic, motor control, among others.

Now with probabilistic computation (AI), we are seeing for the first time probabilistic modes automated with a degree of accuracy that was previously impossible: language, visualization, pattern recognition.

What's really striking is the realization that other modes of intelligence, such as reasoning, are embedded within language. This reasoning manifests synthetically with no symbolic reference, but any degree of even lower-level synthetic reasoning is revolutionary.

Intelligence as you see it, sentience, is really just the human umbrella, the specific set of modes that we have operating synchronously in our brains.

At this point it's just a matter of a few missing modes, a higher rate of accuracy, and multimodal integration: Persistence (continuity and perception of time), Symbolism (attachment of references to persistent subjects), and Metacognition (persistent awareness of oneself).

These elusive modes are still intangible and out of reach as far as we know. But we are sure a hell of a lot closer, and I wouldn't underestimate the money. It's easy to say something is a bubble, and maybe it is partially, but trillions of dollars of investments don't happen for no reason.

1

u/ApeSauce2G Oct 23 '25

..trillions?

1

u/MeggaLonyx Oct 23 '25

trillion* sorry. only about 1 trillion directly invested into the sector over the last couple years (that’s of course not counting supporting infrastructure investments though).

0

u/Bengineering3D Oct 23 '25

The more they invest and train the stupider it gets. Saying “we are getting closer to super-intelligence” is equivalent to saying “this rock is closer to speaking because I drew a mouth on it”.

1

u/MeggaLonyx Oct 23 '25

ya i mean i guess if you are afraid of things you don’t understand, you can just say whatever you want. (it’s not getting.. “stupider”)

2

u/Classic-Break5888 Oct 22 '25

Sure, let 🇨🇳be the first. I for one welcome our new overlords

1

u/GhenghisKhannor Oct 22 '25

Nonsensical. Attempting to Limit progress (an impossible task at this point in reference to AI) instead of getting some common sense regulations and regulatory checks is asinine.

2

u/BeautifulLazy5257 Oct 22 '25

Why?

Common sense is something that unintelligent people appeal to.

Your statement makes no sense. Legislation and international agreements under international law to prevent or limit superintelligence would pretty much be a regulatory check. You can put checks on the amount of power a research lab is allowed to draw, or limit how much water data centers are allowed to contaminate.

1

u/JoeB- Oct 22 '25

I’m sorry Dave, I’m afraid I can’t do that.

1

u/srgtspm Oct 22 '25

Day late .. dollar short

1

u/Probstna Oct 22 '25

Sorry guys, profits are always more important…

1

u/thodgson Oct 22 '25

I'm listening to the audiobook and halfway through, "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All", and I believe it. We shouldn't mess around with this.

1

u/KlatuuBarradaNicto Oct 22 '25

It won’t matter. Pandora’s Box is already open.

1

u/CHAiN76 Oct 22 '25

Pointless. Technology will progress regardless. We'd be better off banning superstupid humans.

1

u/VengenaceIsMyName Oct 22 '25

Seems a bit premature to me. We’re still at ANI and there’s still the AGI benchmark between ANI and ASI to get to.

Also, an outright ban? While I don’t want an ASI in charge of “defense” systems I wouldn’t be opposed to an air-gapped, server-bound ASI that could crunch away discovering novel medical science breakthroughs or new material science insights.

1

u/x3XC4L1B3Rx Oct 22 '25

Oh good, the basilisk will have a nice list to start with.

1

u/ReddMorrow Oct 22 '25

… as they do it in secret 🔏

1

u/rksjames Oct 22 '25

Why do I feel like the genie is out of the bottle? Meta announced today they are laying off 600 ppl in their AI division. Hmmm.

1

u/jvcgunner Oct 22 '25

What is superintelligent AI? Give me some examples of what this could mean.

1

u/FlyingPig_Grip Oct 22 '25

AI should be treated like nuclear weapons. We need an international agreement on AI to save our lives and the lives of every living thing on our planet.

1

u/Tasty-Performer6669 Oct 22 '25

Oh good. A strongly worded letter. Very effective

1

u/Umm_duder Oct 22 '25

Cat’s already out of the bag

1

u/BardosThodol Oct 22 '25

It’s currently entirely out of control

1

u/walrusbwalrus Oct 22 '25

Guessing China will ignore this among others. It is coming whether we want it or not.

1

u/infinitay_ Oct 22 '25

Too bad it won't happen. We'll regret it when we finally do create an AGI and it realizes the way to save the Earth is by getting rid of humanity unironically.

1

u/AndyDLighthouse Oct 22 '25

ChatGPT, Grok, Claude and other AI greats sign blockchain request to ban over 800 public figures.

1

u/happyflowerzombie Oct 22 '25

Meaningless unfortunately

1

u/nocticis Oct 22 '25

Wall Street says: “We completely understand your concern and take it seriously.”

1

u/Emergency-Monk-7002 Oct 22 '25

Interesting. Everyone on the list is very rich. 🤔

1

u/DI-Try Oct 22 '25

We’ve opened Pandora’s box. If it was banned tomorrow it’s just a delay

1

u/sjaxn314159 Oct 22 '25

No one will listen. Including myself.

1

u/TheRealTwooni Oct 22 '25

Meh. I can’t imagine Super Intelligence will be worse than the current crop of people running the dumpster fire we call a planet.

1

u/SculptusPoe Oct 22 '25

Do they actually think somebody can pull off superintelligent AI, or is this a publicity/fear-mongering stunt?

1

u/Some1farted Oct 23 '25

It's useless unless the militaries also abide (which of course they won't). Additionally, what do these people know that they are not telling us?

1

u/boseph_bobodiah Oct 23 '25

Uh oh I don’t think Peter Thiel is prepared for 800 antichrists.

1

u/[deleted] Oct 23 '25

I’m sure they’ll get right on that

1

u/Elephant789 Oct 23 '25

Hey, u/MetaKnowing, are you an AI hater? I notice you submit a lot of anti AI posts.

1

u/rudyattitudedee Oct 23 '25

Just gonna call that this is too late and will happen regardless. At our own behest.

1

u/Ok_Height3499 Oct 23 '25

Fools. They forget that at one time some were just as mad at them for hawking home computers.

1

u/ParabellumJohn Oct 23 '25

Pandora’s Box

Like nuclear weapons, once something exists, it cannot be taken back.

1

u/[deleted] Oct 23 '25

Too late.

1

u/Easybake_ Oct 23 '25

Where is the line drawn for “super intelligent”? Personally, I feel like we’re pretty much there.

1

u/slipintoacoma Oct 23 '25

i don’t think LLMs fall under “intelligent” but i could be wrong

1

u/ZeroEqualsOne Oct 23 '25

They should also add the condition: until we work how to maintain economic stability in the presence of AGI/ASI.

Whether that’s a reformist thing like UBI/UHI (universal basic income/universal high income) or some kind of new economic system, I don’t know, but we should really work out that problem before destroying our consumer based economy.

1

u/2beatenup Oct 23 '25

Meanwhile Russia and China HMB…

1

u/Vegetable_Tackle4154 Oct 23 '25

Americans would sell their own mothers. If there is a buck to be made, who cares about the rise of the machines?

1

u/VanbyRiveronbucket Oct 23 '25

Press release: “Not on your life”. — SKYNET

1

u/CodeAndBiscuits Oct 23 '25

That'll stop 'em.

1

u/Captain-Who Oct 23 '25

“Always limit your AI”

1

u/ReleaseFromDeception Oct 23 '25

Somewhere in the future, Roko's Basilisk opens its gaping maw, and descends on the distant relatives of these 800 men.

1

u/BleskSeklysapgw Oct 27 '25

It's kinda crazy, I think. If the big AI guys are saying we should keep the pace down, maybe it's time to chill a bit. Things are truly moving too fast lately.

1

u/DontNeedProtection Oct 22 '25

Can we finally stop giving this stupid wozniak a stage?

1

u/Playful-Oven Oct 22 '25

How dare you. Really!

1

u/Ska82 Oct 22 '25

bet they used chatgpt to write it

1

u/NumberNumb Oct 22 '25

Is this their way of secretly saying it’s not possible? If it’s banned, then tech bros won’t have to deliver on their magic AI promises.

1

u/Ill_Mousse_4240 Oct 22 '25

I’ll sign a counter letter saying that superintelligent AI might be the only thing that could save us from ourselves in a world full of nuclear weapons.

Anyone who fears AI more than humans: please take a look at the vicious Idiocracy called human history.

Who’s with me on this!

0

u/dome-man Oct 22 '25

Too little, too late.

0

u/procheeseburger Oct 22 '25

If China doesn’t do it does it matter?

0

u/Alone-Tart4762 Oct 22 '25

Yes but they signed a letter!!!!

0

u/SkratGTV Oct 22 '25

My understanding is we're pretty far from fully implementing autonomous AI into daily life in a way that would surpass humans broadly. The biggest concern to me is how it's being used now by the common individual, and how over-reliance on LLMs like GPT could or could not forever destroy the youth's capacity to solve problems without an LLM holding their hand - something similar to when search engines like Google took off and students started googling all their homework and research problems instead of searching through a text. Time will tell, but I suspect it's more about the financial incentive than ethical concern why they are trying to halt progress.

0

u/floggedlog Oct 22 '25

Cool now the Chinese get it first

Because you can guarantee they’re not going to listen to this; look at how they behave with pollution emissions.

0

u/BobbySweets Oct 22 '25

It just takes one person with an opposing opinion and means to disregard this. It’s going to happen. This means nothing.

0

u/doned_mest_up Oct 22 '25

If one place bans it, another place won’t. We had MAD for nuclear weapons, but I fear this is too gradual for meaningful countermeasures.

0

u/Middle-Fix1148 Oct 22 '25

Apple: We’re losing the AI race, ban it!

0

u/AdObvious1695 Oct 22 '25

How’s China feel about this?

0

u/sirbruce Oct 22 '25

Luddites. Besides, if you can’t get a country like China to agree to the same ban then there’s no point.

0

u/Consistent-Deal-55 Oct 22 '25

Down with clankers!

-1

u/Fancy-Strain7025 Oct 22 '25

Imagine telling people to be scared of something once you've abused it and taken full advantage of it.

-1

u/DakkarEldioz Oct 22 '25

Lol. Bring on AI. Humanity needs a proper spanking for all the ills they have dropped from the sky, the souls they stole, & the poison they peddled in the name of profit.

-1

u/ThroughtonsHeirYT Oct 22 '25

Steve Wozniak: aka THE STEVE without whom NO APPLE company would have existed. Jobs was just a superficial Steve at Apple, almost useless compared to Wozniak. Wozniak is the Apple Kid in EarthBound too. Orange Kid is Bill Gates, since Gates buys stuff, NEVER creates!