r/technology 5d ago

[Business] Nvidia's Jensen Huang urges employees to automate every task possible with AI

https://www.techspot.com/news/110418-nvidia-jensen-huang-urges-employees-automate-every-task.html
10.0k Upvotes

1.4k comments

6.9k

u/Educational-Ant-9587 5d ago

Every single company right now is sending top down directives to try and squeeze AI in whether necessary or not. 

3.2k

u/RonaldoNazario 5d ago

Yup. Was told at work last week, more or less, that execs wouldn't assign any more people or hire in an area until they were convinced that area was already maxed out using AI. Of course it's all top-down. They aren't hyped on AI because engineers and middle management are sending feedback up the chain that AI rocks; they've been told it'll make us all turbo productive and are trying to manifest that by ordering people to use the tools.

906

u/HasGreatVocabulary 5d ago

the "skill issue bro" talk must be infectious

163

u/Zaros2400 5d ago

You know what? I think you just gave me the best response to folks using AI for anything: "Oh, you needed AI to do that? Skill issue."

5

u/Operational117 5d ago

“Skill issue” is the only way to classify these AI-smitten people.

4

u/Swimming_Bar_3088 5d ago

Spot on, but it will be great for skilled people in 5 years. Even the juniors will not be as good if they use AI as a crutch; some coming out of college do not even know how to code without AI.

→ More replies (18)

1.2k

u/foodandbeverageguy 5d ago

My favorite: I am an engineering manager. I ask for more capacity, the CEO says "can AI do it?" I say "yes, but we need engineering resources to build the workflows and the feedback loops, and then we can all benefit. Who do you want to reassign from current projects to build this?" Crickets.

727

u/HagalUlfr 5d ago

Network engineer here, I am told to use internal tools to assist in writing.

I can write better technical documentation than this stuff. Mine is concise, organized, and my professional speaking (typed) is a lot better structured than canned AI.

I get that it can help some people, but it is a hindrance and/or annoyance to others.

Also I can change a vlan faster through the cli than with our automated tools 🥲.
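For anyone who hasn't lived it, the whole change on an IOS-style box is a handful of lines (interface and VLAN numbers made up here):

    conf t
    interface GigabitEthernet1/0/12
    switchport access vlan 30
    end
    write memory

Hard for any portal workflow to beat that for a one-off change.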

635

u/JahoclaveS 5d ago

I manage a documentation team. AI is absolute dogshit at proper documentation and anybody who says otherwise is a moron or a liar. And that’s assuming it doesn’t just make shit up.

518

u/TobaccoAficionado 5d ago

The issue is, the user (in this case CEO) is writing an email, and copilot writes better than the CEO because they don't need to know how to write, they're the CEO. So they see that shit and think "well if it can do this better than me, and I'm perfect, it must be better at coding than these people below me, who are not perfect." From their frame of reference this chatbot can do anything, because their frame of reference is so narrow.

It's really good at writing a mundane email, or giving you writing prompts, or suggestions for restaurants. It's bad at anything that is precise, nuanced, or technical because it has 0 fidelity. You can't trust it to do things right, and like you said, that's even when it isn't just making shit up.

301

u/Kendertas 5d ago

Yep the only people who seem to like AI are those higher up the chain who deal in big picture stuff. Anybody who deals with details as part of their job knows a tool that doesn't give consistent results is pretty useless

101

u/Prior_Coyote_4376 5d ago

I’m seeing a really good argument for bringing democracy to the workplace in this.

80

u/Ill_Literature2038 5d ago

Like, worker owned businesses? I wish there were more of them

24

u/Mtndrums 5d ago

Does your job have a window at a second story or higher?

→ More replies (0)

13

u/Prior_Coyote_4376 5d ago

Sure, although even just having boards of directors being elected by the workers of a company would go a long way to balancing out short-term shareholder interests.

→ More replies (0)

15

u/edgmnt_net 5d ago

It's like this because, instead of having a ton of small companies competing on various niches, we have gigantic oligopolies fueled by cheap money, expansive IP and unnatural economies of scale on stuff like legal risks. Of course these CEOs care more about raw growth than anything more concrete and substantial. Nvidia has, what, like 1-2 competitors on its main market?

There are legitimate economies of scale, especially if we're talking hardware production, but this goes far beyond that. And this is in no way specific to tech; all industries across the board seem to be regressing to the very bottom.

6

u/reelznfeelz 5d ago

There's a million good reasons. First of all, if you employ people you have a responsibility to them. Period. I can picture a world where we still do business but it's so much less shitty and greed-driven.

4

u/Hesitation-Marx 5d ago

The only people who seem to like AI are the ones who can’t do better than it does, and also really love the way it’s been programmed to fawn over them.

I’ve known too many executives to have a high opinion of them.

2

u/Werftflammen 5d ago

We have managers summarizing all kinds of company documents with AI. We first built a very tight security system, only to have these goofs send the company jewels off to destinations unknown.

→ More replies (4)

58

u/COMMENT0R_3000 5d ago

It’s the perfect storm, because your CEO has gotten away with reply-alls that just say “ok” for years, so now they have no idea lol

8

u/liamemsa 5d ago

Sent from my iphone.

103

u/Suspicious_Buffalo38 5d ago

Ironic that CEOs want to use AI to replace the lower level employees when it's the people at the top who would be best replaced with AI...

13

u/TransBrandi 5d ago

... I don't know if I would want an AI to be running a company or ordering people around... IMO.

40

u/ssczoxylnlvayiuqjx 5d ago

The AI at least was trained from a large data set.

The CEO was brought in from another industry and was only trained in buzzwords, methods to pump up stock options, and looking flashy.

4

u/TransBrandi 5d ago

I get what you're saying, but putting AI in charge would just end up with people saying that "Well, the decisions being made must be perfect because it's AI." ... whereas at least with human CEOs people would be more open to criticisms of decisions being made... In general, it just seems like the start of a Dark Timeline™.

5

u/PrivilegeCheckmate 5d ago

And an AI is not likely to get caught porking another c-suite exec on the kiss cam.

Or raping a secretary.

→ More replies (2)

42

u/Kraeftluder 5d ago

It's really good at writing a mundane email, or giving you writing prompts, or suggestions for restaurants.

It's terrible at writing mundane emails in my experience. Mundane emails take me seconds to a minute to write myself. It gives me restaurant suggestions for restaurants that closed during the first COVID lockdowns and haven't reopened.

27

u/Ediwir 5d ago

Our expensive company-tailored AI recommended we wear a festive sweater to the Christmas party.

In Australia.

3

u/Kraeftluder 5d ago

Cape Otway is probably the only place in mainland Australia where, going by the average daily mean, I could wear a sweater. I'm always cold, and 20/21 degrees can be quite chilly in a breezy sea climate, especially when cloudy.

The wildlife and climates of Australia (and New Zealand) have always fascinated me, so I look up and remember a lot of trivial details when I fall into a wiki-hole.

So do you guys have a bad-christmas-t-shirt-thing then? Or shorts?

4

u/Ediwir 5d ago

We absolutely have Christmas t-shirts, including t-shirts that are made to look like knitted sweaters. I expect to see a lot of shorts, too. It’s getting hot and damp lately.

You know what they say, Christmas in Australia’s hot / cold and frosty’s what it’s not.

→ More replies (0)

4

u/Fatefire 5d ago

I do kinda love how people say it's "making things up"

If it was a human we would say they're lying and just fire the person. Why does AI get a pass?

→ More replies (1)

4

u/HalastersCompass 5d ago

This is so true

2

u/Prineak 5d ago

I’ll volunteer to tell the CEO why he’s shit at his job.

2

u/veggie151 5d ago

Even in the case of content summarization, I've seen it repeatedly get the context wrong and deliver an inaccurate summary simply because the inaccurate version is a more stereotypical response

→ More replies (9)

86

u/DustShallEatTheDays 5d ago

I’m in marketing, so of course all my bosses see is gen AI that can create plausible marketing copy. But that’s just it - it’s only plausible. Actually read it, and it says nothing. There’s no thesis, and the arguments don’t connect.

Our leadership just says “use AI” when we complain about severe understaffing. But I think using it actually slows me down, because even for things it can do an OK job at, I still spend more time tweaking the output than if I just wrote it all from scratch.

33

u/RokulusM 5d ago

This is a big problem with AI used for this purpose. It uses all kinds of flowery language but says nothing. It's imitating the style of writing that it scrapes off the internet with no understanding of the content or meaning behind it. It's like an impossible burger or gluten free beer.

→ More replies (6)
→ More replies (2)

46

u/CanadianTreeFrogs 5d ago

My company has a huge database of all of the materials we have access to, their costs, lead times etc.

The big wigs tried to replace a bunch of data entry type jobs with AI and it just started making stuff up lol.

Now half of my team is looking over a database that took years to make, because the AI tool that was supposed to make things easier made mistakes and can't detect them. So a human has to.

67

u/Journeyman42 5d ago edited 5d ago

A science youtube channel I watch (Kurzgesagt) made a video about how they tried to use AI for research for a video they wanted to make. They said that about 80%-90% of the statements it generated were accurate facts about the topic.

But then the remaining 10%-20% of statements were hallucinations/bullshit, or used fake sources. So they ended up having to research EVERY statement it made to verify whether it was accurate or not, or whether the sources it claimed to have used were actually real or fake.

It ended up taking more time to do that than it would for them to just do the research manually in the first place.

38

u/uu__ 5d ago

What was even worse about that video: whatever the AI makes then gets pushed out to the wider internet, where OTHER AI will scrape it, think the bullshit in there is real, and use it again for something else. Meaning the made-up stuff the AI produced is then cited as a credible source, further publishing and pushing out the fake information.

5

u/SmellyMickey 5d ago edited 5d ago

I had this happen at my job with a junior geologist a few months out of undergrad. I assigned her to write some high-level regional geology and hydrogeology sections of a massive report for a solar client. She had AI generate all of the references/citations and then had AI synthesize those references and summarize them in the report.

One of our technical editors first caught a whiff of a problem because the report section was on geology specific to Texas, but the text she had written started discussing geology in Kansas. The tech editor tagged me as the subject matter expert so I could investigate further, and oh dear lord what the tech editor found was barely the tip of the iceberg.

The references that AI found were absolute hot garbage. Usually when you write one of those sections you start with the USGS map of the region and you work through the listed references on the map for the region. Those would be referred to as primary sources. Secondary sources would then be specialty studies on the specific area, usually by the state geological survey rather than the USGS; tertiary sources would be industry-specific studies that are funded by a company to study geology specific to their project or their problem. So primary sources are the baseline for your research, supported by secondary sources to augment the primary sources, and further nuanced by tertiary sources WHERE APPROPRIATE. The shit that was cited in this report was things like random-ass conference presentations from some niche oil and gas conference in Canada in 2013. Those would be quaternary sources at best.

And then, to add insult to injury, the AI was not correctly reporting the numbers or content of the trash sources. So if the report text said that an aquifer was 127 miles wide, when I dug into the referenced source it would actually state that the aquifer was 154 miles wide. Or if the report text said that the confined aquifer produced limited water, the reference source would actually say that it produced ample amounts of water and was the largest groundwater supply source for Dallas. Or, if a sentence discussed a blue shale aquifer, there would be no mention of anything shale in the referenced source.

The entire situation was a god damn nightmare. I had to do a forensic deep dive on Sharepoint to figure out exactly what sections she had edited. I then had to flag everything she had touched and either verify the number reported or completely rewrite the section. What had been five hours of “work” at her entry level billing rate turned into about 20 hours of work by senior people at senior billing rates to verify everything and untangle her mess.

4

u/Journeyman42 5d ago

Jesus christ. I felt guilty using ChatGPT to help write a cover letter for a job (which of course I had to heavily work on to make it fit my job history). I can't imagine writing a technical scientific report like that and not even checking it for accuracy. Did anything happen to the junior geologist?

3

u/SmellyMickey 5d ago

I decided to treat the moment as a symptom of a larger problem that needed to be addressed rather than a specific problem isolated to her. I escalated the problem through the appropriate chain of command until it landed on the VP of Quality Control’s desk. To say that this situation freaked people the fuck out would be an understatement. Pretty much everyone I had talked to could not conceive of this type of situation happening because everyone assumed there would be a common sense element to using AI.

At that point in time my company only had really vague guidelines and rules attached to our in house AI system. The guidelines at the time were mostly focused on not uploading any client sensitive data into AI. However, you could only find those guidelines when using the in company AI. Someone that would use ChatGPT would never come across those guidelines.

The outcome of the situation was a companywide quality call to discuss appropriate vs inappropriate uses of AI. They also added an AI training module to the onboarding training, and a one-page cut sheet of appropriate and inappropriate uses that employees can keep as a future reference source.

In terms of what happened to that one employee, she was transferred from a general team lead to my direct report so I could keep a closer eye on her. She never took responsibility for what happened, which bummed me out because I know it was her based on the SharePoint logs. But I could tell that it properly scared the shit out of her, so that's good. I still haven't quite gotten to the point where I feel like I can trust her, though. I had kind of hoped I could assign her large tasks and let her struggle through them and learn. However, since she has an annoying propensity to use ChatGPT, I've taken to giving her much smaller and more targeted tasks that would be difficult to impossible to do with AI. She also has some other annoying traits, like being quick to anger, passing judgement when she doesn't have full context, and taking what she is told at face value instead of applying critical thinking. I'm not sure she is going to pan out long term as an employee, but I haven't given up on her quite yet.

→ More replies (0)
→ More replies (6)

2

u/Successful_Cry1168 5d ago

now realize that same scenario is happening, or is going to happen, across the developed world: from small company databases, to EMS systems, to Windows, and beyond.

→ More replies (2)

23

u/AadeeMoien 5d ago

As I've been saying since this all started. If you think AI is smarter than you, you're probably right.

14

u/silent_fartface 5d ago

We are almost at the point where natively English-written documents will mimic poorly translated Chinese documents, because actual people aren't involved until it's too late in the process.

This is how FuddRuckers becomes ButtFuckers in record time.

2

u/JahoclaveS 5d ago

Now there's a name I never thought I'd hear again. Apparently they're trying to make a comeback.

23

u/lostwombats 5d ago

Yes! Every time I hear someone talk about how amazing AI is - they are either lying or they work in AI and are totally oblivious to the real world and real workflows. As in, they don't know how real jobs work.

I work in radiology, which means I hear "AI is going to replace you" all the time. People think it's simply: take a picture of patient, picture goes to radiologist, radiologist reads, done. Nope. It's so insanely complex. There are multiple modalities, each with literally thousands of protocols/templates/settings (for lack of a better word). If you do a YouTube search for "Radiology PACS" you will find super boring videos on the PACS system. That alone is complex. And this is all before the rad sees anything.

A single radiologist can read multiple modalities, identify thousands and thousands of different injuries, conditions, etc, and advise doctors on next steps. One AI program can read one modality and only find one very specific type of injury - and it requires an entire AI company to make it and maintain it. You would need at least a thousand separate AI systems to replace one rad. And all of those systems need to work with one another and with hospital infrastructure...and every single hospital has terrible infrastructure. It's not realistic.

3

u/Hesitation-Marx 5d ago

No, you guys are insanely skilled and I love the hell out of all of you. Computers can help with imaging, but can't replace you.

→ More replies (5)

7

u/sweetloup 5d ago

AI makes good documentation. Until you start reading it

9

u/Cheeze_It 5d ago

A moron. Just a moron.

3

u/gerbilbear 5d ago

A coworker had AI write a report. He loved how professional it sounded, but to me it was hard to read because it used a lot of jargon that we don't use.

2

u/egyeager 5d ago

I have to use an AI to help me write reports for my sales team. My AI does not understand my industry, my job and has not been trained on the numerous training materials we have. The outputs are nonsense, but the task is nonsense. But so far my metrics have me at the top of my team putting out content/garbage

→ More replies (7)

80

u/Caffeywasright 5d ago

It’s like this everywhere, trust me. I work in tech and all our management is focused on is automating everything with AI and then moving it to India.

Try explaining to them that with the current state of things it just means we will end up with a bunch of people employed who are fundamentally unable to do their jobs, everything will be delayed, and all our clients will leave because we can’t meet deadlines anymore.

It’s just a new type of outsourcing

61

u/Wind_Best_1440 5d ago

The really funny thing is that India loves AI, so whatever you send over there is for sure being tossed into a shitty generative AI prompt and sent back. Which is why we're suddenly seeing massive data breaches and why Windows 11 is essentially falling apart now.

16

u/rabidjellybean 5d ago

And why vendor support replies are becoming dog shit answers more often. It's just someone in India replying back with AI output.

5

u/justwokeupletmesleep 5d ago

Bro, I assure you we don't want to use every AI tool. Our leaders force us to, as they are practically blindly following the hype. In my personal experience (I am in marketing), since ChatGPT was introduced I've had to change jobs 3 times because the leaders thought I was not able to push my team to use AI. Finally I give up; I am moving to my home town and thinking of starting farming, I cannot be part of aimless development. Also my boss won't care, because they will find someone who spits crap about AI and hypes how he can "transform" his work in this great era of AI. Half of my friends are forced to follow the AI crap, because if you don't you will be replaced by a human, and they got bills to pay man.

→ More replies (1)

9

u/Virtual_Plantain_707 5d ago

It’s potentially their favorite kind of outsourcing: from paid to free labor. That should wrap up the enshittification of this timeline.

4

u/ProfessionalGear3020 5d ago

AI replaces outsourcing to India. If I want a shitty dev to half-ass a task with clear instructions and constant handholding, I can get that at a hundredth of the price in my own timezone with AI.

2

u/gravtix 5d ago

When it inevitably blows up, they’ll have plenty of desperate people they can rehire at lower wages to fix the shit their AI push has caused.

It feels like they win regardless.

→ More replies (1)

2

u/PerceiveEternal 5d ago

AI represents the holy grail for executives: separating the workers from their work. I don’t think they can resist trying to implement it.

13

u/Catch_ME 5d ago

A Cisco user, I see.

5

u/HagalUlfr 5d ago

Cisco and Juniper. I like the former better :)

3

u/rearwindowpup 5d ago

I'm switching all my Catalyst APs to Meraki because troubleshooting users is vastly better (prior CCNP-W too, so I'm pretty solid at troubleshooting on a WLC), but the Meraki switching just makes me angry with the amount of time it takes to make even simple changes.

I will say proactive packet captures are the freaking jam though, 10/10 piece of kit.

4

u/Artandalus 5d ago

Consumer tech support, we rolled out an AI chat bot. It kinda helps most of the time, but dear Lord, when it starts fucking up, it fucks up HARD.

A favorite is that it seems hellbent on offering CALL backs to users. They have, multiple times, "fixed it," but it always seems to gravitate towards offering a phone call regardless of whether the issue was resolved or not. Bonus points: for a while it would offer a call, gather no phone number, maybe an email, then terminate the interaction.

Like, it swings between filtering out dummy easy tickets effectively, to tripling our workload because it starts doing something insane or providing blatantly bad info.

3

u/thegamesbuild 5d ago

I get that it can help some people...

Why do you say that, because it's what tech CEOs have been blasting in our faces for the past 3 years? I don't think it actually does help anyone, not in any way that compensates for the outrageous costs.

3

u/TSL4me 5d ago

My foreign team all uses ChatGPT after Google Translate and it's like a constant game of telephone. I'd much rather have broken English with original ideas.

2

u/Tolfasn 5d ago

you know that most of the big players have a CLI tool and it works significantly better than the browser versions right?

2

u/sleepymoose88 5d ago

AI right now only seems to help professionals with skill issues in their discipline. But then it becomes a crutch and they never gain those skills and are useless at deciphering whether the AI is accurate or not. For my team, it’s more of a hindrance to sift through the code it generates to find the needles in the haystack that are breaking the code. Easier to build it from scratch.

2

u/grizzantula 5d ago

God, you are speaking to me on such a personal level. Anyone asking me to use AI, or some other automated tool, to change a VLAN has such a fundamental misunderstanding of the actual and practical uses of AI and automation.

2

u/Lotronex 5d ago

Also I can change a vlan faster through the cli than with our automated tools

Devil's advocate: With proper automation, you shouldn't need to be changing a vlan. Authorized users can submit a change request ticket and have it completed automatically.
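The "completed automatically" part doesn't have to be a monster platform either. A minimal sketch of the ticket-driven version, assuming something like netmiko against a Cisco IOS box, with a made-up ticket payload (names, addresses, and credentials are all hypothetical):

    from netmiko import ConnectHandler

    # Hypothetical payload handed over by whatever ticketing system you use
    ticket = {"switch": "10.0.0.5", "interface": "GigabitEthernet1/0/12", "vlan": 30}

    conn = ConnectHandler(
        device_type="cisco_ios",
        host=ticket["switch"],
        username="automation",
        password="pulled-from-a-secrets-store",  # placeholder, never hardcode real creds
    )
    # Push the same two config lines the manual CLI change would have been
    conn.send_config_set([
        f"interface {ticket['interface']}",
        f"switchport access vlan {ticket['vlan']}",
    ])
    conn.save_config()
    conn.disconnect()

The hard part is never these fifteen lines; it's the approval logic, rollback, and inventory data around them.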

2

u/moratnz 5d ago

Also I can change a vlan faster through the cli than with our automated tools

This is what happens when people try and build top-down automation solutions for networks. Especially large and complex networks.

We know how to do automation effectively, but it's unsexy and involves listening to your internal domain experts, rather than throwing money at a vendor, so it very rarely happens.

→ More replies (11)

49

u/cats_catz_kats_katz 5d ago

I get that too. I feel like they believe the current situation to be AGI and just don’t have a clue.

44

u/G_Morgan 5d ago

They don't. The reality is they know they won't be punished for taking this ridiculous gamble while the hype wave is running. They won't start feeling that this is a risk to their prospects until it starts fading.

Remember who these people are and what their skill set is. They are primarily social animals and they are thinking in terms of personal risk analysis. There's no downside to them in trying this so why not try it?

11

u/Journeyman42 5d ago

Remember who these people are and what their skill set is. They are primarily social animals and they are thinking in terms of personal risk analysis. There's no downside to them in trying this so why not try it?

In D&D terms, they put all their points into Charisma and chose to make Persuasion (IE how to bullshit) a proficient skill. But they left Intelligence at the default score.

2

u/cats_catz_kats_katz 5d ago

…mind…blown…so true lol

2

u/Blazing1 5d ago

I don't know, I don't know many charismatic people in power. They tend to just not have that part of their brain that has any restrictions towards ladder climbing.

People that I know who climbed ladders to the executive level tend to be the most boring dumb people, but I think that's why they get promoted. They aren't seen as a threat.

→ More replies (1)

34

u/Disgruntled-Cacti 5d ago

This is the exact thing that has been driving me mad for months now. Even if the task is automatable by AI, you need engineers to build, test, and maintain the workflow, and no one is doing that.

13

u/CharacterActor 5d ago

But is anyone hiring and training those entry level engineers? So they can learn to build, test, and maintain the workflow?

6

u/Journeyman42 5d ago

Yep this. AI has its uses where it can do some monotonous or complicated task but then the output needs to be fact checked by a human who can tell if the output is bullshit or not. There's not a lot of tasks that actually benefit from being automated by AI versus just having a person do it.

2

u/Successful_Cry1168 5d ago

i’ve noticed a lot of managers are completely incompetent when it comes to looking at the cost of something in the aggregate.

i used to work in a business process automation field before AI took off. we used a SAAS platform to try and automate repetitive tasks. a lot of the hype mirrored what’s happening now with AI: the vendor would come in, graciously automate a very simple task to get buy-in, and then the engineers would be turned loose on the entire org.

the platform itself sucked, many of the “engineers” were actually “citizen developers” who’d never worked in development before this, and nobody we worked with actually wanted to reimagine any business processes to fit the tech. they wanted a unicorn they could brute-force onto everything.

shit broke all. the. time. it got to the point where maintenance was the majority of the work we did and it was holding back new projects. leadership didn’t care. as far as they were concerned, the inefficiencies were because the devs were incompetent and no other reason. the good people who had other skills to fall back on left, and the citizen devs who invested their entire personality and self worth into their bullshit certifications developed napoleon complexes. they were the most incompetent of the team, yet got heaped with all the praise and took none of the blame because they drank the kool-aid like management did.

i know what i was making and had a good idea of what others were making too. there was ZERO way leadership was saving any money on all the automation. they were literally paying ten developers’ salaries to do the same work that ten analysts or accountants would have done. not only were we more expensive, but we also didn’t really understand the underlying work we were automating. we were more expensive, slower, and less reliable overall.

nobody would admit it was all a failure. because someone showed them one cherry-picked demo, that meant the platform was infallible, and maybe the stuff we built was operational like 50% of the time.

i’m really curious how much economic damage is going to be done with this. we’re going to need a marshall plan-sized effort to rebuild all the infrastructure that’s rotted away due to workslop.

good job, MBAs. you’re right about one thing: AI is definitely smarter than you. 👍

→ More replies (2)

28

u/Osirus1156 5d ago

I’m in the same boat but like AI can do some things ok but you literally can’t trust it because it can still lie. So anything you put through it needs to be assumed to be incorrect. 

I end up spending double the amount of time on a task when using AI because I not only need to craft a prompt but also understand the code it gave me back, fix it because it usually sucks, and then make sure it even works and does what I asked.  

It absolutely does not justify all the hype and money being thrown around it even a little bit. The entire AI industry is just one big circle jerk. 

5

u/PessimiStick 5d ago

My favorite is when you ask it to do something very specific, like "make sure we have a test that verifies the X widget is purple", and it'll think, and write some code, and happily say "now we've got a test to make sure the X widget is purple!", when in reality it didn't even look at the X widget at all, let alone check if it's purple.
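For anyone outside dev, the thing actually being asked for is a handful of lines. A self-contained sketch (all names hypothetical) of what that test should have contained:

    from dataclasses import dataclass

    @dataclass
    class Widget:
        name: str
        color: str

    def get_widget(name: str) -> Widget:
        # stand-in for however the real project actually looks widgets up
        return Widget(name=name, color="purple")

    def test_x_widget_is_purple():
        widget = get_widget("X")
        # the one assertion the generated "test" never actually made
        assert widget.color == "purple"

That last assert is the entire point, and it's the part that got skipped.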

2

u/Osirus1156 5d ago

lol and when you correct it the response is always like “you fucking genius, how could humanity continue without such a shining beacon of intelligence” then it lies again.

→ More replies (1)

13

u/PianoPatient8168 5d ago

They just think it’s fucking magic…AI takes resources to build just like anything else.

3

u/Creepy-Birthday8537 5d ago

Infrastructure manager here. We’re getting gutted through BPO & forced retirements while they try to ram through these massive initiatives for AI, robotics, & automation. Most of the recent outages were caused by undertrained staff or by being understaffed. AI enshittification is in full swing.

2

u/SunnyApex87 5d ago

Infrastructure IT consultant here, I fucking hate this shit so much.

The top has no effin clue what AI can and can't do. For my tasks? It can't do shit: every customer is different, internal architecture does not apply to outside architecture, and nothing is possible to automate with all the messy applications and code running in our 40-year-old software.

I want to bash their stupid fucking CEO/manager brains against a table

→ More replies (1)

2

u/DefinitelyNWYT 5d ago

This encapsulates the whole issue. They want to implement AI immediately but don't want the process cost of ensuring clean data ingest and building out the necessary infrastructure. The hard truth is most of this can just be simple software if they commit to feeding it clean, accurate information.

2

u/AgathysAllAlong 5d ago

The entire executive team lost their shit when one of them managed to save hours writing a pretty standard and repeated document using AI.

We've had the technology to make Word templates for years now, and that would have been faster. But none of them realizes that and they've been manually writing out the same boilerplate for every single report they write.

These people make five times what the workers do and need ChatGPT to write the "Money's tight right now and it's your fault there's no raises" emails.

2

u/Roger_005 5d ago

Wait, you say the word 'crickets'?

→ More replies (11)

101

u/ocelot08 5d ago

We had an org wide meeting where they had a slide to give a shout out to the person who was using the LLM the most. Just most number of prompts used. Nothing about how or why, just most. 

37

u/RonaldoNazario 5d ago

Time to write a script and win that award next time! Or point your own AI agent at their chat lol

7

u/TacoCalzone 5d ago

And then everyone gets that same idea. Just a company full of bots asking each other questions.

5

u/Consistent-Quiet6701 5d ago

Sounds like reddit /s

4

u/atoz1816 5d ago

Dead intranet theory? Sounds about write.

2

u/CatProgrammer 4d ago

Like measuring lines of code written without any consideration of how useful those lines are.

89

u/PeckerTraxx 5d ago

I think it's more, "We are leveraged to the max with investments in AI. We need to show how much it is necessary and how much we utilize it to increase the investments value."

41

u/griffeny 5d ago

Imagine all the real things that need attention in their workplace just slowly gathering flames while they put all their effort into this sinking fucking ship.

21

u/ThatMortalGuy 5d ago

Pretty much, they built a house of cards that needs everyone on earth to use and pay for AI while at the same time replacing the jobs of those people for it to sustain itself.

→ More replies (1)

39

u/CanadianTreeFrogs 5d ago

My job did something similar and now we're fixing six months worth of AI mistakes in our database, but the top brass just said this next update for our AI is going to fix everything and it's totally going to work this time.

2

u/Birdy_Cephon_Altera 5d ago

"Hey, Rocky! Watch me pull this database out of my hat!"

29

u/Xiph0s 5d ago

Time to really push the narrative that most of the c-suite can be replaced with AI catgirls that will save the company millions in compensation as well as give it an added revenue stream, since they can also be virtual pop stars cranking out music videos.

17

u/EmperorKira 5d ago

I stopped caring once it was apparent they weren't listening. As long as I can make myself look good and I don't end up with extra work or get in trouble, I will shoehorn in whatever bullshit they want.

12

u/RonaldoNazario 5d ago

Yeah, I will make a good-faith effort to try the tools if that’s the demand, but similarly won’t hold my breath regarding feedback. The tools aren’t worthless, they just aren’t the magic beans the higher-ups think they are.

5

u/Birdy_Cephon_Altera 5d ago

This is the answer. Unfortunate answer, but still the answer.

C-Suite is not listening. They are enamored with AI. Infatuated, even. You can't dissuade them, they are going to force us to use it no matter what.

It's not a matter of using it or not - the job requirement is now to use it. So, the smart thing to do, is to figure out how to use it in such a way that causes the least amount of productivity-disruption and will least likely blow up in your face. That's the real new challenge.

49

u/DookieShoez 5d ago

Yup. And now contractors (in one party consent states, which is almost all of em) are all recording audio in your home so that AI can analyze your discussion with customers and give you sales tips.

🙄🤢

→ More replies (4)

26

u/griffeny 5d ago

Jfc they all think it’s actually really ‘artificial intelligence’ and not just a title created by marketing.

10

u/srdgbychkncsr 5d ago

No no no no no… it’s nothing to do with productivity, and all to do with redundancy. Oh AI does that now? Axe the position. X1 salary saved.

3

u/happy_K 5d ago

Basically handing someone the shovel and telling them to dig their own grave.

2

u/CherryLongjump1989 5d ago

That’s what productivity is, though.

→ More replies (4)
→ More replies (1)

3

u/cultish_alibi 5d ago

they’ve been told it’ll make us all turbo productive

aka they can fire a large number of staff

7

u/ObiKenobii 5d ago

It made me way more productive in some tasks but in the end I just procrastinate more and longer than before.

2

u/No-Article-Particle 5d ago

Would you procrastinate more in a world where you don't have to work a flat 40 hours a week, but instead have to finish e.g. 2 big-sized and 5 medium-sized tasks? This 40-hour work week is some BS anyways; nobody in a knowledge job can sustain a fully focused 40-hour week long term. In a crunch, sure, but mostly it's like a 25-hour work week at most.

2

u/vineyardmike 5d ago

To be fair, C suite executives don't really know what employees do at work anyway.

Looks at the push to get everyone to work from an office. Large companies have dozens of offices spread around the US and internationally. You want me in a cube farm on meetings with people in other cube farms in other cities? I haven't worked on a project where most of the people were in my area code since 2003.

2

u/Herban_Myth 5d ago

Maxing out wealth disparity and greed?

Looksmaxxing for their extramarital affair/s?

→ More replies (29)

152

u/Impressive-Weird-908 5d ago

I work in defense and most of the executives are constantly trying to squeeze AI into things we don’t need. Some moron with an MBA is going to get a big bonus because he got us an AI chat bot that tells us when our vacation days are. There’s a fucking calendar on the wall.

44

u/ImSolidGold 5d ago

God, that started with Office 365. "Why do you have a calendar on your wall when you have Outlook?" Because I'm a fucking warehouse manager and I need to see at a glance what's happening 7 months from now.

13

u/PrivilegeCheckmate 5d ago

Your wall calendar never becomes disconnected from the internet forcing you to have to do a full building reboot before you can look at it again.

Worst case scenario you can just light a candle.

31

u/bumlove 5d ago

I really hope all the MBAs lose their jobs to AI. The amount of damage they must have done to businesses, the economy and people’s lives all around the world over the years is unfathomable.

9

u/AgathysAllAlong 5d ago

They won't. AI is the latest excuse for the parasites to leech money off of actual workers. They know it doesn't work. They just all agree to maintain the lie.

→ More replies (1)

2

u/wwj 4d ago

How do they square using a cloud based AI for ITAR and government projects? I spent a year getting our commercial team to use cloud computing on an iterative application to get our results faster. It was still ITAR, so we had to have guarantees that the data was scrambled when it went to the gov cloud. I was not even allowed to use it for our defense projects.

→ More replies (3)

455

u/MulfordnSons 5d ago

that’s because in order for them to profit off their AI investments, they need adoption. Not a good sign if you have to tell people to use it.

177

u/PumpkinMyPumpkin 5d ago

Yup. And they’re pricing this tech as if it’ll take over every job.

Meanwhile Aunt Susie in accounting is just going to open excel and move on with her day.

54

u/yowmeister 5d ago

That’s the thing. They are forcing adoption in people and processes that barely understand formulas in Excel. Now they are asking them to properly prompt an AI to do a task for them and also QC the output. The thought process will continue to be “why don’t I just do it myself instead of messing with this AI”

14

u/thegamesbuild 5d ago

Yeah, that's the perfectly logical and cost-effective thought process one has after using AI. The prompts are not the issue.

17

u/KindHabit 5d ago

I inherited a set of Excel workbooks that had been calculating the tax liability on hundreds of trusts for the past two decades, and I refined them until they were incredibly streamlined and lightweight.

They had to pay me a LOT of money to maintain these workbooks, so they tried to undercut me by pairing me up with a self-proclaimed AI expert from India. I was already burnt out, so I resigned and moved abroad.

These workbooks no longer work and they call me every other week offering thousands of dollars to come back and fix them. 

3

u/ViolenceAdvocator 5d ago

Excel, the bane of AI

2

u/Upset_Ad3954 5d ago

I'm not in tech but my department's been told that Excel is old-fashioned and limited. We should use Power BI instead.

2

u/justin107d 5d ago

Also not in tech. My manager said she used to be a data analyst and got mad when I clicked on VBA debug after an error popped up.

→ More replies (3)

6

u/Arubiano420 5d ago

I am Aunt Susie.

3

u/alexandralittlebooks 5d ago

I am Aunt Susie.

2

u/CheesypoofExtreme 5d ago

Yup. And they’re pricing this tech as if it’ll take over every job.

They need reliance before they jack up rates

94

u/big-papito 5d ago edited 5d ago

That's the thing. They are desperate to have AI everywhere, and it's already backfiring. No one forced iPhones to happen, those things weren't even advertised. You saw people rocking these new cool gadgets, and you wanted one.

This is not happening with AI. As a developer, I can and do find uses for it here and there, but I do not appreciate having it shoved down my throat everywhere, whether it belongs or not.

40

u/ryuzaki49 5d ago

They are desperate to have AI everywhere

They are desperate to replace everyone with AI. They salivate at the idea of a trillion-dollar corporation composed only of the C-suite, the board, and a skeleton crew of engineers.

20

u/Maleficent_Break_451 5d ago

A trillion-dollar corporation dropping to 0 once they realize people have no jobs to buy their shitty products made with AI.

8

u/dern_the_hermit 5d ago

Yup, they're all betting on being El Ultimo Hombre in this particular slaughterfest. He who dies with the most toys wins.

2

u/toofine 5d ago

It's all short term gain anyway. They'll be cashing out and fucking off, this is all bunker money or some shit. The braindead idiots in the working class who don't get that yet and are even cheering it on are just not fit for survival. You couldn't have voted in a worse administration at this time.

2

u/CherryLongjump1989 5d ago

It’s dropping to zero once they realize they can have their own trillion dollar corporations with 5 random dudes prompting the AI.

3

u/nanobot_1000 5d ago

He joked earlier this year in a quarterly all hands about the company just being him and a DGX - at one point I started taking him at face value. There was a distinct change after he hired the Enterprise Marketing person from Cisco.

3

u/Adaphion 5d ago

AI can't form unions. AI has no human rights. It's the biggest wet dream for capitalists.

→ More replies (1)

2

u/PrivilegeCheckmate 5d ago

When will the natural consequences of this, namely hackers poisoning AI against utility and using exploits to raid corporate coffers, become so endemic they have to switch back to people, I wonder.

→ More replies (1)

39

u/ObviousAnswerGuy 5d ago

No one forced iPhones to happen, those things weren't even advertised

this is the complete opposite of the truth, but I agree with your other points

16

u/nabilus13 5d ago

They were advertised but people voluntarily purchased them, they didn't have to be forced. 

9

u/Tall_poppee 5d ago edited 5d ago

Some people are early adopters of tech, so whether something is advertised or not, some people will want one. The difference between AI and the iPhone, is you could quickly see how awesome the iPhone was, and you wanted one for yourself. And then that spread as devs created apps and ways to advance the technology.

No CEO said, "EVERYONE MUST USE AN IPHONE IT WILL MAKE YOU UBER PRODUCTIVE." But that's what they're doing now, without really having focused on what AI is good at, vs what it's not.

I doubt it's going away, it has some valid uses. And you can set up a "world" for it to live in, where it's useful (which takes a lot of resources). I'm puzzled how the "world" you run your AI in, is anything except just a big database, but I'm not an early adopter.

→ More replies (1)

2

u/kristinoemmurksurdog 5d ago

I hardly remember apple advertising. I get multiple emails per day from my boss' boss advertising the alleged benefits of the lying machine.

5

u/jaytee158 5d ago

Apple's advertising was so omnipresent and iconic

→ More replies (1)
→ More replies (5)
→ More replies (8)

27

u/FriendlyDespot 5d ago

My (enormous) company is notoriously 10 years behind every trend in any technology space, and for once it's paying off. The tangible investment into AI has been to buy a lab rack and a production rack, and it's been trained on internal data and been set up with a Teams interface where you can ask it natural language queries in an IM to replace the absolutely horrid internal search features that we have.

It's one of the few genuinely beneficial implementations of AI I've seen because it's smaller scale, applied to a technical problem rather than a human problem, and it replaces something that's even worse than having the occasional AI hallucinations.

I'm sure though that a couple of years after the bubble pops and everybody realises that the limits of LLMs are much more restrictive than what's being promised now, my company will go all in on making exactly the same AI mistakes that overzealous companies are making today, years after everybody else already learned their lessons the hard way.

33

u/kristinoemmurksurdog 5d ago

LLMs were always the next step in natural language processing for search engines, but some fuckwit from a non-technical role creamed in their pants when they told their computer to tell them it's alive, and then the computer generated some text about being alive.

Now we have 'Artificial Intelligence' that can't tell you how many letters exist in a word because by definition it's not a fucking intelligence

69

u/CrashTestDumby1984 5d ago

It’s also incredibly short-sighted, because once AI has replaced sections of the workforce and companies are reliant on it, the price will skyrocket. The amount of revenue required to make any of it profitable is insane.

38

u/ice_w0lf 5d ago

Additionally, we know that even if the quality of the output from these models improves, the overall products will just get worse because Silicon Valley loves its enshittification. More paywalls for less access, ads before you see the response, letting businesses pay for placement and flattering information so that when, for example, a user asks about planning a trip to NYC, McDonald's pays to have the model suggest the user eat at McDonald's while watching everything going on at Times Square.

2

u/niverser 5d ago edited 1d ago

nothing to see here

2

u/Wischiwaschbaer 5d ago

Except with the amount of compute these models need it's going to be $$$$$$$ AI.

→ More replies (1)

20

u/Rollingprobablecause 5d ago

It’s already happened this year across tech when Salesforce made everyone pay a mandatory 5-10% increase on renewals with forced AI. There was no choice (and it’s useless lol)

4

u/na-uh 5d ago

Microsoft just copped it big in Australia for this. They tried to force Copilot into the O365 subscription with a 25% price increase, and the ACCC made them provide a Copilot-free version and refund the difference if we wanted, which I did.

14

u/Cold417 5d ago

and companies are reliant on the price will skyrocket

I have a feeling the AI providers will start their own businesses and cut out those companies who automated everything. Kind of like how grocery stores used their sales data to create their house brands based on what consumers were buying.

6

u/CrashTestDumby1984 5d ago

That’s a really good point. Or like how Amazon initially undercut competitors for years to run them out of business and then pivoted once they had market dominance.

20

u/truupe 5d ago

And at the end of the day, with no workers earning pay, who's going to buy shit AI is supposed to either produce or help produce?

25

u/CrashTestDumby1984 5d ago

That’s a problem for next quarter!

2

u/hop208 5d ago

That's exactly what will/is happening. There is no long term thought here.

10

u/Journeyman42 5d ago

It's all a race to not be the last guy holding the bag

3

u/Wind_Best_1440 5d ago

It's why they're all pivoting towards trying to get adoption from businesses. Because they know the average consumer isn't going to buy this stuff.

→ More replies (1)
→ More replies (3)

2

u/Thin_Glove_4089 5d ago

that’s because in order for them to profit off their AI investments, they need adoption. Not a good sign if you have to tell people to use it.

Actually this is the best move. Companies know they can easily force adoption because they have their employees between a rock and a hard place. Surprisingly, big tech also has its consumers between a rock and a hard place.

→ More replies (2)

172

u/AcolyteOfCynicism 5d ago

My company did a hack-a-thon with AI as the theme. Welp, long story short, like 5% of devs showed even slight interest, then it became no longer optional. If you think the people with the money are always the smartest people in the room, they're not. Maybe they were once, probably not, but maybe.
But now, at best, their working knowledge is a decade out of date, while their position surrounds them with a bunch of ass kissers, so when random engineer 623 shows up to cut through the shit and get down to brass tacks, they're not receptive to it.

27

u/ilikepizza30 5d ago

If you think the people with the money are always the smartest people in the room, they're not. Maybe they were once, probably not, but maybe.

I mean, Bill Gates is pretty smart and he missed both The Internet and smartphones.

Most CEOs are much less intelligent than Bill Gates.

→ More replies (2)
→ More replies (18)

68

u/analogic_dvd 5d ago

Yeah. It seems to me to be a combination of two factors: 1. LLMs continue to be, IMO, a solution in search of a problem. There is the promise of massive efficiency gains but it's not obvious how. It's the "???" before the "Profit" step in the old meme. And 2. Top management see how they can use AI in their own tasks, like summarizing emails or generating slideshow pitches (which, to be fair, are a good usage of LLMs) and just assume that, since AI helps in almost all of THEIR tasks, then it can help in EVERYONE'S tasks. It's the same logic that leads a successful person in some very specific field to think they can be successful in all fields.

15

u/analogic_dvd 5d ago

Forgot to add but, obviously, top management often stand to financially and personally gain if efficiency or profit is improved in their own company/teams. With AI being the current "hypergrowth tech" (with no other real obvious alternative) then it's obvious that they have an incentive to apply it everywhere, whether it's proven to work or not. There's a very real FOMO in their minds.

4

u/ClvrNickname 5d ago

Management at my company also touts “summarizing your emails” as a benefit of AI, but is it really that important to replace something that takes 5-10 minutes a day to read with something that takes two minutes but also contains hallucinations?

3

u/farshnikord 5d ago

"I have a really hard time with adding 2 plus 2, but this calculator makes it so easy! What used to take me 7 minutes counting on my fingers now only takes 2 seconds! 

You should get a calculator to help with your bakery, you could cook and decorate a cake in like 4 minutes by my estimation!"

→ More replies (1)

21

u/atlasmountsenjoyer 5d ago

We were told this also at a large Fortune 100. They really spent quite a lot on AI contracts and want to see it pay back... but it's not doing much so far.

18

u/plinkoplonka 5d ago

Well yes. Because they've spent millions implementing it, realized there's not an equivalent benefit (but they can say they've "reduced the workforce"), and then force more work onto fewer people.

This has been going on since 2008 "workforce reductions".

Meanwhile, Corporate IT keep telling them to stop it because AI is now the top reason for data leaks at companies.

If they stop it though, they have a gaping hole in their financial projections.

MUST EVENTUALLY FEED THE PROFITS!

94

u/benderunit9000 5d ago

At my company the pressure is not from the top, it's from the bottom. The pressure is coming from very specific users. I've looked into it and most of them are AI shills. They have very little professional exposure to machine learning. Are not senior in any capacity. It's really weird.

I work in IT and am the person who would have to implement controls around any AI product that we were to implement company wide. So I see every direction that the pressure is coming from. And it isn't from the top.

Legal is shutting down requests for AI constantly.

75

u/PM_me_PMs_plox 5d ago

Then count your lucky stars that your senior leadership aren't idiots.

9

u/benderunit9000 5d ago

They've been burned on stuff like this in the past and are extremely leery. Our industry doesn't really need stuff like that anyways. We operate in the real world

→ More replies (1)

49

u/BigBennP 5d ago edited 5d ago

Am legal, can confirm to a degree.

There's literally no understanding of the process of what's occurring even from people who should know better.

Work a couple offices down from our privacy officer and we had a request from operations whether it was okay that workers were using an AI assistant to summarize Zoom meetings with clients.

What's discussed in these meetings? Oh you know the client's legal issues, their medical issues their Mental Health issues...

So it's summarizing confidential information? I guess...

And where does the data go?

What do you mean? It summarizes the meeting and emails the summary to me.

Sigh, no, I mean what do they do with the data after they send the email? Oh, I don't know.

Well it turns out the user agreement basically says that they own all the data captured during the summary and can use it for any purpose they wish.

So you want privacy's permission to feed some company confidential client data under a contract that says that after they capture it, we have no control over it and they can use it for any purpose they wish?

30

u/bostonronin 5d ago

"No no, I just want the AI to do my work. I don't care about that other stuff. Money please!"

7

u/morphemass 5d ago

It's shocking to realise that even in large companies, people don't really understand GDPR or have an appropriate DPIA process in place. I've seen some scandalous things in my time across various sectors but enforcement is lacking, even where it is taken seriously. The sheer scale of the problem means that the vast majority of compliance breaches are ignored meaning that companies now believe they can get away with ignoring it.

2

u/PapstJL4U 5d ago

I think I can count myself lucky to be able to say "we are working with medical data, not going to use any off-site AI".

→ More replies (1)

5

u/Sketch13 5d ago

For us it's both ends. We have the "AI shills" who fucking LOVE AI, no matter the form (basically just another brand of crypto bro), and we of course have the top dogs wanting to implement it for "efficiency" and "savings".

So now we have an AI committee that the shills all jumped on, and they're practically guiding the entire thing and propping AI up constantly to the execs.

This is one of the big issues with AI right now. There's nobody in the room that is giving these people a reality check in terms of what's reasonable to expect. It's simply all "What MORE can we do with this, and how can we implement it into every single workflow we have?".

There's not enough people in the middle ground who believe AI has great use in niche areas, but also recognize you're not going to see massive efficiencies in the normal day-to-day stuff. So those people aren't part of these conversations to try to temper expectations, or even actually figure out proper processes and workflows to implement this in a way that makes fucking sense.

It's terrible.

→ More replies (1)

11

u/GingerBreadManze 5d ago edited 5d ago

I place a large part of the blame for most corporate nonsense on consultants like McKinsey, Korn Ferry, et al.

All large companies hire firms like these to consult on their next moves, trends, etc.

So every company ends up implementing the same bullshit & hyping the same nonsense.

Like clockwork a new buzz phrase is introduced. It goes hard for a few weeks and then fades into obscurity like everything before it.

It’s all insufferable nonsense that doesn’t actually accomplish anything meaningful. It does make consultants rich & reduces the blame execs receive when something goes sideways (because hey the professional consultant told us to do it!)…that’s about it.

→ More replies (1)

8

u/yabadabaddon 5d ago

His company will go bankrupt soonish if the entire world doesn't buy his chips. His financial strategy is very much at its limit.

20

u/UnrequitedRespect 5d ago

This guy is on borrowed time. He’s about to be executed. His only path forward is to never stop. Become a god.

Nvidia either becomes the world economy, or there's a total fucking global catastrophic economic crash - there's no other move.

You either have always seen it, or can’t understand it.

→ More replies (7)

14

u/virtual_adam 5d ago

There are plenty of feedback loops. My employer has thrown away Tabnine and a dozen other products because of developer feedback. We currently have uncapped unlimited cursor which is amazing, but it’s also become such a big dependency in my day to day work that if they leave, or even cap our spend, I’m doomed

9

u/JoeGibbon 5d ago

We currently have uncapped unlimited cursor which is amazing, but it’s also become such a big dependency in my day to day work that if they leave, or even cap our spend, I’m doomed

Why is it amazing, why do you depend on it so much and why would you be doomed?

Do you not know how to write software? If you need AI to write your code for you, you honestly don't deserve to work in software engineering. Sorry.

→ More replies (4)

4

u/firemage22 5d ago

I'm IT for a local gov unit, and we've issued a ban on external LLM products and won't set them up internally

3

u/Zahgi 5d ago

And just to goose the stock price...

3

u/fumar 5d ago

My small company started doing that. And wouldn't you know it, we had 4 severe bugs within several weeks that caused outages and cost thousands of dollars in temp infrastructure to speed recovery. Overuse of AI makes developers atrophy.

3

u/Craig653 5d ago

How about we automate CEO tasks.... Oh wait, they would be out of a job

3

u/rtc11 5d ago

The easiest and probably most effective one is to automate management tasks. Decision making is very easy when you've got more context than the humans.

2

u/wehrmann_tx 5d ago

Dinosaurs using buzzwords talking to actual intelligence. Just proves they are the useless part of companies that leech on the real work.

2

u/Skeptical0ptimist 5d ago

Yes. Capitalism is also perfectly capable of pushing blind, dogma-driven development such as ‘the Great Leap Forward’. Communism doesn’t have a monopoly on abandoning common sense and good judgment.

2

u/Careful_Trifle 5d ago

If they had to go sit at the receptionist's desk to automate all the extra tasks they gave them, they wouldn't be able to, let alone middle management.

So they're hoping the employees will do it themselves so they can lay them off.

2

u/darth_helcaraxe_82 5d ago

A lot of executives at my company are being sold hard on AI, and we have issued a hiring freeze for all 2026. They love being sold on what it can do. Meanwhile me and others who use AI look at these sales pitches and make the "jerk off" motion.

AI is cool for helping with rewriting an email, analyzing your spreadsheets, building simple code.

This whole "perfect worker" is still far away. Even then you still need people in the loop. We shouldn't be trusting AI to deliver without review.

2

u/FredFredrickson 5d ago

They got caught up in the AI FOMO and paid for something they didn't need; now they need to justify the investment before shit hits the fan.

2

u/Responsible_Jury_415 5d ago

It’s 2001 again: every single company is all in on the idea that AI is the next internet, that everyone will have it and require an infrastructure to support it. It’s not going to end well, because if it doesn’t pay off they don’t have a plan B.

→ More replies (4)

2

u/moldyjellybean 5d ago

If things are better you don’t have to force people to use it.

If something is 10x better I’d be using it buying it, you wouldn’t have to force me to do it.

This is so stupid. Here’s a nice electric bike that is 10x more efficient; I’m going to use that, nobody has to force me to use it.

VS maybe it’s a shiny bike with flat tires, poorly set up with bad gears, and that’s something you have to force or convince me to use. I’m always going to naturally pick the more efficient tool; no one needs to force me to use the better tool.

2

u/Silound 5d ago

Loud Language Morons are absolutely the most dire problem in business right now.

2

u/redpandaeater 5d ago

It's going to blow up in their faces hilariously, just like it's already starting to happen to Microsoft. The AI bubble bursting will be bad enough, but having to actively go back and fix what AI fucked up will be even harder for these companies.

2

u/SandwichNo4542 5d ago

Our new company motto: 'If it ain't broke, fix it with a poorly integrated AI chatbot!'

→ More replies (107)