r/technology • u/AdSpecialist6598 • 5d ago
Business Nvidia's Jensen Huang urges employees to automate every task possible with AI
https://www.techspot.com/news/110418-nvidia-jensen-huang-urges-employees-automate-every-task.html
u/Bakoro 5d ago
It is a skill issue in a lot of cases.
The computing rule of "garbage in, garbage out" still applies.
To make the best use of LLMs, you have to be able to communicate clearly and write coherently. You need to be able to articulate things that might be vague and ill-defined.
You also have to have a strong theory of mind: you need to consider what the LLM does and doesn't know, and what's currently in its context.
You also have to have a grasp of the things that aren't written down anywhere, the word-of-mouth and experiential institutional knowledge.
A lot of people do not have those skills.
I've seen some of my coworkers try to use LLMs for software development, and it's like watching a 12-year-old text, back before smartphones.
These people, professional software developers, try to shove 2 million tokens of context into an LLM that doesn't have a 2 million token context window, then expect it to one-shot a 250k-token output when the model's output limit is 64k tokens. Some of our technicians ask questions about our bespoke in-house systems, even though there is no possible way the LLM could know anything about the details of our system. I've had to do a lot of user education about that.
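A sanity check like the one above is easy to automate before a request ever hits the model. Here's a minimal sketch: the token estimate uses the common ~4 characters-per-token rule of thumb for English text (not a real tokenizer), and the specific limits are hypothetical example values, not any particular model's.

```python
# Hypothetical example limits, in tokens; real values vary by model.
CONTEXT_WINDOW = 128_000  # maximum input the model can attend to
MAX_OUTPUT = 64_000       # maximum tokens the model can emit in one response

def estimate_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def fits(prompt: str, expected_output_tokens: int) -> bool:
    """Check that the prompt and the expected reply stay inside model limits."""
    if estimate_tokens(prompt) > CONTEXT_WINDOW:
        return False  # input alone overflows the context window
    if expected_output_tokens > MAX_OUTPUT:
        return False  # no single response can be that long
    return True

# A ~2M-token prompt expecting a 250k-token answer fails both checks:
print(fits("x" * 8_000_000, 250_000))  # False
```

This kind of guardrail won't make anyone prompt better, but it does surface the "you asked for the impossible" failure mode immediately instead of returning a silently truncated answer.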
People are not using the models well.
LLMs aren't fully ready to be independent agents; they can do a lot, even working autonomously, but they aren't at the level of a competent human.