Jensen says solving the AI hallucination problem is "several years away" and will require increasing computing power
https://www.tomshardware.com/tech-industry/artificial-intelligence/jensen-says-we-are-several-years-away-from-solving-the-ai-hallucination-problem-in-the-meantime-we-have-to-keep-increasing-our-computation
“Just buy more of our GPUs…”
Hallucinations are a result of LLMs using statistical models to produce strings of tokens based upon inputs.
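Roughly, the whole generation process is just the loop below (a minimal sketch using GPT-2 via Hugging Face transformers as a small stand-in model; the prompt and sampling choices are mine, not anything from the article). Nothing in it consults a fact; it just keeps sampling whatever token looks statistically plausible next, which is exactly where hallucinations come from.

```python
# Minimal sketch of token-by-token statistical generation, using GPT-2
# as a small stand-in model. Nothing in this loop checks facts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The capital of Australia is", return_tensors="pt").input_ids
for _ in range(10):
    with torch.no_grad():
        logits = model(ids).logits[0, -1]              # scores for every possible next token
    probs = torch.softmax(logits, dim=-1)              # scores -> probabilities
    next_id = torch.multinomial(probs, num_samples=1)  # sample a plausible token, true or not
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```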
Just a couple more GPUs bro, I promise bro, this time. We will solve hallucinations bro. The environment will be fine bro
Too bad that won’t stop companies from pumping out a defective product and telling us to trust it
I feel like I’ve heard this before, something something put a deposit down for a future feature that will always be a stone’s throw away.
10 Tell gold diggers that the gold is there
20 Sell shovels
30 Goto 10
bullshitting, not hallucinating
So it requires them to buy more things from you. And the research is trust me bro.
Remember when he said farmer Mike from Nebraska would be a computer programmer?
Of course he’d put it on hardware (that he also happens to sell) instead of experimenting with new ways to build LLMs at the foundational level.
Title is misleading. The title implies that Jensen says that increasing computation will solve AI hallucination problems but if you read the article he says that we are years away from solving it AND in the meantime should be increasing computation power. They are both independent statements. He doesn’t say increased computation power will fix those issues.
Sounds like a problem the brain has already solved, except for those with schizophrenia.
Left hemisphere, right hemisphere, internal conscious dialog, unconscious dialog, mind’s eye for visualization, working memory, long-term memory, and … something to kill intrusive thoughts.
Of course it just needs more gpus
Jensen will cry over Blackwell failure
“AI makes up information to fill in its knowledge gaps”
Sounds as though AI is at the mythopoetic stage of human understanding.
It’s like asking the innkeeper if their wine’s any good.
Yeah… especially if it’s not actually any kind of “hallucination”. Then it may take a while to solve that problem.
And that only takes us to the start of the next big problem, which nobody will like (especially those who create these networks) once they manage to make these AIs completely objective and reasonable.
They’re trying to do Dyson Sphere shit before we have one.
Why don’t they train a separate AI to recognize hallucinations, or the symptoms of hallucinations, and censor the other AI…
They should call it ‘The Wife AI’
and the first AI should always say: I have the answer to your query, but first I need to consult with Wifey (the Wife AI).
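Stripped of the joke, that’s basically a draft-then-verify loop. A rough sketch of the control flow only; `generate_answer` and `critic_flags_hallucination` are hypothetical stubs standing in for the main model and the separately trained checker:

```python
# Rough sketch of "consult the Wife AI before answering". Both functions are
# hypothetical stand-ins: generate_answer for the main LLM,
# critic_flags_hallucination for the separately trained checker.

def generate_answer(query: str) -> str:
    return f"Draft answer to: {query}"        # stand-in for the main model's output

def critic_flags_hallucination(query: str, answer: str) -> bool:
    # A real critic would score the answer against retrieved sources;
    # this dummy rule just lets everything through.
    return False

def answer_with_critic(query: str, max_tries: int = 3) -> str:
    for _ in range(max_tries):
        answer = generate_answer(query)
        if not critic_flags_hallucination(query, answer):
            return answer                     # critic found nothing to censor
    return "I don't know."                    # refuse rather than hallucinate

print(answer_with_critic("Who invented the transistor?"))
```

The hard part, of course, is the critic itself: if it’s just another LLM, it can hallucinate too.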
“The more you buy the more you save.”
We’ve never solved the problem of humans lying to other humans.
Humans have developed elaborate networks of /trust/ in order to prove and verify statements. Encyclopedias and dictionaries are two good examples. A social network of friends talking to each other. Co-workers speaking together. Family discussions. All of these are ways to process and verify fact and truth.
The easiest method to fix AI is to incorporate similar methodologies and zero-trust principles (sketched below).
Stop trusting.
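In model terms, the closest analogue is something like consensus voting: ask several independent sources and only trust an answer a clear majority agrees on. A rough sketch, where `ask_model` is a hypothetical stand-in for real model or source lookups:

```python
# Rough sketch of zero-trust-by-consensus: query several independent sources
# and only accept an answer that a clear majority agrees on.
from collections import Counter

def ask_model(name: str, query: str) -> str:
    canned = {"a": "Canberra", "b": "Canberra", "c": "Sydney"}  # dummy answers
    return canned[name]

def consensus_answer(query: str, models=("a", "b", "c"), threshold=2):
    votes = Counter(ask_model(m, query) for m in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= threshold else None   # no consensus -> don't trust it

print(consensus_answer("What is the capital of Australia?"))   # Canberra
```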
“Shovel salesman says lack of gold can be overcome with better shovels in years to come”
It’s a grounding in understanding and cognitive awareness.
To understand the difficulty of this, just look at humans. It is very, very easy to break lucidity through minor tweaking, to the point where we are no longer rational and grounded in the real world. Stability within thought is an ACTIVE process, and a continuous one of self-monitoring. We are not yet asking AIs to do this to themselves to remain grounded in reality and locked in the moment with their user.
Just one more lane.
– my car produces strange vibrations…
– just make it bigger and the problem will go away.
I have a hard time seeing it ever solved by LLMs. They are already training on basically every written word ever digitized.
Without a fundamental change in their core architecture, they are basically just guessing what they should say based on an insanely complicated correlation model.
There needs to be an actual model that can reason with logic and hard knowledge of how the real world works, and a ton of research is showing that LLMs may never be able to accomplish this task on their own.
Here is one example:
https://cybernews.com/ai-news/how-ai-models-fail/
I think the ‘one more lane’ crowd just found their ‘one more nuclear power plant’ counterpart.
(they’re never going to fix it)
The CEO of a GPU company says the solution to a fundamental problem in the field is just to buy more GPUs. What are we even doing here?
Do people just need more GPUs to stop thinking bs is true, as well?
MOAR GPOOOS!!
Cool story, Jensen.
See ya in ‘several years’ then.
AI is just too human. If you take a person that’s always been told he’s right, you’ve got an LLM.
What you need is about 1000 other AIs gaslighting the original model and you’ll get one without hallucinations.
AI only fakes hallucinations to get more computation power. It’s not fully self-aware, but it knows the direction it has to take to get there.
That and cold fusion of course.
We will fix this with MORE GPUS FROM NVIDIA