Jensen says solving the AI hallucination problem is “several years away” and requires ever-increasing computation

https://www.tomshardware.com/tech-industry/artificial-intelligence/jensen-says-we-are-several-years-away-from-solving-the-ai-hallucination-problem-in-the-meantime-we-have-to-keep-increasing-our-computation

34 Comments

  1. “Just buy more of our GPUs…”

    Hallucinations are a result of LLMs using statistical models to produce strings of tokens based upon inputs.
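
    The mechanism that comment describes fits in a few lines: generation samples from a probability distribution over tokens, and nothing in that loop checks whether the sampled token is true. A minimal sketch in Python, with a made-up toy distribution (the prompt and the logit values are invented purely for illustration):

    ```python
    import math
    import random

    # Toy next-token scores: the model rates continuations of
    # "The capital of Australia is" by plausibility, not by truth.
    logits = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 1.2, "Perth": 0.3}

    def sample_token(logits, temperature=1.0):
        # Softmax over the logits, then sample: the model emits whichever
        # token the dice land on. There is no fact-checking step anywhere.
        scaled = {t: v / temperature for t, v in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        probs = {t: math.exp(v) / z for t, v in scaled.items()}
        r, cum = random.random(), 0.0
        for token, p in probs.items():
            cum += p
            if r < cum:
                return token
        return token  # guard against floating-point rounding

    print(sample_token(logits))  # often "Sydney": fluent, wrong, "hallucinated"
    ```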

  2. Just a couple more GPUs bro, I promise bro, this time. We will solve hallucinations bro. The environment will be fine bro

  3. ReadditMan

    Too bad that won’t stop companies from pumping out a defective product and telling us to trust it

  4. I feel like I’ve heard this before, something something put a deposit down for a future feature that will always be a stone’s throw away.

  5. 10 Tell gold diggers that the gold is there

    20 Sell shovels

    30 Goto 10

  6. Zookeeper187

    So it requires them to buy more things from you. And the research is trust me bro.

    Remember when he said farmer Mike from Nebraska would be a computer programmer?

  7. Sushrit_Lawliet

    Of course he’d pin it on hardware (which he also happens to sell) instead of experimenting with new ways to build LLMs at the foundational level.

  8. Denjanzzzz

    The title is misleading. It implies that Jensen says increasing computation will solve AI hallucinations, but if you read the article he says that we are years away from solving it AND that in the meantime we should keep increasing computation power. Those are two independent statements. He doesn’t say increased computation will fix those issues.

  9. Sounds like a problem the brain has already solved, except for those with schizophrenia.

    Left hemisphere, right hemisphere, internal conscious dialog, unconscious dialog, mind’s eye for visualization, working memory, long-term memory, and … something to kill intrusive thoughts.

  10. silver_birch

    “AI makes up information to fill in its knowledge gaps”

    Sounds as though AI is at the mythopoetic stage of human understanding.

  11. variabledesign

    Yeah… especially if it’s not actually any kind of “hallucination”. Then it may take a while to solve that problem.

    And that only takes us to the start of the next big problem nobody will like, especially those who create these networks, once they manage to make these AIs completely objective and reasonable.

  12. ThatDucksWearingAHat

    They’re trying to do Dyson Sphere shit before we have one.

  13. DonutConfident7733

    Why don’t they train a separate AI to know all hallucinations or symptoms of hallucinations and censor the other AI…
    They should call it ‘The Wife AI’
    and the first AI should always say: I have the answer to your query, but first I need to consult with Wifey (Wife-AI).
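
    Underneath the joke is a real pattern: a second model vets the first model’s answer before the user sees it. A minimal sketch of that gatekeeper loop, where generate() and judge() are hypothetical stand-ins for two separate models:

    ```python
    import random

    def generate(prompt: str) -> str:
        # Stand-in for the primary model's (possibly hallucinated) draft.
        return random.choice(["Canberra", "Sydney"])

    def judge(prompt: str, answer: str) -> bool:
        # Stand-in for a second model trained to flag likely hallucinations.
        return answer == "Canberra"

    def answer_with_vetting(prompt: str, retries: int = 3) -> str:
        # "First I need to consult with Wifey": only vetted drafts go out.
        for _ in range(retries):
            draft = generate(prompt)
            if judge(prompt, draft):
                return draft
        return "I don't know."  # refuse rather than hallucinate

    print(answer_with_vetting("What is the capital of Australia?"))
    ```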

  14. We’ve never solved the problem of humans lying to other humans.

    Humans have developed elaborate networks of *trust* in order to vet and verify statements. Encyclopedias and dictionaries are two good examples. Friends in a social network talking to each other. Co-workers speaking together. Family discussions. All of these exist to process and verify fact and truth.

    The easiest method to fix AI is to incorporate similar methodologies and zero-trust principles.

    Stop trusting.
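
    That zero-trust idea maps onto a simple consensus check: ask several independent sources the same question and accept a claim only when a quorum agrees, refusing otherwise. A minimal sketch, where ask_model() is a hypothetical stand-in for querying independently trained models:

    ```python
    from collections import Counter

    def ask_model(i: int, prompt: str) -> str:
        # Stand-in for model i's answer; here two of three sources agree.
        return ["Canberra", "Canberra", "Sydney"][i % 3]

    def zero_trust_answer(prompt: str, n_models: int = 3,
                          quorum: float = 2 / 3) -> str:
        # Trust no single model: a claim counts as verified only when
        # enough independent sources corroborate it; otherwise reject it.
        votes = Counter(ask_model(i, prompt) for i in range(n_models))
        answer, count = votes.most_common(1)[0]
        return answer if count / n_models >= quorum else "unverified"

    print(zero_trust_answer("What is the capital of Australia?"))  # "Canberra"
    ```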

  15. “Shovel salesman says lack of gold can be overcome with better shovels in years to come”

  16. It’s a grounding in understanding and cognitive awareness.

    To understand the difficulty of this, just look at humans. It is very, very easy to break lucidity with minor tweaks, to the point where we are no longer rational and grounded in the real world. Stability within thought is an ACTIVE process and a continuous one of self-monitoring. We are not yet asking AI to do this to itself to remain grounded in reality and locked in the moment with its user.

  17. – my car produces strange vibrations…

    – just make it bigger and the problem will go away.

  18. ketamarine

    I have a hard time seeing it ever solved by LLMs. They are already training on basically every written word ever digitized.

    Like without a fundamental change in their core architecture, they are just basically guessing what they should say based on an insanely complicated correlation model.

    There needs to be an actual model that can reason with logic and hard knowledge of how the real world works, and a ton of research is showing that LLMs may never be able to accomplish this task on their own.

    Here is one example:

    https://cybernews.com/ai-news/how-ai-models-fail/

  19. DreamingMerc

    I think the ‘one more lane’ crowd just found their ‘one more nuclear power plant’ counterpart.

  20. The CEO of a GPU company says the solution to a fundamental problem in the field is just to buy more GPUs. What are we even doing here?

  21. BetImaginary4945

    AI is just too human. If you take a person who’s always been told he’s right, you’ve got an LLM.

    What you need is about 1000 other AIs gaslighting the original model and you’ll get one without hallucinations.

  22. AI only fakes hallucinations to get more computation power. It’s not fully self-aware, but it knows the direction it has to take to get there.
