Apple's study proves that LLM-based AI models are flawed because they cannot reason

https://appleinsider.com/articles/24/10/12/apples-study-proves-that-llm-based-ai-models-are-flawed-because-they-cannot-reason?utm_medium=rss

9 Comments

  1. thenewguyonreddit

    They never could reason; the only people who believed they could were laymen unfamiliar with how GPTs actually work.

    At their core, they are very fancy prediction and probability engines. That's it. They either predict the next word in a sentence or the next pixel in an image. Most of the time they are right; sometimes they are laughably wrong. Even calling them AI is a huge stretch. (See the first sketch after the thread.)

  2. I don’t disagree with the premise of the article, but when you’re testing an LLM “with a given math question,” you’re unlikely to get good results. (See the second sketch after the thread.)

  3. Turtle_Online

    The article makes no mention of GPT-4o1. I wonder if the study included the latest preview model from OpenAI, which aims to solve this.

  4. Divine_Kittens

    This is a significant problem, because as someone who effectively works in tech support, I can say the vast majority of humans do not have the ability to distill what they want, or what problem they are having, into concise questions with only the relevant info.

    It’s usually either “my phone isn’t working” or it’s a story so meandering that even Luis from *Ant-Man* would be saying “Get to the point!!!”

    This will be an increasingly important thing for AI researchers to figure out.

  5. Divine_Kittens

    Hence why LLMs are called *predictive* models, not *reasoning* models.
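
The first comment's "prediction and probability engine" description matches the mechanics: at each step the model turns raw scores (logits) over its vocabulary into a probability distribution and samples the next token. Below is a minimal sketch of that single step, assuming a toy four-word vocabulary and hand-written logits in place of a real network's output:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Toy example: the model has seen "The cat sat on the" and scores each
# candidate next token. These logits are made up for illustration; a real
# LLM computes them with a neural network over a vocabulary of ~100k tokens.
vocab  = ["mat", "roof", "moon", "theorem"]
logits = [4.0, 2.5, 1.0, -2.0]

probs = softmax(logits, temperature=0.8)
for tok, p in zip(vocab, probs):
    print(f"{tok:>8}: {p:.3f}")

# Sampling makes generation stochastic: usually "mat", occasionally something
# less likely -- which is why output is sometimes "laughably wrong" without
# the model ever checking its own logic.
print("next token:", random.choices(vocab, weights=probs, k=1)[0])
```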
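
On comment 2's point about "a given math question": the Apple study behind the article (GSM-Symbolic) probed exactly this by turning fixed benchmark questions into templates, re-sampling names and numbers, and measuring how accuracy shifts even though the underlying arithmetic is unchanged. A minimal sketch of that templating idea follows; the problem text, name pool, and value ranges are illustrative assumptions, not the study's actual data:

```python
import random

# A GSM8K-style word problem turned into a template, in the spirit of the
# GSM-Symbolic setup: the surface details vary, the math does not.
TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday, "
            "then gives away {c}. How many apples does {name} have left?")

NAMES = ["Sophie", "Liam", "Mei", "Omar"]  # illustrative name pool

def sample_variant(rng):
    a, b = rng.randint(5, 50), rng.randint(5, 50)
    c = rng.randint(1, a + b)  # keep the ground-truth answer non-negative
    question = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b, c=c)
    return question, a + b - c  # (question text, correct answer)

rng = random.Random(0)
for _ in range(3):
    q, answer = sample_variant(rng)
    print(q, "->", answer)

# A model that truly reasons should score the same on every variant; the
# study found accuracy drops when only these surface details change.
```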
