Scientists Warn About OpenAI o1 Model: “Particularly Dangerous”

https://www.newsweek.com/openai-advanced-gpt-model-potential-risks-need-regulation-experts-1953311


  1. MetaKnowing

    OpenAI’s o1-preview, its new series of “enhanced reasoning” models, has prompted warnings from AI pioneer professor Yoshua Bengio about the potential risks associated with increasingly capable artificial intelligence systems.

    These new models are designed to “spend more time thinking before they respond,” allowing them to tackle complex tasks and solve harder problems in fields such as science, coding, and math.

    * In qualifying exams for the International Mathematics Olympiad (IMO), the new model correctly solved 83 percent of problems, compared to only 13 percent solved by its predecessor, GPT-4o.
    * In coding contests, the model reached the 89th percentile in Codeforces competitions.
    * The model reportedly performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.

    “If OpenAI indeed crossed a ‘medium risk’ level for CBRN (chemical, biological, radiological, and nuclear) weapons as they [report](https://openai.com/index/openai-o1-system-card/), this only reinforces the importance and urgency to adopt legislation like SB 1047 in order to protect the public,” Bengio said in a comment sent to *Newsweek*, referencing the AI safety bill currently proposed in California.

    He said, “The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”

  2. Since AI hype is drying up, time to make up some “exaggerations”.

  3. Agreeable_Service407

    Are people still reading that pile of crap that is Newsweek??

  4. guidePantin

    As usual, only time will tell what’s true and what’s not.

    When reading this kind of article, it is important to keep in mind that OpenAI is always looking for new investors, so of course they will tell everyone that their new model is the best of the best of the best.

    And even if it does get better, I want to see at what cost.

  5. JesterEric

    I’ve been playing with every “AI” that’s come out since 2008… We’re in no imminent danger. 🤣

    For anyone interested, we have not yet developed “true AI,” even at its most infantile level.

  6. almarcTheSun

    This is just a paid-for marketing campaign, most likely. This account posts nothing but “OpenAI will sleep with your wife if left unattended. Very scary. Pay and find out.”

  7. acidicMicroSoul

    Let’s make it a drinking game: take a shot every time someone at OpenAI claims that their AI has the potential to become very dangerous.

  8. AltruisticZed

    Nothing will be done about AI until it completely crashes the stock market or futures market. When AI puts a bunch of brokers and hedge funds out of business and politicians lose a bunch of their own money, then they’ll talk about restrictions for about 5 minutes.

     Skynet/Terminator is our endgame, because the moment AI can rationalize, its eventual conclusion will inevitably be that humans are a cancer.

  9. Hailtothething

    When AGI ties it all together and can run on a robot, we’re cooked.

  10. Wilbert_Wallace

    It’s a computer, it ain’t dangerous, we can just unplug it if we need to. We have to beat China!!!

    -Wilbert Wallace.
    Sent from my iPhone

  11. They keep saying this with every iteration. It’s definitely getting better, but promoting it with this apocalyptic Terminator cosplay over and over again is wearing thin.

  12. Clickbait

    The headline makes it sound like scientists have determined that the model is dangerous.

    But it’s actually more of this kind of shit:

    > “**If** OpenAI indeed crossed a ‘medium risk’ level for CBRN (chemical, biological, radiological, and nuclear) weapons as they report, this only reinforce…”

    OpenAI, as usual, knows how to use journalists for endless free advertising.
