OpenAI’s o1-preview, the first in its new series of “enhanced reasoning” models, has prompted warnings from AI pioneer Professor Yoshua Bengio about the potential risks associated with increasingly capable artificial intelligence systems.
These new models are designed to “spend more time thinking before they respond,” allowing them to tackle complex tasks and solve harder problems in fields such as science, coding, and math.
* In qualifying exams for the International Mathematics Olympiad (IMO), the new model correctly solved 83 percent of problems, compared to only 13 percent solved by its predecessor, GPT-4o.
* In coding contests, the model reached the 89th percentile in Codeforces competitions.
* The model reportedly performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology.
“If OpenAI indeed crossed a ‘medium risk’ level for CBRN (chemical, biological, radiological, and nuclear) weapons as they [report](https://openai.com/index/openai-o1-system-card/), this only reinforces the importance and urgency to adopt legislation like SB 1047 in order to protect the public,” Bengio said in a comment sent to *Newsweek*, referencing the AI safety bill currently proposed in California.
He said, “The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”
ISuckAtFunny
They said the same thing about the last two models
dasdas90
Since AI hype is drying out, time to make up some “exaggerations”.
Kinu4U
I hope AI cures cancer so everyone will be silent for a year.
Agreeable_Service407
Are people still reading that pile of crap that is Newsweek??
guidePantin
As usual, only time will allow us to see what’s true and what’s not.
When reading this kind of article, it is important to keep in mind that OpenAI is always looking for new investors, so of course they will tell everyone that their new model is the best of the best of the best.
And even if it gets better, I want to see at what cost.
JesterEric
I’ve been playing with every “AI” that’s come out since 2008… We’re in no imminent danger. 🤣
For anyone interested, we have not yet developed “true AI” even at its most infantile level.
almarcTheSun
This is just a paid-for marketing campaign, most likely. This account posts nothing but “OpenAI will sleep with your wife if left unattended. Very scary. Pay and find out.”
acidicMicroSoul
Let’s make it a drinking game: take a shot every time someone at OpenAI claims that their AI has the potential to become very dangerous.
Ok-Figure5775
Employment 5.0 is fast approaching. We are not ready for it.
Employment 5.0: The work of the future and the future of work https://www.sciencedirect.com/science/article/pii/S0160791X22002275
Nothing will be done about AI until it completely crashes the stock market or futures market. When AI puts a bunch of brokers and hedge funds out of business and politicians lose a bunch of their own money, then they’ll talk about restrictions for about 5 minutes.
Skynet/Terminator is our endgame, because the moment AI can rationalize, its inevitable conclusion will be that humans are a cancer.
Hailtothething
When AGI ties it all together and can run on a robot, we’re cooked.
Wilbert_Wallace
It’s a computer, it ain’t dangerous, we can just unplug it if we need to. We have to beat China!!!
-Wilbert Wallace.
Sent from my IPhone
Mogwai987
They keep saying this with every iteration. It’s definitely getting better, but promoting it with this apocalyptic Terminator cosplay over and over again is wearing thin.
nsfwtttt
Clickbait
The headline makes it sound like scientists have determined the model is dangerous.
But it’s actually more of this kind of shit:
> “**If** OpenAI indeed crossed a ‘medium risk’ level for CBRN (chemical, biological, radiological, and nuclear) weapons as they report, this only reinforce…”
OpenAI, as usual, knows how to use journalists for endless free advertising.