“For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. In May, one departing employee refused [to](https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release) [sign](https://www.vox.com/future-perfect/351132/openai-vested-equity-nda-sam-altman-documents-employees) and [went public](https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html) in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began [criticizing OpenAI’s safety practices](https://x.com/janleike/status/1791498174659715494?lang=en), and a wave of articles emerged about its [broken](https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/) [promises](https://www.washingtonpost.com/technology/2024/07/12/openai-ai-safety-regulation-gpt4/).”
OpenAI has spent the last year mired in scandal. The company’s chief executive was briefly fired after the nonprofit board lost trust in him. [Whistleblowers alleged](https://www.washingtonpost.com/technology/2024/07/13/openai-safety-risks-whistleblower-sec/) to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal. [Safety researchers have left the company in droves](https://fortune.com/2024/08/26/openai-agi-safety-researchers-exodus/). Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders.
On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May in an attempt to undercut a rival’s publicity; after the release, employees found that the model had exceeded the company’s own standards for safety.
This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and [leading A.I. researchers have warned](https://www.safe.ai/work/statement-on-ai-risk) that the technology could lead to human extinction.
Earlier this month, OpenAI released a highly advanced new model. For the first time, experts concluded that the model could aid in the construction of a bioweapon more effectively than internet research alone. A third party hired by the company found that the new system demonstrated evidence of “power seeking” and “the basic capabilities needed to do simple in-context scheming.”
(The rest of the article goes into specific recommendations, e.g. federal whistleblower protections, mandatory disclosures, etc.)