Leading AI Companies Get Poor Safety Grades | A new report from the Future of Life Institute handed out mostly Ds and Fs

https://spectrum.ieee.org/ai-safety


  1. MetaKnowing

    “The purpose of this is not to shame anybody,” says [Max Tegmark](https://spectrum.ieee.org/interview-max-tegmark-on-superintelligent-ai-cosmic-apocalypse-and-life-3-0), an MIT physics professor and president of the [Future of Life Institute](https://futureoflife.org/), which put out the report. “It’s to provide incentives for companies to improve.”

    He hopes that company executives will view the index the way universities view the U.S. News & World Report rankings: they may not enjoy being graded, but if the grades are out there and getting attention, they’ll feel driven to do better next year.

    He also hopes to help researchers working on those companies’ safety teams. If a company isn’t feeling external pressure to meet safety standards, Tegmark says, “then other people in the company will just view you as a nuisance, someone who’s trying to slow things down and throw gravel in the machinery.” But if those safety researchers are suddenly responsible for improving the company’s reputation, they’ll get resources, respect, and influence.

    The grades were given by seven independent reviewers, including big names like UC Berkeley professor Stuart Russell and Turing Award winner [Yoshua Bengio](https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/), who have said that superintelligent AI could pose an [existential risk](https://www.safe.ai/work/statement-on-ai-risk) to humanity.

    The Index graded the companies on how well they’re doing in six categories: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication.

    All six companies scored particularly badly on their [existential safety](https://spectrum.ieee.org/artificial-general-intelligence) strategies.
