Tags
Aktuelle Nachrichten
America
Aus Aller Welt
Breaking News
Canada
DE
Deutsch
Deutschsprechenden
Europa
Europe
Global News
Internationale Nachrichten aus aller Welt
Japan
Japan News
Kanada
Konflikt
Korea
Krieg in der Ukraine
Latest news
Map
Nachrichten
News
News Japan
Polen
Russischer Überfall auf die Ukraine seit 2022
Science
South Korea
Ukraine
UkraineWarVideoReport
Ukraine War Video Report
Ukrainian Conflict
United Kingdom
United States
United States of America
US
USA
USA Politics
Vereinigte Königreich Großbritannien und Nordirland
Vereinigtes Königreich
Welt
Welt-Nachrichten
Weltnachrichten
Wissenschaft
World
World News
“After two years of congressional deliberation, we need more than careful analysis — we need decisive action. AI development is accelerating rapidly, with new and more powerful systems deployed every few months. Without new guardrails, these AI systems pose extreme risks to humanity’s future.
As “AI godfather” Yoshua Bengio explained, a sufficiently advanced AI would most likely try to take over the world economy or even “eliminate humans altogether” in the interest of its own self-preservation.
Last month, former Google CEO Eric Schmidt [warned](https://www.axios.com/2024/12/15/ai-dangers-computers-google-ceo) that when a computer system reaches a point where it can self-improve, “we seriously need to think about unplugging it.”
Schmidt and Bengio are not alone. The average result from a recent [survey](https://arxiv.org/abs/2401.02843) of 2,778 machine learning experts estimated a 16 percent chance that superintelligent AI would completely disempower humanity when it arrives.
These aren’t science fiction scenarios to be dismissed. They are risks supported by the best available science, and they demand serious policy responses.”
[the rest of the article shares specific policy suggestions like whistleblower protection, creating an agency to regulate frontier AI, and funding NIST to create industry standards]
AI development is moving so quickly. The average age of a US senator is 65. They have absolutely no idea what AI is or what its consequences will be. There have been a number of studies, including this one from Princeton (https://www.princeton.edu/~mgilens/idr.pdf), demonstrating that Congress ultimately writes the wishes of the wealthy into law, not laws that protect or work for the average American. With all of this considered, I have no faith in Congress to understand AI or to do anything about its potential negative effects on average people or the job market.
The incoming administration has made clear that it will not regulate AI or crypto and will repeal the meager protections that exist.
There will be no AI protections from the US government.
Congress still hasn’t figured out how the internet works, and half of them are geriatrics who are at risk of a bad fall. I don’t think they dropped the ball on AI; I think they failed to arrive for the game altogether. A technologically minded government is going to be important in the coming years, and I am starting to no longer see a path that gets us where we need to be in time.
LOL, I haven’t read the article yet but the first thought that came to mind after reading the title was:
*Has Congress ever NOT dropped the ball on protecting the public from corporate exploitation?* 😂
As they say… that’s a feature, not a bug.
My issue with “AI safety” reporting is that it focuses on hypothetical risks of a super-advanced AI that could cause enormous harm (which, in my view, is not imminent) rather than the concrete problems that are happening *now.* LLMs are being used by scam artists to take money from the elderly, they’re flooding social media and the Internet with spam, they’re displacing jobs in the creative community with lower-quality slop, and they’re causing a crisis in our colleges with widespread academic fraud among students. AI companies could easily do more to mitigate these problems, but they won’t because they’re cash-hungry and they make more money facilitating these activities. Most famously, OpenAI internally developed a tool that was extremely reliable at detecting OpenAI-generated text, but chose not to release it because their customers didn’t want it.
OpenAI, prior to Sam Altman, understood the potential harms of LLMs being used to impersonate humans and strictly limited their use. The release of ChatGPT bypassed these concerns (without the board’s knowledge), and not much is being done to prevent anything but the most obviously irresponsible uses. I think it would be good for Congress to step in and focus on the problems of *now* instead of just Skynet.
Congress never had the ball. Some of them literally list senior assisted-living facilities as their primary residence. I know it’s popular to blame this or that politician, but voters should be ashamed of themselves first and foremost.