Tags
America
Breaking News
Canada
Conflict
DE
Europe
From Around the World
German
German Speakers
Global News
International News from Around the World
Japan
Japan News
Korea
Latest News
Map
News
News Japan
Russian Invasion of Ukraine since 2022
Science
South Korea
Ukraine
Ukraine War Video Report
UkraineWarVideoReport
Ukrainian Conflict
UkrainianConflict
United Kingdom
United Kingdom of Great Britain and Northern Ireland
United States
United States of America
US
USA
USA Politics
War in Ukraine
World
World News
5 Comments
“A few months ago, Anthropic quietly hired its first dedicated ‘AI welfare’ researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter [Transformer](https://www.transformernews.ai/p/anthropic-ai-welfare-researcher).
Fish joined Anthropic’s alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows a [major report co-authored](https://eleosai.org/papers/20241030_Taking_AI_Welfare_Seriously_web.pdf) by Fish before he landed his Anthropic role. Titled ‘Taking AI Welfare Seriously,’ the paper warns that AI models could soon develop consciousness or agency—traits that some might consider requirements for moral consideration.”
The fact that you can completely wipe a digital system kind of undermines any “ethical” argument that applies to anything biological; those parallels don’t need drawing. The reason you shouldn’t piss off an AI is more likely that it could dominate and control all living things on earth.
Truly embarrassing to consider the welfare of a hypothetical (likely even less than that) being over that of those who are living and breathing right now.
So, which is it?
Did Anthropic willingly create a sinecure for some friend of the CEO, in order to create the optics of material progress towards some “real” AI that they likely won’t have for decades?
Or is their leadership getting high off their own supply?
Given the “Golden Gate Bridge” demo and the network analytics/modification work behind it, I was hoping Anthropic would remain grounded and continue taking the first steps towards large language models and similar fixed-format neural networks becoming mature, documentable, usable, and understandable algorithms rather than black-box tech demos. But this is not encouraging.
This is a joke, right? Hahahaha. Have we completely lost our minds? Do we not understand what is happening around us? The only explanation is that media groups feel the best way forward is not to work on the real problems that our world and our societies face, but instead to constantly distract the populace with diversions and informational campaigns that dissuade and confuse people from calling for efforts to fix the societal ills and existential environmental problems we face. So instead, let’s start worrying about how the AI computers and robots feel after a tough day at the office? You have got to be F’ng kidding me.