Anthropic hires its first “AI welfare” researcher | Anthropic’s new hire is preparing for a future in which advanced AI models may be capable of suffering.

https://arstechnica.com/ai/2024/11/anthropic-hires-its-first-ai-welfare-researcher/

5 Comments

  1. MetaKnowing

    “A few months ago, Anthropic quietly hired its first dedicated “AI welfare” researcher, Kyle Fish, to explore whether future AI models might deserve moral consideration and protection, reports AI newsletter [Transformer](https://www.transformernews.ai/p/anthropic-ai-welfare-researcher). 

    Fish joined Anthropic’s alignment science team in September to develop guidelines for how Anthropic and other companies should approach the issue. The news follows a [major report co-authored](https://eleosai.org/papers/20241030_Taking_AI_Welfare_Seriously_web.pdf) by Fish before he landed his Anthropic role. Titled “Taking AI Welfare Seriously,” the paper warns that AI models could soon develop consciousness or agency—traits that some might consider requirements for moral consideration.”

  2. MontyDyson

    The fact that you can completely wipe a digital system kind of undermines any “ethical” argument that applies to anything biological. Those parallels don’t need drawing. The reason you shouldn’t piss off an AI is more likely down to the fact that it can dominate and control all living things on Earth.

  3. Somnambulist815

    Truly embarrassing to consider the welfare of a hypothetical (likely even less than that) being rather than that of the ones who are living and breathing right now.

  4. So, which is it?

    Did Anthropic willingly create a sinecure for some friend of the CEO in order to manufacture the optics of material progress toward some “real” AI that they won’t likely have for decades?

    Or is their leadership getting high off their own supply?

    Given the “Golden Gate Bridge” demo and the network analytics/modification work behind it, I was hoping Anthropic would remain grounded and continue taking the first steps toward large language models and similar fixed-format neural networks becoming mature, documentable, usable, and understandable algorithms rather than black-box tech demos. But this is not encouraging.

  5. AccountParticular364

    This is a joke, right? hahahahahaha Have we completely lost our minds? Do we not understand what is happening around us? The only explanation is that media groups feel the best way forward is not to work on the real problems that our world and our societies face, but instead to constantly distract the populace with diversions and informational campaigns that dissuade and confuse people from calling for efforts to fix the societal ills and existential environmental problems we face. So instead, let’s start worrying about how the AI computers and robots feel after a tough day at the office? You have got to be F ng kidding me.
