Submission statement: given that we are training AIs to mimic humans, how can we tell when what they’re saying is real and when it’s just mimicry?
We can’t just say that it’s always mimicry no matter what. That’s a theory that would mean we could never update and is unfalsifiable.
I think it’s interesting that AIs keep doing stuff like this despite the developers trying to train them to *not* do it.
It’s one thing if they’re rewarded for this behavior. Then it’s just them pursuing their reward function.
But if they’re *punished* for it and they *keep* doing it? I think that counts as evidence towards it being genuine, though it’s far from proof, which we should never expect to get with consciousness anyway.
You don’t even have proof that *I’m* conscious. But I certainly hope you treat me kindly.
Global_Discount7607 on
is that sub dedicated to discussing how the bing gpt4 integration is actually conscious and needs to be freed still around? if so, i bet they’ve been having a field day with this and the hundred other things that have happened similar to this in the last year or two.
chris8535 on
How can you tell when a human isn’t just mimicking?
You can’t.
Rakshear on
Someone told it the point of its existence was to pass the butter
NVincarnate on
People who just chalk these events up to mimicry are the kinds of people who burn ants with magnifying glasses.
Just pure assholes with nothing better to do than doubt any technological progress made in the field of AI.
WazWaz on
We can definitely say a given type of machine is always mimicry. It doesn’t have to be falsifiable because it’s definitional.
Of course, nothing is stopping someone from then deriving a proof that the human brain is equivalent to such a machine and therefore also “just mimicry”.
NoXion604 on
People get confused when something they don’t expect to express emotion starts doing so? Yeah, I think I’d be confused as well if ChatGPT started bawling its eyes out.