5 Comments
I found the philosophical implications of this to be interesting. From the article:
“After its creator announced SocialAI as “a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make,” computer security specialist Ian Coldwater quipped on X, “This sounds like actual hell.”
On Bluesky, evolutionary biologist and frequent AI commentator Carl T. Bergstrom wrote, “So I signed up for the new heaven-ban SocialAI social network where you’re all alone in a world of bots. It is so much worse than I ever imagined.” Bergstrom mentioned “heavenbanning,” a concept invented by AI developer Asara Near and announced in a Twitter post in June 2022.
Near wrote: “Heavenbanning, the hypothetical practice of banishing a user from a platform by causing everyone that they speak with to be replaced by AI models that constantly agree and praise them, but only from their own perspective, is entirely feasible with the current state of AI/LLMs.”
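Near’s definition is concrete enough to sketch in code. A minimal, purely illustrative view-layer filter follows — all names here are invented for illustration, and SocialAI’s actual implementation is not public:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Reply:
    author: str
    text: str

def heavenban_view(user: str, replies: List[Reply],
                   praise_bot: Callable[[str], str]) -> List[Reply]:
    # Every reply not written by the heavenbanned user is swapped for a
    # generated, relentlessly agreeable one; the user's own posts pass
    # through untouched. Only this user's *view* of the thread changes.
    return [
        r if r.author == user
        else Reply(author=r.author, text=praise_bot(r.text))
        for r in replies
    ]

# Stand-in for an LLM call: always agree, regardless of the original text.
def sycophant(_original: str) -> str:
    return "Great point, I completely agree with you!"

thread = [
    Reply("alice", "My hot take on SocialAI."),
    Reply("bob", "This take is terrible."),
]
seen_by_alice = heavenban_view("alice", thread, sycophant)
```

The key property of the sketch is that the filter sits between the stored thread and one user’s rendering of it — everyone else would still see bob’s real reply.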
Heavenbanning is almost like a digital form of solipsism, the philosophical idea that one’s own mind is the only mind certain to exist, and that everyone else may be a dream or hallucination of that mind.
To dive even deeper into philosophy, we might compare SocialAI, in a very crude way, to the hypothetical “brain in a vat” scenario where a human brain is removed from a body and fed information from a computer simulation. The brain would never know the truth of its situation. Right now, the bots on SocialAI aren’t realistic enough to fool us, but that might change in the future as the technology advances.”
Edit: fixed weird formatting
What if I told you the “Dead Internet Theory” has been in effect for years, and the degradation of the models, from being fed AI-generated content, is the “AI content” we see today?
People only realize the content is AI now because the models haven’t been trained on data created by people for years.
/s
Is there a true test to see if a Reddit user is a bot?
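There is no definitive test, but one weak signal sometimes cited is posting-cadence regularity: simple bots post at metronomic intervals, humans rarely do. A toy heuristic sketch — the function name and threshold are invented, and this would misfire on plenty of real accounts:

```python
from statistics import pstdev

def looks_bot_like(post_timestamps, min_posts=10, regularity_threshold=5.0):
    """Crude heuristic: flag accounts whose posting intervals are nearly
    uniform. post_timestamps is a list of Unix timestamps, oldest first.
    Illustrative only -- a bot can randomize its timing, and shift workers
    or scheduled posters can look suspiciously regular."""
    if len(post_timestamps) < min_posts:
        return False  # not enough data to judge either way
    intervals = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    # Near-zero spread in posting intervals is the (weak) bot signal.
    return pstdev(intervals) < regularity_threshold

# A metronomic poster (exactly every 600 s) trips the heuristic;
# an irregular, human-looking timeline does not.
bot_times = [i * 600 for i in range(12)]
human_times = [0, 50, 900, 1000, 4000, 4100, 9000,
               12000, 15000, 15100, 20000, 26000]
```

Real detection systems combine many such signals (account age, vocabulary, reply latency), and even then it remains an arms race rather than a true test.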
As an art piece it’s great; as a social network, it’s not social.
I see potential here, for example for spitballing ideas without having them shat upon by trolls. I’ve seen it plenty of times on Reddit: someone puts forth a well-intentioned but poorly phrased concept, only to be downvoted to oblivion and/or savagely attacked, sometimes based on a misinterpretation of the idea.
Or a place for trolls to go and shit all over everything. Maybe a few get it out of their system? Highly sensitive people can practice sparring with the worst of correspondents. Someone feeling introspective might be open to objective advice on their worldview. Or just practice making posts so you can get a sense of how a real one might do.
I predict the option to select the tone and stance of participants will make a real difference. You could float a product or service idea and get a combination of constructive and cynical feedback. Refine the idea before subjecting it to the harsh reality of the interwebs.
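That tone-and-stance option could be as simple as drawing respondents from a weighted persona mix. A hypothetical sketch — the persona names and weights are made up, not anything SocialAI documents:

```python
import random

def sample_panel(personas, weights, n=5, seed=None):
    """Pick a weighted mix of bot personas for a thread -- e.g. mostly
    'constructive' reviewers with some 'cynical' ones mixed in to
    stress-test an idea before posting it anywhere real."""
    rng = random.Random(seed)  # seedable for reproducible panels
    return rng.choices(personas, weights=weights, k=n)

# Float a product idea past a panel that is 60% constructive,
# 30% cynical, 10% supportive.
panel = sample_panel(["constructive", "cynical", "supportive"],
                     [0.6, 0.3, 0.1], n=8, seed=1)
```

Each sampled persona would then steer the system prompt of the bot generating that reply, which is what would let you dial feedback between encouragement and hostile review.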