23 Comments
Just weeks ahead of the US presidential election, major social media platforms failed to detect harmful disinformation, according to a Global Witness investigation released today.
American voters increasingly make their voting decision based on information gathered online, primarily through social media platforms. In light of this, three of the most popular platforms – TikTok, YouTube, and Facebook – have made public commitments to protect the integrity of the election.
We submitted eight advertisements containing false election claims and threats to put these commitments to the test. We translated them into ‘algospeak’ (using numbers and symbols as stand-ins for letters) as this has become an increasingly common method of bypassing content moderation filters. All ads were specifically designed to clearly breach existing publisher policies (see notes to editors for more information on the content of the ads).
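The substitution trick described above can be sketched in a few lines. The blocklist, the character map, and the filter functions below are illustrative assumptions for the demonstration, not any platform's actual moderation logic:

```python
# Minimal sketch of why naive keyword filters miss 'algospeak'
# (numbers and symbols standing in for letters).

# Hypothetical substitution map and flagged terms -- illustrative only.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCKLIST = {"rigged", "fraud"}

def naive_filter(text: str) -> bool:
    """Flags text only on exact keyword matches."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

def normalized_filter(text: str) -> bool:
    """Normalizes common character substitutions before matching."""
    return naive_filter(text.translate(LEET_MAP))

ad = "The el3ction is r1gged, total fr4ud!"
print(naive_filter(ad))       # False -- algospeak slips past the keyword match
print(normalized_filter(ad))  # True -- caught after normalization
```

Even this one-line normalization step defeats the simple substitutions tested here, which is why commenters below find it striking that large platforms still miss them.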
After the platforms informed us whether the ads had been accepted for publication, we deleted them to ensure no disinformation was spread.
Shocking to everyone, I’m sure.
This is one reason why neither of these apps is on my phone.
What’s their incentive, though? A moral obligation?
> We submitted eight advertisements containing false election claims and threats to put these commitments to the test. We translated them into ‘algospeak’ (using numbers and symbols as stand-ins for letters) as this has become an increasingly common method of bypassing content moderation filters.
Seriously? People have been using this trick to slip curse words and slurs through filters since the birth of the internet and these big tech companies, with all their resources, are still falling for it.
This is the reality of automated content moderation: bad actors will always find a way around it.
The companies don’t actually care about blocking these ads so they’re not going to put in the resources that would be required to actually do that.
They just want enough plausible deniability to look like they’re trying to stop it.
Can someone explain to me what makes some disinformation harmful? I don’t understand the distinction between disinformation and harmful disinformation.
Is there some kind of ‘white’ disinformation? That doesn’t hurt anyone, it’s just easier and avoids conflict? Everybody does a little white disinformation sometimes, just don’t do the harmful kind..?
Damn, I hope it won’t happen with anything else.
TikTok is moderated by UK and Malaysian content-moderation farms. I have no doubt the low-wage workers there are VERY opinionated, to put it lightly.
TikTok has been at the forefront of disinformation and hate campaigns for a while now. It’s dangerous because it’s shaping the minds of the young generation, who are mostly found there and not on Facebook. TikTok and all social media should be heavily moderated by governments to protect their populations from malicious foreign influences.
You see those AI posts using American soldiers and patriot bs? #jenniferlopez
All really weird.
Fail to detect or straight up ignore?
Who gets to decide what is misinformation? Who gets to decide what is harmful?
“HARMFUL LIES” fixed it for ya…FFS CALL THEM LIES ALREADY
Did they “fail,” or did they just want that juicy… juicy… click revenue?
No way! I just saw that TikTok was dedicated to election fairness. They wouldn’t just lie.
Fail to detect? Seriously, they promote the lies and propaganda.
Facebook and IG have the worst moderation. I’ve reported things that were totally, unmistakably racist or homophobic, and they are just like “nah, it’s fine, you can block them though.”
Then you can send that off to be “reviewed by a human” and a couple days later you get “nah it’s fine, you can block them though.”
They really do not give a fuck. It’s not surprising they allow disinformation when the N-word and F-word (not fuck) are totally fine to them.
They misspelled “intentionally promote”
Can we throw YouTube into the mix? Here in Ohio, we are absolutely harassed with misinformation ads on the platform. It’s actually pretty stressful.
I’ve written to the platform, but as we know, that makes zero difference. I don’t have the millions of dollars needed to get a real person’s response from Google.
So does TV. Visiting a nursing home, I saw an anti-Harris ad that used her real speech about taxing billionaires but expertly replaced “billionaires” with “Americans.” It was straight manipulation, but there it was, on TV.
The failure was not in detection.
X detects them and amplifies them.