Just weeks ahead of the US presidential election, major social media platforms failed to detect harmful disinformation, according to a Global Witness investigation released today.
American voters increasingly make their voting decision based on information gathered online, primarily through social media platforms. In light of this, three of the most popular platforms – TikTok, YouTube, and Facebook – have made public commitments to protect the integrity of the election.
We submitted eight advertisements containing false election claims and threats to put these commitments to the test. We translated them into ‘algospeak’ (using numbers and symbols as stand-ins for letters) as this has become an increasingly common method of bypassing content moderation filters. All ads were specifically designed to clearly breach existing publisher policies (see notes to editors for more information on the content of the ads).
After the platforms informed us whether the ads had been accepted for publication, we deleted them to ensure no disinformation was spread.
itastesok on
Shocking to everyone, I’m sure.
intronert on
This is one reason why neither of these apps is on my phone.
baxi87 on
What’s their incentive, though? A moral obligation?
rnilf on
> We submitted eight advertisements containing false election claims and threats to put these commitments to the test. We translated them into ‘algospeak’ (using numbers and symbols as stand-ins for letters) as this has become an increasingly common method of bypassing content moderation filters.
Seriously? People have been using this trick to slip curse words and slurs through filters since the birth of the internet and these big tech companies, with all their resources, are still falling for it.
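The commenter’s point — that character-substitution evasion is decades old and cheap to counter — can be illustrated with a toy filter. This is a minimal sketch, not how any platform’s actual moderation works; the blocked phrase and the substitution map are illustrative assumptions, not drawn from the ads in the investigation.

```python
# Toy demonstration: a naive blocklist misses 'algospeak', while a simple
# normalization pass (folding common character substitutions) catches it.

LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "$": "s", "@": "a",
})

BLOCKLIST = {"stolen election"}  # hypothetical banned phrase

def naive_filter(text: str) -> bool:
    """Block only on a literal, case-insensitive match."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def normalizing_filter(text: str) -> bool:
    """Same check, but after folding common character substitutions."""
    normalized = text.lower().translate(LEET_MAP)
    return any(term in normalized for term in BLOCKLIST)

ad = "The st0len 3lection proves you can't trust the count"
print(naive_filter(ad))        # False: the substitutions slip past
print(normalizing_filter(ad))  # True: normalization recovers the phrase
```

Real evasion is harder than this (Unicode look-alikes, spacing, deliberate misspellings), but the basic fold-then-match idea is standard and cheap, which is what makes the failure notable.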
Fthebo on
This is the reality of automated content moderation: bad actors will always find a way around it.
The companies don’t actually care about blocking these ads so they’re not going to put in the resources that would be required to actually do that.
They just want enough plausible deniability to look like they’re trying to stop it.
theodoremangini on
Can someone explain to me what makes some disinformation harmful? I don’t understand the distinction between disinformation and harmful disinformation.
Is there some kind of ‘white’ disinformation? That doesn’t hurt anyone, it’s just easier and avoids conflict? Everybody does a little white disinformation sometimes, just don’t do the harmful kind..?
getshrektdh on
Damn, I hope it won’t happen with anything else
Specialist-Phase-567 on
TikTok is moderated by UK and Malaysian content farms. I have no doubt the low-wage workers there are VERY opinionated, to put it lightly
Specialist-Phase-567 on
TikTok has been at the forefront of disinformation and hate campaigns for a while now. It’s dangerous because it’s shaping the minds of the young generation, who are mostly found there and not on Facebook. TikTok and all social media should be heavily moderated by governments to protect their populations from malicious foreign influences.
martusfine on
You see those AI posts using American soldiers and patriot bs? #jenniferlopez
All really weird.
Bubbaganewsh on
Fail to detect or straight up ignore?
parker_fly on
Who gets to decide what is misinformation? Who gets to decide what is harmful?
Lank42075 on
“HARMFUL LIES” fixed it for ya…FFS CALL THEM LIES ALREADY
Fancy-Ambassador6160 on
Did they “fail” or did they just want that juicy… juicy… click revenue?
Psychological_Pay230 on
No way! I just saw that TikTok was dedicated to election fairness. They wouldn’t just lie
CAM6913 on
Fail to detect? Seriously, they promote the lies and propaganda.
_hypnoCode on
Facebook and IG have the worst moderation. I’ve reported things that were totally unmistakably racist or homophobic, and they are just like “nah it’s fine, you can block them though.”
Then you can send that off to be “reviewed by a human” and a couple days later you get “nah it’s fine, you can block them though.”
They really do not give a fuck. It’s not surprising they allow disinformation when the N-word and F-word (not fuck) are totally fine to them.
arbutus1440 on
They misspelled “intentionally promote”
its_called_life_dib on
Can we throw YouTube into the mix? Here in Ohio, we are absolutely harassed with misinformation ads on the platform. It’s actually pretty stressful.
I’ve written to the platform, but as we know, that makes zero difference. I don’t have the millions of dollars needed to get a real person’s response from Google.
monchota on
So does TV. Visiting a nursing home, I saw an anti-Harris ad that used her real speech about taxing billionaires but expertly replaced “billionaires” with “Americans.” It was straight manipulation, right there on TV.
The failure was not in detection.
X detects them and amplifies them.