Using ChatGPT to make fake social media posts backfires on bad actors

https://arstechnica.com/tech-policy/2024/10/using-chatgpt-to-make-fake-social-media-posts-backfires-on-bad-actors/

3 Comments

  1. Using ChatGPT to research cyber threats has backfired on bad actors, OpenAI revealed in a report analyzing emerging trends in how AI is currently amplifying online security risks.

    Not only do ChatGPT prompts expose which platforms bad actors are targeting (in at least one case, they enabled OpenAI to link a covert influence campaign spanning X and Instagram for the first time), but they can also reveal new tools that threat actors are testing to evolve their deceptive activity online, OpenAI claimed.

  2. TasteslikeCandy on

    Who knew that trying to outsmart AI would turn into a game of digital whack-a-mole?

  3. Trilobyte141 on

    Yeeeeeeaaah…. this feels like some thinly veiled PR bullshit from OpenAI. “Yes, our barely-regulated toy is being used by bad actors to disrupt elections, scam people, and spam social media with bullshit, but it’s also making it easier to identify when that is happening!”

    Not ‘stopping’ it from happening, mind. Not making it harder. Just letting us know. Thanks, buddy. We totally hadn’t noticed. 

    ETA: I think my favorite part of the article is the bad actor who blatantly asked ChatGPT for advice on how to phish OpenAI’s own employees. The bad actor was stopped in their tracks!! … by the employee email spam filter. Wooo, go ChatGPT, you totally saved the day!