Hacker pflanzt falsche Erinnerungen in ChatGPT, um Benutzerdaten dauerhaft zu stehlen | E-Mails, Dokumente und andere nicht vertrauenswürdige Inhalte können bösartige Erinnerungen pflanzen.
https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/
4 Comments
I’m sure this is the tip of the iceberg, a whole new world of exploits and hacking yet to be realized around AI.
Key bit of information:
> The attack isn’t possible through the ChatGPT web interface, thanks to an API OpenAI rolled out [last year](https://embracethered.com/blog/posts/2023/openai-data-exfiltration-first-mitigations-implemented/).
That is useful information to know! I giggled when I read “partial fix”:
> So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.
You can see more information here: [https://embracethered.com/blog/posts/2023/openai-data-exfiltration-first-mitigations-implemented/](https://embracethered.com/blog/posts/2023/openai-data-exfiltration-first-mitigations-implemented/)
This is why I always avoid putting sensitive information (or personal information) in any “AI Bot” tool. I am curious if the other large models (Gemini?) have the same issues, or if this is limited to ChatGPT…need to read more.
People are already stupid enough to believe that whatever ChatGPT tells them is gospel truth based in facts & reality, as opposed to just a statistically probable stringing together of letters. We need regulations around this mess yesterday … or rather ~5 years ago.
This will be said one day about Neurolink