ChatGPT finds that nearly all "complementary healthcare" websites include misinformation. Researchers set ChatGPT loose on 725 websites representing 872 clinics and found that 97% of the sites included false or misleading claims, including some related to cancer treatment.

https://www.scimex.org/newsfeed/chatgpt-finds-nearly-all-complementary-healthcare-websites-include-misinformation

10 Comments

  1. I’ve linked to the news release in the post above. In this comment, for those interested, here’s the link to the peer-reviewed journal article:

    https://royalsocietypublishing.org/doi/10.1098/rsos.240698

    From the linked article:

    UK scientists tasked ChatGPT-4 with identifying misleading claims on ‘complementary’ and ‘alternative’ medicine websites, and found nearly all of them include false or misleading information. They set ChatGPT loose on 725 websites representing 872 clinics, finding that 97% of sites included false or misleading claims, including some related to cancer treatment. To check ChatGPT’s work, the team looked at a sample of 23 of the websites to see if they agreed with the artificial intelligence’s (AI) judgments, finding an even higher proportion of misinformation than the chatbot. In that sample, the humans identified an average of 39.5 claims likely to be judged false or misleading by advertising regulators, while the AI identified 36. Chatbots could help regulators spot and take down online health-related misinformation, the authors conclude.
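
    For anyone curious what this kind of screen looks like in practice, here’s a minimal sketch of asking an OpenAI chat model to flag potentially misleading claims on a scraped page. The prompt wording, model choice, and function name are my own illustrative assumptions, not the paper’s actual setup:

    ```python
    # Minimal sketch of an LLM-based claim screen. Prompt wording, model
    # name, and output format are illustrative assumptions, not the paper's.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PROMPT = (
        "You review health-related marketing text for advertising regulators. "
        "List every claim in the text below that is likely to be judged false "
        "or misleading because it conflicts with scientific consensus. "
        "Quote each claim and give a one-sentence reason.\n\nTEXT:\n{text}"
    )

    def flag_claims(page_text: str) -> str:
        """Return the model's list of potentially misleading claims for one page."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT.format(text=page_text)}],
            temperature=0,  # keep the screen as repeatable as possible
        )
        return response.choices[0].message.content
    ```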

  2. AllanfromWales1 on

    > ..claims likely to be judged false or misleading by advertising regulators..

    I’m not sure that’s the same as ‘false or misleading’ claims. It sounds more like ‘claims the scientific consensus wouldn’t agree with’. The science community isn’t always right. Usually, but not always.

  3. dragonknightzero on

    Is this like how ChatGPT will also just lie if you demand an answer?

  4. Why delegate this task to a known “hallucinator”? Google AI has just told me that the two main characters in the film Heaven’s Gate are Jim and Averill, and gives me other false and misleading claims practically every day.

  5. mcoombes314 on

    The prevalence of false or misleading claims isn’t something I think an LLM is qualified to determine. I get what the article is trying to say, and I’m not surprised by the premise, but asking ChatGPT if something is true or not seems like a bad idea.

  6. Erazzphoto on

    When I was diagnosed with cancer back in 2015, I quickly learned to avoid the internet about it. Your situation is different from anyone else’s, so the internet can breed false fear, but also false hope. The internet, even more so now than in the past, is no longer a source you can trust in any way

  7. HappyHHoovy on

    So I’ve actually READ the paper, and it seems pretty well thought out; the methodology is well documented. The AI was used to analyse the scraped websites and to compare them to its knowledge of “scientific consensus”. I’d say it does a decent job, and the authors do acknowledge the potential for hallucinations. They had 4 human reviewers analyse 23 webpages out of the 8545, and the humans scored 39.5 misleading claims per site vs the AI’s 36. Within those selected 23 sites, the human reviewers misidentified 4.8 claims and the AI misidentified 2 (a rough side-by-side of those numbers is sketched below).
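
    To put those figures side by side, here’s a toy back-of-the-envelope calculation using only the numbers quoted above (my own arithmetic, not code or scoring from the paper):

    ```python
    # Toy comparison of the per-site figures quoted above. The paper's actual
    # scoring is more involved; this just puts the numbers side by side.
    reviews = [
        ("human", 39.5, 4.8),  # (who, claims flagged per site, flagged in error)
        ("AI", 36, 2),
    ]

    for who, flagged, wrong in reviews:
        # rough "precision": share of flagged claims that held up under scrutiny
        print(f"{who}: {flagged} flagged, {wrong} in error "
              f"-> {(flagged - wrong) / flagged:.0%} held up")
    ```

    On those numbers, the AI’s flags hold up about 94% of the time vs roughly 88% for the humans, which is why I’d call it a trustworthy ballpark.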

    I’d say that would show the AI is at least giving a mostly trustworthy ballpark figure in relation to the human review

    [Here’s the link](https://osf.io/hnuqs) to the claims picked out as false and the AI’s reasoning, since I know barely any of you will actually take the time to read the paper before forming an opinion. It seemed to be very pedantic about the wording of claims, but I’d say that’s kind of what you want. If a customer interprets a claim at face value, and it’s written vaguely or in a way that obfuscates information through omission, then the AI is right to call it false or misleading

    If your first thought is to dismiss research like this, actually read the paper; you might learn something cool or change your view of the world. By letting your bias control your world view, you will miss out on lots of unique research! Reading the title, I certainly thought there was no way the AI could be trusted, but after reading the paper, there is definitely some plausibility to its capability. I still think this isn’t the best use for GPT, but it’s cool nonetheless.

  8. SleepySera on

    Honestly a better headline would be

    > “Researchers looked at 23 ‘alternative’ medicine websites and found that they include a ton of misinformation”

    If anything, the discrepancy between their findings and ChatGPT’s makes me *less* trusting in ChatGPT’s ability to notice misinformation.

  9. ChatGPT? The model that couldn’t count the number of R’s in “strawberry”? That ChatGPT?

    Yeah no. You need to validate *all* of this work. ChatGPT is not a judge of anything.

  10. the_red_scimitar on

    Really, isn’t asking an LLM that’s famous for “hallucinating” to judge which sites are truthful more than a little suspect?
