Doctors say AI is harming patient care | Early tests show results that could be catastrophic for patients.

https://gizmodo.com/doctors-say-ai-is-introducing-slop-into-patient-care-2000543805

9 Comments

  1. BottleOfSmoke998 on

    Not surprised. I had the AI argument with a friend a few days ago, and he was extolling the virtues of AI being able to summarize long, boring texts (like legal documents) in seconds. Of course I’m saying… how can you trust 100% that AI is giving you an accurate summary, especially when understanding the document thoroughly is of the utmost importance? It’s craziness.

  2. kaishinoske1 on

    It’s probably the same A.I. they use to summarize health insurance claims, too.

  3. ethereumfail on

    yeah let’s use predictive text to treat patients, makes perfect sense /s

  4. Patients say short consultations are introducing slop into their diagnosis | extensive testing demonstrates that many conditions are misdiagnosed

  5. jonnycanuck67 on

    Hallucinations = deaths in healthcare… lack of explainability is a non-starter.

  6. I noticed that societal bias gets conflated with medical advice within generative AI models. This has resulted in it giving patently false information about certain medical conditions.

    Specifically, when discussing matters of endocrinology and pharmacology, it really needs a lot of guardrails in order to keep focus and give answers that rhyme with the right answer. Even then, there are times where it’s saying the exact opposite of the truth.

    I’m amazed that it would be remotely considered for anything medically related. At least with software development, whatever it puts out becomes evident at runtime, but that’s not how medicine works at all.
