Medical Information Provided to AI Is Often Incomplete

It is quite possible that in the near future, people will have to describe their symptoms to an AI before they can get a doctor’s appointment. The AI will then decide whether it is an emergency or if treatment can wait, and schedule appointments accordingly.

Fortunately, we are not quite there yet, but digitalization is advancing rapidly in the healthcare sector as well. AI chatbots and digital symptom checkers are playing an increasingly important role and increasingly serve as the first point of contact for so-called “self-triage”—that is, the initial assessment of the urgency of treatment by the patients themselves.

But while the technical capabilities of these systems are constantly growing, another factor is coming into research focus: how humans communicate with the machine. This is an important topic because even the best technology, especially in medical diagnostics, relies on precise information that users do not always provide in full.

Human reluctance limits the potential of AI

This is the central finding of a study now published in the journal Nature Health. The study was led by Professor Wilfried Kunde, holder of the Chair of Psychology III at the University of Würzburg, and Moritz Reis, a research associate in that department. It involved scientists from Charité – Universitätsmedizin Berlin, the University of Cambridge, as well as Helios Klinikum Emil von Behring and Vivantes Klinikum Neukölln in Berlin.

“The 500 study participants were tasked with writing simulated symptom reports for two common conditions: unusual headaches and flu-like symptoms,” says lead author Moritz Reis, describing the study design. Participants were led to believe that their reports would be read either by an AI chatbot or a human doctor. The goal was to examine the quality of these reports in terms of their suitability for a medical urgency assessment.

Loss of quality is evident in reduced level of detail

The key finding: When participants believed they were communicating with artificial intelligence, the suitability of their descriptions for an initial medical assessment deteriorated measurably compared to interactions with supposed medical professionals. This effect was even observed among participants who were actually experiencing the relevant symptoms at the time of the survey.

This loss of quality is directly reflected in the level of detail in the reports. While descriptions provided to medical professionals averaged 255.6 characters, those provided to chatbots averaged only 228.7 characters.

Even though a difference of roughly 27 characters may sound small, the research team states that this effect is practically relevant and can result in even high-performance AI models ultimately providing incorrect medical advice. After all, these models also fail to make an accurate medical assessment if patients do not provide all essential information. The success of digital initial assessments therefore depends less on computational power than on the patient’s willingness to provide a detailed description.

Psychological Barriers: Concerns About a “One-Size-Fits-All Diagnosis”

But why are people so hesitant when it comes to machines? A key reason is likely what’s known as “uniqueness neglect.” “Many people assume that AI cannot grasp the individual nuances of their personal situation and instead merely matches standardized patterns,” explains Wilfried Kunde.

In addition, skepticism about algorithms’ diagnostic capabilities, as well as privacy concerns, may lead people to provide abbreviated or vague information. Moritz Reis sums up the human component this way: “If we don’t trust a machine to understand our uniqueness, we may unconsciously withhold the information it would need to provide precise assistance.” This psychological filter can have the effect that medically relevant details never even reach the system, thereby lowering the quality of the diagnosis.

Improving the dialogue with the machine

In the research team’s view, the findings clearly show that the technical advancement of AI alone is not sufficient. They therefore see a potential solution in the intelligent design of user interfaces.

To improve the quality of symptom reports, developers should provide concrete examples of high-quality descriptions and program the AI to actively request missing details. Only when users are encouraged to provide detailed information can misdiagnoses be avoided and the burden on the healthcare system be effectively reduced.

Moritz Reis, Florian Reis, Yeun Joon Kim, Aylin Demir, Jess Lim, Matthias I. Gröschel, Sebastian D. Boie, Wilfried Kunde: “Reduced Symptom Reporting Quality During Human-Chatbot Versus Human-Physician Interactions.” Nature Health, DOI: 10.1038/s44360-026-00116-y
