ChatGPT is more empathetic than human doctors when answering patients' questions, according to a study

The popular ChatGPT chatbot continues to demonstrate new capabilities in the field of medicine. A study in the United States involving health professionals has now found that this artificial intelligence (AI) system outperforms doctors at providing empathetic, high-quality responses to written questions from patients.

Although AI will not replace doctors, the findings suggest that professionals working together with technologies like ChatGPT could “revolutionize medicine,” says the University of California San Diego, which conducted the study, published in JAMA Internal Medicine.

According to the results, the panel of professionals who evaluated the responses preferred the AI’s answers 79% of the time and rated them as higher quality and more empathetic.

ChatGPT is an advanced language model that uses artificial intelligence to answer user questions. Photo: Pexels

A blind experiment

The team, led by John W. Ayers, set out to find out if ChatGPT can accurately answer the questions that patients send to their doctors.

If so, AI models could be integrated into healthcare systems to improve physician responses to patient-submitted questions, especially now that the COVID-19 pandemic has accelerated virtual care, thereby alleviating the growing burden on physicians.

To obtain a large and diverse sample of patient questions and physician responses that did not contain personally identifiable information, the team turned to AskDocs, a forum on the Reddit platform.

In this group, users post questions that are answered by verified healthcare professionals. Although anyone can reply, moderators check the professionals’ credentials, and answers display the respondent’s level of expertise, explains a statement from the university.

The team randomly selected 195 AskDocs exchanges in which a verified doctor answered a public question, then fed each original question to ChatGPT and asked it to compose an answer.

A panel of three health professionals evaluated each question and its corresponding answers without knowing whether each answer came from a doctor or from ChatGPT. They compared the responses on information quality and empathy, noting which they preferred.

Graph showing the quality (left) and empathy (right) of responses from ChatGPT (green) and from doctors (blue); scores increase from left to right. Photo: University of California San Diego

More empathetic and higher quality responses

The panel of evaluators preferred ChatGPT’s responses to the doctors’ in 79% of cases.

“ChatGPT messages responded with nuanced and accurate information, often addressing more aspects of the patient’s questions than the physician’s responses,” said study co-author Jessica Kelley.

In addition, the quality of ChatGPT’s responses was significantly higher than that of the physicians: the proportion of responses rated good or very good was 3.6 times greater for ChatGPT.

The responses were also more empathetic: the proportion rated empathetic or very empathetic was 9.8 times greater for the AI.

The goal, the authors point out, is for doctors to take advantage of this AI in their day-to-day work, not for it to replace them.

“These results suggest that tools like ChatGPT can efficiently write high-quality, personalized medical advice for review by physicians,” says Christopher Longhurst of UC San Diego Health.