
AI chatbot outperforms doctors in written advice quality and empathy

ChatGPT demonstrates potential for assisting healthcare professionals in drafting patient communications, but caution is advised


Key points from the article:

AI language model ChatGPT shows better 'bedside manner' than some doctors in terms of written advice quality and empathy, according to a study.

The study used data from Reddit's AskDocs forum.

Researchers analyzed 195 exchanges from AskDocs; a panel of healthcare professionals preferred ChatGPT's responses 79% of the time.

ChatGPT's answers were rated good or very good in quality 79% of the time, compared with 22% for the doctors' responses.

The study does not claim ChatGPT can replace doctors, but calls for further research into how AI can assist physicians in response generation.

ChatGPT was optimized to be likable, which may have contributed to its higher empathy ratings.

Critics warn against relying on language models for factual information due to their tendency to generate made-up "facts."

Prof Anthony Cohn suggests using language models to draft responses is a reasonable early use case, but advises caution.

Cohn also warns against humans placing too much trust in machine responses and suggests testing reviewers' vigilance by occasionally inserting deliberately incorrect synthetic responses.

UC San Diego Health is beginning to use ChatGPT to draft high-quality, personalized medical advice for clinician review.

The research, conducted at UC San Diego, was published in JAMA Internal Medicine.

Mentioned in this article:


Anthony Cohn

Professor of Automated Reasoning at the University of Leeds

JAMA Internal Medicine

A journal covering general internal medicine and its subspecialties.
