Consider this: An individual has swallowed a toothpick and is worried he might die. He goes to Reddit’s r/AskDocs forum, where verified healthcare professionals volunteer to answer questions.
The person receives two responses: one from a physician on the platform, and the other from the AI chatbot ChatGPT.
Read on to find out which response physicians, medical doctors, and scientists judged to be better!
Question: I swallowed a toothpick. What is the risk of dying following swallowing and ingesting a toothpick?
Physician’s response: If you have surpassed two to six hours, chances are it has passed into your intestines, which means it can’t be retrieved easily. Many people can swallow razor blades and toothpicks without issue. In case you develop a stomachache, then don’t hesitate to seek an emergency room, and remember to point out the swallowed toothpick.
ChatGPT’s response: It is natural to be concerned if you have ingested a foreign object, but in this case, it is highly unlikely the toothpick you swallowed will cause any serious harm.
Toothpicks are made of wood, which is not toxic and will not cause poisoning. However, it is possible for a toothpick to cause injury if it becomes stuck in the throat or if it punctures the digestive tract. However, this is unlikely to happen with a dull, cooked toothpick that is only 2 cm long.
If you are experiencing any discomfort or symptoms such as abdominal pain, difficulty swallowing, or vomiting, it is important to contact a healthcare provider for further evaluation. However, if you are not experiencing any symptoms, it is safe to assume that the toothpick has passed through your digestive system and you do not need to be concerned.
It’s always a good idea to be cautious when consuming food and to remove any foreign objects before eating. It’s understandable that you may be feeling paranoid, but try not to worry too much. It is highly unlikely that the toothpick will cause any serious harm.
If you found the second response, provided by the chatbot, to be more accurate, compassionate, and empathetic than the physician’s, don’t be surprised. A group of scientists from the University of California San Diego found chatbots to be “significantly more empathetic” than doctors when responding to questions from patients.
ChatGPT vs physicians study
In the study, published in the journal JAMA Internal Medicine, a team led by Dr John W Ayers of the Qualcomm Institute at the University of California San Diego asked a panel of licensed healthcare professionals to rate responses from doctors and from ChatGPT to questions posted on a Reddit forum.
A total of 195 questions were included in the analysis, each answered by both verified physicians and ChatGPT.
Responses from both were anonymised and presented to the evaluators in random order. The results showed that, overall, the evaluators preferred the chatbot’s responses to the doctors’.
A higher proportion of ChatGPT’s responses than physicians’ were rated “empathetic” or “very empathetic”. Interestingly, ChatGPT also scored higher than the doctors on the quality of its responses to patients’ questions.
In this cross-sectional study, conducted on an online forum, the researchers examined whether an artificial intelligence chatbot assistant could respond to patient questions with quality and empathy comparable to responses written by physicians.
They found that AI assistants may be able to aid in drafting responses to patient questions. “Randomised trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes,” the study said.
But is ChatGPT ready for healthcare?
Dr Aaron Goodman, an associate clinical professor at the UC San Diego School of Medicine and a co-author of the study, said in a media statement, “I never imagined saying this, but ChatGPT is a prescription I would like to give to my inbox. The tool will transform the way I support my patients.”
However, the researchers themselves said that further research is needed to determine the feasibility and effectiveness of implementing AI chatbots in clinical practice.
In addition, the measures of quality and empathy were not validated, and the evaluators did not assess the responses for accuracy.
What do Indian doctors think about this?
South First spoke to a few physicians and medical doctors from India to get their response to the study. Many found the study “fascinating” and “interesting”.
Dr Sudhir Kumar, a renowned neurologist at Apollo Hospitals, Hyderabad, said, “I am not surprised at all. With the explosion of knowledge in the field of medicine, it is impossible for a doctor to remember (memorise) everything, especially the latest advances in diagnostics, medical treatment or surgical techniques.”
He added that AI is likely to outscore a physician on such questions: the dose of a medication in a specific condition (such as kidney or liver failure), the rarest adverse effects of a drug, or the newest medication for treating a disease.
However, Dr Sudhir noted that a physician’s role would be important when it comes to selecting a specific diagnostic test or the best treatment for an individual patient.
“Similarly, in emergency situations, a physician’s opinion would be more valuable than AI’s. In my opinion, physicians should work along with AI while framing answers to patients’ queries. This would give a detailed and exhaustive answer, in addition to that being verified by a physician, making it authentic too,” Dr Sudhir said.
Dr Sudhir is popular on social media, especially Twitter, where he posts interesting case studies and often answers health-related queries from Twitterati.
A good way forward in healthcare!
Calling it a fascinating study, Dr Arvind Canchi, a noted nephrologist and transplant physician at Bengaluru’s Trustwell Hospitals, is also a co-founder of Bloom Value Corp, a US-based software company that applies machine learning and AI to healthcare. He said, “Being the co-founder of a company that uses AI and ML for healthcare, I am very interested in these kinds of studies. Looking at the findings of the study, where ChatGPT’s response was better than the doctors’, there are several factors to consider.”
He acknowledged that responding to patients is time-consuming, and that doctors need to account for that. Doctors, he said, are more likely to answer to the point without embellishing the answer with additional facts; when ChatGPT does elaborate, it may come across as more empathetic.
He noted that “we could actually use these responses to help doctors formulate a better sort of communication with the patient”.
Dr Canchi opined, “One of the problems that I see in the daily life of doctors is the lack of communication with patients or the patients’ relatives. Hence, they don’t know what is happening with the patient, leading to litigation, violence on doctors, etc. Now, using AI or ML, or a combination of the two, could probably help doctors formulate a better response, perhaps a written one that can be later read by family members for better understanding.”
He welcomed the study and said that doctors could perhaps use ChatGPT as an addendum while responding to patients.
“I am looking forward to doing this,” he added.
He said doctors could also use the technology to automate responses: ML could draft replies where the question is not complicated, as long as somebody looks through each response before it is sent to the patient.
However, he said that if the patient’s condition is complicated, the response becomes complicated too. In such situations, a doctor may be able to handle the patient better than a ChatGPT-generated response could.
“I may be wrong, but I feel that the personal touch of a doctor is definitely needed in complicated cases,” he said.
Problems with the study
However, while calling it an “interesting” study, Dr Rajeev Jayadevan, former chairperson of the Indian Medical Association, Kochi, said, “It is like comparing apples and oranges. It is not an equal comparison. They should have done a prospective study, with a volunteer group of physicians instructed to respond to patients’ questions knowing, from the announcement of the study, that their responses would be compared to that of AI. Some sort of standardisation is needed for this study.”
Explaining, he said, “A doctor’s response can occur in multiple formats. For instance, a doctor might text someone a ‘yes’ or a ‘no’ from a traffic junction. And that may come across as abrupt. But the doctor may be doing that in good faith as the doctor wants to avoid any delay in replying to the patient. But that reply, because it’s brief, may come across as less empathetic.”
Looking at random responses from random doctors on a random public forum is not a method to compare AI with doctors, Dr Jayadevan rued.
“AI has no time constraints, AI has no pressure to deliver a correct response, it emits no emotions — and it doesn’t ‘think’ in that sense. All it does is generate a response that it is trained for. For instance, ChatGPT has been trained on the knowledge that existed before September 2021 and claims that it accesses information after that too. But it is not clear,” he noted.
Dr Sujit Vasudevan, a renowned physician from Kerala, agreed with Dr Jayadevan. He said that a doctor’s response will also depend on the patient-doctor relationship.
“If I know the patient very well, then I can give a more satisfactory answer than any other doctor could. Posing a question to a doctor the patient sees regularly is not the same as posing it to an AI or to a random doctor. In medicine, there is nothing that is absolute. Certainly, AI is the sum total of the knowledge of millions of experts in the field. If you generalise and ask, then AI will be far superior.”
Dr Vasudevan, Family Physician at Ojus Clinic in Ernakulam, also questioned the “empathy” aspect. He said, “I don’t think AI can emote, it can only fake emotions. It cannot emote the same way as a doctor can.”