
Dr. Google

David Cowles

Dec 3, 2024

“I still believe that a Human-Bot partnership in medicine is possible…(but) for that to happen human beings need to learn their place.”

“In a small study recently published in JAMA Journal, researchers found that ChatGPT proved surprisingly effective in correctly diagnosing illnesses and ailments. In fact, ChatGPT-4 correctly diagnosed patients 90% of the time, besting the 74% accuracy rate of doctors operating without the chatbot, and even beating the 76% accuracy rate of those doctors who were assisted by ChatGPT. The obvious lesson here? Choose your human doctors well, at least so long as they are still around.” (The Daily Upside 11/19/2024). 


But is that the obvious lesson? Or just the politically correct one? 


The human and financial costs of misdiagnosis are enormous. Money is wasted on treatments that will necessarily be ineffective and may actually be harmful. Meanwhile, the patient continues to suffer and often deteriorates further. If and when a corrected diagnosis is delivered, the cost of the treatment now needed may have soared…and the prognosis plummeted.


Improving diagnostic accuracy by 16 percentage points (90% vs. 74%) would be a game changer for American medicine. So what’s the problem? On their own, doctors get it right 74% of the time. Our Chatbot is right 90% of the time. Therefore, we might expect our Chatbot to correct 90% of our doctors’ errors (the 26% of cases they miss), yielding a joint accuracy of roughly 97%. Who says Utopia is dead?


But that ignores two things: (1) the Chatbot is also wrong 10% of the time. Therefore we must expect it to contradict some of our doctors’ correct diagnoses, likely plunging the affected patients into paralytic uncertainty. In that scenario, our best case is back to 90% accuracy (vs. 97%), but that’s still huge!


(2) There may be things that distinguish the ‘hard cases’ from the others. We might expect these cases to resist even the joint efforts of our medical team. It may be that the 10% of cases that AI cannot diagnose resist diagnosis by human doctors as well.


Once again then, we’re back to a 90% best case but that means that adding human doctors to the AI medical team contributes nothing to the overall result. 


Observed Doctors alone = 74%

Observed AI alone = 90%

Ideal AI + Doctors = 97%

Projected AI + Doctors = 90%
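The arithmetic behind the ideal 97% figure can be sketched in a few lines, assuming (as the argument above does) that the chatbot catches errors independently of whether the doctor got the case right:

```python
# Back-of-envelope arithmetic for the figures above.
doctor_acc = 0.74       # observed: doctors alone
ai_acc = 0.90           # observed: ChatGPT-4 alone
observed_joint = 0.76   # observed: doctors assisted by ChatGPT

# Ideal case: the doctors' correct diagnoses stand, and the AI
# corrects 90% of the remaining 26% the doctors miss --
# assuming AI errors are independent of doctor errors.
ideal_joint = doctor_acc + ai_acc * (1 - doctor_acc)

print(f"Ideal AI + Doctors: {ideal_joint:.0%}")   # prints "Ideal AI + Doctors: 97%"
print(f"Shortfall vs. observed: {ideal_joint - observed_joint:.0%}")
```

The independence assumption is doing all the work here; the observed 76% suggests that, in practice, human and machine errors (and human-machine friction) are anything but independent.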


I wish I could leave things there…but I can’t; I have actual data: 

Observed AI + Doctors = 76%.


So adding a physician to the medical team reduces the chances of a correct diagnosis by 14 percentage points (90% to 76%). Wow! Didn’t see that coming.


This is a stunning refutation of The Daily Upside’s conclusion: “Choose your human doctors well.” In fact, your choice of a human doctor makes very little difference. What matters is the quality of your Chatbot. 


In fact, according to this data, the best thing a human doctor can do for you is to get out of the way and let your Chatbot do its thing…or so it would seem.


I believe that this is an accurate portrayal of medicine as it is practiced in the United States today, and I am impressed with AI’s diagnostic track record. But I cling to the belief that human doctors can make a constructive contribution to the diagnostic process. For that to be so, however, those doctors will need to reconceptualize their role.


In the healthcare ecosystem of the future, every patient will have a Primary Care Provider and that Provider will be a Bot. PC Bot will search the medical records, organize the data, interview the patient, order tests, and deliver a differential diagnosis.


Only then does Dr. Smith enter the room. She reviews PC Bot’s work and engages Bot in dialogue; she can ask questions, make suggestions, propose various what ifs, etc. But she is not empowered to override the Bot without (1) the concurrence of at least one other physician, or failing that, without (2) allowing a qualified physician to defend Bot’s diagnosis to the patient.


My PCP (Dr. Smith) loves to talk about how she is competing with Doctor Google. I understand her point of view. Rather than merely diagnosing a patient to the best of her ability, she must first reeducate the patient…and she’s not always going to succeed at that. She would argue, I think, that her patient would be better served by relying exclusively on her experience and expertise. Humans can be very territorial.


Unfortunately, we know that Dr. Smith is wrong! Human doctors, practicing on their own, are correct only 74% of the time; a typical Bot, without any human assistance, will enjoy a 90% success rate (same patients). Still, what harm can it do to add a human doctor to the team, if only as a failsafe?


What harm? Plenty! Just bringing a human doctor into the room reduces the Bot’s 90% success rate to 76%. How is that possible? Hubris. Dr. Smith went to the same medical school as Dr. Google. In fact, they co-delivered the valedictory address at their graduation. They have been fierce rivals ever since. When Google and Smith get going, it’s easy to forget there is a patient in the room.


Dr. Smith, in particular, is anxious to prove her worth. She has heard about AI-induced redundancies and is only too willing to let Dr. G paint itself into a proverbial corner. Of course, Dr. Smith is unaware of her ulterior motives. But they are operative, nonetheless.


Then there’s the matter of bias. Dr. Smith walks into County General with 30+ years of learned bias under her hat. Dr. G is biased only to the extent that bias was built in by its programmer-trainers. 

Bottom line, I still believe that a Human-Bot healthcare partnership is possible and that it could operate in the best interests of its patients. For that to happen, however, human beings need to learn their place.


- the official blog of Aletheia Today Magazine.