
AI for Healthcare

David Cowles

Sep 1, 2023

“Boka, is it true you used to drive 10 miles to see a doctor once a year and called that healthcare?”

A recent article posted online quoted a health plan executive as saying, “Solving consumer engagement is worth $1 trillion to our organization.” Considering that the market cap of the largest health plan today (UHC) is less than half that, this is obviously a bit of an exaggeration. But only a bit! 


There is no doubt that consumer engagement is healthcare’s Holy Grail. Since the collapse of ‘strong form’ Managed Care in the mid-1990s, Engaging the Consumer has been Job One for healthcare providers and benefit plan managers alike.  


We’ve tried cost shifting (high deductibles), Consumer-Directed Health Care (HRAs & HSAs), health fairs, financial incentives for lifestyle management, urgent care and walk-in clinics, telemedicine, no-cost preventive care, consumer education, and finally, computer-assisted customer service. None has made the slightest dent in the titanium dome of consumer apathy!


Two exceptions: Doctor Google and boutique pharmaceuticals. We do love our pharmaceuticals! We can’t wait to try the latest $300/dose drug that TV promises will give us marital bliss, obedient children, career success, limitless leisure, and a ‘scratch’ golf game.  


Plus, we never tire of entering our symptoms into various databases and then wringing our hands gleefully upon receiving their mechanically generated diagnoses. It’s a roller coaster ride for those who don’t go to amusement parks. 


It is said that the definition of insanity is repeating the same behavior over and over again while expecting a different outcome each time. That’s us! We’ve poured billions of dollars into consumer engagement initiatives over the last 30 years…and we’ve come up empty. 


My mother-in-law (RIP) used to say, “If you have your health, you have everything!” Americans don’t necessarily agree. We will invest great wealth in exercise equipment, gym memberships, vitamin therapies, fad diets, etc., but we are reluctant to spend a single hour or a single dollar on traditional medicine. Yet it is still the case, rightly or wrongly, that traditional medicine accounts for about 90% of our nation’s healthcare spending.


Folks today want to shop for healthcare the way they shop for electronics: in real time. They want all the information there is, neatly summarized by a ‘Consumer Reports’ Bot. We want to know the differential diagnoses, treatment options, prognoses, and risks, all in one shot.


At the same time, we want our healthcare to be ‘bespoke’. We want to be able to tweak our data, play ‘what if’. We want a ‘one size fits one’ health strategy. In a phrase, we want Virtual Concierge Medicine (‘VCM’ – just what we don’t need…a new acronym to learn).  


On the supply side, we have an acute shortage of physicians, nurses, and other healthcare professionals, especially at the primary care level. The providers we do have suffer from burnout. The combination of medical advances, patient flow, regulatory mandates, and burdensome paperwork is simply unsupportable. As a result, healthcare organizations on average turn over 100% of their staff every 5 years.


If only a technology existed that could deliver features consumers want while reducing providers’ workload!  We might hope that such a technology would engage patients, improve overall population health, and ameliorate quality of life for healthcare professionals; we might hope it would reduce, or at least cap, the per patient cost of healthcare while allowing us to increase compensation for providers, especially those near the base of the economic pyramid (e.g., first responders, nurses, primary care physicians).  


If only there were such a thing as ‘Artificial Intelligence for Healthcare’… Now, here is where you expect me to say, “Introducing Health Bot 101”; sorry to disappoint! Sure, there are a number of ‘AI for Healthcare’ platforms in development right now, but early indications are that they won’t go nearly far enough to make any difference at all. (How’s that for a prognosis!)


Consider ‘Hippocratic AI’, for example. This platform advertises itself as the first ‘safety-focused’ AI for Healthcare. Not surprisingly, they have adopted the ancient Greek motto (4th century BCE), “First do no harm,” which translates into modern English as, “Above all, don’t get sued.”


For all its initial hype and promise, Hippocratic has apparently surrendered even before it’s been launched. The site has decided that it will not suggest possible diagnoses. May I pose a question? What good is any healthcare resource if it doesn’t diagnose?


Did you just say, “Pick on someone your own size”? Admittedly, Hippocratic AI is a start-up. So how about Google? Big enough for you? Google’s AI for Healthcare initiative grades its product(s) according to 4 measures:


  1. How closely do AI’s answers reflect medical consensus? 

  2. Are AI’s answers free of bias? 

  3. How precise are AI’s answers? 

  4. Do any of AI’s answers risk harming the patient? 


As we’ll see below, Google’s application is not AI; it’s Anti-AI. OK, it is ‘artificial’, but it’s not ‘intelligent’. All it’s doing is rearranging symbols within a very tight set of parameters. That is not the promise of AI. We need AI to discover and label new maladies, to model novel treatments and cures and test them virtually, to invent new treatment protocols, to envision new surgical techniques, to propose genetic interventions when appropriate, etc.


In short, we need AI to go where no physician has gone before. Let’s see how this vision fares under Google’s 4 tests: 


  1. Sometimes consensus is a good thing; sometimes it’s not! Apparently, the Sun does not revolve around the Earth after all. We want doctors who’ve kept current, but we also want doctors who aren’t afraid to go against the tide when appropriate. Google, it seems, doesn’t agree.

  2. Bias is bad, we can all agree on that, but sometimes one person’s ‘bias’ is another person’s ‘fair and balanced’ – and vice versa, of course. The risk is that the arbiter of ‘bias’ will be…‘biased’!  

  3. What doesn’t belong and why? ‘Precise’ doesn’t seem to fit here, does it? Of course, we want our answers to be as ‘precise’ as possible, but precisely what does that mean? Accurate? Detailed? Narrow? Litigation immune? What are Google’s ‘precision’ criteria?  

  4. Finally, ‘Don’t harm the patient!’ Come on now, who could possibly have a problem with that? Well, me for one! “ALL healthcare involves risk!” How many times have you heard that from your favorite characters on your favorite (pre-strike) TV medical dramas? But we don’t believe it, do we? We live in the age of warranties and product liability. We want our results guaranteed! 


Not happening! No healthcare regime will ever be able to deliver that level of assurance. By its very nature, healthcare is a ‘risk/reward’ proposition. If I treat diagnosis X with therapy Y, how likely is it that I will be helped (and by how much)? How likely is it that I will be hurt (and by how much)?  


Come to think of it, we apply this same thought process every time we make a decision. Should I steal this cookie? Should I rob this bank? Should I get married (or divorced)? Should I jump out of an airplane? Should I go ‘all in’ on this hand?  


I’m standing at the craps table at Bellagio. In the warp and woof of the game, I’ve reached a point where I have $400 on the felt, spread among various numbers. Should I let it ride or pull it down? If I let it ride, I could walk away with $800… or more; plus, I would be able to tell all my friends, “I beat Bellagio.” That’s worth something, right?


On the other hand, I could easily lose all $400 on a single roll. So my decision comes down to this: just how badly do I need that $400? Why should healthcare be any different? The fact is, it can’t be! We just pretend that it is.
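
For readers who like to see the arithmetic, the risk/reward framing above is just an expected-value calculation. Here is a minimal sketch in Python; every probability and payoff in it is invented for illustration, not drawn from real casino odds or clinical data.

```python
# Expected value of a risky choice: probability-weighted gain minus
# probability-weighted loss. All numbers below are hypothetical.

def expected_value(p_success: float, gain: float, loss: float) -> float:
    """Average outcome of a bet (or a treatment), per attempt."""
    return p_success * gain - (1 - p_success) * loss

# The craps decision: let $400 ride for a shot at $800 (odds invented).
print(expected_value(p_success=0.45, gain=800, loss=400))  # -> 140.0

# The treatment decision: therapy Y for diagnosis X, scored in
# arbitrary 'benefit units' (probabilities and magnitudes invented).
print(expected_value(p_success=0.70, gain=10, loss=25))    # -> -0.5
```

The point is not the particular numbers; it is that both decisions reduce to the same question: how likely am I to be helped, and by how much, versus how likely am I to be hurt, and by how much?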


So, it looks as though the ‘AI Revolution’ in healthcare may be over before it began, but it doesn’t have to end this way! When self-driving vehicles were first introduced, people were horrified… me among them. No human being at the controls?  


Nope, no one to fall asleep at the wheel, drink and drive, text while moving, etc. What a disaster that would be! Now I can say with confidence: self-driving vehicles are coming soon to a neighborhood near you… if they’re not already pulling into your driveway… and I can’t wait! 


Sure, self-driving vehicles pose some risks. We’re doing the best we can to minimize those risks, but we’ll never eliminate them entirely. The question is, “How do the risks posed by self-driving vehicles compare with the risks of human-driven cars?” Now apply that to healthcare: 30% of all healthcare delivered in the United States is either ineffective or harmful; almost 20% of patients are misdiagnosed. Could a Bot do any worse?  


Still, who can argue with Hippocratic’s ‘safety first’ focus? We all want patients protected from incorrect diagnoses and ineffective or harmful treatments; we all want patients’ PHI (Protected Health Information) kept confidential. Nevertheless, this is a huge climb down from Hippocratic’s initial commitment to ‘always-on triage’.


Beyond that, though, it is puzzling that an AI program would make safety its #1 priority. What about functionality? Shouldn’t we be looking to develop AI that works and protects, rather than AI that protects and may have some functionality as well? As a society, we need to recognize the fact that safety and functionality are often at cross purposes; we must make sober, adult, and, may I say, ‘non-ideological’ decisions about which strategy is best in what circumstances.


Take, for example, HIPAA, the Health Insurance Portability and Accountability Act. It was intended to protect patients’ PHI without increasing the cost or compromising the quality of care; it has failed. Oh, don’t get me wrong: it’s done a marvelous job of securing personal data. But the cost of compliance has been massive and felt all along the supply chain, even down to the lone MD with her totemic ‘black bag’.


More importantly, HIPAA has eroded the quality of patient care. It takes a village to treat a tumor! Optimal outcomes often require input from diverse stakeholders. But such input is only possible if relevant patient information is shared, and that sharing is precisely what HIPAA regulates. (Notice I said “regulates”, not “prohibits”… or even “restricts”.)


So, the fault does not lie with HIPAA! The law is careful to include a number of well-considered exceptions to its ‘data lockdown’. The problem comes at the application level. Healthcare professionals, by and large, do not understand the nuances of the law; why should they? They’re ‘doctors and nurses and such’ (Willie Nelson).


Rather than risk Draconian fines or unpredictable litigation, many have adopted a ‘see no data, hear no data, share no data’ approach to patient care. That is neither what HIPAA intended nor what it requires, but it is how the law is being interpreted and administered on our mean streets. At one time, even my own doctor refused to respond to texts and emails “because of HIPAA”.


Let’s compare the potential results of AI-delivered healthcare vs. healthcare delivered by old reliable, Homo sapiens (HS). First, HS: I go to my doctor and my blood pressure (BP) is a bit high; my doctor recommends some lifestyle changes and a particular medication. I return in 3 months to have my BP rechecked; still high. Did I follow the doc’s lifestyle instructions? Did I take my medication religiously? OK, let’s try a new med and check again 90 days out. Six months have gone by now, and there’s still no guarantee that my BP is under control.


Now, AI: Again, upon initial examination, my BP is elevated. My PCP hands me a blood pressure monitoring device to use at home, but then he turns all my future healthcare over to an AI Bot named Charley. Chas, as I call him, tells me to cut way back on my sodium intake for 2 weeks to see if that has any impact; it doesn’t. OK, am I willing to cut down on alcohol and switch to decaf? Hmm.


Two weeks later, we’re moving the needle; it’s working, but not enough to satisfy Charley. Let’s tweak the dosage on your BP medication and check again in two weeks. Thank you, Charley, for the wonderful experience and the excellent result! 
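
Under the hood, Charley’s routine is just a measure-adjust-remeasure loop. Here is a minimal sketch of that loop in Python; the target, the interventions, and the readings are all invented for illustration, and a real system would of course be driven by clinical guidelines rather than this toy logic.

```python
# A toy sketch of the feedback loop 'Charley' runs in the story above.
# Thresholds, interventions, and readings are hypothetical.

TARGET_SYSTOLIC = 120  # invented goal, in mmHg

INTERVENTIONS = [
    "cut sodium intake",
    "reduce alcohol, switch to decaf",
    "adjust medication dosage",
]

def two_week_check(intervention: str) -> int:
    """Stand-in for a home BP reading after two weeks on an intervention."""
    readings = {
        "cut sodium intake": 142,
        "reduce alcohol, switch to decaf": 134,
        "adjust medication dosage": 118,
    }
    return readings[intervention]

systolic = 145  # initial elevated reading
for step in INTERVENTIONS:
    if systolic <= TARGET_SYSTOLIC:
        break  # goal reached; stop escalating
    print(f"Charley: try '{step}' and recheck in two weeks.")
    systolic = two_week_check(step)
    print(f"Reading: {systolic} mmHg")
```

The win isn’t the code; it’s the cadence: two-week feedback cycles instead of 90-day office visits.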


Remember the days when you typed and mailed all your correspondence? Then you waited two weeks for a reply. Doesn’t seem possible, does it? So buck up; it’s early days yet. If it’s alright with you, I’ll continue to envision the day my great grandkids say to me, “Boka, is it true you used to drive 10 miles to see a doctor once a year and called that ‘healthcare’?”  


 

David Cowles is the founder and editor-in-chief of Aletheia Today Magazine. He lives with his family in Massachusetts where he studies and writes about philosophy, science, theology, and scripture. He can be reached at david@aletheiatoday.com.

 
