The Challenges of AI in Healthcare

Navigating Complexities to Unlock AI’s Full Potential in Healthcare

Vikram Narayan

Nov 1, 2024


With all the talk about AI in healthcare, significant challenges remain: the technology is still immature, ethical concerns persist, integration and governance are complex, and maintaining such systems is costly. In this article we review some of the major challenges of applying AI to healthcare.


Let’s start with a fundamental question.


What is AI?


While AI today has become synonymous with ChatGPT-like technologies, the real definition of AI is far more expansive. The following diagram helps most of my workshop participants when thinking about how the various pieces of AI actually fit together.


AI is an all-encompassing term for the variety of techniques and algorithms that help machines simulate human intelligence (including playing chess or solving Sudoku puzzles). Inside this broad umbrella called ‘AI’ sits ML (Machine Learning): the idea of learning from data rather than hard-coded rules. Within the umbrella of machine learning, we’ve got Deep Learning, a kind of machine learning that uses massive amounts of data and many layers of ‘neurons’ to learn from and process that data. Then we’ve got Generative AI (or Gen AI), which, having learned from data, attempts to produce new data - be it images or text. And finally, we’ve got LLMs, or Large Language Models - the technology that powers ChatGPT.


So when two people use the word AI, they may be talking about two very different things!


However, today, when most people mention AI, they are referring to technologies like Gemini, Claude, or ChatGPT - seemingly super-intelligent systems capable of doing many things that humans can do, and which promise to solve all of humanity’s problems. This class of AI will be the focus of our article.


Now, this brings up the next question:


How Does ChatGPT Actually Work?

For all of the magical things that ChatGPT does, we need to bear in mind that it’s just a guessing machine that predicts the next set of words (or ‘tokens’) based on prior words (or tokens). This idea is best explained with the following example:


Here, the large language model is evaluating which word to produce after the words “Twinkle, twinkle, little …”. The model weighs various candidate words with varying probabilities and finally chooses “star”, the word with the highest probability of being right.
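This next-word guessing can be sketched in a few lines of code. The candidate words and their probabilities below are invented for illustration - a real model computes a distribution over tens of thousands of tokens:

```python
# Toy next-token distribution after the prompt "Twinkle, twinkle, little ...".
# The words and probabilities are made up for illustration only.
next_token_probs = {
    "star": 0.92,
    "bat": 0.04,
    "car": 0.03,
    "moon": 0.01,
}

def pick_next_token(probs):
    """Greedy decoding: choose the most probable next token."""
    return max(probs, key=probs.get)

print(pick_next_token(next_token_probs))  # star
```

A real LLM repeats this step over and over, feeding each chosen token back in as context for the next guess.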


So Large Language Models are simply “stochastic parrots” that can guess the next word really, really well!


And given this context, we are in a great place to examine the challenges that we face when attempting to integrate AI into healthcare.


Challenge 1 - Model Hallucination

When ChatGPT first came out, there were a number of instances where naive users were caught flat-footed by the technology. You’ve probably seen the 2023 headlines about lawyers who were sanctioned after ChatGPT fabricated legal cases for a court filing.

This ‘model hallucination’ occurs because the model does not know where its knowledge ends and its ignorance begins - it is simply predicting tokens based on previous tokens. And while hallucination rates are dropping dramatically, the problem still exists, and we can never be 100% sure that a large language model won’t spout an inaccuracy. For example, there have been documented instances of hallucinations in AI-generated medical summaries.


Challenge 2 - Blackbox Decision Making

If you ask a human why they made a particular decision or said something, they’ll often be able to give you a rational explanation. If you ask a large language model why it made a decision, it gives you a rational-sounding explanation - but we can never be sure that this explanation reflects why it actually made that choice. A lot of work is being done to mitigate this challenge under the banner of ‘explainable AI’, but we’re still far from a model being able to explain clearly why it’s doing or saying something.


Challenge 3 - Inconsistent Behavior

AI models are complex creations, with billions (or even trillions) of ‘knobs’ that can be turned and tuned - and which of these knobs light up can vary with the input. Ask a question in a different way and you may get a different answer. This inconsistency, while great for creative tasks, makes these models poorly suited to tasks that require consistent outputs.
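One concrete source of this variability is the decoding step itself: most deployments sample the next token rather than always taking the most probable one, and a ‘temperature’ setting controls how flat the distribution becomes. The sketch below uses an invented toy distribution to show how higher temperature produces different tokens across runs:

```python
import math
import random

def sample_with_temperature(probs, temperature, rng):
    """Reshape a next-token distribution with a temperature, then sample.

    Higher temperature flattens the distribution, so repeated calls
    (or slightly reworded prompts) can yield different tokens.
    """
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if cumulative >= r:
            return token
    return token  # fallback for floating-point edge cases

# Toy distribution, invented for illustration.
probs = {"star": 0.92, "bat": 0.04, "car": 0.03, "moon": 0.01}
rng = random.Random(0)
samples = [sample_with_temperature(probs, temperature=2.0, rng=rng) for _ in range(20)]
print(set(samples))  # at high temperature, more than one distinct token appears
```

At temperature near zero this collapses back to always picking the top token - which is why ‘deterministic’ settings exist, though even those do not guarantee identical answers to differently worded questions.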


Challenge 4 - PHI Safety and HIPAA Compliance

An additional consideration in the healthcare space is complying with HIPAA requirements and protecting PHI (Protected Health Information). The challenge is that, for most meaningful applications, PHI must be sent to a large language model provider like OpenAI or Amazon Bedrock. And even with a BAA (Business Associate Agreement) in place, ensuring ongoing compliance over the long run is still difficult - especially when multiple vendors are involved. The recent incident involving a Microsoft healthcare chatbot potentially exposing patient data is particularly instructive.
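One common mitigation is to scrub obvious identifiers before text ever leaves your environment. The helper below is a hypothetical sketch, not a compliance solution - it covers only two of the eighteen identifier categories in HIPAA’s Safe Harbor method, and real de-identification requires a far more thorough pipeline:

```python
import re

# Minimal sketch: redact two obvious identifier patterns before sending
# free text to an external LLM provider. Real HIPAA de-identification
# (Safe Harbor lists 18 identifier categories) needs much more than this.
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_phi(text: str) -> str:
    """Replace SSN-like and phone-like substrings with placeholder tags."""
    text = SSN_RE.sub("[SSN]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

note = "Patient reachable at 555-867-5309; SSN 123-45-6789 on file."
print(scrub_phi(note))  # Patient reachable at [PHONE]; SSN [SSN] on file.
```

Even with scrubbing in place, names, dates, and rarer identifiers slip through regexes, which is why BAAs and vendor audits remain necessary rather than optional.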


Challenge 5 - The Uncanny Valley
Humans appreciate it when AI becomes human-like - but only up to a point. If the AI becomes too ‘human’, we tend to feel repelled. This design principle - referred to as “the uncanny valley” - is likely to play out in the healthcare space as patients reject AI caregivers and AI admin staff that try too hard to sound and act human. The refrain “let me talk to a real human” is likely to grow louder as health systems try to save costs by cramming AI nurses down the throats of unwilling patients.



Overcoming Challenges for AI in Healthcare

Apply AI to the Right Problems:

Start with low-risk, non-sensitive tasks - drafting, summarization, administrative workflows - rather than autonomous clinical decision-making, so that hallucinations and inconsistency cause inconvenience rather than harm.


Establish Comprehensive AI Governance:

Define who approves AI use cases, how models are monitored over time, and what happens when a model misbehaves - especially when multiple vendors are involved.


Enhance Explainability and Transparency in AI:

Favor systems that cite their sources and surface their reasoning where possible, and be honest with clinicians and patients about the current limits of explainable AI.


Safeguard Patient Privacy and Compliance with PHI Standards:

Put BAAs in place with every LLM vendor, minimize the PHI that leaves your environment, and audit compliance on an ongoing basis rather than once at procurement.


Address the Uncanny Valley in AI-Patient Interactions:

Let AI be clearly AI - disclose its use, keep its persona functional rather than faux-human, and always offer patients an easy path to a real person.




Conclusion

While AI offers significant potential to transform healthcare, realizing that potential requires thoughtful, responsible integration. By targeting non-sensitive tasks, establishing robust governance, enhancing transparency, and safeguarding patient privacy, healthcare organizations can mitigate risks and maximize AI’s benefits.