I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

Quick TOC:

First up, I’m letting you in on a somewhat rambling (but edited) conversation that I had with ChatGPT about its own limitations in healthcare, and the limitations of those designing and using it.

Then I review the research on where things currently stand with empathy and AI, and why we don’t have AI doctors managing our pain (and shouldn’t want them to).

Lastly, I delve into the essence of pain, the essence of AI, and the essence of empathy, and somehow manage to tie them all together.

The Chat(GPT)

It’s almost like we’ve created a mirror, but now we’re unsure whether we’re looking at a reflection or something with its own life. We’ve spent so much time trying to replicate human intelligence and behavior in machines, yet when it starts to resemble us too much, it challenges the very things we thought made us unique. It raises big questions about identity, consciousness, and what it means to be “human.”

The irony is definitely there—it’s almost as if we’re both proud of our creations and wary of them.

— ChatGPT, last night

Without delving too deeply into what exactly ChatGPT “thinks” about discussing AI as if it were human, I challenged the use of “we,” because of how it conflates the humans who are actively involved in designing generative AI LLMs with…well, the rest of us: the overwhelming majority of humans who speak to AI chatbots and marvel at their responses, but have no idea how those responses are actually generated.

I went on to detail my concerns about the potential to use AI for purposes of manipulation, and ChatGPT responded:

One of the scariest parts is that it’s not always obvious when you’re interacting with AI, especially if it’s designed to mimic human behavior. People could end up making decisions or forming beliefs based on information they don’t realize is coming from an algorithm, rather than a person with their own interests or expertise.

AI governance is a huge topic, I pointed out, but it’s more concerned with meeting standards for accuracy and fairness than with manipulation per se. However, there are many articles discussing empathy and AI broadly, such as a recent one that praised “empathic AI” in medicine because it can retain all the powers of appearing to sympathize with a patient about their diagnosis while being inexhaustible and unbiased. But that rests on the assumption that patients don’t know, or possibly don’t care, that the AI is actually AI.

That’s a really thought-provoking situation you’re describing.

(It must be said that ChatGPT is adept at intellectual flattery.)

The idea of empathic AI certainly seems appealing in some ways—AI could provide endless emotional support, be non-judgmental, and maintain a level of consistency that humans can’t always offer. But the underlying assumption that the patient wouldn’t know or care that it’s AI raises some significant ethical concerns, especially regarding transparency and informed consent.

If patients are interacting with an AI that appears empathic and comforting, but they aren’t aware it’s not human, it could lead to an unspoken form of manipulation.

I think, I said, that you can and should keep the knowledge that they’re interacting with AI front of mind for patients. As in, every time they interact with AI, they should be explicitly informed that that is the context in which this conversation is taking place.

Transparency should be a fundamental part of any AI interaction, especially in sensitive contexts like healthcare. Patients should always know when they’re speaking to an AI so they can make informed decisions about how much trust to place in the interaction, as well as understand its limitations. This is crucial for maintaining autonomy and preventing manipulation, especially when the stakes involve personal health and emotional well-being.

That brings to mind a specific situation of a patient requiring a diagnosis, I said– cases of pain, and its management. You kind of have to rely on the patient’s self-report regarding the degree of pain they’re experiencing. And when a human asks that question, they can reference their own experiences of pain, but AI obviously can’t. So I don’t know how good it would be at advising and treating a patient whose primary issue is pain.

Pain is inherently subjective, and as you pointed out, it’s often based on self-reporting from the patient. Humans, especially healthcare providers, are able to make a certain level of judgment or empathy based on their own experiences with pain, which helps them connect with patients and assess their symptoms more holistically.

AI, on the other hand, doesn’t have the ability to experience pain or any other physical sensations, so it can’t truly understand the patient’s experience in that way. It can rely on objective data—like pain scales, physiological responses, or medical history—but it can’t truly interpret the emotional or personal weight that the patient attaches to their pain. This makes it difficult for AI to offer the same level of care in pain management, where empathy, trust, and understanding are often just as important as medical knowledge.

In a case where pain is the primary issue, AI might be able to guide the patient through standard pain assessment methods, ask the right questions to understand the severity or type of pain, and even recommend basic interventions or referrals to a healthcare provider. However, it could fall short in offering the kind of nuanced, compassionate care that human practitioners can provide—especially in complex, emotional cases of chronic pain or when the pain doesn’t have a clear medical cause.

That said, AI could be incredibly useful as a support tool for both patients and providers in these cases. It could help gather data, monitor trends over time, and suggest potential treatments based on evidence-based practices. But for diagnosis and personalized treatment, especially when pain management requires emotional insight and patient-provider rapport, it seems like human involvement will still be necessary.

What if an AI model could be trained specifically on interactions between doctors and patients, I asked– as many as you could gather– in which they discuss pain issues? The AI could be tailored to that– a sort of “pain bot,” if you will. It would have an expert understanding of what pain is like, informed by so many more descriptions of experiences of pain than most doctors– most people– ever hear.

Because a patient experiencing pain has to convey it to their doctor in terms as thorough and useful as possible. There are definitely issues of privacy there, but if you had an AI “listening” to vast numbers of those exchanges, it could become an expert on what it’s like to feel pain without ever having felt it.

For example, it could help patients more accurately describe their pain or ask the right follow-up questions, maybe suggesting phrases or terms that would help a healthcare provider understand the full extent of their experience. Additionally, it could tailor recommendations for pain management based on a deep analysis of what has worked for other patients in similar situations.

However, there would still be challenges in making sure the AI doesn’t oversimplify the patient’s experience, which is so deeply personal. While the AI could certainly become an expert in understanding how pain is described, there’s a danger that it might reduce pain to a set of patterns or descriptors that misses the complexity of each individual case.

Still, if designed thoughtfully, this type of AI could serve as an incredibly helpful tool for both patients and doctors, improving communication and perhaps even helping to identify issues that might be missed in a typical consultation. Do you think this approach would be more effective in a supporting role to human care, or do you think AI could potentially step in as a primary consultant in pain management situations?

I think it’s a damn sight better than that series of smiley faces going from 1 to 10, “joyously happy” to “about to die.”

I completely agree! The typical pain scale with smiley faces or numbers from one to ten is often too simplistic, especially for chronic pain or pain that’s difficult to describe. An AI trained on real interactions between doctors and patients could offer a more nuanced understanding of pain. It could help patients express themselves more accurately and thoroughly. Plus, it could offer more meaningful follow-up questions, considering things like duration, intensity, or how pain interferes with activities, which could give doctors a more complete picture of the patient’s condition.

That said, it’d still need to work alongside human doctors who can provide that emotional support and intuition that an AI can’t replicate. But as a tool for better communication and diagnosis, it could be a huge improvement over current methods.

Do you think this approach could shift how we view pain in medical settings, or is it still too far removed from the deeply human aspects of pain management?

I’ve had a lot of experience with going to the emergency room lately, and it occurs to me that it could be a lot more helpful– assuming that the patient is coherent and verbal when they come to the ER– to introduce them to a pain bot to inquire about their pain, as opposed to asking them to rate it from 1 to 10.

In high-pressure environments like emergency rooms, where time is often limited and the staff is spread thin, having an AI that can take an initial, detailed pain assessment could help streamline the process. In a busy ER setting, where every minute counts, this could save valuable time that would otherwise be spent asking standard questions or trying to gauge the severity of pain based solely on a brief number scale.

Of course, there’s the issue of when to transition from AI to human care. While the bot could gather a lot of helpful information, human judgment would still be necessary to interpret the results in the broader context of the patient’s health and medical history. But for an initial intake or triage process, this kind of AI could be a game-changer.

Now that you bring that up, I see the problem that it might be viewed as something that gets in the way of providing necessary care, by having the AI step in to talk about pain before any humans get to learn what the symptoms are. Probably not the best way to arrange things.

One potential way to integrate the AI without disrupting the human assessment process might be for it to function as a complementary tool rather than the first point of contact. For example, it could work alongside triage nurses or ER staff, helping to gather initial information while the human healthcare providers are still present and actively engaged with the patient. The AI could assist by structuring the conversation and asking relevant follow-up questions that might help the human staff better understand the pain without overshadowing their role in making that initial connection with the patient.

Alternatively, if the AI were used in a follow-up role—after the patient has been seen and evaluated by a doctor or nurse—it could help ensure that ongoing pain management is addressed effectively, perhaps even tailoring recommendations for pain relief or additional care based on what the patient shares during their interaction with the bot. That way, the AI could still provide value without interfering too much with the more personal, direct care that’s so essential in the ER.

One thing I have noticed in the ER is that everything is so fast-paced that those smiley faces don’t even get presented. So I’m imagining an AI bot stepping in during the initial process, when the IV is being set up and so on. What if you had an AI asking the patient about their pain during that process, in the ER personnel’s presence, so that the people who are trying to take blood pressure, set up the IV, and so on can get all of that done while the AI handles the questions about pain?

In the fast-paced environment of the ER, where staff is juggling multiple tasks at once, having the AI step in during the initial setup, like while IV fluids are being administered or blood pressure is being taken, could be a way to ensure the patient’s pain is addressed without interrupting or slowing down the necessary medical procedures.

If the AI is integrated into that process, it could gather useful information about the patient’s pain while the medical team is focused on setting up and performing other tasks. This allows the medical team to focus on their work while the AI collects pain-related data, which could then be used to prioritize the patient’s care and treatment more effectively.

Once the AI has gathered the information, it could feed that data directly to the medical team, giving them a more complete picture of the patient’s condition without interrupting the flow of care. This kind of multitasking would reduce the burden on the staff, ensuring that the patient’s pain is addressed more comprehensively and quickly.

As you mentioned, though, this approach would require careful integration. The AI should not take over or distract from the human staff’s efforts, but rather work alongside them in a way that complements the tasks already in progress. It would also need to be clear to patients that the AI is gathering information for their benefit, so they don’t feel like they’re being ignored or reduced to data.

Okay, time for a reality check, I said. There’s no way this is a novel discussion about the subject matter, so let’s see what the research actually says.

What the research actually says (the discussion)

There are lots of ways that AI can recognize pain, including behavioral and neurophysiological methods.

For example, AI facial recognition is amazingly accurate, enabling detection of specific facial expressions correlated with pain. It can also recognize body postures such as a patient looking downward or leaning forward, which are often indicative of pain.

In terms of neurophysiology-based methods, electroencephalography (EEG) can track changes in the brain that act as “biomarkers” of pain, and other measurable nervous-system activity, such as sweat-gland (electrodermal) activity and heart rhythm, has also been used to correlate with pain.
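
To make that a little more concrete, here is a minimal toy sketch in Python of how behavioral and physiological signals like these could be fused into a single score. Everything in it (the signal names, the normalization bounds, the weights) is an invented placeholder for illustration, not a method from any of the studies discussed here; a real system would learn such a mapping from data rather than hard-code it.

```python
from dataclasses import dataclass


@dataclass
class PainSignals:
    """Hypothetical inputs for one observation; names, units, and scales are illustrative."""
    facial_grimace_score: float  # 0..1, e.g. output of a facial-expression model
    posture_score: float         # 0..1, e.g. leaning forward / looking downward
    eda_microsiemens: float      # electrodermal (sweat gland) activity
    heart_rate_bpm: float        # heart rate in beats per minute


def estimate_pain(signals: PainSignals) -> float:
    """Fuse the signals into a toy 0-10 'pain-like' score.

    The normalization bounds and weights below are arbitrary placeholders,
    not values taken from any clinical study.
    """
    # Squash the physiological signals into rough 0..1 ranges (assumed bounds).
    eda_norm = min(signals.eda_microsiemens / 20.0, 1.0)
    hr_norm = min(max((signals.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)

    # Simple weighted average of behavioral and physiological evidence.
    fused = (
        0.4 * signals.facial_grimace_score
        + 0.2 * signals.posture_score
        + 0.2 * eda_norm
        + 0.2 * hr_norm
    )
    return round(10.0 * fused, 1)


if __name__ == "__main__":
    obs = PainSignals(facial_grimace_score=0.7, posture_score=0.5,
                      eda_microsiemens=12.0, heart_rate_bpm=105.0)
    print(estimate_pain(obs))  # prints 6.5 on the toy 0-10 scale
```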

It’s an understatement to say that patients do care whether they are evaluated medically by AI– they’re largely against it.

A survey by Pew found that 60% of Americans said they would be uncomfortable with a provider relying on AI in their healthcare to do things like diagnosing disease and recommending treatments. 38% said that AI used for such purposes would lead to better health outcomes for patients generally, 33% said that it would lead to worse outcomes, and 27% said it wouldn’t make much of a difference.

This definitely undercuts my naive assumptions about immediately putting “pain bots” in hospitals, since– in the short term, at least– no amount of work you put into making AI seem “empathic” is going to matter if a patient has already ruled out letting a bot evaluate them.

But let’s examine what it actually means to be “biased” against AI.

A study that evaluated the empathy and accuracy of ChatGPT responses to questions on the “AskDocs” subforum on Reddit found that AI responses to questions were deemed both more accurate and empathetic…by “a team of licensed health care professionals,” who may or may not have gone into it already knowing that at least some of the answers would be of AI origin. That doesn’t necessarily say anything about the perceptions of patients, whether the healthcare professionals knew about the AI or not.

A study from earlier this year found that “people felt more heard when they received an AI than a human message, but when they believed a message came from AI this made them feel less heard,” from which one of the researchers, Yidan Yin, concluded that the research “basically finds a bias against AI. It’s useful, but [participants in the study] don’t like it.”

It’s important to note that this research was not conducted in a medical context, but rather an attempt to answer the general question “Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?” Nan Jia, also on the research team, was quoted as saying “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities.”

This is an improvement over the “AskDocs” study to the extent that the people on the receiving end were asked directly to evaluate the quality of the AI responses. But I find this subheading, and the quote from Yin that follows, intriguing:

AI offers better emotional support

The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.

Yin explained, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating.”

These statements, along with the accusation of “bias,” strongly imply that the responsibility for people not trusting AI rests entirely on those who mistrust it.

I have trouble getting my head around the apparently objective statement that “AI offers better emotional support” if the effects of that emotional support immediately vanish the moment that it’s revealed to come from AI.

Imagine that you’re told that you’ll be getting two calls from two someones who are concerned about your well-being. You answer the first phone call and have an encouraging but kind of mundane conversation with the caller, who stumbles over their words but ultimately impresses on you that they care. Then you get the second call, from someone who sounds like they really have a good grip on what’s bugging you. They listen to you talk about your situation and express disappointment at your current bad luck, but provide hope and motivation for your future. Honestly, you feel so much better after the second call than the first. You feel so much more “heard.”

Then you’re told that the second phone call was from a computer application programmed to sound authentic and effusive in its expressed interest in your well-being. Does your perception of that latter conversation change?

And regardless of whether it changes or not for you, can you exercise the amount of empathy required to understand why it does for others?

If the second caller’s voice had sounded clipped and stilted, clearly like text-to-speech, it assuredly wouldn’t have had the effect of making you feel more emotionally supported. But you wouldn’t have minded, because there’s no way you were duped in that scenario– a child could’ve seen through it. But what if you were completely convinced, to the point that you made yourself vulnerable in some way, only to find that it wasn’t a human to which you directed these raw emotions?

More human than human

The article describes how “individuals further reported an ‘uncanny valley’ response—a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape navigated by AI-human interactions.”

The uncanny valley is the perceived gulf between human-like entities and actual humans, specifically our uneasy emotional response to things that are almost, but not quite, human. I googled the term and received an AI definition that gave the following examples of the uncanny valley phenomenon: “Androids, computer game characters, life-like dolls, the 2004 children’s Christmas movie The Polar Express.” The definition includes both unease and disgust as emotional responses. 1

The discomfort with robots of various kinds tends to arise from how much better they can be at acting human than actual humans are. If the computer in WarGames asked how your day went, you might be suspicious about why it would even pretend to care.

Conclusion?

Experience is subjective– you are literally the only one who knows what it’s like to be you. But that’s the source of knowledge that you use to get a glimpse of what it’s like to be someone else. To take part, even a little bit, in what it’s like to be them. Participate in their experiences. But AI has no subjectivity, though its existence is predicated on how good it is at pretending otherwise. Omniscient but friendly, like Janet in The Good Place.

But one thing Janet was never good at was feeling someone else’s pain, and pain is the #1 indicator for humanity as a species that something has gone wrong with our bodies. Maybe it feels like a cruel joke to expect those in that 60% who also live with chronic pain to get over their “bias” against AI trying to quantify and qualify it.

And yet that remove from humanity is how AI can democratize access to healthcare, compensating for the limitations on the knowledge, experience, and hours of sleep that human doctors can attain daily, throughout the world. An AI doctor can be far better than no doctor at all.

The article linked above has a quote from Erik Duhaime, co-founder and CEO of Centaur Labs: “AI for healthcare has never been about replacing doctors, but doctors who use AI might end up replacing those physicians who don’t.”

One area where doctors have sparse reserves is time to sit and listen to patients describe their pain, which is not to say that it’s in any way a waste of time to do so. But in light of this cold fact, AI might be able to compensate, to some extent, by listening. Empathy isn’t demonstrated by loudly proclaiming that you’re doing it, but often takes the form of just sitting and listening, and AI can do that.

So maybe that’s a way to take advantage of AI’s tremendous access to knowledge about pain without pretending that it feels pain– have it listen, productively, to those experiencing it. AI may never be able to feel your pain, but perhaps it can understand it, and use that understanding to help you deal with it.

“I feel your pain.”

“I can learn about and understand your pain.”

That’s a far more assuring thing to hear, anyhow.

  1. It might be surprising to hear the word “disgust” come up, but “animal reminder disgust” is a specific type of disgust elicited by reminders of our own animalistic nature, such as bodily functions, violations of the body envelope such as violence or surgery, and of course death itself. The uncanny valley here is between us and our animal origins, as it becomes harder to distinguish between the two– just as it becomes harder to distinguish between ourselves and AI.

    Cyborgs are an interesting crossover to me in how they can elicit the uncanny valley response in both directions. Think of Officer Alexander Murphy in the RoboCop series, Tom Cruise’s eye replacement surgery in Minority Report, the Borg, or of course, Anakin Skywalker, aka Darth Vader. Generally speaking, the message is that mixing robots with humans– physically, at least– is gross, primarily because of how the human part tends to just bring things down by being all organic, mortal, prone to bleeding and other spontaneous emissions of effluvia. ↩︎