I (don’t) feel your pain: A dialog with ChatGPT about what “empathic AI” in healthcare really means

Quick TOC:

First up, I’m letting you in on a somewhat rambling (but edited) conversation that I had with ChatGPT about its own limitations in healthcare, and the limitations of those designing and using it.

Then I review the research on where things currently stand with empathy and AI, and why we don’t have AI doctors managing our pain (and shouldn’t want them to).

Lastly, I delve into the essence of pain, the essence of AI, and the essence of empathy, and somehow manage to tie them all together.

The Chat(GPT)

It’s almost like we’ve created a mirror, but now we’re unsure whether we’re looking at a reflection or something with its own life. We’ve spent so much time trying to replicate human intelligence and behavior in machines, yet when it starts to resemble us too much, it challenges the very things we thought made us unique. It raises big questions about identity, consciousness, and what it means to be ”human.”

The irony is definitely there—it’s almost as if we’re both proud of our creations and wary of them.

— ChatGPT, last night

Without delving too deeply into what exactly ChatGPT “thinks” about discussing AI as if it were human, I challenged the use of “we,” because of how it conflates the humans who are actively involved in designing generative AI LLMs, with…well, the rest of us. The overwhelming majority of humans who speak to AI chatbots, and marvel at their responses, but have no idea how those responses are actually generated.

I went on to detail my concerns about the potential to use AI for purposes of manipulation, and ChatGPT responded:

One of the scariest parts is that it’s not always obvious when you’re interacting with AI, especially if it’s designed to mimic human behavior. People could end up making decisions or forming beliefs based on information they don’t realize is coming from an algorithm, rather than a person with their own interests or expertise.

AI governance is a huge topic, I pointed out, but it’s more concerned with meeting standards for accuracy and fairness than with manipulation per se. However, there are many articles discussing empathy and AI broadly, such as a recent one that praised “empathic AI” in medicine because it can retain all the powers of appearing to sympathize with a patient about their diagnosis, while being inexhaustible and unbiased. But that rests on the assumption that patients don’t know, or possibly don’t care, that the AI is actually AI.

That’s a really thought-provoking situation you’re describing.

(It must be said that ChatGPT is adept at intellectual flattery.)

The idea of empathic AI certainly seems appealing in some ways—AI could provide endless emotional support, be non-judgmental, and maintain a level of consistency that humans can’t always offer. But the underlying assumption that the patient wouldn’t know or care that it’s AI raises some significant ethical concerns, especially regarding transparency and informed consent.

If patients are interacting with an AI that appears empathic and comforting, but they aren’t aware it’s not human, it could lead to an unspoken form of manipulation.

I think, I said, that you can and should keep the knowledge that they’re interacting with AI front of mind for patients. As in, every time they interact with AI, they should be explicitly informed that that is the context in which this conversation is taking place.

Transparency should be a fundamental part of any AI interaction, especially in sensitive contexts like healthcare. Patients should always know when they’re speaking to an AI so they can make informed decisions about how much trust to place in the interaction, as well as understand its limitations. This is crucial for maintaining autonomy and preventing manipulation, especially when the stakes involve personal health and emotional well-being.

That brings to mind a specific situation of a patient requiring a diagnosis, I said: cases of pain, and its management. You pretty much have to rely on the patient’s self-report regarding the degree of pain they’re experiencing. And when a human asks that question, they can reference their own experiences of pain, but AI obviously can’t. So I don’t know how good it would be at advising and treating a patient whose primary issue is pain.

Pain is inherently subjective, and as you pointed out, it’s often based on self-reporting from the patient. Humans, especially healthcare providers, are able to make a certain level of judgment or empathy based on their own experiences with pain, which helps them connect with patients and assess their symptoms more holistically.

AI, on the other hand, doesn’t have the ability to experience pain or any other physical sensations, so it can’t truly understand the patient’s experience in that way. It can rely on objective data—like pain scales, physiological responses, or medical history—but it can’t truly interpret the emotional or personal weight that the patient attaches to their pain. This makes it difficult for AI to offer the same level of care in pain management, where empathy, trust, and understanding are often just as important as medical knowledge.

In a case where pain is the primary issue, AI might be able to guide the patient through standard pain assessment methods, ask the right questions to understand the severity or type of pain, and even recommend basic interventions or referrals to a healthcare provider. However, it could fall short in offering the kind of nuanced, compassionate care that human practitioners can provide—especially in complex, emotional cases of chronic pain or when the pain doesn’t have a clear medical cause.

That said, AI could be incredibly useful as a support tool for both patients and providers in these cases. It could help gather data, monitor trends over time, and suggest potential treatments based on evidence-based practices. But for diagnosis and personalized treatment, especially when pain management requires emotional insight and patient-provider rapport, it seems like human involvement will still be necessary.

What if an AI model could be trained specifically on interactions between doctors and patients, I asked, as many as you could gather, in which they discuss pain issues? The AI could be tailored to that: a sort of “pain bot,” if you will. It would have an expert understanding of what pain is like, informed by far more descriptions of experiences of pain than most doctors, most people, ever encounter.

Because a patient experiencing pain has to convey it to their doctor in as thorough and useful terms as possible. There are definitely issues of privacy there, but if you had an AI “listening” to vast numbers of those exchanges, they could become an expert on what it’s like to feel pain without ever having felt it.

For example, it could help patients more accurately describe their pain or ask the right follow-up questions, maybe suggesting phrases or terms that would help a healthcare provider understand the full extent of their experience. Additionally, it could tailor recommendations for pain management based on a deep analysis of what has worked for other patients in similar situations.

However, there would still be challenges in making sure the AI doesn’t oversimplify the patient’s experience, which is so deeply personal. While the AI could certainly become an expert in understanding how pain is described, there’s a danger that it might reduce pain to a set of patterns or descriptors that misses the complexity of each individual case.

Still, if designed thoughtfully, this type of AI could serve as an incredibly helpful tool for both patients and doctors, improving communication and perhaps even helping to identify issues that might be missed in a typical consultation. Do you think this approach would be more effective in a supporting role to human care, or do you think AI could potentially step in as a primary consultant in pain management situations?

I think it’s a damn sight better than that series of smiley faces going from 1 to 10, “joyously happy” to “about to die.”

I completely agree! The typical pain scale with smiley faces or numbers from one to ten is often too simplistic, especially for chronic pain or pain that’s difficult to describe. An AI trained on real interactions between doctors and patients could offer a more nuanced understanding of pain. It could help patients express themselves more accurately and thoroughly. Plus, it could offer more meaningful follow-up questions, considering things like duration, intensity, or how pain interferes with activities, which could give doctors a more complete picture of the patient’s condition.

That said, it’d still need to work alongside human doctors who can provide that emotional support and intuition that an AI can’t replicate. But as a tool for better communication and diagnosis, it could be a huge improvement over current methods.

Do you think this approach could shift how we view pain in medical settings, or is it still too far removed from the deeply human aspects of pain management?

I’ve been to the emergency room a lot lately, and it occurs to me that it could be far more helpful, assuming that the patient is coherent and verbal when they arrive at the ER, to introduce them to a pain bot that inquires about their pain, as opposed to asking them to rate it from 1 to 10.

In high-pressure environments like emergency rooms, where time is often limited and the staff is spread thin, having an AI that can take an initial, detailed pain assessment could help streamline the process. In a busy ER setting, where every minute counts, this could save valuable time that would otherwise be spent asking standard questions or trying to gauge the severity of pain based solely on a brief number scale.

Of course, there’s the issue of when to transition from AI to human care. While the bot could gather a lot of helpful information, human judgment would still be necessary to interpret the results in the broader context of the patient’s health and medical history. But for an initial intake or triage process, this kind of AI could be a game-changer.

Now that you bring that up, I see the problem: it might be viewed as something that gets in the way of providing necessary care, interrupting the process so that the AI steps in to talk about pain before any humans get to learn what the symptoms are. Probably not the best way to arrange things.

One potential way to integrate the AI without disrupting the human assessment process might be for it to function as a complementary tool rather than the first point of contact. For example, it could work alongside triage nurses or ER staff, helping to gather initial information while the human healthcare providers are still present and actively engaged with the patient. The AI could assist by structuring the conversation and asking relevant follow-up questions that might help the human staff better understand the pain without overshadowing their role in making that initial connection with the patient.

Alternatively, if the AI were used in a follow-up role—after the patient has been seen and evaluated by a doctor or nurse—it could help ensure that ongoing pain management is addressed effectively, perhaps even tailoring recommendations for pain relief or additional care based on what the patient shares during their interaction with the bot. That way, the AI could still provide value without interfering too much with the more personal, direct care that’s so essential in the ER.

One thing I have noticed in the ER is that everything is so fast-paced that those smiley faces don’t even get presented. So I’m imagining an AI bot stepping in during the initial process, while the IV is being set up and so on. What if you had an AI asking the patient about their pain during that process, in the ER personnel’s presence, so that the people taking blood pressure and setting up the IV can get all that done while the AI handles the questions about pain?

In the fast-paced environment of the ER, where staff is juggling multiple tasks at once, having the AI step in during the initial setup, like while IV fluids are being administered or blood pressure is being taken, could be a way to ensure the patient’s pain is addressed without interrupting or slowing down the necessary medical procedures.

If the AI is integrated into that process, it could gather useful information about the patient’s pain while the medical team is focused on setting up and performing other tasks. This allows the medical team to focus on their work while the AI collects pain-related data, which could then be used to prioritize the patient’s care and treatment more effectively.

Once the AI has gathered the information, it could feed that data directly to the medical team, giving them a more complete picture of the patient’s condition without interrupting the flow of care. This kind of multitasking would reduce the burden on the staff, ensuring that the patient’s pain is addressed more comprehensively and quickly.

As you mentioned, though, this approach would require careful integration. The AI should not take over or distract from the human staff’s efforts, but rather work alongside them in a way that complements the tasks already in progress. It would also need to be clear to patients that the AI is gathering information for their benefit, so they don’t feel like they’re being ignored or reduced to data.

Okay, time for a reality check, I said. There’s no way this is a novel discussion about the subject matter, so let’s see what the research actually says.

What the research actually says (the discussion)

There are lots of ways that AI can recognize pain, including behavioral and neurophysiological methods.

For example, AI facial recognition is amazingly accurate, enabling detection of specific facial expressions correlated with pain. It can also recognize body postures such as a patient looking downward or leaning forward, which are often indicative of pain.

In terms of neurophysiology-based methods, electroencephalography (EEG) can track changes in the brain that act as “biomarkers” of pain, and other measurable nervous system activity, such as sweat gland response and heart rhythm, has also been used to correlate with pain.
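To make the multi-signal idea concrete, here is a toy sketch of how pain-correlated signals like these might be fused into a single estimate. Every feature name, weight, and number below is invented for illustration; real systems train models on clinical data rather than using hand-picked weights.

```python
import math

def estimate_pain_probability(features, weights, bias=0.0):
    """Toy logistic fusion of normalized (0..1) pain-correlated signals.

    Not a clinical model: the weights and bias are made up for this sketch.
    """
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # squash to a 0..1 probability

# Hypothetical normalized readings for one patient
features = {
    "facial_grimace_score": 0.8,    # e.g., from facial expression detection
    "eeg_pain_biomarker": 0.6,      # e.g., an EEG-derived feature
    "skin_conductance": 0.7,        # sweat gland (electrodermal) activity
    "heart_rate_variability": 0.5,  # autonomic nervous system response
}
weights = {
    "facial_grimace_score": 2.0,
    "eeg_pain_biomarker": 1.5,
    "skin_conductance": 1.0,
    "heart_rate_variability": 0.5,
}

p = estimate_pain_probability(features, weights, bias=-2.5)
print(f"estimated pain probability: {p:.2f}")  # prints roughly 0.72
```

The point of the sketch is only that no single channel is trusted alone: a high grimace score with flat physiological readings yields a lower estimate than agreement across channels, which mirrors how the research combines behavioral and neurophysiological evidence.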

It’s an understatement to say that patients do care whether they are evaluated medically by AI– they’re largely against it.

A survey by Pew found that 60% of Americans said they would be uncomfortable with a provider relying on AI in their healthcare to do things like diagnose disease and recommend treatments. 38% said that AI used for such purposes would lead to better health outcomes for patients generally, 33% said that it would lead to worse outcomes, and 27% said it wouldn’t make much of a difference.

This definitely undercuts my naive assumptions about immediately putting “pain bots” in hospitals, since, in the short term at least, no amount of work you put into making AI seem “empathic” is going to matter if a patient has already ruled out letting a bot evaluate them.

But let’s examine what it actually means to be “biased” against AI.

A study that evaluated the empathy and accuracy of ChatGPT responses to questions on the “AskDocs” subforum on Reddit found that AI responses to questions were deemed both more accurate and empathetic…by “a team of licensed health care professionals,” who may or may not have gone into it already knowing that at least some of the answers would be of AI origin. That doesn’t necessarily say anything about the perceptions of patients, whether the healthcare professionals knew about the AI or not.

A study from earlier this year found that “people felt more heard when they received an AI than a human message, but when they believed a message came from AI this made them feel less heard,” from which one of the researchers, Yidan Yin, concluded that the research “basically finds a bias against AI. It’s useful, but [participants in the study] don’t like it.”

It’s important to note that this research was not conducted in a medical context, but rather an attempt to answer the general question “Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?” Nan Jia, also on the research team, was quoted as saying “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities.”

This is an improvement over the “AskDocs study” to the extent that the people who were the targets were asked directly to evaluate the quality of AI responses. But I find this subheading, and the quote from Yin that follows, intriguing:

AI offers better emotional support

The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.

Yin explained, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating.”

These statements, along with the accusation of “bias,” strongly imply that the responsibility for people not trusting AI rests entirely on those who mistrust it.

I have trouble getting my head around the apparently objective statement that “AI offers better emotional support” if the effects of that emotional support immediately vanish the moment that it’s revealed to come from AI.

Imagine that you’re told that you’ll be getting two calls from two someones who are concerned about your well-being. You answer the first phone call and have an encouraging but kind of mundane conversation with the caller, who stumbles over their words but ultimately impresses on you that they care. Then you get the second call, from someone who sounds like they really have a good grip on what’s bugging you. They listen to you talk about your situation and express disappointment at your current bad luck, but provide hope and motivation for your future. Honestly, you feel so much better after the second call than the first. You feel so much more “heard.”

Then you’re told that the second phone call was from a computer application programmed to sound authentic and effusive in its expressed interest in your well-being. Does your perception of that latter conversation change?

And regardless of whether it changes or not for you, can you exercise the amount of empathy required to understand why it does for others?

If the second caller’s voice had sounded clipped and stilted, clearly like text-to-speech, it assuredly wouldn’t have had the effect of making you feel more emotionally supported. But you wouldn’t have minded, because there’s no way you were duped in that scenario– a child could’ve seen through it. But what if you were completely convinced, to the point that you made yourself vulnerable in some way, only to find that it wasn’t a human to which you directed these raw emotions?

More human than human

The article describes how “individuals further reported an ‘uncanny valley’ response—a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape navigated by AI-human interactions.”

The uncanny valley is the perceived gulf that exists between human-like entities and actual humans, specifically our emotional responses to it. I googled the term and received an AI definition that gave the following examples of the uncanny valley phenomenon: “Androids, computer game characters, life-like dolls, the 2004 children’s Christmas movie The Polar Express.” The definition includes both unease and disgust as emotional responses. 1

The discomfort with robots of various kinds tends to arise from how much better they can be at seeming human than actual humans are. If the computer in WarGames asked how your day went, you might be suspicious about why it would even pretend to care.

Conclusion?

Experience is subjective– you are literally the only one who knows what it’s like to be you. But that’s the source of knowledge that you use to get a glimpse of what it’s like to be someone else. To take part, even a little bit, in what it’s like to be them. Participate in their experiences. But AI has no subjectivity, though its existence is predicated on how good it is at pretending otherwise. Omniscient but friendly, like Janet in The Good Place.

But one thing Janet was never good at was feeling someone else’s pain, and pain is the #1 indicator for humanity as a species that something has gone wrong with our bodies. Maybe it feels like a cruel joke to members of the 60% who experience chronic pain to expect them to get over their “bias” against AI trying to quantify and qualify it.

And yet that remove from humanity is how AI can democratize access to healthcare, compensating for the limitations on the knowledge, experience, and hours of sleep that human doctors can attain daily, throughout the world. An AI doctor can be far better than no doctor at all.

The article linked above has a quote from Erik Duhaime, co-founder and CEO of Centaur Labs: “AI for healthcare has never been about replacing doctors, but doctors who use AI might end up replacing those physicians who don’t.”

One area where doctors have sparse reserves is time to sit and listen to patients describe their pain, which is not to say that it’s in any way a waste of time to do so. But in light of this cold fact, AI might be able to compensate, to some extent, by listening. Empathy isn’t demonstrated by loudly proclaiming that you’re doing it, but often takes the form of just sitting and listening, and AI can do that.

So maybe that’s a way to take advantage of AI’s tremendous access to knowledge about pain without pretending that it feels pain– have it listen, productively, to those experiencing it. AI may never be able to feel your pain, but perhaps it can understand it, and use that understanding to help you deal with it.

“I feel your pain.”

“I can learn about and understand your pain.”

That’s a far more reassuring thing to hear, anyhow.

  1. It might be surprising to hear the word “disgust” come up, but “animal reminder disgust” is a specific type of disgust elicited by reminders of our own animalistic nature, such as bodily functions, violations of the body envelope such as violence or surgery, and of course death itself. The uncanny valley here is between us and our animal origins, as it becomes harder to distinguish between the two– just as it becomes harder to distinguish between ourselves and AI.

    Cyborgs are an interesting crossover to me in how they can elicit the uncanny valley response in both directions. Think of Officer Alex Murphy in the RoboCop series, Tom Cruise’s eye replacement surgery in Minority Report, the Borg, or of course, Anakin Skywalker, aka Darth Vader. Generally speaking, the message is that mixing robots with humans, physically at least, is gross, primarily because of how the human part tends to just bring things down by being all organic, mortal, prone to bleeding and other spontaneous emissions of effluvia. ↩︎

What should the bus driver call you?

How would you feel if this man called you “babe”?

Here’s a sticky one…or maybe not so sticky. Jo Walters writes in the Guardian about her experience of being called “babe” by a bus driver, and about how she has been viewed and treated after making a complaint about it:

In the past week I’ve been to the cinema twice (The Artist, and The Descendants – both fairly good), stocked up my fridge (meatballs and pizza on the menu this week) and arranged to catch up with friends. Oh and I’ve been called “an irate woman”, “a daft woman”, a “silly, silly woman” told I “must look like the old back of a bus”, to “Get a life!” and that “I need an operation, to remove the chip from [my] shoulder” – all by people I don’t know and have never met.

What is my crime? Just politely contacting my local bus company to let them know that I don’t like it when their bus drivers use terms such as “love”, “darling” and “babe”. I pointed out that I generally find their drivers friendly and courteous but that when some of them use that language I find it demeaning. I wasn’t angry, I didn’t ask to make a formal complaint, I wasn’t trying to get anyone into trouble, I’m not trying to get anyone fired, I didn’t threaten legal action – I just thought they might like to know how the actions of some of their staff made me feel.

I received a prompt and friendly response agreeing that it wasn’t really appropriate language and not something the company would condone. They promised to let drivers know that this sort of language isn’t appreciated and I didn’t really think much more of it until my local radio station, Brighton’s Juice 107.2 mentioned on Facebook that drivers had been asked not to call people babe. From there I spotted it in our local newspaper, the Metro, the Mail Online, found it was discussed on Loose Women and various local radio stations.

The thing I find weird is that I don’t really think this is news; I just sent some feedback to a company. It seems that people find the idea that language can affect others a bizarre concept and that it is “just political correctness gone mad” (that gem came up a few times). Much of the coverage and comments paints me as some angry woman who should be grateful for the apparent compliment.
I didn’t make it a gender issue; the coverage and comments did.
The thing is though, I personally find terms like “babe” coming from men to be overfamiliar, sexist and patronising. I’m allowed to interpret their words in that way, it doesn’t make me irrational or oversensitive. It doesn’t mean I don’t have a sense of humour or that I should be grateful for the attention. It is interesting to note that lots of the critical comments are from men.

I don’t think it’s that people find the idea that language affects others bizarre– it’s more likely that they either fail to understand the concept of benevolent sexism, fail to recognize benevolent sexism when they see it, or simply don’t agree that this counts. Why would anyone but a cold, angry, PC-obsessed woman fail to see being called ________ (“babe,” in this case) as flattering, or at least benign? What kind of person is offended by a compliment or a nicety?

Context matters, naturally. In this case the entire discussion is about context, but it’s important to point out a cultural difference specifically. I think most Americans would see it as a no-brainer that public servants– or indeed, anyone who works in customer service– should not call patrons/customers “babe,” but in the UK it’s not just kind old ladies in department stores who will refer to you in diminutives; it’s everybody. I didn’t mind hearing “Ta, love” from a ticket-taker on the train, and in fact found it nice, because I knew it’s something practically every ticket-taker says to practically everyone. It would make me sad if “Ta, love” went away, even though I no longer ride trains in the UK. So in that regard I can understand people being miffed about a crackdown on the kind of language bus drivers are allowed to use, except that “babe” seems to me to be fundamentally different (in England) from “love.” Here in the states, hearing either one from a male bus driver would probably seem equally inappropriate.

A male bus driver? Yes, because of course it’s a gender issue. Being called “honey” or “dear” by the old lady at the department store is a different beast from being called the exact same by a man in the same place, much less, for example, at the DMV (the former being far more elective than the latter). The division between a nicety and an inappropriate remark depends on who it’s coming from as well as where you are. And everyone seems to treat the matter of where that division lies the way Justice Potter Stewart famously described identifying pornography: “I know it when I see it.” Or in this case, hear it. One commenter on the Guardian article wrote:

I like it when I get called ‘bach’ which means little but is used like ‘pet’, by Welsh speakers in my local shops.
Feels like an endearment.
The writer should keep her outrage for the important issues.
If a bus driver calls you a ‘ho’ then complain by all means, but babe is used in a positive way by many people, girls call other girls babe all the time.
Using words like love, bach, pet, dear all help oil the wheels of social intercourse.
Rebuffing something said with good heart is just downright rude.

I didn’t see anything in Walters’ piece that sounded like “outrage,” but it’s not surprising to see her comments portrayed as such. Along with the sexism-specific trope of “You should find it flattering,” I wouldn’t be surprised if the term “outrage” was used more often to portray complaints of offense as irrational and hysterical (yes, that word used intentionally) than to describe actual reactions to wanton cruelty or gross violations of decency. When reacting to a complaint by someone that something is offensive which you find innocuous, it seems that the immediate response is to magnify the offense far beyond what was originally stated. I’m guilty of doing this myself all of the time, and it’s a hard urge to control. Why am I not doing it now? Because I don’t see a complaint about being called “babe” as a threat. I see the complaint as legitimate, but even if I didn’t it wouldn’t threaten my self-image to learn that in this case, someone finds something unacceptable that I don’t. Re-examining my assumptions, or examining them for the first time, wouldn’t be painful. Being intellectually humble is comparatively easy. It’s harder to be humble that way when you, or people you agree with and/or care about, are the source of the offense.

Notice I haven’t said that offense can’t simply be illegitimate. I certainly think it can, but would point out that our conclusions about such tend to be shaped by the effect the conclusion holds for our self-images. Ethical dissenters— and by that I mean, people who disagree with the majority for ethical reasons– are a living, breathing, practicing condemnation of what most people regard as normal or at least uncontroversial, and many find that disturbing. Understandably so, but the problem comes when the next step is to misrepresent the dissenters in order to deflect their grievance. This can be counted on to happen regardless of whether said grievance is legitimate or not. Simply speaking up about it is enough to set the wheels in motion.

A few other tropes from the comments:

Let me give you a tip. You always have a choice to take offence or not to take offence.
I strive never to take offence unless I’m absolutely certain that offence is intended. 

AKA “Your offense is your own fault” coupled with “Your offense isn’t legitimate unless I’m offended too.” The feeling of offense absolutely is not a choice, but the expression of offense is, which the commenter conflates here. He/she has it precisely backwards in suggesting that one shouldn’t express offense if none is intended, because people who have been offensive inadvertently are the only ones who would care and want to change their behavior. People who have offended on purpose will be at best unaffected, and at worst gratified by the news that their arrows have hit their mark.

Spot on!
I can see that despite the friendly intentions behind it, the language is totally and utterly degrading.
Oh hang on a second… I can’t
You must be so much fun to be around!

AKA “Can’t you take a joke?” coupled with another “Intent is all that matters.” Certainly intention matters, but again– that’s why we kindly explain to Grandma that it’s not the best idea to use the word “negro” anymore, and to Junior that calling his gaming pals “fags” when he bests them in a game isn’t cool.

are you seriously expecting generations of people to re think how they speak?

Yes, she is. This is the essence of political correctness; any word that someone, somewhere might find offensive must be eliminated, however harmlessly it was meant.
It’s all covered by that maddening word “inappropriate”. Inappropriate to whom? Also “unacceptable”. Unacceptable to whom?
Some self-righteous prude, that’s who.

Merriam-Webster defines a slur as “a: an insulting or disparaging remark or innuendo, b: a shaming or degrading effect.” I like that this definition includes both intent and effect, and doesn’t require that they be coupled. And yes, the process of discovering that certain language has the effect of degrading, dismissing, shaming, or trivializing people– that is, it amounts to a slur– and asking that it not be used on that basis is expecting generations of people to rethink how they speak. That’s sort of the point. Congratulations first commenter, you have grasped it!

I’m going to make some assumptions about the second commenter, but would bet money that they’re true: 1) he’s male (okay, his name is “Howard,” but I promise I didn’t look at that first), 2) white, and 3) straight. The grand trifecta of potential for dedicated ignorance of privilege and griping about political correctness. Which, if I were less of a person, would make me wish that he will be referred to as “babe” by every hulking male bus driver to enter his life forevermore.

But I’m nicer than that.

ETA: Okay, stop dancing for a minute while I clarify: No, I was not saying that white, straight men are the only people with unexamined privilege, the only people who complain about political correctness, and certainly not the only people who can be prejudiced. Prejudice is, ironically, an equal-opportunity pursuit. I’m saying that the people most ignorant about privilege tend to be the ones who have the most privilege, which means you guys sitting at the top of the privilege pyramid: straight white males. I’m actually least certain about race amongst those three traits, since we’ve seen ample evidence recently of sneering at political correctness by a certain straight black male.

By all means, please resume dancing now.