
Deus ex Smartphone: Healthcare Access Isn’t Going to Democratize Itself


One of my first-year classes in college was History of Theater, in which I learned how the Greeks built amphitheaters into hillsides, carving out a semicircle of seating around the stage to maximize what the audience could see and hear. The scenery for a play completes the circle, just as it does for any show in an amphitheater today. It’s the structure providing the necessary atmosphere for the experience.

Imagine sitting in such a theater, watching Euripides’ Helen, and seeing the demigods Castor and Polydeuces (Helen’s pissed-off brothers) descend into the scene by a wooden crane, a mechane, whereupon they put an end to all of this murderous nonsense, and everybody lives happily ever after. It’s a literal top-down solution.

That’s where the expression deus ex machina, or “god from the machine,” comes from. The term came to be used, and mocked, throughout the world of fiction as a plot device that provides a too-convenient, cheap ending to a story.

But my mind just keeps going back to that silly crane. It used to dangle a man dressed as a god before the audience, but these days he’d more likely be a techbro holding a smartphone, probably talking about the wonders of AI.

That’s on my mind today because in this post, I’m about to dangle a hypothetical mobile app in front of my audience (that’s you) to illustrate our country’s mess of a healthcare system, and perhaps even reckon with it. This play isn’t ending any time soon, and we need to find a role in it (else one is chosen for us).


Healthcare data and analytics company Arcadia recently launched its own talk show, Spicy Takes, to discuss “hot perspectives in healthcare” while sampling—you guessed it—spicy food. The first episode placed President and CEO Michael Meucci in conversation with Chief Product and Technology Officer Nick Stepro and Chief Medical Officer Dr. Kate Behan.

I watched it while reading about their SDoH (social determinants of health) package, which promises to justify the time and expense required of providers to consistently record SDoH data by creating registries mapping that data with diagnostic codes, for use in proactively identifying patients at risk and connecting them to resources. While looking over the tear sheet, I heard Meucci say this:

I think that this is such a great platform for digital health as we start to think about how do you democratize access. Because if a patient is concerned that they’re not going to get the right treatment because of the color of their skin or the community they live in, the smartphone is a great equalizer. We talk about what’s changed for the last 10 years—that, to me is the biggest thing, the fact that you can pull out your phone and get connected with a doctor in 15 minutes.

“To your point,” Stepro replied, “all of the technology and all of the access to healthcare in the world doesn’t change the fact that the single worst diagnosis you can have as a patient is being poor. You can’t address that with a healthcare institution. We can measure that poor people have lower outcomes, but ultimately, we need to find and attack the problem of homelessness and poverty because you can’t just solve that in a clinic or with a smartphone.”

I stopped reading and played that section of the show again. Meucci didn’t say that the healthcare industry can solve poverty with smartphones; he said we could democratize healthcare access. If that’s a spicy take, then you can call me Spice Girl, because that’s my healthcare platform now. But I suppose coming from someone like him, that’s practically revolutionary.

And he’s right. As a country, America is primed for solutions like that: over 91% of Americans have smartphones. Even households without broadband hang on to their smartphones, because of course they would—it’s a tiny computer that can do more than any of us ever seem to realize, or ever will.

Democracy, another word with ancient Greek origins, literally means “power in the hands of the people.” What would it even look like to do that with a smartphone?

Let’s do a thought experiment to find out.

Time to design a smartphone app.

Imagine that in the beginning of The Legend of Navigating the American Healthcare System, our player character is given their first smartphone.

On that phone there’s an app installed (that I’ve just invented) called HACK: Health Agency, Care, and Knowledge.

Health – A full, patient-owned medical history

Agency – Control over your care, your records, your choices

Care – The power to find, compare, and advocate for treatment

Knowledge – Because to be informed is to be empowered

Does your vision of this app include it conferring access to all of an individual’s health records, stored securely but also accessible in their entirety at any time? If so, you’ve envisioned something better than what existing patient portal apps currently provide.

So yes, let’s absolutely start there, if we’re designing an app that democratizes healthcare in America.

And remember that democracy means that the power is in the hands of the people—not the “patients.”

Problem: we’re not in the driver’s seat.

Social Drivers of Health (SDoH) is the category of data on an EHR encompassing the non-medical factors affecting an individual’s health. In other words, your life, from the hospital where you were born (if you were born in a hospital) to the destination of your organs when you die.

They’ve been called the social determinants of health, but the word “determinant” suggests finality, immutability—that there’s nothing you (or anyone) can do about it. A driver, on the other hand, suggests that while the deck may be stacked against you, things could always change.

How easily could you do that? *shrug* It depends, but we can safely say that “resident of the United States” is not an easy “driver” to change. We’re driving that road whether we want to or not.

And I hate to break it to you, but we live in a hostile health environment.

A 2024 Commonwealth Fund study, Mirror, Mirror 2024: A Portrait of the Failing U.S. Health System, set out to understand why America is doing so poorly by comparison—that is, going beyond the factor that rhymes with “schmooniversal schmealthcare.” The categories they used are:

  • Access to Care
  • Administrative Efficiency
  • Equity
  • Care Process
  • Health Outcomes

In all but one of those categories, America comes in dead last or next to last.

To summarize the report: Americans spend more on healthcare as a percentage of GDP than other countries while receiving lower healthcare system performance. They face the most barriers to accessing and affording healthcare. Their physicians and patients are the most likely to face hurdles related to insurance rules, billing disputes, and reporting requirements. Equity in healthcare access and experience is low. And Americans live the shortest lives and suffer the most avoidable deaths. All by a longshot. USA! USA!

The one exception in these categories is Care Process, where we came in second. Their comments:

Care process looks at whether the care that is delivered includes features and attributes that most experts around the world consider to be essential to high-quality care. The elements of this domain are prevention, safety, coordination, patient engagement, and sensitivity to patient preferences.

I interpret this result as an indication that some version of enabling people to take charge of their own healthcare is key to accessing that care in spite of all other factors. It could even, possibly, raise America in those other categories where we’re currently ranking dead last!

Okay, probably not, but it could definitely help us face the hostile health environment in which we currently exist:

Misinformation is everywhere.

  • We live in an era where vaccine misinformation spreads faster than the viruses vaccines prevent, leading to the resurgence of once-eradicated diseases, overwhelmed hospitals, and preventable deaths fueled by fear rather than science.
  • We live in an era where people google their symptoms and often reach the worst, scariest conclusions that inadvertently contribute to their paranoia, where “doing their research” on healthcare can lead to being convinced of conspiracy theories and pseudoscience. 
  • We live in an era where the president of the United States once advocated for injecting disinfectant as a means of staving off Covid, and in his next term has appointed a raw-milk-drinking anti-vaxxer as Secretary of the Department of Health and Human Services. 
  • We live in an era where social media influencers with no medical expertise gain massive followings by promoting unproven “natural cures,” convincing people to reject evidence-based treatments in favor of detox teas, essential oils, and dangerous fad diets.

We can’t afford anything.

  • We live in an era where Cost-Related Nonadherence (CRN), the failure of patients to take their medication as prescribed due to cost, is the primary form of medical nonadherence, with some patients forced to choose between “treating and eating.”
  • We live in an era where the term “dual ineligibility” refers to the status of undocumented immigrants in the U.S. who qualify for both Medicaid and Medicare, but are unable to access either one.
  • We live in an era where medical debt is the leading cause of personal bankruptcy, where a single hospital visit can trap families in a cycle of financial ruin, and where crowdfunding platforms have become a substitute for a functioning healthcare system.
  • We live in an era where rural hospitals are closing at alarming rates, leaving entire communities without nearby emergency care, prenatal services, or even a local doctor, forcing low-income patients to travel hours for basic medical attention they still might not be able to afford.

Neighbors hate and fear their neighbors.

  • We live in an era where in transgender healthcare, patients frequently encounter providers who lack adequate knowledge of gender-affirming care or hold prejudiced views that hinder appropriate treatment.
  • We live in an era where in reproductive healthcare, political and ideological barriers, including misinformation and ignorance, stand in the way of basic, safe medical care.
  • We live in an era where Black patients are more likely to have their pain underestimated and undertreated, leading to worse health outcomes.
  • We live in an era where in disability healthcare, patients struggle to have their pain, symptoms, and autonomy taken seriously, with providers sometimes dismissing concerns as psychological or unavoidable aspects of their condition rather than treatable medical issues.
  • We live in an era where in chronic illness care, patients—especially women—are more likely to be dismissed as exaggerating their symptoms, leading to years-long delays in diagnosis for conditions such as endometriosis, fibromyalgia, and autoimmune diseases.
  • We live in an era where in elder care, aging patients often have their autonomy disregarded, with medical decisions made on their behalf without full consent, reinforcing the notion that age diminishes a person’s right to control their own body and treatment.
  • We live in an era where fat patients are often told to lose weight as the solution to every health issue, leading to delayed diagnoses and overlooked conditions that have nothing to do with body size.
  • We live in an era where for immigrants, language barriers, lack of documentation, and fear of discrimination or legal consequences discourage people from seeking medical care, exacerbating preventable conditions.

But remember: “they” are us, and we all deserve better.

If, by this point, you’re still thinking about this in terms of how we can help them, stop it. That’s “patient engagement” speak, and our identity is not “patient.”

Our identity is “person,” i.e., member of the human species, class Mammalia, alive for every second of life (until we’re not), which makes our health, and our healthcare, a relevant part of our lives 100% of the time. Yes, even for doctors.

We all should get a remote control.

A note on dignity:
Meucci mentioned not getting the “right” treatment based on the color of your skin or the community you come from, suggesting that a smartphone could be “a great equalizer.”

That’s a powerful thought, given the indignity that confronts many Americans when they try to interface with the healthcare system at any level, including when they see their providers—whether the providers intend that or not. The hypothetical HACK app, simply by virtue of being an app, confers a sense of dignity that we might not get in the doctor’s office, or indeed anywhere else.

As a survey on dignified care put it, “Dignity is at the heart of personalization. Dignity means treating people who need care as individuals and enabling them to maintain the maximum possible level of independence, choice and control over their own lives.”

We live in an era where America’s healthcare system does not prioritize dignity. Is it possible to claw some of that back?

If you’re going to design a healthcare app to democratize healthcare access for people, that includes you.

In another Spicy Takes exchange, Stepro observes, “Isn’t it better when the consumer is educated and activated—after all, it’s our own body on the line? I’m glad folks are turning to Google or GPT for answers, even if they aren’t perfect, because it shows a healthier dynamic.” Behan responds that unvalidated or wrong information is hard to overcome, and Stepro sarcastically asks if misinformation in medicine has been a persistent issue.

Well, yeah, those problems face all of us, don’t they? We all consult with Dr. Google occasionally, because it’s free, and you can consult it at any hour and ask it any stupid question you want. The downside is that the answers aren’t reliable and can’t substitute for what an actual doctor might advise. And Dr. Google has no idea what your full medical history is (not that you want it to).

Some third-party apps like Ada Health improve dramatically on Dr. Google by using symptom checkers based on verified medical information. Chatbots based on large language models can certainly look up your ailments and dispense advice, although you should be wary if they encourage you to eat rocks. If you’re fortunate enough to have access to Wolters Kluwer’s UpToDate clinical decision support service, you can find loads of evidence-based data refuting social misinformation. You can even get mobile access to it, and at $60 a month that’s not too shabby.

It’s still pretty far from “free,” however, and UpToDate doesn’t know whether you have a medical condition that could make any recommendation it offers highly dangerous. But if that feature is integrated into the HACK app, you lose the danger of uninformed recommendations, and get to keep the endlessly useful medical library.

On that subject, what else can we pack into this thing?

What an app wants, what an app needs

So far, the HACK app has two big features:

A library of trustworthy medical information that you can consult for any reason, at any time, that’s informed by your medical history included in the app.

Your entire medical history, including all lab results, hospital stays, specialist care, etc. regardless of which healthcare provider you saw for any of these treatments.

Let’s continue stealing important features from other smartphone apps to integrate them into the HACK app, bearing in mind that they must be for the individual using the HACK app—not features designed for providers to gather data from, or to influence the behavior of, the patients they treat. 

What else?

Let’s say the app has an UpToDate-level database of education materials that connects to your specific data and diagnosis using MedlinePlus Connect. Give the app a chatbot that can pull from this database to answer all of your questions, regardless of how sensitive or embarrassing, and deliver that information in simplified terms without jargon. Now you’ve got a semi-omniscient doctor in your pocket who can tell your uncle (or RFK Jr.) to stuff it when he goes on about vaccines causing autism.
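MedlinePlus Connect is a real web service that maps a diagnosis code to plain-language patient education materials. A minimal sketch of how a HACK-style app might call it, following the service’s published request parameters; the wrapper function and the example diagnosis code are my own illustration:

```python
from urllib.parse import urlencode

# MedlinePlus Connect base URL and the standard code-system identifier (OID)
# for ICD-10-CM, per the service's documented request format.
MEDLINEPLUS_CONNECT = "https://connect.medlineplus.gov/service"
ICD10CM_OID = "2.16.840.1.113883.6.90"

def education_url(icd10_code: str) -> str:
    """Build a MedlinePlus Connect URL for materials about one diagnosis."""
    params = {
        "mainSearchCriteria.v.cs": ICD10CM_OID,  # which coding system
        "mainSearchCriteria.v.c": icd10_code,    # the patient's code
        "knowledgeResponseType": "application/json",
    }
    return f"{MEDLINEPLUS_CONNECT}?{urlencode(params)}"

# Example: education materials for type 2 diabetes (ICD-10-CM E11.9).
print(education_url("E11.9"))
```

Fetching that URL returns a JSON feed of links to consumer-level articles, which is exactly the kind of jargon-free layer the chatbot described above could draw from.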

Let’s say the app prioritizes having control over your own data and lets you update and make corrections to your EHR data using a souped-up version of OpenNotes. It also includes a data permissions management dashboard, with the ability to see an audit trail of who has accessed that information—even if there’s nothing you can do about it.
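A minimal sketch of what that user-facing access audit trail could look like; the class names, fields, and example accessors are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One read of the user's record, as the user would see it."""
    accessor: str   # who looked (a provider, a payer, an app feature)
    resource: str   # what they looked at, e.g. "lab-results/2024"
    purpose: str    # the declared reason for access
    at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log the user can review from the permissions dashboard."""

    def __init__(self) -> None:
        self._events: list[AccessEvent] = []

    def record(self, accessor: str, resource: str, purpose: str) -> None:
        self._events.append(AccessEvent(accessor, resource, purpose))

    def by_accessor(self, accessor: str) -> list[AccessEvent]:
        """Answer the question: who has been reading my chart?"""
        return [e for e in self._events if e.accessor == accessor]

trail = AuditTrail()
trail.record("Dr. Smith", "lab-results/2024", "treatment")
trail.record("Acme Insurance", "claims/2023", "billing review")
print(len(trail.by_accessor("Acme Insurance")))  # → 1
```

Even if, as noted, there’s nothing you can do about a given access, simply being able to enumerate who looked and why is the “control over your own data” the Agency pillar promises.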

Let’s say the app can also be a buddy who just happens to have a weird fixation on making sure you follow your treatment plan. It incorporates behavior modeling tools from Health Catalyst’s Upfront app to take over remembering stuff when your brain is full (i.e., cognitive offloading). “Hey, you were supposed to schedule that colonoscopy three weeks ago—want me to go ahead and set up the appointment, ya big baby?” Okay, to be fair, Upfront would be nicer than that.

Let’s say the app can create a localized map of all healthcare providers and resources in your area that you can filter by available services. It builds this using tools like Unite Us’s resource directory or ZocDoc’s appointment booking platform, but no referrals are required—you self-refer. “Hi, I have a weird rash and need to see somebody within a week. What do you have available and how much is it going to cost?”

Let’s say the app also has a filter that flags conditions you have, and procedures you might need in the future, that might become, you know, illegal in your area at some point. The app could tell you the next closest location where it’s still legal, and point to ride-sharing and other assistance to help you get there and afford it. It could even alert you to events like Texas Attorney General Ken Paxton suing HHS to bypass HIPAA protections and access data indicating you had an abortion.

For that matter, the app could shield you from (some of) the effects of federal cuts to health services with built-in compliance to existing regulatory measures that protect and preserve your data.

Let’s say the app has access to population health data showing the health risks you face most imminently and what you can do about them, incorporating those insights from Arcadia’s population health platform and Health Catalyst’s Ignite platform. The risks matter whether they’re nature or nurture, and you need to know ASAP what you can do about those affecting you.

Let’s say that provider map also lets you sort by pricing, using resources like ClearHealthCosts. It could point out doctors working to alleviate medical debt in partnership with Undue Medical Debt.

Finally, let’s say the app, while placing all of this individualized information and these resources in a little device in your individual hand, also puts you in touch with communities of other human beings affected by the same conditions you are, by offering a feature like HealthUnlocked. You were never alone in this, and here’s the proof.

Nice little fantasy app you’ve got there. Who’s going to make it, though?

Ah, the mask has fallen. The jig is up. The cat’s out of the bag, and the deus is off the machina. What now?

Just kidding. This is a thought experiment for a reason—I don’t expect anyone to make the app. America is ripe for such an app, we need such an app, and we have the tools to create such an app—but that doesn’t mean we’re going to.

But let’s continue to be optimistic– perhaps I’m wrong on that second point. So, okay, what would developing the HACK app require?

  • A governing body to make sure the app is trustworthy
  • A sustainable funding model (Stop laughing– we just got started!)
  • Interoperability across all EHR vendors (I said stop laughing!)

Assume that we have satisfied all three requirements. This is, once again, a thought experiment.

Now, can we seriously address the matter of who makes the HACK app– and why?

What are our options?

The ONC

This one is obvious, because they already oversee TEFCA and champion FHIR adoption, and interoperability is their dream. They also have regulatory power without a profit motive. But they don’t make software—they just regulate it. Somebody else would have to make it, and put the ONC in charge.

A private tech company (e.g. Microsoft, Google, Apple)

Microsoft attempted something similar with HealthVault, a site where users could store and share their health information, which fizzled and died in 2019.

Google Health was born in 2008, died, and then came back again, finally dying off for good in 2023.

But Apple Health is alive and kicking, using Fast Healthcare Interoperability Resources (FHIR) to let users retrieve, import, and view their health data on their iPhones and iPads. FHIR standards, importantly, were developed and adopted after Microsoft and Google took their respective shots.

When Microsoft and Google started leveraging FHIR, they were no longer in the “patient records for patients” business. Azure Health Data Services and Google Cloud Healthcare API are data platforms used by healthcare systems, payors, research institutions, and so on.

But in none of those cases was the focus on providing services based on patient records—just the records themselves. Apple Health can only function as a sort of meta-patient portal: users have to log into their actual patient portals to access their records, and their providers have to agree to let Apple share the records in the first place.
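For concreteness, the FHIR access pattern behind all of this is just REST over HTTPS: a GET for a resource type plus an ID. A minimal sketch, assuming a placeholder server URL and skipping the OAuth/SMART-on-FHIR authorization step that real patient portals require:

```python
import urllib.request

# Placeholder FHIR R4 endpoint; a real one belongs to a specific provider
# and requires an OAuth access token obtained through their portal.
FHIR_BASE = "https://fhir.example-hospital.org/r4"

def patient_request(patient_id: str) -> urllib.request.Request:
    """Build the standard FHIR read request for one Patient resource."""
    return urllib.request.Request(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},  # FHIR's JSON media type
    )

req = patient_request("12345")
print(req.full_url)
```

The same pattern, with different resource types (Observation, MedicationRequest, Condition), is how an app would pull lab results, prescriptions, and diagnoses once a provider grants access.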

If a private company like this developed the HACK app, you could argue that it democratizes access far more than the patient-portal-like products these companies previously developed, but, again—it would be their product, for better or worse, and arguably so would we.

A public-private partnership

This means:

  • Private tech company builds the infrastructure.
  • Nonprofit coalition manages the project.
  • ONC (or other federal agency) sets the standards and governs the data.

I guess that’s an option. But if this combination of entities could accomplish something like the HACK app today, why haven’t they done so already?

Who’s going to own it?

Taking on the project of creating the HACK app through that kind of partnership would be a tacit admission that the current system has failed, and that it’s going to take an app to save it—or at least, to survive in the face of that failure.

That’s the paradox of designing a “subversive” app promising to democratize healthcare through the backdoor, while only requiring access to all of the health records that healthcare systems are refusing to share right now, even after the ONC has hounded them to do so for over 20 years. 

Each of the app’s features “stolen” from an existing technology really would have to be stolen, and it’s hard to imagine healthcare tech companies welcoming someone pirating their platforms.

On the other hand, it’s also hard to imagine a better example of the healthcare industry doing what it can to make a difference. “I helped someone understand their own medical records and make plans for future treatment today, when otherwise they wouldn’t have” is not nearly as sexy a claim as “I helped someone out of poverty today,” but it’s a lot more realistic– and on a higher scale, both of those claims could easily be true.

But because healthcare tech platforms sell patient engagement tools to providers rather than to people, there’s no motivation to develop a HACK app per se.

And even if the motivation were there, America has a population of—what—over 340 million at this point? How’s the HACK app going to reach all of us, or even a large fraction of us?

How do we get this kind of reach?

Let’s assume that the HHS is developing the app—it would have to, to approach anywhere near that reach.

I’ve actually done a lot of research and writing lately about another app, developed by another U.S. federal governmental department, that reached as many as 64 million—while also stringently adhering to high security and data protection standards and relying on nationwide interoperability and data integration. It’s installed on my phone now, actually, though I’ll admit that I haven’t used it recently.

Maybe the HACK app could take some lessons from it?

  • Federal development and oversight—If HHS takes direct ownership of the app, just as this other agency did, that would mean developing the app in-house rather than outsourcing it to private industry.
  • Security and data protection—The HACK app would need to encrypt personal data, require strict user authorization as well as access control and permissions management, and comply with federal security standards, just as the other app did.
  • AI and automation for user navigation—Both apps rely on automated data processing, proactive notifications and engagement, AI-driven risk assessment, and smart eligibility and routing systems that guide users through decision trees based on their data.
  • Large-scale user support and infrastructure—Both apps must be scalable to handle millions of simultaneous users, both use mobile-first design, and both require redundancy and real-time threat monitoring for resilience against system failures and cyberattacks.
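The “smart eligibility and routing systems” bullet above can be sketched as a tiny decision tree over facts already in the user’s record. The rules and field names here are invented for illustration; the 138% figure is the common income cutoff (as a percentage of the federal poverty level) for Medicaid expansion eligibility:

```python
def route(profile: dict) -> str:
    """Route a user to a next step based on their (hypothetical) profile.

    Each rule checks a fact the app would already hold, so the user
    never has to know which program or feature to ask for by name.
    """
    # Uninsured and under the Medicaid-expansion income line? Screen first.
    if profile.get("uninsured") and profile.get("income_pct_fpl", 999) <= 138:
        return "medicaid-screener"
    # Overdue preventive care beats everything else on the dashboard.
    if profile.get("overdue_screenings"):
        return "schedule-screening"
    # Flagged cost-related nonadherence? Point at assistance programs.
    if profile.get("rx_cost_burden"):
        return "drug-assistance-finder"
    return "dashboard"

print(route({"uninsured": True, "income_pct_fpl": 120}))  # → medicaid-screener
```

A production system would obviously be larger and data-driven, but the shape is the same: the routing logic lives in the app, so the burden of knowing where to go stops living in the user’s head.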

That’s a very general list of requirements, but if another government-developed app can succeed on this level, couldn’t the HACK app do the same? Assuming that the HHS has access to all information and other resources required to do it, that is.

Now, if your answer is “Yes,” how shocked will you be to learn that the other app is CBP One? You know, the app developed by Customs and Border Protection to scan the faces of migrants and use that as a basis to determine if they can enter the country? The one that Trump shut down on his first day in office, forcing me to defend it after bashing it for months? Yes, that one.

I know, different government agency altogether. Different goals altogether.

But that’s my point– regardless of how you think about immigration or healthcare, it says a lot that even after such an app was (successfully) developed to regulate immigration, it’s impossible to imagine the government developing a similar app to get healthcare access to Americans.

CBP One has something else in common with regular patient portal apps—it wasn’t developed for its intended end users, but rather for the organizations providing the app. And as with patient portal apps, that didn’t stop government officials from boasting about how the app provides migrant empowerment: “There’s a lot of people who would love to migrate to the United States. In essence, they see CBP One as sort of a self-petitioning mechanism that we’ve never had before.”

*cough* So, anyway…

After all of this, have we democratized access to healthcare yet?

No, but we’ve shown that it’s possible to make a tool for getting there.

The U.S. in 2025 is a country:

  • where the best way to reach the greatest number of the population, regardless of demographics, is via a smartphone
  • with a disaster of a healthcare system that we have no choice but to navigate
  • where, within that system, our healthcare needs are socially driven out of our hands
  • where huge advancements in healthcare technology have been made, and continue to be made, every day
  • whose government has already built a large-scale, high-security, interoperable app for mass data processing, supporting daily access by millions of people. Granted, that was for a very different purpose– but still, they did it

All of the problems standing in the way have been solved—just in different directions, for different people, with different purposes.

And now, the goddess Panacea would like a word.

She’s been quietly waiting in the wings, refusing to step anywhere near that cursed crane, even though she’s arguably the most qualified to do so.

She wants us to remember that America is now an older country than it ever has been, and older folks are sicker folks. They’re also notoriously bad with tech—but they’ve come far since the days when everybody was posting screenshots of their parents failing spectacularly at texting. And we’re at the point where the first generation to grow up using computers is eligible for AARP, anyway. So while the HACK app won’t replace their knees later on, it would be the next best thing to having a personal nurse (or tireless family member) with them 24/7.

She also points out that administrative efficiency is one of the categories included in the Commonwealth study where the U.S. tanked, with wasteful administrative spending estimated as high as $570 billion in 2019. And the HACK app could streamline patient access to records, real-time cost transparency, and insurance verification outside of the doctor’s office. Just sayin’.

Lastly, she wants us to know that the deus ex machina isn’t always what we think it is.

If your job is making boots, and you make boots for soldiers to wear to go to war, then boots are not your deus ex machina for winning the war. They’re just the tiny but significant contribution you can make, using the power and skills you have, to make winning the war more possible.

Likewise, if you’re in the business of making healthcare apps, your apps are not your deus ex machina for democratizing access to healthcare—they’re the tiny but significant contribution you can make, using the power and skills you have, to make democratized access to healthcare more possible.

She departs stage left with a warning: Stop hanging gods from cranes, she says. Just build some damn ladders, and let people climb.

Things you might not want to say about hot car deaths


I live in Wichita, Kansas. Kansas is a place of extreme temperatures– it can get bitterly cold in the winter, and deathly hot in the summer. Today, for example, the high is supposed to be about 106.

On Thursday, a baby died here in the heat. Another hot car death. She was 10 months old, and left in the car for two hours while it was 90 degrees outside.

In this case her name was Kadylak, and she was the foster daughter of two men in their late 20s who also have several other foster children.

If you live in any place where it routinely becomes very hot in the summer, you’re probably familiar with the story: the father forgot that the child was in the car. He went about his day somewhere else while she remained there. In that confined space, the baby died of heat stroke. The father is distraught. He didn’t mean for this to happen. That father, in this case named Seth Jackson, wants to die himself, according to his mother.

On average, 38 children die in the United States every year from hyperthermia, or heat stroke, inside of hot cars, according to the advocacy group Kids And Cars. Over 600 have died in this way since 1998. In roughly half of the cases, the parent/driver forgot that the child was in the car.

Proposals have been made for technological solutions to this problem: a way to force parents to remember that there is a small child in the car. A child who may be asleep and therefore making no noise him/herself; a child whose car seat is in the back of the car because he/she is too young to sit in the front seat of a car with airbag technology; a child whose car seat might not only be in the back of the car, but facing the back of the car, so the driver won’t even see his/her face without a mirror installed.

A high school student from Albuquerque (another hot place) named Alissa Chavez won an award last year for designing an alarm system called “The Hot Seat” which notifies the driver if a child is left in a vehicle. There are also, as you might expect, apps for that. Kids And Cars has a petition to the White House asking for funding to be allocated to the Department of Transportation to research technology (the nature of which isn’t specified in the petition) to tackle the problem of children being left in hot cars, and also to “require installation of technology in all vehicles and/or child safety seats to prevent children from being left alone left alone [sic] in vehicles.”

After so many years of hearing about children dying in this way, and listening to people’s reactions to the stories, I’ve noticed a few trends in these reactions. Not positive trends. Trends that sound, quite frankly, a lot like concerted efforts at empathy avoidance. I’d like to address a few of these and explain why I find them so problematic.

1. “I can’t believe he/she forgot that she had a child.”

In the roughly 54% of cases in which a child was left in a hot car because he/she was forgotten, the parent didn’t forget that he/she had a child. He/she forgot that the child was in the vehicle. Big difference.

2. “This parent must have been drunk/mentally disabled/pathologically stupid/evil.” 

In this case, at least,

Neighbors described Jackson and his partner as doting parents. “They are two of the most kind-hearted guys that I have ever met. And I hate that there’s so much controversy right now with babies’ being left in the car, because I truly don’t feel from the bottom of my heart they would ever do this on purpose,” said Lindey TenEyck, who lives across the street.

3. “This parent should be ‘forgotten’ in a jail cell for about 50 years and see how he/she likes it.”

…..
Never mind, your capacity to empathize is clearly broken. I dearly hope you have no children of your own– not because you might leave them in a hot car, but because I can see you banishing them to Siberia the moment they first burst into tears at the hospital. They wouldn’t even make it to the car.

4. “I just can’t imagine doing/having done this with one of my children.” 

All right, this is the big one. This is the main thought I want to address.

The fact that you can’t imagine something like this means very, very little on the one hand, and quite a lot on the other.

Your not being able to imagine something means very, very little, I should say, in terms of its truth value. Not being able to imagine something is called a cognitive constraint, in that it’s hard to meaningfully process a concept if you lack the ability to get your mind around it in the first place. But that doesn’t mean it’s not true.

Plenty of people misconstrue evolution, for example, because they just can’t get their minds around the length of time it would take for the genetic structure of a species of organisms to change sufficiently for their progeny to become a different species, and so you get bizarre straw man characterizations of evolution that have no correlation to reality, like the crocoduck for example.

Now, just because Kirk Cameron is unable to properly imagine how evolution really works, that doesn’t mean that evolution doesn’t work. It just means that his poor brain, for whatever reason, is unable to grok the concept. He can’t grasp that evolution is true because the only version of it he’s willing or able to entertain is a caricature.

Likewise, your inability to imagine doing something like forgetting your own child in the back of your own car might be a caricature of a different sort– an unwarranted but entirely understandable mental distancing from the idea that such a horrendous tragedy could ever have happened, or especially could ever happen in the future, to one of your own children because of your own negligence.

Let me emphasize those two words again– entirely understandable. It’s entirely understandable to banish from your mind the thought of something like this happening in your own life, because if a parent went around seriously considering that any and all tragedies which have ever ended the life of any child could happen to his or her own children, he/she could be rendered paralyzed with fear. It’s possible that this person would become unable to function as a parent if that happened, because parenting involves risks, and imagining the worst possible consequence of every risk has a way of preventing people from being willing to take any risks.

Right?

Okay, but here’s the problem with that, and this is the part that means a lot, as I mentioned– being unable or unwilling to conceive of yourself doing something, especially a thing which involves forgetting something important with disastrous results, has the effect of inhibiting your ability to empathize with people who have done that thing. People who– this is important– very likely would also have said that they would never forget their child in a hot car, and who would themselves have condemned any other parent who did so as drunk, mentally disabled, pathologically stupid, and/or evil. Yes, I’m quite sure that Seth Jackson himself would’ve said that.

So what ends up happening is that when someone like Jackson does forget, and a child ends up dying, there are endless other parents out there– not necessarily any smarter or more responsible or loving or conscientious– who nevertheless have to condemn what he did in the strictest terms. This person, described by his neighbor as lying on the ground near his car, “practically in the fetal position,” is experiencing the sort of pain that no parent ever wants to experience. The kind no parent could ever forget. And yet he is assumed to be the worst sort of human being imaginable. It’s very likely that right now, he would not disagree.

Except the problem is, he isn’t. He’s a parent who made a mistake. The problem with shutting off empathy to this person out of a sense of self-preservation, or rather a preservation of the image of oneself as a good parent who would never do this, is that it doesn’t fix anything. It does absolutely nothing to prevent this from happening again. And again, and again, and again. Which brings me to the last thought.

5. “Pushing for [insert proposed safety measure here] means blaming [insert manufacturer here] for this sort of thing instead of the negligent parent.” 

No, it doesn’t. No more than any other safety device invented since the beginning of time has meant this.

When you and I were babies, we didn’t travel in super-safe car seats in the back seat, facing backward. Maybe we were in car seats. But they weren’t the same kind, and they were probably in the front seat or maybe even on the floor. In such a position, I can’t help thinking that our presence there, even while asleep, was more of a reminder to Mom or Dad driving us around that we were in the car.

Does that mean that the backward-facing seats in the backseat are bad, and the practice should be ended? No, of course not. It means that moving car seats to the back seat– done in the first place because the introduction and standardization of air bags made the front seat dangerous for a small child– may have created a new risk of its own, one which deserves its own safety response. It makes absolutely no sense to slam on the brakes (figuratively speaking) when it comes to this concern, and insist that this is where safety measures end– that nothing should be done to prevent parents from forgetting a child in a car because it’s just their own fault. They’re horrible people and deserve to suffer, and that’s where it ends, right?

No.

Do you care more about making sure parents suffer when their children die, or do you care more about preventing the children from dying? Because trust me, the first one is going to happen regardless.

Parents can make horrible mistakes. Good ones. Smart ones. Capable ones. That’s the risk of being a parent– you’re going to screw up sometimes. If you’re lucky, the results won’t be devastating. That of course doesn’t mean that it’s all up to luck, but there is definitely a lot of luck involved.  It’s okay to acknowledge that. It doesn’t mean you’re admitting to being a terrible parent. If it helps, you don’t have to announce it to the world– I’ll do it for you.

I know that the pressure to appear perfect is neverending. But don’t let that get in the way of empathizing with people who have clearly experienced tragedy, because they’re already suffering enough. And certainly don’t let it get in the way of supporting help for parents who need it. Because in the end, it’s better that they get that help, isn’t it?

Who knows, you might even benefit from it too. Or your kids will. Or their kids.

Repost: Equality worth working for


I have a dream that one day this nation will rise up and live out the true meaning of its creed: “We hold these truths to be self-evident: that all men are created equal.”  — Martin Luther King, Jr.

The true meaning, mind you– not merely what is reflected in the law, but in how we see each other.  How we evaluate each other’s worth, respectability, humanity.  Not by the color of each other’s skin, but the content of our characters.  That, in turn, will reveal our collective character.  

Dr. King’s foundation for his beliefs was unquestionably in his faith.  Being a Baptist minister, that is where he found his strength: “I have a dream that one day every valley shall be exalted, every hill and mountain shall be made low, the rough places will be made plain, and the crooked places will be made straight, and the glory of the Lord shall be revealed, and all flesh shall see it together.”  For him, the glory of the Lord could only be revealed when people of different colors could love and value each other as equals.  Jennifer Sanborn writes:

You see, for me, the Rev. Dr. Martin Luther King, Jr. is first and foremost a Baptist minister, and a child of the same. I imagine it is because I am also the child of a Baptist pastor (and grandchild of two others) that I take particular pride in placing “the Reverend” at the start of his name. “Reverend” is a title that he earned with his education and his occupation, but also a title to which he was called, bringing unparalleled dignity and relevance to what it means to serve society as a religious leader.

I’m sure many people feel similarly, now as well as when MLK originally gave that iconic speech, which was essentially a sermon to America on the meaning of loving one’s fellow man.  As a non-believer I find no conflict in welcoming that sermon, and only a slight bit of discomfort in wondering how he would have responded if asked whether atheists would be included in the pluralistic group exhorted to “sing in the words of the old Negro spiritual, ‘Free at last! free at last! thank God Almighty, we are free at last!'”  I won’t remotely pretend, however, that there is any comparing the lot of atheists to that of black Americans in 1963.  That isn’t the point.  The point is, from whence is a commitment to equality derived for those who don’t believe it was God-given?

It would be a fair bet to say that prejudice almost always precedes rationalization, whatever that rationalization is.  I’m pretty sure that human nature, perhaps ironically, includes both the justification for equality as well as the explanation for why humans are so prone to denying it.  And that is because of two salient facts:

1. Both science and religion have, at many points and many places in history, been used to rationalize bigotry. 
2. And yet, neither one has ever or will ever come up with a good reason to treat people unequally.  

If either of the above points seems at all contentious, remember that the numerous mentions of slavery in the Bible were used as a  primary reason to believe that black slavery was part of God’s divine order in the South, as well as the legacy of Spencerian “social Darwinism” which maintained that certain races were inherently inferior.  After all, if it weren’t so, why were they doing so poorly?  Why were they so easily conquered and used for the purposes of the more powerful white Europeans and Americans, if not because they are inherently inferior by evolution or design, whichever your preference? 

I’m still in the midst of my very long quest to discover what exactly human nature is, anyway, but the revelation of the above facts in my life can be attributed primarily to the cognitive psychologist Steven Pinker around 2004.  You see, after (and before) publishing a book called The Blank Slate, which used powerful data from experimental psychology to demolish both the idea that there is no such thing as human nature and various myths about exactly what that nature is, Pinker– like every other psychologist who uses evolution as a means to explain why humans behave as we do– has been hounded by accusations that his work will be used to justify prejudice.

And you know what? That’s exactly what has happened.  And it still happens.  People think that if they can show differences between the psychology of men and women, homosexuals and heterosexuals, blacks and whites, they will be able to show that treating any one or more of those groups as inherently less human is justified.  I really don’t want to get into all of the specific attempts to show that, because it would take away from the fundamental point that there’s nothing we can discover about a specific group of humans that would justify, for example, slavery.  Nothing that would justify physical or cultural genocide, rape, internment, disenfranchisement.  And that is because the humanity of humanity doesn’t need to be determined by conducting some elaborate experiment– it is literally standing right before us. 

I believe that tribalism is instinctive– that people find an element of safety in clinging tightly to those who are like themselves.  They will certainly base that in-group/out-group association on ideology, but it’s even easier to base it on traits that are evident at a glance.  Familiarity and similarity are the primary triggers for empathy, which means that strangers and people not like us are the “best” enemies.  And that is why, again and again throughout our history, we have been able to deny the humanity of certain groups of people in order to persecute them.  Not by knowing them, looking them in the face, having a conversation…because that would demonstrate that they’re more like us than we thought. 

I suppose that’s where I find my fundamental belief in equality– the abject failure, despite our best and most heart-felt efforts, to show that any class of humans really doesn’t deserve the label of “human.”  Martin Luther King Jr. managed to punch through that barrier of prejudice for so many people because he emphasized how much we have in common, how similar we are fundamentally, and how different life could be if we were just willing to encounter each other as fellow human beings, fairly and honestly.  That’s why his speech had and continues to have such a tremendous impact, and why we continue working to make his dreams come true.

*This post first posted Monday, Jan. 17, 2011

Shame, shame, know your name


Reading about film critic Rex Reed criticizing actress Melissa McCarthy’s appearance using such sophisticated and erudite terms as “tractor-sized” and “female hippo” has me reflecting on moral psychology. You know, as I’m prone to do. Specifically it has me going back to what I know about the way shame and guilt function for both the inflictor and the inflicted, and how they differ.

You see, guilt is a “what you did” emotion, while shame is a “who you are” emotion. Guilt points to an act, while shame points to a person. Guilt can be a productive emotion because it focuses on the morality of what a person did, encouraging him or her to recognize the immorality of that act, feel remorseful, and improve by not committing the act again. Shame, on the other hand, focuses on a person’s identity and traits, which tend to be more or less permanent. Inescapable, or at least not easily or immediately escapable. And oftentimes not even a moral concern. Shaming someone therefore is not generally a productive thing to do, and isn’t intended to be. Quite to the contrary, making the person feel shitty about him/herself, full stop, is the point.

Criticizing what someone did wrong arouses a feeling of guilt (if it works). Telling someone they’re horrible, ugly, stupid, evil, etc. arouses feelings of shame (if it works). Martha Nussbaum wrote an entire book about how guilt is a useful and necessary concept in a justice system, in a legal context, whereas shame…not so much. Guilt encourages rehabilitation; shame encourages despair and recidivism. Because if you succeed in convincing someone that who they are is a terrible person, then there’s nothing for them to do but be a terrible person. But good people can do bad things– all people do bad things– so being guilty of such simply means that you’re guilty of doing a bad thing. It doesn’t define you.

Americans are terribly individualistic, and an unfortunate consequence of this is how good we are at turning guilty matters into shameful matters. We’re amazingly talented at conflating “who you are” with “what you did,” so that even the things an individual has very little to no control over are things he/she can be made to feel bad about. It’s true that you can arrive at a characterization of someone’s general personality by adding up the things they’ve done, and this kind of shaming can theoretically be useful. But generally speaking, shaming skips that step and goes straight for things the person in question can’t do much about.

So I have devised this handy, basic, and utterly unoriginal rubric for determining what counts as not-useful-shaming:

The degree of control a person has over their situation is inversely proportional to how much of an ass you are being if you mock or blame them for it.

In other words, if there is no guilt, there should be no shame. And when there is guilt, focus on that.

But we American individualists decide that if an undesirable trait isn’t someone’s fault, then by golly we’ll find a way to make it their fault so we can properly blame them for it. The top two examples of this being, of course, obesity and poverty. People who would flinch at the idea of making fun of someone’s race or sexual orientation show no reluctance to ridicule someone for being poor or fat– the latter especially if the person happens to be female, because for women we have made “isn’t sexually appealing to me personally” into a moral wrong.
Especially for actresses.

Do people have control over being fat and/or poor? Somewhat. But:
  • It varies from person to person, and you sure can’t tell what it is for a particular person without actually knowing them.
  • Regardless of their specific situation, every person in either of these two categories you see is likely in the process of trying to remove themselves from it at any given point.
  • Membership in these categories is not in itself a “what you did,” and it isn’t a moral wrong. People do not harm others simply by being overweight and/or poor.
Therefore, according to the rule articulated above, we should…not shame them for it! Or else risk behaving in a way we should absolutely feel guilty about.
Take note, Mr. Reed.

Aping morality


Whenever I’ve been involved in a discussion of the evolution of morality, the English language trips things up a bit. Due to the fact that “morality” could mean “being good” or “the capacity and tendency to distinguish right from wrong,” it’s always important to note which, specifically, you’re talking about. Generally speaking, it would seem that the latter entails the former– if you have an idea of what it means to be good, then you can probably be good. We all have our failings and occasionally fail to live up to our own standards of morality. But when asked what it means to be a good person, we usually give a description that most human beings could live up to, if they put their minds and consciences to it. By contrast, if a being doesn’t distinguish right from wrong, we generally don’t hold him or her responsible for doing things that would normally be considered wrong. I touched on this last week when talking about what agency means in terms of moral responsibility. An entity with a concept of right and wrong has the capacity to behave morally– this concept is sometimes called a moral sense. Having a moral sense is not the same as being moral, any more than having a car is the same as driving.

Are we good so far? Not moral, I mean, but clear? Okay.

Whether non-human animals can have a moral sense, and to what extent, is a very hot topic. It calls into question our own capacity to make these determinations, where that capacity comes from, and how we can recognize it. Maybe other animals have a moral sense, but it’s so different from ours that we wouldn’t know it if we saw it! Maybe other animals make judgments about all kinds of things that humans just don’t care about. Humans certainly don’t all share the same moral views– moral standards can vary significantly from culture to culture and from individual to individual– but most of us have both an extensive repertoire of ways to express moral approbation or disapprobation and an adeptness for registering when others approve or disapprove of something. We’re excellent communicators, both vocally and non-vocally. We’re actually so good at communicating that we sometimes betray feelings we’d rather not reveal. I’m particularly bad at lying about or otherwise misrepresenting how I feel about something, which is why my career as a professional poker player ended before it began.

Our means of registering how other people feel without their telling us, or even in spite of their telling us something to the contrary, is called empathy. It’s what enables us to “read minds”– not via literal ESP, but by  interpreting patterns of behavior and comparing the situation others are in to our own past experiences, and extrapolating from that how they must feel, what they must be thinking. The simplest form of empathy is emotional contagion– imagine a nursery in which one baby starts crying, and the sound sets off others as well. This form of empathy is reflexive, which means there’s no point at which you actually think “This person must be feeling/thinking ______.” There’s a scene in the movie Clue where Mrs. White describes how her husband was murdered: “His head had been cut off, and so had his…you know.” Cut to three men listening while sitting on the couch, all simultaneously crossing their legs at the knee.

With reflexive empathy, you are effectively projecting yourself into another person’s body and situation and feeling what you imagine they feel, whether you want to or not. This is generally referred to as sympathy or a sympathetic reaction, and it’s very effective in terms of getting us to care about the welfare of others. It’s the reason that witnessing suffering bothers us, and it inspires us to help those who are suffering and be angry with those who cause it. If the person who is suffering is familiar to us or similar to us, our sympathetic reaction to their suffering is both more likely and stronger when it happens. If you want to prevent someone from having a sympathetic reaction to another’s suffering, a good way to go about it– after attempting to disguise the fact that there’s someone suffering at all– would be to make the person suffering seem as unfamiliar and/or dissimilar as possible, so that it’s harder to relate to them.

Hume characterized empathy as the origin of morality. That is, he said, how we become moral– we are moved by the pain of others because we associate them with ourselves, and from this we extrapolate general dispositions about how others should be treated. We derive a moral sense.

So if other animals have empathy, does that mean they have a moral sense?

I think the answer from Frans deWaal is “yes” and “yes.” That is, yes he believes that some non-human apes have the capacity for empathy, and that this constitutes a capacity to form moral judgments. That’s what I expect him to argue in the new book he has coming out, The Bonobo and the Atheist.

A primatologist– and one you should read, if for some reason you haven’t already– deWaal has decades of experience observing the behavior of captive chimpanzees and bonobos, and has written copious books and articles on the topic, especially the ways in which that behavior is similar to our own. And then he began writing books and articles defending his emphasis on the ways in which their behavior resembles our own. The charge, as you might expect, was anthropocentrism– an insistence on incorrectly interpreting things (in this case, non-human primate behavior) in terms of human thoughts and behavior. To this, deWaal responded by accusing his accusers of “anthropodenial”– an insistence on refusing to interpret things in terms of human thoughts and behavior, even when it’s correct (accurate) to do so. You can see this exchange take place explicitly in Primates and Philosophers: How Morality Evolved, where deWaal argues basically that chimpanzees and bonobos have the ability to empathize and therefore at least a precursor to a moral sense, which can be recognized in their behavior by its similarity to human empathy– and there’s nothing hasty or unparsimonious (i.e., inaccurate) about  it.

That’s not what this post is about, though. Nor is it, really, about the general topic of morality in non-human primates or other non-human animals. It’s really about the fact that The Bonobo and the Atheist will be deWaal’s first book addressing religion specifically, and what I’m afraid he’ll say about it. See, his books to date have (largely) been about the possibility and extent of a moral capacity in the great apes, non-human primates, particularly chimpanzees and bonobos. Now my concern is that he’s going to use this body of data to argue that we– human beings– don’t need morality to come from God, because we’ve evolved it. That our closest living primate relatives are, in effect, secular humanists (or at least capable of being such), and therefore we humans might as well be, too.

This position– if indeed that’s what deWaal argues, and I don’t know if it will be– doesn’t bother me because it’s false. It bothers me because it’s beside the point.

Let me back up.

If Great Apes-Who-Are-Not-Humans (that would include chimpanzees and bonobos, but also gorillas and orangutans) do indeed have the capacity for empathy, then I would say that “precursor to morality” is a fair description for it. It would seem, on the face of it, that if nonhuman primates have the capacity for empathy, then it is indeed evolved. I expect deWaal to argue this– he has before. (However, this isn’t necessarily the case. It could be, for example, that the great apes have evolved to have the kind of brains which make it possible for us to have an empathetic response, but not be “wired” for empathy per se. To continue the clumsy analogy I began with, this would be like saying that just because you have a car doesn’t mean you have a drive-to-the-store device. You have a device which you can drive to places, including the store if you so desire. This distinction goes to the heart of the “general learning device” vs. “kludge” discussion of how our brains have evolved, which I do not have any desire to get further into here.)

But even if other Great Apes have the capacity for empathy and hence morality, that is not a good point of evidence with which to oppose a theological insistence that morality must come from belief in God. That’s why I think, if this is the arrow deWaal will be firing, it will miss the target. Because we don’t need to have evolved morality (that is, to have inherited a moral sense) in order to have it– both the capacity to be moral, and the tendency to exercise it. Clearly, however we came by these things, we have them. And they are universal, and they do not require belief in a deity.

Now you may ask, why does this matter? Shouldn’t demonstrating that we have evolved a moral sense answer that question just as well, if not better? I say no, for a few reasons. First, because a lot of the people who believe that if your morality doesn’t come from God you don’t have morality at all, don’t believe in evolution. They very likely don’t have a good grip on what evolution is. And plenty of people– theist and atheist alike– who do know what evolution is, and are fully onboard with it, nevertheless have a distaste for evolutionary psychology or anything that smacks of it. And even those who don’t have such a distaste at all but have a dedication to scientific rigor (which all of us should, presumably) will need to be convinced. And I’m saying this convincing is important– very much so– but also beside the point.

You don’t need to demonstrate that morality is evolved in order to show that it doesn’t need to come from God, or at least from a belief in God. The reality of nonbelievers being moral now, and the immoral behavior not only of believers but by believers in the name of the deity who is supposedly the origin of morality (not just the capacity to be good, but Good itself), accomplishes that.

I think of this every time I see, for example, someone claiming that those who oppose him or her politically are opposing morality itself. As if there’s a monopoly on morality: it only comes in one brand, and anyone who doesn’t have that brand doesn’t have morality at all. No knock-offs, even. Fellow nonbelievers– you’re not the only ones who, it’s being maintained, are not just insufficiently moral but incapable of acknowledging morality itself because your concept of it is somewhat different from that of the person making the accusation. Often that person will pretend that members of the morally bereft group he/she is describing are nonbelievers, because no “true” believer would support the right to an abortion/separation of church and state/feminism/sex before marriage/ending school-sanctioned prayer/supporting the teaching of evolution/ending the War on Drugs/ending war, period etc. But realistically speaking, there are nowhere near enough nonbelievers to accomplish any of these goals. And yet there is ample support for them. Hmmm.

So…yeah. Perhaps I’m flailing at windmills, and in fact deWaal’s book will not go anywhere near making the we-evolved-morality-therefore-we-don’t-need-God argument. But since this argument exists, and is actually relatively common to see whenever a believer challenges a nonbeliever regarding where he/she finds his/her foundation of morality on the basis that if God does not exist we should all be out murdering, raping, stealing, etc., I think it’s worth discussing why this approach is not actually the best one.

The best one is far simpler: There are loads– loads– of moral standards which are not based on divine mandate. Many of them were endorsed by Greek philosophers before Jesus ever set foot in Bethlehem. It’s not possible to show that morality didn’t come from God, because God’s existence itself is non-falsifiable. Fine. But it’s easy to evaluate whether the morality that is claimed to come from God is, in fact, moral or not. This will very likely get a person accused of “judging God” (and who has a right to do that?), but since the person making these proclamations is invariably not God, but a man…well. It carries just as much weight as anything else said by man.

I’m really looking forward to deWaal’s book, despite my misgivings stated here– and hey, for all I know, they might be totally off-base. I hope so. And if you aren’t familiar with his books, go get Chimpanzee Politics when you can. Everybody should read that one, and will likely enjoy it.

————————————-

Prior relevant writing: Is Darwin Responsible for the Chimp Attack?

Dear Bill O’Reilly…


…no calculator, moral or otherwise, will make it less expensive to arrest people than to help them. Trust me on this. The more you deny it, the more ridiculous you are:

Is traditional America gone for good? That’s the question Bill O’Reilly tackled during his Talking Points Memo on Monday night. Criticizing “secular progressives,” O’Reilly called for the right kind of politician who will help us confront the “reality of our situation.” Traditional America can come back, O’Reilly said, with the right person to make it happen.
Specifically, he pointed to Mitt Romney‘s electoral loss among blacks, women and Latinos. “It was an entitlement election,” he said. The media would have you believing the election confirmed election ideology. While that’s not true, he said, secularism is “eroding traditional power.” “On paper, the stats look hopeless for traditional Americans,” O’Reilly said. “But they can be reversed. However, it will take a very special politician to do that. By the way, Mitt Romney didn’t even try to marginalize secularism. He basically ignored it.” Secular progressives don’t have the right approach, he argued, because they don’t want judgment on personal behavior. For examples, O’Reilly pointed to the issues of out-of-wedlock births, abortion and entitlements. Secular progressives “don’t want limitations on so-called private behavior,” he said. The majority of Americans can be persuaded, O’Reilly said, “that the far-left is a dangerous outfit, bent on destroying traditional America and replacing it with a social free-fire zone that drives dependency and poverty.” We need to confront that, he added. But too many of our politicians are too cowardly to do so.

Refusing to place limitations on so-called private behavior…that’s called freedom, right? Yeah, sounded familiar. Those damn secular progressives and their desire for freedom.

O’Reilly for some reason doesn’t delve into the particular ways in which he’d like to limit private behavior, and how doing so would alleviate poverty and the need for “entitlements” and dependency. Probably because the only limitation he could suggest that his fans would actually get behind– banning abortion– would actually result in greater poverty and dependency. Not just because outlawing abortion would make criminals of women and their doctors, and criminals have to be identified, located, arrested, prosecuted, and punished, and that all costs money. But because childbirth costs money– a lot of money, far more than an abortion– and raising an unwanted child also costs money:

The women in the Turnaway Study were in comparable economic positions at the time they sought abortions. 45% were on public assistance and two-thirds had household incomes below the federal poverty level. One of the main reasons women cite for wanting to abort is money, and based on the outcomes for the turnaways, it seems they are right. Most of the women who were denied an abortion, 86%, were living with their babies a year later. Only 11% had put them up for adoption. Also a year later, they were far more likely to be on public assistance — 76% of the turnaways were on the dole, as opposed to 44% of those who got abortions. 67% of the turnaways were below the poverty line (vs. 56% of the women who got abortions), and only 48% had a full time job (vs. 58% of the women who got abortions). When a woman is denied the abortion she wants, she is statistically more likely to wind up unemployed, on public assistance, and below the poverty line. Another conclusion we could draw is that denying women abortions places more burden on the state because of these new mothers’ increased reliance on public assistance programs.

An abortion is a last ditch effort to prevent what other thing Bill O’Reilly is not fond of? Unwanted pregnancies. Actually, he doesn’t much care about pregnancies being unwanted; he cares about them being out of wedlock, because all babies born out of wedlock are going to be on welfare, and only unmarried women want abortions, because they’re a bunch of young sluts. Right.

The “young slut” argument is why O’Reilly and friends also stand firmly opposed to the single biggest thing in the way of unwanted pregnancies that government can actually do something about, which is of course contraception. Providing education about contraception and making it easier for people to access it would save loads of money and prevent abortions, but O’Reilly doesn’t like that because a) government spending money is wrong, at least if it’s to provide education or financial assistance to people rather than to arrest and prosecute them, and b) doing so would amount to the government implying that it’s okay to have sex without making a baby, and that’s only a message a secular progressive would want to send to the young sluts. The message Bill O’Reilly would send is, of course: Don’t have sex, until you get married. Then have sex, but without contraception, so you can have babies. But if you can’t afford to have babies, don’t come crying to me about abortions or welfare because you’re not getting them.

Let’s remember, nearly every American woman who is sexually active will use contraception at some point in her life. A typical American woman wants only two children. In order to accomplish this while having a normal sex life, she would have to be using contraception for roughly three decades. And 95% of Americans have had premarital sex.

So, Bill….tell me again how you’d propose to keep us out of poverty and independent by curtailing our personal freedoms? Oh, by being “traditional.”

Yeah, I think I’ll stick with being a “secular progressive.”

Proximate pratfall

Proximate pratfall published on 1 Comment on Proximate pratfall

Regarding Richard Mourdock’s “rape babies are a gift from God” comment

It’s fun to see people all over the internet making fun of Mourdock saying that a pregnancy which results from rape should be considered a gift from God, because that life is something God intended to happen. They can see the obvious dishonesty of it, and are going to town drawing the logical conclusions of such a statement. Those logical conclusions are how we can know it was dishonest– if it wasn’t, then the most charitable thing that can be said is that Mourdock didn’t exactly think it through.

You see, the position that God intended for a pregnancy to have resulted from a rape can be interpreted in one of two ways:

1. Ultimate: Of course God intended for it to happen, because God intends everything! God is the author of the universe, the primary force behind anything and everything. He is the ground of being, or at least the first cause who set everything in motion. Therefore if something happens, it is by his intention.

Why Mourdock’s statement is ridiculous, if that’s what he meant: Rape pregnancies, then, are intended by God in the same sense as cancer, earthquakes, and car accidents. The implication of Mourdock’s statement is of course that a pregnancy resulting from rape is intended by God, therefore the woman should not have an abortion. But our response to disease, natural disasters, and human-caused mishaps is not to proceed about our day as if nothing happened, whether we regard those things as ultimately intended by God or not. When those things happen, we attempt to fix them– to put things right. Oftentimes, to a woman whose pregnancy resulted from rape, getting an abortion is putting things right (well, as much as she can). God intending the pregnancy is not an argument against her doing so any more than it is an argument against chemotherapy for cancer patients.

2. Proximate: A rape victim’s pregnancy is a result of special intervention on God’s part. For reasons known only to God– and apparently to Mourdock– God looked down on that woman who had recently experienced the suffering of sexual violation and said “Hey, that raped lady needs a baby.” And presto! He put one inside her.

Why Mourdock’s statement is ridiculous, if that’s what he meant: Because it makes God– and Mourdock– a sadist. Unfortunately Mourdock’s use of the word “gift” makes it much more likely that this is the sense in which his statement was made, and that’s why people are reacting so badly to it even though he still appears to have no clue of the enormity of what he said. That’s what is making people mentally dry heave.

And by the way, you can give a gift back. It might be rude, but you can do it. Just saying.

This led me, though, to think of an earlier rumination I had about conservatives conflating God’s behavior in the proximate vs. ultimate sense, so I’m re-posting that here:

1. “Everything is caused by a higher power. I call that higher power God.”

2. “Natural disasters are acts of God– they are part of the structure of the world and we just have to deal with them as they come.”

3. “Now that (insert natural disaster) has happened, are the people of (insert region of the world) going to wake up and see that God has a message for them?  Are they going to see that God is not happy, and change their ways?”

Three very different statements. The third person is claiming that a natural disaster is a specific act of God, performed in reaction to the behavior of people in the area affected by it. This person is either too uneducated to know the reality of why natural disasters happen at certain times and in certain places, or does not mind appearing to be. To put it less delicately, if you claim that natural disasters are actually divine punishment you are not only stunningly lacking in empathy but can also safely be thought less than bright. I don’t expect people to stop doing that any time soon, but our collective willingness to call their statements ridiculous has increased. Previously there would have been no need for Michele Bachmann’s PR person to declare that she was simply joking [when she said that Hurricane Irene was God “getting Washington’s attention”].

We still don’t– or at least, shouldn’t– want people who are willing to make statements like that running the country. We shouldn’t want governors who think that you solve problems like property rights violations and drought by appealing to God to solve them. We shouldn’t want a president who decided to run in the first place because he/she thinks God told him/her to run, or that God will tell him/her things like whether to go to war or not while in office.

Why? Because these put God in front of natural and human causes for things. They make him a proximate cause, rather than the ultimate one. God might indeed favor Herman Cain for president, but the rest of us should be primarily concerned with whether he’s what the country needs, and whether he’ll do a good job. God might be concerned about property rights, but since it’s the job of politicians to make things right in that regard, they should be doing it. God might have an opinion about whether the country should go to war, but hopefully it’s based on the same things a president should be concerned about– whether the war is just, how much suffering it will cause, and so on. God might have very firm opinions about how Obama’s handling the deficit, but if you consider Irene to be a sign of that you’re a cretin and shouldn’t be in an elected position of power.

Digital dualism enables internet idiocy; monism motivates meaning

Digital dualism enables internet idiocy; monism motivates meaning published on No Comments on Digital dualism enables internet idiocy; monism motivates meaning

First things first– if you haven’t already, go over to technosociology and read Zeynep Tufekci’s excellent post Free Speech and Power: From Reddit Creeps to anti-Muslim Videos, It’s Not *Just* “Free Speech.” You can probably glean the subject matter from that title, but the post is a very nuanced and careful (and even more careful after some edits) consideration of what free speech means on the many and varied private venues of conversation that compose the internet. I’m not really going to add to that– just go read.

What I want to talk about here is actually something mentioned in a specific part of her post, on the significance of what happens on the internet as opposed to “real life”:

Another variant of the argument has been that “it’s just the Internet.” Chill. This, of course, rests on something I’ve long been railing against, the notion that the Internet is somehow not real, that it’s virtual or that it is “trivial.” (My friend Nathan Jurgenson coined the phrase “digital dualism” to refer to this tendency).

Mind/body dualism is the term for a belief that the mind and body are fundamentally separate, made of different stuff in some way. The most common version of this is belief in a soul, the locus of all of the “important” thinking– aka, the mental stuff, the stuff that makes you, you– which either wasn’t ever part of your body or will cease to be part of your body upon your eventual demise. Digital dualism, then, is the casual belief that what happens on the internet is not part of real life– that it is somehow fundamentally separate. The soul is separate and therefore more significant, but life on the internet is separate and therefore less so. It’s not part of “real life,” but a diversion from it, or at best, a tool to assist in maintaining it. Jurgenson writes regarding the genesis of this thinking:

The digital dualism versus augmented reality debate relates to another outmoded conceptualization that argues the Internet has the power to transcend and remove social locatedness. At its onset, the Internet seemed to promise the possible deconstruction of dominant and oppressive social categorizations such as gender, race, age and even species; as the cartoon goes, “online, no one knows you’re a dog”. We can trace this line of thought through the classic Hacker ethic that ‘all information should be free’ through the open-source movement behind Linux and in the philosophy of Wikipedia, an online encyclopedia that anyone can edit. Essential to these projects was the idea that the Internet can be created as a sphere separate from (perhaps even better than) the offline world. Digitality promised a Wild-West frontier built without replicating the problems of our offline reality, fixing its oppressive realities such as skin color, physical ability, resource scarcity as well as time and space constraints. The new digital frontier was a space where information could flow freely, national boundaries could be overcome, expertism and authority could be upended; those old structures would be wiped away in the name of a utopian and revolutionary cyber-libertarian path blazed by our heroic cyber-punk and hacker digital cowboys (indeed, those were boys’ clubs). This dream could only be maintained by holding the digital as conceptually distinct from the physical. Perhaps this is understandable given this new space was literally being invented. However, the novelty of the new digital reality betrayed the ultimate reality that none of this digitality really existed outside of long-standing social constructions, institutions and inequalities. (Emphasis in original)

This blog’s name is due in part to digital dualism. I was trying to think of what sums up the significance and purpose of a blog most, and got stuck on the fact that a blog is a means of expressing something to the world with little to no expenditure necessary on the part of the author, no requirement of means, let alone credentials. It requires time and effort, and that’s it. That makes a blog nearly the cheapest signal possible, but that cheapness only refers to the ability of any yahoo with an internet connection to make and maintain one. The messages transmitted can also be cheap, or they can be incredibly valuable– but that value depends on, is determined by, the messages themselves. If the messages are valuable and have an effect, that breaks from digital dualism and betrays that they are in fact part of the real world.

PZ Myers slaps away the notion that the ease of putting something out into the internet makes it less substantial or important:

The internet made publication trivial. It apparently diminished the substance of communication — no more crackling bits of paper that pile up on your desk. Media like twitter and facebook encourage you to blurt casually, with little attention to the words you write. It leads to the illusion that communication online is as insubstantial as the conversation you had with your cat. But it isn’t. In the vast howling noise of the internet, what you say has become more important — voices that babble and shriek don’t rise to prominence and become regular draws (they can be brief freak show sensations, though, and we do see a tendency for voices of minimal talent or intelligence striving to become louder through more extreme viciousness or stupidity). Because something is written in the intangible pattern of electrons doesn’t make it less substantial, but instead makes it easier to distribute, copy, and archive — you could burn an incriminating letter, but once it is on the internet, it is spread far and wide and, while not completely unerasable, is harder to remove…and actively trying to remove something tends to make it more noticeable and more widely disseminated. Meanwhile, I’m finding hardcopy to be less useful — I get dunned with so much junk mail, all those crackling bits of paper that offer me new credit cards at low low rates and advertisements for big screen TVs on sale and sweepstakes I must enter to win millions of dollars, that I increasingly devalue stuff that is written down. I used to photocopy journal articles every week and file them away in a cabinet — I’ve still got a huge pile of these things from 20-30 years ago — but now I rarely print anything, it’s far more useful to have a searchable, indexable, archived PDF that I can also instantly email to students and colleagues. Just because some old fogies don’t comprehend or appreciate the volume and content of all the communication that goes on by this medium doesn’t make it less real. 
The internet is not the place where a billion ghosts chatter over matters of no consequence — it’s the new reality, the tool that many of us use to make connections that matter. It’s the greatest agent of information and communication humanity has yet invented, and it deserves a little more respect than dismissal as something “unreal” where trolls can roam unchecked.

It’s not just “old fogies” who are digital dualists, though– it’s those same trolls, and everyone who agrees with them that degrees of anonymity make everything matter less. The old fogies are ignorant of the reality of the internet, but the trolls are not– they are living in denial in order to avoid accepting the responsibility of being trolls. The distance makes it easy to pretend that there are not actual other people on the end of every barbed forum post or abusive tweet. It’s baffling to me to see people actually use their Facebook accounts to express every kind of bigotry and hate under the sun both on Facebook itself and on all sorts of news sites and other fora which use Facebook for commenting. Don’t they know they’re using their real identities for that? Of course they do, but they don’t care– the distance makes it seem like it doesn’t matter.

My dissertation was, in part, about how belief in a soul can actually inhibit the ability to practice empathy by establishing the body and the physical/social environment as less important, as mundane and disposable, and then dehumanizing people to place them solidly within that realm as opposed to the company of the ensouled. Digital dualism is not an exact analog to this, but I can certainly see how empathy can be switched off by relegating others to the status of “internet people” and dismissing their concerns in a very similar way. Perhaps it’s even the same move gone one step further– if the soul is what binds us with eternity and makes us children of God in contrast to everything else in this worldly world of ours, then perhaps so-called “real life” likewise divorces us from the fake, transient, shallow world of the internet. Maybe we always need some kind of existence to subvert and make into a meaningless playground.

That certainly seems, anyway, to be the mentality on display whenever there’s a discussion of poor behavior anywhere on the internet, but especially in gaming, where people can ramp “It’s just the internet” up into “It’s just a game.” We don’t need to worry about unfairness, bigotry, or general douchiness here– it’s just a game! Because I guess people who play games cease to be people. Or maybe just all other people aside from the one steadfastly defending the right and appropriateness of his being a douche.

I’ve written on the subject of empathy inhibition on the internet before, here and here. But in the former of those two posts, I also wrote about how online interaction can foster empathy to the point of creating tremendous opportunities to help people who have been observed suffering– observed via the internet. When people are well and truly convinced that what they do on the internet affects real people even if those people are strangers, some beautiful things can happen. That being the case, I can see no benefit in promoting the notion that the internet is not “real life.” I can see only downsides. Not only is digital dualism false– what we do online has tremendous effects, even if they are not immediately obvious or consistent– but it’s also harmful, because it encourages people to harm others without taking responsibility for it, because they do not acknowledge that those others are also people. And it impedes the opportunity for and practice of great acts of empathy.

So let’s discard the dualism. This is real life. Let’s act like it.

Demonology

Demonology published on No Comments on Demonology
Art by Sandro Castelli

It’s getting close to Halloween, so let’s talk about demons.

I’m going to define a demon as a non-human agent who works in the world– the existence we inhabit– to create evil. Now, yes, I’ve said that I don’t believe in evil, that evil is a problematic concept. I don’t, and it is. But I don’t believe in demons either. This definition is a description of what demons are to people who do believe in them, and people who believe in demons typically believe in evil.

There are all kinds of demons: “real” demons in folklore across the world, and intentionally fictional demons depicted all over movies, literature, gaming, and so on. The Dungeons and Dragons Monster Manual for example has a whole slew of them, each with their own characteristics, rank, and abilities. Demons as mythological characters are really fun, because they can look like virtually anything, although they typically have horns and a tail at least, sometimes hooves. A tiefling is a humanoid with demonic ancestry, and they’re not even necessarily evil.

But the kind some Christians believe in? They’re evil. Their reason for existing is, in fact, evil. They exist to prevent humans from flourishing and achieving spiritual salvation. That is, demons serve Satan and work to prevent the souls of humans from being saved, so that those humans will not go to a heavenly afterlife. In the Bible, demons usually take the form of “unclean spirits” who possess people and can only be removed via exorcism. Jesus was, among many other things, an exorcist. Catholic clergy have performed exorcisms for centuries and do to this day, while specifying that the allegedly demon-possessed person must be examined by a doctor to ensure that it is not actually a case of mental illness. After all,

“Not everyone who thinks they need an exorcism actually does need one,” said Bishop Thomas J. Paprocki of Springfield, Ill., who organized the conference. “It’s only used in those cases where the Devil is involved in an extraordinary sort of way in terms of actually being in possession of the person.” “But it’s rare, it’s extraordinary, so the use of exorcism is also rare and extraordinary,” he said. “But we have to be prepared.”

Indeed. A 2008 Pew Forum Landscape Survey found that 68% of Americans “believe that angels and demons are active in the world.” According to a 2009 Barna Group study, in America,

A majority of Christians believe that a person can be under the influence of spiritual forces, such as demons or evil spirits. Two out of three Christians agreed that such influence is real (39% agreed strongly, 25% agreed somewhat), while just three out of ten rejected the influence of supernatural forces (18% disagreed strongly, 10% disagreed somewhat). The remaining 8% were undecided on this matter.

So these people believe that there are other agents in our world– non-human but human-like agents– which have an effect on the world for good or for evil. I’m focusing on evil for now, because I think it’s more interesting in terms of moral responsibility. Namely, how demons function to add, take away, or otherwise mess with it. See, the notion of a non-human agent which can possess people and make them do evil works excellently for two purposes: 1) asserting that someone else is doing something wrong when you can’t come up with any real evidence of the wrongness of the act, and 2) exculpating oneself of actual or alleged harmful acts by letting the demon take the blame. You’ve heard someone speak of his or her demons? Some people actually mean that literally.

Whether they appear in someone’s explanation for whatever they consider evil happening in the world, or in a movie designed to scare the hell out of us, there’s one feature of demons that is particularly salient to me: they don’t work voluntarily. And by that I don’t mean demons are slaves. I mean that they can take control of things without any consent on the part of the person or people for whom they are ruining existence. Frequently when a demon shows up in a movie, it’s because somebody summoned it by accident by performing some ritual (Ritual: A sequence of actions which produces a supernatural result as an emergent property, in addition to the expected physical consequence of those actions. Dunking someone in water gets them wet. Baptizing someone confirms them as a child of God.) which brings it into the world, completely involuntarily on the part of the summoner. And when a demon is summoned on purpose, it’s generally without the summoner having a complete understanding of what he/she is doing, which tends to turn out very badly indeed for him/her. Demons: not big fans of consent.

I watched Hellraiser recently for the Film Sack podcast. I’d never seen it before. I wasn’t big on horror movies at all growing up because they’d seep into my dreams whether I recognized them as fake and ridiculous or not, so I just avoided them altogether. But over the past few years I’ve started watching both old and new ones, good and bad, from The Omen to Poultrygeist, and the thing that sticks out to me the most is how they screw with moral responsibility. Sure, anybody who has so much as seen a horror movie or watched Scream knows this. But there’s a lesson about morality that horror movies can tell you. I’m not saying it’s a good or correct lesson, but it’s a pretty darn consistent lesson:

Horrible things will happen, perhaps to you. They will be worse if you’re a bad person. Whether it’s “cheated on your significant other” bad or “serial killer” bad (or just “had sex” bad, if you’re female) doesn’t matter. Being a good person will not save you. You do have to be a good person to survive, but you also have to have access to and seize upon an opportunity presented for no real reason and based entirely on luck. If you do that, you might survive.

In the case of Hellraiser there is plenty of Hell and demons, and the fortuitous opportunity is a small wooden box. A box which apparently (the movie is not very clear) provides either the opportunity to inadvertently turn your soul over to the complete control of demons, or, under the right circumstances, to banish those demons. I won’t spoil the movie for you, but I bet you can guess which sort of people get controlled and which get the banishing power. The important thing is how unintentional it was in all cases. In both movies and popular conceptions of demons, the matter of whether you end up being controlled by them or whether you are in a position to chase them away has very little to do with what you actually will to happen. In popular conception, here are some ways you can worship and/or summon demons completely inadvertently:

This last was the focus of a recent radio show rant by Linda Harvey of Mission: America, who said:

The core of Halloween is glittering artificiality, you can pretend to be someone you aren’t for a night, you can flirt with danger, you can divine a different destiny, but it is all void of the presence of or will of God. It’s a seduction that says, ‘don’t be afraid, do whatever you want, there’s nothing to fear,’ it’s one of Satan’s oldest tricks. Costume parties are fun but these costumes may even disguise our very souls. Most Christians with a sincere faith acknowledge that there is a demonic realm and that Satan and his minions are at work in the world to deceive humans, so why wouldn’t Halloween provide an extremely useful tool? Mixed in with the fun and games are frightening and disturbing experiences that may leave some children with nightmares. Then there is the flirtation with occult practices that are forbidden in Deuteronomy 18:10-12 and elsewhere, Christians aren’t supposed to be consulting fortune tellers, Ouija boards or palm readers about our future but all are frequently a part of Halloween festivities. ‘But it’s just for fun,’ parents will say, ‘God understands my children are not serious.’ Really? Do your kids know how risky these practices are and that real contact with real demons is quite possible? Satan doesn’t care about our intentions; he will take any willing participant.

Putting aside the fact that “There’s nothing to fear” was one of the profound messages of divine insight delivered in neurosurgeon Eben Alexander’s trip to paradise, the most important thing here is that the only intentions which matter are those of Satan and his minions. Not even God’s omniscient understanding of the content of the minds and hearts of men is apparently good enough to rescue what would otherwise be completely morally neutral acts of fun from actually being rituals to deliver souls into demonic control and presumably a very warm afterlife. Believing in Satan and demons means that there are evil agents in the world actively trying to pull your soul away from God’s embrace, and they can do so by means which coincidentally look just like the ways to have the most fun.

The arguments that practicing yoga is demonic center around the notion that you’re actually practicing the rituals of another religion, and if you’re doing that (knowingly or unknowingly) then you’re obviously not conforming properly to your own religion. And since there are no other gods but God, you can’t actually be worshipping the gods of other religions by mistake. What you are doing, then, is demonic. Since it is not God-focused, it must be Satan-focused, since Satan wants you to turn your attention from God. As if there isn’t enough pain and suffering going on in the world occurring naturally, accidentally, and deliberately by the acts of the malicious, we have to worry about accidentally serving Satan by being influenced by demons to commit ungodly acts which don’t appear ungodly because they harm nobody and actually seem fun, helpful, and/or educational.

The lesson of demon-believing Christianity seems uncannily like the lesson of horror movies, doesn’t it? Evil is actively working in the world to cause you to suffer and die. You will likely suffer more if you’re even slightly bad, but being good– according to rather questionable rules we’ve made– is not enough to save you. You must also be lucky enough to be given an unlikely and seemingly arbitrary opportunity, and you must seize on that opportunity in order to have any hope of surviving. If you ignore these rules and this opportunity in favor of enjoying yourself and doing what you think is right, you will unwittingly serve the interests of the evil agents and ultimately become theirs.

I submit that this portrayal of moral responsibility is absolutely incompatible with free will, which shouldn’t be shocking at all (it may shock you that I believe in free will, but that’s another commentary altogether. Suffice it to say I’m persuaded by Daniel Dennett’s portrayal). I think that demon-belief is completely fatuous, which should also be unsurprising. But I also want to say that I consider it an immoral belief, because of this effect of completely distorting moral responsibility to make evil out of acts which are not just benign, but intended as benign and actually morally neutral or perhaps even positive. Demon belief is a cheap cop-out in terms of morality in a way that angel belief is not, which is why I didn’t feel compelled to address angels here at all. I don’t believe in angels, but consider the belief mostly benign except when people credit angels for things like successful surgery rather than, you know, their surgeons.

I can’t get people to stop believing in demons. But I think I have offered a sound argument for not taking people seriously when they attempt to invoke demons as moral justification for… well, anything. Linda Harvey’s full rant included the suggestion that demons are responsible for homosexuality and what she calls “gender confusion.” We should not listen to people like Linda Harvey. They are literally making up supernatural support for their morality, and in the worst, most damaging possible way.

Internet antipathy

So much of what happens on the internet is petty squabbles between strangers. Arguments which flare up and then fade away, which will have no effects outside of making some people temporarily inflamed/enthused/bewildered. But if you dismissed every internet disagreement on that basis, you’d be highly naive. Reputations are built and ruined on the internet. Connections are made and broken, careers begun and ended, romances kindled and snuffed out. And oftentimes, it’s hard to recognize when one of these things is taking place because all you, the subjective observer, can see…is people talking.

So it’s hard to know sometimes which disagreements to pay attention to, what it means to be “internet famous” and whether anyone should actually want that any more than they’d want to be regular-famous, and how people can become so passionate about things you wouldn’t imagine anyone would care about for more than five seconds. On the internet, attention is a free market. People will care about what they care about, and frequently that will be things like voting for Taylor Swift to give a free concert at a school for the deaf. Because on the internet, people think some really stupid things are just hilarious.

I blogged a few months ago on the topic of how empathy works in that atmosphere, and how charity can arise from anarchy when enough people are paying attention. Unfortunately, so can wrath, jealousy, and casual sadism. People can find both reasons and opportunities to be incredibly altruistic, but also to punish perceived wrongdoers exponentially more than the wrong that was done, and to generally be enormous douches when the mood strikes or they just get bored enough.

I don’t, for example, know why you’d create necklaces similar to those of a person whose internet presence is shaped around the jewelry she makes espousing a particular ideology, necklaces which mock both her and the ideology, and then wear them to a conference she’s attending for the specific purpose of provoking her. I just don’t. But I do know where the idea came from.