
Things you might not want to say about hot car deaths

I live in Wichita, Kansas. Kansas is a place of extreme temperatures– it can get bitterly cold in the winter, and deathly hot in the summer. Today, for example, the high is supposed to be about 106.

On Thursday, a baby died here in the heat. Another hot car death. She was 10 months old, and left in the car for two hours while it was 90 degrees outside.

In this case her name was Kadylak, and she was the foster daughter of two men in their late 20s who also have several other foster children.

If you live in any place where it routinely becomes very hot in the summer, you’re probably familiar with the story– the father forgot that the child was in the car. He went about his day somewhere else while she remained there. In that confined space, the baby died of heat stroke. The father is distraught. He didn’t mean for this to happen. That father, in this case named Seth Jackson, wants to die himself, according to his mother.

On average, 38 children die in the United States every year from hyperthermia, or heat stroke, inside of hot cars according to the advocacy group Kids And Cars. Over 600 have died in this way since 1998. In roughly half of the cases, the parent/driver forgot that the child was in the car.

Proposals have been made for technological solutions to this problem: a way to force parents to remember that there is a small child in the car. A child who may be asleep and therefore making no noise him/herself, a child whose car seat is in the back of the car because he/she is too young to sit in the front seat of a car with airbag technology, a child whose car seat might not only be in the back of the car, but facing the back of the car so the driver won’t even see his/her face without a mirror installed.

A high school student from Albuquerque (another hot place) named Alissa Chavez won an award last year for designing an alarm system called “The Hot Seat” which notifies the driver if a child is left in a vehicle. There are also, as you might expect, apps for that. Kids And Cars has a petition to the White House asking for funding to be allocated to the Department of Transportation to research technology (the nature of which isn’t specified in the petition) to tackle the problem of children being left in hot cars, and also to “require installation of technology in all vehicles and/or child safety seats to prevent children from being left alone left alone [sic] in vehicles.”

After so many years of hearing about children dying in this way, and listening to people’s reactions to the stories, I’ve noticed a few trends in these reactions. Not positive trends. Trends that sound, quite frankly, a lot like concerted efforts at empathy avoidance. I’d like to address a few of these and explain why I find them so problematic.

1. “I can’t believe he/she forgot that she had a child.”

In the roughly 54% of occasions on which a child was left in a hot car because he/she was forgotten, it wasn’t because the parent forgot that he/she had a child. He/she forgot that the child was left in the vehicle. Big difference.

2. “This parent must have been drunk/mentally disabled/pathologically stupid/evil.” 

In this case, at least,

Neighbors described Jackson and his partner as doting parents. “They are two of the most kind-hearted guys that I have ever met. And I hate that there’s so much controversy right now with babies’ being left in the car, because I truly don’t feel from the bottom of my heart they would ever do this on purpose,” said Lindey TenEyck, who lives across the street.

3. “This parent should be ‘forgotten’ in a jail cell for about 50 years and see how he/she likes it.”

…..
Never mind, your capacity to empathize is clearly broken. I dearly hope you have no children of your own– not because you might leave them in a hot car, but because I can see you banishing them to Siberia the moment they first burst into tears at the hospital. They wouldn’t even make it to the car.

4. “I just can’t imagine doing/having done this with one of my children.” 

All right, this is the big one. This is the main thought I want to address.

The fact that you can’t imagine something like this means very, very little on the one hand, and quite a lot on the other.

Your not being able to imagine something means very, very little, I should say, in terms of its truth value. Not being able to imagine something is called a cognitive constraint, in that it’s hard to meaningfully process a concept if you lack the ability to get your mind around it in the first place. But that doesn’t mean it’s not true.

Plenty of people misconstrue evolution, for example, because they just can’t get their minds around the length of time it would take for the genetic structure of a species of organisms to change sufficiently for their progeny to become a different species, and so you get bizarre straw man characterizations of evolution that bear no relation to reality, like the crocoduck for example.

Now, just because Kirk Cameron is unable to properly imagine how evolution really works, that doesn’t mean that evolution doesn’t work. It just means that his poor brain, for whatever reason, is unable to grok the concept. He can’t grasp that evolution is true because the only version of it he’s willing or able to entertain is a caricature.

Likewise, your inability to imagine doing something like forgetting your own child in the back of your own car might be a caricature of a different sort– an unwarranted but entirely understandable mental distancing from the idea that such a horrendous tragedy could ever have happened, or especially could ever happen in the future, to one of your own children because of your own negligence.

Let me emphasize those two words again– entirely understandable. It’s entirely understandable to banish from your mind the thought of something like this happening in your own life, because if a parent went around seriously considering that any and all tragedies which have ever ended the life of any child could happen to his or her own children, he/she could be rendered paralyzed with fear. It’s possible that this person would become unable to function as a parent if that happened, because parenting involves risks, and imagining the worst possible consequence of every risk has a way of preventing people from being willing to take any risks.

Right?

Okay, but here’s the problem with that, and this is the part that means a lot, as I mentioned– being unable or unwilling to conceive of yourself doing something, especially a thing which involves forgetting something important with disastrous results, has the effect of inhibiting your ability to empathize with people who have done that thing. People who– this is important– very likely would also have said that they would never forget their child in a hot car, and who would themselves have condemned any other parent who did so as drunk/mentally disabled/pathologically stupid and/or evil. Yes, I’m quite sure that Seth Jackson himself would’ve said that.

So what ends up happening is that when someone like Jackson does forget, and a child ends up dying, there are endless other parents out there, who aren’t necessarily any smarter or more responsible or loving or conscientious, who nevertheless have to condemn what he did in the strictest terms. This person, described by his neighbor as lying on the ground near his car, “practically in the fetal position,” is experiencing the sort of pain that no parent ever wants to experience. The kind no parent could ever forget. And yet he is assumed to be the worst sort of human being imaginable. And it’s very likely that right now, he would not disagree.

Except the problem is, he isn’t. He’s a parent who made a mistake. The problem with shutting off empathy to this person out of a sense of self-preservation, or rather a preservation of the image of oneself as a good parent who would never do this, is that it doesn’t fix anything. It does absolutely nothing to prevent this from happening again. And again, and again, and again. Which brings me to the last thought.

5. “Pushing for [insert proposed safety measure here] means blaming [insert manufacturer here] for this sort of thing instead of the negligent parent.” 

No, it doesn’t. No more than any other safety device invented since the beginning of time has meant this.

When you and I were babies, we didn’t travel in super-safe car seats in the back seat, facing backward. Maybe we were in car seats. But they weren’t the same kind, and they were probably in the front seat or maybe even on the floor. I can’t help thinking that our presence there, in such a position, even while asleep, was more of a reminder to Mom or Dad driving us around that we were in the car.

Does that mean that the backward-facing seats in the backseat are bad, and the practice should be ended? No, of course not. It means that moving car seats to the back seat– which was done in the first place because the introduction and standardization of air bags made a deploying air bag dangerous to a small child in the front seat– may have created a new risk of its own, one which deserves its own safety measures. It makes absolutely no sense to slam on the brakes (figuratively speaking) when it comes to this concern, and insist that this is where safety measures end, that nothing should be done to prevent parents from forgetting a child in a car because it’s just their own fault. They’re horrible people and deserve to suffer, and that’s where it ends, right?

No.

Do you care more about making sure parents suffer when their children die, or do you care more about preventing the children from dying? Because trust me, the first one is going to happen regardless.

Parents can make horrible mistakes. Good ones. Smart ones. Capable ones. That’s the risk of being a parent– you’re going to screw up sometimes. If you’re lucky, the results won’t be devastating. That of course doesn’t mean that it’s all up to luck, but there is definitely a lot of luck involved.  It’s okay to acknowledge that. It doesn’t mean you’re admitting to being a terrible parent. If it helps, you don’t have to announce it to the world– I’ll do it for you.

I know that the pressure to appear perfect is neverending. But don’t let that get in the way of empathizing with people who have clearly experienced tragedy, because they’re already suffering enough. And certainly don’t let it get in the way of supporting help for parents who need it. Because in the end, it’s better that they get that help, isn’t it?

Who knows, you might even benefit from it too. Or your kids will. Or their kids.

Shame, sexism, and soft bigotry

So last month I wrote about the difference between the feelings of guilt and shame, and what they address. I noted that they’re not synonymous, actually work quite differently, and that one is far more productive than the other– that, actually, one may be necessary (albeit sometimes incorrectly applied) while the other is almost always counter-productive. And this, I said, is because guilt is a “what you did” emotion while shame is a “who you are” emotion. It’s the difference between our behavior and identity– you have considerably more control over the former than you do over the latter, and therefore a feeling of guilt over something you did that was wrong is a much better (more productive) emotion than the feeling of shame over who/what you are.

And I didn’t make this point specifically, but obviously this distinction also matters because who/what people are generally isn’t wrong.  It can certainly be unfortunate, for both the person and those around him/her, but it’s not wrong in the sense of being something it’s appropriate to blame a person for and be angry about. Right? Any kind of illness or mental disorder, for example, we recognize as part of what a person is, not something they did. And however problematic it can be, we certainly don’t blame them for it because we recognize that they didn’t cause themselves to have that condition, and they can’t just will it to go away. Recognition of a person’s lack of control over something goes a long way toward holding us back from blame and anger. The problem just comes in when we assume they have more control over it than they actually do– that’s how we turn “who you are” into “what you did.” We make things which were unintentional deliberate. We exclude exculpatory circumstances and context. Instead of guilting, we shame.

And we do it all the time.

I’ve been a fan of Brene Brown since I saw her TED talk on shame and vulnerability, and I particularly like this recent blog post in which she talks about shame in the context of teen pregnancy (to argue that public shaming is not, amazingly, the way to go about fixing the problem). She says:

Here’s the rub:
Shame diminishes our capacity for empathy.
Shame corrodes the very part of us that believes we are capable of change. . . 
 I define shame as “the intensely painful feeling or experience of believing we are flawed and therefore unworthy of love and belonging.” Along with many other shame researchers, I’ve come to the conclusion that shame is much more likely to be the source of dangerous, destructive, and hurtful behaviors than it is to be the solution.  It is human nature, not just the nature of liberals (as Reeves argues), to want to feel affirmed and valued. When we experience shame, we feel disconnected and desperate for belonging and recognition. It’s when we feel shame or the fear of shame that we are more likely to engage in self-destructive behaviors, to attack or humiliate others, or to stay quiet when we see someone who needs our help.

“Shame corrodes the very part of us that believes we are capable of change” goes to the heart of the “what you did”/”who you are” distinction because the difference between the two is more of a continuum than a binary. The more capable you are of changing something, of doing something differently by the sheer desire to do it, the more of a “what you did” it is rather than a “who you are.” Your identity is a mire– you’re pretty much stuck with/in it, while things in the action category are fluid and fast, and obviously there’s a lot of room in between. What shaming does is address the identity and portray it negatively, and in the process it inspires hopelessness, despair and resignation. As Brown says, it makes attempting to change seem futile and even silly, and the result can be a shutting down of the ability to empathize which encourages judgment of and attacks on others. It’s bad news all around, really.

Now I want to talk about how that applies to bigotry– specifically, soft bigotry.

We know from Jay Smooth that one of the problems in conversations about bigotry is that when you try to tell someone that they sound bigoted, what they almost inevitably hear is that they are bigoted (and, it goes without saying, that’s what they hear if you start out by saying that they are bigoted). Your “what you did” statements get turned into “who you are” statements, because nobody wants– okay, most people don’t want– to believe they’re bigoted. And that’s not only understandable but pretty fair, because most people aren’t bigots. That does not, however, stop everyday, normal, well-intentioned non-bigots from saying and doing bigoted things, all the time. This could be called soft bigotry, and by that I’m not referring specifically to the soft bigotry of low expectations but rather a tacit, non-reflective bigotry that tends to arise from a combination of ignorance and our possibly inborn tendency to be more comfortable around and empathize with people who are more similar and familiar to us. You know, the kind of prejudice that tends to solidify as people get older and “set in their ways” (read: more mired in their identity).

You could call these people “soft bigots,” or you could be nicer about it and say that they have some unreflective and unacknowledged privilege, or you could say that they just haven’t given much of any thought to why people who aren’t like them are not worse, but may be worse off, because of it. What I’m trying to get at, really, are the thousand or so different ways in which your everyday average person may display some astonishing prejudice without ever thinking of him/herself as bigoted or even ever being called such by others. After all, they don’t hate blacks/Hispanics/gays/women/foreigners/Jews/atheists/Muslims/etc., they just….you know, view them with a certain amount of mistrust. Think of them as different– not standard, not normal, because not like me. Such a person would be gobsmacked if the word “bigotry” passed your lips/keyboard in reference to him, and would read/hear it as a dire accusation. Which, let’s be fair, it often is.

OBJECTION! For…you know…obvious reasons!

For example, there was an awesome discussion in the comments when PZ Myers posted the “excellent dad hacks Donkey Kong so his daughter can play as Pauline” story on Pharyngula. By “awesome,” I don’t mean lengthy (though it was that) but rather productive, because even if there were a couple of stalwarts who absolutely refused to believe that a) there’s anything wrong in the slightest with a history of video games in which, time after time, any female character is a non-playable goal for your (male) character to rescue, and b) anyone who says otherwise is not accusing everyone who loves or makes video games of being raging misogynists, there were also a bunch of very smart commenters dedicatedly explaining why this view is, in fact…problematic.

And some of these people were left wondering, by the end of the thread (read: when everyone gets mentally exhausted), why the stalwarts kept speaking as if “bigotry must be deliberate.” Why, over and over again, the stalwarts kept emphasizing that Donkey Kong was made many years ago, and taken in isolation it’s not like a very simple story of “Jump Man rescues girl from gorilla” is really that harmful, and it would be ridiculous to say that this suggests any widespread misogyny on the part of Nintendo developers– then or now. All of which is true. True, and beside the point.

Because:

  1. Those who accuse others of soft bigotry, and who care about fairness and accuracy, will do their best to clarify that it is, in fact (non-reflective, non-deliberate, what-you-did-not-who-you-are) soft bigotry we’re talking about, and
  2. Those who are accused of soft bigotry can, if tensions are not too high, and if this message is communicated clearly enough, gracefully acknowledge the misstep (for that’s what it is) and experience guilt for it rather than shame, express regret, learn from the experience, and move on. And that is, I’d like to think, the optimal result. 

Note the reference to guilt rather than shame and the low-tension discussion. The higher the profile, the higher the stakes, the higher the tension. You get the idea.

So I think this is how we can apply an understanding of the relative value of guilt and shame to bigotry, specifically the “soft” kind which is so much more pernicious than any of the antics of the WBC or KKK. It’s probably better, to that end, not to actually call it bigotry. Not because it isn’t, but because the goal is to get at “what you did” and labels invariably end up poking at the “who you are.” There’s no fail-safe way to prevent being interpreted as doing that, but there are ways to communicate which make it less likely. It seems, then, that those are the best approaches to take.

Thoughts on the first Tropes vs. Women video

So the first Feminist Frequency Tropes vs. Women video has been posted, and I was very excited to see it. I was not disappointed. If you haven’t seen it yet, have a watch:

What we have here is a thorough, polished bit of media analysis with obvious effort and expense put toward editing, research, design, and production generally. If the rest of the video series is like this, it’s so worth having helped fund the Kickstarter. Even if it wasn’t like this, it would be worth it– I contributed because I saw so much value in doing the project in the first place, so the fact that it’s being done so well is icing on the cake so far as I’m concerned.

This video is the first part of a discussion on the “damsel in distress” trope, in which the female character is taken from the hero in some way– by being literally kidnapped, or possessed, or otherwise removed and her rescue made a goal for our (male) hero to accomplish. Anita Sarkeesian spends some time talking about how this trope has appeared in film before moving to video games, so that it’s clear this trope isn’t something video games invented– it existed long before they did, but video games tell stories like movies and other media tell stories, so it’s not at all unexpected that the same tropes which show up in other media would appear in video games as well. Sarkeesian gives many examples of games in which this trope is employed– the sequence of heroine after heroine calling out for help after/while being kidnapped is particularly effective– and focuses on a couple of situations in particular to talk about how the game’s story came to be structured that way.

Still from the ToM test video

Now, if you read my blog regularly, you know that I’m all about agency. It’s more than an interest; it’s a borderline obsession. Video games are interesting to me because I enjoy playing them and have since I was a kid, but they’re also endlessly fascinating psychologically because I love to see how people who can depict agency any way they want– video game designers– end up doing so in practice. A well-known psychological test for theory of mind– the ability to recognize others as having thoughts, intentions, and emotions and comprehend what they are– is to show the subject a video involving geometric shapes on a screen moving around in a way that suggests, to a person with a normally functioning theory of mind, a story about agents. What the subject sees are a larger triangle and a smaller triangle moving in and out of a square shape, but this sequence is commonly interpreted as two figures– two agents– interacting, with one “deliberately” “blocking” the other from “leaving” the “confined space.” Autism researcher Uta Frith explains it here. That’s us applying our theory of mind to the images on the screen, making them characters.

Combat. Pictured: two tanks in fierce (but slow) battle.

If you’re old enough to have played video games on the Atari 2600, you’re quite familiar with geometric shapes being presented as having agency. Two of the favorites played in my house as a kid were Combat and Adventure, the former being a series of different games played between two people operating planes or tanks and trying to kill each other, and the latter being a single-player game in which you are a hero who attempts to take a chalice from a castle while killing dragons with a sword. Or, according to what actually appears on the screen, a square which can become attached to an arrow which changes the shape of three differently colored patterns. Even though these games weren’t realistic in the slightest, it only took a few seconds of manipulating the joystick to figure out what shape on the screen was “you.” You pushed the joystick in a particular direction, and looked for which shape on the screen was moving in that direction. That was you. Even though the shape looked nothing like you, or indeed like anything in existence, you knew by its behavior that this was your representation on the screen. Your agent.

Adventure. Pictured: the chalice, a dragon, a sword, and you (really).

The thing about the damsel in distress trope is that it takes the female character away– she’s removed from the story, effectively, by being turned into a goal. We see her be captured; we see her cry out for help; we see her locked up, and maybe we see the villain torture her a bit, perhaps in view of the hero so he can have that added impetus to spring into action and save her. She’s locked in a tower, tied to the railroad tracks, in the grip of a giant ape scaling the Empire State Building, etc. She is, for all intents and purposes, incapacitated. Her job is to look pretty and wait to be saved, while occasionally perhaps struggling and/or yelping in fear. She is a non-agent. She’s doing nothing, going nowhere.

It’s common enough to hear complaints about this trope– it’s a central feature in fairy tales, and there’s a cool children’s book called The Paper Bag Princess, now on its 25th anniversary, which subverts it completely. The notion of the woman always needing to be saved from something, and falling for her savior— what if Snow White, as it turns out, just wasn’t that into Prince Charming?– is actually more than a trope; it’s the most tired of tired cliches. But when it shows up in a video game, it’s a little different. And, I think, a lot worse, because when the damsel in distress is the focus of a video game story, it’s a story in which you, as player, are the protagonist. You control the central character of the story, the hero who must save the damsel in order to win the day, and the damsel is a thing to be won– literally. She’s a non-being, an NPC (non-player character), made of pixels and memory, and reaching her is the object of the game. Not only is she being treated as a non-agent; she is a non-agent, whereas you are not. You have goals and thoughts and feelings, as both player and character, and you are almost invariably playing a man. A straight man, presumably, because gay men don’t give a damn about kidnapped princesses.

I’m not sure if it was happenstance that just as the Damsels in Distress video came out, I started seeing people tweet and post about this parent whose three-year-old daughter wanted to play as Pauline, the kidnapped heroine in Donkey Kong, and rescue Jump Man (aka Mario) instead of the other way around, so her father hacked the game to make it possible. You can see the result at that link; there’s a video. He says:

Two days ago, she asked me if she could play as the girl and save Mario. She’s played as Princess Toadstool in Super Mario Bros. 2 and naturally just assumed she could do the same in Donkey Kong. I told her we couldn’t in that particular Mario game, she seemed really bummed out by that. So what else am I supposed to do? Now I’m up at midnight hacking the ROM, replacing Mario with Pauline.

Donkey Kong, as you know if you’ve watched the DiD video already, is one of the games discussed there in pretty significant detail. It’s awesome that this father had the skills to actually re-make the game his daughter loved in order to make the protagonist a girl saving a boy rather than the other way around, which is something most parents obviously wouldn’t have a clue how to do and probably wouldn’t trouble themselves about to begin with. I’m guessing most parents would reply “Sorry dear, but that’s not how this game works” and that would be the end of it. So this story has caught on like wildfire because of the extraordinary lengths this man went to in order to make his daughter happy, and it just happens to be the case that the thing his daughter wanted was to play herself in a video game (or at least, to play someone more like herself than Jump Man/Mario).

Or does it?

In some of the heated discussion that has already taken place about the DiD video, I’ve seen people argue that it shouldn’t matter whether the character you’re playing in a video game is at all like you and therefore there’s nothing wrong with the damsel in distress trope, which is a bit like saying that it doesn’t matter whether the “under God” part of the Pledge of Allegiance is religiously significant or not, and therefore you’d better not take it out, so help me, goddammit! Clearly it does matter, or the model in which the player’s character is male and the object of the game is to rescue a female NPC wouldn’t be so angrily defended. The player doth protest too much, and all that.

But I don’t think that most of us who do see a problem with it want to simply swap the roles around like in Donkey Kong and make all video games about female heroines saving dudes in distress– Sarkeesian sure doesn’t suggest that, and it doesn’t sound like an improvement on things to me. Of course it would be better to have more opportunities to play a female character as the protagonist, and there has been some significant improvement in that area in terms of at least making it possible for the player to have a choice about his/her character’s gender in character creation at the beginning. But really, it doesn’t seem like we suffer very much as video game players by not being able to play a character that resembles us closely– not if we’re just fine playing a square who fights dragons using an arrow. Rather, the problem with the damsel in distress trope in games is the fact that there is gender, and for half of us the gender isn’t ours, and can’t be ours. It’s clearly possible for the game to allow us to play as female, and yet it doesn’t. Instead it compels us to play a (straight) male character while dangling a female character in our faces and saying “You can’t be her. You can only save her. We assume that’s all you’d want to do, anyway.”

That’s the rub.

Haunted socks

In which Pat Robertson advises someone who frequently buys second-hand clothing that she should “rebuke any spirits that happen to have attached themselves to those clothes” lest there be any demons who might have become connected to them via the prayers of a witch, as he heard had happened once in the Philippines:

Okay, first let’s all remember Frazer’s rules of sympathetic magic: similarity, and contact/contagion. The rule of similarity entails that an object may take on properties of another object by virtue of being very like the original object in appearance (“like produces like”); whereas the rule of contact/contagion entails that the object takes on these properties by virtue of having been in contact with the previous object, long after the contact has ended. These rules appear to be intuitive, and take effect in various ways. For example, a person might be disgusted by a brownie that resembles a turd, and refuse to eat it (similarity), or refuse to drink from a glass that once contained a cockroach, regardless of how thoroughly or frequently it has been washed since (contact/contagion). We intuit that properties transfer in ways that they actually don’t, which produces some understandable (since we share intuitions) but illogical (since they’re not actually based on anything) conclusions about what it’s okay for us to look like, consume, or otherwise associate with.

The notion that negative spirits, or demons, are behind this association– that they are, in fact, the association itself– is just one step further, really. The religious step. Because religion puts agency behind everything. Most importantly, it puts agency behind what is important to us. What we value. What we feel. What frightens us. What we love. If you demanded that I define religion for you, I would…probably do so a lot more readily than a lot of people who study religion academically. But my definition would go something like “A practice of systematically placing non-human but human-like agency behind our most important intuitions.” That’s a little more elaborate than Stewart Guthrie’s “systematized anthropomorphism” and also more specific than and contrary to Pascal Boyer’s minimally counter-intuitive concepts, because Boyer thinks that traits such as invisibility and non-corporeality are counter-intuitive notions about an agent. I think they’re intuitive. I think that we perceive invisible, non-corporeal agency all of the time and probably have evolved to do so, and that demon-infested clothing is just one tiny, apparently ridiculous* but not actually unique or even very unusual manifestation of this tendency.

I’m looking forward to reading Robert McCauley’s book Why Religion is Natural and Science is Not, because I’m interested to hear his take on a now common refrain among cognitive scientists of religion, which is that perceiving agency in the world around us, behind unusual but significant events, behind our existence itself, is all-too-human even for the most secular of us. But subjecting supposed facts to rigorous testing and objective examination really isn’t– we are certainly quite capable of it, but it has to be learned. This is no longer an alarming hypothesis to me, and in fact it would take quite a lot of evidence to convince me that it’s not true. But the alarm really seems to stem from a notion that the intuitive is somehow inherently more trustworthy, more insightful (see “women’s intuition”) and the counter-intuitive wrong, while nothing could be further from the truth. Intuitions exist for reasons, but after all, those reasons are not necessarily ours.

*Ridiculous, I think, because of the mundane subject matter. Not even the most materialist of us would be surprised to hear that someone associates supernatural agency with, say, an amulet or a relic. We might not believe it ourselves, but we’re familiar with the concept. Clothing from Goodwill, by contrast, is about as ordinary as it gets, so Robertson suggesting that demons might be attached to it sounds as superficial and weird as praying for a good parking space. But if there are supernatural agents out there who are interested in our lives, the postulate that Robertson accepts and many if not most religious people do as well, why should this interest not extend to such things as parking spots and laundry? 

Shame, shame, know your name

Reading about film critic Rex Reed criticizing actress Melissa McCarthy’s appearance using such sophisticated and erudite terms as “tractor-sized” and “female hippo” has me reflecting on moral psychology. You know, as I’m prone to do. Specifically it has me going back to what I know about the way shame and guilt function for both the inflictor and the inflictee, and how they differ. 

You see, guilt is a “what you did” emotion, while shame is a “who you are” emotion. Guilt points to an act, while shame points to a person. Guilt can be a productive emotion because it focuses on the morality of what a person did, encouraging him or her to recognize the immorality of that act, feel remorseful, and improve by not committing the act again. Shame, on the other hand, focuses on a person’s identity and traits, which tend to be more or less permanent. Inescapable, or at least not easily or immediately escapable. And oftentimes not even a moral concern. Shaming someone therefore is not generally a productive thing to do, and isn’t intended to be. Quite to the contrary, making the person feel shitty about him/herself, full stop, is the point.

Criticizing what someone did wrong arouses a feeling of guilt (if it works). Telling someone they’re horrible, ugly, stupid, evil, etc. arouses feelings of shame (if it works). Martha Nussbaum wrote an entire book about how guilt is a useful and necessary concept in a justice system, in a legal context, whereas shame…not so much. Guilt encourages rehabilitation; shame encourages despair and recidivism. Because if you succeed in convincing someone that who they are is a terrible person, then there’s nothing for them to do but be a terrible person. But good people can do bad things– all people do bad things– so being guilty of such simply means that you’re guilty of doing a bad thing. It doesn’t define you.

Americans are terribly individualistic, and by and large I consider this a good thing. However an unfortunate consequence of this is how good we are at turning guilty matters into shameful matters. We’re amazingly talented at conflating “who you are” with “what you did,” so that even the things an individual has very little to no control over are things he/she can be made to feel bad about. It’s true that you can arrive at a characterization of someone’s general personality by adding up the things they’ve done, and this kind of shaming can theoretically be useful. But generally speaking, shaming skips that step and goes straight for things the person in question can’t do much about.
So I have devised this handy, basic, and utterly unoriginal rubric for determining what counts as not-useful-shaming:

The degree of control a person has over their situation is inversely proportional to how much of an ass you are being if you mock or blame them for it. 

In other words, if there is no guilt, there should be no shame. And when there is guilt, focus on that.

But Americans, individualists that we are, are stunningly good at turning “who you are” into “what you did” so that we can go right on shaming anyway. If an undesirable trait isn’t someone’s fault, then by golly we’ll find a way to make it their fault so we can properly blame them for it. The top two examples of this being, of course, obesity and poverty. People who would flinch at the idea of making fun of someone’s race or sexual orientation show no reluctance to ridicule someone for being poor or fat– the latter especially if the person happens to be female, because for women we have made “isn’t sexually appealing to me personally” into a moral wrong. Especially for actresses.

Do people have control over being fat and/or poor? Somewhat. But:
  • it varies from person to person, and you sure can’t tell what it is for a particular person without actually knowing them, 
  • regardless of their specific situation, every person in either of these two categories you see is likely in the process of trying to remove themselves from it at any given point, and
  • membership in these categories is not in itself a “what you did,” and it isn’t a moral wrong. People do not harm others simply by being overweight and/or poor. 
Therefore, according to the rule articulated above, we should…..not shame them for it! Or else risk behaving in a way we should absolutely feel guilty about.

Take note, Mr. Reed.

Aping morality: non-human (secular) humanists?

Whenever I’ve been involved in a discussion of the evolution of morality, the English language trips things up a bit. Due to the fact that “morality” could mean “being good” or “the capacity and tendency to distinguish right from wrong,” it’s always important to note which, specifically, you’re talking about. Generally speaking, it would seem that the latter entails the former– if you have an idea of what it means to be good, then you can probably be good. We all have our failings and occasionally fail to live up to our own standards of morality. But when asked what it means to be a good person, we usually give a description that most human beings could live up to, if they put their minds and consciences to it. By contrast, if a being doesn’t distinguish right from wrong, we generally don’t hold him or her responsible for doing things that would normally be considered wrong. I touched on this last week when talking about what agency means in terms of moral responsibility. An entity with a concept of right and wrong has the capacity to behave morally– this concept is sometimes called a moral sense. Having a moral sense is not the same as being moral, any more than having a car is the same as driving.

Are we good so far? Not moral, I mean, but clear? Okay.

Whether non-human animals can have a moral sense, and to what extent, is a very hot topic. It calls into question our own capacity to make these determinations, where that capacity comes from, and how we can recognize it. Maybe other animals have a moral sense, but it’s so different from ours that we wouldn’t know it if we saw it! Maybe other animals make judgments about all kinds of things that humans just don’t care about. Humans certainly don’t all share the same moral views– moral standards can vary significantly from culture to culture and from individual to individual– but most of us have both an extensive repertoire of ways to express moral approbation or disapprobation and an adeptness for registering when others approve or disapprove of something. We’re excellent communicators, both vocally and non-vocally. We’re actually so good at communicating that we sometimes betray feelings we’d rather not. I’m particularly bad at lying about or otherwise misrepresenting how I feel about something, which is why my career as a professional poker player ended before it began.

Our means of registering how other people feel without their telling us, or even in spite of their telling us something to the contrary, is called empathy. It’s what enables us to “read minds”– not via literal ESP, but by  interpreting patterns of behavior and comparing the situation others are in to our own past experiences, and extrapolating from that how they must feel, what they must be thinking. The simplest form of empathy is emotional contagion– imagine a nursery in which one baby starts crying, and the sound sets off others as well. This form of empathy is reflexive, which means there’s no point at which you actually think “This person must be feeling/thinking ______.” There’s a scene in the movie Clue where Mrs. White describes how her husband was murdered: “His head had been cut off, and so had his…you know.” Cut to three men listening while sitting on the couch, all simultaneously crossing their legs at the knee.

With reflexive empathy, you are effectively projecting yourself into another person’s body and situation and feeling what you imagine they feel, whether you want to or not. This is generally referred to as sympathy or a sympathetic reaction, and it’s very effective in terms of getting us to care about the welfare of others. It’s the reason that witnessing suffering bothers us, and it inspires us to help those who are suffering and be angry with those who cause it. If the person who is suffering is familiar to us or similar to us, our sympathetic reaction to their suffering is both more likely and stronger when it happens. If you want to prevent someone having a sympathetic reaction to another’s suffering, a good way to go about doing it– after attempting to disguise the fact that there’s someone suffering at all– would be to make the person suffering seem as unfamiliar and/or dissimilar as possible, so that it’s harder to relate to them.

Hume characterized empathy as the origin of morality. That is, he said, how we become moral– we are moved by the pain of others because we associate them with ourselves, and from this we extrapolate general dispositions about how others should be treated. We derive a moral sense.

So if other animals have empathy, does that mean they have a moral sense?

I think the answer from Frans deWaal is “yes” and “yes.” That is, yes he believes that some non-human apes have the capacity for empathy, and that this constitutes a capacity to form moral judgments. That’s what I expect him to argue in the new book he has coming out, The Bonobo and the Atheist.

A primatologist– and one you should read, if for some reason you haven’t already– deWaal has decades of experience observing the behavior of captive chimpanzees and bonobos, and has written copious books and articles on the topic, especially the ways in which that behavior is similar to our own. And then he began writing books and articles defending his emphasis on the ways in which their behavior resembles our own. The charge, as you might expect, was anthropocentrism– an insistence on incorrectly interpreting things (in this case, non-human primate behavior) in terms of human thoughts and behavior. To this, deWaal responded by accusing his accusers of “anthropodenial”– an insistence on refusing to interpret things in terms of human thoughts and behavior, even when it’s correct (accurate) to do so. You can see this exchange take place explicitly in Primates and Philosophers: How Morality Evolved, where deWaal argues basically that chimpanzees and bonobos have the ability to empathize and therefore at least a precursor to a moral sense, which can be recognized in their behavior by its similarity to human empathy– and there’s nothing hasty or unparsimonious (i.e., inaccurate) about  it.

That’s not what this post is about, though. Nor is it, really, about the general topic of morality in non-human primates or other non-human animals. It’s really about the fact that The Bonobo and the Atheist will be deWaal’s first book addressing religion specifically, and what I’m afraid he’ll say about it. See, his books to date have (largely) been about the possibility and extent of a moral capacity in the great apes, non-human primates, particularly chimpanzees and bonobos. Now my concern is that he’s going to use this body of data to argue that we– human beings– don’t need morality to come from God, because we’ve evolved it. That our closest living primate relatives are, in effect, secular humanists (or at least capable of being such), and therefore we humans might as well be, too.

This position– if indeed that’s what deWaal argues, and I don’t know if it will be– doesn’t bother me because it’s false. It bothers me because it’s beside the point.

Let me back up.

If Great Apes-Who-Are-Not-Humans (that would include chimpanzees and bonobos, but also gorillas and orangutans) do indeed have the capacity for empathy, then I would say that “precursor to morality” is a fair description for it. It would seem, on the face of it, that if nonhuman primates have the capacity for empathy, then it is indeed evolved. I expect deWaal to argue this– he has before. (However, this isn’t necessarily the case. It could be, for example, that the great apes have evolved to have the kind of brains which make it possible for us to have an empathetic response, but not be “wired” for empathy per se. To continue the clumsy analogy I began with, this would be like saying that just because you have a car, doesn’t mean you have a drive-to-the-store device. You have a device which you can drive to places, including the store if you so desire. This distinction goes to the heart of the “general learning device” vs. “kludge” discussion of how our brains have evolved, which I do not have any desire to get further into here.)

But even if other Great Apes have the capacity for empathy and hence morality, that is not a good point of evidence with which to oppose a theological insistence that morality must come from belief in God. That’s why I think, if this is the arrow deWaal will be firing, it will miss the target. Because we don’t need to have evolved morality (that is, to have inherited a moral sense) in order to have it– both the capacity to be moral, and the tendency to exercise it. Clearly, however we came by these things, we have them. And they are universal, and they do not require belief in a deity.

Now you may ask, why does this matter? Shouldn’t demonstrating that we have evolved a moral sense answer that question just as well, if not better? I say no, for a few reasons. First, because a lot of the people who believe that if your morality doesn’t come from God you don’t have morality at all, don’t believe in evolution. They very likely don’t have a good grip on what evolution is. And plenty of people– theist and atheist alike– who do know what evolution is, and are fully onboard with it, nevertheless have a distaste for evolutionary psychology or anything that smacks of it. And even those who don’t have such a distaste at all but have a dedication to scientific rigor (which all of us should, presumably) will need to be convinced. And I’m saying this convincing is important– very much so– but also beside the point.

You don’t need to demonstrate that morality is evolved in order to show that it doesn’t need to come from God, or at least a belief in God. The reality of nonbelievers being moral now, and of immoral behavior committed not only by believers but by believers in the name of the deity who is supposedly the origin of morality (not just the capacity to be good, but Good itself), accomplishes that.

I think of this every time I see, for example, someone claiming that those who oppose him or her politically are opposing morality itself. As if there’s a monopoly on morality: it only comes in one brand, and anyone who doesn’t have that brand doesn’t have morality at all. No knock-offs, even. Fellow nonbelievers– you’re not the only ones who, it’s being maintained, are not just insufficiently moral but incapable of acknowledging morality itself because your concept of it is somewhat different from that of the person making the accusation. Often that person will pretend that members of the morally bereft group he/she is describing are nonbelievers, because no “true” believer would support the right to an abortion/separation of church and state/feminism/sex before marriage/ending school-sanctioned prayer/supporting the teaching of evolution/ending the War on Drugs/ending war, period etc. But realistically speaking, there are nowhere near enough nonbelievers to accomplish any of these goals. And yet there is ample support for them. Hmmm.

So…yeah. Perhaps I’m flailing at windmills, and in fact deWaal’s book will not go anywhere near making the we-evolved-morality-therefore-we-don’t-need-God argument. But since this argument exists, and is actually relatively common to see whenever a believer challenges a nonbeliever regarding where he/she finds his/her foundation of morality on the basis that if God does not exist we should all be out murdering, raping, stealing, etc., I think it’s worth discussing why this approach is not actually the best one.

The best one is far simpler: There are loads– loads– of moral standards which are not based on divine mandate. Many of them were endorsed by Greek philosophers before Jesus ever set foot in Bethlehem. It’s not possible to show that morality didn’t come from God, because God’s existence itself is non-falsifiable. Fine. But it’s easy to evaluate whether the morality that is claimed to come from God, is in fact, moral or not. This will very likely get a person accused of “judging God” (and who has a right to do that?), but since the person making these proclamations is invariably not God, but a man…well. It carries just as much weight as anything else said by man.

I’m really looking forward to deWaal’s book, despite my misgivings stated here– and hey, for all I know, they might be totally off-base. I hope so. And if you aren’t familiar with his books, go get Chimpanzee Politics when you can. Everybody should read that one, and will likely enjoy it.

————————————-

Prior relevant writing: Is Darwin Responsible for the Chimp Attack?

Secret Agent Woman

Jennifer Shewmaker, a psychology professor at Abilene Christian University, has a blog post blaming the Steubenville rape case in part on objectification of women. You should go read it, but first read about the Steubenville matter if you haven’t already. I have some theories about what would possess teenagers to create videos of themselves mocking a fellow student for getting repeatedly sexually assaulted at a party and then post the videos online, but they’re half-baked. And right now I want to talk about the aspect Shewmaker focuses on.

First, I agree that objectification does contribute to this, but a “me too” isn’t good enough here. “Objectification” has become too pat a word, too cliche. It’s not wrong, but it’s so commonly used that I think the meaning has been largely sucked out of it and people’s eyes tend to glaze over when they see it. And I say this having written about objectification and the problems with it multiple times before, each time cringing a little internally while thinking about how the word, a very important word, has become a slogan.

So let’s focus instead on the opposite of sexual objectification– sexual agency. Or just, you know, agency to start.

An agent is a being with a will, desires, motivations, and responsibility. An agent does things for reasons, and can be blamed or praised when those things are wrong or right, respectively. In order to be a fully realized agent, you need to be capable, adult, mature.

An agent, when it comes to legality, is someone who can be party to a contract. We do not hold a person to a contract if important information was withheld from him or her in the contract’s arrangement (that would be fraud), or if the person him/herself was for some reason not mentally competent to enter into such an agreement, because these are factors that diminish agency. They make a person less capable of making an informed, responsible decision. And it’s wrong to deceive people into doing things against their best interest (that’s taking advantage of them), and it’s wrong to blame people for behavior that either wasn’t immoral or over which they had little or no control, or both.

When a child or someone with a severe mental disability does something bad, we temper our judgment according to their diminished agency. When an animal does something bad, we blame it scarcely at all. Children, the mentally disabled, and animals are placed in the care of rational, caring adults, fully-realized agents, who make decisions for them. Even though they are not fully-realized agents– especially because of this– we consider it wrong to abuse them. Though they are not moral agents, they are moral patients– beings we should treat morally, even though they may not be able to treat us in that same manner.

There are men who think that women are like children, the mentally disabled, or animals in this regard. No, they probably don’t think in terms of moral agents and moral patients, but to them the only people who can be fully responsible, mature actors are adult men. To this sort of person, sexually assaulting a woman is wrong– but primarily because it goes against the interests of whatever man is in charge of her, her husband or her father. A woman’s sexual “purity” (scare quotes here because having sex is not like dropping a bit of black paint into a can of white, or a fly into a pitcher of milk) is a commodity, the strength of which determines her value to these men. In that regard she hovers somewhere between child/mentally disabled person and animal, because children/the mentally disabled aren’t expected to provide a service, whereas animals often are. It would be more accurate to say, actually, that they are used for something– dogs for hunting or sniffing out drugs, horses for pulling carts, various livestock for eating, and so on. Women are used, to this mindset, for sex and baby-making. If they can no longer be used for these functions or nobody wants to use them for these functions, they are irrelevant. As Tina Fey said, “crazy” is a woman who keeps talking when nobody wants to fuck her.

To this mindset, rape is only as wrong as theft– and it’s theft not at her expense, but at the expense of another man. If no man is in charge of a woman, or if she’s been “used” too much, then….eh. If you take someone’s dog and beat it with a stick, you’re in serious trouble. If you take a stray dog and do the same thing, not nearly as big a deal.

A study performed earlier this year indicated that people, male and female, literally see women as more like objects and men as more like people. Of the images that Shewmaker used to accompany her blog post on objectification of women, the worst one to me is an ad depicting a woman in her underwear lying on a bed, with a Playstation controller lying nearby, its cord leading directly into her belly button. With this, you can control the woman, haha. The caption reads “Keep on dreaming of a better world.” Of all depictions of woman-as-sexbot in media– and there are so many the idea is well past cliche at this point– that’s certainly one of the clunkiest. Congratulations, Che Men’s Magazine– you’re even lousy at sexism!

But even so, even in spite of these, I find it easier to focus not on how women are turned into objects, but how they’re denied having agency. It seems more accessible to take what a man is generally considered to be, and then examine what is subtracted for a woman (“How do you write women so well?” “I think of a man, and I take away reason and accountability”). And then look at the ramifications.

There are people, and then there are women. 
There are two kinds of people: men and women.
There are people, and amongst them are men and women.

Yes, that’s better.

TMT

Terror management theory sounds like a government doctrine on how to combat suicide bombers. It is actually, however, the name for a discipline of psychology devoted to the study of how people deal with being…well, terrified. How they cope, mentally, with the knowledge that they are mortal– that they will eventually die, with reminders of this occurring regularly in the form of other people dying. Terror management theory asserts that mortality salience (being made to consider your own death) affects people’s decision-making on an individual and group level, which makes us different from every other species on earth because we are probably the only ones who can consider the possibility of our deaths. Many non-human animals can experience fear, but that instinctual avoidance of predators isn’t “death avoidance” per se. They don’t want to be eaten because being chomped on sucks, and being chomped on continually until there’s nothing left of you probably sucks even more. At least, they don’t want to try it and find out.

Humans? We also fear being chomped on– or shot, or stabbed, or infected, or run over, or anything else which causes pain and may end us in the process. But we also fear being ended for its own sake. The cessation of our individual lives. That’s scary, regardless of whether we believe another life is going to come after it. Terrence Deacon wrote in The Symbolic Species about how symbols define language use, and how our ability as a species to pull off that amazing trick represents the co-evolution of language and the brains that use it. In other words, symbols make us who we are– big-brained apes with the capacity to entertain counter-factuals (things that could be true, or were true, or will be true, but aren’t true right this instant) and use them to communicate about things that aren’t right there in front of us (a symbol being something that stands for something else). We are so attached to this ability, he said, that we effectively have made symbols of ourselves, and we fear death because it represents the end of that symbol. If we end, all of the things we stand for will end. All of the significance, all of the meaning. That’s what we fear.

Which could explain why terror management theorists have found that mortality salience has a particular effect on us– it makes us want to preserve the cultural meanings we share with others in our group at all costs. When faced with something threatening, with the fact of others dying and the reminder that we ourselves could die and will die eventually, we become more insular. Less tolerant of dissent. More suspicious of defectors and traitors. More certain that our way is the right way, and more likely to adopt a “with us or against us” policy. The biggest example most Americans have of this is the aftermath of 9/11, but any tragic event can trigger these feelings. On the plus side, we can draw together with those who sympathize both emotionally and ideologically, and provide support for each other on both fronts. Unfortunately that comes with a corresponding desire to jettison, again ideologically or even physically, those who take a different view.

As with all biases, I don’t think that simply realizing the existence of one and learning about it can somehow shield you from experiencing and acting on it. But I think it helps.

I am not a cockroach– what materialism is, and isn’t

Several years ago, I bounded out of a faculty building on a university campus and, in a thoughtful and optimistic mood, joined a couple of lecturers in the pub across the street. After we’d settled on benches in the garden out back, I mentioned that in the course of my studies, I seemed to be becoming a materialist. The reaction was immediate and memorable: “A Marxist, you mean?”

Not memorable, mind you, because unusual or unexpected. I had, after all, been studying political philosophy that semester, and this was the United Kingdom– and my interlocutors were British and Austrian, respectively. What else could I possibly mean? After all, Marx was the continuation of a long line of philosophy becoming more and more about…well, the material. German philosophy in the 19th and 20th centuries was in large part about coming down from the ideological rafters and starting to deal with mundane, real, ordinary life. Realism in reaction to idealism. Imagine that scene from Mary Poppins in which they visited a friend of hers stuck on the ceiling because he laughed so much, and eventually everyone started laughing along and floated up there with him, while Mary stood on the floor beneath them impatiently waiting for them to come down. Those people floating around, drinking tea? Hegelians. Mary Poppins on the floor (at least, at that specific moment)? Young Hegelians, which sounds like progeny but was really more of a reaction against the old man. Estranged progeny. Marx was one of them. He was impatient with philosophers pretending that philosophy could be about things that don’t really matter– or to be more charitable, things that don’t really matter in daily, practical existence, such as making a living and feeding yourself and your kids. While Hegel waxed on about the für sich (for itself) and the an sich (in itself), Marx took from that a lesson to figure out what it means to exist for yourself as opposed to for someone else, and translated it into a matter of property, and of who is in control of property. That’s Marxist materialism.

That was not really what I meant. But it’s connected.

What I meant was that, in the course of studying religion and culture, I for some reason got it into my head that I ought to learn more about the mind and how it produces…well, anything, including culture, to begin with. And with that thought, in rapid succession I read a long list of books which included the following:

  • Consilience, by E.O. Wilson
  • Darwin’s Dangerous Idea, by Daniel Dennett
  • The Blank Slate, by Steven Pinker
  • How the Mind Works, by Steven Pinker
  • Consciousness: An Introduction, by Susan Blackmore (if you have not read this, and are interested in the science and philosophy of consciousness and the theories of its principal thinkers…do)
  • The Meme Machine, by Susan Blackmore
  • Consciousness Explained, by Daniel Dennett
  • Freedom Evolves, by Daniel Dennett
(This was pre-Breaking the Spell. This was pre-, for that matter, a lot of the popular literature on the cognitive science of religion, which became a thing in 1993 but didn’t really catch fire until about ten years later.)

When you think about that, it’s really no wonder my MA thesis was a mess. It was a struggle between social constructivism– “continental philosophy”– as I was being taught, and a much more…well, naturey approach which I’d undergone basically on my own. Now, I hasten to pull up a bit here and note that the constructivist perspectives I was hearing about in the classroom (“post modernist” would be the indelicate term) were not useless. Far from it. I learned how important perspective is– that it must always be taken into account, and that manifold factors shape one’s perspective without any requirement of awareness or acknowledgement on the part of the speaker. I learned what it means to have privilege, and to lack it, and that claims of objectivity must never be taken for granted. That differences are as important as generalities. That it’s important, critical, to understand where people with other views are coming from– but that you don’t “win” against them by knowing it; you can’t psychoanalyze someone into submission. Anthropology, sociology, psychology…studies of human thought and behavior can’t begin and end with what people say about their own motivations for doing things. You need a heterophenomenological approach, which acknowledges that experience but doesn’t take it as authoritative. And knowing someone’s motivation may not confirm or refute what he or she is saying, but it can tell you a hell of a lot about why they’re saying it.

Knowing all of this augmented, rather than detracted from, my understanding that we are simply organisms making our way in the world, in our environment (both natural and social). I started to see culture as more of an extended phenotype than an independent causal force. My thesis was, in retrospect, a rather weak project and a terribly ambitious one at the same time– I was trying to sell cognitive science to scholars of religion. Make what seemed obvious to me– that you need to understand the brain in order to understand belief and behavior, including religious belief and behavior– seem even palatable, much less relevant.

Admittedly, I didn’t do the best job. At least, it didn’t appear to be very convincing. When it became clear that my PhD was going to be more along those lines, a meeting was held and it was determined that I’d need to go elsewhere. Why not to Denmark, where this university is starting a brand new program for the cognitive science of religion?

Yes. 

Anyway, getting back to materialism. I’m writing this in the first place in reaction to an “open letter to atheists”  posted on Answers in Genesis, which repeats every last misconception and outright falsehood about what it’s like to be an atheist– and therefore a materialist (which doesn’t actually follow, but oh well)– there is. To wit:

Do you feel conflicted about the fact that atheism has no basis in morality (i.e., no absolute right and wrong; no good, no bad?) If someone stabs you in the back, treats you like nothing, steals from you, or lies to you, it doesn’t ultimately matter in an atheistic worldview where everything and everyone are just chemical reactions doing what chemicals do. And further, knowing that you are essentially no different from a cockroach in an atheistic worldview (since people are just animals) must be disheartening. Are you tired of the fact that atheism (which is based in materialism, a popular worldview today) has no basis for logic and reasoning? Is it tough trying to get up every day thinking that truth, which is immaterial, really doesn’t exist?

Okay, yes, there is a version of materialism which entails that nothing but physical objects exist. That’s why I now prefer not to call myself a materialist– or a material girl, for that matter (diamonds have never been my best friend, or even a close acquaintance, really). I much prefer the term naturalist (which should not be confused with naturist. No nudism in this instance). It means, basically, that the natural world is what we have. That science has it right, and we should consider things to be real only if they have an objectively demonstrable existence. Which means, yes, that supernatural factors should not be taken into account. Metaphysical naturalism pairs well with secular humanism, the ethical philosophy that as humans we have to rely on our own resources and abilities to make existence better. To flourish, to reach our full potential, to do what my former adviser called “becoming divine.” But by that, she did not mean we should literally become gods ourselves. She was talking about enabling fulfillment, becoming the best, most satisfying version of yourself. We might have disagreed on several things, including terminology such as this, but not on the concept itself. To hear the author of this “letter to atheists,” you’d think such a pursuit would be worthless without a belief in God.

Actually, the author is mistaken about a lot of things, and it makes my head spin to try and articulate exactly how many. Perhaps most ironic is the fact that not only is atheism not based in materialism (since not being convinced of something doesn’t need to be “based” in any particular philosophy), but there are also plenty of non-materialist atheists out there. Believers in the supernatural are certainly the stars of the mind/body dualism debate, but they aren’t the only players. The most obvious part of this “atheists are materialists, and materialism is a crap philosophy” portrayal is the inability to imagine that there can be any meaning in life without a belief in God– an inability whose strength I don’t think most atheists appreciate. That is some powerful conviction, even with the similarly powerful fear of eternal hellfire which frequently accompanies it. What the author of the above letter, Bodie Hodge, is doing is conflating naturalism– the belief that objective reality is all we have– with the naturalistic fallacy, which says that the way things are is the way things should be. This is a common mistake, perhaps the most common mistake made regarding any view of life which appears too reductionistic for the person critiquing it: You think this is all there is. That must mean that’s all you want it to be. Well, of course not, replies the naturalist. If I point out that we’ve got a newly built house and several cans of paint, that doesn’t mean I’m opposed to having a painted house. I’m simply refusing to believe that the house will be or has already been painted by magical elves. If we want that house to be painted, we’d better get out the brushes and roll up our shirt sleeves.

Similarly, the criticism that “everything and everyone are just chemicals doing what chemicals do” is only really a criticism if you fail to recognize that what chemicals do is freaking amazing. Complaining that what we do and are is chemicals is like complaining that the Sistine Chapel is made of bricks, only worse because a chemical is far more versatile than a brick (and bricks are pretty darn versatile). “Greedy reductionism” is Daniel Dennett’s term for when you explain how something works by describing the interactions of its components (reductionism), but in the process of doing so, you leave some things out. You fail to take into account the true complexity of what you’re explaining, and end up doing the equivalent of describing how to bake a cake without mentioning that it requires some heat– a genuinely invalid move. Anti-reductionism, by contrast, is a refusal to see something in terms of its components in the first place. Opponents of evolutionary theory, and of what I’m going to stick with calling naturalism, often seem to have a hard time with the concept of emergent properties. Or at least, the concept of us being emergent properties. It’s okay for a lot of cars to equal traffic, but not for the activity of a load of chemicals to equal consciousness. Dennett was famously quoted as saying that we have a soul, but it’s made of lots of tiny robots. Religious anti-reductionists don’t like the robots. They don’t like the idea of unthinking things combining to form a thinking thing, at least not without the outside help– the outside design– of some grander, elevated thinking thing who had this all planned out from the beginning. Whenever that was.

“Knowing that you are essentially no different from a cockroach in an atheistic worldview…” Religious anti-reductionists have a problem with essentialism, too. And by that I mean, they seem to be addicted to it. They are too fond of it. Things have properties, and those properties are immutable, and there’s no room for one thing to turn into another thing– the very notion is ridiculous. Gender essentialism is the belief that men have to be one thing and women another, and never the twain shall meet– except to have sex and make babies, of course. That’s common enough in religion, but the “atheists are just the same as cockroaches according to atheists” thing is saying that unless we consider humanity to be separate from the rest of existence as distinguished by our relationship with God (aka possession of a soul), then we might as well be cockroaches. Hodge assumes the conclusion of atheists by his own standards– we reject what he thinks distinguishes us from vermin, therefore we must perceive ourselves as vermin. And wow, that must suck for us, huh? That must be why when you enter a room and turn on the lights, all of the atheists scatter for the dark space under the stove or the fridge.

But strangely…no, we’re not. We’re living our lives as human beings, thinking thoughts, doing work, relating to others, practicing empathy and creating works of art and caring for family and occasionally taking a road trip or seeing Avatar in 3D or making a podcast about video games. No demonstrably diminished joie de vivre; no elevated angst; no visibly heightened incidence of people being told to get off of lawns, or of general curmudgeonliness (well, I can’t exactly speak to that– I’ve been a curmudgeon since age 20 or so). Hodge is simply mistaken about the consequences of non-belief, apparently because he cannot comprehend what it’s like not to believe. It’s like the god-of-the-gaps wrapped up in an argument from incredulity– “I can’t fathom what it’s like to not have, much less not need, this thing I find so important. So I can’t help but conclude that people who lack it are missing something important, and must suffer from the lacking.”

That– assuming someone’s conclusion through the lens of your own philosophy– is part of prejudice, or more basically it’s a form of ignorance which gives birth to prejudice. It seems to be most easily overcome not just by actually getting to know members of the group you’re prejudiced against and seeing that they have no existential gaps in their lives which need to be filled, but also by coming to realize that the choice you made (more or less voluntarily, depending) was in fact a choice. There were/are others, equally legitimate. Comparative religion courses are valuable in part because they encourage this realization– they nudge a student to take note of the fact that if he or she had been born somewhere else, his/her beliefs about the order and creator of the universe might well be radically different. It’s fine to stop there– this is the foundation of inter-faith exchange, after all– but some of us go on to conclude that if all faith-based perspectives are equally valid, then they are all equally invalid, and that maybe it would be better to go about life on the assumption that they are. This is a conclusion I reached in my junior year of college as a religious studies major, as part of a program at Texas Christian University which I recall the local Campus Crusade for Christ called an “atheist training camp.” Not hardly– it simply wasn’t/isn’t a seminary.

Is it tough trying to get up every day thinking that truth, which is immaterial, really doesn’t exist?

No, because I have no trouble distinguishing between the legitimacy of beliefs and the reality of physical objects. I’m perfectly aware that the fact that modus ponens can’t be found anywhere in the universe using a GPS or any other tracking device makes it no less real. You will not catch me stepping out of an airplane at 10,000 feet without a parachute on the conviction that truth is relative, and therefore doesn’t matter. But you also won’t catch me declaring that gravity (which is not material, but is physical) or modus ponens (which is neither) created the universe, and therefore should be worshiped. One thing a naturalistic worldview does cut down on is relentlessly anthropomorphizing things.

Politics for creative types

Matthew Inman’s comic on the creative process (which you’ve almost certainly seen already because you already read The Oatmeal; and if you haven’t because you don’t, now’s the time to start) got me thinking about creativity and political leanings. I don’t know anything about Inman’s own politics, really, aside from the fact that he has a firm grasp of the notion of copyright, but I wouldn’t be surprised if he leans to the left at least a little bit. People who make a living– and people who wish they could make a living– producing creative content tend to, and I’ve been contemplating why that is.

I think it has something to do with just world bias and how utterly it conflicts with the creative market.

See, probably every creative person you know has at some point (probably many points) in their life had the thought about someone: “That person produces complete crap, and yet people shower affection, praise, and cash upon him/her.” A creative person is intimately aware of how much of his/her success (or lack thereof) is based on a combination of the sheer caprice of public taste and plain old dumb luck. This does not mean that creative types who are successful didn’t earn their success, but rather that their success cannot be summed up simply as the reward of effort, and most of them know this. A creative person doesn’t want his/her success to be simply the reward for effort, because that totally discards the notion of talent. And how much of it they have. And how that makes them special.

Note: there’s nothing wrong with wanting to be special.

But what this means is that even the most full of him/herself, egotistical artist/writer/performer on the planet– and there’s no shortage of those– is at least tacitly aware that things could be very different, that he/she might not have been “discovered,” that his/her genius might have gone permanently unrecognized, and he/she could have become the proverbial starving artist. Or, in many cases, is one now. So the artist sees the importance of a social safety net, and doesn’t look down on those who find themselves needing to land in it. But, you could say, artists don’t have to starve– they could easily do something else! Many of them do do something else! Yes, but one of the things about creativity is that you have to do that, to be that. Creators gotta create. They find themselves doing it regardless of whether anyone’s paying attention, let alone paying them for it, and that takes time, energy, and other resources. Money that a non-creative person might spend on tickets to the Super Bowl (no, I’m not saying only non-creative people like football. But…well, hmm. Maybe I am) gets spent instead on paint, instruments, clay, fabric, microphones, and Photoshop. Etc.

But what does this have to do with being liberal, exactly? Well, conservatism is rife with just world bias– the assumption is “I built this,” or, when prompted to be religious, “I built this, with the opportunities God gave me.” A conservative’s success is his/her own, and a conservative’s lack of success is…temporary. Not necessary. A test of faith. Things along those lines. To a conservative, the market is not a matter of public taste– it’s a matter of public recognition of quality, and quality is produced through effort. Effort and know-how. The market approaches objectivity in that regard. Criticize a movie that won out big at the box office, and a conservative will be the first person to remind you of that fact. The existence of Jersey Shore is simply the public not knowing what quality is.

This is why, when a conservative talks about “personal responsibility,” he/she is talking about taking responsibility for the fact that you’re successful or not, and not bugging anyone else about it. You’re poor? Get a job. Got a job? Get another/better job. Do some work; work people will pay you for. Don’t take from others, you lazy grasshopper, when all of us ants are putting in an eight-hour day, every day, and providing goods and services the market wants. It might not be “fair” that the market doesn’t want whatever it is you are producing, but life ain’t fair. Suck it up.

The starving artist does have to suck it up. But they are very aware of the “have” in that sentence. This is why the expression “selling out” exists. This is why creative types can be suspicious of the notion of “property rights”– because it suggests that property is as important as people. Other rights we’re familiar with are about individuals and what individuals are allowed to say, think, and do…property rights are about what they’re allowed to have, and that’s suspicious. What we’re allowed to have has, after all, at some points included other people. The notion of a corporation has made what we have into a person, and liberals are not any happier about the thought of property becoming people than they are about people becoming property.

Property rights are important to me, but I had to learn why they should be. It wasn’t nearly as intuitive as the right to be creative, to produce things because you can and want to, for your own pleasure and that of others. I had to come to see property as the necessary condition for that production– an extension of the individual, the denial of which directly inhibits his or her pursuit of happiness. I think that’s how you sell the importance of the Fourth Amendment to liberals, to make them regard it as anywhere near as important as the First– you make it harder for a person to live, to create, to pursue happiness, when you take his or her things away. Creation is done via speaking and doing, and the speaking reduces to doing, and you can’t do without stuff. Artists are well-accustomed to doing with less than they’d prefer to have, making it work (because the alternative is to not create at all), but it’s possible to see the practical effect of taking away what a person needs, and recognize that the damage that does is similar to that done by attacking or silencing them. And creators are good at nuance, so they can recognize that this doesn’t mean taking someone’s stuff is identical to attacking or silencing them, though it can amount to the same thing or even be worse. Property rights aren’t just so that CEOs can live in enormous houses– they’re also so that your life savings doesn’t get confiscated by the police without so much as charging you with a crime, so that your privacy is not invaded for the sake of preventing you from ingesting substances which conservatives find morally objectionable, and so that your autonomy is not taken from you because you were caught doing so.

The emphasis on autonomy is, incidentally, why I consider myself a libertarian, albeit a very left-leaning one. I support a safety net, but I also support the ability to do pretty much any kind of gymnastics you care to above it. My sense of personal responsibility doesn’t extend to holding you fully responsible for screwing up your own life, and certainly not for others– or life itself– screwing it up for you. I strongly believe people should be allowed to make their own mistakes, but there’s a limit to how much suffering should be permissible as a consequence, and not everyone who finds themselves suffering made any mistake at all– certainly not one that the person looking down on them from the balcony of a mansion or the edge of a pulpit couldn’t have made just as easily him or herself, if things had gone slightly differently. Trading Places is a damn good movie.

And it was made by creative people. Probably liberals.
