The trolley and the psychopath


Stop me if you’ve heard this one. A trolley carrying five school children is headed for a cliff. You happen to be standing at the switch, and you could save their lives by diverting the trolley to another track. But there he is – an innocent fat man, picking daisies on that second track, oblivious to the rolling thunder (potentially) hurtling his way. Divert the trolley, and you save the kids and kill a person. Do nothing, and you have killed no one, but five children are dead. Which is the greater moral good?

This kind of thought experiment is known as a sacrificial dilemma, and it’s useful for teaching college freshmen about moral philosophy. What you maybe shouldn’t do is ask a guy on the street to answer these questions in an fMRI machine, and then use his answers to draw grand conclusions about the neurophysiological correlates of moral reasoning. But that’s exactly what some neuroscientists are doing. The trouble is, their growing body of research is built on a philosophical house of cards: sacrificial dilemmas are turning out to be exactly the opposite of what we thought they were. Guy Kahane wants to divert this trolley before it drives off a cliff.

Kahane, deputy director of Oxford’s Uehiro Centre for Practical Ethics, has never been a big fan of the sacrificial dilemma. The main problem, he says, is that it has been misapplied to situations it was never intended for.

Philosophically, the sacrificial dilemma has a narrow purpose. Your choice supposedly illuminates whether you fall into one of two camps on moral reasoning: choose to hypothetically end a life to save a few more, and yours is described as a utilitarian judgment. Reject it, and you are said to be making non-utilitarian (“deontological”) judgments. Roughly translated, the utilitarian is concerned primarily with outcomes, while the deontologist has a morally absolute point of view that holds you couldn’t tell a lie even to save someone’s life, because it’s wrong to tell a lie (Kant being the most extreme member of this camp). But when I say “roughly translated” I really mean roughly: to be truly understood on their own merits, these terms need the full battery of philosophical context.

So what do philosophers mean by utilitarianism? Being a utilitarian means you’re the kind of person who, as John Stuart Mill prescribed, is generally, genuinely concerned with the greater good. That you are capable of “transcend[ing] [y]our narrow, natural sympathies … to promote the greater good of humanity as a whole, or even the good of all sentient beings”. It’s an algorithmic way of seeing the world in which all your actions must aggressively maximise the good.

That’s a demanding moral framework! Let’s separate it from what I’ll refer to from now on as “scarequotes utilitarianism”, embodied by the reaction “what, just kill the fat guy?”

Over time the distinction between the two has been flattened by the inappropriate overuse of these sacrificial dilemmas. As a result we’ve begun to assume that “what, just kill the fat guy?” is shorthand for an entire moral compass tuned to the kind of “God’s-eye” concern for the greater good that defines utilitarian ethics. And so, in addition to being “complex, far-fetched, and convoluted”, Kahane says, sacrificial dilemmas have been misunderstood and misapplied.

But while it’s absurd to use them to pigeonhole average Joe non-philosophers into the utilitarian/deontological boxes, could sacrificial dilemmas still offer some small glimmer of insight into the average person’s real-world moral reasoning? For example, might a person who answers “just kill the fat guy” — while not also believing, in true utilitarian fashion, that she should maximise welfare by donating 90 percent of her money to distant strangers — be more likely to agree that she should give to charity?

To find out, Kahane teamed up with some other Oxford philosophers, including Brian Earp, Jim Everett and Julian Savulescu. They designed a series of experiments to examine exactly how well the answer you give to the sacrificial dilemmas maps to your larger moral framework.

The results, published in January in the journal Cognition, were not encouraging.

Not only does a “utilitarian” response (“just kill the fat guy”) not actually reflect a utilitarian outlook, it may instead be driven by broad antisocial tendencies, such as lowered empathy and a reduced aversion to causing someone harm. Which makes a kind of sense: in the real world, given the choice between two kinds of harm, most people wouldn’t be able to cost it up quite so coldly. In fact, respondents who “killed the fat guy” also scored high on a question that asked them to assess how likely they would be to actually, in real life, kill the fat guy (and to act on other sacrificial dilemmas, like the one where you must smother a crying baby to save a group of hiding refugees). They similarly aced the psychopath test (featuring statements like “success is based on survival of the fittest; I am not concerned about the losers”) and flunked the empathy test (“When I see someone being taken advantage of, I feel kind of protective towards them”). As you might expect, “scarequotes utilitarians” scored low on “concern for the greater good”. Taken together, the results of their experiments led the authors to conclude that answering in the “utilitarian” fashion may reflect the inner workings of a broadly amoral mind.

So why should anyone care about this apart from some philosophers breathing pretty thin air? Because in recent years, psychologists and neuroscientists have seized on these sacrificial dilemmas as a tool of choice for understanding how the brain deals with moral choices:

In the current literature, when subjects judge that it is acceptable to sacrifice one person to save a greater number, this is classified as a utilitarian judgment, and thought to reflect a utilitarian cost–benefit analysis, which is argued by some to be uniquely based in deliberative processes (Cushman, Young, & Greene, 2010), and even in a distinctive neural subsystem (Greene, 2008; Greene et al., 2004).

Neuroscientists have spent over a decade amassing research based on these types of thought experiments. In 2001, in one of the first neuroimaging studies of moral cognition, researchers posed these dilemmas to subjects in an fMRI machine and used their answers to draw deep conclusions about the neurophysiological correlates of morality. It got a lot of attention. That attention led other researchers to follow suit with more studies. “Once a body of research grows around a paradigm, it is easier to build on it than to come up with a new experimental design,” Kahane wrote. “Soon everyone is using this paradigm, just because everyone else is.”

It wouldn’t be the first time neuroscience has stepped in it. In recent years, serious flaws in big studies’ design and reporting — the “dark patch” of the psychopath’s brain, the dead salmon whose brain appeared in an fMRI scanner to spark to life when it was shown pictures of people — have led to questions about whether the discipline has much to add to science at all. This latest reliance on philosophical thought experiments is just asking for more trouble.

But this isn’t to say any investigation of what happens in the brains of people considering moral dilemmas is useless. Kahane just thinks we should jettison the sacrificial dilemma — which, he argues in a paper published in Social Neuroscience, is better at identifying B-school psychopaths than genuine utilitarians — and find something genuinely distinctive of utilitarian moral thinking: new ways to suss out a person’s ability to “transcend our narrow focus on ourselves and those near and dear to us, and to extend our circle of concern to everyone, however geographically, temporally or even biologically distant.” Then neuroscientists can have at it with the brain mapping.

That might even give us a real moral compass for the 21st century. If you’re reading LWON, chances are you’re in the privileged position of being fairly insulated from climate change; your country will probably put adaptive measures in place to make sure you and your children never suffer. But climate change will devastate other parts of the world and kill people you’ve never met and their children. The US and its allies are drone bombing places you’ve never heard of. Your smartphone was made, and will be disassembled, in places you’ve never visited and don’t care about, and they’re polluted as hell. How do we start to make a better world? Not by defining morality in the scientific literature as a calculating numbers game.

A team of neuroscientists is on a trolley headed for a cliff. A lone philosopher stands at the switch…


Correction, 8 April 2015: This post was updated to reflect the contribution of Jim Everett.


Image credits

Ominous trolley: T photography / Shutterstock.com

Moral dilemma: Shutterstock

Moral compass: Shutterstock

Neuroscience balloon: Shutterstock


17 thoughts on “The trolley and the psychopath”

  1. I’ve no doubt in this moment: I’d kill the fat man. But what if on the other track there were a child, or 2 or 3 or 4, picking up daisies?

  2. Society has invested decades of training and resources in the fat man, who now has hard-to-replace skills and knowledge. The kids are much easier and faster to replace, and are probably annoying. Save the fat man!

  3. I don’t care who is on which track, in today’s inordinately litigious environment I’m not touching that lever with a barge pole.

  4. What the hell is the fat man doing picking daisies on the track anyway? Didn’t anyone ever tell him not to play on railway lines? And if you have time to assess the situation with the tracks, and then determine which lever it is you need to pull to divert it, then frankly, you’ve also got time to shout at the top of your voice for the fat man to move, or throw a stone at him if he can’t hear you. In reality I suspect most people would freeze, and look on in horror at the impending doom without being able to do anything.

  5. …& you were doing so very well until, “But climate change will devastate other parts of the world and kill people you’ve never met and their children.” Climate policy being the *very* thing that will kill millions of people you’ll never meet. By denying them access to electricity, transport, healthcare and all other bounteous benefits of human progress achievable through ready access to cheap energy. Instead, they’ll be selfishly set on track to economic oblivion. All those unknown millions, hostage to ideological lever pulling by amoral eco-imperialists. Condemned to be turned down the dead-end, economic branchline of no return.

  6. I’ve always been a tad skeptical of the sacrificial dilemma. Pulling the lever? Feels terrifying, a huge burden of responsibility, one I’d rather avoid. But saving those children FEELS like the right thing to do. But then also, as soon as you change the characters (now it’s a bunch of adults, now the person on the track is a brilliant doctor, now you know the person on the track, etc.) – doesn’t that change the internal moral/emotional compass?

    Maybe the lens with which we view the question is as telling as the answer?
    So then, as you point out, it seems to be a pretty poor indication of what we CLAIM it’s an indication of–someone’s ability to step outside their own self interest, or absolute moral doctrine (depending on who is teaching the class) and judge in a more utilitarian way OR in a more broadly-applied-sense-of-the-good way (again, depending on who is teaching the class).

  7. what if we take the same scenario and frame the dilemma with different combinations of emotional associations?
    the 5 children + positive / negative emotional association (e.g. very mean kids)
    the fat guy + positive / negative emotional association (e.g. a respected acquaintance)

    Regardless, I find that the concept of armchair dilemmas runs contrary to the nature of moral judgement in real life situations: the emotional/cognitive processes that lead us to make a snap decision (or to paralysis) would be different from post-rationalized decisions/explanations or even gut-feelings involved in resolving the dilemma with an intellectual detachment

  8. I don’t think utilitarianism need be as extreme as Kahane portrays, because I don’t think morality need be seen as the ultimate arbiter of the decision-making process. It is perfectly consistent for a utilitarian to consider it morally best to donate all their money to charity, but to nevertheless decide that they are more interested in enjoying themselves or helping those close to them at the given moment; a person who takes a utilitarian view of morality does not need to strive to be a saint or to perfectly follow what utility calculations determine to be most moral. Kahane seems to completely ignore this type of utilitarian perspective.

    And like Kahane, I don’t think I’m being pedantic… all of the talk about “transcending our narrow focus” isn’t necessarily true for utilitarianism, unless you are speaking about strictly _moral_ evaluations – a distinction I’m not convinced was made effectively in some of the original research or by Kahane. To use some of the deontological terms, one view of utilitarianism would be that only the duty of beneficence is moral, and all other duties, values, or commitments are either separate evaluative systems unrelated to morality, or secondary systems built on beneficence.

    I do think the sacrificial dilemmas are more about consequentialism vs deontological views as opposed to “utilitarianism.” But, I’m not convinced that these dilemmas or the neurological research based on them is as narrowly valuable as Kahane suggests. Although I haven’t investigated, I would suspect that all other things being equal, a tendency to override simple moral rules in specific fringe cases with unusual or dramatic consequences reveals a tendency towards some kind of consequentialism (util., egoism, or otherwise). If the rules are being overridden, they are mere heuristics and the source of the morality is deeper than those rules (whether it falls back on more fundamental rules or on evaluation of consequences).

    1. Thanks for this very thoughtful comment. I do wonder, though, about the purpose of abstract ethics. Kahane is affiliated with the Centre for Practical Ethics, which tells you where he’s coming from. If you “know” something is wrong but do it anyway, are you an ethical person?

  9. The essential problem for me is, that “moral dilemma” situation, or one remotely like it, will never occur in real life for any but a minuscule fraction of the world’s population. How much more effective — more credible — this whole business would be if the test situation were more realistic. Or is the realm of realism — the reality most of us face most of the time — beyond the ken of moral philosophers?

  10. This article (I presume Kahane knew better) muddles up two different dilemmas, the contrast between which is essential in separating moral philosophy from experimental psychology. Case 1, you pull a lever. Most people say they would pull the lever and send the trolley towards the one person rather than 5. Case 2, you and the fat guy (this is where fat comes into it) are standing on a bridge overlooking the track. Do you heave him onto it, thus stopping the trolley? Most people say no. Why this difference in response, when in terms of both rules and outcomes the two situations are the same?


Categorized in: History/Philosophy, Mind/Brain, Psychology, Sally
