Technology Bringing Dumb Ethical Questions to Life (Kind Of)

It must be frustrating to be in the business of moral philosophy. Unless all ethics is meta-ethics of some sort, it seems as if it ought to be a field with direct, pragmatic implications for our day-to-day lives. But if that really were the case, one would imagine that professional ethicists would be better at living moral lives: we’d seek their advice when confronted with a dilemma, and, with the help of their research, we’d all be better at being better.
 
There are various thought experiments in ethics that most of us have heard of, but are none the wiser for having contemplated. Take the famous “Trolley Problem,” a perfect example of a supposed dilemma that is completely removed from ordinary life. Not only is it utterly theoretical; compared to the real moral questions we face every day, it is also pretty uninteresting.
 
Here’s the scene: A runaway train is about to hit a helpless group of five people who are tied up on the tracks, for whatever reason. You have the chance to pull a lever that would divert the engine onto another track. But wait! There’s one person on the other track as well! Quick, what do you do? (This reminds me of those choose-your-own-adventure books from my childhood!) (1) Do nothing, and kill the five people? (2) Pull the lever, and kill one person?
 
This scenario was first introduced back in the 1960s by the British philosopher Philippa Foot. Other ethicists soon took on the ambitious task of developing the question into something even dumber. For instance, the famous “fat man variant” involves pushing a fat man off a bridge and onto the tracks to stop the train.
 
My own take is called “the-fat-man-with-diabetes-saves-Hitler’s-mother variant.” Suppose the fat man is about to die of diabetes in the next 12 hours, and meanwhile, one of the five people will give birth to Hitler. Suppose, furthermore, that the fat man hasn’t been given adequate healthcare by us as a society. Do we blame his lifestyle choices for his weight? Can we toss the fatty under a train to save Hitler’s mom? What happened to the lever?
 
With the development of self-driving cars, engineers are currently figuring out how to make an autonomous vehicle behave optimally in an accident. According to some, this poses new, uniquely practical ethical questions. Should the car hit one person rather than a crowd? Hit a squirrel instead of a parked van?
 
MIT recently rolled out an online game masquerading as research, called the Moral Machine, which claims to harness our “human perspective.” Somehow, their hypothetical autonomous car has to choose between, say, killing doctors and homeless people. In the MIT Technology Review, Will Knight writes about a Stanford engineering team that is collaborating with a philosopher, Professor Patrick Lin, hired specifically to deal with these supposedly new ethical problems. The unfortunately chosen title of the piece, “How to Help Self-Driving Cars Make Ethical Decisions,” echoes the confusion surrounding artificial intelligence and ethics, as do some of the statements quoted in it. (“What is the car’s responsibility?” asks Professor Chris Gerdes, head of the Stanford team.)
 
All manner of technology and infrastructure projects involve making these kinds of decisions: build a highway, decide on speed limits, design a conventional car, and you will have a statistical idea of the number of people who will, under given circumstances and over time, die as a result of your decisions. Important considerations, obviously, but not new — we are just used to seeing a car manifest the will of an individual person. An algorithm can have as little “responsibility” as a speed limit sign does, or the lever controlling a runaway trolley car.
 
Perhaps the very existence of these alienated thought experiments has contributed to the confusion. I am reminded of a very interesting post on the blog Thing of Things by Ozymandias, arguing that extremist thought experiments can be immoral in their own right. If you spend enough time discussing and inspecting how and when torture might be justifiable, you are training yourself and others to be OK with torture — regardless of what conclusions you draw.

Cars, taking responsibility.
 
 

Normalizing barbarism in this way is one of the trademark strategies of philosopher Sam Harris. Author of an article titled “In Defense of Torture,” Harris gets very defensive if you dare claim he’s ever written in defense of torture. A favorite device of his, the “ticking time bomb” case familiar from propagandistic TV shows, involves yet another scenario which never has, and never will, come to pass, and whose premise (that torture reliably extracts accurate information in time to matter) is flawed.
 
All these thought experiments involve 100% certain outcomes (hence the choose-your-own-adventure comparison), making them completely irrelevant to our very uncertain lives. But there is no end to the depravity one can sink to by posing absurd hypotheticals — which is particularly demented when it’s done in defense of actual policies, as Harris does. What if mankind could be saved only by taking a completely innocent person and slowly torturing them to death? (The Bible has a story like that; seems like a valid line of inquiry.) Or suppose the only way to prevent total nuclear annihilation was to rape a baby with a cactus? Just hypothetically — what if?
 
What if a thought experiment to “clarify” a moral question is, in fact, designed to do the opposite? What are the ethical implications of that?
 
Harris, too, likes to use technology (non-existent technology) to illustrate some of his dumb ethical inquiries. One of these thought experiments involves the “perfect weapon”: one that could kill all the right targets, and only the right targets, from whatever distance, with no other damage to life or property. If the US had such a weapon, no innocents would die. On the other hand, if our enemies had it, they’d kill tons of people out of pure evil. Hence, our killing is benevolent and qualitatively different. You just have to accept the premise first, which happens to be the same as the conclusion.
 
(This line of reasoning reminds me of the way people commonly, and incorrectly, summarize the “Turing test” to argue that machines can be conscious. If there were a computer that could consistently fool you into thinking it is conscious, the argument goes, you’d have no criteria left to dismiss its consciousness — just like you wouldn’t dismiss it in the case of another human being. Ergo, something. Too bad such a machine doesn’t exist, but hey, QED anyway.)
 
Speaking of the perfect weapon: another age-old and famous thought experiment involves remotely killing a random Asian person for financial gain with the push of a button or by merely willing it. In this scenario, there would be no threat of accountability, and no way of ever knowing your far-away victim, nor the consequences of the murder. Would you do it? (Hint: no.) I’ve seen this question attributed variously to Rousseau, Diderot, Balzac, Chateaubriand, and an episode of The Twilight Zone (“Button, Button”).

By the end of the 19th century, the phrase “tuer le mandarin” (or “killing the Mandarin”) was an established expression. After centuries of collectively contemplating this “problem,” we have an entire state-funded industry and profession around pressing buttons to murder people in Central Asia — and increasingly elsewhere — from the safe distance of places like Langley, Virginia. Just at the turn of this year, the Pentagon finally shelved the idea of awarding drone pilots the Distinguished Warfare Medal, after years of outcry and ridicule. Even with our monstrous contemporary answer to the “Mandarin Question,” we can’t pretend that there is anything courageous in killing by remote control with impunity — the cowardice is part of the question’s premise.

If thought experiments like these shine a light on anything, it’s our psychology. Interestingly, according to polls, most people hold the utilitarian view that a self-driving car should protect as many pedestrians as possible, at the expense of the passenger — but say that they’d go ahead and buy a car that does the opposite. Aside from being inconsistent, these answers are oddly honest. Similarly, it is horrifying to see how distance changes the value we put on human life. Drone operators notoriously call dead people on their Nintendo-like screens “bug splats.”
 
To build a good self-driving car does not, and cannot, imply “solving” the Trolley Problem. And in a civilized society, the morality of the US drone campaign would not even be discussed — at least as anything more than a pointless, horrific thought experiment.


