Here's a Terrible Idea: Robot Cars With Adjustable Ethics Settings

Neither the car manufacturer nor the driver wins if we get to set a dial for who lives and who dies in unavoidable car crashes.

Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.

Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car.

Did your robot car make the right decision? This scene, of course, is based on the infamous “trolley problem” that many folks are now talking about in AI ethics. It’s a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

The thought-experiment is a moral dilemma, because there's no clearly right way to go. It's generally better to harm fewer people than more, to have one person die instead of five. But the car manufacturer creates liability for itself in following that rule, sensible as it may be. Swerving the car directly results in that one person's death: this is an act of killing. Had it done nothing, the five people would have died, but you would have killed them, not the car manufacturer, which in that case would merely have *let them die*.

#### Patrick Lin

Patrick Lin, PhD, is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University and affiliate scholar at Stanford Law School’s Center for Internet and Society. He is the editor of [Robot Ethics](http://mitpress.mit.edu/books/robot-ethics). The statements expressed here are the author’s alone and do not necessarily reflect the views of the aforementioned organizations.

Even if the car didn’t swerve, the car manufacturer could still be blamed for ignoring the plight of those five people, when it held the power to save them. In other words: damned if you do, and damned if you don’t.

So why not let the user select the car's "ethics setting"? It could work like this: one customer may set the car (which he paid for) to jealously value his life over all others; another user may prefer that the car value all lives equally and minimize harm overall; yet another may want to minimize legal liability and costs for herself; and other settings are possible.
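To make the idea concrete, here's a minimal sketch of what such an adjustable setting could look like if it were reduced to code. This is a hypothetical illustration only: the `EthicsSetting` class, the named presets, and the scoring rule are invented for the sake of the example, not anything a manufacturer has actually proposed.

```python
# Hypothetical illustration only: an "ethics setting" as owner-chosen weights
# that the car would apply when every available maneuver harms someone.
from dataclasses import dataclass


@dataclass
class EthicsSetting:
    own_life: float      # how heavily the car protects its occupant
    other_lives: float   # how heavily it protects everyone else
    owner_costs: float   # how heavily it minimizes the owner's legal/financial exposure


# The three preferences described above, expressed as dial positions (weights are arbitrary).
SELF_FIRST     = EthicsSetting(own_life=10.0, other_lives=1.0, owner_costs=0.0)
MINIMIZE_HARM  = EthicsSetting(own_life=1.0,  other_lives=1.0, owner_costs=0.0)
MINIMIZE_COSTS = EthicsSetting(own_life=1.0,  other_lives=1.0, owner_costs=10.0)


def score(setting: EthicsSetting, occupant_harm: float,
          others_harm: float, owner_cost: float) -> float:
    """Lower is better: the car would pick the maneuver with the lowest weighted score."""
    return (setting.own_life * occupant_harm
            + setting.other_lives * others_harm
            + setting.owner_costs * owner_cost)
```

Even in this toy form, the design choice is stark: whoever sets the weights is deciding, in advance, whose harm counts for less.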

Plus, with an adjustable ethics dial set by the customer, the manufacturer presumably can’t be blamed for hard judgment calls, especially in no-win scenarios, right? In one survey, 44 percent of the respondents preferred to have a personalized ethics setting, while only 12 percent thought the manufacturer should predetermine the ethical standard. So why not give customers what they want?

#### It Doesn't Solve Liability for the Company

If the goal is to limit liability for the car manufacturer, this tactic fails: even if the user ultimately determines the weighting of the different values factored into a crash decision, the company can still be liable. To draw out that point, let's make the ethical choices outrageous:

Imagine that manufacturers created preference settings that allow us to save hybrid cars over gas-guzzling trucks, or insured cars over uninsured ones, or helmeted motorcyclists over unhelmeted ones. Or more troubling, ethics settings that allow us to save children over the elderly, or men over women, or rich people over the poor, or straight people over gay ones, or Christians over Muslims.

In an accident that requires choosing one victim over another, the manufacturer could still be faulted for giving the user any option at all---that is, the option to discriminate against a particular class of drivers or people. Saving, protecting, or valuing one kind of thing effectively means choosing another kind to target in an unavoidable crash scenario.

Granted, some of these choices seem offensive and inappropriate in the first place. Some are rooted in hate, though not all are. But for many of us, it is also offensive and inappropriate to assume that your own life matters more than the lives of others---especially more than five, 10, 20, or 100 lives anonymous to you. (Is love of oneself much different from hatred or indifference toward others?)

That kind of self-centeredness seems to reflect the thoughtless or callous mindset at the root of many social problems today. Likewise, many of us would be offended if life-and-death decisions about others were made according to the costs, legal or financial, that you would incur. Doing the right thing is often difficult precisely because it goes against our own interests.

Whatever the right value to put on a human life may be, that isn't the issue here, and it would be controversial any which way. In the same survey, 36 percent of respondents would want a robot car to sacrifice their life to avoid crashing into a child, while 64 percent would want the child to die in order to save their own life. Which is to say, we're nowhere near a consensus on this issue.

The point is this: Even with an ethics setting adjusted by you, an accident victim hit by your robot car could still sue the car manufacturer for (1) creating an algorithm that makes her a target and (2) giving you the option of running that algorithm when someone like her---someone on the losing end of the algorithm---would predictably be a victim under a certain set of circumstances.

#### Punting Responsibility to Customers

Even if an ethics setting lets the company off the hook, guess what? We, the users, may then be solely responsible for injury or death in an unavoidable accident. At best, an ethics setting merely punts responsibility from manufacturer to customer; it doesn't make any progress toward meeting that responsibility. The customer would still need to do the soul-searching and philosophical study required to figure out which ethical code he or she can live with, and all that it implies.

> In an important sense, any injury that results from our ethics setting may be premeditated if it’s foreseen.

And it implies a lot. In an important sense, any injury that results from our ethics setting may be premeditated if it's foreseen. By valuing our own lives over others', we know that others would be targeted first in a no-win scenario where someone will be struck. We mean for that to happen. This premeditation is the difference between manslaughter and murder, a much more serious offense.

In a non-automated car today, though, we could be excused for an unfortunate knee-jerk reaction to save ourselves instead of a child or even a crowd of people. Without much time to think, we can only make snap decisions, if they're even true decisions at all, as opposed to merely involuntary reflexes.

#### Deus in Machina

So an ethics setting is not a quick workaround to the difficult moral dilemma presented by robotic cars. Other possible solutions to consider include limiting manufacturer liability by law, similar to the legal protections for vaccine makers, on the grounds that robot cars, like immunizations, are essential for a healthy society. Or, if industry is unwilling or unable to develop ethics standards, regulatory agencies could step in to do the job---but industry should want to try first.

With robot cars, we're trying to design for random events that previously had no design, and that takes us into surreal territory. Like Alice's Wonderland, we don't know which way is up or down, right or wrong. But our technologies are powerful: they give us increasing omniscience and control to bring order to the chaos. When we introduce control to what used to be only instinctive or random---when we put God in the machine---we create new responsibility for ourselves to get it right.