You Should Have a Say in Your Robot Car's Code of Ethics

Recently, writing in WIRED, ethicist Patrick Lin argued that building a programmable ethics button into future autonomous cars is not the right approach to dealing with the moral nuance of this new technology. But isn’t a car that ignores your moral choices worse? There is a middle path, and we need only look to modern healthcare to find it.

#### Jason Millar

Jason Millar is an engineer and philosopher. He teaches Robot Ethics at Carleton University in Ottawa, Canada.

First, let’s consider a thought experiment similar to the one Lin examined, called the Tunnel Problem: You are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before you enter the tunnel, a child errantly runs into the road and trips in the centre of the lane, effectively blocking the entrance. The car is unable to brake in time to avoid a crash. It has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you.

Now ask yourself: who should decide whether the car goes straight or swerves? Manufacturers? Users? Legislators?

As Lin noted, in a recent poll conducted by the Open Roboethics Initiative, participants were asked these very questions. A full 77 percent of respondents felt users or legislators should make the call. Manufacturers and designers were preferred by only 12 percent.

Should we be surprised by these results? Not really. The tunnel problem poses a deeply moral question, one that has no right answer. In such cases an individual’s deep moral commitments could make the difference between going straight or swerving.

According to philosophers like Bernard Williams, our moral commitments *should* sometimes trump other ethical considerations, even if that leads to counterintuitive outcomes, like sacrificing the many to save the few. In the tunnel problem, arbitrarily denying individuals their moral preferences, by hard-coding a decision into the car, runs the risk of alienating them from their convictions. That is no trivial harm.

A Solution Already Exists

In healthcare, when moral choices must be made, it is standard practice for nurses and physicians to inform patients of their reasonable treatment options, and let them make informed decisions that align with their personal preferences. This process of informed consent is based on the idea that individuals have the right to make decisions about their own bodies. Informed consent is ethically and legally entrenched in healthcare, such that failing to obtain it exposes a healthcare professional to claims of professional negligence.

Informed consent wasn’t always the standard of practice in healthcare. It used to be common for physicians to make important treatment decisions on behalf of patients, often actively deceiving them as part of a treatment plan.

The introduction of informed consent was not meant to simplify things. In fact, it complicates healthcare enormously. Informed consent places significant burdens on patients who are faced with making very difficult decisions. It exposes healthcare professionals to new kinds of professional negligence. It increases the cost of healthcare delivery by introducing paperwork and time-consuming conversations with patients. And it requires healthcare professionals to communicate complex concepts to laypersons who might have difficulty understanding what they’re being told.

You could also argue that informed consent merely punts responsibility to the user. Critics argue that it unfairly burdens individuals with difficult, often troubling, choices that they are ill-prepared to make.

Yet, despite the challenges and complexity introduced by informed consent, it is hard to imagine that people would accept a return to a healthcare system where doctors and nurses could make difficult moral decisions about their treatment without first seeking their consent.

Why, then, would we accept designers and engineers making deeply moral decisions on our behalf, the kind represented by the tunnel problem, without first obtaining our explicit consent? One solution to this ethical problem is to adopt the same approach in engineering that has been tried and tested in healthcare: a robust standard of informed consent. Of course, one way to accomplish this in practice (there are likely others) is to build reasonable ethics settings into robot cars.
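To make the proposal concrete, here is a minimal sketch of what a consent-gated ethics setting might look like in code. Everything in it is hypothetical: the `CollisionPreference` options, the `ConsentRecord`, and the `EthicsSettings` store illustrate the informed-consent structure, not any manufacturer's actual interface.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class CollisionPreference(Enum):
    """Hypothetical user-facing choices for a tunnel-problem scenario."""
    PROTECT_OCCUPANT = "protect_occupant"      # continue straight
    PROTECT_PEDESTRIAN = "protect_pedestrian"  # swerve, sacrificing occupant
    DEFER_TO_DEFAULT = "defer_to_default"      # accept the manufacturer default


@dataclass
class ConsentRecord:
    """Evidence that the user was informed before a setting took effect."""
    preference: CollisionPreference
    disclosure_shown: bool  # user saw a plain-language explanation
    acknowledged_at: datetime


class EthicsSettings:
    """Stores a moral preference only alongside a valid consent record,
    mirroring how informed consent is documented in healthcare."""

    def __init__(self) -> None:
        self._record: ConsentRecord | None = None

    def set_preference(self, preference: CollisionPreference,
                       disclosure_shown: bool) -> None:
        # Refuse to record a moral choice the user was not walked through.
        if not disclosure_shown:
            raise PermissionError(
                "A moral preference cannot be recorded without informed consent.")
        self._record = ConsentRecord(
            preference=preference,
            disclosure_shown=True,
            acknowledged_at=datetime.now(timezone.utc),
        )

    def active_preference(self) -> CollisionPreference:
        # With no consented choice on file, fall back to the manufacturer
        # default; the car never assumes a moral stance on the user's behalf.
        if self._record is None:
            return CollisionPreference.DEFER_TO_DEFAULT
        return self._record.preference


settings = EthicsSettings()
settings.set_preference(CollisionPreference.PROTECT_PEDESTRIAN,
                        disclosure_shown=True)
assert settings.active_preference() is CollisionPreference.PROTECT_PEDESTRIAN
```

The structural point is that the preference and the evidence of consent are stored together, so a missing consent record is as detectable as a missing signature on a hospital consent form.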

Is It Time to Rethink Robot Liability?

It’s a safe bet that lawyers will continue to sue people no matter what design approach roboticists adopt. It is also entirely possible that if we stick to a traditional model of product liability, the introduction of ethics settings could expose users and manufacturers to complicated new kinds of liability suits, just as informed consent requirements have in healthcare.

However, there is a growing belief that autonomous cars and other robots require a significant legal rethinking if we are to regulate them appropriately.

For starters, we could choose to consider a manufacturer’s failure to obtain informed consent from a user, in situations involving deep moral commitments, a kind of product defect. Just as a doctor would be liable for failing to seek a patient’s informed consent before proceeding with a medical treatment, so too could we consider manufacturers liable for failing to reasonably respect a user’s explicit moral preferences in the design of autonomous cars and other technologies. This approach would add considerably to the complexity of design. Then again, nobody said engineering robots was supposed to be simple.

We Must Embrace Complexity

If we embrace robust informed consent practices in engineering the sky will not fall. There are some obvious limits to the kinds of ethics settings we should allow in our robot cars. It would be absurd to design a car that allows users to choose to continue straight only when a woman is blocking the road. At the same time, it seems perfectly reasonable to allow a person to sacrifice himself to save a child if doing so aligns with his moral convictions. We can identify limits, even if the task is complex.
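To sketch how such limits might be enforced in practice, a validator could refuse any setting whose behaviour is conditioned on a protected characteristic of the people on the road, while leaving unconditional preferences, such as self-sacrifice, intact. The attribute names below are invented for illustration.

```python
# Hypothetical policy check; the attribute names are illustrative only.
PROTECTED_ATTRIBUTES = {"gender", "race", "age", "religion"}


def validate_setting(conditions: dict[str, str]) -> bool:
    """Accept a requested ethics setting only if none of its triggering
    conditions reference a protected attribute of the people involved."""
    return not (set(conditions) & PROTECTED_ATTRIBUTES)


# Allowed: an unconditional preference to swerve (self-sacrifice)
# places no conditions on who is blocking the road.
assert validate_setting({}) is True

# Disallowed: continuing straight only when the pedestrian is a woman
# conditions the car's behaviour on gender.
assert validate_setting({"gender": "female"}) is False
```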

Robots, and the ethical issues they raise, are immensely complex. But they require our thoughtful attention if we are to shift our thinking about the ethics of design and engineering, and respond to the burgeoning robotics industry appropriately. Part of this shift in thinking will require us to embrace moral and legal complexity where complexity is required. Unfortunately, bringing order to the chaos does not always result in a simpler world.