27 March 2019

Driverless Car Ethics

Welcome back. When my son, Noah, was working his way through a B.S. in Mechanical Engineering, we seldom spoke about his technical courses. But I do remember discussing one assignment he had in an elective humanities course; it concerned what I now know is called the trolley problem.

Although this thought experiment in ethics has been around for over a century in any number of variants, consider the following: A runaway trolley is going to hit five people. You can pull a lever, directing the trolley to a side track, where it will strike only one person. Should you pull the lever? Is there a moral difference between doing harm and allowing harm to happen?

The basic trolley problem: do nothing and the trolley hits five people, or pull the lever to redirect the trolley to a side track, where it will hit one person (from nymag.com/intelligencer/2016/08/trolley-problem-meme-tumblr-philosophy.html).
Noah graduated and moved on; the trolley problem, however, lives on in a number of studies on how to program driverless vehicles (aka self-driving or autonomous vehicles). I thought you’d find examples of that work of interest.

The Social Dilemma
A 2016 study by researchers from France’s University of Toulouse Capitole, the University of Oregon and MIT examined the trolley problem in six online surveys of 182 to 393 U.S. participants.

Overall, the participants seemed to agree that autonomous vehicles (AVs) should be programmed to be utilitarian, minimizing the number of casualties. Yet given the incentive for self-protection, few would be willing to ride in utilitarian AVs. Further, they would not approve of regulations mandating self-sacrifice, and such regulations would make them less willing to purchase an AV.
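
To make the utilitarian idea concrete, here’s a little Python sketch of a casualty-minimizing decision rule. To be clear, the scenario encoding, the Option structure and the numbers are my own illustrative assumptions, not anything taken from the study.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver and the casualties it would cause."""
    name: str
    pedestrian_casualties: int
    passenger_casualties: int

    @property
    def total_casualties(self) -> int:
        return self.pedestrian_casualties + self.passenger_casualties

def utilitarian_choice(options):
    """Pick the maneuver with the fewest total casualties,
    regardless of whether the victims are pedestrians or passengers."""
    return min(options, key=lambda o: o.total_casualties)

# A trolley-style scenario: stay in lane and hit five pedestrians,
# or swerve and sacrifice the one passenger.
options = [
    Option("stay in lane", pedestrian_casualties=5, passenger_casualties=0),
    Option("swerve", pedestrian_casualties=0, passenger_casualties=1),
]
print(utilitarian_choice(options).name)  # -> swerve
```

The study’s tension is visible even in this toy example: the rule participants endorsed in the abstract is exactly the one that sacrifices the passenger, which helps explain why few would buy such a car.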

German Guidelines

Report of the German Ethics Commission on Automated and Connected Driving (see link in P.S.).
Trying to stay ahead of the issue, Germany’s Federal Minister of Transport and Digital Infrastructure appointed an Ethics Commission on Automated and Connected Driving.

The commission’s 2017 report included 20 ethical rules. One, for example, states in part: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited…”

Study Results at Odds with Guidelines
A 2018 study by researchers from Germany’s Osnabrück University found that what is morally justified may not be socially acceptable. The study had 189 participants (average age 24) complete several virtual-reality simulations of driving alone on different two-lane roads. Obstacles appeared in both lanes, giving participants four seconds to decide whether to switch lanes before hitting someone (the collision itself was not shown).

The study found that:
- nearly all participants would change lanes to hit fewer people;
- over half would sacrifice themselves to save others, increasingly so as the number saved grew;
- most would rather hit an elderly person than an adult, and an adult rather than a child;
- most would swerve onto the sidewalk when doing so saved a greater number of people.

Introducing Probabilities
A more recent study by a research team from Germany’s Max Planck Institute for Human Development and University of Göttingen examined trolley problem options when the probabilities of hitting the pedestrian or bystander were known or unknown (872 U.S. participants). They also considered how people retrospectively evaluate those options when a road user has been harmed (766 U.S. participants).

They found that participants placed particular weight on staying in the lane. This tendency held whether the probabilities were known or uncertain, and also in hindsight after an accident had occurred. Staying in the lane was judged more morally acceptable, particularly for autonomous vehicles.
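
One way to picture that finding is an expected-harm comparison in which staying in the lane gets extra moral credit. Here’s a small Python sketch; the probabilities, victim counts and the stay_bonus weight are all illustrative assumptions of mine, not parameters from the study.

```python
def expected_harm(p_collision, victims):
    """Expected number of people harmed by a maneuver."""
    return p_collision * victims

def preferred_maneuver(p_stay, victims_stay, p_swerve, victims_swerve,
                       stay_bonus=0.2):
    """Compare staying vs. swerving, discounting the harm of staying
    in lane to mimic the stay-in-lane preference the study observed."""
    harm_stay = expected_harm(p_stay, victims_stay) * (1 - stay_bonus)
    harm_swerve = expected_harm(p_swerve, victims_swerve)
    return "stay" if harm_stay <= harm_swerve else "swerve"

# With identical stakes on both sides, the bias tips the choice
# toward staying in the lane:
print(preferred_maneuver(p_stay=0.5, victims_stay=1,
                         p_swerve=0.5, victims_swerve=1))  # -> stay
```

Treating inaction as morally cheaper than action is just one simple way to encode the bias the study documents; other weightings are certainly possible.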

International Survey
In the most recently published study, collaborators from MIT, Harvard, the University of British Columbia and the University of Toulouse Capitole analyzed nearly 40 million trolley-problem decisions made by people from 233 countries and territories.

MIT’s Moral Machine, an online experimental platform, presented the trolley problem in ten languages, examining nine factors (e.g., saving more lives vs. fewer).

Example Moral Machine question: a driverless car’s brakes fail; the car will kill three elderly people (note the skulls!) crossing on a do-not-cross signal (left). Swerving will hit the barrier and kill the three passengers: two adults and one child (from www.nature.com/articles/s41586-018-0637-6).
Two key findings were:
- Globally, the strongest preferences are for saving humans over animals, saving more lives over fewer, and saving young lives over old.
- Three distinct moral clusters of countries could be identified, suggesting that groups of territories might converge on shared preferences, while between-cluster differences may pose problems for any universal standard (a toy version of such clustering appears after this list).
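
The clustering itself is easy to reproduce in miniature. The sketch below groups made-up country preference vectors with hierarchical clustering, the general family of technique the paper used; the country names, the three preference dimensions and every number here are invented for illustration and are not the paper’s data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical preference strengths per country (rows). Columns:
# sparing humans over animals, sparing more lives, sparing the young.
countries = ["A", "B", "C", "D", "E", "F"]
prefs = np.array([
    [0.90, 0.80, 0.70],   # A and B: strong on all three
    [0.85, 0.75, 0.72],
    [0.60, 0.90, 0.30],   # C and D: weaker preference for sparing the young
    [0.55, 0.85, 0.35],
    [0.70, 0.40, 0.60],   # E and F: weaker "more lives" preference
    [0.65, 0.45, 0.55],
])

# Agglomerative clustering on the preference vectors, cut into 3 clusters.
tree = linkage(prefs, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```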

Wrap Up
Research on programming driverless cars continues. The authors of the MIT-led study noted that even with the large sample they obtained, they could not do justice to the complexity of autonomous-vehicle dilemmas, and that we need a global conversation to express our preferences to those who will program the vehicles and to those who will regulate them.

Thanks for stopping by.

P.S.
Trolley Problem:
www.wired.com/story/self-driving-cars-will-kill-people-who-decides-who-dies/
en.wikipedia.org/wiki/Trolley_problem
2016 University of Toulouse Capitole-led study in Science and article on Quartz website:
science.sciencemag.org/content/352/6293/1573
qz.com/536738/should-driverless-cars-kill-their-own-passengers-to-save-a-pedestrian/
German Ethics Commission on Automated and Connected Driving report:
www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile
2018 Osnabrück University study in Frontiers in Behavioral Neuroscience and article on ScienceDaily website:
www.frontiersin.org/articles/10.3389/fnbeh.2018.00031/full
www.sciencedaily.com/releases/2018/05/180503142637.htm
2018 study with probabilities added in Risk Analysis and article on ScienceDaily website:
onlinelibrary.wiley.com/doi/abs/10.1111/risa.13178
www.sciencedaily.com/releases/2018/10/181009135828.htm
2018 Moral Machine experiment in Nature and articles on study:
www.nature.com/articles/s41586-018-0637-6
www.sciencedaily.com/releases/2018/10/181024131501.htm
www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/
Moral Machine (go ahead and try it): moralmachine.mit.edu/

A version of this blog post appeared earlier on www.warrensnotice.com.
