License to Kill

Self-driving cars will need to be programmed to make decisions when every possible outcome is bad. As a simple example, a car may have to swerve off the road to avoid killing a number of pedestrians, at the expense of the driver's safety. People's opinions on this issue are conflicted: they want others to be in cars programmed to put general safety above driver safety, but they are unwilling to drive such cars themselves.

A 2015 article in the MIT Technology Review explains the conundrum:

The results are interesting, if predictable. In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
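To make the two programming choices concrete, here is a minimal sketch in Python. The Maneuver type, the casualty estimates, and both choice functions are hypothetical illustrations of the policies described above, not anything taken from the Bonnefon study or a real vehicle controller.

```python
# Toy sketch of the two policies in the passage above. All names and the
# casualty-estimation model are hypothetical illustrations, not a real
# vehicle controller or anything from the study.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_pedestrian_deaths: float  # estimated casualties outside the car
    expected_occupant_deaths: float    # estimated casualties inside the car

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Minimize the total expected death toll, occupants included."""
    return min(options, key=lambda m: m.expected_pedestrian_deaths
                                      + m.expected_occupant_deaths)

def self_protective_choice(options: list[Maneuver]) -> Maneuver:
    """Protect the occupant first; break ties on pedestrian deaths."""
    return min(options, key=lambda m: (m.expected_occupant_deaths,
                                       m.expected_pedestrian_deaths))

options = [
    Maneuver("stay on road", expected_pedestrian_deaths=3.0,
             expected_occupant_deaths=0.0),
    Maneuver("swerve off road", expected_pedestrian_deaths=0.0,
             expected_occupant_deaths=1.0),
]

print(utilitarian_choice(options).name)      # swerve off road
print(self_protective_choice(options).name)  # stay on road
```

The gap between those two functions is exactly the paradox the survey found: people endorse the first policy in general but want the second one in their own car.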


This is one counterargument to the claim that Google Car is a Glorified Train: a train on rails cannot choose to swerve, so there isn't really a parallel with trains.

The work is reminiscent of the trolley-problem thought experiments of the philosopher Philippa Foot.
