Many people believe that driverless cars will put an end to traffic accidents, which claim almost 300,000 lives in the U.S. each decade. While the idea of no longer having to sit in traffic jams is appealing, one thing is not: the moral dilemma that must be resolved in the programming of a driverless car.
To understand this dilemma, consider the following: Your driverless car is travelling along a stretch of highway. You are almost asleep, since your car has everything under control. As you come around a curve, you see two people standing in the middle of the road, examining the front bumper of their car, which was smashed in when they hit a deer. Your car must now make a split-second decision. Does it hit the car in the middle of the road and perhaps kill the two people, or does it swerve into the guardrail to avoid them and perhaps kill you?
This is a variation of the “Trolley Problem,” which is often used to illustrate a similar moral dilemma. A purely utilitarian view would argue that since one death is preferable to two, the car should hit the guardrail. A driverless car has no capacity to “think”; algorithms will have to make life-and-death decisions like this one on its behalf. Most people asked a similar question chose the answer that would kill the fewest people, unless, that is, they were the one in the car.
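To make the utilitarian rule concrete, here is a minimal sketch of what such a decision algorithm might look like. Everything here is a hypothetical illustration: the function name, the maneuver labels, and the fatality estimates are assumptions for the sake of the example, not any real vehicle's software.

```python
# Hypothetical sketch of a purely utilitarian decision rule:
# choose the maneuver whose expected loss of life is smallest.
# All names and numbers are illustrative assumptions.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected fatalities.

    options: dict mapping maneuver name -> expected fatalities.
    """
    return min(options, key=options.get)

# The scenario from the text: continue straight (two pedestrians
# at risk) versus swerve into the guardrail (one occupant at risk).
scenario = {
    "continue_straight": 2.0,   # assumed expected fatalities
    "swerve_to_guardrail": 1.0,  # assumed expected fatalities
}

print(choose_maneuver(scenario))  # the utilitarian rule picks the swerve
```

The point of the sketch is how stark the logic is: a single comparison of estimated body counts decides who is put at risk, which is exactly what makes the programming question a moral one.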
How would the courts rule on this type of accident? Would liability fall on the owner of the car that hit the deer, or on the person who programmed the driverless car? The question is likely to come before a court at some point.
It’s not yet known how the dilemma will be resolved; it is possible, however, that someone’s life will one day be in the hands of a programmer they have never met.
Source: NYMag.com, “Your Driverless Car Could Be Programmed to Kill You,” Ben Ellman, Oct. 28, 2015