
Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars

Federal rules could lay out how cars decide whom to protect or harm in a crash.

Sep 2, 2016

Rapid progress on autonomous driving has led to concerns that future vehicles will have to make ethical choices, such as whether to swerve to avoid a crash if doing so would cause serious harm to people outside the vehicle.

Christopher Hart, chairman of the National Transportation Safety Board, is among those concerned. He told MIT Technology Review that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be.

Hart said the National Highway Traffic Safety Administration (NHTSA) will likely require designers of self-driving cars to build fail-safes into critical components of their vehicles, much as aircraft manufacturers do.

A Mercedes-Benz self-driving car prototype.

“The government is going to have to come into play and say, ‘You need to show me a less than X likelihood of failure, or you need to show me a fail-safe that ensures that this failure won’t kill people,’” said Hart.

Hart also said there would need to be rules for how ethical priorities are encoded into software. He gave the example of a self-driving car forced to choose between a potentially fatal collision with an out-of-control truck and veering onto the sidewalk and hitting pedestrians. “That to me is going to take a federal government response to address,” said Hart. “Those kinds of ethical choices will be inevitable.”

The NHTSA has been evaluating how it will regulate driverless cars for the past eight months and will release guidance in the near future. So far, the agency hasn’t discussed ethical concerns about automated driving.

What regulation exists for self-driving cars comes from states such as California, and is targeted at the prototype vehicles being tested by companies such as Alphabet and Uber. California requires that a safety driver always be ready to take over, and that a company file reports detailing incidents where a human needed to step in.

Ryan Calo, an expert on robotics law at the University of Washington, is skeptical that it’s possible to translate the so-far theoretical ethical discussions into practical rules or system designs. He doesn’t think autonomous cars are sophisticated enough to weigh the different factors a human would in a real-life situation.

Calo believes the real quandary is whether we are willing to deploy vehicles that will prevent many accidents but also make occasional deadly blunders that no human would. “If it encounters a shopping cart and a stroller at the same time, it won’t be able to make a moral decision that groceries are less important than people,” says Calo. “But what if it’s saving tens of thousands of lives overall because it’s safer than people?”

Patrick Lin, a philosophy professor at Cal Poly State University, San Luis Obispo, who has studied ethics and autonomous driving with the nonprofit Daimler and Benz Foundation, says the idea of cars making ethical deliberations should not be so quickly dismissed. Progress in sensors, artificial intelligence, and facial recognition software will likely lead to cars capable of deciding to save one life and sacrifice another, he says.

“It’s better if we proactively try to anticipate and manage the problem before we actually get there,” Lin says. “This is the kind of thing that’s going to make for a lawsuit that could destroy a company or leave a huge black mark on the industry.”

Federal standards covering ethics or other aspects of safety could also bring some transparency to how driverless cars make decisions. Lin says that car manufacturers generally want to keep the inner workings of their vehicles’ software secret from hackers and competitors. “Car manufacturers are notoriously secretive about their programming and their crash optimization thinking,” he says.