Proponents of driverless car technology often point to its potential to reduce road casualties compared with human-driven cars. But how should an autonomous vehicle (AV) decide when faced with the following scenario, which offers only two options?
- running over one or more pedestrians, or
- sacrificing its own passengers.
The recently released study “The social dilemma of autonomous vehicles,” published in the journal Science and co-authored by an MIT researcher, makes clear that the public is conflicted over such scenarios. According to the scientists’ surveys, people approve of driverless cars programmed to minimize road casualties in hazardous situations. On the other hand, they do not want to ride in a car that would crash into an obstacle to avoid killing a crowd of ten pedestrians. They prefer cars that protect their passengers at all costs.
Regulating driverless cars according to utilitarian principles would make people less willing to buy an autonomous vehicle
People prefer that other people’s driverless cars be pedestrian-friendly, but want the car they ride in to be an exception. “If everybody does that, then we would end up in a tragedy … whereby the cars will not minimize casualties,” says Iyad Rahwan, an associate professor in the MIT Media Lab and co-author of a new paper outlining the study.
“Most people want to live in a world where cars will minimize casualties,” Rahwan adds. “But everybody wants their own car to protect them at all costs.” Participants in the study disapproved of utilitarian regulations for driverless cars and said they would be less interested in buying such a vehicle. “This is a challenge that should be on the mind of carmakers and regulators alike,” the scholars write. Moreover, even if autonomous vehicles turned out to be safer than regular cars, unease over such regulatory dilemmas “may paradoxically increase casualties by postponing the adoption of a safer technology.”
The scientists conclude:
Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today. As we are about to endow millions of vehicles with autonomy, a serious consideration of algorithmic morality has never been more urgent. Our data-driven approach highlights how the field of experimental ethics can provide key insights into the moral, cultural, and legal standards that people expect from autonomous driving algorithms. For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest – let alone account for different cultures with various moral attitudes regarding life-life trade-offs – but public opinion and social pressure may very well shift as this conversation progresses.