While split-second human decisions may affect the outcome of car accidents, driverless cars may need to be programmed to kill people in order to save lives.
While self-driving cars are thought to be safer, cleaner and more fuel-efficient than standard models, the autonomous vehicles could face some tricky ethical dilemmas.
In a situation where a driverless car must either stay on course and kill a group of 10 bystanders, or swerve and kill its driver to save the group, it could in theory be authorised to kill its driver for the ‘greater good’.
The Toulouse School of Economics carried out a study to gauge opinion on such a dilemma, reports MIT Technology Review.
The results showed that 75 per cent of the respondents supported the idea of killing the driver to save 10 people, while only 50 per cent supported the idea of self-sacrifice in order to save just one person.
Researchers found that the respondents were more receptive to the ‘utilitarian’ idea of prioritising the lives of the many over the few.
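In software terms, the ‘utilitarian’ rule the respondents favoured could be reduced to minimising the expected number of deaths. The sketch below is purely a hypothetical illustration of that idea; it is not taken from the study or from any real vehicle software, and the action names are invented:

```python
# Hypothetical sketch of a crude 'utilitarian' decision rule:
# always pick the action expected to cause the fewest deaths.
def choose_action(outcomes):
    """outcomes: dict mapping an action name to its expected death toll."""
    return min(outcomes, key=outcomes.get)

# The dilemma from the article: stay on course and kill 10 bystanders,
# or swerve and kill the one occupant.
decision = choose_action({"stay_course": 10, "swerve": 1})
print(decision)  # -> swerve
```

Even this toy version exposes the problem the researchers raise: the rule mechanically sacrifices the driver whenever the numbers demand it.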
However, attitudes could well change in future if there were a need for regulations that required cars to potentially kill their own drivers to save a greater number of lives.
Toulouse School of Economics researcher Jean-Francois Bonnefon points out that the issue raises many mind-boggling questions, such as ‘should different decisions be made when children are on board?’ and ‘if a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?’.
He goes on to say: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”
Image credit: MIT Technology Review