As exciting as it will be to have self-driving cars on the road, plenty of lingering questions about autonomous vehicles still need to be answered.

Human drivers make split-second decisions in unavoidable accidents, whether they aim to minimize the loss of life, even at the cost of sacrificing themselves, or to protect the vehicle's occupants no matter what.

But how should self-driving cars be programmed to act in unavoidable, tragic accidents, and should the technology simply choose between the worst extremes at random? For answers, MIT Technology Review recently turned to Jean-Francois Bonnefon of the Toulouse School of Economics in France.

Using the emerging science of experimental ethics, Bonnefon and his colleagues sought answers by asking the public for its opinion.

"Our results provide but a first foray into the thorny issues raised by moral algorithms for autonomous vehicles," they told MIT.

One dilemma they presented involves a car heading toward a crowd of 10 people crossing the road, unable to stop in time. The car can avoid killing the 10 pedestrians by swerving into a wall, but that option would kill the driver and every passenger inside.

Crashing into the wall would minimize the loss of life, but it could also leave people fearful of autonomous technology, believing the cars are built to sacrifice their owners. Bonnefon's team presented these ethical dilemmas to hundreds of Amazon Mechanical Turk workers to gauge their opinions. The result? A majority of respondents were comfortable with self-driving vehicles being programmed to minimize deaths.
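To make the idea concrete, here is a minimal sketch of what a "minimize deaths" rule could look like in code. The scenario numbers mirror the dilemma above, but the function names and figures are hypothetical illustrations, not anything from Bonnefon's study or from real vehicle software:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A possible maneuver and the deaths it is expected to cause."""
    maneuver: str
    expected_deaths: int

def choose_maneuver(outcomes):
    """Utilitarian rule: pick the maneuver expected to kill the fewest people."""
    return min(outcomes, key=lambda o: o.expected_deaths)

# The dilemma from the article: stay the course and hit 10 pedestrians,
# or swerve into a wall and kill the car's lone occupant.
options = [
    Outcome("stay_course", expected_deaths=10),
    Outcome("swerve_into_wall", expected_deaths=1),
]

print(choose_maneuver(options).maneuver)  # -> swerve_into_wall
```

The rule is trivial to state, which is exactly the researchers' point: the hard part is not the code, but deciding whether buyers, manufacturers and regulators will accept a car that can choose to sacrifice its own occupants.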

"[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason—they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves," Bonnefon and company concluded.

In other words, people are fine with the technology sacrificing the driver rather than killing more people in an unavoidable accident, as long as they aren't the ones in the driver's seat.

The researchers presented further questions as well:

"Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less agency in being in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly choses one of them, is the buyer to blame for the harmful consequences of the algorithm's decisions?"

They added: "As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent."
