I, Robot and so much of science fiction have been preparing us for questions that are quickly becoming more than hypothetical as our driverless future materializes on the horizon: How moral should our artificial intelligence systems be, and whose lives should they prioritize in pivotal moments?

A series of surveys has revealed what many consumers feel are the correct answers to those questions and others concerning driverless cars. Between June and November 2015, researchers from several institutions conducted six surveys to find out how people feel about autonomous vehicles and the morality that should be coded into them.

In the first two surveys, most of the respondents, roughly 76 percent, indicated that driverless cars should sacrifice the life of a passenger to avoid killing a group of pedestrians. But as the number of pedestrians at stake was lowered in other questions, the tide began to turn in favor of saving the passenger.

The third survey revealed more conflict over the choice between autonomous vehicles that prioritize the passengers' safety and those that attempt to do the most good by saving the most people.

The survey found a generally low likelihood of buying cars that preserve their passengers, and respondents indicated they were even less likely to buy a car that would sacrifice them and their friends and family, even to save as many as 20 people. The fourth survey backed those findings.

The fourth survey gave respondents 100 points each to spend on various algorithms, which ran the gamut from one designed to save the passengers at all costs to one that would always swerve to avoid hitting pedestrians.

The respondents spent their points to indicate how moral they felt each algorithm was, how comfortable they'd be on the road with cars programmed in such a manner and how likely they'd be to buy a car with such software.

Again, it seems "people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves," the study's report says. "This is the classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behavior that would lead to the best global outcome."

The researchers proposed that regulators could step in to solve this potential social dilemma, in which people want their own cars to protect their loved ones while other vehicles are programmed to do the most good.

So in the sixth survey, the researchers asked respondents how likely they'd be to purchase a government-regulated car programmed to save the lives of strangers, and how likely they'd be to purchase an unregulated car. Still, most people would rather everyone else drive the cars programmed to do the most good.

For now, "there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest," the study's report stated.
