With its driverless car, Google envisions a future in which roads are safe, free of the human recklessness and error that injure and, in many cases, kill pedestrians.

However, after recent evaluations, Google is now programming its cars to drive more like humans.

The driverless car was built to be cautious, perhaps a little too cautious. The automated cars are programmed to avoid collisions, so they slam on the brakes whenever they sense danger. The idea works, but the car's own definition of danger and acceptable risk does not.
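To picture the kind of decision rule at issue, here is a minimal sketch in Python of an overly conservative braking policy. To be clear, this is not Google's code; the threshold, function name, and risk values are invented for illustration.

```python
# Illustrative sketch only -- not Google's implementation.
# A conservative braking policy: any perceived hazard above a
# fixed risk threshold triggers maximum braking, no matter how
# unlikely the hazard is to actually enter the car's path.

RISK_THRESHOLD = 0.1  # hypothetical value; a low threshold means "too cautious"

def braking_command(perceived_risk: float) -> float:
    """Return brake pressure in [0.0, 1.0] for a risk estimate in [0.0, 1.0]."""
    if perceived_risk >= RISK_THRESHOLD:
        return 1.0  # slam on the brakes at any sensed danger
    return 0.0

# A pedestrian standing near the curb might score, say, 0.15
# even if they never step into the road -- enough to halt the car.
print(braking_command(0.15))  # -> 1.0
```

Under a policy like this, anything from a plastic bag to a pedestrian waiting at the curb can register as "danger," which is consistent with the overly hesitant behavior observers reported.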

According to notes on one particular test drive, the self-driving car stopped at a busy T-intersection where views of cross traffic were limited, waited half a minute before attempting a left turn, and then halted at the intersection when a pedestrian on the other side walked near the edge of the sidewalk.

In June, Google reported that its driverless cars had covered more than 1 million miles. In the six years since the first prototypes hit the streets, the A.I.-driven vehicles have been involved in a total of 16 accidents.

Google came to the vehicle's defense and blamed the accidents on the other party: the humans, of course. In fact, the company elaborated that the only time a Google car was at fault was when a human was driving it.

When the vehicle's braking habits were blamed for the accidents, developers were again quick to defend it, saying that "all of these cases have been when we are sitting still."

However, Nvidia CEO Jen-Hsun Huang thought otherwise.

"Why is it getting rear-ended? It drives like a computer," Huang answered.

Because the car's machine-like responses may be causing issues for other drivers on the road, developers are now working to make the vehicle more "human."

"[The cars are] a little more cautious than they need to be," said Chris Urmson, lead engineer for the driverless car development. "We are trying to make them drive more humanistically."

Before the "humanization," the cars cornered widely to avoid hitting pedestrians near the curb. It pauses to calculate what the people's movements would be. In contrast now, the implementation of more humanistic behavior will have the cars closely hugging the curb and cutting corners earlier.

The upside is that, with the new algorithm, the cars no longer need to pause and calculate when they come across parked cars. The A.I. now lets them maneuver around parked vehicles by crossing the double-yellow line, something previously prohibited in their code.
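One way to read that change, sketched below in Python, is that crossing the double-yellow line goes from a hard prohibition to a penalized option the planner can weigh against waiting. Again, this is a hypothetical illustration; the flag, cost values, and planner logic are invented, not Google's code.

```python
# Illustrative sketch only -- not Google's implementation.
# Crossing a double-yellow line moves from forbidden outright
# to discouraged-but-allowed, so the planner can pick it when
# the alternative is sitting behind a parked car indefinitely.

import math

ALLOW_DOUBLE_YELLOW_CROSSING = True  # hypothetical flag for the new behavior

def path_cost(crosses_double_yellow: bool, seconds_waiting: float) -> float:
    """Lower cost wins. Waiting and line-crossing are both penalized."""
    cost = seconds_waiting  # one cost unit per second spent stopped
    if crosses_double_yellow:
        if not ALLOW_DOUBLE_YELLOW_CROSSING:
            return math.inf  # old behavior: crossing is prohibited outright
        cost += 10.0  # new behavior: crossing is discouraged, not banned
    return cost

# Waiting behind a parked car (say, 60 s) now loses to briefly
# crossing the line (10-unit penalty plus a 2 s maneuver).
stay = path_cost(crosses_double_yellow=False, seconds_waiting=60.0)
cross = path_cost(crosses_double_yellow=True, seconds_waiting=2.0)
print("cross" if cross < stay else "stay")  # -> "cross"
```

Framed this way, the "humanization" is less about recklessness than about replacing absolute rules with trade-offs, which is closer to how human drivers actually behave.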

However, with the "humanization" of the A.I., the driverless cars are also being questioned. If the entire project aims to eliminate human error to make roads safer, then wouldn't this newfound "humanistic" driving mean a step backwards?
