
Confronting the Human Dilemma in a Brave New Self-Driving World

Written by Sean M. Lyden.

In his speech at the AutoMobili-D Conference in Detroit this past January, John Krafcik, the CEO at Waymo – formerly the Google self-driving car program – cited this compelling statistic: “Each year, more than 1.2 million people die on the roads around the world.”

He then put that number in context: “That’s equivalent to a 737 [airliner] falling from the sky every hour of every day all year long.”
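
The arithmetic behind that comparison holds up. Here is a quick back-of-the-envelope check of our own (the 737 seating capacity range is our assumption, not a figure from Krafcik’s speech):

# Quick sanity check of Krafcik's "a 737 every hour" comparison
# (our own back-of-the-envelope sketch; the capacity range is an assumption).
annual_road_deaths = 1_200_000      # worldwide road deaths per year (Krafcik's figure)
hours_per_year = 365 * 24           # 8,760 hours in a year

deaths_per_hour = annual_road_deaths / hours_per_year
print(round(deaths_per_hour))       # ~137 deaths per hour

# A Boeing 737 typically seats roughly 130-180 passengers depending on the
# variant, so ~137 deaths per hour is indeed about one full 737 every hour.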

Krafcik’s point is clear: Society would never tolerate a passenger jet crashing every hour of every day, so how can it accept the same number of people dying in automotive crashes? If self-driving systems could prevent the vast majority of fatalities on the road, wouldn’t it be a moral imperative for society to adopt that technology?

That’s the argument that Krafcik, several Silicon Valley entrepreneurs and most automotive executives have been making in recent months as they present a vision of a “crash-less” society made possible by fully autonomous vehicles. After all, according to the National Highway Traffic Safety Administration, 94 percent of crashes can be tied to human error. Remove the driver, eliminate human error – right?

But despite bold predictions by industry executives and analysts that fully autonomous vehicles will be available for sale in the U.S. within the next four years, human psychological barriers could put the brakes on societal adoption of this technology.

How?

Fear of Autonomy
Consider this: Although autonomous vehicles offer the promise of significantly greater safety than their human-driven counterparts, U.S. drivers don’t believe it – at least not from an emotional and practical standpoint.

That’s based on the findings of a recent report from AAA, in which three-quarters of U.S. drivers said they would be afraid to ride in a self-driving vehicle. And a majority of those drivers – 54 percent – said they would feel less safe sharing the road with fully autonomous vehicles while driving a regular vehicle themselves.

You might think, OK, that makes sense when you factor in older generations that may be more apprehensive about new technology, but what about millennials? Certainly, younger people would be much more open to riding in self-driving vehicles.

Yet according to the AAA study, 73 percent of millennials also said they would be afraid to ride in a self-driving car, compared with 75 percent of Generation X and 85 percent of baby boomers – not that big a difference.

So, how is the industry responding to counteract this fear?

Companies like Waymo, ride-hailing giant Uber and Boston-based nuTonomy have recently launched programs that offer self-driving rides to select passengers in limited locations around the world. The idea is to get people used to riding in these vehicles and to have them share their experiences with family, friends and colleagues, in the hope of not only reducing fear but also increasing market demand for self-driving rides.

Collective Good vs. Self-Protection: The Double Standard
But then there’s also the issue of machine morality and how society will write the rules of the road for autonomous vehicles. When software assumes more and more of a human driver’s responsibility for decision-making, what moral model will govern those decisions?

Imagine this scenario: A self-driving vehicle is approaching a traffic situation in which a crash is unavoidable. The car must choose between killing 10 pedestrians and killing its own passenger. What would you say is the right moral choice?

According to a study titled “The Social Dilemma of Autonomous Vehicles” by scholars Jean-Francois Bonnefon, Azim Shariff and Iyad Rahwan, 76 percent of study participants said that it would be “more moral” for the autonomous vehicle to sacrifice one passenger than kill 10 pedestrians.

This is based on the moral philosophy of utilitarianism, where a morally good action is one that helps the greatest number of people – in this case, allowing the vehicle to sacrifice the one passenger to save 10 pedestrians.

But what if you’re the passenger of the self-driving car?

Now, that’s a different story. According to the study, you’re more likely to prefer a vehicle that will protect your life, not sacrifice it. “It appears that people praise utilitarian, self-sacrificing [autonomous vehicles] and welcome them on the road, without actually wanting to buy one for themselves,” the report states.

This is a prime example of what the researchers call a “social dilemma”: people may have a strong consensus on what’s best for society as a whole but will still prefer to act in their own self-interest. And this double standard could have huge implications, potentially impeding the development of the regulations needed to make autonomous vehicles commercially available.

To encourage more public discussion of this issue on a global scale, one of the study’s authors, Massachusetts Institute of Technology professor Iyad Rahwan, launched Moral Machine (http://moralmachine.mit.edu/), an online platform that invites the public to help build a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and to discuss potential scenarios of moral consequence.

The Bottom Line
The emergence of self-driving systems could have a significant impact on utility fleet operations – by improving worker safety, boosting productivity and achieving the highest possible utilization rate from all your fleet assets. But there are human factors that go beyond technology development that could slow the market availability of these systems. Watch this space closely as technology companies, automakers and governments grapple with these societal issues to pave the way to a brave new self-driving world.