When it comes to fully autonomous vehicles becoming commercially available, industry consensus is that it’s not a question of if but when. And that time frame appears to be within the next two to three years.
For example, industry research firm Navigant Research (www.navigantresearch.com) expects that highly automated light-duty vehicles will begin to be introduced in 2020, with steady growth anticipated starting in 2025.
Then there’s Waymo (https://waymo.com) – formerly the Google self-driving car project – pushing the pace, saying that it will roll out fully self-driving taxi rides to the public by the end of this year, with a plan to log 1 million self-driving miles by 2020.
And at the NTEA Work Truck Show in March, Ed Peper, vice president of fleet at GM (www.gm.com), said that the automaker expects to launch fully self-driving vehicles “safely and at scale” in ridesharing applications in 2019.
But fatal crashes in recent weeks – involving an Uber vehicle in fully autonomous mode and a Tesla Model X with Autopilot engaged – also have caused many in the industry and government to pump the brakes on vehicle testing, creating some uncertainty around when robots will actually rule the roads.
So, what needs to happen for fully autonomous vehicles to be ready for prime time?
UFP spoke with Sam Abuelsamid, senior analyst for Navigant Research, to get his perspective.
Innovation vs. Regulation: Striking the Right Balance
Abuelsamid said that the industry and regulators need to develop standards that achieve a delicate balance between innovation and regulation.
“We don’t want to stifle innovation or development of technology,” he said. “At the same time, I think that if we’re going to put these vehicles on public roads – either for testing purposes or commercial deployment – we need to take a look at some basic standards and make sure that the vehicles that we’re putting on the road achieve at least a minimum level of safety.”
What might those standards involve?
“We can begin by taking a look at developing standards for the sensing systems on these vehicles – to make sure that they can reliably ‘see’ in a wide range of driving conditions,” Abuelsamid said. “And then we need to make sure that the systems can react and do the right thing under those conditions. We should have confidence that when the autonomous systems detect something, that they’re going to make the right decision. As human drivers, we have to take a basic driving test to get a license to drive. We should have the same expectation when it comes to licensing a car to drive itself.”
The Machine-to-Human Handoff Hassle
In both the Uber and Tesla crash incidents, the human driver in the vehicle was required to be fully aware and ready to take over control when necessary. But it’s precisely this machine-to-human handoff situation that can be dangerous, according to Abuelsamid.
“I get to drive a lot of different vehicles and try out these different technologies,” he said. “And unfortunately, I'm increasingly coming to the opinion that the partially automated systems like [Tesla] Autopilot and others that require a handoff to a human being are actually a bad idea. I think that having a human as a supervisor for these systems is fundamentally not going to be safe or workable because as soon as you start getting reasonably comfortable with the technology, people quickly become complacent.”
What’s the solution?
“What we should do is move away from these partially automated systems to fully automated systems, even if they’re limited in their scope in terms of where they can operate,” Abuelsamid said. “I have no problem with testing the systems and using them in limited conditions. But I think that is a better approach than relying on a human to oversee the system, because that human-supervisor setup is what we had in the recent Uber [crash] case.”
Last year, Nissan (www.nissan-global.com) introduced a remote control system for autonomous vehicles called Seamless Autonomous Mobility, or SAM. The way it works is that when the car encounters an unpredictable situation – such as a new road-construction area – it brings itself to a safe stop and requests help from the command center. The request is routed to the first available mobility manager – a person who uses vehicle images and sensor data, streamed over a wireless network, to assess the situation, decide on the correct action and create a safe path around the obstruction. Once the vehicle has cleared the area, it resumes fully autonomous operations, and the mobility manager is free to help other vehicles calling for assistance.
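The SAM workflow described above – stop safely, request help, follow a human-approved path, then resume autonomy – can be sketched in a few lines of Python. This is purely an illustrative model of the flow as the article describes it; the class and method names are hypothetical, not Nissan's actual API.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class AssistanceRequest:
    """Images and sensor data streamed to the command center (hypothetical)."""
    vehicle_id: str
    sensor_snapshot: dict = field(default_factory=dict)

class MobilityManager:
    """A human operator who reviews a request and creates a safe path."""
    def resolve(self, request: AssistanceRequest) -> list:
        # In the real system the manager inspects camera and sensor feeds;
        # here we return a canned list of waypoints around the obstruction.
        return [(0, 0), (1, 2), (2, 2), (3, 0)]

class AutonomousVehicle:
    def __init__(self, vehicle_id: str, command_center: Queue):
        self.vehicle_id = vehicle_id
        self.command_center = command_center
        self.mode = "autonomous"

    def encounter_obstruction(self, sensor_snapshot: dict) -> None:
        # Step 1: bring itself to a safe stop and request help.
        self.mode = "stopped"
        self.command_center.put(
            AssistanceRequest(self.vehicle_id, sensor_snapshot))

    def follow_path(self, path: list) -> None:
        # Step 2: drive the manager-approved path, then step 3: resume.
        self.mode = "remote-guided"
        for waypoint in path:
            pass  # actuate toward each waypoint (omitted in this sketch)
        self.mode = "autonomous"

# Usage: one vehicle hits a construction zone; the first available
# mobility manager takes the request off the shared queue.
requests: Queue = Queue()
car = AutonomousVehicle("AV-001", requests)
car.encounter_obstruction({"camera": "new road-construction area"})

manager = MobilityManager()
pending = requests.get()
car.follow_path(manager.resolve(pending))
print(car.mode)  # back to "autonomous" once the area is cleared
```

The queue is what makes the model scale the way the article suggests: many vehicles can share a small pool of mobility managers, since each request is handled only on the rare occasions a vehicle gets stuck.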
So, is a remote control system like this a viable solution to give human passengers greater confidence in an autonomous vehicle?
“I think that some degree of remote control is going to be a necessity for autonomous vehicles,” Abuelsamid said. “There are going to be certain situations where the vehicle gets stuck and is unable to figure out what to do. So, having a remote operator who can see what the vehicle sees and guide it through certain scenarios or to a safe place if there's some sort of system failure is a good thing. But that's not something that's ever going to be scalable to all the vehicles on the road. It would be used as an emergency-only backup.”
What about cybersecurity? How much of a challenge is that right now?
“It’s a huge issue that all the OEMs are going to have to deal with to make sure that autonomous vehicles are both secure and resilient,” Abuelsamid said. “When you have complex systems like these, you can never guarantee absolute security. It’s not possible. No one can say that a system is 100 percent secure with as much code as these systems are running. The technology must be resilient so that when there is a security breach, it can be detected and the vehicle brought to a safe stop.”
How optimistic is Abuelsamid that automakers are making cybersecurity a top priority?
“The good news is that manufacturers have recognized that cybersecurity is a real issue,” he said. “Four years ago, I could not say that was true; they weren't taking it very seriously. But now they are.”
The Bottom Line
What needs to happen for fully autonomous vehicles to expand beyond niche robo-taxi applications to achieve significant scale?
Here’s how Abuelsamid put it: “People have to trust that these vehicles are going to behave properly – that they’re going to be reliable. The public needs the confidence that the autonomous vehicle is safer than a human driver.”
And that timeline is not so certain.