Not So Fast

The push to bring driverless vehicles to the mass market appears to have hit a major roadblock. Proceed with caution.

A recent fatal crash involving a Tesla Model S, which was being driven by the car’s Autopilot assisted-driving system, has provoked a national debate over the safety of driverless vehicles. The Autopilot steered the Model S into the side of a tractor-trailer crossing a Florida highway, killing a man who was sitting in the driver’s seat with his hands off the wheel.

The Tesla crash and other accidents involving driverless cars have prompted concerns among technical experts that much more research is needed to align the capabilities of robot vehicles with the realities of existing driving norms (which are heavily shaped by human nature) before these cars are cleared to cruise on America’s highways. The debate is also forcing us to consider a question that may not have an easy answer: is it riskier to give humans the option of overriding semi-autonomous systems than it is to make vehicles fully autonomous and take the driver out of the equation?

You could say this debate has come in the nick of time: the National Highway Traffic Safety Administration (NHTSA) is preparing to issue its guidelines for driverless vehicles later this summer. NHTSA has been under intense pressure from tech companies, including Google and Amazon—as well as major automakers—to flash the green light for driverless cars. But the concerns may be substantial enough to bring the pell-mell rush to the driverless age to an abrupt halt (or at least a lengthy time-out), sending these 21st-century marvels back to the labs for an overhaul.

In the Florida accident, Tesla’s Autopilot apparently did not detect the white truck body against the bright sky. Although the car’s owner, 40-year-old Joshua Brown, was in the driver’s seat and could have overridden the system by hitting the brakes, Brown reportedly was not paying attention; he assumed he was in safe hands with the robot system controlling the car.

Automated driver-safety features, such as sensors that activate the brakes when they detect an imminent collision, are rapidly becoming standard equipment on vehicles. But fully driverless vehicles go well beyond these systems, which are designed to work in tandem with the driver. The driverless cars now under development treat the driver as a passenger, one who is not even required to sit behind the wheel.

Which brings us to human nature. Even though Tesla instructs Model S owners to sit in the driver’s seat and be ready to take over, a few Tesla owners have posted YouTube videos of themselves climbing into the back seat while the car drives itself. But the problem of human nature as it relates to driverless vehicles goes beyond Stupid Driver Tricks: it comes down to whether the vehicles themselves can be programmed to recognize the idiosyncrasies of driver behavior (and operating conditions) the robot cars will encounter on the road. Last year, a driverless test vehicle in California slammed into the rear of the car in front of it, which had stopped at a stop sign, begun moving forward and then hit the brakes again. The driver of the other car had made a typically illogical decision well known to other human drivers, but one that did not compute for the driverless vehicle’s pre-programmed “brain.”

Even Google, the tech giant that pioneered driverless vehicles, adjusted its strategy in the early stages of development to account for driver behavior. Chris Urmson, head of Google’s driverless car project, said in a TED talk last year that the company initially tested a driver-assistance system, but drivers became so dangerously distracted using the system that Google opted instead to focus on fully self-driving cars. Other experts have noted that people grabbing the controls in a moment of panic may actually pose greater potential hazards than fully autonomous systems.

Manufacturers pursuing the rapid introduction of driverless vehicles, including the major automakers, appear to be dividing into two camps: those gravitating towards systems that require someone to sit behind the wheel and enable that person to retake control of the vehicle from the autonomous system (known as Level 3 controls in NHTSA jargon) and those (including, most recently, Volvo) who are skipping that step and moving directly to fully autonomous vehicles (Level 4). [Meanwhile, the auto insurance giants are trying to figure out how to process claims involving self-driving cars. We’ll take a wild guess here and predict their solution will involve raising premiums for everyone.]

NHTSA reportedly is developing a guideline that would require a human driver to be “available for occasional control, but with sufficiently comfortable transition time.” Sounds like a bit of an oxymoron, doesn’t it? If you’ve got to grab the wheel because someone just ran a stop sign, the transition time will be measured in fractions of a second.

The head of Volvo’s autonomous-vehicle research program recently told The New York Times that a car changing lanes at 50 m.p.h. would be a highly risky moment for a driver to retake the wheel. “Some people can take control in 10 seconds, but if someone falls asleep it could take two minutes,” he told the Times. Volvo conducted a test in which it asked a driver in a Level 3 car to play a video game on his phone and then interrupted him with a prompt to take control. The driver in the test was too immersed in beating the game to heed the prompt.

An NHTSA-funded Virginia Tech study found that drivers of semi-autonomous vehicles took an average of 17 seconds to respond to takeover requests. In that time, a vehicle traveling 65 m.p.h. covers more than 1,600 feet.
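
For readers who want to sanity-check that figure, here is a quick back-of-the-envelope calculation (a minimal sketch in Python; the 17-second average comes from the study cited above, and the rest is a standard unit conversion):

```python
# Back-of-the-envelope check of the 17-second / 1,600-foot figure.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 65          # highway speed cited in the article
reaction_time_s = 17    # average takeover response time in the Virginia Tech study

feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR   # about 95.3 ft/s
distance_ft = feet_per_second * reaction_time_s

print(f"{distance_ft:.0f} feet traveled before the driver responds")  # about 1,621 feet
```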

One of the more promising NHTSA guidelines reportedly under consideration involves the development of an electronic communications system that would allow driverless cars to transmit their location, speed, and other data to nearby vehicles. Such a system might have prevented the fatality in Florida. (A long-term goal of the DOT’s Smart City Challenge is the installation of networks transmitting such data to all vehicles in a metropolitan area; Columbus, OH, which won the competition, will receive $50 million in DOT funding to develop such a network.)
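
To make the idea concrete, here is a hypothetical sketch (in Python, with made-up field names) of the kind of status message a vehicle-to-vehicle system might broadcast; the actual data elements would be defined by whatever standard regulators and automakers eventually adopt, such as the existing DSRC-based message formats.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class SafetyBroadcast:
    """Hypothetical vehicle-to-vehicle status message (illustrative only;
    real systems define their own formats and fields)."""
    vehicle_id: str
    latitude: float      # degrees
    longitude: float     # degrees
    speed_mph: float
    heading_deg: float   # 0 = north, increasing clockwise
    timestamp: float     # seconds since the epoch

# A nearby vehicle could broadcast something like this several times per second:
msg = SafetyBroadcast("truck-042", 29.395, -82.447, 35.0, 270.0, time.time())
print(json.dumps(asdict(msg)))
```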

NHTSA’s history in setting guidelines for automotive safety innovations offers some useful precedents for its upcoming requirements for driverless cars. When airbags became standard equipment, the initial federal guidelines did not specify who could sit behind them; when it became clear that the force generated by the inflating bags endangered women and children, NHTSA adjusted the rules to reduce that force, mandated that the bags not inflate in low-speed crashes, and required automakers to place prominent warnings in all cars about having children ride in the back seat.

More than 35,000 people were killed in car crashes in the U.S. last year, up 7.7 percent from 2014. It’s reasonable to assume that the widespread introduction of semi-autonomous or even driverless vehicles, if properly vetted, could dramatically reduce that number. But until we’re certain that all of the legitimate concerns about this new technology have been addressed, NHTSA shouldn’t even think about letting the light turn green for driverless vehicles on America’s roads.

Set the light at yellow and tell all of the auto techies to slow down and proceed with extreme caution. The age of driverless cars is coming, but it can wait a few years until we get it right.
