Almost as soon as news broke of a fatal crash involving Tesla’s Autopilot last year, fans and detractors of the electric-car manufacturer were certain of the tragedy’s causes. Tesla’s supporters and investors never doubted that the system improves safety, so the driver must have failed to heed Tesla’s warnings and remain attentive. Detractors and short sellers were all but certain that Autopilot had somehow failed to protect the car’s driver, allowing him to drive directly into a semi at 74 mph.
After more than a year of debate, a conclusive answer is finally at hand, courtesy of a National Transportation Safety Board investigation whose final results were presented last week. But the board’s findings aren’t likely to leave either side happy: rather than blaming man or machine alone, it found that both the human driver and the Autopilot system — specifically, the complex relationship between the two — contributed to the deadly event.
At the heart of the matter is a dangerous dynamic: With billions at stake in the frantic race to develop self-driving car technology, there are huge incentives for carmakers to create the impression that vehicles for sale today are “autonomous.” But as the NTSB made clear, no vehicle now on the market is capable of safe autonomous driving. When consumers take high-tech hype at face value, a lethal gap between perception and reality can open.
Tesla reaped months of laudatory coverage and billions of dollars in market capitalization by presenting its Autopilot system as more autonomous than other advanced driver-assistance systems, even as it warned owners that they must remain attentive and in control at all times. Though Autopilot did perform better than competing driver-assistance systems, the key to its success was how few limitations Tesla placed on its use. Because Autopilot lets owners drive hands-free anywhere, even on roads where Tesla has warned that such use is unsafe, the company has been able to profit from the perception that its system is more autonomous than others.
But Autopilot was actually designed for use on well-marked, protected highways with no chance of cross-traffic. So when the tractor-trailer turned across Florida’s Highway 27 last May and the Tesla slammed directly into it without triggering any safety systems, Autopilot was working exactly as designed. The problem was that it was being used on a road with conditions it wasn’t designed to cope with, and the driver had apparently been lulled into complacency. Far from failing, Autopilot was actually so good that it led the driver to believe it was more capable than it really was.
This complex failure, to which both man and machine contributed, sounds an important warning about autonomous-drive technology: until the systems are so good they need no human input, the human driver must remain at the center of “semi-autonomous” drive system design. Engineers must assume that if there’s a way for people to misuse these systems, they will. Just as important, companies need to understand that if they over-promote a semi-autonomous drive system’s capabilities in hopes of pulling ahead in the race to autonomy, they risk making the technology less safe than an unassisted human driver.
There’s a lesson to be learned here from aviation. As computers and sensors improved in the 1980s, aircraft manufacturers began to automate more and more of the controls simply because they could. Only later did the industry realize that adding automation for the sake of automation actually made aircraft less safe, so they re-oriented autopilot development around the principle of “human-centric” automation. Only when automation is deployed in ways that are designed to improve pilot performance does safety actually improve.