Stan Boland, chief executive of AI software development company FiveAI
With studies showing that nearly 95% of road accidents are caused by driver error, our appetite as a society to replace the human driver with autonomous vehicles is on the rise.
Companies such as Volvo, Uber, Google and FiveAI are seeking to drastically reduce the number of road fatalities, currently standing at 1.2 million a year worldwide, by working on technologies designed to deliver safe autonomous vehicles. Reducing death and injury on our roads must be one of the primary objectives, arguably the paramount objective, for any autonomous vehicle technology.
The industry appears fixated on delivering advanced driver-assistance systems that only partly automate the role of the human, such as Mercedes’ Drive Pilot and Nissan’s ProPilot. However popular these features are, they pose serious safety risks.
It’s not necessarily the fault of the technology. Human nature means that drivers quickly put too much trust in semi-autonomous technology and don’t provide the required degree of oversight. This can lead to incidents, such as the Tesla fatality in May.
It’s important to note that we are talking about driverless platforms that automate the driving task to some extent, but not completely. This level of automation (categorised as levels 2 and 3) is designed to respond to a relatively narrow range of scenarios and leaves the driver to make the decisions that are, as yet, beyond the system’s capabilities. Given the range of scenarios to which level 2/3 systems can’t yet respond, the potential for disaster is always lurking.
We believe the only safe approach is not to share any element of the driving task with humans at all. Systems must be capable of handling any situation without human intervention – known as level 5 autonomy.
This means that level 5 systems must respond appropriately to a far broader range of scenarios than current driver-assistance technologies. They may not initially have the full extent of complex reasoning ability that humans possess, but they will perform consistently, without the lapses in concentration or the impairments, such as alcohol or tiredness, that human drivers regularly exhibit.
We believe autonomous vehicles can be made at least twice as safe as human drivers; incidentally, this is the target set by Tesla for the “full self-driving” system recently announced by the company. Halving the human driver incident rate, which stands at roughly 400 incidents per billion miles driven, would deliver an accident rate of only 200 per billion miles.
Rigidly applying the failure targets currently used for limited-complexity vehicle electronics systems to the highly versatile fully autonomous technology under development would require an unfeasibly large validation effort. Manufacturers would need to simulate an almost infinite number of objects and situations to test new autonomous technologies. Because autonomous vehicles can be far safer than human drivers long before their failure rates approach those demanded of today’s low-complexity systems, there is a clear moral and practical case for establishing an intermediate failure target for autonomous systems.
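To give a rough sense of the scale involved, the short Python sketch below is a back-of-the-envelope illustration, not a figure from any manufacturer or regulator. It uses the statistical “rule of three”: observing zero failures over n miles only bounds the true failure rate below roughly 3/n at 95% confidence, so the failure-free test mileage needed grows in inverse proportion to the target rate. The specific rate targets in the sketch are illustrative.

    # Back-of-the-envelope sketch: failure-free test miles needed to
    # demonstrate a per-mile failure rate at ~95% confidence, using the
    # statistical "rule of three" (zero failures in n trials bounds the
    # true rate below ~3/n). All rate targets here are illustrative.

    def miles_to_demonstrate(failures_per_billion_miles):
        """Failure-free miles needed to bound the failure rate below
        the target at ~95% confidence: n >= 3 / p."""
        per_mile_rate = failures_per_billion_miles / 1e9
        return 3.0 / per_mile_rate

    # Intermediate target discussed above: 200 incidents per billion
    # miles, half the implied human rate of about 400 per billion.
    print(f"{miles_to_demonstrate(200):,.0f}")  # 15,000,000 miles

    # A hypothetical electronics-grade target of 1 failure per billion
    # miles needs 200 times as much failure-free driving.
    print(f"{miles_to_demonstrate(1):,.0f}")    # 3,000,000,000 miles

Even the intermediate target requires millions of failure-free test miles to demonstrate statistically; holding autonomy to electronics-grade targets pushes the requirement into the billions, which is exactly the validation burden described above.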