I’m sure we’ve all heard the rather loaded statistic that over 94% of all road accidents are caused by human error, or theories that soon drivers will be relics of automotive history, replaced by autonomous machines, incapable of error. Perhaps then it’s already high time to concede and let the robots take over the wheel – we’d all be much safer for it, right?
With that in mind, you’d be forgiven for assuming this would be a rather short article. A simple “yes”, case closed, thanks for reading.
But as you’ve probably guessed, it’s a bit more complicated than that. The truth is: the notion that humans are terrible drivers is one of the biggest myths going. There are many reasons for this, not least the fact that statistics don’t account for all those near misses where good human judgment helped avoid a serious accident.
Human beings are the overwhelming majority of drivers today. We simply don’t have enough data about robot drivers to support the assumption that they could do the job any better.
But before we go further with the comparison, let’s start with a fundamental question: what makes a good driver?
There are, of course, many qualities that can make someone a good driver. But for the purpose of our discussion, let’s keep it simple and define a good driver as a “safe” driver: one who never engages in the type of behavior that could result in harm to a human being.
So, if that is our baseline, who is the better driver?
Well, it all depends on context.
Whether it’s on an assembly line or in surgery, robots are arguably better suited than humans to a wide variety of tasks. Precise, consistent and efficient, they can do the same thing day in, day out without getting tired, sick, or distracted. Therefore, programming robots to become excellent drivers should be just as feasible, right?
Again, it depends on context.
Experience shows that for tasks that are repetitive and well-defined – like calculating square roots or playing chess – robots are an excellent choice. But can driving be categorized as repetitive or well-defined most of the time? Just imagine approaching a roundabout in a thunderstorm flanked by cyclists.
It’s in complex situations like these that human experience and intuition are likely of greater value, because these situations require contextual awareness, operational judgment, creativity, and foresight to avoid an emergency – characteristics autonomous machines currently lack when compared to humans.
It’s important to note that autonomy is largely a lesson in prevention rather than reaction time, because an accident avoided is far better than an accident mitigated. Choosing a traveling speed that allows an autonomous vehicle to come to a safe stop in all situations – based on its weight and stopping distance – is much more effective than chasing lightning-fast reaction times through any specific piece of technology.
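To make the prevention argument concrete, here is a minimal sketch of the underlying physics: total stopping distance is the distance covered during system latency plus the braking distance, and solving that relation for speed gives the fastest the vehicle may travel while still being able to stop within its assured clear distance. All numbers here are illustrative assumptions, not any real vehicle’s specification.

```python
import math

def max_safe_speed(sight_distance_m: float,
                   decel_mps2: float,
                   latency_s: float) -> float:
    """Largest speed (m/s) at which the vehicle can still stop within
    sight_distance_m, given its braking deceleration and the latency
    of its perception/actuation chain.

    Stopping distance = v * latency + v**2 / (2 * decel).
    Setting that equal to sight_distance_m and solving the quadratic
    for v gives the safe speed cap.
    """
    a = 1.0 / (2.0 * decel_mps2)
    b = latency_s
    c = -sight_distance_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Illustrative values only: a heavy vehicle braking at ~3 m/s^2,
# 0.5 s of system latency, 60 m of assured clear distance ahead.
v = max_safe_speed(60.0, 3.0, 0.5)
print(f"max safe speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")
```

The point of the sketch is that the safe speed falls directly out of the vehicle’s mass-dependent braking capability and the space available, with no exotic sensing required.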
This brings me to another popular misconception: that to be considered truly “safe”, autonomous vehicles must be loaded with “perfect” technology – lidars capable of gazing hundreds of miles into the distance; radars that can scan entire cities. The reality is, it’s not about how powerful a piece of technology is, it’s about how you design and integrate it into a system to ensure it’s safe. We need to reach perfection in the design of the system, where each component compensates for another’s shortcomings. When a camera fails, the system must be able to detect that vision is no longer working and take the safest possible course of action using other types of sensors.
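That “each component compensates for another’s shortcomings” idea can be sketched as a tiny supervisory rule: the system monitors sensor health, degrades gracefully while redundancy remains, and commands a safe stop once it runs out. The sensor set, thresholds, and actions below are hypothetical illustrations, not any real vehicle’s design.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()      # full redundancy available
    REDUCE_SPEED = auto()  # one sensor lost: compensate with the rest
    SAFE_STOP = auto()     # insufficient perception: stop safely

@dataclass
class SensorStatus:
    camera_ok: bool
    lidar_ok: bool
    radar_ok: bool

def degraded_mode(status: SensorStatus) -> Action:
    """Hypothetical supervisory logic: react to a failed sensor by
    falling back on the remaining ones, and stop once redundancy is
    exhausted, rather than relying on any single 'perfect' sensor."""
    healthy = sum([status.camera_ok, status.lidar_ok, status.radar_ok])
    if healthy == 3:
        return Action.CONTINUE
    if healthy == 2:
        return Action.REDUCE_SPEED
    return Action.SAFE_STOP
```

The safety property lives in the rule, not in any individual sensor – exactly the system-design point above.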
That leads us to the concept of the Operational Design Domain, or ODD. An ODD describes the specific domain or domains in which an automated driving system is designed to operate properly, including types of roadways, ranges of speed, weather, time of day, and environmental conditions. Basically, it’s a way to articulate the space in which the vehicles move, as well as how the technology should be built to operate within that space.
In the right ODD – usually a closed-off environment involving repetitive, well-defined processes – robot drivers could outperform humans. By definition, if you operate in an ODD without any human beings, you are safe. The trouble is, ODDs don’t maintain their properties over time: if the weather changes, or an unexpected obstacle appears, the vehicle is no longer operating in the environment it was designed for. Therefore, rather than trying to reach “perfection”, it’s better that an autonomous vehicle can detect conditions in which it cannot drive and temporarily cease operating.
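The “detect when you’ve left your ODD and stop” principle can be sketched as a simple membership check: the ODD is a declared set of conditions, and the vehicle continuously verifies that measured conditions stay inside it. The fields and example values below are toy assumptions chosen for illustration, not a real ODD specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ODD:
    """A toy Operational Design Domain: the conditions the system is
    designed for. Field choices are illustrative examples only."""
    road_types: frozenset      # e.g. {"quarry", "port"}
    max_speed_kmh: float
    weather: frozenset         # e.g. {"clear", "light_rain"}
    daylight_only: bool

def within_odd(odd: ODD, road: str, speed_kmh: float,
               weather: str, is_daylight: bool) -> bool:
    """True only while every measured condition stays inside the ODD;
    the moment any check fails, the vehicle should cease operating."""
    return (road in odd.road_types
            and speed_kmh <= odd.max_speed_kmh
            and weather in odd.weather
            and (is_daylight or not odd.daylight_only))

# A hypothetical confined-site ODD: quarry roads, low speed, fair
# weather, daylight operation only.
quarry_odd = ODD(frozenset({"quarry"}), 40.0,
                 frozenset({"clear", "light_rain"}), True)
print(within_odd(quarry_odd, "quarry", 30.0, "clear", True))         # True
print(within_odd(quarry_odd, "quarry", 30.0, "thunderstorm", True))  # False
```

Note that the thunderstorm case fails the check even though nothing about the vehicle changed – the environment drifted out of the ODD, which is exactly the scenario in which the vehicle should stand down.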
So, where does that leave us in the debate about who should take the wheel? As I’ve discussed, it’s all a matter of context. When it comes to repetitive, well-defined processes, robots are likely better suited. But in complex, ever-changing situations, humans arguably come out on top. Some industry segments, such as mines and quarries, as well as ports, offer easier environments to control and work in. At the other end of the safety spectrum, public roads provide a far greater challenge for autonomous vehicles.
As autonomous technology progresses, we shouldn’t waste our time comparing humans to machines or finding the perfect technology that will surpass human capabilities. We should rather focus our energy on building overarching systems that are safe – perhaps combining the capabilities of people and machines.
At Volvo Autonomous Solutions, we envisage a future where autonomy complements human processes, rather than replaces them. It’s high time we dropped the Us vs Them mindset and started exploring ways in which we can use autonomous technology to make our lives simpler and, above all, safer.