The trolley problem usually starts by assuming an autonomous vehicle is faced with an ‘unavoidable collision’. Like most players in the autonomous driving (AD) industry, at Volvo Autonomous Solutions we’re working towards never facing such a scenario – or at least, never facing one where anyone could reasonably expect us to do anything about it.
For starters, how would an autonomous vehicle get into this kind of scenario in the first place? A human driver might misjudge things like braking distance, but for an automated system, one of the design targets is to drive in a way that ensures we won’t end up in an unavoidable collision situation later. An autonomous system can’t just go out and behave however it wants. When developing our technology, we work closely with regulators and are building software systems we (and you) can trust.
I haven’t heard anyone construct a trolley problem that could feasibly happen with autonomous vehicles. And interestingly, when analyzing real-world accident data, automotive researchers have failed to find an actual instance of a trolley-type problem even with human drivers. So this hypothetical situation is unrealistic regardless of who – or what – is behind the wheel.
We’re designing a system that doesn’t make the mistakes of a human. But of course, if you participate in traffic (as almost anyone in the AD industry must), there will always be situations where someone acting with extreme negligence or malice could involve our system in an accident.
Traffic laws are constructed in such a way that this can’t happen if everyone follows the rules. We are building systems (as society rightfully expects from us) that keep everyone safe – even in cases of foreseeable misuse by humans, e.g., a certain degree of speeding, or people trying to “test” the automated system’s reactions.
But even in the situation of an accident that’s beyond our control, our system will do everything in its power to mitigate harm. As a society, we have to decide: how many layers of protection do we want?
If society asks for too many layers of protection, the system will simply not exist, because it will be too expensive to develop and deploy. That would mean we’d continue with a human-driven solution where things are not as safe or as sustainable. So we have to strike a balance: making autonomous systems safer than the human-driven alternative, but not infinitely safer, so that we can reap their benefits as soon as possible.
What we need to ask ourselves is: what do we want the machine to be able to detect and deal with? Do we want, as a society, for it to be able to differentiate between people, e.g. by age or other attributes?
This is of course a question of ethics and therefore shouldn’t be the responsibility of any one autonomy company. Rather, there should be universal guidelines. The German Ethics Commission, for example, has said it doesn’t want vehicles making these kinds of decisions: systems should detect whether human life is present or not, but should not try to solve difficult ethical equations.
In conclusion, the trolley problem is an unhelpful hypothetical scenario for autonomous vehicles. It’s a fun discussion in classrooms or bars, but in the real world it doesn’t hold much value.
At Volvo Autonomous Solutions, we take a holistic approach to safety. It’s not about how powerful a piece of technology is; it’s about how you design and integrate it into a system to ensure it’s safe. It’s not about how quickly we get to market; it’s about how we demonstrate our commitment by deploying a safe system. Ultimately, this is what will help us save lives now and in the future.