
Jonas Binding
2024-01-22
With his background in biophysics and computer science, Jonas has spent more than 12 years thinking about how digital solutions can make human lives better. He has over six years of experience working on autonomous driving for passenger cars, trucks, and machines.

The misguided dilemma of the trolley problem: Why you shouldn’t worry about autonomous vehicles solving ethics riddles

Here is the age-old thought experiment known as the trolley problem: walking near a railway, you suddenly see a trolley speeding down the track towards a fork. On one branch it will hit five people; on the other, only one. You are able to switch the trolley's course, so you face two options: do nothing and accept the deaths of five people, or act to save the five, knowing your action will cause one other person to die. What would you, as a human, do? And if an automated vehicle were in a comparable situation, what would it do?

When people hear that I work on autonomous vehicles or “self-driving trucks”, they always ask me about the trolley problem. But at Volvo Autonomous Solutions, we don’t believe it’s an issue.

 

From a false statement, you can derive anything.

The trolley problem usually starts by assuming an autonomous vehicle is faced with an 'unavoidable collision'. Like most players in the autonomous driving industry, at Volvo Autonomous Solutions we are working towards never being faced with such a scenario – at least, not one where someone could reasonably expect us to do anything about it.

For starters, how would an autonomous vehicle get into this kind of scenario in the first place? Of course, a human driver could misjudge things like braking distance, but for an automated system, one of the design targets is to drive in a way that ensures it won't end up in an unavoidable collision situation later. An autonomous system can't just go out and behave however it wants to. When developing our technology, we work closely with regulators and are building software systems we (and you) can trust.
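
To make that design target concrete, here is a minimal sketch of the underlying idea – assuming, purely for illustration, a simple "always able to stop within the guaranteed-clear distance" rule. The parameter names and values are made up, and this is not Volvo's actual software:

```python
import math

# Illustrative assumptions only – not real vehicle parameters.
REACTION_TIME_S = 0.5   # assumed latency from detection to full braking
MAX_DECEL_MPS2 = 3.5    # assumed assured braking capability
SAFETY_MARGIN_M = 2.0   # assumed buffer kept to any obstacle

def stopping_distance(speed_mps: float) -> float:
    """Distance travelled while reacting, plus distance to brake to a stop."""
    return speed_mps * REACTION_TIME_S + speed_mps**2 / (2 * MAX_DECEL_MPS2)

def max_safe_speed(clear_distance_m: float) -> float:
    """Highest speed from which the vehicle can still stop inside the
    distance it knows to be clear (the inverse of stopping_distance)."""
    usable = max(clear_distance_m - SAFETY_MARGIN_M, 0.0)
    # Solve v*t + v^2 / (2*a) = usable for v: positive root of the quadratic.
    a = 1 / (2 * MAX_DECEL_MPS2)
    b = REACTION_TIME_S
    return (-b + math.sqrt(b**2 + 4 * a * usable)) / (2 * a)

# Example: with 60 m guaranteed clear ahead, speed is capped at ~18.5 m/s,
# so a full stop is always possible and a "trolley choice" never arises.
print(f"max safe speed: {max_safe_speed(60.0):.1f} m/s")
```

A planner that never exceeds this cap cannot, by construction, reach a state where a collision has become unavoidable – which is exactly why the trolley premise fails to apply.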
 

I haven't heard anyone construct a trolley problem that could actually, feasibly happen with autonomous vehicles. And interestingly, when analyzing real-world accident data, automotive researchers have failed to find a single real instance of a trolley-type problem involving human drivers. So this hypothetical situation is unrealistic even before automation enters the picture.

We’re only human. Machines aren’t.
 

We're designing a system that doesn't make the mistakes a human makes. But of course, any system that participates in traffic (as almost anything in the AD industry must) faces situations where someone acting with extreme negligence or malice could involve it in an accident.

Traffic laws are constructed in such a way that accidents can't happen if everyone follows the rules. But we are building systems (as society rightfully expects of us) that keep everyone safe even in cases of foreseeable misuse by humans, e.g., a certain degree of speeding, or people trying to "test" the automated system's reactions.

 

But even in an accident that is beyond our control, our system will do everything in its power to mitigate harm. As a society, we have to decide: how many layers of protection do we want?

 

If society asks for too many layers of protection, the system will simply never exist, because it will be too expensive to develop and deploy. That would mean we'd continue to rely on human drivers, with a solution that is neither as safe nor as sustainable. So we have to strike a balance: making autonomous systems safer than the human-driven alternative, but not infinitely so, in order to reap their benefits as soon as possible.

What we need to ask ourselves is: what do we want the machine to be able to detect and deal with? Do we, as a society, want it to be able to differentiate between people, e.g. by age or other attributes?

This is of course a question of ethics and therefore shouldn't be the responsibility of any one autonomy company. Rather, there should be universal guidelines. The German Ethics Commission, for example, has said it doesn't want vehicles to make these kinds of decisions: systems should detect whether human life is present or not, but should not try to solve difficult ethical equations.

Useful problem or hypothetical hindrance?

In conclusion, the trolley problem is an unhelpful hypothetical scenario for autonomous vehicles. It makes for a fun discussion in classrooms or bars, but in the real world it doesn't hold much value.

At Volvo Autonomous Solutions, we take a holistic approach to safety. It's not about how powerful a piece of technology is; it's about how you design and integrate it into a system to ensure it's safe. It's not about how quickly we get to market; it's about how we demonstrate our commitment by deploying a safe system. Ultimately, this is what will help us save lives, now and in the future.

 

Do you wish to stay updated with insights like this one? Subscribe to our newsletter.