An open letter to the University of Leeds Inter-Disciplinary Ethics Applied Centre

To whom it may concern,

Only once in my three years (so far) of studying Mechatronics have I left a teaching engagement while it was still in progress, in a state of considerable annoyance. It was a seminar a couple of years ago, hosted by IDEA, on the Trolley problem and self-driving cars. The discussion was no particular fault of IDEA’s; it is a discussion that is prevalent in the field, and it is my intention to change it.

The Trolley problem

The Trolley problem is a series of ethical thought experiments in which a hypothetical trolley is about to hit a number of people, and you have the choice to pull a lever and have it hit fewer people instead. Variations on the problem involve different numbers of people, or different specific people, in front of the trolley (five schoolchildren vs. three grandmothers). While its usefulness in ethics in general can be debated, my argument is that it is not only pointless but outright dangerous when it comes to discussing autonomous vehicles.

The Trolley problem and autonomous vehicles

So how does the Trolley problem relate to autonomous vehicles? The usual example is that the car is either going to hit a pedestrian, or it can steer into a wall, killing the occupant. As with the original problem, many variants of this can be debated, relating to the number and age of the people inside and outside the vehicle.

The reason I believe this is a fundamentally flawed question is that it assumes the vehicle makes a decision to kill a person. In other words, the scenario implies that deciding to kill someone is a normal operating scenario for an autonomous vehicle. I firmly believe that this implication is morally wrong. It would be impossible to establish the trust required to use an autonomous vehicle knowing that it can decide to kill you to save someone else, or decide to kill someone else to save you.

To further this point, suppose it is known that a certain kind of car applies such an ethical system, and that this car will make decisions that favour younger people. With this information, an adversarial person could make a dummy[1] of a child that is sufficiently life-like to spoof the car into killing its own occupants.
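To make the exploit concrete, here is a deliberately naive, entirely hypothetical sketch of such a hard-coded "save the younger party" rule; the names and numbers are invented for illustration, and no real vehicle is claimed to work this way. The rule can only act on what perception believes it sees, so a convincing dummy flips the decision against the occupant:

```python
# Hypothetical, deliberately naive "ethical ranking" rule, invented purely to
# illustrate the exploit; no real vehicle is claimed to work this way.

from dataclasses import dataclass


@dataclass
class Party:
    label: str
    believed_age: float  # whatever the perception stack *thinks* it is seeing


def choose_victim(occupant: Party, obstacle: Party) -> Party:
    """Naive rule: always sacrifice the older party."""
    return occupant if occupant.believed_age > obstacle.believed_age else obstacle


# A sufficiently life-like dummy is perceived as a child, so the rule steers
# the car into the wall, and the occupant, not the attacker, pays the price.
print(choose_victim(Party("occupant", 40), Party("child-shaped dummy", 8)).label)
```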

The secret third option

The problem with framing such questions as Trolley problems is that it makes it look as though there must be a choice between the two bad options. Framing it this way glosses over the right answer: autonomous vehicles must always try to avoid getting into such a situation in the first place. The decision isn’t “oh well, this person stepped in front of me, I guess I’m going to kill them”; the vehicle must be trying to avoid collisions and accidents. Killing someone isn’t a normal operating scenario; it must always be treated as an abnormal one that the vehicle must try to avoid.

I’m not alone in thinking that this framing of the question is incorrect. This is an excerpt from a discussion between Bryan Salesky, CEO of Argo.ai, one of the biggest autonomous vehicle companies, and the racing driver Alex Roy[2]:

Bryan Salesky
The way we program these vehicles today, the concept of morality is not … The language we use to program these vehicles is not sufficiently verbose. It isn’t something that can be articulated. Morality isn’t something that can be articulated. And when you think about it as a human driver, how often do you get faced with these sort of decisions? It’s pretty much … It’s very rare that you’ve ever had to make a choice, “Do I hit A or B?” Usually, it’s there’s a C, a third choice, that allows you to avoid it altogether.
I think that’s what a lot of these … We call them the “trolley problems”. You can look that up.
Alex Roy
Call it the Kobayashi Maru. If you find yourself in that situation, you’ve already made a mistake.
Bryan Salesky
And that’s the key.
Alex Roy
You allowed yourself into a fork. The moral decision is to have the skills and prepare a vehicle such that you do not enter a fork of doom.
Bryan Salesky
Right.

Not only is this option C better from a morality point of view, it is important to understand that while a vehicle might be incapable of deciding whom to hit, it can be capable of avoiding such a situation better than any human ever could. It can calculate its braking distance, predict the expected motion of all other vehicles around it, and plan a path that avoids the collision.
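The kind of check involved is not exotic. Below is a minimal, purely illustrative Python sketch of a stopping-distance calculation, using assumed values for deceleration and reaction time rather than anything any manufacturer actually runs; the point is that a planner can evaluate this continuously, long before any “who do I hit” dilemma could arise.

```python
# A minimal, purely illustrative stopping-distance check, using assumed values
# for deceleration and reaction time; not any manufacturer's planning code.

def braking_distance(speed_mps: float, decel_mps2: float = 6.0,
                     reaction_time_s: float = 0.2) -> float:
    """Distance covered during the reaction time, plus v^2 / (2a) to a stop."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)


def can_stop_in_time(speed_mps: float, gap_m: float,
                     safety_margin_m: float = 2.0) -> bool:
    """True if the vehicle can come to rest before closing the current gap."""
    return braking_distance(speed_mps) + safety_margin_m <= gap_m


if __name__ == "__main__":
    # 50 km/h is roughly 13.9 m/s; a pedestrian predicted 35 m ahead.
    print(can_stop_in_time(13.9, 35.0))  # True: brake, no "choice" to make
    print(can_stop_in_time(13.9, 12.0))  # False: the mistake was made earlier
```

A real planner of course works with far richer predictions, but the calculation that matters is the one that keeps the vehicle out of the dilemma, not the one that adjudicates it.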

The problem with the question

So why is this question so prevalent? When the realistic prospect of autonomous cars on the road first hit the public sphere, the companies researching them started putting out such trolley-problem questions, partly to engage the public, but also, in my view, to reframe the debate as something the public needs to decide through easily digestible and endlessly bloggable questions. As serious outlets picked it up, academic institutions joined the conversation with actual research, such as how answers to this question vary across countries[3]. With that, the idea entered public discussion and has now become unavoidably associated with the ethics of self-driving.

Why is that dangerous? In my opinion, for two reasons. Firstly, if people take this to mean “autonomous cars can decide to kill you”, it erodes the trust fundamental to the global roll-out of such vehicles, vehicles that would benefit humanity on the whole. Secondly, if decision makers in the auto industry start to think like this, we might see cases where autonomous vehicles do try to make an ethical decision, with grave consequences. This second part isn’t speculation on my part: a lecturer cited a case where, at an industry conference, he was asked about the Trolley problem by a leader of a large auto company. While I’m sure the lecturer meant it as an example of the importance of ethics in engineering, I found this exchange deeply terrifying.

It also conveniently distracts from the actual ethical question that is key to self-driving, which is a question of liability. Who is responsible if a vehicle kills someone? The answer isn’t obvious: even when the person killed was at fault, it can be argued that a better vehicle could have avoided the situation. It is clear that companies are keen to avoid this discussion. In the aftermath of an experimental Uber self-driving vehicle striking and killing Elaine Herzberg in Tempe, Arizona, in 2018, Uber quickly settled with the victim’s relatives, avoiding a court decision that could have set a precedent. However, they were more than happy to see the distracted safety driver indicted for negligent homicide.

Final thoughts

I would like to request an end to teaching the Trolley problem, in its current form, in relation to autonomous vehicles to engineering students, including in scheduled seminars and as example debate cases during open days. By no means am I suggesting that the ethics of self-driving isn’t something that needs discussing; it is more important than ever. But propagating the question as is, is dangerous and counterproductive. I would instead suggest discussing the question of liability and responsibility, using the Uber case and other cases that may occur in the future. If you do decide to keep the current form of the question, I would like to request that my argument be presented alongside it, either in the form of this letter or by the presenter. I am even happy to present this argument myself, if it helps people think more clearly about this problem.


[1] Or even a cardboard cut-out.

[2] No Parking podcast, Episode 4: From “Lone Survivor” to No Driver, at around 37 minutes

[3] Princeton University study in 2019: https://www.pnas.org/content/117/5/2332