Causes versus understandings: Why self-driving cars may not be a good idea

[Click here for the Dutch version of this post]

To explain human behavior you can start from causes, such as our evolutionary instincts, learned reflexes or statistical regularities. However, since our actions are also informed by the way we understand the world around us, meanings and intentions can also figure as explanatory approaches. In daily life all these approaches are seamlessly intertwined, but the differences between them become crucial when it comes to automating systems such as self-driving cars. After all, automation can only rely on causes, not on meanings or intentions. This implies that the normative aspects of meanings and intentions, such as values and responsibilities, are transferred to a technical system without any ability to change them later – even though future developments may require such changes. This makes it doubtful whether the self-driving car should be developed further.

Not much seems to coincide so perfectly with human life as driving a car. First you sit in the back, then you move to the passenger seat in front, after which you get your driver’s license at eighteen. When you get old, you must be checked regularly to see if you are still human enough to drive a car.

No wonder that self-driving cars have such an appeal: they would mean a big leap forward into a science-fiction-like world. Not only car traffic makes this leap; humanity itself enters a new phase of progress.

The big question is which actions you want to automate in the case of self-driving cars. What is the model of man that needs to be replaced? There are, in fact, different approaches to explaining human behavior, and only one of these allows automation. First, we can look at behavior that can be explained causally, but we can also look at behavior that is based on our understanding of the world. This behavior, unlike causal behavior, is linguistic and reflexive in nature.

It is undeniable that human behavior can be explained on the basis of predictable regularities. We can see these regularities as the causes of certain actions.

To begin with, we are evolutionarily shaped, and our genetic material largely determines how we respond to certain stimuli. Our brain is pre-programmed to make choices in certain cases. But not all of our causal choices are innate. After all, as animals we behave little differently from Pavlov’s dog in many cases, our reactions being subconsciously conditioned by previous experiences. These types of choices are also unconscious and predictable. In addition, our behavior shows regularities at an aggregate level. Using statistical techniques, these regularities can be mapped and used to say something about how human actions come about.

In sum, if you want to explain the choices that people make, you can look for causes, finding them in your DNA, in the way you are conditioned or in statistical correlations – and many scientists and laymen will happily do so.

This usually does not concern actual physical laws, but correlations, mechanisms or other non-necessary relations. The main point is that the laws of physics serve as the model for how you should look at reality: as explanatory approaches, causes aim to provide a description of human behavior that is comparable to genuine causal laws.

The beauty of such (pseudo)laws is that they lend themselves well to automation. Not that it is easy to keep a car on the road safely, but in principle you can optimize the system once you know the correct regularities and have sufficient computing power to make the necessary calculations.
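To make that concrete, here is a minimal sketch – my own illustration, not taken from any actual self-driving system – of what automating behavior explained purely by regularities amounts to: observed conditions are mapped to actions through fixed, pre-learned rules. The function name and thresholds are entirely hypothetical.

    # A hypothetical policy: once the regularities are fixed in code, the system
    # can only ever apply them; it cannot reinterpret the situation or give
    # reasons for its choice.
    def learned_policy(distance_to_obstacle_m: float, speed_kmh: float) -> str:
        """Map observed conditions to an action via fixed, pre-learned rules."""
        if distance_to_obstacle_m < speed_kmh * 0.5:   # assumed stopping-distance rule
            return "brake"
        if distance_to_obstacle_m < speed_kmh * 1.0:
            return "slow_down"
        return "keep_lane"

    print(learned_policy(distance_to_obstacle_m=12.0, speed_kmh=50.0))  # -> "brake"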

However, if we ask ourselves how we make choices, it will not suffice to look only at causes. We are linguistic beings, which gives rise to other types of explanation for the way we behave.

We understand the world through our language, and when it comes to understanding we speak of so-called ‘hermeneutical’ explanatory approaches. First of all you have to think of meanings, which concern the way in which our actions are based on understandings of the world. Such understandings are determined not genetically or statistically, but culturally.

Secondly, you can think of intentions: the actions that we consciously take after we have reflected on the consequences of a number of possible courses of action. Not all, but certainly many, of our decisions are made in this well-considered way.

Meanings serve multiple functions in human action. First of all, they enable us to coordinate our joint actions. Understandings are socially constructed: the meanings we give to our experiences and our thoughts are obtained through learning processes within our social environment. Because understanding is socially constructed and meanings are shared, you can come to explicit agreements and, much more often, to implicit coordination of choices. You don’t have to think about how someone else understands a certain situation, because you already understand the situation in the same way.

Secondly, meanings have an important function for an individual. They enable us to make new decisions that will have an outcome in the future. By understanding a situation based on earlier, more or less comparable situations and circumstances, we can form an expectation about the effect of the decision to be made.

The third, and perhaps the most important, function of meanings is that they indicate what is important and what is not. Which matters do we find valuable, what agreements are central to interpersonal communication, which norms are the right ones, etc. All these questions are implicitly embedded in the meanings that we share within a culture or that are laid down in institutions such as the systems of law or democracy. In this way meanings can give a moral dimension to our experiences and observations.

One could see intentions as a kind of mirror image of meanings. As mentioned, you make choices based on the way a situation is understood and the outcome that you expect. Afterwards, others can hold you accountable, because those intentions were formed on the basis of a shared understanding and a legitimate expectation about the future outcome of that decision. When you have to account for your decisions, you are in fact forced to give the good reasons you had for making them. Of course, what counts as ‘good’ is determined by the web of meanings that is used within a certain culture or institution.

The hermeneutic duo of meanings and intentions cannot be automated. They constitute a dialectical and moral relationship between an individual and a social environment, and they start from expectations that are based not just on supposed causes or extrapolations, but also on analogies, creative hypotheses and poorly understood superstitions. And usually all of these are applied at the very same moment.

Meanings and intentions are not just regularities; they enable you to do things and to reflect on whether the things you have done had the right outcomes. Moreover, ‘right’ here is primarily about ‘morally right’, not so much about ‘empirically correct’. In other words, meanings and intentions are focused on action and on organizing responsibilities.

However, it is not always evident whether something belongs to the domain of causes or the domain of understandings. This becomes clear when you look at the way people deal with emergency situations, such as an imminent accident.

Some of us will then freak out. The interesting thing is that you can see panic as a situation that you cannot give meaning to at that moment. In a panic we respond in an atavistic way that can be perfectly well explained by an evolutionary cause.

But, different as we all are, not everyone reacts the same way. Where some panic, others will keep their cool and make an informed decision based on their interpretation of the situation.

How to automate such situations? In the case of airplanes you could say that pilots are expected to act resolutely: they take the controls in situations that are not standard. With cars, it seems that the opposite path is followed, as most attention is given to the way in which automated systems will cope in dangerous situations.

I do not want to suggest that car drivers are thought to be unable to understand an emergency situation, or that they are considered incapable of giving good reasons for their choices in times of danger. The problem is that the difference between causal and hermeneutical explanations is simply not well understood.

This can be seen, for example, in the tendency of ethicists to treat the arrival of the self-driving car as a real-life ‘trolley problem’, so that they can engage in thinking about how an algorithm should choose between victims in the event of an imminent accident. With that, the self-driving car is mainly used as a vehicle for exploring ethical dilemmas, while the real ethical issues at play are ignored.

The original trolley problem is about a situation in which doing nothing leads to five victims and intervening leads to one victim. An algorithm can never do nothing; an algorithm, after all, has no intentions, it just follows what is assigned to it by the code. This makes the trolley-problem scenario a silly one in the case of self-driving cars: a self-driving car is not programmed to kill five people instead of one – unless the algorithm was designed by a psychopath.
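A small, purely hypothetical sketch (my own illustration, not a real control system) of the point that an algorithm can never ‘do nothing’: even the branch that leaves the controls untouched is an outcome the designer has explicitly assigned in code.

    # Hypothetical emergency handler: whatever happens, some branch is executed.
    def respond_to_emergency(can_swerve: bool) -> str:
        if can_swerve:
            return "swerve"          # intervene
        return "stay_on_course"      # "doing nothing" is still a programmed choice

    for can_swerve in (True, False):
        print(can_swerve, "->", respond_to_emergency(can_swerve))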

For everyday life it is no problem at all that the explanatory approaches are not separated, because everyday life is organized in a way that lets us deal with this. For example, if you do something wrong and you have to account for it afterwards, you are basically given the opportunity to put forward good reasons. In this way causes are drawn into the domain of meanings and intentions.

With that, causes are given meaning. And that is important, because human life is mainly about humans living together. We must organize this in such a way that we can coordinate our decisions and we must ensure that these decisions are in line with shared moral standards.

Causes are not very helpful for that. You cannot be expected to account for things you can’t do anything about. What matters is establishing which decisions you really can’t do anything about and which choices you can. The line between them is thin. A bank robbery is unacceptable, but if you are forced at gunpoint, you do not bear moral responsibility for the money you steal. It is not tolerable to hit someone, but things are different if you do so while you are asleep.

The point is that these types of borderline cases need to be discussed. On a case-by-case basis it has to be examined whether a wrongdoing is the result of a cause or whether the offender must be reproached for being unable to give a good reason for his behavior. Moreover, it must always remain possible in principle to transfer an action from the domain of causes to the domain of good reasons and vice versa.

That brings us to the core of this story. The most important question is whether we want to move around in meaningless systems. Not because it is technically impossible to automate cars, but because an automated system implies a transfer from the domain of meanings to the domain of causes.

You can say: yes, that is what we should want, because automated car traffic brings more safety and better fuel economy, and the time you now spend driving the vehicle can be spent in a better way. Just think of it: instead of telling your kids in the back of the car to be quiet because otherwise mom or dad can’t concentrate on the road, you can play a game with them.

And what about accountability? You can still hold the developer of the technical system responsible – at least in the criminal sense or in terms of insurance. It could also be the administrator of the system, or the government, or the user. It will take some time and some conflict, but ultimately a solution will be found.

In short, the transfer from a meaningful system to a meaningless system can be made, provided we find out what we consider to be important and who can be held accountable for which actions. Or not? I have my doubts. In human interaction it is quite possible to reconsider a certain distribution of explanatory approaches if there is reason to do so. A technical system is something different: a design choice that you make now also determines the scope of choices you can make in the future – unless an entire system is replaced at once, something that is virtually impossible. The shift from a meaningful to a meaningless system is definitive: we can’t go back.

There are plenty of examples of such ‘lock-in’ effects. Consider, for example, that air travel was made attractive in order to enable everybody to become acquainted with other cultures. To realize this value, measures were taken such as not levying excise duties on kerosene. But now it seems difficult to reverse that, even though everyone is aware of the environmental impact of air traffic. The train doesn’t work as an alternative, as it is hampered by other lock-in effects, such as the different track gauges that exist within Europe. This, too, can hardly be reversed.

The fundamental problem is that you never know how a technical system will turn out in the future. Moreover, moral issues that do not yet play a role can later be of the utmost importance. It is certainly undesirable that a system is designed in such a way that possible moral issues can no longer be resolved.

The leap forward to a fully automated traffic system thus becomes a much less appealing picture of the future. It implies that we construct technical, institutional and moral ‘lock-ins’, making it impossible to cope with new problems or changed insights.

I see little attention being paid to this issue. Technology developers, ethicists and politicians are too busy with technical and legal questions, or with moral questions that don’t really matter – see the trolley problem. Let us think seriously about how to deal with the future moral implications of the autonomous systems that are currently being designed, and about how we can ensure that ethical issues are actually discussed as moral – that is, linguistic – issues. This means that we must collectively reflect on changing circumstances and the moral demands that come into play. Does the development of self-driving cars offer room for such collective reflection? To be honest, the answer to this question gives rise to skepticism rather than confidence.

Further reading:

Mecacci, Giulio, and Filippo Santoni de Sio. “Meaningful Human Control as Reason-Responsiveness: The Case of Dual-Mode Vehicles.” Ethics and Information Technology (December 6, 2019).
