[Click here for the Dutch version of this post]
As a counterpart to the ‘naturalistic fallacy’ you could speak of the ‘idealistic fallacy’, which, strangely enough, does not yet seem to have been introduced in ethics. This fallacy boils down to deriving descriptions of social reality from normative starting points. In ethics, the idea of a rational, autonomous individual often seems to fall prey to this fallacy. In itself, this idea helps us to reflect on the moral structure of society; but if it is accepted as a fact rather than aspired to, the appeal to improve society disappears and, moreover, it becomes difficult to figure out how exactly to pursue that improvement.
—
One of the greatest insights from ethics is that you cannot simply infer an ‘ought’ from an ‘is’. Whoever does so commits the ‘naturalistic fallacy’. On the basis of this fallacy you can effectively criticize any claim that something is good merely because that is the way it is. Traditions, conventions, the natural order and so on can never simply be used as a moral legitimation of a certain practice or situation. Evils like slavery, the oppression of women and colonialism have all been abolished because we realized that there was no longer any valid moral legitimation for them.
The importance of this fallacy is that it teaches us never to simply accept the situation as it is. We will always have to be vigilant and force ourselves to explore how we can make the world a better place.
But there is also another, underexposed side to the gap between ‘is’ and ‘ought’: deriving empirical reality from a desired image. You could call the fallacy that arises from this the ‘idealistic fallacy’: basing a description of social reality on previously given moral principles.
—
This idealistic fallacy is ubiquitous. For example, someone who starts from the premise that ‘people are bad’ will find confirmation everywhere. The same goes for those who think that ‘most people are virtuous’. These propositions have normative elements, because what exactly do ‘bad’ and ‘virtuous’ mean, but they mainly raise empirical questions about how to determine whether people are bad or virtuous.
It’s no wonder people commit this fallacy. After all, our perception is always normatively colored. We see the world from a moral perspective formed by an interplay between cultural teachings and personal experiences. We have created a framework through which we try to evaluate new events in terms of right and wrong, testing that moral framework time and time again.
Usually this leads to little adjustment, because we are masters of the ‘confirmation bias’. We use most of the impressions we gain to confirm our existing frameworks; impressions that do not fit are quickly discarded.
This is why social research is so difficult and so important. All the methods that have been developed serve to counteract the confirmation bias that is ingrained in our whole way of thinking. Whether quantitative or qualitative, from interpretive sociology to statistical economics, all of these approaches aim to compel us to scrutinize our assumptions. The downside is that you only ever see part of social reality and never get a complete overview, but at least this yields credible insights about social reality.
—
It is difficult to find such methods in the field of ethics. There is no empirical context that ensures that insights can be tested and rejected. What remains are arguments that are confronted with counter-arguments, but there is no guarantee that this leads to better moral statements.
Hence, it may be no surprise that ethicists in particular are guilty of the idealistic fallacy. The most important instance is probably the starting point of the independent individual who makes rational choices: the premise of an autonomous self.
To find the source of this autonomous self, you have to go back to social contract theories. In themselves these are no more than handy theoretical devices for thinking about which form of government would be best. What would happen if there were no society that preceded us, and we were mature, autonomous individuals who could design a form of government in a conscious and rational way? According to Hobbes, we would then transfer our sovereignty to an absolute ruler, so that we would not end up in a war of all against all, something no sane individual would prefer. So people come together and draw up a contract so that a peaceful society can be established. Locke took sovereignty away from Hobbes’s absolute monarch and handed it over to the people. Rousseau saw the social contract mainly as a historical error: before the contract was concluded, people were free. Once a contract has been chosen, we must make the most of it and choose a form of government based on full consent.
Like most of us, I have never signed a contract with anyone about any form of government. In fact, to my knowledge, no state has ever emerged on the basis of a voluntary contract signed by rational loners. This would not even be possible: a person is not an individual who precedes the community. Every person is an intrinsically social being, with a shared language, shared norms, a shared identity and so on.
Social contract theories present hypothetical situations of what could have happened if people had been individuals who arrived at a form of government on the basis of rational self-interest. This is a powerful method for avoiding the naturalistic fallacy, because it frees us from the situation we happen to find ourselves in and allows us to focus on what alternative society is possible and desirable.
—
The rational, autonomous individual has become the starting point for most ethical approaches, and descriptions of human action are readily used to confirm this account of man. Ethics often seems to revolve around reconstructions of empirical phenomena that are aimed not at a better understanding of those phenomena, but above all at not having to touch the premise of the autonomous self. Insights from disciplines such as neurology or sociology are easily pushed aside.
For example, many ethicists and philosophers still hold on to an image of consciousness in which the external world is represented one-to-one somewhere in our mind’s eye. This is convenient, because then you can see a choice as the result of a linear process: you observe something; you process that information; you assess different strategies; and then you make the optimal choice.
This all sounds so simplistic that it seems quite implausible that philosophers and ethicists cling to such ideas. And indeed, their reasoning is much more subtle than this. You can demonstrate this, for example, by looking at the idea of ‘collective intentionality’, which involves thoughts that are shared by more than one person. That is a difficult concept, because a thought can only exist in a brain, especially when a thought is considered to be an ‘object’ that is represented in the conscious mind. After all, there is no such thing as a collective brain. It is better to describe ‘collective intentionality’ as pertaining to the thoughts of an individual who thinks that several individuals have the same thought. Basically, if I think that you think what I think that you think, we think the same thing and there is ‘collective intentionality’, or something like that.
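To make that nested structure explicit, a rough sketch in standard epistemic-logic notation may help; this is only an illustration of the individualist reading, not a formalization offered in the debate itself. Writing $B_A(p)$ for ‘A believes that p’, a thought p ‘shared’ by A and B then amounts to each person holding a tower of nested beliefs, roughly:

$$B_A(p) \;\wedge\; B_A\big(B_B(p)\big) \;\wedge\; B_A\big(B_B(B_A(p))\big) \;\wedge\; \dots$$

with B holding the mirror-image beliefs about A. Nothing in this rendering requires a collective brain: every belief in the conjunction sits inside one individual’s head, which is exactly where the conceptual trouble begins.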
Of course, a thought is something that belongs to an individual. The problem here is the assumption that the thought precedes the collective experience. It is a variant of the linear choice process described above: you make an observation, place it somewhere in your consciousness, consider the envisioned situation, and then make a choice. The experience of commonality somehow has to be conceptually squeezed into this. That is a difficult task, and it is not surprising that there are many discussions about the true nature of collective intentionality.
You would expect the philosophers engaged in these discussions to be familiar with Ockham’s razor, the idea that the simplest explanation is the most plausible. The assumption of the autonomous self does not seem to be that simple. Empirical research into the workings of the brain and sociological research into how we relate to our community have produced much more elegant explanations.
That more elegant explanation simply consists of turning the account around. If the experience of commonality precedes thoughts, and if the frameworks with which we perceive are formed on the premise of a shared experience, then collective intentionality is no longer a conceptual problem, but simply a way in which we look at the world.
It all makes one think of the Copernican revolution. The astronomers of that time could calculate the orbits of the celestial bodies, but only with great difficulty. Putting the sun at the center of the universe made it a lot easier to calculate all those orbits.
—
But if we remove the autonomous self from the center of our moral universe, we need to rethink some of our ethical assumptions. We cannot simply assume that actions can be assessed as the result of conscious choices made by an autonomous individual. Rather than an empirical description, it is better to see the autonomous self as a normative ideal that tells us how we can organize society, how we interact, and what conditions determine whether or not people are responsible for their decisions. More concretely, this comes down to the question of when a decision was made consciously enough to address someone as if that person were an autonomous self.
In sum, if we accept that our moral principles are based on the ideal of the autonomous self, it is important to organize our society in such a way that we make that ideal as accessible as possible, for example through the virtuous institutions that I introduced in a previous blog post: by being able to find out whether people can be held responsible for their decisions, by forgiving, encouraging and correcting people, and so on.
—
This also gives rise to the main problem with the idealistic fallacy of the autonomous self. If you cling to this idea at all costs, it becomes increasingly difficult to organize society in such a way that the ideal of the autonomous self comes within reach; after all, the ideal already seems to have been realized.
That is misleading and hazardous, because the world is becoming increasingly complex. Thanks to digitization and globalization, it is becoming ever harder to realize the ideal of the autonomous self, especially if we lack a good empirical description of how people act and can act in a world where the boundaries between jurisdictions and communities, and between what is real and what is virtual, are blurring. An idealistic fallacy makes us not only empirically confused but also morally lazy, with all the dangers that follow from that.