Super-rational ethics
I have mentioned the Evolution of Trust game before, and think it’s a wonderful introduction to the idea of how a moral sense may have evolved in the social animal man.
There is a vast literature on these types of games in game theory, regarding exactly what the pay-offs for the games are, how many times the game is to be played, whether either of those is randomized in some way, and whether or not communication between players is allowed (but that, also, is subject to gaming).
One interesting wrinkle in this research is Douglas Hofstadter’s idea of super-rationality, which has the players of the game imagine that the other player would come to the same conclusions they would in a similar position.
In the typical prisoners’ dilemma, then, where the so-called “rational” choice is to confess, the “super-rational” prisoners will both refuse to confess. (In fact, “rational” seems a poor, or at least inflated, choice of terminology here. Really what we have is that mutual confession is a stable equilibrium point in the payoff space, while mutual non-confession is an unstable one.)
But what is the reasoning behind the so-called rational versus the so-called super-rational prisoners (ignoring repeated rounds, where we already know that cooperation can pay off)?
The rational prisoner might say: if I don’t confess, I may get either 10 years (if my partner confesses) or just one year (if he doesn’t). I have no control over his actions, so suppose his choice is random; that’s an expected jail term of 5.5 years. Meanwhile, if I do confess, I may get either 5 years or 0 years, again depending on my partner’s action, which I cannot control, so that’s an expected jail term of 2.5 years. Therefore, I should confess to minimize my expected jail term.
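The prisoner’s arithmetic can be checked in a few lines of Python. The jail terms below are the ones used in this letter (illustrative, not canonical payoffs):

```python
# Jail terms in years, taken from the text above.
# Key: (my move, partner's move); "C" = confess, "S" = stay silent.
terms = {
    ("S", "C"): 10,  # I stay silent, my partner confesses
    ("S", "S"): 1,   # neither of us confesses
    ("C", "C"): 5,   # we both confess
    ("C", "S"): 0,   # I confess, my partner stays silent
}

def expected_term(my_move):
    """Expected jail term if the partner's move is modeled as a coin flip."""
    return sum(terms[(my_move, other)] for other in ("C", "S")) / 2

print(expected_term("S"))  # 5.5 years if I stay silent
print(expected_term("C"))  # 2.5 years if I confess
```

Confessing wins on expectation, which is exactly the “rational” prisoner’s conclusion.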
But the problem with this reasoning is that the prisoner’s partner (the other prisoner) is not a random number generator — he’s another thinking human.
So, if we assume that both of the prisoners are going to apply similar reasoning and get to the same answer, then the 5 years for both actually becomes an unstable equilibrium (if the responses are restricted to being the same).
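Under that symmetric restriction, the comparison collapses to the diagonal of the payoff table. A minimal sketch, reusing the jail terms from this letter:

```python
# Jail terms (years) when both prisoners are restricted to making the same move.
both_confess = 5  # the "rational" stable point
both_silent = 1   # the super-rational outcome

# A super-rational prisoner only has to compare the two symmetric outcomes:
choice = "stay silent" if both_silent < both_confess else "confess"
print(choice)  # stay silent
```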
So, while a kind of tit-for-tat morality can evolve from simple repeated interactions similar to the prisoners’ dilemma (though hopefully with some positive payoffs), I think this kind of “super-rationality,” recognizing the other as rational and imagining what their response should be, is the beginning of a rational morality.
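For the repeated case, the tit-for-tat strategy itself is only a couple of lines; here is a minimal sketch (the move labels are my own):

```python
def tit_for_tat(partner_history):
    """Cooperate on the first round; afterwards, copy the partner's last move."""
    return "cooperate" if not partner_history else partner_history[-1]

# Two tit-for-tat players paired together cooperate forever:
a_history, b_history = [], []
for _ in range(5):
    a_move = tit_for_tat(b_history)  # compute both moves before recording them
    b_move = tit_for_tat(a_history)
    a_history.append(a_move)
    b_history.append(b_move)
print(a_history)  # ['cooperate', 'cooperate', 'cooperate', 'cooperate', 'cooperate']
```

Against a defector, the same rule retaliates on the very next round, which is what makes the strategy stable in repeated play.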
Note that this does not mean that all interactions between super-rational beings will be peaceful.
In the case of a positive sum game, where both players benefit from cooperation, then they should cooperate.
But even super-rational beings may be forced into situations where they are playing an asymmetric or zero-sum or even negative sum game.
In fact, most “games” that we play in real life are asymmetric. When I pay €1 for a coffee, that is evidence that I value the coffee more than the money and the bar values the money more than the coffee. The two valuation gaps are almost certainly not equal, but this doesn’t prevent us from cooperating.
Consider a negative sum game, though: a lion and a zebra on the savanna. If the lion doesn’t eat the zebra, the lion will die; but, clearly, if the zebra is eaten, the zebra dies. (This game is also asymmetric, but it is easy to understand.) Now, once we have reached the age of reason, we don’t condemn the lion for eating the zebra, but neither do we expect the zebra to lie down and give itself to the lion to be eaten.
Similar difficulties may arise, for libertarians at least, regarding property rights in so-called “lifeboat” situations. A typical example: after a shipwreck, two survivors spy a piece of wood that can keep exactly one person afloat until he reaches shore or is rescued, while the other is doomed to drown. Libertarian ethics would say that whoever reaches the wood first justly owns it. But can we condemn the latecomer if he fights for the wood in this negative sum situation?
Consider also the hypothetical situation of the vampire that must drink human blood to survive. If we cannot condemn the lion, how can we condemn the vampire? If there is literally no other known way for him to survive other than to kill humans, how could we call him evil, at least from his point of view, for doing so? And yet, how could he blame us for endeavoring to kill him?
But then, what is the real world? Are we, or at least some people, often placed in negative sum games? Or do we, or some people, just think they are? Or are some people just not super-rational?
Best,
ihaphleas