When taking others’ preferences into account, we will often want to idealize them rather than take them too literally. Consider the following example. You hold a glass of transparent liquid in your hand. A woman walks by, says that she is very thirsty, and asks to drink from your glass. What she doesn’t know, however, is that the liquid in the glass is (for some reason not relevant to this example) poisoned. Should you let her drink? Most people would say you should not. While she does desire to drink from the glass, this desire would probably disappear upon learning what the glass contains. One might therefore say that her object-level preference is to drink from the glass, while her idealized preference is not to drink from it. There is not much literature on preference idealization, as far as I know, but if you’re not already familiar with the idea, consider looking into “Coherent Extrapolated Volition”.
Preference idealization is not always as easy as inferring that someone doesn’t want to drink poison. In this post, I will discuss a particular sub-problem: accounting for cognitive biases, i.e. systematic mistakes in our thinking, as they pertain to our moral judgments. As we will see, however, the line between biases and genuine moral judgments is not always clear.
Specifically, we look at cognitive biases that people exhibit in non-moral decisions, where their status as mistakes to be corrected is much less controversial, but which can also explain certain ethical intuitions. Offering such an error theory of a moral intuition, i.e. an explanation of how people could erroneously arrive at that judgment, calls the intuition into question. Defenders of the intuition can respond that even if the bias explains the genesis of the moral judgment, they would nonetheless stand by it. After all, the existence of all our moral positions can be explained by non-moral facts about the world – “explaining is not explaining away”. Consider the following examples.
Omission bias: People judge the consequences of inaction as less severe than those of action. This is clearly a bias in some cases, especially non-moral ones. For example, losing $1,000 by not responding to your bank in time is just as bad as losing $1,000 by throwing the money out of the window. A business person who judges these two equivalent losses as equally bad will, ceteris paribus, be more successful. Nonetheless, most people distinguish between act and omission in cases like the fat man trolley problem.
Scope neglect: The scope or size of something often has little or no effect on people’s thinking when it should. For example, when three groups of people were asked what they would pay for interventions that would affect 2,000, 20,000, or 200,000 birds, respectively, they were willing to pay roughly the same amount of money irrespective of the number of birds. While scope neglect seems clearly wrong in this (moral) decision, it is less clearly so in other areas. For example, is a flourishing posthuman civilization with 2 trillion inhabitants really twice as good as one with 1 trillion? It is not clear to me whether answering “no” should be regarded as a judgment clouded by scope neglect (caused, e.g., by our inability to imagine the two civilizations in question) or as a moral judgment that is to be accepted.
Contrast effect (also see decoy effect, social comparison bias, Ariely on relativity, mere subtraction paradox, Less-is-better effect): Consider the following market of computer hard drives, from which you are to choose one.
| Hard drive model | Model 1 | Model 2 | Model 3 (decoy) |
| --- | --- | --- | --- |
| Price | $80 | $120 | $130 |
| Capacity | 250 GB | 500 GB | 360 GB |
Generally, one wants to spend as little money as possible while maximizing capacity. In the absence of model 3, the decoy, people may be undecided between models 1 and 2. However, when model 3 is introduced into the market, it provides a new reference point. Model 2 is better than model 3 in all regards, which increases its attractiveness, even relative to model 1. That is, models 1 and 2 are judged by how they compare with model 3 rather than by their own features. The effect clearly exposes an instance of irrationality: the existence of model 3 doesn’t affect how model 1 compares with model 2. Applied to ethical evaluation, however, the same pattern calls into question a firmly held intrinsic moral preference for social equality and fairness. Proponents of fairness seem to assess a person’s situation by comparing it to that of Bill Gates rather than judging each person’s situation separately. Similar to how the overpriced decoy changes our evaluation of the other products, our judgments of a person’s well-being, wealth, status, etc. may be seen as irrationally depending on the well-being, wealth, status, etc. of others.
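The sense in which model 2 “beats” model 3 can be made precise as Pareto dominance: no worse on every criterion and strictly better on at least one. Here is a minimal sketch of that check, with the option names and numbers taken from the table above (the function names are my own, not standard terminology from any library):

```python
# Toy sketch: which hard drive options are Pareto-dominated?
options = {
    "Model 1": {"price": 80, "capacity_gb": 250},
    "Model 2": {"price": 120, "capacity_gb": 500},
    "Model 3": {"price": 130, "capacity_gb": 360},
}

def dominates(a, b):
    """a dominates b if a is no worse on both criteria
    (lower or equal price, higher or equal capacity) and the
    two options are not identical."""
    return (a["price"] <= b["price"]
            and a["capacity_gb"] >= b["capacity_gb"]
            and a != b)

# Collect every option that some other option dominates.
dominated = {name for name in options
             for other in options.values()
             if dominates(other, options[name])}

print(dominated)  # only Model 3 is dominated (by Model 2)
```

The point of the decoy effect is that a rational chooser should simply discard the dominated model 3 and then compare models 1 and 2 on their merits; empirically, the mere presence of the dominated option shifts the comparison instead.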
Other examples include peak-end rule/extension neglect/evaluation by moments and average utilitarianism; negativity bias and caring more about suffering than about happiness; psychological distance and person-affecting views; status-quo bias and various population ethical views (person-affecting views, the belief that most sentient beings that already exist have lives worth living); moral credential effect; appeal to nature and social Darwinism/normative evolutionary ethics.
Acknowledgment: This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).
Armstrong and Mindermann make a similar point in their paper “Impossibility of deducing preferences and rationality from human policy” ( https://arxiv.org/pdf/1712.05812.pdf ). Assume that we want to infer a human’s preferences from behavior. If the human were Bayes-rational with a known prior, doing so would be possible in principle. But in reality, humans are not Bayes-rational. Instead, they use some other planning algorithm that might be biased in systematic ways. Let’s say we model the human as a pair of a planning algorithm and a utility function (or some other form of preferences). Any such pair makes predictions about the human’s behavior. Unfortunately, for any such pair, there is an equivalence class of pairs with different preferences that make the same predictions (Theorem 1 in the paper). Essentially, this is because preferences and biases can be moved back and forth between the planning algorithm and the utility function. E.g., a model of an agent as caring about both apples and pears and planning rationally makes the same predictions as a model of the agent as caring only about apples but using a planning algorithm that is biased toward eating pears. As the authors note, this problem can in practice often be avoided by making some additional assumptions. For instance, one could give some weight to what humans describe their preferences to be. But my post argues that even with the strong normative assumptions that humans usually make, some ambiguities remain.
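The apples-and-pears ambiguity can be sketched in a few lines of code. This is a toy illustration of the idea, not a construction from the paper; the planner functions and the specific utility numbers are invented for the example:

```python
# Toy sketch: two different (planner, utility) pairs that predict
# the same behavior, so behavior alone cannot tell them apart.

def rational_planner(utility):
    # An unbiased planner: just pick the action with the highest utility.
    return max(utility, key=utility.get)

def pear_biased_planner(utility):
    # A systematically biased planner: adds a fixed bonus to "pear"
    # before choosing, regardless of the agent's actual utilities.
    adjusted = {a: u + (2.0 if a == "pear" else 0.0)
                for a, u in utility.items()}
    return max(adjusted, key=adjusted.get)

# Model A: the agent cares about both apples and pears (pears more)
# and plans rationally.
policy_a = rational_planner({"apple": 1.0, "pear": 2.0})

# Model B: the agent cares only about apples (pears worth nothing)
# but uses a planner biased toward pears.
policy_b = pear_biased_planner({"apple": 1.0, "pear": 0.0})

print(policy_a, policy_b)  # both models predict the agent eats a pear
```

Since both models output the same action, an observer who only sees behavior has no way to decide whether the pear-eating reflects a genuine preference or a planning bias; that is the informal content of the paper’s Theorem 1.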