A survey of polls on Newcomb’s problem

One classic story about Newcomb’s problem is that, at least initially, people one-box and two-box in roughly equal numbers (and that everyone is confident in their position). To find out whether this is true, and what exact percentage of people would one-box, I conducted a meta-survey of existing polls of people’s opinions on Newcomb’s problem.

The surveys I found are listed in the following table:

I deliberately included even surveys with tiny sample sizes to test whether the results from the larger surveys are robust or whether they depend on the specifics of how the data were obtained. For example, the description of Newcomb’s problem in the Guardian survey contained a paragraph on why one should one-box (written by Arif Ahmed, author of Evidence, Decision and Causality) and a paragraph on why one should two-box (by David Edmonds). Perhaps the persuasiveness of these arguments influenced the result of the survey?

Looking at all the polls together, it seems that the picture is at least somewhat consistent. The two largest surveys of non-professionals both give one-boxing almost the same small edge. The other results diverge more, but some can be easily explained. For example, decision theory is a commonly discussed topic on LessWrong with some of the opinion leaders of the community (including founder Eliezer Yudkowsky) endorsing one-boxing. It is therefore not surprising that opinions on LessWrong have converged more than elsewhere. Considering the low sample sizes, the other smaller surveys of non-professionals also seem reasonably consistent with the impression that one-boxing is only slightly more common than two-boxing.

The surveys also show that, as has often been remarked, there is a significant difference in opinion between the general population / “amateur philosophers” on the one hand and professional philosophers / decision theorists on the other (though the consensus among decision theorists is not nearly as strong as on LessWrong).

Complications in evaluating neglectedness

Neglectedness (or crowdedness) is a heuristic that effective altruists use to assess how much impact they could have in a specific cause area. It is usually combined with scale (a.k.a. importance) and tractability (a.k.a. solvability), which together are meant to approximate expected value. (In fact, under certain idealized definitions of the three factors, multiplying them is equivalent to expected value. However, this removes the heuristic nature of these factors and probably does not describe how people typically apply them.) For introductions and thoughts on the framework as well as neglectedness in particular see:

One reason why the neglectedness heuristic and the framework in general are so popular is that they are much easier to apply than explicit cost-effectiveness or expected value calculations. In this post, I will argue that evaluating neglectedness (which is usually seen as the most heuristic and most easily evaluated part of the framework) is actually quite complicated. This is in part to make people more aware of issues that are sometimes not taken into account at all and often only implicitly. In some cases, it may also be an argument against using the heuristic at all. Presumably, most of the following considerations won’t surprise many practitioners. Nonetheless, it appears useful to write them down, which, to my knowledge, hasn’t been done before.

Neglectedness and diminishing returns

There are a few different definitions of neglectedness. For example, consider the following three:

  1. “If we add more resources to the cause, we can expect more promising interventions to be carried out.” (source)
  2. You care about a cause much more than the rest of society. (source)
  3. “How many people, or dollars, are currently being dedicated to solving the problem?” (source)

The first one is quite close to expected value-type calculations and so it is quite clear why it is important. The second and third are more concrete and easier to measure but ultimately only relevant because they are proxies of the first. If society is already investing a lot into a cause, then the most promising interventions in that cause area are already taken up and only less effective ones remain.

Because the second and, even more so, the third are easier to measure, I expect that, in practice, most people use these two when they evaluate neglectedness. Incidentally, these definitions also fit the terms “neglectedness” and “crowdedness” much better. I will argue that neglectedness in the second and third senses has to be translated into neglectedness in the first sense and that this translation is difficult. Specifically, I will argue that the diminishing returns curves on which the connection between already invested resources and the value of the marginal dollar is based can assume different scales and shapes that have to be taken into account.

A standard diminishing return curve may look roughly like this:


The x-axis represents the amount of resources invested into some intervention or cause area, the y-axis represents the returns of that investment. The derivative of the returns (i.e., the marginal returns) decreases, potentially in inverse proportion to the cumulative investment.

Even if returns diminish in a way similar to that shape, there is still the question of the scale of that graph (not to be confused with the scale/importance of the cause area), i.e. whether values on the x-axis are in the thousands, millions or billions. In general, returns probably diminish more slowly in cause areas that are in some sense large and uniform. Take the global fight against malaria. Intervening in some areas is more effective than in others. For example, it is more effective in areas where malaria is more common, or where it is easier to, say, provide mosquito nets, etc. However, given how widespread malaria is (about 300 million cases in 2015), I would expect that there is a relatively large number of areas almost tied for the most effective places to fight malaria. Consequently, I would guess that once the most effective intervention is to distribute mosquito nets, even hundreds of millions of dollars do not diminish returns all that much.

Other interventions have much less room for funding and thus returns diminish much more quickly. For example, the returns of helping some specific person will usually diminish way before investing, say, a billion dollars.
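This difference in scale can be sketched with a toy saturating return curve. The functional form and all numbers below are my own illustrative assumptions, not something the post commits to:

```python
import math

def marginal_return(invested, scale):
    """Marginal value of the next dollar under a toy saturating
    returns curve R(x) = scale * (1 - exp(-x / scale)).
    The first dollar is worth ~1 either way, but returns diminish
    once cumulative investment approaches `scale`."""
    return math.exp(-invested / scale)

# Large, uniform cause area (e.g. the global fight against malaria):
# after $100M of cumulative investment, the marginal dollar still
# retains most of its initial value.
big_area = marginal_return(100e6, scale=1e9)

# Small project (e.g. helping one specific person): long before
# $100M, marginal returns have fallen to essentially zero.
small_project = marginal_return(100e6, scale=1e5)
```

The same cumulative investment thus leaves the marginal dollar nearly intact in one case and worthless in the other, which is why the raw amount invested, taken alone, says little about how far returns have diminished.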

If you judge neglectedness only based on the raw amount of resources invested into solving a problem (as suggested by 80,000 Hours), then this may make small cause areas look a lot more promising than they actually are. Depending on the exact definitions, this remains the case if you combine neglectedness with scale and tractability. For example, consider the following two interventions:

  1. The global fight against malaria.
  2. The fight against malaria in some randomly selected subset of 1/100th of the global area or population.

The two should usually be roughly equally promising. (Perhaps 1 is a bit more promising because every intervention contained in 2 is also in 1. On the other hand, that would make “solve everything” hard to beat as an intervention. Of course, 2 can also be more or less promising if an unusual 1/100th is chosen.) But because the raw amount of resources invested into 1 is presumably 100 times as big as the amount invested into 2, 2 would, on a naive view, be regarded as much more neglected than 1. The product of scale and tractability is the same for 1 and 2. (1 is a 100 times bigger problem, but solving it in its entirety is also roughly 100 times more difficult, though I presume that some definitions of the framework judge this differently. In general, it seems fine to move considerations out of neglectedness into tractability and scope as long as they are not double-counted or forgotten.) Thus, the overall product of the three is greater for 2, which appears to be wrong. If, on the other hand, neglectedness denotes the extent to which returns have diminished (the first of the three definitions given at the beginning of this section), then the neglectedness of 1 and 2 will usually be roughly the same.
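A toy calculation makes the mismatch explicit. All numbers are invented for illustration; they only encode that intervention 2 is 1/100th the size of intervention 1 in scale, difficulty, and resources received:

```python
# Naive framework score, with neglectedness measured as the inverse
# of raw resources invested (units are arbitrary).
def naive_score(scale, tractability, resources):
    return scale * tractability * (1.0 / resources)

global_fight = naive_score(scale=100, tractability=0.01, resources=100)
local_fight  = naive_score(scale=1,   tractability=1.0,  resources=1)

# scale * tractability is identical for both (the sub-problem is 100x
# smaller but roughly 100x easier to solve in full), yet the naive
# product rates the restricted fight 100x higher, purely because
# fewer raw resources go into it.
```

Under the first definition of neglectedness (how far returns have diminished), both interventions would instead receive roughly the same score.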

Besides the scale of the return curve, the shape can also vary. In fact, I think many interventions initially face increasing returns from learning/research, creating economies of scale, specialization within the cause area, etc. For example, in most cause areas, the first $10,000 are probably invested into prioritization, organizing, or (potentially symbolic) interventions that later turn out to be suboptimal. So, in practice return curves may actually look more like the following:


This adds another piece of information (besides scale) that needs to be taken into account to translate the amount of invested resources into how much returns have diminished: how and when do returns start to diminish?

There are many other return curve shapes that may be less common but mess up the neglectedness framework more. For example, some projects produce some large amount of value if they succeed but produce close to no value if they fail. Thus, the (actual not expected) return curve for such projects may look like this:


Examples may include developing vaccines, colonizing Mars or finding cause X.

If such a cause area is already relatively crowded in the third (and second) sense, that may make it less “crowded” in the first sense. For example, if nobody had invested money into finding a vaccine against malaria (and you don’t expect others to invest money into it in the future either, see below), then this cause area is maximally neglected in the second and third senses. However, given how expensive clinical trials are, the marginal returns of donating a few thousand dollars to it are essentially zero. If, on the other hand, others have already contributed enough money to get a research project off the ground at all, then the marginal returns are higher, because there is at least some chance that your money will enable a trial in which a vaccine is found. (Remember that we don’t know the exact shape of the return curve, so we don’t know when the successful trial is funded.)
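The all-or-nothing case can be sketched as a step-shaped (actual, not expected) return curve. The threshold and values below are hypothetical:

```python
SUCCESS_VALUE = 1_000_000  # value if the trial succeeds (made up)
TRIAL_COST = 500_000       # funding needed before any trial can run (made up)

def realized_value(total_funding):
    """Step-shaped return curve: zero value until cumulative
    funding clears the trial threshold."""
    return SUCCESS_VALUE if total_funding >= TRIAL_COST else 0

def marginal_value(existing_funding, donation=10_000):
    return (realized_value(existing_funding + donation)
            - realized_value(existing_funding))

# Maximally "neglected" field (nothing invested yet): the donation
# achieves nothing, since it cannot fund a trial on its own.
empty_field = marginal_value(0)

# Already crowded field, just short of the threshold: the same
# donation tips the project over and captures the full value.
almost_funded = marginal_value(495_000)
```

In reality we don’t know where the threshold sits, so the expected marginal value is smoother than this step, but the qualitative point stands: more crowded in the second and third senses can mean less crowded in the first.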

I would like to emphasize that the point of this section is not so much that people apply neglectedness incorrectly by merely looking at the amount of resources invested into a cause and not thinking about implications in terms of diminishing returns at all. Instead, I suspect that most people implicitly translate into diminishing returns and take the kind of the project into account. However, it may be beneficial if people were more aware of this issue and how it makes evaluating neglectedness more difficult.

Future resources

When estimating the neglectedness of a cause, we need to take into account not only people who are currently working on the problem (as a literal reading of 80,000 Hours’ definition suggests), but also people who have worked on it in the past and people who will work on it in the future. If a lot of people have worked on a problem in the past, then this indicates that the low-hanging fruit has already been picked. Thus, even if nobody is working in the area anymore, marginal returns have probably diminished a lot. I can’t think of a good example where this is a decisive consideration, because if an area has been given up on (such that there is a big difference between past and current attention), it will usually score low in tractability anyway. Perhaps one example is the search for new ways to organize society, government and economy. Many resources are still invested into thinking about this topic, so even if we just consider resources invested today, it would not do well in terms of neglectedness. However, if we consider that people have thought about and “experimented” in this area for thousands of years, it appears to be even more crowded.

We also have to take future people and resources into account when evaluating neglectedness. Of course, future people cannot “take away” the most promising intervention in the way that current and past people can. However, their existence causes the top interventions to be performed anyway. For example, let’s say that there are 1000 equally costly possible interventions in an area, generating 1000, 999, 998, …, 1 “utils” (or lives saved, years of suffering averted, etc.), respectively. Each intervention can only be performed once. The best 100 interventions have already been taken away by past people. Thus, if you have money for one intervention, you can now only generate 900 utils. But if you know that future people will engage in 300 further interventions in that area, then whether you intervene or not actually only makes a difference of 600 utils. All interventions besides the one generating 600 utils would have been executed anyway. (In Why Charities Don’t Differ Astronomically in Cost-Effectiveness, Brian Tomasik makes a similar point.)
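The toy calculation in the example above can be written out directly:

```python
# 1000 one-shot interventions worth 1000, 999, ..., 1 utils each,
# as in the example in the text.
interventions = list(range(1000, 0, -1))

past_taken = 100    # the best 100 were already carried out
future_taken = 300  # future people will carry out 300 more

remaining = interventions[past_taken:]

# Ignoring future people: you get the best remaining intervention.
naive_value = remaining[0]

# Counterfactually: the top 300 remaining interventions happen anyway,
# so your money only makes a difference of the best intervention
# that would otherwise have been left undone.
counterfactual_value = remaining[future_taken]
```

The naive value is 900 utils, the counterfactual value only 600: a third of the apparent impact disappears once future actors are accounted for.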

The number of future people who would counterfactually engage in some cause area is an important consideration in many cause areas considered by effective altruists. In general, if a cause area is neglected by current and past people, the possibility of future people engaging in an intervention creates a lot of variance in neglectedness evaluations. If recently 10 people started working on an area, then it is very uncertain how much attention it will have in the future. And if it will receive a lot more attention regardless of our effort, then the neglectedness score may change by a factor of 100. The future resources that will go into long-established (and thus already less neglected) cause areas, on the other hand, are easier to predict and can’t make as much of a difference.

One example where future people and resources are an important consideration is AI safety. People often state that AI safety is a highly neglected cause area, presumably under the assumption that this should be completely obvious given how few people currently work in the area. At least, it is rare that the possibility of future people going into AI safety is considered explicitly. Langan-Dathi even writes that “due to [AI safety] being a recent development it is also highly neglected.” I, on the other hand, would argue that being a recent development only makes a cause look highly neglected if one doesn’t consider future people. (Again, Brian makes almost the same point regarding AI safety.)

Overall, I think many questions in AI safety should nonetheless be regarded as relatively neglected, because there is a good chance that future people won’t recognize them as important fast enough. That said, I think some AI safety problems will become relevant in regular AI capability research or near-term applications (such as self-driving cars). For example, I expect that some of Amodei et al.’s (2016) “Concrete Problems in AI Safety” will be (or would have been) picked up anyway. Research in these areas of AI safety is thus potentially less intrinsically valuable, although it may still have a lot of instrumental benefits that make it worthwhile to pursue.

My impression is that neglecting future people in evaluating neglectedness is more common than forgetting to translate from invested resources into diminishing marginal returns. Nonetheless, in the context of this post the point of this section is that considering future resources makes neglectedness more difficult to evaluate. Obviously, it is hard to foresee how many resources will be invested into a project in the future. Because the most promising areas will not yet have received a lot of attention, the question of their neglectedness will be dominated by how many resources they will receive in the future. Thus, in the most important cases, neglectedness is hard to estimate.

What should count as “the same cause area”?

Operationalizing neglectedness involves, at a minimum, estimating the amount of (past, current and future) resources invested into a cause area. But which resources count as going into the same cause area? For example, if the cause area is malaria, should you count people who work on global poverty as working in the same cause area?

Because the number of people working in an area is only relevant as a proxy for how much marginal returns have diminished, the answer seems to be: Count people (and resources) to the extent that their activities diminish the marginal returns in the cause area in question. Thus, resources invested into alleviating global poverty have to be taken into account, because if people’s income increases, this will allow them to take measures against malaria as well.

As another example, consider the cause area of advocating some moral view X (say effective altruism). If only a few people currently promote that view, then one may naively view advocating X as neglected. However, if neglectedness is intended to be a proxy for diminishing returns, then it seems that we also have to take into account moral advocates of other views. Because most people regularly engage in some form of moral advocacy (e.g., when they talk about morality with their friends and children), many people already hold moral views that our advocacy has to compete with. Thus, we may want to take these other moral advocates into account for evaluating neglectedness. That said, if we apply neglectedness together with tractability and scope, it seems reasonable to include such considerations in either tractability or neglectedness. (As Rob Wiblin remarks, the three factors blur heavily into each other. In particular, neglectedness can make an intervention more tractable. As Wiblin notes, we should take care not to double-count arguments. We also shouldn’t forget to count arguments at all, though.)


I am indebted to Tobias Baumann for valuable comments.


Summary of Achen and Bartels’s Democracy for Realists

I just finished binge-reading Achen and Bartels’s great book Democracy for Realists and decided to write up a summary and a few comments to aid my memory and share some of the most interesting insights.

The folk theory of democracy

(Since chapter 1 contains little of interest besides giving a foretaste of later chapters, I will start with the content of chapter 2.) The “folk theory” of democracy is roughly the following:

Voters have a set of informed policy preferences (e.g., on abortion, social security, climate change, taxes, etc.) and vote for the candidate or party whose policy preferences most resemble their own (similar to how vote advice applications operate). That is, people vote based on the issues. Parties are then assumed to cater to the voters’ preferences to maximize their chance of getting elected. This way the people get what they want (as is guaranteed, under certain theoretical assumptions, by the median voter theorem).

Achen and Bartel argue that this folk theory of democracy does not describe what is happening in real-world democracies:

  • Voters are often badly informed: “Michael Delli Carpini and Scott Keeter (1996) surveyed responses to hundreds of specific factual questions in U.S. opinion surveys over the preceding 50 years to provide an authoritative summary of What Americans Know about Politics and Why It Matters. In 1952, Delli Carpini and Keeter found, only 44% of Americans could name at least one branch of government. In 1972, only 22% knew something about Watergate. In 1985, only 59% knew whether their own state’s governor was a Democrat or a Republican. In 1986, only 49% knew which one nation in the world had used nuclear weapons (Delli Carpini and Keeter 1996, 70, 81, 74, 84). Delli Carpini and Keeter (1996, 270) concluded from these and scores of similar findings that ‘large numbers of American citizens are woefully underinformed and that overall levels of knowledge are modest at best.’” (p. 36f.)
    • Interestingly, the increasing availability of information has done little to change this. “[I]t is striking how little seems to have changed in the decades since survey research began to shed systematic light on the nature of public opinion. Changes in the structure of the mass media have allowed people with an uncommon taste for public affairs to find an unprecedented quantity and variety of political news; but they have also allowed people with more typical tastes to abandon traditional newspapers and television news for round-the-clock sports, pet tricks, or pornography, producing an increase in the variance of political information levels but no change in the average level of political information (Baum and Kernell 1999; Prior 2007). Similarly, while formal education remains a strong predictor of individuals’ knowledge about politics, substantial increases in American educational attainment have produced little apparent increase in overall levels of political knowledge. When Delli Carpini and Keeter (1996, 17) compared responses to scores of factual questions asked repeatedly in opinion surveys over the past half century, they found that ‘the public’s level of political knowledge is little different today than it was fifty years ago.’” (p. 37)
    • This lack of knowledge seems to matter for policy preferences – uninformed voters cannot use heuristics to mimic the choices of informed voters. “[S]ome scholars have […] asked whether uninformed citizens – using whatever ‘information shortcuts’ are available to them – manage to mimic the preferences and choices of better informed people. Alas, statistical analyses of the impact of political information on policy preferences have produced ample evidence of substantial divergences between the preferences of relatively uninformed and better informed citizens (Delli Carpini and Keeter 1996, chap. 6; Althaus 1998). Similarly, when ordinary people are exposed to intensive political education and conversation on specific policy issues, they often change their mind (Luskin, Fishkin, and Jowell 2002; Sturgis 2003). Parallel analyses of voting behavior have likewise found that uninformed citizens cast significantly different votes than those who were better informed. For example, Bartels (1996) estimated that actual vote choices fell about halfway between what they would have been if voters had been fully informed and what they would have been if everyone had picked candidates by flipping coins.” (p. 39f.)
  • Wisdom of the crowd-type arguments often don’t apply in politics because the opinions of different people are often biased in the same direction: “Optimism about the competence of democratic electorates has often been bolstered (at least among political scientists) by appeals to what Converse (1990) dubbed the ‘miracle of aggregation’ – an idea formalized by the Marquis de Condorcet more than 200 years ago and forcefully argued with empirical evidence by Benjamin Page and Robert Shapiro (1992). Condorcet demonstrated mathematically that if several jurors make independent judgments of a suspect’s guilt or innocence, a majority are quite likely to judge correctly even if every individual juror is only modestly more likely than chance to reach the correct conclusion.

      Applied to electoral politics, Condorcet’s logic suggests that the electorate as a whole may be much wiser than any individual voter. The crucial problem with this mathematically elegant argument is that it does not work very well in practice. Real voters’ errors are quite unlikely to be statistically independent, as Condorcet’s logic requires. When thousands or millions of voters misconstrue the same relevant fact or are swayed by the same vivid campaign ad, no amount of aggregation will produce the requisite miracle; individual voters’ ‘errors’ will not cancel out in the overall election outcome, especially when they are based on constricted flows of information (Page and Shapiro 1992, chaps. 5, 9). If an incumbent government censors or distorts information regarding foreign policy or national security, the resulting errors in citizens’ judgments obviously will not be random. Less obviously, even unintentional errors by politically neutral purveyors of information may significantly distort collective judgment, as when statistical agencies or the news media overstate or understate the strength of the economy in the run-up to an election (Hetherington 1996).” (p.40f.)
  • Voters don’t have many strong policy preferences.
    • Their stated preferences are sensitive to framing effects. Some examples from p. 30f:
      “[E]xpressed political attitudes can be remarkably sensitive to seemingly innocuous variations in question wording or context. For example, 63% to 65% of Americans in the mid-1980s said that the federal government was spending too little on “assistance to the poor”; but only 20% to 25% said that it was spending too little on “welfare” (Rasinski 1989, 391). “Welfare” clearly had deeply negative connotations for many Americans, probably because it stimulated rather different mental images than “assistance to the poor” (Gilens 1999). Would additional federal spending in this domain have reflected the will of the majority, or not? We can suggest no sensible way to answer that question. […] [I]n three separate experiments conducted in the mid-1970s, almost half of Americans said they would “not allow” a communist to give a speech, while only about one-fourth said they would “forbid” him or her from doing so (Schuman and Presser 1981, 277). In the weeks leading up to the 1991 Gulf War, almost two-thirds of Americans were willing to “use military force,” but fewer than half were willing to “engage in combat,” and fewer than 30% were willing to “go to war” (Mueller 1994, 30).
    • Many voters have no opinions on many current issues (p. 31f.).
    • People’s policy preferences are remarkably inconsistent over time, with correlations of just 0.3 to 0.5 between stated policy preferences measured on two occasions two years apart.
  • Many voters don’t know the positions of the competing parties on the issues, which makes it hard for them to vote for a party based on their policy preferences (p. 32).
    • Lau and Redlawsk (1997; 2006) “found that about 70% of voters, on average, chose the candidate who best matched their own expressed preferences.” (p. 40)
  • If one asks people to place their own policy positions and those of parties on a seven-point issue scale, then issue proximity and vote choice will correlate. But this can be explained by more than one set of causal relationships. Of course, the naive interpretation is that people form a policy opinion and learn about the candidates’ opinions independently. Based on those, they decide which party to vote for. But this model of policy-oriented evaluation is only one possible explanation of the observed correlation between perceived issue proximity and voting behavior. Another is persuasion: Voters already prefer some party, know that party’s policies and then adjust their opinions to better match that party’s opinion. The third is projection: People already know which party to vote for, have some opinions on policy but don’t actually know what the party stands for. They then project their policy positions onto those of the party. (p. 42) Achen and Bartels report evidence showing that policy-oriented evaluation is only a small contributor to the correlation between perceived issue proximity and vote choices. (pp. 42-45)
  • They argue that, empirically, elected candidates often don’t represent the median voter. (p. 45-49)
  • To my surprise, they use Arrow’s impossibility theorem to argue against the feasibility of fair preference aggregation (pp. 26ff.). (See here for a nice video introduction.) Somehow, I always had the impression that Arrow’s impossibility theorem wouldn’t make a difference in practice. (As Arrow himself said, “Most [voting] systems are not going to work badly all of the time. All I proved is that all can work badly at times.”)
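The “miracle of aggregation” and its failure under correlated errors, discussed in the wisdom-of-the-crowd point above, can be illustrated with a small simulation. All parameters here are my own illustrative choices:

```python
import random

def majority_correct_rate(n_voters=1001, p_correct=0.55,
                          shared_shock_prob=0.0, p_under_shock=0.2,
                          trials=2000, seed=0):
    """Fraction of simulated elections in which a majority of voters
    judges correctly. With probability `shared_shock_prob`, a common
    shock (a misleading campaign ad, distorted information) lowers
    every voter's accuracy to `p_under_shock` at once."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        p = p_under_shock if rng.random() < shared_shock_prob else p_correct
        correct = sum(rng.random() < p for _ in range(n_voters))
        wins += correct > n_voters / 2
    return wins / trials

# Independent, slightly-better-than-chance voters: Condorcet's logic
# works and the majority is almost always right.
independent = majority_correct_rate()

# Shared errors 30% of the time: aggregation no longer rescues the
# electorate; the majority fails in roughly those elections.
correlated = majority_correct_rate(shared_shock_prob=0.3)
```

With independent errors the majority is right in nearly every simulated election; with a correlated shock in 30% of elections, the majority is wrong roughly that often, no matter how many voters are aggregated.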

A weaker form of the folk theory is that, while voters may not know specific issues well enough to have an opinion, they do have some ideological preference (such as liberalism or conservatism). But this fails for similar reasons:

  • “Converse […] scrutinized respondents’ answers to open-ended questions about political parties and candidates for evidence that they understood and spontaneously employed the ideological concepts at the core of elite political discourse. He found that about 3% of voters were clearly classifiable as “ideologues,” with another 12% qualifying as “near-ideologues”; the vast majority of voters (and an even larger proportion of nonvoters) seemed to think about parties and candidates in terms of group interests or the “nature of the times,” or in ways that conveyed “no shred of policy significance whatever” (Converse 1964, 217–218; also Campbell et al. 1960, chap. 10).”
  • Correlations between different policy views are only modest. This is not necessarily a bad thing in itself, but it is evidence against ideological voting. (If people fell into distinct ideological groups like liberals, conservatives, etc., one would observe such correlations. E.g., one may expect strong correlations between positions on foreign and domestic policy, given that there are such correlations among political parties.) (p. 32f.)
    • This appears to conflict to some extent with how Haidt’s moral foundations theory characterizes the differences between liberals and conservatives. According to Haidt, conservatives form a cluster of people who care much more about loyalty, authority and sanctity than liberals. This predicts correlations between positions on topics in these domains, e.g. gay marriage and immigration (assuming that people’s loyalty, authority and sanctity intuitions tend to have similar content). However, it doesn’t seem to predict correlations between views on, say, aid to education and isolationism, which were the type of variables asked about in the study by Converse (1964) that Achen and Bartels refer to.
  • “Even in France, the presumed home of ideological politics, Converse and Pierce (1986, chap. 4) found that most voters did not understand political ‘left’ and ‘right.’ When citizens do understand the terms, they may still be uncertain or confused about where the parties stand on the left-right dimension (Butler and Stokes 1974, 323–337). Perhaps as a result, their partisan loyalties and issue preferences are often badly misaligned. In a 1968 survey in Italy, for example, 50% of those who identified with the right-wing Monarchist party took left-wing policy positions (Barnes 1971, 170). […] [C]areful recent studies have repeatedly turned up similar findings. For example, Elizabeth Zechmeister (2006, 162) found “striking, systematic differences … both within and across the countries” in the conceptions of “left” and “right” offered by elite private college students in Mexico and Argentina, while André Blais (personal communication) found half of German voters unable to place the party called “Die Linke” – the Left – on a left-right scale.” (p. 34f.)

Direct democracy

Chapter 3 discusses direct democracy. Besides making the point that everyone seems to believe that “more democracy” is a good thing (pp. 52-60, 70), Achen and Bartels argue against a direct democracy version of the folk theory. In my view, the evidence presented in chapter 2 of the book (and the previous section of this summary) already provides strong reasons for skepticism, and I think the best case against a direct democracy folk theory is based on arguments of this sort. In line with this view, Achen and Bartels re-iterate some of those arguments, e.g. that the average Joe often adopts other people’s policy preferences rather than making up his own mind (pp. 73-76).

Most of the qualitatively new evidence presented in this chapter, on the other hand, seems quite weak to me. Much of it is aimed at showing that direct democracy has yielded bad results. For example, based on the ratings of Arthur Schlesinger Jr., the Wall Street Journal, C-SPAN and Siena College, the introduction of primary elections hasn’t increased the quality of presidents (p. 66). As they concede themselves, the data set is so small and the ratings of presidents so contentious that this evidence is not very strong at all. They also argue that direct democracy sometimes leads to transparently silly decisions, but the evidence seems essentially anecdotal to me.

Another interesting point of the section is that, in addition to potential ideological motives, politicians usually have strategic reasons to support the introduction of “more democratic” procedures:

[T]hroughout American history, debates about desirable democratic procedures have not been carried out in the abstract. They have always been entangled with struggles for substantive political advantage. In 1824, “politicos in all camps recognized” that the traditional congressional caucus system would probably nominate William Crawford; thus, “how people felt about the proper nominating method was correlated very highly indeed with which candidate they supported” (Ranney 1975, 66). In 1832, “America’s second great party reform was accomplished, not because the principle of nomination by delegate conventions won more adherents than the principle of nomination by legislative caucuses, but largely because the dominant factional interests … decided that national conventions would make things easier for them” (Ranney 1975, 69).

Similarly, Ranney (1975, 122) noted that the most influential champion of the direct primary, Robert La Follette, was inspired “to destroy boss rule at its very roots” when the Republican Party bosses of Wisconsin twice passed him over for the gubernatorial nomination. And in the early 1970s, George McGovern helped to engineer the Democratic Party’s new rules for delegate selection as cochair of the party’s McGovern-Fraser Commission, and “praised them repeatedly during his campaign for the 1972 nomination”; but less than a year later he advocated repealing some of the most significant rules changes. Asked why McGovern’s views had changed, “an aide said, ‘We were running for president then’” (Ranney 1975, 73–74).

I expect this to be a quite common phenomenon in deciding which decision process to use. E.g., when an organization decides on a decision procedure (e.g., who will make the decision, what kind of evidence is accepted as valid), members of the organization might base their position less on general principles (e.g., balance, avoidance of cognitive biases and groupthink) than on which decision process will yield their favored results in specific object-level decisions (e.g., who gets a raise, whether my preferred project is funded).

I guess processes that are instantiated for only a single decision are affected even more strongly by this problem. An example is deciding on how to do AI value loading, e.g. which idealization procedures to use.

The Retrospective Theory of Political Accountability

In chapter 4, Achen and Bartels discuss an attractive alternative to the folk theory: retrospective voting. On this view, voters decide not so much based on policy preferences but based on how well the candidates or parties have performed in the past. For example, a president under whom the economy improved may be re-elected. This theory is plausible as a descriptive theory for a number of reasons:

  • There is considerable empirical evidence that retrospective voting describes what voters actually do (ch. 5-7).
  • Retrospective voting, i.e. evaluating whether the passing term went well, is much easier than policy-based voting, i.e. deciding which candidate’s proposed policies will work better in the future (p. 91f.).

The retrospective theory also has some normative appeal:

  • It selects for good leaders (p. 98-100).
  • It incentivizes politicians to do what is best for the voters (p. 100-102).
  • To some extent it allows politicians to do what is best for the voters even if the voters disagree on what is best (p. 91).

While Achen and Bartels agree that retrospective voting is a large part of the descriptive picture, they also argue that, at least in the way it is implemented by real-world voters, “its implications for democracy are less unambiguously positive than existing literature tends to suggest”:

  • Proceeding on the theme of the ignorance of the electorate, voters’ evaluation of the past term and the current situation is unreliable (p. 92f.). For example, their perception of environmental threats does not correlate much with that of experts (p. 106), they think crime is increasing when it is in fact stable or decreasing (p. 107) and they cannot assess the state of the economy (p. 107f.).
    • Media coverage, partisan bias, popular culture, etc. often shape people’s judgments (p. 107, 138-142).
  • Voters are unable to differentiate whether bad times are an incumbent’s fault or not (p. 93). Consequently, there is some evidence that incumbents tend to be punished for shark attacks, droughts and floods (ch. 5).
  • “The theories of retrospective voting we have considered assume that voters base their choices at the polls entirely on assessments of how much the incumbent party has contributed to their own or the nation’s well-being. However, when voters have their own ideas about good policy, sensible or not, they may be tempted to vote for candidates who share those ideas, as in the spatial model of voting discussed in chapter 2. In that case incumbent politicians may face a dilemma: should they implement the policies voters want or the policies that will turn out to contribute to voters’ welfare?” (p. 109, also see pp. 108-111)
    • “[E]lected officials facing the issue of fluoridating drinking water in the 1950s and 1960s were significantly less likely to pander to their constituents’ ungrounded fears when longer terms gave them some protection from the “sudden breezes of passion” that Hamilton associated with public opinion.” (p. 110)
  • The electorate’s decisions are often based only on the most recent events, in particular the economic growth in the past year or so (cf. the peak-end rule). This not only makes their judgments worse than necessary (as they throw information away), it also sets the wrong incentives to the incumbent. Indeed, there is some evidence of a “political business cycle”, i.e. politicians attempting to maximize for growth, in particular growth of real income, in the last year of their term. (See chapter 6. Additional evidence is given in ch. 7.)
  • “Another way to examine the effectiveness of retrospective voting is to see what happens after each election. If we take seriously the notion that reelection hinges on economic competence, one implication is that we should expect to see more economic growth when the incumbent party is reelected than when it is dismissed by the voters. In the former case the incumbent party has presumably been retained because its past performance makes it a better than average bet to provide good economic management in the future. In the latter case the new administration is presumably an unknown quantity, a random draw from some underlying distribution of economic competence. A secondary implication of this logic is that future economic performance should be less variable when the incumbent party is retained, since reelected administrations are a truncated subsample of the underlying distribution of economic competence (the worst economic performers having presumably been weeded out at reelection time).” (p. 164) Based on a tiny sample (US presidential elections between 1948-2008), this does not seem to be the case. Of course, one could argue that the new administration often is not a random quantity – the parties in US presidential elections are almost always the same and the candidates have often proven themselves in previous political roles. In fact, the challenger may have a longer track record than the incumbent. For example, this may come to be the case in 2020.
  • Using a subset of the same tiny sample, they show that post-reelection economic growth is not a predictor of popular vote margin (p. 166-168). So, retrospective voting as current voters apply it doesn’t seem to work in selecting competent leaders. That said, and as Achen and Bartels acknowledge themselves (p. 168), the evidence they use is only very tentative.

Overall, the electorate’s evaluation of a candidate may be some indicator of how well they are going to perform in the future, but it is an imperfect and manipulable one.

Group loyalties and social identities

In addition to retrospective voting, Achen and Bartels tentatively propose that group loyalties and social identities play a big role in politics. Whereas the retrospection theory appears to be relatively well studied, this new theory is not yet nearly as well worked out (pp. 230f.).

It seems clear that vast parts of psychology and social psychology in particular – Achen and Bartels refer to ingroups and outgroups, Asch’s conformity experiments, cognitive dissonance, rationalization, etc. – should be a significant explanatory factor in political science. Indeed, Achen and Bartels start chapter 8 by stating that the relevance of social psychology for politics was recognized by past generations of researchers (pp. 213-222); it only became unpopular when some theories it was associated with failed (pp. 222-225).

Achen and Bartels discuss a few ways in which social groups, identities and loyalties influence voting behavior:

  • While voters’ retrospection focuses on the months leading up to the election, these short-term retrospections translate into the formation of long-term partisan loyalties. So, in a way, partisan loyalties are, in part, the accumulation of these short-term retrospections (p. 197-199).
  • Many people are loyal to one party (p. 233).
  • People adopt the political views of the groups they belong to or identify with (p. 219f., 222f., 246-, p. 314).
    • People often adopt the party loyalties of their parents (p. 233f.).
    • People adopt the views of their party (or project their views onto the party) (ch. 10). Party identification also influences one’s beliefs about factual matters. For example, when an opposing party is in office people judge the economy as worse (pp. 276-284).
  • People reject the political views of groups that they dislike (pp. 284-294).
  • People choose candidates based on what they perceive to be best for their group (p. 229).
  • Catholic voters (even ones who rarely go to church) tend to prefer Catholic candidates, even if the candidate emphasizes the separation of church and state (pp. 238-246).
  • If, say, Catholics discriminate against Jews, then Jews are much less likely to vote for a Catholic candidate or a party dominated by Catholics (p. 237f.).
  • Better-informed voters are often influenced more strongly by identity issues, presumably because they are more aware of them (pp. 284-294). For example, they are sometimes less likely than worse-informed voters to get the facts right (p. 283).
  • “When political candidates court the support of groups, they are judged in part on whether they can ‘speak our language.’ Small-business owners, union members, evangelical Christians, international corporations – each of these has a set of ongoing concerns and challenges, and a vocabulary for discussing them. Knowing those concerns, using that vocabulary, and making commitments to take them seriously is likely to be crucial for a politician to win their support (Fenno 1978).“

Unfortunately, I think that Achen and Bartels stretch the concept of identity-based voting a bit too much. The clearest example is their analysis of the case of abortion (pp. 258-266). Women tend to have more stable views on abortion than men. They are also more likely to leave the Republican party if they are pro-choice and less likely to assimilate their opinions to that of their party. Achen and Bartels’ explanation is that women’s vote is affected by their identifying as women. But I don’t see why it is necessary to bring the concept of identity into this. A much simpler explanation would be that voters are, to some extent, selfish and thus put more weight on the issues that are most relevant to them. If this counts as voting based on identity, is there any voting behavior that cannot be ascribed to identities?

I also find many of the explanations based on social identity unsatisfactory – they often don’t really explain a phenomenon. For example, Achen and Bartels argue that the partisan realignment of white southerners in the second half of the 20th century was not so much driven by racial policy issues but by white southern identity (pp. 246-258). But they don’t explain how white southern identity led people into the open arms of the Republicans. For example, was it that Republicans explicitly appealed to that identity? Or did southern opinion leaders change their mind based on policy issues?

Implications for democracy

Chapter 11 serves as a conclusion of the book. It summarizes some of the points made in earlier sections but also discusses the normative implications.

Unsurprisingly, Achen and Bartels argue against naive democratization:

[E]ffective democracy requires an appropriate balance between popular preferences and elite expertise. The point of reform should not simply be to maximize popular influence in the political process but to facilitate more effective popular influence. We need to learn to let political parties and political leaders do their jobs, too. Simple-minded attempts to thwart or control political elites through initiatives, direct primaries, and term limits will often be counterproductive. Far from empowering the citizenry, the plebiscitary implications of the folk theory have often damaged people’s real interests. (p. 303)

At the same time, they again point out that elite political judgment is often not much better than that of the worse-informed majority. In addition to being more aware of identity issues, the elites are a lot better at rationalizing, which makes them sound more rational, but often does not yield more rational opinions (p. 309-311).

Another interesting point they make is that it is usually the least-informed voters who decide who wins an election because the non-partisan swing voters tend to be relatively uninformed (p. 312, also p.32).

Achen and Bartels give some reasons why democracy might be better than its alternatives. I think the arguments, as given in the book, vary drastically in appeal, but here are all five:

  • “[E]lections generally provide authoritative, widely accepted agreement about who shall rule. In the United States, for example, even the bitterly contested 2000 presidential election – which turned on a few hundred votes in a single state and a much-criticized five-to-four Supreme Court decision – was widely accepted as legitimate. A few Democratic partisans continued to grumble that the election had been “stolen”; but the winner, George W. Bush, took office without bloodshed, or even significant protest, and public attention quickly turned to other matters.” This makes sense, although it would have been interesting to test this argument empirically. I.e., is violent power struggle more or less prevalent in democracies than in other forms of government, such as hereditary monarchies? (I would guess that it is less prevalent in democracies.)
  • “[I]n well-functioning democratic systems, parties that win office are inevitably defeated at a subsequent election. They may be defeated more or less randomly, due to droughts, floods, or untimely economic slumps, but they are defeated nonetheless. Moreover, voters seem increasingly likely to reject the incumbent party the longer it has held office, reinforcing the tendency for governmental power to change hands. This turnover is a key indicator of democratic health and stability. It implies that no one group or coalition can become entrenched in power, unlike in dictatorships or one-party states where power is often exercised persistently by a single privileged segment of society. And because the losers in each election can reasonably expect the wheel of political fortune to turn in the not-too-distant future, they are more likely to accept the outcome than to take to the streets.” (p. 317) Here it is not so clear whether this constant change is a good thing. Having the same party, group or person rule for long stretches of time ensures stability and avoids friction between consecutive administrations. It also ensures that office is held most of the time by politicians with experience. Presumably, Achen and Bartels are right in judging high turnover as beneficial, but they have little evidence to back this up.
  • “[E]lectoral competition also provides some incentives for rulers at any given moment to tolerate opposition. The notion that citizens can oppose the incumbent rulers and organize to replace them, yet remain loyal to the nation, is fundamental both to real democracy and to social harmony.” (p. 317f.) This also seems non-obvious. Perhaps the monarchist could argue that only rulers who do not have to worry about losing their position can fruitfully engage with criticism. They also have less reason to get the press under their control (although, empirically, dictators usually use their power to limit the press in ways that democratic governments cannot).
  • “[A] long tradition in political theory stemming from John Stuart Mill (1861, chap. 3) has emphasized the potential benefits of democratic citizenship for the development of human character (Pateman 1970). Empirical scholarship focusing squarely on effects of this sort is scant, but it suggests that democratic political engagement may indeed have important implications for civic competence and other virtues (Finkel 1985; 1987; Campbell 2003; Mettler 2005). Thus, participation in democratic processes may contribute to better citizenship, producing both self-reinforcing improvements in ‘civic culture’ (Almond and Verba 1963) and broader contributions to human development.” (p. 318) This may be true, but it appears to be a relatively weak consideration. Perhaps, the monarchist could counter that doing away with elections saves people more time than the improvements in “civic culture” are worth. They may not be as virtuous, but maybe they can nonetheless spend more time with their family and friends or create more economic value.
  • “Finally, reelection-seeking politicians in well-functioning democracies will strive to avoid being caught violating consensual ethical norms in their society. As Key (1961a, 282) put it, public opinion in a democracy ‘establishes vague limits of permissiveness within which governmental action may occur without arousing a commotion.’ Thus, no president will strangle a kitten on the White House lawn in view of the television cameras. Easily managed governmental tasks will get taken care of, too. Chicago mayors will either get the snow cleared or be replaced, as Mayor Michael Bilandic learned in the winter of 1979. Openly taking bribes will generally be punished. When the causal chain is clear, the outcome is unambiguous, and the evaluation is widely shared, accountability will be enforced (Arnold 1990, chap. 3). So long as a free press can report dubious goings-on and a literate public can learn about them, politicians have strong incentives to avoid doing what is widely despised. Violations occur, of course, but they are expensive; removal from office is likely. By contrast, in dictatorships, moral or financial corruption is more common because public outrage has no obvious, organized outlet. This is a modest victory for political accountability.” (p. 318f.) Of the five reasons given, I find this one the most convincing. It basically states that retrospective voting and to some extent even the folk theory work, they just don’t work as well as one might naively imagine. So, real-world democracy doesn’t do a better job than a coin flip at representing people’s “real opinions” on controversial issues like abortion. Democracy does ensure, however, that important, universally agreed upon measures will be implemented.

In their last section, Achen and Bartels propose an idea for how to make governments more responsive to the interests of the people. Noting that elites have much more influence, they suggest that economic and social equality, as well as limitations on lobbying and campaign financing, could make governments more responsive to the preferences of the people. While plausibly helpful, these ideas are much more trite than the rest of the book.

General comments

  • Overall I recommend reading the book if you’re interested in the topic.
  • Since I don’t know the subject area particularly well, I read a few reviews of the book (Paris 2016; Schwennicke, Cohen, Roberts, Sabl, Mares, and Wright 2017; Malhotra 2016; Mann 2016; Cox 2017; Somin 2016). All of these seemed positive overall. Some even said that large parts of the book are more mainstream than the authors claim (which is a good thing in my book).
  • It’s quite Americentric. Sometimes an analysis of studies conducted in the US is followed by references to papers confirming the results in other countries, but often it is not. In many ways, politics in the US is different from that in other countries, e.g. only two parties matter and the variability in wealth and education within the US is much bigger than in many other Western nations. This makes me unsure to what extent many of the results carry over to other countries. The US focus also often limits sample sizes unnecessarily. E.g., one analysis (p. 165) relates whether the incumbent party was replaced to post-presidential-election income and GDP growth in the years 1948-2008 in the US. It seems hard to conclude all that much from 16 data points. Perhaps taking a look at other countries would have been a cheap way to increase the sample size. Because the book is not about the details of particular democratic systems, it seems quite accessible to non-US readers with only superficial knowledge of US politics and history.
  • It often gives a lot of detail on how empirical evidence was gathered and analyzed. E.g., all of chapter 7 is about how people’s voting behavior after the Great Depression – which is often explained by policy preferences (in the US, related to Roosevelt’s New Deal) – can be explained well by retrospective voting.
  • I also feel like the book is somewhat balanced despite their view differing somewhat from the mainstream within political science. E.g., they often mention explicitly what the mainstream view is and refer to studies supporting that view. I also feel like they are relatively transparent about how reliable or tentative the empirical evidence for some parts of the book is.
  • A similar book is Jason Brennan’s Against Democracy, which I haven’t read. As suggested by the names, Against Democracy differs from Democracy for Realists in that it proposes epistocracy as an alternative form of government.


I thank Max Daniel and Stefan Torges for comments.

Talk on Multiverse-wide Cooperation via Correlated Decision-Making

In the past few months, I thought a lot about the implications of non-causal decision theory. In addition to writing up my thoughts in a long paper that we plan to publish on the FRI website soon, I also prepared a presentation, which I delivered to some researchers at FHI and my colleagues at FRI/EAF. Below you can find a recording of the talk.

The slides are available here.

Given the original target audiences, the talk assumes prior knowledge of a few topics:

Anthropic uncertainty in the Evidential Blackmail

I’m currently writing a piece on anthropic uncertainty in Newcomb problems. The idea is that whenever someone simulates us to predict our actions, this leads us to have anthropic uncertainty about whether we’re in this simulation or not. (If we knew whether we were in the real world or in the simulation, then the simulation wouldn’t fulfill its purpose anymore.) This kind of reasoning changes quite a lot about the answers that decision theories give in predictive dilemmas. It makes their reasoning “more updateless”, since they reason from a more impartial stance: a stance from which they don’t yet know their exact position in the thought experiment.

This topic isn’t new, but it hasn’t been discussed in-depth before. As far as I am aware, it has been brought up on LessWrong by gRR and in two blog posts by Stuart Armstrong. Outside LessWrong, there is a post by Scott Aaronson, and one by Andrew Critch. The idea is also mentioned in passing by Neal (2006, p. 13). Are there any other sources and discussions of it that I have overlooked?

In this post, I examine what the assumption that predictions or simulations lead to anthropic uncertainty implies for the Evidential Blackmail (also XOR Blackmail), a problem which is often presented as a counter-example to evidential decision theory (EDT) (cf. Soares & Fallenstein, 2015, p. 5; Soares & Levinstein, 2017, pp. 3–4). A similar problem has been introduced as “Yankees vs. Red Sox” by Arntzenius (2008), and discussed by Ahmed and Price (2012). I would be very grateful for any kind of feedback on my post.

We could formalize the blackmailer’s procedure in the Evidential Blackmail something like this:

def blackmailer():
    # Simulate the agent's policy in the situation of having received the letter.
    your_action = your_policy(receive_letter)
    if predict_stock() == "retain" and your_action == "pay":
        return "letter"
    elif predict_stock() == "fall" and your_action == "not pay":
        return "letter"
    else:
        return "no letter"

Let p denote the probability P(retain) with which our stock retains its value a. The blackmailer asks us for an amount of money b, where 0<b<a. The ex ante expected utilities are now:

EU(pay) = P(letter|pay) * (a – b) + P(no letter & retain|pay) * a = p (a – b),

EU(not pay) = P(no letter & retain|not pay) * a = p a.

According to the problem description, P(no letter & retain|pay) is 0, and P(no letter & retain|not pay) is p.[1] As long as we don’t know whether a letter has been sent or not (even if it might already be on its way to us), committing to not paying gives us only information about whether the letter has been sent, not about our stock, so we should commit not to pay.
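These ex ante expected utilities are easy to check numerically. Here is a minimal sketch; the particular values of a, b and p are my own illustrative choices, not from the problem description:

```python
# Ex ante expected utilities in the Evidential Blackmail.
# Illustrative values: stock value a, demanded amount b, p = P(retain).
a, b, p = 100.0, 10.0, 0.25

# Committing to pay: the letter arrives exactly when the stock
# retains its value, so with probability p we end up with a - b.
eu_pay = p * (a - b)

# Committing not to pay: the letter arrives exactly when the stock
# falls, so with probability p we keep a (no letter & retain).
eu_not_pay = p * a

assert eu_not_pay > eu_pay  # ex ante, refusing to pay is better
```

Since paying can only ever cost us b in the retain-world, refusing to pay dominates ex ante for any 0 < b < a.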

Now for the situation in which we have already received the letter. (All of the following probabilities will be conditioned on “letter”.) We don’t know whether we’re in the simulation or not. But what we do if we’re in the simulation can actually change our probability that we’re in the simulation in the first place. Note that the blackmailer has to simulate us once in any case, regardless of whether our stock goes down or not. So if we are in the simulation and we receive the letter, P(retain|pay) is still equal to P(retain|not pay): neither paying nor not paying gives us any evidence about whether our stock retains its value, conditional on being in the simulation. But if we are in the simulation, we can influence whether the blackmailer sends us a letter in the real world. In the simulation, our action decides whether we receive the letter in the cases where we keep our money, or whether we receive it in the cases where we lose.

Let’s begin by calculating EDT’s expected utility of not paying. We will lose all money for certain if we’re in the real world and don’t pay, so we only consider the case where we’re in the simulation:

EU(not pay) = P(sim & retain|not pay) * a.

For both SSA and SIA, if our stock doesn’t go down and we don’t pay up, then we’re certain to be in the simulation: P(sim|retain, not pay) = 1, while we could be either simulated or real if our stock falls: P(sim|fall, not pay) = 1/2. Moreover, P(sim & retain|not pay) = P(retain|sim, not pay) * P(sim) = P(sim|retain, not pay) * P(retain). Under SSA, P(retain) is just p.[2] We hence get

EU_SSA(not pay) = P(sim|retain, not pay) * p * a = p a.

Our expected utility for paying is:

EU_SSA(pay) = P(sim & retain|pay) * (a – b) + P(not sim|pay) * (a – b)

= P(sim|retain, pay) * p * (a – b) + P(not sim|pay) * (a – b).

If we pay up and the stock retains its value, there is exactly one of us in the simulation and one of us in the real world, so P(sim|retain, pay) = 1/2, while we’re sure to be in the simulation for the scenario in which our stock falls: P(sim|fall, pay) = 1. Knowing both P(sim & retain|pay) and P(sim & fall|pay), we can calculate P(not sim|pay) = p/2. This gives us

EU_SSA(pay) = 1/2 * p * (a – b) + 1/2 * p * (a – b) = p (a – b).

Great, EDT + SSA seems to calculate exactly the same payoffs as all other decision theories – namely, that by paying the Blackmailer, one just loses the money one pays the blackmailer, but gains nothing.
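The SSA calculation can be verified numerically in the same way; again, the values of a, b and p below are arbitrary illustrations of my own:

```python
# EDT + SSA expected utilities after receiving the letter.
# Illustrative values: stock value a, demand b, p = P(retain).
a, b, p = 100.0, 10.0, 0.25

# Not paying: we keep the money only if we are simulated and the stock
# retains its value; under SSA, P(retain) remains p and
# P(sim | retain, not pay) = 1.
eu_ssa_not_pay = 1.0 * p * a

# Paying: P(sim & retain | pay) = (1/2) * p and P(not sim | pay) = p/2;
# in both of those cases we are left with a - b.
eu_ssa_pay = 0.5 * p * (a - b) + 0.5 * p * (a - b)

# SSA reproduces the ex ante payoffs p*a and p*(a - b).
assert eu_ssa_not_pay == p * a
assert eu_ssa_pay == p * (a - b)
```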

For SIA probabilities, P(retain|letter) depends on whether we pay or don’t pay. If we pay, then there are (in expectation) 2 p observers in the “retain” world, while there are (1 – p) observers in the “fall” world. So our updated P(retain|letter, pay) should be (2 p)/(1 + p). If we don’t pay, it’s p/(2 – p) respectively. Using the above probabilities and Bayes’ theorem, we have P(sim|pay) = 1/(1 + p) and P(sim|not pay) = 1/(2 – p). Hence,

EU_SIA(not pay) = P(sim & retain|not pay) * a = (p a)/(2 – p),


EU_SIA(pay) = P(sim) * P(retain|sim) * (a – b) + P(not sim) * (a – b)

= (p (a – b))/(1 + p) + (p (a – b))/(1 + p)

= (2 p (a – b))/(1 + p).

It seems like paying the blackmailer would be better here than not paying, if p and b are sufficiently low.
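The same kind of sketch shows the divergence under SIA (illustrative values of my own again):

```python
# EDT + SIA expected utilities after receiving the letter.
# Illustrative values: stock value a, demand b, p = P(retain).
a, b, p = 100.0, 10.0, 0.25

# Not paying: P(sim & retain | not pay) = p / (2 - p).
eu_sia_not_pay = (p / (2 - p)) * a

# Paying: P(sim | pay) = 1/(1 + p) with P(retain | sim, pay) = p,
# and P(not sim | pay) = p/(1 + p); either way we are left with a - b.
eu_sia_pay = (1 / (1 + p)) * p * (a - b) + (p / (1 + p)) * (a - b)

# For these values of p and b, SIA recommends paying.
assert eu_sia_pay > eu_sia_not_pay
```

With p = 0.25, a = 100 and b = 10, this gives 36 for paying versus roughly 14.3 for not paying, illustrating how SIA’s double-counting of the two-observer world favors the blackmailer.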

Why doesn’t SIA give the ex ante expected utilities, as SSA does? Up until now I have just assumed correlated decision-making, so that the decisions of the simulated us will also be those of the real-world us (and of course the other way around – that’s how the blackmail works in the first place). The simulated us hence also gets attributed the impact of our real copy. The problem is now that SIA thinks we’re more likely to be in worlds with more observers. So the worlds in which we have additional impact due to correlated decision-making get double-counted. In the world where we pay the blackmailer, there are two observers for p, while there is only one observer for (1 – p). If we don’t pay the blackmailer, there is only one observer for p, and two observers for (1 – p). SIA hence slightly favors paying the blackmailer, to make the p-world more likely.

To remedy the problem of double-counting for EDT + SIA, we could use something along the lines of Stuart Armstrong’s Correlated Decision Principle (CDP). First, we aggregate the “EDT + SIA” expected utilities of all observers. Then, we divide this expected utility by the number of individuals we are deciding for. For EU_CDP(pay), there is with probability 1 an observer in the simulation, and with probability p one in the real world. To get the aggregated expected utility, we thus have to multiply EU(pay) by (1 + p). Since we have decided for two individuals, we divide this EU by 2 and get EU_CDP(pay) = ((2 p (a – b))/(1 + p)) * 1/2 * (1 + p) = p (a – b).

For EU_CDP(not pay), it gets more complex: the number of individuals any observer is making a decision for is actually just 1 – namely, the observer in the simulation. The observer in the real world doesn’t get his expected utility from his own decision, but from influencing the other observer in the simulation. On the other hand, we multiply EU(not pay) by (2 – p), since there is one observer in the simulation with probability 1, and with probability (1 – p) there is another observer in the real world. Putting this together, we get EU_CDP(not pay) = ((p a)/(2 – p)) * (2 – p) = p a. So EDT + SIA + CDP arrives at the same payoffs as EDT + SSA, although it is admittedly a rather messy and informal approach.
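This CDP correction can be sketched numerically as well; the following just re-derives the two results above with illustrative values of my own:

```python
# Stuart Armstrong's CDP correction applied to the SIA payoffs.
# Illustrative values: stock value a, demand b, p = P(retain).
a, b, p = 100.0, 10.0, 0.25

eu_sia_pay = (2 * p * (a - b)) / (1 + p)
eu_sia_not_pay = (p * a) / (2 - p)

# Paying: aggregate over (1 + p) expected observers (simulated with
# probability 1, real with probability p), then divide by the two
# individuals the decision is made for.
eu_cdp_pay = eu_sia_pay * (1 + p) / 2

# Not paying: aggregate over (2 - p) expected observers, but only the
# simulated observer's decision matters, so divide by 1.
eu_cdp_not_pay = eu_sia_not_pay * (2 - p) / 1

# CDP recovers the ex ante payoffs p*(a - b) and p*a.
assert abs(eu_cdp_pay - p * (a - b)) < 1e-9
assert abs(eu_cdp_not_pay - p * a) < 1e-9
```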

I conclude that, when taking into account anthropic uncertainty, EDT doesn’t give in to the Evidential Blackmail. This is true for SSA and possibly also for SIA + CDP. Fortunately, at least for SSA, we have avoided any kind of anthropic funny business. Note that this is not some kind of dirty hack: if we grant the premise that simulations have to involve anthropic uncertainty, then by definition of the thought experiment – because there is necessarily a simulation involved in the Evidential Blackmail – EDT doesn’t actually pay the blackmailer. Of course, this still leaves open the question of whether we have anthropic uncertainty in all problems involving simulations, and hence whether my argument applies to all conceivable versions of the problem. Moreover, there are other anthropic problems, such as the one introduced by Conitzer (2015a), in which EDT + SSA is still exploitable (in the absence of a method to “bind itself”).

1 P(no letter & retain|not pay) = P(no letter|retain, not pay) * P(retain|not pay) = 1 * P(retain|not pay) = P(retain|not pay, letter) * P(letter|not pay) + P(retain|not pay, no letter)* P(no letter|not pay) = p.

2 This becomes apparent if we compare the Evidential Blackmail to Sleeping Beauty. SSA is the “halfer position”, which means that after updating on being an observer (receiving the letter), we should still assign the prior probability p, regardless of how many observers there are in either of the two possible worlds.

3 The result that EDT and SIA lead to actions that are not optimal ex ante is also featured in several publications about anthropic problems, e.g., Arntzenius, 2002; Briggs, 2010; Conitzer, 2015b; Schwarz, 2015.

Ahmed, A., & Price, H. (2012). Arntzenius on “Why ain’cha rich?”. Erkenntnis. An International Journal of Analytic Philosophy, 77(1), 15–30.

Arntzenius, F. (2002). Reflections on Sleeping Beauty. Analysis, 62(1), 53–62.

Arntzenius, F. (2008). No Regrets, or: Edith Piaf Revamps Decision Theory. Erkenntnis. An International Journal of Analytic Philosophy, 68(2), 277–297.

Briggs, R. (2010). Putting a value on Beauty. Oxford Studies in Epistemology, 3, 3–34.

Conitzer, V. (2015a). A devastating example for the Halfer Rule. Philosophical Studies, 172(8), 1985–1992.

Conitzer, V. (2015b). Can rational choice guide us to correct de se beliefs? Synthese, 192(12), 4107–4119.

Neal, R. M. (2006, August 23). Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning. arXiv [math.ST]. Retrieved from http://arxiv.org/abs/math/0608592

Schwarz, W. (2015). Lost memories and useless coins: revisiting the absentminded driver. Synthese, 192(9), 3011–3036.

Soares, N., & Fallenstein, B. (2015, July 7). Toward Idealized Decision Theory. arXiv [cs.AI]. Retrieved from http://arxiv.org/abs/1507.01986

Soares, N., & Levinstein, B. (2017). Cheating Death in Damascus. Retrieved from https://intelligence.org/files/DeathInDamascus.pdf

The average utilitarian’s solipsism wager

The following prudential argument is relatively common in my circles: We probably live in a simulation, but if we don’t, our actions matter much more. Thus, expected value calculations are dominated by the utility under the assumption that we (or some copies of ours) are in the real world. Consequently, the simulation argument affects our prioritization only slightly — we should still mostly act under the assumption that we are not in a simulation.

A commonly cited analogy is due to Michael Vassar: “If you think you are Napoleon, and [almost] everyone that thinks this way is in a mental institution, you should still act like Napoleon, because if you are, your actions matter a lot.” An everyday application of this kind of argument is the following: Probably, you will not be in an accident today, but if you are, the consequences for your life are enormous. So, you better fasten your seat belt.

Note how these arguments do not affect the probabilities we assign to some event or hypothesis. They are only about the event’s (or hypothesis’) prudential weight — the extent to which we tailor our actions to the case in which the event occurs (or the hypothesis is true).

For total utilitarians (and many other consequentialist value systems), similar arguments apply to most theories postulating a large universe or multiverse. To the extent that it makes a difference for our actions, we should tailor them to the assumption that we live in a large multiverse with many copies of us because under this assumption we can affect the lives of many more beings.

For average utilitarians, the exact opposite applies. Even if they have many copies, they will have an impact on a much smaller fraction of beings if they live in a large universe or multiverse. Thus, they should usually base their actions on the assumption of a small universe, such as a universe in which Earth is the only inhabited planet. This may already have some implications, e.g. via the simulation argument or the Fermi paradox. If they also take the average over time — I do not know whether this is the default for average utilitarianism — they would also base their actions on the assumption that there are just a few past and future agents. So, average utilitarians are subject to a much stronger Doomsday argument.

Maybe the bearing of such prudential arguments is even more powerful, though. There is some chance that metaphysical solipsism is true: the view that only my (or your) own mind exists and that everything else is just an illusion. If solipsism were true, our impact on average welfare (or average preference fulfillment) would be enormous, perhaps 7.5 billion times bigger than it would be under the assumption that the rest of Earth’s population exists (about 100 billion times bigger if you also count humans that have lived in the past). Solipsism seems to deserve a probability larger than one in 7.5 (or 100) billion. (In fact, I think solipsism is likely enough for this to qualify as a non-Pascalian argument.) So, perhaps average utilitarians should maximize primarily for their own welfare?
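The expected-value comparison in this paragraph can be made concrete. The sketch below uses a hypothetical credence in solipsism (the 10^-6 figure is purely illustrative, not from the text) together with the impact ratio discussed above, and shows that the solipsism term dominates for any credence well above one in 7.5 billion:

```python
# Prudential-weight comparison for an average utilitarian.
# p_solipsism is a hypothetical credence; impact_ratio is the factor by which
# one's impact on average welfare is larger if solipsism is true.
p_solipsism = 1e-6
impact_ratio = 7.5e9

# Weight each hypothesis by credence times impact (non-solipsism impact normalized to 1).
weight_solipsism = p_solipsism * impact_ratio
weight_large_world = (1 - p_solipsism) * 1.0

print(weight_solipsism > weight_large_world)  # True: solipsism dominates the calculation
```

The break-even credence is simply 1 / impact_ratio, which is why the argument turns on whether solipsism deserves more than roughly one-in-7.5-billion probability.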


The idea of this post is partly due to Lukas Gloor.

A Non-Comprehensive List of Human Values

Human values are said to be complex (cf. Stewart-Williams 2015, section “Morality Is a Mess”; Muehlhauser and Helm 2012, ch. 3, 4, 5.3). As evidence, the following is a non-comprehensive list of things that many people care about:

Abundance, art, asceticism, autarky, authority, autonomy, beauty, benevolence, challenge, community, competence, competitiveness, complexity, cooperation, creativity, crime, critical thinking, curiosity, democracy, dignity, diligence, diversity, duties, emotion, equality, excellence, excitement, experience, fairness, faithfulness, family, free will, freedom, friendship, frugality, fulfillment, fun, gender differences, gender equality, happiness, health, honesty, humbleness, idealism, improvement, intelligence, justice, knowledge, law abidance, life, love, loyalty, modesty, nature, novelty, openness, optimism, organization, pain, parsimony, peace, privacy, progress, promises, prosperity, purity, rationality, religion, respect, rights, sadness, safety, sanctity, self-determination, simplicity, sincerity, society, spirituality, stability, striving, suffering, surprise, technology, tolerance, tradition, truth, variety, veracity, welfare.

Note that from the inside, most of these values feel distinct from each other. Also, note that most of these do not feel instrumental to each other. For instance, people often want to find out the truth even when that truth is not useful for, e.g., reducing suffering or preserving tradition.

Some (articles with) lists that helped me to compile this list are Keith-Spiegel’s moral characteristics list, moral foundations theory, Your Dictionary’s Examples of Morals, Eliezer Yudkowsky’s 31 laws of fun, table A1 in Bain et al.’s Collective Futures, and Peter Levine’s an alternative to Moral Foundations Theory.