A behaviorist approach to building phenomenological bridges

A few weeks ago, I wrote about the BPB problem and the challenge it poses for classical/non-logical decision theories. In my post, I briefly mentioned a behaviorist approach to BPB, only to immediately discard it:

One might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes in the right direction and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that merge sort and insertion sort have the same output for any given input, even though they have very different “subjective experiences”.

Since writing the post, I have become more optimistic about this approach, because the counterarguments I mentioned aren’t particularly persuasive. The core of the idea is the following: Let A and B be parameterless algorithms1. We’ll say that A and B are equivalent if we believe that A outputs x iff B outputs x. In the context of BPB, your current decision is an algorithm A, and we’ll say B is an instance or implementation of A/you iff A and B are equivalent. In the following sections, I will discuss this approach in more detail.
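To make this criterion concrete, here is a toy Python sketch (my own illustration, not part of the formal setup; the parameterless case can be recovered by fixing the input): merge sort and insertion sort compute very differently, yet a purely behaviorist test, which compares outputs only, treats them as equivalent.

```python
import random

def merge_sort(xs):
    # Recursively split, then merge two sorted halves.
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

def insertion_sort(xs):
    # Insert each element into its place; very different intermediate states.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def behaviorally_equivalent(a, b, inputs):
    # Behaviorist test: compare outputs only, ignore internals.
    return all(a(xs) == b(xs) for xs in inputs)

random.seed(0)
samples = [[random.randint(0, 99) for _ in range(n)] for n in range(10)]
print(behaviorally_equivalent(merge_sort, insertion_sort, samples))  # True
```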

You still need interpretations

The definition only solves one part of the BPB problem: specifying equivalence between algorithms. This would solve BPB if all agents were bots (rather than parts of a bot or collections of bots) in Soares and Fallenstein’s Botworld 1.0. But in a world without any Cartesian boundaries, one still has to map parts of the environment to parameterless algorithms. This could, for instance, be a function from histories of the world onto the output set of the algorithm. For example, if one’s set of possible world models is a set of cellular automata (CA) with various different initial conditions and one’s notion of an algorithm is something operating on natural numbers, then such an interpretation i would be a function from CA histories to the set of natural numbers. Relative to i, a CA with given initial conditions contains an instance of algorithm A if A outputs x <=> i(H)=x, where H is a random variable representing the history created by that CA. So, intuitively, i reads A’s output off from a description of the world. For example, it may look at the physical signals sent by a robot’s microprocessor to a motor and convert these into the output alphabet of A. E.g., it may convert a signal that causes a robot’s wheels to spin to something like “forward”. Every interpretation i is a separate instance of A.
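Here is a minimal sketch of what such an interpretation could look like (all names and the toy “history” are invented for illustration): i reads A’s output off the world by translating a low-level motor signal into A’s output alphabet, and the world contains an instance of A relative to i iff the two agree.

```python
# Toy world: a "history" is a list of low-level events, e.g. signals a
# robot's microprocessor sends to its motors. All names are invented.

def algorithm_A():
    # A parameterless algorithm with a fixed output.
    return "forward"

def interpretation_i(history):
    # Read A's output off the world: translate motor signals
    # into A's output alphabet.
    signal_to_output = {"spin_wheels": "forward", "reverse_wheels": "back"}
    for event in history:
        if event in signal_to_output:
            return signal_to_output[event]
    return None

def contains_instance(history):
    # The world contains an instance of A relative to i iff
    # A outputs x  <=>  i(H) = x.
    return algorithm_A() == interpretation_i(history)

H = ["boot", "sense_obstacle_clear", "spin_wheels"]
print(contains_instance(H))                  # True
print(contains_instance(["reverse_wheels"]))  # False
```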

Joke interpretations

Since we still need interpretations, we still have the problem of “joke interpretations” (Drescher 2006, sect. 2.3; also see this Brian Tomasik essay and references therein). In particular, you could have an interpretation i that does most of the work, so that the equivalence of A and i(H) is the result of i rather than the CA doing something resembling A.

I don’t think it’s necessarily a problem that an EDT agent might optimize its action too much for the possibility of being a joke instantiation, because it gives all its copies in a world equal weight no matter which copy it believes to be. As an example, imagine that there is a possible world in which joke interpretations lead you to identify with a rock. If the rock’s “behavior” does have a significant influence on the world and the output of your algorithm correlates strongly with it, then I see no problem with taking the rock into account. At least, that is what EDT would do anyway if it has a regular copy in that world.2 If the rock has little impact on the world, EDT wouldn’t care much about the possibility of being the rock. In fact, if the world also contains a strongly correlated non-instance3 of you that faces a real decision problem, then the rock joke interpretation would merely lead you to optimize for the action of that non-copy.

If you allow all joke interpretations, then you would view yourself as instantiated in all worlds. Thus, the view may have similar implications to the l-zombie view, with the joke interpretations serving as the l-zombies.4 Unless we’re trying to metaphysically justify the l-zombie view, this is not what we’re looking for. So, we may want to exclude “joke interpretations” in some way. One idea could be to limit the interpretation’s computational power (Aaronson 2011, sect. 6). My understanding is that this is what people in CA theory use to define the notion of implementing an algorithm in a CA; see, e.g., Cook (2004, sect. 2). Another idea would be to include only interpretations that you yourself (or A itself) “can easily predict or understand”. Assuming that A doesn’t know its own output already, this means that i cannot do most of the work necessary to entangle A with i(H). For example, if i just computed A without looking at H, then A couldn’t predict i very well, given that it cannot predict itself. If, on the other hand, i reads off the result of A from a computer screen in H, then A would be able to predict i’s behavior for every instance of H. Brian Tomasik lists a few more criteria to judge interpretations by.
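A toy contrast may clarify the worry (again, all names invented): an honest interpretation reads the output off the history, while a joke interpretation ignores the history and simply computes A itself, so that relative to it even a rock “implements” A.

```python
def algorithm_A():
    return "forward"

def honest_interpretation(history):
    # Looks at the world to read the output off.
    return "forward" if "spin_wheels" in history else "back"

def joke_interpretation(history):
    # Ignores the history entirely and does all the work itself.
    # Relative to this i, *any* system -- even a rock -- counts
    # as an instance of A.
    return algorithm_A()

rock_history = ["rock", "sits", "there"]
print(honest_interpretation(rock_history) == algorithm_A())  # False
print(joke_interpretation(rock_history) == algorithm_A())    # True
```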

Introspective discernibility

In my original rejection of the behaviorist approach, I made an argument about two sorting algorithms which always compute the same result but have different “subjective experiences”. I assumed that a similar problem could occur when comparing two equivalent decision-making procedures with different subjective experiences. But now I actually think that the behaviorist approach nicely aligns with what one might call introspective discernibility of experiences.

Let’s say I’m an agent that has, as a component, a sorting algorithm. Now, a world model may contain an agent that is just like me except that it uses a different sorting algorithm. Does that agent count as an instantiation of me? Well, that depends on whether I can introspectively discern which sorting algorithm I use. If I can, then I could let my output depend on the content of the sorting algorithm. And if I do that, then the equivalence between me and that other agent breaks. E.g., if I decide to output an explanation of my sorting algorithm, then my output would explain, say, bubble sort, whereas the other algorithm’s output would explain, say, merge sort. If, on the other hand, I don’t have introspective access to my sorting algorithm, then the code of the sorting algorithm cannot affect my output. Thus, the behaviorist view would interpret the other agent as an instantiation of me (as long as, of course, it, too, doesn’t have introspective access to its sorting algorithm). This conforms with the intuition that which kind of sorting algorithm I use is not part of my subjective experience. I find this natural relation to introspective discernibility very appealing.
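A small sketch of this point (illustrative only): when the agent’s output can depend on the identity of its sorting subroutine, behavioral equivalence between the two variants breaks; without that introspective access, it holds.

```python
def bubble_sort(xs):
    # A genuinely step-by-step bubble sort.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # A differently implemented sorter with identical outputs.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

def agent(sorter, introspective):
    data = [3, 1, 2]
    if introspective:
        # The agent can "see" which sorting subroutine it uses,
        # so its output depends on the subroutine's identity.
        return (sorter.__name__, sorter(data))
    # No introspective access: only the subroutine's output matters.
    return sorter(data)

# Without introspection, the two agents are behaviorally equivalent:
print(agent(bubble_sort, False) == agent(quick_sort, False))  # True
# With introspection, equivalence breaks:
print(agent(bubble_sort, True) == agent(quick_sort, True))    # False
```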

That said, things are complicated by the equivalence relation being subjective. If you already know what A and B output, then they are equivalent if their output is the same — even if it is “coincidentally” so, i.e., if they perform completely unrelated computations. Of course, a decision algorithm will rarely know its own output in advance. So, this extreme case is probably rare. However, it is plausible that an algorithm’s knowledge about its own behavior excludes some conditional policies. For example, consider a case like Conitzer’s (2016, 2017), in which copies of an EU-maximizing agent face different but symmetric information. Depending on what the agent knows about its algorithm, it may view all the copies as equivalent or not. If it has relatively little self-knowledge, it could reason that if it lets its action depend on the information, the copies’ behavior would diverge. With more self-knowledge, on the other hand, it could reason that, because it is an EU maximizer and because the copies are in symmetric situations, its action will be the same no matter the information received.5

Consciousness

The BPB problem resembles the problem of consciousness: the question “does some physical system implement my algorithm?” is similar to the question “does some physical system have the conscious experience that I am having?”. For now, I don’t want to go too much into the relation between the two problems. But if we suppose that the two problems are connected, we can draw from the philosophy of mind to discuss our approach to BPB.

In particular, I expect that a common objection to the behaviorist approach will be that most instantiations in the behaviorist sense are behavioral p-zombies. That is, their output behavior is equivalent to the algorithm’s, but they compute the output in a different way, and in particular in a way that doesn’t seem to give rise to conscious (or subjective) experiences. While the behaviorist view may lead us to identify with such a p-zombie, we can be certain, so the argument goes, that we are not one, given that we have conscious experiences.

Some particular examples include:

  • Lookup table-based agents
  • Messed up causal structures, e.g. Paul Durham’s experiments with his whole brain emulation in Greg Egan’s novel Permutation City.

I personally don’t find these arguments particularly convincing because I favor Dennett’s and Brian Tomasik’s eliminativist view on consciousness. That said, it’s not clear whether eliminativism would imply anything other than relativism/anti-realism for the BPB problem (if we view BPB and philosophy of mind as sufficiently strongly related).


1. I use the word “algorithm” in a very broad sense. I don’t mean to imply Turing computability. In fact, I think any explicit formal specification of the form “f()=…” should work for the purpose of the present definition. Perhaps, even implicit specifications of the output would work. 

2. Of course, I see how someone would find this counterintuitive. However, I suspect that this is primarily because the rock example triggers absurdity heuristics and because it is hard to imagine a situation in which you believe that your decision algorithm is strongly correlated with whether, say, some rock causes an avalanche. 

3. Although the behaviorist view defines the instance-of-me property via correlation, there can still be correlated physical subsystems that are not viewed as an instance of me. In particular, if you strongly limit the set of allowed interpretations (see the next paragraph), then the potential relationship between your own and the system’s action may be too complicated to be expressed as A outputs x <=> i(H)=x.

4. I suspect that the two might differ in medical or “common cause” Newcomb-like problems like the coin flip creation problem.

5. If this is undesirable, one may try to use logical counterfactuals to find out whether B also “would have” done the same as A if A had behaved differently. However, I’m very skeptical of logical counterfactuals in general. Cf. the “Counterfactual Robustness” section in Tomasik’s post. 

Multiverse-wide cooperation via correlated decision making – Summary

This is a short summary of some of the main points from my paper on multiverse-wide superrationality. For details, caveats and justifications, see the full paper.

The target audience for this post consists of:

  • people who have already thought about the topic and thus don’t want to read through the long explanations given in the paper;
  • people who have already read (some of) the full paper and just want to refresh their memory;
  • people who don’t yet know whether they should read the full paper and thus want to know whether the content is interesting or relevant to them.
If you are not in any of these groups, this post may be confusing and not very helpful for understanding the main points.

Main idea

  • Take the values of agents with your decision algorithm into account, to make it more likely that they do the same. I’ll use Hofstadter’s (1983) term superrationality to refer to this kind of cooperation.
  • Whereas acausal trade as it is usually understood seems to require mutual simulation and is thus hard to get right as a human, superrationality is easy to apply for humans (if they know how they can benefit agents that use the same decision algorithm).
  • Superrationality may not be relevant among agents on Earth, e.g. because on Earth we already have causal cooperation and few people use the same decision algorithm as we use. But if we think that we might live in a vast universe or multiverse (as seems to be a common view among physicists, see, e.g., Tegmark (2003)), then there are (potentially infinitely) many agents with whom we could cooperate in the above way.
  • This multiverse-wide superrationality (MSR) suggests that when deciding between policies in our part of the multiverse, we should essentially adopt a new utility function (or, more generally, a new set of preferences) which takes into account the preferences of all agents with our decision algorithm. I will call that our compromise utility function (CUF). Whatever CUF we adopt, the others will (be more likely to) adopt a structurally similar CUF. E.g., if our CUF gives more weight to our values, then the others’ CUF will also give more weight to their values. The gains from trade appear to be highest if everyone adopts the same CUF. If this is the case, multiverse-wide superrationality has strong implications for what decisions we should make.

The superrationality mechanism

  • Superrationality works without reciprocity. For example, imagine there is one agent for every integer and that for every i, agent i can benefit agent i+1 at low cost to herself. If all the agents use the same decision algorithm, then agent i should benefit agent i+1 to make it more likely that agent i-1 also cooperates in the same way. That is, agent i should give something to an agent that cannot in any way return the favor. This means that when cooperating superrationally, you don’t need to identify which agents can help you.
  • What should the new criterion for making decisions, our compromise utility function, look like?
    • Harsanyi’s (1955) aggregation theorem suggests that it should be a weighted sum of the utility functions of all the participating agents.
    • To maximize gains from trade, everyone should adopt the same weights.
    • Variance-voting (Cotton-Barratt 2013; MacAskill 2014, ch. 3) is a promising candidate.
    • If some of the values require coordination (e.g., if one of the agents wants there to be at least one proof of the Riemann conjecture in the multiverse), then things get more complicated.
  • “Updatelessness” has some implications. E.g., it means that one should, under certain conditions, accept a superrational compromise that is bad for oneself.
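As a rough illustration of variance normalization (a sketch under my own simplifying assumptions, not the formal definitions in Cotton-Barratt or MacAskill): rescale each participating agent’s utility function to mean 0 and variance 1 across the feasible outcomes, then sum. The outcomes and numbers below are made up:

```python
import statistics

outcomes = ["o1", "o2", "o3"]
utilities = {
    "agent_1": [0.0, 5.0, 10.0],  # strong preferences, wide range
    "agent_2": [1.0, 1.1, 0.9],   # nearly indifferent, narrow range
}

def normalize(us):
    # Rescale to mean 0 and (population) variance 1.
    mu = statistics.mean(us)
    sd = statistics.pstdev(us)
    return [(u - mu) / sd for u in us]

# Compromise utility: sum of the normalized utility functions.
compromise = [
    sum(normalize(us)[i] for us in utilities.values())
    for i in range(len(outcomes))
]
best = outcomes[compromise.index(max(compromise))]
print(best)  # o2
```

Note how normalization gives the nearly indifferent agent_2 as much say as agent_1; summing the raw utilities instead would let agent_1’s larger numbers dominate and pick o3.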

The values of the other agents

  • To maximize the compromise utility function, it is very useful (though not strictly necessary, see section “Interventions”) to know what other agents with similar decision algorithms care about.
  • The orthogonality thesis (Bostrom 2012) implies that the values of the other agents are probably different from ours, which means that taking them into account makes a difference.
  • Not all aspects of the values of agents with our decision algorithm are relevant:
    • Only the consequentialist parts of their values matter (though things like minimizing the number of rule violations committed by all agents is a perfectly fine consequentialist value system).
    • Only values that apply to our part of the multiverse are relevant. (Some agents may care exclusively or primarily about their part of the multiverse.)
    • Humans, at least, care differently about faraway things than about near ones. Because we are far away from most agents with our decision algorithm, we only need to think about what they care about regarding distant things.
    • Superrationalists may care more about their idealized values, so we may try to idealize their values. However, we should be very careful to idealize only in ways consistent with their meta-preferences. (Otherwise, their values may be mis-idealized.)
  • There are some ways to learn about what other superrational agents care about.
    • The empirical approach: We can survey the relevant aspects of human values. The values of humans who take superrationality seriously are particularly relevant.
      • An example of relevant research is Bain et al.’s (2013) study on what people care about in future societies. They found that people put most weight on how warm, caring and benevolent members of these societies are. If we believe that construal level theory (see Trope and Liberman (2010) for an excellent summary) is roughly correct, then such results should carry over to evaluations of other psychologically distant societies. Although these results have been replicated a few times (Bain et al. 2012; Park et al. 2015; Judge and Wilson 2015; Bain et al. 2016), they are tentative and merely exemplify relevant research in this domain.
      • Another interesting data point is the values of the EA/LW/SSC/rationalist community, to my knowledge the only group of people who plausibly act on superrationality.
    • The theoretical approach: We could think about the processes that affect the distribution of values in the multiverse.
      • Biological evolution
      • Cultural evolution (see, e.g., Henrich 2015)
      • Late great filters
        • For example, if a lot of civilizations self-destruct with weapons of mass destruction, then the compromise utility function may contain a lot more peaceful values than an analysis based on biological and cultural evolution suggests.
      • The transition to whole brain emulations (Hanson 2016)
      • The transition to de novo AI (Bostrom 2014)

Interventions

  • There are some general ways in which we can effectively increase our compromise utility function without knowing its exact content.
    • Many meta-activities don’t require any such knowledge as long as we think that it can be acquired in the future. E.g., we could convince other people of MSR, do research on MSR, etc.
    • Sometimes, very small bits of knowledge suffice to identify promising interventions. For example, if we believe that the consequentialist parts of human values are a better approximation of the consequentialist parts of other agents’ values than non-consequentialist human values, then we should make people more consequentialist (without necessarily promoting any particular consequentialist morality).
    • Another relevant point is that no matter how well we know the content of the compromise function, the argument in favor of maximizing it in our part of the universe is still just as valid. Thus, even if we know very little about its content, we should still do our best at maximizing it. (That said, we will often be better at maximizing the values of humans, in great part because we know and understand these values better.)
  • Meta-activities
    • Further research
    • Promoting multiverse-wide superrationality
  • Probably ensuring that superintelligent AIs have a decision theory that reasons correctly about superrationality is ultimately the most important intervention (although promoting multiverse-wide superrationality among humans can be instrumental for doing so).
  • There are some interventions in the moral advocacy space which align people’s preferences about our universe more with those of other superrational agents.
    • Promoting consequentialism
      • This is also good because consequentialism enables cooperation with the agents in other parts of the multiverse.
    • Promoting pluralism (e.g., convincing utilitarians to also take things other than welfare into account)
    • Promoting concern for benevolence and warmth (or whatever other value is much more strongly represented in high- versus low-construal preferences)
    • Facilitating moral progress (i.e., presenting people with the arguments for both sides). Probably valuing preference idealization is more common than disvaluing it.
    • Promoting multiverse-wide preference utilitarianism
  • Promoting causal cooperation

A survey of polls on Newcomb’s problem

One classic story about Newcomb’s problem is that, at least initially, people one-box and two-box in roughly equal numbers (and that everyone is confident in their position). To find out whether this is true, and what exact percentage of people would one-box, I conducted a meta-survey of existing polls of people’s opinion on Newcomb’s problem.

The surveys I found are listed in the following table:

I deliberately included even surveys with tiny sample sizes to test whether the results from the larger sample size surveys are robust or whether they depend on the specifics of how they obtained the data. For example, the description of Newcomb’s problem in the Guardian survey contained a paragraph on why one should one-box (written by Arif Ahmed, author of Evidence, Decision and Causality) and a paragraph on why one should two-box (by David Edmonds). Perhaps the persuasiveness of these arguments influenced the result of the survey?

Looking at all the polls together, it seems that the picture is at least somewhat consistent. The two largest surveys of non-professionals both give one-boxing almost the same small edge. The other results diverge more, but some can be easily explained. For example, decision theory is a commonly discussed topic on LessWrong with some of the opinion leaders of the community (including founder Eliezer Yudkowsky) endorsing one-boxing. It is therefore not surprising that opinions on LessWrong have converged more than elsewhere. Considering the low sample sizes, the other smaller surveys of non-professionals also seem reasonably consistent with the impression that one-boxing is only slightly more common than two-boxing.

The surveys also show that, as has often been remarked on, there exists a significant difference between opinion among the general population / “amateur philosophers” and professional philosophers / decision theorists (though the consensus among decision theorists is not nearly as strong as on LessWrong).

Complications in evaluating neglectedness

Neglectedness (or crowdedness) is a heuristic that effective altruists use to assess how much impact they could have in a specific cause area. It is usually combined with scale (a.k.a. importance) and tractability (a.k.a. solvability), which together are meant to approximate expected value. (In fact, under certain idealized definitions of the three factors, multiplying them is equivalent to expected value. However, this removes the heuristic nature of these factors and probably does not describe how people typically apply them.) For introductions and thoughts on the framework as well as neglectedness in particular see:

One reason why the neglectedness heuristic and the framework in general are so popular is that they are much easier to apply than explicit cost-effectiveness or expected value calculations. In this post, I will argue that evaluating neglectedness (which may usually be seen as the most heuristic and easiest-to-evaluate part of the framework) is actually quite complicated. This is in part to make people more aware of issues that are often only implicitly, and sometimes not at all, taken into account. In some cases, it may also be an argument against using the heuristic at all. Presumably, most of the following considerations won’t surprise many practitioners. Nonetheless, it appears useful to write them down, which, to my knowledge, hasn’t been done before.

Neglectedness and diminishing returns

There are a few different definitions of neglectedness. For example, consider the following three:

  1. “If we add more resources to the cause, we can expect more promising interventions to be carried out.” (source)
  2. You care about a cause much more than the rest of society. (source)
  3. “How many people, or dollars, are currently being dedicated to solving the problem?” (source)

The first one is quite close to expected value-type calculations and so it is quite clear why it is important. The second and third are more concrete and easier to measure but ultimately only relevant because they are proxies of the first. If society is already investing a lot into a cause, then the most promising interventions in that cause area are already taken up and only less effective ones remain.

Because the second and, even more so, the third are easier to measure, I expect that, in practice, most people use these two when they evaluate neglectedness. Incidentally, these definitions also fit the terms “neglectedness” and “crowdedness” much better. I will argue that neglectedness in the second and third sense has to be translated into neglectedness in the first sense and that this translation is difficult. Specifically, I will argue that the diminishing returns curves on which the connection between already invested resources and the value of the marginal dollar is based can assume different scales and shapes that have to be taken into account.

A standard diminishing return curve may look roughly like this:

[Figure: a standard diminishing returns curve, concave and increasing]

The x-axis represents the amount of resources invested into some intervention or cause area, the y-axis represents the returns of that investment. The derivative of the returns (i.e., the marginal returns) decreases, potentially in inverse proportion to the cumulative investment.
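If marginal returns fall in inverse proportion to cumulative investment, cumulative returns grow roughly logarithmically. The following sketch (with an invented scale parameter, purely for illustration) shows how the same shape behaves very differently at different scales: in a large, uniform cause area the millionth dollar is almost as good as the first, while in a small one it is worth about a hundredth as much.

```python
import math

def returns(x, scale):
    # Cumulative returns log(1 + x/scale): marginal returns fall off in
    # inverse proportion to cumulative investment x. The "scale"
    # parameter sets where returns start to diminish (an assumption
    # for illustration, not a claim about any real cause area).
    return math.log1p(x / scale)

def marginal(x, scale, dx=1.0):
    # Value of the next dollar after x dollars have been invested.
    return returns(x + dx, scale) - returns(x, scale)

# How much better is the first dollar than the millionth?
for scale in (1e8, 1e4):  # malaria-like large area vs. a small one
    print(round(marginal(0, scale) / marginal(1e6, scale), 1))
# prints 1.0, then 101.0
```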

Even if returns diminish in a way similar to that shape, there is still the question of the scale of that graph (not to be confused with the scale/importance of the cause area), i.e. whether values on the x-axis are in the thousands, millions or billions. In general, returns probably diminish more slowly in cause areas that are in some sense large and uniform. Take the global fight against malaria. Intervening in some areas is more effective than in others. For example, it is more effective in areas where malaria is more common, or where it is easier to, say, provide mosquito nets, etc. However, given how widespread malaria is (about 300 million cases in 2015), I would expect that there is a relatively large number of areas almost tied for the most effective places to fight malaria. Consequently, I would guess that once the most effective intervention is to distribute mosquito nets, even hundreds of millions of dollars do not diminish returns all that much.

Other interventions have much less room for funding and thus returns diminish much more quickly. For example, the returns of helping some specific person will usually diminish way before investing, say, a billion dollars.

If you judge neglectedness only based on the raw amount of resources invested into solving a problem (as suggested by 80,000 hours), then this may make small cause areas look a lot more promising than they actually are. Depending on the exact definitions, this remains the case if you combine neglectedness with scale and tractability. For example, consider the following two interventions:

  1. The global fight against malaria.
  2. The fight against malaria in some randomly selected subset of 1/100th of the global area or population.

The two should usually be roughly equally promising. (Perhaps 1 is a bit more promising because every intervention contained in 2 is also in 1. On the other hand, that would make “solve everything” hard to beat as an intervention. Of course, 2 can also be more or less promising if an unusual 1/100th is chosen.) But because the raw amount of resources invested into 1 is presumably 100 times as big as the amount of resources invested into 2, 2 would, on a naive view, be regarded as much more neglected than 1. The product of scale and tractability is the same in 1 and 2. (1 is a 100 times bigger problem, but solving it in its entirety is also roughly 100 times more difficult, though I presume that some definitions of the framework judge this differently. In general, it seems fine to move considerations out of neglectedness into tractability and scope as long as they are not double-counted or forgotten.) Thus, the overall product of the three is greater for 2, which appears to be wrong. If on the other hand, neglectedness denotes the extent to which returns have diminished (the first of the three definitions given at the beginning of this section), then the neglectedness of 1 and 2 will usually be roughly the same.
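The comparison can be made explicit with a toy calculation (units arbitrary; "neglectedness" is here naively taken to be the reciprocal of raw resources invested):

```python
def naive_score(scale, tractability, resources_invested):
    # Naive framework: scale x tractability x neglectedness,
    # with neglectedness = 1 / (raw resources invested).
    return scale * tractability * (1.0 / resources_invested)

# 1. The global fight against malaria.
global_score = naive_score(scale=100, tractability=1, resources_invested=100)

# 2. The same fight restricted to a random 1/100th of the population:
#    1/100th the scale, ~100x easier to "solve entirely", and
#    1/100th the raw resources invested.
subset_score = naive_score(scale=1, tractability=100, resources_invested=1)

print(global_score)  # 1.0
print(subset_score)  # 100.0 -- naively looks 100x more promising, wrongly
```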

Besides the scale of the return curve, the shape can also vary. In fact, I think many interventions initially face increasing returns from learning/research, creating economies of scale, specialization within the cause area, etc. For example, in most cause areas, the first $10,000 are probably invested into prioritization, organizing, or (potentially symbolic) interventions that later turn out to be suboptimal. So, in practice return curves may actually look more like the following:

[Figure: a return curve with initially increasing, then diminishing returns]

This adds another piece of information (besides scale) that needs to be taken into account to translate the amount of invested resources into how much returns have diminished: how and when do returns start to diminish?

There are many other return curve shapes that may be less common but mess up the neglectedness framework more. For example, some projects produce some large amount of value if they succeed but produce close to no value if they fail. Thus, the (actual not expected) return curve for such projects may look like this:

[Figure: a step-shaped return curve that is flat until the project succeeds, then jumps]

Examples may include developing vaccines, colonizing Mars or finding cause X.

If such a cause area is already relatively crowded in the third (and second) sense, that may make it less “crowded” in the first sense. For example, if nobody had invested money into finding a vaccine against malaria (and you don’t expect others to invest money into it in the future either, see below), then this cause area would be maximally neglected in the second and third sense. However, given how expensive clinical trials are, the marginal returns of donating a few thousand dollars to it would be essentially zero. If, on the other hand, others have already contributed enough money to get a research project off the ground at all, then the marginal returns are higher, because there is at least some chance that your money will enable a trial in which a vaccine is found. (Remember that we don’t know the exact shape of the return curve, so we don’t know when the successful trial is funded.)

I would like to emphasize that the point of this section is not so much that people apply neglectedness incorrectly by merely looking at the amount of resources invested into a cause and not thinking about implications in terms of diminishing returns at all. Instead, I suspect that most people implicitly translate into diminishing returns and take the kind of the project into account. However, it may be beneficial if people were more aware of this issue and how it makes evaluating neglectedness more difficult.

Future resources

When estimating the neglectedness of a cause, we need to take into account not only people who are currently working on the problem (as a literal reading of 80,000 hours’ definition suggests), but also people who have worked on it in the past or will work on it in the future. If a lot of people have worked on a problem in the past, then this indicates that the low-hanging fruit has already been picked. Thus, even if nobody is working in the area anymore, marginal returns have probably diminished a lot. I can’t think of a good example where this is a decisive consideration, because if an area has been given up on (such that there is a big difference between past and current attention), it will usually score low in tractability anyway. Perhaps one example is the search for new ways to organize society, government and economy. Many resources are still invested into thinking about this topic, so even if we just consider resources invested today, it would not do well in terms of neglectedness. However, if we consider that people have thought about and “experimented” in this area for thousands of years, it appears to be even more crowded.

We also have to take future people and resources into account when evaluating neglectedness. Of course, future people cannot “take away” the most promising intervention in the way that current and past people can. However, their existence causes the top interventions to be performed anyway. For example, let’s say that there are 1000 equally costly possible interventions in an area, generating 1000, 999, 998, …, 1 “utils” (or lives saved, years of suffering averted, etc.), respectively. Each intervention can only be performed once. The best 100 interventions have already been taken away by past people. Thus, if you have money for one intervention, you can now only generate 900 utils. But if you know that future people will engage in 300 further interventions in that area, then whether you intervene or not actually only makes a difference of 600 utils. All interventions besides the one generating 600 utils would have been executed anyway. (In Why Charities Don’t Differ Astronomically in Cost-Effectiveness, Brian Tomasik makes a similar point.)
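The arithmetic of this example can be made explicit. Using the numbers from the text, your counterfactual impact is the difference between the totals with and without your intervention:

```python
# 1000 interventions worth 1000, 999, ..., 1 utils; each can be done only once,
# and actors always perform the best intervention still available.
interventions = list(range(1000, 0, -1))

past = 100      # interventions already taken by past people
future = 300    # interventions future people will perform regardless of you

# Best intervention left for you after past people have acted:
print(interventions[past])  # 900

# Counterfactual impact: totals with and without your single intervention.
with_you = sum(interventions[:past + 1 + future])
without_you = sum(interventions[:past + future])
print(with_you - without_you)  # 600
```

The 600-util intervention is the marginal one: every better intervention would have been performed by past or future people anyway, so that is all your participation actually adds.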

The number of future people who would counterfactually engage in a cause area is an important consideration in many cause areas considered by effective altruists. In general, if a cause area is neglected by current and past people, the possibility of future people engaging in it creates a lot of variance in neglectedness evaluations. If 10 people recently started working on an area, it is very uncertain how much attention the area will receive in the future; if it will receive a lot more attention regardless of our efforts, the neglectedness score may change by a factor of 100. The future resources that will go into long-established (and thus already less neglected) cause areas, on the other hand, are easier to predict and can’t make as much of a difference.

One example where future people and resources are an important consideration is AI safety. People often state that AI safety is a highly neglected cause area, presumably under the assumption that this should be completely obvious given how few people currently work in the area. At least, it is rare that the possibility of future people going into AI safety is considered explicitly. Langan-Dathi even writes that “due to [AI safety] being a recent development it is also highly neglected.” I, on the other hand, would argue that being a recent development only makes a cause look highly neglected if one doesn’t consider future people. (Again, Brian makes almost the same point regarding AI safety.)

Overall, I think many questions in AI safety should nonetheless be regarded as relatively neglected, because I think there is a good chance that future people won’t recognize them as important fast enough. That said, I think some AI safety problems will become relevant in regular AI capability research or near-term applications (such as self-driving cars). For example, I expect that some of Amodei et al.’s (2016) “Concrete Problems in AI Safety” will be (or would have been) picked up anyway. Research in these areas of AI safety is thus potentially less intrinsically valuable, although it may still have a lot of instrumental benefits that make it worthwhile to pursue.

My impression is that neglecting future people when evaluating neglectedness is more common than forgetting to translate invested resources into diminishing marginal returns. Nonetheless, in the context of this post the point of this section is that considering future resources makes neglectedness more difficult to evaluate. Obviously, it is hard to foresee how many resources will be invested into a project in the future. And because the most promising areas will not have received a lot of attention yet, the question of their neglectedness will be dominated by how many resources they will receive in the future. Thus, in the most important cases, neglectedness is hard to estimate.

What should count as “the same cause area”?

As operationalized above, evaluating neglectedness involves estimating the amount of (past, current and future) resources invested into a cause area. But which resources count as going into the same cause area? For example, if the cause area is malaria, should you count people who work on global poverty as working in the same cause area?

Because the number of people working in an area is only relevant as a proxy for how much marginal returns have diminished, the answer seems to be: Count people (and resources) to the extent that their activities diminish the marginal returns in the cause area in question. Thus, resources invested into alleviating global poverty have to be taken into account, because if people’s income increases, this will allow them to take measures against malaria as well.

As another example, consider the cause area of advocating some moral view X (say, effective altruism). If only a few people currently promote that view, one may naively regard advocating X as neglected. However, if neglectedness is intended to be a proxy for diminishing returns, then it seems that we also have to take into account advocates of other moral views. Because most people regularly engage in some form of moral advocacy (e.g., when they talk about morality with their friends and children), many people already hold moral views that our advocacy has to compete with. Thus, we may want to take these other moral advocates into account when evaluating neglectedness. That said, if we apply neglectedness together with tractability and scope, it seems reasonable to include such considerations in either tractability or neglectedness. (As Rob Wiblin remarks, the three factors blur heavily into each other. In particular, neglectedness can make an intervention more tractable. As Wiblin notes, we should take care not to double-count arguments. We also shouldn’t forget to count arguments at all, though.)

Acknowledgements

I am indebted to Tobias Baumann for valuable comments.

 

Summary of Achen and Bartels’ Democracy for Realists

I just finished binge-reading Achen and Bartels’ great book Democracy for Realists and decided to write up a summary and a few comments to aid my memory and share some of the most interesting insights.

The folk theory of democracy

(Since chapter 1 contains little of interest besides giving a foretaste of later chapters, I will start with the content of chapter 2.) The “folk theory” of democracy is roughly the following:

Voters have a set of informed policy preferences (e.g., on abortion, social security, climate change, taxes, etc.) and vote for the candidate or party whose policy preferences most resemble their own (similar to how vote advice applications operate). That is, people vote based on the issues. Parties are then assumed to cater to the voters’ preferences to maximize their chance of getting elected. This way the people get what they want (as is guaranteed, under certain theoretical assumptions, by the median voter theorem).

Achen and Bartel argue that this folk theory of democracy does not describe what is happening in real-world democracies:

  • Voters are often badly informed: “Michael Delli Carpini and Scott Keeter (1996) surveyed responses to hundreds of specific factual questions in U.S. opinion surveys over the preceding 50 years to provide an authoritative summary of What Americans Know about Politics and Why It Matters. In 1952, Delli Carpini and Keeter found, only 44% of Americans could name at least one branch of government. In 1972, only 22% knew something about Watergate. In 1985, only 59% knew whether their own state’s governor was a Democrat or a Republican. In 1986, only 49% knew which one nation in the world had used nuclear weapons (Delli Carpini and Keeter 1996, 70, 81, 74, 84). Delli Carpini and Keeter (1996, 270) concluded from these and scores of similar findings that ‘large numbers of American citizens are woefully underinformed and that overall levels of knowledge are modest at best.’” (p. 36f.)
    • Interestingly, the increasing availability of information has done little to change this. “[I]t is striking how little seems to have changed in the decades since survey research began to shed systematic light on the nature of public opinion. Changes in the structure of the mass media have allowed people with an uncommon taste for public affairs to find an unprecedented quantity and variety of political news; but they have also allowed people with more typical tastes to abandon traditional newspapers and television news for round-the-clock sports, pet tricks, or pornography, producing an increase in the variance of political information levels but no change in the average level of political information (Baum and Kernell 1999; Prior 2007). Similarly, while formal education remains a strong predictor of individuals’ knowledge about politics, substantial increases in American educational attainment have produced little apparent increase in overall levels of political knowledge. When Delli Carpini and Keeter (1996, 17) compared responses to scores of factual questions asked repeatedly in opinion surveys over the past half century, they found that ‘the public’s level of political knowledge is little different today than it was fifty years ago.’” (p. 37)
    • This lack of knowledge seems to matter for policy preferences – uninformed voters cannot use heuristics to mimic the choices of informed voters. “[S]ome scholars have […] asked whether uninformed citizens – using whatever ‘information shortcuts’ are available to them – manage to mimic the preferences and choices of better informed people. Alas, statistical analyses of the impact of political information on policy preferences have produced ample evidence of substantial divergences between the preferences of relatively uninformed and better informed citizens (Delli Carpini and Keeter 1996, chap. 6; Althaus 1998). Similarly, when ordinary people are exposed to intensive political education and conversation on specific policy issues, they often change their mind (Luskin, Fishkin, and Jowell 2002; Sturgis 2003). Parallel analyses of voting behavior have likewise found that uninformed citizens cast significantly different votes than those who were better informed. For example, Bartels (1996) estimated that actual vote choices fell about halfway between what they would have been if voters had been fully informed and what they would have been if everyone had picked candidates by flipping coins.” (p. 39f.)
  • Wisdom of the crowd-type arguments often don’t apply in politics because the opinions of different people are often biased in the same direction: “Optimism about the competence of democratic electorates has often been bolstered (at least among political scientists) by appeals to what Converse (1990) dubbed the ‘miracle of aggregation’ – an idea formalized by the Marquis de Condorcet more than 200 years ago and forcefully argued with empirical evidence by Benjamin Page and Robert Shapiro (1992). Condorcet demonstrated mathematically that if several jurors make independent judgments of a suspect’s guilt or innocence, a majority are quite likely to judge correctly even if every individual juror is only modestly more likely than chance to reach the correct conclusion.

      Applied to electoral politics, Condorcet’s logic suggests that the electorate as a whole may be much wiser than any individual voter. The crucial problem with this mathematically elegant argument is that it does not work very well in practice. Real voters’ errors are quite unlikely to be statistically independent, as Condorcet’s logic requires. When thousands or millions of voters misconstrue the same relevant fact or are swayed by the same vivid campaign ad, no amount of aggregation will produce the requisite miracle; individual voters’ ‘errors’ will not cancel out in the overall election outcome, especially when they are based on constricted flows of information (Page and Shapiro 1992, chaps. 5, 9). If an incumbent government censors or distorts information regarding foreign policy or national security, the resulting errors in citizens’ judgments obviously will not be random. Less obviously, even unintentional errors by politically neutral purveyors of information may significantly distort collective judgment, as when statistical agencies or the news media overstate or understate the strength of the economy in the run-up to an election (Hetherington 1996).” (p.40f.)
  • Voters don’t have many strong policy preferences.
    • Their stated preferences are sensitive to framing effects. Some examples from p. 30f:
      “[E]xpressed political attitudes can be remarkably sensitive to seemingly innocuous variations in question wording or context. For example, 63% to 65% of Americans in the mid-1980s said that the federal government was spending too little on “assistance to the poor”; but only 20% to 25% said that it was spending too little on “welfare” (Rasinski 1989, 391). “Welfare” clearly had deeply negative connotations for many Americans, probably because it stimulated rather different mental images than “assistance to the poor” (Gilens 1999). Would additional federal spending in this domain have reflected the will of the majority, or not? We can suggest no sensible way to answer that question. […] [I]n three separate experiments conducted in the mid-1970s, almost half of Americans said they would “not allow” a communist to give a speech, while only about one-fourth said they would “forbid” him or her from doing so (Schuman and Presser 1981, 277). In the weeks leading up to the 1991 Gulf War, almost two-thirds of Americans were willing to “use military force,” but fewer than half were willing to “engage in combat,” and fewer than 30% were willing to “go to war” (Mueller 1994, 30).
    • Many voters have no opinions on many current issues (p. 31f.).
    • People’s policy preferences are remarkably inconsistent over time, with correlations of just 0.3 to 0.5 between stated policy preferences on two occasions two years apart.
  • Many voters don’t know the positions of the competing parties on the issues, which makes it hard for them to vote for a party based on their policy preferences (p. 32).
    • Lau and Redlawsk (1997; 2006) “found that about 70% of voters, on average, chose the candidate who best matched their own expressed preferences.” (p. 40)
  • If one asks people to place their own policy positions and that of parties on a seven-point issue scale, then issue proximity and vote choice will correlate. But this can be explained by more than one set of causal relationships. Of course, the naive interpretation is that people form a policy opinion and learn about the candidates’ opinions independently. Based on those, they decide which party to vote for. But this model of policy-oriented evaluation is only one possible explanation of the observed correlation between perceived issue proximity and voting behavior. Another is persuasion: Voters already prefer some party, know that party’s policies and then adjust their opinions to better match that party’s opinion. The third is projection: People already know which party to vote for, have some opinions on policy but don’t actually know what the party stands for. They then project their policy positions onto those of the party. (p. 42) Achen and Bartels report on evidence showing that policy-oriented evaluation is only a small contributor to the correlation between perceived issue proximity and vote choices. (p. 42-45)
  • They argue that, empirically, elected candidates often don’t represent the median voter. (p. 45-49)
  • To my surprise, they use Arrow’s impossibility theorem to argue against the feasibility of fair preference aggregation (pp. 26ff.). (See here for a nice video introduction.) Somehow, I always had the impression that Arrow’s impossibility theorem wouldn’t make a difference in practice. (As Arrow himself said, “Most [voting] systems are not going to work badly all of the time. All I proved is that all can work badly at times.”)
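Condorcet’s aggregation logic, and the way correlated errors break it, can be illustrated with a minimal simulation. The accuracy and bias numbers below are hypothetical, chosen only to make the contrast visible:

```python
import random

def majority_correct(n_voters, p_correct, shared_bias_prob, trials=20_000, seed=0):
    """Fraction of elections in which a majority of voters judges correctly.

    With probability shared_bias_prob, a common shock (e.g. the same misleading
    campaign ad reaching everyone) pushes every voter's accuracy below chance.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Correlated error: everyone is wrong more often than right.
        p = (1 - p_correct) if rng.random() < shared_bias_prob else p_correct
        correct_votes = sum(rng.random() < p for _ in range(n_voters))
        wins += correct_votes > n_voters / 2
    return wins / trials

# Independent errors: aggregation works its "miracle" even at 55% accuracy.
print(majority_correct(1001, 0.55, shared_bias_prob=0.0))   # close to 1.0
# Occasional shared misinformation: majorities fail at roughly that same rate.
print(majority_correct(1001, 0.55, shared_bias_prob=0.3))   # close to 0.7
```

With independent errors, a thousand barely-better-than-chance voters almost never produce a wrong majority; a 30% chance of a shared bias translates almost directly into a 30% chance of a wrong collective verdict, since no amount of aggregation cancels an error everyone shares.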

A weaker form of the folk theory is that, while voters may not know specific issues well enough to have an opinion, they do have some ideological preference (such as liberalism or conservatism). But this fails for similar reasons:

  • “Converse […] scrutinized respondents’ answers to open-ended questions about political parties and candidates for evidence that they understood and spontaneously employed the ideological concepts at the core of elite political discourse. He found that about 3% of voters were clearly classifiable as “ideologues,” with another 12% qualifying as “near-ideologues”; the vast majority of voters (and an even larger proportion of nonvoters) seemed to think about parties and candidates in terms of group interests or the “nature of the times,” or in ways that conveyed “no shred of policy significance whatever” (Converse 1964, 217–218; also Campbell et al. 1960, chap. 10).”
  • Correlations between different policy views are only modest. This is not necessarily a bad thing in itself, but it is evidence against ideological voting. (If people fell into distinct ideological groups like liberals, conservatives, etc., one would observe such correlations. E.g., one may expect strong correlations between positions on foreign and domestic policy, given that there are such correlations among political parties.) (p. 32f.)
    • This appears to conflict to some extent with how Haidt’s moral foundations theory characterizes the differences between liberals and conservatives. According to Haidt, conservatives form a cluster of people who care much more about loyalty, authority and sanctity than liberals. This predicts correlations between positions on topics in these domains, e.g. gay marriage and immigration (assuming that people’s loyalty, authority and sanctity intuitions tend to have similar content). However, it doesn’t seem to predict correlations between views on, say, aid to education and isolationism, which were the type of variables asked about in the study by Converse (1964) that Achen and Bartels refer to.
  • “Even in France, the presumed home of ideological politics, Converse and Pierce (1986, chap. 4) found that most voters did not understand political ‘left’ and ‘right.’ When citizens do understand the terms, they may still be uncertain or confused about where the parties stand on the left-right dimension (Butler and Stokes 1974, 323–337). Perhaps as a result, their partisan loyalties and issue preferences are often badly misaligned. In a 1968 survey in Italy, for example, 50% of those who identified with the right-wing Monarchist party took left-wing policy positions (Barnes 1971, 170). […] [C]areful recent studies have repeatedly turned up similar findings. For example, Elizabeth Zechmeister (2006, 162) found “striking, systematic differences … both within and across the countries” in the conceptions of “left” and “right” offered by elite private college students in Mexico and Argentina, while André Blais (personal communication) found half of German voters unable to place the party called “Die Linke” – the Left – on a left-right scale.” (p. 34f.)

Direct democracy

Chapter 3 discusses direct democracy. Besides making the point that everyone seems to believe that “more democracy” is a good thing (pp. 52-60, 70), they argue against a direct democracy version of the folk theory. In my view, the evidence presented in chapter 2 of the book (and the previous section of this summary) already provides strong reasons for skepticism and I think the best case against a direct democracy folk theory is based on arguments of this sort. In line with this view, Achen and Bartels re-iterate some of the arguments, e.g. that the average Joe often adopts other people’s policy preferences rather than making up his own mind (p. 73-76).

Most of the qualitatively new evidence presented in this section, on the other hand, seems quite weak to me. Much of it is aimed at showing that direct democracy has yielded bad results. For example, based on the ratings of Arthur Schlesinger Jr., the Wall Street Journal, C-SPAN and Siena College, the introduction of primary elections hasn’t increased the quality of presidents (p. 66). As they concede themselves, the data set is so small and the ratings of presidents so contentious that this evidence is not very strong at all. They also argue that direct democracy sometimes leads to transparently silly decisions, but the evidence seems essentially anecdotal to me.

Another interesting point of the section is that, in addition to potential ideological motives, politicians usually have strategic reasons to support the introduction of “more democratic” procedures:

[T]hroughout American history, debates about desirable democratic procedures have not been carried out in the abstract. They have always been entangled with struggles for substantive political advantage. In 1824, “politicos in all camps recognized” that the traditional congressional caucus system would probably nominate William Crawford; thus, “how people felt about the proper nominating method was correlated very highly indeed with which candidate they supported” (Ranney 1975, 66). In 1832, “America’s second great party reform was accomplished, not because the principle of nomination by delegate conventions won more adherents than the principle of nomination by legislative caucuses, but largely because the dominant factional interests … decided that national conventions would make things easier for them” (Ranney 1975, 69).

Similarly, Ranney (1975, 122) noted that the most influential champion of the direct primary, Robert La Follette, was inspired “to destroy boss rule at its very roots” when the Republican Party bosses of Wisconsin twice passed him over for the gubernatorial nomination. And in the early 1970s, George McGovern helped to engineer the Democratic Party’s new rules for delegate selection as cochair of the party’s McGovern-Fraser Commission, and “praised them repeatedly during his campaign for the 1972 nomination”; but less than a year later he advocated repealing some of the most significant rules changes. Asked why McGovern’s views had changed, “an aide said, ‘We were running for president then’” (Ranney 1975, 73–74).

I expect that this is a quite common phenomenon in decisions about which decision process to use. E.g., when an organization chooses a decision procedure (e.g., who will make the decision, what kind of evidence is accepted as valid), members of the organization might base their choice less on general principles (e.g., balance, avoidance of cognitive biases and groupthink) than on which decision process will yield their favored results in specific object-level decisions (e.g., who gets a raise, whether my preferred project is funded).

I suspect that processes instantiated for only a single decision are affected even more strongly by this problem. An example is deciding how to do AI value loading, e.g. which idealization procedures to use.

The Retrospective Theory of Political Accountability

In chapter 4, Achen and Bartels discuss an attractive alternative to the folk theory: retrospective voting. On this view, voters decide not so much based on policy preferences as on how well the candidates or parties have performed in the past. For example, a president under whom the economy improved may be re-elected. This theory is plausible as a descriptive theory for a number of reasons:

  • There is quite some empirical evidence that retrospective voting describes what voters are doing (ch. 5-7).
  • Retrospective voting, i.e. evaluating whether the passing term went well, is much easier than policy-based voting, i.e. deciding which candidate’s proposed policies will work better in the future (p. 91f.).

The retrospective theory also has some normative appeal:

  • It selects for good leaders (p. 98-100).
  • It incentivizes politicians to do what is best for the voters (p. 100-102).
  • To some extent it allows politicians to do what is best for the voters even if the voters disagree on what is best (p. 91).

While Achen and Bartels agree that retrospective voting is a large part of the descriptive picture, they also argue that, at least in the way it is implemented by real-world voters, “its implications for democracy are less unambiguously positive than existing literature tends to suggest”:

  • Continuing the theme of the ignorance of the electorate: voters’ evaluations of the past term and the current situation are unreliable (p. 92f.). For example, their perception of environmental threats does not correlate much with that of experts (p. 106), they think crime is increasing when it is in fact stable or decreasing (p. 107), and they cannot assess the state of the economy (p. 107f.).
    • Media coverage, partisan bias, popular culture, etc. often shape people’s judgments (p. 107, 138-142).
  • Voters are unable to differentiate whether bad times are an incumbent’s fault or not (p. 93). Consequently, there is some evidence that incumbents tend to be punished for shark attacks, droughts and floods (ch. 5).
  • “The theories of retrospective voting we have considered assume that voters base their choices at the polls entirely on assessments of how much the incumbent party has contributed to their own or the nation’s well-being. However, when voters have their own ideas about good policy, sensible or not, they may be tempted to vote for candidates who share those ideas, as in the spatial model of voting discussed in chapter 2. In that case incumbent politicians may face a dilemma: should they implement the policies voters want or the policies that will turn out to contribute to voters’ welfare?” (p. 109, also see pp. 108-111)
    • “[E]lected officials facing the issue of fluoridating drinking water in the 1950s and 1960s were significantly less likely to pander to their constituents’ ungrounded fears when longer terms gave them some protection from the “sudden breezes of passion” that Hamilton associated with public opinion.” (p. 110)
  • The electorate’s decisions are often based only on the most recent events, in particular economic growth in the past year or so (cf. the peak-end rule). This not only makes their judgments worse than necessary (as they throw away information), it also sets the wrong incentives for the incumbent. Indeed, there is some evidence of a “political business cycle”, i.e. of politicians attempting to maximize growth, in particular growth of real income, in the last year of their term. (See chapter 6. Additional evidence is given in ch. 7.)
  • “Another way to examine the effectiveness of retrospective voting is to see what happens after each election. If we take seriously the notion that reelection hinges on economic competence, one implication is that we should expect to see more economic growth when the incumbent party is reelected than when it is dismissed by the voters. In the former case the incumbent party has presumably been retained because its past performance makes it a better than average bet to provide good economic management in the future. In the latter case the new administration is presumably an unknown quantity, a random draw from some underlying distribution of economic competence. A secondary implication of this logic is that future economic performance should be less variable when the incumbent party is retained, since reelected administrations are a truncated subsample of the underlying distribution of economic competence (the worst economic performers having presumably been weeded out at reelection time).” (p. 164) Based on a tiny sample (US presidential elections between 1948-2008), this does not seem to be the case. Of course, one could argue that the new administration often is not a random quantity – the parties in US presidential elections are almost always the same and the candidates have often proven themselves in previous political roles. In fact, the challenger may have a longer track record than the incumbent. For example, this may come to be the case in 2020.
  • Using a subset of the same tiny sample, they show that post-reelection economic growth is not a predictor of popular vote margin (p. 166-168). So, retrospective voting as current voters apply it doesn’t seem to work in selecting competent leaders. That said, and as Achen and Bartels acknowledge themselves (p. 168), the evidence they use is only very tentative.
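The selection logic in the quoted passage can be checked with a toy simulation (all parameters hypothetical; this is not Achen and Bartels’ data). If voters retain incumbents whose observed performance was good, the theory predicts that future growth under reelected incumbents should be both higher on average and less variable than under fresh challengers:

```python
import random
import statistics

def simulate(n=50_000, seed=1):
    """Toy model: retain the incumbent iff observed term growth was positive."""
    rng = random.Random(seed)
    retained, replaced = [], []
    for _ in range(n):
        competence = rng.gauss(0, 1)                 # incumbent's true competence
        term_growth = competence + rng.gauss(0, 1)   # noisy observed performance
        if term_growth > 0:
            # Reelected: future growth driven by the same (selected) competence.
            retained.append(competence + rng.gauss(0, 1))
        else:
            # Challenger wins: a fresh random draw of competence.
            replaced.append(rng.gauss(0, 1) + rng.gauss(0, 1))
    return retained, replaced

retained, replaced = simulate()
# Retained incumbents: higher mean growth, lower spread (truncated subsample).
print(statistics.mean(retained), statistics.mean(replaced))
print(statistics.stdev(retained), statistics.stdev(replaced))
```

That this pattern does not show up in the actual postwar election data is exactly the anomaly Achen and Bartels point to, though, as noted above, the sample is tiny and challengers are rarely true random draws.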

Overall, the electorate’s evaluation of a candidate may be some indicator of how well they are going to perform in the future, but it is an imperfect and manipulable one.

Group loyalties and social identities

In addition to retrospective voting, Achen and Bartels tentatively propose that group loyalties and social identities play a big role in politics. Whereas the retrospection theory appears to be relatively well-studied, this new theory is much less worked out (pp. 230f.).

It seems clear that vast parts of psychology, and social psychology in particular – Achen and Bartels refer to ingroups and outgroups, Asch’s conformity experiments, cognitive dissonance, rationalization, etc. – should be a significant explanatory factor in political science. Indeed, Achen and Bartels start chapter 8 by noting that the relevance of social psychology for politics had been recognized by past generations of researchers (pp. 213-222); it only became unpopular when some theories it was associated with failed (pp. 222-225).

Achen and Bartels discuss a few ways in which social groups, identities and loyalties influence voting behavior:

  • While voters’ retrospection focuses on the months leading up to the election, these short-term retrospections translate into the formation of long-term partisan loyalties. So, in a way, partisan loyalties are, in part, the accumulation of these short-term retrospections (p. 197-199).
  • Many people are loyal to one party (p. 233).
  • People adopt the political views of the groups they belong to or identify with (p. 219f., 222f., 246-, p. 314).
    • People often adopt the party loyalties of their parents (p. 233f.).
    • People adopt the views of their party (or project their views onto the party) (ch. 10). Party identification also influences one’s beliefs about factual matters. For example, when an opposing party is in office people judge the economy as worse (pp. 276-284).
  • People reject the political views of groups that they dislike (pp. 284-294).
  • People choose candidates based on what they perceive to be best for their group (p. 229).
  • Catholic voters (even ones who rarely go to church) tend to prefer Catholic candidates, even if the candidate emphasizes the separation of church and state (pp. 238-246).
  • If, say, Catholics discriminate against Jews, then Jews are much less likely to vote for a Catholic candidate or a party dominated by Catholics (p. 237f.).
  • Better-informed voters are often influenced more strongly by identity issues, presumably because they are more aware of them (pp. 284-294). For example, they are sometimes less likely than worse-informed voters to get the facts right (p. 283).
  • “When political candidates court the support of groups, they are judged in part on whether they can ‘speak our language.’ Small-business owners, union members, evangelical Christians, international corporations – each of these has a set of ongoing concerns and challenges, and a vocabulary for discussing them. Knowing those concerns, using that vocabulary, and making commitments to take them seriously is likely to be crucial for a politician to win their support (Fenno 1978).“

Unfortunately, I think that Achen and Bartels stretch the concept of identity-based voting a bit too much. The clearest example is their analysis of the case of abortion (pp. 258-266). Women tend to have more stable views on abortion than men. They are also more likely to leave the Republican party if they are pro-choice and less likely to assimilate their opinions to that of their party. Achen and Bartels’ explanation is that women’s vote is affected by their identifying as women. But I don’t see why it is necessary to bring the concept of identity into this. A much simpler explanation would be that voters are, to some extent, selfish and thus put more weight on the issues that are most relevant to them. If this counts as voting based on identity, is there any voting behavior that cannot be ascribed to identities?

I also find many of the explanations based on social identity unsatisfactory – they often don’t really explain a phenomenon. For example, Achen and Bartels argue that the partisan realignment of white southerners in the second half of the 20th century was not so much driven by racial policy issues but by white southern identity (pp. 246-258). But they don’t explain how white southern identity led people into the open arms of the Republicans. For example, was it that Republicans explicitly appealed to that identity? Or did southern opinion leaders change their mind based on policy issues?

Implications for democracy

Chapter 11 serves as a conclusion of the book. It summarizes some of the points made in earlier sections but also discusses the normative implications.

Unsurprisingly, Achen and Bartels argue against naive democratization:

[E]ffective democracy requires an appropriate balance between popular preferences and elite expertise. The point of reform should not simply be to maximize popular influence in the political process but to facilitate more effective popular influence. We need to learn to let political parties and political leaders do their jobs, too. Simple-minded attempts to thwart or control political elites through initiatives, direct primaries, and term limits will often be counterproductive. Far from empowering the citizenry, the plebiscitary implications of the folk theory have often damaged people’s real interests. (p. 303)

At the same time, they again point out that elite political judgment is often not much better than that of the worse-informed majority. In addition to being more aware of identity issues, the elites are a lot better at rationalizing, which makes them sound more rational, but often does not yield more rational opinions (pp. 309-311).

Another interesting point they make is that it is usually the least-informed voters who decide who wins an election because the non-partisan swing voters tend to be relatively uninformed (p. 312, also p. 32).

Achen and Bartels give some reasons why democracy might be better than its alternatives. I think the arguments, as given in the book, drastically vary in appeal, but here are all five:

  • “[E]lections generally provide authoritative, widely accepted agreement about who shall rule. In the United States, for example, even the bitterly contested 2000 presidential election – which turned on a few hundred votes in a single state and a much-criticized five-to-four Supreme Court decision – was widely accepted as legitimate. A few Democratic partisans continued to grumble that the election had been “stolen”; but the winner, George W. Bush, took office without bloodshed, or even significant protest, and public attention quickly turned to other matters.” This makes sense, although it would have been interesting to test this argument empirically. I.e., is violent power struggle more or less prevalent in democracies than in other forms of government, such as hereditary monarchies? (I would guess that it is less prevalent in democracies.)
  • “[I]n well-functioning democratic systems, parties that win office are inevitably defeated at a subsequent election. They may be defeated more or less randomly, due to droughts, floods, or untimely economic slumps, but they are defeated nonetheless. Moreover, voters seem increasingly likely to reject the incumbent party the longer it has held office, reinforcing the tendency for governmental power to change hands. This turnover is a key indicator of democratic health and stability. It implies that no one group or coalition can become entrenched in power, unlike in dictatorships or one-party states where power is often exercised persistently by a single privileged segment of society. And because the losers in each election can reasonably expect the wheel of political fortune to turn in the not-too-distant future, they are more likely to accept the outcome than to take to the streets.” (p. 317) Here it is not so clear whether this constant change is a good thing. Having the same party, group or person rule for long stretches of time ensures stability and avoids friction between successive governments. It also ensures that office is most of the time held by politicians with experience. Presumably, Achen and Bartels are right in judging high turnover as beneficial, but they have little evidence to back it up.
  • “[E]lectoral competition also provides some incentives for rulers at any given moment to tolerate opposition. The notion that citizens can oppose the incumbent rulers and organize to replace them, yet remain loyal to the nation, is fundamental both to real democracy and to social harmony.” (p. 317f.) This also seems non-obvious. Perhaps the monarchist could argue that only rulers who do not have to worry about losing their position can fruitfully engage with criticism. They also have less reason to get the press under their control (although, empirically, dictators usually use their power to limit the press in ways that democratic governments cannot).
  • “[A] long tradition in political theory stemming from John Stuart Mill (1861, chap. 3) has emphasized the potential benefits of democratic citizenship for the development of human character (Pateman 1970). Empirical scholarship focusing squarely on effects of this sort is scant, but it suggests that democratic political engagement may indeed have important implications for civic competence and other virtues (Finkel 1985; 1987; Campbell 2003; Mettler 2005). Thus, participation in democratic processes may contribute to better citizenship, producing both self-reinforcing improvements in ‘civic culture’ (Almond and Verba 1963) and broader contributions to human development.” (p. 318) This may be true, but it appears to be a relatively weak consideration. Perhaps the monarchist could counter that doing away with elections saves people more time than the improvements in “civic culture” are worth. They may not be as virtuous, but maybe they can nonetheless spend more time with their family and friends or create more economic value.
  • “Finally, reelection-seeking politicians in well-functioning democracies will strive to avoid being caught violating consensual ethical norms in their society. As Key (1961a, 282) put it, public opinion in a democracy ‘establishes vague limits of permissiveness within which governmental action may occur without arousing a commotion.’ Thus, no president will strangle a kitten on the White House lawn in view of the television cameras. Easily managed governmental tasks will get taken care of, too. Chicago mayors will either get the snow cleared or be replaced, as Mayor Michael Bilandic learned in the winter of 1979. Openly taking bribes will generally be punished. When the causal chain is clear, the outcome is unambiguous, and the evaluation is widely shared, accountability will be enforced (Arnold 1990, chap. 3). So long as a free press can report dubious goings-on and a literate public can learn about them, politicians have strong incentives to avoid doing what is widely despised. Violations occur, of course, but they are expensive; removal from office is likely. By contrast, in dictatorships, moral or financial corruption is more common because public outrage has no obvious, organized outlet. This is a modest victory for political accountability.” (p. 318f.) Of the five reasons given, I find this one the most convincing. It basically states that retrospective voting and, to some extent, even the folk theory work; they just don’t work as well as one might naively imagine. So, real-world democracy doesn’t do a better job than a coin flip at representing people’s “real opinions” on controversial issues like abortion. It does ensure, however, that important, universally agreed-upon measures will be implemented.

In their last section, Achen and Bartels propose an idea for how to make governments more responsive to the interests of the people. Noting that elites have much more influence, they suggest that economic and social equality, as well as limitations on lobbying and campaign financing, could make governments more responsive to the preferences of the people. While plausibly helpful, these ideas are much more trite than the rest of the book.

General comments

  • Overall I recommend reading the book if you’re interested in the topic.
  • Since I don’t know the subject area particularly well, I read a few reviews of the book (Paris 2016; Schwennicke, Cohen, Roberts, Sabl, Mares, and Wright 2017; Malhotra 2016; Mann 2016; Cox 2017; Somin 2016). All of these seemed positive overall. Some even said that large parts of the book are more mainstream than the authors claim (which is a good thing in my book).
  • It’s quite America-centric. Sometimes an analysis of studies conducted in the US is followed by references to papers confirming the results in other countries, but often it is not. In many ways, politics in the US differs from that in other countries, e.g. only two parties matter, and the variability in wealth and education within the US is much bigger than in many other Western nations. This makes me unsure to what extent many of the results carry over to other countries. The US focus also unnecessarily limits sample sizes. E.g., one analysis (p. 165) relates whether the incumbent party was replaced to post-presidential-election income and GDP growth in the years 1948-2008 in the US. It seems hard to conclude all that much from 16 data points. Perhaps taking a look at other countries would have been a cheap way to increase the sample size. Because the book is not about the details of particular democratic systems, it seems quite accessible to non-American readers with only superficial knowledge of US politics and history.
  • It often gives a lot of detail on how empirical evidence was gathered and analyzed. E.g., all of chapter seven is about how people’s voting behavior after the Great Depression – which is often explained by policy preferences (in the US related to Roosevelt’s New Deal) – can be explained well by retrospective voting.
  • I also feel like the book is somewhat balanced despite the authors’ views differing somewhat from the mainstream within political science. E.g., they often mention explicitly what the mainstream view is and refer to studies supporting that view. I also feel like they are relatively transparent about how reliable or tentative the empirical evidence for some parts of the book is.
  • A similar book is Jason Brennan’s Against Democracy, which I haven’t read. As suggested by the names, Against Democracy differs from Democracy for Realists in that it proposes epistocracy as an alternative form of government.

Acknowledgements

I thank Max Daniel and Stefan Torges for comments.

Talk on Multiverse-wide Cooperation via Correlated Decision-Making

In the past few months, I thought a lot about the implications of non-causal decision theory. In addition to writing up my thoughts in a long paper that we plan to publish on the FRI website soon, I also prepared a presentation, which I delivered to some researchers at FHI and my colleagues at FRI/EAF. Below you can find a recording of the talk.

The slides are available here.

Given the original target audiences, the talk assumes prior knowledge of a few topics:

The average utilitarian’s solipsism wager

The following prudential argument is relatively common in my circles: We probably live in a simulation, but if we don’t, our actions matter much more. Thus, expected value calculations are dominated by the utility under the assumption that we (or some copies of ours) are in the real world. Consequently, the simulation argument affects our prioritization only slightly — we should still mostly act under the assumption that we are not in a simulation.

A commonly cited analogy is due to Michael Vassar: “If you think you are Napoleon, and [almost] everyone that thinks this way is in a mental institution, you should still act like Napoleon, because if you are, your actions matter a lot.” An everyday application of this kind of argument is the following: Probably, you will not be in an accident today, but if you are, the consequences for your life are enormous. So, you better fasten your seat belt.

Note how these arguments do not affect the probabilities we assign to some event or hypothesis. They are only about the event’s (or hypothesis’) prudential weight — the extent to which we tailor our actions to the case in which the event occurs (or the hypothesis is true).
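The structure of these prudential arguments can be made explicit with a toy expected-value calculation. The following sketch uses the seat-belt example; all numbers are made up for illustration and do not come from the post:

```python
# Toy expected-value calculation illustrating "prudential weight":
# the hypothesis (an accident today) is unlikely, but the stakes if it
# occurs are large, so the action is tailored to that branch.
# All numbers below are invented for illustration.

p_accident = 1e-4          # probability of an accident today
harm_if_unbelted = 1000.0  # disutility of an accident without a seat belt
harm_if_belted = 100.0     # disutility of an accident with a seat belt
cost_of_belting = 0.01     # minor inconvenience of buckling up

ev_unbelted = p_accident * harm_if_unbelted
ev_belted = p_accident * harm_if_belted + cost_of_belting

# Despite the low probability, buckling up has lower expected disutility:
# the decision is dominated by the low-probability, high-stakes branch.
assert ev_belted < ev_unbelted
```

Note that nothing in the calculation changes the probability of the accident; only the large stakes in one branch shift which action wins.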

For total utilitarians (and many other consequentialist value systems), similar arguments apply to most theories postulating a large universe or multiverse. To the extent that it makes a difference for our actions, we should tailor them to the assumption that we live in a large multiverse with many copies of us because under this assumption we can affect the lives of many more beings.

For average utilitarians, the exact opposite applies. Even if they have many copies, they will have an impact on a much smaller fraction of beings if they live in a large universe or multiverse. Thus, they should usually base their actions on the assumption of a small universe, such as a universe in which Earth is the only inhabited planet. This may already have some implications, e.g. via the simulation argument or the Fermi paradox. If they also take the average over time — I do not know whether this is the default for average utilitarianism — they would also base their actions on the assumption that there are just a few past and future agents. So, average utilitarians are subject to a much stronger Doomsday argument.

Maybe the bearing of such prudential arguments is even more powerful, though. There is some chance that metaphysical solipsism is true: the view that only my (or your) own mind exists and that everything else is just an illusion. If solipsism were true, our impact on average welfare (or average preference fulfillment) would be enormous, perhaps 7.5 billion times bigger than it would be under the assumption that the rest of Earth’s population exists — about 100 billion times bigger if you also count humans that have lived in the past. Solipsism seems to deserve a probability larger than one in 7.5 (or 100) billion. (In fact, I think solipsism is likely enough for this to qualify as a non-Pascalian argument.) So, perhaps average utilitarians should maximize primarily for their own welfare?
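The threshold in this wager can be checked with a back-of-the-envelope calculation. The sketch below uses the rough population figures from the text; the variable names and the simplified "impact scales as 1/N" model are my own assumptions:

```python
# Back-of-the-envelope check of the solipsism wager for average
# utilitarianism. Under average utilitarianism, the impact of helping
# one person scales roughly as 1/N, where N is the number of minds.
# Population figures are the rough ones used in the text.

n_current = 7.5e9    # people alive today (rough)
n_ever = 100e9       # rough count including past humans

# If solipsism is true, N = 1, so per-action impact on the average is
# n_current (or n_ever) times larger than under the mundane hypothesis.
leverage_current = n_current / 1
leverage_ever = n_ever / 1

# The wager has bite roughly when P(solipsism) * leverage exceeds
# P(not solipsism) * 1, i.e. when P(solipsism) > 1 / leverage.
threshold_current = 1 / leverage_current
threshold_ever = 1 / leverage_ever

# The more minds the mundane hypothesis posits, the lower the
# probability solipsism needs to dominate the calculation.
assert threshold_ever < threshold_current
```

On this simplified model, any credence in solipsism above roughly one in 7.5 billion (or one in 100 billion, counting past humans) is enough for the solipsistic branch to dominate the average utilitarian's expected impact.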

Acknowledgement

The idea of this post is partly due to Lukas Gloor.