Three wagers for multiverse-wide superrationality

In this post, I outline three wagers in favor of the hypothesis that multiverse-wide superrationality (MSR) has action-guiding implications. MSR is based on three core assumptions:

  1. There is a large or infinite universe or multiverse.
  2. One should apply an acausal (rather than a causal) decision theory.
  3. An agent’s actions provide evidence about the actions of other, non-identical agents with different goals in other parts of the universe.

There are three wagers corresponding to these three assumptions. The wagers work only for value systems that can also benefit from MSR (for instance, total utilitarianism) (see Oesterheld, 2017, sec. 3.2). I assume such a value system in this post. I am currently working on a longer paper about a wager for assumption 2, which will discuss the premises for that wager in more detail.

A wager for acausal decision theory and a large universe

If this universe is very large or infinite, then it is likely that there is an identical copy of the part of the universe that is occupied by humans somewhere far away in space (Tegmark, 2003, p. 464). Moreover, there will be vastly many or infinitely many such copies. Hence, for example, if an agent prevents a small amount of suffering on Earth, this will be accompanied by many copies doing the same, so that many times that amount of suffering is averted throughout the universe.

Causal decision theory (CDT) does not take the impact of an agent’s copies into account when making decisions: there is an evidential dependence between the agent’s actions and the actions of their copies, but no causal influence. According to evidential decision theory (EDT), on the other hand, an agent should take such dependences into account when evaluating different choices. For EDT, a choice between two actions on Earth is also a choice between the actions of all copies throughout the universe. The same holds for all other acausal decision theories (i.e., decision theories that take such evidential dependences into account): for instance, for the decision theories developed by MIRI researchers, such as functional decision theory (Yudkowsky and Soares, 2017), and for Poellinger’s variation of CDT (Poellinger, 2013).

Each of these considerations on its own would not be able to get a wager off the ground. But jointly, they can do so: on the one hand, given a large universe, acausal decision theories will claim a much larger impact with each action than causal decision theory does. Hence, there is a wager in favor of these acausal decision theories. Suppose an agent applies some meta decision theory (see MacAskill, 2016, sec. 2) that aggregates the expected utilities provided by individual decision theories. Even if the agent assigns a small credence to acausal decision theories, these theories will still dominate the meta decision theory’s expected utilities. On the other hand, if an agent applies an acausal decision theory, they can have a much higher impact in a large universe than in a small universe. The agent should thus always act as if the universe is large, even if they only assign a very small credence to this hypothesis.
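
To make the aggregation step concrete, here is a minimal sketch of the argument, assuming a simple credence-weighted meta decision theory. All numbers are hypothetical placeholders, not estimates from this post:

```python
# A minimal sketch of the credence-weighted aggregation described above.
# All numbers are hypothetical placeholders, not estimates from this post.

N_COPIES = 1e30          # copies of Earth's region if the universe is large
P_LARGE_UNIVERSE = 1e-6  # credence that the universe is large
P_ACAUSAL_DT = 1e-3      # credence that an acausal decision theory is correct

def local_utility(action):
    """Utility of the action, counting only its causal effects on Earth."""
    return {"prevent_suffering": 1.0, "do_nothing": 0.0}[action]

def eu_causal(action):
    # CDT: only the single, causally produced outcome counts.
    return local_utility(action)

def eu_acausal(action):
    # EDT-style: the action is evidence about what every copy does,
    # so its value scales with the expected number of copies.
    expected_copies = 1 + P_LARGE_UNIVERSE * N_COPIES
    return expected_copies * local_utility(action)

def eu_meta(action):
    # Aggregate the theories' verdicts, weighted by the credence in each theory.
    return (P_ACAUSAL_DT * eu_acausal(action)
            + (1 - P_ACAUSAL_DT) * eu_causal(action))

for a in ("prevent_suffering", "do_nothing"):
    print(a, eu_meta(a))

# Even with tiny credences, the acausal term (~1e-3 * 1e-6 * 1e30 = 1e21)
# dominates, so the meta verdict tracks the acausal theory in a large universe.
```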

In conclusion, most of an agent’s impact comes from applying an acausal decision theory in a large universe. Even if the agent assigns a small credence both to acausal decision theories and to the hypothesis that the universe is large, they should still act as if they placed a high credence in both.

A wager in favor of higher correlations

In explaining the third wager, it is important to note that I assume a subjective interpretation of probability. If I say that there is a correlation between the actions of two agents, I mean that, given one’s subjective beliefs, observing one agent’s action provides evidence about the other agent’s action. Moreover, I assume that agents are in a symmetrical decision situation—for instance, this is the case for two agents in a prisoner’s dilemma. If the decision situation is symmetrical, and if the agents are sufficiently similar, their actions will correlate. The theory of MSR says that agents in a large universe probably are in a symmetrical decision situation (Oesterheld, 2017, sec. 2.8).
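
In this subjective sense, the correlation can be written as a simple conditional-probability statement (standard notation, added here only for illustration): observing that one agent takes an action raises the probability that the other agent takes the corresponding action.

```latex
% One agent's choosing action a is evidence about the other's choice iff
P(\text{Bob does } a \mid \text{Alice does } a) \;>\; P(\text{Bob does } a)
```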

There exists no general theory of correlations between different agents. It seems plausible to assume that a correlation between the actions of two agents must be based on a logical correlation between the decision algorithms that these two agents implement. But it is not clear how to think about the decision algorithms that humans implement, for instance, or how to decide whether two decision algorithms are functionally equivalent (Yudkowsky and Soares, 2017, sec. 3). Solutions to these problems exist only in some narrow domains—for instance, for agents represented by programs written in some specific programming language.

Hence, it is also not clear which agents’ actions in a large universe correlate, given that all are in a symmetrical decision situation. It could be that an agent’s actions correlate only with those of very close copies. Since such copies share the agent’s values, MSR would then not have any action-guiding consequences: the agent would just continue to pursue their original goal function. If, on the other hand, there are many correlating agents with different goals, then MSR has strong implications. In the latter case, there can be gains from trade between these agents’ different value systems.

Just as there is a wager for applying acausal decision theory in general, there is also a wager in favor of assuming that an agent’s actions correlate with more rather than fewer different agents. Suppose there are two hypotheses: (H1) Alice’s actions only correlate with the actions of (G1) completely identical copies of Alice, and (H2) Alice’s actions correlate with (G2) all other agents that ever gave serious consideration to MSR or some equivalent idea.

(In both cases, I assume that Alice has seriously considered MSR herself.) G1 is a subset of G2, and it is plausible that G2 is much larger than G1. Moreover, it is plausible that there are agents with Alice’s values among the agents in G2 that are not also in G1. Suppose 1-p is Alice’s credence in H1, and p her credence in H2. Suppose further that there are n agents in G1 and m agents in G2, and that q is the fraction of agents in G2 sharing Alice’s values. All agents have the choice between (A1) only pursuing their own values, and (A2) pursuing the sum over the values of all agents in G2. Choosing A1 gives an agent 1 utilon. Suppose g denotes the possible gains from trade; that is, choosing A2 produces (1+g)×s utilons for each value system, where s is the fraction of agents in G2 supporting that value system. If everyone in G2 chooses A2, this produces (1+g)×q×m utilons for Alice’s value system, while, if everyone chooses A1, this produces only q×m utilons in total for Alice.

The decision situation for Alice can be summarized by the following choice matrix (assuming, for simplicity, that all correlations are perfect):

        H1              H2
A1      n+c             q×m
A2      (1+g)×q×n+c     (1+g)×q×m

Here, the cells denote the expected utilities that EDT assigns to either of Alice’s actions given either H1 or H2. c is a constant that denotes the expected value generated by the agents in G2 that are non-identical to Alice, given H1. It plays no role in comparing A1 and A2, since, given H1, these agents are not correlated with Alice: the value will be generated no matter which action she picks. The value for H1∧A2 is unrealistically high, since it supposes the same gains from trade as H2∧A2, but this does not matter here. According to EDT, Alice should choose A2 over A1 iff

g×p×q×m > (1-p)×n – (1+g)×(1-p)×n×q.
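
For completeness, this inequality follows from the choice matrix by a routine rearrangement (using only the notation defined above):

```latex
\begin{align*}
EU(A_2) &> EU(A_1)\\
\iff\quad p(1+g)qm + (1-p)\,[(1+g)qn + c] &> pqm + (1-p)(n + c)\\
\iff\quad gpqm + (1+g)(1-p)qn &> (1-p)n\\
\iff\quad gpqm &> (1-p)n - (1+g)(1-p)nq
\end{align*}
```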

It seems likely that q×m is larger than n—the requirement that an agent must be an exact copy of Alice restricts the space of agents more than the requirement of having thought about MSR and sharing Alice’s values. Therefore, even if the gains from trade and Alice’s credence in H2 are relatively small (so that g×p is small), g×p×q×m can still exceed n, and hence the right-hand side of the inequality, so EDT recommends A2.
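
As a numerical illustration (the parameter values below are made up for illustration only, not estimates defended in this post), one can compute both expected utilities from the choice matrix and check the inequality directly:

```python
# Hypothetical parameter values, chosen only for illustration.
p = 0.1     # credence in H2 (correlations beyond exact copies)
n = 1e6     # number of exact copies of Alice (G1)
m = 1e15    # number of agents that seriously considered MSR (G2)
q = 1e-3    # fraction of G2 sharing Alice's values
g = 0.05    # gains from trade
c = 0.0     # value from uncorrelated G2 agents under H1 (cancels out anyway)

# Expected utilities for Alice's value system, read off the choice matrix.
eu_a1 = (1 - p) * (n + c) + p * (q * m)
eu_a2 = (1 - p) * ((1 + g) * q * n + c) + p * ((1 + g) * q * m)

print(f"EU(A1) = {eu_a1:.3e}, EU(A2) = {eu_a2:.3e}, choose A2: {eu_a2 > eu_a1}")

# Equivalently, the inequality from the text: g*p*q*m (= 5e9) comfortably
# exceeds (1-p)*n - (1+g)*(1-p)*n*q (< 1e6), so EDT recommends A2.
lhs = g * p * q * m
rhs = (1 - p) * n - (1 + g) * (1 - p) * n * q
print(f"{lhs:.3e} > {rhs:.3e}: {lhs > rhs}")
```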

While the argument for this wager is not as strong as the argument for the first two wagers, it is still plausible. There are plausibly many more agents who have thought about MSR and share a person’s values than there are identical copies of that person. Hence, if the person’s actions correlate with the actions of all the agents in the larger group, the person’s actions have a much higher impact. Moreover, in this case, they plausibly also correlate with the actions of many agents holding different values, allowing for gains from trade. Therefore, one should act as if there were more rather than fewer correlations, even if one assigns a rather low credence to that hypothesis.

Acknowledgements

I am grateful to Caspar Oesterheld and Max Daniel for helpful comments on a draft of this post. I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.

A wager against Solomonoff induction

The universal prior assigns zero probability to non-computable universes—for instance, universes that could only be represented by Turing machines in which uncountably many locations need to be updated, or universes in which physics solves the halting problem. While such universes may well not exist, one cannot justify assigning literally zero credence to their existence. I argue that it is of overwhelming importance to make a potential AGI assign a non-zero credence to incomputable universes—in particular, universes with uncountably many “value locations”.
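
For reference, the universal prior can be written roughly as follows (the standard Solomonoff semimeasure over a universal prefix machine U; the formula is included only for context). Because the sum ranges over programs, hypotheses that no program can compute receive zero weight:

```latex
% M(x): prior probability of observation sequence x
% sum over all programs p whose output on the universal machine U starts with x
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```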

Here, I assume a model of universes as sets of value locations. Given a specific goal function, each element in such a set could specify an area in the universe with some finite value. If a structure contains a sub-structure, and both the structure and the sub-structure are valuable in their own right, there could be either one or two elements representing this structure in the universe’s set of value locations. If a structure is made up of infinitely many sub-structures, all of which the goal function assigns some positive but finite value to, then this structure could possibly (if the sum of the values does not converge) only be represented by infinitely many elements in the set. If the set of value locations representing a universe is countable, then the value of that universe could be the sum over the values of all elements in the set (granted that some ordering of the elements is specified). I write that a universe is “countable” if it can be represented by a finite or countably infinite set, and that a universe is “uncountable” if it can only be represented by an uncountably infinite set.
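
As a minimal formalization of this model (my own notation, offered only as a sketch), the value of a countable universe could be an ordered sum over its value locations, while an uncountable universe would require something like an integral with respect to a measure:

```latex
% Countable universe U with value locations x_1, x_2, ... under goal function v:
V(U) = \sum_{i=1}^{\infty} v(x_i)

% One possible analogue for an uncountable universe, with a measure \mu over
% the set X of value locations (an assumption for illustration, not from the post):
V(U) = \int_{X} v(x) \, d\mu(x)
```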

A countable universe, for example, could be a regular cellular automaton. If the automaton has infinitely many cells, then, given a goal function such as total utilitarianism, the automaton could be represented by a countably infinite set of value locations. An uncountable universe, on the other hand, could be a cellular automaton in which there is a cell for each real number, and in which interactions between cells over time are specified by a mathematical function. Given some utility function over such a universe, one might only be able to represent it by an uncountably infinite set of value locations. Importantly, even though such a universe could be described in logic, it would be incomputable.

Depending on one’s approach to infinite ethics, an uncountable universe could matter much more than a countable universe. Agents in uncountable universes might—with comparatively small resource investments—be able to create (or prevent), for instance, amounts of happiness or suffering that could not be created in an entire countable universe. For instance, each cell in the abovementioned cellular automaton might consist of some (possibly valuable) structure in and of itself, and the cells’ structures might influence each other. Moreover, some (uncountable) set of cells might be regarded as an agent. The agent might then be able to create a positive amount of happiness in uncountably many cells, which—at least given some definitions of value and approaches to infinite ethics—would amount to more value than could ever be created in a countable universe.

Therefore, there is a wager in favor of the hypothesis that humans actually live in an uncountable universe, even if it appears unlikely given current scientific evidence. But there is also a different wager, which applies if there is a chance that such a universe exists, regardless of whether humans live in that universe. It is unclear which of the two wagers dominates.

The second wager is based on acausal trade: there might be agents in an uncountable universe that do not benefit from the greater possibilities of their universe—e.g., because they do not care about the number of individual copies of some structure, but instead care about an integral over the structures’ values relative to some measure over structures. While agents in a countable universe might be able to benefit those agents equally well, they might be much worse at satisfying the values of agents with goals sensitive to the greater possibilities in uncountable universes. Thus, due to different comparative advantages, there could be great gains from trade between agents in countable and uncountable universes.

The above example might sound outlandish, and it might be flawed in that one could not actually come up with interaction rules that would lead to anything interesting happening in the cellular automaton. But this is irrelevant. It suffices that there is only the faintest possibility that an AGI could have an acausal impact in an incomputable universe which, according to one’s goal function, would outweigh all impact in all computable universes. There probably exists a possible universe like that for most goal functions. Therefore, one could be missing out on virtually all impact if the AGI employs Solomonoff induction.

There might not only be incomputable universes represented by sets with the cardinality of the continuum; there might be incomputable universes represented by sets of any cardinality. In the same way that there is a wager for the former, there is an even stronger wager for universes with even higher cardinalities. If there is a universe of highest cardinality, it appears to make sense to optimize only for acausal trade with that universe. Of course, there could be infinitely many different cardinalities, so one might hope that the values of agents in universes of ever higher cardinalities converge in some way (which might make it easier to trade with these agents).

In conclusion, there is a wager in favor of considering the possibility of incomputable universes: even a small acausal impact (relative to the total resources available) in an incomputable universe could counterbalance everything humans could do in a computable universe. Crucially, an AGI employing Solomonoff induction will not consider this possibility, hence potentially missing out on unimaginable amounts of value.

Acknowledgements

Caspar Oesterheld and I came up with the idea for this post in a conversation. I am grateful to Caspar Oesterheld and Max Daniel for helpful feedback on earlier drafts of this post.