Three wagers for multiverse-wide superrationality

In this post, I outline three wagers in favor of the hypothesis that multiverse-wide superrationality (MSR) has action-guiding implications. MSR is based on three core assumptions:

  1. There is a large or infinite universe or multiverse.
  2. Some acausal decision theory should be applied.
  3. An agent’s actions provide evidence about the actions of other, non-identical agents with different goals in other parts of the universe.

There are three wagers corresponding to these three assumptions. The wagers work only for value systems that can also benefit from MSR (for instance, total utilitarianism; see Oesterheld, 2017, sec. 3.2). I assume such a value system in this post. I am currently working on a longer paper about a wager for the second assumption, which will discuss the premises for this wager in more detail.

A wager for acausal decision theory and a large universe

If this universe is very large or infinite, then it is likely that there is an identical copy of the part of the universe occupied by humans somewhere far away in space (Tegmark 2003, p. 464). Moreover, there will be vastly many or infinitely many such copies. Hence, for example, if an agent prevents a small amount of suffering on Earth, many copies of the agent will do the same, so that this amount of suffering is averted many times over throughout the universe.

Assuming causal decision theory (CDT), the impact of an agent’s copies is not taken into account when making decisions—there is an evidential dependence between the agent’s actions and the actions of their copies, but no causal influence. According to evidential decision theory (EDT), on the other hand, an agent should take such dependences into account when evaluating different choices. For EDT, a choice between two actions on Earth is also a choice between the actions of all copies throughout the universe. The same holds for all other acausal decision theories (i.e., decision theories that take such evidential dependences into account): for instance, for the decision theories developed by MIRI researchers (such as functional decision theory (Yudkowsky and Soares, 2017)), and for Poellinger’s variation of CDT (Poellinger, 2013).

Each of these considerations on its own would not be able to get a wager off the ground. But jointly, they can do so: on the one hand, given a large universe, acausal decision theories will claim a much larger impact with each action than causal decision theory does. Hence, there is a wager in favor of these acausal decision theories. Suppose an agent applies some meta decision theory (see MacAskill, 2016, sec. 2) that aggregates the expected utilities provided by individual decision theories. Even if the agent assigns a small credence to acausal decision theories, these theories will still dominate the meta decision theory’s expected utilities. On the other hand, if an agent applies an acausal decision theory, they can have a much higher impact in a large universe than in a small universe. The agent should thus always act as if the universe is large, even if they only assign a very small credence to this hypothesis.
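As a rough illustration, here is the credence-weighted aggregation with invented numbers (the credences, the number of copies, and the simple summation rule are all assumptions made for the sake of the example, not claims about the correct meta decision theory):

# Minimal sketch of the wager, with made-up numbers: a meta decision theory
# that takes a credence-weighted sum of the expected utilities assigned by
# each decision theory (one naive way to aggregate; cf. MacAskill, 2016).

credence_acausal = 0.01       # small credence in acausal decision theories
credence_causal = 0.99
n_copies = 10**20             # assumed number of copies acting identically in a large universe

def meta_eu(action_impact_on_earth):
    eu_causal = action_impact_on_earth              # CDT counts only the local effect
    eu_acausal = action_impact_on_earth * n_copies  # EDT counts all correlated copies
    return credence_causal * eu_causal + credence_acausal * eu_acausal

# Even with 1% credence in acausal decision theories, the acausal term dominates:
print(meta_eu(1.0))  # about 1e18, almost entirely driven by the acausal term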

In conclusion, most of an agent’s impact comes from applying an acausal decision theory in a large universe. Even if the agent assigns a small credence both to acausal decision theories and to the hypothesis that the universe is large, they should still act as if they placed a high credence in both.

A wager in favor of higher correlations

In explaining the third wager, it is important to note that I assume a subjective interpretation of probability. If I say that there is a correlation between the actions of two agents, I mean that, given one’s subjective beliefs, observing one agent’s action provides evidence about the other agent’s action. Moreover, I assume that agents are in a symmetrical decision situation—for instance, this is the case for two agents in a prisoner’s dilemma. If the decision situation is symmetrical, and if the agents are sufficiently similar, their actions will correlate. The theory of MSR says that agents in a large universe probably are in a symmetrical decision situation (Oesterheld, 2017, sec. 2.8).

There exists no general theory of correlations between different agents. It seems plausible to assume that a correlation between the actions of two agents must be based on a logical correlation between the decision algorithms that these two agents implement. But it is not clear how to think about the decision algorithms that humans implement, for instance, or how to decide whether two decision algorithms are functionally equivalent (Yudkowsky and Soares, 2017, sec. 3). Solutions to these problems exist only in some narrow domains—for instance, for agents represented by programs written in some specific programming language.

Hence, it is also not clear which agents’ actions in a large universe correlate, given that all are in a symmetrical decision situation. It could be that an agent’s actions correlate only with those of very close copies. Since such copies share the agent’s values, MSR would then have no action-guiding consequences: the agent will just continue to pursue their original goal function. If, on the other hand, there are many correlating agents with different goals, then MSR has strong implications, since there can be gains from trade between these agents’ different value systems.

Just as there is a wager for applying acausal decision theory in general, there is also a wager in favor of assuming that an agent’s actions correlate with more rather than fewer different agents. Suppose there are two hypotheses: (H1) Alice’s actions only correlate with the actions of (G1) completely identical copies of Alice, and (H2) Alice’s actions correlate with (G2) all other agents that ever gave serious consideration to MSR or some equivalent idea.

(In both cases, I assume that Alice has seriously considered MSR herself.) G1 is a subset of G2, and it is plausible that G2 is much larger than G1. Moreover, it is plausible that there are also agents with Alice’s values among the agents in G2 which are not also in G1. Suppose 1-p is Alice’s credence in H1, and p her credence in H2. Suppose further that there are n agents in G1 and m agents in G2, and that q is the fraction of agents in G2 sharing Alice’s values. All agents have the choice between (A1) only pursuing their own values, and (A2) pursuing the sum over the values of all agents in G2. Choosing A1 gives an agent 1 utilon. Suppose g denotes the possible gains from trade; that is, choosing A2 produces (1+g)×s utilons for each value system, where s is the fraction of agents in G2 supporting that value system. If everyone in G2 chooses A2, this produces (1+g)×q×m utilons for Alice’s value system, while, if everyone chooses A1, this produces only q×m utilons in total for Alice.

The decision situation for Alice can be summarized by the following choice matrix (assuming, for simplicity, that all correlations are perfect):

        H1                   H2
A1      n + c                q×m
A2      (1+g)×q×n + c        (1+g)×q×m

Here, the cells denote the expected utilities that EDT assigns to either of Alice’s actions given either H1 or H2. c is a constant that denotes the expected value generated by the agents in G2 that are non-identical to Alice, given H1. It plays no role in comparing A1 and A2, since, given H1, these agents are not correlated with Alice: the value will be generated no matter which action she picks. The value for H1∧A2 is unrealistically high, since it supposes the same gains from trade as H2∧A2, but this does not matter here. According to EDT, Alice should choose A2 over A1 iff

g×p×q×m > (1-p)×n – (1+g)×(1-p)×q×n.

It seems likely that q×m is much larger than n: being an exact copy of Alice restricts the space of agents far more than having thought about MSR and sharing Alice’s values does. Therefore, even if the gains from trade and Alice’s credence in H2 (i.e., g×p) are relatively small, g×p×q×m can still be larger than n. Since the right-hand side of the inequality is at most (1-p)×n ≤ n, EDT then recommends A2.
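For concreteness, here is the inequality evaluated with invented numbers (these are illustrative assumptions, not estimates):

# Illustrative numbers: check whether EDT prefers A2 using the inequality
# g*p*q*m > (1-p)*n - (1+g)*(1-p)*q*n derived above.

p = 0.1      # Alice's credence in H2 (correlation with all MSR-considering agents)
g = 0.2      # gains from trade
n = 10**6    # number of exact copies of Alice (G1)
m = 10**12   # number of agents that have seriously considered MSR (G2)
q = 0.01     # fraction of G2 sharing Alice's values

lhs = g * p * q * m
rhs = (1 - p) * n - (1 + g) * (1 - p) * q * n
print(lhs, rhs, lhs > rhs)  # 2e8 vs. roughly 8.9e5 -> A2 is preferred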

While the argument for this wager is not as strong as the arguments for the first two, it is still plausible. There are plausibly many more agents who have thought about MSR and share a person’s values than there are identical copies of that person. Hence, if the person’s actions correlate with the actions of all the agents in the larger group, those actions have a much higher impact. Moreover, in this case, they plausibly also correlate with the actions of many agents holding different values, allowing for gains from trade. Therefore, one should act as if there were more rather than fewer correlations, even if one assigns a rather low credence to that hypothesis.

Acknowledgements

I am grateful to Caspar Oesterheld and Max Daniel for helpful comments on a draft of this post. I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.

A wager against Solomonoff induction

The universal prior assigns zero probability to non-computable universes—for instance, universes that could only be represented by Turing machines in which uncountably many locations need to be updated, or universes in which the halting problem is solved in physics. While such universes might very likely not exist, one cannot justify assigning literally zero credence to their existence. I argue that it is of overwhelming importance to make a potential AGI assign a non-zero credence to incomputable universes—in particular, universes with uncountably many “value locations”.

Here, I assume a model of universes as sets of value locations. Given a specific goal function, each element in such a set could specify an area in the universe with some finite value. If a structure contains a sub-structure, and both the structure and the sub-structure are valuable in their own right, there could either be one or two elements representing this structure in the universe’s set of value locations. If a structure is made up of infinitely many sub-structures, all of which the goal function assigns some positive but finite value to, then this structure could (if the sum of values does not converge) possibly only be represented by infinitely many elements in the set. If the set of value locations representing a universe is countable, then the value of said universe could be the sum over the values of all elements in the set (provided that some ordering of the elements is specified). I write that a universe is “countable” if it can be represented by a finite or countably infinite set, and a universe is “uncountable” if it can only be represented by an uncountably infinite set.
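As a toy illustration of this model (the enumeration and the values below are made up; this is not meant as a formal definition), the value of a countable universe can be approximated by partial sums over some fixed enumeration of its value locations:

from itertools import islice

def countable_universe():
    # Yield the (finite) value of each value location under some fixed enumeration.
    k = 1
    while True:
        yield 1.0 / 2**k   # e.g., a convergent series of finite values
        k += 1

def partial_value(universe, n_locations):
    return sum(islice(universe, n_locations))

print(partial_value(countable_universe(), 50))  # approaches 1.0

# An "uncountable" universe admits no such enumeration: no sequence of partial
# sums ranges over all of its value locations, so some other aggregation method
# (e.g., an integral with respect to a measure) would be needed.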

A countable universe, for example, could be a regular cellular automaton. If the automaton has infinitely many cells, then, given a goal function such as total utilitarianism, the automaton could be represented by a countably infinite set of value locations. An uncountable universe, on the other hand, could be a cellular automaton in which there is a cell for each real number, and interactions between cells over time are specified by a mathematical function. Given some utility functions over such a universe, one might be able to represent the universe only by an uncountably infinite set of value locations. Importantly, even though the universe could be described in logic, it would be incomputable.

Depending on one’s approach to infinite ethics, an uncountable universe could matter much more than a countable universe. Agents in uncountable universes might—with comparatively small resource investments—be able to create (or prevent), for instance, amounts of happiness or suffering that could not be created in an entire countable universe. For instance, each cell in the abovementioned cellular automaton might consist of some (possibly valuable) structure in and of itself, and the cells’ structures might influence each other. Moreover, some (uncountable) set of cells might be regarded as an agent. The agent might then be able to create a positive amount of happiness in uncountably many cells, which—at least given some definitions of value and approaches to infinite ethics—would constitute more value than could ever be created in a countable universe.

Therefore, there is a wager in favor of the hypothesis that humans actually live in an uncountable universe, even if it appears unlikely given current scientific evidence. But there is also a different wager, which applies if there is a chance that such a universe exists, regardless of whether humans live in that universe. It is unclear which of the two wagers dominates.

The second wager is based on acausal trade: there might be agents in an uncountable universe that do not benefit from the greater possibilities of their universe—e.g., because they do not care about the number of individual copies of some structure, but instead care about an integral over the structures’ values relative to some measure over structures. While agents in a countable universe might be able to benefit those agents equally well, they might be much worse at satisfying the values of agents with goals sensitive to the greater possibilities in uncountable universes. Thus, due to different comparative advantages, there could be great gains from trade between agents in countable and uncountable universes.

The above example might sound outlandish, and it might be flawed in that one could not actually come up with interaction rules that would lead to anything interesting happening in the cellular automaton. But this is irrelevant. It suffices that there is only the faintest possibility that an AGI could have an acausal impact in an incomputable universe which, according to one’s goal function, would outweigh all impact in all computable universes. There probably exists a possible universe like that for most goal functions. Therefore, one could be missing out on virtually all impact if the AGI employs Solomonoff induction.

There might not only be incomputable universes represented by a set that has the cardinality of the continuum, but there might be incomputable universes represented by sets of any cardinality. In the same way that there is a wager for the former, there is an even stronger wager for universes with even higher cardinalities. If there is a universe of highest cardinality, it appears to make sense to optimize only for acausal trade with that universe. Of course, there could be infinitely many different cardinalities, so one might hope that there is some convergence as to the values of the agents in universes of ever higher cardinalities (which might make it easier to trade with these agents).

In conclusion, there is a wager in favor of considering the possibility of incomputable universes: even a small acausal impact (relative to the total resources available) in an incomputable universe could counterbalance everything humans could do in a computable universe. Crucially, an AGI employing Solomonoff induction will not consider this possibility, hence potentially missing out on unimaginable amounts of value.

Acknowledgements

Caspar Oesterheld and I came up with the idea for this post in a conversation. I am grateful to Caspar Oesterheld and Max Daniel for helpful feedback on earlier drafts of this post.

UDT is “updateless” about its utility function

Updateless decision theory (UDT) (or some variant thereof) seems to be widely accepted as the best current solution to decision theory by MIRI researchers and LessWrong users. In this short post, I outline one potential implication of being completely updateless. My intention is not to refute UDT, but to show that:

  1. It is not clear how updateless one might want to be, as this could have unforeseen consequences.
  2. If one endorses UDT, one should also endorse superrational cooperation on a very deep level.

My argument is simple, and draws on the idea of multiverse-wide superrational cooperation (MSR), which is a form of acausal trade between agents with correlated decision algorithms. Thinking about MSR instead of general acausal trade has the advantage that it seems conceptually easier, while the conclusions gained should hold in the general case as well. Nevertheless, I am very uncertain and expect the reality of acausal cooperation between AIs to look different from the picture I draw in this post.

Suppose humans have created a friendly AI with a CEV utility function and UDT as its decision theory. This version of UDT has solved the problem of logical counterfactuals and algorithmic correlation, and can readily spot any correlated agent in the world. Such an AI will be inclined to trade acausally with other agents—agents in parts of the world it does not have causal access to. It will do so, for instance, to achieve gains from trade arising from comparative advantages in the agents’ respective empirical circumstances, and to exploit the diminishing marginal returns of pursuing any single value system.

For the trade implied by MSR, the AI does not have to simulate other agents and engage in some kind of Löbian bargain with them. Instead, the AI has to find out whether the agents’ decision algorithms are functionally equivalent to the AI’s decision algorithm, it has to find out about the agents’ utility functions, and it has to make sure the agents are in an empirical situation such that trade benefits both parties in expectation. (Of course, to do this, the AI might also have to perform a simulation.) The easiest trading step seems to be the one with all other agents using updateless decision theory and the same prior. In this context, it is possible to neglect many of the usual obstacles to acausal trade. These agents share everything except their utility function, so there will be little if any “friction”—as long as the compromise takes differences between utility functions into account, the correlation between the agents will be perfect. It would get more complicated if the versions of UDT diverged a bit, and if the priors were slightly different. (More on this later.) I assume here that the agents can find out about the other agents’ utility functions. Although these are logically determined by the prior, the agents might be logically uncertain, and calculating the distribution of utility functions of UDT agents might be computationally expensive. I will ignore this consideration here.

A possible approach to this trade is to effectively choose policies based on a weighted sum of the utility functions of all UDT agents in all the possible worlds contained in the AI’s prior (see Oesterheld 2017, section 2.8 for further details). Here, the weights will be assigned such that in expectation, all agents will have an incentive to pursue this sum of utility functions. It is not exactly clear how such weights will be calculated, but it is likely that all agents will adopt the same weights, and it seems clear that once this weighting is done based on the prior, it won’t change after finding out which of the possible worlds from the prior is actual (Oesterheld 2017, section 2.8.6). If all agents adopt the policy of always pursuing a sum of their utility functions, the expected marginal additional goal fulfillment for all AIs at any point in the future will be highest. The agents will act according to the “greatest good for the greatest number.” Any individual agent won’t know whether they will benefit in reality, but that is irrelevant from the updateless perspective. This becomes clear if we compare the situation to thought experiments like the Counterfactual Mugging. Even if in the actual world, the AI cannot benefit from engaging in the compromise, then it was still worth it from the prior viewpoint, since (given sufficient weight in the sum of utility functions) the AI would have stood to gain even more in another, non-actual world.
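As a very rough sketch of what such a policy choice could look like (the worlds, utility functions, and weights below are invented for illustration; in particular, I simply fix equal weights rather than deriving them from the prior):

worlds = ["w1", "w2"]
prior = {"w1": 0.7, "w2": 0.3}
policies = ["make_clips", "make_staples", "make_both"]

# Toy utility functions (the same in both worlds); "make_both" illustrates gains from trade.
utilities = {
    "paperclipper": {"make_clips": 1.0, "make_staples": 0.0, "make_both": 0.6},
    "staple_fan":   {"make_clips": 0.0, "make_staples": 1.0, "make_both": 0.6},
}
weights = {"paperclipper": 0.5, "staple_fan": 0.5}  # fixed once, never updated afterwards

def compromise_eu(policy):
    # Prior-expected, weighted sum of all agents' utilities under this policy.
    return sum(prior[w] * sum(weights[a] * utilities[a][policy] for a in utilities)
               for w in worlds)

print(max(policies, key=compromise_eu))  # -> "make_both"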

If the agents are also logically updateless, this further reduces the information on which the weights of the agents’ utility functions are based. There are probably many logical implications that could be drawn from an empirical prior and the utility functions about aspects of the trade—e.g., that the trade will benefit only the most common utility functions, that some values won’t be pursued by anyone in practice, etc.—that might be one logical implication step away from a logical prior. If the AI is logically updateless, it will always perform the action that it would have committed to before it got to know about these implications. Of course, logical updatelessness is an unresolved issue, and its implications for MSR will depend on possible solutions to the problem.

In conclusion, in order to implement the MSR compromise, the AI will start looking for other UDT agents in all possible (and, possibly, impossible) worlds in its prior. It will find out about their utility functions and calculate a weighted sum over all of them. This is what I mean by the statement that UDT is “updateless” about its utility function: no matter what utility function it starts out with, its own function might still have negligible weight in the goals the UDT AI will pursue in practice. At this point, it becomes clear that it really matters what this prior looks like. What is the distribution of the utility functions of all UDT agents given the universal prior? There might be worlds less complex than the world humans live in—for instance, a cellular automaton, such as Rule 110 or Game of Life, with a relatively simple initial state—which still contain UDT agents. Given that these worlds might have a higher prior probability than the human world, they might get a higher weight in the compromise utility function. The AI might end up maximizing the goal functions of the agents in the simplest worlds.

Is updating on your existence a sin?

One of the features of UDT is that it does not even condition the prior on the agent’s own existence—when evaluating policies, UDT also considers their implications in worlds that do not contain an instantiation of the agent, even though by the time the agent thinks its first thought, it can be sure that these worlds do not exist. This might not be a problem if one assigns high weight to a modal realism/Tegmark Level 4 universe anyway. An observation can never distinguish between a world in which all worlds exist, and one in which only the world featuring the current observation exists. So if the measure of all the “single worlds” is small, then updating on existence won’t change much.

Suppose that this is not the case. Then there might be many worlds that can already be excluded as non-actual based on the fact that they don’t contain humans. Nevertheless, they might contain UDT agents with alien goals. This poses a difficult choice: given UDT’s prior, the AI will still cooperate with agents living in non-actual (and impossible, if the AI is logically updateless) worlds. This is because, given UDT’s prior, it could have been not humans but these alien agents that turned out to be actual—in which case they could have benefited humans in return. On the other hand, if the AI is allowed to condition on such information, then it loses in a kind of counterfactual prisoner’s dilemma:

  • Counterfactual prisoner’s dilemma: Omega has gained total control over one universe. In the pursuit of philosophy, Omega flips a fair coin to determine which of two agents she should create. If the coin comes up heads, Omega will create a paperclip maximizer. If it comes up tails, she creates an otherwise identical agent that is instead a staple maximizer. After creating the agent, Omega hands it total control over the universe and lets it know about this procedure. There are gains from trade: producing both paperclips and staples yields 60% of the attainable utility for each of the two value systems, while producing only one of them yields 100% for one of them. Hence, both agents would (in expectation) benefit from a joint precommitment to a compromise utility function, even though only one of the agents is actually created. What should the created agent do?

If the agents condition on their existence, then they will not gain as much in expectation as they could otherwise expect to gain before the coin flip (when neither of the agents existed). I have chosen this thought experiment because it is not confounded by the involvement of simulated agents, a factor which could lead to anthropic uncertainty and hence make the agents more updateless than they would otherwise be.
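For concreteness, the ex ante arithmetic behind this thought experiment looks as follows (using the 60%/100% figures from above):

# A quick check of the ex ante numbers in the counterfactual prisoner's dilemma.

p_heads = 0.5  # probability that *your* maximizer is the one created

# Policy "defect": whoever is created only produces their own good.
ev_defect_ex_ante = p_heads * 1.0 + (1 - p_heads) * 0.0     # = 0.5

# Policy "compromise": whoever is created produces both paperclips and staples.
ev_compromise_ex_ante = p_heads * 0.6 + (1 - p_heads) * 0.6  # = 0.6

# From the prior (pre-coin-flip) perspective, compromising is better:
print(ev_compromise_ex_ante > ev_defect_ex_ante)  # True

# After conditioning on one's own existence, defecting looks better (1.0 > 0.6),
# which is exactly the sense in which updating on existence loses here.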

UDT agents with differing priors

What about UDT agents using differing priors? For simplicity, I suppose there are only two agents. I also assume that both agents have equal capacity to create utilons in their universes. (If this is not the case, the weights in the resulting compromise utility function have to be adjusted.) Suppose both agents start out with the same prior, but update it on their own existence—i.e., they both exclude any worlds that don’t contain an instantiation of themselves. This posterior is then used to select policies. Agent B can’t benefit from any cooperative actions by agent A in a world that exists only in agent A’s posterior. Conversely, agent A can’t benefit from agent B in worlds that agent A no longer considers possibly actual. So the UDT policy will recommend pursuing a compromise utility function only in worlds lying in the intersection of both agents’ posteriors. If either agent updates towards being in a world to which the other agent assigns approximately zero probability, then they won’t cooperate.

More generally, if both agents know which world is actual, and this is a world which they both inhabit, then it doesn’t matter which prior they used to select their policies. (Of course, this world must have nonzero probability in both of their priors; otherwise they would never have updated towards it being actual.) From the prior perspective, for agent A, every sacrificed utilon in this world is weighted by A’s prior measure of the world, and every utilon gained from agent B is weighted by the same prior measure. So there is no friction in this compromise: if both agents decide between an action a which gives themselves d utilons and an action b which gives the other agent c utilons, then either agent prefers option b iff c weighted by that agent’s prior measure of the world exceeds d weighted by the same measure, i.e., iff c is greater than d. Given that there is a way to normalize both agents’ utility functions, pursuing a sum of those utility functions seems optimal.

We can even expand this to the case wherein the two agents have any differing priors with a nonempty intersection between the corresponding sets of possible worlds. In expectation, the policy that says: “if any world outside the intersection is actual: don’t compromise; if any world from the intersection is actual: do the standard UDT compromise, but use the posterior distribution in which all worlds outside the intersection have zero probability for policy selection” seems best. When evaluating this policy, both agents can weight both utilons sacrificed for others, as well as utilons gained from others, in any of the worlds from the intersection by the measure of the entire intersection in their own respective priors. This again creates a symmetrical situation with a 1:1 trade ratio between utilons sacrificed and gained.

Another case to consider is the one in which the agents also distribute the relative weights between the worlds in the intersection differently. I think this does not lead to asymmetries (in the sense that, conditional on some of the worlds being actual, one agent stands to gain and lose more than the other agent). Suppose agent A has 30% on world S1 and 20% on world S2, while agent B has 10% on world S1 and 20% on world S2. If both agents follow the policy of pursuing the sum of utility functions, given that they find themselves in either of the two shared worlds, then, ceteris paribus, both will in expectation benefit to an equal degree. For instance, let c1 (c2) be the amount of utilons either agent can create for the other agent in world S1 (S2), and d1 (d2) the respective amount agents can create for themselves. Then agent A gets either 0.3×c1+0.2×c2 or 0.3×d1+0.2×d2, while B chooses between 0.1×c1+0.2×c2 and 0.1×d1+0.2×d2. Here, it’s not the case that A prefers cooperating iff B prefers cooperating. But assuming that, in expectation, c1 = c2 and d1 = d2, both prefer cooperation iff c1 > d1. It follows that just pursuing a sum of both agents’ utility functions is, in expectation, optimal for both agents.
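A quick numeric check of this example (with invented values for c and d, and assuming c1 = c2 and d1 = d2):

c, d = 1.0, 0.8  # utilons created for the other agent vs. for oneself (illustrative)

# Agent A's and agent B's credences in the shared worlds S1 and S2.
a_weights = (0.3, 0.2)
b_weights = (0.1, 0.2)

def prefers_cooperation(weights):
    w1, w2 = weights
    return w1 * c + w2 * c > w1 * d + w2 * d  # reduces to c > d for any positive weights

print(prefers_cooperation(a_weights), prefers_cooperation(b_weights))  # True True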

Lastly, consider a combination of non-identical priors with empirical uncertainty. For UDT, empirical uncertainty between worlds translates into anthropic uncertainty about which of the possible worlds the agent inhabits. In this case, as expected, there is “friction”. For example, suppose agent A assigns probability p to the intersection of the worlds in both agents’ priors, while agent B assigns p/q. Before they find out whether one of the worlds from the intersection or some other world is actual, the situation is the following: B can benefit from A’s cooperation in only p/q of the worlds. A can benefit from B in p of the worlds, but everything A does will only mean p/q as much to agent B. Now each agent can again either create d utilons for themselves, or perform a cooperative action that gives c utilons to the other agent in the world where the action is performed. Given uncertainty about which world is actual, if both agents choose cooperation, agent A receives c×p utilons in expectation, while agent B receives c×p/q utilons in expectation. Defection gives both agents d utilons. So for cooperation to be worth it, both c×p and c×p/q have to be greater than d. Even then, if p is unequal to p/q, the two agents’ gains from trade are unequal. This appears to be a bargaining problem that is not resolved as easily as the examples above.
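A small sketch of this “friction” with invented numbers (p, q, c, and d are chosen purely for illustration, following the notation above):

p, q = 0.4, 2.0
c, d = 1.0, 0.15   # utilons a cooperative action gives the other agent vs. oneself

eu_a_cooperate = c * p        # A is benefited by B only in the shared worlds (measure p for A)
eu_b_cooperate = c * p / q    # B is benefited only in the shared worlds (measure p/q for B)
eu_defect = d                 # either agent can always secure d for themselves

print(eu_a_cooperate > eu_defect, eu_b_cooperate > eu_defect)  # True True
# Both prefer cooperation here, but A's gain (0.25) exceeds B's (0.05):
# whenever q != 1, the gains from trade are unequal, hence the bargaining problem.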

Conclusion

I actually endorse the conclusion that humans should cooperate with all correlating agents. Although humans’ decision algorithms might not correlate with those of as many other agents, and humans might not be able to compromise as efficiently as superhuman AIs, they should nevertheless pursue some multiverse-wide sum of values. What I’m uncertain about is how far updatelessness should go. For instance, it is not clear to me which empirical and logical evidence humans should and shouldn’t take into account when selecting policies. If an AI does not start out with the knowledge that humans possess but instead uses the universal prior, then it might perform actions that seem irrational given human knowledge. Even if observations are logically inconsistent with the existence of a fellow cooperation partner (i.e., in the updated distribution, the cooperation partner’s world has zero probability), UDT might still cooperate with that partner and possibly adopt their values. At this point, I doubt whether everyone would still agree with the claim that UDT always achieves the highest utility.

Acknowledgements

I thank Caspar Oesterheld, Max Daniel, Lukas Gloor, and David Althaus for helpful comments on a draft of this post, and Adrian Rorheim for copy editing.

Anthropic uncertainty in the Evidential Blackmail

I’m currently writing a piece on anthropic uncertainty in Newcomb problems. The idea is that whenever someone simulates us to predict our actions, this gives us anthropic uncertainty about whether we’re in the simulation or not. (If we knew whether we were in the real world or in the simulation, the simulation wouldn’t fulfill its purpose anymore.) This kind of reasoning changes quite a lot about the answers that decision theories give in predictive dilemmas. It makes their reasoning “more updateless”, since they reason from a more impartial stance: a stance from which they do not yet know their exact position in the thought experiment.

This topic isn’t new, but it hasn’t been discussed in-depth before. As far as I am aware, it has been brought up on LessWrong by gRR and in two blog posts by Stuart Armstrong. Outside LessWrong, there is a post by Scott Aaronson, and one by Andrew Critch. The idea is also mentioned in passing by Neal (2006, p. 13). Are there any other sources and discussions of it that I have overlooked?

In this post, I examine what the assumption that predictions or simulations lead to anthropic uncertainty implies for the Evidential Blackmail (also XOR Blackmail), a problem which is often presented as a counter-example to evidential decision theory (EDT) (Cf. Soares & Fallenstein, 2015, p. 5; Soares & Levinstein, 2017, pp. 3–4). A similar problem has been introduced as “Yankees vs. Red Sox” by Arntzenius (2008), and discussed by Ahmed and Price (2012). I would be very grateful for any kind of feedback on my post.

We could formalize the blackmailer’s procedure in the Evidential Blackmail roughly as follows:

def blackmailer(your_policy, predict_stock):
    # The blackmailer simulates your policy's response to receiving the letter
    # and only sends the letter in the cases where doing so is profitable for him.
    your_action = your_policy("letter")
    if predict_stock() == "retain" and your_action == "pay":
        return "letter"
    elif predict_stock() == "fall" and your_action == "not pay":
        return "letter"
    else:
        return "no letter"

Let p denote the probability P(retain) with which our stock retains its value a. The blackmailer asks us for an amount of money b, where 0<b<a. The ex ante expected utilities are now:

EU(pay) = P(letter|pay) * (a – b) + P(no letter & retain|pay) * a = p (a – b),

EU(not pay) = P(no letter & retain|not pay) * a = p a.

According to the problem description, P(no letter & retain|pay) is 0, and P(no letter & retain|not pay) is p [1]. As long as we don’t know whether a letter has been sent or not (even if it might already be on its way to us), committing to not paying gives us only information about whether the letter has been sent, not about our stock, so we should commit not to pay.
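Plugging in illustrative numbers (the values of p, a, and b are assumptions chosen for the example):

# Ex ante expected utilities from the formulas above, with illustrative numbers.

p = 0.5      # P(retain): probability the stock retains its value
a = 100.0    # value of the stock
b = 10.0     # amount demanded by the blackmailer, 0 < b < a

eu_commit_pay = p * (a - b)    # letter arrives exactly when the stock retains its value
eu_commit_not_pay = p * a      # no letter arrives exactly when the stock retains its value

print(eu_commit_pay, eu_commit_not_pay)  # 45.0 < 50.0 -> commit to not paying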

Now for the situation in which we have already received the letter. (All of the following probabilities are conditioned on “letter”.) We don’t know whether we’re in the simulation or not. But what we do if we’re in the simulation can actually change our probability of being in the simulation in the first place. Note that the blackmailer has to simulate us exactly once in any case, regardless of whether our stock goes down or not. So if we are in the simulation and we receive the letter, P(retain|pay) is still equal to P(retain|not pay): neither paying nor not paying gives us any evidence about whether our stock retains its value, conditional on being in the simulation. But if we are in the simulation, we can influence whether the blackmailer sends us a letter in the real world. In the simulation, our action decides whether we receive the letter in the case where we keep our money, or in the case where we lose it.

Let’s begin by calculating EDT’s expected utility of not paying. We will lose all money for certain if we’re in the real world and don’t pay, so we only consider the case where we’re in the simulation:

EU(not pay) = P(sim & retain|not pay) * a.

For both SSA and SIA, if our stock doesn’t go down and we don’t pay up, then we’re certain to be in the simulation: P(sim|retain, not pay) = 1, while we could be either simulated or real if our stock falls: P(sim|fall, not pay) = 1/2. Moreover, P(sim & retain|not pay) = P(retain|sim, not pay) * P(sim) = P(sim|retain, not pay) * P(retain). Under SSA, P(retain) is just p [2]. We hence get

EU_SSA(not pay) = P(sim|retain, not pay) * p * a = p a.

Our expected utility for paying is:

EU_SSA(pay) = P(sim & retain|pay) * (a – b) + P(not sim|pay) * (a – b)

= P(sim|retain, pay) * p * (a – b) + P(not sim|pay) * (a – b).

If we pay up and the stock retains its value, there is exactly one of us in the simulation and one of us in the real world, so P(sim|retain, pay) = 1/2, while we’re sure to be in the simulation for the scenario in which our stock falls: P(sim|fall, pay) = 1. Knowing both P(sim & retain|pay) and P(sim & fall|pay), we can calculate P(not sim|pay) = p/2. This gives us

EU_SSA(pay) = 1/2 * p * (a – b) + 1/2 * p * (a – b) = p (a – b).

Great, EDT + SSA seems to calculate exactly the same payoffs as all other decision theories – namely, that by paying the Blackmailer, one just loses the money one pays the blackmailer, but gains nothing.
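Recomputing these expected utilities with the same illustrative numbers as above:

# EDT + SSA expected utilities after receiving the letter (p = 0.5, a = 100, b = 10).

p, a, b = 0.5, 100.0, 10.0

# Not paying: only the simulated copy matters, and P(sim | retain, not pay) = 1.
eu_ssa_not_pay = 1.0 * p * a                        # = p * a = 50.0

# Paying: P(sim | retain, pay) = 1/2 and P(not sim | pay) = p/2.
eu_ssa_pay = 0.5 * p * (a - b) + (p / 2) * (a - b)  # = p * (a - b) = 45.0

print(eu_ssa_pay < eu_ssa_not_pay)  # True: EDT + SSA does not pay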

For SIA probabilities, P(retain|letter) depends on whether we pay or don’t pay. If we pay, then there are (in expectation) 2 p observers in the “retain” world, while there are (1 – p) observers in the “fall” world. So our updated P(retain|letter, pay) should be (2 p)/(1 + p). If we don’t pay, it’s p/(2 – p) respectively. Using the above probabilities and Bayes’ theorem, we have P(sim|pay) = 1/(1 + p) and P(sim|not pay) = 1/(2 – p). Hence,

EU_SIA(not pay) = P(sim & retain|not pay) * a = (p a)/(2 – p),

and

EU_SIA(pay) = P(sim) * P(retain|sim) * (a – b) + P(not sim) * (a – b)

= (p (a – b))/(1 + p) + (p (a – b))/(1 + p)

= (2 p (a – b))/(1 + p).

It seems like paying the blackmailer would be better here than not paying, if p and b are sufficiently low.
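With the same illustrative numbers, the EDT + SIA expected utilities come out as follows:

# EDT + SIA expected utilities, again with p = 0.5, a = 100, b = 10.

p, a, b = 0.5, 100.0, 10.0

eu_sia_not_pay = (p * a) / (2 - p)          # P(sim & retain | not pay) * a
eu_sia_pay = (2 * p * (a - b)) / (1 + p)    # both terms equal p*(a-b)/(1+p)

print(eu_sia_not_pay, eu_sia_pay)  # about 33.3 vs. 60.0 -> plain SIA would pay here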

Why doesn’t SIA give the ex ante expected utilities, as SSA does? Up until now I have just assumed correlated decision-making, so that the decisions of the simulated us will also be those of the real-world us (and of course the other way around – that’s how the blackmail works in the first place). The simulated us hence also gets attributed the impact of our real copy. The problem is that SIA considers us more likely to be in worlds with more observers. So the worlds in which we have additional impact due to correlated decision-making get double-counted. If we pay the blackmailer, there are two observers in the “retain” world (probability p) and only one in the “fall” world (probability 1 – p); if we don’t pay, there is only one observer in the “retain” world and two in the “fall” world. SIA hence slightly favors paying the blackmailer, since that makes the p-world more likely.

To remedy the problem of double-counting for EDT + SIA, we could use something along the lines of Stuart Armstrong’s Correlated Decision Principle (CDP). First, we aggregate the “EDT + SIA” expected utilities of all observers. Then, we divide this expected utility by the number of individuals we are deciding for. For EU_CDP(pay), there is with probability 1 an observer in the simulation, and with probability p one in the real world. To get the aggregated expected utility, we thus have to multiply EU(pay) by (1 + p). Since we have decided for two individuals, we divide this EU by 2 and get EU_CDP(pay) = ((2 p (a – b))/(1 + p)) * 1/2 * (1 + p) = p (a – b).

For EU_CDP(not pay), it gets more complex: the number of individuals any observer is making a decision for is actually just 1 – namely, the observer in the simulation. The observer in the real world doesn’t get his expected utility from his own decision, but from influencing the other observer in the simulation. On the other hand, we multiply EU(not pay) by (2 – p), since there is one observer in the simulation with probability 1, and with probability (1 – p) there is another observer in the real world. Putting this together, we get EU_CDP(not pay) = ((p a)/(2 – p)) * (2 – p) = p a. So EDT + SIA + CDP arrives at the same payoffs as EDT + SSA, although it is admittedly a rather messy and informal approach.
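With the same numbers as before, the CDP correction indeed recovers the ex ante payoffs:

# The CDP correction described above, applied to the SIA expected utilities.

p, a, b = 0.5, 100.0, 10.0

eu_sia_pay = (2 * p * (a - b)) / (1 + p)
eu_sia_not_pay = (p * a) / (2 - p)

# Paying: aggregate over (1 + p) observers, then divide by the 2 individuals decided for.
eu_cdp_pay = eu_sia_pay * (1 + p) / 2           # = p * (a - b) = 45.0

# Not paying: aggregate over (2 - p) observers, divide by the 1 individual decided for.
eu_cdp_not_pay = eu_sia_not_pay * (2 - p) / 1   # = p * a = 50.0

print(eu_cdp_pay < eu_cdp_not_pay)  # True: EDT + SIA + CDP does not pay either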

I conclude that, when taking into account anthropic uncertainty, EDT doesn’t give in to the Evidential Blackmail. This is true for SSA and possibly also for SIA + CDP. Fortunately, at least for SSA, we have avoided any kind of anthropic funny-business. Note that this is not some kind of dirty hack: if we grant the premise that simulations have to involve anthropic uncertainty, then by definition of the thought experiment – because there is necessarily a simulation involved in the Evidential Blackmail – EDT doesn’t actually pay the blackmailer. Of course, this still leaves open the question of whether we have anthropic uncertainty in all problems involving simulations, and hence whether my argument applies to all conceivable versions of the problem. Moreover, there are other anthropic problems, such as the one introduced by Conitzer (2015a), in which EDT + SSA is still exploitable (in the absence of a method to “bind itself”).

Acknowledgement

I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.

[1] P(no letter & retain|not pay) = P(no letter|retain, not pay) * P(retain|not pay) = 1 * P(retain|not pay) = P(retain|not pay, letter) * P(letter|not pay) + P(retain|not pay, no letter) * P(no letter|not pay) = p.

[2] This becomes apparent if we compare the Evidential Blackmail to Sleeping Beauty. SSA is the “halfer position”, which means that after updating on being an observer (receiving the letter), we should still assign the prior probability p, regardless of how many observers there are in either of the two possible worlds.

[3] The result that EDT and SIA lead to actions that are not optimal ex ante is also featured in several publications about anthropic problems, e.g., Arntzenius, 2002; Briggs, 2010; Conitzer, 2015b; Schwarz, 2015.


References

Ahmed, A., & Price, H. (2012). Arntzenius on “Why ain’cha rich?”. Erkenntnis, 77(1), 15–30.

Arntzenius, F. (2002). Reflections on Sleeping Beauty. Analysis, 62(1), 53–62.

Arntzenius, F. (2008). No Regrets, or: Edith Piaf Revamps Decision Theory. Erkenntnis, 68(2), 277–297.

Briggs, R. (2010). Putting a value on Beauty. Oxford Studies in Epistemology, 3, 3–34.

Conitzer, V. (2015a). A devastating example for the Halfer Rule. Philosophical Studies, 172(8), 1985–1992.

Conitzer, V. (2015b). Can rational choice guide us to correct de se beliefs? Synthese, 192(12), 4107–4119.

Neal, R. M. (2006, August 23). Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning. arXiv [math.ST]. Retrieved from http://arxiv.org/abs/math/0608592

Schwarz, W. (2015). Lost memories and useless coins: revisiting the absentminded driver. Synthese, 192(9), 3011–3036.

Soares, N., & Fallenstein, B. (2015, July 7). Toward Idealized Decision Theory. arXiv [cs.AI]. Retrieved from http://arxiv.org/abs/1507.01986

Soares, N., & Levinstein, B. (2017). Cheating Death in Damascus. Retrieved from https://intelligence.org/files/DeathInDamascus.pdf

“Betting on the Past” by Arif Ahmed

[This post assumes knowledge of decision theory, as discussed in Eliezer Yudkowsky’s Timeless Decision Theory and in Arbital’s Introduction to Logical Decision Theory.]

I recently discovered an interesting thought experiment, “Betting on the Past” by Cambridge philosopher Arif Ahmed. It can be found in his book Evidence, Decision and Causality, which is an elaborate defense of Evidential Decision Theory (EDT). I believe that Betting on the Past may be used to money-pump non-EDT agents, refuting Causal Decision Theories (CDT), and potentially even ones that use logical conditioning, such as Timeless Decision Theory (TDT) or Updateless Decision Theory (UDT). At the very least, non-EDT decision theories are unlikely to win this bet. Moreover, no conspicuous perfect predicting powers, genetic influences, or manipulations of decision algorithms are required to make Betting on the Past work, and anyone can replicate the game at home. For these reasons, it might make a more compelling case in favor of EDT than the Coin Flip Creation, a problem I recently proposed in an attempt to defend EDT’s answers in medical Newcomb problems. In Ahmed’s thought experiment, Alice faces the following decision problem:

Betting on the Past: In my pocket (says Bob) I have a slip of paper on which is written a proposition P. You must choose between two bets. Bet 1 is a bet on P at 10:1 for a stake of one dollar. Bet 2 is a bet on P at 1:10 for a stake of ten dollars. So your pay-offs are as in [Figure 1]. Before you choose whether to take Bet 1 or Bet 2 I should tell you what P is. It is the proposition that the past state of the world was such as to cause you now to take Bet 2. [Ahmed 2014, p. 120]

Ahmed goes on to specify that Alice could indicate which bet she’ll take by either raising or lowering her hand. One can find a detailed discussion of the thought experiment’s implications, as well as a formal analysis of CDT’s and EDT’s decisions in Ahmed’s book. In the following, I want to outline a few key points.

Would CDT win in this problem? Alice is betting on a past state of the world. She can’t causally influence the past, and she’s uncertain whether the proposition is true or not. In either case, Bet 1 strictly dominates Bet 2: no matter which state the past is in, Bet 1 always yields a higher utility. For these reasons, causal decision theories would take Bet 1. Nevertheless, as soon as Alice comes to a definite decision, she updates on whether the proposition is true or false. If she’s a causal agent, she then finds out that she has lost: the past state of the world was such as to cause her to take Bet 1, so the proposition is false. If she had taken Bet 2, she would have found out that the proposition was correct, and she would have won, albeit a smaller amount than if she had won with Bet 1.

Betting on the Past seems to qualify as a kind of Newcomb’s paradox; it seems to have an equivalent payoff matrix (Figure 1).

Figure 1: Betting on the past has a similar payoff matrix to Newcomb’s paradox

              P is true    P is false
Take Bet 1    10           -1
Take Bet 2    1            -10

Furthermore, its causal structure seems to resemble that of, e.g., the Smoking Lesion or Solomon’s problem, suggesting that it is a kind of medical Newcomb problem. In medical Newcomb problems, a “Nature” node determines both the present state of the world (whether the agent is sick/will win the bet) and the agent’s decision (see Figure 2). In this regard, they differ from Newcomb’s original problem, where said node refers to the agent’s decision algorithm.

Figure 2: Betting on the past (left) has a similar causal structure to medical Newcomb problems (right).

One could object that Betting on the Past is not a medical Newcomb problem, since the outcomes conditional on our actions are certain here, while, e.g., in the Smoking Lesion, observing our actions only shifts our probabilities by degrees. I believe this shouldn’t make a crucial difference. On the one hand, we can conceive of absolutely certain medical Newcomb cases, such as the Coin Flip Creation. On the other hand, Newcomb’s original problem is often formalized with absolute certainties as well. I’d be surprised if probabilistic versus certain reasoning made a difference to decision theories. First, we can always approximate certainty to an arbitrarily high degree; why, then, should a negligible further increase in certainty suddenly change the recommended action completely? Second, we’re never really certain in the real world anyway, so if the two cases differed, all thought experiments that use absolute certainties would be rendered useless.

If Betting on the Past is indeed a kind of medical Newcomb problem, this would be an interesting conclusion. It would follow that if one prefers Bet 2, one should also one-box in medical Newcomb problems. And taking Bet 2 seems so obviously correct! I point this out because one-boxing in medical Newcomb problems is what EDT would do, and it is often put forward as both a counterexample to EDT and as the decision problem that separates EDT from Logical Decision Theories (LDT), such as TDT or UDT. (See e.g. Yudkowsky 2010, p.67)

Before we examine the case for EDT further, let’s take a closer look at what LDTs would do in Betting on the Past. As far as I understand, LDTs would take correlations with other decision algorithms into account, but they would ignore “retrocausality” (that is, they would smoke in the Smoking Lesion, chew gum in the chewing gum problem, etc.). If there is a purely physical cause, then this causal node isn’t altered in the logical counterfactuals that an LDT agent reasons over. Perhaps if the bet were about the state of the world yesterday, LDT would still take Bet 2. Clearly, LDT’s algorithm already existed yesterday, and it can influence this algorithm’s output; so if it chooses Bet 2, it can change yesterday’s world and make the proposition true. But at some point, this reasoning has to break down. If we choose a more distant point in the past as a reference for Alice’s bet – maybe as far back as the birth of our universe – she’ll eventually be unable to exert any possible influence via logical counterfactuals. At some point, the correlation becomes a purely physical one. All she can do at that point is what opponents of evidential reasoning would call “managing the news” (Lewis, 1981) – she can merely try to go for the action that gives her the best Bayesian update.

So, do Logical Decision Theories get it wrong? I’m not sure about that; they come in different versions, and some haven’t yet been properly formalized, so it’s hard for me to judge. I can very well imagine that e.g. Proof-Based Decision Theory would take Bet 2, since it could prove P to be either true or false, contingent on the action it would take. I would argue, though, that if a decision theory takes Bet 2 – and if I’m right about Betting on the Past being a medical Newcomb problem – then it appears it would also have to “one-box”, i.e. take the option recommended by EDT, in other medical Newcomb problems.

If all of this is true, it might imply that we don’t really need LDT’s logical conditioning and that EDT’s simple Bayesian conditioning on actions could suffice. The only remaining difference between LDT and EDT would then be EDT’s lack of updatelessness. What would an updateless version of EDT look like? Some progress on this front has already been made by Everitt, Leike, and Hutter 2015. Caspar Oesterheld and I hope to be able to say more about it soon ourselves.

Acknowledgement

I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.