Goertzel’s GOLEM implements evidential decision theory applied to policy choice

I’ve written about the question of which decision theories describe the behavior of approaches to AI like the “Law of Effect”. In this post, I would like to discuss GOLEM, an architecture for a self-modifying artificial intelligence agent described by Ben Goertzel (2010; 2012). Goertzel calls it a “meta-architecture” because all of the intelligent work of the system is done by sub-programs that the architecture assumes as given, such as a program synthesis module (cf. Kaiser 2007).

Roughly, the top-level self-modification is done as follows. For any proposal for a (partial) self-modification, i.e. a new program to replace (part of) the current one, the “Predictor” module predicts how well that program would achieve the goal of the system. Another part of the system — the “Searcher” — then tries to find programs that the Predictor deems superior to the current program. So, at the top level, GOLEM chooses programs according to some form of expected value calculated by the Predictor. The first interesting decision-theoretical statement about GOLEM is therefore that it chooses policies — or, more precisely, programs — rather than individual actions. Thus, it would probably give the money in at least some versions of counterfactual mugging. This is not too surprising, because it is unclear on what basis one should choose individual actions when the effectiveness of an action depends on the agent’s decisions in other situations.
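As a toy illustration, the top-level loop might look as follows. Everything here (`golem_step`, the numeric "programs", the Predictor and Searcher stubs) is my own hypothetical stand-in, not from Goertzel's papers; the point is only that whole programs, not individual actions, are compared.

```python
import random

def golem_step(current_program, predictor, searcher, candidates=100):
    """One top-level self-modification step: adopt a candidate program
    only if the Predictor scores it above the current program."""
    best = current_program
    for _ in range(candidates):
        proposal = searcher(best)          # Searcher proposes a (partial) rewrite
        if predictor(proposal) > predictor(best):
            best = proposal                # programs, not actions, are compared
    return best

# Toy instantiation: "programs" are numbers, the goal is to reach 10.
predictor = lambda prog: -abs(prog - 10)   # predicted goal achievement
searcher = lambda prog: prog + random.choice([-1, 1])
random.seed(0)
print(golem_step(0, predictor, searcher))
```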

The next natural question to ask is, of course, what expected value (causal, evidential or other) the Predictor computes. Like the other aspects of GOLEM, the Predictor is subject to modification. Hence, we need to ask according to what criteria it is updated. The criterion is provided by the Tester, a “hard-wired program that estimates the quality of a candidate Predictor” based on “how well a Predictor would have performed in the past” (Goertzel 2010, p. 4). I take this to mean that the Predictor is judged based on the extent to which it is able to predict the things that actually happened in the past. For instance, imagine that at some time in the past the GOLEM agent self-modified to a program that one-boxes in Newcomb’s problem. Later, the agent actually faced a Newcomb problem based on a prediction that was made before the agent self-modified into a one-boxer, and won a million dollars. Then the Predictor should be able to predict that self-modifying to one-boxing in this case “yielded” a million dollars, even though it did not do so causally. More generally, to maximize the score from the Tester, the Predictor has to compute regular (evidential) conditional probabilities and expected utilities. Hence, it seems that the EV computed by the Predictor is a regular EDT-ish one. This is not too surprising either, because, as we have seen before, it is much more common for learning algorithms to implement EDT, especially if they implement something which looks like the Law of Effect.

In conclusion, GOLEM learns to choose policy programs based on their EDT-expected value.

Acknowledgements

This post is based on a discussion with Linda Linsefors, Joar Skalse, and James Bell. I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.

Three wagers for multiverse-wide superrationality

In this post, I outline three wagers in favor of the hypothesis that multiverse-wide superrationality (MSR) has action-guiding implications. MSR is based on three core assumptions:

  1. There is a large or infinite universe or multiverse.
  2. One should apply an acausal decision theory.
  3. An agent’s actions provide evidence about the actions of other, non-identical agents with different goals in other parts of the universe.

There are three wagers corresponding to these three assumptions. The wagers work only with those value systems that can also benefit from MSR (for instance, with total utilitarianism) (see Oesterheld, 2017, sec. 3.2). I assume such a value system in this post. I am currently working on a longer paper about a wager for assumption 2, which will discuss the premises for this wager in more detail.

A wager for acausal decision theory and a large universe

If this universe is very large or infinite, then it is likely that there is an identical copy of the part of the universe that is occupied by humans somewhere far away in space (Tegmark 2003, p. 464). Moreover, there will be vastly many or infinitely many such copies. Hence, for example, if an agent prevents a small amount of suffering on Earth, this will be accompanied by many copies doing the same, resulting in many times that amount of suffering averted throughout the universe.

Assuming causal decision theory (CDT), the impact of an agent’s copies is not taken into account when making decisions—there is an evidential dependence between the agent’s actions and the actions of their copies, but no causal influence. According to evidential decision theory (EDT), on the other hand, an agent should take such dependences into account when evaluating different choices. For EDT, a choice between two actions on Earth is also a choice between the actions of all copies throughout the universe. The same holds for all other acausal decision theories (i.e., decision theories that take such evidential dependences into account): for instance, for the decision theories developed by MIRI researchers (such as functional decision theory (Yudkowsky and Soares, 2017)), and for Poellinger’s variation of CDT (Poellinger, 2013).

Each of these considerations on its own would not be able to get a wager off the ground. But jointly, they can do so: on the one hand, given a large universe, acausal decision theories will claim a much larger impact with each action than causal decision theory does. Hence, there is a wager in favor of these acausal decision theories. Suppose an agent applies some meta decision theory (see MacAskill, 2016, sec. 2) that aggregates the expected utilities provided by individual decision theories. Even if the agent assigns a small credence to acausal decision theories, these theories will still dominate the meta decision theory’s expected utilities. On the other hand, if an agent applies an acausal decision theory, they can have a much higher impact in a large universe than in a small universe. The agent should thus always act as if the universe is large, even if they only assign a very small credence to this hypothesis.
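The dominance argument can be made concrete with a small calculation. All the numbers below are made up for illustration: even a 5% credence in the conjunction "an acausal decision theory is correct and the universe is large" swamps the meta-level expected value.

```python
# Toy meta-decision-theory calculation (all numbers are illustrative).
# An action averts 1 unit of suffering on Earth; in a large universe it is
# mirrored by many copies, which acausal theories count and CDT does not.
copies_if_large = 10**10

credences = {
    ("CDT", "small"): 0.50,
    ("CDT", "large"): 0.40,
    ("EDT", "small"): 0.05,
    ("EDT", "large"): 0.05,   # small credence in acausal DT + large universe
}

def impact(theory, size):
    if theory == "EDT" and size == "large":
        return copies_if_large   # copies' actions count as part of the choice
    return 1                     # only the local action counts

meta_ev = sum(p * impact(t, s) for (t, s), p in credences.items())
local_ev = sum(credences.values())   # what a pure CDT-in-a-small-world view says
# The EDT + large-universe term dominates the aggregated expected value.
```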

In conclusion, most of an agent’s impact comes from applying an acausal decision theory in a large universe. Even if the agent assigns a small credence both to acausal decision theories and to the hypothesis that the universe is large, they should still act as if they placed a high credence in both.

A wager in favor of higher correlations

In explaining the third wager, it is important to note that I assume a subjective interpretation of probability. If I say that there is a correlation between the actions of two agents, I mean that, given one’s subjective beliefs, observing one agent’s action provides evidence about the other agent’s action. Moreover, I assume that agents are in a symmetrical decision situation—for instance, this is the case for two agents in a prisoner’s dilemma. If the decision situation is symmetrical, and if the agents are sufficiently similar, their actions will correlate. The theory of MSR says that agents in a large universe probably are in a symmetrical decision situation (Oesterheld, 2017, sec. 2.8).

There exists no general theory of correlations between different agents. It seems plausible to assume that a correlation between the actions of two agents must be based on a logical correlation between the decision algorithms that these two agents implement. But it is not clear how to think about the decision algorithms that humans implement, for instance, and how to decide whether two decision algorithms are functionally equivalent (Yudkowsky and Soares, 2017, sec. 3). There exist solutions to these problems only in some narrow domains—for instance, for agents represented by programs written in some specific programming language.

Hence, it is also not clear which agents’ actions in a large universe correlate, given that all are in a symmetrical decision situation. It could be that an agent’s actions correlate only with those of very close copies. Since such copies share the same values as the agent, MSR would then not have any action-guiding consequences: the agent would just continue to pursue their original goal function. If, on the other hand, there are many correlating agents with different goals, then MSR has strong implications. In the latter case, there can be gains from trade between these agents’ different value systems.

Just as there is a wager for applying acausal decision theory in general, there is also a wager in favor of assuming that an agent’s actions correlate with more rather than fewer different agents. Suppose there are two hypotheses: (H1) Alice’s actions only correlate with the actions of (G1) completely identical copies of Alice, and (H2) Alice’s actions correlate with (G2) all other agents that ever gave serious consideration to MSR or some equivalent idea.

(In both cases, I assume that Alice has seriously considered MSR herself.) G1 is a subset of G2, and it is plausible that G2 is much larger than G1. Moreover, it is plausible that there are also agents with Alice’s values among the agents in G2 which are not also in G1. Suppose 1-p is Alice’s credence in H1, and p her credence in H2. Suppose further that there are n agents in G1 and m agents in G2, and that q is the fraction of agents in G2 sharing Alice’s values. All agents have the choice between (A1) only pursuing their own values, and (A2) pursuing the sum over the values of all agents in G2. Choosing A1 gives an agent 1 utilon. Suppose g denotes the possible gains from trade; that is, choosing A2 produces (1+g)×s utilons for each value system, where s is the fraction of agents in G2 supporting that value system. If everyone in G2 chooses A2, this produces (1+g)×q×m utilons for Alice’s value system, while, if everyone chooses A1, this produces only q×m utilons in total for Alice.

The decision situation for Alice can be summarized by the following choice matrix (assuming, for simplicity, that all correlations are perfect):

        H1               H2
A1      n+c              q×m
A2      (1+g)×q×n+c      (1+g)×q×m

Here, the cells denote the expected utilities that EDT assigns to either of Alice’s actions given either H1 or H2. c is a constant that denotes the expected value generated by the agents in G2 that are non-identical to Alice, given H1. It plays no role in comparing A1 and A2, since, given H1, these agents are not correlated with Alice: the value will be generated no matter which action she picks. The value for H1∧A2 is unrealistically high, since it supposes the same gains from trade as H2∧A2, but this does not matter here. According to EDT, Alice should choose A2 over A1 iff

g×p×q×m > (1-p)×n – (1+g)×(1-p)×n×q.

It seems likely that q×m is larger than n—the requirement that an agent must be a copy of Alice restricts the space of agents more than that of having thought about MSR and sharing Alice’s values. Therefore, even if the gains from trade and Alice’s credence in H2 (i.e., g×p) are relatively small, g×p×q×m is still larger than n, and EDT recommends A2.
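As a sanity check on the choice matrix and the inequality, one can plug in made-up parameter values and verify that the algebraic condition agrees with a direct comparison of the expected utilities:

```python
# Numerically check the choice matrix and the A2 > A1 condition
# (all parameter values below are made up for illustration).
n, m = 10, 10**6        # identical copies vs. agents who considered MSR
q = 0.01                # fraction of G2 sharing Alice's values
g = 0.1                 # gains from trade
p = 0.05                # credence in H2
c = 123.0               # value from uncorrelated agents under H1 (arbitrary)

ev_a1 = (1 - p) * (n + c) + p * (q * m)
ev_a2 = (1 - p) * ((1 + g) * q * n + c) + p * ((1 + g) * q * m)

# Equivalent algebraic condition: g*p*q*m > (1-p)*n - (1+g)*(1-p)*n*q
lhs = g * p * q * m
rhs = (1 - p) * n - (1 + g) * (1 - p) * n * q
assert (ev_a2 > ev_a1) == (lhs > rhs)
```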

While the argument for this wager is not as strong as the argument for the first two wagers, it is still plausible. It is plausible that there are many more agents who have thought about MSR and share a person’s values than there are identical copies of that person. Hence, if the person’s actions correlate with the actions of all the agents in the larger group, the person’s actions have a much higher impact. Moreover, in this case, they plausibly also correlate with the actions of many agents holding different values, allowing for gains from trade. Therefore, one should act as if there were more rather than fewer correlations, even if one assigns a rather low credence to that hypothesis.

Acknowledgements

I am grateful to Caspar Oesterheld and Max Daniel for helpful comments on a draft of this post. I wrote this post while working for the Foundational Research Institute, which is now the Center on Long-Term Risk.

A wager against Solomonoff induction

The universal prior assigns zero probability to non-computable universes—for instance, universes that could only be represented by Turing machines in which uncountably many locations need to be updated, or universes in which the halting problem is solved in physics. While such universes might very likely not exist, one cannot justify assigning literally zero credence to their existence. I argue that it is of overwhelming importance to make a potential AGI assign a non-zero credence to incomputable universes—in particular, universes with uncountably many “value locations”.

Here, I assume a model of universes as sets of value locations. Given a specific goal function, each element in such a set could specify an area in the universe with some finite value. If a structure contains a sub-structure, and both the structure and the sub-structure are valuable in their own regard, there could either be one or two elements representing this structure in the universe’s set of value locations. If a structure is made up of infinitely many sub-structures, all of which the goal function assigns some positive but finite value to, then this structure could (if the sum of values does not converge) possibly only be represented by infinitely many elements in the set. If the set of value locations representing a universe is countable, then the value of said universe could be the sum over the values of all elements in the set (granted that some ordering of the elements is specified). I write that a universe is “countable” if it can be represented by a finite or countably infinite set, and a universe is “uncountable” if it can only be represented by an uncountably infinite set.
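A toy version of this model in code, under the assumption of a fixed ordering of the value locations; the locations and the values assigned to them are invented for illustration. It shows why the ordering matters: the total is a (limit of a) sum over the ordered locations.

```python
import math
from itertools import islice

# Toy "value of a countable universe": the sum over an ordered, countably
# infinite set of value locations, approximated by partial sums.
def value_locations():
    k = 1
    while True:
        yield (-1) ** (k + 1) / k   # value assigned to the k-th location
        k += 1

def universe_value(n_terms):
    # Only defined relative to the chosen ordering of the locations.
    return sum(islice(value_locations(), n_terms))

# Under this ordering the values sum to ln 2; reordering the very same
# locations could make the series converge to a different total.
assert abs(universe_value(10**5) - math.log(2)) < 1e-4
```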

A countable universe, for example, could be a regular cellular automaton. If the automaton has infinitely many cells, then, given a goal function such as total utilitarianism, the automaton could be represented by a countably infinite set of value locations. An uncountable universe, on the other hand, could be a cellular automaton in which there is a cell for each real number, and interactions between cells over time are specified by a mathematical function. Given some utility functions over such a universe, one might be able to represent the universe only by an uncountably infinite set of value locations. Importantly, even though the universe could be described in logic, it would be incomputable.

Depending on one’s approach to infinite ethics, an uncountable universe could matter much more than a countable universe. Agents in uncountable universes might—with comparatively small resource investments—be able to create (or prevent), for instance, amounts of happiness or suffering that could not be created in an entire countable universe. For instance, each cell in the abovementioned cellular automaton might consist of some (possibly valuable) structure in and of itself, and the cells’ structures might influence each other. Moreover, some (uncountable) set of cells might be regarded as an agent. The agent might then be able to create a positive amount of happiness in uncountably many cells, which—at least given some definitions of value and approaches to infinite ethics—would amount to more value than could ever be created in a countable universe.

Therefore, there is a wager in favor of the hypothesis that humans actually live in an uncountable universe, even if it appears unlikely given current scientific evidence. But there is also a different wager, which applies if there is a chance that such a universe exists, regardless of whether humans live in that universe. It is unclear which of the two wagers dominates.

The second wager is based on acausal trade: there might be agents in an uncountable universe that do not benefit from the greater possibilities of their universe—e.g., because they do not care about the number of individual copies of some structure, but instead care about an integral over the structures’ values relative to some measure over structures. While agents in a countable universe might be able to benefit those agents equally well, they might be much worse at satisfying the values of agents with goals sensitive to the greater possibilities in uncountable universes. Thus, due to different comparative advantages, there could be great gains from trade between agents in countable and uncountable universes.

The above example might sound outlandish, and it might be flawed in that one could not actually come up with interaction rules that would lead to anything interesting happening in the cellular automaton. But this is irrelevant. It suffices that there is only the faintest possibility that an AGI could have an acausal impact in an incomputable universe which, according to one’s goal function, would outweigh all impact in all computable universes. There probably exists a possible universe like that for most goal functions. Therefore, one could be missing out on virtually all impact if the AGI employs Solomonoff induction.

There might not only be incomputable universes represented by a set that has the cardinality of the continuum, but there might be incomputable universes represented by sets of any cardinality. In the same way that there is a wager for the former, there is an even stronger wager for universes with even higher cardinalities. If there is a universe of highest cardinality, it appears to make sense to optimize only for acausal trade with that universe. Of course, there could be infinitely many different cardinalities, so one might hope that there is some convergence as to the values of the agents in universes of ever higher cardinalities (which might make it easier to trade with these agents).

In conclusion, there is a wager in favor of considering the possibility of incomputable universes: even a small acausal impact (relative to the total resources available) in an incomputable universe could counterbalance everything humans could do in a computable universe. Crucially, an AGI employing Solomonoff induction will not consider this possibility, hence potentially missing out on unimaginable amounts of value.

Acknowledgements

Caspar Oesterheld and I came up with the idea for this post in a conversation. I am grateful to Caspar Oesterheld and Max Daniel for helpful feedback on earlier drafts of this post.

UDT is “updateless” about its utility function

Updateless decision theory (UDT) (or some variant thereof) seems to be widely accepted as the best current solution to decision theory by MIRI researchers and LessWrong users. In this short post, I outline one potential implication of being completely updateless. My intention is not to refute UDT, but to show that:

  1. It is not clear how updateless one might want to be, as this could have unforeseen consequences.
  2. If one endorses UDT, one should also endorse superrational cooperation on a very deep level.

My argument is simple, and draws on the idea of multiverse-wide superrational cooperation (MSR), which is a form of acausal trade between agents with correlated decision algorithms. Thinking about MSR instead of general acausal trade has the advantage that it seems conceptually easier, while the conclusions gained should hold in the general case as well. Nevertheless, I am very uncertain and expect the reality of acausal cooperation between AIs to look different from the picture I draw in this post.

Suppose humans have created a friendly AI with a CEV utility function and UDT as its decision theory. This version of UDT has solved the problem of logical counterfactuals and algorithmic correlation, and can readily spot any correlated agent in the world. Such an AI will be inclined to trade acausally with other agents—agents in parts of the world it does not have causal access to. It will do so, for instance, to achieve gains from comparative advantages under different empirical circumstances, and to exploit the diminishing marginal returns of pursuing any single value system.

For the trade implied by MSR, the AI does not have to simulate other agents and engage in some kind of Löbian bargain with them. Instead, the AI has to find out whether the agents’ decision algorithms are functionally equivalent to the AI’s decision algorithm, it has to find out about the agents’ utility functions, and it has to make sure the agents are in an empirical situation such that trade benefits both parties in expectation. (Of course, to do this, the AI might also have to perform a simulation.) The easiest trading step seems to be the one with all other agents using updateless decision theory and the same prior. In this context, it is possible to neglect many of the usual obstacles to acausal trade. These agents share everything except their utility function, so there will be little if any “friction”—as long as the compromise takes differences between utility functions into account, the correlation between the agents will be perfect. It would get more complicated if the versions of UDT diverged a bit, and if the priors were slightly different. (More on this later.) I assume here that the agents can find out about the other agents’ utility functions. Although these are logically determined by the prior, the agents might be logically uncertain, and calculating the distribution of utility functions of UDT agents might be computationally expensive. I will ignore this consideration here.

A possible approach to this trade is to effectively choose policies based on a weighted sum of the utility functions of all UDT agents in all the possible worlds contained in the AI’s prior (see Oesterheld 2017, section 2.8 for further details). Here, the weights will be assigned such that in expectation, all agents will have an incentive to pursue this sum of utility functions. It is not exactly clear how such weights will be calculated, but it is likely that all agents will adopt the same weights, and it seems clear that once this weighting is done based on the prior, it won’t change after finding out which of the possible worlds from the prior is actual (Oesterheld 2017, section 2.8.6). If all agents adopt the policy of always pursuing a sum of their utility functions, the expected marginal additional goal fulfillment for all AIs at any point in the future will be highest. The agents will act according to the “greatest good for the greatest number.” Any individual agent won’t know whether they will benefit in reality, but that is irrelevant from the updateless perspective. This becomes clear if we compare the situation to thought experiments like the Counterfactual Mugging. Even if in the actual world, the AI cannot benefit from engaging in the compromise, then it was still worth it from the prior viewpoint, since (given sufficient weight in the sum of utility functions) the AI would have stood to gain even more in another, non-actual world.
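A minimal sketch of such a compromise; the agents, weights, outcomes, and utility functions are placeholders I made up. The key property is that the weighted sum is fixed at the prior stage and can favor a "trade" policy over either agent's own-values policy:

```python
# Sketch of the compromise: policies are scored against a fixed weighted
# sum of the utility functions of all UDT agents in the prior.
agents = {
    "paperclipper": {"weight": 0.6, "utility": lambda o: o["clips"]},
    "stapler":      {"weight": 0.4, "utility": lambda o: o["staples"]},
}

def compromise_utility(outcome):
    # Fixed weights from the prior stage; not re-weighted after observing
    # which of the possible worlds is actual.
    return sum(a["weight"] * a["utility"](outcome) for a in agents.values())

policies = {
    "only_clips":   {"clips": 10, "staples": 0},
    "only_staples": {"clips": 0, "staples": 10},
    "mixed":        {"clips": 7, "staples": 7},   # gains from trade
}

best = max(policies, key=lambda p: compromise_utility(policies[p]))
```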

If the agents are also logically updateless, this reduces the information on which the weights of the agents’ utility functions are based. There are probably many logical implications that could be drawn from an empirical prior and the utility functions about aspects of the trade—e.g., that the trade will benefit only the most common utility functions, or that some values won’t be pursued by anyone in practice—that are only one logical implication step away from a logical prior. If the AI is logically updateless, it will always perform the action that it would have committed to before it learned about these implications. Of course, logical updatelessness is an unresolved issue, and its implications for MSR will depend on possible solutions to the problem.

In conclusion, in order to implement the MSR compromise, the AI will start looking for other UDT agents in all possible (and, possibly, impossible) worlds in its prior. It will find out about their utility functions and calculate a weighted sum over all of them. This is what I mean by the statement that UDT is “updateless” about its utility function: no matter what utility function it starts out with, its own function might still have negligible weight in the goals the UDT AI will pursue in practice. At this point, it becomes clear that it really matters what this prior looks like. What is the distribution of the utility functions of all UDT agents given the universal prior? There might be worlds less complex than the world humans live in—for instance, a cellular automaton, such as Rule 110 or Game of Life, with a relatively simple initial state—which still contain UDT agents. Given that these worlds might have a higher prior probability than the human world, they might get a higher weight in the compromise utility function. The AI might end up maximizing the goal functions of the agents in the simplest worlds.

Is updating on your existence a sin?

One of the features of UDT is that it does not even condition the prior on the agent’s own existence—when evaluating policies, UDT also considers their implications in worlds that do not contain an instantiation of the agent, even though by the time the agent thinks its first thought, it can be sure that these worlds do not exist. This might not be a problem if one assigns high weight to a modal realism/Tegmark Level 4 universe anyway. An observation can never distinguish between a world in which all worlds exist, and one in which only the world featuring the current observation exists. So if the measure of all the “single worlds” is small, then updating on existence won’t change much.

Suppose that this is not the case. Then there might be many worlds that can already be excluded as non-actual based on the fact that they don’t contain humans. Nevertheless, they might contain UDT agents with alien goals. This poses a difficult choice: Given UDT’s prior, the AI will still cooperate with agents living in non-actual (and impossible, if the AI is logically updateless) worlds. This is because given UDT’s prior, it could have been not humans, but these alien agents, that turned out actual—in which case they could have benefited humans in return. On the other hand, if the AI is allowed to condition on such information, then it loses in a kind of counterfactual prisoner’s dilemma:

  • Counterfactual prisoner’s dilemma: Omega has gained total control over one universe. In the pursuit of philosophy, Omega flips a fair coin to determine which of two agents she should create. If the coin comes up heads, Omega will create a paperclip maximizer. If it comes up tails, she creates an otherwise identical agent with one difference: it is a staple maximizer. After creating one of these agents, Omega hands it total control over the universe and lets it know about this procedure. There are gains from trade: producing both paperclips and staples creates 60% utility for both of the agents, while producing only one of those creates 100% utility for one of the agents. Hence, both agents would (in expectation) benefit from a joint precommitment to a compromise utility function, even if only one of the agents is actually created. What should the created agent do?

If the agents condition on their existence, then they will not gain as much in expectation as they could otherwise expect to gain before the coin flip (when neither of the agents existed). I have chosen this thought experiment because it is not confounded by the involvement of simulated agents, a factor which could lead to anthropic uncertainty and hence make the agents more updateless than they would otherwise be.
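Plugging the stated payoffs into an ex-ante calculation (from the perspective of, say, the paperclip maximizer, before the coin flip) shows why the joint precommitment wins:

```python
# Ex-ante expected utilities in the counterfactual prisoner's dilemma,
# using the payoffs from the thought experiment (utility as fractions).
p_heads = 0.5   # probability that the paperclip maximizer is created

def ev(policy):
    # Whichever agent is created controls the universe; the non-created
    # agent's values are served only insofar as the created one compromises.
    if policy == "compromise":        # produce both goods in either world
        return p_heads * 0.6 + (1 - p_heads) * 0.6
    if policy == "own_values_only":   # created agent maximizes its own goal
        # You benefit only in the world where you are the one created.
        return p_heads * 1.0 + (1 - p_heads) * 0.0

# Committing to the compromise beats conditioning on one's existence:
assert ev("compromise") > ev("own_values_only")
```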

UDT agents with differing priors

What about UDT agents using differing priors? For simplicity, I suppose there are only two agents. I also assume that both agents have equal capacity to create utilons in their universes. (If this is not the case, the weights in the resulting compromise utility function have to be adjusted.) Suppose both agents start out with the same prior, but update it on their own existence—i.e., they both exclude any worlds that don’t contain an instantiation of themselves. This posterior is then used to select policies. Agent B can’t benefit from any cooperative actions by agent A in a world that only exists in agent A’s posterior. Conversely, agent A also can’t benefit from agent B in worlds that agent A doesn’t think could be actual anymore. So the UDT policy will recommend pursuing a compromise function only in worlds lying in the intersection of the worlds in both agents’ posteriors. If either agent updates that they are in some of the worlds to which the other agent assigns approximately zero probability, then they won’t cooperate.

More generally, if both agents know which world is actual, and this is a world which they both inhabit, then it doesn’t matter which prior they used to select their policies. (Of course, this world must have nonzero probability in both of their priors; otherwise they wouldn’t ever update that said world is actual.) From the prior perspective, for agent A, every sacrificed utilon in this world is weighted by its prior measure of the world. Every gained utilon from agent B is also weighted by the same prior measure. So there is no friction in this compromise—if both agents decide between an action a, which gives themselves d utilons, and an action b, which gives the other agent c utilons, then either agent will prefer option b iff c multiplied by this agent’s prior measure of the world is greater than d multiplied by the same prior measure, i.e., iff c is greater than d. Given that there is a way to normalize both agents’ utility functions, pursuing a sum of those utility functions seems optimal.

We can even expand this to the case wherein the two agents have any differing priors with a nonempty intersection between the corresponding sets of possible worlds. In expectation, the policy that says: “if any world outside the intersection is actual: don’t compromise; if any world from the intersection is actual: do the standard UDT compromise, but use the posterior distribution in which all worlds outside the intersection have zero probability for policy selection” seems best. When evaluating this policy, both agents can weight both utilons sacrificed for others, as well as utilons gained from others, in any of the worlds from the intersection by the measure of the entire intersection in their own respective priors. This again creates a symmetrical situation with a 1:1 trade ratio between utilons sacrificed and gained.

Another case to consider is one in which the agents also distribute the relative weights between the worlds in the intersection differently. I think that this does not lead to asymmetries (in the sense that conditional on some of the worlds being actual, one agent stands to gain and lose more than the other agent). Suppose agent A has 30% on world S1, and 20% on world S2. Agent B, on the other hand, has 10% on world S1 and 20% on world S2. If both agents follow the policy of pursuing the sum of utility functions, given that they find themselves in either of the two shared worlds, then, ceteris paribus, both will in expectation benefit to an equal degree. For instance, let c1 (c2) be the amount of utilons either agent can create for the other agent in world S1 (S2), and d1 (d2) the respective amount agents can create for themselves. Then agent A gets either 0.3×c1+0.2×c2 or 0.3×d1+0.2×d2, while B chooses between 0.1×c1+0.2×c2 and 0.1×d1+0.2×d2. Here, it’s not the case that A prefers cooperating iff B prefers cooperating. But assuming that in expectation, c1 = c2 as well as d1 = d2, this leads to a situation where both prefer cooperation iff c1 > d1. It follows that just pursuing a sum of both agents’ utility functions is, in expectation, optimal for both agents.
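This claim is easy to verify numerically; `prefers_cooperation` and the sample values below are my own illustrative stand-ins, using the weights from the example:

```python
# Check the numerical example: with c1 = c2 and d1 = d2, both agents
# prefer cooperation exactly when c1 > d1, despite differing world-weights.
def prefers_cooperation(w1, w2, c1, c2, d1, d2):
    return w1 * c1 + w2 * c2 > w1 * d1 + w2 * d2

for c1, d1 in [(3.0, 2.0), (2.0, 3.0), (1.0, 1.0)]:
    c2, d2 = c1, d1                                     # assumption from the text
    a = prefers_cooperation(0.3, 0.2, c1, c2, d1, d2)   # agent A's weights
    b = prefers_cooperation(0.1, 0.2, c1, c2, d1, d2)   # agent B's weights
    assert a == b == (c1 > d1)
```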

Lastly, consider a combination of non-identical priors with empirical uncertainty. For UDT, empirical uncertainty between worlds translates into anthropic uncertainty about which of the possible worlds the agent inhabits. In this case, as expected, there is “friction”. For example, suppose agent A assigns p to the intersection of the worlds in both agents’ priors, while agent B assigns p/q. Before they find out whether one of the worlds from the intersection or some other world is actual, the situation is the following: B can benefit from A’s cooperation in only p/q of the worlds. A can benefit in p of the worlds from B, but for everything A does, this will only mean p/q as much to agent B. Now each agent can again either create d utilons for themselves, or perform a cooperative action that gives c utilons to the other agent in the world where the action is performed. Given uncertainty about which world is actual, if both agents choose cooperation, agent A receives c×p utilons in expectation, while agent B receives c×p/q utilons in expectation. Defection gives both agents d utilons. So for cooperation to be worth it, c×p and c×p/q both have to be greater than d. If this is the case, then unless q = 1 (so that p and p/q coincide), the two agents’ gains from trade are still unequal. This appears to be a bargaining problem that is not solved as easily as the examples above.
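A quick numerical check of this asymmetry; the values of p, q, c and d are hypothetical, chosen so that cooperation is worthwhile for both agents:

```python
p, q = 0.3, 2.0   # A assigns p to the intersection; B assigns p / q
c, d = 20.0, 2.0  # c: utilons given to the other agent; d: kept for oneself

gain_A = c * p      # A's expected gain from B's cooperative action
gain_B = c * p / q  # B's expected gain from A's cooperative action

# Cooperation beats defection for both agents...
print(gain_A > d and gain_B > d)  # True
# ...but since q != 1, their gains from trade are unequal:
print(gain_A == gain_B)  # False
```

Because both gains scale with c, raising the stakes doesn’t remove the asymmetry; it only changes whether cooperation clears the threshold d for each agent.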

Conclusion

I actually endorse the conclusion that humans should cooperate with all correlating agents. Although humans’ decision algorithms might not correlate with as many other agents, and they might not be able to compromise as efficiently as super-human AIs, humans should nevertheless pursue some multiverse-wide sum of values. What I’m uncertain about is how far updatelessness should go. For instance, it is not clear to me which empirical and logical evidence humans should and shouldn’t take into account when selecting policies. If an AI does not start out with the knowledge that humans possess but instead uses the universal prior, then it might perform actions that seem irrational given human knowledge. Even if observations are logically inconsistent with the existence of a fellow cooperation partner (i.e., in the updated distribution, the cooperation partner’s world has zero probability), UDT might still cooperate with and possibly adopt that partner’s values. At this point, I doubt whether everyone would still agree with the hypothesis that UDT always achieves the highest utility.

Acknowledgements

I thank Caspar Oesterheld, Max Daniel, Lukas Gloor, and David Althaus for helpful comments on a draft of this post, and Adrian Rorheim for copy editing.

Market efficiency and charity cost-effectiveness

In an efficient market, one can expect that most goods are sold at a price-quality ratio that is hard to improve upon. If there were some easy way to produce a product cheaper or to produce a higher-quality version of it for a similar price, someone else would probably have seized that opportunity already – after all, there are many people who are interested in making money. Competing with and outperforming existing companies thus requires luck, genius or expertise. Also, if you trust other buyers to be reasonable, you can more or less blindly buy any “best-selling” product.

Several people, including effective altruists, have remarked that this is not true in the case of charities. Since most donors don’t systematically choose the most cost-effective charities, most donations go to charities that are much less cost-effective than the best ones. Thus, if you sit on a pile of resources – your career, say – outperforming the average charity at doing good is fairly easy.

The fact that charities don’t compete for cost-effectiveness doesn’t mean there’s no competition at all. Just like businesses in the private sector compete for customers, charities compete for donors. It just happens to be the case that being good at convincing people to donate doesn’t correlate strongly with cost-effectiveness.

Note that in the private sector, too, there can be a misalignment between persuading customers and producing the kind of product you are interested in, or even the kind of product that customers in general will enjoy or benefit from using. Any example will be at least somewhat controversial, as it will suggest that buyers make suboptimal choices. Nevertheless, I think addictive drugs like cigarettes are an example that many people can agree with. Cigarettes seem to provide almost no benefits to consumers, at least relative to taking nicotine directly. Nevertheless, people buy them, perhaps because smoking is associated with being cool or because they are addictive.

One difference between competition in the for-profit and nonprofit sectors is that the latter lacks monetary incentives. It’s nearly impossible to become rich by founding or working at a charity. Thus, people primarily interested in money won’t start a charity, even if they have developed a method of persuading people of some idea that is much more effective than existing methods. However, making a charity succeed is still rewarded with status and (the belief in) having had an impact. So in terms of persuading people to donate, the charity “market” is probably somewhat efficient in areas that confer status and that potential founders and employees intrinsically care about.

If you care about investing your resource pile most efficiently, this efficiency at persuading donors offers little consolation. On the contrary, it even predicts that if you use your resources to found or support an especially cost-effective charity, fundraising will be difficult. Perhaps you previously thought that, since your charity is “better”, it will also receive more donations than existing ineffective charities. But now it seems that if cost-effectiveness really helped with fundraising, more charities would have already become more cost-effective.

There are, however, cause areas in which the argument about effectiveness at persuasion carries a different tone. In these cause areas, being good at fundraising strongly correlates with being good at what the charity is supposed to do. An obvious example is that of charities whose goal it is to fundraise for other charities, such as Raising for Effective Giving. (Disclosure: I work for REG’s sister organization FRI and am a board member of REG’s parent organization EAF.) If an organization is good at fundraising for itself, it’s probably also good at fundraising for others. So if there are already lots of organizations whose goal it is to fundraise for other organizations, one might expect that these organizations already do this job so well that they are hard to outperform in terms of money moved per resources spent. (Again, some of these may be better because they fundraise for charities that generate more value according to your moral view.)

Advocacy is another cause area in which successfully persuading donors correlates with doing a very good job overall. If an organization can persuade people to donate and volunteer to promote veganism, it seems plausible that they are also good at promoting veganism. Perhaps most of the organization’s budget even comes from people they persuaded to become vegan, in which case their ability to find donors and volunteers is a fairly direct measure of their ability to persuade people to adopt a vegan diet. (Note that I am, of course, not saying that competition ensures that organizations persuade people of the most useful ideas.) As with fundraising organizations, this suggests that it’s hard to outperform advocacy groups in areas where lots of people have incentives to advocate, because if there were some simple method of persuading people, it’s very likely that some large organization based on that method would have already been established.

That said, there are many caveats to this argument for a strong correlation between fundraising and advocacy effectiveness. First off, for many organizations, fundraising appears to be primarily about finding, retaining and escalating a small number of wealthy donors. For some organizations, a similar statement might be true about finding volunteers and employees. In contrast, the goal of most advocacy organizations is to persuade a large number of people.1 So there may be organizations whose members are very persuasive in person and thus capable of bringing in many large donors, but who don’t have any idea about how to run a large-scale campaign oriented toward “the masses”. When trying to identify cost-effective advocacy charities, this problem can, perhaps, be addressed by giving some weight to the number of donations that a charity brings in, as opposed to donation sizes alone.2 However, the more important point is that if growing big is about big donors, then a given charity’s incentives and selection pressures for survival and growth are misaligned with persuading many people. Thus, it becomes more plausible again that the average big or fast-growing advocacy-based charity is a suboptimal use of your resource pile.

Second, I stipulated that a good way of getting new donors and volunteers is to simply persuade as many people of your general message as possible, and then hope that some of these will also volunteer at or donate to your organization. But even if all donors contribute similar amounts, some target audiences are more likely to donate than others.3 In particular, people seem more likely to contribute larger amounts if they have been involved for longer, have already donated or volunteered, and/or hold a stronger or more radical version of your organization’s views. But persuading these community members to donate works in very different ways than persuading new people. For example, being visible to the community becomes more important. Also, if donating is about identity and self-expression, it becomes more important to advocate in ways that express the community’s shared identity rather than in ways that are persuasive but compromising. The target audiences for fundraising and advocacy may also vary a lot along other dimensions: for example, to win an election, a political party has to persuade undecided voters, who tend to be uninformed and not particularly interested in politics (see p. 312 of Achen and Bartel’s Democracy for Realists); but to collect donations, one has to mobilize long-term party members who probably read lots of news, etc.

Third, the fastest-growing advocacy organizations may have large negative externalities.4 Absent regulations and special taxes, the production of the cheapest products will often damage some public good, e.g., through carbon emissions or the corruption of public institutions. Similarly, advocacy charities may damage some public good. The fastest way to find new members may involve being overly controversial, dumbing down the message or being associated with existing powerful interests, which may damage the reputation of a movement. For example, the neoliberals often suffer from being associated with special/business interests and crony capitalism (see sections “Creating a natural constituency” and “Cooption” in Kerry Vaughan’s What the EA community can learn from the rise of the neoliberals), perhaps because associating with business interests often carries short-term benefits for an individual actor. Again, this suggests that the fastest-growing advocacy charity may be much worse overall than the optimal one.

Acknowledgements

I thank Jonas Vollmer, Persis Eskander and Johannes Treutlein for comments. This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).


1. Lobbying organizations, which try to persuade individual legislators, provide a useful contrast. Especially in countries with common law, organizations may also attempt to win individual legal cases.

2. One thing to keep in mind is that investing effort into persuading big donors is probably a good strategy for many organizations. Thus, a small-donor charity that grows less quickly than a big-donor charity may be more or less cost-effective than the big-donor charity.

3. One of the reasons why one might think that drawing in new people is most effective is that people who are already in the community and willing to donate to an advocacy org probably just fund the charity that persuaded them in the first place. Of course, many people may simply not follow the sentiment of donating to the charity that persuaded them. However, many community members may have been persuaded in ways that don’t present such a default option. For example, many people were persuaded to go vegan by reading Animal Liberation. Since the book’s author, Peter Singer, has no room for more funding, these people have to find other animal advocacy organizations to donate to.

4. Thanks to Persis Eskander for bringing up this point in response to an early version of this post.

The law of effect, randomization and Newcomb’s problem

The law of effect (LoE), as introduced on p. 244 of Thorndike’s (1911) Animal Intelligence, states:

Of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by discomfort to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to occur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond.

As I (and others) have pointed out elsewhere, an agent applying LoE would come to “one-box” (i.e., behave like evidential decision theory (EDT)) in Newcomb-like problems in which the payoff is eventually observed. For example, if you face Newcomb’s problem itself multiple times, then one-boxing will be associated with winning a million dollars and two-boxing with winning only a thousand dollars. (As noted in the linked note, this assumes that the different instances of Newcomb’s problem are independent. For instance, one-boxing in the first does not influence the prediction in the second. It is also assumed that CDT cannot precommit to one-boxing, e.g. because precommitment is impossible in general or because the predictions have been made long ago and thus cannot be causally influenced anymore.)

A caveat to this result is that with randomization one can derive more causal decision theory-like behavior from alternative versions of LoE. Imagine an agent that chooses probability distributions over actions, such as the distribution P with P(one-box)=0.8 and P(two-box)=0.2. The agent’s physical action is then sampled from that probability distribution. Furthermore, assume that the predictor in Newcomb’s problem can only predict the probability distribution and not the sampled action and that he fills box B with the probability the agent chooses for one-boxing. If this agent plays many instances of Newcomb’s problem, then she will ceteris paribus fare better in rounds in which she two-boxes. By LoE, she may therefore update toward two-boxing being the better option and consequently two-box with higher probability. Throughout the rest of this post, I will expound on the “goofiness” of this application of LoE.

Notice that this is not the only possible way to apply LoE. Indeed, the more natural way seems to be to apply LoE only to whatever entity the agent has the power to choose rather than something that is influenced by that choice. In this case, this is the probability distribution and not the action resulting from that probability distribution. Applied at the level of the probability distribution, LoE again leads to EDT. For example, in Newcomb’s problem the agent receives more money in rounds in which it chooses a higher probability of one-boxing. Let’s call this version of LoE “standard LoE”. We will call other versions, in which choice is updated to bring some other variable (in this case the physical action) to assume values that are associated with high payoffs, “non-standard LoE”.
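The difference between the two levels can be made concrete with a small calculation. The payoffs are the standard Newcomb ones, and the predictor is assumed to predict the chosen distribution perfectly:

```python
def expected_reward_of_action(one_box, p):
    """Non-standard LoE's view: the expected reward of the sampled physical
    action, given that the predictor filled box B with probability p (the
    chosen probability of one-boxing)."""
    box_b = 1_000_000 * p
    return box_b if one_box else box_b + 1_000

def expected_reward_of_distribution(p):
    """Standard LoE's view: the expected reward of choosing distribution p."""
    return (p * expected_reward_of_action(True, p)
            + (1 - p) * expected_reward_of_action(False, p))

# At any fixed distribution, two-boxing looks better by exactly $1,000,
# so non-standard LoE drifts toward two-boxing:
print(expected_reward_of_action(False, 0.5)
      - expected_reward_of_action(True, 0.5))  # 1000.0

# At the level of distributions, a higher probability of one-boxing yields
# more money, so standard LoE drifts toward one-boxing:
print(expected_reward_of_distribution(1.0))  # 1000000.0
print(expected_reward_of_distribution(0.0))  # 1000.0
```

The expected reward of a distribution works out to 1,000,000×p + 1,000×(1−p), which is increasing in p, even though within every fixed distribution the two-boxing rounds pay more.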

Although non-standard LoE yields CDT-ish behavior in Newcomb’s problem, it can easily be criticized on causalist grounds. Consider a non-Newcomblike variant of Newcomb’s problem in which there is no predictor but merely an entity that reads the agent’s mind and fills box B with a million dollars in causal dependence on the probability distribution chosen by the agent. The causal graph representing this decision problem is given below, with the subject of choice marked red. Unless they are equipped with an incomplete model of the world – one that doesn’t include the probability distribution step – CDT and EDT agree that one should choose the probability distribution over actions that one-boxes with probability 1 in this variant of Newcomb’s problem. After all, choosing that probability distribution causes the game master to see that you will probably one-box and thus also causes him to put money under box B. But if you play this alternative version of Newcomb’s problem and use LoE on the level of one- versus two-boxing, then you would converge on two-boxing because, again, you will fare better in rounds in which you happen to two-box.

[Figure: causal graph of the non-Newcomblike (mind-reading) variant of Newcomb’s problem, with the subject of choice – the probability distribution over actions – marked red. (RandomizationBlogPost.jpg)]

Be it in Newcomb’s original problem or in this variant of Newcomb’s problem, non-standard LoE can lead to learning processes that don’t seem to match LoE’s “spirit”. When you apply standard LoE (and probably also in most cases of applying non-standard LoE), you develop a tendency to exhibit rewarded choices, and this will lead to more reward in the future. But if you adjust your choices with some intermediate variable in mind, you may get worse and worse. For instance, in either the regular or non-Newcomblike Newcomb’s problem, non-standard LoE adjusts the choice (the probability distribution over actions) so that the (physically implemented) action is more likely to be the one associated with higher reward (two-boxing), but the choice itself (high probability of two-boxing) will be one that is associated with low rewards. Thus, learning according to non-standard LoE can lead to decreasing rewards (in both Newcomblike and non-Newcomblike problems).

All in all, what I call non-standard LoE looks a bit like a hack rather than some systematic, sound version of CDT learning.

As a side note, the sensitivity to the details of how LoE is set up relative to randomization shows that the decision theory (CDT versus EDT versus something else) implied by some agent design can sometimes be very fragile. I originally thought that there would generally be some correspondence between agent designs and decision theories, such that changing the decision theory implemented by an agent usually requires large-scale changes to the agent’s architecture. But switching from standard LoE to non-standard LoE is an example where what seems like a relatively small change can significantly change the resulting behavior in Newcomb-like problems. Randomization in decision markets is another such example. (And the Gödel machine is yet another example, albeit one that seems less relevant in practice.)

Acknowledgements

I thank Lukas Gloor, Tobias Baumann and Max Daniel for advance comments. This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).

Pearl on causality

Here’s a quote by Judea Pearl (from p. 419f. of the Epilogue of the second edition of Causality) that, in light of his other writing on the topic, I found surprising when I first read it:

Let us examine how the surgery interpretation resolves Russell’s enigma concerning the clash between the directionality of causal relations and the symmetry of physical equations. The equations of physics are indeed symmetrical, but when we compare the phrases “A causes B” versus “B causes A,” we are not talking about a single set of equations. Rather, we are comparing two world models, represented by two different sets of equations: one in which the equation for A is surgically removed; the other where the equation for B is removed. Russell would probably stop us at this point and ask: “How can you talk about two world models when in fact there is only one world model, given by all the equations of physics put together?” The answer is: yes. If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction. However, scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece in – namely, the focus of investigation. The rest of the universe is then considered out or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about “outside intervention” and hence about causality and cause-effect directionality.

Futarchy implements evidential decision theory

Futarchy is a meta-algorithm for making decisions using a given set of traders. For every possible action a, the beliefs of these traders are aggregated using a prediction market for that action, which, if a is actually taken, evaluates to an amount of money that is proportional to how much utility is received. If a is not taken, the market is not evaluated, all trades are reverted, and everyone keeps their original assets. The idea is that – after some learning and after bad traders lose most of their money to competent ones – the market price for a will come to represent the expected utility of taking that action. Futarchy then takes the action whose market price is highest.

For a more detailed description, see, e.g., Hanson’s (2007) original paper on the futarchy, which also discusses potential objections. For instance, what happens in markets for actions that are very unlikely to be chosen? Note, however, that for this blog post you’ll only need to understand the basic concept and none of the minutiae of real-world implementation. The above description deliberately ignores and abstracts away from these details. One example of such a discrepancy between standard descriptions of futarchy and my above account is that, in real-world governance, there is often a “default action” (such as leaving law and government as is). To keep the number of markets small, markets are set up to evaluate proposed changes relative to that default (such as the introduction of a new law) rather than simply for all possible actions. I should also note that I only know basic economics and am not an expert on the futarchy.

Traditionally, the futarchy has been thought of as a decision-making procedure for governance of human organizations. But in principle, AIs could be built on futarchies as well. Of course, many approaches to AI (such as most Deep Learning-based ones) already have all their knowledge concentrated into a single entity and thus don’t need any procedure (such as democracy’s voting or futarchy’s markets) to aggregate the beliefs of multiple entities. However, it has also been proposed that intelligence arises from the interaction and sometimes competition of a large number of simple subagents – see, for instance, Minsky’s book The Society of Mind, Dennett’s Consciousness Explained, and the modularity of mind hypothesis. Prediction markets and futarchies would be approaches to (or models of) combining the opinions of many of these agents, though I doubt that the human mind functions like either of the two. A theoretical example of the use of prediction markets in AI is MIRI’s logical induction paper. Furthermore, markets are generally similar to evolutionary algorithms.1

So, if we implement a futarchy-like system in an AI, what decision theory would that AI come to implement? It seems that the answer is EDT. Consider Newcomb’s problem as an example. Traders that predict one-boxing to yield a million and two-boxing to yield a thousand will earn money, since the agent will, in fact, receive a million if it one-boxes and a thousand if it two-boxes. More generally, the futarchy rewards traders based on how accurately they predict what is actually going to happen if the agent makes a particular choice. This leads the traders to estimate the value of an action as proportional to the expected utility conditional on that action since conditional probabilities are the correct way to make predictions.
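A toy simulation illustrates why the market favors evidential predictions. The predictor’s accuracy and the causal trader’s base rate are hypothetical, and trader scoring is simplified to squared prediction error rather than a full market mechanism:

```python
import random

def one_box_payoff(accuracy, rng):
    """One round in which the agent one-boxes: the predictor is correct
    with the given probability, so box B is usually filled."""
    filled = rng.random() < accuracy
    return 1_000_000 if filled else 0

def mean_squared_error(prediction, accuracy=0.99, rounds=10_000, seed=0):
    """Average squared error of a fixed prediction on the one-box market."""
    rng = random.Random(seed)
    return sum((prediction - one_box_payoff(accuracy, rng)) ** 2
               for _ in range(rounds)) / rounds

# The evidential trader predicts E[payoff | one-box] = 0.99 * $1M.
edt_error = mean_squared_error(0.99 * 1_000_000)

# A causal trader treats the box content as independent of the action and
# predicts from an assumed 50% base rate of the box being filled.
cdt_error = mean_squared_error(0.50 * 1_000_000)

# The market settles on what actually happens, so the EDT trader's
# predictions score far better and would accumulate wealth.
print(edt_error < cdt_error)  # True
```

Because the markets are settled against realized outcomes conditional on the chosen action, predicting the conditional expectation minimizes error; this is the sense in which EDT-style traders are selected for.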

There are some caveats, though. For instance, prediction markets only work if the question at hand can eventually be answered. Otherwise, the market cannot be evaluated. For instance, in Newcomb’s problem, one would usually assume that your winnings are eventually given and thus shown to you. But other versions of Newcomb’s problems are conceivable. For instance, if you are a consequentialist, Omega could donate your winnings to your favorite charity in such a way that you will never be able to tell how much utility this has generated for you. Unless you simply make estimates – in which case the behavior of the markets depends primarily on what kind of expected value (regular or causal) you will use as an estimate –, you cannot set up a prediction market for this problem at all. An example of such a “hidden” Newcomb problem is cooperation via correlated decision making between distant agents.

Another unaddressed issue is whether the futarchy can deal correctly with other problems of space-time embedded intelligence, such as the BPB problem.

Notwithstanding the caveats, EDT seems to be inherent to the way the futarchy works. To get the futarchy to implement CDT, it would have to reward traders based on what the agent is causally responsible for or based on some untestable counterfactual (“what would have happened if I had two-boxed”). Whereas EDT arises naturally from the principles of the futarchy, other decision theories require modification and explicit specification.

I should mention that this post is not primarily intended as a futarchist argument for EDT. Most readers will already be familiar with the underlying pro-EDT argument, i.e., EDT making decisions based on what will actually happen if a particular decision is made. In fact, it may also be viewed as a causalist argument against the futarchy.2 Rather than either of these two, it is a small part of the answer to the “implementation problem of decision theory”, which is: if you want to create an AI that behaves in accordance with some particular decision theory, how should that AI be designed? Or, conversely, if you build an AI without explicitly implementing a specific decision theory, what kind of behavior (EDT or CDT or other) results from it?

Acknowledgment: This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).


1. There is some literature comparing the way markets function to evolution-like selection (see the first section of Blume and Easley 1992) – i.e., how irrational traders are weeded out and rational traders accrue more and more capital. I haven’t read much of that literature, but the main differences between the futarchy and evolutionary algorithms seem to be the following. First, the futarchy doesn’t specify how new traders are generated, because it classically relies on humans to do the betting (and the creation of new automated trading systems), whereas this is a central concern in evolutionary algorithms. Second, futarchies permanently leave the power in the hands of many algorithms, whereas evolutionary algorithms eventually settle on one. This also means that the individual traders in a futarchy can be permanently narrow and specialized. For instance, there could be traders who exploit a single pattern and rarely bet at all. I wonder whether it makes sense to combine evolutionary algorithms and prediction markets.

2. Futarchist governments probably wouldn’t face sufficiently many Newcomb-like situations in which the payoff can be tested for the difference to be relevant (see chapter 4 of Arif Ahmed’s Evidence, Decision and Causality).

A behaviorist approach to building phenomenological bridges

A few weeks ago, I wrote about the BPB problem and how it poses a problem for classical/non-logical decision theories. In my post, I briefly mentioned a behaviorist approach to BPB, only to immediately discard it:

One might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes into the right direction and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that Merge sort and Insertion sort have the same output for any given input, even though they have very different “subjective experiences”.

Since writing the post I became more optimistic about this approach because the counterarguments I mentioned aren’t particularly persuasive. The core of the idea is the following: Let A and B be parameterless algorithms1. We’ll say that A and B are equivalent if we believe that A outputs x iff B outputs x. In the context of BPB, your current decision is an algorithm A and we’ll say B is an instance or implementation of A/you iff A and B are equivalent. In the following sections, I will discuss this approach in more detail.

You still need interpretations

The definition only solves one part of the BPB problem: specifying equivalence between algorithms. This would solve BPB if all agents were bots (rather than parts of a bot or collections of bots) in Soares and Fallenstein’s Botworld 1.0. But in a world without any Cartesian boundaries, one still has to map parts of the environment to parameterless algorithms. This could, for instance, be a function from histories of the world onto the output set of the algorithm. For example, if one’s set of possible world models is a set of cellular automata (CA) with various different initial conditions and one’s notion of an algorithm is something operating on natural numbers, then such an interpretation i would be a function from CA histories to the set of natural numbers. Relative to i, a CA with initial conditions contains an instance of algorithm A if A outputs x <=> i(H)=x, where H is a random variable representing the history created by that CA. So, intuitively, i is reading A’s output off from a description of the world. For example, it may look at the physical signals sent by a robot’s microprocessor to a motor and convert these into the output alphabet of A. E.g., it may convert a signal that causes a robot’s wheels to spin to something like “forward”. Every interpretation i is a separate instance of A.
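As a deterministic toy version of this definition (ignoring the belief/uncertainty aspect of the equivalence criterion; all names here are made up for illustration):

```python
def is_instance(algorithm, interpretation, history):
    """Behaviorist criterion relative to an interpretation i:
    the world contains an instance of A iff A's output equals i(H)."""
    return algorithm() == interpretation(history)

# The parameterless "algorithm" whose instances we are looking for:
def decide():
    return "forward"

# An interpretation that reads the output off the world history by
# translating a motor signal into A's output alphabet.
def read_motor_signal(history):
    return "forward" if "wheels_spin" in history else "stop"

history = ["boot", "sense", "wheels_spin"]
print(is_instance(decide, read_motor_signal, history))  # True
```

In the real definition the equivalence is about one’s beliefs over outputs and histories, not a single deterministic run, but the structure – an interpretation translating world histories into A’s output alphabet – is the same.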

Joke interpretations

Since we still need interpretations, we still have the problem of “joke interpretations” (Drescher 2006, sect. 2.3; also see this Brian Tomasik essay and references therein). In particular, you could have an interpretation i that does most of the work, so that the equivalence of A and i(H) is the result of i rather than the CA doing something resembling A.

I don’t think it’s necessarily a problem that an EDT agent might optimize its action too much for the possibility of being a joke instantiation, because it gives all its copies in a world equal weight no matter which copy it believes to be. As an example, imagine that there is a possible world in which joke interpretations lead you to identify with a rock. If the rock’s “behavior” does have a significant influence on the world and the output of your algorithm correlates strongly with it, then I see no problem with taking the rock into account. At least, that is what EDT would do anyway if it has a regular copy in that world.2 If the rock has little impact on the world, EDT wouldn’t care much about the possibility of being the rock. In fact, if the world also contains a strongly correlated non-instance3 of you that faces a real decision problem, then the rock joke interpretation would merely lead you to optimize for the action of that non-copy.

If you allow all joke interpretations, then you would view yourself as being instantiated in all worlds. Thus, the view may have implications similar to those of the l-zombie view, with the joke interpretations serving as the l-zombies.4 Unless we’re trying to metaphysically justify the l-zombie view, this is not what we’re looking for. So, we may want to exclude “joke interpretations” in some way. One idea could be to limit the interpretation’s computational power (Aaronson 2011, sect. 6). My understanding is that this is what people in CA theory use to define the notion of implementing an algorithm in a CA; see, e.g., Cook (2004, sect. 2). Another idea would be to include only interpretations that you yourself (or A itself) “can easily predict or understand”. Assuming that A doesn’t already know its own output, this means that i cannot do most of the work necessary to entangle A with i(H). (For a similar point, cf. Bishop 2004, sect. “Objection 1: Hofstadter, ‘This is not science’”.) For example, if i just computed A without looking at H, then A couldn’t predict i very well, given that it cannot predict itself. If, on the other hand, i reads off the result of A from a computer screen in H, then A would be able to predict i’s behavior for every instance of H. Brian Tomasik lists a few more criteria by which to judge interpretations.

Introspective discernibility

In my original rejection of the behaviorist approach, I made an argument about two sorting algorithms which always compute the same result but have different “subjective experiences”. I assumed that a similar problem could occur when comparing two equivalent decision-making procedures with different subjective experiences. But now I actually think that the behaviorist approach nicely aligns with what one might call introspective discernibility of experiences.

Let’s say I’m an agent that has, as a component, a sorting algorithm. Now, a world model may contain an agent that is just like me except that it uses a different sorting algorithm. Does that agent count as an instantiation of me? Well, that depends on whether I can introspectively discern which sorting algorithm I use. If I can, then I could let my output depend on the content of the sorting algorithm. And if I do that, then the equivalence between me and that other agent breaks. E.g., if I decide to output an explanation of my sorting algorithm, then my output would explain, say, bubble sort, whereas the other algorithm’s output would explain, say, merge sort. If, on the other hand, I don’t have introspective access to my sorting algorithm, then the code of the sorting algorithm cannot affect my output. Thus, the behaviorist view would interpret the other agent as an instantiation of me (as long as, of course, it, too, doesn’t have introspective access to its sorting algorithm). This conforms with the intuition that which kind of sorting algorithm I use is not part of my subjective experience. I find this natural relation to introspective discernibility very appealing.
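The sorting-algorithm example can be put into a toy sketch (names and structure are mine, purely for illustration): without introspective access, the agent’s output can never depend on which sorting subroutine it contains, so two agents differing only in that subroutine are behaviorally equivalent; with access, the equivalence can break.

```python
def bubble_sort(xs):
    """One of two internal sorting subroutines an agent might use."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def agent(sorter, introspect, task):
    """A toy agent parameterized by its sorting subroutine."""
    if task == "sort":
        return sorter([3, 1, 2])
    if task == "describe" and introspect:
        # Output now depends on the content of the subroutine.
        return sorter.__name__
    return "no introspective access"

# Without introspection, the two agents are behaviorally equivalent:
print(agent(bubble_sort, False, "sort") == agent(sorted, False, "sort"))  # True
# With introspection, the equivalence breaks:
print(agent(bubble_sort, True, "describe"))  # 'bubble_sort'
print(agent(sorted, True, "describe"))       # 'sorted'
```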

That said, things are complicated by the equivalence relation being subjective. If you already know what A and B output, then they are equivalent if their output is the same — even if it is “coincidentally” so, i.e., if they perform completely unrelated computations. Of course, a decision algorithm will rarely know its own output in advance. So, this extreme case is probably rare. However, it is plausible that an algorithm’s knowledge about its own behavior excludes some conditional policies. For example, consider a case like Conitzer’s (2016, 2017), in which copies of an EU-maximizing agent face different but symmetric information. Depending on what the agent knows about its algorithm, it may view all the copies as equivalent or not. If it has relatively little self-knowledge, it could reason that if it lets its action depend on the information, the copies’ behavior would diverge. With more self-knowledge, on the other hand, it could reason that, because it is an EU maximizer and because the copies are in symmetric situations, its action will be the same no matter the information received.5

Consciousness

The BPB problem resembles the problem of consciousness: the question “does some physical system implement my algorithm?” is similar to the question “does some physical system have the conscious experience that I am having?”. For now, I don’t want to go too much into the relation between the two problems. But if we suppose that the two problems are connected, we can draw from the philosophy of mind to discuss our approach to BPB.

In particular, I expect that a common objection to the behaviorist approach will be that most instantiations in the behaviorist sense are behavioral p-zombies. That is, their output behavior is equivalent to the algorithm’s, but they compute the output in a different way, and in particular in a way that doesn’t seem to give rise to conscious (or subjective) experiences. While the behaviorist view may lead us to identify with such a p-zombie, we can be certain, so the argument goes, that we are not one, given that we have conscious experiences.

Some particular examples include:

  • Lookup table-based agents
  • Messed up causal structures, e.g. Paul Durham’s experiments with his whole brain emulation in Greg Egan’s novel Permutation City.

I personally don’t find these arguments particularly convincing because I favor Dennett’s and Brian Tomasik’s eliminativist view on consciousness. That said, it’s not clear whether eliminativism would imply anything other than relativism/anti-realism for the BPB problem (if we view BPB and philosophy of mind as sufficiently strongly related).

Acknowledgment

This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).


1. I use the word “algorithm” in a very broad sense. I don’t mean to imply Turing computability. In fact, I think any explicit formal specification of the form “f()=…” should work for the purpose of the present definition. Perhaps, even implicit specifications of the output would work. 

2. Of course, I see how someone would find this counterintuitive. However, I suspect that this is primarily because the rock example triggers absurdity heuristics and because it is hard to imagine a situation in which you believe that your decision algorithm is strongly correlated with whether, say, some rock causes an avalanche. 

3. Although the behaviorist view defines the instance-of-me property via correlation, there can still be correlated physical subsystems that are not viewed as an instance of me. In particular, if you strongly limit the set of allowed interpretations (see the next paragraph), then the potential relationship between your own and the system’s action may be too complicated to be expressed as A outputs x <=> i(H)=x.

4. I suspect that the two might differ in medical or “common cause” Newcomb-like problems like the coin flip creation problem.

5. If this is undesirable, one may try to use logical counterfactuals to find out whether B also “would have” done the same as A if A had behaved differently. However, I’m very skeptical of logical counterfactuals in general. Cf. the “Counterfactual Robustness” section in Tomasik’s post. 

Multiverse-wide cooperation via correlated decision making – Summary

This is a short summary of some of the main points from my paper on multiverse-wide superrationality. For details, caveats and justifications, see the full paper. For shorter, accessible introductions, see here.

The target audience for this post consists of:

  • people who have already thought about the topic and thus don’t want to read through the long explanations given in the paper;
  • people who have already read (some of) the full paper and just want to refresh their memory;
  • people who don’t yet know whether they should read the full paper and thus want to know whether the content is interesting or relevant to them.
If you are not in any of these groups, this post may be confusing and not very helpful for understanding the main ideas.

Main idea

  • Take the values of agents who use your decision algorithm into account to make it more likely that they do the same. I’ll use Hofstadter’s (1983) term superrationality to refer to this kind of cooperation.
  • Whereas acausal trade as it is usually understood seems to require mutual simulation and is thus hard to get right as a human, superrationality is easy to apply for humans (if they know how they can benefit agents that use the same decision algorithm).
  • Superrationality may not be relevant among agents on Earth, e.g. because on Earth we already have causal cooperation and few people use the same decision algorithm as we do. But if we think that we might live in a vast universe or multiverse (as seems to be a common view among physicists; see, e.g., Tegmark (2003)), then there are (potentially infinitely) many agents with whom we could cooperate in the above way.
  • This multiverse-wide superrationality (MSR) suggests that when deciding between policies in our part of the multiverse, we should essentially adopt a new utility function (or, more generally, a new set of preferences) which takes into account the preferences of all agents with our decision algorithm. I will call that our compromise utility function (CUF). Whatever CUF we adopt, the others will (be more likely to) adopt a structurally similar CUF. E.g., if our CUF gives more weight to our values, then the others’ CUF will also give more weight to their values. The gains from trade appear to be highest if everyone adopts the same CUF. If this is the case, multiverse-wide superrationality has strong implications for what decisions we should make.

The superrationality mechanism

  • Superrationality works without reciprocity. For example, imagine there is one agent for every integer and that for every i, agent i can benefit agent i+1 at low cost to herself. If all the agents use the same decision algorithm, then agent i should benefit agent i+1 to make it more likely that agent i-1 also cooperates in the same way. That is, agent i should give something to an agent that cannot in any way return the favor. This means that when cooperating superrationally, you don’t need to identify which agents can help you.
  • What should the new criterion for making decisions, our compromise utility function, look like?
    • Harsanyi’s (1955) aggregation theorem suggests that it should be a weighted sum of the utility functions of all the participating agents.
    • To maximize gains from trade, everyone should adopt the same weights.
    • Variance-voting (Cotton-Barratt 2013; MacAskill 2014, ch. 3) is a promising candidate.
    • If some of the values require coordination (e.g., if one of the agents wants there to be at least one proof of the Riemann hypothesis in the multiverse), then things get more complicated.
  • “Updatelessness” has some implications. E.g., it means that one should, under certain conditions, accept a superrational compromise that is bad for oneself.
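One way to read the variance-voting proposal mentioned above is that each participant’s utility function is rescaled to mean 0 and variance 1 over the possible outcomes before being summed, which makes the compromise invariant to the scale of anyone’s raw utilities. The sketch below is a hedged illustration of that reading, not the cited authors’ implementation; all names are mine.

```python
import statistics

def normalize(utilities):
    """Rescale a utility function (utilities over outcomes) to
    mean 0 and (population) variance 1."""
    mu = statistics.mean(utilities)
    sigma = statistics.pstdev(utilities)
    return [(u - mu) / sigma for u in utilities]

def compromise_utility(utility_functions):
    """Equal-weight sum of variance-normalized utility functions,
    giving the compromise utility of each outcome."""
    normalized = [normalize(u) for u in utility_functions]
    return [sum(us) for us in zip(*normalized)]

# Two agents over three outcomes; agent 2's raw utilities are on a much
# larger scale, but normalization makes the scales irrelevant.
agent1 = [0, 1, 2]
agent2 = [2000, 0, 1000]
print(compromise_utility([agent1, agent2]))
```

Note that multiplying any agent’s raw utilities by a positive constant leaves the compromise unchanged, which is the point of normalizing before aggregating.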

The values of the other agents

  • To maximize the compromise utility function, it is very useful (though not strictly necessary, see section “Interventions”) to know what other agents with similar decision algorithms care about.
  • The orthogonality thesis (Bostrom 2012) implies that the values of the other agents are probably different from ours, which means that taking them into account makes a difference.
  • Not all aspects of the values of agents with our decision algorithm are relevant:
    • Only the consequentialist parts of their values matter (though something like minimizing the number of rule violations committed by all agents is a perfectly fine consequentialist value system).
    • Only values that apply to our part of the multiverse are relevant. (Some agents may care exclusively or primarily about their part of the multiverse.)
    • Humans, at least, care differently about faraway things than about near ones. Because we are far away from most agents with our decision algorithm, we only need to think about what they care about in distant things.
    • Superrationalists may care more about their idealized values, so we may try to idealize their values. However, we should be very careful to idealize only in ways consistent with their meta-preferences. (Otherwise, their values may be mis-idealized.)
  • There are some ways to learn about what other superrational agents care about.
    • The empirical approach: We can survey the relevant aspects of human values. The values of humans who take superrationality seriously are particularly relevant.
      • An example of relevant research is Bain et al.’s (2013) study on what people care about in future societies. They found that people put most weight on how warm, caring and benevolent members of these societies are. If we believe that construal level theory (see Trope and Liberman (2010) for an excellent summary) is roughly correct, then such results should carry over to evaluations of other psychologically distant societies. Although these results have been replicated a few times (Bain et al. 2012; Park et al. 2015; Judge and Wilson 2015; Bain et al. 2016), they are tentative and merely exemplify relevant research in this domain.
      • Another interesting data point is the values of the EA/LW/SSC/rationalist community, to my knowledge the only group of people who plausibly act on superrationality.
    • The theoretical approach: We could think about the processes that affect the distribution of values in the multiverse.
      • Biological evolution
      • Cultural evolution (see, e.g., Henrich 2015)
      • Late great filters
        • For example, if a lot of civilizations self-destruct with weapons of mass destruction, then the compromise utility function may contain a lot more peaceful values than an analysis based on biological and cultural evolution suggests.
      • The transition to whole brain emulations (Hanson 2016)
      • The transition to de novo AI (Bostrom 2014)

Interventions

  • There are some general ways in which we can effectively increase our compromise utility function without knowing its exact content.
    • Many meta-activities don’t require any such knowledge as long as we think that it can be acquired in the future. E.g., we could convince other people of MSR, do research on MSR, etc.
    • Sometimes, very small bits of knowledge suffice to identify promising interventions. For example, if we believe that the consequentialist parts of human values are a better approximation of the consequentialist parts of other agents’ values than non-consequentialist human values are, then we should make people more consequentialist (without necessarily promoting any particular consequentialist morality).
    • Another relevant point is that no matter how well we know the content of the compromise utility function, the argument in favor of maximizing it in our part of the universe remains just as valid. Thus, even if we know very little about its content, we should still do our best to maximize it. (That said, we will often be better at maximizing the values of humans, in great part because we know and understand these values better.)
  • Meta-activities
    • Further research
    • Promoting multiverse-wide superrationality
  • Probably ensuring that superintelligent AIs have a decision theory that reasons correctly about superrationality is ultimately the most important intervention (although promoting multiverse-wide superrationality among humans can be instrumental for doing so).
  • There are some interventions in the moral advocacy space which would bring people’s preferences about our universe more in line with those of other superrational agents.
    • Promoting consequentialism
      • This is also good because consequentialism enables cooperation with the agents in other parts of the multiverse.
    • Promoting pluralism (e.g., convincing utilitarians to also take things other than welfare into account)
    • Promoting concern for benevolence and warmth (or whatever other value is much more strongly represented in high-construal than in low-construal preferences)
    • Facilitating moral progress (i.e., presenting people with the arguments for both sides). Probably valuing preference idealization is more common than disvaluing it.
    • Promoting multiverse-wide preference utilitarianism
  • Promoting causal cooperation