The average utilitarian’s solipsism wager

The following prudential argument is relatively common in my circles: We probably live in a simulation, but if we don’t, our actions matter much more. Thus, expected-value calculations are dominated by the utility under the assumption that we (or some copies of us) are in the real world. Consequently, the simulation argument affects our prioritization only slightly: we should still mostly act under the assumption that we are not in a simulation.

A commonly cited analogy is due to Michael Vassar: “If you think you are Napoleon, and [almost] everyone that thinks this way is in a mental institution, you should still act like Napoleon, because if you are, your actions matter a lot.” An everyday application of this kind of argument is the following: Probably, you will not be in an accident today, but if you are, the consequences for your life are enormous. So you had better fasten your seat belt.

Note how these arguments do not affect the probabilities we assign to some event or hypothesis. They are only about the event’s (or hypothesis’) prudential weight — the extent to which we tailor our actions to the case in which the event occurs (or the hypothesis is true).

For total utilitarians (and many other consequentialist value systems), similar arguments apply to most theories postulating a large universe or multiverse. To the extent that it makes a difference for our actions, we should tailor them to the assumption that we live in a large multiverse with many copies of us because under this assumption we can affect the lives of many more beings.

For average utilitarians, the exact opposite applies. Even if they have many copies, they will have an impact on a much smaller fraction of beings if they live in a large universe or multiverse. Thus, they should usually base their actions on the assumption of a small universe, such as a universe in which Earth is the only inhabited planet. This may already have some implications, e.g. via the simulation argument or the Fermi paradox. If they also take the average over time — I do not know whether this is the default for average utilitarianism — they would also base their actions on the assumption that there are just a few past and future agents. So, average utilitarians are subject to a much stronger Doomsday argument.
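
As a rough sketch of this scaling (the numbers below are made up for illustration; only the direction of the comparison matters):

```python
# Illustrative comparison: how the value of an action scales with world size
# under total vs. average utilitarianism. All numbers are made up.

def total_util_impact(k_copies, delta=1.0):
    # Total view: each of my k copies adds delta welfare, and the sum is what counts.
    return k_copies * delta

def average_util_impact(k_copies, n_beings, delta=1.0):
    # Average view: the same help only moves the average by delta / n_beings.
    return k_copies * delta / n_beings

# A small world (Earth only) vs. a huge multiverse containing many copies of me.
print(average_util_impact(k_copies=1, n_beings=7.5e9))    # ~1.3e-10
print(average_util_impact(k_copies=1e6, n_beings=1e25))   # ~1e-19: far smaller
print(total_util_impact(1) < total_util_impact(1e6))      # True: totalists favor big worlds
```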

Perhaps such prudential arguments have even stronger implications, though. There is some chance that metaphysical solipsism is true: the view that only my (or your) own mind exists and that everything else is just an illusion. If solipsism were true, our impact on average welfare (or average preference fulfillment) would be enormous: perhaps 7.5 billion times bigger than under the assumption that Earth and its roughly 7.5 billion inhabitants exist, and about 100 billion times bigger if you also count humans who have lived in the past. Solipsism seems to deserve a probability larger than one in 7.5 (or 100) billion. (In fact, I think solipsism is likely enough for this to qualify as a non-Pascalian argument.) So perhaps average utilitarians should maximize primarily for their own welfare?
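
To make the threshold explicit, here is a minimal back-of-the-envelope check (the 7.5 billion and 100 billion figures are the ones above; the credence in solipsism is a free parameter):

```python
# When does the solipsism term dominate an average utilitarian's expected impact?
# Helping one being by one unit moves the average welfare by 1/N if N minds exist.

n_earth   = 7.5e9   # humans alive if the ordinary world exists
n_history = 1e11    # roughly, the humans who have ever lived

def solipsism_dominates(p_solipsism, n_other_minds):
    # Compare p * (impact if only I exist) with (1 - p) * (impact if N minds exist).
    return p_solipsism * 1.0 > (1 - p_solipsism) / n_other_minds

print(solipsism_dominates(1e-9, n_earth))     # True:  1e-9  > ~1.3e-10
print(solipsism_dominates(1e-12, n_history))  # False: 1e-12 < ~1e-11
```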

Acknowledgements

The idea of this post is partly due to Lukas Gloor. This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).

3 thoughts on “The average utilitarian’s solipsism wager”

  1. Johannes Treutlein

    >Note how these arguments do not affect the probabilities we assign to some event or hypothesis. They are only about the event’s (or hypothesis’) prudential weight — the extent to which we tailor our actions to the case in which the event occurs (or the hypothesis is true).

    I’d be curious whether there is a fundamental difference between “probability” and “prudential weight”.

    >So, average utilitarians are subject to a much stronger Doomsday argument.

    Note that if you use Anthropic Decision Theory, then being an average utilitarian is exactly what causes you to act according to the SSA probabilities that imply the Doomsday argument in the first place. Using SSA and being an average utilitarian on top of that seems to be somehow double-counting the averagism.

    There is a possible difference regarding reference classes, though: In average utilitarianism, the reference class seems to be all other conscious beings. In anthropics, the reference class could be much smaller – e.g., something like “all physical structures that instantiate the same algorithm as my brain and have the same information”. So in that sense, average utilitarianism has an even stronger tendency to favor smaller worlds than SSA.

    1. >I’d be curious whether there is a fundamental difference between “probability” and “prudential weight”.

      I might write about this more, but the prudential weight of some hypothesis X is something like the probability of X times our potential impact if X is true.
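
      In symbols, roughly (writing W(X) for the prudential weight of X, P(X) for the credence in X, and I(X) for the potential impact if X is true; the notation is mine):

      ```latex
      W(X) \;\approx\; P(X) \cdot I(X)
      ```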

      >Note that if you use Anthropic Decision Theory, then being an average utilitarian is exactly what causes you to act according to the SSA probabilities that imply the Doomsday argument in the first place. Using SSA and being an average utilitarian on top of that seems to be somehow double-counting the averagism.

      I think these are two different senses of “average utilitarianism”. Also, I think the “average utilitarianism” in anthropic decision theory is only meaningful for egoists. If you have altruistic goals, you don’t need to choose between average and total utilitarianism with respect to your copies.

      Also, the arguments are of different kinds: the SSA doomsday argument is about the probability of doomsday, whereas the average utilitarian argument for doomsday is about the other component of prudential weight, the potential impact.

  2. Great post!

    > Solipsism seems to deserve a probability larger than one in 6 (or 100) billion.

    I’m unsure about this. 🙂 The shortest computer program that creates a solipsist world containing just you is not extremely short. Perhaps such a program would specify the laws of physics as we know them, run the ordinary universe forward, and then point to the person in that universe who will be the only mind to actually “exist” (whatever that means). Specifying you out of a pool of 100 billion people may require ~log_2(100 billion) bits, in which case the prior for solipsism of _you_ (rather than solipsism of _some human or other_) gets multiplied by a penalty of around 1/(100 billion).
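
    For concreteness, the arithmetic behind that penalty (just spelling out the numbers in the comment):

    ```python
    import math

    n_people = 100e9                          # ~100 billion humans who have ever lived
    bits_to_pick_you = math.log2(n_people)    # ~36.5 bits to single out one person
    prior_penalty = 2 ** (-bits_to_pick_you)  # = 1 / n_people, i.e. ~1e-11

    print(round(bits_to_pick_you, 1), prior_penalty)
    ```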

    Of course, we might have model uncertainty about this method of assigning priors.
