The Stag Hunt against a similar opponent

[I assume that the reader is familiar with Newcomb’s problem and the Prisoner’s Dilemma against a similar opponent, and ideally also with the equilibrium selection problem in game theory.]

The trust dilemma (a.k.a. Stag Hunt) is a game with a payoff matrix kind of like the following:

       S        H
S    4, 4    -1, 3
H    3, -1    2, 2

Its defining characteristic is the following: the Pareto-dominant outcome (i.e., the outcome that is best for both players), (S,S), is a Nash equilibrium. However, (H,H) is also a Nash equilibrium. Moreover, if you’re sufficiently unsure what your opponent is going to do, then H is the best response; with the payoffs above, S is the best response only if you assign probability greater than 3/4 to your opponent playing S. If two agents learn to play this game and start out playing (more or less) at random, then they are more likely to converge to (H,H) than to (S,S). Overall, we would like the two agents to play (S,S), but I don’t think we can assume this to happen by default.
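
As a rough illustration, here is a minimal Python sketch, using the example payoffs above and fictitious play as one simple learning rule (an illustrative choice, not the only one): it checks that S is a best response only if you assign probability greater than 3/4 to your opponent playing S, and it simulates two learners who start out playing at random; most runs end up at (H,H).

    # A rough sketch, assuming the example payoffs above and a simple
    # fictitious-play learning rule (an illustrative choice, not the only one).
    import random

    # Row player's payoff for (own action, opponent's action).
    PAYOFF = {
        ("S", "S"): 4, ("S", "H"): -1,
        ("H", "S"): 3, ("H", "H"): 2,
    }

    def best_response(p_s):
        """Best response given the believed probability p_s that the opponent plays S."""
        ev_s = p_s * PAYOFF[("S", "S")] + (1 - p_s) * PAYOFF[("S", "H")]
        ev_h = p_s * PAYOFF[("H", "S")] + (1 - p_s) * PAYOFF[("H", "H")]
        return "S" if ev_s > ev_h else "H"

    # S pays 4p - (1 - p), H pays 3p + 2(1 - p); they are equal at p = 3/4.
    assert best_response(0.74) == "H" and best_response(0.76) == "S"

    def run(random_rounds=5, learning_rounds=50):
        """A few rounds of uniformly random play, then each player best-responds
        to the frequency of S in the other player's past play."""
        history = [[], []]
        for _ in range(random_rounds):
            for h in history:
                h.append(random.choice(["S", "H"]))
        for _ in range(learning_rounds):
            a0 = best_response(history[1].count("S") / len(history[1]))
            a1 = best_response(history[0].count("S") / len(history[0]))
            history[0].append(a0)
            history[1].append(a1)
        return history[0][-1], history[1][-1]

    random.seed(0)
    outcomes = [run() for _ in range(10_000)]
    print("fraction of runs ending at (H,H):",
          sum(o == ("H", "H") for o in outcomes) / len(outcomes))

Of course, fictitious play is just one learning rule; the point is only that, starting from roughly uniform play, best responses pull both players towards H.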

Now what if you played the trust dilemma against a similar opponent (specifically, one that is similar w.r.t. how they play games like the trust dilemma)? Clearly, if you play against an exact copy, then by the reasoning behind cooperating in the Prisoner’s Dilemma against a copy, you should play S. More generally, it seems that similarity between you and your opponent should push you towards trusting that if you play S, the opponent will also play S. The more similar you and your opponent are, the more you might reason that the decision is mostly between (S,S) and (H,H), and the less relevant (S,H) and (H,S) become.

What if you played against an opponent who knows you very well and who has time to predict how you will choose in the trust dilemma? Clearly, if you play against an opponent who can perfectly predict you (e.g., because you are an AI system and they have a copy of your source code), then by the reasoning behind one-boxing in Newcomb’s problem, you should play S. More generally, the more you trust your opponent’s ability to predict what you do, the more you should trust that if you play S, the opponent will also play S.

Here’s what I find intriguing about these scenarios: in them, one-boxers might systematically arrive at a different (more favorable) conclusion than two-boxers. However, this conclusion is still compatible with two-boxing, or with blindly applying Nash equilibrium. In the trust dilemma, one-boxing-type reasoning merely affects how we resolve the equilibrium selection problem, which the orthodox theories generally leave open. This is in contrast to the traditional examples (the Prisoner’s Dilemma, Newcomb’s problem), in which the two ways of reasoning are in conflict. So one-boxing can have implications even within an exclusively Nash equilibrium-based picture of strategic interactions.

2 thoughts on “The Stag Hunt against a similar opponent”

  1. Daniel Kokotajlo

    It sounds like you think one-boxing is compatible with an exclusively Nash equilibrium-based picture of strategic interactions. Why? Can you elaborate?


    1. Good question! I don’t think one-boxing is compatible with an _exclusively_ Nash equilibrium-based picture. (Obviously one-boxing implies cooperation with exact copies in the PD. I certainly hope that it generalizes as much as possible to not-so-exact copies!) But I find it at least plausible that _there are_ interactions between (some kinds of) one-boxers in which Nash equilibrium is predictive.

