Decision Theory Research Overview

This page is an overview of the decision theory research we (Johannes Treutlein and Caspar Oesterheld) do at FRI. In particular, we discuss the decision theory of Newcomb-like scenarios: whether evidential decision theory (EDT) is correct and, if not, whether and how it can be fixed.

A short summary of our views

Overall, we think EDT gets almost everything right:

  • “Reference class” Newcomb-like problems: One should cooperate in a prisoner’s dilemma against a copy and one-box in Newcomb’s problem.
  • “Common cause” or medical Newcomb-like problems: We think EDT gets medical Newcomb problems, such as the smoking lesion, right. We believe that most alleged medical Newcomb problems do not actually lead CDT and EDT to disagree, because of the tickle defense (Ahmed 2014, ch. 4). Some medical Newcomb-like problems may evade the tickle defense, so that CDT and EDT genuinely disagree; candidates include Arif Ahmed’s “Betting on the Past”, the two-boxing gene, and the coin flip creation problem. However, in those problems, (the equivalent of) one-boxing can be defended for the same reasons it can be defended in reference class problems.
  • Updatelessness: Some have argued that evidential decision theory gets counterfactual mugging (or similar problems such as Soares and Levinstein’s (2017, sect. 2) XOR blackmail) wrong. We, on the other hand, think that EDT’s behavior makes sense. We don’t think that reflective inconsistency is a conclusive argument against EDT, given that reflective consistency is at odds with other plausible principles.
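
The one-boxing verdict above can be made concrete with a quick expected-value calculation. The figures below (a $1,000,000 opaque box, a $1,000 transparent box, 99% predictor accuracy) are illustrative assumptions, not numbers from the text — a minimal sketch of why EDT one-boxes in Newcomb’s problem while CDT two-boxes:

```python
# Hedged sketch: expected payoffs in Newcomb's problem under EDT vs. CDT.
# Payoffs and predictor accuracy are illustrative assumptions.

BIG, SMALL = 1_000_000, 1_000
ACCURACY = 0.99  # P(prediction matches actual choice)

# EDT conditions on the action: choosing one box is strong evidence
# that the predictor filled the opaque box.
edt_one_box = ACCURACY * BIG
edt_two_box = (1 - ACCURACY) * BIG + SMALL

# CDT holds the box contents fixed. For any prior p = P(box filled),
# two-boxing beats one-boxing by exactly SMALL.
p = 0.5  # arbitrary prior over box contents
cdt_one_box = p * BIG
cdt_two_box = p * BIG + SMALL

assert edt_one_box > edt_two_box   # EDT one-boxes
assert cdt_two_box > cdt_one_box   # CDT two-boxes
```

The gap between the two CDT values is the transparent box’s $1,000 regardless of the prior p — that is the dominance argument for two-boxing; conditioning on the action is what flips EDT’s verdict.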

However, there are also some challenges to EDT:

  • Naturalized induction: EDT (like other non-logical [link] decision theories) makes its decisions based on some probability distribution P. In principle, Bayesian updating solves the problem of assigning these probabilities in the non-naturalized [link]/Cartesian case [ref to first chapters of Jaynes]. However, two new problems arise in the naturalized case.
    • Building phenomenological bridges: Which collections of objects in the world model are instances of myself? The problem of translating between mental experiences and physics has been discussed at length in a few different fields and a few solutions have been proposed (see, e.g., the behaviorist approach to the BPB problem; this Brian Tomasik post; cf. the literatures surrounding “Mary the color scientist”, the computational theory of mind, computation in cellular automata, etc.). Unfortunately, it seems that statements about whether a particular physical process implements a particular algorithm can’t be objectively true or false. As an alternative, one could switch to a logical version of EDT. Unfortunately, switching from EDT to logical EDT has many side effects.
    • Anthropics: Anthropic decision theory (ADT) (Armstrong 2017), i.e. the way updateless decision theories automatically solve anthropics, seems satisfactory. In Sleeping Beauty-based anthropic decision problems, ADT’s behavior can be imitated by EDT+SSA (or CDT+SIA) (Schwarz 2015; Briggs 2010). However, without a precommitment to updatelessness, there seems to be no way to make EDT+SSA agree with ADT in an example by Conitzer (2015). Whereas it is apparent why one would want to be reflectively inconsistent in the counterfactual mugging, this is much less clear for these anthropic problems. Hence, we view Conitzer’s example as an open problem for EDT. A possible solution could be to assign reference classes according to the behaviorist approach to the BPB problem.
  • Perhaps some version(s) of the 5 and 10 problem
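
The claim that EDT+SSA imitates ADT’s betting behavior in Sleeping Beauty can be sketched numerically. The stakes below are illustrative assumptions; the key move is that EDT treats the choices made at the two tails awakenings as perfectly correlated, so accepting a per-awakening bet once means collecting the tails stake twice:

```python
# Hedged sketch of the Schwarz/Briggs point: EDT with SSA (halfer)
# credences reproduces ADT-style betting in Sleeping Beauty.
# Stakes are illustrative assumptions.

P_HEADS = 0.5          # SSA credence in heads upon waking
AWAKENINGS_TAILS = 2   # the bet is offered (and taken) at every awakening

def edt_ssa_value(stake_heads, stake_tails):
    # EDT treats all of the agent's awakenings as making the same
    # choice, so on tails the per-awakening stake is collected twice.
    return P_HEADS * stake_heads + (1 - P_HEADS) * AWAKENINGS_TAILS * stake_tails

# A bet paying -$15 on heads and +$10 per awakening on tails:
# a naive halfer computing 0.5 * (-15) + 0.5 * 10 = -2.5 would reject it,
# but EDT+SSA, counting both awakenings, accepts -- the ADT verdict.
assert edt_ssa_value(-15, 10) > 0
```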

Our work on decision theory

Overview pages

Other valuable resources on decision theory

Here are some pieces on decision theory written by others that we find particularly valuable, although we may not agree with them entirely.