A survey of polls on Newcomb’s problem

One classic story about Newcomb’s problem is that, at least initially, people one-box and two-box in roughly equal numbers (and that everyone is confident in their position). To find out whether this is true, and what exact percentage of people would one-box, I conducted a meta-survey of existing polls of people’s opinions on Newcomb’s problem.

The surveys I found are listed in the following table:

I deliberately included even surveys with tiny sample sizes to test whether the results of the larger surveys are robust or whether they depend on the specifics of how the data were obtained. For example, the description of Newcomb’s problem in the Guardian survey contained a paragraph on why one should one-box (written by Arif Ahmed, author of Evidence, Decision and Causality) and a paragraph on why one should two-box (written by David Edmonds). Perhaps the persuasiveness of these arguments influenced the result of the survey?

Looking at all the polls together, the picture seems at least somewhat consistent. The two largest surveys of non-professionals give one-boxing almost the same small edge. The other results diverge more, but some of the divergence is easily explained. For example, decision theory is a commonly discussed topic on LessWrong, and some of the community’s opinion leaders (including founder Eliezer Yudkowsky) endorse one-boxing. It is therefore not surprising that opinions on LessWrong have converged more than elsewhere. Given their low sample sizes, the other, smaller surveys of non-professionals also seem reasonably consistent with the impression that one-boxing is only slightly more common than two-boxing.

The surveys also show that, as has often been remarked, there is a significant difference of opinion between the general population / “amateur philosophers” and professional philosophers / decision theorists (though the consensus among decision theorists is not nearly as strong as the one on LessWrong).

Acknowledgment: This work was funded by the Foundational Research Institute (now the Center on Long-Term Risk).

4 thoughts on “A survey of polls on Newcomb’s problem”

  1. One possible issue is that if the one-boxers are correct, you can earn an extra $1 million (or, more precisely, $999,000) by making the better choice, whereas if the two-boxers are correct, you can get only an extra $1,000 by making the better choice. If you’re uncertain about decision theory, then expected-value logic suggests one-boxing simply because the upside is so much greater.

    It’d be interesting to see an alternate poll where the $1 million box is replaced by one with $1001. Then uncertainty about decision theory would seem to favor two-boxing.


    1. Nice point! Carl Shulman makes the same argument here: http://lesswrong.com/r/discussion/lw/hpy/normative_uncertainty_in_newcombs_problem/ In one comment, he also asked LWers about the lowest payoff ratio at which they would one-box: http://lesswrong.com/r/discussion/lw/hpy/normative_uncertainty_in_newcombs_problem/969i This is the only survey with alternative ratios that I am aware of.

      From my own experience (and many decision theorists have reported similar anecdotal evidence), it seems that most people are so confident in their view that they don’t take decision-theoretic uncertainty into account. (Or maybe some do take it into account but already accept Jonny’s EDT wager. 😛 ) If I didn’t know that there were so many two-boxers, I would probably also be a lot more confident in one-boxing. (That said, it’s not clear whether decision theory is the kind of thing you can or should be uncertain about. It’s not as though one- and two-boxers disagree on some empirical matter. Perhaps it is more like morality or Bayesian priors… Probably most people do have meta-decision-theoretic intuitions, but that just passes the buck.)
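
      To make the expected-value wager from the comment above concrete, here is a minimal sketch (the payoffs are the standard $1,000,000 and $1,000; the function name and the credence values are just illustrative):

      ```python
      # Expected dollar gain of one-boxing over two-boxing, given a credence p
      # that the one-boxers are right about Newcomb's problem.
      def expected_gain_from_one_boxing(p, opaque_prize=1_000_000, transparent_prize=1_000):
          # If the one-boxers are right, one-boxing wins the opaque prize but
          # forgoes the transparent one (a net gain of $999,000 in the standard
          # setup); if the two-boxers are right, one-boxing just forgoes the
          # transparent prize ($1,000).
          gain_if_one_boxers_right = opaque_prize - transparent_prize
          loss_if_two_boxers_right = transparent_prize
          return p * gain_if_one_boxers_right - (1 - p) * loss_if_two_boxers_right

      # Even a 1% credence in one-boxing leaves one-boxing ahead in expectation:
      print(expected_gain_from_one_boxing(0.01))  # 9000.0
      # The break-even credence is transparent_prize / opaque_prize = 0.001.
      # In the $1,001 variant suggested above, it rises to 1,000/1,001 (about 0.999),
      # so uncertainty then favors two-boxing:
      print(expected_gain_from_one_boxing(0.5, opaque_prize=1_001))  # -499.5
      ```

      The break-even credence is just the ratio of the two payoffs, which is what makes the question about the lowest payoff ratio at which people would one-box a natural follow-up.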


  2. FeepingCreature

    If Omega’s decision lay in the causal future of your decision, you’d one-box almost no matter what the polls said, right? It’d be plain obvious. You’d just go “well, looks like lots and lots of people are being stupid, not like that hasn’t happened before, give me my free money.”

    The TDT view, which I believe with near certainty, is that Newcomb’s problem is equivalent to a situation where Omega’s decision lies in the causal future of your decision, because your decision is actually happening inside Omega’s prediction of you. The reason I am not epistemically uncertain is that once you realize that two minds running the same decision algorithm (obviously) cannot come to different answers, the problem appears trivial.

    Once you realize that prediction requires something equivalent to your decision theory, the problem is obvious. It’s a “trick question”, and people don’t generally treat trick questions with epistemic doubt once they know the trick.

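To illustrate the point in FeepingCreature’s comment, here is a toy sketch in which Omega predicts by literally running the agent’s decision function. The perfectly reliable predictor and all function names are assumptions of the sketch, not claims about the original problem:

```python
# Toy Newcomb setup: Omega "predicts" the agent by running the agent's own
# decision function, so the prediction and the actual choice cannot diverge.

def omega_fills_opaque_box(decide):
    # Omega simulates the agent and fills the opaque box
    # only if the simulated agent one-boxes.
    return decide() == "one-box"

def play_newcomb(decide):
    opaque = 1_000_000 if omega_fills_opaque_box(decide) else 0
    transparent = 1_000
    # The agent now chooses; the box contents are already fixed.
    return opaque if decide() == "one-box" else opaque + transparent

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(play_newcomb(one_boxer))  # 1000000
print(play_newcomb(two_boxer))  # 1000
```

Because the same decision function runs both inside the prediction and at the moment of choice, the two runs cannot come apart; this is the sense in which the comment calls the problem a “trick question”.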

  3. Pingback: The lack of performance metrics for CDT versus EDT versus … – The Universe from an Intentional Stance
