Cheating at thought experiments

Thought experiments are important throughout the sciences. For example, it appears that a thought experiment played an important role in Einstein’s discovery of special relativity. In philosophy, and in ethics and the philosophy of mind in particular, thought experiments are especially important, and there are many famous ones. Unfortunately, many thought experiments might just be ways of tricking people. Like their empirical counterparts, they are prone to cheating if they lack rigor and the reader does not try to reproduce (or falsify) the results.

In his book Consciousness Explained, Daniel Dennett gives (at least) three examples of cheating in thought experiments. The first one is from chapter 9.5, and Dennett’s argument runs roughly as follows. After having described his top-level theory of the human mind, he addresses the question “Couldn’t something unconscious – a zombie, for instance – have [all this machinery]?” This objection to functionalism, computationalism, and the like is often accompanied by an argument of the following form: “That’s all very well, all those functional details about how the brain does this and that, but I can imagine all that happening in an entity without the occurrence of any real consciousness.” To this Dennett replies: “Oh, can you? How do you know? How do you know you’ve imagined ‘all that’ in sufficient detail, and with sufficient attention to all the implications?”

With regard to another thought experiment, Mary, the color scientist, Dennett elaborates (ch. 12.5): “[Most people] are simply not following directions! The reason no one follows directions is because what they ask you to imagine is so preposterously immense, you can’t even try. The crucial premise […] is not readily imaginable, so no one bothers.”

In my opinion, this summarizes the problem with many thought experiments (specifically intuition pumps): readers do not (most often because they cannot) follow instructions and are thus unable to mentally set up the premises of the thought experiment. They then try to reach a conclusion anyway, based on a crude approximation of the situation.

Another example is Searle’s Chinese Room, which Dennett covers in chapter 14.1 of his book. When Searle asks people to imagine a person who does not speak Chinese but answers queries in Chinese using a large set of rules and a library, they probably think of someone looking up definitions in a lexicon or something similar. At least, this is feasible, and it also resembles the way people routinely pretend to have knowledge that they don’t. What people don’t imagine are the thousands of steps it would take the person to compose even short replies (and choosing Chinese as the language does not help most English-speaking readers imagine the complexity of the procedure of composing a message). If they did simulate the entire behavior of the whole system (the Chinese room with the person in it), they might conclude that it has an understanding of Chinese after all. Thus, this thought experiment is not suitable for debunking the idea that consciousness can arise from following rules.

Going beyond what Dennett discusses in his book, I’d like to consider further thought experiments that fit the pattern. For example, people often argue that hedonistic utilitarianism demands that the universe be tiled with some (possibly very simple) object that is super-happy. Or at least that individual humans should be replaced this way. In an interview, Eliezer Yudkowsky said:

[A utilitarian superintelligence] goes out to the stars, takes apart the stars for raw materials, and it builds whole civilizations full of minds experiencing the most exciting thing ever, over and over and over and over and over again.

The whole universe is just tiled with that, and that single moment is something that we would find this very worthwhile and exciting to happen once. But it lost the single aspect of value that we would name boredom […].

And so you lose a single dimension, and the [worthwhileness of the universe] – from our perspective – drops off very rapidly.

This thought experiment is meant to show that pure pleasure alone is not a desirable result. Instead, many people endorse complexity of value – which is definitely true from a descriptive point of view – and describe in detail the many good things that utopia should contain. While I have my own doubts about the pleasure-filled universe, my suspicion is that one reason why people don’t like it is that they don’t consider it for very long and don’t actually imagine all the happiness. “Sure, some happiness is nice, but happiness gets less interesting in large amounts.” The more complex scenario, on the other hand, can actually be imagined more easily, and because it contains different kinds of good stuff, one does not have to base one’s judgment entirely on some number being very large.

Closing the discussion of this example, I would like to remark that I am, at the time of writing this, not a convinced hedonistic utilitarian. (Rather, I am inclined towards a more preferentist view, which, I feel, is in line with endorsing complexity of value and value extrapolation, though I am skeptical of preference idealization as proposed in Yudkowsky’s Coherent Extrapolated Volition. Furthermore, I care more about suffering than about happiness, but that’s a different story…) I just think that a universe filled with eternal bliss cannot be used as a very convincing argument against hedonistic utilitarianism. Similar arguments may apply to deciding whether, currently, the bad things on earth are outweighed by the good things.

The way out of this problem of unimaginable thought experiments is to confine ourselves to thought experiments that are within our cognitive reach. Results may then, where possible, be extrapolated to more complex situations. For example, I find it more fruitful to ask whether I care only about pleasure in other individuals, or also about whether they are doing something that is very boring from the outside.

8 thoughts on “Cheating at thought experiments”

  1. I think when people say a universe filled with uniform pleasure would not be valuable, they’re not saying it wouldn’t be valuable to the agents in the universe but that it wouldn’t be valuable to the observer making the choice, since such a universe wouldn’t be beautiful. In other words, people seem to be saying that aesthetic appreciation trumps purer altruism.

    I think a person’s assessment of the net balance of happiness vs. suffering in the world can be strongly influenced by that person’s current hedonic state. When I’m in a bad mood, the world seems much worse than when not.

    Even if you only care about humans, the world contains extreme horrors — e.g., as I write this, there are people being tortured.


    1. Thanks for your comments, Brian!

      >I think when people say a universe filled with uniform pleasure would not be valuable, they’re not saying it wouldn’t be valuable to the agents in the universe but that it wouldn’t be valuable to the observer making the choice, since such a universe wouldn’t be beautiful. In other words, people seem to be saying that aesthetic appreciation trumps purer altruism.

      Yeah, that’s probably true, though I am not sure they would admit to “aesthetic appreciation” trumping “purer altruism”.

      Nevertheless, I think some people misassess the trade-off of “interesting complexity”/beauty versus happiness as implied by their own intuitions about smaller examples. They don’t seem to reject HU based on imaginable examples in which, e.g., a creature can either be boring/ugly and happy or interestingly complex/beautiful and unhappy. Instead, they construct much larger examples, and then they don’t “shut up and multiply” but try to treat them directly, without sufficiently imagining the two options.

      So, I am claiming that their extrapolated intuition about imaginable examples might actually prefer the universe filled with bliss over the interestingly beautiful one.

      By the way, there’s a post on LessWrong by Wei Dai comparing scope insensitivity and boredom, which is somewhat related to the discussion, but which I didn’t find particularly enlightening.

      Of course, I fully agree with the other two remarks of yours.


  2. Two more examples of thought experiments which are usually not set up properly in a reader’s mind:

    – Nozick’s utility monster. (The linked post may also be of interest, as it makes a similar argument.)

    – Some people argue that utilitarianism sometimes does the right things but for the wrong reasons. Such arguments often go like this: “Utilitarianism advises against X on the grounds that X would reduce overall happiness. Imagine, however, a world in which X increases overall happiness. Surely, X would still be wrong! Utilitarianism can’t be reconciled with this fact. Therefore, utilitarianism fails as a moral theory.” Examples of X are torture, murder, rape, etc. In some such cases, people do imagine the right kind of possible world. For example, it’s easy to think of situations in which killing someone is believed (on a gut level) to increase overall happiness: Hitler, euthanasia, etc. In many other cases, however, imagining a world where X is good is really difficult and therefore probably not done properly by either authors or readers. For example, it’s difficult for me to think of a world where rape increases overall happiness. (The inhabitants of such a world would have to differ strongly from humans psychologically.)


  3. Pingback: Is it a bias or just a preference? An interesting issue in preference idealization – The Universe from an Intentional Stance

  4. Dennett revisits this failure mode of thought experiments in “Intuition Pumps and Other Tools for Thinking”.

    From chapter 32: “Several of the boom crutches we will dismantle actually suppress the imagination of readers, distorting their intuitions and thus invalidating the “results” of the thought experiment.”

    It’s also, again, one of his arguments against Searle’s Chinese Room in chapter 60 and Mary the color scientist in chapter 64.

