A wager against Solomonoff induction

The universal prior assigns zero probability to incomputable universes—for instance, universes that could only be represented by a machine updating uncountably many locations, or universes in which physics solves the halting problem. Such universes may well not exist, but one cannot justify assigning literally zero credence to their existence. I argue that it is of overwhelming importance to make a potential AGI assign non-zero credence to incomputable universes—in particular, universes with uncountably many “value locations”.
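To make the problem concrete (this formalization is standard, though not part of the original argument): in one common formulation, the universal prior of a finite bit string x is

    M(x) = \sum_{p \,:\, U(p)\text{ outputs a string beginning with } x} 2^{-\ell(p)},

where U is a universal prefix Turing machine, p ranges over its programs, and \ell(p) is the length of p in bits. Every summand corresponds to a program, so any sequence of observations that no program can generate receives a prior weight of exactly zero.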

Here, I assume a model of universes as sets of value locations. Given a specific goal function, each element of such a set specifies an area of the universe carrying some finite value. If a structure contains a sub-structure, and both the structure and the sub-structure are valuable in their own right, there could be either one or two elements representing them in the universe’s set of value locations. If a structure is made up of infinitely many sub-structures, each of which the goal function assigns some positive, finite value, then this structure might (particularly if the sum of the values diverges) only be representable by infinitely many elements of the set. If the set of value locations representing a universe is countable, the value of that universe can be taken to be the sum of the values of all elements in the set (provided some ordering of the elements is specified). I say that a universe is “countable” if it can be represented by a finite or countably infinite set, and “uncountable” if it can only be represented by an uncountably infinite set.
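To spell out the countable case (the notation here is my own, chosen merely to illustrate the model above): if a universe is represented by value locations x_1, x_2, x_3, \dots under some fixed ordering, its value would be

    V = \sum_{i=1}^{\infty} v(x_i),

where v assigns each location its finite value under the given goal function. The ordering matters: if the series is not absolutely convergent, rearranging its terms can change, or even destroy, the limit.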

A countable universe could be, for example, an ordinary cellular automaton. If the automaton has infinitely many cells, then, given a goal function such as total utilitarianism, it could be represented by a countably infinite set of value locations. An uncountable universe, on the other hand, could be a cellular automaton with one cell for each real number, with the interactions between cells over time specified by a mathematical function. Given some utility functions over such a universe, it might only be representable by an uncountably infinite set of value locations. Importantly, even though this universe could be described in logic, it would be incomputable.
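For the countable case only, here is a toy sketch (the automaton rule and per-cell value function are hypothetical stand-ins, not anything proposed above) of how a total-utilitarian goal function would sum a per-cell value over cells and time steps; any actual computation can of course only evaluate a finite truncation of the infinite sum:

```python
# Toy sketch: a countable "universe" as a one-dimensional cellular automaton,
# with one value location per cell per time step. The rule and the value
# function below are arbitrary stand-ins.

def step(cells):
    """One update of a simple majority rule on a finite window (wrap-around)."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def cell_value(state):
    """Stand-in goal function: each 'active' cell contributes one unit of value."""
    return 1.0 if state == 1 else 0.0

def truncated_value(cells, steps=10):
    """Sum cell values over a finite window and finitely many time steps --
    at best an approximation of the (possibly divergent) countable sum."""
    total = 0.0
    for _ in range(steps):
        total += sum(cell_value(c) for c in cells)
        cells = step(cells)
    return total

print(truncated_value([0, 1, 0, 1, 1, 0, 0, 1, 0, 0]))
```

No analogous sketch is possible for the uncountable automaton: there is no way to enumerate, let alone update, one cell per real number, which is exactly why such a universe would be incomputable.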

Depending on one’s approach to infinite ethics, an uncountable universe could matter much more than a countable one. Agents in uncountable universes might—with comparatively small resource investments—be able to create (or prevent), for instance, amounts of happiness or suffering that could not be created in an entire countable universe. For instance, each cell of the above-mentioned cellular automaton might consist of some (possibly valuable) structure in and of itself, and the cells’ structures might influence each other. Moreover, some (uncountable) set of cells might be regarded as an agent. That agent might then be able to create a positive amount of happiness in uncountably many cells, which—at least given some definitions of value and approaches to infinite ethics—would create more value than could ever be created in a countable universe.

Therefore, there is a wager in favor of the hypothesis that humans actually live in an uncountable universe, even if it appears unlikely given current scientific evidence. But there is also a different wager, which applies if there is a chance that such a universe exists, regardless of whether humans live in that universe. It is unclear which of the two wagers dominates.

The second wager is based on acausal trade: there might be agents in an uncountable universe that do not benefit from the greater possibilities of their universe—e.g., because they do not care about the number of individual copies of some structure, but instead care about an integral over the structures’ values relative to some measure over structures. While agents in a countable universe might be able to benefit those agents equally well, they might be much worse at satisfying the values of agents with goals sensitive to the greater possibilities in uncountable universes. Thus, due to different comparative advantages, there could be great gains from trade between agents in countable and uncountable universes.
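To make the contrast explicit (again with notation of my own choosing): such an agent might value something like

    V = \int_{S} v(s) \, d\mu(s),

an integral of a value density v with respect to a measure \mu over a space of structures S, rather than a sum over individual copies of structures. Goals of the former kind might be served from within a countable universe just as well, whereas copy-counting goals can only be fully served using the uncountably many value locations available in an uncountable universe; hence the differing comparative advantages.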

The above example might sound outlandish, and it might be flawed in that one could not actually come up with interaction rules that would lead to anything interesting happening in the cellular automaton. But this is irrelevant. It suffices that there is only the faintest possibility that an AGI could have an acausal impact in an incomputable universe which, according to one’s goal function, would outweigh all impact in all computable universes. There probably exists a possible universe like that for most goal functions. Therefore, one could be missing out on virtually all impact if the AGI employs Solomonoff induction.

There might not only be incomputable universes represented by sets with the cardinality of the continuum; there might be incomputable universes represented by sets of arbitrarily high cardinality. In the same way that there is a wager for the former, there is an even stronger wager for universes of even higher cardinalities. If there were a universe of highest cardinality, it would appear to make sense to optimize only for acausal trade with that universe. Of course, there is no highest cardinality, so there might not be a universe of highest cardinality either; one might then hope that the values of the agents in universes of ever higher cardinality converge in some way (which might make it easier to trade with these agents).

In conclusion, there is a wager in favor of considering the possibility of incomputable universes: even a small acausal impact (relative to the total resources available) in an incomputable universe could counterbalance everything humans could do in a computable universe. Crucially, an AGI employing Solomonoff induction will not consider this possibility, hence potentially missing out on unimaginable amounts of value.

Acknowledgements

Caspar Oesterheld and I came up with the idea for this post in a conversation. I am grateful to Caspar Oesterheld and Max Daniel for helpful feedback on earlier drafts of this post.

Notes on the 24 November 2015 conference on machine ethics

The day before yesterday, I attended a German-speaking conference on robot and machine ethics, organized by the Daimler and Benz Foundation and the Cologne Center for Ethics, Rights, Economics, and Social Sciences of Health. Speakers included Prof. Oliver Bendel, author of a German-language blog on machine ethics, and Norbert Lammert, President of the German Bundestag. The conference wasn’t meant for researchers only – though a great many scientists were present – so most talks were introductory in nature. Ignoring the basics, which are covered, for example, in the collection on machine ethics by Anderson and Anderson and the book by Wallach and Allen, I will summarize some thoughts on the event below.

[Poster from the conference website]

Conservatism

Understandably, the conference focused on the short-term relevance and direct application of machine ethics (see below). Robots with human-level capabilities were only alluded to as science fiction. Nick Bostrom’s book Superintelligence was not even mentioned.

From my brief research on the speakers, it also seems that most of them have not commented on such scenarios before.

Immediately relevant fields for machine ethics

Lammert began his talk by saying that governments are usually led to change or introduce legislation when problems are urgent, but not before. Accordingly, significant parts of the conference were dedicated to specific problems in machine ethics that robots face today or might face in the near future. The three main areas seem to be

  • robots in medicine and care of the elderly,
  • military robots, and
  • autonomous vehicles (also see Lin (2015) on why ethics matters for autonomous cars).

Lammert also argued that smart home applications might be relevant. Furthermore, Oliver Bendel pointed to some specific examples.

AIs and full moral agency

There was some agreement that AIs should not (or could not) become full moral agents, at least within the foreseeable future. For example, upon being asked about the possibility of users programming robots to commit acts of terrorism, Prof. Jochen Steil argued that illegitimate usage can never really be ruled out and that the moral responsibility lies with the user. With full moral agency, however, robots could in principle resist any kind of illegal or immoral use. AI seems to be the only general-purpose tool that can be made safe in this way, and it seems odd to miss the chance to use this to increase the safety of such a powerful technology.

In his talk, Oliver Bendel said that he was opposed to the idea of letting robots make all moral decisions. For example, he proposed that robot vacuum cleaners could stop when coming across a bug or spider, but ultimately let the user decide whether to vacuum up the creature. He would also like cars to let him decide in ethically relevant situations. As some autonomous-vehicle researchers from the audience pointed out (and Bendel himself conceded), this will not be possible in most situations – ethical problems lurk around every corner, and quick reactions are required more often than not. In response to the question of why machines should not make certain crucial decisions, he argued that people and their lack of rationality were the problem. For example, if autonomous cars were introduced, people whose relatives were killed in accidents involving these vehicles would complain that the AI had chosen their relatives as victims, even if the overall number of deaths decreased. I don’t find this argument very convincing, though. It seems to be a descriptive point rather than a normative one: of course, it would be difficult for people to accept machines as moral agents, but that does not mean that machines should not make moral decisions. The violated preferences, the additional unhappiness, and the public outcry caused by introducing autonomous vehicles are morally relevant, but people dying (and therefore also more relatives being unhappy) is much more important and should be the priority.

Weird views on free will, consciousness and morality

Some of the speakers made comments on the nature of free will, consciousness, and morality that surprised me. For example, Lammert said that morality necessarily has to be based on personal experience and reflection, and that this makes machine morality impossible in principle. Machines could only be “perfected to behave according to some external norms”, which, he said, has nothing to do with morality; another speaker agreed.

Also, most speakers naturally assumed that machines of the foreseeable future don’t possess consciousness or free will, which I disagree with (see this article by Eliezer Yudkowsky on free will, Dan Dennett’s Consciousness Explained, or Brian Tomasik’s articles on consciousness). I am not so much surprised that I disagree with them – many of the ideas of Yudkowsky and Tomasik would be considered “crazy” by most people (though not necessarily by philosophers, I believe) – as by how confident they are, given that free will, consciousness, and the nature of morality are still the subject of ongoing discussion in mainstream contemporary philosophy. Indeed, digital consciousness seems to be a possibility in Daniel Dennett’s view of consciousness (see his book Consciousness Explained), in Thomas Metzinger’s self-model theory of subjectivity (see, for example, The Ego Tunnel), and in theories like computationalism in general. All of this is quite mainstream.

The best way out of this debate, in my opinion, is to only talk about the kind of morality we really care about, namely “functional morality”: acting morally without (if that is even possible) necessarily thinking morally, feeling empathy, and so on. I don’t think it matters much whether AIs really reflect consciously on things or whether they just act morally in some mechanical way, and I expect most people to agree. I made a similar argument about consequentialism and machine ethics elsewhere.

I expect that machines themselves could become morally relevant – and maybe some already are to some extent – but that’s a different topic.

AI politicians

Towards the end, Lammert was asked about politics being robosourced. While he is certain that this will not happen within his lifetime (Lammert was born in 1948), he said that politics will probably develop in this direction unless explicitly prevented.

In the preceding talk, Prof. Johannes Weyer mentioned that real-time data processing could be used for making political decisions.

Another interesting comment on Lammert’s talk was that many algorithms (or programs) effectively act as laws in that they direct the behavior of millions of computers and thereby millions of people.

Overall, this leads me to believe that, besides the applications in robotics (see above), the morality of artificial intelligence could become important in non-embodied systems that make political or perhaps management decisions.

Media coverage

Given the presence of Norbert Lammert (President of the German Bundestag) and the other high-profile researchers, and the large fraction of media people on the list of attendees, I expect the conference to receive a lot of press coverage.