Futarchy is a meta-algorithm for making decisions using a given set of traders. For every possible action a, the beliefs of these traders are aggregated using a prediction market for that action, which, if a is actually taken, evaluates to an amount of money that is proportional to how much utility is received. If a is not taken, the market is not evaluated, all trades are reverted, and everyone keeps their original assets. The idea is that – after some learning and after bad traders lose most of their money to competent ones – the market price for a will come to represent the expected utility of taking that action. Futarchy then takes the action whose market price is highest.
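To make the mechanism concrete, here is a minimal sketch of a single decision round in hypothetical code. Everything in it (the function name `futarchy_decide`, the wealth-weighted averaging that stands in for actual trading, and the crude loss rule used for settlement) is my own simplification for illustration, not part of any real futarchy design:

```python
# A minimal, self-contained sketch of one futarchy decision round. All
# names and the wealth-update rule are hypothetical simplifications: real
# futarchies price actions through actual trading, not by averaging, and
# settle markets via the trades themselves rather than a crude loss rule.

def futarchy_decide(actions, traders, realized_utility):
    """traders: list of (wealth, {action: predicted_utility}) pairs."""
    total_wealth = sum(w for w, _ in traders)
    # "Market price" for each action: wealth-weighted average prediction.
    prices = {a: sum(w * preds[a] for w, preds in traders) / total_wealth
              for a in actions}
    chosen = max(prices, key=prices.get)  # take the highest-priced action

    # Only the market for the chosen action is evaluated; conditional
    # trades on every other action are reverted, so traders are scored
    # solely on their prediction for the action actually taken.
    outcome = realized_utility(chosen)
    updated = [(max(w - abs(preds[chosen] - outcome), 0.0), preds)
               for w, preds in traders]
    return chosen, prices, updated

# Hypothetical example: taking "A" in fact yields utility 9.
traders = [(10.0, {"A": 10, "B": 5}), (10.0, {"A": 8, "B": 2})]
chosen, prices, updated = futarchy_decide(
    ["A", "B"], traders, lambda a: 9 if a == "A" else 1)
# chosen == "A"; prices == {"A": 9.0, "B": 3.5}; each trader was off by 1
# on the settled market, so both wealths drop from 10.0 to 9.0.
```

Note that the market for "B" never settles: the two traders disagree sharply about "B", but since "B" is not taken, neither is ever proven right or wrong about it.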
For a more detailed description, see, e.g., Hanson’s (2007) original paper on the futarchy, which also discusses potential objections. For instance, what happens in markets for actions that are very unlikely to be chosen? Note, however, that for this blog post you’ll only need to understand the basic concept and none of the minutiae of real-world implementation. The description above deliberately abstracts away from these. One example of such a discrepancy between standard descriptions of futarchy and my above account is that, in real-world governance, there is often a “default action” (such as leaving law and government as they are). To keep the number of markets small, markets are set up to evaluate proposed changes relative to that default (such as the introduction of a new law), rather than one market for every possible action. I should also note that I only know basic economics and am not an expert on the futarchy.
Traditionally, the futarchy has been thought of as a decision-making procedure for governance of human organizations. But in principle, AIs could be built on futarchies as well. Of course, many approaches to AI (such as most Deep Learning-based ones) already have all their knowledge concentrated into a single entity and thus don’t need any procedure (such as democracy’s voting or futarchy’s markets) to aggregate the beliefs of multiple entities. However, it has also been proposed that intelligence arises from the interaction and sometimes competition of a large number of simple subagents – see, for instance, Minsky’s book The Society of Mind, Dennett’s Consciousness Explained, and the modularity of mind hypothesis. Prediction markets and futarchies would be approaches to (or models of) combining the opinions of many of these agents, though I doubt that the human mind functions like either of the two. A theoretical example of the use of prediction markets in AI is MIRI’s logical induction paper. Furthermore, markets are generally similar to evolutionary algorithms.1
So, if we implement a futarchy-like system in an AI, what decision theory would that AI come to implement? It seems that the answer is EDT. Consider Newcomb’s problem as an example. Traders that predict one-boxing to yield a million and two-boxing to yield a thousand will earn money, since the agent will, in fact, receive a million if it one-boxes and a thousand if it two-boxes. More generally, the futarchy rewards traders based on how accurately they predict what will actually happen if the agent makes a particular choice. This leads the traders to price an action in proportion to the expected utility conditional on that action, since conditional probabilities are the correct way to make such predictions.
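A toy simulation can illustrate why the conditional predictors end up with the money. All of the specifics below are my own hypothetical choices (the payoffs, the two stylized traders, and the scaled quadratic scoring rule): one trader predicts conditionally on the action actually taken, while the other treats the opaque box’s contents as causally fixed, assuming it is full with probability 0.5 no matter what is chosen:

```python
import random

# Toy Newcomb setup: Omega predicts perfectly, so one-boxing in fact
# yields 1,000,000 and two-boxing yields 1,000.
PAYOFF = {"one-box": 1_000_000, "two-box": 1_000}

# Two stylized traders (my own hypothetical construction). The "edt"
# trader predicts conditional on the action actually taken; the "cdt"
# trader treats the opaque box's contents as fixed, assuming it is full
# with probability 0.5 regardless of the choice.
PREDICTIONS = {
    "edt": {"one-box": 1_000_000, "two-box": 1_000},
    "cdt": {"one-box": 500_000, "two-box": 501_000},
}

def run_markets(rounds=100):
    wealth = {"edt": 1.0, "cdt": 1.0}
    for _ in range(rounds):
        action = random.choice(["one-box", "two-box"])  # exploration
        outcome = PAYOFF[action]  # only the taken action's market settles
        for name, preds in PREDICTIONS.items():
            # Squared error, scaled down so wealth shrinks gradually.
            loss = ((preds[action] - outcome) / 1_000_000) ** 2
            wealth[name] *= 1 - 0.5 * loss
    return wealth

wealth = run_markets()
# The "edt" trader's predictions match realized payoffs exactly, so its
# wealth stays at 1.0; the "cdt" trader is off by 500,000 either way and
# is driven toward zero.
```

Because the causalist trader’s error is 500,000 in both branches, its wealth shrinks by a factor of 0.875 every round regardless of which action gets chosen, while the conditional predictor’s wealth never changes.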
There are some caveats, though. For instance, prediction markets only work if the question at hand can eventually be answered; otherwise, the market cannot be evaluated. In Newcomb’s problem, one would usually assume that your winnings are eventually handed over and thus shown to you. But other versions of Newcomb’s problem are conceivable. For instance, if you are a consequentialist, Omega could donate your winnings to your favorite charity in such a way that you will never be able to tell how much utility this has generated for you. Unless you simply make estimates (in which case the behavior of the markets depends primarily on what kind of expected value, regular or causal, you use as the estimate), you cannot set up a prediction market for this problem at all. An example of such a “hidden” Newcomb problem is cooperation via correlated decision making between distant agents.
Another unaddressed issue is whether the futarchy can deal correctly with other problems of space-time embedded intelligence, such as the BPB problem.
Notwithstanding the caveats, EDT seems to be inherent in the way the futarchy works. To get the futarchy to implement CDT, one would have to reward traders based on what the agent is causally responsible for, or based on some untestable counterfactual (“what would have happened if I had two-boxed”). Whereas EDT arises naturally from the principles of the futarchy, other decision theories require modification and explicit specification.
I should mention that this post is not primarily intended as a futarchist argument for EDT. Most readers will already be familiar with the underlying pro-EDT argument, i.e., that EDT makes decisions based on what will actually happen if a particular decision is made. In fact, the post may also be viewed as a causalist argument against the futarchy.2 Rather than either of these two, it is a small part of the answer to the “implementation problem of decision theory”: if you want to create an AI that behaves in accordance with some particular decision theory, how should that AI be designed? Or, conversely, if you build an AI without explicitly implementing a specific decision theory, what kind of behavior (EDT-like or CDT-like or other) results?
1. There is some literature comparing the way markets function to evolution-like selection (see the first section of Blume and Easley 1992) – i.e., how irrational traders are weeded out and rational traders accrue more and more capital. I haven’t read much of that literature, but the main differences between the futarchy and evolutionary algorithms seem to be the following. First, the futarchy doesn’t specify how new traders are generated, because it classically relies on humans to do the betting (and to create new automated trading systems), whereas generating new candidates is a central concern in evolutionary algorithms. Second, futarchies permanently leave power in the hands of many algorithms, whereas evolutionary algorithms eventually settle on a single one. This also means that the individual traders in a futarchy can remain permanently narrow and specialized. For instance, there could be traders who exploit a single pattern and rarely bet at all. I wonder whether it makes sense to combine evolutionary algorithms and prediction markets. ↩
2. Probably futarchist governments wouldn’t face sufficiently many Newcomb-like situations in which the payoff can be tested for the difference to be relevant (see chapter 4 of Arif Ahmed’s Evidence, Decision and Causality). ↩