(Edit: I'm no longer confident that the two definitions I used below are useful. I still stand by the broad thrust of this post, but am in the process of rethinking the details).

Rohin Shah has recently criticised Eliezer’s argument that “sufficiently optimised agents appear coherent”, on the grounds that any behaviour can be rationalised as maximisation of the expectation of some utility function. In this post I dig deeper into this disagreement, concluding that Rohin is broadly correct, although the issue is more complex than he makes it out to be. Here’s Eliezer’s summary of his original argument:

Violations of coherence constraints in probability theory and decision theory correspond to qualitatively destructive or dominated behaviors. Coherence violations so easily computed as to be humanly predictable should be eliminated by optimization strong enough and general enough to reliably eliminate behaviors that are qualitatively dominated by cheaply computable alternatives. From our perspective this should produce agents such that, ceteris paribus, we do not think we can predict, in advance, any coherence violation in their behavior.

First we need to clarify what Eliezer means by coherence. He notes that there are many formulations of coherence constraints: restrictions on preferences which imply that an agent which obeys them is maximising the expectation of some utility function. I’ll take the standard axioms of VNM utility as one representative set of constraints. In this framework, we consider a set O of disjoint outcomes. A lottery is some assignment of probabilities to the elements of O such that they sum to 1. For any pair of lotteries, an agent can either prefer one to the other, or be indifferent between them; let P be the function (from pairs of lotteries to a choice between them) defined by these preferences. The agent is incoherent if P violates any of the following axioms: completeness, transitivity, continuity, and independence. Eliezer gives several examples of how an agent which violates these axioms can be money-pumped, which is an example of the “destructive or dominated” behaviour he mentions in the quote above. And any agent which doesn’t violate these axioms has behaviour which corresponds to maximising the expectation of some utility function over O (a function mapping the outcomes in O to real numbers).
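To make the money-pump idea concrete, here’s a minimal sketch (my own illustration, not from the original argument) of how an agent with cyclic preferences over three items can be exploited. The item names and the one-unit trading fee are assumptions for the example:

```python
# Cyclic (intransitive) preferences: A > B, B > C, C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def trade(holding, offered):
    """Agent pays 1 unit to swap to any strictly preferred item."""
    if (offered, holding) in prefers:
        return offered, -1
    return holding, 0

holding, wealth = "A", 0
for offered in ["C", "B", "A"]:  # a trader offers items around the cycle
    holding, fee = trade(holding, offered)
    wealth += fee

# After one full cycle the agent holds A again, but has paid 3 units.
print(holding, wealth)  # -> A -3
```

Each individual trade looks like an improvement by the agent’s own lights, yet the cycle leaves it strictly worse off, which is exactly the “dominated behaviour” the coherence arguments point at.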

It’s crucial to note that, in this setup, coherence is a property of an agent’s preferences at a *single point in time*. The outcomes that we are considering are all mutually exclusive, so an agent’s preferences over other outcomes are irrelevant after one outcome has already occurred. In addition, preferences are not *observed* but rather *hypothetical*: since outcomes are disjoint, we can’t actually observe the agent choosing a lottery and receiving a corresponding outcome (more than once).¹ But Eliezer’s argument above makes use of a concept of coherence which differs in two ways: it is a property of the *observed behaviour* of agents *over time*. VNM coherence is not well-defined in this setup, so if we want to formulate a rigorous version of this argument, we’ll need to specify a new definition of coherence which extends the standard instantaneous-hypothetical one. Here are two possible ways of doing so:

- Definition 1: Let O be the set of all possible “snapshots” of the state of the universe at a single instant (which I shall call world-states). At each point in time when an agent chooses between different actions, that can be interpreted as a choice between lotteries over states in O. Its behaviour is coherent iff the set of all preferences revealed by those choices is consistent with some coherent preference function P over all pairs of lotteries over O AND there is a corresponding utility function which assigns values to each state that are consistent with the relevant Bellman equations. In other words, an agent’s observed behaviour is coherent iff there’s some utility function such that the utility of each state is some fixed value assigned to that state + the expected value of the best course of action starting from that state, and the agent has always chosen the action with the highest expected utility.²
- Definition 2: Let O be the set of all possible ways that the entire universe could play out from beginning to end (which I shall call world-trajectories). Again, at each point in time when an agent chooses between different actions, that can be interpreted as a choice between lotteries over O. However, in this case no set of observed choices can ever be “incoherent” - because, as Rohin notes, there is always a utility function which assigns maximal utility to all and only the world-trajectories in which those choices were made.
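Rohin’s point about definition 2 can be made mechanical. Here’s a sketch (my own construction; the trajectory labels are invented for illustration) of the trivial utility function that rationalises any observed behaviour:

```python
# Given any observed world-trajectory, assign it utility 1 and every
# other trajectory utility 0. The agent's actual choices then trivially
# maximise expected utility, so no behaviour counts as "incoherent".

def rationalising_utility(observed_trajectory):
    """Return a utility function satisfied only by the observed trajectory."""
    return lambda trajectory: 1.0 if trajectory == observed_trajectory else 0.0

observed = ("pay_F_to_J", "pay_J_to_B", "pay_B_to_F")
u = rationalising_utility(observed)

print(u(observed))            # -> 1.0
print(u(("stay_put",)))       # -> 0.0
```

Since this construction works for literally any sequence of choices, definition 2 places no constraints on behaviour at all.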

Okay, so what about definition 1? This is a more standard interpretation of having preferences over time: requiring choices under uncertainty to move between different states makes this setup very similar to POMDPs, which are often used in reinforcement learning. It would be natural to now interpret the non-transitive travel example as follows: let F, J and B be the states of being in San Francisco, San Jose and Berkeley respectively. Then paying to go from F to J to B to F demonstrates incoherent preferences over states (assuming there’s also an option to just stay put in any of those states).
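Under definition 1 this incoherence can be verified directly: each paid move reveals that the destination must be worth strictly more than the source plus the fare, and no utility assignment over {F, J, B} can satisfy all three constraints at once (summing them around the cycle gives 0 > 3 × cost). A brute-force sketch, with an assumed unit fare (the general argument holds for any real-valued utilities, not just the integers searched here):

```python
from itertools import product

cost = 1  # assumed fare per leg
legs = [("F", "J"), ("J", "B"), ("B", "F")]

def consistent(U):
    # Paying to move from src to dst reveals U[dst] - cost > U[src].
    return all(U[dst] - cost > U[src] for src, dst in legs)

# Search all integer utility assignments in a range: none is consistent
# with paying for the full cycle.
found = any(consistent(dict(zip("FJB", u)))
            for u in product(range(-10, 11), repeat=3))
print(found)  # -> False
```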

First problem with this argument: there are no coherence theorems saying that an agent needs to maintain the same utility function over time. In fact, there are plenty of cases where you might choose to change your utility function (or have that change thrust upon you). I like Nate Soares’ example of wanting to become a rockstar; other possibilities include being blackmailed to change it, or sustaining brain damage. However, it seems unlikely that a sufficiently intelligent AGI will face these particular issues - and in fact the more capable it is of implementing its utility function, the more valuable it will consider the preservation of that utility function.³ So I’m willing to accept that, past a certain high level of intelligence, changes significant enough to affect what utility function a human would infer from that AGI’s behaviour seem unlikely.

Here’s a more important problem, though: we’ve now ruled out some preferences which seem to be reasonable and natural ones. For example, suppose you want to write a book which is so timeless that at least one person reads it every year for the next thousand years. There is no single point at which the state of the world contains enough information to determine whether you’ve succeeded or failed in this goal: in any given year there may be no remaining record of whether somebody read it in a previous year (or the records could have been falsified, etc). This goal is fundamentally a preference over world-trajectories.⁴ In correspondence, Rohin gave me another example: a person whose goal is to play a great song in its entirety, and who isn’t satisfied with the prospect of playing the final note while falsely believing that they’ve already played the rest of the piece.⁵ More generally, I think that virtue-ethicists and deontologists are more accurately described as caring about world-trajectories than world-states - and almost all humans use these theories to some extent when choosing their actions. Meanwhile Eric Drexler’s CAIS framework relies on services which are bounded in time taken and resources used - another constraint which can’t be expressed just in terms of individual world-states.

There’s a third issue with this framing: in examples like non-transitive travel, we never actually end up in quite the same state we started in. Perhaps we’ve gotten sunburned along the journey. Perhaps we spent a few minutes editing our next blog post. At the very least, we’re now slightly older, and we have new memories, and the sun’s position has changed a little. So really we’ve ended up in state F’, which differs in many ways from F. You can presumably see where I’m going with this: just like with definition 2, no series of choices can ever demonstrate incoherent revealed preferences in the sense of definition 1, since every choice actually made is between a different set of possible world-state outcomes. (At the very least, they differ in the agent’s memories of which path it took to get there.⁶ And note that outcomes which are identical except for slight differences in memories should sometimes be treated in very different ways, since having even a few bits of additional information from exploration can be incredibly advantageous.)
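To illustrate this (my own sketch; the state encoding and fare are assumptions), once each state records the agent’s memories, the travel “cycle” becomes a chain of distinct states, and a consistent utility assignment is easy to construct:

```python
cost = 1  # assumed fare per leg
# States are (location, path-so-far): the final "F" differs from the first.
chain = [("F", ()),
         ("J", ("F",)),
         ("B", ("F", "J")),
         ("F", ("F", "J", "B"))]

# Give each successive state enough extra utility to justify the fare.
U = {state: i * (cost + 1) for i, state in enumerate(chain)}

# Every paid move now yields a strict utility gain, so the observed
# behaviour is "coherent" under definition 1.
ok = all(U[chain[i + 1]] - cost > U[chain[i]] for i in range(len(chain) - 1))
print(ok)  # -> True
```

So the same sequence of journeys that looked like a money pump over bare locations is perfectly rationalisable once states are individuated finely enough, which is the sense in which definition 1 collapses into definition 2.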

Now, this isn’t so relevant in the human context because we usually abstract away from the small details. For example, if I offer to sell you an ice-cream and you refuse it, and then I offer it again a second later and you accept, I’d take that as evidence that your preferences are incoherent - even though technically the two offers are different because accepting the first just leads you to a state where you have an ice-cream, while accepting the second leads you to a state where you both have an ice-cream and remember refusing the first offer. Similarly, I expect that you don’t consider two outcomes to be different if they only differ in the precise pattern of TV static or the exact timing of leaves rustling. But again, there are no coherence constraints saying that an agent can’t consider such factors to be immensely significant, enough to totally change their preferences over lotteries when you substitute in one such outcome for the other.

So for the claim that sufficiently optimised agents appear coherent to be non-trivially true under my first definition of coherence, we’d need to clarify that such coherence is only with respect to outcomes when they’re categorised according to the features which humans consider important, *except* for the ones which are intrinsically temporally extended. But then the standard arguments from coherence constraints no longer apply. At this point I think it’s better to abandon the whole idea of formal coherence as a predictor of real-world behaviour, and replace it with Rohin’s notion of “goal-directedness”, which is more upfront about being inherently subjective, and doesn’t rule out any of the goals that humans actually have.

*Thanks to Tim Genewein, Ramana Kumar, Victoria Krakovna and Rohin Shah for discussions which led to this post, and helpful comments.*

[1] Disjointness of outcomes makes this argument more succinct, but it’s not actually a necessary component, because once you’ve received one outcome, your preferences over all other outcomes are allowed to change. For example, having won $1000000, the value you place on other financial prizes will very likely go down. This is related to my later argument that you never actually have multiple paths to ending up in the “same” state.

[2] Technical note: I’m assuming an infinite time horizon and no discounting, because removing either of those conditions leads to weird behaviour which I don’t want to dig into in this post. In theory this leaves open the possibility of states with infinite expected utility, as well as lotteries over infinitely many different states, but I think we can just stipulate that neither of those possibilities arises without changing the core idea behind my argument. The underlying assumption here is something like: whether we model the universe as finite or infinite shouldn’t significantly affect whether we expect AI behaviour to be coherent over the next few centuries, for any useful definition of coherent.

[3] Consider the two limiting cases: if I have no power to implement my utility function, then it doesn’t make any difference what it changes to. By comparison, if I am able to perfectly manipulate the world to fulfil my utility function, then there is no possible change in it which will lead to better outcomes, and many which will lead to worse (from the perspective of my current utility function).

[4] At this point you could object on a technicality: from the unitarity of quantum mechanics, it seems as if the laws of physics are in fact reversible, and so the current state of the universe (or multiverse, rather) actually does contain all the information you theoretically need to deduce whether or not any previous goal has been satisfied. But I’m limiting this claim to macroscopic-level phenomena, for two reasons. Firstly, I don’t think our expectations about the behaviour of advanced AI should depend on very low-level features of physics in this way; and secondly, if the objection holds, then preferences over world-states have all the same problems as preferences over world-trajectories.

[5] In a POMDP, we don’t usually include an agent’s memories (i.e. a subset of previous observations) as part of the current state. However, it seems to me that in the context of discussing coherence arguments it’s necessary to do so, because otherwise going from a known good state to a known bad state and back in order to gain information is an example of incoherence. So we could also formulate this setup as a belief MDP. But I prefer talking about it as a POMDP, since that makes the agent seem less Cartesian - for example, it makes more sense to ask what happens after the agent “dies” in a POMDP than a belief MDP.

[6] Perhaps you can construct a counterexample involving memory loss, but this doesn’t change the overall point, and if you’re concerned with such technicalities you’ll also have to deal with the problems I laid out in footnote 4.