Reply to Jebari and Lundborg on Artificial Superintelligence

Jebari and Lundborg have recently published an article entitled Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. It focuses on the thorny issue of agency in superintelligent AIs. I’m glad to see more work on this crucial topic; however, I have significant disagreements with their terminology and argumentation, as I outline in this reply. Note that this reply was written rather quickly, and so might lack clarity in some places, or fail to convey some nuances of the original article. I welcome comments and further responses.

Their paper runs roughly as follows: Jebari and Lundborg first discuss the belief–desire model for intentional action, under which agents act in ways that they believe will bring about their desires. They then distinguish between different degrees of generality that agents can have: “general agency is, loosely speaking, the ability to act in a diverse set of situations.” They consider thermostats to be very specialised agents, and dogs to be fairly general agents. They then introduce Legg and Hutter’s definition of intelligence as “an agent’s ability to achieve its goals in a wide range of environments”. This is, unfortunately, a misquotation of Legg and Hutter, and one which leads Jebari and Lundborg astray, as they follow it with:

Claim 1: If an agent has very specialized goals or desires, it can be superintelligent with regards to those desires without being a general agent.

The actual definition given in both A collection of definitions of intelligence and Universal intelligence is: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments”. Note the important distinction between an agent which can achieve its goals versus one which can achieve goals in general. In the original context, it is clear that to be highly intelligent according to this definition, an agent doesn’t just need to be able to achieve a single (potentially specialised) set of goals, but rather a wide variety of them! More specifically, in Universal intelligence goals are formalised as a reward function implemented by the environment, with intelligence measured over all computable environments (and therefore all computable reward functions). So Legg and Hutter’s definition does not allow us to call a non-general agent superintelligent. Bostrom, too, defines superintelligence to refer specifically to "intellects that greatly outperform the best current human minds across many very general cognitive domains".
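To spell this out (roughly, in Legg and Hutter’s own notation), their universal intelligence measure is

$$\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}$$

where $E$ is the class of all computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward that agent $\pi$ accumulates in $\mu$. Because the sum ranges over every computable environment (and hence every computable reward function), an agent can only score highly by achieving many different goals, not by reliably achieving one specialised goal.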


Perhaps Jebari and Lundborg aren’t making a claim based on Legg and Hutter’s definition, but rather trying to define a new concept: intelligence with respect to a set of desires. But this is still a highly misleading use of the term ‘intelligence’, because there are many possible desires whose achievement doesn’t require any of the cognitive faculties traditionally associated with intelligence (planning, memory, and so on). Consider an agent with a goal that is trivially easy to fulfil in almost all environments - for example, the goal of never taking any actions. Claim 1 implies that, even if this agent has no other cognitive faculties beyond the possession of that goal, it can nevertheless be superintelligent with respect to that goal, because it achieves the goal in almost all environments. Yet why would it be useful to conclude that an agent with nearly no cognitive abilities is superintelligent in any way? Or, perhaps more pithily: if this definition forces us to conclude that rocks could be superintelligent if only they possessed the goal of behaving in a rocklike way, we should suspect that it’s a misuse of the term.
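To make the reductio concrete, here is a deliberately trivial sketch (the setup and names, such as NullAgent, are my own invention, not anything from the paper): an “agent” with no cognitive machinery at all, which nonetheless achieves the goal of never acting in every environment we generate.

```python
import random

class NullAgent:
    """An 'agent' with no planning, memory, or world model: it never acts."""
    def act(self, observation):
        return None  # take no action, regardless of the observation

def goal_achieved(agent, env_seed, steps=10):
    """Toy 'environment': random observations; the goal is satisfied iff the agent never acts."""
    rng = random.Random(env_seed)
    return all(agent.act(rng.random()) is None for _ in range(steps))

agent = NullAgent()
# The goal is achieved in every one of 1000 randomly generated environments,
# yet nothing resembling intelligence was involved.
print(sum(goal_achieved(agent, seed) for seed in range(1000)) / 1000)  # 1.0
```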


In their next paragraph, Jebari and Lundborg argue that superintelligence also requires the ability to perform a task better than humans. But this doesn’t salvage claim 1, because there are many tasks on which intelligence is not needed to outperform humans - e.g. both of the examples I discussed in the previous paragraph. Humans, unlike rocks, are notoriously bad at taking no actions; eventually we need to eat and sleep, or just get bored. They also note, in defence of their argument, that “the term ‘intelligence’ has a lot of connotations that are misleading”. But at some point, when you separate a term from all of its connotations, you are merely talking about a different concept. We can already say that objects have the property of x, or a strong ability or capability to do x; we do not need to claim that this is equivalent to them being superintelligent at x.


Having said all this, I do agree with them that it’s reasonable and indeed necessary to separate agency and intelligence. I think this will require us to move beyond Legg and Hutter’s definition of intelligence, in a way which I discuss briefly here and hope to explicate further soon.


Ascribing desires to simple agents


Jebari and Lundborg go on to note: “In many situations, machines acting in our environment are often described as trying to attain some general aim, for example, the goal of a vacuum cleaner robot is often described as “clean a room”… But vacuum cleaner robots are not literally trying to clean a room. Rather they follow a set of specific instructions that often results in cleaning a room.”


This is a very reasonable observation. To me, the implication is that it is not very useful to ascribe goals to vacuum cleaner robots; saying that they have the goal to “follow a set of specific instructions that often results in cleaning a room” adds nothing. In Dennett’s terminology, we can take the design stance towards such objects, and merely say that they have been designed to clean rooms without needing to invoke any goals belonging to the vacuum cleaner itself.


But Jebari and Lundborg disagree. For example, they claim that “[a] thermostat has no desire with respect to the temperature of the room. It only has a desire to activate the AC if the temperature, as registered by its thermometer, reaches a certain threshold.” They introduce a distinction between these and other kinds of desires, calling desires like the ones they ascribe to thermostats and vacuum cleaners “unproductive desires”.


I should note, however, that this is primarily a semantic dispute. I say that unproductive desires shouldn’t qualify as desires; Jebari and Lundborg think that they are a special type of desire. I think that their terminology is misleading because, like their definition of intelligence, it’s too permissive - it allows us to ascribe unproductive desires to rocks and other inanimate objects. And I expect that if this terminology becomes more common, there’ll be a lot of unnecessary contention between people who use this terminology and people who think that rocks and thermostats don’t really have desires. But it’s not a crucial issue.


Two ways of acquiring desires


A more serious disagreement arises in the subsequent section, where they argue that “an AI with a set of desires constrained to a specific domain cannot acquire desires relevant to other domains”, because “desires can only be reinforced from within”. We should immediately be suspicious of this argument, because it gives rise to a regress problem: how does an AI acquire its first desire in any given domain? In fact, Jebari and Lundborg are conflating two ways in which new desires can arise. The first is that new desires can arise by thinking about ways to achieve old desires; these are often called instrumental desires. The second is that desires can arise via a modification to the system itself. For example, if I train an AI to play StarCraft, that AI goes from an initial state in which it has no desires to a state in which it desires (whether “productively” or “unproductively”) to win at StarCraft, due to the parameter updates implemented by its optimiser.


In this case, I have described the situation as if there is a “policy” agent which chooses what actions to take, and a separate optimiser. But we could also think about this as one system: a single agent which contains a policy module and an optimiser module that updates the policy module. Some sources use the former terminology, some use the latter. Yet in either case, the “agent” starts off without general agency, and might develop general agency during training if the training environment is complex enough. Because of this, the distinction Jebari and Lundborg draw between “spontaneous emergence” and “accidental emergence” of generality seems to merely reflect the arbitrary choice of where we draw the line of what qualifies as an “agent”.
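As a concrete illustration of the two framings, here is a minimal sketch of my own, using a toy PyTorch setup and a made-up one-step reward (not anything from the paper, and not a real game-playing system): the same two components can be described either as a policy plus a separate optimiser, or as a single “agent” containing both, and in either framing goal-directed behaviour appears only through parameter updates during training.

```python
import torch
import torch.nn as nn

# Framing 1: a "policy" that picks actions, plus a separate optimiser that updates it.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward_for(action):
    # Stand-in for the environment: only action 1 is rewarded.
    return 1.0 if action == 1 else 0.0

def training_step(policy, optimiser, obs):
    dist = torch.distributions.Categorical(logits=policy(obs))
    action = dist.sample()
    # REINFORCE-style update: make rewarded actions more likely in future.
    loss = -dist.log_prob(action) * reward_for(action.item())
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# Framing 2: wrap the policy and its optimiser into one "agent".
class Agent:
    def __init__(self, policy, optimiser):
        self.policy, self.optimiser = policy, optimiser

    def learn(self, obs):
        training_step(self.policy, self.optimiser, obs)

agent = Agent(policy, optimiser)
for _ in range(200):
    agent.learn(torch.randn(4))
# The randomly initialised policy had no tendency to pick action 1; after training,
# its parameters encode exactly that tendency. Whether we call the policy alone or
# the policy-plus-optimiser the "agent" is a choice of where to draw the line.
```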


Perhaps Jebari and Lundborg would reply that the combination of a randomly initialised policy and an optimiser already constitutes a general agent, because the optimiser is able to update the policy in a wide range of circumstances. Yet if so, then they are incorrect to claim that Bostrom argues for the possibility of spontaneous emergence of a general agent from a non-general agent. In fact, the example Bostrom uses is of evolutionary search processes, which are a type of optimisation algorithm. So if an optimiser updating a randomly initialised policy constitutes a general agent, then so does Bostrom’s proposed evolutionary search process. But I think that calling either of these a “general” agent is too much of a stretch; they’re simple, blind optimisation processes that are able to give rise to general agents. I consider this distinction to be crucial in thinking about agency and what behaviour we should expect from superintelligences.
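To be concrete about what I mean by a “simple, blind optimisation process”, it can be as little as the following toy (1+1) evolutionary hill climber (my own illustration, not an example taken from Bostrom or from the paper): it has nothing resembling beliefs or desires, yet a process of this shape, given a rich enough search space and fitness function, is the kind of thing that can in principle give rise to capable agents.

```python
import random

def fitness(params):
    # Stand-in for "how well the evolved policy performs"; in reality this would
    # mean running the candidate in an environment and scoring its behaviour.
    return -sum((p - 0.7) ** 2 for p in params)

def evolve(generations=500, n_params=5, step=0.05, seed=0):
    rng = random.Random(seed)
    best = [rng.random() for _ in range(n_params)]
    for _ in range(generations):
        # Blind variation and selection: mutate, keep the mutant only if it scores better.
        candidate = [p + rng.gauss(0, step) for p in best]
        if fitness(candidate) > fitness(best):
            best = candidate
    return best

print(evolve())  # parameters drift towards the optimum without any "desire" to get there
```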

Comments

  1. On the regress issue - I think they do address it by adding that desires can be "acquired with the help of an external influence" (e.g. the programmer).

    I agree with you that the paper seems to go down some questionable trails. To me, it feels like the authors would do better to focus on domain specifically, rather than on desires. Alpha Zero, for instance, can't become a general agent because its input domain is restricted to the chess board; if we were to expand its domain to include senses of the natural world, we'd be less certain about its inability to achieve generality (and vice versa for the paperclip maximizer). Their "unproductive desires" only remain that way for as long as the system's domain remains limited.
