A model of intelligence

Epistemic status: very speculative, exploratory and messy. I'll try to put up a summary later.

Humans are very good at many cognitive tasks which weren't selected for in the ancestral environment. Notable examples include doing mathematics, designing, building and using very complex tools such as computers, discussing abstract ideas in philosophy, creating complex and detailed narratives set in entirely fictional worlds, and making plans for the long-term future which involve anticipating the behaviour of millions of people. This implies that when we evolved to perform well on simpler versions of these tasks, we didn't just develop specific skills, but rather a type of general intelligence which is qualitatively different to that of animals. This was probably a pretty abrupt shift, since our capacity for higher thought is orders of magnitude above that of any other animal. If our intellectual capabilities were the combination of many small cognitive abilities, we'd expect some other animals to have developed most of them, and be half as good as a human at learning language, or a quarter as good at building tools - something we just don't observe. So it's likely that there are a few core cognitive competencies which underlie human abilities. If I had to guess, I'd say the keys are compositionality, abstraction and imagination.

Compositionality - the ability to combine concepts in novel ways which still make sense - is the bedrock of language and thought. If I teach you a new verb, "to dax", you implicitly know how to combine it with the rest of the English language to form arbitrarily complex sentences which express facts, thoughts or counterfactuals: "I sometimes dax"; "You think that I dax too much"; "If I hadn't daxed as often, you'd be happier." The way language achieves compositionality is by having a recursive structure based mostly on noun phrases and verb phrases: for example, in the sentence "This is the dog that worried the cat that killed the rat that ate the malt that lay in the house that Jack built", starting at any 'the' gives you a complete noun phrase, and starting at any verb gives you a complete verb phrase (e.g. "ate the malt that lay in the house that Jack built") which needs to be preceded by a noun phrase to make a full sentence.
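
To make that recursive structure a little more concrete, here's a minimal sketch in Python (my own toy illustration, not something from the original argument; the class names and example are made up) in which noun phrases can contain verb phrases and vice versa, so the two building blocks nest to arbitrary depth:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class NounPhrase:
    noun: str                                        # e.g. "the malt"
    relative_clause: Optional["VerbPhrase"] = None   # e.g. "lay in the house"

    def render(self) -> str:
        if self.relative_clause is None:
            return self.noun
        return f"{self.noun} that {self.relative_clause.render()}"


@dataclass
class VerbPhrase:
    verb: str                                        # e.g. "ate"
    object: Optional[NounPhrase] = None

    def render(self) -> str:
        if self.object is None:
            return self.verb
        return f"{self.verb} {self.object.render()}"


@dataclass
class Sentence:
    subject: NounPhrase
    predicate: VerbPhrase

    def render(self) -> str:
        return f"{self.subject.render()} {self.predicate.render()}"


# Noun phrases nest inside verb phrases, which nest inside noun phrases, and so on:
house = NounPhrase("the house")
malt = NounPhrase("the malt", VerbPhrase("lay in", house))
rat = NounPhrase("the rat", VerbPhrase("ate", malt))
cat = NounPhrase("the cat", VerbPhrase("killed", rat))

sentence = Sentence(NounPhrase("this"), VerbPhrase("is", cat))
print(sentence.render())
# this is the cat that killed the rat that ate the malt that lay in the house
```

(The final "that Jack built" clause is left out because it's an object relative clause, which this toy subject-relative grammar doesn't handle - but the point is only that two nestable building blocks already give you unboundedly many sentences.)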

By itself, though, this isn't enough for humans to process complicated thoughts. Our memories are limited - in fact, most people can only hold 4-7 items in working memory. The next step is to encapsulate certain properties so that we can reason about them holistically - let's call this abstraction. You are a collection of atoms; if we abstract up from that, you're an arrangement of cells; upwards further, and you're a bundle of organs and tissues; upwards again, and you're a person; above that, you're a member of the human species; and even more generally, you're a living being. All of these claims are true, but they're descriptions of reality at different levels. In general, abstraction loses information, but makes certain types of analysis much easier. In game theory, for instance, we abstract away from the actual identities of the players, and treat them as rational agents with fixed goals. Interestingly, when children learn a language, they automatically associate different words with different levels of abstraction, based on the words they already know - for example, if they have already learned 'dog', then when you point at a dog and say 'mammal', they're predisposed to think you're making a generalisation, not just giving them a synonym for 'dog'.

Abstractions don't just have to be broader categories of physical things; they can also identify shared properties or characteristics - such as "love" or "democracy". Once we have those concepts, we can form thoughts and sentences about them in the same compositional way we do with tangible objects. Of course, reasoning about dogs is easier than reasoning about love. Fortunately, we have a tool to make abstract reasoning easier, which I'll call metaphorical thought. Metaphors transport relationships or properties up layers of abstraction, so that you can understand the claim that "love is hell" without needing to define it in terms of any physical lovers being in any physical Hell. Note that while some systems in the brain (such as the visual system) use a strict hierarchy of abstraction, in general we are able to flexibly apply abstracted properties to other domains whenever it's useful. People are made out of cells, but when we imagine cells it's sometimes useful to anthropomorphise them as using their abilities (e.g. phagocytosis) to achieve certain goals (defeating pathogens), just like people do.

One more mental ability seems very important: the ability to imagine and then extrapolate situations, particularly those involving other people. Sometimes we do this using implicit rules, like "People get angry if you hit them". In unusual cases, we may need to visualise and manipulate mental images (e.g. to figure out if two items will fit together), or pretend that we are in another person's position (to figure out how they'd respond). More generally, we can use our imaginations to flesh out ideas by generating details and context. Just a few words - like "snakes on a trampoline" - are enough to trigger a cascade of associations and connotations which we automatically combine to envisage several ways that scenario might play out. ("Association" probably deserves to be considered a separate cognitive skill, but let's lump it in with imagination for now). We're good enough at this sort of creativity that people can come up with totally fictitious justifications for their actions on the fly (as shown in studies of split-brain patients). In a sense, imagination is the opposite of abstraction: the latter goes from examples to concepts, the former from concepts to examples. And not just any examples, but ones which are typical and relevant - otherwise we'd have to fight our way through endless edge cases to predict any scenario.

Out of the three traits I've just discussed, human imagination seems to be the one in which our difference from animals is less qualitative and more quantitative. I'd guess that carnivores which stalk their prey are imagining possible reactions to some extent; and animals are definitely able to learn associations between phenomena that aren't intrinsically related (like a bell ringing and food arriving). Even neural networks are already able to generate novel outputs, such as photorealistic human faces which are different from all of their training examples. On the other hand, we can imagine a much wider range of scenarios than animals can - including ones which are displaced in time or space, which are entirely fictional, or which represent abstract concepts. The ability to use language to specify a story is undoubtedly helpful; another part of it, I'd speculate, is our ability to abstract away from individual stories to get a broader narrative.

Beyond human intelligence

What would it take to make humans significantly more intelligent than we currently are? Broadly speaking, there are two possible answers. The first is that making our brains bigger and faster would suffice. The second is that we would need to gain cognitive abilities which we're currently entirely lacking, and without which we simply couldn't progress much further - in the same way that a dog couldn't learn rocket science even if its memory and thinking speed had been dramatically enhanced. Personally, I'd guess that bigger brains would increase our intelligence quite a lot, before we hit a threshold at which we'd need new cognitive abilities. We know that evolution applied a great deal of pressure to increase the size of human brains - to the extent that babies are born years before they would be ready to leave the womb (by other animals' standards), and death or injury during childbirth is common - simply so that their heads can grow as large as possible. These are severe costs which would not have been borne unless the gradient of intelligence increase with respect to brain size were still fairly high. But unless we think that the cognitive abilities we possess are the only ones required for arbitrarily high intelligence - a rather anthropocentric assumption - that gradient won't remain high forever. Eventually, we'd need to improve the underlying cognitive architecture.

We can't do this in humans, but we can in AIs. The argument in the paragraph above suggests that if we wanted a roughly human-level AI which already possessed our three key cognitive competencies to become smarter quickly, then giving it a "bigger brain" (faster processors, more memory) would probably work. Once the AI was smarter, it would be able to design even better hardware in a positive feedback loop. But this would be a relatively slow process: given the complexities of manufacturing processors, and the delays from having humans involved, it would take months per iteration. If we wanted it to become much smarter very quickly, it'd need fundamentally different cognitive abilities. Here are three which I think are important:
  • The ability to absorb information quickly
  • The ability to easily change its cognitive architecture
  • The ability to introspect accurately and in detail

The first is something which humans are very bad at. The bandwidth at which we can acquire new knowledge is very low - we can read or listen at perhaps a few hundred words a minute, and it takes decades to teach children most of what adults know. By contrast, computers can transfer gigabytes of information per second. However, that doesn't mean that an AI can understand that information as quickly. A neural network, for example, might spend seconds downloading a dataset which then takes days to train on, whereas a symbolic reasoner which stores knowledge as lists of properties might be able to "internalise" new information as soon as it's been downloaded, simply by appending it to existing lists. Of course, that information might need to be in a specialised format which takes a lot of computation to produce - but even in that case, once it's been processed by one AI, every other instance would be able to learn it immediately. On the other hand, uncritically accepting new information is probably not a good strategy even if it were possible - instead, you'd want to spend time making sure the new information fits with previous knowledge, and extrapolating to new conclusions wherever possible.
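
As a rough sketch of that contrast (again my own illustration in Python; the class and property names are invented), here's what "internalising by appending" might look like, with a basic consistency check rather than uncritical acceptance:

```python
# A toy symbolic knowledge store (my own sketch; the names are invented).
# New facts arrive as (entity, property, value) triples, already in the
# right format, and are merged in immediately - unless they contradict
# something the store already believes.

from typing import Dict, List, Tuple


class PropertyListKB:
    def __init__(self) -> None:
        # entity -> property -> value, e.g. "dog" -> "class" -> "mammal"
        self.facts: Dict[str, Dict[str, str]] = {}

    def assert_fact(self, entity: str, prop: str, value: str) -> None:
        self.facts.setdefault(entity, {})[prop] = value

    def absorb(self, batch: List[Tuple[str, str, str]]) -> List[Tuple[str, str, str, str]]:
        """Merge a pre-processed batch of facts; return any contradictions."""
        conflicts = []
        for entity, prop, value in batch:
            existing = self.facts.get(entity, {}).get(prop)
            if existing is not None and existing != value:
                # Don't accept it uncritically - flag it for further scrutiny.
                conflicts.append((entity, prop, value, existing))
            else:
                self.assert_fact(entity, prop, value)
        return conflicts


kb = PropertyListKB()
kb.assert_fact("dog", "class", "mammal")

# A batch "downloaded" from another instance that has already done the
# expensive processing; internalising it is just a merge.
batch = [("dog", "diet", "omnivore"), ("dog", "class", "reptile")]
print(kb.absorb(batch))  # -> [('dog', 'class', 'reptile', 'mammal')]
```

The details don't matter; the point is just that merging pre-processed facts is nearly free, whereas a neural network has to pay for every new piece of knowledge with more training.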

Of course, it's very unlikely that the first human-level artificial general intelligence (AGI) we build will have anywhere near optimal knowledge representation, or indeed near-optimal anything. How quickly it improves from that point depends on how easy it is for us - or the AI itself - to improve its architecture and reasoning process. The fastest growth in the short term relies on improvements to software - at first mostly by human engineers, but later mostly by the AI itself, since eventually it'd be much more intelligent than any team of humans. If you have trouble picturing what sort of possibilities that would allow, one rough heuristic I use is to imagine many smart humans working together over a long period of time - for example, a thousand top scientists over two centuries. Since we can trace many of the biggest intellectual breakthroughs to a small number of people, this is pretty similar to picturing the technological advances humanity as a whole will make within the next few centuries, which I think will be pretty substantial (more on that in my follow-up essay; note, though, that the extent to which that progress relies on physical experimentation and better hardware is an open question).

However, improving an AI to that stage might take a long time. If we asked even a relatively smart human to significantly improve a state-of-the-art AI by themselves, they'd have a pretty difficult time. There are thousands of very competent AI researchers contributing new algorithms and ideas at a rate that no individual could hope to match. So will the role of self-improvement only become important when AIs are already significantly smarter than humans? That depends on what advantages the AI has over humans. Even if it's around human-level at most tasks, it may be particularly good at inspecting its own reasoning process, for the same reason that it's much easier for you to figure out what you're thinking than what another person is. Granted, you don't have access to that person's entire "source code", but there's still a significant difference between even well-informed reasoning about thought processes and actually experiencing those thought processes. And the introspective transparency of an AI could potentially dwarf that of humans. We have pretty minimal access to what our brains are doing behind the scenes during quick, reflexive System 1 tasks such as free association. When System 2 is engaged, we have slightly more of a clue, but almost all of the processing is still unconscious - consider how people can talk fluently about complex topics without planning their sentences in advance. If an AI could do complicated tasks while also inspecting the thought processes used, it could gain a much deeper understanding of where its weaknesses lie, and how to fix them.
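
For a very loose picture of what that kind of introspection might look like (my own sketch in Python, not a claim about how any real system works), imagine each reasoning step being logged with its duration and a self-reported confidence, so that the trace itself can be queried for weak spots - a bit like running a profiler on your own thoughts:

```python
# A loose sketch of introspection as self-profiling: each reasoning step is
# recorded with its duration and a self-reported confidence, and the trace
# can then be queried for the slowest or least confident steps.

import time
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class StepRecord:
    name: str
    seconds: float
    confidence: float
    output: Any


@dataclass
class IntrospectiveReasoner:
    trace: List[StepRecord] = field(default_factory=list)

    def step(self, name: str, fn: Callable[[], Any], confidence: float) -> Any:
        start = time.perf_counter()
        output = fn()
        self.trace.append(StepRecord(name, time.perf_counter() - start, confidence, output))
        return output

    def least_confident(self, n: int = 3) -> List[StepRecord]:
        return sorted(self.trace, key=lambda r: r.confidence)[:n]

    def slowest(self, n: int = 3) -> List[StepRecord]:
        return sorted(self.trace, key=lambda r: r.seconds, reverse=True)[:n]


reasoner = IntrospectiveReasoner()
reasoner.step("recall relevant facts", lambda: ["fact A", "fact B"], confidence=0.9)
reasoner.step("sketch a plan", lambda: "rough plan", confidence=0.4)
print([r.name for r in reasoner.least_confident(1)])  # -> ['sketch a plan']
```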

Thanks to Alex, Beth and Oliver for the discussions which inspired this essay. I'd be very interested in reading more papers or books on these topics, if anyone has any to recommend. In my next post, I'll explore whether even my "simple" model of human intelligence might be very complicated to implement.

Comments

  1. Interesting post, thanks. I updated particularly on two points:
    (i) given that we are hitting the threshold of brain size ourselves, making human brains larger should lead to more intelligence. This is hard for us but easy for our AI models.
    (ii) it is easier for me to introspect on my own thoughts than to infer those of others. The same is likely to be true for the AI too. I am thinking about run-time profilers, and using one of these vs feeling out which parts of a thought process take the longest.

    Re recommendations: have you read The Symbolic Species by Terrence Deacon? I think you may really like it. The book was used heavily in this recent and also thought-provoking paper: https://arxiv.org/pdf/2102.03406.pdf
