Sunday, 22 April 2018

Neurons all the way down

My views on intelligence have shifted recently after listening to an episode of the Rationally Speaking podcast where Dr Herculano-Houzel talked about her book The Human Advantage. She made a number of fascinating points based on her pioneering research on measuring the number of neurons in various species' brains:
  1. The brains of primates have higher neuron density than the brains of other mammals, particularly in the cerebral cortex, which is largely responsible for higher-level abstract thought.
  2. Amongst primates, number of neurons is roughly proportional to body mass.
  3. Apes are the main exception to the latter rule; their diets aren't calorie-rich enough for them to support brains as large as the trend would suggest.
  4. Humans, by contrast, fit right on the trend line: our innovations such as using tools and fire to cook food allowed us to obtain more calories and therefore grow bigger brains than other apes (bipedalism also helped, reducing the energy cost of walking by a factor of 4).
  5. This means that humans have many more cerebral cortex neurons than any other species, even those which have much larger brains than ours like elephants and whales (edit: except for the long-finned pilot whale, which apparently has over twice as many cerebral cortex neurons as we do).
I take these claims to be significant evidence that there are few 'hard' steps between animal-level and human-level intelligence. Firstly, the simplest explanation (that human brains grew fairly normally, without major cognitive innovations) now seems more likely to me. Secondly, if modifying mammalian brain architectures to support human-level intelligence were hard, then we should probably expect to observe other mammals with similar numbers of cerebral cortex neurons to ours, but which don't have the right brain architectures to use that capacity and so aren't as intelligent as us. As noted in point 5, though, that's not the case. Thirdly, other primates - even apes - don't seem to be much smarter than elephants or dolphins, and so the main effect of the "primate advantage" described in point 1 is probably more about making neuron-dense brains physically viable than about making fundamental changes to the mammalian cognitive architecture.

The podcast also reminded me of the well-known observation that AI researchers used to expect tasks involving abstract thought to be hard, and tasks involving primitive brain functions to be easy, and it turned out to be the other way around. I really should have taken that idea more seriously before now in considering the difficulty of implementing general intelligence.

In conclusion, I now expect building an animal-level AI to take longer than I previously thought; I expect that AI to go from animal level to human level much more quickly than I previously thought; and I think it more likely than before that we'll do both using neural networks.

Book review: 23 things they don't tell you about capitalism

Right from the title of his book 23 things they don't tell you about capitalism, it's clear that Ha-Joon Chang is pitting himself against the establishment. In doing so, he lays the groundwork for both the book's significant insights and its serious flaws. Chang's arguments are divided into 23 chapters, covering a range of economic issues. Each chapter starts off with two headings: "What they tell you" and "What they don't tell you". "They" refers to neo-liberal economists peddling "the free-market ideology that has ruled the world since the 1980s". Chang is not shy about setting himself up in opposition: in the introduction, he claims that the result of this ideology has been "the polar opposite of what was promised". Instead, he wants to push everything leftwards - still capitalist, but under the supervision of a bigger and more active government which intervenes to promote some sectors and impede others.

Let's start with four of Chang's lines of argumentation which I found convincing. The first is that we shouldn't promote the idea of selfishness as an acceptable goal or even a moral good, as discussed in Thing 2 (Companies should not be run in the interest of their owners) and Thing 5 (Assume the worst about people and you get the worst). I think that examining the value of economic ideas in terms of their effects on cultural mindsets is underrated, and am glad that Chang and others are pushing back on this. The second is the argument that the neoliberal approach to international development has failed, discussed in Thing 7 (Free-market policies rarely make poor countries rich), Thing 11 (Africa is not destined for underdevelopment) and Thing 15 (People in poor countries are more entrepreneurial than people in rich countries). This is a much more complicated question than I'm able to properly evaluate, but as far as I can tell, these are sound and reasonable arguments which deserve serious consideration (and are elaborated upon further in his book Kicking Away the Ladder, where he argues that America and the UK became rich by using the very trade barriers they now rail against). Thirdly, he claims in Thing 22 (Financial markets need to become less, not more, efficient) that innovations in financial markets are, on the margin, doing more harm than good. I've seen this argument floated before, particularly with reference to high-frequency trading, and have yet to see a convincing rebuttal. Lastly, he offers a defense of job security and the welfare state, discussed in Thing 10 (The US does not have the highest living standard in the world), Thing 20 (Equality of opportunity may not be fair), and Thing 21 (Big government makes people more open to change). These lines of argument are fairly common, but worth reading another take on. 
In Thing 21, there's a nice analogy between welfare for employees and bankruptcy laws for employers: both are measures which encourage risk-taking by reducing the possible worst-case loss.

Yet that last chapter also showcases one of the book's main failings. Thing 21's title is about the benefits of big government, but its content is only about the welfare state. I'm happy to grant that social safety nets can be beneficial for job mobility, while still strongly believing that increased regulation and state-sector employment have the exact opposite effect. Perhaps Chang's conflation of big government with the former alone is an honest mistake, but if so it's one of several cases in the book where good arguments are used to imply bad conclusions. Phrasing his points as 23 challenges to conventional wisdom seems powerful, but disguises the fact that quite a few of them fail to support his overall anti-establishment, pro-government stance, and some actively undermine it.

A good example of this is Thing 3 (Most people in rich countries are paid more than they should be). According to Chang, the conventional wisdom is the following: "In a market economy, people are rewarded according to their productivity. Bleeding-heart liberals may find it difficult to accept that a Swede gets paid fifty times what an Indian gets paid for the same job, but that is a reflection of their relative productivities." However, he argues, it's largely artificial barriers to free movement which perpetuate income differences. This point is, in my mind, entirely correct: increasing international mobility is just about the best thing we could do to help those in poor countries. Yet I doubt you'd find any leading economist who'd deny that international borders are a huge contributor to international inequality. In fact, the Open Borders movement is driven disproportionately by the libertarian right, the strongest supporters of the free markets which Chang rails against elsewhere.

Similarly, Thing 17 (More education in itself is not going to make a country richer) is spot-on in its analysis - but Chang conveniently leaves out the fact that overeducation is perpetuated by massive government subsidies for universities. Meanwhile in Thing 18 (What is good for General Motors is not necessarily good for the US), he argues that regulation can be a force for good and claims that GM "should have been forced to invest in the technologies and machines needed to build better cars, instead of lobbying for protection". The missing link, of course, is the question of whether, even if good regulations are possible, they will ever be consistently implemented. What the example of GM actually suggests is that if regulation is on the cards, lobbyists will more likely than not manage to twist it into the harmful sort.

A second issue is the fallacy of grey: the idea that because there's no black and white answer to a question, we can't sensibly choose between the remaining options. This is particularly prevalent in Chang's discussion of Thing 1 (There is no such thing as a free market), where he argues that all markets have regulations - such as those against child labour, pollution, unapproved medicines, practicing medicine or law without a license, and the sale of votes, organs, or courtroom decisions - which are determined by culture and politics, and so the goal of reducing government interference is an illusory one. But firstly, the mere fact that these regulations exist doesn't make them a good idea: many libertarians would argue that occupational licensing and obligatory pharmaceutical testing, for example, should be repealed. Even apart from that, I think Chang's point is rather misguided: the real world is complicated, of course, but broadly freer markets can be well worth striving for even if there's no platonic ideal of a totally free market (are the benefits of freeing slaves illusory just because we can't define what "perfect individual freedom" means?). Thing 19 (Despite the fall of communism, we are still living in planned economies) falls prey to the same fallacy.

Other arguments that Chang makes, which I have fewer strong opinions about: that inflation isn't nearly as bad as it's made out to be; that where multinationals are based affects their behaviour significantly; that manufacturing is underrated and knowledge-based services like banking and consulting are overrated (although he skims over the most scalable ones, like software and entertainment); that governments have an important role in "picking winners" (historically true in research at least, unclear elsewhere); that trickle-down economics doesn't work (tying in to the much bigger debate about inequality overall); that CEOs are overpaid (probably true, but also probably of mainly symbolic importance); that labour-saving devices like the washing machine have changed the world more than the internet (plausible in general, but false when it comes to washing machines in particular, according to the author of 50 Inventions that Shaped the Modern Economy: apparently people haven't saved time overall, because they wash clothes way more frequently now); and that government regulation is good at restricting choices and thus reducing the complexity of businesses' problems (an interesting idea in theory, but in reality adding a morass of legislation probably makes the decision landscape even more complex).

The book ends on an ironic note with Thing 23 (Good economic policy does not require good economists). Chang points out that the miraculous growth of the Asian economies was led by engineers, lawyers, generals, and practically everyone except actual economists. That's a thought-provoking fact, but upon reflection I'm not sure this chapter would actually be controversial amongst economists. After all, almost all economists would agree that getting rid of crony capitalism and pork-barrel subsidies is good, increasing immigration is also good, taxing negative externalities like pollution is good, and increasing trade barriers is bad. The fact that these ideas aren't implemented is not due to lack of expertise, but rather to lack of political will. In other words, coming up with ideas which work is much easier than coming up with ideas which work within the constraints of the current political orthodoxy. Perhaps, in the short term, we need economists mainly for the latter; and in the long term, we need them to change the overall political orthodoxy in the right direction. (My essay on conflict theory vs mistake theory is also relevant here).

Overall, this book is a worthwhile read, and identifies a number of important and relevant ideas. I think it would have been better off without its ideological slant - perhaps as "23 things you didn't know about capitalism"? - and with acknowledgement that the left-right divide is a rather limited explanatory tool. At the same time, Chang's facts are interesting even when they are rhetorically misused; and my existing political views make me interpret his arguments more harshly than most would. I now know a few more things about capitalism - which is what he promised, after all.

Wednesday, 18 April 2018

A pragmatic approach to interpreting moral claims

Epistemic status: moderately confident in the core ideas, but not confident in my portrayal of existing approaches, since neither meta-ethics nor linguistics is a focus of mine. This is meant to be primarily explicatory, not rigorously persuasive.

One important question in meta-ethics is how to interpret the meanings of moral claims. This is closely related to but distinct from the question of whether there are any objectively true moral facts (i.e. moral realism vs moral anti-realism). If it were common knowledge that moral realism were true, then we should obviously interpret moral claims as referring to objective moral facts. But anti-realism is consistent with a number of ways of interpreting moral claims, such as those offered by emotivism, subjectivism, and error theory. Given that there is no philosophical consensus in favour of either realism or anti-realism, it seems to me that we should interpret moral claims in a way that allows coherent conversations about ethics to occur between people with entirely different meta-ethical views. By ‘coherent’ I mean that we interpret each side to be a) talking about something at least roughly similar to what they think they’re talking about, and b) talking about the same thing. Without these two criteria, ethics would be held hostage to meta-ethics: a subjectivist who heard a realist give a convincing argument for allowing euthanasia would either have to judge the entire argument to be false - since the realist intended their claims to refer to objective moral facts - or else inform the realist that even if they were right, they weren’t actually talking about the things they thought they were.

As an analogy, consider two ways we could interpret a scientific claim like "All matter is made out of quarks and leptons", an implication of the standard model of particle physics. The first possible reading is that the claim is meant to be literally true. However, under this reading anyone who makes that scientific claim is implicitly endorsing some version of scientific realism. Whether or not scientific realism is true, there are plenty of scientists who don't believe in it, and it's rather pointless to insist that what they mean contradicts what they actually believe. If we want to be able to talk about science without dragging in metaphysical commitments, we should use a second interpretation, under which "All matter is made out of quarks and leptons" would be taken to mean something like "The standard model of particle physics models all matter as being made out of quarks and leptons; also, the standard model is the best theory of matter we've got, in terms of its virtues as a scientific theory (like agreeing with the evidence, and being simple)." Note that the meaning we’ve read into the original statement is not the meaning a scientific realist would have intended, but it’s not very far off - to get there, we just need to add “and we have good reasons to believe that our best theories are literally true”. But by leaving off this last, implicit, clause, we get statements which can be agreed upon even by scientists who disagree about whether our best theories are literally true. Since such scientists do in fact agree about object-level claims on topics like the composition of matter, using an interpretation like the second seems to be the more sensible approach.

I think that this example is illustrative of how we should understand moral claims. Roughly speaking, the "literal truth" interpretation corresponds to the standard moral realist position that moral claims are truth-apt, with some being true and some being false. The second interpretation corresponds to my preferred way of characterising moral claims from an anti-realist perspective. When I say "Murder is bad", what I mean is that my system of ethics condemns murder; and also, that I endorse this system due to its virtues as a moral system (like agreeing with common intuitions, and being simple).

I like this approach because it combines the strengths of a number of other positions. As in moral realism, there is a component of moral claims which is truth-apt: the subclaim that my preferred theory condemns murder (a similar sort of claim to “these axioms imply this theorem”). As in emotivism, there is a component of moral claims which expresses an attitude: my endorsement of that theory. As in error theory, there need not be any moral facts which are objectively true. And under this approach, ethicists can meaningfully debate about morality even if their meta-ethical views differ, by implicitly agreeing upon a common framework for that conversation. Perhaps they both accept utilitarianism, but one prefers person-affecting views, the other non-person-affecting. Then their discussion would take place in the context of a certain (incomplete) system of ethics, trying to discover which extension of it is better. If the two ethicists find they have different intuitions, they can go down a level and attempt to persuade the other to renounce those intuitions, using other intuitions which they do share. If they have fundamentally different ideas about the virtues required from a moral system, they might not be able to have a direct conversation about what is good or bad, because they could only form a minimal shared framework. But even then, they could still have a meaningful and useful conversation by making temporary concessions or assumptions, in order to determine which conclusions follow from which premises. This is roughly similar to the way that you can do mathematics without believing the axioms, or even having an opinion on what it would mean for the axioms to be true. In my mind, the role of ethicists is not to directly discover moral truths, but rather to explicate which intuitions imply which claims, and which theories have which properties. 
Fortunately, many of us have similar moral intuitions, so in practice ethicists only need to concentrate on a fairly small class of theories.

Monday, 16 April 2018

European history in 100 words

Recently I've visited a number of Spanish cathedrals, which are some of the most absurdly ornate buildings I've ever seen (apparently, statues of Jesus aren't regal enough unless both his cross and his crown of thorns are inlaid with gold). They're a poignant reminder that Spain was once the wealthiest and most powerful country in Europe. This made me think about how, if you wanted to compress European history into only a few dozen words, probably the best way to do it would be to list which was the most influential and/or dominant European power during which time periods. This is inherently subjective and misses a lot, but is still a fun exercise. I also think that pithy frameworks which allow you to have basic reference points for a whole area of knowledge are underrated (for a similarly broad framework for machine learning, see point 1 in this essay). So here's my crack at it:

Roman Empire - from 27 BC, formation of the Empire.
Byzantine Empire - from 476, fall of the Western Empire.
Holy Roman Empire - from 962, (re)formation under Otto.
Italian city-states - from 1250, defeat and death of Frederick II.
Ottoman Empire - from 1453, conquest of Constantinople.
Spain - from 1529, Ottoman siege of Vienna defeated.
France - from 1648, victory in the wars of religion and Peace of Westphalia.
Britain - from 1763, victory in Seven Years' War.
Germany - from 1871, unification.
Britain - from 1918, end of WW1.
USSR - from 1949, creation of nuclear weapons.
Germany - from 1989, fall of the Berlin wall and reunification.

  • I've ignored turns in fortune which only lasted a few years, e.g. temporary strategic gains during wars. Otherwise France under Napoleon and Germany under Hitler would feature.
  • The most dubious inclusion is the Italian city-states. But the list feels incomplete without them, given the Lombard League's defeat of the Holy Roman Empire under Frederick II, plus the spiritual authority wielded by the Pope, plus the general economic and cultural flourishing of the area during that period.
  • 1529 is a decade after the Habsburg unification under Charles V, and also around when Spain started getting significant income from its South American colonies.
  • France's ascendancy over Spain is often dated more specifically to the Battle of Rocroi in 1643, an important symbolic defeat for the Spanish tercio military units; or else to the Treaty of the Pyrenees in 1659. I've gone with the more broadly significant Westphalian treaties.
  • The end of the Seven Years' War is an easy line to draw between periods of French and English dominance. But two other factors around the same time would have ensured English supremacy regardless: the conquest of India, and the industrial revolution.
  • I put 1949 because with nuclear weapons, the USSR was undeniably the world’s second superpower. The formation of NATO in the same year indicates how threatened the UK felt. And only a few years beforehand, the British Empire had lost "the jewel in its crown", India.
  • To convert this to an equivalent history of the world, just replace everything between Rome and Spain with China, then replace everything after WW1 with America.

Thursday, 12 April 2018

Topics on my mind: March 2018

This month, I've compiled a list of issues that I've been wrong about.

International development. I'm pretty libertarian on domestic issues, and so I automatically assumed the same mindset regarding the international economy: thinking that the "Washington consensus" was a broadly good idea, and that encouraging free trade is the biggest priority. However, I was wrong on that: I underestimated the extent to which Asian growth relied on strong and stable governments, and the extent to which deregulation eroded aspects of African governments necessary for private enterprise. I want to put up a few political blog posts in the next month which explore these ideas further.

Moral realism. As a good reductionist, I used to think that moral realism was entirely incoherent. Then I read Parfit on personal identity, and had conversations with a few friends (notably Paul F-R), and realised that insofar as we can identify what is "rational" and "irrational" without reference to any facts about the physical world, we might be able to extend this to morality. Note that I still think moral realism is misguided and incorrect - but I was wrong to think that there is no defensible version of it.

Atheism. I used to think that atheism was a slam-dunk case, and that arguments like Pascal's wager were stupid. I ignored a nagging feeling of doubt about my attempted rebuttals to the wager (although my recollection of that doubt may well just be hindsight bias). Eventually I found that there are a lot of problems with infinite ethics in general, of which Pascal's wager (and Pascal's mugging) are just specific examples. I also realised that the fine-tuning argument for God is pretty strong. I think it can be addressed by appealing to a multiverse, which is why I'm still an atheist, but that's a pretty unusual argument which it wouldn't be unreasonable to doubt. Of course, even if you did believe that "There is an entity which deliberately created a universe suitable for humans", going from there to "That entity wrote the Bible and wants me to follow specific commandments" is still a bit of a leap.

Progress. I used to think that the scientific and technological progress of the last few decades has improved people's lives in many ways; I'm now skeptical about most of them. I think that my first mistake was being emotionally tied to an overall judgement of modernity as good, and therefore thinking that the most modern period must be the best, instead of explicitly acknowledging that there have been many trends over different timescales, some positive and some negative. But even on the specific question of how useful recent technological progress has been, I implicitly conflated "the last few decades were shaped by many great advances" with "in the last few decades, we've made many great advances". I think the tipping point was reading Tyler Cowen, who argues that America has been undergoing a 'great stagnation' compared with most of the 20th century. Peter Thiel's dismay about software innovation replacing hardware innovation was also an influence. It's true that many people have moved out of extreme poverty over the last few decades, but that's mostly been due to political changes and older advances like the Green Revolution.

The value of university. I've swung back and forth on this a few times. After reading arguments like Bryan Caplan's on how degrees are mostly about signalling, I became convinced that the spike in university attendance over the last few decades was a bad thing. My friend Mahmoud tipped me back a bit, with his idea that optimising for happiness over economic growth is plausibly a good thing, and also something which favours higher university attendance. When finishing my undergrad, I became disillusioned about how many interesting jobs require graduate qualifications. But I've actually learned a huge amount during my master's, and several people I've talked to (notably people trying hard to advance AI safety research) are convinced that PhDs are valuable preparation for doing good research. Again, I guess I need to stifle my instinct to generate binary good/bad classifications.

Wednesday, 11 April 2018

Implementations of immortality

(Note: this essay on designing utopias by Eliezer Yudkowsky is still the best I've read on the topic, and was a major inspiration for this piece.)

I was recently talking to my friend Julio about what key features society would need for people to be happy throughout arbitrarily-long lives - in other words, what would a utopia for immortals look like? He said that it would require continual novelty. But I think that by itself, novelty is under-specified. Almost every game of Go ever played is novel and unique, but eventually playing Go would get boring. Then you could try chess, I suppose - but at a certain point the whole concept of board games would become tiresome. I don't think bouncing around between many types of different activities would be much better, in the long run. Rather, the sort of novelty that's most desirable is a change of perspective, such that you find meaning in things you didn't appreciate before. That interpretation of novelty is actually fairly similar to my answer; I said that the most important requirement is a feeling of progress. By this I mean:
  • Your past isn't being lost as it recedes from you.
  • Your future will be better than your past - in qualitative ways as well as quantitative.
  • You receive increasing social recognition for your achievements.
  • You feel like you are continually growing as a person.

Some of these criteria are amenable to technical solutions - for example, memory enhancements would be very helpful for the first. But simply abiding by the basic principles of liberalism makes others very difficult to universalise. As long as we allow people to make their own choices, there will be some people who end up falling into addiction, or lethargy, or self-destructive spirals. We could theoretically make this rarer by having stronger social or legal norms, so that people still have freedom, but not total freedom. We could also make the consequences of failure less unpleasant (e.g. with welfare systems) and less permanent (e.g. by eradicating the most addictive drugs). Yet even then, the social repercussions of being unsuccessful would still weigh on people heavily - man does not live by bread alone, but by the status judgements of his peers.

Fortunately, perceived social status is not zero-sum, because different subcultures value different things. That helps many more people receive social recognition for their achievements; ideally everyone would find a Dunbar-sized community in which they can distinguish themselves. Sure, some will be envious of other communities, but I think abstract concerns like those would mostly be outweighed by their tangible activities and relationships. For example (although I don't have good data on this) anecdotally it seems that modern homeless communities can often be tighter-knit and more supportive than well-off suburbs.

But splintering society into fragments doesn't solve the question of what the overarching cultural framework should look like - and we do need one. Firstly because subcultures usually need something to define themselves in opposition to; also because moving between subcultures would be much more difficult if they didn't share fundamental tenets. Yet now we're back to the problem of what general society should prize and reward in order for almost everyone to feel like their lives are valuable and progressing towards even more value.

Here's a slightly unusual solution, which embraces Eliezer's recommendation that utopias should be weird: we should strictly stratify society by age. This doesn't mean that people of different ages must isolate themselves from each other (although some will), but rather that:
  • Older people are respected greatly simply by virtue of their age.
  • Access to some prestigious communities or social groups is age-restricted.
  • There are strong norms (or even rules?) about which types of activities one should do at a given age.
Age hierarchies are not a new idea; they've been the norm throughout human history, and were only relatively recently discarded from western culture. Granted, that was for good reasons, like the fact that they tend to hold back social and technological progress. But our challenge here is to come up with a steady-state culture which can provide lasting happiness, not one which maximises speedy progress. Age hierarchies are the one sort of hierarchy in which everyone gets to advance arbitrarily far upwards. They also tap into fundamental aspects of human motivation. Video games are addictive because you can keep unlocking new content or "levelling up". But they're also frustrating because that progress isn't grounded in anything except a counter on the screen. Games like Starcraft and League of Legends instead provide satisfaction through victory over others - but that's zero-sum, which we don't want. The third class of games which people spend most time on - MMORPGs such as World of Warcraft or EVE - augment the experience of gaining resources and levelling up with a community in which high-achieving players are respected. In a long-lived society with age hierarchies, people could always look forward to "unlocking new content" based on their age. Even people who aren't very high-status within their own age group could find respect amongst younger groups; and as long as people continued having children, your relative status in society would always increase.

Note that this doesn't imply that children and young adults would have bad lives. Firstly, despite being respected the least, they also experience the greatest sense of novelty and opportunity. Even if they feel like they're missing out on some opportunities, they can be consoled by the thought that their time will come. And they probably won't be very upset about "missing out" anyway - the experiences which older people prize highly are often not those which young people envy. Teens don't feel like their lives are worse because they don't yet have children and go to nightclubs not cocktail parties. Champagne-sipping parents, on the other hand, usually think their lives have become deeper and richer since their teenage years; neither side is unhappy with their lot. A more speculative example which comes to mind is the traditional progression through stages of enlightenment in Buddhism, where you simply don't understand what you're missing out on until you're no longer missing it. To take this to an extreme, access to the next stage of society could depend on learning a new language or framework of knowledge, in which you can discuss ideas you couldn't even conceptualise before.

Stepping back from the weirder implementations, how might longevity impact close relationships? There's a story I remember reading about a world in which people live arbitrarily long; every few decades, though, they simply leave their entire social circle, move away, and start afresh. This seems like a reasonable way to keep a sense of excitement and novelty alive. However, Julio argues that if you live for a very long time, and become close to many people, then you'll eventually stop thinking of them as unique, valuable individuals. I do think that there are enough possible ways to have relationships, and enough variation between people, to last many, many lifetimes, so that you can continually be surprised and grateful, and develop as a person. But I concede that left to themselves, people wouldn't necessarily seek out this variety - they might just decide on a type, and stick to it. I think that age stratification helps solve this problem too. Perhaps there can be expectations for how to conduct relationships based on your age bracket: at some points cultivating a few deep friendships, at others being a social butterfly; sometimes monogamous, sometimes polyamorous; sometimes dating people similar to you, sometimes people totally different; sometimes staying within your age group, and sometimes spending time with people who are much older or younger and have totally different perspectives. In our world, people would shrink from this - but I imagine that a very long-lived society would have a culture more open to trying new things.

Lastly, we should remember that the dark side of having strong social norms is enforcement of those norms. To some extent this can be avoided by creating a mythos which people buy into. Cultural narratives affect people on such a deep level that many never question the core tenets (like, in the west, the value of individualism). It would also help if there were separate communities which people could join as an act of rebellion, instead of wreaking havoc in their original one. But in general, creating norms such that even people who challenge those norms do so in non-destructive ways seems like a very difficult problem. We're basically trying to find stable, low-entropy configurations for a chaotic system (as opposed to stable high-entropy configurations, such as total collapse). Even worse, the system is self-referential - individuals within it can reason about the system as a whole, and some will then try to subvert it. There's much more which needs to be figured out, including entirely new fields of research - but nobody ever said designing a utopia would be easy.

Sunday, 8 April 2018

How hard is implementing intelligence?

Is implementing a model of intelligence like the one which I outlined in my last essay easy or hard? How surprised should we be if we learn that it won't be achieved in the next 200 years? My friend Alex and I have very different priors for these questions. He's a mathematician, and constantly sees the most intelligent people in the world bashing their minds against problems which are simple and natural to pose, but whose solutions teeter on the edge of human ability (e.g. Fermat's last theorem), and where any tiny error can invalidate a proof. Many took hundreds of years to solve, or are still open.

I'm a computer scientist, and am therefore based in a field which has blossomed in less than a century. It's a field where theoretical problems naturally cluster into complexity classes which are tightly linked, so that solutions to one problem can easily be transformed into solutions for others. We can alter Turing machines in many ways - adding extra tapes, making those tapes infinite in both directions, allowing non-determinism - without changing the range of algorithms they can implement at all. And even in cases where we can't find exact solutions (including most of machine learning) we can often approximate them fairly well. My instincts say that if we clarify the conceptual issues around abilities like abstraction and composition, then the actual implementation should be relatively easy.

Of course, these perspectives are both quite biased. The study of maths is so old that all the easy problems have been solved, and so of course the rest are at the limits of human ability. Conversely, computer science is so new a field that few problems have been solved except the easy ones, and so we haven't gotten into the really messy bits. But our perspectives also reflect underlying differences in the fields. Maths is simply much more rigorous than computer science. Proofs are, by and large, evaluated using a binary metric: valid or not. There are often many ways to present a proof, but they don't alter that fundamental property. Machine learning algorithms, by contrast, can improve on the previous best by arbitrarily small gradations, and often require many different hyperparameters and implementation choices which subtly change the performance. So improving them via experimentation is much easier. That's also a drawback, since the messier a domain is, the more difficult it is to make a crisp conceptual advance.

I'm really not sure how our expectations about the rate of progress towards AGI should be affected by these two properties. I do think that significant conceptual advances are required before we get anywhere near AGI, and I can imagine machine learning instead getting bogged down for decades on incrementally improving neural network architectures. We can't assume that better scores on standard benchmarks demonstrate long-term potential - in fact the head of Oxford's CS department, Michael Wooldridge, thinks there's been very little progress towards AGI (as opposed to narrow AI) in the last decade. Meanwhile, theoretical physics has been in a rut for thirty years according to some physicists, who blame top researchers unsuccessfully plugging away at string theory without reevaluating the assumptions behind it. On the other hand, there's an important way in which the two cases aren't analogous: deep learning is fundamentally driven by improving performance, whereas string theory is essentially untestable. And the historical record is pretty clear: given the choice between high-minded armchair theorising and hypotheses informed by empirical investigation, bet on the latter (Einstein is the most notable exception, but still definitely an exception).

What other evidence can we marshal, one way or the other? We might think that the fact that evolution managed to make humans intelligent is proof that it's not so hard. But here we're stymied by anthropic reasoning. We can't use our own existence to distinguish between intelligence being so hard that it only evolved once, or so easy that it has evolved billions of times. So we have evidence against the two extremes - that it's so difficult intelligence is unlikely to arise on any planet, and that it's so easy intelligence should have evolved several times on Earth already - but can't really distinguish anything in the middle. (Also, if there is an infinite multiverse, then the upper bound on difficulty basically vanishes.)

We could instead identify specific steps in the evolution of humans and estimate their difficulty based on how long they took, but here we run into anthropic considerations again. For example, we have evidence that life evolved only a few hundred million years after the Earth itself formed 4.5 billion years ago, which suggests that it was a relatively easy step. However, intelligent species would only ever be formed by a series of steps which took less time than the habitable lifetime of their planet. On Earth, temperatures are predicted to rise sharply in about a billion years and render animal life impossible within the following few hundred million years. Let's round this off to a 6 billion year habitable period, which we're 3/4 of the way through. Then even if the average time required for the formation of life on earth-like planets were 100 billion years, on planets which produced intelligent life within 6 billion years the average would be much lower.
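The selection effect here can be made concrete with a toy Monte Carlo sketch. The exponential waiting-time model and the 100-billion-year mean below are illustrative assumptions, not estimates of the real figures:

```python
import random

random.seed(0)
MEAN_TIME = 100.0   # assumed average time (billions of years) for life to arise
HABITABLE = 6.0     # assumed habitable window of an Earth-like planet

# Model the formation of life as an exponential waiting time with the assumed mean.
samples = [random.expovariate(1.0 / MEAN_TIME) for _ in range(1_000_000)]

# Condition on the planets where life arose within the habitable window -
# the only planets which could ever host observers like us.
successes = [t for t in samples if t <= HABITABLE]

print(f"fraction of planets where life arises in time: {len(successes)/len(samples):.3f}")
print(f"average time on those planets: {sum(successes)/len(successes):.2f} billion years")
```

On this toy model only about 6% of planets get life in time, but on those that do, the observed average is around 3 billion years - far below the true 100-billion-year mean. So the early appearance of life on Earth is weaker evidence of easiness than it first seems.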

On the other hand, the last common ancestor of humans and chimpanzees lived only 6 million years ago, which is a surprisingly short time given how much smarter we are than apes. So either relatively few evolutionary steps are required to go from ape-level to human-level intelligence, or else evolution progressed unusually quickly during those 6 million years. The latter is plausible because increasing intelligence is a fast way to increase reproductive fitness, both in absolute terms - since using tools and coordinating hunting allowed larger human populations - and relative terms - several theories hold that a main driver of human intelligence was an "intelligence arms race" to outsmart rivals and get to the top of social hierarchies. However, my "simple" model of intelligence also makes me sympathetic to the former. Most mammals seem to have ontologies - to view the world around them as being composed of physical objects with distinct properties. I wouldn't be surprised if implementing ape-level ontologies ended up being the hardest part of building an AGI, and the human ability to reason about abstract objects turned out to be a simple extension of it. That would fit with previous observations that what we think of as high-level thought, like playing chess, is usually much easier to reproduce than low-level capabilities like object recognition, which nature took much longer to hone.

How hard are these low-level capabilities, then? The fact that most vertebrates can recognise and interact with a variety of objects and environments isn't particularly good evidence, since those abilities might just have evolved once and been passed on to all of them. But in fact, many sophisticated behaviours have evolved independently several times, which would be very unlikely if they were particularly difficult steps. Examples include extensive nervous systems and tool use in octopuses (even though the last common ancestor we shared with them was a very primitive wormlike creature), inventive and deceptive behaviour in crows and ravens, and creativity and communication in dolphins. However, we don't know whether these animal behaviours are implemented in ways which have the potential to generalise to human-level intelligence, or whether they use fundamentally different mechanisms which are dead ends in that respect. The fact that they didn't in fact lead to human-level intelligence is some evidence for the latter, but not much: even disregarding cognitive architecture, none of the animals I mentioned above have all the traits which accelerated our own evolution. In particular, birds couldn't support large brains; dolphins would have a lot of trouble evolving appendages well-suited to tool use; and octopuses lack group environments with complicated social interactions. Apart from apes, the only species I can think of which meets all these criteria is elephants - and it turns out they're pretty damn smart. So I doubt that dead-end cognitive architectures are common in mammals (although I really have no idea about octopuses). For a more detailed discussion of ideas from the last few paragraphs, see (Shulman and Bostrom, 2012).

What can we deduce from the structure of the brain? It seems like most of the neocortex, where abstract thought occurs, consists of "cortical columns" with a regular, layered structure. This is further evidence that the jump to human-level intelligence wasn't particularly difficult from an evolutionary standpoint. Another important fact is that the brain is a very messy environment. Neurons depend on the absorption of various nutrients and hormones from the blood; signals between them are transmitted by the Brownian diffusion of chemicals. Meanwhile the whole thing is housed in a skull which is constantly shaking around and occasionally sustains blunt traumas. And we know that in cases of brain damage or birth defects, whole sections of the brain can reorganise themselves to pick up the slack. In short, there's a great deal of error tolerance in how brains work, which is pretty strong evidence that intelligence isn't necessarily fiddly to implement. This suggests that once we figure out roughly what the algorithms behind human intelligence are, fine-tuning them until they actually work will be fairly easy. If anything, we'd want to make our implementations less precise as a form of regularisation (something which I'll discuss in detail in my next literature review).

Could it be the case that those algorithms require very advanced hardware, and our lack of it is what's holding us back from AGI? At first, it seems not: the computational power required for intelligence is bounded by that of brains, which supercomputers already exceed. But there are reasons to think hardware is still a limiting factor. If our progress were bounded mainly by the quality of our algorithms, then we should expect to see "hardware overhangs": cases where we invent algorithms that require much less computing power than what is currently available to us. But in fact it's difficult to think of these cases - most breakthroughs in AI (such as Deep Blue, Watson, AlphaGo and deep learning in general) required new algorithmic methods to be implemented on state-of-the-art hardware. My best guess for how to reconcile these two positions: it would be possible to run an efficient AGI on today's hardware, but it's significantly harder to figure out how to implement AGI efficiently than it is to implement it at all. And since AGI is already pretty hard, we won't be able to actually build one until we have far more processing power than we'd theoretically need - also because the more compute you have, the more prolifically you can experiment. This model predicts that we should now be able to implement much more efficient versions of algorithms invented in previous decades - for example, that the best chess software we could implement today using Deep Blue's hardware would vastly outperform the original. However, serious efforts to make very efficient versions of old algorithms are rare, since compute is so abundant now (even smartphones have an order of magnitude more processing power than Deep Blue did).

Lastly, even if all we need to do is solve the "conceptual problems" I identified above, AGI is probably still a long way away. If the history of philosophy should teach us one thing, it's that clarifying and formalising concepts is hard. And in this case, the formalisations need to be rigorous enough that they can be translated into actual code, which is a pretty serious hurdle. But I don't think these conceptual problems are 200-years-away hard; my guesstimate for the median time until transformative AGI is roughly 1/3 of that. This post and the one before it contain many of the reasons my estimate isn't higher. My other recent post, on skepticism about deep learning, contains many of the reasons my estimate isn't lower. But note that this headline number summarises a probability distribution with high variance. A very rough outline, which I haven't thought through a great deal, is something like 10% within the next 20 years, 20% within the 20 years after that, 20% within the 20 years after that, 20% in the 40 years after that, 20% in the 50 years after that, and 10% even later or never. This spread balances my suspicion that the median should be near the mode, like in a Poisson distribution, with the idea that when we're very uncertain about probabilities, we should have priors which are somewhat scale-invariant, and therefore assign less probability to a k-year period the further away it is.

In conclusion, I have no catchy conclusion - but given the difficulty of the topic, that's probably a good thing. Thanks to Vlad for the argument about the role of hardware in algorithm development, and to Alex again for inspiring this pair of essays. Do get in touch if you have any comments or feedback.

Thursday, 5 April 2018

A model of intelligence

Epistemic status: very speculative, exploratory and messy. I'll try to put up a summary later.

Humans are very good at many cognitive tasks which weren't selected for in the ancestral environment. Notable examples include doing mathematics, designing, building and using very complex tools such as computers, discussing abstract ideas in philosophy, creating complex and detailed narratives set in entirely fictional worlds, and making plans for the long-term future which involve anticipating the behaviour of millions of people. This implies that when we evolved to perform well on simpler versions of these tasks, we didn't just develop specific skills, but rather a type of general intelligence which is qualitatively different to that of animals. This was probably a pretty abrupt shift, since our capacity for higher thought is orders of magnitude above that of any other animal. If our intellectual capabilities were the combination of many small cognitive abilities, we'd expect some other animals to have developed most of them, and be half as good as a human at learning language, or a quarter as good at building tools - something we just don't observe. So it's likely that there are a few core cognitive competencies which underlie human abilities. If I had to guess, I'd say the keys are compositionality, abstraction and imagination.

Compositionality - the ability to combine concepts in novel ways which still make sense - is the bedrock of language and thought. If I teach you a new verb, "to dax", you implicitly know how to combine it with the rest of the English language to form arbitrarily complex sentences which express facts, thoughts or counterfactuals: "I sometimes dax"; "You think that I dax too much"; "If I hadn't daxed as often, you'd be happier." The way language achieves compositionality is by having a recursive structure based mostly on noun phrases and verb phrases: for example, in the sentence "This is the dog that worried the cat that killed the rat that ate the malt that lay in the house that Jack built", starting at any 'the' gives you a complete noun phrase, and starting at any verb gives you a complete verb phrase (e.g. "ate the malt that lay in the house that Jack built") which needs to be preceded by a noun phrase to make a full sentence.

By itself, though, this isn't enough for humans to process complicated thoughts. Our memories are limited - in fact, most people can only hold 4-7 items in working memory. The next step is to encapsulate certain properties so that we can reason about them holistically - let's call this abstraction. You are a collection of atoms; if we abstract up from that, you're an arrangement of cells; upwards further, and you're a bundle of organs and tissues; upwards again, and you're a person; above that, you're a member of the human species; and even more generally, you're a living being. All of these claims are true, but they're descriptions of reality at different levels. In general, abstraction loses information, but makes certain types of analysis much easier. In game theory, for instance, we abstract away from the actual identities of the players, and treat them as rational agents with fixed goals. Interestingly, when children learn a language they automatically associate different words with different levels of abstraction, based on the words they already know - for example, if they have already learned 'dog', when you point at one and say 'mammal', they're predisposed to think you're making a generalisation not just giving them a synonym for 'dog'.

Abstractions don't have to just be broader categories of physical things, but can also identify shared properties or characteristics - such as "love" or "democracy". Once we have those concepts, we can form thoughts and sentences about them in the same compositional way we do with tangible objects. Of course, reasoning about dogs is easier than reasoning about love. Fortunately, we have a tool to make abstract reasoning easier, which I'll call metaphorical thought. Metaphors transport relationships or properties up layers of abstraction, so that you can understand the claim that "love is hell" without needing to define it in terms of any physical lovers being in any physical Hell. Note that while some systems in the brain (such as the visual system) use a strict hierarchy of abstraction, in general we are able to flexibly abstract properties to other domains whenever useful. People are made out of cells, but when we imagine cells it's sometimes useful to anthropomorphise them as using their abilities (e.g. phagocytosis) to achieve certain goals (defeating pathogens), just like people do.

One more mental ability seems very important: the ability to imagine and then extrapolate situations, particularly those involving other people. Sometimes we do this using implicit rules, like "People get angry if you hit them". In unusual cases, we may need to visualise and manipulate mental images (e.g. to figure out if two items will fit together), or pretend that we are in another person's position (to figure out how they'd respond). More generally, we can use our imaginations to flesh out ideas by generating details and context. Just a few words - like "snakes on a trampoline" - are enough to trigger a cascade of associations and connotations which we automatically combine to envisage several ways that scenario might play out. ("Association" probably deserves to be considered a separate cognitive skill, but let's lump it in with imagination for now). We're good enough at this sort of creativity that people can come up with totally fictitious justifications for their actions on the fly (as shown in studies of split-brain patients). In a sense, imagination is the opposite of abstraction: the latter goes from examples to concepts, the former from concepts to examples. And not just any examples, but ones which are typical and relevant - otherwise we'd have to fight our way through endless edge cases to predict any scenario.

Out of the three traits I've just discussed, it seems like human imagination is the one in which our differences from animals are the least qualitative, and instead more quantitative. I'd guess carnivores who stalk their prey are imagining possible reactions to some extent; and animals are definitely able to learn associations between phenomena that aren't intrinsically related (like a bell ringing and food arriving). Even neural networks are already able to generate novel outputs such as photorealistic human faces which are different from all training examples. On the other hand, we can imagine a much wider range of scenarios than animals can - including ones which are displaced in time or space, which are entirely fictional, or which represent abstract concepts. The ability to use language to specify a story is undoubtedly helpful; another part of it, I'd speculate, is our ability to abstract away from individual stories to get a broader narrative.

Beyond human intelligence

What would it take to make humans significantly more intelligent than we currently are? Broadly speaking, there are two possible answers. The first is that making our brains bigger and faster would suffice. The second is that we would need to gain cognitive abilities which we're currently entirely lacking, and without which we simply couldn't progress much further - in the same way that a dog couldn't learn rocket science even if its memory and thinking speed had been dramatically enhanced. Personally, I'd guess that bigger brains would increase our intelligence quite a lot, before we hit a threshold at which we'd need new cognitive abilities. We know that evolution applied a great deal of pressure to increase the size of human brains - to the extent that babies are born years before they would be ready to leave the womb (by other animals' standards), and death or injury during childbirth is common - simply so that their heads can grow as large as possible. These are severe costs which would not have been borne unless the gradient of intelligence increase with respect to brain size were still fairly high. But unless we think that the cognitive abilities we possess are the only ones required for arbitrarily high intelligence - a rather anthropocentric assumption - that gradient won't remain high forever. Eventually, we'd need to improve the underlying cognitive architecture.

We can't do this in humans, but we can in AIs. The argument in the paragraph above suggests that if we wanted a roughly human-level AI which already possessed our three key cognitive competencies to become smarter quickly, then giving it a "bigger brain" (faster processors, more memory) would probably work. Once the AI was smarter, it would be able to design even better hardware in a positive feedback loop. But this would be a relatively slow process: given the complexities of manufacturing processors, and the delays from having humans involved, it would take months per iteration. If we wanted it to become much smarter very quickly, it'd need fundamentally different cognitive abilities. Here are three which I think are important:
  • The ability to absorb information quickly
  • The ability to easily change its cognitive architecture
  • The ability to introspect accurately and in detail

The first is something which humans are very bad at. The bandwidth at which we can acquire new knowledge is very low - we can read or listen at perhaps a few hundred words a minute, and it takes decades to teach children most of what adults know. By contrast, computers can transfer gigabytes of information per second. However, that doesn't mean that an AI can understand that information as quickly. A neural network, for example, might spend seconds downloading a dataset which it takes days to train on. A symbolic reasoner which stores knowledge as lists of properties, on the other hand, might be able to "internalise" new information as soon as it's been downloaded, simply by appending it to existing lists. Of course, that information might need to be in a specialised format which takes a lot of computation to produce - but even in that case, once it's been processed by one AI, every other instance would be able to learn it immediately. On the other hand, uncritically accepting new information is probably not a good strategy even if it were possible - instead, you'd want to spend time making sure the new information fits with previous knowledge, and extrapolating to new conclusions wherever possible.
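As a toy sketch of this contrast (the property-list representation is a deliberately simplistic assumption on my part), "internalising" pre-processed knowledge in a symbolic reasoner could be as cheap as a dictionary merge:

```python
# Toy symbolic knowledge base: each concept maps to a list of properties,
# so absorbing already-processed facts is just appending to those lists.
knowledge = {
    "dog": ["mammal", "four-legged"],
    "raven": ["bird", "tool-user"],
}

def internalise(kb, new_facts):
    """Absorb pre-processed facts immediately, skipping duplicates."""
    for concept, properties in new_facts.items():
        existing = kb.setdefault(concept, [])
        for p in properties:
            if p not in existing:
                existing.append(p)

# A second AI instance can "learn" this as fast as it can be transferred:
internalise(knowledge, {"dog": ["domesticated"], "octopus": ["cephalopod", "tool-user"]})
print(knowledge["dog"])      # ['mammal', 'four-legged', 'domesticated']
print(knowledge["octopus"])  # ['cephalopod', 'tool-user']
```

The hard part, as noted above, is producing facts in such a format in the first place - the merge itself is trivial, which is exactly why every other instance could learn them immediately.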

Of course, it's very unlikely that the first human-level artificial general intelligence (AGI) we build has anywhere near optimal knowledge representation, or indeed near-optimal anything. How quickly it improves from that point depends on how easy it is for us - or the AI itself - to improve its architecture and reasoning process. The fastest growth in the short term relies on improvements to software - at first mostly by human engineers, but later mostly by the AI itself, since eventually it'd be much more intelligent than any team of humans. If you have trouble picturing what sort of possibilities that would allow, one rough heuristic I use is to imagine many smart humans working together over a long period of time - for example, a thousand top scientists over two centuries. Since we can trace many of the biggest intellectual breakthroughs to a small number of people, this is pretty similar to picturing the technological advances humanity as a whole will make within the next few centuries, which I think will be pretty substantial (more on that in my follow-up essay; note, though, that the extent to which that progress relies on physical experimentation and better hardware is an open question).

However, improving an AI to that stage might take a long time. If we asked even a relatively smart human to significantly improve a state-of-the-art AI by themselves, they'd have a pretty difficult time. There are thousands of very competent AI researchers contributing new algorithms and ideas at a rate that no individual could hope to match. So will the role of self-improvement only become important when AIs are already significantly smarter than humans? That depends on what advantages the AI has over humans. Even if it's around human-level at most tasks, it may be particularly good at inspecting its own reasoning process, for the same reason that it's much easier for you to figure out what you're thinking than what another person is. Granted, you don't have access to that person's entire "source code", but there's still a significant difference between even well-informed reasoning about thought processes vs actually experiencing those thought processes. And the introspective transparency of an AI could potentially dwarf that of humans. Humans have pretty minimal access to what our brains are doing behind the scenes during quick, reflexive System 1 tasks such as free association. When System 2 is engaged we have slightly more of a clue, but almost all of the processing is still unconscious - consider how people can talk fluently about complex topics without planning their sentences in advance. If an AI could do complicated tasks while also inspecting the thought processes used, it could gain a much deeper understanding of where its weaknesses lie, and how to fix them.

Thanks to Alex, Beth and Oliver for the discussions which inspired this essay. I'd be very interested in reading more papers or books on these topics, if anyone has any to recommend. In my next post, I'll explore whether even my "simple" model of human intelligence might be very complicated to implement.