Posts

Showing posts from April, 2018

Neurons all the way down

My views on intelligence have shifted recently after listening to an episode of the Rationally Speaking podcast where Dr Herculano-Houzel talked about her book The Human Advantage. She made a number of fascinating points based on her pioneering research on measuring the number of neurons in various species' brains: the brains of primates have higher neuron density than the brains of other mammals, particularly in the cerebral cortex, which is largely responsible for higher-level abstract thought; and amongst primates, the number of neurons is roughly proportional to body mass. Apes are the main exception to the latter rule: their diets aren't calorie-rich enough for them to support brains as large as the trend would suggest. Humans, by contrast, fit right on the trend line: our innovations such as using tools and fire to cook food allowed us to obtain more calories and therefore grow bigger brains than other apes (bipedalism also helped, reducing the energy cost of walking by a

Book review: 23 things they don't tell you about capitalism

Right from the title of his book 23 things they don't tell you about capitalism, it's clear that Ha-Joon Chang is pitting himself against the establishment. In doing so, he lays the ground for both the book's significant insights and its serious flaws. Chang's arguments are divided into 23 chapters, covering a range of economic issues. Each chapter starts off with two headings: "What they tell you" and "What they don't tell you". "They" refers to neo-liberal economists peddling "the free-market ideology that has ruled the world since the 1980s". Chang is not shy about setting himself up in opposition: in the introduction, he claims that the result of this ideology has been "the polar opposite of what was promised". Instead, he wants to push everything leftwards - still capitalist, but under the supervision of a bigger and more active government which intervenes to promote some sectors and impede others. Let's star

A pragmatic approach to interpreting moral claims

Epistemic status: moderately confident in the core ideas, not confident in my portrayal of existing approaches, since neither meta-ethics nor linguistics is a focus of mine. This is meant to be primarily explicatory, not rigorously persuasive. One important question in meta-ethics is how to interpret the meanings of moral claims. This is closely related to, but distinct from, the question of whether there are any objectively true moral facts (i.e. moral realism vs moral anti-realism). If it were common knowledge that moral realism were true, then we should obviously interpret moral claims as referring to objective moral facts. But anti-realism is consistent with a number of ways of interpreting moral claims, such as those offered by emotivism, subjectivism, and error theory. Given that there is no philosophical consensus in favour of either realism or anti-realism, it seems to me that we should interpret moral claims in a way that allows coherent conversations about ethics to occur be

European history in 100 words

Recently I've visited a number of Spanish cathedrals, which are some of the most absurdly ornate buildings I've ever seen (apparently, statues of Jesus aren't regal enough unless both his cross and his crown of thorns are inlaid with gold). They're a poignant reminder that Spain was once the wealthiest and most powerful country in Europe. This made me think about how, if you wanted to compress European history into only a few dozen words, probably the best way to do it would be to list which was the most influential and/or dominant European power during which time periods. This is inherently subjective and misses a lot, but is still a fun exercise. I also think that pithy frameworks which allow you to have basic reference points for a whole area of knowledge are underrated (for a similarly broad framework for machine learning, see point 1 in this essay). So here's my crack at it: Roman Empire - from 27 BC, formation of the Empire. Byzantine Empire - from 476, f

Topics on my mind: March 2018

This month, I've compiled a list of issues that I've been wrong about. International development. I'm pretty libertarian on domestic issues, and so I automatically assumed the same mindset regarding the international economy: thinking that the "Washington consensus" was a broadly good idea, and that encouraging free trade is the biggest priority. However, I was wrong on that: I underestimated the extent to which Asian growth relied on strong and stable governments, and the extent to which deregulation eroded aspects of African governments necessary for private enterprise. I want to put up a few political blog posts in the next month which explore these ideas further. Moral realism. As a good reductionist, I used to think that moral realism was entirely incoherent. Then I read Parfit on personal identity, and had conversations with a few friends (notably Paul F-R), and realised that insofar as we can identify what is "rational" and "irrational"

Implementations of immortality

(Note: this essay on designing utopias by Eliezer Yudkowsky is still the best I've read on the topic, and was a major inspiration for this piece.) I was recently talking to my friend Julio about what key features society would need for people to be happy throughout arbitrarily-long lives - in other words, what would a utopia for immortals look like? He said that it would require continual novelty. But I think that by itself, novelty is under-specified. Almost every game of Go ever played is novel and unique, but eventually playing Go would get boring. Then you could try chess, I suppose - but at a certain point the whole concept of board games would become tiresome. I don't think bouncing around between many types of different activities would be much better, in the long run. Rather, the sort of novelty that's most desirable is a change of perspective, such that you find meaning in things you didn't appreciate before. That interpretation of novelty is actually fairl

How hard is implementing intelligence?

Is implementing a model of intelligence like the one which I outlined in my last essay easy or hard? How surprised should we be if we learn that it won't be achieved in the next 200 years? My friend Alex and I have very different priors for these questions. He's a mathematician, and constantly sees the most intelligent people in the world bashing their minds against problems which are simple and natural to pose, but whose solutions teeter on the edge of human ability (e.g. Fermat's last theorem), and where any tiny error can invalidate a proof. Many took hundreds of years to solve, or are still open. I'm a computer scientist, and so work in a field which has blossomed in less than a century. It's a field where theoretical problems naturally cluster into complexity classes which are tightly linked, so that solutions to one problem can easily be transformed into solutions for others. We can alter Turing machines in many ways - adding extra tapes, making

A model of intelligence

Epistemic status: very speculative, exploratory and messy. I'll try to put up a summary later. Humans are very good at many cognitive tasks which weren't selected for in the ancestral environment. Notable examples include doing mathematics; designing, building and using very complex tools such as computers; discussing abstract ideas in philosophy; creating complex and detailed narratives set in entirely fictional worlds; and making plans for the long-term future which involve anticipating the behaviour of millions of people. This implies that when we evolved to perform well on simpler versions of these tasks, we didn't just develop specific skills, but rather a type of general intelligence which is qualitatively different to that of animals. This was probably a pretty abrupt shift, since our capacity for higher thought is orders of magnitude above that of any other animal. If our intellectual capabilities were the combination of many small cognitive abilities, we'd e