Posts

Showing posts from 2020

My fictional influences

I’ve identified as a bookworm for a very long time. Throughout primary school and high school I read voraciously, primarily science fiction and fantasy. But given how much time I’ve spent reading fiction, it’s surprisingly difficult to pin down how it’s influenced me. (This was also tricky to do for nonfiction, actually - see my attempt in this post.) Thinking back to the fiction I’ve enjoyed the most, two themes emerge: atmosphere and cleverness. The atmosphere that really engages me in fiction is one that says: the world is huge; there’s so much to explore; and there’s a vastness of potential. But it’s also one that’s a little melancholy, because you can’t possibly experience all of it, and time always flows onwards. I was particularly struck by the ending of The Lord of the Rings, when Frodo leaves all of Middle-earth behind; by His Dark Materials, when Lyra gains, and loses, uncountable worlds; and by the Malazan saga, set against a fictional backdrop of hundreds of thousands of…

My intellectual influences

Prompted by a friend's question about my reading history, I've been thinking about what shaped the worldview I have today. This has been a productive exercise, which I recommend to others. Although I worry that some of what's written below is post-hoc confabulation, at the very least it's forced me to pin down what I think I learned from each of the sources listed, which I expect will help me track how my views change from here on. This blog post focuses on non-fiction books (and some other writing); I've also written a blog post on how fiction has influenced me. My first strong intellectual influence was Eliezer Yudkowsky's writings on Less Wrong (now collected in Rationality: From AI to Zombies). I still agree with many of his core claims, but don't buy into the overarching narratives as much. In particular, the idea of "rationality" doesn't play a big role in my worldview any more. Instead I focus on specific habits and tools for thinking well (as in Sup…

Why philosophy of science?

During my last few years working as an AI researcher, I increasingly came to appreciate the distinction between what makes science successful and what makes scientists successful. Science works because it has distinct standards for what types of evidence it accepts, with empirical data strongly prioritised. But scientists spend a lot of their time following hunches which they may not even be able to articulate clearly, let alone in rigorous scientific terms - and throughout the history of science, this has often paid off. In other words, the types of evidence which are most useful in choosing which hypotheses to prioritise can differ greatly from the types of evidence typically associated with science. In particular, I’ll highlight two ways in which this happens. The first is scientists thinking in terms of concepts which fall outside the dominant paradigm of their science. That might be because those concepts are too broad, or too philosophical, or too interdisciplinary. For example…

What is past, and passing, and to come?

I've realised lately that I haven't posted much on my blog this year. Funnily enough, this coincides with 2020 being my most productive year so far. So in addition to belatedly putting up a few cross-posts from elsewhere, I thought it'd be useful to share here some of the bigger projects I've been working on which haven't yet featured on this blog. The most important is AGI safety from first principles (also available here as a PDF), my attempt to put together the most compelling case for why the development of artificial general intelligence might pose an existential threat to humanity. It's long (about 15,000 words), but I've tried to make it as accessible as possible to people without a machine learning background, because I think the topic is so critically important, and because there's an appalling lack of clear explanations of what might go wrong and why. Early work by Bostrom and Yudkowsky is less relevant in the context of modern machine learning…

Against strong bayesianism

In this post (cross-posted from Less Wrong) I want to lay out some intuitions about why bayesianism is not very useful as a conceptual framework for thinking about either AGI or human reasoning. This is not a critique of bayesian statistical methods; it’s instead aimed at the philosophical position that bayesianism defines an ideal of rationality which should inform our perspectives on less capable agents, also known as “strong bayesianism”. As described here: “The Bayesian machinery is frequently used in statistics and machine learning, and some people in these fields believe it is very frequently the right tool for the job. I’ll call this position ‘weak Bayesianism.’ There is a more extreme and more philosophical position, which I’ll call ‘strong Bayesianism,’ that says that the Bayesian machinery is the single correct way to do not only statistics, but science and inductive inference in general – that it’s the ‘aspirin in willow bark’ that makes science, and perhaps…”
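To make the machinery under discussion concrete, here is a minimal sketch of a Bayesian update over a discrete hypothesis space - a hypothetical coin-bias example of my own, not one drawn from the post:

    # Hypothetical example: inferring a coin's bias from observed flips.
    # Prior: three candidate biases, equally likely.
    priors = {0.25: 1 / 3, 0.5: 1 / 3, 0.75: 1 / 3}

    def update(beliefs, heads):
        # Bayes' rule: P(h | data) is proportional to P(data | h) * P(h),
        # renormalised so the posterior sums to one.
        likelihoods = {h: (h if heads else 1 - h) * p for h, p in beliefs.items()}
        total = sum(likelihoods.values())
        return {h: l / total for h, l in likelihoods.items()}

    beliefs = priors
    for flip in [True, True, False, True]:  # observed sequence of flips
        beliefs = update(beliefs, flip)
    print(beliefs)  # posterior over the three bias hypotheses

Weak bayesianism says this kind of update is often the right tool for the job; strong bayesianism says it is the single correct way to do inductive inference in general.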

The Future of Science

This is the transcript of a short talk I gave a few months ago, which contains a (fairly rudimentary) presentation of some ideas about the future of science that I've been mulling over for a while. I'm really hoping to develop them much further, since I think this is a particularly important and neglected area of inquiry. Cross-posted from Less Wrong; thanks to Jacob Lagerros and David Lambert for editing the transcript, and to various other people for asking thought-provoking questions. Today I'll be talking about the future of science. Even though this is an important topic (because science is very important), it hasn't received the attention I think it deserves. One reason is that people tend to think, "Well, we're going to build an AGI, and the AGI is going to do the science." But this doesn't really offer us much insight into what the future of science actually looks like. It seems correct to assume that AGI is going to figure a lot of things out. I am interested in what…