Topics on my mind: February 2018

Thanks to Tom for pointing me to this presentation by a couple of Oxford philosophers who may just have solved the Fermi paradox (the puzzle of why we haven't seen any alien civilisations, given that plausible estimates for the terms in the Drake equation predict that there should be many thousands in this galaxy alone). The main idea is actually quite simple: people have generally tried to find a plausible expected value for each term, then multiplied those together to get an expected number of alien civilisations in the thousands or millions. But if we instead assign each term a whole probability distribution and multiply those distributions together, the result has the same mean but a median below 10, and a double-digit probability that there are no other alien civilisations at all (the first couple of slides explain this quite well). Their final best guess, given our observations so far, is that there's something like a 40% chance that we're alone in the universe. While this isn't a particularly definitive answer, I'm impressed, because the Fermi paradox was one of the very few questions which felt truly mysterious to me (the others being consciousness, the nature of maths, the origin and nature of the uni(multi?)verse, and how abstract thought occurs in the brain), so it's very cool to see this sort of progress.
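To see why multiplying whole distributions behaves so differently from multiplying point estimates, here's a minimal Monte Carlo sketch in Python. The log-uniform ranges below are invented for illustration; they are not the priors used in the actual paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES = 1_000_000

def log_uniform(low, high, size):
    """Sample uniformly in log-space between low and high."""
    return np.exp(rng.uniform(np.log(low), np.log(high), size))

# Illustrative ranges only -- NOT the paper's priors. Each Drake-equation
# term gets a wide distribution instead of a single point estimate.
R_star = log_uniform(1, 100, N_SAMPLES)      # star formation rate
f_p    = log_uniform(0.1, 1, N_SAMPLES)      # fraction of stars with planets
n_e    = log_uniform(0.1, 10, N_SAMPLES)     # habitable planets per system
f_l    = log_uniform(1e-30, 1, N_SAMPLES)    # fraction that develop life
f_i    = log_uniform(1e-3, 1, N_SAMPLES)     # fraction that develop intelligence
f_c    = log_uniform(1e-2, 1, N_SAMPLES)     # fraction that become detectable
L      = log_uniform(100, 1e9, N_SAMPLES)    # years a civilisation is detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L

print(f"mean:     {N.mean():.3g}")            # dominated by rare huge draws
print(f"median:   {np.median(N):.3g}")        # far smaller than the mean
print(f"P(N < 1): {(N < 1).mean():.0%}")      # chance we're effectively alone
```

The mean is dominated by the rare samples in which every term comes out high, while the median reflects a typical draw; that gap is exactly what the point-estimate version of the calculation hides.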

I mentioned last month that I was beginning to regret my former dismissive attitude towards continental philosophy. Upon further reflection, I think that was actually just one example of a more general mistake I've been making, which has the following steps:

1. Observe philosophers trying to analyse a concept.
2. Notice that the concept is vague or messy and there is probably no underlying "truth", which makes most analyses aimed at finding one predictably fail.
3. Conclude that I've "dissolved the question" enough that it's not worth worrying about any more.

But in fact, there's often a lot of important conceptual work left to do even if there is no metaphysical truth. A good example is causation. Metaphysically, causation is a mess: it seems clear that human intuitions about it are a hodgepodge of whatever was evolutionarily useful, and there may not be any coherent truth about what caused what. But that didn't stop Judea Pearl from revolutionising the analysis of causality in statistics and machine learning by formalising causal reasoning using directed graphs (see the toy sketch below). Some more questions on which we can hopefully make similar progress: what does it mean to implement a computation? What is a "meaningful life"? What is knowledge? (The last has already been partially solved by Bayesianism.)

Perhaps the most important application of this argument is to language itself. Almost all nouns, verbs and adjectives are vague, and fail to specify a precise set of objects or actions; acknowledging this provides a simple resolution to philosophical dilemmas based on edge cases, such as the Ship of Theseus "paradox". But merely noticing the vagueness isn't enough: the AIs we are building will need to represent sentences somehow, or else we'll be unable to communicate important concepts to them, which seems pretty dangerous. Note also that the concept of vagueness is itself vague; it seems likely that better analyses of vagueness will help us solve the problems above, because then we'll actually understand what we're grappling with.
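As promised above, here's a toy illustration of the distinction Pearl's directed graphs make precise: conditioning on an observation of X is not the same as intervening to set X, because observing X also tells you about its causes. The structure and all the probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000

def simulate(do_x=None):
    """Sample from a toy structural causal model: Z -> X, Z -> Y, X -> Y."""
    z = rng.random(N) < 0.5                          # hidden confounder
    if do_x is None:
        x = rng.random(N) < np.where(z, 0.8, 0.2)    # X is caused by Z
    else:
        x = np.full(N, do_x)                         # do(X): sever the Z -> X edge
    y = rng.random(N) < np.where(z, 0.7, 0.3) * np.where(x, 1.0, 0.5)
    return x, y

# Observing X = 1 is also evidence about the confounder Z, so this is inflated:
x, y = simulate()
print(f"P(Y=1 | X=1)     = {y[x].mean():.3f}")       # ~0.62

# Intervening on X breaks the back-door path through Z:
_, y_do = simulate(do_x=True)
print(f"P(Y=1 | do(X=1)) = {y_do.mean():.3f}")       # ~0.50
```

The gap between those two numbers is hard even to state without a formalism like Pearl's, yet it's exactly what matters when deciding, say, whether a treatment works.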

I've also been thinking about how to take "epistemic modesty" seriously. I've written about this before here, and it's also related to my discussion of cluelessness last month. (See also this new Stanford Encyclopedia of Philosophy article on the topic.) Intellectually, I don't think there's any clear place to draw a line between sensible (and indeed obligatory) modesty ("all the experts disagree with me, maybe I should reconsider") and radical skepticism ("but how do I know that I'm not insane?"): both are about reevaluating your beliefs based on the difficulty of identifying mistakes in your reasoning "from the inside". Practically speaking, though, I think the most effective step towards minimising intellectual overconfidence would be to deliberately and consciously suppress feelings of "tribal affiliation" when having discussions, especially about things I consider part of my identity. Other strategies I've heard about: when arguing, first rephrase the other person's ideas in a way they're happy with, before giving your own argument (this is basically an ideological Turing test). Apparently the Wright brothers took this even further, and used to try switching sides whenever they argued. In fact, when disputants are close to ideal Bayesians, each would offer new considerations favouring each side with roughly equal frequency until their beliefs converged (a toy simulation of this convergence follows below). Some of the best discussions I've had looked roughly like that; and, I hope, many more will.
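As a toy illustration of that last point, here's a sketch of two Bayesians who start with strongly opposed priors about a coin's bias; once they condition on the same stream of evidence, their beliefs converge. (This assumes they simply share all their observations, which is a simplification of the idealised-disputants setup, and all the numbers are invented.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two agents with opposed Beta priors over a coin's bias.
true_bias = 0.6
priors = {"optimist": (20.0, 2.0),     # strongly expects heads
          "pessimist": (2.0, 20.0)}    # strongly expects tails

flips = rng.random(1000) < true_bias   # the shared evidence

for n in [0, 10, 100, 1000]:
    heads = int(flips[:n].sum())
    # Conjugate Beta-Bernoulli update: posterior mean = (a + heads) / (a + b + n)
    m = [(a + heads) / (a + b + n) for a, b in priors.values()]
    print(f"after {n:4d} flips: {m[0]:.3f} vs {m[1]:.3f} (gap {abs(m[0] - m[1]):.3f})")
```

Real disagreements are harder, of course, because we can't fully share our evidence or audit each other's updates, which is where the modesty question bites.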
