Epistemic modesty

Inspired by Eliezer's new book, and a conversation I had with Ben Pace a few months back, I decided to write up some thoughts on the problem of trying to form opinions while accounting for the fact that other people disagree with you - in other words, epistemic modesty. After finishing, I realised that a post making very similar points had been published by Gregory Lewis on an effective altruism forum a few days beforehand. I decided to upload this anyway since I come at it from a slightly different angle.

The most basic case I'm interested in would be meeting somebody who is in just as good an epistemic position as you are - someone who is exactly as intelligent, rational and well-informed as you, and who shares your basic assumptions (i.e. your 'prior') - but who nevertheless disagrees with you to a significant extent. Let's say they assign 20% confidence to a proposition you believe with 80% confidence. At least one of you is missing something; since your positions are symmetric in all relevant ways, it's just as likely to be you as them. It seems inescapable that, if you don't have enough time to resolve the dispute, and you're really sure that you reason equally well, you should split the difference and both update to 50%. Call this the symmetry argument. Now the possibility of meeting your epistemic "evil twin" is a bit far-fetched, but we can easily modify the scenario to stipulate that they are in at least as good an epistemic position as you are, so that if you meet them your credence in the proposition you disagree about should drop from 80% to 50% or below.
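As a purely illustrative sketch (my own, not part of the original argument), "splitting the difference" can be modelled as linear opinion pooling: a weighted average of the two credences, where the peer's weight reflects how good their epistemic position is relative to yours.

```python
def pooled_credence(p_self: float, p_peer: float, w_peer: float = 0.5) -> float:
    """Linear opinion pooling: a weighted average of two credences.

    w_peer = 0.5 is the symmetric case (the peer is exactly as well-placed as you);
    w_peer > 0.5 corresponds to a peer in a strictly better epistemic position.
    """
    return (1 - w_peer) * p_self + w_peer * p_peer

print(pooled_credence(0.8, 0.2))        # 0.5  - the symmetric case above
print(pooled_credence(0.8, 0.2, 0.7))   # 0.38 - a better-placed peer drags you below 50%
```

Linear pooling is only one way to formalise the compromise; averaging log-odds instead would also be symmetric but give a different number.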

In reality we don't have isolated cases like this; you meet people who agree or disagree with you all the time, so that the effect of any given individual's opinions on your own is fairly small. To be clear, I'm talking here about the effect of simply learning that they have that opinion, assuming again that you don't get a chance to discuss it with them. Even if this constitutes only a small amount of evidence for that opinion, it's still some information. By this logic, though, you shouldn't just be swayed by the opinions of the people you've met, but also by those of everyone else who has an opinion on the subject. You can think of the people whose opinions you know as samples from that wider population, each of which shifts your beliefs about the underlying distribution. Most of these samples will be very biased, because of filter effects and social bubbles, but theoretically you can control for that using standard statistics to get your best estimate of how opinions on a given topic vary by expertise, intelligence, rationality and other relevant factors.
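One way to do that correction - a sketch under my own assumptions about the relevant strata, not anything specified above - is simple post-stratification: average the credences within each group (say, by expertise), then reweight the groups to their actual share of the relevant population rather than their share of your social bubble.

```python
from collections import defaultdict

def poststratified_mean(samples, population_weights):
    """Estimate the average credence across a population of opinion-holders,
    correcting for a sample (your social bubble) that over-represents some groups.

    samples: iterable of (stratum, credence) pairs you have actually observed.
    population_weights: dict mapping stratum -> its share of the population,
                        e.g. {"domain expert": 0.05, "layperson": 0.95}.
    """
    by_stratum = defaultdict(list)
    for stratum, credence in samples:
        by_stratum[stratum].append(credence)

    # Use only strata observed at least once, renormalising their weights.
    observed = {s: w for s, w in population_weights.items() if by_stratum[s]}
    norm = sum(observed.values())
    return sum((w / norm) * sum(by_stratum[s]) / len(by_stratum[s])
               for s, w in observed.items())

# A bubble full of experts who agree with you gets down-weighted heavily:
opinions = [("domain expert", 0.9), ("domain expert", 0.85), ("layperson", 0.3)]
print(poststratified_mean(opinions, {"domain expert": 0.05, "layperson": 0.95}))  # ~0.33
```

This collapses everything into a single population average for simplicity; the fuller version would estimate the whole distribution of opinions along each axis (expertise, intelligence, rationality) rather than one number.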

We face two further issues. The first is how to account for the distribution of other people's beliefs when forming your own. This is something that an ideal Bayesian reasoner would do by using a prior. It's also a domain in which I think humans naturally do quite well at approximating such a prior, since our social intuitions are heavily optimised for figuring out how trustworthy other people are. Additionally, there are plenty of historical examples to learn from, although it's difficult to quantify them in a way that lets us draw firm conclusions. (It seems like Eliezer's new book will attempt to provide a framework which can be used to address this issue).

The second issue is this: what if you meet someone who is in just as good an epistemic position as you are (as described above), and who agrees with you about the distribution of everyone else's beliefs about some proposition, but who still disagrees with you about the probability of that proposition? According to the methodology I described above, it's enough to just update your estimate of who believes what, and then your resulting probability estimates. This will usually only result in a small shift, because they're only one person - whereas the symmetry argument above suggests that you should in fact end up halfway between their opinion and your own. But if we accepted the symmetry argument, so that considering the disagreement of just one other person could rationally change any of your beliefs from arbitrary confidence to 50%, then the effect of the opinions of billions of other people must be so overwhelming that no rational person could ever defy a consensus. What gives?
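To see how sharply the two answers come apart, here's a toy comparison (my own, with purely illustrative numbers): treat your credence as a running average over everyone whose opinion you've already folded in, then look at what one new dissenter does under each rule.

```python
n_known = 1000     # opinion-holders already reflected in your estimate (illustrative)
p_current = 0.8    # your current credence
p_peer = 0.2       # the new peer's credence

# "One more sample" rule: a single extra data point barely moves a large average.
sample_update = (n_known * p_current + p_peer) / (n_known + 1)
print(round(sample_update, 4))   # 0.7994

# Symmetry rule: an epistemic equal pulls you halfway to their view.
symmetry_update = (p_current + p_peer) / 2
print(symmetry_update)           # 0.5
```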

(I'm glossing over the issue of how confident you are that the other person is in at least as good an epistemic position as you are. While uncertainty about that will obviously have some effect, almost all of us should be very confident that such people exist, and can imagine being convinced that someone we've met is one of them).

The problem, I think, is the clash between the reasonable intuition towards epistemic humility and any formalisation of it. Suppose you reason according to some system R, which assigns probabilities to various propositions. There's always a chance that R is systematically flawed, or that you've made a mistake in implementing it, and so you should update your reasoning system to take that into account. Let's call this new reasoning system R'; it's very similar to R except that it's a bit less confident in most beliefs. But then we can apply exactly the same logic to R' to get R'', and so on indefinitely, each time becoming less and less certain of your beliefs until you end up not believing very much at all!
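Here is one toy formalisation of the regress (my own, under the assumption that each level of self-doubt discounts your distance from 50% by some factor), together with the bounded variant that the first escape route below would require.

```python
def constant_discounts(p, epsilon, steps):
    """R -> R' -> R'' with a fixed discount: each level shrinks your distance
    from 0.5 by the same factor, so iterating drives every belief towards 0.5."""
    for _ in range(steps):
        p = 0.5 + (1 - epsilon) * (p - 0.5)
    return p

def diminishing_discounts(p, epsilon, r, steps):
    """Variant where the k-th level discounts by epsilon * r**k. The total
    adjustment converges, so arbitrarily many levels leave most of the belief intact."""
    for k in range(steps):
        p = 0.5 + (1 - epsilon * r**k) * (p - 0.5)
    return p

print(constant_discounts(0.99, 0.1, 50))          # ~0.503 - barely a belief left
print(diminishing_discounts(0.99, 0.1, 0.5, 50))  # ~0.90  - bounded total adjustment
```

The second function corresponds to the first way out discussed next: per-level adjustments that shrink fast enough for the overall change to stay bounded.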

I can see three potential ways to defuse this issue. Firstly, if such adjustments are limited in scope, so that even arbitrarily many changes from R to R' to R'' only change your beliefs by a small, bounded amount, then you can still be confident in some beliefs. But I don't think that this can be the case, because if you are using R and you meet someone else using R who very confidently disagrees with you, then by the symmetry argument you need to be able to adjust any belief all the way down to 50% (or very near it). This sort of adjustment is the responsibility of R', and so R' must be able to modify R to a significant extent. Further, it's almost certain that there are people out there who are in general in a better epistemic position than you and who disagree with you strongly! So your resulting beliefs will still end up slavishly following other people's.

A second possibility arises if you're certain that you have already taken every factor into account - so that if you meet someone who shares your priors, and who reasons similarly to you, then it's impossible for the two of you to have come to different conclusions. This is ideal Bayesianism. But even ideal Bayesian agents have blind spots - for example, they can't assign any nonzero probability to revisions to the laws of logic or mathematics, or any other possibility that implies that Bayesian reasoning is incorrect. Yet we ourselves were convinced of the correctness of Bayesian reasoning by evidence which could be overturned, for example by someone publishing a proof that Bayesianism is inconsistent. It seems like we'd want an "ideal" agent to be capable of evaluating that proof in a way that an ideal Bayesian can't.

If you think this sounds a bit Gödelian, though, you're right. What we're looking for seems to be very similar to asking a proof-based system to demonstrate its own consistency, which is impossible. I don't know how that result might extend to probabilistic reasoners; it seems likely that asking them to assign coherent probabilities to the prospect of their own failure would be similarly futile. But it might not be, which is the third way out: for your epistemology to be self-referential in a way which allows it to sensibly assign a small probability to the possibility that it is systematically wrong. This seems closely related to the issue of Vingean uncertainty, and also hints at problems with defining agenthood which I plan to explore in upcoming posts.
