Realism about rationality
Epistemic status: trying to vaguely gesture at vague intuitions. Cross-posted to Less Wrong, where there's been some good discussion. A similar idea was explored here under the heading "the intelligibility of intelligence", although I hadn't seen it before writing this post.

There's a mindset which is common in the rationalist community, which I call "realism about rationality" (the name being intended as a parallel to moral realism). I feel like my skepticism about agent foundations research is closely tied to my skepticism about this mindset, and so in this essay I try to articulate what it is.

Humans ascribe properties to entities in the world in order to describe and predict them. Here are three such properties: "momentum", "evolutionary fitness", and "intelligence". These are all pretty useful properties for high-level reasoning in the fields of physics, biology, and AI, respectively. There's a key difference between the