Deutsch and Yudkowsky on scientific explanation

Science aims to come up with good theories about the world - but what makes a theory good? The standard view is that the key traits are predictive accuracy and simplicity. Deutsch focuses instead on the concepts of explanation and understanding: a good theory is an explanation which enhances our understanding of the world. This is already a substantive claim, because various schools of instrumentalism have been fairly influential in the philosophy of science. I do think that this perspective has a lot of potential, and later in this essay I explore some ways to extend it. First, though, I discuss a few of Deutsch's arguments which I don't think succeed, particularly when compared to the Bayesian rationalist position defended by Yudkowsky.

To start, Deutsch says that good explanations are “hard to vary”, because every part of the explanation is playing a role. But this seems very similar to the standard criterion of simplicity. Deutsch rejects simplicity as a criterion because he claims that theories like “The gods did it” are simple. Yet I’m persuaded by Yudkowsky’s argument that a version of “The gods did it” theory which could actually predict a given set of data would essentially need to encode all that data, making it very complex. I’m not sold on Yudkowsky’s definition of simplicity in terms of Kolmogorov complexity (for reasons I’ll explain later on) but re-encoding a lot of data should give rise to a complex hypothesis by any reasonable definition. So it seems most parsimonious to interpret the “hard to vary” criterion as an implication of the simplicity criterion.
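Yudkowsky's point can be made concrete in terms of description lengths. Here is a minimal sketch (my own toy illustration, not taken from either author) which uses compressed size as a crude stand-in for description length: a "The gods did it" hypothesis which actually predicts the data must carry the data inside itself, and so ends up far longer than a short generative rule.

```python
# Toy description-length comparison. Compressed size is only a crude
# proxy for Kolmogorov complexity, which is uncomputable in general.
import zlib

# 1000 observations generated by a short rule.
data = bytes((i * i) % 256 for i in range(1000))

# Hypothesis A: a short generative rule; its length is the length of
# the rule's source code.
rule_source = b"bytes((i * i) % 256 for i in range(1000))"

# Hypothesis B: "the gods did it" - to predict the same observations,
# it must encode the entire dataset verbatim.
gods_did_it = b"the gods chose exactly these values: " + data

len_rule = len(zlib.compress(rule_source))
len_gods = len(zlib.compress(gods_did_it))

# Re-encoding the data makes the hypothesis complex by any reasonable
# definition of simplicity.
print(len_rule, len_gods)
```

On any run, the re-encoded hypothesis comes out much longer - which is the sense in which "The gods did it", once forced to actually predict, stops being simple.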

Secondly, Deutsch says that good explanations aren’t just predictive, but rather tell us about the underlying mechanisms which generate those predictions. As an illustration, he argues that even if we can predict the outcome of a magic trick, what we really want to know is how the trick works. But this argument doesn’t help very much in adjudicating between scientific theories - in practice, it’s often valuable to accept purely predictive theories as stepping-stones to more complete theories. For example, Newton’s inverse square law of gravity was a great theory despite not attempting to explain why gravity worked that way; instead it paved the way for future theories which did so (and which also made better predictions). If Deutsch is just arguing that eventually science should aim to identify all the relevant underlying mechanisms, then I think that most scientific realists would agree with him. The main exception would be in the context of foundational physics. Yet that’s a domain in which it’s very unclear what it means for an underlying mechanism to “really exist”; it’s so far removed from our everyday intuitions that Deutsch’s magician analogy doesn’t seem very applicable.

Thirdly, Deutsch says that we can understand the importance of testability in terms of the difference between good and bad explanations:

“The best explanations are the ones that are most constrained by existing knowledge – including other good explanations as well as other knowledge of the phenomena to be explained. That is why testable explanations that have passed stringent tests become extremely good explanations.”

But this doesn’t help us distinguish between explanations which have themselves been tested, versus explanations which were formulated afterwards to match the data from those same tests. Both are equally constrained by existing knowledge - why should we be more confident in the former? Without filling in this step of the argument, it’s hard to understand the central role of testability in science. I think, again, that Yudkowsky provides the best explanation: that the human tendency towards hindsight bias means we dramatically overestimate how well our theories explain observed data, unless we’re forced to make predictions in advance.

Having said all this, I do think that Deutsch’s perspective is valuable in other ways. I was particularly struck by his argument that the “theory of everything” which fundamental physicists search for would be less interesting than a high-level “theory of everything” which forges deep links between ideas from many disciplines (although I wish he’d say a bit more about what it means for a theory to be “deep”). This argument (along with the rest of Deutsch’s framework) pushes back against the longstanding bias in philosophy of science towards treating physics as the central example of science. In particular, thinking of theories as sets of equations is often appropriate for physics, but much less so for fields which are less formalism-based - i.e. almost all of them.[0] For example, the theory of evolution is one of the greatest scientific breakthroughs, and yet its key insights can’t be captured by a formal model. In Chapman’s terminology, evolution and most other theories are somewhat nebulous. This fits well with Deutsch’s focus on science as a means of understanding the world - because even though formalisms don’t deal well with nebulosity, our minds do.

Another implication of the nebulosity of scientific theories is that we should move beyond the true-false dichotomy when discussing them. Bayesian philosophy of science is based on our credences about how likely theories are to be true. But it’s almost never the case that high-level theories are totally true or totally false; they can explain our observations pretty well even if they don’t account for everything, or are built on somewhat leaky abstractions. And so assigning probabilities only to the two outcomes “true” and “false” seems simplistic. I still consider probabilistic thinking about science to be valuable, but I expect that thinking in terms of degrees of truth is just as valuable. And the latter comes naturally from thinking of theories as explanations, because we intuitively understand that the quality of explanations should be evaluated in a continuous rather than binary way.[1]
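To illustrate the difference (this is my own sketch, and the numbers are invented), compare a credence split over the binary outcomes "true" and "false" with a credence spread over degrees of truth in [0, 1]:

```python
# Binary view: all probability mass on two outcomes.
binary = {1.0: 0.3, 0.0: 0.7}  # P(true) = 0.3, P(false) = 0.7

# Graded view: credence over how *accurate* the theory is, allowing
# for leaky abstractions and partial explanations.
graded = {1.0: 0.05, 0.8: 0.45, 0.5: 0.35, 0.1: 0.15}

# A summary statistic the binary view cannot express: the expected
# degree of truth of the theory.
expected_accuracy = sum(degree * p for degree, p in graded.items())
print(round(expected_accuracy, 3))
```

The graded view can still be updated in a broadly Bayesian way; what changes is the space of outcomes we spread credence over.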

Lastly, Deutsch provides a good critique of philosophical positions which emphasise prediction over explanation. He asks us to imagine an “experiment oracle” which is able to tell us exactly what the outcome of any specified experiment would be:

“If we gave it the design of a spaceship, and the details of a proposed test flight, it could tell us how the spaceship would perform on such a flight. But it could not design the spaceship for us in the first place. And even if it predicted that the spaceship we had designed would explode on take-off, it could not tell us how to prevent such an explosion. That would still be for us to work out. And before we could work it out, before we could even begin to improve the design in any way, we should have to understand, among other things, how the spaceship was supposed to work. Only then would we have any chance of discovering what might cause an explosion on take-off. Prediction – even perfect, universal prediction – is simply no substitute for explanation.”

Although I assume it isn’t intended as such, this is a strong critique of Solomonoff induction, a framework which Yudkowsky defends as an idealised model for how to reason. The problem is that the types of hypotheses considered by Solomonoff induction are not explanations, but rather computer programs which output predictions. This means that even a hypothesis which is assigned very high credence by Solomonoff induction might be nearly as incomprehensible as the world itself, or more so - for example, if it merely consists of a simulation of our world. So I agree with Deutsch: even idealised Solomonoff induction (with infinite compute) would lack some crucial properties of explanatory science.[2]
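To make the contrast vivid, here is a toy sketch (my own, and drastically simplified - real Solomonoff induction enumerates all programs for a universal Turing machine) in which "hypotheses" are programs with stipulated code lengths, given prior weight 2^-length and filtered by consistency with the observations. The winning hypothesis is purely a predictor; nothing in it explains why the data looks the way it does.

```python
# Toy Solomonoff-style induction over a handful of hand-written
# "programs"; lengths in bits are stipulated for illustration.
observed = [0, 1, 4, 9, 16]

# (name, length_in_bits, generator)
programs = [
    ("squares", 20, lambda n: [i * i for i in range(n)]),
    ("zeros", 10, lambda n: [0] * n),
    ("lookup-table", 80, lambda n: ([0, 1, 4, 9, 16] * n)[:n]),
]

# Keep only programs whose output matches the data; weight by 2^-length.
weights = {
    name: 2.0 ** -length
    for name, length, gen in programs
    if gen(len(observed)) == observed
}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

# The highest-credence hypothesis is just a prediction machine: it
# says nothing about *why* the sequence consists of square numbers.
best = max(posterior, key=posterior.get)
print(best, posterior[best])
```

Even in this toy version, what the induction returns is a program, not an explanation - which is exactly Deutsch's complaint.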

Extending the view of science as explanation

How could Deutsch’s identification of the role of science as producing human-comprehensible explanations actually improve science in practice? One way is by making use of the social science literature on explanations. Miller identifies four overarching lessons:
  1. Explanations are contrastive - they are sought in response to particular counterfactual cases.
  2. Explanations are selected (in a biased manner) - humans are adept at selecting one or two causes from a sometimes infinite number of causes to be the explanation.
  3. Referring to probabilities or statistical relationships in explanation is not as effective as referring to causes.
  4. Explanations are social - they are a transfer of knowledge, presented as part of a conversation or interaction, and are thus presented relative to the explainer's beliefs about the explainee's beliefs.
We can apply some of these lessons to improve scientific explanations. Consider that scientific theories are usually formulated in terms of existing phenomena. But to formulate properly contrastive explanations, science will need to refer to counterfactuals. For example, in order to fully explain the anatomy of an animal species, we’ll need to understand other possible anatomical structures, and the reasons why those didn’t evolve instead. Geoffrey West’s work on scaling laws in biology provides a good example of this type of explanation. Similarly, we shouldn’t think of fundamental physics as complete until we understand not only how our universe works, but also which counterfactual laws of physics could have generated other universes as interesting as ours.

A second way we can try to use Deutsch’s framework to improve science starts from the question: what does it mean for a human to understand an explanation? Can we use findings from cognitive science, psychology or neuroscience to make suggestions for the types of theories scientists work towards? This seems rather difficult, but I’m optimistic that there’s some progress to be made. For example, analogies and metaphors play an extensive role in everyday human cognition, as highlighted by Lakoff’s Metaphors We Live By. So instead of thinking about analogies as useful ways to communicate a scientific theory, perhaps we should consider them (in some cases) to be a core part of the theory itself. Focusing on analogies may slightly reduce those theories’ predictive power (because it’s hard to cash out analogies in terms of predictions) while nevertheless increasing the extent to which they allow us to actually understand the world. I’m reminded of the elaborate comparison between self-reference in mathematics and self-replication in biology drawn by Hofstadter in Gödel, Escher, Bach - if we prioritise a vision of science as understanding, then this sort of work should be much more common. However, the human tendency towards hindsight bias is a formidable opponent, and so we should always demand that such theories also provide novel predictions, in order to prevent ourselves from generating an illusion of understanding.


[0]. As an example of this bias, see the first two perspectives on scientific theories discussed here; my position is closest to the third, the pragmatic view.
[1]. Work on logical induction and embedded agency may partly address this issue; I’m not sure.
[2]. I was originally planning to go on to discuss Deutsch’s broader critiques of empiricism and induction. But Deutsch makes it hard to do this, because he doesn’t refer very much to the philosophical literature, or specific people whose views he disagrees with. It seems to me that this leads to a lot of linguistic disagreements. For example, when he critiques the idea of knowledge being “derived” from experience, or scientific theories being “justified” by empirical experience, I feel like he’s using definitions of these terms which diverge both from what most people take them to mean, and also from what most philosophers take them to mean. Nor do I think that his characterisation of observation as theory-laden is inconsistent with standard inductivism; he seems to think it is, but doesn’t provide evidence for that. So I’ve decided not to go deeper on these issues, except to note my skepticism about his position.
