What makes a good explanation?
It’s not straightforward to provide an answer. Wikipedia says:
An explanation is a set of statements usually constructed to describe a set of facts which clarifies the causes, context, and consequences of those facts. This description may establish rules or laws, and may clarify the existing rules or laws in relation to any objects, or phenomena examined.
Philosophers, of course, have quite a lot more to say about the matter.
In this post I want to offer my own sketch, with an eye more toward the practical work of explanatory journalism than to philosophy. Wikipedia’s “causes, context, and consequences” has a nice alliterative ring to it, so I’ll amend that to offer my own C’s of explanation.
Causes and consequences
A good explanation “fits the facts,” and suggests cause-effect relationships to make sense of them. Another way of saying that is, to borrow from pragmatist accounts of explanation, a good explanation should be “empirically adequate” (that is, it should yield a true or correct description of observables).
As for causes (of which consequences are one type), consider the difference between explanation and prediction. A forecaster might say a candidate has an 80% chance to win an election; their model “fits the facts.” But it does not say why. It offers no explanation because it has no causal content.
A good explanation allows for the consideration of at least one counterfactual. Max Weber wrote, about causality, that
“The attribution of effects to causes takes place through a process of thought which includes a series of abstractions. The first decisive one occurs when we conceive of one or a few of the actual causal components as modified in a certain direction and then ask ourselves whether under the conditions which have been thus changed, the same effect (the same, i.e. in ‘essential’ points) or some other effect ‘would be expected.’” (Max Weber: The interpretation of social reality, p. 20)
Thinking about the counterfactual means breaking a problem up into components, and that requires concepts.
A good explanation clearly defines its concepts, and chooses ones that are useful. Defining concepts helps the listener follow the explanation. Picking the right ones means choosing concepts that enable a more accurate and more useful causal model. The need for clear, useful concepts in explanation is central to the idea of “explainability” in machine learning.
A deep learning model might be extremely good at prediction; it fits the facts. And it might even seem to offer causal models: a causal effect is, statistically, just the difference between two conditional probabilities and some machine learning models can estimate causal effects reasonably accurately. But a deep learning model trained on individual pixels or characters won’t be interpretable or explainable. Its causal insights don’t always transfer easily into the heads of human beings. And that’s because it lacks easily defined, useful concepts. A deep learning model arguably “learns” meaningful notions of things as it translates pixels or characters, across layers, toward a prediction. But those notions aren’t recognizable concepts that people can work with. To make the model explainable, we need to provide concepts that people can make sense of and use.
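That statistical claim about causal effects can be made concrete with a toy sketch. Everything below is hypothetical and invented for illustration, and it assumes randomized treatment assignment, the setting where a simple difference of conditional probabilities is a reasonable estimate of a causal effect:

```python
# Toy sketch: estimate a causal effect as the difference between two
# conditional probabilities, P(outcome | treated) - P(outcome | untreated).
# This is only valid as a causal estimate when treatment is randomized;
# all records and numbers here are made up.

def conditional_prob(records, condition, event):
    """Estimate P(event | condition) from a list of record dicts."""
    matching = [r for r in records if condition(r)]
    if not matching:
        return 0.0
    return sum(1 for r in matching if event(r)) / len(matching)

# Invented data from a hypothetical randomized trial.
records = [
    {"treated": True,  "outcome": True},
    {"treated": True,  "outcome": True},
    {"treated": True,  "outcome": False},
    {"treated": False, "outcome": True},
    {"treated": False, "outcome": False},
    {"treated": False, "outcome": False},
]

p_treated = conditional_prob(records, lambda r: r["treated"],
                             lambda r: r["outcome"])
p_control = conditional_prob(records, lambda r: not r["treated"],
                             lambda r: r["outcome"])
effect = p_treated - p_control  # estimated average causal effect
```

The point of the sketch is the contrast in the paragraph above: the arithmetic is trivial once you have human-scale concepts like “treated” and “outcome” to condition on, whereas a model operating on raw pixels or characters has no such concepts to hand an explanation to.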
A good explanation is logical, or at least not illogical. An explanation links together concepts and facts into causal models in reasonable ways, without logical or mathematical error or contradiction. That’s easy enough to say; the question is what standard of logic we hold an explanation to. Must it come with a formal proof of its coherence? Or is some loose feeling that it “makes sense” enough? Deciding on that standard depends on context.
A good explanation fits its context. It’s appropriate for its audience: a good explanation of macroeconomics is different if the audience is a four-year-old than if it is a college student. It includes the right background information (and the right amount of it) to help the audience understand what’s going to be explained. And it considers the goals of speaker, listener, and society at large. It aims to help actual people in the world achieve their purposes. That admittedly hazy criterion is the starting point for deciding what is good enough in terms of both empirical adequacy and coherence.
So there it is. Pretty loose and subjective and imperfect, of course. But in my estimation a good explanation:
- Fits the facts and proposes empirically plausible cause-effect relationships
- Defines its terms and relies on concepts that feel useful and appropriate
- Makes logical sense
- Offers helpful background context and takes into account its audience