What “evidence-based” thinking leaves out

A few loosely related items I was reading today…

Zeynep Tufekci makes the case against election forecasts, and says this:

This is where weather and electoral forecasts start to differ. For weather, we have fundamentals — advanced science on how atmospheric dynamics work — and years of detailed, day-by-day, even hour-by-hour data from a vast number of observation stations. For elections, we simply do not have anything near that kind of knowledge or data. While we have some theories on what influences voters, we have no fine-grained understanding of why people vote the way they do, and what polling data we have is relatively sparse.

How much does that matter? That, really, has been the motivating question behind all the posts I’ve put up over the last several months on the relative roles of theory vs. evidence. (On models: here, here, here, and here. And on theory and data: here, here, here, here, and here.)

I was thinking about evidence-based medicine today for a reason I'll share in a second, and it led me to this 2003 paper on the philosophy of evidence-based medicine, which the authors see as essentially prizing randomized controlled trials over other forms of evidence. Here's a worthwhile bit:

Even when a clinical trial returns positive results in the treatment arm that satisfy tests of statistical significance, we will have more confidence in these results when they have some antecedent biological plausibility.[28,29] Put more generally, we would suggest that the degree of confidence appropriate for a clinically tested claim is a function of both the strength of the clinical result and the claim’s antecedent biological plausibility.

This gets at my point the other day about theory and data as complements. (The whole section of the paper “EBM and Basic Science” is worth a read.)
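The paper's point can be made concrete with Bayes' rule: the same "statistically significant" trial result warrants very different confidence depending on how biologically plausible the claim was beforehand. A minimal sketch (my numbers and function names, not the paper's; the significance threshold and power are hypothetical choices for illustration):

```python
def posterior_prob_effective(prior, alpha=0.05, power=0.80):
    """P(treatment truly works | trial came back significant), by Bayes' rule.

    prior: antecedent plausibility that the treatment works (hypothetical).
    alpha: false-positive rate of the trial; power: true-positive rate.
    """
    true_pos = prior * power          # treatment works and the trial detects it
    false_pos = (1 - prior) * alpha   # treatment doesn't work, trial fires anyway
    return true_pos / (true_pos + false_pos)

# A biologically plausible claim vs. a long shot, given the same significant result:
print(posterior_prob_effective(0.50))  # ~0.94
print(posterior_prob_effective(0.02))  # ~0.25
```

Same clinical evidence, very different warranted confidence, which is exactly the sense in which theory (here, the prior) and data are complements.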

Why was I reading about evidence-based medicine? Because of the comparison between evidence-based medicine and effective altruism from this talk and this paper by philosopher Hilary Greaves. Effective altruism also tends to prize causal evidence like RCTs above other forms of evidence. Greaves’ point is that many (most?) of the consequences of any given intervention aren’t (can’t be?) measured by these methods. If someone sets out to maximize the net consequences of their actions, this raises real problems.

As far as I can tell, a lot in this discussion hinges on whether unmeasured effects correlate with measured ones, and how. But Greaves ends this way:

If we deliberately try to beneficially influence the course of the very far future, can we find things where we more robustly have at least some clue that what we’re doing is beneficial and of how beneficial it is? I think the answer is yes.

That is a tall order, but potentially an area where attention to theoretical plausibility can help.
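The correlation point above can be illustrated with a toy simulation (my construction, not Greaves's; every number here is made up). Each intervention has a measured effect, say from an RCT, and an unmeasured one; we pick the intervention with the best measured effect and ask where its *total* benefit ranks. When the two components correlate positively, ranking by RCT evidence works well; when the correlation is weak or negative, it can badly mislead.

```python
import random

random.seed(0)  # deterministic toy example

def percentile_of_measured_best(rho, n=2000):
    """Pick the intervention with the best *measured* effect; return the
    percentile rank of its *total* (measured + unmeasured) effect.
    rho controls how the unmeasured component correlates with the measured one."""
    pairs = []
    for _ in range(n):
        measured = random.gauss(0, 1)
        # unmeasured effect with correlation rho to the measured effect
        unmeasured = rho * measured + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        pairs.append((measured, measured + unmeasured))
    best_total = max(pairs)[1]  # total effect of the measured-best intervention
    return sum(total <= best_total for _, total in pairs) / n

print(percentile_of_measured_best(0.9))   # near the top: RCT ranking tracks total benefit
print(percentile_of_measured_best(-0.9))  # much less reliable: measured effect is offset
```

With strongly positive correlation the measured-best pick lands near the top of the total-benefit distribution; with negative correlation its rank is roughly a coin flip, which is the worry about maximizing on what the RCTs happen to measure.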
