Trusting expertise

More on applied epistemology (which should just be called epistemology!). Here’s Holden Karnofsky of Open Philanthropy describing his process for “minimal-trust investigations”: basically, trying to understand something yourself, as close to from the ground up as you can. Along the way he makes some very good points about social learning, i.e. how and when to trust others in order to reach accurate beliefs:

Over time, I’ve developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I’m deciding whom to trust about baseball predictions, I’d prefer someone who voraciously studies advanced baseball statistics and watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)

Here’s what I said on basically the same topic a while back:

You want to look for people who think clearly but with nuance (it’s easy to have one but not both), who seriously consider other perspectives, and who are self-critical.

The sites that dominated the economics blogosphere

Several years ago I posted the results of an analysis of top economics sites. I’ve redone that work a bit differently, this time with the data on GitHub. This time the analysis is purely of the curation done on economist Mark Thoma’s blog during the 2010s: the result is ~14,000 links that he recommended. Here’s some background on Thoma and his blog that also explains why analyzing it is useful.

So first some results, then a few more lists and observations… The top domains in this dataset:

The first thing to note is just how big a deal blogs were. Obvious if you were reading stuff back then, but just a few years later it’s striking! Blogspot, Typepad, and WordPress all make the top domains list because they hosted blogs by individual economists or small groups of them.

I hand-coded the top 60 domains, and 23 of them are blogs or blog-hosting platforms. Weighting by how many individual links each domain has in the dataset, and restricting the count to those top 60 hand-coded domains (not the full list), 43% of links are to blogs.
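(For the curious, that tally is easy to reproduce from the data. Here’s a minimal sketch, assuming a hypothetical links.csv with a domain column plus a separate hand-coded category file; the file and column names are my assumptions, not the actual repo layout.)

```python
import pandas as pd

# Hypothetical layout: one row per recommended link, with a "domain" column.
links = pd.read_csv("links.csv")

# The 60 most-linked domains and their link counts.
top_domains = links["domain"].value_counts().head(60)

# Hand-coded categories for those 60 domains, e.g. "blog", "media", "think tank".
categories = pd.read_csv("top60_categories.csv", index_col="domain")["category"]

# Share of links, among those pointing to the top 60 domains, that go to blogs.
top60_links = links[links["domain"].isin(top_domains.index)]
blog_share = (top60_links["domain"].map(categories) == "blog").mean()
print(f"{blog_share:.0%} of links to the top 60 domains are to blogs")
```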

Top blogs

Krugman’s blog tops the list at 542 links: that alone (not counting his columns) was 3.8% of the dataset. Author data isn’t perfect, but there are 218 other links with Krugman as author, including some non-NYT stuff and one Onion article. Combined, that puts him at 5.4% of the dataset.
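(The share arithmetic, in the same hypothetical sketch as above; the exact domain and author strings here are guesses, not the dataset’s actual values.)

```python
import pandas as pd

links = pd.read_csv("links.csv")   # same hypothetical file as above
total = len(links)                 # roughly 14,000 rows

# 542 links in the dataset point at the blog itself...
blog_links = (links["domain"] == "krugman.blogs.nytimes.com").sum()
# ...plus 218 more links elsewhere with Krugman as the listed author.
other_bylines = ((links["author"] == "Paul Krugman")
                 & (links["domain"] != "krugman.blogs.nytimes.com")).sum()

print(f"Blog alone: {blog_links / total:.1%}")                                 # ~3.8%
print(f"Blog plus other bylines: {(blog_links + other_bylines) / total:.1%}")  # ~5.4%
```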

No other economist is close to Krugman in terms of influence. But here are the blogs that come next, not counting group blogs at institutions like the New York Times or Federal Reserve.

Amazingly, these blogs all seem to still be going, and a couple of them I personally still read regularly.

Top media

The other big category in the top domains list is media. And here the New York Times dominates. I leave it to you to parse the cause and effect here vis-à-vis Krugman. Here are the top media sites:

Domain                  Links
nytimes.com              1906
ft.com                    640
project-syndicate.org     435
washingtonpost.com        251
economist.com             241
bloomberg.com             170
newyorker.com             108

Top think tanks

Domain                  Links
brookings.edu              69
promarket.org              68
cfr.org                    65
equitablegrowth.org        60
piie.com                   38
epi.org                    37
bruegel.org                32

Research

That leaves one last big category: research institutions. Here the basic list is VoxEU, the NBER, and the Federal Reserve. The IMF makes the list too, albeit much lower.

You can download the data for yourself here.

Revisiting the housing bubble

Timothy Lee has a good post on the revisionist history of the mid-2000s housing bubble in the US. I find the basic premise interesting and pretty compelling: what looked like a housing bubble might have just been prices responding to a mismatch between supply and demand. Lee further says this analytical error—seeing a bubble where there wasn’t one—had huge policy consequences:

This mistake had profound consequences because the perceived size of the housing bubble influenced decision-making by the Federal Reserve. The Fed started raising its benchmark interest rate in 2004, reaching a peak of 5.25 percent in mid-2006. Part of the Fed’s goal was to raise mortgage rates and thereby cool a housing market it viewed as overheated… If the Fed had understood this at the time and acted accordingly, it could have averted a lot of human misery. Home prices would not have fallen so much, and fewer people would have lost their jobs. That, in turn, would have limited the losses of banks that bet on the mortgage market, and might have prevented the 2008 financial crisis.

That’s all fine as far as it goes. But as we reevaluate the housing bubble, it’s essential to remember what really caused the crisis of the late 2000s, and what made the resulting recession so unusually severe: the financial complexity and opacity built on top of the housing market.

So while this reassessment is important, it’s hard for me to see a counterfactual where things turned out well. The key cause of the Great Recession was a financial panic driven by derivatives whose risks were poorly understood. Financial institutions took on more housing-related risk than they realized, and so did their counterparties; at key moments in the panic, they couldn’t tell just how exposed those counterparties were to the housing market. Housing prices and the Fed’s response are key elements of this story, but they’re only the tip of the iceberg.

(Though it’s been years since I read it, and even longer since it was published, Alan Blinder’s After the Music Stopped remains my key reference on this subject.)

The social science side of science

Derek Thompson in a very good piece about Fast Grants:

A third feature of American science is the experimentation paradox: The scientific revolution, which still inspires today’s research, extolled the virtues of experiments. But our scientific institutions are weirdly averse to them. The research establishment created after World War II concentrated scientific funding at the federal level. Institutions such as the NIH and NSF finance wonderful work, but they are neither nimble nor innovative, and the economist Cowen got the idea for Fast Grants by observing their sluggishness at the beginning of the pandemic. Many science reformers propose spicing things up with new lotteries that offer lavish rewards for major breakthroughs, or giving unlimited and unconditional funding to superstars in certain domains. “We need a better science of science,” the writer José Luis Ricón has argued. “The scientific method needs to examine the social practice of science as well, and this should involve funders doing more experiments to see what works.” In other words, we ought to let a thousand Fast Grants–style initiatives bloom, track their long-term productivity, and determine whether there are better ways to finance the sort of scientific breakthroughs that can change the course of history.

This is what I have heard in my reporting as well. The US has pioneered some extremely successful institutions for funding science and developing technology: the NIH, ARPA, the venture capital industry, and so on.

But while those institutions are good at some things, they have flaws and are ill suited to certain tasks. For science funding the problem is speed, as Derek explains; for VC it’s a declining appetite for deep technical risk, a mismatch with some capital-intensive forms of energy tech, and a model that demands 10X returns at minimum.

And yet we keep going back to the channels we have: more money pours into VC; Congress proposes more money for the NIH. Neither of those is a bad idea, per se. But if you talk to folks who study the innovation process, what they most want to see is experimentation with new institutions for developing science and tech. Fast Grants is a nice example, but there’s a lot more experimenting still to do.

Technical leadership

Brookings categorizes a couple dozen countries by AI proficiency. The US is highest on technical measures but fails to reach the “leader” quadrant because it scores poorly on “people” measures like number of STEM graduates.

You can quibble with the methodology, but I see it as in line with something Paul Romer noted last year: the US leads in science (the production of new ideas), but it lags when it comes to how those ideas spread and are applied.

Data and theory in economics

Noah Smith on the Nobel for the architects of the “credibility revolution” in economics:

Anyone who expects the credibility revolution to replace theory is going to be disappointed. Science seeks not merely to catalogue things that happen, but to explain why — chemistry is more than a collection of reaction equations, biology is more than a catalogue of drug treatment effects, and so on. Econ will be the same way. But what the credibility revolution does do is to change the relationship between theory and evidence. When evidence is credible, it means that theory must bend to evidence’s command — it means that theories can be wrong, at least in a particular time and place. And that means that every theory that can be checked with credible evidence needs to be checked before it’s put to use in real-world policymaking. Just like you wouldn’t prescribe patients a vaccine without testing it first. This is a very new way for economists to have to force themselves to think. But this is a field in its infancy — we’re still at the Francis Bacon/Galileo stage. Give it time.

In other words, new empirical techniques brought economics closer to following Michael Strevens’ iron rule of explanation: “that all arguments must be carried out with reference to empirical evidence.”

Quartz’s coverage of the Nobel is here and here.

A definition of culture

From an NBER review of the economics of company culture. The authors describe the varied ways “culture” has been defined, not just with respect to companies, and then offer this list:

A sensible list of elements in that package, though neither nearly exhaustive nor likely satisfactory to all, is as follows, adapted from a variety of such lists in the literature:

• unwritten codes, implicit rules, and regularities in interactions;

• identities, self-image, and guiding purpose;

• espoused values and evolving norms of behavior;

• conventions, customs, and traditions;

• symbols, signs, rituals, and group celebrations;

• knowledge, discourse, emergent understanding, doctrine, ideology;

• memes, jokes, style, and shared meaning;

• shared mental models, expectations, and linguistic paradigms.

Fixing the internet

The other day I rewatched one of my favorite talks about the internet, a 2015 lecture on algorithmic decisions by Jonathan Zittrain of Harvard Law School titled “Love the processor, hate the process.” Like all his talks, it’s funny, wide-ranging, and hard to summarize. But reflecting on it, I think you can see him proposing a few categories of ways to fix what’s gone wrong with the internet:

  • regulation
  • competition
  • public goods and open standards

There’s so much wrong with the current internet, and so many ideas floating around about what might be done, that I find these three simple buckets helpful in sorting out our choices. The fix, if there is one, will require some of all three.

On explanation

What makes a good explanation?

It’s not straightforward to provide an answer. Wikipedia says:

An explanation is a set of statements usually constructed to describe a set of facts which clarifies the causes, context, and consequences of those facts. This description may establish rules or laws, and may clarify the existing rules or laws in relation to any objects, or phenomena examined.

Philosophers, of course, have quite a lot more to say about the matter.

In this post I want to offer my own sketch, with an eye more toward the practical work of explanatory journalism than to philosophy. Wikipedia’s “causes, context, and consequences” has a nice alliterative ring to it, so I’ll build on that to offer my own C’s of explanation.

Causes and consequences

A good explanation “fits the facts” and suggests cause-effect relationships that make sense of them. Another way of saying that, borrowing from pragmatist accounts of explanation, is that a good explanation should be “empirically adequate,” that is, it should “yield a true or correct description of observables.”

As for causes (of which consequences are one type), consider the difference between explanation and prediction. A forecaster might say a candidate has an 80% chance to win an election; their model “fits the facts.” But it does not say why. It offers no explanation because it has no causal content.

A good explanation allows for the consideration of at least one counterfactual. Max Weber wrote, about causality, that

“The attribution of effects to causes takes place through a process of thought which includes a series of abstractions. The first decisive one occurs when we conceive of one or a few of the actual causal components as modified in a certain direction and then ask ourselves whether under the conditions which have been thus changed, the same effect (the same, i.e. in ‘essential’ points) or some other effect ‘would be expected.'”

Max Weber: The Interpretation of Social Reality, p. 20

Thinking about the counterfactual means breaking a problem up into components, and that requires concepts.

Concepts

A good explanation clearly defines its concepts, and chooses ones that are useful. Defining concepts helps the listener follow the explanation. Picking the right ones means choosing concepts that enable a more accurate and more useful causal model. The need for clear, useful concepts in explanation is central to the idea of “explainability” in machine learning.

A deep learning model might be extremely good at prediction; it fits the facts. And it might even seem to offer causal models: under the right assumptions, a causal effect is, statistically, just the difference between two conditional probabilities, and some machine learning models can estimate causal effects reasonably accurately. But a deep learning model trained on individual pixels or characters won’t be interpretable or explainable. Its causal insights don’t always transfer easily into the heads of human beings. And that’s because it lacks easily defined, useful concepts. A deep learning model arguably “learns” meaningful notions of things as it translates pixels or characters, across layers, toward a prediction. But those notions aren’t recognizable concepts that people can work with. To make the model explainable, we need to provide concepts that people can make sense of and use.
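To make the “difference between two conditional probabilities” point concrete, here’s a toy sketch with made-up data. It only reads as a causal effect under strong assumptions, e.g. that the treatment is as good as randomly assigned; with confounding you’d need to adjust for it.

```python
import pandas as pd

# Made-up observations: whether each unit was treated, and whether the outcome occurred.
df = pd.DataFrame({
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],
})

p_treated = df.loc[df["treated"] == 1, "outcome"].mean()   # P(outcome | treated)   = 0.75
p_control = df.loc[df["treated"] == 0, "outcome"].mean()   # P(outcome | untreated) = 0.25

# The difference of the two conditional probabilities: 0.50 in this toy example.
print(f"Estimated effect: {p_treated - p_control:.2f}")
```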

Coherence

A good explanation is logical, or at least not illogical. An explanation links together concepts and facts into causal models in reasonable ways, without logical or mathematical error or contradiction. That’s easy enough to say; the question is what standard of logic we hold an explanation to. Must it come with a formal proof of its coherence? Or is some loose feeling that it “makes sense” enough? Deciding on that standard depends on context.

Context

A good explanation fits its context. It’s appropriate for its audience: a good explanation of macroeconomics looks different for a four-year-old than for a college student. It includes the right kind (and the right amount) of background information to help the audience understand what’s being explained. And it considers the goals of the speaker, the listener, and society at large. It aims to help actual people in the world achieve their purposes. That admittedly hazy criterion is the starting point for deciding what counts as good enough in terms of both empirical adequacy and coherence.

Summing up

So there it is. Pretty loose and subjective and imperfect, of course. But in my estimation a good explanation:

  • Fits the facts and proposes empirically plausible cause-effect relationships
  • Defines its terms and relies on concepts that feel useful and appropriate
  • Makes logical sense
  • Offers helpful background context and takes into account its audience

Durkheim on empiricism and economics

From 1938:

“The famous law of supply and demand for example, has never been inductively established, as should be the case with a law referring to economic reality. No experiment or systematic comparison has ever been undertaken for the purpose of establishing that in fact economic relations do conform to this law. All that these economists do, and actually did do, was to demonstrate by dialectics that, in order properly to promote their interests, individuals ought to proceed according to this law, and that every other line of action would be harmful to those who engage in it and would imply a serious error of judgement. It is fair and logical that the most productive industries should be the most attractive and that the holders of the products most in demand and most secure should sell them at the highest prices. But this quite logical necessity resembles in no way that which the true laws of nature present. The latter express the regulations according to which facts are really interconnected, not the way in which it is good that they should be interconnected.”

The Rules of Sociological Method, p. 26. Via Max Weber: The Interpretation of Social Reality, p. 18.