Bias in the market for change

Earlier this year I wrote about loss aversion and politics. Here's a quick snippet on the same theme from Felix Oberholzer-Gee's excellent new book, Better, Simpler Strategy. He covers three cases of technological change (radio, PCs, and ATMs) and notes that while they were expected to be pure substitutes for records, paper, and bank tellers, respectively, they turned out to be complements that increased demand for those things:

Did you notice a pattern in the three examples? In each instance, we predicted substitution when in fact the new technology turned out to increase the willingness-to-pay for existing products and activities. This type of bias is the norm. We fear change; potential losses loom larger than similar gains, a phenomenon that psychologists Amos Tversky and Daniel Kahneman call loss aversion. Loss aversion keeps us preoccupied with the risk of substitution even when we look at complementarities.

p. 81
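
For the formally inclined, Tversky and Kahneman's prospect theory captures "losses loom larger" with a value function that is steeper for losses than for gains, roughly

$$
v(x) = \begin{cases} x^{\alpha} & \text{for gains } (x \ge 0) \\ -\lambda\,(-x)^{\alpha} & \text{for losses } (x < 0) \end{cases}
$$

with their estimates around $\alpha \approx 0.88$ and a loss-aversion coefficient $\lambda \approx 2.25$: a loss stings roughly twice as much as an equal-sized gain feels good.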

Loss aversion doesn’t just change the politics of change, but the market for it, too.

More on social epistemology

A few weeks back I wrote about the importance of social learning. Yes, it’s important to try to think clearly and logically where you can, but in practice we’re mostly forced to rely on cues from others to reach our beliefs.

Will Wilkinson makes this point well and in much greater depth in a recent post on conspiracy and epistemology. Along the way he highlights where he breaks from the “rationalist” community:

Now, I’ve come to think that people who really care about getting things right are a bit misguided when they focus on methods of rational cognition. I’m thinking of the so-called “rationalist” community here. If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame…

It’s really not so hard. In any field, there are a bunch of people at the top of the game who garner near-universal deference. Trusting those people is an excellent default. On any subject, you ought to trust the people who have the most training and spend the most time thinking about that subject, especially those who are especially well-regarded by the rest of these people.

I mostly agree: this is the point I was trying to make in my post on social learning.

But for the sake of argument we should consider the rationalist’s retort. Like at least some corners of the rationalist community, I’m a fan of Tetlock’s forecasting research and think it has a lot to teach us about epistemology in practice. But Tetlock found that experts aren’t necessarily that great at reaching accurate beliefs about the future, and that a small number of “superforecasters” seem, on average, to outperform the experts.

Is Wilkinson wrong? Might the right cognitive toolkit (probability, knowledge of biases, etc.) be better than deferring to experts?

I think not, for a couple of reasons. First off, sure, some people are better than experts at certain forms of reasoning, but what makes you think that's you? I've done forecasting tournaments; they're really hard. Understanding Bayesian statistics does not mean you're a superforecaster with a track record of out-reasoning others. Unless you've proven it, it's hubris to think you're better than the experts.

I'd also argue that the superforecasters are largely doing a form of what Wilkinson is suggesting, albeit with extra stuff on top. Their key skill is arguably figuring out who and what to trust. Yes, they're also good at probabilistic thinking and aware of their own biases, but above all they're extremely good information aggregators.

And that leads me to maybe my key clarification on Wilkinson. He says:

A solid STEM education isn’t going to help you and “critical thinking” classes will help less than you’d think. It’s about developing a bullshit detector — a second sense for the subtle sophistry of superficially impressive people on the make. Collecting people who are especially good at identifying trustworthiness and then investing your trust in them is our best bet for generally being right about things.

I'd put it a bit differently. If by "critical thinking" he means, basically, a logic class, then sure, I'm with him. What you need is not just the tools to reason for yourself: you need to learn how to figure out who to trust. So far, so good.

But I wouldn’t call this a “bullshit detector” exactly, though of course that’s nice to have. Another key lesson from the Tetlock research (and I think confirmed elsewhere) is that a certain sort of open-mindedness is extremely valuable–you want to be a “many model” thinker who considers and balances multiple explanations when thinking about a topic.

That’s the key part of social learning that I’d emphasize. You want to look for people who think clearly but with nuance (it’s easy to have one but not both), who seriously consider other perspectives, and who are self-critical. Ideally, you want to defer to those people. And if you can’t find them, you want to perform some mental averaging over the perspectives of everyone else.

Best case, you find knowledgeable “foxes” and defer to them. Failing that, you add a bit of your own fox thinking on top of what you’re hearing.

Doing that well has almost nothing to do with Bayes' theorem. Awareness of your own biases can, I think, help, though it doesn't always. And knowledge of probability is often useful. But reaching true beliefs is, in practice, still a social activity. As Wilkinson says, it's mostly a matter of trust.
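
To make the "information aggregator" idea a bit more concrete, here's a toy sketch of what "mental averaging" over several perspectives might look like if you actually wrote it down. None of this comes from Tetlock or Wilkinson; the sources, weights, and the extremizing knob are all made up for illustration (though the forecast-aggregation literature does find that a modest push away from 0.5 often improves a pooled estimate).

```python
# Toy sketch: pool several sources' probability estimates for some event,
# weighted by how much you trust each source. All numbers are illustrative.

def pooled_forecast(estimates, weights, extremize=1.0):
    """Weighted average of probabilities; extremize > 1 pushes the pooled
    estimate away from 0.5 (in odds space), reflecting the idea that the
    plain average of noisy forecasts tends to be underconfident."""
    avg = sum(p * w for p, w in zip(estimates, weights)) / sum(weights)
    odds = (avg / (1 - avg)) ** extremize
    return odds / (1 + odds)

sources = [0.70, 0.55, 0.80]   # three perspectives on the same question
trust   = [2.0, 1.0, 1.0]      # how much weight you give each one
print(pooled_forecast(sources, trust))                  # ~0.69, plain weighted average
print(pooled_forecast(sources, trust, extremize=1.5))   # ~0.77, nudged toward the extreme
```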

Theory and replication

Economics studies tend to replicate at a higher rate than psychology studies. Why? One possibility is that economics has a more unified theoretical framework to help guide researchers toward hypotheses that are more likely true, whereas theories in psychology are numerous and not well integrated.

Joseph Henrich has made this argument and wants psychology to root itself in evolutionary theory. And Matt Clancy, an innovation researcher, lays out the case at Works in Progress. He argues that theory helps lead researchers to hypotheses that are more likely to be true, and that broad, unified theoretical frameworks give a new study more chances to support or refute an old one without being an actual replication.

I've been skeptical of the idea that more theory is the answer to the replication crisis, mostly because I think the dominant unified framework in economics has a lot of downsides. One of the biggest movements in the field in the last 30 years was behavioral economics, which largely just pointed out empirically the many ways the dominant theoretical framework failed to capture economic behavior. It borrowed from psychology, wasn't especially theoretically unified, and represented real progress for economics. In macro, meanwhile, large chunks of the profession seemingly failed to understand the Great Recession because of their preference for theory — for beauty over truth, in Krugman's estimation.

But maybe theory does help with replication. Maybe that's what you get in return for those other limitations: sure, you miss a huge chunk of human behavior, but where your theories do apply, you produce stable results.

Clancy writes on his Substack about a study looking at how theory affects publication bias. It presents evidence that when a theory predicts a specific relationship (rather than being ambiguous and allowing for multiple results), there's more publication bias. Microeconomic theory predicts that less of a good is demanded at higher prices (the demand curve slopes down). So:

Studies that estimate demand exhibit much more selection bias than those that don’t… In other words, when economists get results that say there is no relationship between price and demand, or that demand goes up when prices go up, these results appear less likely to be published.

So what do we make of this?

If you're really skeptical of the empirical turn in economics and think there's a laundry list of other problems with these papers beyond publication bias, you might argue that this "bias" is what's helping economics papers replicate better. Publication bias sounds bad, but in this view empirical social science is so screwed up that theory is serving as a useful line of defense. You have a paper finding that the demand curve actually slopes up? That defies basic theory, so get it out of here. One more spurious result saved from publication.

The more straightforward response, I think, is to view this as a risk of too much deference to theory. Yes, theory saves some bad, spurious papers from being published, but it’s a real problem if a theory is allowed to capture the publication process. Theory is essentially banning results that contradict it!
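
To see the mechanism, here's a toy simulation (my own illustration, not from Clancy or the underlying study; every number is made up). Suppose the true price elasticity of demand is mildly negative and each study estimates it with noise. If results with the "wrong" sign rarely get published, the published literature ends up more negative than the truth even though no individual study is biased.

```python
# Toy simulation of sign-based publication screening. All numbers invented.
import random

random.seed(0)
TRUE_ELASTICITY = -0.3       # the "true" effect of price on quantity demanded
NOISE_SD = 0.5               # sampling noise in each study's estimate
N_STUDIES = 10_000
PUBLISH_WRONG_SIGN = 0.2     # theory-defying (positive) estimates published less often

estimates = [random.gauss(TRUE_ELASTICITY, NOISE_SD) for _ in range(N_STUDIES)]
published = [e for e in estimates if e < 0 or random.random() < PUBLISH_WRONG_SIGN]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean of all estimates:       {mean(estimates):+.3f}")   # close to -0.300
print(f"mean of published estimates: {mean(published):+.3f}")   # noticeably more negative
```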

My hunch is that this is another reason to favor a "many models" approach. Sure, maybe you need more theory than psychology has. But rather than aspiring to one dominant unified framework — everything is optimizing, self-interested agents! everything is grounded in evolution! — I think you'd more realistically want a manageable collection of theoretical frameworks. For example, economics needs models that can account for irrationality and cooperation, even if they aren't perfect fits with the basic workhorse micro models.

This is the Dani Rodrik view:

“Models” — the abstract, typically mathematical frameworks that economists use to make sense of the world — form the heart of the book. Models are both economics’ strength and its Achilles’ heel; they are also what makes economics a science — not a science like quantum physics or molecular biology, but a science nonetheless.

Rather than a single, specific model, economics encompasses a collection of models. The discipline advances by expanding its library of models and by improving the mapping between these models and the real world. The diversity of models in economics is the necessary counterpart to the flexibility of the social world. Different social settings require different models. Economists are unlikely ever to uncover universal, general-purpose models.

(Posts on economic models here, here, here, here, here.)

This is all very speculative, but my sense is that the many-model approach still lets theory inform hypotheses while allowing data that challenge one or more of those theories to get published.

The politics of diffusion

Last post I pointed to a study noting the huge costs of delaying the spread of beneficial technologies. But I ended with a caveat: when the rapid spread of a technology makes it harder to regulate (broadly construed), that potentially strengthens the case for moving slower.

So what does that look like? And how often does it happen?

Three buckets come to mind:

  • Influence peddling: The spread of a technology creates a new interest group, most obviously the people making it, that lobbies for its interests in a way that limits the technology’s benefits.
  • Destabilization: The rapid spread of a technology destabilizes politics or some other aspect of society in a way that threatens well-being.
  • Entrenchment: The rapid spread of a technology entrenches existing elites or regimes such that further political progress is harder.

The first one is certainly common, but I suspect the takeaway isn't so much to slow down tech as it is simply to build good political institutions. Business will always have some sway in politics, but tech or no tech, you want to set up political institutions that cannot be easily captured. And in a lot of cases, new interest groups created by tech aren't exclusively bad: yes, big tech companies might capture some areas of legislation, but ridesharing pushes back on cab cartels and, far more importantly, renewable energy companies are a countervailing force to the fossil fuel lobby. Broadly, it seems, the lesson isn't to slow down tech until you've limited businesses' influence on politics but just to do as much as you can to limit businesses' influence on politics! Sometimes new tech makes that a bit harder, and sometimes it might even make it a bit easier.

That leaves destabilizing and entrenching technologies.

Perhaps the most obvious category is weapons. Their spread can certainly be destabilizing or entrenching. But it's not obvious that they're beneficial in the first place, so the lesson there is "don't spread malicious technologies" more than it is "slow the spread of useful tech to make it easier to regulate."

A better example might be radio, which spread really quickly from 1920 to 1940.

Remember, the question isn't whether radio would be entrenching or destabilizing, both, or neither. It's whether, assuming it seemed net positive at the beginning, faster diffusion limited its eventual benefits.

Radio certainly had all sorts of unanticipated consequences, like nationalizing a new pop culture and bringing more advertising into homes. And it would be used as a tool of propaganda during World War II. But it’s not clear that any of those effects depended on the pace of its spread. The main effect of that rapid spread was egalitarian. The rapid drop in price allowed rural Americans to get access just a few years after their urban counterparts.

Robert Gordon’s book The Rise and Fall of American Growth shares a couple of quotes on its impact:

“[A survey in the 1930s found] Americans would rather sell their refrigerators, bath tubs, telephones, and beds to make rent payments, than to part with the radio box that connected the world.”

“The radio… offered the compensations of fantasy to lonely people with deadening jobs or loveless lives, to people who were deprived or emotionally starved.” (p. 195)

Radio is the sort of technology, like social media, with potentially far-reaching social effects. But as far as I can tell, the speed of its diffusion was net positive.

Last post I ended by noting that it’s an empirical question how often rapid diffusion prevents adequate regulation such that you’d want to seriously slow it down until the institutional context improves. My hunch is that it’s more the exception than the rule.

The benefits of tech adoption

Dylan Matthews has a good column in Vox’s Future Perfect newsletter (can’t find a link) that gets at something I’ve been thinking about a lot: the potentially large, but unseen costs of slowing the spread of useful technologies.

He’s writing about a new paper estimating the benefits of the Green Revolution:

The [Green Revolution] was a widespread global agricultural shift in the 1960s and 1970s, encouraged by US-based foundations like Ford and Rockefeller and implemented by the governments of countries in Asia and Latin America, toward higher-yield varieties and cultivation methods for rice, wheat, and other cereals…

The new paper, from economists Douglas Gollin, Casper Worm Hansen, and Asger Wingender and set to be published in the influential Journal of Political Economy soon, estimates what the effects of delaying the Green Revolution by 10 years would have been. IRRI started breeding rice crops in 1965, so in this counterfactual the revolution would have begun in 1975 instead.

They estimate that such a delay would have reduced GDP per capita in countries analyzed by about one-sixth; worldwide, the cost to GDP would total some $83 trillion, or about as much as one year of world GDP. And if the Green Revolution had never happened, GDP per capita in poor countries would be half what it is today.

I have not read the paper, but I trust Dylan’s research coverage.

This is the sort of hidden cost that the folks I characterize as pursuing the “innovation agenda” worry about. Might we look back one day and kick ourselves for having delayed the spread of artificial intelligence? Of offshore wind? Of self-driving cars?

I suspect that in at least some of these cases there are large costs to delay.

That isn’t an argument for just charging full speed ahead without concern for safety and equity. It was clear from my look back at the early days of electricity, for example, that getting the regulations right was an essential part of making the technology actually good for well-being. Spreading it too fast, without the right laws and institutions and even norms, can cause considerable suffering.

And then there’s the point Steven Johnson made in his New York Times Magazine excerpt of his book on human lifespans:

How did this great doubling of the human life span happen? When the history textbooks do touch on the subject of improving health, they often nod to three critical breakthroughs, all of them presented as triumphs of the scientific method: vaccines, germ theory and antibiotics. But the real story is far more complicated. Those breakthroughs might have been initiated by scientists, but it took the work of activists and public intellectuals and legal reformers to bring their benefits to everyday people. From this perspective, the doubling of human life span is an achievement that is closer to something like universal suffrage or the abolition of slavery: progress that required new social movements, new forms of persuasion and new kinds of public institutions to take root. And it required lifestyle changes that ran throughout all echelons of society: washing hands, quitting smoking, getting vaccinated, wearing masks during a pandemic.

His point is that it’s not just that we can’t spread the tech until the right laws are in place; it’s that laws, norms, and institutions are an important part of how the tech develops. There’s no clear dividing line between the tech itself and the context in which it spreads.

Social media makes that point well, I think. It’s not just that social media’s effects depend on the norms and rules of the day; it’s that its very development reflects that context. It is the way it is because of the context in which it developed.

Back to the Green Revolution. Surely some of this is true there as well; the development and spread of those agricultural techniques no doubt depended in part on the specifics of time and place. Is that a reason to doubt the paper’s main idea, that spreading those technologies a decade earlier would have been massively beneficial? I don’t think so; that’s taking the Johnson point too far.

So what does this add up to? The effects of a technology depend both on its technical features today (the ability to improve crop yields, or to drive a car using software without crashing) and on the laws, norms, and institutions that exist around it.

Delaying the spread of a useful technology can be extraordinarily costly–unless you hold some very specific ideas about the institutional context, how it will change, and how the pace of technological diffusion affects that process.

All else equal, we should want useful technologies to spread quickly. All else equal, we should want our laws, norms, and institutions to improve a technology's benefits: both by making it more beneficial at a given point in time (say, by outlawing forms of exploitation) and by changing its development path to make it more beneficial over time.

To the extent these two processes are separate, these points mostly hold: deploy self-driving cars as fast as we can, and improve the laws around self-driving cars as fast as we can. But they're not wholly separate. Where we most need to worry is when a technology's spread constrains our ability to improve it later. You could write a fun example of an AI that captures Congress or something, but you don't need to go all sci-fi; just think about Uber. You might think, all else equal, it's good to spread the (admittedly minor) technology of mobile ride-hailing. And then alongside it you'd want to change laws to prevent exploitation, pollution, and the like. But Uber's spread in some ways makes it harder to change those laws. Or think of the shipping container: its spread changed the political economy around trade in ways that were hard to predict ahead of time. Context shapes diffusion, but diffusion also shapes context.

This is the challenge for the innovation agenda crowd. It’s important to loudly explain the hidden potential costs to delaying the spread of useful technologies. But tallying those costs depends on political economy: speeding a useful technology’s spread doesn’t make sense if that spread hampers institutions to the point of negating the technology’s benefits.

It’s at least plausible that sometimes the faster a technology spreads, the harder it is to (ever) successfully regulate. The empirical question is how often that happens.

Thinking clearly

Really nice piece from Aeon’s Psyche magazine on thinking clearly. I’ve quoted a few bits, but read the whole thing:

In philosophy, what’s known as standard form is often used to set out the essentials of a line of thought as clearly as possible. Expressing your thinking in standard form means writing out a numbered list of statements followed by a conclusion. If you’ve done it properly, the numbered statements should present a line of reasoning that justifies your final conclusion…

You might have seen examples of this approach before, or used it in your own work. You might also have encountered a great deal of discussion around logical forms, reasonable and unreasonable justifications, and so on. What I find most useful about standard form, however, is not so much its promise of logical rigour as its insistence that I break down my thinking into individual steps, and then ask two questions of each one:

Why should a reasonable person accept this particular claim?

What follows from this claim, once it’s been accepted?

When it comes to clarifying my thoughts and feelings, the power of such an approach is that anything relevant can potentially be integrated into its accounting – but only if I’m able to make this relevance explicit…

Upon what basis can I justify any claims? Some will rely on external evidence; some on personal preferences and experiences; some on a combination of these factors. But all of them will at some point invoke certain assumptions that I’m prepared to accept as fundamental. And it’s in unearthing and analysing these assumptions that the most important clarifications await…

This, I’d suggest, is the most precious thing about clearly presenting the thinking behind any point of view: not that it proves your rightness or righteousness, but that it volunteers your willingness to participate in a reasoned exchange of ideas. At least in principle, it suggests that you’re prepared to:

Justify your position via evidence and reasoned analysis.

Listen to, and learn from, perspectives other than your own.

Accept that, in the face of sufficiently compelling arguments or evidence, it might be reasonable to change your mind.

Social learning

Via Bloomberg Opinion I came across this essay by David Perell on “How philosophers think.” It’s really a critique of shallow and conformist thinking.

The point is, you can read all the Wikipedia summaries you want, but they won’t give you a holistic understanding of an idea. That only happens once you have a layered, three-dimensional perspective, which writing helps you achieve. 

Charlie Munger calls this the difference between “real knowledge” and “chauffeur knowledge.” He tells an apocryphal story about Max Planck, who went around the world giving the same lecture on quantum mechanics after he won the Nobel Prize. After hearing the speech multiple times, the chauffeur asked Planck if he could give the next lecture. Planck said, “Sure.” At first, the lecture went well. But afterwards, a physics professor in the audience asked a follow-up question that stumped the chauffeur. Only Max Planck, who had the background knowledge to support the ideas in the talk, could answer it. 

From the chauffeur’s story, we learn that you understand an idea not when you’ve memorized it, but when you know why its specific form was chosen over all the alternatives. Only once you’ve traveled the roads that were earnestly explored but ultimately rejected can you grasp an idea firmly and see it clearly, with all the context that supports it. 

The more pressure people feel to have an opinion on every subject, the more chauffeur knowledge there will be. In that state of intellectual insecurity, people rush to judgment. When they do, they abandon the philosophical mode of thinking. In turn, they become slaves to fashionable ideas and blind to unconscious assumptions. 

This resonates: I think back to lots of opinions I picked up quickly from blogs and newspaper columns. I was like the chauffeur, able to repeat what I'd read about, say, the merits of fiscal stimulus, but without any deep understanding of the subject matter.

On the other hand, there were opinions I formed in roughly that manner that seem to me to have been basically right, even as I’ve learned more about the topic.

The essay continues:

People who don’t have the tools to reason independently make up their minds by adopting the opinions of prestigious people. When they do, they favor socially rewarded positions over objective accounts of reality. A Harvard anthropologist named Joseph Henrich laid the empirical groundwork for this idea in his book, The Secret of Our Success. In it, he showed that evolution doesn’t prioritize independent thinking. Humanity has succeeded not because of the intelligence of atomic individuals, but because we’ve learned to outsource knowledge to the tribe…

…Humans are such prolific imitators that they even copy the stylistic movements of people they admire, even when they seem unnecessary. Most of this happens outside of conscious awareness. And they don’t just copy the actions of successful people. They copy their opinions, too. Henrich calls this “the conformist transmission” of information. All this suggests that social learning is humanity’s primary advantage over primates and, in Henrich’s words, “the secret of our success.”

But sometimes, that conformity spirals out of control.

Perell goes on to talk about all the problems this can cause but, as he acknowledges, this kind of epistemological outsourcing is also really useful.

And when I think about what good thinkers do, it’s not just that they constantly reason deeply and independently. It’s that they’re good at using “social learning”–chauffeur knowledge–to get toward the truth.

Take forecasters, since that's a well-studied area. Tetlock's superforecasters are certainly capable of reasoning deeply, but oftentimes they succeed because of efficient social learning. They read widely, make good calls about who to trust, and then mentally average over several perspectives. Sometimes they reach the truth without deeply understanding the topic. You see similar stuff reading about how fact-checkers do their job or what information literacy experts recommend.

All of which makes sense, since we just can’t understand everything sufficiently; outsourcing is the only option. Social learning is the norm, and it’s an underrated skill to do it in a way that helps you get at the truth when that’s your aim, rather than just signaling an affiliation.

We shouldn't just ask people to turn on "philosopher mode" more often (although sometimes, sure); we should also teach them to be better social learners.

Innovation and well-being

Are innovative places good places to live?

I’ve been thinking about that question, and what follows is a quick bit of exploration — not anything definitive.

I wanted to see if the states with more innovative economies are also the states with better quality of life. To do that, I compared the Social Progress Index, a measure of well-being, to the tech think tank ITIF's "New Economy" index, which purports to rank states' economies by their emphasis on innovation. As with all broad rankings like this, your mileage may vary.

  • Here’s detail on the ITIF ranking
  • Social Progress Index for states info here and here

Without further ado…

(There are a million questions and caveats I won't get into, but one to note: the Social Progress score is from 2018; the ITIF index is from 2020.)
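
For what it's worth, the comparison itself is simple. Here's a minimal sketch of roughly what it amounts to, assuming you've saved each index as a CSV; the file names and column names are hypothetical, not the actual files behind the chart.

```python
# Minimal sketch: merge two state-level indices and check how they relate.
# File and column names are hypothetical.
import pandas as pd

itif = pd.read_csv("itif_new_economy_2020.csv")   # columns: state, itif_score
spi = pd.read_csv("social_progress_2018.csv")     # columns: state, spi_score

merged = itif.merge(spi, on="state")
print(merged[["itif_score", "spi_score"]].corr(method="spearman"))

# A labeled scatter plot to eyeball the relationship (requires matplotlib).
ax = merged.plot.scatter(x="itif_score", y="spi_score")
for _, row in merged.iterrows():
    ax.annotate(row["state"], (row["itif_score"], row["spi_score"]), fontsize=6)
ax.figure.savefig("innovation_vs_wellbeing.png", dpi=150)
```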

The innovation agenda

Ezra Klein has a column about the coronavirus agenda of economist Alex Tabarrok:

Here’s a question I’ve been mulling in recent months: Is Alex Tabarrok right? Are people dying because our coronavirus response is far too conservative?

I don’t mean conservative in the politicized, left-right sense. Tabarrok, an economist at George Mason University and a blogger at Marginal Revolution, is a libertarian, and I am very much not. But over the past year, he has emerged as a relentless critic of America’s coronavirus response, in ways that left me feeling like a Burkean in our conversations.

I don’t have anything to add on Covid-19, but I see the Tabarrok agenda as being centrally about putting innovation first — something I’ve described before. He’s concerned, first and foremost, with applying new ideas to solve important problems and pushes back against policies and institutions that exhibit a status quo bias.

Tabarrok is a libertarian, albeit not an incredibly doctrinaire one. And that's the community I see best represented in the "innovation-first" agenda. It's not that hard to see why pragmatic libertarians would be drawn to it: if you're defending libertarianism not on purely principled grounds but on consequences, the argument typically hinges on the idea that innovation and economic growth are incredibly important and that governments often suppress them. Libertarians need a theory of how innovation happens for their arguments to work, so it's not all that surprising that many of them have one.

But what do liberals have to offer in the discussion around innovation and innovation-fueled growth? In the case of Covid-19, left-of-center thinkers seemed to me quite reasonable in their approach to balancing speed and certainty. But on other innovation issues it can sometimes seem that they are borrowing a page from conservatives, standing athwart technology and yelling stop.

Ezra has said multiple times on his podcast (I'm paraphrasing) that the left needs a better theory of technology, and I suspect this is what he means.

I think of some of my own work as speaking to this need. It’s part of what I have had in mind when writing about the welfare state and entrepreneurship: my read of the evidence is that a more generous government support system increases innovation and dynamism. See here and here.

But do liberals care? If skepticism about growth and technology is the norm, the argument that liberal policies can further both won't hold much weight. I think that's a mistake. In the long run, ideas-driven growth is a (the?) primary driver of living standards. The Tabarroks of the world are right to put it in the foreground. We really do need to worry about status quo bias in our institutions and make innovation policy a central concern. Doing that doesn't commit you to libertarianism.