Durkheim on empiricism and economics

From 1938:

“The famous law of supply and demand for example, has never been inductively established, as should be the case with a law referring to economic reality. No experiment or systematic comparison has ever been undertaken for the purpose of establishing that in fact economic relations do conform to this law. All that these economists do, and actually did do, was to demonstrate by dialectics that, in order properly to promote their interests, individuals ought to proceed according to this law, and that every other line of action would be harmful to those who engage in it and would imply a serious error of judgement. It is fair and logical that the most productive industries should be the most attractive and that the holders of the products most in demand and scarcest should sell them at the highest prices. But this quite logical necessity resembles in no way that which the true laws of nature present. The latter express the relations according to which facts are really interconnected, not the way in which it is good that they should be interconnected.”

The Rules of Sociological Method, p. 26. Via Max Weber: The Interpretation of Social Reality, p. 18.

The political economy of attention

A review paper from NBER hits on something I’ve been thinking about lately: how media and attention factor into political economy.

How do groups of people coordinate to take political action? When are they able to overcome free rider problems? These are central questions in political economy, and one line of thinking says that smaller, organized groups will have an easier time than larger, diffuse groups.

From that you get the notion of concentrated benefits and diffuse costs, and vice versa: when a small, organized group reaps most of the benefits or bears most of the costs of a policy, it tends to get its way, even if, on net, it's a bad policy. Loose banking regulations have concentrated benefits (for bankers) and diffuse costs (to the public). It's easier for banks to coordinate and hire a lobbyist than for the public to pay close attention to banking regulation. Even if, on net, loose banking regulations are a bad idea, the banks have more motivation and an easier time organizing, and so they get what they want.

Except it doesn’t always work that way. Sometimes the media directs enough public attention to an issue that the diffuse public prevails over concentrated, organized interests. And that suggests a big role for models of attention in political economy: when do people pay enough attention and care enough about something to overcome the difficulty that diffuse groups face in politics?

The paper is mostly an empirical review, but it includes a basic model in which people decide whether it's worth their time to invest in political action. That involves gauging how likely other people are to take that action too, and the kind of information they get from the media shapes that judgment:

“The first key lesson is: the role of media in spreading information may facilitate or hinder collective action, depending on the content of that information…

There is a second key lesson: the effectiveness of the media in spreading information eventually facilitates collective action…

Our third key lesson: homophily in social networks dampens the effect of information on collective action.”

Basically everyone is looking for evidence that other people are willing to participate, too. Media gives them hints as to whether that’s the case or not; the more the media gives you a sense that others will join, the more likely you are to join. The more your network is just filled with people like you, the less confident you are that the information you’re getting is actually a clue about whether others will join (maybe you’re just being fed the small subset of people who are like you).
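To make that concrete, here is a toy version of the decision (my own sketch in Python, not the paper's model; every parameter is invented for illustration). Each person weighs an expected benefit that rises with how many others she believes will join against a private cost, and homophily makes her discount the media's signal about others back toward an uninformative prior:

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_rate(media_signal, homophily, max_cost=0.6, n_agents=10_000):
    """Toy threshold model (illustrative only, not the NBER paper's model).

    Each agent joins if the share of others she believes will join exceeds
    her private cost of participating. The media signal is a noisy estimate
    of others' willingness; homophily (0 to 1) is how much that signal just
    reflects people like her, so she shrinks it toward an uninformative 0.5.
    """
    costs = rng.uniform(0, max_cost, n_agents)           # heterogeneous private costs
    noise = rng.normal(0, 0.1, n_agents)
    perceived = np.clip(media_signal + noise, 0, 1)       # what the media suggests
    believed = (1 - homophily) * perceived + homophily * 0.5
    return np.mean(believed > costs)                      # share who participate

for h in (0.0, 0.5, 0.9):
    print(f"homophily={h}: {participation_rate(media_signal=0.8, homophily=h):.2f}")
```

With a strongly positive media signal, more homophily drags beliefs back toward the prior and participation falls; with a weak signal the discounting cuts the other way.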

This is a great topic, and there’s clearly something to this model: you probably are more likely to join the cause if you think there’s lots of energy around it and a realistic chance of success.*

But would you really be put off by the fact that the information you were getting from media was reflecting back just what people like you thought? One, it's hard to imagine someone reflecting on that and then deciding the prudent thing is to discount the quality of the information. Two, for most people the fact that lots of people like them are participating is probably itself a reason to participate. Everyone who cares about what you care about will be there: that's a reason for most people to join, not to say 'That makes me uncertain of our prospects.'

And that gets to my skepticism about this model. Why model attention rationally in the first place? What if we thought about media and attention as a non-rational way that people overcome the selfish desire to free ride? Usually it doesn't make narrowly selfish, 'rational' sense to put in the time for some cause where the benefits are diffuse (opposition to loose banking regulation!), but people don't just make that sort of decision rationally. They decide in part based on emotion, social cues, and a sense of identity.

In the notes for his political economy course, Daron Acemoglu describes the problem diffuse groups face:

All individuals within the social groups must find it profitable to take the same actions, and often, take actions that are in the interest of the group as a whole. This leads to what Olson has termed the “free rider” problem: individuals may free ride and not undertake actions that are costly for themselves but beneficial for the group. Therefore, any model that uses social groups as the actor must implicitly use a way of solving the free-rider problem. The usual solutions are
• Ideology: groups may develop an ideology that makes individuals derive utility from following the group’s interests.
• Repeated interactions: if individuals within groups interact more often with each other, certain punishment mechanisms may be available to groups to coerce members to follow the group’s interests.
• Exclusion: certain groups might arrange the benefits from group action such that those who free ride do not receive the benefits of group action.
[…Currently, there is little systematic work in economics on how social groups solve the free-rider problem, and this may be an important area for future work…]

The direction I'm thinking comes closest to “ideology”: media taps into individuals' emotions and sense of identity in ways that make them more likely to participate. You can write that down as a utility-maximization model with the right preferences, if you must, but it's not mostly about gauging the likelihood of success or whether people not like you will contribute.
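A stripped-down way to see what that identity channel does to the individual calculation (the numbers below are purely illustrative, not anyone's calibrated model): add an identity payoff directly to the participation decision and the free-rider logic can flip with no change at all in the perceived odds of success.

```python
def joins(benefit_share=0.001, cost=1.0, identity_payoff=0.0, total_benefit=100.0):
    """Toy free-rider calculation (illustrative numbers only).

    A narrowly selfish member of a diffuse group stays home: her individual
    share of the collective benefit is tiny next to her private cost of
    showing up. An identity/ideology payoff -- the emotional or social value
    of participating itself -- enters utility directly and can flip the
    decision without any change in expected success.
    """
    expected_utility = benefit_share * total_benefit - cost + identity_payoff
    return expected_utility > 0

print(joins())                          # False: classic free riding
print(joins(identity_payoff=1.5))       # True: identity does the work
```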

The question, to my way of thinking, is why some policy areas capture attention and so make it easier for the public to overcome free rider problems. The minutiae of banking regulation, for example, don't seem to lend themselves to that sort of attention-driven coordination; even a couple of years after the financial crisis, banks were able to defang aspects of Dodd-Frank without much media attention or uproar.

But the sort of emotion- and affinity-driven model of attention I'm gesturing toward would help us understand whether, say, YIMBYism can succeed. That's a classic political economy problem: a concentrated cohort of property owners benefits from limits on construction, while a diffuse group (including people who don't yet live in the city) would benefit from more building. The property owners show up to all the meetings because they have so much at stake. Can the YIMBY movement overcome that?

The NBER paper's model would say it depends on whether renters think other renters care. And that if they think the loud YIMBYs on Twitter aren't representative of the public, they'll rationally discount the strength of that signal.

Whereas I’d say the question for YIMBYism is whether it can develop an emotional appeal, build a community people want to be a part of, and become a marker of identity and status. Either it’s a movement that sustains attention or it isn’t.

That's what I'd like to see: a behavioral model of attention, and then study of why different issues do and don't capture it.

*In some other models this may just make you want to free ride. Oddly that’s not really discussed much; there’s only one mention of free riding in the paper.

Software, management, competition

Software startups often target applications that many companies share – accounting, human resources, communications, etc. Companies want to digitize by purchasing off-the-shelf software. No one creates software for processes that underlie their unique competitive advantages. They buy excess capacity in departments that aren't their core business instead.

That’s one of many interesting bits from this post on software as management. I can’t recall where I came across it and don’t really know who the author is.

But it relates to the piece I wrote a few years back with James Bessen for HBR. In it, we linked firms’ software capabilities and startups’ ability to create new organizational architectures to the rise of large firms.

Software is at the center of competition between firms, but many firms lack the ability and/or the incentive to adopt software in ways that actually give them a competitive advantage.

Notes on innovation economics

This post is just to link together a few resources I want to keep track of, occasioned by the publication of a concise review of innovation economics by NBER this week.

Some posts on the “innovation agenda” here and here.

  • Update: Adding this overview post from New Things Under the Sun.
Bias in the market for change

    Earlier this year I wrote about loss aversion and politics. Here’s a quick snippet on this from Felix Oberholzer-Gee’s excellent new book Better, Simpler Strategy. He’s covering three cases of technological change (radio, PCs, and ATMs) and notes that while they were expected to be pure substitutes for records, paper, and bank tellers, respectively, they were actually complements and increased demand for those things:

    Did you notice a pattern in the three examples? In each instance, we predicted substitution when in fact the new technology turned out to increase the willingness-to-pay for existing products and activities. This type of bias is the norm. We fear change; potential losses loom larger than similar gains, a phenomenon that psychologists Amos Tversky and Daniel Kahneman call loss aversion. Loss aversion keeps us preoccupied with the risk of substitution even when we look at complementarities.

    p. 81

    Loss aversion doesn’t just change the politics of change, but the market for it, too.
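For reference, the asymmetry he's describing is usually written down with the Tversky–Kahneman value function, in which losses are scaled up by a coefficient λ greater than one (their commonly cited estimates are roughly α ≈ β ≈ 0.88 and λ ≈ 2.25):

$$
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0 \\
-\lambda\,(-x)^{\beta} & x < 0
\end{cases}
$$

A prospective loss weighs more than twice as heavily as an equal-sized gain, which is why the substitution story crowds out the complements story.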

More on social epistemology

    A few weeks back I wrote about the importance of social learning. Yes, it’s important to try to think clearly and logically where you can, but in practice we’re mostly forced to rely on cues from others to reach our beliefs.

    Will Wilkinson makes this point well and in much greater depth in a recent post on conspiracy and epistemology. Along the way he highlights where he breaks from the “rationalist” community:

    Now, I’ve come to think that people who really care about getting things right are a bit misguided when they focus on methods of rational cognition. I’m thinking of the so-called “rationalist” community here. If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame…

    It’s really not so hard. In any field, there are a bunch of people at the top of the game who garner near-universal deference. Trusting those people is an excellent default. On any subject, you ought to trust the people who have the most training and spend the most time thinking about that subject, especially those who are especially well-regarded by the rest of these people.

    I mostly agree: this is the point I was trying to make in my post on social learning.

    But for the sake of argument we should consider the rationalist’s retort. Like at least some corners of the rationalist community, I’m a fan of Tetlock’s forecasting research and think it has a lot to teach us about epistemology in practice. But Tetlock found that experts aren’t necessarily that great at reaching accurate beliefs about the future, and that a small number of “superforecasters” seem, on average, to outperform the experts.

    Is Wilkinson wrong? Might the right cognitive toolkit (probability, knowledge of biases, etc.) be better than deferring to experts?

    I think not, for a couple reasons. First off, sure some people are better than experts at certain forms of reasoning, but what makes you think that’s you? I’ve done forecasting tournaments; they’re really hard. Understanding Bayesian statistics does not mean you’re a superforecaster with a track record of out-reasoning others. Unless you’ve proven it, it’s hubris to think you’re better than the experts.

    I’d also argue that the superforecasters are largely doing a form of what Wilkinson is suggesting, albeit with extra stuff on top. Their key skill is arguably figuring out who and what to trust. Yes, they’re also good at probabilistic thinking and think about their own biases, but they’re extremely good information aggregators.

    And that leads me to maybe my key clarification on Wilkinson. He says:

    A solid STEM education isn’t going to help you and “critical thinking” classes will help less than you’d think. It’s about developing a bullshit detector — a second sense for the subtle sophistry of superficially impressive people on the make. Collecting people who are especially good at identifying trustworthiness and then investing your trust in them is our best bet for generally being right about things.

I'd put it a bit differently. If by critical thinking he means basically a logical reasoning class, then sure, I'm with him. What you need is not just the tools to reason yourself: you need to learn how to figure out who to trust. So far, so good.

    But I wouldn’t call this a “bullshit detector” exactly, though of course that’s nice to have. Another key lesson from the Tetlock research (and I think confirmed elsewhere) is that a certain sort of open-mindedness is extremely valuable–you want to be a “many model” thinker who considers and balances multiple explanations when thinking about a topic.

    That’s the key part of social learning that I’d emphasize. You want to look for people who think clearly but with nuance (it’s easy to have one but not both), who seriously consider other perspectives, and who are self-critical. Ideally, you want to defer to those people. And if you can’t find them, you want to perform some mental averaging over the perspectives of everyone else.

    Best case, you find knowledgeable “foxes” and defer to them. Failing that, you add a bit of your own fox thinking on top of what you’re hearing.
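If you wanted to write that 'mental averaging' step down, it's little more than a trust-weighted pool of other people's estimates. This is my own gloss, not a formula from Wilkinson or Tetlock:

```python
def pooled_belief(estimates, weights=None):
    """Trust-weighted average of other people's probability estimates
    (illustrative sketch only). With no basis for differential trust,
    it falls back to a straight average."""
    if weights is None:
        weights = [1.0] * len(estimates)
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

views = [0.9, 0.7, 0.6, 0.2]                         # four sources you follow
print(pooled_belief(views))                          # naive average: 0.60
print(pooled_belief(views, weights=[3, 2, 2, 1]))    # tilted toward the trusted foxes: ~0.69
```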

    Doing that well has almost nothing to do with Bayes’ theorem. Awareness of your own biases can, I think, help–though it doesn’t always. And knowledge of probability is often useful. But reaching true beliefs is, in practice, still a social activity. Like Wilkinson says, it’s mostly a matter of trust.

Theory and replication

    Economics studies tend to replicate at a higher rate than psychology studies. Why? One possibility is that economics has a more unified theoretical framework to help guide researchers toward hypotheses that are more likely true, whereas theories in psychology are numerous and not well integrated.

Joseph Henrich has made this argument, and wants psychology to root itself in evolutionary theory. And Matt Clancy, an innovation researcher, lays out the case at Works in Progress. He argues that theory helps lead researchers to hypotheses that are more likely to be true, and that broad, unified theoretical frameworks allow more chances for a new study to support or refute an old study without being an actual replication.

I've been skeptical of the idea that more theory is the answer to the replication crisis, mostly because I think the dominant unified framework in economics has a lot of downsides. One of the biggest movements in the field in the last 30 years was behavioral economics, which largely just pointed out empirically the many ways that the dominant theoretical framework failed to capture economic behavior. It borrowed from psychology, wasn't that theoretically unified, and represented real progress for the field of economics. In macro, meanwhile, large chunks of the profession seemingly failed to understand the Great Recession because of their preference for theory — for beauty over truth, in Krugman's estimation.

    But maybe theory does help with replication. Maybe that’s what you get in return for these other limitations; sure you miss a huge chunk of human behavior but where your theories do apply you produce stable results.

    Clancy writes on his substack about a study looking at how theory affects publication bias. It presents evidence that when a theory predicts a specific relationship (rather than being ambiguous and allowing for multiple results) there’s more publication bias. Microeconomic theory predicts that less of a good is demanded at higher prices (demand curve slopes down). So:

    Studies that estimate demand exhibit much more selection bias than those that don’t… In other words, when economists get results that say there is no relationship between price and demand, or that demand goes up when prices go up, these results appear less likely to be published.
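You can see the mechanics of that selection in a few lines of simulation (mine, not the study's data or method): generate noisy estimates around a true, downward-sloping elasticity, 'publish' only those with the theory-consistent sign, and the published average overshoots the truth.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative simulation of sign-based publication selection.
true_elasticity = -0.5
estimates = true_elasticity + rng.normal(0, 1.0, 10_000)   # many noisy studies
published = estimates[estimates < 0]                        # only theory-consistent signs survive

print(round(estimates.mean(), 2))    # ~ -0.5: all studies together are centered on the truth
print(round(published.mean(), 2))    # ~ -1.0: the published record exaggerates the effect
```

Screening on sign also throws away the very results (flat or upward-sloping demand) that would tell you the theory is failing somewhere.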

    So what do we make of this?

If you're really skeptical of the empirical turn in economics and think there's a laundry list of other problems with these papers beyond publication bias, you might argue that this “bias” is what's helping economics papers replicate better. Publication bias sounds bad, but in this view empirical social science is so screwed up that theory is serving as a useful line of defense. You have a paper finding that the demand curve actually slopes up? That defies basic theory, so get that out of here. One more spurious result saved from publication.

    The more straightforward response, I think, is to view this as a risk of too much deference to theory. Yes, theory saves some bad, spurious papers from being published, but it’s a real problem if a theory is allowed to capture the publication process. Theory is essentially banning results that contradict it!

My hunch is that this is another reason to favor a “many models” approach. Sure, maybe you need more theory than psychology currently has. But rather than aspiring to one dominant unified framework — everything is optimizing, self-interested agents! everything is grounded in evolution! — I think you'd more realistically want a manageable collection of theoretical frameworks. For example, economics needs models that can account for irrationality and cooperation, even if they aren't perfect fits with the basic workhorse micro models.

    This is the Dani Rodrik view:

    “Models” — the abstract, typically mathematical frameworks that economists use to make sense of the world — form the heart of the book. Models are both economics’ strength and its Achilles’ heel; they are also what makes economics a science — not a science like quantum physics or molecular biology, but a science nonetheless.

    Rather than a single, specific model, economics encompasses a collection of models. The discipline advances by expanding its library of models and by improving the mapping between these models and the real world. The diversity of models in economics is the necessary counterpart to the flexibility of the social world. Different social settings require different models. Economists are unlikely ever to uncover universal, general-purpose models.

    (Posts on economic models here, here, here, here, here.)

    This is all very speculative, but my sense is that the many model approach still allows theory to inform hypotheses while also allowing data that challenges one or more of those theories to get published.

The politics of diffusion

Last post I pointed to a study noting the huge costs of delaying the spread of beneficial technologies. But I ended by adding a caveat: when the rapid spread of a technology makes it harder to regulate (broadly construed), that potentially strengthens the case for moving more slowly.

    So what does that look like? And how often does it happen?

    Three buckets come to mind:

    • Influence peddling: The spread of a technology creates a new interest group, most obviously the people making it, that lobbies for its interests in a way that limits the technology’s benefits.
    • Destabilization: The rapid spread of a technology destabilizes politics or some other aspect of society in a way that threatens well-being.
    • Entrenchment: The rapid spread of a technology entrenches existing elites or regimes such that further political progress is harder.

The first one is certainly common, but I suspect the takeaway isn't so much "slow down tech" as it is "build good political institutions." Business will always have some sway in politics, but tech or no tech you want to set up political institutions that cannot be easily captured. And in a lot of cases, new interest groups created by tech aren't exclusively bad: yes, big tech companies might capture some areas of legislation, but ridesharing pushes back on cab cartels and, far more importantly, renewable energy companies are a countervailing force to the fossil fuel lobby. Broadly, it seems, the lesson isn't to slow down tech until you've limited businesses' influence on politics but just to do as much as you can to limit businesses' influence on politics! Sometimes new tech makes that a bit harder and sometimes it might even make it a bit easier.

    That leaves destabilizing and entrenching technologies.

Perhaps the most obvious category is weapons. Their spread can certainly be destabilizing or entrenching. But it's not obvious that they're beneficial in the first place, so the lesson there is "don't spread malicious technologies" more than it is "slow the spread of useful tech to make it easier to regulate."

    A better example might be radio, which spread really quickly from 1920 to 1940.

    Remember, the question isn’t whether radio will be entrenching or destabilizing, both, or neither. It’s whether, assuming it seems net positive at the beginning, faster diffusion limits its eventual benefits.

Radio certainly had all sorts of unanticipated consequences, like creating a new national pop culture and bringing more advertising into homes. And it would be used as a tool of propaganda during World War II. But it's not clear that any of those effects depended on the pace of its spread. The main effect of that rapid spread was egalitarian: the rapid drop in price allowed rural Americans to get access just a few years after their urban counterparts.

    Robert Gordon’s book The Rise and Fall of American Growth shares a couple of quotes on its impact:

“[A survey in the 1930s found] Americans would rather sell their refrigerators, bath tubs, telephones, and beds to make rent payments, than to part with the radio box that connected the world.”

    “The radio… offered the compensations of fantasy to lonely people with deadening jobs or loveless lives, to people who were deprived or emotionally starved.” (p. 195)

    Radio is the sort of technology, like social media, with potentially far reaching social effects. But as far as I can tell the speed of its diffusion was net positive.

    Last post I ended by noting that it’s an empirical question how often rapid diffusion prevents adequate regulation such that you’d want to seriously slow it down until the institutional context improves. My hunch is that it’s more the exception than the rule.

The benefits of tech adoption

    Dylan Matthews has a good column in Vox’s Future Perfect newsletter (can’t find a link) that gets at something I’ve been thinking about a lot: the potentially large, but unseen costs of slowing the spread of useful technologies.

    He’s writing about a new paper estimating the benefits of the Green Revolution:

    The [Green Revolution] was a widespread global agricultural shift in the 1960s and 1970s, encouraged by US-based foundations like Ford and Rockefeller and implemented by the governments of countries in Asia and Latin America, toward higher-yield varieties and cultivation methods for rice, wheat, and other cereals…

    The new paper, from economists Douglas Gollin, Casper Worm Hansen, and Asger Wingender and set to be published in the influential Journal of Political Economy soon, estimates what the effects of delaying the Green Revolution by 10 years would have been. IRRI started breeding rice crops in 1965, so in this counterfactual the revolution would have begun in 1975 instead.

    They estimate that such a delay would have reduced GDP per capita in countries analyzed by about one-sixth; worldwide, the cost to GDP would total some $83 trillion, or about as much as one year of world GDP. And if the Green Revolution had never happened, GDP per capita in poor countries would be half what it is today.

    I have not read the paper, but I trust Dylan’s research coverage.
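Just to get a feel for what the one-sixth figure implies (a back-of-the-envelope of my own, nothing to do with the paper's method): a level of GDP per capita about one-sixth lower is roughly what you would get from losing a bit under two percentage points of annual growth for a decade.

```python
# Back-of-the-envelope only (my arithmetic, not the paper's estimation):
# what sustained annual growth shortfall over ten years leaves GDP per capita
# about one-sixth lower than it otherwise would have been?
shortfall_ratio = 5 / 6                      # delayed-world level relative to actual
years = 10
implied_growth_gap = (1 / shortfall_ratio) ** (1 / years) - 1
print(round(implied_growth_gap, 4))          # ~0.0184, i.e. roughly 1.8 points a year
```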

    This is the sort of hidden cost that the folks I characterize as pursuing the “innovation agenda” worry about. Might we look back one day and kick ourselves for having delayed the spread of artificial intelligence? Of offshore wind? Of self-driving cars?

    I suspect that in at least some of these cases there are large costs to delay.

    That isn’t an argument for just charging full speed ahead without concern for safety and equity. It was clear from my look back at the early days of electricity, for example, that getting the regulations right was an essential part of making the technology actually good for well-being. Spreading it too fast, without the right laws and institutions and even norms, can cause considerable suffering.

    And then there’s the point Steven Johnson made in his New York Times Magazine excerpt of his book on human lifespans:

    How did this great doubling of the human life span happen? When the history textbooks do touch on the subject of improving health, they often nod to three critical breakthroughs, all of them presented as triumphs of the scientific method: vaccines, germ theory and antibiotics. But the real story is far more complicated. Those breakthroughs might have been initiated by scientists, but it took the work of activists and public intellectuals and legal reformers to bring their benefits to everyday people. From this perspective, the doubling of human life span is an achievement that is closer to something like universal suffrage or the abolition of slavery: progress that required new social movements, new forms of persuasion and new kinds of public institutions to take root. And it required lifestyle changes that ran throughout all echelons of society: washing hands, quitting smoking, getting vaccinated, wearing masks during a pandemic.

    His point is that it’s not just that we can’t spread the tech until the right laws are in place; it’s that laws, norms, and institutions are an important part of how the tech develops. There’s no clear dividing line between the tech itself and the context in which it spreads.

    Social media makes that point well, I think. It’s not just that social media’s effects depend on the norms and rules of the day; it’s that its very development reflects that context. It is the way it is because of the context in which it developed.

    Back to the Green Revolution. Surely some of this is true there as well; the development and spread of those agricultural techniques no doubt depended in part on the specifics of time and place. Is that a reason to doubt the paper’s main idea, that spreading those technologies a decade earlier would have been massively beneficial? I don’t think so; that’s taking the Johnson point too far.

So what does this add up to? The effects of a technology depend both on its technical features today (the ability to improve crop yields, or to drive a car using software without crashing) and on the laws, norms, and institutions that exist around it.

    Delaying the spread of a useful technology can be extraordinarily costly–unless you hold some very specific ideas about the institutional context, how it will change, and how the pace of technological diffusion affects that process.

All else equal, we should want useful technologies to spread quickly. All else equal, we should want our laws, norms, and institutions to improve a technology's benefits–both by making it more beneficial at a given point in time (say, by outlawing forms of exploitation) and by changing its development path to make it more beneficial over time.

To the extent these two processes are separate, these points mostly hold: deploy self-driving cars as fast as we can, and improve the laws around self-driving cars as fast as we can. But they're not wholly separate. Where we most need to worry is when a technology's spread constrains our ability to improve it later. You could write a fun example of an AI that captures Congress or something, but you don't need to go all sci-fi; just think about Uber. You might think, all else equal, it's good to spread the (admittedly minor) technology of mobile ride-hailing. And then alongside it you'd want to change laws to prevent exploitation, pollution, and the like. But Uber's spread in some ways makes it harder to change those laws. Or think of the shipping container: its spread changed the political economy around trade in ways that were hard to predict ahead of time. Context shapes diffusion, but diffusion also shapes context.

    This is the challenge for the innovation agenda crowd. It’s important to loudly explain the hidden potential costs to delaying the spread of useful technologies. But tallying those costs depends on political economy: speeding a useful technology’s spread doesn’t make sense if that spread hampers institutions to the point of negating the technology’s benefits.

    It’s at least plausible that sometimes the faster a technology spreads, the harder it is to (ever) successfully regulate. The empirical question is how often that happens.

Thinking clearly

    Really nice piece from Aeon’s Psyche magazine on thinking clearly. I’ve quoted a few bits, but read the whole thing:

    In philosophy, what’s known as standard form is often used to set out the essentials of a line of thought as clearly as possible. Expressing your thinking in standard form means writing out a numbered list of statements followed by a conclusion. If you’ve done it properly, the numbered statements should present a line of reasoning that justifies your final conclusion…

    You might have seen examples of this approach before, or used it in your own work. You might also have encountered a great deal of discussion around logical forms, reasonable and unreasonable justifications, and so on. What I find most useful about standard form, however, is not so much its promise of logical rigour as its insistence that I break down my thinking into individual steps, and then ask two questions of each one:

    Why should a reasonable person accept this particular claim?

    What follows from this claim, once it’s been accepted?

    When it comes to clarifying my thoughts and feelings, the power of such an approach is that anything relevant can potentially be integrated into its accounting – but only if I’m able to make this relevance explicit…

    Upon what basis can I justify any claims? Some will rely on external evidence; some on personal preferences and experiences; some on a combination of these factors. But all of them will at some point invoke certain assumptions that I’m prepared to accept as fundamental. And it’s in unearthing and analysing these assumptions that the most important clarifications await…

    This, I’d suggest, is the most precious thing about clearly presenting the thinking behind any point of view: not that it proves your rightness or righteousness, but that it volunteers your willingness to participate in a reasoned exchange of ideas. At least in principle, it suggests that you’re prepared to:

    Justify your position via evidence and reasoned analysis.

    Listen to, and learn from, perspectives other than your own.

    Accept that, in the face of sufficiently compelling arguments or evidence, it might be reasonable to change your mind.
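To make the format concrete, here's a made-up example in standard form, using an argument from earlier in these notes:

1. Diffuse groups struggle to act collectively because each member can free ride on the others.
2. Sustained media attention can convince members of a diffuse group that others will act too.
3. Therefore, issues that sustain media attention are the ones where diffuse groups can sometimes beat concentrated interests.

Each numbered line then invites the two questions above: why should a reasonable person accept it, and what follows once it's accepted?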