How economics thinks about technology and labor

A recent David Autor review paper sums up how that thinking has evolved:

I began by asking what the role of technology—digital or otherwise—is in determining wages and shaping wage inequality. I presented four answers corresponding to four strands of thinking on this topic: the education race, the task-polarization model, the automation-reinstatement race, and the era of AI uncertainty. The nuance of economic understanding has improved substantially across these epochs. Yet, traditional economic optimism about the beneficent effects of technology for productivity and welfare has eroded as understanding has advanced. Fundamentally, technological change expands the frontier of human possibilities, but one should expect it to create many winners and many losers, and to pose vast societal challenges and opportunities along the way.

What are the policy implications of these observations? The question is so broad that almost any answer is bound to appear vague and inadequate. One can reliably predict that technological innovations will foster new ways of accomplishing existing work, new business models, and entirely new industries, and these in turn will generate new jobs and spur some productivity gains. But absent complementary institutional investments, technological innovation alone will not generate broadly shared gains. Autor et al. (2022) sketch a long-form policy vision of what form these investments may take, focusing on three domains: education and training; labor market institutions; and innovation policy itself.

Deference to experts

This is a good tweet:

The value of deferring to experts depends on the alternative. If the alternative is deferring to a market or the consensus of smart generalists with good incentives or to a carefully calibrated statistical model, then deference to experts might not look so good–or at least is likely incomplete.

But a lot of the time the alternative is leaning on your own biases or those of your group, or deferring to pundits or to prevailing views shaped by attention algorithms that no one fully understands. In those cases, deference to experts looks pretty good!

Ultimately, the value of deferring to experts is in tying yourself to the mast, in epistemological terms. You defer to avoid your own biases. But as always, getting it right is about choosing wisely whom to trust.

Paul Romer on theory

In a great post defending economist Lisa Cook’s appointment to the Fed Board of Governors, Nobel-winner and famed theorist Paul Romer gets into the role of theory vs. empirics in social science:

There is a role for the type of theory that John [Cochrane, another theorist to whom he is responding…] and I do. Theorists build tools. Some of these tools turn out to be useful because they fit the facts. Many do not. Little harm comes from positing an imaginative new theory that turns out to be wrong provided that empiricists check it against the facts before someone uses it to make an important decision. John is a glider pilot so he understands the importance of this intellectual division of labor. When a passenger plane runs out of fuel – and yes, this can happen – neither John nor I would rely on a theorist skilled in computational aerodynamics to choose between landing at a nearby airport that has a short runway or a more distant one with a longer runway. We’d both want to give the judgment call about where to land to someone who knows the facts about the landing speed and glide ratio of the plane.

William James on certainty

From The Will to Believe in 1896:

Objective evidence and certitude are doubtless very fine ideals to play with, but where on this moonlit and dream-visited planet are they found? I am, therefore, myself a complete empiricist so far as my theory of human knowledge goes. I live, to be sure, by the practical faith that we must go on experiencing and thinking over our experience, for only thus can our opinions grow more true; but to hold any one of them — I absolutely do not care which — as if it never could be reinterpretable or corrigible, I believe to be a tremendously mistaken attitude, and I think that the whole history of philosophy will bear me out…

…But please observe, now, that when as empiricists we give up the doctrine of objective certitude, we do not thereby give up the quest or hope of truth itself. We still pin our faith on its existence and still believe that we gain an ever better position towards it by systematically continuing to roll up experiences and think. Our great difference from the scholastic lies in the way we face. The strength of his system lies in the principles, the origin, the terminus a quo of his thought; for us the strength is in the outcome, the upshot, the terminus ad quem. Not where it comes from but what it leads to is to decide. It matters not to an empiricist from what quarter an hypothesis may come to him: he may have acquired it by means fair or foul; passion may have whispered or accident suggested it; but if the total drift of thinking continues to confirm it, that is what he means by its being true.

Pragmatism: A Reader, p. 79-81

Models of war

For the past few weeks, The Ezra Klein Show has been doing episodes about Russia and Ukraine from a variety of perspectives. In the most recent one, Ezra described his approach:

I want to begin today by taking a moment and getting at the theory of how we’re covering Russia’s invasion of Ukraine on the show. There is no way to fully understand an event this vast, where the motivations of the players and the reality on the ground are this unknowable. There’s no one explanation, no one interpretation that can possibly be correct. And if anyone tells you they’ve got that, you should be very skeptical. But even if all models are incomplete, some are useful. And so each episode has been about a different model, a different framework, you can use to understand part of the crisis.

I approve. And I made my own attempt at a many-model explanation of the conflict in a piece for Quartz a couple of weeks back. I tried to balance structural and game-theoretic explanations with historical and personality-driven ones, and to present the outside view as well as the inside one.

For the outside view, I relied on Chris Blattman’s excellent forthcoming book Why We Fight. Here’s the summary:

In Why We Fight, Blattman uses game theory to explain why war does and doesn’t happen. His starting point is that war is rare because it’s expensive. But five factors can overwhelm the incentives for peace:

💪 Unchecked interests. War is more likely when the people in charge don’t pay the price for it. That’s almost always true to an extent, but some leaders are more or less insulated from the costs of conflict.

🎲 Uncertainty. Neither side knows for sure how strong the other is. One side could be bluffing about its strength or resolve, so sometimes the other side calls.

🗝️ Commitment problems. When one side is growing stronger, the other may want to attack before its adversary gets too powerful. The growing power might promise not to attack later on when it’s the dominant power, but that commitment can’t be trusted.

🤔 Misperceptions. Decision makers are overconfident and don’t understand how their adversaries think.

🖤 Intangible incentives. Sometimes people care about things that can’t be bargained for and go beyond costs and benefits—like vengeance, glory, or freedom.

You can read the rest here.
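The first two factors are easiest to see in the kind of stylized bargaining model Blattman draws on. The sketch below is my own illustration with made-up numbers, not anything from the book: two sides split a prize worth 1, side A would win a war with probability p, and fighting burns value c_a and c_b for the two sides.

```python
# Illustrative sketch only (not Blattman's formalization): two sides split a
# prize worth 1. If they fight, A wins with probability p, and the war burns
# costs c_a and c_b for each side.

def bargaining_range(p, c_a, c_b):
    """Peaceful splits x (A's share) that both sides prefer to fighting.

    A's expected war payoff is p - c_a; B's is (1 - p) - c_b. Any split
    giving A between p - c_a and p + c_b beats war for both, so a deal
    exists whenever c_a + c_b > 0, i.e., whenever war is costly.
    """
    return max(0.0, p - c_a), min(1.0, p + c_b)

# War is expensive, so deals usually exist: with p = 0.6 and costs of 0.2
# each, any split giving A between 0.40 and 0.80 beats war for both sides.
lo, hi = bargaining_range(p=0.6, c_a=0.2, c_b=0.2)
print(f"Range of peaceful splits for A: [{lo:.2f}, {hi:.2f}]")

# Uncertainty/misperception: B wrongly believes A's win probability is 0.3
# and offers A the smallest share B thinks A would accept (0.3 - 0.2 = 0.1).
believed_lo, _ = bargaining_range(p=0.3, c_a=0.2, c_b=0.2)
a_true_war_payoff = 0.6 - 0.2
print(f"B offers {believed_lo:.2f}; A's war payoff is {a_true_war_payoff:.2f}; "
      f"A fights: {believed_lo < a_true_war_payoff}")
```

Commitment problems fit the same frame: a mutually acceptable split exists today, but a rising power can't credibly promise to keep honoring it once the range shifts in its favor, so the declining side may prefer to fight now.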

David Leonhardt on logic

On Josh Barro’s Very Serious podcast, in an episode all about making use of expert knowledge, here’s David Leonhardt of the New York Times:

Don’t go to the nihilist place of ‘Well, there’s no such thing as a fact’, right? And ‘We can all pick our experts on climate change.’ And ‘Maybe it’s happening or maybe it’s not.’ Or, ‘Maybe communism works or maybe it doesn’t…’

I would just tell people: Don’t think that everything is a 50/50 issue. It’s true that there are often [expert] divides but it’s often true that the weight of the evidence often lines up more strongly for one argument than another.

I do think in terms of tips for people who are not journalists or academics, I think logic is an underused tool. And I think too often people are saying ‘Wait is there a peer reviewed study that proves this point?’ And OK if there is we should take that seriously. But listen to the argument that people are making and ask yourself if it made sense. Early on in the pandemic when the CDC and other experts told us not to wear masks, it didn’t make any logical sense. There’s a reason doctors and nurses wear masks in hospitals. There’s a reason why societies in Asia that have been battling contagious viruses a lot recently put a lot of emphasis on masks. Use logic. Ask yourself where does the evidence line up. And recognize that people — all of us — are going to more heavily weight evidence that fits our priors but that every question is not simply a coin flip and that you actually can find useful knowledge. And often logic is your best tool for sorting through who’s full of it and who’s actually saying stuff that makes sense.

Integrative thinking

On the Ezra Klein Show last year, Phil Tetlock (being interviewed by Julia Galef) described how good forecasters integrate multiple perspectives into their own:

JULIA GALEF: So we’ve kind of touched on a few things that made the superforecasters super, but if you had to kind of pick one or two things that really made the superforecasters what they were, what would they be?

PHIL TETLOCK: We’ve already talked about one of them, which is their skill at balancing conflicting arguments, their skill of perspective taking. However, although, but. They put the cognitive brakes on arguments before arguments develop too much momentum. So they’re naturally inclined to think that the truth is going to be some blurry integrative mix of the major arguments that are in the current intellectual environment, as opposed to the truth is going to be way, way out there. Now, of course, if the truth happens to be way, way out there, and we’re on the verge of existential catastrophe, I’m not going to count on them to pick it up.

JULIA GALEF: In addition to these dispositions and sort of general thinking patterns that the superforecasters had, are there any kind of concrete habits that they would always or often make use of when they were trying to make a forecast that other people could adopt to?

PHIL TETLOCK: One of them is this tendency to be integratively complex and qualify your arguments, howevers and buts and all those, a sign that you recognize the legitimacy of competing perspectives. As an intellectual reflex, you’re inclined to do that. And that’s actually a challenge to Festinger and cognitive dissonance. They’re basically saying, look, these people have more tolerance for cognitive dissonance than Leon Festinger realized was possible.

(Emphasis mine.)

Cognitive dissonance is the state of having inconsistent beliefs. Tetlock is saying that good forecasters are more willing than most to have inconsistent beliefs. (In his book Superforecasting he uses the term “consistently inconsistent.”)

How could inconsistency be a good thing? Well, as he says, the integrative mindset tends to think “that the truth is going to be some blurry integrative mix of the major arguments.”

You could imagine two different ways of integrating seemingly disparate arguments or evidence. Say someone shows evidence that raising the minimum wage caused job losses in France, and someone else shows evidence that a higher minimum wage didn’t lead to any job losses in the U.S. (these are made-up examples). Say you think the evidence is high quality in both cases. How do you integrate those two views?

One way would be to try and think of reasons why they could both be true: What’s different about France and the U.S. such that the causal arrow might reverse in the two cases? That, I think, is a form of the integrative mindset. You’re trying to logically “integrate” two views into a consistent model of the world.

But the other integrative approach is basically to average the two pieces of evidence: to presume that on average the answer is in the middle, that maybe minimum wage hikes cause modest job losses. That is a “blurry integrative mix,” and it’s not super rigorous. But it often seems to work.
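If you wanted to make that “blurry average” slightly more principled, the standard move is to weight each estimate by its precision. A minimal sketch, using made-up numbers for the hypothetical minimum-wage studies above:

```python
# Made-up estimates of a minimum wage hike's employment effect
# (percentage-point change in employment), with standard errors.
france = {"estimate": -2.0, "se": 0.5}   # the hypothetical job-losses study
us     = {"estimate":  0.0, "se": 1.0}   # the hypothetical no-effect study

def precision_weighted_average(studies):
    """Weight each estimate by 1 / se^2 (the usual inverse-variance rule)."""
    weights = [1.0 / s["se"] ** 2 for s in studies]
    estimates = [s["estimate"] for s in studies]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# With equal standard errors this is just the midpoint; here the more
# precise France estimate pulls the blend toward it (-1.6 rather than -1.0).
print(precision_weighted_average([france, us]))
```

The first, more rigorous route instead asks whether pooling into a single number makes sense at all, i.e., whether the two settings are really estimating the same thing.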

For the rest of the post I want to just quote a couple other descriptions of integrative thinking…

How PolitiFact, the fact-checking organization, “triangulates the truth”:

PolitiFact items often feature analysis from experts or groups with opposing ideologies, a strategy described internally as “triangulating the truth.” “Seek multiple sources,” an editor told new fact-checkers during a training session. “If you can’t get an independent source on something, go to a conservative and go to a liberal and see where they overlap.” Such “triangulation” is not a matter of artificial balance, the editor argued: the point is to make a decisive ruling by forcing these experts to “focus on the facts.” As noted earlier, fact-checkers cannot claim expertise in the complex areas of public policy their work touches on. But they are confident in their ability to choose the right experts and to distill useful information from political arguments.

Roger Martin, in HBR in 2007, says great leaders are defined by their ability “to hold in their heads two opposing ideas at once.”

And then, without panicking or simply settling for one alternative or the other, they’re able to creatively resolve the tension between those two ideas by generating a new one that contains elements of the other but is superior to both. This process of consideration and synthesis can be termed integrative thinking.

Forecasting

Let’s get one thing straight: I am not a “superforecaster.”

Over the past decade, I’ve written about forecasting research and forecasting platforms. And I’ve participated in them as well. In this post I’ll share some of my results to date. Though I’m nowhere near superforecaster level (the top 2% of participants), I’m pleased to have been consistently above average.

Here are my results:

  • Good Judgment Project (~2017): 23 questions, 68th percentile
  • Good Judgment Open (2015-2017): 9 questions, 60th percentile*
  • Good Judgment Open (2021): 4 questions, 76th percentile*
  • Foretell/Infer (2021): 2 questions, 90th percentile

The number of questions is not the number of forecasts: in many cases I made several forecasts over time on the same question. I’ve given percentiles rather than relative Brier scores or other measures because a) they’re more intuitive and b) the GJ Project setup I did was a market (no real money), so results were reported as total (fake) dollars made and the percentile that total placed me in. The percentile is more comparable to the other platforms’ scoring systems.

(*) GJP and Infer report percentile scores across an entire season, so I used those above. GJ Open doesn’t, as best I can tell, so in these cases I’ve averaged my percentile scores across questions, which is a bit different from a percentile on total score.
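For readers who haven’t used these platforms: most of them score forecasts with some variant of the Brier score, the squared error between your stated probability and what actually happened, averaged over questions (lower is better), and then rank you against other forecasters. A rough sketch of the simplest binary version, with made-up numbers:

```python
# Simplest binary Brier score: squared error between the forecast
# probability and the outcome (1 if it happened, 0 if not). Lower is
# better; always saying 50% scores 0.25. Platforms differ in the exact
# variant (e.g., summing over all answer options, or scoring daily).
def brier(prob_yes, happened):
    return (prob_yes - (1.0 if happened else 0.0)) ** 2

# Made-up forecasts on three resolved questions.
forecasts = [(0.8, True), (0.3, False), (0.9, False)]
my_avg = sum(brier(p, h) for p, h in forecasts) / len(forecasts)
print(f"Average Brier score: {my_avg:.3f}")

# A percentile rank just asks what share of the forecaster pool you beat.
pool = [0.35, 0.22, 0.18, 0.40, 0.25]   # other forecasters' average scores
percentile = 100 * sum(s > my_avg for s in pool) / len(pool)
print(f"{percentile:.0f}th percentile")
```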

Here’s another view, this one excluding my Good Judgment Project results because I don’t have percentile scores for each question.

For Good Judgment Project, not included in the chart, I “made money” (again: no actual money involved) on 17 of 23 questions, lost money on 4 and was basically even on 2.

Some of my worst scores across all of this involved the 2016 election (including primaries). One of my best involved venture capital. My impression is that, although subject matter knowledge is nice to have, time spent is the major limiting factor. Spending more time and updating forecasts more regularly pays off, even in areas where I’m coming in fairly fresh.

To close out, here is some of my writing about forecasting:

Sociology, history, and epistemology

More than 50 years ago, Quine suggested that epistemology must be “naturalized.” Here is Kwame Anthony Appiah explaining this idea in his book Thinking It Through:

To claim that a belief is justified is not just to say when it will be believed but also to say when it ought to be believed. And we don’t normally think of natural science as telling us what we ought to do. Science, surely, is about describing and explaining the world, not about what we should do?

One way to reconcile these two ideas would be to build on the central idea of reliabilism and say that what psychology can teach us is which belief-forming processes are in fact reliable. So here epistemology and psychology would go hand in hand. Epistemology would tell us that we ought to form our beliefs in ways that are reliable, while psychology examines which ways these are.

p. 74-75

This role for psychology should be familiar to anyone who’s read Thinking, Fast and Slow — cognitive biases are rampant and get in the way of accurate belief — or Superforecasting — here are some practices to overcome those limitations — or any number of similar books.

But why stop at psychology?

Belief formation is necessarily social, as I’ve pointed out in a few recent posts. In one I quoted Will Wilkinson:

If you want an unusually high-fidelity mental model of the world, the main thing isn’t probability theory or an encyclopedic knowledge of the heuristics and biases that so often make our reasoning go wrong. It’s learning who to trust. That’s really all there is to it. That’s the ballgame.

In another I quoted Naomi Oreskes:

Feminist philosophers of science, most notably Sandra Harding and Helen Longino, turned that argument on its head, suggest[ing] that objectivity could be reenvisaged as a social accomplishment, something that is collectively achieved.

In one of those posts I unwittingly used the term “social epistemology” to make my point that belief is social; that turns out to be its own philosophical niche. Per the Stanford Encyclopedia of Philosophy:

Social epistemology gets its distinctive character by standing in contrast with what might be dubbed “individual” epistemology. Epistemology in general is concerned with how people should go about the business of trying to determine what is true, or what are the facts of the matter, on selected topics. In the case of individual epistemology, the person or agent in question who seeks the truth is a single individual who undertakes the task all by himself/herself, without consulting others. By contrast social epistemology is, in the first instance, an enterprise concerned with how people can best pursue the truth (whichever truth is in question) with the help of, or in the face of, others. It is also concerned with truth acquisition by groups, or collective agents.

The entry is full of all sorts of good topics familiar to anyone who reads about behavioral science: rules for Bayesian reasoning, how to aggregate beliefs in a group, network models of how beliefs spread, when and whether deliberation leads to true belief. But it is all fairly ahistorical.

Compare that to Charles Mills, writing about race, white supremacy, and why epistemology, once naturalized, needs both sociology and history:

[Quine’s work] had opened Pandora’s box. A naturalized epistemology had, perforce, also to be a socialized epistemology; this was ‘a straightforward extension of the naturalistic approach.’ What had originally been a specifically Marxist concept, ‘standpoint theory,’ was adopted and developed to its most sophisticated form in the work of feminist theorists, and it became possible for books with titles like Social Epistemology and Socializing Epistemology, and journals called Social Epistemology, to be published and seen as a legitimate part of philosophy. The Marxist challenge thrown down a century before could finally be taken up…

A central theme of the epistemology of the past few decades has been the discrediting of the idea of a raw perceptual ‘given’ completely unmediated by concepts… In most cases the concepts will not be neutral but oriented toward a certain understanding, embedded in sub-theories and larger theories about how things work.

In the orthodox left tradition, this set of issues is handled through the category of ‘ideology’; in more recent radical theory, through Foucault’s ‘discourses.’ But whatever one’s larger meta-theoretical sympathies, whatever approach one thinks best for investigating these ideational matters, such concerns obviously need to be part of a social epistemology. For if the society is one structured by relations of domination and subordination (as of course all societies in human history past the hunting-and-gathering stage have been) then in certain areas this conceptual apparatus is likely to be negatively shaped and inflected in various ways by the biases of the ruling group(s).

Black Rights / White Wrongs p. 60-63

Crucially, Mills characterizes this kind of bias as “ignorance” in part because it has “the virtue of signaling my theoretical sympathies with what I know will seem to many a deplorably old-fashioned ‘conservative’ realist intellectual framework, one in which truth, falsity, facts, reality, and so forth are not enclosed with ironic scare-quotes.” The history and sociology of race (like class or gender) help explain not just why people believe what they do but also why people reach incorrect beliefs.

That view is in contrast with some other sociological programs, as the Stanford entry on social epistemology notes:

A movement somewhat analogous to social epistemology was developed in the middle part of the 20th century, in which sociologists and deconstructionists set out to debunk orthodox epistemology, sometimes challenging the very possibility of truth, rationality, factuality, and/or other presumed desiderata of mainstream epistemology. Members of the “strong program” in the sociology of science, such as Bruno Latour and Steve Woolgar (1986), challenged the notions of objective truth and factuality, arguing that so-called “facts” are not discovered or revealed by science, but instead “constructed”, “constituted”, or “fabricated”. “There is no object beyond discourse,” they wrote. “The organization of discourse is the object” (1986: 73).

A similar version of postmodernism was offered by the philosopher Richard Rorty (1979). Rorty rejected the traditional conception of knowledge as “accuracy of representation” and sought to replace it with a notion of “social justification of belief”. As he expressed it, there is no such thing as a classical “objective truth”. The closest thing to (so called) truth is merely the practice of “keeping the conversation going” (1979: 377).

But as Oreskes argues in her defense of science as a social practice, the recognition that knowledge is fundamentally social doesn’t require a belief in relativism.

A naturalized epistemology requires, in Appiah’s words, a search for “belief-forming processes [that] are in fact reliable.” That requires the study of how belief formation works at the group level–including an appreciation of history and sociology. To overcome our biases we need to consider the specific society within which we are trying to find the truth, and the injustices that pervade it.

A short definition of power

From Power for All, by Julie Battilana and Tiziana Casciaro:

There are two common threads across these definitions [of power across the social sciences]. The first is that the authors view power as the ability of a person or a group of people to produce an effect on others–that is, to influence their behaviors. This influence can be exercised in different ways, which has led social scientists to distinguish between different forms of power. As summarized by the sociologist Manuel Castells, “Power is exercised by means of coercion (the monopoly on violence, legitimate or not, by the state) and/or by the construction of meaning in people’s minds through mechanisms of cultural production and distribution.” Therefore, two broad categories underpin the types of power identified in the literature. The first category encompasses persuasion-based types of power, such as expert power that stems from trusting someone’s know-how, referent power that stems from admiration for or identification with someone, or power stemming from control over cultural norms. The other category comprises coercion-based types of power that include the use of force (be it physically violent or not) and authority (or “legitimate power”) to influence people’s behaviors. Building on this large and rich body of work, we define power as the ability to influence another person or group’s behavior, be it through persuasion or coercion.

The second common thread is that they all, implicitly or explicitly, posit that power is a function of one actor’s dependence on another. Social exchange theory articulates this view clearly in the seminal model of power-dependence relations developed by sociologist Richard Emerson. In this view, power is the inverse of dependence. The power of Actor A over Actor B is the extent to which Actor B is dependent on Actor A. The dependence of Actor B on Actor A is “directly proportional to B’s motivational investment in goals mediated by A and inversely proportional to the availability of those goals to B outside of the A-B relation.” The fundamentals of power that we present in this book are derived from this conceptualization of power. They posit that the power of Actor A over Actor B depends on the extent to which A controls access over resources that B values and that, in turn, the power of Actor B over Actor A depends on the extent to which B controls access over resources that A values. It follows from the fundamentals of power that power is always relational and that it is not a zero-sum game. The power relationship between A and B may be balanced if A and B are mutually dependent and they each value the resources that the other party has access to. It is imbalanced if one of the parties needs the resources that the other party can provide more.

Importantly the resources that each of the parties value may be psychological as well as material…

Cultural norms shape what is valued in a given context, while the distribution of resources favors some people and organizations and disadvantages others…

p. 200-201 (Appendix); emphasis added.
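Emerson’s power-dependence relation, as quoted above, compresses neatly into a formula. This is my own rough rendering, not notation from the book:

```latex
% Rough rendering of Emerson's power-dependence relation (my notation,
% not the book's): A's power over B equals B's dependence on A, and that
% dependence rises with B's investment in goals A mediates and falls with
% B's alternatives outside the relation.
\[
  P_{AB} = D_{BA} \propto
  \frac{\text{B's motivational investment in goals mediated by A}}
       {\text{availability of those goals to B outside the A--B relation}}
\]
```

On this rendering, a balanced relationship is one where P_AB and P_BA are roughly equal; imbalance means one actor’s dependence exceeds the other’s.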

And strategies for shifting power:

Here’s the book. Here’s a summary from Charter. Here’s a past post of mine quoting Battilana’s work.