Innovation and well-being

Are innovative places good places to live?

I’ve been thinking about that question, and what follows is a quick bit of exploration — not anything definitive.

I wanted to see if the states with more innovative economies are also the states with better quality of life. To do that, I compared the Social Progress Index, a measure of well-being, to the tech think tank ITIF’s “New Economy” index which purports to rank states’ economies by their emphasis on innovation. As with all broad rankings like this, your mileage may vary.

  • Here’s detail on the ITIF ranking
  • Social Progress Index for states info here and here

Without further ado…

(There are a million questions and caveats that I won’t get into, but one to note: the Social Progress score is from 2018; ITIF’s is from 2020.)
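
If you want to reproduce the comparison, it’s only a few lines. Here’s a minimal sketch, assuming you’ve saved each index as a CSV of states and scores (the file and column names are mine, not official):

```python
# A minimal sketch of the comparison. Assumes each index has been saved as a
# CSV with "state" and "score" columns (file and column names are hypothetical).
import pandas as pd
from scipy.stats import spearmanr

spi = pd.read_csv("social_progress_2018.csv")    # state, score
itif = pd.read_csv("itif_new_economy_2020.csv")  # state, score
merged = spi.merge(itif, on="state", suffixes=("_spi", "_itif"))

# Spearman rank correlation: do states that rank high on one index
# tend to rank high on the other?
rho, p_value = spearmanr(merged["score_spi"], merged["score_itif"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```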

The innovation agenda

Ezra Klein has a column about the coronavirus agenda of economist Alex Tabarrok:

Here’s a question I’ve been mulling in recent months: Is Alex Tabarrok right? Are people dying because our coronavirus response is far too conservative?

I don’t mean conservative in the politicized, left-right sense. Tabarrok, an economist at George Mason University and a blogger at Marginal Revolution, is a libertarian, and I am very much not. But over the past year, he has emerged as a relentless critic of America’s coronavirus response, in ways that left me feeling like a Burkean in our conversations.

I don’t have anything to add on Covid-19, but I see the Tabarrok agenda as being centrally about putting innovation first — something I’ve described before. He’s concerned, first and foremost, with applying new ideas to solve important problems, and he pushes back against policies and institutions that exhibit a status quo bias.

Tabarrok is a libertarian, albeit not an especially doctrinaire one. And that’s the community I see best represented in the “innovation-first” agenda. It’s not that hard to see why pragmatic libertarians would be drawn to it: if you’re defending libertarianism not on purely principled grounds but on consequentialist ones, the argument typically hinges on the idea that innovation and economic growth are enormously important and that governments often suppress them. Libertarians need a theory of how innovation happens for their arguments to work, so it’s not all that surprising that many of them have one.

But what do liberals have to offer in the discussion around innovation and innovation-fueled growth? In the case of Covid-19, left-of-center thinkers seemed to me quite reasonable in their approach to balancing speed and certainty. But on other innovation issues it can sometimes seem that they are borrowing a page from conservatives, standing athwart technology and yelling stop.

Ezra has said multiple times on his podcast (I’m paraphrasing) that the left needs a better theory of technology; I suspect this is what he means.

I think of some of my own work as speaking to this need. It’s part of what I have had in mind when writing about the welfare state and entrepreneurship: my read of the evidence is that a more generous government support system increases innovation and dynamism. See here and here.

But do liberals care? If skepticism about growth and technology is the norm, the argument that liberal policies can further both won’t carry much weight. I think that’s a mistake. In the long run, ideas-driven growth is a (the?) primary driver of living standards. The Tabarroks of the world are right to put it in the foreground. We really do need to worry about status quo bias in our institutions and to make innovation policy a central concern. Doing that doesn’t commit you to libertarianism.

Loss aversion and politics

I was thinking this week about political economy and status quo bias, specifically how cognitive biases could fit into Mancur Olson-style models of bargaining. Well, there is a literature on everything, per Cowen’s second law, and sure enough here’s a paper by Alberto Alesina and Francesca Passarelli on loss aversion in politics. Here are some key bits:

The basic idea

We present a model of unidimensional political choice where the voters differ in their evaluation of the relative costs and benefits of different levels of such policy. We assume throughout the paper that the reference point is the status quo. This seems realistic, since benefits and costs of political reforms are normally assessed relative to the current situation for given existing policies. Without loss aversion, the policy chosen would be the one preferred by the median voter, and the status quo is irrelevant. With loss aversion the status quo matters. For any initial policy level, a mass of voters would vote for the status quo, even if their rationally preferred policy differed from it. In fact changing policy implies losses and benefits, but the former weigh more. This generates a sort of political endowment effect: once the policy chosen by the majority becomes the new status quo, a larger majority of voters does not want to change it.

p. 2
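
To see how that bunching at the status quo arises, here’s a toy simulation in the spirit of the model, though not the paper’s exact specification: each voter’s benefit scales with their type, costs are shared, and losses relative to the status quo are weighted by a coefficient greater than one.

```python
# Toy version of the mechanism (not the paper's exact model): benefits scale
# with a voter's type theta, costs are shared, and changes relative to the
# status quo q are evaluated with losses weighted by lam > 1.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, 100_000)  # voter types; without loss aversion, ideal policy = theta
q = 0.4    # status quo policy
lam = 1.5  # loss-aversion coefficient (losses weigh 1.5x gains)

# With benefit theta*p and shared cost p^2/2, solving each voter's problem on
# either side of the status quo gives a reference-dependent ideal policy:
#   theta > lam*q: wants more policy, but the cost increase is a loss -> ideal = theta/lam
#   theta < q/lam: wants less policy, but the benefit cut is a loss  -> ideal = lam*theta
#   in between:    the voter sticks with the status quo exactly
ideal = np.where(theta > lam * q, theta / lam,
                 np.where(theta < q / lam, lam * theta, q))

print(f"Voters bunched exactly at the status quo: {np.mean(ideal == q):.1%}")
print(f"Median ideal with loss aversion: {np.median(ideal):.2f} "
      f"(rational median: {np.median(theta):.2f})")
```

With these made-up parameters, about a third of voters prefer the status quo exactly, and the median voter’s choice is the status quo even though the rational median ideal sits elsewhere.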

Loss aversion as an explanation of incrementalism and a check on polarization

With this framework it is possible to prove a moderating effect. When individuals are loss averse, the distances among their ideal policies are lower: those who demand more p weigh the increases in cost; this dampens their demand for a policy expansion. On the contrary, those who would like to reduce p weigh the loss benefits; thus they desire to reduce the policy by a lesser amount. As individuals become more loss averse, the number of those who prefer the status quo increases, thus further dampening polarization.

p. 10

Once a bill passes, it becomes harder to undo

The second part of the Proposition is what we call the political endowment effect. The idea is the following. If the shock at time 2 is sufficiently large, the policy changes. But only the bare majority of voters cast votes in favor. All voters to the left of the median would prefer a lower policy. All those to the right prefer a higher one. Once the new policy has been set up and a certain amount of time has passed, this policy becomes the new reference point. The latter shapes voters’ preferences. Specifically, some voters to the left and to the right of the median change their minds and start considering this new policy their most preferred one. This means that, if no other big shocks occur, that same policy would beat any other alternative with more than the simple majority cast votes in favor. The political endowment effect hinges on the fact that a change in the policy yields a change in the reference point for subsequent periods, and the latter yields a change in voters’ (reference dependent) preferences. It might help explain why reforms that had hard time to be approved, gain popularity amongst people sometime later

p. 11

Do generations differ in how they approach losses vs. gains?

Here’s where things get a bit weird, but fascinating. They extend the model across time in a way that seems to me to reach conclusions that are right but for the wrong reasons. As you’ll see, they predict generational conflicts. But I’m not sure the psychology quite fits: a big part of the cognitive bias literature is about our failure to think clearly about the future, and I suspect that undermines their analysis. They incorporate that in the model, but not in a way I find all that convincing.

This part is hard to excerpt. The authors assume that we get used to changes faster than we expect to, which seems right. But they also posit that older generations are less likely to think a policy change will be worth the wait, so to speak: if it takes a decade to get used to a new arrangement, older voters will live to enjoy less of the post-adjustment period. This strikes me as over-rationalizing these biases, but your mileage may vary.

This implies that there are less young voters entrenched in the status quo, compared to old voters. It may happen that the majority of young voters want a change in policy, but the majority of old voters do not. The reason does not rely on differences in material interests. It is instead a psychological reason: the old do not want to bear the psychologically costly commitment to a change today, because their future horizon in which to enjoy the benefits of that commitment is shorter. The policy outcome depends on the population shares: older societies, where the share of young people is low, are more likely to remain with the status quo.

p. 16

Loss aversion vs. status quo bias

The main result is that loss aversion translates into a preference for the status quo. They show later on that that’s not necessarily the case when risk is involved. I won’t try to excerpt this one; it starts on page 18.

Data helps

The Atlantic and The New Yorker each published good pieces recently on the complexities of gathering data. The Atlantic’s is by the founders of the Covid Tracking Project, about all the work and subtlety that went into tracking even basic metrics about the pandemic:

Data might seem like an overly technical obsession, an oddly nerdy scapegoat on which to hang the deaths of half a million Americans. But data are how our leaders apprehend reality. In a sense, data are the federal government’s reality. As a gap opened between the data that leaders imagined should exist and the data that actually did exist, it swallowed the country’s pandemic planning and response.

The New Yorker’s is an essay on new books about data. At points it threatens to fall into the genre of criticizing data only to recommend some sort of hazy intuitionism. But, despite the headline “What data can’t do,” it ultimately gets it completely right:

But to recognize the limitations of a data-driven view of reality is not to downplay its might. It’s possible for two things to be true: for numbers to come up short before the nuances of reality, while also being the most powerful instrument we have when it comes to understanding that reality.

For all the difficulty and subjectivity of defining, collecting, and analyzing data, it sure seems to help.

It’s hard to give evidence for that statement without being circular, but here’s some anyway:

  • Analytics appears to have significantly changed lots of industries. Sports is one notable example; basketball is just played a lot differently today than a decade ago, in large part because of descriptive and correlational data. Ditto baseball.
  • Firms that report using data more tend to perform better than those that report using it less.
  • Simple statistical algorithms outperform people in a wide variety of contexts.
  • The best human forecasters often start their process by finding a “base rate,” essentially a rough numerical estimate of how often something generally does or doesn’t occur, before delving into the often more qualitative specifics of a forecast question; a toy sketch of that workflow follows this list. (Update: Here’s an experienced, reliable forecaster walking through her method.)
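
To make that last bullet concrete, here’s the base-rate-then-update workflow as a toy sketch, using Bayes’ rule in odds form; all of the numbers are invented for illustration:

```python
# Toy base-rate-then-update workflow: start from how often events like this
# one happen in general, then adjust with case-specific evidence via Bayes'
# rule in odds form. (All numbers are invented.)
def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Multiply prior odds by a likelihood ratio; return the posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical: candidates in this position win ~70% of the time (base rate),
# and a new poll is twice as likely if the candidate is headed for a win.
p = update(0.70, likelihood_ratio=2.0)
print(f"Posterior after the poll: {p:.0%}")  # ~82%
```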

And then there’s this 2020 paper on “The Value of Descriptive Analytics”:

Does the adoption of descriptive analytics impact online retailer performance, and if so, how? We use the synthetic control method to analyze the staggered adoption of a retail analytics dashboard by more than 1,000 e-commerce websites, and find an increase of 13–20% in average weekly revenues post-adoption.
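
The synthetic control method they mention is intuitive enough to sketch: build a weighted average of non-adopting websites that tracks the adopter’s revenue before adoption, then read the effect off the post-adoption gap. A minimal version, on made-up data rather than the paper’s, might look like this:

```python
# A minimal synthetic-control sketch on made-up data (not the paper's):
# find non-negative weights over control websites that reproduce the treated
# site's pre-adoption revenue path, then compare post-adoption outcomes.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T_pre, T_post, n_controls = 30, 10, 8
weeks = T_pre + T_post
controls = rng.normal(100, 10, (weeks, n_controls)).cumsum(axis=0) / 10 + 100
treated = controls[:, :3].mean(axis=1)
treated[T_pre:] *= 1.15  # bake in a fake 15% post-adoption revenue lift

def pre_period_gap(w):
    return np.sum((treated[:T_pre] - controls[:T_pre] @ w) ** 2)

res = minimize(
    pre_period_gap,
    x0=np.full(n_controls, 1 / n_controls),
    bounds=[(0, 1)] * n_controls,                              # weights in [0, 1]
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},  # weights sum to 1
)
synthetic = controls @ res.x
lift = treated[T_pre:].mean() / synthetic[T_pre:].mean() - 1
print(f"Estimated post-adoption revenue lift: {lift:.1%}")  # recovers ~15%
```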

Data helps, even in plenty of cases when it has problems or contains spurious correlations or cannot on its own support formal causal inferences. But why?

Why does it help to be in possession of a correlation that itself can’t support any particular causal inference? The answer, I think, is that the alternative is even more flawed. Humans get really creative in our reasoning, especially in order to defend our preexisting beliefs and identity. Quantification anchors you to something, and though it’s biased and reductive, it still makes it harder for your reasoning to go astray.

Parsing US vaccine forecasts

This post is an experiment based on a simple idea: Readers care about the future; crowdsourced forecasts are one relatively reliable way to predict the future; so journalists should use those forecasts as grist for journalism. What might that look like?

President Biden has said all adults in the US will be eligible to get a vaccine by May 1, and that there will be enough vaccine supply for all of them by the end of May. Will the US reach that goal?

Forecasters on crowdsourced forecasting platforms have been growing more and more optimistic in their assessments of vaccine supply, and while most of their forecasts don’t map precisely to Biden’s promise, they paint an optimistic picture.

Vaccine supply

Good Judgment Corp., a forecasting firm that grew out of academic research into crowdsourced forecasting, maintains a public dashboard of Covid-related forecasts by its “superforecasters”—individuals with a track record of high accuracy in forecasting geopolitics. Those forecasters have been estimating when the US will have distributed enough vaccine doses to inoculate first 100 million and then 200 million people, and they’re much, much more optimistic today than they were a month ago.

Good Judgment gives an 85% chance that the US will have distributed enough vaccine doses for 100 million people by the end of March, and a 98% chance that it will have distributed enough for 200 million people by the end of June. These questions ask about distribution of doses, not jabs in arms, so the total number of people who have received doses will slightly lag these estimates.

There are about 250 million adults in the US, so to hit Biden’s goal the country will have to deliver more doses, sooner, than these questions cover. Still, the uptick in forecasters’ confidence is a good sign.

(The company doesn’t publish the details of how it aggregates forecasts, but I’m told it involves a mix of weighting based on recency and a forecaster’s past accuracy, plus making the aggregate forecast more extreme under certain conditions. You can read more about common aggregation methods here and here.)
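
For flavor, here’s a guess at what such an aggregation could look like. To be clear, this is not Good Judgment’s actual method; the recency decay, skill weights, and extremizing exponent are all made-up stand-ins:

```python
# A guess at the flavor of such an aggregation (not Good Judgment's actual
# method): weight recent forecasts from accurate forecasters more heavily,
# then push the weighted average away from 50% ("extremizing").
import numpy as np

def aggregate(probs, days_old, skill, half_life=7.0, extremize=1.5):
    probs, days_old, skill = map(np.asarray, (probs, days_old, skill))
    weights = skill * 0.5 ** (days_old / half_life)  # halve the weight every half_life days
    p = np.average(probs, weights=weights)
    odds = (p / (1 - p)) ** extremize  # extremize in odds space
    return odds / (1 + odds)

# Three forecasters: probabilities, forecast age in days, and skill scores.
print(aggregate([0.80, 0.85, 0.60], days_old=[1, 3, 14], skill=[2.0, 1.5, 1.0]))
```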

How many Americans will have received a dose

Good Judgment also has an open platform where anyone can make forecasts, and there’s a question up there about how many Americans will have received at least one dose by the end of March. Those forecasters are quite confident the number will fall between 90 and 105 million, with the most likely outcome falling between 95 and 100 million.

If you take the median of their most likely scenario, 97.5 million, that implies 31.5 million more people receiving at least one dose in the last 20 days of March, or just under 1.6 million new people per day, up from the 1.4 million the US averaged in the seven days prior to this post. In other words, they expect the pace of vaccination to keep increasing modestly throughout the rest of this month.
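
Here’s that arithmetic spelled out; the roughly 66 million starting figure is the one implied by the numbers above:

```python
# Back-of-the-envelope check on the implied vaccination pace.
median_forecast = 97.5e6  # midpoint of the forecasters' most likely bin (95-100M)
already_dosed = 66.0e6    # people with at least one dose now (implied by the figures above)
days_left = 20            # days remaining in March

needed_per_day = (median_forecast - already_dosed) / days_left
print(f"{needed_per_day / 1e6:.2f}M new people per day")  # just under 1.6M, vs. the recent 1.4M pace
```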

On Metaculus, another forecasting platform, the median estimate is that the US will have given 100 million people at least one dose by April 2.

Finally, there is one Metaculus question that tackles the Biden promise directly, asking whether there will be enough vaccine doses available for all US adults by the end of May. The median forecast there gives that outcome a 63% chance.

I haven’t gotten much into the why here, partly because it’s a Sunday, partly because part of the point is taking these estimates as a form of evidence in themselves. All in all, the picture is much more optimistic than a month ago and consistent with this piece that suggests the Biden target is reachable. It may even be more likely than not.

Disclosure: I sometimes forecast on these platforms under a pseudonym for fun, but I have no financial stake in any of the questions or in the platforms.

Note: I’ve been somewhat loose here about describing the forecasts. Part of me thinks I should be more precise about exactly what every forecast says, at least if it weren’t Sunday evening. But part of the point is trying to strike a balance between the rigorous precision of forecasting platforms and the kind of writing people in the wider world want to read. I’m definitely open to feedback.

Sorting vs. synthesis

Tech analyst Ben Thompson likes to say that the internet is about abundance, not scarcity. The most successful internet platforms take advantage of this, organizing the abundance into feeds and results pages. Thompson calls these platforms aggregators, and it’s striking how influential the model has become. Aggregation is about scanning a lot of content, on your own platform or on the web, and then ranking it: Google returns the pages it thinks best match your query, ranked. Facebook ranks all the posts from people you know, news you might be interested in, trending posts, and more, and puts the top ones in your feed. TikTok claims to have an even better ranking algorithm!

What do we do with all that abundance? In short, we sort it. But some of the most interesting projects online aren’t about sorting; they’re about synthesis, and I’d like to see more of them.

The canonical synthesis example is probably Wikipedia. Editors scour the web for information (it’d be a lot harder to run a volunteer encyclopedia if they all had to go to the library), but they aren’t just aiming to rank it. They’re not giving you five things to read about X; they’re synthesizing what they find into a new thing, an encyclopedia article.

Metacritic is one of my favorite examples of a very different form of synthesis. It scours movie reviews but it doesn’t just sort them and return a list: it creates a numerical score to reflect the overall critical response. It creates an entirely new piece of “meta” content that makes sense of the abundance of online reviews.
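
As a toy illustration of that kind of synthesis: normalize reviews from different scales and take a weighted average. (Metacritic’s actual formula and weights are proprietary; everything below is invented.)

```python
# Toy meta-score: normalize reviews from different scales to 0-100,
# then take a weighted average. (Metacritic's actual weights are secret.)
reviews = [
    # (score, scale_max, outlet_weight)
    (4.0, 5, 1.0),   # a 4/5 review
    (80, 100, 1.5),  # an 80/100 from a more heavily weighted outlet
    (3.5, 4, 1.0),   # a 3.5/4 review
]

total = sum(score / scale * 100 * w for score, scale, w in reviews)
metascore = total / sum(w for _, _, w in reviews)
print(f"Metascore: {metascore:.0f}")  # one number synthesized from many reviews
```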

Some other examples:

  • Fivethirtyeight’s political models don’t just rank the polls you should pay attention to. They use those polls to create a new thing, a forecast that reflects the state of the race better than any of those polls on its own.
  • Good Judgment Corp., CSET Foretell, Metaculus and other forecasting platforms invite users to make predictions and then report the overall view of the crowd. (Sometimes that’s as simple as a median score, but often it’s much more complex.) They could just rank all the forecasters’ replies and let you read them, like a feed, but instead they choose to synthesize them into something much more valuable.
  • IGM forum’s polls of economists provide a quick overview of expert opinion.
  • Google’s knowledge graph widgets, which it sometimes includes in results, straddle the line between sorting and synthesis. They don’t really offer any new information but, like a Wikipedia article, at some point a new organization of information can cross over into synthesis. I expect more automated synthesis like this in the future. (Update: Here’s another example of Google getting more into synthesis.)

We may be entering a new phase of innovation online, with new platforms vying for our attention. This time around, there’ll be much more discussion from the start about disinformation and other problems of abundance. It’ll be tempting to frame the answers in terms of sorting: rank the bad stuff low and the good stuff high! But I’d also like to see more thought given to how to synthesize all that information (which, yes, likely also includes sorting it at some point). We don’t just need more places for people to post movie reviews; we need more Metacritics to put together what’s being said and make it easily interpretable.

The iron rule of explanation

Last year I posted about an Aeon article by the philosopher Michael Strevens about the scientific method. It was based on his book The Knowledge Machine, which I’ve since read. I’ve been posting a lot in the past year about theory vs. evidence and epistemology in general, and I really recommend this book. Of all the books, articles, and courses I’ve looked at on the philosophy of science, this is my favorite.

Strevens takes on several related questions: Why did it take so long for the scientific method to appear? What even is the scientific method? And is there any sense in which it is “objective” or is science an inherently subjective enterprise?

The core of his answer is what he calls the “iron rule of explanation”:

Here, then, in short, is the iron rule:

1. Strive to settle all arguments by empirical testing.

2. To conduct an empirical test to decide between a pair of hypotheses, perform an experiment or measurement, one of whose possible outcomes can be explained by one hypothesis (and accompanying cohort) but not the other.

p. 96

But the process of interpreting the results of empirical tests is subjective, he argues:

For these reasons, Popper is now thought by most philosophers of science to fall short of providing a rule for bringing evidence to bear on theories that is both fully objective and adequate to science’s needs. What kind of rule might do better? There is philosophical consensus on this matter too–and the answer is none. An objective rule for weighing scientific evidence is logically impossible.

p. 79

Interpreting evidence requires subjective assessment of the plausibility of both an explanation and its attendant assumptions. Yet, despite this subjectivity, Strevens argues that science tilts, in the long run, toward “Baconian convergence”: over time, scientists agree more and more on the theories that best explain all the evidence they’ve created.

The iron rule works with four related innovations:

1. A notion of explanatory power on which all scientists agree

2. A distinction between public scientific argument and private scientific reasoning

3. A requirement of objectivity in scientific argument (as opposed to reasoning)

4. A requirement that scientific argument appeal only to the outcomes of empirical tests (and not to philosophical coherence, theoretical beauty, and so on)

p. 119

So what is the limited sense in which scientific publishing is “objective”?

When a scientific paper is written, the grounds of many of the experimenter’s crucial assumptions, being partially or wholly subjective, are cut away. What is left are only observation reports, statements of theories and other assumptions, and derivations that connect the two.

p. 161

There’s a ton more of interest in the book, including his argument that this whole process is, while useful, in some sense irrational, and a discussion of whether beauty is a useful criterion for theory choice. There’s a bunch of good material situating the whole argument within debates about both the philosophy and sociology of science, and some brief discussion of why science arose when it did.

But one of my favorite paragraphs is right in the beginning, right after he introduces the iron rule “compelling scientists to conduct all disputes with reference to empirical evidence alone.”

How can a rule so scant in content and so limited in scope account for science’s powers of discovery? It may dictate what gets called evidence, but it makes no attempt to forge agreement among scientists as to what the evidence says. It simply lays down the rule that all arguments must be carried out with reference to empirical evidence and then steps back, relinquishing control. Scientists are free to think almost anything they like about the connection between evidence and theory. But if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with. And so they do, with unfettered enthusiasm.

p. 7

I have a lot more to say about the book and how it applies to social science as well as to the more practical reasoning employed by forecasters and other analysts, but I’ll leave it there for now. The book is worth your time.

Lockdowns and spending

I’ve been thinking about what economic growth will look like in 2021, partly for work and partly for Good Judgment’s forecasting tournament. One question within that is how much lockdowns affect total consumer spending. Tracktherecovery.org makes it easy to explore this question.

The site’s chart shows consumer spending and time spent outside the home for Americans, both indexed to (slightly different) late-January baselines.
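
Indexing like that is easy to do yourself. Here’s a hypothetical pandas sketch; the baseline dates, file name, and column names are my assumptions, not the site’s actual setup:

```python
# Sketch: index a daily series to a late-January baseline, so values read as
# "percent change vs. pre-pandemic." (File and column names are hypothetical.)
import pandas as pd

def index_to_baseline(series: pd.Series, start="2020-01-08", end="2020-01-31") -> pd.Series:
    """Return the series as percent change relative to its mean over [start, end]."""
    baseline = series.loc[start:end].mean()
    return (series / baseline - 1) * 100  # e.g., -45 means 45% below baseline

# spending = pd.read_csv("spending.csv", index_col="date", parse_dates=True)["total"]
# print(index_to_baseline(spending).tail())
```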

In April, spending dipped even more significantly than time outside the home. Then, in late spring and through mid-summer, they recovered and plateaued in tandem.

But since about August, spending and mobility have diverged. By late fall, time outside the home was falling again, but consumer spending mostly kept rising, very nearly reaching January levels in late November.

This return of consumer spending was made possible by relief from policymakers. But it also reflects adjustments across the economy: businesses are finding ways to cater to at-home consumers, who in turn are finding new ways to spend from home. You can see this in the data: spending on, say, transportation is still down roughly 45%, while retail and groceries are each up by double digits.

The key variable in the 2021 economy will be the containment of the pandemic. But even if lockdowns continue longer into the year than many now expect, consumer spending might continue to recover and even reach new highs. At least if policymakers do their part.

Innovation and safety nets

Joseph Henrich in The WEIRDest People in the World:

Such broader and stronger safety nets would have sharpened the population’s cognitive and social skills on average. These psychological effects, along with the greater independence from families and churches that such insurance gives individuals, help explain why stronger safety nets promote more innovation, both in preindustrial England and in the modern world.

p. 464

He cites several papers, but here’s part of one of them, on which he’s a co-author:

On the other end, reducing the costs of failure by creating a safety net can influence innovation via multiple channels, including by allowing individuals to invest in broader social ties (expanding the collective brain) over kin ties and by increasing entrepreneurship directly. This relationship is supported by analyses of England’s old poor law [100], more forgiving bankruptcy laws across 15 countries [101], unemployment insurance in France [102] and in the USA, the introduction of food stamps [103], health insurance for children [104] and access to health insurance unbundled from employment [105], all of which increased entrepreneurship. Of course, there is an optimal amount of social insurance vis-à-vis innovation, because increased funding of such programmes can increase tax burdens—some data suggest that higher corporate taxes can lead to lower entrepreneurship [106,107]. Overall, social safety nets energize innovation because they permit individuals to interconnect in broader, richer, networks.

https://royalsocietypublishing.org/doi/10.1098/rstb.2015.0192

A bunch of links to previous posts I’ve done on this are here.

The psychology of competition

From anthropologist Joseph Henrich’s recent book The WEIRDest People In the World: How the West Became Psychologically Peculiar and Particularly Prosperous:

The differing effects of intergroup vs. within-group competition help us understand why “competition” has both positive and negative connotations. Unregulated and unmonitored, firms facing intense intergroup competition will start violently sabotaging each other while exploiting the powerless. We know this because it has happened repeatedly over many centuries, and continues today. Nevertheless, when properly yoked, moderate levels of nonviolent intergroup competition can strengthen impersonal trust and cooperation. Similarly, extreme forms of within-group competition encourage selfish behavior, envy, and zero-sum thinking. Yet when disciplined by intergroup competition, moderate levels of within-group competition can inspire perseverance and creativity.

p. 349

Seems about right. But what I really wanted to post here was Henrich’s summary of some research on competition and pro-social psychology:

However, because growing firms often hire residentially and relationally mobile individuals, greater interfirm competition should strengthen impersonal prosociality, not social embeddedness and interpersonal prosociality. As more people spend much of their day in more cooperative environments, governed by impartial norms, they should become more cooperative and trusting with anonymous others, even outside of work…

…Using data on the competitiveness of 50 different German industries, Patrick’s team asked a simple question: What happens to a person’s trust when they move from an industry with stronger interfirm competition to one with weaker competition? Remember, we are following the same individuals through time.

The results of this analysis reveal that when people move into a more competitive industry, their interpersonal trust tends to go up. The results imply that if people move from a hypothetical industry in which three firms divide up the market into one in which four firms divide it up, they will be about 4 percentile points more likely to say “most people can be trusted” on the GTQ. But, when they move into a less competitive industry, their trust goes down (on average).

p. 341-344
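
The design Henrich describes, following the same individuals as they switch industries, is essentially a panel regression with person fixed effects. Here’s a bare-bones sketch on hypothetical data; the file and column names are invented:

```python
# Bare-bones sketch of the design: regress trust on industry competitiveness
# with person and year fixed effects, so the estimate comes from people who
# switch industries. (File and column names are hypothetical.)
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: person_id, year, trust (0/1 GTQ answer),
# competition (a measure of the person's current industry's competitiveness)
df = pd.read_csv("trust_panel.csv")

model = smf.ols("trust ~ competition + C(person_id) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["person_id"]})

# A positive coefficient would mean trust rises when people move into
# more competitive industries, as in the study Henrich describes.
print(result.params["competition"])
```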

There are a couple other studies described, using different methods but finding the same effect, and there’s more on markets and prosocial attitudes on p. 240.

I found myself wondering about this in the context of current debates around industry concentration and competition in the U.S. Industries are becoming more concentrated, but there’s still disagreement among experts over the extent to which this implies a decline in competition. If you buy into the story Henrich is telling here, even just for a second, you could ask: does someone who joins Google or Amazon today become more or less trusting of strangers? And might that tell us something about whether these highly concentrated tech industries are or aren’t competitive?