“Thin” and “thick” causality

Kathryn Paige Harden’s book The Genetic Lottery: Why DNA Matters for Social Equality includes a really nice primer on causality, including a distinction between “thin” and “thick” versions of it. The book is about genetics, but that’s not my focus in this post; more about the book here and here. Here are some excerpts of her treatment of causality:

Causes and Counterfactuals

In 1748, the Scottish philosopher David Hume offered a definition of “cause” that was actually two definitions in one:

“We may define a cause to be an object, followed by another, and where all the objects, similar to the first, are followed by objects similar to the second. Or, in other words, where, if the first object had not been, the second never had existed.”

The first half of Hume’s definition is about regularity–if you see one thing, do you always see a certain other thing? If I flick the light switch, the lights regularly, and almost without exception, come on…

Regularity accounts of causality occupied philosophers’ attention for the next two centuries, while the second half of Hume’s definition–where if the first object had not been, the second had never existed–was relatively neglected. Only in the 1970s did the philosopher David Lewis formulate a definition of cause that more closely resembled the second half of Hume’s definition. Lewis described a cause as “Something that makes a difference, and the difference it makes must be a difference from what would have happened without it.”

Lewis’s definition of a cause is all about the counterfactual–X happened, but what if X had not happened?…

[Saying that X causes Y] does not imply that researchers know the mechanism for how this works…

Each of these mechanistic stories could be decomposed into a set of sub-mechanisms, a matryoshka doll of “How?”…

But understanding mechanism is a separable set of scientific activities from those activities that establish causation…

p. 99-104

She goes on to describe a concept of “portability” that then ties into the problem of generalizability:

The portability of a cause can be limited or unknown… The developmental psychologist Urie Bronfenbrenner referred to the “bioecological” context of people’s lives. Everyone is embedded in concentric circles of context… I find Bronfenbrenner’s bioecological model to be a helpful framework for thinking about the portability of causes of human behavior: Which of these circles would have to change, and by how much, in order for the causal claim to no longer be true? Here, knowing about the mechanism also helps knowing about portability, as a good understanding of mechanism allows one to predict how cause-effect relationships will play out even in conditions that have never been observed.

p. 106-107

Finally she distinguishes between “thin” and “thick” causal explanations:

In the course of ordinary social science and medicine, we are quite comfortable calling something a cause, even when (a) we don’t understand the mechanisms by which the cause exerts its effects, (b) the cause is probabilistically but not deterministically associated with effects, and (c) the cause is of uncertain portability across time and space. “All” that is required to assert that you have identified a cause is to demonstrate evidence that the average outcome for a group of people would have been different if they had experienced X instead of Not-X…

I’m going to call this the “thin” model of causation.

We can contrast the “thin” model of causation with the type of “thick” causation we see in monogenic genetic disorders or chromosomal abnormalities. Take Down’s syndrome, for instance. Down’s syndrome is defined by a single, deterministic, portable cause… And this causal relationship operates as a “law of nature,” in the sense that we expect the trisomy-Down’s relationship to operate more or less in the same way, regardless of the social milieu into which an individual is born.

p. 108
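To make the “thin” model concrete: the evidence Harden describes amounts to an average treatment effect, the difference in mean outcomes between a group that experienced X and one that experienced Not-X. Here is a minimal simulated sketch in Python; the variable names and numbers are illustrative inventions of mine, not anything from the book:

```python
import random

random.seed(0)

# Simulated randomized experiment: X shifts the outcome on average,
# but only probabilistically, in keeping with "thin" causation.
def outcome(received_x: bool) -> float:
    effect = 1.0 if received_x else 0.0
    return effect + random.gauss(0.0, 2.0)  # noisy individual outcomes

x_group     = [outcome(True)  for _ in range(10_000)]
not_x_group = [outcome(False) for _ in range(10_000)]

# The "thin" causal claim: the group's average outcome would have
# been different had they experienced X instead of Not-X.
ate = sum(x_group) / len(x_group) - sum(not_x_group) / len(not_x_group)
print(f"estimated average treatment effect: {ate:.2f}")  # close to 1.0
```

Note that nothing in this sketch says anything about mechanism or portability; that is exactly what makes the resulting causal claim “thin.”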

Prediction, preparation, and humility

Sheila Jasanoff of Harvard has a really interesting essay in Boston Review titled “‘Preparedness’ Won’t Stop the Next Pandemic.” The whole thing is worth a read, but here’s the gist:

Humility, by contrast, admits that defeat is possible. It occupies the nebulous zone between preparedness and precaution by asking a moral question: not what we can achieve with what we have, but how we should act given that we cannot know the full consequences of our actions. Thought of in this way, humility addresses the questions perennially raised by critics of precaution and refutes the charges of passivity. Confronted on many fronts by riddles too knotty to solve, must society choose either to do nothing or to move aggressively forward as if risks don’t matter and resources are limitless? Decades of effort to protect human health and the environment suggest that the choice is not so stark or binary.

There is a middle way, the way of humility, that permits steps to be taken here and now in order to forestall worst-case scenarios later. It implements precaution by unheroic but also more ethical means, through what I call technologies of humility: institutional mechanisms—including greater citizen participation—for incorporating memory, experience, and concerns for justice into our schemes of governance and public policy. This is a proactive, historically informed, and analytically robust method that asks not just what we can do but who might get hurt, what happened when we tried before, whose perceptions were systematically ignored, and what protections are in place if we again guess wrong.

There are some responses to the essay here, which I’ve not yet read.

Notes on science

I’ve been reading and writing about the philosophy of science a bunch in the last couple of years, so this post is a place to clip together a number of quotes and posts in one place.

Michael Strevens says the scientific method boils down to the “iron rule of explanation” that “only empirical evidence counts.”

This is a very stripped-down idea. It allows for subjectivity, and it grants that there is no logically or philosophically satisfying way to decide how to interpret the results of observation or experimentation.

Here, then, in short, is the iron rule:

1. Strive to settle all arguments by empirical testing.

2. To conduct an empirical test to decide between a pair of hypotheses, perform an experiment or measurement, one of whose possible outcomes can be explained by one hypothesis (and accompanying cohort) but not the other…

How can a rule so scant in content and so limited in scope account for science’s powers of discovery? It may dictate what gets called evidence, but it makes no attempt to forge agreement among scientists as to what the evidence says. It simply lays down the rule that all arguments must be carried out with reference to empirical evidence and then steps back, relinquishing control. Scientists are free to think almost anything they like about the connection between evidence and theory. But if they are to participate in the scientific enterprise, they must uncover or generate new evidence to argue with.

My posts on it are here and here. Here’s the book and here is an essay version in Aeon.

Naomi Oreskes says science must be understood as a set of social practices–and that this is a reason to trust it, not dismiss it

There is now broad agreement among historians, philosophers, sociologists, and anthropologists of science that there is no (singular) scientific method, and that scientific practice consists of communities of people, making decisions for reasons that are both empirical and social, using diverse methods. But this leaves us with the question: If scientists are just people doing work, like plumbers or nurses or electricians, and if our scientific theories are fallible and subject to change, then what is the basis for trust in science?

I suggest that our answer should be two-fold: 1) its sustained engagement with the world and 2) its social character

My post is here and the book is here.

Four commonalities in scientific practice

From UPenn’s short Coursera course on the philosophy of science, which is a nice overview:

Science is not completely unified, and there is no master method or recipe that’s appropriate in all contexts. Nevertheless, there are certain elements common to these examples… So what are the commonalities? There are at least four major ones. First, all four of our examples involve sophisticated forms of observation… Second, simple observation wasn’t enough… [experimentation and simulation were used as well.] Third, in each case it was multiple lines of evidence, generated using different experimental and observational techniques, that convinced the scientific community of the relevant results. Simplistic pictures of science, such as those taught in high school, make it seem like scientific research miraculously uncovers the truth by simply verifying one hypothesis with a single experiment. While this does happen occasionally, research more often looks like the cases I’ve talked about: research done by multiple people using different approaches that point in the same direction. Or they don’t, sometimes, as in the case of children’s beliefs. Philosophers call this robustness or consilience. Fourth and finally, all of our examples involve the accumulation of evidence over time. Each case involves scientific understanding that improves over time, from an initial sense that the answer is at hand to greater accuracy and precision in measurements and a much greater appreciation of what is genuinely needed to explain a phenomenon. Scientists never achieve certainty–that is reserved for logic and mathematics. The accumulation of evidence, especially from multiple independent sources, is the key to increasing confidence that a hypothesis is true.

Tim Lewens defends scientific realism

Scientific realism is the label for the philosophical view that science is in the truth business. Scientific realism says that the sciences represent those parts of the world they deal with in an increasingly accurate way as time goes by. Scientific realists are not committed to the greedy idea that the sciences can tell us all there is to know about everything; they can happily acknowledge that there is plenty to learn from the arts and humanities. Moreover, by denying that science gives us a perfectly accurate picture of the world, scientific realists are not committed to the manifestly absurd idea that science is finished…

A moment’s reflection suggests that scientific realism is not the only sensible and respectful way to respond to the successes of science. Perhaps we should think of scientific theories in the way we think of hammers, or computers: they are remarkably useful, but like hammers or computers they are mere tools. It makes no sense to ask whether a hammer is true, or whether it accurately represents the world, and one might argue that the same goes for science: we should simply ask whether its theories are fit for their purposes…

Cutting to the chase, this chapter will argue in favor of scientific realism… First, we need to fend off… the argument from “underdetermination”… [which] suggests that scientific evidence is never powerful enough to discriminate between wholly different theories about the underlying nature of the universe… Second, we need to ask whether there is any positive argument in favor of scientific realism. More or less the only argument that has ever been offered to support this view is known as the “No Miracles argument.” The basic gist of this argument is that if science were not true–if it made significant mistakes about the constituents of matter, for example–then when we acted on the basis of scientific theory, our plans would consistently go awry… Third, and finally, we must confront an argument known as the “Pessimistic Induction.” This argument draws on the historical record to suggest that theories we now think of as false have nonetheless been responsible for remarkable practical successes.

The Meaning of Science, p. 85-88

The book is more of a quick tour through the philosophy of science, and Lewens’s argument for realism is something of a detour.

Rorty says science is a tool and urges us not to think of it purely through examples from physics

In [McDowell’s] picture, people like Quine (and sometimes even Sellars) are so impressed with natural science that they think that the first sort of intelligibility [associated with natural science rather than reason] is the only genuine sort.

I think it is important, when discussing the achievements of the scientific revolution, to make a distinction which McDowell does not make: a distinction between particle physics, together with those microstructural parts of natural science which can easily be linked up with particle physics, and all the rest of natural science. Particle physics, unfortunately, fascinates many contemporary philosophers, just as corpuscularian mechanics fascinated John Locke…

To guard against this simpleminded and reductionistic way of thinking of non-human nature, it is useful to remember that the form of intelligibility shared by Newton’s primitive corpuscularianism and contemporary particle physics has no counterpart in, for example, the geology of plate tectonics, nor in Darwin’s or Mendel’s accounts of heredity and evolution. What we get in those areas of natural science are narratives, natural histories, rather than the subsumptions of events under laws.

So I think that McDowell should not accept the bald naturalists’ view that there is a “distinctive form of intelligibility” found in the natural sciences and that it consists in relating events by laws. It would be better to say that what Davidson calls “strict laws” are the exception in natural science–nice if you can get them, but hardly essential to scientific explanation. It would be better to treat “natural science” as a name of an assortment of useful gimmicks…

I think we would do better to rid ourselves of the notion of “intelligibility” altogether. We should substitute the notion of techniques of problem-solving. Democritus, Newton, and Dalton solved problems with particles and laws. Darwin, Gibbon, and Hegel solved others with narratives. Carpenters solve others with hammers and nails, and soldiers still others with guns.

Pragmatism as anti-authoritarianism, p. 182-184

And elsewhere:

Scientific progress is a matter of integrating more and more data into a coherent web of belief–data from microscopes and telescopes with data obtained by the naked eye, data forced into the open by experiment with data which has always been lying about.

Pragmatism as anti-authoritarianism p. 136

Rorty is looking to center epistemology on people. And of course in his earlier work he rejects the idea that true belief is about correctly mirroring an external world. So how should we think about what seems like an external world?

The only other sense of “social construction” that I can think of is the one I referred to earlier: the sense in which bank accounts are social constructions but giraffes are not. Here the criterion is simply causal. The causal factors which produced giraffes did not include human societies, but those which produced bank accounts did.

Pragmatism as anti-authoritarianism, p. 140

David Weinberger says the success of machine learning models (MLMs) challenges Western ideas about scientific laws

Our encounter with MLMs doesn’t deny that there are generalisations, laws or principles. It denies that they are sufficient for understanding what happens in a universe as complex as ours. The contingent particulars, each affecting all others, overwhelm the explanatory power of the rules and would do so even if we knew all the rules. For example, if you know the laws governing gravitational attraction and air resistance, and if you know the mass of a coin and of Earth, and if you know the height from which the coin will be dropped, you can calculate how long it will take the coin to hit the ground. That will likely be enough to meet your pragmatic purpose. But the traditional Western framing of it has overemphasised the calm power of the laws. To apply the rules fully, we would have to know every factor that has an effect on the fall, including which pigeons are going to stir up the airflow around the tumbling coin and the gravitational pull of distant stars tugging at it from all directions simultaneously. (Did you remember to include the distant comet?) To apply the laws with complete accuracy, we would have to have Laplace’s demon’s comprehensive and impossible knowledge of the Universe.

That’s not a criticism of the pursuit of scientific laws, nor of the practice of science, which is usually empirical and sufficiently accurate for our needs – even if the degree of pragmatic accuracy possible silently shapes what we accept as our needs. But it should make us wonder why we in the West have treated the chaotic flow of the river we can’t step into twice as mere appearance, beneath which are the real and eternal principles of order that explain that flow. Why our ontological preference for the eternally unchanging over the eternally swirling water and dust?
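As an aside, the idealized calculation Weinberger gestures at really is a one-liner, which is his point: the law is easy, the particulars are what overwhelm it. A toy version, deliberately ignoring air resistance, pigeons, and distant comets (the drop height is an arbitrary choice of mine):

```python
import math

g = 9.81       # gravitational acceleration at Earth's surface, m/s^2
height = 10.0  # drop height in meters, purely illustrative

# Idealized fall time from h = (1/2) g t^2, solved for t.
# Everything Weinberger lists, from airflow to distant stars,
# is left out, which is exactly the point of his example.
t = math.sqrt(2 * height / g)
print(f"A coin dropped from {height} m lands in ~{t:.2f} s")
```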

Here is the Aeon essay.

One book I read but left out is Steven Pinker’s Rationality, which I won’t try to sum up here, in part because it’s not about science per se.

I guess, having clipped all that together, I’ll end with some posts I’ve done in the past few years on or related to epistemology:

Objectivity as a social accomplishment

Here is an excellent characterization of scientific objectivity as a social practice, from Naomi Oreskes in her book Why Trust Science?:

Sociologists of scientific knowledge stressed that science is a social activity, and this has been taken by many (for both better and worse) as undermining its claims to objectivity. The “social,” particularly to many scientists but also many philosophers, was synonymous with the personal, the subjective, the irrational, the arbitrary, and even the coerced. If the conclusions of scientists–who for the most part were European or North American men–were social constructions, then they had no more or less purchase on truth [than] the conclusions of other social groups. At least, a good deal of work in science studies seemed to imply that. But feminist philosophers of science, most notably Sandra Harding and Helen Longino, turned that argument on its head, suggesting that objectivity could be reenvisaged as a social accomplishment, something that is collectively achieved…

The greater the diversity and openness of a community and the stronger its protocols for supporting free and open debate, the greater the degree of objectivity it may be able to achieve as individual biases and background assumptions are “outed,” as it were, by the community. Put another way: objectivity is likely to be maximized when there are recognized and robust avenues for criticism, such as peer review, when the community is open, non-defensive, and responsive to criticism, and when the community is sufficiently diverse that a broad range of views can be developed, heard, and appropriately considered…

To recapitulate: There is now broad agreement among historians, philosophers, sociologists, and anthropologists of science that there is no (singular) scientific method, and that scientific practice consists of communities of people, making decisions for reasons that are both empirical and social, using diverse methods. But this leaves us with the question: If scientists are just people doing work, like plumbers or nurses or electricians, and if our scientific theories are fallible and subject to change, then what is the basis for trust in science?

I suggest that our answer should be two-fold: 1) its sustained engagement with the world and 2) its social character….

This [first] consideration–that scientists are in our society the experts who study the world–is a reminder to scientists of the importance of foregrounding the empirical character of their work–their engagement with nature and society and the empirical basis for their conclusions…

However, reliance on empirical evidence alone is insufficient for understanding the basis of scientific conclusions and therefore insufficient for establishing trust in science. We must also take to heart–and explain–the social character of science and the role it plays in vetting claims.

Why Trust Science? Naomi Oreskes, p. 50-57

The book’s initial essay, from which this is drawn, is not only interesting in its own right but is a really concise overview of the philosophy of science and its twists and turns over time.

Better markets, but more or less?

Luigi Zingales has a good op-ed in Project Syndicate that summarizes a case he’s been making for years:

But this opposition of state and market is misleading, and it poses a major obstacle to understanding and addressing today’s policy challenges. The dichotomy emerged in the nineteenth century, when arcane government rules, rooted in a feudal past, were the main obstacle to the creation of competitive markets. The battle cry of this quite legitimate struggle was later raised to the principle of laissez-faire, ignoring the fact that markets are themselves institutions whose efficient functioning depends on rules. The question is not whether there should be rules, but rather who should set them, and in whose interest… In sum, we should strive to achieve a better state and better markets, and to contain each within its respective spheres.

Luigi has done more than anyone in the past decade to clarify that being “pro-market” or “pro-competition” doesn’t mean being laissez-faire and that it isn’t the same as being “pro-business.” And while that view began, in my estimation, as a pragmatic center-right idea (keep the appreciation of markets, lose the coziness with business) it won over some major adherents on the left. Most notably, Elizabeth Warren framed her progressive economic policy as pro-competition, and claimed she was a “capitalist to my bones.”

How might we think about the difference between Zingales and Warren on these issues? Certainly one might dive into specific policy areas and look for disagreements. But I’ve come to think of them as agreeing on the idea of better markets but parting ways over how much markets should structure the economy.

Although Zingales notes plenty of room for government to play an important role (see the op-ed for more), I think of him as wanting better markets and more markets. If the rules surrounding markets were written to be more pro-competitive, then markets would be able to take on even more tasks than they already do. I’m not certain this is what he thinks, but this is how I read his general perspective.

Warren, by contrast, I think basically wants better markets and less market control. She’d increase the government’s role not only as rule-setter but as provider of various goods, while simultaneously trying to make markets work better within a more limited sphere.

Of course nearly everyone would say, all else equal, that they prefer competitive, less corrupt markets to monopolistic ones (unless the monopoly is one they personally benefit from). But it’s telling that some camps choose to prioritize this idea and others don’t. If these two dimensions are real, we can structure debate over the role of markets and business like this:

                       Better markets    Status quo
More market control    Zingales          “Pro-business”
Less market control    Warren            Socialist

I outline all this because I think the left column contains a fascinating disagreement. If we could overcome some political-economy issues to get better, more competitive markets, what new uses might we put them to? Might we decide that more spheres work well under the control of regulated markets with the right rules? Or, having made that progress on political economy issues, might we find ourselves better able to write good rules to effectively use non-market institutions for things we currently leave to markets? Might we end up relying more on universities or open source communities or direct government provision?

The central challenge here, no matter where you land on these questions, is how to make progress on the political economy issues that limit competition. But if we could write the kind of rules we need to make markets truly competitive, would we use them more? Or less?

Software and the supply side

Chris Mims in WSJ writes about the new software conglomerates (I wrote about them for Quartz recently here) and says:

The large companies of yesteryear bet on things like economies of scale in manufacturing—everything gets cheaper to make, the more you make of it. Modern platform companies take advantage of something unique to the internet age. That something is “demand-side economies of scale,” which arise because platform companies are taking advantage of network effects, says Mr. Wu.

This is certainly true, and in line with Ben Thompson’s notion of the big tech companies as “aggregators” of consumer demand. And Mims and I seem to be on the same page on the subject of conglomerates.

But I think there are supply-side economies of scale here that we still are struggling to understand and appreciate. I’m not certain of this, and I certainly can’t quite describe them, but I strongly suspect they are there. There is something about being good at making software that is hard to buy your way into and easier to accomplish if you were born as a software company. Maybe that doesn’t exhibit economies of scale; maybe it’s best understood just as a capability that most large companies don’t have. But it’s not just the users that make Alphabet and Amazon powerful: the premise behind Alphabet’s self-driving car project isn’t that all those Gmail users are a good customer base. And it’s not exactly about data either; a bunch of search data isn’t necessarily what you need to make an autonomous vehicle work. Instead, the company knows how to start and scale up extremely large software-and-data projects. That’s part of the story.

Think of it this way: What if you just gave GE all that search data, or handed them control over google.com? Would they know what to do with it? And more importantly, would they leverage that business to expand into new domains?

Places where I’ve touched on this topic:

Trusting expertise

More on applied epistemology (which should just be called epistemology!). Here’s Holden Karnofsky of Open Philanthropy describing his process for “minimal-trust investigations”–basically trying to understand something yourself, as close to from-the-ground-up as you can. Along the way he makes some very good points about social learning, i.e., how and when to trust others in order to reach accurate beliefs:

Over time, I’ve developed intuitions about how to decide whom to trust on what. For example, I think the ideal person to trust on topic X is someone who combines (a) obsessive dedication to topic X, with huge amounts of time poured into learning about it; (b) a tendency to do minimal-trust investigations themselves, when it comes to topic X; (c) a tendency to look at any given problem from multiple angles, rather than using a single framework, and hence an interest in basically every school of thought on topic X. (For example, if I’m deciding whom to trust about baseball predictions, I’d prefer someone who voraciously studies advanced baseball statistics and watches a huge number of baseball games, rather than someone who relies on one type of knowledge or the other.)

Here’s what I said on basically the same topic a while back:

You want to look for people who think clearly but with nuance (it’s easy to have one but not both), who seriously consider other perspectives, and who are self-critical.

The sites that dominated the economics blogosphere

Several years ago I posted the results of an analysis of top economics sites. I’ve redone that work a bit differently, this time with the data on GitHub. The analysis is purely of the curation done on economist Mark Thoma’s blog during the 2010s: the result is ~14,000 links that he recommended. Here’s some background on Thoma and his blog that also explains why analyzing it is useful.

So first some results, then a few more lists and observations… The top domains in this dataset:

The first thing to note is just how big a deal blogs were. Obvious if you were reading stuff back then, but just a few years later it’s striking! Blogspot, Typepad, and WordPress all make the top domains list because they hosted blogs by individual economists or small groups of them.

I hand-coded the top 60 domains, and 23 of them are blogs or blog-hosting platforms. Weighting by the number of individual links in the dataset, and restricting to those 60 hand-coded domains (not the full list), 43% of links are to blogs.
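For the curious, the mechanics of a tally like this are simple. Here’s a rough sketch of the domain counting in Python; the file name, the url column, and the set of blog domains are stand-ins of mine for illustration, not the actual layout of the dataset on GitHub:

```python
import csv
from collections import Counter
from urllib.parse import urlparse

# Tally links by domain. "links.csv" and its "url" column are
# assumptions about the dataset's layout, not the real schema.
counts = Counter()
with open("links.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = urlparse(row["url"]).netloc.removeprefix("www.")
        counts[domain] += 1

# Top domains by link count
for domain, n in counts.most_common(10):
    print(f"{domain}\t{n}")

# Share of links going to blog-hosting domains (illustrative set,
# not the full hand-coded list of 23)
blog_domains = {"blogspot.com", "typepad.com", "wordpress.com"}
blog_links = sum(n for d, n in counts.items()
                 if any(d.endswith(b) for b in blog_domains))
print(f"Blog share of all links: {blog_links / sum(counts.values()):.0%}")
```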

Top blogs

Krugman’s blog tops the list at 542 links: that alone (not counting his columns) was 3.8% of the dataset. Author data isn’t perfect, but there are 218 other links with Krugman as author, including some non-NYT stuff and one Onion article. Combined, that puts him at 5.4% of the dataset.

No other economist is close to Krugman in terms of influence. But here are the blogs that come next, not counting group blogs at institutions like the New York Times or Federal Reserve.

Amazingly, these blogs all seem to still be going, and a couple of them I personally still read regularly.

Top media

The other big category in the top domains list is media. And here the New York Times dominates. I leave it to you to parse the cause and effect here vis-à-vis Krugman. Here are the top media sites:

Domain                  Links
nytimes.com             1906
ft.com                   640
project-syndicate.org    435
washingtonpost.com       251
economist.com            241
bloomberg.com            170
newyorker.com            108

Top think tanks

Domain                  Links
brookings.edu             69
promarket.org             68
cfr.org                   65
equitablegrowth.org       60
piie.com                  38
epi.org                   37
bruegel.org               32

Research

That leaves one last big category: research institutions. And here the basic list is VoxEU, NBER, and the Federal Reserve. The IMF makes the list too, albeit a lot lower.

You can download the data for yourself here.

Revisiting the housing bubble

Timothy Lee has a good post on the revisionist history of the mid-2000s housing bubble in the US. I find the basic premise interesting and pretty compelling: what looked like a housing bubble might have just been prices responding to a mismatch between supply and demand. Lee further says this analytical error—seeing a bubble where there wasn’t one—had huge policy consequences:

This mistake had profound consequences because the perceived size of the housing bubble influenced decision-making by the Federal Reserve. The Fed started raising its benchmark interest rate in 2004, reaching a peak of 5.25 percent in mid-2006. Part of the Fed’s goal was to raise mortgage rates and thereby cool a housing market it viewed as overheated… If the Fed had understood this at the time and acted accordingly, it could have averted a lot of human misery. Home prices would not have fallen so much, and fewer people would have lost their jobs. That, in turn, would have limited the losses of banks that bet on the mortgage market, and might have prevented the 2008 financial crisis.

That’s all fine as far as it goes. But as we reevaluate the housing bubble, it’s essential to remember what really caused the crisis in the late 2000s—and made the recession so unusually severe: the financial complexity and opacity built on top of the housing market.

So while this reassessment is important, it’s hard for me to see a counterfactual where things turned out well. The key cause of the Great Recession was a financial panic driven by derivatives, the risks of which were poorly understood. Financial institutions took on more housing-related risk than they realized, and their counterparties did, too—and at key moments in the panic they couldn’t tell just how exposed their counterparties were to the housing market. Housing prices and the Fed’s response are key elements of this story, but they’re the tip of the iceberg.

(Though it’s been years since I have read it and even longer since it was published, Alan Blinder’s After the Music Stopped remains my key reference on this subject.)

The social science side of science

Derek Thompson in a very good piece about Fast Grants:

A third feature of American science is the experimentation paradox: The scientific revolution, which still inspires today’s research, extolled the virtues of experiments. But our scientific institutions are weirdly averse to them. The research establishment created after World War II concentrated scientific funding at the federal level. Institutions such as the NIH and NSF finance wonderful work, but they are neither nimble nor innovative, and the economist Cowen got the idea for Fast Grants by observing their sluggishness at the beginning of the pandemic. Many science reformers propose spicing things up with new lotteries that offer lavish rewards for major breakthroughs, or giving unlimited and unconditional funding to superstars in certain domains. “We need a better science of science,” the writer José Luis Ricón has argued. “The scientific method needs to examine the social practice of science as well, and this should involve funders doing more experiments to see what works.” In other words, we ought to let a thousand Fast Grants–style initiatives bloom, track their long-term productivity, and determine whether there are better ways to finance the sort of scientific breakthroughs that can change the course of history.

This is what I have heard in my reporting as well. The US has pioneered some extremely successful institutions for funding science and developing technology. That includes the NIH, as well as ARPA, the venture capital industry, etc.

But while those institutions are good at some things, they have flaws and are ill-suited to certain tasks: in science funding it’s speed, as Derek explains; for VC it’s a growing disinterest in deep technical risk, a mismatch with some capital-intensive forms of energy tech, and a model that demands 10X returns at minimum.

And yet we keep going back to the channels we have: more money pours into VC; Congress proposes more money for the NIH. Neither of those is a bad idea, per se. But if you talk to folks who study the innovation process, what they most want to see is experimentation with new institutions for developing science and tech. Fast Grants is a nice example, but there’s a lot more experimenting still to do.