Crypto winter, midterms, Twitter layoffs

Welcome to Nonrival, the newsletter where readers make predictions.

How it works

  1. On Sundays, read the newsletter and make a forecast by clicking a link at the bottom.
  2. On Wednesdays, see what other readers predicted and how your forecast compares.
  3. Over time, you’ll get scores based on how accurate your forecasts are.

In this issue

  • Crypto VC forecast results
  • US Senate forecasts, scored
  • Twitter layoff forecasts, scored
  • Q&A: Learning from forecasts that don’t work out

Thanks for forecasting. Send feedback to

Six more months of crypto winter

Last week, amid the spectacular implosion of FTX, Nonrival asked whether VCs would shy away from crypto. Specifically: Would they invest $2B or more into US crypto startups in either Q4 or Q1? Your view: Probably not.

Most readers see macro factors dragging all of VC down for the next couple of quarters—tellingly, that backdrop was mentioned more in the most skeptical forecasts than anything about FTX. But the consensus seems to be that FTX won’t help, and that on the margin it will chill investment in the sector.

You can follow a version of this forecast on Manifold Markets to see what their users think in real-time.

Readers’ reasoning

The case for a long crypto winter:

10%: Investment is already trending below $2 billion per quarter thanks to crypto winter. This won’t help.
25%: If there was significant investment in October 2022 before FTX blew up, that's still Q4, so it's possible. Post-FTX, the likelihood in the near term is very low. Investments will continue at a slower pace with a lot more scrutiny by investors, who will demand board seats and favor companies operating in more highly regulated jurisdictions, and many may choose to wait six or so months until there's more clarity on regulation or on the timing of the end of this crypto winter. The knock-on effects of FTX's meltdown still need to work their way through the system, too.
35%: A chilling effect for several months seems likely, but markets are hard to predict and loony VCs doubly so.

And the case that VCs will be undeterred:

50%: I expect overall funding activity to increase slightly, but mostly for earlier stage companies. I think VCs are pretty undeterred - especially ones that continue to play the semi-legal game of selling tokens that they acquire via funding rounds, like a16z.
51%: In favor of less: Big fuckup. In favor of more: $2B isn't that much money, particularly with inflation. There are two possible periods where it could happen. I started with a 30% chance that it could happen in each quarter. This gives a (1 - 0.3) * (1 - 0.3) = 0.49 = 49% chance that it would not happen in either of the two quarters, or a 51% chance that it will happen.
60%: Mostly this will be seen as FTX specific. At most, it’ll dampen enthusiasm for crypto trading. But if VC overall stabilizes as I expect, I think plenty of VCs will still back web3 companies.
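The 51% forecast above combines two quarters' worth of chances. That arithmetic can be sketched in a few lines of Python (assuming, as that reader did, that the two quarters are independent):

```python
# One reader's approach: estimate the chance of >= $2B in a single quarter,
# then combine across two independent quarters.
p_quarter = 0.30  # assumed per-quarter chance of crossing the $2B threshold

p_neither = (1 - p_quarter) * (1 - p_quarter)  # neither quarter crosses it
p_at_least_one = 1 - p_neither                 # at least one quarter does

print(f"P(neither quarter):   {p_neither:.2f}")       # 0.49
print(f"P(at least one):      {p_at_least_one:.2f}")  # 0.51
```

The independence assumption is the weak point: if FTX fallout suppresses Q4 investment, it probably suppresses Q1 too, which would push the combined probability below 51%.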

How your forecast compares

  • You didn't make a prediction by the Tuesday 9am deadline. Otherwise you'd be seeing your forecast here.

  • The average reader forecast was 34%.

Democrats keep control of the Senate

Last month, Nonrival asked how likely it was that Democrats would keep the Senate (50 seats or more):

  • The average reader said there was a 47% chance of that happening
  • The Democrats secured 50 seats with one runoff pending, so higher forecasts score better
  • You didn’t make a forecast or else you’d be seeing your personalized score here
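The newsletter doesn't spell out its scoring rule here, but a common choice for probability forecasts like these is the Brier score, which explains why "higher forecasts score better" once the event happens. A minimal sketch (the 0.80 forecast is a hypothetical comparison, not a reader's actual number):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0-1) and a 0/1 outcome.

    Lower is better: a perfect forecast scores 0, the worst possible scores 1.
    """
    return (forecast - outcome) ** 2

# Democrats kept the Senate, so the outcome resolves to 1 and
# higher forecasts earn better (lower) scores.
print(brier_score(0.47, 1))  # average reader's forecast, approx. 0.28
print(brier_score(0.80, 1))  # a hypothetical more confident forecast, approx. 0.04
```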

Elon lays off half of Twitter

A few weeks ago, as press reports suggested Elon Musk was considering massive Twitter layoffs, Nonrival asked:

How likely is it that Twitter lays off half of its staff or more by the end of Q1 2023?

Learning from forecasts that don't turn out well

I did not expect Elon Musk to lay off half of Twitter. A third maybe. But half? Like most readers, I thought the chances were quite low.

What can we learn when a forecast doesn't go the way we thought? And how do we even know if we were wrong in the first place? If you say there's a 25% chance of two consecutive coin flips coming up heads, the fact that it then happens doesn't mean you were wrong.

Don Moore is a psychologist at Berkeley who studies overconfidence, and the author of the excellent book Perfectly Confident. This week I asked Don for his thoughts on learning from cases that don't turn out the way you predicted. Our interview is below:

Nonrival: What should a good decision maker or forecaster do when something turns out differently than they expected?

Don Moore: After you learn an outcome, whether you were right or wrong, it's worth going back and asking what (if anything) you have to learn. It's not as simple as patting yourself on the back if you're right and resolving to do better if you were wrong. The goal is to learn what you can generalize and apply to the next forecast. There are a couple of dangerous biases that make this difficult.

The first danger is what poker players like Annie Duke call "resulting": you judge a decision based exclusively on its outcome. In truth, you should judge the decision based on your effective use of the information you had at the time you made the decision. That leads you to the second danger: the hindsight bias will predispose you to selectively recall all the reasons why the observed outcome was likely, and make it harder for you to remember the contrary information.

In most complex phenomena, there is an element of irreducible uncertainty. It's easiest to see in chance devices like a coin flip. It would be a mistake to beat yourself up for predicting heads when the coin comes up tails. Should you have known that it would have come up tails? No. There was only a 50% chance of tails. And no matter how many coin flips you observe, you can't predict the next one with certainty greater than 50%. The irreducible uncertainty can't be eliminated. Expect it and factor it in. Your goal should be understanding as much of the rest as you can.

Given that risk of "resulting," how do you assess the possibility that you did misjudge something?

Here I think of Maria Konnikova's book The Biggest Bluff. She was recounting a poker hand to her coach, and he stopped her before she disclosed who won the hand. He wanted her to explain what she knew when she bet, and evaluate her decision based on what she knew at that time. The analogy in forecasting would be to attempt to explain your forecast using what you knew at the time. Can you justify that to someone else (ideally someone who doesn't know the actual outcome)?

Your work and your book Perfectly Confident point to how most of us suffer from overconfidence. Is the lesson from bad forecasts or decisions just to be more uncertain?

We could all benefit from a dose of humility. We're all vulnerable to being too sure that we know what's going to happen. The confidence we assign to our forecasts usually exceeds their accuracy. How come? Because the world surprises us with unknown unknowns. That is, we will sometimes be wrong for reasons we fail to anticipate. To pick one dramatic example, forecasts for US unemployment rates in the second quarter of 2020 look recklessly overconfident because forecasters did not anticipate the Covid-19 pandemic. Our forecasts will always be vulnerable to big shocks like that, and so good calibration demands that we adjust downward our confidence, especially when we realize we don't know everything and we are vulnerable to being surprised. Nassim Taleb has argued that "black swan" events like that render forecasting worthless. The situation isn't quite that grim. Forecasts are useful. In fact, they're essential. Every decision depends on a forecast of its likely consequences. But those forecasts and those decisions will be better if they are made with appropriate humility.




Learn more about Nonrival and crowd forecasting.


The newsletter where readers make predictions about business, tech, and politics. Read the newsletter. Make a prediction with one click. Keep score.

Read more from Nonrival