Crypto winter, midterms, Twitter layoffs
Welcome to Nonrival, the newsletter where readers make predictions.
How it works
- On Sundays, read the newsletter and make a forecast by clicking a link at the bottom.
- On Wednesdays, see what other readers predicted and how your forecast compares.
- Over time, you’ll get scores based on how accurate your forecasts are.
In this issue
- Crypto VC forecast results
- US Senate forecasts, scored
- Twitter layoff forecasts, scored
- Q&A: Learning from forecasts that don’t work out
Thanks for forecasting. Send feedback to email@example.com.
Six more months of crypto winter
Last week, amid the spectacular implosion of FTX, Nonrival asked whether VCs would shy away from crypto. Specifically: Would they invest $2B or more into US crypto startups in either Q4 or Q1? Your view: Probably not.
Most readers see macro factors dragging all of VC down for the next couple of quarters—tellingly, that backdrop was mentioned more in the most skeptical forecasts than anything about FTX. But the consensus seems to be that FTX won’t help, and that on the margin it will chill investment in the sector.
You can follow a version of this forecast on Manifold Markets to see what their users think in real time.
The case for a long crypto winter:
And the case that VCs will be undeterred:
How your forecast compares
- You said there was a [111322_FINAL GOES HERE]% chance of VCs investing $2B+.
- You predicted that the average of readers' forecasts would be [111322_CROWD GOES HERE]%. The actual average was 34%. You were closer than [111322_CROWD_RANK GOES HERE]% of readers.
Senate midterms: Your forecast was more accurate than [103022_BRIER_RANK GOES HERE]% of readers
Last month, Nonrival asked how likely it was that Democrats would keep the Senate (50 seats or more):
- The average reader forecast said there was a 47% chance of that happening
- The Democrats secured 50 seats with one runoff pending, so higher forecasts score better
- You said there was a [103022_FINAL GOES HERE]% chance of that happening. Your forecast was closer to the actual outcome than [103022_BRIER_RANK GOES HERE]% of readers.
Twitter layoffs: Your forecast was more accurate than [102322_BRIER_RANK GOES HERE]% of readers
A few weeks ago, as press reports suggested Elon Musk was considering massive Twitter layoffs, Nonrival asked:
- Readers were skeptical: The average forecast was 27% and the median was just 15%.
- But Elon did it. Layoffs.fyi reports that 50% of the company was laid off. That means higher forecasts score better.
- You said there was a [102322_FINAL GOES HERE]% chance of it happening. Your forecast was closer to the actual outcome than [102322_BRIER_RANK GOES HERE]% of readers.
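For readers curious how these accuracy rankings work: they're based on the Brier score, which is just the squared gap between your probability and what actually happened (1 if the event occurred, 0 if it didn't). Lower is better. Here's a minimal sketch in Python — the exact numbers below are illustrative, not any particular reader's forecast:

```python
def brier(p, outcome):
    """Brier score for a single binary forecast.

    p: forecast probability (0 to 1)
    outcome: 1 if the event happened, 0 if not
    Returns the squared error; 0 is perfect, 1 is worst possible.
    """
    return (p - outcome) ** 2

# Twitter layoffs resolved "yes" (outcome = 1), so higher forecasts
# score better: a 15% forecast is penalized much more than a 60% one.
print(round(brier(0.15, 1), 4))  # 0.7225
print(round(brier(0.60, 1), 4))  # 0.16
```

Your percentile rank comes from comparing your score against every other reader's score on the same question.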
Learning from forecasts that don't turn out well
I did not expect Elon Musk to lay off half of Twitter. A third maybe. But half? Like most readers, I thought the chances were quite low.
What can we learn when a forecast doesn't go the way we thought? And how do we even know if we were wrong in the first place? If you say there's a 25% chance of two consecutive coin flips coming up heads, the fact that it then happens doesn't mean you were wrong.
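The coin-flip point is worth checking directly: two fair flips both come up heads exactly a quarter of the time, so a 25% forecast is perfectly calibrated even on the occasions when the event happens. A quick simulation (illustrative, with a hypothetical seed for reproducibility):

```python
import random

random.seed(0)  # any seed works; fixed here so reruns match
trials = 100_000

# Count how often two consecutive fair flips both land heads.
two_heads = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(trials)
)

# The frequency lands near 0.25, so the 25% forecast was right
# regardless of how any individual pair of flips turned out.
print(two_heads / trials)
```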
Don Moore is a psychologist at Berkeley who studies overconfidence, and the author of the excellent book Perfectly Confident. This week I asked Don for his thoughts on learning from cases that don't turn out the way you predicted. Our interview is below:
Nonrival: What should a good decision maker or forecaster do when something turns out differently than they expected?
Don Moore: After you learn an outcome, whether you were right or wrong, it's worth going back and asking what (if anything) you have to learn. It's not as simple as patting yourself on the back if you're right and resolving to do better if you were wrong. The goal is to learn what you can generalize and apply to the next forecast. There are a couple of dangerous biases that make this difficult.
The first danger is what poker players like Annie Duke call "resulting": you judge a decision based exclusively on its outcome. In truth, you should judge the decision based on your effective use of the information you had at the time you made the decision. That leads you to the second danger: the hindsight bias will predispose you to selectively recall all the reasons why the observed outcome was likely, and make it harder for you to remember the contrary information.
In most complex phenomena, there is an element of irreducible uncertainty. It's easiest to see in chance devices like a coin flip. It would be a mistake to beat yourself up for predicting heads when the coin comes up tails. Should you have known that it would have come up tails? No. There was only a 50% chance of tails. And no matter how many coin flips you observe, you can't predict the next one with certainty greater than 50%. The irreducible uncertainty can't be eliminated. Expect it and factor it in. Your goal should be understanding as much of the rest as you can.
Given that risk of "resulting," how do you assess the possibility that you did misjudge something?
Here I think of Maria Konnikova's book, "The Biggest Bluff." She was recounting a poker hand to her coach, and he stopped her before she disclosed who won the hand. He wanted her to explain what she knew when she bet, and evaluate her decision based on what she knew at that time. The analogy in forecasting would be to attempt to explain your forecast using what you knew at the time. Can you justify that to someone else (ideally someone who doesn't know the actual outcome)?
Your work and your book Perfectly Confident point to how most of us suffer from overconfidence. Is the lesson from bad forecasts or decisions just to be more uncertain?
We could all benefit from a dose of humility. We're all vulnerable to being too sure that we know what's going to happen. The confidence we assign to our forecasts usually exceeds their accuracy. How come? Because the world surprises us with unknown unknowns. That is, we will sometimes be wrong for reasons we fail to anticipate. To pick one dramatic example, forecasts for US unemployment rates in the second quarter of 2020 look recklessly overconfident because forecasters did not anticipate the Covid-19 pandemic. Our forecasts will always be vulnerable to big shocks like that, and so good calibration demands that we adjust downward our confidence, especially when we realize we don't know everything and we are vulnerable to being surprised. Nassim Taleb has argued that "black swan" events like that render forecasting worthless. The situation isn't quite that grim. Forecasts are useful. In fact, they're essential. Every decision depends on a forecast of its likely consequences. But those forecasts and those decisions will be better if they are made with appropriate humility.
- Have we hit peak inflation? Economist Mark Zandi: “As long as oil prices don’t jump again and China doesn’t shut down again due to its zero-COVID policy, inflation should be much lower a year from now”
- Interest rates: “Investors bet there was an 85 percent chance of a smaller 0.5 percentage point rise at the Fed’s next meeting.”
- A whole bunch of FTX-related forecasts.
- No change in the likelihood of Coinbase going broke (ie forecasters don’t think it’ll be affected by FTX)
- The polls were basically right
- Betting markets, maybe not so much
- Why the New York Times brought back the election needle
- How many people actually read election predictions?
- Accurately predicting an extreme event can be a sign of a poor forecaster
- “Musk’s vision… doesn’t appear to exist” (Stratechery)