Experts are deeply divided over whether robots are coming for our jobs. Still, there are some things we do know about the relationship between technology, employment, and wages. There are a lot of recent-ish books and essays out about this, and my aim is to briefly summarize their findings here, if only so I don’t forget it all.
Historically, technology has not caused long-term unemployment
As I summarized in a previous post:
Historically, fears of technology-driven unemployment have failed to materialize both because demand for goods and services continued to rise, and because workers learned new skills and so found new work. We might need fewer workers to produce food than we once did, but we’ve developed appetites for bigger houses, faster cars, and more elaborate entertainment that more than make up for the difference. Farmworkers eventually find employment producing those things, and society moves on.
But that process can take a while
In the short term, technology can displace human labor and disrupt many people's lives. New jobs are created, but even then it can take a while for wages to rise in these new professions, as James Bessen recounts in Learning by Doing. He argues wages don't start to rise until technology matures, skills become standardized, and labor markets sprout up.
Technology can substitute for human labor, but it can also complement it
MIT’s David Autor puts it simply in a recent paper:
Workers benefit from automation if they supply tasks that are complemented by automation but not if they primarily (or exclusively) supply tasks that are substituted. A worker who knows how to operate a shovel but not an excavator will generally experience falling wages as automation advances.
Luckily, the tasks humans supply aren’t set in stone. As long as there is more work left for humans to do, the question is how quickly workers can transition from supplying skills that have been replaced to supplying ones that are complements.
Technology is not uniformly biased toward skilled or unskilled labor
These days, there’s lots of talk about “skill-biased technological change,” meaning change that favors the educated or highly skilled. Think of the demand for computer programmers vs. the demand for unskilled labor. But it isn’t always that way. Lots of early industrial technology was arguably “deskilling” — think of a skilled artisan vs. a factory where each worker completes just one part of the process. In The Race Between Education and Technology, Claudia Goldin and Lawrence Katz argue that technology was largely deskilling up to the 20th century, and since then it’s been largely biased toward those with skills and education. Bessen sees the industrial revolution as less deskilling, so none of this is beyond debate. The point is just that different technologies can shape the demand for skill and education differently.
The single biggest way to combat technological unemployment is human capital
Goldin and Katz argue that for most of the 20th century, technology didn’t lead to increased inequality even though the technologies of the time were skill-biased. The reason: America was simultaneously becoming more educated. Bessen emphasizes human capital as well, though he focuses more directly on skill than formal education. Both matter, and the broader point is that the level of human capital in a society shapes the society’s ability to reap the benefits of technology, and to do so in an equitable way. But left to its own devices, the market won’t adequately invest in human capital, as Stiglitz and Greenwald explain in Creating a Learning Society.
In other words, the labor supply matters as much as the demand for skill
In the early 20th century, this worked in society’s favor. Skill was in ever greater demand, but thanks to rising education, there were also more skilled workers. As a result, inequality didn’t increase significantly and the benefits of technology were realized and broadly shared. Today, something different is happening. The demand for skill is increasing, arguably faster than ever, but supply is not keeping pace. The result is that more and more workers are competing for fewer and fewer unskilled jobs. Goldin and Katz describe this dynamic, as does Autor, and Tyler Cowen’s Average Is Over is excellent on it as well.
Still, the result of all this can be labor market polarization
Can technology be biased toward high- and low-skilled workers simultaneously, at the expense of the middle? Autor argues that’s what’s been happening: relatively unskilled service jobs have been growing, as have high-skilled technical and professional jobs.
Cumulatively, these two trends of rapid employment growth in both high and low-education jobs have substantially reduced the share of employment accounted for by ‘middle skill’ jobs.
Cowen sketches out something similar, as does Ryan Avent in this post.
What about artificial intelligence? Are all bets off?
So far, most of this is relatively uncontroversial. Though technology displaces some workers, it raises the productivity of others. People learn new skills, and the eventual result is a net positive. In the long term, technology has not caused unemployment. It can raise wages or lower them, depending on how it affects the demand for skill and how the labor market responds to those changes. Of all the potential policy responses, investment in human capital is probably the most important.
The one area where current thinkers disagree sharply is how artificial intelligence fits into all of this. McAfee and Brynjolfsson are bullish on its potential, and worried about it. They see AI as accelerating, such that more and more jobs will be taken over by machines. Cowen paints a similar picture. There are two dangers here. First, if the pace of change increases, it might be harder for workers to learn new skills fast enough to keep up. Second, the standard rebuttal to fears of technological unemployment, that they commit the “lump of labor” fallacy, relies on there still being things that humans can do but machines can’t. If AI advances as fast as futurists predict, what will be left for humans to do? Bessen sees technology as moving more slowly, at least in certain sectors, and so is less worried about AI. Autor argues that AI can only replace tasks that humans fully understand, and since there’s still so much we don’t understand about how we do what we do, there’s plenty that humans will remain uniquely able to do. But machine learning experts might counter that with enough data, AI can learn things that humans never really understand at all.
Anyway, the basic point is that the controversy comes down to the pace of change, and whether AI can replace the bulk of human labor quickly.