Will AI transform the economy, and if so, how?
Published: 03:10 PM, Oct 04, 2023 | Edited: 11:10 AM, Oct 05, 2023
So, will artificial intelligence transform the economy? Today I thought I’d take a break from my usual preoccupation with ongoing crises to engage in a bit of bigthink about how technology may change the economic landscape in the years ahead, including a topic that seems important but hasn’t drawn much attention: how AI might change the U.S. budget outlook.
Starting last fall there was a huge surge in buzz, both positive and negative, about AI. That buzz seems to have died down to some extent, with usage of ChatGPT, the most famous implementation of the technology, declining in recent months. And many more observers have realized that what we’ve been calling AI — or what more careful people call “generative AI” — isn’t really intelligence. What it is instead is extrapolation from pattern recognition. Or as some people I talk to put it, it’s basically souped-up autocorrect.
But that doesn’t mean that it’s not important. After all, a lot of what human workers, even workers considered highly skilled, do for a living is also arguably souped-up autocorrect. How many workers regularly engage in creative thinking? Even among creative workers, how much time is spent being creative as opposed to engaging in pattern recognition?
I don’t say this to disrespect knowledge workers, but rather to suggest that what we’re calling AI could be a big deal for the economy even if it doesn’t lead to the creation of HAL 9000 or Skynet.
But how big? And what kind of a deal?
Obviously, nobody really knows. Some people are trying to figure out the impact from the bottom up, looking at various kinds of work and guesstimating how much of that work can be replaced or augmented by AI. The most widely circulated numbers come from Goldman Sachs, whose base case has AI increasing the growth rate of productivity — output per person-hour — by almost 1.5 percentage points a year over a decade, for a total over that decade of about 15%.
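A quick back-of-the-envelope check of that compounding (a minimal sketch: the 1.5-point figure is Goldman's base case as quoted above; treating it as a steady annual boost is my simplifying assumption):

```python
# Back-of-the-envelope check of the Goldman Sachs base case:
# an extra ~1.5 percentage points of productivity growth per year,
# sustained for a decade, compounded.

boost = 0.015   # extra annual productivity growth (1.5 points)
years = 10

cumulative = (1 + boost) ** years - 1
print(f"Cumulative productivity gain after {years} years: {cumulative:.1%}")
# Prints roughly 16.1%; with "almost" 1.5 points per year, the
# cumulative total lands near the quoted ~15%.
```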
Is this plausible? Actually, yes. One parallel, if you’ve studied the historical relationship between technology and productivity, is the productivity boom from 1995 to 2005, which followed decades of weak productivity growth.
As a recent paper from the Brookings Institution points out, this boom was mostly driven by “total factor productivity” — an increase in output per unit of input, including capital.
And economists often identify total factor productivity growth with technological progress. That’s sometimes a bit dubious, since TFP is really a “measure of our ignorance,” simply the part of economic growth we can’t explain otherwise. But from 1995 to 2005 it seems fairly clear that the boom was driven by information technology.
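For readers who want the formula behind that phrase: TFP is the Solow residual from growth accounting. Output growth that cannot be attributed to measured capital and labor inputs is assigned, by definition, to TFP:

```latex
% Growth-accounting decomposition (the textbook Solow residual;
% alpha is capital's share of income, a standard convention and
% not anything specific to the Brookings paper).
\frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y}
  \;-\; \alpha\,\frac{\Delta K}{K}
  \;-\; (1-\alpha)\,\frac{\Delta L}{L}
```

Everything on the right-hand side is measured; the residual A is whatever is left over, which is exactly why it amounts to a "measure of our ignorance."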
By the time the productivity surge tapered off, productivity was about 12% higher than the trend from the prior two decades would have led you to expect. Since AI is arguably an even more profound innovation than the technologies that drove the 1995-2005 boom, 15% isn’t at all unreasonable.
But will higher productivity make us richer or simply reduce the number of jobs? Fears of technological unemployment — a term coined by none other than John Maynard Keynes in 1930 — go back at least to the early 19th century. They have even inspired one pretty good novel, Kurt Vonnegut’s “Player Piano.” While technology has often eliminated some jobs, historically this has always been, as Keynes wrote, “a temporary phase of maladjustment,” with other forms of employment rising to replace the jobs lost. For example, the Microsoft Excel shock — the rise of spreadsheet programs — seems to have eliminated many bookkeeping jobs, but these were replaced by increased employment in fields like financial analysis.
By the way, in that same essay, Keynes predicted a future in which people would work much less than they did in his time, and in which finding rewarding ways to fill our leisure hours would become a major social concern. The fact that this didn’t happen over the past 90 years is a reason to be sceptical about people making similar predictions now, such as Jamie Dimon, who predicted the other day that AI would lead to a 3-1/2-day workweek.
However, while there’s no reason to believe that what we’re calling AI will lead to mass unemployment, it may well hurt the people who are displaced from their jobs and either have trouble finding new employment or are obliged to accept lower wages. Who are the potential losers?
The likely answer is that big impacts will fall on relatively high-end administrative jobs, many of them currently highly paid, while blue-collar jobs will be largely unscathed.
Now, while this seems right for generative AI, there are other applications of big data that may affect blue-collar work. For example, with all the buzz around ChatGPT there has been relatively little attention paid to the fact that after years of failed hype, self-driving cars are actually beginning to go into service. Still, at this point it seems more likely than not that AI will, unlike technological progress over the past 40 years, be a force for lower rather than higher income inequality.
Finally, it seems worth considering how generative AI might bear on one issue that has regained prominence: worries about government debt.
Until recently, many economists, myself included, argued that public debt was less of a concern than many people imagine, because interest rates on debt were below the economy’s long-term growth rate, “r < g” in the jargon. But rapidly rising interest rates have made debt considerably more worrisome. Conventional estimates of the economy’s long-run sustainable growth rate, like those of the Federal Reserve, tend to put it around 1.8%. And real interest rates on federal debt are now above that number.
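To make the r-versus-g logic concrete, here is a minimal sketch of the standard debt-dynamics arithmetic. The 1.8% growth figure comes from the column; the interest rate, primary deficit, and AI-boosted growth numbers are illustrative assumptions, not estimates.

```python
# Standard debt-dynamics recursion: with d = debt-to-GDP ratio,
# r = real interest rate, g = real growth rate, p = primary deficit
# (as a share of GDP), the ratio evolves as
#     d_next = d * (1 + r) / (1 + g) + p
# When r < g, the interest burden shrinks relative to the economy;
# when r > g, it snowballs unless offset by primary surpluses.

def debt_path(d0: float, r: float, g: float, p: float, years: int) -> float:
    """Return the debt-to-GDP ratio after `years` years."""
    d = d0
    for _ in range(years):
        d = d * (1 + r) / (1 + g) + p
    return d

# Illustrative assumptions: debt at 100% of GDP, real rate of 2.2%
# (just above the 1.8% conventional growth estimate), primary
# deficit of 2% of GDP, 10-year horizon.
slow = debt_path(d0=1.00, r=0.022, g=0.018, p=0.02, years=10)
fast = debt_path(d0=1.00, r=0.022, g=0.032, p=0.02, years=10)  # growth boosted ~1.4 points

print(f"r > g (g = 1.8%): debt ratio after 10 years is about {slow:.0%} of GDP")  # ~124%
print(f"r < g (g = 3.2%): debt ratio after 10 years is about {fast:.0%} of GDP")  # ~110%
```

Even before counting the extra revenue that faster growth would bring, the AI-boosted path leaves the debt ratio meaningfully lower.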
Discussions about debt sustainability are, however, oddly disconnected from the discourse about generative AI. In fact, I’m pretty sure there are people warning both about a debt crisis and about mass unemployment from AI, although I haven’t made the effort to track them down. But if optimistic estimates of the boost from the technology are at all right, growth will be much higher than 1.8% over the next decade, and debt won’t be a big concern after all — especially because faster growth will boost revenue and reduce the budget deficit.
All of this is, of course, highly speculative. Nobody really knows how big an impact AI will have. But again, it doesn’t have to be “true” artificial intelligence to be a big deal for the economy, and the best guess is that it will probably matter a lot. — The New York Times.