AI may change everything, but probably not too quickly
Published: 03:04 PM, Apr 04, 2023 | Edited: 07:04 PM, Apr 04, 2023
“Artificial intelligence (AI) is already having a significant impact on the economy, and its influence is expected to grow significantly in the coming years.... Overall, the effects of AI on the economy will depend on a variety of factors, including the rate of technological advancement, government policies and the ability of workers to adapt to new technologies.” OK, who said that? Nobody, unless we’re ready to start calling large language models people. What I did was ask ChatGPT to describe the economic effects of artificial intelligence; it went on at length, so that was an excerpt.
I think many of us who’ve played around with large language models — which are being widely discussed under the rubric of AI (although there’s an almost metaphysical debate over whether we should call it intelligence) — have been shocked by how much they now manage to sound like people. And it’s a good bet that they or their descendants will eventually take over a significant number of tasks that are currently done by humans.
Like previous leaps in technology, this will make the economy more productive but will also probably hurt some workers whose skills have been devalued. Although the term “Luddite” is often used to describe someone who is simply prejudiced against new technology, the original Luddites were skilled artisans who suffered real economic harm from the introduction of power looms and knitting frames.
But this time around, how large will these effects be? And how quickly will they come about? On the first question, the answer is that nobody really knows. Predictions about the economic impact of technology are notoriously unreliable. On the second, history suggests that large economic effects from AI will take longer to materialise than many people seem to expect.
Consider the effects of previous advances in computing. Gordon Moore, a founder of Intel — which introduced the microprocessor in 1971 — died last month. He was famous for his prediction that the number of transistors on a computer chip would double every two years — a prediction that proved stunningly accurate for half a century. The consequences of Moore’s Law are all around us, most obviously in the powerful computers, aka smartphones, that almost everyone carries around these days.
For a long time, however, the economic payoff from this awesome rise in computing power was surprisingly elusive.
Why did a huge, prolonged surge in computing power take so long to pay off for the economy? In 1990, economic historian Paul David published one of my favourite economics papers of all time, “The Dynamo and the Computer.” It drew a parallel between the effects of information technology and those of an earlier tech revolution, the electrification of industry.
As David noted, electric motors became widely available in the 1890s. But having a technology isn’t enough. You also have to figure out what to do with it.
To take full advantage of electrification, manufacturers had to rethink the design of factories. Pre-electric factories were multistorey buildings with cramped working spaces, because that was necessary to make efficient use of a steam engine in the basement driving the machines through a system of shafts, gears and pulleys.
It took time to realise that having each machine driven by its own motor made it possible to have sprawling one-storey factories with wide aisles allowing easy movement of materials, not to mention assembly lines. As a result, the big productivity gains from electrification didn’t materialise until after World War I.
Sure enough, as David, in effect, predicted, the economic payoff from information technology finally kicked in during the 1990s, as filing cabinets and secretaries taking dictation gave way to cubicle farms. (What? You think technological progress is always glamorous?) The lag in this economic payoff even ended up being similar in length to the lagged payoff from electrification.
But this history still presents a few puzzles. One is why the first productivity boom from information technology (there may be another one coming, if the enthusiasm about chatbots is justified) was so short-lived; basically it lasted only around a decade.
And even while it lasted, productivity growth during the IT boom was no higher than it was during the generation-long boom after World War II, which was notable in that it didn’t seem to be driven by any radically new technology.
In 1969, celebrated management consultant Peter Drucker published “The Age of Discontinuity,” a book that correctly predicted major changes in the economy’s structure. Yet the book’s title implies — correctly, I think — that the preceding period of extraordinary economic growth was actually an age of continuity, an era during which the basic outlines of the economy didn’t change much, even as America became vastly richer.
Or to put it another way, the great boom from the 1940s to around 1970 seems to have been largely based on the use of technologies, like the internal combustion engine, that had been around for decades — which should make us even more sceptical about trying to use recent technological developments to predict economic growth.
That’s not to say that AI won’t have huge economic impacts. But history suggests that they won’t come quickly. ChatGPT and whatever follows are probably an economic story for the 2030s, not for the next few years.
— The New York Times.