SAN FRANCISCO — Early last year, OpenAI raised $10 billion. Just 18 months later, the company had burned through most of that money. So it raised $6.6 billion more and arranged to borrow an additional $4 billion.
But in another 18 months or so, OpenAI will need another cash infusion, because the San Francisco startup is spending more than $5.4 billion a year. And by 2029, OpenAI expects to spend $37.5 billion a year.
OpenAI’s accelerating expenses are the main reason the corporate structure of the company, which began as a nonprofit research lab, could soon change. OpenAI must raise billions of additional dollars in the years to come, and its executives believe it will be more attractive to investors as a for-profit company.
In many ways, artificial intelligence has inverted how computer technology used to be created. For decades, Silicon Valley engineers designed new technologies one small step at a time. As they built social media apps like Facebook or shopping sites like Amazon, they wrote line after line of computer code. With each new line, they carefully defined what the app would do.
But when companies build AI systems, they go big first: They feed these systems enormous amounts of data. The more data companies feed into these systems, the more powerful they become. Just as a student learns more by reading more books, an AI system can improve its skills by ingesting larger pools of data. Chatbots like ChatGPT learn their skills by ingesting practically all the English-language text on the internet.
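To make that intuition concrete, here is a toy sketch in Python, nothing like the neural networks inside production chatbots (the text samples and the counting scheme are purely illustrative), showing how the same simple method predicts the next word better as its training text grows:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then predict the most frequent follower. Real
# chatbots use neural networks, but the data-scaling intuition is
# similar: more text, better predictions.
def train(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, word):
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

small = "the cat sat"
large = "the cat sat on the mat because the mat was warm"

print(predict(train(small), "the"))  # 'cat' -- only one example seen
print(predict(train(large), "the"))  # 'mat' -- more data refines the estimate
```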
That requires larger and larger amounts of computing power from giant data centers. Inside those data centers are computers packed with thousands of specialized computer chips called graphics processing units, or GPUs, which can cost more than $30,000 apiece.
The cost is pushed higher because the chips, data centers, and electricity needed to do this digital work are in short supply.
Sean Holzknecht, CEO of Colovore, a data center operator whose facilities are being outfitted for the specialized chips used to build AI, said this new kind of computing facility costs 10 to 20 times as much as a traditional data center.
These chips spend months running the mathematical calculations that allow ChatGPT to pinpoint patterns in all that data. The price tag for each “training run” can climb into the hundreds of millions of dollars.
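A rough back-of-envelope calculation, using hypothetical figures (none of these numbers come from OpenAI), shows why the bill for a single training run can reach nine figures:

```python
# Hypothetical training-run cost estimate. Every figure below is
# an assumption chosen for illustration, not a reported number.
gpus = 25_000             # assumed GPUs running in parallel
months = 3                # assumed length of the training run
hours = months * 30 * 24  # about 2,160 hours
cost_per_gpu_hour = 2.50  # assumed all-in dollars per GPU-hour

total = gpus * hours * cost_per_gpu_hour
print(f"${total:,.0f}")   # $135,000,000 -- hundreds-of-millions territory
```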
“Imagine needing to read the internet over and over and over,” said David Katz, a partner at Radical Ventures, a venture capital firm that has invested in AI startups. “This is the most computationally intensive task the world has ever seen.”
Google, Microsoft, OpenAI, and others are now working to expand the global pool of data centers needed to build their technologies. They plan to spend hundreds of billions to increase the number of computer chips manufactured each year, install them in facilities across the world, and secure the electricity needed to run them.
Those costs are particularly onerous when companies like OpenAI, Google, and Anthropic offer their chatbots to consumers at no charge. Some of them charge consumers around $20 a month to use their most powerful technologies, and even that may not recoup the cost of delivering them.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to AI systems. The two companies have denied the suit’s claims.)
Since building the initial version of ChatGPT, OpenAI has steadily improved its chatbot, feeding it increasingly large amounts of data, including images and sounds as well as text.
The company recently unveiled a version of ChatGPT that “reasons” through math, science, and computer programming problems. It built this technology using a technique called reinforcement learning.
Through this process, the system learns additional behavior over months of trial and error. Trying to solve various math problems, for instance, it can learn which methods lead to the right answer and which do not.
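A minimal sketch of that trial-and-error idea, a toy bandit written in Python rather than OpenAI’s actual training recipe, looks like this: the system tries solution methods, gets a reward when the answer checks out, and gradually favors the methods that succeed. The method names and success rates below are invented for illustration.

```python
import random

# Toy trial-and-error learner: three hypothetical solution methods,
# each with an assumed probability of producing a correct answer.
methods = {"guess": 0.10, "work_backward": 0.45, "step_by_step": 0.85}
wins = {m: 0 for m in methods}
tries = {m: 0 for m in methods}

random.seed(0)
for _ in range(5_000):
    if random.random() < 0.1:   # occasionally explore a random method
        m = random.choice(list(methods))
    else:                       # otherwise exploit the best one so far
        m = max(methods, key=lambda k: wins[k] / tries[k] if tries[k] else 1.0)
    tries[m] += 1
    if random.random() < methods[m]:  # reward: the answer was correct
        wins[m] += 1

best = max(methods, key=lambda k: wins[k] / max(tries[k], 1))
print(best)  # after enough trial and error, 'step_by_step' wins out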
When people use this system, it “thinks” before responding. When someone asks it a question, it explores many possibilities before delivering an answer.
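One simple way such exploration can work, a best-of-n sketch rather than a description of OpenAI o1’s internals, is to draw several candidate answers, score each one, and return the best. The stand-in “model” and “verifier” below are invented for illustration.

```python
import random

def propose_answer():
    # Stand-in for a model sampling one line of reasoning:
    # candidates scatter around the true answer.
    return random.gauss(42, 5)

def score(answer, truth=42):
    # Stand-in verifier: closer to the truth scores higher.
    return -abs(answer - truth)

random.seed(1)
candidates = [propose_answer() for _ in range(16)]
best = max(candidates, key=score)
print(round(best, 2))  # more candidates (more compute) buys accuracy
```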
OpenAI sees this technology, called OpenAI o1, as the future of its business. And it requires even more computing power. That is why the company expects that its computing costs will grow sevenfold by 2029 as it chases the dream of artificial general intelligence — a machine that can do as much as the human brain or more.
“If you are trying to chase science fiction,” said Nick Frosst, a former Google researcher and co-founder of Cohere, a startup that builds similar technology, “the costs will keep going up.”
This article originally appeared in The New York Times.