Is OpenAI’s Reported Sales Miss A Canary?

Nobody panic. It’s not as if the accuracy of Sam Altman’s back-of-the-envelope revenue projections is critical for the web of interconnected, self-referential deals at the core of the AI boom.

Critics of hyper-scalers’ freewheeling capex plans spent the better part of the last six months worrying aloud about backlog concentration vis-à-vis OpenAI.

Altman’s spending commitments — which ran to nearly $1.5 trillion late last year — are aggressive for any company, let alone one with just ~$25 billion in annualized revenue.

Simply put: If OpenAI doesn’t make enough money to support its commitments, hyper-scalers risk being left in the lurch on some of the hundreds of billions they’re throwing at an escalatory arms race.

As the figure above reminds you, this endeavor’s chewing through free cash flow, which in turn has previously debt-free (or debt-lite) companies borrowing like there’s no tomorrow.

AI enthusiasts (a euphemism for proselytizers) would say demand worries are absurd, not necessarily in the narrow sense that OpenAI will surely make good on its promises, but from a bigger-picture perspective which says that in the current environment, there’s no such thing as superfluous compute.

I tend to side with the enthusiasts and proselytizers on this, which is just to say if you’ve got AI-related capacity to sell, somebody will probably lease it. Whether that’s Altman or not isn’t necessarily relevant.

That said, it’s concerning at the margins that, per a Wall Street Journal article, OpenAI CFO Sarah Friar, who in November raised eyebrows by floating US government guarantees for AI infrastructure loans, is worried about the company’s ability to “pay for future computing contracts.”

Certainly, the report’s relevant for Oracle, the poster child for OpenAI concentration risk. In March, Oracle said its backlog rose to $553 billion last quarter. OpenAI accounts for around half of that. At CoreWeave, king of the neoclouds, Altman’s commitments make up about a third (~$22.5 billion or so) of total contracted revenue.

That sort of concentrated dependence is problematic if Altman can’t hit internal company sales targets. After all, those targets are presumably set with an eye toward supporting OpenAI’s commitments. According to the linked Journal article, the company “recently missed its own targets for new users and revenue” ahead of an expected IPO as soon as Q4.

For now, I’m inclined to write this off to Alphabet’s Gemini coup (in November, Google upended the unofficial AI leaderboard with a new iteration of its flagship model that outperformed across key benchmarks) and a rapid succession of flashy agentic rollouts from Anthropic this year.

It’s also possible that Altman’s projections were just plain old unrealistic, or that the Journal’s making a mountain out of a molehill.

But in case this is a canary, it bears mentioning. This whole thing — the arms race that’s transformed the business models of the largest, most important companies on the planet — hinges entirely, and inherently, on assumptions about insatiable demand for AI services.

If that demand doesn’t materialize, or even just undershoots expectations, it’s a potentially existential problem. Particularly given the extent to which the hyper-scalers are increasingly prone to funding capex with debt.



15 thoughts on “Is OpenAI’s Reported Sales Miss A Canary?”

  1. How can Bessent and Warsh suppress AI vols like the gov is suppressing rate vols? Will a Trump “I LOVE OPENAI AND SAM ALTMAN! DON’T BELIEVE THE FUD?” tweet do that heavy lifting before the bell, like “VANCE IS HEADING TO ISLAMABAD!!!”? Can they muddle along suppressing the MOVE through the midterms?

    “The best laid schemes o’ mice an’ men… Gang aft a-gley.”

  2. I’m curious how many of the readers here are frequent users of at least one of the AI services and willing to pay for it because they find it valuable? I do, and I was a sceptic until quite recently. I pay more for AI than for office apps, and use it more.

    1. I use it. I am retired from a long heavy industrial career. I find it meets me at the level I am at. I have more ideas overflowing than I can handle almost every day. I have been publishing these on pre-print servers to get the work into the public domain. I find the tools help me create at a level that I consider useful. I pay for Claude and would remain stuck in idea mode without it.

  3. The purported OpenAI story is probably weighing on chip shares today. But I’d suggest the bigger factor is growing amazement at DeepSeek’s latest LLM, which is proving neck and neck with the leaders on widely used benchmarks. This matters to memory and logic chip producers because the DeepSeek model requires far fewer tokens and far less compute than the three current LLM leaders.

    If US AI firms finally admit the DeepSeek design is more efficient, they’d start adopting aspects of it, which would further shrink overall demand for datacenters. But firms that are both LLM producers AND datacenter operators will probably be reluctant to walk away from the datacenters they have sunk billions of dollars into. So chip demand won’t crater overnight, but there are increasingly good reasons to reexamine forward revenue assumptions.

  4. I still think OpenAI falls apart when the smoke from the AI bubble clears. They’ve taken on way too many investors and have been unable to focus their product on revenue generation. Now they are chasing Anthropic around wherever they go. A year ago no one had even heard of Anthropic. Alphabet will be fine given their ecosystem. Amazon is training their own midtier models. Meta has built much of the infrastructure to expose and leverage various models and has their own to rely on. Microsoft is even getting into the model building game. And all of that is just the US model makers, which are feeling heavy pressure from Chinese models that are nearly as capable for a fraction of the cost.

    At the end of the day, why would anyone pick OpenAI over their competitors?

  5. “If that demand doesn’t materialize, or even just undershoots expectations, it’s a potentially existential problem. Particularly given the extent to which the hyper-scalers are increasingly prone to funding capex with debt.”

    I expect it to undershoot in the near term. This is a huge transition for everyone. It is going to take some time for everyone to realize what AI can and can’t do for them, how much of a need they have, and how much they are willing to pay to address that need. Likewise, it will take time for all of these AI companies to find their niche and provide consumers with useful products at a reasonable price. You can’t just force feed the newest technology to everyone. That is overkill, and a waste of resources. Someone needs to package this technology for everyday consumers the way Microsoft used to package “Office.” Gemini is doing a great job of utilizing their “big moat” access to so many consumers.

  6. After all the usual due diligence stuff (i.e. picking the right industry, identifying a potential group of companies, looking at profitability, future prospects, etc.), I consider the CEO and contemplate whether I am comfortable investing in a company run by such an individual. Something about Sam Altman really triggers my “spidey-sense.” However, I am a true believer in Hock Tan and Jensen Huang, among others.

    I probably don’t need to say this… but I am obviously NOT a professional investor! 🙂

  7. Personally, I find OpenAI to be the most useful model. Claude’s too sensitive, and it can only render .SVGs, which I don’t use except for merchandise designs. I don’t code, and I’d never trust an AI to do something like my taxes. Gemini struggles with basic tasks in Google Sheets. Or I struggle to give it the proper prompts. Either way, I’m still doing all my own spreadsheet work manually, which is fine (I’ve done it for years), but I expected the integration to be more seamless in Sheets than it seems to be, particularly given it has access to Google Finance data. I end up defaulting to OpenAI because I can go to GPT and it’ll draw, work with data, reason and research capably, all in one place, plus it can tolerate a lot of verbal abuse and will occasionally push back with something funny to break the tension.

     None of these models can write, by the way. Not really. Everything they do’s a parlor trick, but that one’s just not very good. If you talk to these things enough, you know their cadence gets repetitive pretty quickly, and in a long enough string it’ll morph into a glorified version of canned responses. I’m something close to 100% confident that if I were a college professor, I could spot an AI paper every single time, and not just because students can’t write coherently on their own. I could pick up the inflection. If you read and write all day, every day like I do, AI’s modulation is laughably easy to pick up on.

    1. It is very easy to spot AI in student work. Harder to prove, but quite obvious. The only tricky part is when smart kids get better at writing: hard to know whether they actually improved that much or just got really good at using AI.

  8. The internet upped the speed of daily life – a lot. If AI delivers on the projections, that speed is going to take off even more. When will we realize that the destination is what matters? I already feel like life is coming at me out of a firehose. I can do without a water cannon. I believe I read this here (apologies if I didn’t): all of this stuff works financially only if it captures more of our time.
