The ubiquity of AI bubble narratives and, more to the point, the incisiveness with which they’re often elucidated, are somewhat remarkable as bubbles go.
By now, everyone’s acutely aware that the sundry reciprocal spending commitments at the heart of the AI hype cycle are self-referential, and in some cases hopelessly so.
Bloomberg’s Venn diagram-style schematic of the “AI money machine” has been circulated more times than every dollar of re-pledged spending it illustrates, which is to say investors are apprised of the overlap and consequent risks.
A couple of weeks before this high-stakes three-card monte became the subject of every other mainstream financial media article, I complained that “it’s getting harder for the average investor to track the billions” touted in the near-daily announcements of new AI deals.
“This needs to be watched and monitored more closely,” I mused, on September 23, adding that it was starting to look like the “revolution is being funded in at least some cases by daisy-chained billions, some of which aren’t even real yet” given that a lot of the spending’s “based on assumptions about future revenue.”
Since then, I’ve written — I don’t know — at least a dozen articles riffing on the same risk(s). Here are a few choice excerpts:
We should cast a wary eye at the extent to which companies in the AI-Semi-Mag7 ecosystem are increasingly prone to binding their fates such that the success of one assumes, and in some sense even depends upon, the success of the others. — October 13, 2025
If you promise to spend $50 billion to access GPUs through my service, and I pledge to spend $50 billion buying more AI chips, and the AI chip company pledges to invest $50 billion in your equity, what is it we’re really doing? Something, I guess. To believers, it’s a virtuous circle. To skeptics, it feels uncomfortably like rehypothecation. — November 3, 2025
[There’s] a parallel between the counterparty risk on Wall Street exposed in 2008 and the interdependence inherent in the self-referential web of deals at the heart of the AI hype cycle. If the whole thing implodes, that’ll be why. — November 11, 2025
Suffice to say that although I’m not bearish, I’m on top of the bear case. And that’s the thing: So’s everybody else.
It’s hardly unprecedented for the general investing public, the financial media and Wall Street to be apprised of risks only to largely ignore them, nor is it especially unusual for a bubble to inflate further in the face of a growing chorus of skeptics.
What is a bit odd, though — or what would seem odd after the fact if indeed this is a bubble and it bursts — is the extent to which so many people know the granular specifics of the bear case and can spell that case out so cogently and so convincingly as to make the crash sound like a foregone conclusion.
Consider a good article published in the Journal on Thursday called, poignantly, “Big Tech’s Soaring Profits Have an Ugly Underside: OpenAI’s Losses.” In the article, James Mackintosh notes that although we can’t know precisely how much OpenAI spent last quarter with mega-cap US tech companies, Sam Altman’s Q3 loss “equates to 65% of the rise in EBITDA [at] Microsoft, Nvidia, Alphabet, Amazon and Meta.”
The issue isn’t so much that one company’s loss is another company’s gain, it’s that in some sense, we’re talking about the same company. Not literally, but as noted above, there’s so much interdependence going on here that it seems foolish to make a bright-line distinction.
If I had a wife (I don’t) and if we shared our money (we wouldn’t), it’d be risky (and silly) if a meaningful share of my monthly income was derived from selling services to her.
Mackintosh did a bang-up job summarizing the (potential) problem(s). “If investors stop being so excited about AI, if OpenAI struggles to generate sales, or if fundraising becomes difficult for other reasons such as a recession, investors might switch back from the vanity of revenue to focus on” OpenAI’s losses, at which point “the reality that the flow of cash from OpenAI and its rivals is bolstering big tech earnings will become painfully clear.”
I think — and this brings us full circle — that’s already painfully clear, as are all the other conceptually similar risks embedded in this multi-trillion dollar, hyper-symbiotic web of tie-ups and entanglements.
The graffiti’s on the wall, it’s huge and we’re — all of us — writers.


cue catalyst to reset the positioning cycle
Here’s something to consider. I asked AI how I could build my own home cluster of LLMs to facilitate a bunch of different work tasks: research, coding, project management, and then a generalist. To build this out at home would cost me roughly $25k in hardware, and then it would be a matter of downloading and installing the open-source LLMs suited to those specific tasks. I would then be able to leverage those LLMs without a subscription, at will, on a pretty robust scale. If you properly monetize the use of such a network, the returns would justify further investment or a hardware refresh as needed and still generate profit.
If I can build this for such a low entry cost, anyone else can. So where is the need for trillions in investment to these big players for incrementally better tools? AI is the next iteration of technological advancement: it’s railroads, radio, television, the home computer, the internet; all revolutionary technologies, all overinvested in, all worth less than originally projected when all was said and done.
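The “one open-source model per task” setup described above can be sketched as a trivial router. The model names below are purely illustrative placeholders, not specific checkpoints the commenter named; substitute whatever you actually install.

```python
# Toy sketch of routing work tasks to dedicated local models.
# Model names are hypothetical placeholders for illustration only.

TASK_MODELS = {
    "research": "research-model",          # e.g., a long-context model
    "coding": "code-model",                # e.g., a code-tuned model
    "project_management": "general-model",
    "default": "general-model",            # the "generalist" fallback
}

def route(task: str) -> str:
    """Pick the locally installed model suited to a given task."""
    return TASK_MODELS.get(task, TASK_MODELS["default"])
```

Unknown tasks fall through to the generalist, mirroring the four-role setup described in the comment.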
But why would you go to all this effort? And the expense isn’t just the hardware but also the expertise to set it all up, which you may or may not have access to. Why do this when you can buy the same thing from an OpenAI-type company for $200 a month?
You may want control over your LLMs, the ability to be independent of the big vendors I suppose. Or even because you suspect the costs of subscribing to these services will increase over time.
Will OpenAI ever make money? Probably not. So, like a lot of railroad, radio, home computer and internet companies in the past, it will fold, and any residual value will be absorbed by a survivor. The real questions are: Were these tools worth it? Can they replace people? If no one has a job anymore, how does the economy work?
Either this enormous AI investment goes bust and crashes the market, maybe the economy. Or it is useful, everyone is out of work, and it crashes the economy. Seems like a lose-lose proposition to me.
Great questions! I have expertise, I have existing LLM subscriptions, and for some reason I enjoy toiling on projects like this so that I better understand the mechanics of how things work.
I do suspect that at some point the cost of subscriptions will go up. I view the business model as similar to how Uber and Lyft started out, and I suspect the LLM vendors will collectively raise prices to generate profit. I could also see them exploiting surge-pricing models to help shore up the deficit. I expect flat-rate pricing will go extinct in the next year, and then everyone will start getting surprise bills, having had no idea how many tokens they were actually consuming under the covers of that flat rate.
So I’m not building this yet, but I want to know how to build it so that when this announcement comes, I can continue to operate my businesses without a sudden surprise in upstream costs taking them out.
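The flat-rate-versus-metered worry above can be made concrete with a back-of-the-envelope calculation. Every number below (the subscription price, the per-million-token rate, the workload) is a hypothetical assumption for illustration, not any vendor’s actual pricing.

```python
# Back-of-the-envelope comparison: flat-rate subscription vs. metered
# per-token billing. All figures are hypothetical placeholders.

def monthly_metered_cost(requests_per_day, tokens_per_request,
                         price_per_million_tokens, days=30):
    """Estimate a month's bill if the same usage were billed per token."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_million_tokens

# Assumed workload: 200 requests/day at ~3,000 tokens each (prompt + reply).
flat_rate = 200.00  # hypothetical flat subscription, $/month
metered = monthly_metered_cost(
    requests_per_day=200,
    tokens_per_request=3_000,
    price_per_million_tokens=15.00,  # hypothetical blended $/1M tokens
)

print(f"Flat rate: ${flat_rate:,.2f}/month")
print(f"Metered:   ${metered:,.2f}/month")
```

Under these assumed numbers the metered bill comes to $270/month, i.e., the hypothetical heavy user is consuming more than their flat rate covers, which is the “surprise bill” scenario the comment describes.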
Check out Ollama. I haven’t set anything up yet but was also interested in the idea of running a local LLM.
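For the curious, querying a locally hosted model through Ollama looks roughly like the sketch below. It assumes Ollama is installed and serving on its default local port (11434) and that you’ve already pulled a model (e.g., `ollama pull llama3`); the model name is an example, not a recommendation.

```python
# Minimal sketch of a non-streaming request to a local Ollama server's
# /api/generate endpoint. Assumes Ollama is running on its default port.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming generate request for Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send the prompt to the local Ollama server; return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask_local_llm("Summarize the bear case for AI capex.")` returns the model’s reply as a string, with no subscription involved.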
For the applications where money can be made (augmenting or replacing human labor, e.g., coding assistance, automated bug fixing, automated complaint handling), accuracy, response quality and hallucination rates are critical. Top-of-the-line models have significant quality advantages over open source.
There’s a super-bull case of significant replacement of human labor, a bull case of increasing adoption and revenue, a bear case of resource constraints or slow adoption, and a super-bear case of earnings failing to show up or regulatory issues leading to a bust. When your barber can tell you the bear case, shouldn’t that, all other things equal, already be reflected in the price?
The argument for top-of-the-line models holds true only for as long as they are able to keep iterating and improving. The reality is that the ceiling on that iteration is closing in faster than projected. AGI is not going to happen in this iteration of AI, so it’s about optimizing accuracy and costs. Those limitations argue against the massive investments being made, and at some point future improvement will be shelved to prioritize profit.
It is at this point where open source will once again show its superiority. Open source doesn’t care about profit; it’s community driven and about making the solution better for the community of users. While the private LLMs focus on profit extraction, the open-source LLMs will surpass them in accuracy and cost optimization. Then I expect the cloud providers will take advantage of the open-source tools and begin providing access to those LLMs via a service offering that competes directly with OpenAI and Anthropic. This is effectively how AWS built its entire business.
Human labor can never be fully replaced by AI. It can be augmented, but I doubt AI will ever be reliable enough to be trusted without human supervision. Besides, AI has no ambition; it doesn’t care about or understand the problems we’re trying to solve with it. Humans still need to direct AI to do something; otherwise it will do nothing.
Finally, data is not reliably consistent. Parameters change, schemas adapt, new and unforeseen iterations occur. AI is terrible at managing change; it often ignores things that are blatantly problematic, and it overconfidently declares victory without ever validating that a problem has been solved. If you think you can eliminate headcount in favor of operating with this resource as your sole business support staff, you’re in for some major headwinds.
? for your last sentence.
meant to be a happy face emoji
The music will stop eventually, and when it does the only question is how many chairs (safe investments) are left. Possibly only cash, but if inflation reignites (and everything is aligning to suggest it will), even cash will have issues.
That said, there very well may be 10-50% left in the blow off top. Can’t miss that.
The big question to me is: when is peak Nvidia? 63x earnings, on a company that big? It could drop 60% to a “normal” 22x. My guess is Meta will be the first to lower capex plans, which will lead to others backing off as well. I’m watching for a pre-reporting announcement in the days leading up to Meta’s next few earnings reports. I also have Meta as the most likely to have a Global Crossing-style accounting scandal. I don’t believe their earnings. Late-stage cockroaches, etc.
This is but the first Tremor!
I was asking an AI what exactly gaining AGI means, and it listed the capabilities AIs lack. I thought it amusing that the third one was common sense. That reminded me of an instance where a user asked an AI what they could eat that was high in minerals, and the AI suggested a rock. While I believe they will solve a lot of the issues with AI, I think we are still years away from AGI. They are great as assistants, and they will make us a lot more productive, but they are not going to provide the kinds of ROI currently expected because they’re not ready.