Let’s talk about capex. Again. Because that’s entertaining, right?
Don’t tell me you didn’t wake up Tuesday with capex on your mind. I know you did. Who doesn’t?!
Jokes aside, I wanted to follow up on “Capex, Good Or Bad?” published on August 11. The central question for the Mag7 currently is whether hundreds of billions in AI spending counts as a “capex bubble.”
The implication is that at least some of these expenditures count as overspend (bubbles carry a negative connotation in the market context) and will be seen, in hindsight, as a waste.
Q2 results from the so-called hyperscalers suggested that, in fact, all this spending is starting to pay off. But it’s fair to say the jury’s still out. And the verdict’s crucial.
As SocGen’s Andrew Lapthorne noted earlier this month, these businesses are no longer capital-light. A decade ago, Mag7 capex (and for the purposes of Lapthorne’s work, Broadcom stood in for Tesla) was around 25% of operating cash flow, versus 55% for the rest of the index. Today, capex to operating cash flow for the big 7 tech companies is roughly the same as it is for the rest of the index, at ~45%.
So, the stakes are high. And not just because AI’s supposed to be an epochal technology. Spending on the AI arms race has fundamentally altered the nature of these businesses. They’re more capital-intensive now. Much more in some respects. That’s a meaningful shift.
With that in mind, the figure below from Goldman gives you some context for this spending.
The Mag7 are expected to spend $385 billion this year, up 52% from 2024, and nearly half a trillion in 2026. Those forecasts are up 12% and 29%, respectively, just since the end of June.
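It's easy to sanity-check those growth figures. The sketch below is a back-of-the-envelope pass over the numbers cited above; the "nearly half a trillion" 2026 figure is treated here as roughly $490 billion, which is an assumption, not a number from Goldman.

```python
# Back-of-the-envelope check of the Mag7 capex figures cited above.
# ASSUMPTION: "nearly half a trillion" in 2026 is taken as ~$490B.
capex_2025 = 385  # $B, expected Mag7 spend this year
capex_2026 = 490  # $B, assumed value for "nearly half a trillion"

# 2025 is said to be up 52% from 2024, which implies:
capex_2024 = capex_2025 / 1.52
print(f"Implied 2024 capex: ${capex_2024:.0f}B")  # ~ $253B

# Forecasts are up 12% (2025) and 29% (2026) just since the end of June:
pre_june_2025 = capex_2025 / 1.12
pre_june_2026 = capex_2026 / 1.29
print(f"End-of-June 2025 forecast: ${pre_june_2025:.0f}B")  # ~ $344B
print(f"End-of-June 2026 forecast: ${pre_june_2026:.0f}B")  # ~ $380B
```

In other words, analysts have added more than $100 billion to the 2026 number in roughly two months, which is the point Kostin is making below.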
“Capex estimates for the Magnificent 7 have been lifted even more sharply than earnings estimates,” Goldman’s David Kostin remarked.
What about the rest of the S&P 500? Well, suffice to say they’re not spending as quickly. Investment spending for the median stock rose just 4% in Q2 from the same period a year ago.



I remember Qwest and the others laying all that “dark fiber” in the oceans during the last bubble. A lot of money “invested” and a lot of money lost.
I realize that Oracle isn’t in the MAG 7, but it has projected CapEx spending of over $20 billion in 2025. CoreWeave, the largest AI neo-cloud, has also projected CapEx spending of over $20 billion. So I think nearly half a trillion dollars in AI CapEx spend is likely by the end of 2025. It’s safe to say at least 50% of that half trillion goes to NVIDIA. The hyperscalers are definitely no longer asset-light. The divergence in stock performance so far this year among the MAG 7 reflects this divergence between price taker and price maker. Some analysts are projecting that profit margins on NVIDIA’s GB200 server stacks are likely to be higher than on Hopper GPUs.
Just wish the US Govt had similar levels of capex investment in our infrastructure. Not sure where the electricity is going to come from to run this incredible boom in AI. Other than higher rates for all of us.
Also, water. I’m red-pilled on AI as a transformative tech, but we were already fucking up our water supply before AI.
Yup. Water is going to be the existential catastrophe of the 30s.
For anyone looking for a rational view of what AI is and isn’t in this moment I would recommend reading “The AI Con” by Emily M. Bender and Alex Hanna.
To this point, generative AI and LLMs have delivered profoundly life changing results, in large part owing to their ability to sound human and to seem all-knowing. But the question around CapEx should be: what end is this investment supposed to reach? If you listen to the leaders in this space, their dream is Artificial General Intelligence: a supposedly fully cognitive AI that can solve all of humanity’s problems for them, and a technology that will effectively replace all jobs, resulting in Capital eliminating the need for Labor.
This dystopic Utopia that these leaders dream of reflects zero thought about how value is created from such technology. In a country where the economy is driven almost solely by consumerism, how in the world do you expect to monetize that technology when people no longer earn money to spend?
Aside from those tech-bro hype fascinations, no one in that space seems able to explain HOW they expect to reach AGI. At present, LLMs are good at writing content and writing code, but not reliably without human supervision. They are not reliable for information and continue to confidently hallucinate farcically wrong answers. Generative AI can compile lots of data inputs and draw conclusions much faster than a human, but it still requires humans to drive the work. The assumption that as the technology iterates it will dramatically improve in its ability to correctly and creatively solve problems is “hope as a strategy.”
The current LLM pricing strategy is akin to Uber and Lyft when they were flush with investor cash: loss-leader pricing to drive demand and adoption. For anywhere from free to hundreds of dollars a month, you can get access to technology that has had hundreds of billions of dollars invested in it. This is not sustainable. Eventually the CapEx will decline and the pricing model will have to rise to a level that generates actual value. Who’s going to pay thousands of dollars a month for access to LLMs? Far fewer than are using it now.
The more likely outcome is one that argues against continued Capital investment, one where the models operate more efficiently, leverage caching technology to reduce repeated data processing, and require less computational power and scale.
Reminds me of the years (decades?) long quest to get OCR and Speech to Text accuracy to a point where it was useful in the workplace…and it’s still not 100%.
“Generative AI and LLMs have delivered profoundly life changing results” – I don’t know about this. For a few people, maybe.
I wouldn’t assume a positive connotation with that statement. In some cases yes, in other cases absolutely not. I know people at marketing firms who are calling their profession “Sad Men” now. There are numerous cases of people losing touch with reality due to their infatuation with chatbots, some of whom end up taking their own lives.
I understand that in the pharma and materials research spaces they are able to make significant strides using the technology. As a technologist I find the tool’s capacity to expand my skillset and allow me to deliver more on my own than I ever could before, empowering.
OpenAI is supposedly working on a social network; I guarantee you that will be a net negative for humanity overall. But all the same, the results of this technology cycle are life changing.
“Who’s going to pay thousands of dollars a month for access to LLMs? Far fewer than are using it now. The more likely outcome is one that argues against continued Capital investment, one where the models operate more efficiently, leverage caching technology to reduce repeated data processing, and require less computational power and scale.”
Well summarized Anonymous One.
https://www.youtube.com/watch?v=HOoRnv3lA0k Not having to fold laundry will make all the CapEx spend worth it!
My first article accepted for publication was written in 1974, on the subject of how best to evaluate capex for tech. We have seemingly made little to no progress on this topic. Today our main criteria for evaluating whether or not to adopt new tech are still size and power (it’s a guy thing, you see). Planning is very straightforward when we actually do it. The thing is, big-deal dudes don’t actually believe in goal setting and all that stuff. They use their gut, what the guys at the club are doing, and other time and money wasters. After all, the biggest is always the best, right? Why tech? Because it creates cash flow and increases enterprise value. Mostly it’s hard to tell if new tech actually does that in a way that can be measured objectively, so no one actually seems to be checking this out. They just buy it. For years, IBM’s sales engineers were only allowed to talk to guys in the C-suite. They were told to sell their wares to the folks in the company who didn’t understand the product but were willing to fall for the pitch. That’s what Oracle and those types still do.
As someone who’s worked in corporate finance, I can vouch for this. The fancy models and evaluation criteria for investment decisions are a bunch of extra work people use to make themselves look busy and to pretend they’re making a well-informed investment decision when, in reality, it’s mostly just guessing and trying to justify decisions that were going to be made one way or another, based on those guesses.