Buybacks or capex?
Cue the knight from The Last Crusade: “You must choose. But choose wisely!”
The hyper-scalers have this all figured out. It’s very simple: The more money you spend on the AI buildout, the better. If that means betting the whole house on an unproven technology (or even renaming your entire company and changing your ticker symbol in honor of that technology), then so be it. It worked with the metaverse, right?
I’m joking. Only not. Not at all, actually. I’ve written voluminously on the hyper-scaler capex/buyback tradeoff, most recently in “How AI Spending Could Rob Stocks Of Their Biggest Buyer.”
The largest source of demand for US equities is the corporate bid. Buyback authorizations are running at a record pace this year, and executions will notch a new record too in 2025. That’s an “invisible” bid under the market, and it’s funded in no small part by big-tech cash flow.
That raises an obvious question: What happens if and when a larger share of that cash flow is diverted to capex? The short answer is that everything will almost surely be fine, because it’s not as if there won’t still be plenty of buybacks.
The table above is from Goldman, where David Kostin expects the corporate bid to be $1.12 trillion in 2026. And yet, that’d represent “just” 9% growth versus 2025, compared to another 17% YoY jump in capex.
If you’re wondering what share of that 17% increase is attributable to AI spending, you’re in luck: Kostin’s done the math. If you strip out the hyper-scalers (and Energy companies), capex growth will be just 8% in 2026, slower than the growth rate for buybacks.
The figure on the left, below, gives you a sense of the inflection tracked on a rolling basis. Measured versus the same period a year ago, buybacks for the largest AI spenders have flatlined (i.e., they’re still buying back stock, but no more than they bought back during the same stretch last year) while outlays on AI infrastructure, capacity and compute are growing rapidly.
The figure on the right, above, shows you the extent to which the same companies are scaling up their AI spending far faster than analysts anticipated at the beginning of the calendar year. The implication is that there’s upside to the 19% projection for 2026.
“Consensus estimates imply YoY capex growth will decelerate in 2026, but we believe estimates are too conservative,” Kostin remarked. “Capex growth registered 78% YoY in Q2, firms continue to message that supply cannot keep pace with AI demand and with some exceptions, the AI hyper-scalers have generally been able to fund these capex plans via cash flow generation and existing cash balances rather than debt financing.”
That’s all well and good as long as it works out. “Better be worth it,” as I’ve put it previously. Because, as Kostin went on to note, “one consequence of the AI capex boom is dwindling support for buybacks.”




Cue Indy from Raiders: “Shut your eyes, Marion. Don’t look at it, no matter what happens!”
“These people are trying to kill us.” “I know, Dad!” “This is a new experience for me.” “Happens to me all the time.”
Hang on lady, we going for a ride!
Hopefully we don’t get: “Wow! Holy smoke! Crash landing!”
Most people have no clue about the limitations of AI, from hallucinations to power draw. One estimate puts the power required to cut AI error rates in half at roughly an order of magnitude above current levels. At some point investors will need to determine whether the ROI is there for AI capex.
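To make the commenter’s claim concrete (this is just arithmetic on the stated estimate, not a sourced scaling law): if each halving of the error rate costs ~10x power, the cost compounds multiplicatively with every additional halving.

```python
def power_multiplier(halvings: int, cost_per_halving: float = 10.0) -> float:
    """Total power multiple needed for a given number of error-rate halvings,
    assuming each halving costs `cost_per_halving` times the power (the ~10x
    figure is the commenter's estimate, not an established law)."""
    return cost_per_halving ** halvings

# One halving (e.g., 20% error -> 10%) costs ~10x power...
print(power_multiplier(1))  # 10.0
# ...but two halvings (20% -> 5%) already cost ~100x.
print(power_multiplier(2))  # 100.0
```

Under that assumption, driving errors down a few more halvings gets prohibitively expensive fast, which is the ROI question in a nutshell.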
It’s not just hallucinations; it makes mistakes too. I was working on documentation and fed the LLM a class of ~1,000 lines of code, then asked it to list all the public methods and properties in alphabetical order. The output was missing some, so I asked why it missed one; it apologized and tried again. Some were still missing, and it retried and still got it wrong. At that point I was pissed and asked if it was deliberately making mistakes. It said something like it’s here to help, and on the next attempt it got it right. If I were dealing with a person I would have said they were being lazy, not really looking properly, but I can’t explain why the LLM had so many issues with such a simple task.
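Worth noting that this particular task is deterministic with a parser, no LLM required. A minimal sketch using Python’s standard-library `ast` module (assuming the class is Python source; the comment doesn’t say which language it was):

```python
import ast

# Toy stand-in for the ~1,000-line class the commenter described.
source = """
class Example:
    def __init__(self):
        self._hidden = 0

    def beta(self):
        pass

    @property
    def alpha(self):
        return self._hidden

    def _private(self):
        pass
"""

tree = ast.parse(source)
# Grab the first class definition and collect its public callables
# (methods and @property-decorated members), skipping _underscore names.
cls = next(n for n in ast.walk(tree) if isinstance(n, ast.ClassDef))
public = sorted(
    n.name
    for n in cls.body
    if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
    and not n.name.startswith("_")
)
print(public)  # ['alpha', 'beta']
```

A parser never skips an entry, which is exactly the property the LLM couldn’t deliver.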
Very interesting. Good thing you weren’t working on a medical application.