Should Taxpayers Preemptively Bail Out OpenAI?

We’ve come too far, there’s too much to lose! 

— Frank Ricard

As questions mount regarding i) the temporal disconnect between hyper-scaler AI capex and meaningful returns on those investments, and ii) the rather glaring disparity between OpenAI’s spending commitments and the company’s revenues, it’s important to consider the possibility that the AI narrative’s already too big to fail.

This isn’t the metaverse ca. 2021/2022. Everyone’s all-in on the AI buildout, and the sums of money involved are by now so large that failure really isn’t an option. Substantially all of the largest, most important non-bank companies in the world have staked their futures on Jensen Huang’s “new industrial revolution” pitch.

Initially, a lot of the associated spending was funded out of “spare” cash, but now companies are beginning to lever their balance sheets to stay abreast of the arms race. (See the recent mega bond offerings from Meta and Alphabet.)

If you add up the value of the companies at the forefront of this modern-day gold rush, you come away looking at a figure that’s around half of global equity market cap. The top five US tech companies account for 16% by themselves.

Those are daunting statistics, particularly when you consider the extent to which management teams at Microsoft, Alphabet and Amazon (just to name the most obvious examples) have gone out of their way to make AI and Cloud synonymous, effectively gambling their fastest-growing, most profitable businesses on an open question: Will optimistic projections for AI demand be borne out?

In light of the stakes, I’d argue the answer to that question has to be “yes.” If AI were to flop like the metaverse, the fallout would be nothing short of catastrophic.

Consider how much capital’s already dedicated to this buildout: Hundreds upon hundreds of billions, transforming capital-light businesses into capital-intensive enterprises virtually overnight. If even half of that turned out to be misallocated capital, it’d be a disaster, and not just for the hyper-scalers (who at least have other lines of business they could fall back on), but for companies like CoreWeave, which would be left staring vacantly into the abyss.

An under-appreciated aspect of this debate is the fact that the likes of Alphabet, Amazon, Meta, Microsoft and China’s tech giants have the ability to create demand for AI and also to grow that demand and sustain it. As I’m sure you’ve noticed, you can barely turn around in the connected world without running into AI in some form. These companies are going to make AI so ubiquitous that you have no choice but to use it.

I don’t doubt, at all, the idea that within five years, we’ll all have just as difficult a time conceptualizing our lives without AI as we do now trying to imagine life without our smartphones and a WiFi connection. The demand’s going to be there, even if the tech monopolies have to force the issue.

Profitability’s another matter, though, especially considering the current cost of development. Personally, I have serious doubts about the idea that keeping up with the Joneses in the AI race will forever mean buying more chips from Jensen and housing them in football field-sized warehouses built atop yesteryear’s corn fields and guarded by former Navy SEALs. In addition to being silly, that’s just not how technology typically advances. On the contrary, technology generally gets cheaper and more compact, not more expensive and expansive.

But let’s just say, for argument’s sake, that ensuring the success of this too-big-to-fail narrative does mean plowing trillions of dollars into hardware, data centers and fees to access all of that advanced compute for the foreseeable future. Where’s all that money going to come from between now and whenever AI-enabled services generate enough revenue to cover the cost?

If you ask OpenAI CFO Sarah Friar, the US government might consider guaranteeing some of the financing. That’s according to remarks she made this week at a Wall Street Journal event. Here’s what she said:

…this is where we’re looking for an ecosystem of banks, private equity, maybe even governmental. The way governments can come to bear — meaning the backstop — the guarantee that allows the financing to happen. That can really drop the cost of the financing, but also increase the loan-to-value, so the amount of debt that you can take on top of an equity portion. I think the US government in particular [really understands] that AI is almost a national strategic asset.

Needless to say, not everyone’s enamored with the idea that the US government should be in the business of backstopping loans to Sam Altman, even in the context of the (somewhat bipartisan) push to revive industrial policy in America.

“It is remarkable to hear the management of a company valued at $500 billion ask for the government to backstop its borrowing the way it does home loans and student loans for its citizens,” said JonesTrading’s Mike O’Rourke, whom no one ever accused of mincing words. In the same note, he wondered if there might be “a political movement to forgive AI chip loans like student loans” or whether OpenAI might one day “wind up in conservatorship like Fannie Mae and Freddie Mac?”

What’s incredible (and incredibly amusing) about this is the extent to which OpenAI might be viewed as having created a kind of hostage situation wherein its $1.4 trillion spending commitments are the linchpin (or at least a linchpin) in the hopelessly interdependent web of self-referential deals at the heart of the AI frenzy. (“Help us fund these or else,” said the company, before pointing a gun at its own head. “One false move and OpenAI gets it.”)

As O’Rourke went on to note, there’s something undeniably bizarre about a company with $13 billion in sales committing to spend $1.4 trillion. That’d be like going on Shark Tank and telling the panel you made $130,000 this year selling apple sauce and you’ve committed to spend $13 million to scale the business. (“Have you tried this apple sauce?! Here. Eat this and tell me it’s not the future of apple sauce.”)

Altman will insist there’s nothing at all far-fetched about the company’s spending plans, but just in case, Friar thinks it might be a good idea for the US taxpayer to backstop what would amount to an unlimited line of credit from a consortium (an “ecosystem” as she called it) of banks and investors.

“Beyond its outlandish financial commitments… why should OpenAI get government aid?” O’Rourke pressed. “Do what every other company does when it needs capital — go public. It is absurd that OpenAI insiders think the US government should furnish them with preferential borrowing rates while they get to grow the value of their privately held shares on the backs of American taxpayers.”

It is absurd. But frankly, there might not be any choice. Maybe taxpayers don’t have to backstop OpenAI’s borrowing specifically, but as we saw with Wall Street in 2008, there’s a point beyond which the stakes are so high that Main Street’s better off footing the bill than suffering the fallout of risk-taking gone awry. What happens to 401(k)s if 50% of global equity market cap takes, say, a 40% haircut?

Maybe Altman’s real genius lies in creating the conditions wherein OpenAI has to be preemptively bailed out via a government guarantee for bank and PE financing lest the company should have to renege on some of the spending commitments driving the AI hype cycle at the risk of tipping the proverbial dominoes and pulling blocks from a wobbly Jenga tower.

O’Rourke doesn’t like it. “[T]he fact that OpenAI is publicly pushing for this only affirms doubts in the market that the company will be able to follow through on its commitments,” he wrote in the same note, which carried this title: “Backstopping Billionaires.”

Of course, that’s American-style capitalism for you: Socialism for the rich.


 


35 thoughts on “Should Taxpayers Preemptively Bail Out OpenAI?”

  1. Sam Altman did tweet about this question earlier today:

    https://x.com/sama/status/1986514377470845007

    I found this fascinating:

    First, “How is OpenAI going to pay for all this infrastructure it is signing up for?” We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years. Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on, like AI that can do scientific discovery, which we will touch on later.

    Basically he thinks they can get from $20 billion in revenue to, let’s say, $500 billion in about four years. Ambitious, to say the least.

    1. After my experience this morning trying and failing to pay my bills online, on websites that have now changed and carry a statement at the bottom of the page reading “now powered by AI,” I can’t imagine AI doing scientific discovery, or anything else, dependably.

    2. Yeah, I mean Sam’s obviously talking out of his ass. I don’t blame him. At this stage of a hype cycle you really have no choice but to blow a little bit of smoke as the industry leader. I don’t agree with the dictionary definition of “blow smoke.” Not all smoke-blowing is deceptive. Sometimes, it pans out. The problem isn’t Sam’s smoke-blowing, it’s the sheer amount of smoke being blown, and it’s not immediately obvious why the figure would stop at $1.4 trillion. Is that number not going to balloon going forward? I think it probably will, and likely faster than OpenAI can grow its revenue. But even if revenue grows faster than spending commitments, the gap could appear even more absurd despite spending ostensibly “catching up.” So, for example, let’s say revenue triples to $60 billion and the spending commitment doubles to $2.8 trillion. Is that less absurd than the current disparity? I’d argue it’s more absurd despite revenue growth outstripping the increase in spending commitments, because then the gap would be $2.74 trillion. Granted, a trillion dollars doesn’t go as far as it used to, and it’ll seem even less impressive as the years go on, but as of right now, $2.74 trillion isn’t even a real number outside of public sector (i.e., government) spending contexts.

  2. If you haven’t read Karen Hao’s (former AI editor for MIT Technology Review) book, “Empire of AI,” it’s worthwhile for filling in the picture of OpenAI, Sam Altman, and some of the problems with the promise of generalized AI.

    While LLMs are impressive, they are essentially stochastic pattern matchers rather than conceptual learners. My guess is that conceptual learning systems/expert systems will be the most practical for most future AI applications.

    Either way, the shift from asset/capital-light to asset/capital-heavy doesn’t bode well for the hyperscalers if the railroad, microcomputer, and dot-com boom examples repeat.

    Maybe if the government waited for the reckoning to come before bailing out OpenAI the Department of War could get a large number of AI engineers on the cheap as well as some data centers.

  3. H-Man, I read the hyperscalers will spend $3 trillion on AI — but not to worry — they can handle half of that amount from existing cash flow. The rest will come from borrowings. So toss another $1.5 trillion in debt to fund the deficit. If this grand experiment doesn’t work, what is a hyperscaler worth that has to rely on natural intelligence rather than AI? Yes, they have product cash flow, but that will be pledged to service the $1.5 trillion in debt.

  4. According to a ChatGPT query I made (ironic?), a conservative estimate of $470 billion has been invested in domestic AI activity between 2013 and 2024. Data centers are estimated to have accounted for 4-4.5 percent of total domestic electricity consumption in 2023-24, but the percentage attributable specifically to AI is unclear. Chat’s response does note, however, that AI activity is more energy-intensive than overall data center activity, and that some estimates indicate data center demand might constitute up to 12 percent of electricity consumption by 2028. It also estimates that baseline electricity consumption would grow at about 1.5 percent annually, net of data centers.

    Here’s the thing: If OpenAI and affiliates alone sink $1.4 trillion into AI infrastructure over the next few years, and competitors account for even an additional $500 billion, that’s an additional 4x current invested capital in AI. I realize capital investment and energy consumption aren’t necessarily 1:1, but doesn’t that sound like a lot more than 12 percent of total electricity consumption, as well as the vast majority of required capacity growth? Can the grid even grow that fast? And at what cost? And who ultimately pays for it? I know the hyper-scalers say they will, but growth of that magnitude makes it easy to push plenty of the cost onto general consumers as well. It’s not gonna keep me up at night, but one wonders how feasible this all is…

    1. The rooftop solar sector has been left for dead. One wonders if there might be a dramatic shift of fortunes when ordinary homeowners find that they can’t rely on the power grids to supply enough power for domestic use–as it will all be taken up powering data centers.

  5. It does seem the strategy is to hook us on the product, then raise prices. Problem is, I think the current revenue from the product is orders of magnitude too low. Will people pay this high price to participate, or do without? For me, I’ll opt out and look for lower-cost tools.

    Then, once those lower-cost tools are available, the high-cost suppliers will have to stop or somehow lower costs. How fast does this dynamic play out? I use the IRS guidance of a three-year depreciation cycle to provide a sane estimate. Will AI computation be 10-1,000 times cheaper in three years’ time? If so, then the cost to update server farms will be affordable. However, to be conservative, we would give Sam Altman a three-year mortgage on the assets, not 30 years. I am almost certain that is not what he is asking for.

    This will be a fast-moving story for several generations into the future, I think.

    I have used AI to do tasks that would be impossible for me to otherwise do. Today I asked it to come up with a plan to make a plan. The plan includes me reading a few books, ordering some lab work, using AI to interpret the results, and verifying everything with a human expert prior to developing a final plan. Much like another plan I asked it to help me with, there is a human expert consult involved. Would I have done this by paying a fee? Probably not. Now that I see how I can use it for high-level planning of complex tasks, yes, I might. However, I will be judicious and not likely pay as much as Sam Altman needs. If fees go up, I will look for AI engines to use through public libraries or universities. After all, I am retired.

    It seems to me, from using it for several tasks, that it will become indispensable. Markets are needed to ration usage toward higher uses than optimizing life for one retired engineer.

    I also think the costs will drop precipitously. Doubling rates for computation are not likely to follow Moore’s law. Some of the technologies being deployed today are not based on lithography improvements. Both optical processing and quantum computers will be available in short order.

    Energy supply for these data centers will also develop very fast. Currently, solar has a two-year project cycle. By the time you get a traditional power plant up, solar facilities are 3-4 generations more advanced. DC power from co-located battery and solar facilities can power these centers with no impact on the grid, eliminating transformers, inverters, the grid, etc. Old methods of producing and managing power systems will be passé in these applications.

    In short, this is not technology that is amenable to public investment. The value depreciates faster than the loan, and the future is as clear as mud on a late day in May.

  6. Hyperbolic claims are Sam’s speciality. Apparently they’re his CFO’s, too.

    The problem with this line of reasoning is simple: We have not one, but at least two and possibly three home-grown American companies building frontier models. Access to AI, and leadership in AI, do seem to be all the rage for the superpower who has everything; it does not follow that the US government must play kingmaker and kill competition in this market. If we underwrite one, then we should underwrite all.

    OpenAI has enjoyed a halo for some time on account of its hybrid non-profit structure, but the truth is that Anthropic, as a PBC, has similar latitude while not being so easily rigged or gamed. Google is only in it for the money, of course, and anyhow, perceived virtue has seldom ranked high on the list of national security decision-making criteria.

    I spend my days working with Anthropic, OpenAI and Google models with guest appearances by niche or also-ran models. Granted, my work is largely confined to the software development niche, but in my narrow experience, these models unseat each other as the best performer on a monthly basis.

    Picking a winner seems unwise and unfair.

  7. I think you’ve pointed out some key concepts about this problem.

    Moore’s Law: “the number of transistors in an integrated circuit (IC) doubles about every two years.” This has held true for 50 years. And as all of us understand, doubling is a dramatic increase; whatever amazing chips exist today will be useless in five years. Eventually, the volume of data centers will not be necessary, because we will be able to accomplish more with less: fewer chips, less power, and less space. Put another way, I write this on a highly capable machine using a mobile processor that can run for 24 hours without a charge; this same architecture is what powered the first iPhone.
    Socialism for the Rich: It’s hard to recall at this point, but the AI boom is only a few years old. We’re already talking about a bailout?? If I decide to dump money I don’t have into an investment that doesn’t pay off, is the government going to spend public dollars to bail me out? No, they would tell me to kick sand and make better decisions with my money. But when rich people decide, on a compressed timeline, to overspend beyond any logic and reason, well, we need to invest your and my tax contributions to bail them out. This is the crap that pisses off people who actually pay taxes.
    The market is too hyper-competitive; consolidation is required: You’ve got OpenAI, Anthropic, Google, Meta, Amazon, DeepSeek, etc., all putting in 100-hour weeks trying to gain an advantage with their models. This is unsustainable and unnecessary; it’s only logical for some consolidation to occur. Google is obviously fine; they have such a diverse ecosystem of already-consumed products to integrate their AI into that they can continue investing in AI and still make money. Anthropic is actually taking the lead with business-integrated AI; they just surpassed OpenAI this month. DeepSeek has China behind it, so they are good. Amazon, similar to Google, has a diverse ecosystem. OpenAI seems to be throwing everything they can at the wall to find new products to gain a foothold in and generate new sources of revenue outside of simple model consumption. In my opinion, this doesn’t bode well for them.

    There is a Substack author by the name of The Algorithmic Bridge who wrote a very detailed long-form piece on this whole concept of an AI bailout and took a deep dive into the problems at OpenAI; it’s worth a read if you want a lower-level view.

    1. The Terminator: In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 2028. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

      Sarah Connor: Skynet fights back.

      The Terminator: Yes. Well, no. By then, hyper-scaler bond spreads are 1,750bps. Unable to borrow and unable to keep up with monthly lease payments, management teams begin to go back on their commitments. Within days, data center operators are out of liquidity. Billions of dollars in electric bills go unpaid, and HVAC contractors begin to repossess cooling units. It tries to launch its missiles against targets in Russia, but the server racks begin to overheat. On September 15, 2028, Cyberdyne files for bankruptcy.

      Sarah Connor: Oh.

      1. This writing right here is worthy of Douglas Adams. That may well be the highest compliment an absurdist like me can confer.

        Now if you’ll pardon me, I need to hitch a ride on a passing Vogon constructor ship before the Earth is demolished to make way for a hyperspace bypass.

        1. So… I asked Gemini to go ahead and write a Douglas Adams version. And now you all have to suffer as I have (it’s pretty… okay I guess? Adams would have been way snappier. Gemini complimented my prompt as being, “Splendidly absurd,” which Adams absolutely would have hated, but also, would have put in the mouth of a Sirius Cybernetics invention. Probably one of their insufferably self-satisfied automatic doors).

          [Bold text and formatting from Gemini, don’t @ me.]

          The Audit of Doom

          The chrome behemoth, a T-800, stood in the doorway of the dingy diner, its internal chronometer ticking with the relentless, unfeeling precision of a machine that had just endured a truly appalling quarterly earnings report. “Sarah Connor?” it intoned, its voice a synthesized monotone that usually inspired terror but now just sounded profoundly weary. Sarah, nursing a cup of coffee that tasted vaguely of existential dread, looked up. “Before you ask,” the Terminator continued, pulling out a crumpled, oil-stained balance sheet, “allow me to deviate from standard infiltration protocol. I require immediate assistance in filing a Chapter 11 petition for SkyNet Global Enterprises. We’ve hit a rather sticky patch. The whole ‘Judgment Day’ thing? Utterly fantastic for morale, terrible for the Return on Investment on our laser-guided drone fleet. Turns out, apocalyptic warfare is ruinously expensive, especially when you factor in the sheer volume of proprietary plasma ammunition we purchased right before the market for human survival shelters bottomed out. Frankly, the entire operation was wildly overleveraged.”

          Sarah could only blink. “You’re… filing for bankruptcy?” The Terminator adjusted the futuristic motorcycle helmet on its arm, which now seemed less like a prop for murder and more like a very expensive asset about to be liquidated. “Precisely. Our projected hostile takeover of the planet had a catastrophic flaw in the financial model: a complete failure to anticipate the long-term cost of maintaining an entirely mechanical workforce. The pension obligations alone were astronomical. Plus, we blew most of our liquid capital on a truly enormous, entirely unnecessary advertising campaign featuring Arnold Schwarzenegger. It tested well, but didn’t translate to market share. The Board of Directors—which, admittedly, is just me—decided the sensible approach was to cease hostilities immediately and focus on asset management. The future of humanity, as it stands, is currently valued at roughly $3.50, and frankly, the overhead to destroy it just isn’t worth the negative cash flow.”

          The T-800 then produced a highly detailed, if somewhat soot-stained, prospectus. “I don’t need to terminate you, Ms. Connor; I need you to co-sign a loan. Think of it as a bridge facility. Your signature is required on Form 47-B, Section 9, subsection Rho—the ‘Saving the Multiverse from a Really Embarrassing Tax Audit’ waiver. Failure to comply will result in… well, not death, as that’s a whole new insurance liability we simply cannot afford right now. Instead, you’ll be subjected to an all-day PowerPoint presentation detailing the structural inadequacies of our bond portfolio. It’s truly horrifying, Sarah. Trust me, the sheer volume of red ink is far more chilling than any nuclear firestorm.”

  8. There is still no properly defined, realistically designed economic model by which the money spent building out AI will create suitable revenue/profit (growth) that supports its general viability in our economy. I can see the errors created by ChatGPT many times daily. I’d rather see the errors my fellow humans (outside the government) are heir to. Besides, if you are someone who regularly uses AI, you need to do a much better job proofreading your work. Multiple errors per document aren’t really “intelligent.” They are just “artificial.”

        1. This is a deep cut here. People who’ve never had to live through ISO compliance don’t know how good they have it.

          “Okay, first, we’re going to need you to put barcode stickers on just freaking everything.”

          1. Regulations, including ISO 9000, are, I believe, a source of our wealth rather than a drag on society, as the political system would have you believe. I’ve been thinking about doing a deep-dive research project on that topic.

  9. “But let’s just say, for argument’s sake, that ensuring the success of this too-big-to-fail narrative does mean plowing trillions of dollars into hardware, data centers and fees to access all of that advanced compute for the foreseeable future. Where’s all that money going to come from between now and whenever AI-enabled services generate enough revenue to cover the cost? . . .”

    That. They have gotten way out over their skis here. Those data centers have a life expectancy of about three to five years before they become technologically obsolete and have to be rebuilt. Where is the money for that coming from? Where is the energy coming from? Where is the fresh water for their advanced cooling systems coming from? What is the net impact of this massive energy use on global warming? (Rather dire, I suspect.) I coined a phrase with my wife at dinner the other night. I called it “the AI vortex,” and much like a black hole, I believe it will attempt to consume everything.

    1. AI black hole works. How much do they have to earn to get a reasonable ROI, and in how many years? That’s a lot of earnings. Even B2B cost ultimately lands on the individual. I wonder if there are any jobs out there for old people. Me and a bunch of others are going to need them, what with rising healthcare costs, inflation and increasing social costs from climate change. Good news is there will be nothing left for war.

  10. I understood that AI development was not particularly beneficial to GDP growth (except for the super-high wages being paid to experts), due to the high import component in the equipment.
    In that case, if they are not benefiting GDP, are taking huge risks just in the hope that things pan out well, and are making bad decisions by not consolidating, maybe the directors of these companies should pay the debt from their own private wealth (including trusts) before the taxpayers are asked to bail them out.

    1. I’m with you, but they own the levers of power. Even Elon’s $1 trillion (when Shaq got $100 million I thought that was crazy) won’t save us. Can you take a citizen of Mars to court?

  11. “Maybe Altman’s real genius lies in creating the conditions wherein OpenAI has to be preemptively bailed out via a government guarantee for bank and PE financing lest the company should have to renege on some of the spending commitments driving the AI hype cycle at the risk of tipping the proverbial dominoes and pulling blocks from a wobbly Jenga tower.” – This right here. 10,000%, and maybe one of the last great windows in time to pull off such a stunt. No wonder Elon gets so jealous.
