Nvidia Says A.I. Revolution Proceeding Apace

I’d say “not surprisingly,” but I guess that doesn’t quite work given that beats versus consensus are the very definition of a “surprise” in the context of corporate earnings.

Instead, I’ll just say that I wasn’t surprised when Nvidia topped estimates with Q3 results released after the bell on Tuesday.

Revenue of $18.12 billion was more than $2 billion ahead of the Street, and EPS of $4.02 beat by $0.65.

Sales in Q3 were easily ahead of the guide and, amusingly, were 45% ahead of Q3 estimates as they stood prior to the company’s guidance as issued alongside Q2 results.

Not that it matters (the gains are so stupendous as to make comparisons meaningless for the purposes of gleaning anything beyond the self-evident conclusion that things must be going well), but sales growth was 206% YoY and 34% sequentially.

Data center revenue of $14.51 billion was (obviously) a record and represented a 41% increase from Q2, and a near 280% YoY jump. Consensus there was $12.82 billion.

The company guided for $20 billion in revenue for Q4, plus or minus 2%. Consensus saw $17.9 billion.

Nvidia did caution (to the extent they’re capable of coming across as cautious) that sales to China and other destinations affected by new licensing requirements “will decline significantly” in Q4. Those products, the company said, have contributed between a fifth and a quarter of data center revenue recently.

But fear not: Nvidia believes growth in other regions “will more than offset” those declines. Recall that the US recently imposed additional restrictions on Nvidia’s chip sales to Chinese customers.

Virtually everything was a beat for Q3 — so, not just sales and EPS. Gross margin was 250bps better than expected, for example. Expenses were generally in line.

Jensen Huang delivered a characteristically bold assessment, touting a “broad industry platform transition from general-purpose to accelerated computing and generative A.I.”

“Large language model startups, consumer internet companies and global cloud service providers were the first movers, and the next waves are starting to build,” Huang carried on. All of Nvidia’s growth engines are “in full throttle” and “the era of generative A.I. is taking off.”

A couple of things. First, expectations for Nvidia’s Q3 results and Q4 guide were sky-high. The shares are priced beyond perfection. Consensus aside, it’s hard to say where “the bar” is at this point, and therefore it’s hard to know whether Nvidia cleared it.

Second, this week’s breathless media circus around the OpenAI soap opera has convinced me that the A.I. frenzy is overblown, at least in the near-term. There are two existential armed conflicts going on in the world and more often than not over the past four days, Gaza lost to Sam Altman in the fight for space above the fold.

That sort of hair-on-fire, 24-hour blanket coverage would be justified if anyone thought Skynet was on the verge of becoming self-aware, but for now it’s just entertaining your silliest chat requests and drawing me pictures of bears and bulls (like the one you see at the top of this article).

I’m going to leave it at that and let delirious investors decide whether Huang did enough in Q3 and said enough about Q4 to justify the share price.




6 thoughts on “Nvidia Says A.I. Revolution Proceeding Apace”

  1. The beautiful AI-generated bull picture for this article alone justifies a valuation of over $1 trillion. But seriously, NVIDIA’s forward P/E is now solidly below 30 given the impressive forward guidance. The yield on the 10-year Treasury is 4.40%. The lower liquidity of this holiday week should allow market makers to bring the share price a bit lower, shake out nervous nellies, stimulate some profit-taking and kill some call options. I imagine some analysts will raise price targets, and then the chase for alpha into year-end should put a solid floor at $500/share.

  2. There are other sources for AI chips.
    – AMD’s MI300X matches or surpasses NVDA’s H100 in performance, and AMD is closing the software gap. AMD’s supply will be constrained in 2024 until TSM’s packaging capacity expansion comes online.
    – GOOG’s TPUs are competitive with the H100 in a GOOG environment, but they don’t compete for sales to others (being reserved for GOOG’s own use).
    – MSFT’s initial AI chips are reportedly a good first effort but, having been designed before the huge LLMs, lack sufficient memory bandwidth; MSFT’s 2nd-gen hardware may be competitive in 2025. Again, MSFT’s chips are for MSFT’s own use.
    – AMZN’s present AI chips are reportedly lagging. Again, not for third party sale.
    – INTC’s current Gaudi is not competitive head-to-head with the H100, but seems to be good for certain uses.
    – NVDA’s next generation (Grace Hopper) is expected to surpass the current chips from all those companies, all of which are working on their own next-generation hardware to surpass GH.

    The picture for most of 2024, I think, is that
    – NVDA will dominate sales to third parties, 1H24 will (I think) be as glorious as 2H23
    – AMD will sell all it can get from TSM, which won’t be anywhere near what NVDA gets/sells, but $3-4BN nonetheless
    – INTC will find Gaudi’s niches (inference, probably) but won’t be a significant factor
    – The big cloud names (MSFT, GOOG, AMZN) will be increasingly self-sufficient in AI chips and will have less need for merchant silicon.

    Meanwhile, application companies will be building AI into applications and customers will be trying to actually save/make money with these. Let’s assume there is enough progress to keep all concerned enthusiastic and spending – at some point there will be a big wave of disillusionment and backlash, but let’s not think about that.

    Where are those workloads going to run? Not just the training – how many 2 trillion token LLMs is the world going to build? 100? 1000? – but more importantly the inference?

    If in the cloud, then some (probably a growing amount) of those workloads will run on proprietary GOOG/AMZN/MSFT hardware, lost to semi players like NVDA, AMD and INTC. NVDA will not only lose the H100 sale, it also won’t get the networking and software revenue that it currently extracts from the customers clamoring for H100s.

    If in users’ local datacenters, or on end users’ PCs (the “AI PC” pitch presumes that LLMs focused on specific business roles can be smaller and more efficient than the trillion-token monster LLMs getting the headlines today), the semi players NVDA, INTC and AMD (and ARM?) will compete for these sockets. Selling an AI accelerator or AI-optimized CPU/GPU into a PC will be attractive business for INTC and AMD, but a real comedown from the glorious ASPs and gross margins NVDA is enjoying now.

    1. Good analysis, JL. But I’d quibble about the roles of AMD and INTC. It seems that there is already some tiering of interest as potential business users (paying customers) try out the shiny new toy.

      Some early reports are that two of the largest use cases, to date, are customer service and coding. Neither of those needs an LLM that incorporates tourist arrivals at the Hobart, Tasmania airport for the last 30 years or point spreads on more than 20,000 NCAA football games. They don’t need the computing power to comb through those massive databases looking for causal factors.

      Edge chips, which you sorta reference, will be better suited (i.e., CHEAPER) for those tasks. Tasks that can replace workers to justify the costs.

      The cost/benefit analysis is no different than for buying a robot(s) to put in your factory. This is America, baby! Everyone and everything needs to pay its own way!

      1. I agree that some or many business LLMs won’t need colossal models like GPT-4. They will be smaller models, focused on business data and needs, more economical to train and run, and capable of being hosted in current datacenters with normal amounts of power and cooling. Privacy might even call for some to run locally.

        The H100 and the coming GH are, I think, way overkill for these. It may be that GOOG’s, MSFT’s and AMZN’s hardware is just fine. It may be that INTC’s Gaudi, or even INTC’s, AMD’s and AAPL’s CPUs, can handle the workload of running these models.

        I don’t think it’s at all certain, or even very likely, that NVDA will continue to have the lock on the market that it does today. The whole tech industry is working very hard to break that lock. No one wants NVDA deciding who is GPU-rich and who is GPU-poor.

        NVDA may be at 30X 2024 P/E, and that “E” may be an unsustainable, windfall level of earnings.

        This kind of reminds me of the years that NVDA GPUs were the magic key for Bitcoin miners. Then better ASICs were developed, and that whole market vanished for NVDA.

  3. The first stage in every new technology is the hype stage. Remember Netscape? AOL? That doesn’t mean the winners won’t grow into their sky-high valuations, which was the mistake the short sellers of the Internet era made in spades (“US Robotics is a hardware stock and should trade at a commodity multiple!”).

    More popcorn!
