Nvidia Says A.I. Revolution Proceeding Apace

I'd say "not surprisingly," but I guess that doesn't quite work given that beats versus consensus are the very definition of a "surprise" in the context of corporate earnings. Instead, I'll just say that I wasn't surprised when Nvidia topped estimates with Q3 results released after the bell on Tuesday. Revenue of $18.12 billion was more than $2 billion ahead of the Street, and EPS of $4.02 beat by $0.65. Sales in Q3 were easily ahead of the guide and, amusingly, were 45% ahead of Q3 estimat


6 thoughts on “Nvidia Says A.I. Revolution Proceeding Apace”

  1. The beautiful AI-generated bull picture for this article alone justifies over a $1 trillion valuation. But seriously, NVIDIA’s forward P/E is now solidly below 30 given the impressive forward guidance. The yield on the 10-year Treasury is 4.40%. The lower liquidity of this holiday week should probably allow market makers to bring the share price a bit lower, shake out nervous nellies, stimulate some profit-taking, and kill some call options. I imagine some analysts will raise price targets, and then the chase for alpha into year-end should put a solid floor at $500/share.
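
    A minimal sketch of that valuation math, taking the comment’s $500 price and a 30x forward multiple as round inputs (the implied forward EPS is derived from those numbers, not an official estimate):

    ```python
    # Sketch of the valuation comparison above. Inputs are the comment's
    # round numbers (treated as assumptions), not official estimates.
    price = 500.0            # the hypothesized floor, $/share
    forward_pe = 30.0        # "solidly below 30" per the comment; 30 as a ceiling
    ten_year_yield = 0.0440  # 4.40% yield on the 10-year Treasury

    implied_forward_eps = price / forward_pe  # ~$16.67/share at a 30x multiple
    earnings_yield = 1.0 / forward_pe         # ~3.33%

    print(f"Implied forward EPS at $500 and 30x: ${implied_forward_eps:.2f}")
    print(f"Earnings yield {earnings_yield:.2%} vs 10-year at {ten_year_yield:.2%}")
    ```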

  2. There are other sources for AI chips.
    – AMD’s MI300X matches or surpasses NVDA’s H100 in performance, and AMD is closing the software gap. AMD’s supply will be constrained in 2024 until TSM’s packaging capacity expansion comes online.
    – GOOG’s TPUs are competitive with the H100 in a GOOG environment, but they don’t compete for sales to others (being reserved for GOOG’s own use).
    – MSFT’s initial AI chips are reportedly a good first effort but, having been designed before today’s huge LLMs, lack sufficient memory bandwidth (see the sketch after this list for why bandwidth is the binding constraint); MSFT’s 2nd-gen hardware may be competitive in 2025. Again, MSFT’s chips are for MSFT’s own use.
    – AMZN’s present AI chips are reportedly lagging. Again, not for third-party sale.
    – INTC’s current Gaudi is not competitive head to head with the H100, but seems to be good for certain uses.
    – NVDA’s next generation (Grace Hopper) is expected to surpass the current chips from all of those companies, who are in turn working on their own next-generation hardware to surpass GH.
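
    As a rough illustration of why memory bandwidth is the gating factor in the MSFT item above: at low batch sizes, generating each token requires streaming essentially all of the model’s weights through memory, so bandwidth, not raw FLOPS, caps throughput. A minimal sketch, with illustrative (not vendor-spec) numbers:

    ```python
    # Rough ceiling on low-batch decode throughput: every generated token
    # streams (roughly) all of the model's weights through memory, so
    #   tokens/sec <= memory bandwidth / model size in bytes
    # Bandwidth and model-size figures below are illustrative assumptions.
    def max_tokens_per_sec(params_bn: float, bytes_per_param: float,
                           bandwidth_tb_s: float) -> float:
        model_bytes = params_bn * 1e9 * bytes_per_param
        return bandwidth_tb_s * 1e12 / model_bytes

    # A 13B-parameter model in fp16 on an H100-class part (~3.3 TB/s HBM):
    print(max_tokens_per_sec(13, 2, 3.3))  # ~127 tokens/s ceiling at batch 1
    # The same model on a part with a third of that bandwidth:
    print(max_tokens_per_sec(13, 2, 1.1))  # ~42 tokens/s ceiling
    ```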

    The picture for most of 2024, I think, is that
    – NVDA will dominate sales to third parties; 1H24 will be as glorious as 2H23
    – AMD will sell all it can get from TSM, which won’t be anywhere near what NVDA gets/sells, but $3-4BN nonetheless
    – INTC will find Gaudi’s niches (inference, probably) but won’t be a significant factor
    – The big cloud names (MSFT, GOOG, AMZN) will be increasingly self-sufficient in AI chips, and will have less need for merchant silicon.

    Meanwhile, application companies will be building AI into applications and customers will be trying to actually save/make money with these. Let’s assume there is enough progress to keep all concerned enthusiastic and spending – at some point there will be a big wave of disillusionment and backlash, but let’s not think about that.

    Where are those workloads going to run? Not just the training – how many 2-trillion-token LLMs is the world going to build? 100? 1000? – but, more importantly, the inference?

    If in the cloud, then some (probably a growing amount) of those workloads will run on proprietary GOOG, AMZN, and MSFT hardware, lost to the semi players NVDA, AMD, and INTC. NVDA will not only lose the H100 sale, it also won’t get the networking and software revenue that it currently extracts from the customers clamoring for H100s.

    If in users’ local datacenters, or on end users’ PCs (the “AI PC” pitch presumes that LLMs focused on specific business roles can be smaller and more efficient than the trillion-token monster LLMs getting the headlines today), the semi players NVDA, INTC, and AMD (and ARM?) will compete for these sockets. Selling an AI accelerator or AI-optimized CPU/GPU into a PC will be attractive business for INTC and AMD, but a real comedown from the glorious ASP/GM% NVDA is doing now.

    1. Good analysis, JL. But I’d quibble about the roles of AMD and INTC. It seems that there is already some tiering of interest as potential business users (paying customers) try out the shiny new toy.

      Some early reports are that two of the biggest use cases, to date, are customer service and coding. Neither of those uses needs an LLM that incorporates tourist arrivals at the Hobart, Tasmania Airport for the last 30 years or point spreads on over 20,000 NCAA football games. They don’t need to use the computing power to comb through those massive databases looking for causal factors.

      Edge chips, which you sorta reference, will be better suited (CHEAPER) for those tasks – tasks that can replace workers to justify the costs.

      The cost/benefit analysis is no different than for buying robots to put in your factory. This is America, baby! Everyone and everything needs to pay its own way!

      1. I agree that some or many business LLMs won’t need colossal models like ChatGPT 4.0. They will be smaller models, focused on business data and needs, more economical to train and run, capable of being hosted in current datacenters with normal amounts of power and cooling. Privacy might even call for some to run locally.

        H100 and the coming GH are, I think, way overkill for these. It may be that GOOG, MSFT, and AMZN’s hardware is just fine. It may be that INTC’s Gaudi, or even INTC and AMD and AAPL’s CPUs can handle the workload of running these models.
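
        A back-of-envelope sizing makes the overkill point concrete: weight memory is roughly parameter count times bytes per parameter, and the parameter counts below are illustrative assumptions:

        ```python
        # Approximate weight footprint: params (billions) x bytes per param = GB.
        # Weights only; KV cache and activations add more. Parameter counts are
        # illustrative assumptions, not any vendor's disclosed figures.
        def weight_gb(params_bn: float, bytes_per_param: float = 2.0) -> float:
            return params_bn * bytes_per_param  # billions of params x bytes = GB

        print(weight_gb(7))     # ~14 GB: a focused business model fits a midrange GPU
        print(weight_gb(70))    # ~140 GB: a couple of 80 GB accelerators
        print(weight_gb(1800))  # ~3,600 GB: frontier scale, dozens of H100-class parts
        ```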

        I don’t think it’s at all certain, or even very likely, that NVDA will continue to have the lock on the market that it does today. The whole tech industry is working very hard to break that lock. No one wants NVDA deciding who is GPU-poor and who is GPU-rich.

        NVDA may be trading at 30X 2024 earnings, and that “E” may be an unsustainable, windfall level of earnings.

        This kind of reminds me of the years that NVDA GPUs were the magic key for Bitcoin miners. Then better ASICs were developed, and that whole market vanished for NVDA.

  3. The first stage in every new technology is the hype stage. Remember Netscape? AOL? That doesn’t mean the winners won’t grow into their sky-high valuations – that was the mistake the Internet-era short sellers made in spades (“US Robotics is a hardware stock and should trade at a commodity multiple!”).

    More popcorn!
