What If Generative AI Isn’t The Revolution Nvidia Says It Is?
Jensen Huang lost $9.8 billion on paper in six hours this week.
Don't worry: He can afford it. He's still the 18th richest person on the planet even after "losing" $18 billion in the days following Nvidia's Q2 earnings release.
At the lows for equities in March of 2020, during the original pandemic crash, Huang was worth a "mere" $5 billion. He's 20 times richer now, with the vast majority of his paper gains accruing since May of 2023, when Nvidia changed the world with the first of what would become a string of blowout earnings reports.
From the short time we’ve had with Gen AI, it appears it’s only really valuable with incredibly consistent data sets. So I think we can expect the fight against cancer to improve dramatically, the efficiency of rockets and electric vehicles to improve, and anything where you can throw a ton of very consistent data at it to reap rewards. Everything else? Well, ask any one of the billion-dollar chatbots to solve a basic econ question using known formulas and parameters and I think you’ll get the point.
AI is a game changer, just not for everyone. And so that raises the question: is this a bubble? I think it mostly is.
As luck would have it, I stumbled upon two AI related articles today.
In the first one, from futurism.com, the headline says it all: “Government test finds that AI wildly underperforms compared to human employees.” Amazon Web Services conducted the test, which was commissioned by the Australian government, and the conclusion was that AI could actually end up creating more work for humans.
The second article was about AI image generation, and it came to quite different conclusions. From Tom’s Guide: the author put 7 AI image generators through 3 detailed prompts, then evaluated their outputs. The article conveniently shows the resulting images, which I have to admit were mostly very good (Dall-E was the clear laggard). Ideogram took first place.
It seems like we’re seeing the same curve as we did for the internet: massive hype starting around 1995 (Time magazine cover: The Information Superhighway!), peak madness in ’99-’00, a crash as vast amounts of malinvestment were shaken out, and now it’s a ubiquitous part of our daily lives.
I’m still a big-time believer, but as of now, I’m in the bucket of AI creating more work for me. When the boss wants AI, he gets it, even when it currently adds no (or negative) value. If Nvidia pulls back more though, I might have to dip my toe in.
I agree with your assessment in the last paragraph. Collectively, we survived that insanity in the market. But what an avalanche of losses, wiping out many (especially the latecomers), when the shakeout occurred.
My guess is it won’t be pretty, and on a much larger scale this time. So many layers in the house of cards, heaped on flimsiness. But we’ll collectively be further on down the road, again, in time.
From Marketwatch today: “However, Nvidia’s volatility is a sign of how much ‘hot money’ (short-term investments being made in the hope of a quick return) is flowing in and out of the stock. For example, the GraniteShares 2x Long NVDA Daily ETF (NVDL) is the largest leveraged single-stock exchange-traded fund in the U.S. market by a long way, with $4.99 billion in assets.”
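(For anyone unfamiliar with how a 2x daily-reset product like NVDL behaves, here’s a minimal sketch of why these funds attract short-term money rather than buy-and-hold capital. The price path is made up and deliberately choppy, not real NVDA data.)

```python
import numpy as np

# Hypothetical path: the underlying alternates +3% and -3% every trading day.
daily = np.tile([0.03, -0.03], 126)          # 252 trading days

stock = np.prod(1 + daily) - 1               # buy-and-hold return of the underlying
levered = np.prod(1 + 2 * daily) - 1         # 2x daily-reset fund, fees ignored

print(f"Underlying over the year: {stock:+.1%}")    # roughly -11%
print(f"2x daily-reset fund:      {levered:+.1%}")  # roughly -37%
# Because the leverage resets every day, a choppy path erodes the levered fund
# far more than twice the underlying's loss; fine for day traders chasing quick
# moves, punishing for anyone who holds through volatility.
```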
The history of silicon computing is processing and data. As processing speeds up, the demand for data increases. AI as it stands today is about faster processing chips and larger datasets. The speculation that AI will duplicate human consciousness is only a concept at this point. AI can outperform the human brain when the race is how fast data sets can be processed; think Deep Blue beating a human champion in a chess match. What AI can’t do is think outside the box. It can only reassemble existing data within the ‘data box’ it has access to. We still don’t completely understand human consciousness. How does a human think outside the box? I’ll speculate that one key component is our emotions. Another may be visualization, such as when Einstein imagined himself as an atom crossing the cosmos. Thus far computers have no tools, and maybe no concepts, for these.
And if that databox is polluted with inaccurate data, AI doesn’t know how to determine truthiness, instead relying on data quantity to inform reliability. With a data set like the internet, that isn’t so reliable.
How does a human think outside the box? Imagination, or the ability to see patterns in tools and processes that may not be obvious. Add up thousands of years of this creativity iteratively expanding on newly informed tools and processes, and you end up with cloud computing and AI.
AI does what it is asked to do within the parameters given, with the dataset available, and within the constraints it was programmed with. I have yet to see it decide to pursue its own course of action, and I’m sure this is by design.
Most will think this comment brands me as an old dad. In fact, I am just that. I dislike so-called generative AI for several reasons. It has no soul, no honor, no respect. It is, in fact, inherently a lie. It is based on the confiscation of the work (data) created by others and then passed off as its own. I am now, and have been for 60 years, a creator, a writer, a researcher and a teacher. I have been plagiarized and had personal data stolen for others to claim as their own. As AI algo builders need more and more data, they won’t create it through their own work; they will collect it by stealth, steal it, and claim the output as their own. AI will be used to create false resumes, dicey computer code, false images, bad medicine, and other untrustworthy outcomes. We who create value provide the basis for AI, which has to wait until we create our work so AI can confiscate our rightful property, claim it, and profit from it. We create the basis of AI through our work, and those who steal and manipulate that work claim the paycheck. While some forms of AI can seem to be creators, they first must start with what they steal: our work, the trails left by our lives, as in medical data, retail data, financial data, etc. The thieves never tell us they have taken from us, they just do it, and then they lie to us again when they hide their work by passing it off as real. I’m an old dad who believes in old values, so I dislike AI intensely.
It’s early days. Long-term, I believe A.I. is going to pair with quantum computing, to usher in advancements that we can’t even imagine now.
I was just thinking about quantum computing. It has dropped out of the conversation since the launch of ChatGPT, but in a lot of ways it has the power to be just as transformative, if not more so. AI does people things, and it’s getting pretty good at them, but QC does math things that people are literally incapable of doing.
Updated back-of-envelope numbers here https://www.wsj.com/articles/companies-ai-bets-are-reaching-astronomical-heights-why-the-c-suite-likes-its-odds-anyway-2bb73585?
*”Sequoia Partner David Cahn . . . arrived at his original $200 billion figure by taking investor estimates for Nvidia’s data-center revenue in the final quarter of 2023 and multiplying it by four to arrive at a run rate of $50 billion. He calculated that Nvidia customers would spend an additional $50 billion on energy, buildings, backup generators and the like. Finally, he assumed that Nvidia users would seek a 50% gross margin on these investments, which would require the infrastructure to drive $200 billion in revenue.
But some investors are now forecasting that Nvidia’s run rate for data-center revenue will reach $150 billion by the end of its fourth quarter, according to Cahn.
That would imply the infrastructure will need to generate $600 billion in lifetime AI-related revenue.”
“Construction of new data centers costs time as well as money, so the expected return on these investments won’t appear until late 2025 or early 2026.”
“CEOs believe they can make AI pay off despite the staggering costs.”*
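The arithmetic behind those figures is easy to check; here’s a quick sketch using only the numbers from the excerpt (the 1:1 ancillary-spend ratio and the 50% gross margin are Cahn’s stated assumptions, not mine):

```python
def required_ai_revenue(nvidia_run_rate, ancillary_ratio=1.0, gross_margin=0.5):
    """Revenue (in $B) needed to earn `gross_margin` on the total infrastructure spend."""
    total_cost = nvidia_run_rate * (1 + ancillary_ratio)  # chips plus energy, buildings, backup power
    return total_cost / (1 - gross_margin)                 # margin = 1 - cost / revenue

print(required_ai_revenue(50))    # original estimate: $50B run rate -> 200.0 ($200B)
print(required_ai_revenue(150))   # updated estimate: $150B run rate -> 600.0 ($600B)
```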
Notice the CEOs quoted are not the ones spending tens of $BNs. When others are spending the money, there’s no downside to egging them on; their overspending now just means lower prices for you later.
For snarky amusement, I enjoy asking AI/Nvidia bulls “what is AI?” and “how does it work?” Most love the concept but few can give a coherent answer to either question. (It’s like the comedian who asked people on the street in LA who claim to follow a gluten-free diet just what gluten is.)
Isn’t it simply programs looking for causality among data points? Nothing new to anyone who was a grad student in economics or business in 1981 who taught himself how to program in Fortran so he could work in the university data center running simple and somewhat more complex regression analyses on their data libraries? *
What has changed is the processing power and the access to way larger sets of data, which are suddenly more economical to include in preliminary passes. And the speed at which results are returned. (Overnight batch runs are just a dim memory.)
The ability to churn through vast datasets doesn’t mean it’s always advisable or necessary for most potential PAYING users. A call center trying to parse likely inquiries and the best responses does not need to analyze the sutras in three different Hindi dialects as part of that search for significant causalities. I dunno, but it seems like many ardent NVDA/AMD bulls assume that every end user will want to add to the expense of running vast LLMs when more focused analysis of a much smaller set of data will do as well or better.
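To make that concrete, here’s a toy sketch of the kind of narrow, small-data model a call center could use instead of a giant LLM. The utterances and intent labels are invented and scikit-learn is assumed to be available; it’s an illustration of the approach, not a production system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical customer utterances and the intents they map to.
utterances = [
    "my bill is higher than last month",
    "I was charged twice for the same order",
    "how do I reset my password",
    "I can't log in to my account",
    "when will my package arrive",
    "my delivery is late",
]
intents = ["billing", "billing", "account", "account", "shipping", "shipping"]

# A small bag-of-words classifier: trains in milliseconds on a laptop, no GPU required.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["why was I charged extra this month"]))  # most likely ['billing']
```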
Meanwhile, for those pinning their optimism on the impact of AI on programming costs, check out the percentage of total US employment in ALL tech industries. It’s surprisingly low. Maybe 4%? And within the programming subsector of that number, so much has already been outsourced to Bangalore and China. No doubt they’ll be implementing similar technologies to stay cheaper.
I assume that AI will prove useful in drug discovery, weather forecasting, as well as weapons and nuclear power system designs. For sure. But beyond those, where will super-dooper searches for causal relationships allow companies to shed personnel and increase corporate profits?
In the end, it’s like a factory floor robot. AI has to pay its own way for companies to pay money to use it.
I would like to hear the opinions from the optimists here.
*That was me. I hoped to find a model that would help predict cotton futures prices. My quest for wealth revealed that the highest r-squared for futures prices in the following week came from the sum of the previous week’s Latin American soccer scores.
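That soccer-score result is the classic trap of screening many candidate predictors: test enough unrelated series against a target and one of them will fit well in-sample purely by chance. A quick illustration, with synthetic noise standing in for both the futures prices and the candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a year of weekly futures returns: pure noise, nothing to predict.
target = rng.normal(size=52)

# Screen 1,000 equally meaningless candidate predictors and keep the best fit.
best_r2 = 0.0
for _ in range(1000):
    candidate = rng.normal(size=52)
    r = np.corrcoef(candidate, target)[0, 1]
    best_r2 = max(best_r2, r ** 2)

print(f"Best in-sample R^2 among 1,000 junk predictors: {best_r2:.2f}")
# Screen enough junk and something always "works" in-sample; it says nothing
# about next week's prices, soccer scores included.
```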
I’ve been a software engineer for 25 years, working and consulting mostly for software companies. What I see is companies at the beginning of a massive scramble to restructure their entire business models to leverage both generative AI and machine learning (separate applications of AI, and both of which are integral to Nvidia’s business model).
A CTO stated to our organization just 6 months ago that “I need all of you to become AI experts”, and he wasn’t just referencing the engineers. Everything from the way we work (e.g. the “copilot” concept) to learning how to build machine learning and generative AI models. In our case it’s building these in the AWS cloud, but all cloud infrastructures are having to ramp up at warp speed to prepare for the onslaught.
And these changes are not going to be a slow roll. Executives understand that their survival depends on adapting to these new models, not over the long haul, but YESTERDAY.
That said, my personal belief is that, if anything, most people don’t realize how quickly the AI “revolution” will accelerate. Nvidia will be one of many companies that reap huge benefits while the technology world is massively “disrupted” by the unstoppable wave that is about to hit.
Scott, Thank you for the insight.
At best technology represents 10% of GDP and less than 5% of the workforce. The big test will be to see companies outside of tech adopting AI. Not just adopting it but benefiting from it financially = reducing staff.
Money makes the world go round.
“The big test will be to see companies outside of tech adopting AI. Not just adopting it but benefiting from it financially = reducing staff”
Companies outside of tech have no choice but to adopt AI, and they’ve already begun that process. The “copilot” concept will be ubiquitous and universal. There are few jobs that will not be impacted by it. It is an indisputably powerful productivity booster and companies are eager to roll it out broadly.
Generative AI seems to be somewhat more of an unknown, but it is rapidly becoming integral for companies that produce “content” – e.g. streaming (shows, movies, ads, music, etc.) or text (publications, advertisements, etc.).
Elon Musk recently announced upcoming rollouts of fully autonomous “robotaxis” in China – another example of applied machine-learning (predictive) AI.
The list is getting longer and IMO we’ve only seen the tip of the iceberg.
How much are end users actually paying for Copilot right now?
Today Apple left pricing on their AI-enabled iPhone 16 unchanged, disappointing analysts who had hoped they would raise prices thanks to rampant consumer demand for Apple AI.
But what do they know?
I’m with ya, Scott – I don’t think we’ve scratched the surface of how impactful genAI/ML/LLMs will be. Right now, I’d say we are somewhere in between the peak of inflated expectations headed down toward the trough of disillusionment, but Apple, Microsoft, Amazon, Facebook, and Google are in an arms race right now and risk obsolescence if they don’t keep up. That applies down market as well.
Nvidia CEO reminds me of Tyco’s CEO in ~2007
Timing is everything; see Biotech “revolution” circa 1999
Nice focus on return vs risk H. “Revolutions” tend to obliterate the idea of “risk” and redefine “speculation.”
AVGO report and guide underwhelming, aftermarket move reflects it. Another -6% or so and stock will be on its 200D.