What If All The AI Spending Never Pays Off?

What if all the AI spending doesn't pay off? That's the question no one wants to ponder currently, caught up as we all are in a hype cycle for the ages. A few brave souls are willing to "go there" (so to speak). Just a few days ago, for example, SocGen's Albert Edwards asked if AI spend might be seen, in hindsight, as analogous to "over-investment in cabling by the Telecoms in the late 1990s." I think the answer's "no," but the figure below's worth a highlight all the same. Down there at t

5 thoughts on “What If All The AI Spending Never Pays Off?”

  1. Kinda like a nuclear arms race in which neither superpower can afford to fall behind, none of the hyperscalers can afford to be left behind in case there is a breakthrough application for accelerated computing. Multimodal inferencing from unstructured data (sound, video, text, etc.) will require the highest level of processing power, dispelling the popular fantasy that inferencing can be done with cheaper chips/infrastructure. Therefore, the demand for NVIDIA’s Blackwell chips appears insatiable through 2025.

  2. As a guy who’s put in a lot of time trying to make actual practical use of AIs, the fact that they can’t even be trusted to make accurate statements of fact is beyond concerning. I feel like people are looking at this and saying, “Look, it’s a machine that can understand me and talk back! It must be smarter than me!” When the truth is, it’s a machine that has been taught to string words together in syntactically likely combinations, that’s all. I literally saw a post today on LinkedIn of someone saying, and a bunch of people enthusiastically agreeing, that LLMs give better results if you are polite to them and regularly say “thank you.” One person says he welcomes it by telling it he has made a virtual cup of coffee for it, and insists this improves his results. Another insisted that by telling it to play the role of an interviewer, it can improve your interview prep… because, obviously, any process that can string random words together into a sensible sentence must logically have a firm understanding of what interviewers are looking for. Then there was the job counselor who told me I should invest $40 in an AI-generated professional headshot for LinkedIn. Instead, I invested zero dollars in photoshopping an existing photo of myself, and when he saw it, without realizing I hadn’t sprung for the AI service, he loved it. This is the uncritical attitude with which these things are being approached. I wonder how long this can go on before somebody notices the emperor isn’t wearing any clothes.

    1. Chatbots are not the only application of accelerated computing. That is a popular misunderstanding of this technology. Oracle, Amazon, Meta, MSFT, and Google, which have been wildly successful over the last several decades and know more about this technology than you or I ever will, aren’t just a bunch of idiots throwing billions of dollars at total nonsense.

  3. The thing about AI is it’s a bit like smartphones. One day they’re everywhere. AI will integrate into most areas of life and commerce. The pace of advancement is incredible, and it’s still very early in the game. LLMs are only one part of it.

    A more pressing issue that keeps me up at night is whether a large wave of disemployment will occur with the promised productivity gains.

    For me, the funny thing on LinkedIn is how many people are desperately trying to “look forward to the AI workplace and all the potential” while maintaining a saccharine optimism.

    One day AI will just be everywhere.

  4. rem: 100% agree with your nuclear arms race analogy. It’s career management 101: it’s OK if we lose money on this as long as all of our competitors are losing as well. But heaven help the moron who held back investment if it actually becomes a profitable product.

    But how many paying end users are there who really need multimodal inferencing for their everyday business processes? Chewbacco’s post is a good reminder that even the simpler forms of AI are not seeing the levels of uptake by real end users that we were promised. Pyrognosis sums it up nicely.

    I am starting to see more and more comments along the lines of “well, many more wonderful uses are bound to appear down the road.” That reminds me of what was said about the mRNA/genomics space: investors bought in at first, then walked away when it became clear that profits would take longer to materialize than today’s impatient investors would like. Where’s the beef?
