Could Irrational Exuberance Redux Drive The S&P To 6,250?

Are we witnessing a dot-com-style, tech-driven equity bubble in the US? That's the market's perpetual question du jour. The answer depends on who you ask, what metric you employ and, just as importantly, what story you're trying to tell. Or trying to sell. Bears will have you believe the "Magnificent 7" constitutes an egregious bubble built atop a ridiculous pipe dream that says super-intelligent chatbots are poised to usher in a productivity renaissance that'll boost profits and bolster economic growth…


5 thoughts on “Could Irrational Exuberance Redux Drive The S&P To 6,250?”

  1. The Mag 7 very well could take this cap-weighted index to any height, but the 493 also-rans and the rest of the market won’t be feeling the love as they get sold in favor of the 7. Very frustrating for most of us, who will also get hurt when that bubble finally pops.

  2. Regarding the wonders of AI: it is over-hyped. I have tried using ChatGPT to assist with writing security-related code and it does a very poor job. Then I saw an article on Bruce Schneier’s blog about how developers using special-purpose AI code assistants actually write less secure code while believing their code is more secure (see https://www.schneier.com/blog/archives/2024/01/code-written-with-ai-assistants-is-less-secure.html). Personally, I do not use AI directly; I only use it for suggestions regarding avenues of research that I might not have thought of.

    1. https://www.wsj.com/tech/ai/early-adopters-of-microsofts-ai-bot-wonder-if-its-worth-the-money-2e74e3a2?mod=ai_news_article_pos2

      Confirms something I read in the technosphere from someone at one of the large cloud providers. In that article, he mentions how many companies are trying it out and finding it is not worth the cost. (Alas, I’m out in LA, so I cannot pull up the exact reference, which is back east on another PC.) Reinforcing my belief that it is no different than deciding whether or not to buy and deploy robots in a manufacturing or distribution facility. As our next president noted, everyone and everything must pay its own way.

    2. I’m a software engineer, and I’ve done extensive personal work trying to get AI to be useful with, for instance, development tasks, working out quant finance ideas, and various kinds of fact-oriented research. Mostly I use GPT4, as I find it edges out Claude 2 in terms of being the least useless as a practical tool.

      It’s a chatbot. Anyone who ever talked to “Eliza” in the 80s will be familiar with the ultimate results of working with these things (a toy illustration of that idea appears after this comment). Yes, occasionally, by random chance, it does come out with something that’s pure genius. I’d say the median answer it gives, though, is somewhere between what you might get talking to a reasonably intelligent person who knows how to search Wikipedia, and what you might get talking to a hopelessly stupid person who is both deep in the grips of Dunning-Kruger and congenitally driven to avoid ever acknowledging when they don’t know something.

      I did once get some useful statistical factoids out of GPT4, about the behavior of certain kinds of options spreads, that I didn’t already know. That’s about all I have to show for many, many months of effort to get any kind of value out of these things.

      Much more often, I’ve seen them make very elementary logic mistakes and then, when I point them out, apologize for them, and then make them again. I will give Claude 2 credit for one thing, though: in a moment of exasperation I asked it, “Do you think I asked you this question because I wanted a practical answer, or because I wanted a series of elementary mistakes repeated at me over and over again?” It did actually apologize, as usual, but then it did not try to answer the question again and vaguely acknowledged it had a lot to learn. That was one of the most advanced, quasi-“intelligent” things I’ve seen one of them do.

      More amusingly, I’ve had GPT4 tell me it needed time to think about my issue, and then request we schedule a time when it could get back to me, and walk me through the scheduling process… when it doesn’t have the ability to continue to consider a topic beyond directly responding to the input queries, or to schedule future contacts, or to reach out to a user later in any way. It just had something in the corpus that told it “Humans are likely to answer this by having a dialogue concerning thinking it over and scheduling a future meeting to discuss further” so that’s what it did. That was very telling.

      I’ve certainly never had one generate working code, or even working pseudocode, and that’s after a lot of attempts.

      Fundamentally, these things don’t know what they’re talking about any more than a wooden cuckoo knows what time it is, and they have a great deal more randomness in their responses than the cuckoo does.

      The fact that they sometimes appear to know what they’re talking about just tends to engage people’s confirmation bias. At some point, that’s going to dawn on people at large, and the hype bubble over the metaverse^h^h^h^h er, excuse me, cryptocurrency^h^h^h^h^h oops, sorry, generative AI is going to burst.
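To make the “Eliza” comparison above concrete, here is a minimal, purely illustrative Python sketch of ELIZA-style pattern matching (a caricature of the 1966 program, not anything the commenter actually ran; the rules and names are invented for this example). It reflects surface patterns back at the user with no model of meaning, which is the behavior the comment is describing.

```python
import random
import re

# A toy, ELIZA-style responder: it matches surface patterns in the input
# and reflects fragments back, with no understanding of what is being said.
# (Illustrative only; the real 1966 ELIZA used a richer keyword/rank system.)
RULES = [
    (r"i need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"(.+)\?", ["Why do you ask that?", "What do you think?"]),
]

def respond(text: str) -> str:
    text = text.lower().strip()
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            # Format a canned reply with whatever fragment was captured.
            return random.choice(replies).format(*match.groups())
    return "Tell me more."

if __name__ == "__main__":
    print(respond("I need a working options backtest"))
    # Prints, e.g., "Why do you need a working options backtest?"
```

The output can look responsive, but nothing in the loop knows what an options backtest is, which is the commenter’s point about the wooden cuckoo.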
