Micron And The Great Expectations Trade

Generally speaking, you should be able to venture a good guess as to whether high-profile companies are likely to rally or sell off post-earnings based on the magnitude of any beats or misses in the context of the current zeitgeist. That latter bit -- about contextualizing the scope of beats and misses by the prevailing market mood -- is key. That's why I italicized it. If you're a macro bellwether and everyone's convinced a recession's coming, an in-line guide might be good for a respectable


19 thoughts on “Micron And The Great Expectations Trade”

  1. Everything has sped up. “Investors” are now demanding instant gratification on theme trades. While many guys hope for that when dating, it’s pretty silly when it comes to investing in long-term ideas, yes-no?

  2. Wish it was up ten, but I can’t complain overall. Sanjay had some good propaganda in there, but I’ve noticed over the years he has a degree of restraint in his stock-pumping tactics. They’ll be OK.

  3. In my mind, chips always become a commodity (remember SanDisk?). Once the data centers are built and equipped with chips, demand will plummet and software will take the baton back. Chip prices will plummet, and the next data centers will only be upgrading much more slowly than this initial compute build. And it’s possible a different chip designer will take meaningful market share from NVDA.

    1. Agree as well. It seems to me that the problem with betting on hardware demand is that it’s possible to saturate the market, and it’s harder for users to pivot when newer technology comes along.

  4. MU is interesting. I bought it at pretty much the bottom, because when growth, earnings, EBIT, gross margin, capex cuts, everything is horrifically negative and getting worse, that’s when you buy memory semis.

    You sell memory semis when everything is terrifically positive and getting better, so I took trims after the quick double and have been holding reduced positions, trying to figure out whether the transcendence of AI will change the deeply cyclical character of memory semis. Thinking I’m getting closer to a conclusion.

    The next question, of course, is whether other semis that are less cyclical but still cyclical have had their characters changed.

    1. Even with a seemingly powerful, logical and long-term tailwind, chips will remain somewhat cyclical. Similar to robotics & automation stocks.

      Chips have the added burden of being incredibly capital-intensive relative to most other products. Expansion often leads to over-capacity in the sector.

      It brings to mind the swings you see in the mining sector, which is really no different from chips. You’ll always need copper, lithium, etc., but demand is NOT infinite, at least in the short run, which is all investors seem to care about these days.

  5. How big will AI be? Most certainly bigger than blockchain (if I recall, early predictions were pretty phenomenal, yet there is really only Bitcoin to hang your hat on). Some form of AI will most certainly be used in every midsize-and-above company. Is it another nice tool like CRM or ERP, or will it be something orders of magnitude larger, giving outsize returns for companies beyond the AI infrastructure providers?

    1. Favorite pet topic of mine. I’m a big believer in the possibilities of blockchain, but only because I actually understand it. The primary use case that’s so exciting to me about blockchain is not cryptocurrency; it’s smart contracts and the possibility of decentralized apps—being able to start the next Uber, eBay, or Amazon without needing to have the resources to pay for a server farm. At the moment, that’s way off in fantasyland, but only because the energy efficiency and computing horsepower aren’t there. Both of those will absolutely arrive someday. We are a very, very long way off from practical applications, but I had a feeling as I got acquainted with smart contracts that I hadn’t had since the first time I saw a web browser in 1994. The possibilities are stunning.

      Meanwhile, scalable blockchain-based decentralized apps are not going to be here for a long time, while AI has already filled the streets of my city with driverless cars. But, in the very long run, for me personally, it’s hard to guess which is going to be a more impactful technology. My gut says AI (actual usable AI, not the myth that’s being hyped, see my other comment on this page) is a major but predictable evolutionary advance, like web browsers and HTML, while blockchain, if they can overcome the practical obstacles, will be a revolutionary paradigm shift more comparable to the development of modern internet infrastructure itself.

  6. Really curious to watch how this AI bubble goes. As with so many advanced technologies, it obviously does do incredible things, while at the same time not really doing even a small fraction of what people are hyping it as doing now. Last night — a representative recent experience — I had ChatGPT 4o fail almost 25 times in a row to correctly help me with a very simple CSS problem (refactoring a fairly simple animation to use compositor-only properties, for you tech geeks out there; a sketch of what I mean follows this comment). It got me 75% or 80% of the way there, then failed abysmally, over and over, and I ended up with a completely meaningless answer that took way longer than it would’ve taken me to simply Google how to do it. I wonder how long it’s gonna take before the public at large realizes that that level of reliability is the norm, and what’s gonna happen to the AI investment thesis when they do.

    Make no mistake, the capacity for it to help in very specialized areas is, I’d go so far as to say, breathtaking. But as far as broad, general practical applications or anything even remotely approaching AGI goes, we are so far from that that it’s laughable. That is nothing like how the current systems work. P.T. Barnum would be hawking AI today.
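
    A minimal sketch of the kind of refactor being described, with a hypothetical element and made-up keyframe values: “compositor-only” animation means touching only transform and opacity, which the browser can animate without re-running layout and paint, instead of properties like left or width.

    ```typescript
    // Hypothetical sketch (element id and values are invented): moving an
    // animation off layout-triggering properties and onto compositor-friendly
    // ones using the Web Animations API.
    const box = document.getElementById("box");

    if (box) {
      // Before (conceptually): animating `left` forces layout + paint every frame.
      // box.animate([{ left: "0px" }, { left: "200px" }], { duration: 500 });

      // After: `transform` and `opacity` can be handled by the compositor alone.
      box.animate(
        [
          { transform: "translateX(0px)", opacity: 1 },
          { transform: "translateX(200px)", opacity: 0.5 },
        ],
        { duration: 500, easing: "ease-out", fill: "forwards" }
      );
    }
    ```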

    1. An amusing “parlor game” is to ask people to explain just what AI actually is. What underpins it? How does it work? You will receive very few informed answers, outside of “I fooled around with it and generated a couple of really cool images!” or “I heard it will replace every software coding job!”

      Now, if the last one is even remotely correct (it’s not), why in heaven’s name are you buying Indian stocks??

      1. I know even skilled developers who insist that it’s useful as a tool for programmers. However, I have yet to see one of them actually produce for me a prompt that gave them such practical results… I always make it a point to ask.

        My favorite explanation is the famous tweet describing it as “spicy autocomplete”. If you’re not familiar, look it up; the guy really put his finger on it. Basically, he described the stages of getting familiar with AI as: 1. Oh my God, this is amazing. 2. Holy cow, this thing is going to put me out of a job. 3. Wait a minute, this thing is wrong a lot. 4. This is nothing but spicy autocomplete.

        1. I guess the link is on my office PC, so I’ll have to produce an error-prone synopsis from memory. I came across an interview with the guy at AWS in charge of marketing AI tools to coders. He boasted that users were, on average, receiving around 15 suggestions per day from the AI tools.

          What caught my eye was when he went on to boast that 36% were adopted. So, he was boasting of a 64% failure/reject rate? That’ll change the world for sure!

          1. I’ll throw this out to add to your worry files or ridicule bucket.

            If LLMs are becoming capable of replacing many coding functions, don’t you suppose that it would be VERY enticing for Chinese, Russian and North Korean hackers to inject clever little back doors into the libraries underpinning the LLM-generated code? That might prove to be way more efficient than most “spear phishing” campaigns.

            A silly worry, I suppose – didn’t Sam Altman assure us that they all so very carefully screen their source databases for such things?

          2. He needs to define “suggestions”. My experience using Copilot in VS Code, for instance, just for autocomplete suggestions of somewhere between two words and, at the absolute most, three or four lines including comment text, is that it’s actually astoundingly good. I’ve seen it do some pretty remarkable things, like telling it “change this conditional to handle the default case, and change all the output to lowercase” and having it handle that perfectly, which is a very cool convenience (a before/after sketch of that kind of edit follows this comment). But asking the VS Code Copilot chatbot to help with anything is uniformly an exercise in futility. There’s also the problem that on the rare occasions it produces something workable, I’ve found you build on it for a few days and only then discover a fundamental flaw that was too subtle to notice at first but winds up being significant enough to break the whole thing.

            You said AWS, not Microsoft; otherwise I’d assume that by “accept” he’s talking about hitting the button to use the suggestion, but I don’t actually know what he means in the context of AWS. I think it’s safe to assume that his answer falls in its expected place in the hierarchy of lies, damned lies, and statistics, and that the actual rate of AI suggestions making it all the way through to production code is likely much lower than even that not-terribly-practical 36%. Here’s an example: I had Copilot give me, consecutively, two mutually exclusive answers about how JavaScript evaluates a certain expression. That means the accuracy I got from it on a simple question of fact, not even a question of it coming up with a working algorithm but a simple reference fact, was equivalent to a coin flip. His quoted 36% is even worse than a coin flip.

            15 suggestions a day? I often get more than that in an hour, as with mounting frustration I try over and over again to get it to just solve the damn problem I asked without hallucinating or omitting basic facts.

            People will realize. It’s going to be very interesting to watch this shake out.
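
            A rough before/after of the kind of autocomplete edit described above (“handle the default case, change the output to lowercase”), with invented function names and strings, just to show the scale of change Copilot tends to get right:

            ```typescript
            // Hypothetical illustration; the functions, cases, and strings are made up.

            // Before: no default branch, mixed-case output.
            function formatStatusBefore(status: string): string {
              switch (status) {
                case "Active":
                  return "Account Active";
                case "Suspended":
                  return "Account Suspended";
              }
              return ""; // unknown statuses silently dropped
            }

            // After: default case handled, all output lowercased.
            function formatStatusAfter(status: string): string {
              switch (status) {
                case "Active":
                  return "account active";
                case "Suspended":
                  return "account suspended";
                default:
                  return `unknown status: ${status.toLowerCase()}`;
              }
            }
            ```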

          3. Hmmmm. I’m not so worried about hackers putting anything in the code; the bigger concern, I think, would be poisoning the models. And a lot of it is open source, so it’ll get audited by geeks the world over.

            The funny thing for me to think about is that the training corpora all come off of the Internet, which means, over time, the training corpora are going to contain much more machine-generated material. Where is that gonna lead? We’re gonna wind up with the semantic equivalent of that banjo-playing kid from “Deliverance”.

          4. My totally uninformed guess is that a first attack vector will be via Python.

            But the real gold mine for pro hackers will be if they can infiltrate COBOL and other legacy languages which still run many things. These are languages with fewer and fewer folks working in them day after day, so it’ll be more and more tempting for less experienced coders to rely on LLM-generated suggestions. Just “click it in there”!
