Cheating Gravity With The ‘Magnificent 7’
Just how dominant is big US tech in 2023?
Or, put differently, how magnificent are the so-called "Magnificent 7"?
Very dominant and very magnificent are the answers.
You surely didn't need me to tell you that, but these are the kinds of bite-sized "articles" which play well during holiday-shortened, pre-summer weeks, when attention spans are even shorter than usual (and that's saying a lot in a world where history is written in tweets and documented in "TikToks," which I recently discovered are a thing).
Interesting market action today. It looks like one giant reversal trade after the debt ceiling deal, where long VIX, AI, and tech, short small caps (especially retail) has swung 180 degrees, at least for the day. Monday could be back to “the future” for all we know.
The hunt has broadened to the next layer of companies that have good business models, resilient fundamentals, and a credible story of how AI will drive their demand, margins, or market share. Semis, semicap, and some software names are among the obvious candidates, with selectivity increasing the further you get from the Seven Samurai.
Companies that will be AI power users, the ones that can significantly lower their cost structure by replacing people with AI agents, will need to prove it (ideally with fat RIFs) to join the party, and that feels some time away.
Which leaves a bunch of sectors out in the cold. Investors are not going to believe AI transformation stories from a retailer, metal-basher, distributor, etc.
I don’t know enough to go too far with this idea. What I know about “big tech” is that not all players are created equal or perform equally well. I notice a couple of things. Not all creators of AI are users who may profit from its ongoing use. I do know about real estate development. Developers and contractors profit only in the creation of a project. Contractors are done when they hand over the keys. Developers who stay on as owners still usually bail in six or seven years because cash flow starts declining in three years or so (it’s a tax thing). AI developers may well see similar cycles, even if they choose to smoke their own product.
My first publication, in 1975, involved an attempt to create a model of how to value information-related products. What does someone who spends a bunch of money on AI get as a return? So far Meta hasn’t shown anything but losses, in the billions. Where’s the beef? Firing a bunch of workers? You can only do that once. Then there are no more savings from those folks, and you will have to hire someone to run the system. At the behest of the State Board of Regents, my school was pushed into hiring Oracle to streamline our enterprise systems. They wanted the job because they hadn’t ever “done” a university. I cannot make a hearty recommendation for that move. They made tens of millions installing as much of the system as they could figure out, set up a nice fat fee for ongoing maintenance, and walked away. We had to hire 50 people to run the system, at a cost of many times the savings we were supposed to get. No one knows how this will turn out with AI … no one!
When those among you who know more than I do begin your reply, stop. Think it over again. Write another sentence or two and stop. I spent six months thinking and writing about this before it was accepted by the journal, and my paper was about buying physical systems. AI is far more of a puzzler than just buying a computer, a problem I had already tackled for a satisfied paying customer, NCR. Most people who try to talk about this will just speak in platitudes about cutting staff, blah, blah. Real value involves hard numbers, time frames, secondary and tertiary effects, pluses and minuses. I haven’t seen a single plausible number related to AI yet. This is going to be much harder to evaluate than people think. Tens of billions will be lost, probably more than will actually be made. All those people who got fired: what did they know that they have taken somewhere else? There’s one secondary effect. A sketch of the arithmetic follows below.
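To make “hard numbers” concrete, here is the bare-bones arithmetic any plausible AI business case would have to survive. This is a minimal sketch using the standard net-present-value framing; the symbols are illustrative placeholders, not anyone’s actual figures:

$$\mathrm{NPV} = -C_0 + \sum_{t=1}^{T} \frac{S_t - M_t - H_t}{(1+r)^t}$$

where $C_0$ is the upfront implementation cost, $S_t$ the claimed savings in year $t$, $M_t$ the ongoing maintenance and license fees, $H_t$ the cost of the people you hire to run the system, $r$ the discount rate, and $T$ the evaluation horizon. In my Oracle story above, $M_t + H_t$ swamped $S_t$ in every year anyone bothered to count.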
Thoughtful commentary Mr. Lucky. I think you make some very good points.
What are the odds that human emotion is inflating the value of AI out of fevered imagination, versus a prescient, rational analysis of the discounted cash flows that AI will generate?
Being a psychologist by trade, I find the human tendency toward emotional dysregulation to be a more likely phenomenon than prescient rational analysis…
I know – novel idea.
Lucky One. Great post. I was pondering it when I came across this novel idea via a daily email I get from @medium into my media-side email account:
“Meanwhile, data scientist Brandeis Marshall makes a provocative inquiry: ‘I want to ask the opposite question: what can’t AI do?’
Whew. Hot topic.
She goes on to write this: ‘Instead of focusing on the projected adoption of AI in every facet of our lives, consider homing in on what is unAI-able. UnAI-able are actions, tasks and skills that can’t be digitized or automated. These routines require humans to constantly be in the loop to make key and pivotal decisions.’
Good questions. And now I’m curious: what do you think?”
What was her answer? Because we thought it was “art creation,” and that turned out to be very wrong.
These days they tell you, “Well, you cannot automate a barista or a plumber” … but in the case of the barista, I am far from convinced.