I used to wonder if the genius, multilingual Bulgarian provocateur who taught me the tricks of this particular trade harbored a paranoid inferiority complex.
But I was an incorrigible (albeit high-functioning) alcoholic at the time, an affliction which distorted my judgment and anyway meant that diagnosing someone else’s character flaws was an exercise in inebriated hypocrisy.
My suspicion stemmed from his habit of insisting that everything anyone else wrote about markets and macroeconomics was ripped off from something he wrote, even in cases where the accused plainly didn’t know he existed.
That seemed absurd to me, and also ironic: His (or, I suppose given my complicity, “our”) business model was built in part on legal plagiarism. He was AI before AI. People in awe of his productive capacity used to describe him as “a machine,” and his method for producing market color very much resembled the way Gemini now conjures Google’s “AI Overview” section.
After a decade of running what two of the sell-side’s biggest names have privately described to me as “an honest version” of my estranged mentor’s platform, I gotta say: I now understand and sympathize with his misplaced paranoia. (You were right, you crazy idiot. About a lot, actually.)
It’s not that everyone’s ripping him (or me) off. In my case, almost no one’s doing that because my readership is a tiny fraction of his, and although I dare say a fair number of big-name investors have run across these pages at one time or another, most of them don’t remember having visited, let alone count themselves regular readers.
Rather, when you write intelligently, exhaustively and obsessively about what is, depending on the width of your aperture, a relatively narrow field of interest, it will very often feel like other people are parroting your narratives — because everybody’s covering the same thing.
The situation’s complicated immeasurably by the fact that all good writers are also voracious readers, which means your own writing will invariably (if inadvertently) parrot what someone else wrote, creating an echo chamber.
This doesn’t bother me as much as it did (and probably still does) my erstwhile agitprop collaborator. But if I had his traffic and reach — he racks up tens if not hundreds of millions of monthly website visits — I’d probably be convinced I was the inspiration for everyone else’s work too.
I say all of that to… well, to entertain you, but also because the spreading of the AI disruption wildfire to more names early this week, including and especially IBM, was attributed in part to a fictionalized account of the US economy in 2028 which in places reads like, but most assuredly isn’t, a compilation of articles published here.
I’m not ashamed to say I’ve never heard of the piece’s author: Citrini Research. Time was, I’d pretend to be familiar with anyone and everyone whose musings moved markets, because I felt that conceding ignorance would somehow undermine my own pretensions to omniscience. These days, I couldn’t give a sh-t. If I’ve never heard of you, I’ve never heard of you. Shame on me, I guess.
Citrini’s account of the AI white collar disruption domino effect, and how it leads to a dystopian future for the US economy within half a decade, is fantastic. I wish I’d written it. It’s also fantastically long, which means even if I had (written it, and in some sense I did), no one save my most dedicated fans would’ve read it.
But people did read Citrini’s report (or “note” or “novella” or whatever it is), and some of those people sold stocks based on it. As one Bloomberg headline declared, “Citrini Fuels AI Scare Trade as IBM Drops Most in 25 Years.”
There’s the IBM chart. It’s a lot of fun. Unless you own it outside of an index fund, in which case have a drink and relax. It’ll be ok. It’s just money, and as Alap Shah, who co-wrote the Citrini piece, might attest, none of us will have any (money) five years from now anyway.
Although the actual trigger for the IBM selloff was another update from Anthropic — which said Claude can kickstart a left-for-dead legacy code modernization project for an IBM-associated programming language — the company’s blog post could’ve walked right out of the Citrini piece.
It’s as if we’re creating the dystopia in real time. Purposefully accelerating the timeline on our own obsolescence. Drawing our own demise out of the page and into reality like M.C. Escher’s “Drawing Hands.”
Indeed, the above-cited Bloomberg headline, despite being about the Citrini piece, could’ve easily come from it, which speaks to the (not immaterial) risk that human hyperventilation could become self-fulfilling vis-à-vis the source of our collective existential angst.
Apropos, the most compelling aspect of Citrini’s piece is the extent to which the hypothetical data prints, wire headlines and ratings actions it imagines are rendered in a kind of literary hyper-realism. Consider the following examples:
- U.S. INITIAL JOBLESS CLAIMS SURGE TO 487,000, HIGHEST SINCE APRIL 2020; Department of Labor, Q3 2027
- MOODY’S DOWNGRADES $18B OF PE-BACKED SOFTWARE DEBT ACROSS 14 ISSUERS, CITING ‘SECULAR REVENUE HEADWINDS FROM AI-DRIVEN COMPETITIVE DISRUPTION’; LARGEST SINGLE-SECTOR ACTION SINCE ENERGY IN 2015 | Moody’s Investors Service, April 2027
- ZENDESK MISSES DEBT COVENANTS AS AI-DRIVEN CUSTOMER SERVICE AUTOMATION ERODES ARR; $5B DIRECT LENDING FACILITY MARKED TO 58 CENTS; LARGEST PRIVATE CREDIT SOFTWARE DEFAULT ON RECORD | Financial Times, September 2027
- ZILLOW HOME VALUE INDEX FALLS 11% YOY IN SAN FRANCISCO, 9% IN SEATTLE, 8% IN AUSTIN; FANNIE MAE FLAGS ‘ELEVATED EARLY-STAGE DELINQUENCIES’ IN ZIP CODES WITH >40% TECH/FINANCE EMPLOYMENT | Zillow / Fannie Mae, June 2028
Again, those are all fiction (note the timestamps), but they feel very, very real.
Whether anyone should be selling stocks based on a fictionalized account of the next three years is debatable, to put it politely.
“Considering we are among the most ardent believers that we have been in an equity market bubble since the summer, one would think we would embrace the Citrini report,” JonesTrading’s Mike O’Rourke, in whose dailies politeness and abrasiveness somehow peaceably coexist, wrote. “[W]e believe it is thoughtful and provocative [but] don’t believe it is actionable.”
I agree. It’s just a story. And as O’Rourke went on to point out, it’s the same story that AI proponents have been telling for years, only without redactions for all the possible adverse side effects of widespread AI adoption.
“AI evangelists were so good at selling their narrative that they now face backlash,” O’Rourke went on, calling investor consternation around the Citrini piece “remarkable in the sense that this market has repeatedly rallied in the face of legitimate negative news, only to sell off in reaction to a literal work of fiction.” (Somebody dap Mike up for me, because that’s funny as hell.)
The Citrini piece is an original work of fiction, but if you spend your days hanging around these pages, you’ve read parts of it before, as recently as last Thursday when, in “America’s ‘Jobless Boom’ Risks Becoming Techno Potemkin Village,” I wrote,
We’re at risk of overlooking the circular nature of the disinflation argument as it relates to a prospective AI-enabled productivity boom. Sure, it’d be nice if services prices stopped rising (or even started to fall) because many tasks are automated by technology which doesn’t ask for a paycheck, let alone a raise. But if people aren’t gainfully employed, they can’t spend into the economy even if goods and services are eminently affordable.
The Citrini piece, published three days later, contains a number of very similar passages.
“The headline numbers were still great [and] productivity was booming,” it reads, recounting 2026 and 2027 from the future. “Real output per hour rose at rates not seen since the 1950s, driven by AI agents that don’t sleep, take sick days or require health insurance.”
Then, the bad news starts. To wit:
When cracks began appearing in the consumer economy, economic pundits popularized the phrase “Ghost GDP”: Output that shows up in the national accounts but never circulates through the real economy. AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved… It was a negative feedback loop with no natural brake.
Right. To re-quote myself for effect: We’re at risk of overlooking the circular nature of the disinflation argument as it relates to a prospective AI-enabled productivity boom.
As The New Yorker pointed out this month in a characteristically brilliant piece on Claude, the AI revolution’s unique as technological epochs go in the sense that “we are doing this because we can.” Note the emphasis.
The linked article quotes Brown computer scientist Ellie Pavlick. “What has long made the AI project so special is that it is born out of curiosity and fascination, not technological necessity or practicality,” she said. “It is, in that way, as much an artistic pursuit as it is a scientific one.”
Art imitates life, as they say. In this case, art may actually be life, or be indistinguishable enough from life that it demands we rapidly restructure society — or risk realizing too late that in creating new life, we committed suicide.
Coming full circle, I’ve warned on all of this at one time or another, in one article or another, for two, going on three, years. If you’re a regular reader, you won’t find much that’s “new,” per se, in Citrini’s fictionalized account of 2028. But I’d go so far as to call it required reading all the same.
After all, financial journalism and macroeconomic commentary, no matter how imaginative, is unavoidably repetitive. It’s very often difficult to discern what’s original and what isn’t. Who’s copying who. And, as some of you have pointed out, that makes the field vulnerable to AI disruption.
Indeed, when you read the Citrini piece, ask yourself: Could ChatGPT have written it? That’s not an accusation. But it is a rhetorical question.



Fascinating read on many levels. I thoroughly enjoy this type of speculation.
“Human hyperventilation could become self-fulfilling.” In some ways that’s an accurate description of human history; just change “could” to “is.” For some reason, stopping and smelling the roses is never enough for humans. Almost immediately the smelling morphs into growing more roses, growing them everywhere, stealing roses, selling roses, going to war to get more roses and creating a market for roses.
It’s ironic, but I occasionally ask GPT if something is AI or human generated. AI telling me something is AI. But maybe it’s hallucinating and is wrong? Or maybe AI won’t be able to tell me about other AI as it gets better?
:: cue Spider-Man meme of everyone pointing at each other ::
Self-fulfilling prophecies are always and everywhere. Humans are such silly animals.
I have IBM. I bought it because they were in the lead in the quantum computer race a few years ago. I was up over 100%; now I’m not, and by next week I may be again if quantum starts getting some headlines.
“It was a negative feedback loop with no natural brake.”
Shouldn’t that be “It was a positive feedback loop…”?
Negative feedback loops don’t need a brake; they are self-regulating. Things get out of control when there’s a positive feedback loop. – right?
Yes, absolutely correct, good catch. A positive feedback is where an initial change to a system pushes the system further in that same direction, which is generally a “negative” thing: pushing away from whatever equilibrium may have been present. A negative feedback is needed to keep a system tending toward equilibrium (i.e., generally a good/positive thing).
Like atomic bombs and asset-backed securitization, artificial intelligence is one of those things that shouldn’t have been, but was inevitably bound to be, invented.
The Citrini piece was perfectly dropped when investors’ and the general public’s mood on AI was already accelerating to the downside. Boss bros crowing about surplusing human workers, datacenter bros flexing about all our electricity they will consume, social media bros gloating about our future diet of 100% artificial content, tech bros self-congratulating about their Digital God, and for the average person all this accomplishes nothing useful, while investors watch both their AI and non-AI names breaking down. As we watched the Olympics, cheering and crying at human glory and agony, who was looking forward to welcoming the Robot Olympics? But it too will be invented.
“China has kicked off the first-ever Robot Olympics in Beijing, hosting the World Humanoid Robot Games. Competitors from around the globe are squaring off in events from martial arts to a 400-meter race, highlighting how robots could be used in everyday life…” Kind of like watching a Tee Ball game.
The very first Robot Olympics event was actually gymnastics featured in Los Angeles in 1982:
We skipped Robot Olympics and went straight to Robot Wars.
This is the scariest part from the Citrini hit piece:
“Thanks to Sam Koppelman of Hunterbrook for his help with proofreading.”
Hunterbrook is a known short fund that teams up with “research” and litigation partners to pursue their positions. I own a stock that recently was the subject of one of Hunterbrook’s “news” reports – intended to strike fear into the hearts of longs. I took the 30% plus sell off as an opportunity to double my position. Stock price has almost completely recovered.
I wouldn’t be surprised if Citrini Management and Hunterbrook were short these AI names prior to this publication.
In the hands of a good writer, can a “known unknown” be used to strike fear into the hearts of unsuspecting/unsophisticated investors; and actually get them to sell to you at a lower price? Did those same writers adequately disclose their relationship with equity funds? IDK.
The idea of evolutionary feedback loops is salient. For instance, how a peacock’s tail gets so long because peahens like them. Never mind that those tails make peacocks easier meals for Fox and Owl. Then there are non-evolutionary feedback loops. Like yeast in a beer barrel: the yeast cannot stop fermenting and inevitably kills itself in its own piss and alcohol. Humanity is hell-bent on destroying itself, via climate denialism and such, and now perhaps by AI Maximalism. For another instance, suppose a large corporation makes an amazingly profitable chemical that also kills oxygen-creating cyanobacteria, and continued use of that chemical would doom all life on Earth; a K Street lobby would scream “hoax!” and their 1% Congress people could enjoy an oxygen-rich concrete bunker for a year or two. Humanity knows it’s killing itself and can’t stop. Always one more model to SA. One more golf buddy to embarrass on the links. Nothing will stop it. Unless, perhaps, AI can modify our genetics for its own purposes and ‘save’ humanity by an alternative utility.
One more comment on feedback loops – for babeinwoods & pyrognosis – about positive vs. negative. Yes, negative feedback loops are self-correcting (like: high prices are a cure for high prices), whereas positive ones are self-reinforcing.
Positive example: less snow and melting icecaps expose more bare ground, lowering planetary albedo (% of sunlight reflected vs. absorbed), which causes more warming and thus more icecap melting, etc. At least until, perhaps, some negative feedback loops emerge… like a combination of short-circuiting the Gulf Stream, causing northern hemisphere cooling and more snow… or too much smoke in the atmosphere from massive wildfires (or nuclear war over climate change) that increases particulates in the air, which reflect more heat back into space and thus cause global cooling (like The Year Without A Summer that followed the 1815 Tambora volcano explosion)…
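The distinction can be sketched in a few lines of toy code (my illustration, not from the article or its sources; the function names and gain values are arbitrary assumptions): a positive loop compounds a deviation each period, while a negative loop shrinks it back toward equilibrium.

```python
# Toy illustration of positive vs. negative feedback (hypothetical sketch).

def step_positive(x, gain=0.5):
    # Positive feedback: the deviation feeds on itself and grows.
    return x + gain * x

def step_negative(x, equilibrium=0.0, gain=0.5):
    # Negative feedback: part of the deviation is corrected each period.
    return x - gain * (x - equilibrium)

def simulate(step, x0=1.0, periods=10):
    # Apply one feedback rule repeatedly, starting from the same shock.
    xs = [x0]
    for _ in range(periods):
        xs.append(step(xs[-1]))
    return xs

# The positive loop explodes (1.0 -> 1.5 -> 2.25 -> ...), while the
# negative loop decays back toward equilibrium (1.0 -> 0.5 -> 0.25 -> ...).
runaway = simulate(step_positive)
damped = simulate(step_negative)
```

The loop described in the Citrini excerpt (layoffs reduce spending, which pressures margins, which drives more AI investment, which drives more layoffs) behaves like `step_positive`, which is why, in the standard control-theory vocabulary, it is a positive feedback loop.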
Surprised to learn that the eastern European provocateur from Dances With Wolves was one of our own… Bulgaria being a very small country and all. Don’t know his story, but possibly a student of the Russian propaganda school that was at large in the entire eastern bloc.
I just finished rereading Fritz Leiber’s “The Sinful Ones,” the version after he bought back the rights to the book (the 1986 printing). Compared to Leiber’s book, the Citrini article is a walk in the park. The comments to the article are interesting. A relative of mine was laid off from a Mag 7 in Silicon Valley, but lots and lots (75,000?) of others are in the same situation; AI might be the reason? But AI is overhyped; it will do some things well and other things poorly, and society will survive as it usually does.
To comment on Claude’s COBOL promises: There is an inherent contradiction in the presentation.
Following the link you helpfully included, the authors noted the scarcity of COBOL programmers. I first remember mention of this back in 1999, during the Y2K scare, when programs running on COBOL were said to be the systems most at risk when the new century was welcomed in. A false alarm, but even then COBOL programmers were hard to find. The authors say it has now become much worse.
Yet when you scroll down and read the implementation section, they keep referring to how your onsite staff of COBOL experts will tweak and customize the AI output. Sure. But most firms may not have close to enough COBOL coders. That’s because of the dirty little secret of AI: in many cases, AI-generated code, even in common and more recent languages, requires detailed and diligent examination. That’s not gonna be done by a couple of entry-level coders in Bangalore.
The authors did give a nod to the factors which draw users to mainframes, including high security and 99.9999% reliability. Why would any mainframe user risk replacing functioning, stable legacy software with some bug- and hallucination-ridden AI output? Especially users in a regulated sector? It makes little commercial sense.
It makes even less sense when you learn that IBM has already been offering their own AI-enhanced tools to assist with updates. Specialized tools which do not waste resources lugging around an encyclopedic LLM database.
It looks like an obvious choice, especially when you add in the CYA factor.
But then, there still is some magic left when the term “AI” is slapped on something.
I think part of the problem is that the engineers and coders that are nurturing AI into existence have read all of the futuristic dystopian novels, and seen all of the movies, to the point that life is going to imitate art. They can see all of the things that may go wrong, but appear to have no other template.
Krusty: “If this is anyone but Steve Allen, you’re stealing my bit!”
Nice
“We’re at risk of overlooking the circular nature of the disinflation argument as it relates to a prospective AI-enabled productivity boom. Sure, it’d be nice if services prices stopped rising (or even started to fall) because many tasks are automated by technology which doesn’t ask for a paycheck, let alone a raise. But if people aren’t gainfully employed, they can’t spend into the economy even if goods and services are eminently affordable.”
This has been my argument for about 6 months now. These people haven’t really considered the broader impacts of what they’re building, and how impossible it will be to monetize it once they reach their overarching objective. Again, in a consumer-driven economy, what replaces the consumer when no one who is currently consuming has the income to keep doing so?
All of this makes me think of the Expanse novels (some amazing reads if you haven’t looked at them), in whose very realistic future jobs are scarce and most people rely on UBI to barely get by.
I used to encounter a similar argument when litigating unfair trade cases. Why should we care if another country wants to dump or subsidize its exports to the US? American consumers will reap the benefits of lower prices via economies of scale, lower margins and state transfers.
But as you say, it doesn’t matter how low the prices are if you don’t have spending power.