2026 Kicks Off With AI News Frenzy Out Of Asia

New year, same theme: It’s all about AI.

Asian tech shares were aggressively bid Friday amid a hodgepodge of ostensibly exciting headlines, including news that Baidu will spin off its AI chip subsidiary, Kunlunxin, and offer shares in the unit through a Hong Kong IPO.

“The proposed spinoff could better reflect the value of the Kunlunxin Group on its own merits and increase its operational and financial transparency,” the filing reads, adding that the unit’s business “would be appealing to an investor base that specialized in general-purpose AI computing chips and related software-hardware systems, which is different from [Baidu’s] relatively more diverse business model.”

To which investors responded, “Just take our money!” The stock rose 10% or so in Hong Kong, its second outsized gain in three sessions.

Friday’s advance added to a mammoth 60% gain in 2025 (all of which came during the year’s final four months) and helped propel the Hang Seng Tech Index to its best levels since mid-November, when the gauge hit a four-year high.

Speaking of Chinese chip IPOs, Shanghai Biren Technology Co. — which designs GPUs — debuted on Friday. It went well, to put it mildly: The stock rose 75% in Hong Kong. That, Bloomberg noted, was “the best first-day performance since early 2021 among Hong Kong listings that raised at least $700 million.”

Early 2021, you’re reminded, was “peak Hong Kong tech.” The above-mentioned Hang Seng Tech Index hit a high above 11,000 in February of that year before Xi Jinping’s anti-monopoly crackdown ushered in a truly vicious bear market. By October of 2022, the gauge was 75% lower. It remains down more than 40% from the highs.

The figure below shows an upsurge in Hong Kong new listings. December was the busiest month in nearly six years.

Friday’s strong debut for Shanghai Biren bodes well for whatever’s in the pipeline for January.

As a quick aside, Shanghai Biren was “only” up 115% even at its intraday highs, nowhere near the absurd gains logged on the mainland by MetaX and Moore Threads, which, together with Shanghai Biren and Shanghai Enflame Technology, comprise a quartet of would-be Nvidia rivals dubbed “China’s Four Little Dragons.”

Speaking of Chinese AI disruptors, DeepSeek was in the news again after the startup’s founder Liang Wenfeng published a paper alluding to yet another mini-revolution in the methodology for training AI models.

I won’t pretend to know anything about “Manifold-Constrained Hyper-Connections” — which sounds like something that might short circuit when you try to initiate warp speed in your ramshackle getaway ship after escaping alien captivity — but it’s apparently a big deal. (“Chewie, check the Manifold-Constrained Hyper-Connections!” “RAWRGWAWGGR.”)

“Empirical results confirm that mHC effectively [promotes] stable large-scale training with superior scalability compared with conventional hyper-connections,” the company said, suggesting DeepSeek may be on the brink of another efficiency breakthrough nearly a year on from a coup which called into question the relative wisdom of US firms throwing hundreds of billions at the AI arms race.
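For readers wanting a rough mental model: hyper-connections, the lineage mHC builds on, generalize the transformer’s fixed residual shortcut (`x + layer(x)`) by widening the hidden state into several parallel streams mixed through learnable weights. The toy NumPy sketch below is purely illustrative — the function names, the expansion rate, and the mixing scheme are my own simplifications, not DeepSeek’s actual architecture or the paper’s manifold constraint.

```python
import numpy as np

def layer(x):
    # Stand-in for a transformer block: any function of the hidden state.
    return np.tanh(x)

def residual_step(x):
    # Standard residual connection: one stream, fixed identity shortcut.
    return x + layer(x)

def hyper_connection_step(streams, alpha, beta):
    # Toy hyper-connection: the hidden state is widened to n parallel
    # streams; learnable weights mix them before (alpha) and after
    # (beta) the layer, generalizing the fixed "+" of a residual.
    #   streams: (n, d) parallel copies of the hidden state
    #   alpha:   (n,) weights forming the layer input from the streams
    #   beta:    (n,) weights routing the layer output back to each stream
    layer_in = np.tensordot(alpha, streams, axes=1)   # weighted mix -> (d,)
    out = layer(layer_in)
    return streams + beta[:, None] * out              # broadcast back to (n, d)

x = np.ones(4)
streams = np.stack([x, x])          # expansion rate n = 2
alpha = np.array([0.5, 0.5])
beta = np.array([1.0, 0.0])

r = residual_step(x)                          # shape (4,)
h = hyper_connection_step(streams, alpha, beta)  # shape (2, 4)
```

Note that with a single stream and unit weights, the hyper-connection step collapses exactly to the plain residual step — the extra streams and learnable mixing are where the claimed stability and scalability gains would come from.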

At a time when banks and even private credit are balking at — or at least charging more for — speculative data center financing, America’s hyper-scalers could do without fresh evidence that DeepSeek can do more with (a lot) less.


 


4 thoughts on “2026 Kicks Off With AI News Frenzy Out Of Asia”

  1. No doubt Sam, Elon and Jensen will simply advocate for even larger inefficient models. And more datacenters to accommodate them. The industry here has too much (as in hundreds of billion dollars) riding on that approach to pivot to something more efficient.

    For those who are interested in a deep dive into what Manifold-Constrained Hyper-Connections are and how they work, this morning I read an illuminating piece, “DeepSeek Used 1967 Math to Fix AI’s Biggest Crisis. Meet mHC” by Rohit Kumar Thakur, on Medium.

    1. I’m going to disagree with you on this, even conceding that you clearly know way more than I on the technical side.

      I would argue that the industry would move very quickly to adopt more efficient methods so long as it produces equal-to-better quality results. At the same time, more efficient models wouldn’t stop their voracious cap-ex / data center build-outs. It is in the nature of organic systems to expand until they have absorbed all of the resources in their environment. If you make training runs 100x more efficient, then they will just do training runs 100x more frequently.

      While a huge gain in model efficiency might offset some of the urgency around the pace of building new data centers, that was bound to happen regardless just because of the increasing difficulty attendant to constructing each new center. Community push-back, water use, grid constraints, labor availability, hardware availability etc. all stand in the way.

      1. “…even conceding that you clearly know way more than I on the technical side.” That’s doubtful!

        Just extrapolating from the US industry response to the first DeepSeek breakthroughs. Despite Altman’s Code Red message to his team, not a whole lot has happened. The hope is that some domestic outsiders start to roll out more efficient models … and then get bought out by the big boys. Perhaps Elon’s sale of Grok was a harbinger?

  2. And somebody, somewhere is working on AI-on-a-PC, good enough for 98% of us. The builder of that chip will do well. The rest of the AI hype will burst as the loudest bubble implosion ever.

    A similar thing happened a few decades ago. The hype wasn’t nearly as loud, but Intel did OK.
