
Little Red Box
"An excellent AI advancement and a perfect example of Test Time Scaling."
That’s DeepSeek, according …
I’ve been busy at work and dealing with a sick kid today, so I missed a lot of the great conversation earlier. I’ve read a little bit about these developments, but know less than nothing about what this means for AI, Nvidia, or humanity. Maybe I’m crazy, but the timing of DeepSeek’s rise just as TikTok is under threat of ban is interesting. It sounds like the open source aspect is supposed to provide protection against CCP spying, but I find that hard to believe.
I, for one, welcome a significant market pullback. I have no idea if it’ll last, but it’s not hard to envision this being more than a bump in the road to US exceptionalist glory. I’ve (presumably) got plenty of runway ahead of me, so I’d rather be buying at lower prices to fund my retirement. Then again, given that the singularity may be upon us and there’s a lack of any sort of intelligence in the White House, maybe it’s time to stop worrying and learn to love AI.
Hopefully, AI can start harvesting food and building houses soon, though, because our blue-collar labor pool is about to rapidly disappear.
One other thought crossed my mind: what’s the over/under on how much of the supposed $500B pool OpenAI, SoftBank, and Oracle will actually invest? Even before this, $100B might have been a stretch. Is the over/under now closer to Musk’s alleged $10B?
“It seems to me that Western AI players have a pretty good handle on what DeepSeek’s doing, which means it’ll be replicated outside of China, and with better chips.”
The open source aspect of it is what intrigues me. It can and will be duplicated and further perfected by US firms. And EU firms. And Indian firms. And Taiwanese firms etc. If Xi’s grip is so powerful, you’d think he would not have allowed that, no?
To a point, “better chips” might help, but the notion that we need more and more computing capacity to fulfill pretty basic business needs is rightly being called into question now. That has been the justification for forecasts of endless demand for GPU chips from one company.
Now I’d humbly suggest that, as investors, our attention should be on which companies stand to benefit most from using AI. (That is, make more money!) And then, how long will their advantage last when all of their competitors follow suit?
Again: “Where’s the Beef?”
You are a brave man buying Nvidia at this price.
I don’t see it that way at all. I liquidated a significant portion of my Nvidia position several months ago at $120. I have been trying to justify that decision ever since. Still lots of liquidity out there, same as last Friday, and it has to go somewhere.
I’ve been thinking about this conundrum (open source vs. party control) and I think I have an idea of how this is expected to work. The code driving DeepSeek is open source: you can view it and consume it (fork it) if you want. However, the data used to train the models is opaque; no one knows what they used or how they used it. The data is likely the explanation for how they were able to build this so cheaply, maybe even leveraging existing AI models to train their own.
The risk is not in using their open-source code to create your own private GPTs; it’s in using their hosted GPTs, where the data behind them is opaque and what happens to your data will be as well. Herein lies the risk for all DeepSeek consumers who don’t have the wherewithal or ambition to self-host: you are basically giving the Party all the data it could possibly ever want about you.
I expect most American consumers will be ambivalent about this idea, considering their response to the TikTok ban. But it will be interesting to see how US policy, especially given the current administration’s pivot on TikTok, responds to this risk.
A tech trojan horse. Brilliant.
I think most serious users will download the model and run it locally. Does DeepSeek really want to get into the cloud model serving business?
I also think the methods used by DeepSeek will be adopted by other model makers.
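For anyone wondering what “download the model and run it locally” involves in practice, here is a minimal sketch using the Hugging Face transformers library. The model ID and generation settings are illustrative assumptions (one of the smaller distilled DeepSeek-R1 checkpoints published under the deepseek-ai namespace); the point is simply that prompts and outputs stay on your own hardware rather than going to a hosted endpoint.

```python
# Minimal local-inference sketch; the checkpoint name is an assumption — pick
# whatever variant fits your hardware.
# Requires: pip install transformers accelerate torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative distilled variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain test-time scaling in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation runs entirely on the local machine; no prompt or output leaves it.
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The hosted chat app or API is the opposite case: every prompt is sent to DeepSeek’s servers, which is the scenario the comments above are worried about.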
You hit the nail on the head. The scary part of AI is the black box the data is in. If you trust your future to trainers like Zuck, Musk, and others of their ilk, you are more foolish than you know. The government buys/takes our data from every source in digital tech. There is nothing about normal, everyday people that the NSA, the IRS, the healthcare industry, insurance companies, retailers, credit card companies, utilities, and internet firms like Google, etc., don’t know. I see the clues every day: “Mr. So-and-So, here’s something you will really like. Check us out …” It’s the data in AI that’s the scariest part. You can’t even get a lot of your own information so you can see it yourself. How is your credit score calculated? It’s really too late already. Everybody who wants to know about us has already lined up our data in their systems.
Agreed, sir. In that regard, I heard a snippet of a discussion on the radio about the security risk of having high-clearance government operations located close to a casino. Critics were suggesting that it opened the door to blackmail of players with large gambling losses. A lawyer at a firm that handles security-clearance cases refuted that by saying that if any individual with a high security clearance wins or loses more than $10,000, the security officers at that person’s branch of government are informed immediately. In the same vein, have any of you sent a wire of over $10,000 in the last few years?
To quote Scott McNealy: “You have zero privacy anyway. Get over it.”
We’d be better served by spending money bolstering info security at our electricity providers and other infrastructure that may be weaponized in a conflict. The private sector has done diddly about that.
“Xi already controls it.”
Granted. But when you have aspirations of being dominant worldwide in the field of AI, would you stick a backdoor into your first major contender, when it will be the most parsed computer program in history?
btw, do those count as scare quotes?
I don’t necessarily mean he controls it in the sense that there’s a secret backdoor in the code, I mean he controls it in the sense that — you know — “Hello, is this the office of DeepSeek? Ok, good. This is Xi Jinping calling. As you’re aware, I run this country and everything in it, including and especially that AI model you have. So, go ahead and send us every, single bit of data you’ve collected with it, otherwise I’ll have to kidnap your family, shoot you in the face and throw your body in a ditch. Have a nice day.”
Point being: There doesn’t need to be a backdoor. Every interaction with that thing (DeepSeek) is by definition the property of the CCP. And DeepSeek’s been downloaded by God only knows how many people who apparently don’t understand, or don’t care, that their interactions with it are a source of intelligence for Xi Jinping. Of course, the vast majority of those interactions are innocuous and will be completely useless to the Party, just like Xi’s not going to learn anything especially useful from the vast majority of the data collected by TikTok. But the bottom line is that the entire world spent Monday conversing with (or trying to converse with, given that DeepSeek apparently had to close down new registrations due to hacking attempts) a company that’s completely beholden to the CCP simply by virtue of being a Chinese company.
And all that data belongs to Uncle Sam as well. Both sides fear the dreaded “Mine Shaft Gap,” or in this case the gap that might exist amongst the bad guys. I have heard some things are safe in Andorra.
My daughter finally got a nice senior management job managing a scalable AI creation process after two years of unemployment. She is now at a competitor to her last employer. When she was laid off, she was in the process of collecting a huge pile of unique, important data. The folks who laid her off realized they had no idea what she had planned to do with all that valuable private data, so they sent the whole database to her at her new company so she could do what she had intended with it and kick her former firm’s butt (or sell selected skills back to the former guys). It’s the data used to train the models that’s really the key. That’s a new product strategy at IBM: they get hired to help firms create the databases they need for internal AI and to help them build the models. For a company that knows what it is doing, this process is a source of immense, protectable competitive advantage.