
7 thoughts on “Silver Linings And Catch-22s”

  1. Read this very topical opinion piece this morning on MW. It reminded me of a prescient comment here by John Taylor, posted before the DeepSeek announcement, in which he contended that the US tech companies are the only ones on earth totally focused on rolling out ever-larger models. His comment was salient because each new LLM iteration was already producing fewer and fewer gains per dollar wasted, oh I meant spent.

    https://www.marketwatch.com/story/deepseek-could-sink-big-techs-ai-growth-plans-and-why-shouldnt-it-1f6a887b

    But the sunk costs at the hyperscalers are too enormous to allow them to admit they were misguided. So they focus their energy on questioning the validity of DeepSeek instead of answering why they didn’t come out with something similar.

    1. Of course, we Americans firmly believe that big is better in all things. Think of your neighbor who just bought a black Dodge Ram 2500 Rebel to use for his weekly dump runs and monthly trips to Home Depot. Or the fascination with the .44 Magnum handgun thanks to Dirty Harry. Hell, it’s the BIG Mac, not the Mac, that competes with the Whopper.

      So why shouldn’t we lavish endless money building the largest LLMs on the planet? BIG is better!

  2. My software company is using AI a lot, and it certainly makes the competent folks much more productive. It’s making my life easier every day in new ways. Though measuring productivity in aggregate for the industry is basically impossible, imo.

    1. Hard to extrapolate the impact of AI on wider productivity given that the US tech sector, which includes software, computing, data storage, and more, accounted for only around 8.9% of US GDP in 2023. Some estimate that software accounted for about 25% of that, which works out to roughly 2.2% of GDP. Not sure how much of that came from software coding and such as opposed to marketing and ongoing support.

  3. Your point about expense is valid for a single consumer who occasionally leverages the tooling. For example, I played around with using AI to help author a novel. It would write a few chapters and then claim it needed time to think; when I asked how long it needed, it said an unspecified number of hours. Obviously AI doesn’t need hours to do what I was asking, but this is a way of implementing throttling at the consumer level.

    At scale, and depending on the model, the expense increases dramatically. If you expect a specific class of service at scale, you have to pay an additional up-front cost for provisioned throughput units (PTUs) to ensure a given level of throughput. Again, depending on usage, AI at scale has the potential to become a FinOps nightmare for firms (see the cost sketch at the end of this comment).

    From that perspective, I tend to agree with the assertion that lower-cost foundation models are good for business. Everyone thinks they need the latest and greatest OpenAI model, but when the expense starts piling up, executives will start to question that. If you can accomplish the same task on, say, Amazon Titan, why wouldn’t you?
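
    A minimal back-of-the-envelope sketch of that cost dynamic, in Python. Every number below (request volume, tokens per request, per-token price, PTU count and rate) is an invented illustration value, not actual OpenAI, Azure, or AWS pricing; the only point it makes is that a fixed throughput reservation and metered usage cross over at some volume.

    ```python
    # Hypothetical comparison of pay-as-you-go token pricing vs. a
    # provisioned-throughput (PTU-style) monthly commitment.
    # All figures are made-up illustration values, not vendor pricing.

    MONTHLY_REQUESTS = 5_000_000        # assumed workload
    TOKENS_PER_REQUEST = 1_500          # prompt + completion, assumed
    PAYG_PRICE_PER_1K_TOKENS = 0.01     # hypothetical $ per 1K tokens

    PTU_UNITS = 100                     # hypothetical reserved capacity
    PTU_PRICE_PER_UNIT_MONTH = 260.0    # hypothetical $ per unit per month

    payg_cost = MONTHLY_REQUESTS * TOKENS_PER_REQUEST / 1_000 * PAYG_PRICE_PER_1K_TOKENS
    ptu_cost = PTU_UNITS * PTU_PRICE_PER_UNIT_MONTH

    print(f"Pay-as-you-go: ${payg_cost:,.0f}/month")
    print(f"Provisioned:   ${ptu_cost:,.0f}/month")

    # Which side wins depends entirely on volume: below some break-even
    # request rate the reservation is wasted money; above it, metered
    # usage is what piles up on the invoice.
    ```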

  4. AI may be neat and all, but most of it was trained using data owned by one or more of us. AI is mostly valuable because it has been created with our sweat and tears and never paid for. My HIPAA data is used to train health care AI machines and programs. I paid my doctor to create and record that data, and my insurance company took it, sold it to others, and most of it I can’t even access. If I don’t sign an order allowing that process, the provider won’t allow my physician to serve me. (I know, I tried it.) That should be illegal. This goes on everywhere. Credit card data is routinely sold. When I lived in Iowa, the state sold all my personal info to anyone who offered to pay for it, while I had to pay them to get it myself.

    AI is based on pure statistics. All statistics are subject to measurable and predictable errors. There are two types of statistical errors, Type I (false positives) and Type II (false negatives), and at least one must always be non-zero; when one goes down, the other tends to rise. Once AI is released to do a task for us, it is automatically subject to its inherent error rates, the levels of which you will never be allowed to know (classified secret). We tend to think something that is programmed into AI is going to be great, no errors, blah, blah. Not at all true. Many commonly used new AI systems degrade over time (statistics behaves that way). Think about it before you go to bed tonight. Do any of you want to be operated on by someone using a surgical robot run by AI you know nothing about? I have been so treated, and the machine made a mistake that I was fortunate a human was eventually able to rectify. What about life-changing policies arising in similar “black boxes”?
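
    A minimal sketch of that error trade-off, in Python: a detector reduced to a single decision threshold over two overlapping score distributions. Sweeping the threshold shows Type I and Type II errors trading off against each other, and neither reaching zero while the distributions overlap. The distributions and thresholds are arbitrary illustration values, not any real diagnostic model.

    ```python
    import random

    random.seed(42)

    # Two overlapping score distributions: "healthy" vs. "diseased" cases.
    # Means and spreads are arbitrary illustration values.
    healthy = [random.gauss(0.0, 1.0) for _ in range(100_000)]
    diseased = [random.gauss(2.0, 1.0) for _ in range(100_000)]

    # The detector flags "diseased" when the score exceeds a threshold.
    # Sweep the threshold and watch the two error rates trade off.
    print(f"{'threshold':>9} | {'Type I (false pos)':>18} | {'Type II (false neg)':>19}")
    for t in [0.5, 1.0, 1.5, 2.0, 2.5]:
        type1 = sum(s > t for s in healthy) / len(healthy)      # healthy flagged sick
        type2 = sum(s <= t for s in diseased) / len(diseased)   # sick cases missed
        print(f"{t:9.1f} | {type1:18.3f} | {type2:19.3f}")

    # Raising the threshold suppresses false positives but inflates false
    # negatives; while the distributions overlap, no threshold drives both
    # error rates to zero.
    ```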
