As ChatGPT Traffic Falls, A.I. ‘Baby Bubble’ Gets First Test

The "novelty wore off." That's how Similarweb described the first drop in global traffic to ChatGPT since OpenAI's chatbot took the world by storm several months back. Globally, traffic fell 9.7% and uniques dropped 5.7% in June. Time spent on the site by visitors declined nearly 9% the prior month. "Chatbots will have to prove their worth, rather than taking it for granted, from here on out," Similarweb said in a blog post. Apparently, it costs OpenAI around $700,000 per day to run the publ


15 thoughts on “As ChatGPT Traffic Falls, A.I. ‘Baby Bubble’ Gets First Test”

  1. The road from “what a cool gizmo” to a huge productivity booster will probably be longer and less direct than many theme-starved analysts and investors hope. In fact, profits at the end-user firms that actually adopt AI may take a short-term hit until the technology is buttoned down enough to allow large numbers of people to be fired.

    We may have a roadmap in companies’ past experience implementing ERP systems. For a few years it was very common to see corporations blame poor results on the costs and disruptions incurred while they implemented and tuned ERP software from SAP.

    I recently mentioned this to a B-school buddy who once ran a major healthcare company. His reaction was something like “Oh yeah. We saw that over and over in our industry back then.”

    Like ERP, AI should eventually benefit some firms. Eventually.  

    1. ERP isn’t a good comparison to generative AI. With ERP, companies had to convert old processes and data structures into a new system and that required considerable manual effort to customize it to a business’s needs and integrate with other tools.

      The major benefit of generative AI is that it basically solves the issue that makes ERPs so cumbersome: the ability to take large amounts of unstructured data and turn it into something that can be summarized and used easily for tasks like planning and decision-making. This can either be added into a company’s existing products that it is selling to a customer or integrated into a company’s internal systems to make processes more efficient.

      Because it can basically be bolted on to existing infrastructure through APIs and plugins, it’s a much lighter lift and doesn’t really disrupt a company’s existing operations. You don’t have the same switching costs as companies had with prior technologies like ERP.

      Also, OpenAI is rolling out the alpha version of Code Interpreter this week, which will be a huge leap forward in data analysis.

      Some examples of what it will be able to do: https://www.linkedin.com/feed/update/urn:li:activity:7082924954047942656/
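The “bolt-on through APIs” point above can be sketched concretely. The payload shape below follows OpenAI’s public chat-completions format, but `build_summary_request`, the prompt wording, and the sample notes are illustrative assumptions, and no actual network call is made:

```python
import json

def build_summary_request(unstructured_text, model="gpt-3.5-turbo"):
    """Assemble a chat-completion request body that asks a model to
    turn free-form text into a structured summary. No network call
    happens here; in practice the dict would be POSTed to the
    provider's chat endpoint with an API key."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text as bullet points "
                        "suitable for planning and decision-making."},
            {"role": "user", "content": unstructured_text},
        ],
        "temperature": 0,  # keep output stable for business use
    }

notes = "Q2 pipeline review: 14 open deals, 3 stalled on legal, renewals up 8%."
payload = build_summary_request(notes)
print(json.dumps(payload, indent=2))
```

Swapping providers mostly means changing the URL and payload shape, which is part of why the switching costs look so much lower than they did with ERP.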

      1. Thanks. Good points.

        Question: how will your APIs access client databases? On their own physical or cloud servers? Or a Snowflake-like setup where you port your company’s data to the AI provider’s cloud?

      2. I documented less-than-sophisticated AI a few years ago. But it was sophisticated enough to seize my imagination and let me visualize its potential.

        The technology’s capabilities include replacing me in my job as a technical writer. It’s not there yet. And I happen not to fear its capability because I’m very near retirement. But, sincerely, AI will be able to write code and spit out documents describing software in the not-too-distant future.

        My guess is that it will be sooner rather than later. By that time I’ll have my own blog on WordPress, and my wife and I will probably be living someplace that’s more expensive and exotic than Chicago. Presumably, she and I will enjoy a cocktail on the veranda at the end of the work day. A tall gin and tonic is my drink.

        Yes, Walt, I do let my imagination run a little bit from time to time.

      3. Tom’s Hardware had an interesting piece today: “Generative AI Goes ‘MAD’ When Trained on AI-Created Data Over Five Times”. Reading this through just reinforces my suspicion that “AI” is evolutionary rather than revolutionary.

        A nice step further in data analysis, not all that far removed from the simpler data-pattern correlation and regression analyses I did on the cotton market back in 1981. Using what were some pretty large-scale databases for the time, my models “discovered” that one of the best predictors of cotton futures prices was the aggregate total scores of Latin American soccer games played in the previous week.

        No doubt AI models would have worked faster, though some of that would simply be because of the increased data processing abilities of today’s rigs versus the mainframes in the early 1980s.

        1. I’ve whinged before about my inability to get ChatGPT (or publicly available LLMs like Bard, Bing, Pi, etc) to do anything useful for me, where “useful” means producing reliable data and specific answers to non-general questions.

          Writing a book report in the style of e.e. cummings, giving a cursory how-to on painting wood siding, or pretending to care about my feelings – these feats of generative AI are impressive for a little while, then rather pointless. Kind of like a talking horse.

          For widespread business adoption, gAI needs to do useful things. For that, it needs to extract relevant data from multiple corporate data applications and databases/lakes/pools and accurately analyze that data, with a very small error rate (<0.01%?). LLMs cannot currently do that. They can be connected to existing applications and act as the user interface for non-trained users who cannot formulate the correct query to all those applications or combine and interpret the applications’ output. This requires supplying the LLM with a prompt that, as far as I can tell, has to include much of the logic and data needed to do the task. Don’t get me wrong, this is a compelling use case. Think of all the redundant trained users!

          It seems to me that business gAI won’t ultimately require LLMs trained on trillions of tokens, since it doesn’t need to know about e.e. cummings, how to prep wood siding, or human feelings. Thus the compute resources will be manageable and adoption widespread – I think.

          Meanwhile, AI will do remarkable things in protein synthesis, aerial warfare, mineral exploration, autonomous baby strollers, etc – but those are not gAI aka LLMs, they are the machine learning that has been in development for decades.

          Incidentally, assuming rapidly declining ASPs, consensus estimates for NVDA’s datacenter revenue imply about 50-100% annual unit growth in AI hardware over the next couple of years.

          1. An example in my line of work where it’s already creating a lot of value: summarizing calls and outlining a list of next steps. That saves our sales team considerable time and effort and provides much better notes. They obviously still need to review the notes and follow-ups to ensure accuracy, but the vendors I’ve talked to are already teeing up the next generation of sales support capabilities, like call prep including a summary of historical interactions and what kind of messaging a prospect might be interested in. Those kinds of things can save reps a lot of time and effort with meeting prep and follow-up.
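To make the call-summary workflow concrete, here is a minimal sketch of the post-processing step: splitting a model’s reply into meeting notes and action items. The “Next steps:” heading convention, the `parse_call_summary` helper, and the stubbed reply are illustrative assumptions, not any vendor’s actual format:

```python
def parse_call_summary(model_reply):
    """Split a model's call-summary reply into summary lines and
    action items. Assumes the prompt instructed the model to emit
    a 'Next steps:' heading before the follow-ups."""
    summary, actions, in_actions = [], [], False
    for line in model_reply.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("next steps"):
            in_actions = True
            continue
        # Drop leading bullet markers before storing the line
        (actions if in_actions else summary).append(line.lstrip("- "))
    return summary, actions

# A stubbed reply standing in for real model output:
reply = """- Discussed renewal terms and Q3 rollout timeline.
- Customer raised concerns about SSO support.

Next steps:
- Send updated pricing by Friday.
- Schedule technical call with their IT team."""

summary, actions = parse_call_summary(reply)
print(len(summary), len(actions))  # 2 2
```

The rep still reviews both lists for accuracy, as noted above; the automation only removes the transcription and formatting drudgery.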

          2. Ha, well, the hope is that it increases our team’s ability to sell thereby right sizing in a positive direction 🙂

        2. It’s likely that AI can replace attorneys and/or paralegals who currently spend a lot of time doing legal case research.
          My understanding is that this is already occurring in the area of patent law. A human will likely still be needed for verification and final review; however, AI can significantly reduce the time required to prepare patent filings.

    2. Great example with ERP implementations. And very relatable to anyone who’s worked in corporate finance/IT over the past 6-10 years. Looking back, the job losses at my company were not an immediate result of implementing the technology; rather, they came as folks became proficient users and sustained productivity was observed with no disruption to operations… Then the culling happened.

      I’d also submit that another example would be the adoption of blockchain technology… Anyone cracked the code on a real-world use case with a sustainable money-making business model for that yet?

  2. Either people misunderstand what AI is or, perhaps, humanity does not understand what “intelligence” is. Jim Simons made a lot of money finding patterns without trying to understand how or why those patterns existed; perhaps blindly finding and using patterns (without any insight), as AI does, is sufficient, but such “intelligence” may not extrapolate well.
    Is the “liability cost” of using AI (e.g. ChatGPT) priced into the value of companies using AI? Levidow, Levidow & Oberman, P.C. only had to pay $5000, although the reputational cost to the firm might be higher. Now that the warning shot has been fired, will the next firm get off with a small fine or will more serious sanctions be imposed? What about the lawsuits against OpenAI?

  3. Anyone remember Lucent Technologies and the telecom build out or the fiber optics build out? Capacity gets built in excess of demand, then demand catches up over years.

  4. I have little or no direct experience to weigh in on the real-vs.-bubble question, but as a consultant, I am often explicitly restricted from using any sort of online chatbot in the course of my contracted work. It is less a matter of reliability or accuracy than of the clients (usually via law firms) not wanting sensitive or even proprietary topics “discussed” online, even in anonymous or genericized form.

  5. From what I’ve seen, private ChatGPT endpoints are becoming more critical path at trading companies. I’m curious to see how Azure will handle the pricing over the years once company systems critically depend on them. Like what could you sub in that’s 1:1 if you decided that the pricing was getting out of hand?

    In terms of the article pointing out a drop in usage of the public endpoint, that seems more related to user habits than anything else. For example, my brain is still wired to check x or y or follow z process and then sometimes I remember “oh man I should have involved ChatGPT!” I don’t think it’s a bubble at all in terms of usage — it will become the new “google it”
