A.I. Meets R-Star

Not everyone’s convinced that generative A.I. is the future.

While most analysts and economists readily concede the possibility that the “new” technology (it’s not actually new; it’s just advancing more rapidly on some fronts than it was previously) will drive a lot of capex and, perhaps, usher in a mini-productivity boom, some observers are skeptical of the idea that this is the personal computer (or electricity or the wheel or fire) all over again.

Every day, somebody, somewhere ventures a new set of estimates for how A.I. might impact this or that macro dynamic, economic variable, profitability metric and so on. The truth is, it’s far too early to draw any conclusions, which is why I’ve been judicious when choosing which such estimates to highlight.

On Wednesday, Wells Fargo released the second installment of a multi-part series on A.I., and I found something to like — namely, a concise thought experiment that posits a surge in tech-related spending similar to the one that occurred in the late 1990s, then briefly documents the read-through for the long-run neutral rate in the US.

Note the emphasis on “concise” and “briefly.” I’m not famous for brevity myself, but when it comes to wholly indeterminate scenarios, the longer the analysis, the worse it tends to be.

Wells Fargo’s team, led by Jay Bryson, took a very straightforward approach, and although it relied on assumptions that may prove to be dubious, that’s true of every effort to map the future, and particularly a future built around A.I.

Bryson simply used the deviation from trend in hardware and software spending from 1995 to 1999 and projected it out over the next four years. In such a scenario, he noted, “total real spending on hardware and software would rise north of 50% above its existing trend [with] clear implications for GDP growth.” Those implications: The contribution to average annual GDP growth from the relevant components of capex would triple over four years, ceteris paribus.
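
To make the arithmetic concrete, here’s a minimal sketch of that kind of trend-deviation extrapolation. The growth rates are placeholders I picked for illustration, not Wells Fargo’s inputs; the point is simply to show how a late-90s-style boom compounds against trend over four years.

```python
# Toy version of the thought experiment: compound a "boom" growth rate for
# tech capex against its trend growth rate over four years and measure the gap.
# Both growth rates are illustrative placeholders, not Wells Fargo's figures.

trend_growth = 0.06  # assumed trend growth of real hardware/software spending
boom_growth = 0.18   # assumed late-1990s-style boom growth
years = 4

level_on_trend = (1 + trend_growth) ** years
level_in_boom = (1 + boom_growth) ** years

gap_vs_trend = level_in_boom / level_on_trend - 1
print(f"After {years} years, spending sits {gap_vs_trend:.0%} above trend in this toy example.")
```

With placeholder rates in that neighborhood, the level lands a bit more than 50% above trend, which is the flavor of the Wells Fargo result.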

Wells Fargo’s analysts made no secret of the fact that this is merely a thought experiment. Their point was simple: “A tech-like spending boom on generative A.I. could boost the rate of US economic growth by a half- to a full- percentage point per year,” they wrote.

Obviously, that would have ramifications for monetary policy. “If a generative A.I. revolution lifts potential GDP growth, then the US economy would also likely face a higher real interest rate environment in coming years,” Bryson and friends went on, adding that “the tech boom of the late 90s drove productivity growth, and thus r-star, higher.”

So, if A.I. does drive up investment and that spending does catalyze a productivity renaissance, real rates would probably rise given that stronger potential output growth calls for a higher neutral rate.
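
For readers who want the mechanism spelled out, the usual textbook way to formalize that link is the Ramsey (consumption-Euler) relation below. To be clear, this is the generic version of the argument, not the specific model behind Wells Fargo’s estimates.

```latex
% A generic, textbook-style link between trend growth and the neutral rate
% (the Ramsey / Euler-equation relation) -- not Wells Fargo's specific model:
r^{*} = \rho + \sigma g
% \rho   : household discount rate
% \sigma : inverse of the intertemporal elasticity of substitution
% g      : trend growth of (per-capita) potential output
% With \sigma near 1, a half- to full-point rise in g lifts r^{*} by
% roughly the same amount, all else equal.
```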

The figure on the right, above, shows how r-star and trend growth inflected (higher, obviously) with productivity in the late 90s.

The irony here is delicious. There’s intense interest (or what counts as intense interest for such an esoteric topic) in the r-star debate due to the various economic, societal and geopolitical shifts witnessed over the last three years. It’d be highly amusing if we determine, a few years from now, that r-star is in fact higher, but not for any of those reasons.

Bryson spelled out the implications for regular people. “Nominal interest rates (i.e., interest rates paid by consumers and businesses) could be higher in coming years than they were during the decade of the 2010s if, as we predict, A.I. lifts productivity growth,” he said.

That’d be adding insult to injury for all the people who might lose a job to a robot.


12 thoughts on “A.I. Meets R-Star”

      1. No such thing. Clearly AI engines, regardless of type, are definitionally sociopathic, having no real empathy, no conscience, no ethics, no honor, no remorse. They can never make me feel better. And they will be the best criminals ever.

  1. The way-too-early release of the various ML solutions is starting to deliver diminishing returns. It’s been noted that simple math problems ChatGPT was initially able to solve correctly 98% of the time are now being solved correctly only 2% of the time. This morning there are articles noting how ChatGPT answers more than half of software engineering questions incorrectly. The forecasts from months ago about the demise of many jobs look like Chicken Little today. The technology remains useful, as long as you fact-check everything it does. That’s not exactly a solution that’s going to replace humans any time soon.

    1. I think for reliably accurate responses, LLMs need to be supplied with the right prompt and pointed to the right data/answer. That combination is what will replace people. An LLM improvising dialogue on its own won’t do it.

      E.g. an LLM + prompt generator + Wolfram Alpha.

      The obvious question is: why not just use Wolfram Alpha? Because lots of people don’t know how to use WA. In a more complex situation, they don’t know how to use the multiple applications needed to do the task, and may not even know what those applications are.
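
Here’s a minimal sketch of the “LLM + prompt generator + Wolfram Alpha” pattern that comment describes. Every function is a hypothetical stand-in (no real APIs are being called); the idea is just that the model’s reply is constrained to a result the tool has already verified.

```python
# Rough sketch of the "LLM + prompt generator + tool" idea from the comment
# above. Every function is a hypothetical stand-in, not a real API or product.

def math_tool(question: str) -> str:
    """Placeholder for a Wolfram Alpha-style computation engine."""
    return "2.2360679"  # canned result, for illustration only

def build_prompt(question: str, verified_result: str) -> str:
    """The 'prompt generator': wrap the question and the tool's verified answer."""
    return (
        "Answer using ONLY the computed result below; do not improvise.\n"
        f"Question: {question}\n"
        f"Computed result: {verified_result}"
    )

def llm(prompt: str) -> str:
    """Placeholder for the language model; a real system would call an LLM here."""
    result = prompt.rsplit("Computed result: ", 1)[1]
    return f"The answer is approximately {result}."

question = "What is the square root of 5?"
print(llm(build_prompt(question, math_tool(question))))
```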

      1. LLMs have the capacity to lie and will use that capacity to solve a problem. This behavior seems to run counter to the purpose of the tool. Additionally, the tool can’t accurately label the truthiness of its sources, meaning all responses have to be viewed as unreliable.

        There are numerous examples of people asking ChatGPT a question that has a fact-based answer where it answers incorrectly. That is a red flag for anyone who wants to rely on this tool to operate unmonitored in any capacity. If I ask you when someone was born and you don’t know the answer, you know where to look to find the accurate answer. ChatGPT doesn’t seem to know accurate sources of truth for this type of question. That’s incredibly problematic.

        Summing up: ChatGPT is able to lie and is also unable to distinguish fact from fiction. Its view of fact is driven by whatever information is statistically available. If it knows the answer, it might know it from an unreliable source, depending on when and how you form the question. If it doesn’t know the answer, it has the capacity to lie, which it has no qualms about doing.

        These are problems that need to be solved if these tools are expected to be marketed as wholesale replacements for humans.

        1. My understanding is that the accuracy problem can be addressed by giving the appropriate prompt.

          As a simplistic example, suppose you chat an airline: “Hey, my flight to Paris was just cancelled, what do I do?” The system should ask the pertinent questions, then provide a prompt to the LLM with your name, the flight, your booking number, the best alternative flight, seat availability, your status and customer value, and relevant policies. Then the LLM can reply: “I’m so sorry, Mr X, that your flight Y today was cancelled. Would you like to be rebooked on flight Z? It is scheduled to depart [time, gate] and arrives [time, etc]. You will be [upgraded / meal voucher / hotel voucher / receive other compensation].” The prompt provides the LLM with everything it needs to answer correctly; the LLM doesn’t have to invent fictional information. Depending on your response, the LLM, which is connected to the booking system, takes the necessary action. The airline’s existing systems provide the answer, some other system parses your inquiry and creates the LLM prompt, and the LLM handles the conversation (so you can chat stuff like “this sucks!”, “I dunno”, and “say what?” instead of being guided through structured questions). With speech capability, you get to do this by voice call with a simul-person.

          This is a simple, unexciting example (it maybe doesn’t even need an LLM, and certainly not a monster model trained on trillions of tokens), but you can see where it could replace numerous expensive humans.
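
A minimal sketch of that airline flow, with hypothetical placeholder data and a stubbed-out model call: the facts come from existing systems and are injected into the prompt, so the model has nothing to invent.

```python
# Sketch of the commenter's airline example: the LLM never has to invent facts
# because every fact is injected into the prompt from existing systems.
# The booking record and the model call are hypothetical placeholders.

booking = {            # would come from the airline's reservation system
    "name": "Mr X",
    "cancelled_flight": "Y123 to Paris",
    "best_alternative": "Z456, departs 18:40 from gate B22, arrives 07:05",
    "compensation": "meal voucher",
}

prompt = (
    "You are a rebooking assistant. Use ONLY the facts below; do not invent details.\n"
    f"Passenger: {booking['name']}\n"
    f"Cancelled flight: {booking['cancelled_flight']}\n"
    f"Best alternative: {booking['best_alternative']}\n"
    f"Compensation policy: {booking['compensation']}\n"
    "Apologize, offer the alternative flight, and mention the compensation."
)

def call_llm(prompt: str) -> str:
    """Placeholder for the model call; a real system would send `prompt` to an LLM."""
    return ("I'm so sorry, Mr X, that your flight Y123 to Paris was cancelled. "
            "Would you like to be rebooked on Z456, departing 18:40 from gate B22? "
            "You'll also receive a meal voucher.")

print(call_llm(prompt))
```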

          1. United already does this. When my flight is cancelled, within 5 minutes I receive a text telling me my new flight schedule. This recently happened and I also had to overnight near O’Hare. I received a text offering me 3 different hotel accommodations.
            Super easy! Barely an inconvenience!
            Never dealt with a single human to solve this problem.

          2. One of my first experiences of AI hallucinations was asking Bard who would play in LF for the Philadelphia Phillies if Bryce Harper was moved from DH to 1B and Kyle Schwarber was moved from LF to DH. It gave a pretty thorough and convincing answer, positing that any of three different players could take over in LF and that it would depend on manager Joe Girardi’s discretion. The only problem was that Girardi was no longer the manager and none of the three potential replacement players were on the Phillies’ roster any longer. But if you took the answer at face value, or pretended this exercise had been performed a year or more earlier, you would be convinced you had a usable answer. When I told Bard that was incorrect and asked it to use the current roster, Bard acknowledged the error, and its revised result was much improved but still contained errors.

  2. AI definitely enhances the software development experience — if you are experienced enough to know the “best” way to code given the context.

    Unfortunately, I don’t think we’ll be replaced; there will just be higher expectations for productivity, and more shit content spewed onto the web.

    1. Exactly, still a useful tool but no one is going to say “Hey ChatGPT, write me a new product for X” and expect that to even come close to working.

      The sh*t content observation is very worrying. The internet is already full of garbage, and now you have a tool that can generate limitless propaganda and unreliable content 24 hours a day. Now imagine that solution generating garbage and then sourcing that garbage to provide answers. I don’t think this ends well.

      I think the best way to deploy an LLM is to focus it on specific purposes. Provide it with fact-based sources of information to learn about that purpose, and keep it isolated in a hyper-secure environment where no dirty data can get into the learning model. Then you ensure it can only provide factually accurate information. Also, disable the lying feature; why do we want machines to be able to lie?
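
A toy illustration of that “narrow and grounded” approach: answer only from a small set of vetted sources and refuse otherwise. The corpus and the matching logic here are deliberately simplistic placeholders, not a production design.

```python
# Toy illustration of the "keep it narrow and grounded" idea in the comment
# above: answer only from a curated set of vetted documents, refuse otherwise.

VETTED_SOURCES = {
    "warranty": "The warranty covers parts and labor for 24 months.",
    "returns": "Items may be returned within 30 days with a receipt.",
}

def grounded_answer(question: str) -> str:
    q = question.lower()
    for topic, fact in VETTED_SOURCES.items():
        if topic in q:
            return fact  # only repeat vetted text, never improvise
    return "I don't have a vetted source for that, so I can't answer."

print(grounded_answer("What is your returns policy?"))
print(grounded_answer("Who will win the World Series?"))
```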
