Corporate America’s New Strategy: Just Say ‘A.I.’

The A.I. discussion is unfolding everywhere, including, and especially, in corporate boardrooms. Those discussions spilled over into analyst calls during reporting season, when companies in every sector mentioned A.I. As noted here last week, two-thirds of tech companies and three-quarters of communications services companies discussed A.I. on their calls. Between 19% and 26% of earnings calls for companies in financials, industrials, real estate and consumer discretionary contained some reference to A.I.



9 thoughts on “Corporate America’s New Strategy: Just Say ‘A.I.’”

  1. The classic new-technology cycle: 1) technology trigger, 2) peak of inflated expectations, 3) trough of disillusionment, 4) slope of enlightenment, 5) plateau of productivity, also known as the “Gartner Hype Cycle.” AI is early in stage 2.

    1. I’d reiterate one of the points I made in the last monthly letter: It’s not as simple as typing in a prompt and getting a good result. It takes lots of prompts and Photoshop tweaks, at least for me. I’m sure there are people who are more effective at it. But for now, my experience is that it’s time-consuming and it does cost a little money. Not much. But a little. It’s worth the effort, though. I think it adds to the reader experience.

      1. By comparison, the cartoon homage to Hopper that I used for the Newsroom t-shirts, prints, coffee mugs and mouse pads in the shop required me to pay a real, human artist, and the price was many times higher than what I’d pay for something similar using A.I.

        HOWEVER, the total amount of money I’ve spent on A.I. images since February now exceeds (by a wide margin) what I paid for the professional human cartoonist to design the shirts. And whereas that experience was highly enjoyable and very rewarding both for me and the cartoonist, the daily grind of trying to coax the A.I. to give me something usable can be very frustrating.

        Suffice it to say that if the cost of the A.I. images doesn’t go even lower and/or the A.I. doesn’t get more efficient at producing usable visuals, next year I’ll just take the money I spent on A.I. in 2023 and hire a part-time human cartoonist for 2024.

        To me, that suggests A.I. isn’t close to replacing humans yet. I could easily see the cost of these images running to $30,000 this year, not because they’re expensive individually, but because for every one the A.I. gets “right,” it gets (at least) 50 wrong, and even the “right” images have to be fed into Photoshop for tweaks.

        (When I say it gets 50 wrong for every one it gets right, that’s on average. Occasionally you’ll get lucky and it’ll spit out something usable on the first batch, but that’s just for very simple line cartoons with one or two elements — e.g., “A cartoon of a bear sleeping.” If you look at some of the stuff other financial media outlets have tried — e.g., the FT tried to use A.I. for a picture of a dollar bill with Washington screaming — it’s not something I’d use here. On the one the FT ran, all the dollar bill text was jumbled up and distorted, and so on. It would’ve taken hours in Photoshop to fix.)

        1. I’ve played with AI images a bit, just to the extent that DALL-E is willing to give it away for free. I can very easily see just what you’re talking about.

          For some fun, I had my young son feed me prompts, which I would type up. Suffice it to say, he does not appear to have a bright future in prompt engineering. He was more inclined to prompts that would make a decent second act of a sci-fi movie than prompts that would make a coherent still image. I typed them up all the same, mostly to see what the AI would come up with. It was an unpredictable hodgepodge that rarely looked good. Sometimes we would get something that looked really cool but didn’t have a whole heck of a lot to do with the prompt we fed it.

          I did make one serious attempt at getting a usable image for another project. I just wanted a black line drawing of a bonsai tree. I know quite a bit about the subject (bonsai) and was able to give very specific instructions and refinements. By the time I’d used up my monthly free images, I still wasn’t happy with anything that came out.

          For funsies, I just went and tried to recreate the image atop this article. The results were… yikes.

  2. Before we go much further into AI, there is one key problem we are not yet addressing: It is completely untrustworthy. AI is essentially a black box containing math equations that it creates from data initially fed into the box by us humans. Any serious AI is based on statistical processes that are self-modifying as more data is read, and AI makes its own judgments based on its initial structure. We don’t process the data alongside our AI partner to make sure it’s getting the outcome right, and we can’t know what will come out of the box once we start. I know from the ways AI is applied to me that it can get stupid. If it’s doing surgery, what if it doesn’t see what the doctor would have seen?

    I got a dental implant last year. When I saw how easy it was, I asked the doctor about a couple of other ones I’d like to get done. He looked at my x-rays and said he wouldn’t do those; they were too dangerous. One risked making an unfixable hole in my sinus, and one risked paralyzing my face by cutting a critical nerve. The latter was a procedure I had planned earlier but didn’t go through with because I didn’t like that doctor. He missed the risky nerve. Two trained brains looking at the same pictures disagreed, and the price of a mistake is high.

    Because of its opaque processes, we can’t ever know if AI is optimized, or even right at all. Letting this stuff make decisions affecting our lives is risky at a level we can’t actually know.

    1. These are well-documented concerns about AI. Self-driving cars are a good example: Humans make driving errors resulting in tens of thousands of deaths a year, but a couple of deaths from drivers using autopilot make national headlines, even though self-driving cars are likely far safer overall. The same principle applies to medical errors. For whatever reason, we feel safer in the hands of imperfect humans than imperfect but more accurate machines.

      I’d also argue that we are more likely to determine the root cause of a machine error than a human error. Humans make mistakes for many reasons that are often unclear, and we only have a person’s word to go on when trying to understand their thought process. With a machine, you can create a log of the decision-making and work through the steps methodically.

      AI is still in its very early phases, but advancing rapidly. Just as in medicine, we should absolutely be careful and run trials and QA lest we stumble into some very bad side effects, but dismissing AI as a black box, when statistics say humans often make more mistakes and cause more deaths, is a bit dramatic.

      All that being said, it’s better to think of AI as an assistant at this point. You still need to be thoughtful about what you want to get out of it and what inputs you need, as H describes above. However, we are already seeing instances of analysts being replaced because, with AI, we can do jobs in 30 minutes that took an analyst a week to complete. It still requires QA, but that’s a massive productivity boost, and I doubt we’re the only ones who have done the same.

      1. AI could, in theory, be very useful for old-fashioned fundamental investors like me. A “junior in a box”, so to speak. Building models, back-testing, screening, hunting for data points. Stuff that investors already do, but when it is more efficient and accessible to “the little guy”, more of us will do it more of the time.

        Then it will rapidly become “table stakes” for the users and another required expense, just like Bloomberg or Thomson Reuters terminals.

        It will also become “table stakes” for the vendors and another required feature, just like data downloading, charting, and screening.

        I imagine the result will be more revenue and margin for the vendors, higher productivity for the users, and fewer junior analyst positions.

        The key requirement, though, is that the AI will have to be 99.99% accurate. There will be virtually zero tolerance for fictional data. Maybe 99% accuracy can be tolerated, but only if the AI can learn and has persistent memory, so that if you find and correct an error once, you can expect that error will never be repeated (similar to what you’d expect from a junior analyst).

        The popularly available generative AIs are not useful for me at this point.

        For the past couple of months, I’ve been periodically asking ChatGPT4, Bing, and Bard this question: “Annual incremental operating margin is the change in operating profit from year one to year two, divided by the change in revenue from year one to year two. Give me a list of the annual incremental operating margin for Expeditors (EXPD) for the years 2000 to 2022 inclusive, in US dollars.” ChatGPT4 refuses to answer the question. Bing refers me to EXPD’s annual reports. Bard immediately gives me the requested list, and it is totally wrong, and differently wrong each time. (A quick sketch of the arithmetic appears at the end of this comment.)

        Sure, this is not what these AIs were designed for. So, I ask each of them a more pedestrian question: “I need to apply caulk and spackle to bare wood siding that will be primed and painted. Should I apply the caulk and spackle before, or after, the primer? If after, should I apply primer again over the caulk and spackle?” ChatGPT4 and Bard give me the wrong answer (apply caulk and spackle to bare wood, then prime over), and Bing gives me a semi-correct answer (apply primer, then caulk and spackle over the primer), but not the correct answer an expert human painter would give (apply primer, then caulk and spackle, then another coat of primer over the spackle).

        I look forward to the specialist AIs trained on accurate and reliable data and free of made-up answers.
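
        For reference, the incremental-margin arithmetic from that prompt is simple to script, which underscores how badly the chatbots handled it. Here is a minimal sketch in Python; the figures are placeholders, not actual EXPD data:

        ```python
        # Annual incremental operating margin, per the definition in the prompt above:
        # (change in operating profit, year to year) / (change in revenue, year to year).

        def incremental_operating_margins(op_profit, revenue):
            """Return the incremental operating margin for each consecutive pair of years."""
            return [
                (op_profit[y] - op_profit[y - 1]) / (revenue[y] - revenue[y - 1])
                for y in range(1, len(revenue))
            ]

        # Hypothetical example: profit rises $50M on $200M of added revenue in year two,
        # then $30M on $150M in year three, so the margins are 25% and 20%.
        print(incremental_operating_margins([400, 450, 480], [3000, 3200, 3350]))
        # [0.25, 0.2]
        ```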

    2. I’ve been lightly dabbling with Bard and had an alarming experience trying to get it to calculate the starting and ending amounts of principal when the only known variables are the term, the compound growth rate, and the increment between the starting and ending amounts. There were some other complicating factors, but its initial response was calculated incorrectly. I iterated by prompting that I knew the correct answer was “X,” but that perhaps I had not specified the equation correctly. Bard apologized and tried to take the blame, confirmed that the equation was incorrect, continued to do the math wrong, but gave me the correct answer. I iterated a final time by giving it a text description rather than an equation, and once again it gave me the right answer that did not correspond to its own calculation. In the end, before I gave it the right answer, I could not get it to return the correct result; after I gave it the right answer, every response generated that result no matter how it was miscalculated.
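
      For what it’s worth, the calculation Bard kept botching reduces to one line of algebra: with term n, compound growth rate r, and increment D between the ending and starting amounts, the starting principal is P = D / ((1 + r)^n - 1). A minimal sketch in Python, with made-up inputs (the commenter’s actual numbers aren’t given):

      ```python
      # Solve for the starting and ending principal given only: the term n (years),
      # the compound growth rate r, and the increment D between the ending and
      # starting amounts. Since end = P * (1 + r)**n and D = end - P, it follows
      # that P = D / ((1 + r)**n - 1).

      def principal_from_increment(n, r, increment):
          growth = (1 + r) ** n
          start = increment / (growth - 1)
          return start, start + increment

      # Made-up example: 10 years at 7%, ending amount $50,000 above the start.
      start, end = principal_from_increment(10, 0.07, 50_000)
      print(f"start = {start:,.2f}, end = {end:,.2f}")
      # start ≈ 51,698, end ≈ 101,698; check: 51,698 * 1.07**10 ≈ 101,698
      ```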
