Zuckerberg Sells The AI Dream

Mark Zuckerberg knew just what to say to placate investors concerned about the temporal gap between AI capex and the payoff from those investments. AI, Zuckerberg said Wednesday evening on Meta's call, will ultimately play some role in "almost every product that we have." The technology's going to "change all these different things over multiple time horizons." It's "super exciting," he effused. That didn't answer the most pressing questions regarding the timeline on revenue generation from those investments.

23 thoughts on “Zuckerberg Sells The AI Dream”

  1. The Silicon Valley tech guys have always operated that way. They create something they believe will be wonderful, though they can’t tell you exactly how it will pay off. It’s how those businesses live. The good ones don’t know how to stop thinking big and often outside the box.

  2. Investors haven’t given EMTA much AI halo, but actually META has more to gain from AI than any “AI user” company I know of (distinguishing from the “AI supplier” names), because they can provide AI applications their users will actually want.

    AI is good at creating images, generating text, and having conversations, and that is what people do on social media. So META is using AI to let users create fun or flattering AI images of themselves, let creators “create AI agents that can channel them to chat with their community”, and let “every business … have an AI agent that their customers can interact with”. Creepy, sure, but if social media is a nail, AI is a perfect hammer.

    META’s profiling/algorithm engine has by now far surpassed its effectiveness before the AAPL tracking block. No-one talks about Facebook Marketplace, but it is so much better than Craigslist (could someday challenge EBAY) that people have re-engaged with FB just for that. Talking to young adults, I believe META’s claim that it is regaining traction with that demographic. Threads has reached critical mass. I assume 3Q will see an election surge in ads/engagement.

    That said, it was basically a beat (report) and blah (guide) quarter, which only looked good relative to the lowered expectations created by GOOG and MSFT, and at this valuation, I wonder what investors will think about a meet-and-blah 3Q.

      1. It’s OK. This is HR, home of the “no edit button” comment post. We all know how to read typos here.

        This is actually a really good observation. As someone who has worked with AI a whole lot and seen just what an unproductive waste of time the current state of the art is, I agree: the few things it inarguably does well are right in Facebook’s wheelhouse. It also excels at the under-the-hood, analytic functions, much more so than the generative ones. And Meta is a colossal data mining operation.

        So, in this case, yeah, probably a good thing that they threw themselves into communications hardware, er, blockchain, er, virtual reality, er, AI. (Sorry, slip of the tongue.)

  3. What are the chances he changes the company name again to, say, Meta AI, only to watch no one want to use his AI any more than they want to use his Metaverse?

    You have to admit, it would be funny!

  4. AI is still mostly projection. Unlike blockchain, which as a tool and not a currency is still trying to find its raison d’être, I believe AI has lots of utility, but so far mostly as dreams rather than profits, except for the AI infrastructure guys.

    1. You’ll like this link, then:

      https://www.upwork.com/research/ai-enhanced-work-models

      “Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way. For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), invest more time learning to use these tools (23%), and are now being asked to do more work (21%). Forty percent of employees feel their company is asking too much of them when it comes to AI.”

      That said, the tools should get better.

      1. Thanks for the link, JL.

        The parallels with the genome/mRNA trade are striking. Both technologies will prove useful, but they will take time to flower. Time is something most living and non-living market movers do not seem to have in recent years.

        1. For sure it has and will have uses. But with all respect, what percentage of the US workforce or economic output comes from coders these days, between “outsourced to India” and the rollout of “drag & drop” code silos for many enterprise-level products?

          1. Do you know what, though? So much of a software engineer’s job isn’t coding at all… Maybe I have a different perspective because I’ve been a self-employed freelancer for my entire career: not a team member working under a program manager who hands out tasks, but a guy who walks into businesses and solves their problems. But easily half my job is psychology, pure and simple. I’m not in the business of writing software. I’m in the business of making people happy. Software development is the tool I use to do that, but it’s just a tool. And no amount of a neural net ingesting a corpus of code samples from GitHub is going to teach it that.

            (Says the guy who’s always complaining about how he’s been out of work 15 months and is afraid his consulting career has finally collapsed for good. I know, I know. I’m speaking in the big picture. The current crisis in IT careers is due to tax law changes, not technological changes.)

        2. I would really like to see your prompts. This software engineer has never had his time be anything but wasted by trying to enlist the help of AI. I had a copilot (not a generalized LLM, but one actually intended to help programmers) consistently and repeatedly give me false information, including consecutive, mutually exclusive answers on basic programming facts. I mean literally saying things like “XYZ evaluates to true in language A” and then, in the very next answer, “XYZ evaluates to false in language A”. I’ve lost track of the times it came back with 95% of the correct answer, and then, when I pointed out the flaw, fixed it and introduced two new bugs, which, when I pointed them out, it fixed and then introduced three new bugs, one of which was the reappearance of the original bug it had already fixed. I go in circles like that for a few hours, by which time it has completely forgotten the original requirements and is off in some insane hallucination it cooked up entirely on its own, before I give up.

          Every now and again the stopped clock is right and I do actually get a working answer out of it, but it’s wrong so often that I would say, even including the occasional successes, bringing in AI to help with programming consistently makes things take an average of, I don’t know, maybe 4 to 6 times longer than just looking info up on Stack Overflow and coding solutions myself. And then there’s the fact that even when it seems at first to have gotten it right, often it has overlooked very simple techniques or important basic facts in favor of some abstruse, overcomplicated solution. Or, worse (and I’ve been screwed over by this), it seems to work but contains a subtle enough bug that you don’t notice it’s broken until days later, when you’ve already built a ton of stuff on top of it. Overall, it relies very strongly on you already knowing the answer, or on you being able to spot its mistakes, which makes it too unreliable to use. I can’t use a device that spits out “answers” that I need to correct myself.

          I will say I have many times heard other programmers say that it helps them, and it astounds me every time. I’ve just never seen it (and typically, if I have a question, I run it through all three of ChatGPT 4o, Claude 3 Opus, and Copilot). But I have yet to see one of them actually be able to show me an example that I can see for myself.

          Copilot is really excellent if you have small things, certainly no more than two or three short lines, that you need changed in your existing code. And its predictive autocomplete is, seriously, one of the most amazing things I’ve ever seen; in the context of limited bits of code it’s truly astounding. But these are only good for very, very small tasks and corrections. As far as actual programming? Far from jeopardizing my future work prospects, I feel like these things are guaranteeing them, by very consistently producing code that requires human programmers to fix.

        3. An added thought: sadly, AI will also help every coder in Bangalore just as well. So you will have to produce more and cut your compensation just to stay even. That’s not all that different from today, I suppose?

          1. It won’t, though, because those programmers in Bangalore don’t know how to troubleshoot and fix the badly broken code that AI almost invariably produces.

    2. Au contraire, mon frère. Blockchain has significant practical limitations that it has yet to overcome, but as far as potential goes, I’ve been a technologist a really long time, and it’s been many decades since I saw a technology whose potential excited me as much as blockchain’s. The practical hurdles are substantial, and if they’re ever overcome, it will take a long time. But what people fundamentally misunderstand is that blockchain isn’t about cryptocurrency; it’s about decentralized apps. Imagine being able to create the next Uber or Amazon without renting a server farm, because the software runs on the money itself. Unlike AI, the economic gains are clear and direct. They’re going to need to find a way to do it without every app consuming as much energy in a year as a small nation, but if they can, to me that’s the most exciting potential advance on the horizon among the trendy technologies that have arisen in the last few years.

      AI, in the meantime, absolutely can do some incredible things, but it absolutely cannot do, and likely never will do in the foreseeable future, some of the amazing things people are already attributing to it.

  5. I know nothing about the current state of all AI output, but the numbers bandied about where I go are like “90% accurate” and “95% accurate”. Check with CrowdStrike about error rates. We got to the moon on six-sigma minimums and goals of zero defects. AI today seems happy with one to two sigma. That’s not even horseshoes-close for medicine, science, programming, finance, quality management, and other high-value applications. Not even remotely ready for primetime, as I see it.
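
    For a rough sense of the sigma arithmetic, here is a back-of-the-envelope sketch in Python (a toy, not a quality-engineering tool; the 1.5-sigma "drift" allowance is the convention from the Six Sigma literature, and conventions vary, so treat the exact figures loosely):

    ```python
    import math

    def phi(x):
        # Standard normal CDF.
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    for k in range(1, 7):
        raw = phi(k)            # one-sided yield at k-sigma, no drift
        shifted = phi(k - 1.5)  # Six Sigma convention: allow 1.5-sigma drift
        print(f"{k}-sigma yield: raw {raw:.5%}, with 1.5-sigma drift {shifted:.5%}")
    ```

    Either way you slice it, “90-95% accurate” lands in the one-to-three sigma neighborhood, while six sigma means a few defects per million at worst.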

    1. I hope you are correct. However, I fear a slippage in “high quality outcomes” across many disciplines in the US, and an acceptance of that situation.
      For example, in medicine, a more and more common scenario is: one has an issue and goes to a doctor. The doctor prescribes medication and/or surgery that, at best, only partially improves the situation, but also causes secondary problems from the medication or surgery. The US has the highest rate of spinal surgeries in the world, and reported rates of successful outcomes are surprisingly low. Turns out what the doc should have told the patient was: eat healthier food, get outside more, walk/exercise more, lose weight, and stop drinking. This comes from a surgical nurse (not me 🙂 )

    2. Sorry to sound like a broken record, but the people throwing out numbers like 90% accuracy are the optimists. In practice, I’ve found AI to be certainly no more reliable than a coin flip, except without a coin flip’s decisive outcome: when a coin comes up heads, it’s really heads, and you don’t need to spend extra time confirming that it’s not simply telling you it came up heads when it actually came up fruit basket or tennis.

      1. No doubt you are familiar with the GIGO concept (garbage-in-garbage-out). AI takes a lot of power and time to crunch data that still needs intervention later to discover what was actually garbage in the input.

        1. Yeah, but it’s getting a lot better, very fast. I’ve mentioned this before, but the difference between AI image creation now and a year ago isn’t just night and day; it’s not comparable in any way, shape or form. These things went from barely being able to produce a passable picture of a bear in 15 tries last summer, to now turning out almost perfect illustrations of epic battles between bulls and bears in one try. And now I can talk to the model in very casual terms when I want to tweak something after the fact. So, I’ll say, “A wide-aspect-ratio illustration of a giant bear fighting a giant bull in a dystopian city,” and it’ll produce something that’s damn near perfect in about 10 seconds flat, and then it’ll ask me if I need any adjustments. And I can be very colloquial/conversational with it. In fact, it’s better if you’re less formal. So I can say, for example, “Ok, so this is good, but it’d be cool if the bear was tossing a lightning bolt at the bull — like Zeus or something.” And the model will jump right on it, then explain itself in equally conversational terms once it’s done. And it seems to genuinely appreciate it if you tell it how good of a job it did. This is all very efficient most of the time. And now that it’s so accurate, the cost for each one has gone down massively.

          1. Of course, the caveat is that the stakes are very, very low when you use AI for what I use it for. If it puts too many claws on a bear’s paw, or the tail on a bull is slightly incorrect anatomically, or the angles on a building in a city it draws would annoy an architect, nobody notices or cares and there are no consequences. I wouldn’t use it for anything where the stakes are higher than overwrought pictures of animal fights.

          2. You answered a question I’ve been meaning to ask for a while about the source of your post illustrations — that is darn fine work. And yes, AI for illustration will continue to serve well if you don’t stress it out too much. I started an LLC for a consulting gig earlier this year, and AI (specifically MidJourney) helped me get close to a brand logo that I liked. I still had to do the “last mile” stuff with drawing a logo out on a grid to get the correct format for a website display and business cards (which are still in use).

            Ironically, the consulting gig involves software architecture, which I don’t want to entrust to AI for the reasons Chewbacco has pointed out, plus more. It’s probably because doing many aspects of software correctly requires a very, very broad range of experience.

            I’ll grudgingly believe in AI products in other areas (such as auto insurance), but only because the potential amount of input data is reasonably well focused.

          3. Your comment below sums up what I was about to point out reading the above: in illustration, there’s no “correct answer”, and the fault tolerances are huge. Not so in real knowledge work, where the outcome is binary: it’s factually accurate, or it’s not. An algorithm either performs to stated requirements, always and forever, or it doesn’t.

            As it happens, I’m a very avid AI artist too. I have a whole website of galleries, and here I wish I were more comfortable giving PII, because I’d love to substantiate that by letting you see just how much work I’ve done with them. Suffice to say I’ve been watching the advances up close because I’ve been using these tools regularly, from VQGAN on up through DALL-E and Stable Diffusion (and I’m seriously looking forward to getting to try out FLUX!). So I’m very impressed and excited by the possibilities. And, as an aside, by plenty of other things about AI too, just not the things about them that most people talk about.

            But, anyway, the advances in rendering images from text prompts don’t translate into an advancement in knowledge work.

            For instance, you can also tell an LLM “Tell me the story of the original Star Wars, in limericks”, or some such, and it can do that too, which is impressive any way you slice it. But after it gives you the amazing 30-limerick version of “Star Wars”, try giving it a rigorous request that has an actual right or wrong answer, like “Now rephrase that, avoiding any words containing the letter ‘o’”, and watch it fall flat on its face in mechanically stupid fashion, over and over and over again, telling you all the while that it’s succeeding. Here, see for yourself: https://poe.com/s/EPl5eotMfvgwENtsM1qQ That hasn’t improved at all.
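
            The fun part is that this particular failure is trivially machine-checkable, which is what makes the right-or-wrong-answer distinction so stark. A throwaway sketch (the function name is mine, nothing standard):

            ```python
            def violations(text, letter="o"):
                # List the words that break a "no words containing <letter>" constraint.
                return [w for w in text.split() if letter in w.lower()]

            print(violations("A long time past, in a galaxy far away"))
            # ['long'] -- exactly the kind of miss the model will cheerfully deny
            ```

            A few lines of code can grade the output instantly; the model still can’t reliably produce output that passes.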

            And… er… much love, but with all due respect, I’m kind of shocked to hear you say something like “it seems to genuinely appreciate it”. I’m sure you know that’s akin to praising a wooden cuckoo for its punctuality, right? And the problem is, with the cuckoo, once you’ve seen it strike the hour accurately, you can assume it always will. An AI can get it right 100 times in a row, and you will still need to manually double-check every answer, because the 101st might be confidently, completely wrong.

            Here’s a parallel example to what you experienced with it seeming appreciative: I once had GPT tell me, “I need more time to work on this, can we set up an appointment to talk more about this next week?” and then discuss its schedule, walk me through an appointment-setting process, and decide on a time when it would be convenient for it to call me back with a result. All this despite the fact that it has no capacity to process anything outside the bounds of the active conversation, cannot pick up a previous conversation again in the future, has no schedule, and cannot reach out and make calls.

            The “sincere appreciation” you experienced was the same thing: just tokens (not even words, tokens; they might as well be hieroglyphics or geometric shapes) strung together into realistic-sounding hogwash by a system whose statistical rules for stringing tokens together are based on quantitative analysis of previously observed relationships between them.

            It’s a mechanism, much more complex than a cuckoo clock but no less mechanical. A simple version of it has even been implemented in Excel: https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpts-ancestor-gpt-2-jammed-into-125gb-excel-sheet-llm-runs-inside-a-spreadsheet-that-you-can-download-from-github. Nothing at all is happening other than some very advanced mathematics, any more than an old arcade automaton is conscious of what it’s mimicking.
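
            If it helps to see how little machinery the core trick needs, here’s a toy version boiled all the way down to bigram counts (real models use learned weights over subword tokens rather than literal counts, but “sample the next token from observed statistics” is the heart of it):

            ```python
            import random
            from collections import Counter, defaultdict

            def train_bigrams(corpus):
                # Count how often each token follows each other token.
                counts = defaultdict(Counter)
                tokens = corpus.split()
                for a, b in zip(tokens, tokens[1:]):
                    counts[a][b] += 1
                return counts

            def next_token(counts, current):
                # Sample the next token in proportion to observed frequencies.
                options = counts.get(current)
                if not options:
                    return None
                toks, weights = zip(*options.items())
                return random.choices(toks, weights=weights)[0]

            model = train_bigrams("the model strings tokens together and the model sounds sincere")
            tok, out = "the", ["the"]
            for _ in range(6):
                tok = next_token(model, tok)
                if tok is None:
                    break
                out.append(tok)
            print(" ".join(out))  # fluent-looking, with zero understanding behind it
            ```

            Scale the counts up to billions of learned parameters and the hogwash gets eerily fluent, but the cuckoo-clock point stands: it’s statistics, not sincerity.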
