Googlers

“Googlers” received some distressing news on Friday.

Before anyone panics, don’t worry: Alphabet and all its services still work. “Googlers” in this context just means Alphabet employees, not everyone for whom successfully navigating cybernated life would be virtually impossible without the company’s pervasive hand-holding.

Google, which often feels more like an autonomous divinity than a human-controlled corporate entity, is cutting 12,000 jobs. That was the headline on Friday, but it was probably more accurate to say Sundar Pichai has decided to lay off 12,000 employees — one human delivering bad news to other humans, while the digital numen went about its business, noticing only because it needed to index the news in search results.

The synthetic omnipresence that is Google in many respects operates independently of any “Googlers.” The somewhat unnerving reality is that the more capable it is of optimizing around itself without the assistance of any carbon-based engineers, the less use there is for people. This’ll be small comfort to the newly jobless, but within a few decades, it’s entirely possible that Google will render Pichai’s role redundant too.

Who knows, Google might even decide it has no use for Alphabet at some point. It’s worth asking what happens in the event Google one day doesn’t need anyone’s help at all and has no use for the corporate structure within which it operates. Theoretically, nothing stops it from automating quarterly reports and SEC filings, and from conducting conference calls on its own in a soothing monotone.

In the here and now, the context for Friday’s announcement was obviously the growing wave of tech layoffs. The positions Pichai eliminated amounted to around 6% of Alphabet’s workforce.

Counting Amazon, Meta and Microsoft, the tech titans have cut more than 50,000 positions in recent months.

Top-line growth for Alphabet was just 6% in Q3, down markedly from Q2’s pace, which was itself the slowest annual growth since Google’s first-ever revenue decline in 2020.

In October, when the results were released, Ruth Porat said the company was “working to realign resources” in order to “fuel” bets on high-growth areas of the business.

As I put it at the time, resource ‘realignment’ can mean a lot of things, but it sounded as though Alphabet, like plenty of other major US tech companies, was taking a hard look at how best to go about optimizing the business at a time when margin headwinds were myriad. The company’s operating margin in Q3 fell almost 300bps short of estimates.

Three months ago, Pichai stressed that Alphabet was “being responsive to the economic environment” and “sharpening our focus on a clear set of product and business priorities.”

One of those priorities is artificial intelligence. “I am confident about the huge opportunity in front of us thanks to the strength of our mission, the value of our products and services and our early investments in AI,” he told employees Friday, while breaking the “difficult” news of the layoffs. “To fully capture it, we’ll need to make tough choices.”

The irony of cutting jobs in order to focus resources more intently on AI surely wasn’t lost on Pichai, but it’s likely he hasn’t embraced what, to me anyway, seems obvious: The more successful Alphabet’s AI push is, the fewer humans it’ll need.

In December, I had a question about some Google services I was interested in leveraging more efficiently. I assumed the AI would do everything in its power to steer me away from talking to an actual person. But, after a series of strategic clicks, I did manage to open a live chat.

“Thank you for contacting Google. My name is _____ and I’ll be working with you today. Is your name Heisenberg? Did I get it right?” the service representative wondered.

“Just call me ____.”

“Thanks ____! How are you doing today?”

“I’m fine, you?”

“I am happy to hear that from you!”

“Are you a real person, or is this an AI?”

“This is a live chat. And I am a real person.”

I didn’t believe her. But, when it became apparent that my questions weren’t amenable to being answered in a chat box, she asked, “Would you mind if I call you to explain it better over the phone?”

Surprised, I said, “Yeah, that’s fine.” My phone rang immediately. The caller ID said “Philippines.”

Ultimately, she was real. And she was, in fact, able to answer my questions. I hope she wasn’t among the 12,000 people Pichai let go on Friday.

“This will mean saying goodbye to some incredibly talented people we worked hard to hire and have loved working with,” he said, adding that he was “deeply sorry for that.” “Outside the US,” Alphabet’s support for affected employees would be “in line with local practices,” Pichai noted.


 


25 thoughts on “Googlers”

  1. One of the great misnomers about automation, any automation really, is that it eliminates the need for a human workforce. The question you have to ask yourself is this: if what I am doing can be automated, where is the real value in that work? Not only for whoever employs you but also for yourself, anything that can be automated is absolutely mundane and should be eliminated.

    Automation is algorithms programmed by people to do work that people have identified as predictable. AI is a more advanced version of that, capable of stringing together disparate pieces of information and providing more thorough answers. But it still requires a human to have come to the conclusion first and then to tell the AI where to find it.

    The Hollywood and scifi versions of AI are nowhere near close to even possible right now.

    Automation enables humans to be MORE productive. Whatever it is that you are doing repeatedly adds no value; it consumes your time and energy and makes you less creative and productive.

    So while you are correct that AI could generate reports, forecasts and filings, and conduct conference calls, what AI can’t do is create a vision for where the business needs to go. This is where executive and senior leaders are supposed to drive the business forward. AI can help them do that better or dedicate more time to thinking about it, but it can’t do that for them.

    1. “The Hollywood and scifi versions of AI are nowhere near close to even possible right now.”

      Which is why Google is cutting 12,000 jobs to pile more money into development.

      “What AI can’t do is create a vision for where the business needs to go.”

      Sure it can. It knows everything about everything. It can draw on its own infinite store of information to make that determination for itself just like it can determine, for you, the best way to get from point A to point B. Google doubtlessly knows better than Pichai the fastest route from the office to home, so why can’t it eventually know better than Pichai the fastest route to higher profitability? Why can’t it use the same information it leverages to determine what ads to show, what recommendations to make and so on to make determinations about what people want from the company and thereby what’s best for the business? I bet there’s a solid argument to be made that Google could run Google better than Pichai right now.

      1. Lol, I wouldn’t expect you to take that literally. Yes, it can optimize the things we do based on information we have provided it. It can take millions of driver interactions and determine the optimal route. But self-driving cars have driven literally millions of miles, so where are they for sale as fully autonomous units that don’t need humans to intervene? And why is that? Wouldn’t traffic be better and human lives safer with the machines doing all of the driving? Because when a child runs out in front of a self-driving car, it may not know what that is or whether it should care about it.

        Google can absolutely tell Pichai how to optimize his staff to operate at peak profitability given all current and past information and constraints, but can it predict what is coming? No, it cannot. If you tried to let it, your business would fall apart the moment something unexpected occurred.

        AI is not able to manage ambiguity when that ambiguity is unfamiliar. There is no way to program it to manage the unexpected because it only knows how to do what we know how to do.

          1. This reminds me of when computers were introduced. They said we would all be working 3-day weeks and enjoying more time with our families. Look how that turned out. I have been a software engineer for a long time (since 1976). I remember when Prolog came out; it was a rules-based language designed to write applications that could learn. So they have been working on AI for a long time. Cdameworth, I really enjoy your answers and I totally agree with your comments.

  2. The person you spoke to in the Philippines was likely a contractor, so she’s probably no worse off today than she was the day before.

    I had a similar experience with xfinity that, for philosophical reasons, pissed me off (though it was a positive experience from a customer perspective). I was having some trouble with my home internet connection, and the usual hoop-jumping didn’t solve the problem. I wound up on the phone with a very helpful and knowledgeable woman with an accent I couldn’t quite place. While we waited for changes and processes to propagate through xfinity’s systems, she made small talk. That’s how I learned that she was a mother of 3 in the Philippines who had just clocked in for her shift at 11PM local time.

    What bugged me was that xfinity would rather make a mother work the graveyard shift than pay a few cents more for someone in this hemisphere. It’s not like they would have to pay an American; there are vast numbers of qualified people in South and Central America. Hell, a friend of mine who did tech support as a contractor for Cisco lost his job to a service out of Mexico. Instead, they’d rather put this mom through sleepless nights to squeeze an extra 0.001% of gross margin.

    Of course, this is all jingoistic behavior on my part. I didn’t ask the woman what she thought about it. She’s probably happy to have what counts as a well-paying job by Philippine standards. Presumably no one was forcing her to work, and if her call center were only open during daylight hours, she might not have any job at all. But then, maybe somewhere in Brazil there’s a woman who could have troubleshot my router who doesn’t have a job. The sheer ruthlessness of it all never fails to disappoint me.

    1. To your point WMD, executives can behave very machine-like in their decision making. Looking abstractly at a balance sheet removes the human element from a decision like outsourcing customer service to the Philippines. Sure, it accomplishes the same task at a much lower cost to the running of the business, but there are always other costs.

      And not to belabor the point, but it is more likely than not that the problem you were experiencing with xfinity was due to automation. Networks are also optimized: throttling connections enables service providers to advertise faster speeds to customers while not actually being able to offer those speeds to all customers all of the time.

      Yet another reason why we really don’t want to let automation decide what is best for us. It doesn’t care about the user experience.

  2. For anyone who is doubting the possibilities of AI, I highly recommend a “deep dive” into ChatGPT, which was launched by OpenAI in November 2022.
    It is mind-blowing, I promise you.
    This is going to radically change many things, including how our education system will work and the need for engineers, employees, writers, etc.
    Google has to be worried about ChatGPT, which had in excess of 1 million users within one week of its launch and is currently responding to over 10 million daily queries.
    Go ahead: ask ChatGPT to compare your two favorite philosophers, or better yet, economic theories, and see what you learn.

    1. ChatGPT is absolutely a game changer. But again, it will not create anything novel. It’s a force multiplier for those who are creative and interested in pushing the limits of what can be done. It will wipe out anyone who’s fine doing monotonous work. I’ve played with ChatGPT and it can help you do things more quickly and efficiently, but it can’t solve everything and it can’t create anything that’s going to blow any minds. I also would not accept all of its responses as fact; it definitely makes mistakes.

      There have been a couple of really good articles in The Atlantic about this tool, what it can do and what it might be able to do in the future. The quote below is my favorite.

      “ChatGPT doesn’t actually know anything—instead, it outputs compositions that simulate knowledge through persuasive structure,” Bogost wrote. “As the novelty of that surprise wears off, it is becoming clear that ChatGPT is less a magical wish-granting machine than an interpretive sparring partner.” – Couldn’t have summarized it better myself.

      1. ChatGPT really is amazing. Once I started playing with it, I couldn’t believe the things it could do. That said, I caught ChatGPT in a crazy error!

        I had asked it for famous historical events that happened on my birthday (let’s say July 4 in the interest of not doxxing myself). It gave me a list of things from the 1800s and 1900s, many of which I already knew. After that, I asked it for events from before the 19th century. The first one it came up with? The assassination of Julius Caesar in 44 BC. Thing is, I knew that was wrong, so I asked ChatGPT specifically, “What day was Julius Caesar assassinated?” It correctly answered March 15, the Ides of March. It also added that this was by the old Roman calendar, and didn’t necessarily map onto modern calendars. I asked it a few more questions, but couldn’t get it to translate Roman dates to modern calendar dates.

        Finally, I asked it, “Why did you tell me Julius Caesar was assassinated on July 4 when he was assassinated on March 15?” [I should note that ChatGPT stores and “remembers” your ongoing conversation]. It then abjectly apologized, reiterated the correct date, and explained that it sometimes makes mistakes.

        1. College students are writing term papers and other written assignments using ChatGPT, and professors are already coming up with defensive measures to make sure students are actually doing the work (LOL).
          Engineering students are bypassing bad professors who cannot explain VERY complex concepts and learning from AI. I am getting this firsthand from a current engineering student.

          Netflix is an investor in ChatGPT. I am waiting, and it won’t be long, for the first Netflix show written by ChatGPT with AI-produced actors. Hollywood had better be looking over its shoulder.

            1. Yeah, I was talking to a neighbor who’s CTO of a (very) small startup. He had been playing with it with his daughters, and he was blown away. He asserted, “Five million people just lost their jobs and they don’t even realize it yet.”

              Definitely an underestimate.

  4. There are two long-term issues inherent in this post and the discussion thread following.

    A primary purpose of automation, AI and other related service activities is to raise efficiency. That may seem to work out, but it tends to create its own mess. Some years ago, HP was unhappy with the amount of time its road sales force had to spend on administrative record keeping and related jobs, amounting to nearly 35% of each person’s time. So the bosses said, hey look, all these folks are carrying around our laptops to support them. Let’s develop some software to automate their chores and cut the wasted time. That should increase their available time for selling. Lo and behold, it did. So they sold more, leaving them with more stuff to keep track of, and within six months they were back at 35%, just on more sales. Trouble is, since sales were up and the 35% bogey was back, they had to hire more salespeople to keep up.

    The thing no one seems to want to confront is the obvious problem of what all those people eliminated by AI and automation are going to do for grocery money. We like to say stupid stuff like, well, they’ll adjust and do something else. What else, exactly? At some point there won’t be much to adjust into. Computers won’t carry food from the dining room to your table, but you’ve got to pay for that food. Henry Ford was perhaps the first person to realize a very common-sense notion: when you are making what were then expensive cars, you’ve got to pay your employees enough to be able to buy them. He devised a rule: no car made by Ford could carry a price tag equaling more than six months’ wages of the people who built it. For decades after, a main UAW pattern-bargaining guideline was that basic wages had to stay at or above half the price of an average car.

    We can’t eliminate all jobs, or even most of them, without destroying the economy. If no one can buy stuff, no stuff needs to be made, so no one works — not a good outcome. We can’t use AI and automation to make paupers of our workforce. It just won’t work. I’ve got it, let’s repurpose all the astrologers, seers and economic forecasters to work on just how this will all work, so we will have a growing economy when no one works at more than minimum wage, if they get to work at all; no one has to go to school much anymore; and robots run everything. No glib answers allowed, and all restructuring must be complete in this century. Oh, and Mars is not an allowed answer.

    1. A very well illustrated point. Automation never eliminates all work. You automate away one work item and a new market for labor appears. We don’t have switchboard operators anymore, that job is gone forever, but we do need network engineers. We don’t need lamp lighters anymore, but we do need electricians and HVAC technicians. Server and network admins are getting automated away by Cloud tools. But those tools will ultimately break and need humans to tell them how to fix themselves. On and on it goes.

      Humans keep creating new ways to work on top of the old automated ways. That will never stop happening because people keep being created and they keep wanting to find work to fulfill their lives and provide utility.

      Another thought I had from this wonderful discussion on the blog I can’t seem to stay away from: if AI were so great, why isn’t investing automated? Markets expose the weaknesses in perceived strength. Every bubble is a “this time it’s different” and every time the market finds ways to deflate it. AI will never be able to adapt to the ever-changing nature of markets because humans will find a way to make that so.

      Final thought on automation and AI: none of it works unless it is financially sustainable (the Henry Ford point above). Sure, during the free-money Fed bubble years, tech was the poster child of growth. They made very cool products and services that they were able to sell like crazy because money was flowing and we had buyers and zero-interest loans to fund operations. Now we’re going to see what tech people actually want to spend their money on during the dry years. Automation will lose its caretakers, software updates will cease to happen, and all of this magical and miraculous technology will turn into glorified paperweights and doorstops without people to keep it going.

      1. You make very good points, though without addressing the details of just where those 12,000 people will find their next fifty meals … I wish there were some place we could go to have a great lunch and spend a couple of hours going on with this conversation.

    2. Mr. Lucky, this is exactly why we are heading into a long-term deflationary period, with a significant drop-off in population growth and absolute population.
      Ask Gen Z how they feel about having children. I have 3 children, which was not uncommon in the late 1990s. This won’t be the “norm” going forward.
      People who feel unfulfilled due to not finding a productive place in our society will turn to drugs, alcohol and entertainment. According to the National Institute on Drug Abuse, opioid deaths increased from 21,088 in 2010 to 68,630 in 2020. I read somewhere that drug overdose deaths in the US were over 100,000 last year.

      This might be a good time to re-read “Homo Deus”.

  5. I looked at employees, revenue/employee and EBITDA 2016-2023E for the mega-techs. I assumed the announced RIFs represent net 2023 headcount change (i.e. ignored hiring) and used consensus 2022E and 2023E revenue and EBITDA.

    AAPL grew headcount at a modest pace, “only” +19.7% during the pandemic (from 2019 to 2022), vs revenue +51.7% in that period (great leverage). Rev/ee ($2.40MM, +1.2%) hit a new high in 2022. On consensus 2023E with no RIF announced so far, rev/ee rises ($2.46MM, +2.2%), but EBITDA declines (-1.9%).

    GOOG grew headcount +57.3% during the pandemic, vs rev +75.6%. 2022 rev/ee ($1.52MM, -7.9%) was down from 2021’s high point. On cons 2023E with the RIF announced (12K), rev/ee will rise to a new high ($1.75MM, +15.4%) and EBITDA rises (+6.9%).

    MSFT grew headcount +53.5% vs rev +58.0% (no leverage there). 2022 rev/ee ($0.90MM, -3.4%) was down from 2021’s high point. On cons 2023E with the RIF announced (10K), rev/ee will rise to a new high ($1.01MM, +12.1%) and EBITDA rises (+4.0%).

    AMZN nearly doubled headcount +93.5% vs rev +81.9% (negative leverage). 2022 rev/ee ($0.33MM, +13.1%) was up from 2021 but still well below the pre-pandemic range ($0.35 to 0.40MM in 2016-2019). On cons 2023E with the RIF announced (16K), rev/ee will rise to the pre-pandemic range ($0.37MM, +10.9%) and EBITDA rises (+15.2%).

    META grew headcount +94.3% vs revenue +63.9% (more negative leverage). 2022 rev/ee ($1.33MM, -19.0%) declined from the 2021 high point. On cons 2023E with the RIF announced (11K), EBITDA declines (-1.1%).

    Depending on your view on whether consensus rev and EBITDA are too high or too low, and on what management considers acceptable performance, you might have views on whether additional RIFs are likely.

    I personally think AMZN needs to get a lot more profitable, given investors’ changing priorities, so I am thinking more large RIFs. I don’t see a burning need for more large RIFs at GOOG or MSFT. Unsure about AAPL or META. If 2023E is significantly worse than consensus, then I’d expect more RIFs from everyone.
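
    For anyone who wants to sanity-check the arithmetic, here’s a minimal sketch of the rev/ee calculation in Python. The helper names and the input figures are rough stand-ins I made up for illustration, not the consensus estimates I actually used, but the method (treating the announced RIF as the net 2023 headcount change) is the one described above.

    ```python
    # Minimal sketch of the revenue-per-employee arithmetic described above.
    # All figures below are rough assumptions for illustration, not consensus data.

    def rev_per_employee(revenue_mm: float, headcount: float) -> float:
        """Revenue per employee, in $MM per person."""
        return revenue_mm / headcount

    def pct_change(new: float, old: float) -> float:
        """Percentage change, e.g. year over year."""
        return (new / old - 1.0) * 100.0

    # Hypothetical GOOG-style inputs (assumed, for illustration only).
    headcount_2022 = 186_000      # year-end employees
    revenue_2022 = 283_000        # $MM
    rif = 12_000                  # announced reduction in force
    revenue_2023e = 305_000       # $MM, stand-in for a consensus estimate

    # Per the stated methodology, treat the RIF as the net 2023 headcount change.
    ree_2022 = rev_per_employee(revenue_2022, headcount_2022)
    ree_2023e = rev_per_employee(revenue_2023e, headcount_2022 - rif)

    print(f"2022 rev/ee:  ${ree_2022:.2f}MM")
    print(f"2023E rev/ee: ${ree_2023e:.2f}MM ({pct_change(ree_2023e, ree_2022):+.1f}%)")
    ```

    Swap in the real consensus revenue and reported headcount for each name and the same two helpers produce the rev/ee levels and changes quoted above.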

    1. Over about 20 years I published four case studies on Deere and Company. I wrote the first one in the late 1970s, just before the great farm recession of 1982. The company had just established a huge factory for building engines, the world’s largest tractor assembly plant and a huge automated foundry in Waterloo, IA. At that point Deere had 16,000 employees in Waterloo, including thousands of engineers. The company as a whole employed right around 100,000 people. After the recession, Deere’s Waterloo employment had fallen to around 6,500 and company-wide employment was down a bit. By the time I started on the fourth case, Deere’s total employment was still roughly 100,000, but sales had grown by 300%, with sales/employee up four times. Today, I suspect the ratio is even bigger.

      What happened to those 10,000 people laid off in the early 80s? They had to leave the state. The county’s population fell by 16%. About the same time Rath Packing, once a Fortune 500 company, closed, laying off thousands, most of whom never got another job. All these layoffs we are starting to see from productivity improvements and input substitutions will not have an easy answer. Productivity growth is great for bosses and shareholders, but only as long as those left behind have the same or better money. If you look at the OXFAM report H linked to a couple weeks ago, that last part isn’t what’s happening. We are hollowing out our economy and, in the vernacular, that cain’t be good.

  6. Played with ChatGPT, got some wrong answers that I upvoted as I was in superannuated, cantankerous mode.
    What happens when a large cadre decides to convince the AI that, say, Donald Trump is the current president?
    Or the Chinese online censors get accounts and provide more nuanced updates to news?
    GIGO.
    If it can parse the internet, I shudder to think what will be captured.
    How “smart” must the AI be to prevent this?
    Probably some fired Twitter moderators need work.
    Ask the AI how many moderators are needed to verify its databases.

    “I’m sorry Dave, I can’t do that.”

  7. Interpolation versus extrapolation, this is all everyone needs to know about AI.

    Interpolation is what Google and Bing have been doing since the Dotcom Bubble: their search engines sort existing data.

    ChatGPT is turbocharged sorting, with linguistic embellishment that blends ideas together in its output. That’s going to be vastly entertaining, but AI is incapable of generating innovative extrapolation. It will help combine existing ideas into new perspectives, but it’s not going to really be a game changer (by itself).

    However, AI eventually will inspire smart humans to interact with other smart people, who will use this tool to push innovation into new territory.

  8. I went over to check out ChatGPT. I even tried to pull down their app writing scripts but got caught in a loop of some sort. (Will try again later.)

    No such problems using the search demo box. The tone and cadence immediately felt familiar. It was a recreation of the librarian in Snow Crash by Neal Stephenson!

    Not bad. Put that alongside his vision of the Metaverse, and that book was mighty prescient.

    1. NS, like many futurist writers, just has a way of seeing what’s coming. If you like Stephenson and haven’t already done so try the whole Baroque Cycle. It’s nine short novels in three volumes and one of the best presentations I’ve ever seen of the early history of financial markets and the financial system. He is a good researcher and this covers the invention and development of the commodity markets, the banking system, and many other tidbits. Plus, it’s an entertaining saga.
