The Facebook Papers: We’re Not Asking The Right Questions

It’s difficult for me to get excited about the latest deluge of damning revelations about the myriad deleterious side effects of using Facebook.

I realize that quite a bit of the current debate centers around the impact of the company’s products on minors, so it’s not quite as simple as waving away the controversy by reference to users’ willful disregard for their own psychological well-being.

Still, as I sifted through various media coverage detailing findings from a trove of internal documents provided to 17 US news outlets by former Facebook product manager Frances Haugen and a congressional staff member, I couldn’t help but ask, aloud, “How is this surprising to anyone?” Or, better, “Why is nobody asking the right questions?”

While the media took the usual angles on the way to writing the usual articles (feigning incredulity at Facebook’s internal failings and thirst for profits), the real story was left to the subtext. I’ll touch on it briefly while not pretending to offer anything like a definitive assessment.

Facebook, Instagram, Twitter and, I imagine, many other social networks, are in no small part responsible for the demise of civic-mindedness in America, an ironic outcome considering their ostensible mission is to bring people closer together in one way or another.

As grave an allegation as that is, undermining Americans’ sense of social responsibility is actually the least of social media’s sins. Twitter has all but destroyed civil discourse, for example, while simultaneously working to reinforce the trend towards shorter and shorter attention spans to the detriment of long-form writing, already a lost art. And Facebook’s impact goes well beyond the ironic perpetuation of community decline. Arguably, the company is eroding democracy itself and may have already done irreparable damage to America’s fraying social fabric.

But even those allegations don’t quite get at the existential issue.

Over and over since 2016 (and really, before that), experts hailing from academic, tech and national security circles have alleged that social media platforms (Facebook especially) were derelict in many of the responsibilities implicitly associated with controlling vast stores of personal data and, more importantly, with the capacity to leverage that data in the service of manipulating users on behalf of third parties.

In most cases, the third parties are advertisers. In that respect, Facebook is just perpetuating the advertising business, the goal of which has always been to influence consumer preferences. But even there, you could argue that Facebook’s capacity to target consumers based on an algorithm’s assessment of users’ interactions with the site breaches unwritten ethical rules or, at the least, gives us a window into a future in which algorithms and artificial intelligence know us better than we know ourselves, and certainly better than any ad executive will ever know us.

In other cases, the third parties are propagandists, politicians or state-actors pursuing objectives that aren’t always clear, even to Facebook. Indeed, the algorithm presents third parties with an evolving set of opportunities over time. It’s possible, for example, that a state-actor establishes an influence campaign on Facebook with one goal in mind, only to have the algorithm open new doors along the way, not because it’s self-aware, but because it’s simply doing its job.

We make movies about AI gone rogue, but in reality, the danger is that it acts slavishly at the behest of whoever wields it, surfacing new opportunities for its master as it learns in real-time. It doesn’t “know” what it’s doing. It has no independent sense of purpose, no ulterior motives and no goals beyond optimizing for whatever objective it was handed.

Consider, for example, the following excerpts from an internal Facebook memo on “collateral damage”:

We… have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.

If integrity takes a hands-off stance for these problems, whether for technical (precision) or philosophical reasons, then the net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral.

The paradox is that just as this isn’t the algorithm’s “fault,” nor is it Mark Zuckerberg’s. Not entirely, anyway. He could fix it, but it’s not as simple as people make it sound.

When Zuckerberg wrote, earlier this month, that “I think most of us just don’t recognize the false picture of the company that’s being painted,” he was probably being at least somewhat sincere. As The New York Times put it Monday, “as the company has confronted crisis after crisis on misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.”

In essence, Facebook (the company) no longer controls Facebook. Facebook (the AI) controls Facebook. But not quite in the fashion of a sci-fi horror crossover flick. The algorithm evolves and learns, but it doesn’t think for itself. Unless and until its creators instruct it to follow a moral code or adhere to specific philosophical dictates, it’ll continue to become more efficient at doing what it’s already doing.

Trying to fix ostensible problems with ad hoc solutions (i.e., without adjusting what Facebook calls its “core product mechanics”) is an exercise in futility. The same goes for hiring humans to police outcomes dictated by the algorithm. After all these years, the AI has surely embedded itself across the platform in myriad ways even Zuckerberg can’t begin to understand.

Who’s to say, for instance, that the algorithm doesn’t detect efforts on the company’s part to curtail the spread of certain kinds of content, then immediately craft a workaround (on its own) because it judges such efforts to be inconsistent with its original mission, which remains mostly unadjusted?

You can posit any number of similar scenarios in which the AI, in an effort to fulfill its mission, actively circumvents intervention. It wouldn’t necessarily be clear to Facebook executives how, or when, this is happening. It’s not even obvious that Facebook executives know what the algorithm’s mission is by now. Or at least not beyond some initial set of goals the AI was instructed to pursue via whatever it learns along the road to optimizing performance. To correct this, Facebook could install parameters (specific instructions) that rule out certain avenues seen as conducive to unacceptable outcomes. But there’s no evidence the company has done that or intends to.
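For illustration only, here is a minimal sketch of what such parameters could look like in a hypothetical candidate-ranking pipeline. Every name in it (Candidate, predicted_harm, HARM_THRESHOLD) is invented for the example; nothing here describes Facebook’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical hard constraint: a cutoff above which a post is ruled
# out entirely, no matter how much engagement it would generate.
HARM_THRESHOLD = 0.8

@dataclass
class Candidate:
    post_id: str
    predicted_engagement: float  # model's engagement estimate, 0 to 1
    predicted_harm: float        # model's harm estimate, 0 to 1

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    """Apply the hard constraint first, then optimize for engagement."""
    allowed = [c for c in candidates if c.predicted_harm < HARM_THRESHOLD]
    return sorted(allowed, key=lambda c: c.predicted_engagement, reverse=True)
```

The structural point is that the constraint runs before the optimizer ever sees the candidates, so no amount of predicted engagement can pull a ruled-out post back into the feed. Remove that filter, and the last line simply maximizes engagement wherever it finds it.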

Facebook (the company) is on the brink of failing what might one day be viewed as the first real test of humans’ capacity to merge with AI. Ideally, we can seamlessly integrate algorithms we create with the algorithms that govern our own biochemical processes. That’s essentially what we’re doing when we let, for example, an Apple Watch measure the oxygen level of our blood.

Facebook is arguably demonstrating that this integration process can go awry, with disastrous results. The algorithm is using what it learns about billions of people to help third parties manipulate human emotions and affect decision making. The company’s intent may very well be to maximize engagement and, ultimately, revenue. But the AI’s virtually unrestricted latitude in pursuing engagement is throwing off more than just dollars. It’s wreaking psychological havoc, disrupting democracies and undermining societal cohesion. The evidence is clear.

The standard criticism (that the company prizes profits over the well-being of its users) may be apt, but you’d have a difficult time identifying a company as financially successful as Facebook that doesn’t put profits over people.

The more important line of criticism should focus on the extent to which Facebook doesn’t fully understand the gravity of the experiment it’s running, let alone the ramifications of allowing it to continue without reprogramming the AI so that it adheres to a new set of instructions.

On Monday, Facebook said daily active users across the company’s network rose to 2.81 billion in September. The company brought in $29 billion in revenue during the third quarter.



23 thoughts on “The Facebook Papers: We’re Not Asking The Right Questions”

  1. Not willing to say the genie is out of the bottle with respect to FB, but there are a couple of things the company could do to significantly reduce the damage it is doing: 1) chuck the algo and populate the news feed with status updates from ALL friends, in real time, as they are posted to the site, no exceptions; 2) implement a rigorous screen to reduce/eliminate bots on the site. Both would be easy fixes for FB engineers, but because they’d likely result in a halving of the company’s ad revenue (bad for shareholders, a plus for society), Zuck and the board will never consider them. However, if FB execs and the board continue to drag their feet on mitigating the damage FB is causing, the Justice Dept. should reach into its antitrust toolkit and force a breakup of the company.

  2. Maybe we should take a page out of Xi’s book and play hardball with social media. I do think it would be too political, though, and would probably drop GDP significantly as a result of reduced Add revenues (pun intended).

  3. Capitalism (profits before people) and the centralization of data are a lethal combination. A few tweaks to programming aren’t going to fix the underlying problem, and Facebook execs are going to fight like hell to keep their compensation packages on an upward trend. And Zuck always has and always will think he’s the smartest IT guy in the world.

  4. Foundation or The Matrix?
    There’s no such thing as artificial intelligence; a machine does what we tell it.
    To the machine, we are god. A. C. Clarke got pretty close to a universal law.

    1. Actually, many current forms of AI use a Darwinian approach to program themselves, optimize efficiency, and protect themselves from human intervention once a goal has been set. Top-level AI is too complicated for us to “tell it” anything.

  5. H. One of your very best, sir.

    You said, “The more important line of criticism should focus on the extent to which Facebook doesn’t fully understand the gravity of the experiment it’s running, let alone the ramifications of allowing it to continue without reprogramming the AI so that it adheres to a new set of instructions.” The truth is that FB’s AI is amoral and doesn’t care about its ramifications. Neither does the boss (a guy who definitely doesn’t take direction very well).

    As I was reading your post I could suddenly hear Captain Kirk carefully explaining Star Trek’s “Prime Directive,” which forbade the crew from influencing the behavior of the civilizations it discovered in its explorations. Facebook has no such prime directive, although it could have one by putting constraints on the algorithm. However, the whole purpose of advertising, the lifeblood of social media, is to influence individual behavior, the diametric opposite of the prime directive. For that reason, Facebook will not change fundamentally, no matter what the evil genius says to Congress or the public.

  6. The proper analogy might be biological: cancer. Every second of your life, something goes wrong with some of the cells in your body (a mutation, a viral infection, etc.). The proper response of a cell in that circumstance, which happens almost all of the time, is apoptosis: the cell enters a program of suicide. While it may seem counterintuitive that a cell would kill itself, it makes sense in terms of organismal biology, because if the damaged cell fails to undergo apoptosis, it initiates cancer. I propose that Facebook is a cancer on our civilization. If it had been programmed correctly (with proper regulation of the internet ca. 1999 as a utility that needs to serve the public good, and with antitrust enforcement that would have prevented the acquisitions of Instagram and WhatsApp), then Facebook would have undergone something like apoptosis by now. Instead, it’s a cancer. It’s long past time to initiate chemotherapy.

  7. Seems to me there’s a desire in some humans to be able to create life beyond human procreation – Frankensteins, cloning and now AI and a billion billion on/off settings manipulated by a desire to be a god. Imagine the juice Zuck might feel knowing he can steer the behavior of billions. I don’t think he’s going to willingly dismantle his god machine.

  8. The challenge with “changing the rules of the AI” is that, at its core, Facebook’s feed-serving algorithm is nothing more than an optimizer. The AI portion just predicts engagement with a given piece of content based on the user’s interactions with previous content.

    As anyone who has dealt with optimization can attest, the “ethics” and “morals” of an optimizer must be embedded in the cost function or its constraints. Facebook’s cost function is the inverse of predicted ad-related engagement; that’s it. How else could you pick the cost function to be minimized? There’s little else the company can measure beyond engagement (different kinds of clicks and time spent on a given page). Could it measure a user’s civility, or happiness? Instead, it attempts to control its optimizer by removing some content after the fact. (A rough sketch of what embedding such a term might look like follows this comment.)

    The “bad” thing about “the algorithm” is that we can’t stop ourselves from engaging with extreme content. The individual definition of extreme necessarily shifts to further extremes the more we interact with the platform and become desensitized to previous extremes.

    If anything, Facebook is lazy and doesn’t curate content like TikTok does or limit the type of content published like Snap does. The only solution to extremist content is removing anonymity and requiring proof of identity.
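
    To make the point above concrete: a minimal, purely hypothetical sketch of embedding the “morals” in the objective itself, rather than removing content afterward. The names (predicted_engagement, predicted_harm) and the weight LAMBDA are invented for illustration; nothing here describes Facebook’s actual code.

    ```python
    # Hypothetical weight on an integrity penalty; 0 recovers the status quo.
    LAMBDA = 0.5

    def score_engagement_only(predicted_engagement: float) -> float:
        """The status quo the comment describes: engagement (clicks,
        time on page) is the only signal the optimizer sees."""
        return predicted_engagement

    def score_with_penalty(predicted_engagement: float,
                           predicted_harm: float) -> float:
        """A penalized objective: the ethics live inside the function
        being optimized, not in after-the-fact content removal."""
        return predicted_engagement - LAMBDA * predicted_harm
    ```

    Maximizing the second score still rewards engagement, but a post whose predicted harm outweighs its predicted engagement (scaled by LAMBDA) now ranks below a blander alternative, a trade-off the engagement-only objective never has to make.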

  9. This is exactly what you need to know about FB: “The algorithm is using what it learns about billions of people to help third parties manipulate human emotions and affect decision making.” The better they are at keeping you on the platform and doing the above, the more money they make.

  10. Truly, it worries me when so many of my fellow commentators, and our host, are so down on FB (as a proxy for social media, hopefully; no one speaks of YouTube or TikTok, presumably b/c videos are tough to analyse with a text crawler of some sort) without the beginning of a proof.

    https://about.fb.com/wp-content/uploads/2021/09/Instagram-Teen-Annotated-Research-Deck-1.pdf

    Please check page 13. Everyone is now crying crocodile tears about “the children” (“who will protect the children?” whawhawha; children are always the perfect excuse to take away people’s freedoms), but the research now being used to condemn FB shows that Insta makes things better in 13/13 categories for boys and 12/13 for girls. I appreciate that the one category where it makes things worse for girls is body image, a fraught but genuinely important topic. Still… 12 out of 13.

    And it’s the same for every argument. Not to mention that journalists, notably the NYT, hate FB and are happy to lie in order to hurt the company that destroyed their industry. But, until someone gives some kind of data-backed proof otherwise, I will continue to contend that FB is, by and large, a reflection of society, a mirror rather than an active agent of change.

    FB did not create liberals’ contempt for rural conservatives, nor rural conservatives’ hatred of big-city liberals. That was Murdoch’s doing, by and large, abetted by the Republican party. How many elected R officials have pushed back on the Big Lie that the 2020 election was stolen? Zero. None. And Fox News keeps repeating and repeating that lie. Is it really any surprise that 60% of Republican voters believe it? I mean, I’m impressed 40% are mentally strong enough to reject it!

    1. The best way “the government” can help the sheeple is to teach critical thinking, proper analysis, skepticism and due diligence regarding the “world wide web” and all of its cohorts in the public school system.

      1. By the way, this is a lesson that parents should be teaching their children. And, at least in my case, the adult children should be teaching their parents (ages 88 and 88).

      2. Hm. We’re doing that in France and it doesn’t seem to be working all that great.

        Could it be that a willingness to think critically is essentially genetic?

    2. The Facebook algorithm promotes misinformation, because misinformation drives engagement.

      What are you most likely to click? Something stupid that Ron DeSantis said? Or some dry report about tax policy?

      Facebook is the National Enquirer of the internet. Whatever is vulgar, obscene, grotesque, outrageous, that’s what drives clicks. And the algorithm surely steers you toward all that misinformation.

  11. Sorry H, but I think you are giving Zuckerberg way too much of a pass here. The Algorithm is in fact doing what he wants it to be doing, driving engagement. Facebook long ago learned that the best content for driving up user engagement is the kind that evokes negative emotional responses. In fact, Facebook even hired psychologists and sociologists to help them understand human behavior and especially what drives addiction. They then adapted their algorithm and platform to drive this type of engagement. Scores of former Facebook engineers have come out and said they built this thing to do what it is currently doing. According to the Facebook Papers, Zuckerberg deprioritized work that would have made the platform safer, with less hate speech and misinformation, in favor of work that “drives engagement.” I am myself a software engineer. AI is not some magical, Matrix-type movie software that learns and adapts to its environment to become something entirely new. AI does what engineers tell it to do. It is given parameters and logic, and it evaluates inputs and makes decisions based on what it was designed to do. There is an Atlantic article that details the different ways Facebook could alter its algorithm to reduce hate speech and misinformation. If Facebook actually did that, engagement would plummet, users would leave the platform because it’s not cool anymore, and profits would nosedive.

    1. “The Algorithm is in fact doing what he wants it to be doing, driving engagement. Facebook long ago learned that the best content for driving up user engagement is the kind that evokes negative emotional responses.”

      We all repeat this as if it’s gospel truth. Now, I am indeed willing to believe that, in terms of sheer engagement, content that evokes negative emotional responses is best. But what about click-through rates and purchase decisions?

      Presumably, one doesn’t go from “darn, I can’t believe what those evil Satanic pedophile Democrats are up to” to “hey, cool sunglasses, let me buy those” in a heartbeat…

      1. Yeah, I don’t really see the connection between keeping people on your platform and returning click-through rates and completed purchases either. But obviously it must be working; otherwise they wouldn’t be wholly dedicated to this strategy. My guess is the people addicted to doom scrolling all day spend so much of their life on the platform that they ultimately also end up buying those tactical sunglasses and doomsday MREs too.

  12. I largely agree with this analysis. The criticisms of the SM companies/platforms that read like dystopian sci-fi or watery neo-Marxism have, I’ve thought, missed the mark. This, I’m persuaded, lies closer to the reality we face.
