Economists and academics aren’t exactly famous for producing models that approximate reality while attempting to make sense of it (reality, I mean).
Comical simplifications that bear no resemblance whatsoever to the real world are fixtures of academic social science research, as are highly tenuous assumptions. In many cases, the initial assumptions adopted prior to devising models are so obviously spurious that it’s impossible to take the ensuing analysis seriously.
Believe it or not, that isn’t meant as a derisive comment on academics or even the social sciences more generally. Rather, it’s an indictment of the way in which academics are compelled to treat the soft sciences if they (academics) are to have any hope of being published on the way to securing gainful employment as a card-carrying member of the ivory tower club.
Economics has always occupied a kind of grey area between the hard and soft sciences. “Economic man” (or woman) is a creature obsessively concerned with calculation, and because calculation is often synonymous with quantification, economics should be amenable to the derivation of laws and axioms. Only it isn’t. Because the human element makes it impossible to predict outcomes with anything like precision.
Whenever you introduce the human element into an equation, it’s not an equation anymore in any strict sense. The human brain is probably something that can be mapped and mathematized, but irony of ironies, we don’t use enough of our brains to make such an endeavor feasible. Or at least not yet. Until we do, human behavior is unpredictable, and as such, attempts to model phenomena that involve human behavior can only be fruitful by accident.
I bring this up on Monday because The New York Fed published an article called “Breaking Down the Market for Misinformation.” That’s a crucial topic in the social media era, and it’s becoming more urgent by the day as Elon Musk moves to reshape Twitter in the image of the faux libertarianism espoused by too many disaffected Americans and championed from abroad by autocratic regimes keen to sow domestic discord in Western democracies.
The NY Fed article was actually a summary of a recent staff report that carried the less reader-friendly title, “Misinformation in Social Media: The Role of Verification Incentives.” It’s 41 pages, and I wouldn’t want to trivialize the effort. At the same time, the paper is a case study in belabored academic meandering which, at best, can only hope to support common sense conclusions that any rational person would draw immediately if the relevant questions were posed on an ad hoc basis, and at worst risks muddying the waters with the kind of laughably spurious assumptions and dubious modeling mentioned here at the outset.
“We develop a model of misinformation wherein users’ decisions to verify and share news of unknown truthfulness interact with producers’ choices to generate fake content as two sides of a market that balance to deliver an equilibrium prevalence and pass-through of fake news,” the authors boldly declared.
Already, they’re off base. The market for misinformation isn’t something that can be modeled by a supply-demand function. Game theory might work, but you don’t need any of that to “develop a model of misinformation.” Misinformation is spread for one of two main reasons:
- The producer (who, in some of the more dangerous cases, can be a state actor) is attempting to influence public opinion for the purposes of achieving societal outcomes that are politically or militarily advantageous
- The producer is a profiteer seeking to monetize misinformation by leveraging the many overlaps between misinformation and what we commonly refer to in the internet era as “clickbait”
Crucially, those two can (and often do) overlap. Purveyors of misinformation are sometimes not state actors themselves, but rather profit-seeking agents who wittingly or unwittingly parrot the agenda of state actors. That model is prevalent in the spread of Kremlin propaganda. I could give you specific examples — but I won’t. Maybe some other time.
On the demand side, the market for misinformation in the Western world is a function of undereducation (which is conducive to gullibility), voter disaffection with the status quo (almost always identified with “the establishment”) and the simple “guilty pleasure” of indulging in tabloid-style content (which is mostly harmless in isolation, no different from Googling the latest “news” on Nessie).
Compare that to the way in which the NY Fed researchers described the market:
When encountering a news item, a user can choose to determine its veracity at a cost (of time and effort, for example) before deciding whether to share it or not. The decision to pursue verification thus involves weighing the cost of achieving certainty (by consulting a fact-check article, for example) against the risk of sharing misinformation. The probability of the latter is determined by the prevalence of misinformation on a given platform, which is determined by supply and demand forces. Specifically, producers of fake content benefit when users share such information without first bothering to verify it. As the fraction of these users increases, so does the pass-through of misinformation, thereby incentivizing the production (and hence prevalence) of false content, resulting in an upward-sloping “supply curve” of misinformation. Conversely, users are less likely to pass on unverified news when the risk of encountering fake content is higher. As the prevalence of misinformation increases, fewer users then engage in such unverified sharing, resulting in a downward-sloping “misinformation pass-through curve.” Much like a standard supply-demand framework, equilibrium misinformation prevalence and pass-through emerge at the point of intersection of the two curves.
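For readers who want to see the machinery laid bare, the whole apparatus reduces to finding where two curves cross. The sketch below is mine, not the paper’s — the linear curve shapes and every coefficient are made-up illustrative assumptions — but it captures the equilibrium logic the authors describe:

```python
# A toy rendering of the paper's two-curve setup. The linear functional
# forms and all coefficients below are my own illustrative assumptions,
# NOT taken from the staff report.

def supply_prevalence(share_rate):
    """Upward-sloping 'supply curve': the larger the fraction of users
    sharing unverified content, the more fake content gets produced."""
    return 0.1 + 0.6 * share_rate  # hypothetical coefficients

def pass_through_rate(prevalence):
    """Downward-sloping 'pass-through curve': the more misinformation
    circulates, the fewer users share without verifying first."""
    return 0.8 - 0.7 * prevalence  # hypothetical coefficients

def solve_equilibrium(tol=1e-10, max_iter=10_000):
    """Iterate the two curves to their fixed point -- the intersection
    where equilibrium prevalence and pass-through emerge."""
    p = 0.5  # initial guess for misinformation prevalence
    for _ in range(max_iter):
        p_next = supply_prevalence(pass_through_rate(p))
        if abs(p_next - p) < tol:
            return p_next, pass_through_rate(p_next)
        p = p_next
    raise RuntimeError("did not converge")

prevalence, pass_through = solve_equilibrium()
```

With these assumed coefficients the iteration converges (the composed map has slope −0.42, comfortably below one in absolute value), landing on a single equilibrium prevalence and pass-through — exactly the intersection point the paper trades on.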
Again, you can’t help but laugh. At the risk of oversimplifying the situation, you’re either amenable to misinformation or you’re not. Some of us, for various reasons, have a particularly sensitive antenna, but the idea of social media users as people who weigh the opportunity cost of verifying a given item against the reputational cost of sharing unverified content that later turns out to be false is quaint, to put it nicely.
In the real world, the gullible, the complicit and bots share misinformation immediately if the headline fits their narrative. There’s no real “risk” from sharing misinformation among that group. The gullible can’t differentiate between fact and fiction even when presented with irrefutable evidence, and if you’re the type to spread memes about, for example, Satanic pizza parlors, you’re not exactly concerned about your reputation. The complicit are complicit. Simple as that. For them, the “risk” of sharing misinformation originating from a foreign government (for example) is arrest in their country of residence, but proving collusion is an impossibly high bar. Just ask Robert Mueller. So, that really isn’t a risk. For bots, there’s no risk at all. They’re bots.
Further, the notion that the prevalence of misinformation on a given platform is a function of supply and demand suggests an almost religious dedication to market-based explanations that’s completely out of step with the reality of this particular situation. The prevalence of misinformation on social media is a function of how much misinformation the Kremlin (for example) manages to embed. There’s certainly some truth to the idea that the supply of misinformation is linked to where that misinformation stands the best chance of being shared (conceptually akin to the logistics of the cross-border drug trade, wherein you’d generally concentrate on areas with the highest probability of successfully smuggling contraband without it being seized), but is it really worth trying to distinguish between social media platforms? Has any popular social media platform definitively demonstrated the capacity to stop the spread of misinformation at scale? The answer to both questions is an unequivocal no. As such, producers of misinformation will continue to bombard all social media until such time as their accounts are banned, at which point they’ll create new ones and rely on an army of related accounts to push the same content out on the same sites anyway.
In short: The prevalence of misinformation on social media can be aptly described as “thoroughly saturated,” not in the sense that there’s no room for more, but rather in the sense that misinformation is so pervasive on all social media platforms that drawing distinctions between them is, I’d argue, meaningless. The risk of sharing misinformation on any social media site is therefore very high, unless that information comes from a verified, recognizable source. But that’s an exercise in question-begging because the people who would share misinformation are almost always of the mind that verified, recognizable sources aren’t to be trusted. Indeed, much of the misinformation is centered on perpetuating that very mistrust of traditional, trustworthy media.
The authors’ contention that “users are less likely to pass on unverified news when the risk of encountering fake content is higher” is flatly false. The risk of encountering fake content is extraordinarily high on Twitter, and although Musk would disagree, I’d argue it’s higher now than ever as exemplified by the fact that Musk himself shared (and then deleted) unverified rumors about the attack on the spouse of the US House Speaker. Are users thereby “less likely” to retweet unverified news compared to news they received in their Facebook timeline? I doubt it. Indeed, the more untrustworthy (in a traditional sense of the word) a platform is, the more trustworthy it appears to those who don’t trust the mainstream media, and those are the people most likely to engage in content sharing.
The NY Fed article posed a kind of conundrum for those seeking to curb the spread of misinformation. “Various policies currently in place attempt to reduce the profitability of peddling false information, thereby reducing its supply and prevalence [but] the pass-through of fake content increases because users’ verification incentives are weakened in the process, as news items become more likely to be truthful,” it read.
That’s probably true in a parallel universe where this discussion is meaningful because some social media platforms have taken serious steps which have materially curbed the spread of false information. That’s not the world we live in, though. People spread misinformation with virtual impunity and besides, the purveyors of such content (state actors, politicians, profiteers and everyone in-between) have succeeded in blurring the line between fact and fiction. Around a third of the US electorate no longer agrees with everyone else when it comes to what’s real and what isn’t. How do you conduct a “fact check” in that conjuncture?
With all due respect to the authors’ efforts, the only effective way to combat misinformation is with a heavy hand and discretion (they didn’t say otherwise, but the implication seemed to be that systematic checks and non-draconian steps might work in a market-based model). You can call it censorship, you can call it a lack of respect for freedom of speech or you can call it something else, but sometimes you have to be willing to countenance paradoxes and oxymorons that invite criticism in order to rescue society from itself. In this case, that might mean adopting China-style censorship in order to prevent regimes like China’s from subverting Western democracy.
Of course, censorship isn’t going so well in China currently, and you might argue that disallowing public debate about the merits of Xi Jinping’s “COVID zero” strategy isn’t conceptually different from censoring Americans questioning the use of vaccines. This is where someone (or, more aptly, social media and tech companies) has to exercise discretion. Unpalatable as that might seem, it’s the only real option. There have to be gatekeepers and truth arbiters who rely on a generally accepted, shared version of reality. If not, then what are we doing in organized societies? If facts and reality are indeed as malleable and elusive as many now claim, then what’s the point? Of anything, really? How do you function in such a world? When taken to extremes, such a world isn’t livable.
Such discretion is possible, by the way. For example, CNBC’s incorrigible Rick Santelli once said, on air, that he “didn’t believe” there was much difference between “500 people in a Lowe’s” and “150 people in a restaurant that holds 600” when it came to virus transmission. That was very controversial at the time (in December of 2020, during a vicious winter COVID wave and before most people were vaccinated). But was it the same as someone on YouTube telling the public outright lies about vaccines? No. And that’s where the discretion comes in. CNBC might ask Santelli to simply apologize, not for his opinion, but rather for the way in which he expressed it (and Rick did, in fact, apologize at least once for the way in which he communicated his opinions on COVID), while Google might demonetize, or outright ban, someone spreading demonstrable falsehoods.
None of that is dystopian. Determining what counts as acceptable behavior and what doesn’t is part of running a business. What’s acceptable at a nightclub isn’t acceptable at a day care. You can’t dance around luridly and shout profane lyrics in the presence of 45 toddlers. Your claim to “free speech” is irrelevant in that setting. Or if it isn’t, the harm you might be doing far outweighs your First Amendment claims in that scenario, and the day care would be fully within its rights to kick you out. Nobody would argue with that, so what’s so contentious about a social media company telling people they can’t say things that might, for example, discourage the elderly from getting a vaccine that might save their lives?
In fact, in that context, I’d argue that social media companies would be fully within their rights to censor truthful content that might discourage the elderly from getting a life-saving vaccine. Before the internet, we didn’t trust random people we’d never met with life or death decisions. If there were questions as to the tradeoffs of a medical treatment, we discussed them with professionals qualified to help us perform the cost-benefit analysis. We habitually claim the internet has created a better informed citizenry, but most of the time, I doubt it.
Further, the idea that tech companies and social media giants somehow shouldn’t be allowed to exercise any kind of discretion simply because they’re pervasive seems like a slippery slope to me. Does that mean that any company which becomes a fixture of daily life and whose services can serve as a conduit for Constitutionally-protected behavior thereby surrenders its right to discretion past a certain threshold of ubiquity? If so, isn’t that a deterrent to success?
The other option is to simply shut social media down, or regulate it such that the penalty for being derelict vis-à-vis egregious misinformation with the potential to bring about dire consequences for society is simply so onerous that no platform would risk it (e.g., a fine equivalent to 75% of the current year’s revenue or a criminal trial for board members).
On Sunday evening, Elon Musk, citing the verbatim repetition of a tweet from Alexander Vindman, suggested that Vindman is a ringleader in a global conspiracy. Later, he posted a picture of a gun on what he claimed was his nightstand, before sharing a poorly Photoshopped image of CNN programming depicting a fake headline. “This headline never appeared on CNN. Be better,” the network’s official communications team chided. The image was originally posted by a crypto blogger with 114,000 followers.
On Monday afternoon, Musk claimed Apple has “mostly” stopped advertising on Twitter. “Do they hate free speech in America?” he wondered. “What’s going on here Tim Cook?”