Economists and academics aren’t exactly famous for producing models that approximate reality while attempting to make sense of it (reality, I mean).
Comical simplifications that bear no resemblance whatsoever to the real world are fixtures of academic social science research, as are highly tenuous assumptions. In many cases, the initial assumptions adopted prior to devising models are so obviously spurious that it’s impossible to take the ensuing analysis seriously.
Believe it or not, that isn’t meant as a derisive comment on academics or even the social sciences more generally. Rather, it’s an indictment of the way in which academics are compelled to treat the soft sciences if they (academics) are to have any hope of being published on the way to securing gainful employment as a card-carrying member of the ivory tower club.
Economics has always occupied a kind of grey area between the hard and soft sciences. “Economic man” (or woman) is a creature obsessively concerned with calculation, and because calculation is often synonymous with quantification, economics should be amenable to the derivation of laws and axioms. Only it isn’t. Because the human element makes it impossible to predict outcomes with anything like precision.
Whenever you introduce the human element into an equation, it’s not an equation anymore in any strict sense. The human brain is probably something that can be mapped and mathematized, but irony of ironies, we don’t understand enough about our own brains to make such an endeavor feasible. Or at least not yet. Until we do, human behavior is unpredictable, and as such, attempts to model phenomena that involve human behavior can only be fruitful by accident.
I bring this up on Monday because the New York Fed published an article called “Breaking Down the Market for Misinformation.” That’s a crucial topic in the social media era, and it’s becoming more urgent by the day as Elon Musk moves to reshape Twitter in the image of the faux libertarianism espoused by too many disaffected Americans and championed from abroad by autocratic regimes keen to sow domestic discord in Western democracies.
The NY Fed article was actually a summary of a recent staff report that carried the less reader-friendly title, “Misinformation in Social Media: The Role of Verification Incentives.” It’s 41 pages, and I wouldn’t want to trivialize the effort. At the same time, the paper is a case study in belabored academic meandering which, at best, can only hope to support common-sense conclusions that any rational person would draw immediately if the relevant questions were posed on an ad hoc basis, and which, at worst, risks muddying the waters with the kind of laughably spurious assumptions and dubious modeling mentioned here at the outset.
“We develop a model of misinformation wherein users’ decisions to verify and share news of unknown truthfulness interact with producers’ choices to generate fake content as two sides of a market that balance to deliver an equilibrium prevalence and pass-through of fake news,” the authors boldly declared.
Already, they’re off base. The market for misinformation isn’t something that can be modeled by a supply-demand function. Game theory might work, but you don’t need any of that to “develop a model of misinformation.” Misinformation is spread for one of two main reasons:
- The producer (who, in some of the more dangerous cases, can be a state actor) is attempting to influence public opinion for the purposes of achieving societal outcomes that are politically or militarily advantageous
- The producer is a profiteer seeking to monetize misinformation by leveraging the many overlaps between misinformation and what we commonly refer to in the internet era as “clickbait”
Crucially, those two can (and often do) overlap. Purveyors of misinformation are sometimes not state actors themselves, but rather profit-seeking agents who wittingly or unwittingly parrot the agenda of state actors. That model is prevalent in the spread of Kremlin propaganda. I could give you specific examples — but I won’t. Maybe some other time.
On the demand side, the market for misinformation in the Western world is a function of undereducation (which is conducive to gullibility), voter disaffection with the status quo (almost always identified with “the establishment”) and the simple “guilty pleasure” of indulging in tabloid-style content (which is mostly harmless in isolation, no different from Googling the latest “news” on Nessie).
Compare that to the way in which the NY Fed researchers described the market:
When encountering a news item, a user can choose to determine its veracity at a cost (of time and effort, for example) before deciding whether to share it or not. The decision to pursue verification thus involves weighing the cost of achieving certainty (by consulting a fact-check article, for example) against the risk of sharing misinformation. The probability of the latter is determined by the prevalence of misinformation on a given platform, which is determined by supply and demand forces. Specifically, producers of fake content benefit when users share such information without first bothering to verify it. As the fraction of these users increases, so does the pass-through of misinformation, thereby incentivizing the production (and hence prevalence) of false content, resulting in an upward-sloping “supply curve” of misinformation. Conversely, users are less likely to pass on unverified news when the risk of encountering fake content is higher. As the prevalence of misinformation increases, fewer users then engage in such unverified sharing, resulting in a downward-sloping “misinformation pass-through curve.” Much like a standard supply-demand framework, equilibrium misinformation prevalence and pass-through emerge at the point of intersection of the two curves.
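For what it’s worth, the mechanics the authors describe are easy enough to sketch. The functional forms below are pure invention on my part (the paper doesn’t specify them): an upward-sloping “supply curve” for fake content, a downward-sloping “pass-through curve” and an “equilibrium” at their intersection.

```python
# A toy rendering of the Fed paper's framework, with invented coefficients.
# "prevalence" is the share of content that's fake; "pass_through" is the
# share of users who share without verifying.

def supply_prevalence(pass_through):
    # Upward-sloping supply curve: the more users share without verifying,
    # the more fake content producers find it profitable to generate.
    return 0.1 + 0.6 * pass_through

def unverified_sharing(prevalence):
    # Downward-sloping pass-through curve: the more fake content is around,
    # the fewer users risk sharing without checking first.
    return max(0.0, 0.9 - 0.8 * prevalence)

# Find the intersection by fixed-point iteration.
pass_through = 0.5
for _ in range(100):
    prevalence = supply_prevalence(pass_through)
    pass_through = unverified_sharing(prevalence)

print(f"equilibrium prevalence:   {prevalence:.3f}")  # ~0.43 with these numbers
print(f"equilibrium pass-through: {pass_through:.3f}")  # ~0.55 with these numbers
```

The numbers are arbitrary; what matters is the structure: two behavioral responses, one rising and one falling, crossing at an “equilibrium.” Whether human sharing behavior actually resembles either curve is the entire question.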
Again, you can’t help but laugh. At the risk of oversimplifying the situation, you’re either amenable to misinformation or you’re not. Some of us, for various reasons, have a particularly sensitive antenna, but the idea of social media users as people who weigh the opportunity cost of verifying a given item against the reputational cost of sharing unverified content that later turns out to be false is quaint, to put it nicely.
In the real world, the gullible, the complicit and bots share misinformation immediately if the headline fits their narrative. There’s no real “risk” from sharing misinformation among that group. The gullible can’t differentiate between fact and fiction even when presented with irrefutable evidence, and if you’re the type to spread memes about, for example, Satanic pizza parlors, you’re not exactly concerned about your reputation. The complicit are complicit. Simple as that. For them, the “risk” of sharing misinformation originating from a foreign government (for example) is arrest in their country of residence, but proving collusion is an impossibly high bar. Just ask Robert Mueller. So, that really isn’t a risk. For bots, there’s no risk at all. They’re bots.
Further, the notion that the prevalence of misinformation on a given platform is a function of supply and demand suggests an almost religious dedication to market-based explanations that’s completely out of step with the reality of this particular situation. The prevalence of misinformation on social media is a function of how much misinformation the Kremlin (for example) manages to embed. There’s certainly some truth to the idea that the supply of misinformation is linked to where that misinformation stands the best chance of being shared (conceptually akin to the logistics of the cross-border drug trade, wherein you’d generally concentrate on areas with the highest probability of successfully smuggling contraband without it being seized), but is it really worth trying to distinguish between social media platforms? Has any popular social media platform definitively demonstrated the capacity to stop the spread of misinformation at scale? The answer to both questions is unequivocally no. As such, producers of misinformation will continue to bombard all social media until such time as their accounts are banned, at which point they’ll create new ones and rely on an army of related accounts to push the same content out on the same sites anyway.
In short: The prevalence of misinformation on social media can be aptly described as “thoroughly saturated,” not in the sense that there’s no room for more, but rather in the sense that misinformation is so pervasive on all social media platforms that drawing distinctions between them is, I’d argue, meaningless. The risk of sharing misinformation on any social media site is therefore very high, unless that information comes from a verified, recognizable source. But that’s an exercise in question-begging, because the people who would share misinformation are almost always of the mind that verified, recognizable sources aren’t to be trusted. Indeed, much of the misinformation is centered on perpetuating that very mistrust of traditional, trustworthy media.
The authors’ contention that “users are less likely to pass on unverified news when the risk of encountering fake content is higher” is flatly false. The risk of encountering fake content is extraordinarily high on Twitter, and although Musk would disagree, I’d argue it’s higher now than ever, as exemplified by the fact that Musk himself shared (and then deleted) unverified rumors about the attack on the spouse of the US House Speaker. Are Twitter users thereby “less likely” to retweet unverified news than they would be to share it in their Facebook timeline? I doubt it. Indeed, the more untrustworthy (in a traditional sense of the word) a platform is, the more trustworthy it appears to those who don’t trust the mainstream media, and those are the people most likely to engage in content sharing.
The NY Fed article posed a kind of conundrum for those seeking to curb the spread of misinformation. “Various policies currently in place attempt to reduce the profitability of peddling false information, thereby reducing its supply and prevalence [but] the pass-through of fake content increases because users’ verification incentives are weakened in the process, as news items become more likely to be truthful,” it read.
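Plugging that policy into the toy sketch from earlier makes the conundrum concrete. To be clear, the coefficients remain my own invention, and the “profitability” knob below is a made-up stand-in for whatever demonetization measures a platform might adopt.

```python
# The same invented toy model, with a policy knob scaling producer profitability.

def supply_prevalence(pass_through, profitability=1.0):
    # Cutting profitability shifts the supply curve down: less fake content
    # is produced at any given level of unverified sharing.
    return profitability * (0.1 + 0.6 * pass_through)

def unverified_sharing(prevalence):
    # With less fake content around, users verify less, so pass-through rises.
    return max(0.0, 0.9 - 0.8 * prevalence)

def equilibrium(profitability):
    pass_through = 0.5
    for _ in range(100):  # fixed-point iteration, as before
        prevalence = supply_prevalence(pass_through, profitability)
        pass_through = unverified_sharing(prevalence)
    return prevalence, pass_through

for profit in (1.0, 0.5):
    prev, pt = equilibrium(profit)
    print(f"profitability {profit}: prevalence {prev:.3f}, pass-through {pt:.3f}")
# profitability 1.0: prevalence 0.432, pass-through 0.554
# profitability 0.5: prevalence 0.258, pass-through 0.694
```

Halving profitability lowers equilibrium prevalence but raises pass-through, which is exactly the trade-off the authors flag: reduce the supply of fake content, and you weaken users’ incentive to verify.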
That’s probably true in a parallel universe where this discussion is meaningful because some social media platforms have taken serious steps which have materially curbed the spread of false information. That’s not the world we live in, though. People spread misinformation with virtual impunity and besides, the purveyors of such content (state actors, politicians, profiteers and everyone in-between) have succeeded in blurring the line between fact and fiction. Around a third of the US electorate no longer agrees with everyone else when it comes to what’s real and what isn’t. How do you conduct a “fact check” in that conjuncture?
With all due respect to the authors’ efforts, the only effective way to combat misinformation is with a heavy hand and discretion (they didn’t say otherwise, but the implication seemed to be that systematic checks and non-draconian steps might work in a market-based model). You can call it censorship, you can call it a lack of respect for freedom of speech or you can call it something else, but sometimes you have to be willing to countenance paradoxes and oxymorons that invite criticism in order to rescue society from itself. In this case, that might mean adopting China-style censorship in order to prevent regimes like China’s from subverting Western democracy.
Of course, censorship isn’t going so well in China currently, and you might argue that disallowing public debate about the merits of Xi Jinping’s “COVID zero” strategy isn’t conceptually different from censoring Americans questioning the use of vaccines. This is where someone (or, more aptly, social media and tech companies) has to exercise discretion. Unpalatable as that might seem, it’s the only real option. There have to be gatekeepers and truth arbiters who rely on a generally accepted, shared version of reality. If not, then what are we doing in organized societies? If facts and reality are indeed as malleable and elusive as many now claim, then what’s the point? Of anything, really? How do you function in such a world? When taken to extremes, such a world isn’t livable.
Such discretion is possible, by the way. For example, CNBC’s incorrigible Rick Santelli once said, on air, that he “didn’t believe” there was much difference between “500 people in a Lowe’s” and “150 people in a restaurant that holds 600” when it came to virus transmission. That was very controversial at the time (in December of 2020, during a vicious winter COVID wave and before most people were vaccinated). But was it the same as someone on YouTube telling the public outright lies about vaccines? No. And that’s where the discretion comes in. CNBC might ask Santelli to simply apologize, not for his opinion, but rather for the way in which he expressed it (and Rick did, in fact, apologize at least once for the way in which he communicated his opinions on COVID), while Google might demonetize, or outright ban, someone spreading demonstrable falsehoods.
None of that is dystopian. Determining what counts as acceptable behavior and what doesn’t is part of running a business. What’s acceptable at a nightclub isn’t acceptable at a day care. You can’t dance around luridly and shout profane lyrics in the presence of 45 toddlers. Your claim to “free speech” is irrelevant in that setting. Or if it isn’t, the harm you might be doing far outweighs your First Amendment claims in that scenario, and the day care would be fully within its rights to kick you out. Nobody would argue with that, so what’s so contentious about a social media company telling people they can’t say things that might, for example, discourage the elderly from getting a vaccine that might save their lives?
In fact, in that context, I’d argue that social media companies would be fully within their rights to censor truthful content that might discourage the elderly from getting a life-saving vaccine. Before the internet, we didn’t trust random people we’d never met with life-or-death decisions. If there were questions as to the tradeoffs of a medical treatment, we discussed them with professionals qualified to help us perform the cost-benefit analysis. We habitually claim the internet has created a better-informed citizenry, but most of the time, I doubt it.
Further, the idea that tech companies and social media giants somehow shouldn’t be allowed to exercise any kind of discretion simply because they’re pervasive seems like a slippery slope to me. Does that mean that any company which becomes a fixture of daily life and whose services can serve as a conduit for Constitutionally-protected behavior thereby surrenders its right to discretion past a certain threshold of ubiquity? If so, isn’t that a deterrent to success?
The other option is to simply shut social media down, or regulate it such that the penalty for being derelict vis-à-vis egregious misinformation with the potential to bring about dire consequences for society is simply so onerous that no platform would risk it (e.g., a fine equivalent to 75% of the current year’s revenue, or a criminal trial for board members).
On Sunday evening, Elon Musk, citing the verbatim repetition of a tweet from Alexander Vindman, suggested that Vindman is a ringleader in a global conspiracy. Later, he posted a picture of a gun on what he claimed was his nightstand, before sharing a poorly Photoshopped image of CNN programming depicting a fake headline. “This headline never appeared on CNN. Be better,” the network’s official communications team chided. The image was originally posted by a crypto blogger with 114,000 followers.
On Monday afternoon, Musk claimed Apple has “mostly” stopped advertising on Twitter. “Do they hate free speech in America?” he wondered. “What’s going on here Tim Cook?”
Great analysis of a poorly summarized problem. I agree with you completely: misinformation creation and sharing is not a market and doesn’t adhere to market-based analysis. People who gullibly believe dis- and misinformation are not concerned about their reputation and will never let facts dissuade them from believing what they want. So it comes down to this: people who do know better should be implementing policies that shield the gullible from themselves. It’s why seatbelt laws exist, why consumer protection laws exist, and why truth in lending is now a requirement. If free speech is allowed to turn social media into a free-for-all hellscape on Twitter, then what’s to stop future lawsuits from repealing all of the laws I just mentioned? Mis- and disinformation are significant problems that warning labels aren’t going to solve. Bad actors should be banned from peddling their snake oil ideas to those who are not able to distinguish what they are. Further, Congressmen and women should be censured for advocating ideas that have been disproven, such as claims about who actually won the 2020 presidential election. If elected representatives can’t be held accountable, then why should anyone else be?
Reminds me of something I saw from a former colleague on LinkedIn recently:
Karl Popper on the Paradox of Tolerance: “If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. […] We should claim that any movement preaching intolerance and persecution is criminal, in the same way as we should consider incitement to murder, or to kidnapping, or to the revival of the slave trade as criminal.”
This reminds me of an ongoing corporate inclusion policy. The policy is that we should be inclusive of “all.” This includes those who hold exclusionary views of women, minorities, LGBTQ+ people, etc. The pure audacity of trying to include those who want to exclude others is the definition of stupidity, in my opinion. Of course the exclusionaries will win out; they have zero interest in including those who may want to include them. And this is how woke capitalism fails to make any kind of progress. It applies rules that obliterate whatever social progress we are trying to make, and then we go back to square one.
Let’s see how going to “war” against Apple works out for him. Donald Trump may have figured out how to milk the MAGA movement for all it’s worth, but that amount still pales in comparison to what Apple or even Tesla bring in. In his attempts to keep Twitter afloat, Musk may end up costing himself far more than the $44B he paid for it. I don’t know what’s less likely: Apple users giving up their iPhones or MAGA supporters buying Teslas.
Next thing you know, Elon and his old buddy, Peter Thiel, will reunite in their bizarro-libertarian fantasy world to try to get DeSantis elected. I’m sure they’ll happily use the levers of government to try to bring down Apple in this war he has proclaimed in order to “save free speech,” which will of course mean their right to dictate what companies can and can’t do. Wouldn’t it be ironic if Donald Trump, the one who paved the way to this bizarro world of alternative facts, ultimately sunk them in their quest to force their version of free speech on corporations?
Regarding the notion that the internet has made us better informed, I am reminded of the story of Babel. I’ve not really read the Bible, but my impression is that the sheer quantity of voices made communication meaningless.
Our current inability to reach consensus on what is real, actual truth is something I believe the internet has facilitated.
I’d like to add that the basic common sense I find on this blog is what makes and keeps me a subscriber. I don’t participate in any social media whatsoever so I’m out of many loops. So grateful for this one.
The Fed and Elon Musk, each in their own ways, fail to grapple with disinformation (ignoring the references to China where the truth is what Xi says it is).
I’d like to hear more about the historical (and possibly time-tested) approaches: the Founding Fathers clearly feared the Monarchy’s ability to control information and speech, and wasn’t there a long period before the widespread literacy of the 1900s when the population struggled to get any information at all?
Sir, you have hit an important nail right on its head. To wit:
“Believe it or not, that isn’t meant as a derisive comment on academics or even the social sciences more generally. Rather, it’s an indictment of the way in which academics are compelled to treat the soft sciences if they (academics) are to have any hope of being published on the way to securing gainful employment as a card-carrying member of the ivory tower club.”
This couldn’t be said better. When I published my first finance article, there were only four outlets. The need to publish “rigorous” (read: mathematical) research has done immeasurable damage to policymaking and to our collective lives. To accommodate social scientists, we modified non-parametric statistics and invented junk science like the so-called “Likert Scale,” devised by Rensis Likert, so we could turn discontinuous human behavior into real “scientific” data and prove lots of really good stuff. If I’m honest, I myself, wanting to survive and get to the tower (which I reached when I got a really nice endowed chair, in which I spent 11 very comfortable years), was an oft-published participant in this madness.
In fact, I think this was your best-ever post. Your friend (I assume) writing from Disgraceland couldn’t have said it better. To me, the most maddening thing about the rise of so-called social media was the fact that our own Congress, surely at the behest of their resident lobbyist enablers, passed legislation expressly prohibiting any restraint of absolute free speech on the Internet and World Wide Web. They even stated (I read on the Internet) that they wanted this communication medium to be the last frontier, like our own wild, wild west. Probably this was one of the most effective pieces of legislation they ever got paid for.
Sadly, there are no more adults, only Sam B-Fs fresh out of their mothers’ basements, which is why I believe the 21st Century will be our last as “civilized?” humans.