“Censorship laws are blunt instruments, not sharp scalpels. Once enacted, they are easily misapplied to merely unpopular or only marginally dangerous speech.”
—Alan Dershowitz, Finding, Framing, and Hanging Jefferson: A Lost Letter, a Remarkable Discovery, and Freedom of Speech in an Age of Terrorism
Fake news, false or misleading information presented as though it’s true, has been blamed for distorting national politics in the United States and undercutting the faith that citizens place in elites and institutions — so much so that Google has recently stepped in to provide a tool to help users avoid being hoodwinked. It looks plausible, at first glance, that fake news is a widespread problem; if people can be fooled into thinking misleading or false information is genuine news, their attitudes and beliefs about politics and policy can be influenced for the worse. In a functioning democracy, we need citizens, and especially voters, to be well-informed — we cannot have that if fake news is commonplace.
A recent study found political polarization — left, right, or center — to be the primary psychological motivation behind people sharing fake news. It seems we aren’t driven by ignorance, but by vitriol for our political opponents. It isn’t a matter of folks being fooled by political fictions because they lack knowledge of the salient subject matter, say, but rather that people are most inclined to share fake news when it targets political adversaries whom they hate. And this aligns with what we already know about the increasing polarization in American politics: that it’s becoming increasingly difficult for people in different political parties, notably Republicans and Democrats, to agree on issues that used to be a matter of bipartisan consensus (e.g., a progressive tax structure).
In the face of the (alleged) increasing threat from fake news, some have argued we need stronger intervention on the part of tech companies that is just shy of censorship — that is, fake news is parasitic on free speech, and can perhaps only be controlled by a concerted legal effort, along with help from big technology companies like Facebook and Google.
But perhaps the claim that fake news is widespread is dangerously overblown. Why? Because the sharing of fake news is less common than we are often led to believe. A study from last year found that
“[although] fake news can be made to be cognitively appealing, and congruent with anyone’s political stance, it is only shared by a small minority of social media users, and by specialized media outlets. We suggest that so few sources share fake news because sharing fake news hurts one’s reputation … and that it does so in a way that cannot be easily mended by sharing real news: not only did trust in sources that had provided one fake news story against a background of real news drop, but this drop was larger than the increase in trust yielded by sharing one real news story against a background of fake news stories.”
There are strong reputational incentives against sharing fake news — people don’t want to look bad to others. (Of course, the researchers also acknowledge the same incentives don’t apply to anonymous individuals who share fake news.) Humans are a cooperative species that relies on help from others for survival — and so it matters how others view us. People wouldn’t want to cooperate with someone with a bad reputation, so most people track how they are seen by others. We want to know that those we cooperate with have a good reputation; we want them to be sufficiently trustworthy and reliable, since we rely on each other for basic goods. As other researchers explain,
“[Humans] depend for their survival and welfare on frequent and varied cooperation with others. In the short run, it would often be advantageous to cheat, that is, to take the benefits of cooperation without paying the costs. Cheating however may seriously compromise one’s reputation and one’s chances of being able to benefit from future cooperation. In the long run, cooperators who can be relied upon to act in a mutually beneficial manner are likely to do better.”
Of course, people sometimes do things which aren’t in their best interests — taking a hit to one’s reputation is no different. The point, though, is that people have strong incentives to avoid sharing fake news when their reputations are at stake. So we have at least some evidence that fake news is overblown; people aren’t as likely to share fake news, for reputational reasons, as it may appear given the amount of attention the phenomenon has garnered in the public square. This doesn’t mean, of course, that there isn’t a lot of fake news in circulation on places like, say, social media — there could be substantial fake news shared, but only by a few actors. Moreover, the term ‘fake news’ is often used in a sloppy, arbitrary way — not everything called ‘fake news’ is fake news. (Former President Trump, for example, would often call a story ‘fake news’ if it made him look bad, even if the story was accurate.)
Overstating the problem that fake news represents is also troubling, as it encourages people to police others’ speech in problematic ways. Actively discouraging people from sharing ‘fake news’ (or worse, silencing them) can be a dangerous road to traverse. The worry is that, just as former President Trump did to journalists and critics, folks will weaponize the label ‘fake news’ and use it against their political enemies. While targeting those who supposedly share fake news may sometimes prevent misinformation, often it will be used to suppress folks who have unorthodox or unpopular views. As the journalist Chris Hedges observed,
“In late April and early May the World Socialist Web Site, which identifies itself as a Trotskyite group that focuses on the crimes of capitalism, the plight of the working class and imperialism, began to see a steep decline in readership. The decline persisted into June. Search traffic to the World Socialist Web Site has been reduced by 75 percent overall. And the site is not alone. … The reductions coincided with the introduction of algorithms imposed by Google to fight ‘fake news.’ Google said the algorithms are designed to elevate ‘more authoritative content’ and marginalize ‘blatantly misleading, low quality, offensive or downright false information.’ It soon became apparent, however, that in the name of combating ‘fake news,’ Google, Facebook, YouTube and Twitter are censoring left-wing, progressive and anti-war sites.”
Perhaps the phenomenon of fake news really is as bad as some people say — though the evidence suggests that isn’t the case. In any event, we shouldn’t conclude from this that fake news isn’t a problem at all; we may need some form of policing that, while respecting freedom of expression, can empower voters and citizens with tools to allow them to avoid, or at least identify, fake news. But we can acknowledge both the need for fake news oversight and the need to significantly curtail that power.