In recent years, commentators, particularly those who lean left, have grown increasingly skeptical of John Stuart Mill’s famous defense of an absolutist position on free speech. Last week, for instance, The New York Times published a long piece by Emily Bazelon, a journalist and lecturer at Yale Law School, in which she echoes a now-popular complaint about Mill: that his arguments are fundamentally over-optimistic about the likelihood that the better argument will win the day, or that “good ideas win.” In this column, I will argue that this complaint rests on a mistaken view of Mill.
Mill’s argument, briefly stated, is that whether a given belief is true, false, or partly true, its assertion is useful for discovering truth and maintaining knowledge of the truth, and therefore it should not be suppressed. Beliefs are usually suppressed because they are believed to be either false or harmful, but according to Mill, to suppress a belief on these grounds is to imply that one’s grasp of the truth, or of what is harmful, is infallible; and Mill, an empiricist, held that no human being has infallible access to the truth. Even if a belief is actually false, its assertion can generate debate, which leads to greater understanding and ensures that truths do not lapse into “mere dogma.” Finally, if a belief is partially true, it should not be suppressed because it can be indispensable to discovering the “whole” truth.
Notice that Mill’s whole argument concerns the assertion of beliefs: the communication of what the speaker genuinely takes to be true. The key assumption in Mill’s argument is thus not that the truth will win out in the rough and tumble of debate. That may well happen, at least in the long run, when every participant is genuinely engaged in debate, that is, in the evaluation of truth claims. Rather, Mill takes for granted that public discourse consists largely of good-faith attempts to communicate truth claims. The problem is that much of our discourse is not intended to inform others of what speakers actually believe. Much of it is propaganda: speech aimed at achieving some political outcome rather than at communicating belief. As Bazelon points out, referring to the deluge of disinformation that currently swamps our national public conversation,
“The conspiracy theories, the lies, the distortions, the overwhelming amount of information, the anger encoded in it — these all serve to create chaos and confusion and make people, even nonpartisans, exhausted, skeptical and cynical about politics. The spewing of falsehoods isn’t meant to win any battle of ideas. Its goal is to prevent the actual battle from being fought, by causing us to simply give up.”
The purpose of disinformation propaganda is to overwhelm people with contradictory claims and ultimately to encourage their retreat into apolitical cynicism. Even where propagandists appear to be in the business of putting forward truth claims, the appearance is deceptive: they are not trying to communicate what they believe to be true.
Where does this leave Mill? He may have been mistaken in overlooking the pervasiveness of propaganda, but his defense of free speech need not extend to it. Since his argument concerns only communicative acts aimed at expressing belief, nothing in it commits him to defending propaganda. A Millian defense of speech can thus distinguish between speech intended primarily to express a truth claim and speech intended primarily to effect some political outcome. While the former must be protected from suppression, the latter need not be, precisely because it is neither aimed at, nor likely to produce, greater understanding.
Of course, this distinction may be difficult to draw in practice. Nevertheless, new policies recently rolled out by social media platforms appear to be aimed precisely at suppressing the spread of harmful propaganda. Twitter banned political ads a year ago, and last month Facebook restricted its Messenger app by preventing mass forwarding of private messages. Facebook’s Project P (P for propaganda) was an internal effort, after the 2016 election, to take down pages that spread Russian disinformation. Bazelon recommends pressuring social media platforms to change their algorithms, or to identify disinformation “super spreaders” and slow the virality of their posts. Free speech absolutists might decry such measures as contrary to John Stuart Mill’s vision, but I have argued that this criticism rests on a misreading of Mill.