In 2018, actor and filmmaker Jordan Peele partnered with BuzzFeed to create a warning video. The video appears to feature President Barack Obama advising viewers not to trust everything that they see on the Internet. After the President says some things that are out of character for him, Peele reveals that the speaker is not actually President Obama, but is, instead, Peele himself. The video was a “deepfake.” Peele’s face had been altered using digital technology to look and move just like the face of the president.
Deepfake technology is often used for innocuous and even humorous purposes. One popular example is a video that features Jennifer Lawrence discussing her favorite Desperate Housewives character during a press conference at the Golden Globes. The face of actor Steve Buscemi is projected, seamlessly, onto Lawrence’s face. In a more troubling case, Rudy Giuliani tweeted an altered video of Nancy Pelosi in which she appears to be impaired, stuttering and slurring her speech. The appearance of this kind of altered video highlights the dangers that deepfakes can pose both to individual reputations and to our democracy more generally.
In response to this concern, California passed legislation this month that makes it a crime to distribute audio or video that presents a false impression of a candidate within sixty days of an election. There are exceptions to the legislation. News media are exempt (clearing the way for them to report on this phenomenon), and the law does not apply to deepfakes made for the purposes of satire or parody. The law sunsets in 2023.
This legislation has caused controversy. Supporters of the law argue that the harmful effects of deepfake technology can destroy lives. Contemporary “cancel culture,” under which masses of people determine that a public figure is not deserving of time and attention, or is even deserving of disdain and social stigma, could amplify the harms. The mere perception of a misstep is often enough to permanently damage a person’s career and reputation. Videos featuring deepfakes have the potential to spread quickly, while the truth about them may spread much more slowly, if at all. By the time the truth comes out, it may be too late. People make up their minds quickly and are often reluctant to change their perspectives, even in the face of compelling evidence. Humans are prone to confirmation bias—the tendency to consider only the evidence that supports what the believer was already inclined to believe anyway. Deepfakes deliver fodder for confirmation bias to viewers, wrapped in very attractive packaging. When deepfakes meet cancel culture in a climate of poor information literacy, the result is a social and political powder keg.
Supporters of the law argue further that deepfake technology threatens to seriously damage our democratic institutions. Citizens regularly rely on videos they see on the Internet to inform them about the temperament, behavior, and political beliefs of candidates. Deepfakes would likely present a significant obstacle to becoming a well-informed voter. They would inevitably contribute to the sense that some voters already have that we exist in a post-truth world—if you find a video in which Elizabeth Warren says one thing, just wait long enough and you’ll see a video of her saying the exact opposite. Who’s to say which is the deepfake? The results of such a worldview would be devastating.
Opponents of the law are concerned that it violates the First Amendment. They argue that the legislation invites the government to consider the content of the messages being expressed and to allow or disallow such messages based on that content. This is a dangerous precedent to set—it is exactly the type of thing that the First Amendment is supposed to prevent.
What’s more, the legislation has the potential to stifle artistic expression. The law contains exemptions for deepfakes made for the purposes of parody and satire, but there are countless other kinds of statements that people might use deepfakes to make. In fact, in his warning video, Jordan Peele used a deepfake to great effect, arguably making his point far more powerfully than he could have using a different method. Peele’s deepfake may well have produced more cautious and conscientious viewers. Opponents of the legislation argue that this is precisely why the First Amendment is so important: it protects the kind of speech and artistic expression that gets people thinking about how their behavior ought to change in light of what they have viewed.
In response, supporters of the legislation might argue that when the First Amendment was originally drafted, we didn’t have the technology that we have today. It may well be that if the Constitution were written today, it would be a very different document. Free speech is important, but technology can now cause harm in an utterly unprecedented way. Perhaps we need to balance the value of free speech against the potential harms differently now that those harms have such an extended scope.
A lingering, related question has to do with the role that social media companies play in all of this. False information spreads like wildfire on sites like Facebook and Twitter. Many people use these platforms as their source for news. The policies of these exceptionally powerful platforms are more important for the proper functioning of our democracy than anyone ever could have imagined. Facebook has taken some steps to prevent the spread of fake news, but many are concerned that it has not gone far enough.
In a tremendously short period of time, technology has transformed our perception of what’s possible. In light of this, we have an obligation to future generations to help them navigate the very challenging information environment that we’ve created for them. With good reason, people believe that they can trust their senses—deepfakes exploit exactly that trust. Our academic curriculum must change to make future generations more discerning.