At the invitation of the Sphere Education Initiative, a project of the Cato Institute, Knight Institute Executive Director Jameel Jaffer spoke to schoolteachers from around the country about the Supreme Court’s recent social media decisions, and about free speech and new technology more broadly. The video is here, and the full text is below.
I’m glad to have this chance to speak to teachers, and to thank you for your work. I’m sure this is an especially challenging time to be a teacher, what with political leaders and advocacy groups around the country demanding new limits on what subjects can be taught, what books can be read, and what ideas can be explored in the library and the classroom. I know that many teachers and administrators have valiantly resisted these demands—speaking out publicly, testifying before legislatures, and even serving as plaintiffs in First Amendment lawsuits. As a citizen and a parent of school-age children, and as a First Amendment lawyer, I’m grateful to them for their civic courage.
Book bans and library purges are modes of censorship that are recognizable to us. First Amendment advocates know how to respond to them. Things are more complicated in the digital sphere. New technology has given us new possibilities for speech, and it has also generated novel modes of suppression. The conceptual vocabulary and legal frameworks that were developed to protect free speech in the analog era don’t map neatly onto digital technology. It’s not always evident what should count as censorship in the digital realm, let alone what should be done about it.
The Institute I direct at Columbia, the Knight First Amendment Institute, was established eight years ago to develop and defend a vision of the First Amendment for the digital age. I had nothing to do with the timing, so I can say without self-promotion that launching the Institute at that particular moment was prescient. Few questions have been more debated over the past eight years than the ones about free speech and new technology, and the ones about new communications platforms in particular.
In its most recent term, which ended just a couple of weeks ago, the Supreme Court considered five cases relating to free speech and social media. Many First Amendment advocates were worried about what the Court might do with these cases—partly because the questions presented were genuinely difficult, but also because, as Justice Kagan observed at an oral argument last year, the nine justices of our Supreme Court are “not, like, the nine greatest experts on the internet.”
But Justice Kagan needn’t have set expectations so low. While there’s lots to criticize in some of the Court’s other recent decisions—for example, its ruling on presidential immunity—the Court resolved these First Amendment cases thoughtfully. That said, it left some big questions about censorship online unanswered. It won’t be able to avoid them for long.
Two cases the Court considered this past term were about the social media accounts of public officials. In general, public officials can use social media just like other citizens. They can post photos of their friends, repost news articles, and even tweet about politics. They have a constitutional right to speak on social media, just as you and I do.
But these days many government officials use their social media accounts not just for personal reasons but for official ones, too. They use Facebook and Instagram to announce their official decisions and explain their policies, to tell their constituents how to get help during natural disasters, or to let them know about a new vaccine. Ordinary citizens use the comment threads associated with these accounts to communicate with their representatives and with each other. You may remember that President Trump used his Twitter account to make cabinet appointments and engage with foreign leaders. Millions of ordinary citizens used the comment threads to respond to his statements, sometimes critically.
When public officials use their social media accounts in this way, as extensions of their offices, they create online forums that are important to our democracy. And when they block their critics from those forums, as President Trump notoriously did, they suppress dissent that’s socially valuable, and they insulate themselves from criticism they should hear. Offline, the First Amendment prohibits public officials from excluding citizens from important forums—like school board and city council meetings—on the basis of political viewpoint. As I’m sure you know well, a school board can’t eject people from its public meetings simply because they’ve disagreed with the board’s decisions.
The question the Supreme Court had to answer last term was whether the rule that applies to school-board meetings applies to public officials’ social media accounts as well. According to Justice Barrett, who wrote the opinion for a unanimous Court, it does. If a public official is using social media as an extension of her office, she’s subject to the First Amendment just as she would be if she were hosting a public meeting in city hall.
This was the right result. It’s a ruling that will help protect the integrity of digital spaces that play an increasingly central role in our democracy. But this isn’t the last time the Court will have to consider the implications of social media accounts run by government officials.
Let me tell you about a case we’re working on at the Knight Institute. Like many other federal agencies, the National Institutes of Health—the NIH—maintains social media pages that it uses to communicate with the public, mainly about matters relating to public health. In the comment threads associated with these pages, ordinary citizens respond to the agency’s posts, ask questions, and interact with one another. Sometimes they criticize the agency.
At the Knight Institute we represent animal-rights activists who have strong views about federal agencies’ use of animals in scientific testing. The activists sometimes express those views in the NIH’s comment threads. In response, the NIH has deployed software that automatically deletes comments that use words associated with animal-rights advocacy—words like “stop,” “torture,” “monkey,” and “cruel”; and hashtags like “#stopanimaltesting.” Any comment that uses one of these words or phrases is immediately suppressed. To all appearances, it’s as if the comment had never been posted at all.
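To make vivid how blunt this kind of filtering is, here is a minimal sketch of keyword-based suppression in Python. The word list comes from the examples above, but the function and its matching logic are illustrative assumptions about how such a system might behave, not the NIH’s actual software.

```python
# Illustrative sketch of keyword-based comment filtering -- an assumption
# about how such a system might behave, not the NIH's actual code.

BLOCKED_TERMS = {"stop", "torture", "monkey", "cruel", "#stopanimaltesting"}

def is_suppressed(comment: str) -> bool:
    """Return True if the comment contains any blocked term.

    The matching is purely lexical: the filter has no notion of context,
    so an on-topic question that happens to use a blocked word is deleted
    just as silently as an animal-rights slogan.
    """
    words = comment.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

# A pointed criticism and an innocuous question are treated identically:
print(is_suppressed("NIH should stop funding cruel experiments"))    # True
print(is_suppressed("Where can I stop by to get the new vaccine?"))  # True
print(is_suppressed("Thanks for the update on the new vaccine"))     # False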
The NIH denies it’s involved in censorship. It says its keyword-blocking technology is intended to ensure that discussions in the comment threads stay on topic. But if the courts define censorship as narrowly as the NIH does, government agencies will soon have broad power to silence their critics online. Every government agency will be tempted to suppress inconvenient speech by labeling it off-topic, and keyword software will make the process costless, instantaneous, and invisible.
The question of what constitutes censorship online is very much contested, and it’s often difficult to answer. In this most recent term, the Supreme Court tackled another dimension of the question in a case arising from the government’s efforts to suppress vaccine misinformation during the pandemic. Over an 18-month period, government officials repeatedly demanded that social media platforms take down this content, sometimes berating them or vaguely threatening regulatory reprisal. At one point President Biden told the press that the platforms were “killing people” by failing to suppress vaccine misinformation more aggressively.
The question the Supreme Court considered was whether the Biden administration’s actions were a constitutionally permissible effort to persuade, or whether they amounted to an informal scheme of censorship in violation of the First Amendment.
This kind of question has arisen before, in many contexts, including most notably in a case the Supreme Court decided in 1963. A Rhode Island legislative commission tasked with encouraging “morality in youth”—the forebear of some of the commissions stalking schools and libraries now—was sending letters to book distributors urging them to take “objectionable” books out of circulation and warning them of possible prosecution under the obscenity laws.
To follow up on the commission’s letters, police officers would visit the book distributors’ offices to inquire whether the letters had been acted upon. The Supreme Court held that the commission’s actions were unconstitutional. The principle the lower courts distilled from the ruling is that, while the First Amendment permits the government to try to persuade book distributors and other so-called speech intermediaries to reconsider their expressive decisions, it prohibits the government from coercing them. Persuasion is okay, but coercion isn’t.
That may seem like a straightforward enough rule, but it has proven hard to apply, and applying it in the digital sphere, to social media platforms, is especially challenging. Social media companies are particularly susceptible to coercion. For one thing, they’re dependent on the goodwill of regulators. For another, they aren’t particularly invested, financially or ideologically, in the content of the posts they publish. And they operate at such an immense scale that the removal of any particular post doesn’t significantly change the product they’re offering their users.
The consequence is that coercing these companies to take down speech is relatively easy—certainly much easier than it is to persuade a librarian to pull a book from the shelves, or to persuade a newspaper to take down a particular news article.
The major platforms are also particularly attractive targets for coercion. The biggest platforms have immense power over public discourse. Through their content-moderation policies and algorithms, they determine not just what can be said, and who can say it, but what ideas get heard. Government officials know very well that by pressuring a small handful of technology companies they can reshape and control public discourse online.
All of this is to say that it’s not obvious that the First Amendment rules that we’ve developed in other contexts should be applied in the same way in this new one. In the vaccine-misinformation case it considered this past term, the Supreme Court managed to avoid some of the difficult questions by concluding that the conservative activists who had brought the case hadn’t shown that the platforms suppressed their speech because of government pressure. The Court reasoned that the platforms might have taken the same actions even if government officials had never contacted them.
But the Court will have to come back to this before too long. Government officials will continue to cajole and pressure the platforms because the platforms have a lot of power over what people can say online—I’ll come back to that in a minute. With a presidential election just a few months away, government officials will be especially concerned about posts that mislead people about their eligibility to vote, and about where and when votes can be cast.
Many of you are probably sympathetic to the government’s effort to counter false speech about vaccines and elections. I am, too. But whatever rules the courts develop in this context have to account for the possibility—the certainty—that government officials will sometimes be wrong about which speech is false. It’s also important to remember that whatever rules the courts come up with for false speech about vaccines and elections will apply to allegedly false speech about, for example, war, social justice, and reproductive rights. If you’re considering giving government officials broad authority to suppress misinformation, don’t assume that the officials will define misinformation the same way you do, or that they will focus on the same categories of misinformation that you and I think of as most dangerous.
The last of the Supreme Court’s recent social media cases—and the last cases I’ll talk about tonight—involved social media laws that Florida and Texas enacted in 2021. Legislators in those states were upset by social media companies’ decisions to ban President Trump from their platforms after the events of Jan. 6th. They responded by prohibiting the platforms from banning controversial speakers and from taking down controversial speech.
Florida’s law restricts platforms’ right to suppress the posts of political candidates and media organizations. Texas’s law bars platforms from taking down content because of its viewpoint. In court, the social media platforms argued that the laws amounted to censorship in violation of the First Amendment because they restricted the platforms’ right to decide what speech to publish. The states countered that it was the platforms that were engaged in censorship by deplatforming President Trump and other conservative speakers. One federal appeals court sided with the platforms, and another sided with the states.
I said earlier that the rules the courts developed in the analog era don’t map neatly onto the digital sphere. Because the most important spaces for public discourse online are owned by private corporations, two foundational free speech principles are placed into tension. The first principle is that private speakers and editors have the right to decide for themselves what speech to publish. The government can’t force a bookstore to sell books it doesn’t want to sell, or compel a newspaper to print stories it doesn’t want to print. The First Amendment protects the right of bookstores and newspapers to make those kinds of decisions for themselves. It protects the right of social media companies to make those kinds of decisions for themselves, too.
But the second principle, expressed particularly forcefully in the “public forum” cases I’ve already mentioned, is that the spaces most important to public discourse should be open to everyone whatever their political views. This is important because the legitimacy of democracy turns on all citizens having access to the public square on the same terms. It’s not a democracy if the only people permitted to speak in the public square are those whose views are favored by the government or by the most powerful private corporations. Our democracy won’t function if the most important spaces for expression online are closed to political minorities.
When democratically important spaces are owned by private actors, these two principles seem to collide. And this is what made the cases involving Florida’s and Texas’s social media laws so difficult. Justice Kagan, who wrote the majority opinion, resolved the cases narrowly, holding that the First Amendment doesn’t allow states to override platforms’ editorial decisions when their motivation for doing so is to advance their own conception of ideological balance. By resolving the cases in this way, the Court deferred some of the hardest questions.
Here again, though, it won’t be able to defer them for long. While Texas’s legislators were motivated by a desire to restore what they saw as ideological balance to the platforms, Florida’s legislators were at least arguably motivated by something different—the desire to ensure that Florida’s citizens would have access to political candidates’ speech in the days before an election. The Supreme Court may have to say whether it should matter that Florida had different motivations than Texas did.
The Court will also have to say more about how the First Amendment applies to other kinds of social media regulation. There are lots of ways of regulating social media that don’t involve overriding platforms’ editorial judgments. We could protect individual privacy by limiting the kinds of information that platforms can collect about their users. We could make the platforms more accountable to the public by requiring them to disclose more information about their practices and policies. We could make them more accountable to their users by requiring them to provide explanations to users whose speech they suppress or take down. And we could reduce the power they have over public discourse online by enforcing antitrust laws, making it easier for users to switch from one platform to another, and encouraging the development of new platforms.
I’ve been talking about social media tonight because these are the cases the Supreme Court dealt with this past term. But the Court will soon have to consider how analog-era precedents apply to other new technologies. New surveillance technologies—facial recognition, for example—have the potential to deter people from participating in public protests and exercising other First Amendment rights. Artificial intelligence has the potential to further undermine the integrity of public discourse in the digital sphere.
The distressing and ominous events of this past weekend only underscore how much is at stake here. As teachers, all of you play a vital role in educating young people about free speech and free speech values like tolerance and open-mindedness. Instilling these values is important work, as is fostering a broader culture that’s conducive to free speech.
But public discourse is shaped not just by individual and social values but by law and technology, which help determine how we communicate with one another, whether we understand each other, how we negotiate our differences, and whether we can bridge political divides. Over the next few years, the courts will have an essential role to play in determining whether new communications technology serves democracy or undermines it.
Thank you again for everything you do, thanks for the invitation to speak tonight, and thanks for listening.
Jameel Jaffer is executive director of the Knight Institute.