Americans spend a lot of time on social media, and this term the Supreme Court will do the same. Over the next few months—beginning Tuesday—the Court will hear a series of cases requiring it to resolve First Amendment questions arising out of the role that major social media platforms play in hosting, shaping, and setting the limits of public discourse online.
One striking feature of these cases is that they involve conflicts internal to free speech—not conflicts between free speech and other values, like equality or national security, but conflicts between the competing free speech claims of government, platforms, and ordinary citizens. In resolving these conflicts, the Court should remember that the First Amendment’s highest purpose—its “central meaning,” as Justice William Brennan put it nearly 60 years ago—is to protect the speech that’s necessary to democracy.
The two cases the Court will hear on Tuesday pose the question of when a public official’s social media account is subject to First Amendment constraints. One case was brought by parents in Southern California who were blocked from school board members’ Facebook pages after they posted hundreds of comments about racism at local schools. The other case was filed by a Michigan resident who was blocked from the Port Huron city manager’s Facebook page after he criticized the city’s response to the Covid-19 pandemic.
The plaintiffs in both cases argue their First Amendment rights were violated when public officials blocked them based on their viewpoints, but the public officials invoke the First Amendment, too. They argue that they have a constitutional right to use social media—and to block other users from their accounts—just like everyone else.
These cases may seem trivial, but they’re not. Some officials’ social media accounts have become vital forums for speech relating to those officials’ exercise of government power, and for speech about public policy more broadly. We need the First Amendment to protect ordinary citizens from government censorship in these forums to ensure that public officials don’t suppress dissent, insulate themselves from criticism, and transform these democratically important spaces into echo chambers.
A few years ago, the Knight First Amendment Institute, which I direct, filed a lawsuit against President Donald Trump on behalf of users he had blocked from his Twitter account after they had criticized him. The U.S. Court of Appeals for the Second Circuit sided with us, but after Mr. Trump lost the election, the Supreme Court declared the case to be moot and vacated the appeals court’s ruling. The two cases the Court is hearing Tuesday morning provide it with an opportunity to recognize, as the appeals court did, that public officials who use their social media accounts as extensions of their offices are not protected by the First Amendment but constrained by it.
Those two lawsuits are about government officials' use of social media. Other cases the Court will hear this term are about government efforts to regulate the platforms. Two of the cases concern the constitutionality of social media laws enacted by Florida and Texas. Both require the platforms to carry speech they might prefer not to carry. Florida's law restricts platforms' right to remove or suppress the posts of political candidates and media organizations, and Texas's law bars platforms from taking down content because of its viewpoint. Both states' laws also require the platforms to provide explanations to users whose posts the platforms take down.
A threshold question the Court will have to answer is whether platforms’ content moderation policies reflect the exercise of editorial judgment, since editorial judgment is protected by the First Amendment. Texas and Florida say no, and if the Court agrees, then the states win. But the platforms have the better of this argument. In fact, it was the states’ disagreement with the platforms’ editorial judgment, particularly with the decision of some of them to eject Mr. Trump after the events of Jan. 6, 2021, that led the states to pass these laws.
The harder question is what follows from this. Nearly half a century ago, the Court held that the First Amendment barred the government from requiring newspapers to provide space in their pages for political candidates to respond to editorials that had criticized them. Whether the First Amendment should be similarly hostile to so-called must-carry rules imposed on the platforms is a question whose answer might turn on whether we think the platforms are meaningfully distinguishable from newspapers—perhaps because of the way these platforms exercise editorial judgment, the significance of their users' free speech interests, or the reasons the government is seeking to regulate them.
Even those who believe that these provisions of the Florida and Texas laws are unconstitutional, as I do, should be wary of a First Amendment regime that would categorically foreclose all must-carry rules—even those that might be narrower, better supported by legislative findings and more closely connected to democratically legitimate goals.
Equally consequential will be how the Court addresses the provisions of the Florida and Texas laws that require platforms to notify users whose posts are removed. The platforms argue that these provisions are so onerous and the penalties for violating them so draconian that they will deter platforms from taking down speech they would otherwise take down. The Court should give real weight to this argument, particularly because the laws are poorly drafted.
It would be a mistake, though, for the Court to make the First Amendment an insurmountable obstacle to carefully drawn laws that strengthen democracy by empowering and protecting platforms’ users, enabling the public to better understand how major platforms are shaping public discourse, and mitigating the outsize power that a small number of platforms have over free speech online. It should certainly matter how heavy a burden a law imposes on platforms’ editorial judgment. But ordinary citizens have free speech interests, too, and First Amendment doctrine needs to account for those.
The last case also relates to the government’s power to regulate social media, though here the regulation takes the form of government speech rather than formal legislation. The case is mainly about efforts the White House and federal agencies undertook during the pandemic to impel the major platforms to suppress what the Biden administration believed to be dangerous misinformation about vaccines. Administration officials repeatedly requested or demanded that the platforms take down this content, sometimes berating them or vaguely threatening regulatory reprisal. At one point President Biden told the press that the platforms were “killing people” by failing to suppress vaccine misinformation more aggressively.
The question at the heart of the case is how courts should distinguish legitimate government speech from illegitimate government coercion. Here, once again, we are presented with a conflict—or at least a tension—between two competing claims, both of which sound in free speech. The plaintiffs, whose posts the platforms suppressed, argue that the government's pressure campaign was a form of censorship, and a particularly insidious one because of its informal character, which insulated the government's actions from the usual democratic checks.
But the best version of the government’s argument registers as a kind of free speech claim, too, even if the government doesn’t have free speech rights in the way that private actors do. A democratically elected government surely has a legitimate role to play in persuading private actors to be attentive to the public interest. And government speech is sometimes essential to informing autonomous decision-making by platforms and other private speech intermediaries—especially when the government has information that private decision-makers don’t, as is often the case with matters relating to public health.
Construing First Amendment rights so broadly that the government is precluded from sharing information and from encouraging powerful private corporations to act on it would compromise public discourse, not protect it. We need a First Amendment framework that can distinguish government speech that informs the platforms’ editorial autonomy from government speech that overrides it.
More broadly, we need a First Amendment that resolves conflicts among competing speech claims in the digital public sphere by privileging the speech that is most necessary for democracy. This will be a formidable challenge for the Court, but it could hardly be more important. Though we might wish it were otherwise, social media platforms are where a lot of public discourse takes place. It’s on these platforms that we hear from our elected leaders, hold them to account, learn about government policy, engage with other citizens, organize collective action, and advocate change—which is why the Court once described these platforms as among the “vast democratic forums of the internet.”
By making democracy its North Star, the Court can fulfill the promise implicit in that description and ensure that, in this sphere that has become so important to our society, the First Amendment does the work we need it to do.
Jameel Jaffer is executive director of the Knight First Amendment Institute.