On October 20, 2023, the Knight Institute will host a closed convening to explore the question of jawboning: informal government efforts to persuade, cajole, or strong-arm private platforms to change their content-moderation policies. Participants in that workshop have written short notes to outline their thinking on this complex topic, which the Knight Institute is publishing in the weeks leading up to the convening. This blog post is part of that series.
***
The U.S. Supreme Court may soon weigh in on a vitally important First Amendment question: When do government efforts to pressure platforms to take down speech become unconstitutional? On September 14, the Biden administration asked the Court to pause a lower court order restricting the federal government’s communications with the platforms as it decides whether to review this question. The filing comes after a recent Fifth Circuit Court of Appeals decision, which affirmed a district court judgment finding that members of the Biden administration, the FBI, and the CDC likely violated the First Amendment by pressuring social media platforms to suppress specific content, including misinformation related to COVID-19. While the Fifth Circuit’s decision rightly narrowed the scope of the district court’s preliminary injunction, its analysis of the plaintiffs’ First Amendment claims conflated legal tests intended to answer very different questions, missing an opportunity to adapt the relevant standard to the social media context. Now, it is likely the Supreme Court will be tasked with providing much-needed clarity in this important area of law.
The Knight Institute has not yet taken a position on precisely how the courts should address claims relating to government “jawboning” of social media platforms, and the Institute is hosting a closed convening in October to address how First Amendment doctrine in this area should be clarified or changed. The analysis below is our own preliminary view. As we explain, we think courts should focus narrowly on one central question: Did the government coerce a private intermediary to censor protected speech?
The Missouri v. Biden Lawsuit
In Missouri v. Biden, three doctors, a news website, a “healthcare activist,” and two states sued Biden administration officials and several federal agencies, alleging that communications between the government and major platforms like Facebook and Twitter concerning their moderation of posts on several topics—the COVID-19 pandemic, elections, and the Hunter Biden laptop story—violated the First Amendment. Earlier this summer, district court Judge Terry A. Doughty issued an opinion in favor of the plaintiffs, marking the first time that a court has ruled that a government actor violated the First Amendment by pressuring social media platforms to moderate content. The government immediately appealed. On September 8, the Fifth Circuit affirmed the lower court’s decision, finding that government officials likely violated the First Amendment in their various interactions with the platforms.
The Significance of Bantam Books
Missouri v. Biden is not the first time federal courts have taken up the question of whether the government coerced a private intermediary to censor protected speech. The Supreme Court considered the same issue in a 1963 case, Bantam Books v. Sullivan. There, the Court reviewed a challenge to the actions of a state commission created to protect minors from obscene or offensive materials. The commission sent notices to book distributors threatening prosecution unless they pulled specific books and magazines from circulation. The commission then followed up on its requests, sending local police officers to the distributors to ascertain their compliance. Unsurprisingly, the distributors refused to fill new orders for materials the commission deemed unfit and sent field workers to remove unsold books from retailers’ shelves. The Court held that the commission’s actions amounted to a scheme of informal censorship that violated the First Amendment: the notices were intended to intimidate book distributors and retailers with the threat of criminal prosecution, and they resulted in the suppression of the sale and circulation of the targeted publications.
The principle established in Bantam Books carries new importance in the age of social media. Just as the First Amendment prohibits the government from coercing booksellers to suppress the circulation of objectionable materials, it also prohibits government officials from coercing social media platforms to censor the speech of their users. In a world where social media companies have become gatekeepers to public discourse, their decisions about what speech to publish—and the government’s efforts to influence those decisions—are all the more significant.
But the Bantam Books Court intentionally drew the line at coercive—not merely persuasive—speech. In the opinion, Justice Brennan emphasized that government officials need not renounce all informal contacts with intermediaries, clarifying, for example, that “consultation genuinely undertaken with the purpose of aiding” an intermediary would not run afoul of the First Amendment. In other words, not every communication between the government and a platform raises constitutional concerns, even where those communications are contentious. Instead, the Bantam Court viewed as dispositive the fact that the commission threatened legal sanctions, going “beyond advising the distributors of their legal rights and liabilities.”
The Court’s decision to draw the line at coercion makes intuitive sense to us. This standard acknowledges that it may sometimes be appropriate for government officials to communicate with intermediaries about the speech they publish, such as by explaining the impact of their editorial decisions on public health or safety. The role of the courts is to scrutinize those interactions closely, consider the competing interests at stake, and determine whether the government’s communications were efforts to coerce the platforms into censoring speech.
The State Action Distraction: Blum v. Yaretsky
Although Bantam Books provides the proper test for determining when government efforts to coerce speech intermediaries into suppressing speech violate the First Amendment, many courts have incorrectly analyzed such claims under a 1982 Supreme Court case called Blum v. Yaretsky. In Blum, a group of Medicaid recipients alleged that their due process rights were violated when private nursing homes discharged or transferred them to a different level of care without notice or a hearing. Because federal regulations encouraged nursing homes to discharge or transfer Medicaid patients whenever possible to cut costs, the plaintiffs argued that the decision to move them to a different level of care was attributable to the government, thus reflecting “state action” subject to constitutional review.
Blum, then, is a case about when the government can be held liable for the actions of a private entity. The test from Blum, accordingly, asks whether the government “has exercised coercive power or has provided such significant encouragement that the choice must in law be deemed to be that of the state.” This test overlaps in part with the First Amendment test from Bantam (both tests contemplate government liability for coercing private actors), but it is a general test for state action that applies to any claim seeking to hold government actors liable for the conduct of a private entity. It is not a First Amendment–specific test, and applying it to jawboning cases is problematic for two main reasons.
First, the state-action test from Blum sets a high bar, but courts that have applied it to jawboning claims have diluted it in ways that may have far-reaching consequences. Generally speaking, the test for state action is intended to be a demanding one. When a court finds that a private actor’s decision reflects state action, the consequence is that both the government and the private actor may be held liable for violating the Constitution. This is a significant exception to the normal rule, recently reaffirmed by the Supreme Court in Halleck, that private actors should not be easily transformed into state actors subject to constitutional constraints. Yet lower courts applying Blum in jawboning cases have watered the test down. The Fifth Circuit, for example, as we discuss below, held that the CDC’s responses to platform inquiries about COVID-19 misinformation constituted “significant encouragement” under Blum, even though it’s quite a leap to conclude that the platforms’ eventual decision to act on the misinformation “must be deemed to be that of the state.” This dilution of Blum has left the state-action doctrine muddled: it offers little guidance to the government in jawboning cases, and it may significantly expand the platforms’ exposure to First Amendment requirements that ordinarily apply only to the government.
Second, it isn’t clear that Blum can sufficiently account for the First Amendment rights of private intermediaries. As we’ve argued in other cases, social media platforms have a First Amendment right to make editorial decisions about which content to publish on their platforms. But under the Fifth Circuit’s reading of Blum, a platform’s decision to solicit input from the government could expose it to state-action liability, making the platform hesitant to seek guidance from the state about the content it publishes. In some circumstances, a related injunction might expressly prohibit a platform from communicating with the government about content moderation issues. A broad application of Blum, then, increases the likelihood that platforms’ voluntary communications with the government will be chilled, or enjoined by the courts. Either result violates the platforms’ editorial decision-making rights.
If the goal of jawboning doctrine is to deter the government from coercing or incentivizing platforms to make editorial decisions according to the government’s preferences, Bantam Books provides a framework that keeps the focus where it should be—on the government’s coercive actions—without resorting to Blum.
The Fifth Circuit’s Misguided Analysis
In many ways, the Fifth Circuit’s flawed opinion in Missouri v. Biden illustrates precisely why clarity on the applicable law here is much needed. While the court appropriately narrowed the scope of the district court’s injunction, it fell short in a few critical ways.
First, the court incorrectly analyzed the First Amendment claims under a broad reading of Blum v. Yaretsky instead of Bantam Books. As we noted above, many courts have read Blum’s “significant encouragement” test to reach a diluted form of coercion, encompassing a whole host of government speech, including speech intended to offer guidance or to persuade. That is precisely what happened in this case.
Consider the court’s analysis of the platforms’ interactions with the CDC. The court notes that “the platforms asked CDC officials to decide whether certain claims [about COVID] were misinformation” and “in response CDC officials told the platforms whether such claims were ‘true or false ... misleading ... or needed to be addressed by CDC-backed labels.’” Recognizing that this back-and-forth could not conceivably be characterized as coercion, the court, drawing on Blum, concluded that the CDC “significantly encouraged the platforms’ moderation decisions,” and therefore attributed that decision-making to the state. Yet this reliance on the CDC appears, on our read, to be precisely what the platforms wanted. Throughout the pandemic, many platforms independently developed policies related to COVID-19 misinformation and affirmatively sought guidance from government agencies like the CDC about how to implement those policies. It’s difficult to imagine another authority the platforms should have—or even could have—relied on in the midst of a global public health crisis. But more importantly, and as we’ve discussed above, seeking such guidance from the government is squarely within the platforms’ own free-speech rights. Treating this decision as state action—and effectively prohibiting the platforms from receiving government input—actually violates the platforms’ First Amendment rights.
Second, even when the court properly analyzed the First Amendment claims under Bantam, it failed to adapt the test to the social media age. Instead, the court applied a four-part test meant to flesh out the meaning of coercion from Bantam Books, by analyzing (1) the government official’s word choice and tone; (2) the recipient’s perception; (3) whether the official has regulatory authority; and (4) whether the speaker refers to adverse consequences. But government interactions with the largest social media platforms require a bigger-picture perspective. The outsized power social media companies wield is surely relevant to the question of whether they were coerced. The court noted that White House officials’ private messages frequently included “foreboding, inflammatory, and hyper-critical phraseology,” and that public statements—like President Biden’s comment that the platforms “were killing people”—revealed something beyond mere requests. True, these statements may be coercive in some contexts. But surely there is a difference between statements directed at, for instance, a small independent distributor circulating books and statements directed at Facebook, one of the world’s largest and most powerful tech companies. That is not to say that social media platforms cannot be coerced—it’s possible that the platforms’ close relationships with government agencies are themselves an effort to avoid friction with the government, and an internal Facebook email suggests the company did in fact remove one post due to government pressure. Indeed, our view is that some of the communications from the Biden administration likely did cross the constitutional line. But the platforms’ sheer power nonetheless raises serious doubts about whether every statement the Fifth Circuit relied upon in fact coerced the platforms. Notably, the government pointed to evidence that the platforms frequently refused to comply with government requests.
Finally, the court gave the plaintiffs’ claims of a vast “conspiracy” too much weight, failing to examine and contextualize the various communications at issue objectively. The Fifth Circuit justified its conclusion that the platforms’ content moderation decisions were state action by noting that the Court “has rarely been faced with a coordinated campaign of this magnitude, orchestrated by federal officials.” But the factual background laid out in the opinion reveals no “coordinated campaign.” It describes various agencies working with the platforms to address distinct content moderation issues at different points in time. The CDC provided guidance on COVID-19 misinformation to the platforms early in the pandemic, before the Biden administration took office, and at the platforms’ request. The FBI separately met with the platforms to warn them about a possible influence operation targeting the 2020 presidential election. The Biden administration, while rolling out COVID-19 vaccines during its first year, focused its efforts on minimizing vaccine misinformation. While some of the communications highlighted in the opinion—particularly those from White House officials—are troubling and potentially unconstitutional, characterizing these unrelated events as a “coordinated campaign” strains credulity.
A lack of clarity in the doctrine certainly doesn’t help fend off such conspiratorial claims. Without a clear line distinguishing coercive government communications from permissible government speech, and without a deeper understanding of the complex relationship between social media platforms and the government, even routine official interactions with the platforms can easily be misunderstood, mischaracterized, and ultimately misjudged by the courts.
Conclusion
The unprecedented growth of social media over the last decade has solidified its role as a primary forum for speech and political expression. The risk that the platforms could become targets of government efforts to suppress protected speech on a mass scale is beyond doubt. For that reason, the Fifth Circuit’s decision is particularly dissatisfying, and it underscores the need for clarity in this area of law. In our view, if the Supreme Court decides to weigh in, it is critical that it develop a legal framework that protects users’ speech rights, preserves the editorial rights of the platforms, and recognizes that not all government communications with the platforms are inherently coercive or suspect.
Mayze Teitler was a legal fellow at the Knight First Amendment Institute.