Last week, the Knight Institute and other free expression organizations filed an amicus brief in Twitter v. Taamneh, a case now before the Supreme Court. The case tees up high-stakes questions about whether and when tech companies can be held liable for hosting certain disfavored speech on their platforms. Depending on how the Court resolves it, the decision could effectively force platforms to engage in wide-ranging suppression of user speech to avoid liability under the Anti-Terrorism Act (ATA).
The Taamneh lawsuit was filed by relatives of Nawras Alassaf, a victim of the 2017 ISIS attack at a Turkish nightclub. Unable to hold ISIS itself accountable, Alassaf’s family sued Twitter, alleging that because ISIS had used Twitter to expand its reach, and because Twitter knew that ISIS had done so and failed to take sufficient countermeasures, Twitter had aided and abetted an act of international terrorism. The Ninth Circuit Court of Appeals agreed that the plaintiffs’ lawsuit could go forward, holding that an online platform with generalized knowledge that a terrorist organization used its service could be held liable under the ATA. Twitter petitioned the Supreme Court to review the case, and the Court agreed to do so.
On December 5, we submitted an amicus brief with the Center for Democracy & Technology, the ACLU, the ACLU Foundation of Northern California, the Electronic Frontier Foundation, the R Street Institute, and the Reporters Committee for Freedom of the Press. The brief urges the Court to reject the Ninth Circuit’s ruling. We argue that an overly expansive interpretation of aiding-and-abetting liability would lead intermediaries, such as social media platforms, to take down constitutionally protected speech rather than risk being sued. We point to a series of cases in which the Supreme Court has recognized that imposing liability on speech intermediaries can have precisely this effect.
In Taamneh, the plaintiffs argue that the platforms should be subject to suit under the ATA because they had generalized knowledge that terrorists use their services. What would happen if the Court adopted this rule? As we explain in the brief, platforms might respond in different ways—none of them good for free speech. Some might publish user-generated content only after platform employees have reviewed it in advance. Others might block users who speak about or report on terrorist groups, concerned that their posts could be viewed as providing assistance to terrorism. Still others might deploy overly restrictive content moderation algorithms to separate “good” speech from “bad.” Each of these approaches would suppress large amounts of constitutionally protected and socially beneficial speech, depriving the public of diverse political viewpoints and shutting down channels of information—all to avoid running afoul of the law. But neither speech about terrorism nor speech by someone associated with a terrorist group is categorically unprotected, and the government can’t directly or indirectly suppress these broad swaths of political speech. The First Amendment is meant to protect against exactly this kind of government intrusion.
In light of the very real risk of over-censorship, we argue that the Court should allow aiding-and-abetting liability only when platforms like Twitter have actual knowledge that a specific post, video, or other piece of user-generated content provides substantial assistance to a terrorist act. The standard we propose would allow plaintiffs to recover against platforms in some contexts. But it would avoid the significant First Amendment problems created by the Ninth Circuit’s rule, and help ensure that online platforms remain spaces for open and vibrant discussion and debate.
Anna Diakun is a staff attorney and the fellowship program's managing attorney at the Knight Institute.