It is a category error to assume that an old paradigm is obsolete simply because of the emergence, or even the dominance, of a new one. Although the title of Kate Klonick’s thoughtful essay sets Facebook and New York Times Co. v. Sullivan against each other by inserting a “v.” between them, one upshot of her piece, and indeed of much of her other important work in this area,
is that the First Amendment continues to play a critical role in resolving disputes about what people should be able to say online. What Klonick describes isn’t a face-off between social media content moderation and First Amendment law. It’s more like the story of a child who thinks herself equipped by her parents to understand the world but then finds herself in a novel setting—a semester abroad, a rave—where some of the rules she was taught don’t seem to help.

To unpack some of the free expression-related issues raised by social media, it is useful to separate the content moderation practices Klonick discusses into two related but distinct types: what can be said about whom, and who can say it. With respect to the first type, the influence of the First Amendment still reigns, and for good reason. It still works. To paraphrase Judge Easterbrook’s early critique of the law of cyberspace, general First Amendment rules have proven themselves adaptable to the “specialized endeavors” of social media,
even if the role of applying those rules has largely shifted from courts to moderators. The converse is also true. We should be careful not to let that which we perceive as special about the social media context overwhelm the soundness, wisdom, and relevance of the general rules.

With respect to the second type of practice, however, the First Amendment has not been nearly as helpful for resolving content moderation problems. And if social media has largely failed to become the engine for social change and political discourse that many hoped it would, the influence of First Amendment ideology on questions concerning who can speak online—and in particular its valorization of anonymous speech—is one reason why.
Limited-Purpose Public Figures and Social Media
As Klonick writes, in Gertz v. Robert Welch, Inc.
the U.S. Supreme Court extended its actual malice doctrine to defamation plaintiffs who have come to be known as “limited-purpose public figures”—otherwise private people who have voluntarily inserted themselves into controversies and thus become the subject of public discussion. The Court concluded that these people should, like public officials, have to show actual malice in defamation suits relating to the controversies of which they are a part, because they assume the risk of being talked about negatively and even falsely when they enter public debates “in order to influence the resolution of the issues involved.” The Court also responded to a potential unfairness in its ruling, insofar as it might sweep in people who do not choose to be a public figure. As Klonick notes, the Court said that “[h]ypothetically, it may be possible for someone to become a public figure through no purposeful action of his own, but the instances of truly involuntary public figures must be exceedingly rare.”

Klonick is absolutely correct that the internet has put the lie to the Court’s suggestion in Gertz that someone can involuntarily enter public debate only in the rarest of cases. By making public so much of day-to-day life that was formerly private, social media has been used to thrust publicity upon many individuals through no fault of their own. As Klonick also notes, there is a significant First Amendment risk in permitting the limited-purpose public figure doctrine to take online notoriety into account. To take proper account of this risk, however, it is helpful to consider the case not of Alex from Target but of a different Alex: Alex Jones.
Earlier this year, several parents of children killed at Sandy Hook Elementary School sued Alex Jones for defamation, pointing to Jones’s statements falsely implicating the parents in faking the shooting and the deaths of their children.
In his response to the suit, Jones has argued that the plaintiff-parents are limited-purpose public figures—that they have been discussed as part of, and have inserted themselves into, the larger controversy around gun rights in the United States—and that under Gertz they should therefore have to prove that his statements about them were made with actual malice. It is true that some Sandy Hook parents (though not the plaintiffs who have sued Jones) became vocal participants in the gun-control movement in the wake of the tragedy and that others have organized online to try to prevent future attacks. But Jones and similar defendants should not be able to expand the bounds of the controversies that they themselves create so as to raise the burden of proof for those implicated in the controversies who sue them for reputational harm. Making such individuals prove actual malice in defamation cases gets the First Amendment backward. It will encourage individuals to take the tragedies that happen to them and swallow them silently—to not get active, to not connect with others who have similar purposes, to not share in sorrow and attempt to make change.

No one would have volunteered for the kind of attention that the Sandy Hook parents have received, and no one would argue that the controversy of which they became a part was not widely discussed, particularly online. But if a court were to “dispatch with the ‘voluntary’ and ‘involuntary’ concepts altogether,” as Klonick proposes, and go on to find that the parents were public figures because of that attention alone, then future parents might not speak out at all, which would do significant harm to the marketplace of ideas that the First Amendment is intended to promote.
Courts should therefore continue to ask whether, consistent with Gertz, a plaintiff in a defamation case has acted voluntarily—exercising her will to undertake “a course of action that invites attention.”
This inquiry should not turn on whether a speech platform has itself facilitated discussion of that person. After all, Alex from Target voluntarily appeared on Ellen. But one should not be transformed into a public figure through no affirmative, purposeful act of one’s own. The longstanding law of defamation gets this right.

Judicial doctrine has another public figure rule that remains relevant online: A defamation defendant cannot cause a plaintiff to become a public figure by dint of the statements that gave rise to the claim.
The relevant controversy, in other words, “must have existed prior to the publication of the defamatory statement.” This too provides a useful heuristic for content moderation. Klonick describes a policy in which Facebook uses Google News to determine whether an individual is a public figure when deciding whether to take down bullying speech about or directed at that person. The case law suggests that if the results of that search include only stories about the complained-of bullying itself, then the victim of the bullying is a private person. Looking at the sheer number of Google News hits or at whether an individual is being discussed widely on social media obscures, rather than clarifies, these questions. Good old-fashioned First Amendment law does the job much better.

The same is true with regard to “newsworthy” yet harmful content generally. The “more protection for speech about issues and groups, less protection for speech about specific individuals” decision rule that Klonick appears to recommend (my paraphrase, not her words) loosely tracks the development of legal rules around group libel and hate speech in the years since the Supreme Court’s 1952 decision in Beauharnais v. Illinois.
In recent decades, the federal appellate courts have concluded that cases such as Sullivan have “so washed away the foundations of Beauharnais that it [cannot] be considered authoritative.” Accordingly, First Amendment doctrine already calls for general statements about groups to receive more protection than statements about particular individuals. Both Facebook’s “Community Standards” and Klonick’s proposed intervention point content moderation decisions regarding takedowns in the same direction as that doctrine. And even before the First Amendment entered the tort law picture at all, many state courts faced with claims of privacy violations or intentional infliction of emotional distress took into account a public interest- and newsworthiness-based “privilege to interfere” with the legal interests those torts protect—the very same considerations that social media companies take into account when deciding what to take down or keep up.

The challenge for Klonick’s decision rule, or for Facebook or Twitter in applying it, is that the hard cases are not those in which offending content is either about a specific individual or about a matter that is “generally” newsworthy. The hard cases are those in which the offending content is about a specific individual who is herself newsworthy. In such a case, it is perfectly legitimate to ask—again, as current doctrine does—whether the individual is herself a primary source of or reason for that newsworthiness and, if so, to count that fact as relevant with respect to that individual’s burden of proof if she sues the source of the content for defamation. This is the question the Supreme Court asked with respect to Elmer Gertz, and it is the question courts should ask with respect to Alex from Target or a Sandy Hook parent.
It is also a relevant question with respect to whether content about those individuals should be taken down by a social media moderator.

So, existing law is well equipped to handle public figure questions, even in the age of online oversharing. But content moderation policies concerning who may speak present an entirely different set of challenges. And the problem with these policies is not too little First Amendment, but rather too much.
For Every One Mrs. McIntyre, a Thousand Trolls
The Supreme Court has forcefully and consistently held that the right to express oneself anonymously is protected by the First Amendment. In the 1995 case McIntyre v. Ohio Elections Commission,
the Court declared that the right of Margaret McIntyre to express her opposition to a proposed school tax levy without putting her name to that opposition was rooted in a free speech tradition older than the republic. The speaker’s “decision in favor of anonymity may be motivated by fear of economic or official retaliation,” noted the Court, or “by concern about social ostracism . . . or merely by a desire to preserve as much of one’s privacy as possible.”

Individuals certainly do use the ability to speak pseudonymously on Twitter to express themselves in ways that would cause them harm if the expression were associated with their actual identities.
But verbal harassment, hate speech, doxxing, death threats, revenge pornography, and the like have all been turbocharged online by that same functionality. And the targets of that kind of expressive conduct are often the equivalent of the Jehovah’s Witnesses in the seminal First Amendment cases of the late 1930s and 1940s, or the socialists in the 1920s: members of politically unpopular, historically subordinated groups. It thus seems clear that social media companies have overlearned the lesson of the benefits of anonymous speech, and the lesson has come at a frightening cost.

Social science research bears out the commonsense conclusion that platforms that permit speech from anonymous, fake-name, and sham accounts are less civil than those that don’t. In one study, political scientist Ian Rowe compared reader comments on a Washington Post story made on the site itself, which permitted anonymous speech, to those made in response to the article’s posting on Facebook, which has a real-name policy. The anonymous comments were more uncivil, more disinhibited, and contained more ad hominem attacks against other commenters.
Anonymity, at least as a First Amendment-informed design principle for communications networks, tends to result in a degraded expressive environment, not an improved one.

Although it might be a marginally more civil place for political discourse than Twitter, Facebook is not free from blame. While the platform requires real names, its identity-verification policies are easy to circumvent. As we now know, this can facilitate not only harassing and offensive speech but also election interference by foreign states,
the dissemination of false propaganda, and, in the case of Myanmar’s Rohingya minority, literal genocide.

Value judgments about forum quality are certainly relative, and we each decide for ourselves how much ideals such as civility and trustworthiness are worth. But it bears remembering that the First Amendment is itself a significant impediment to government interventions that aim to improve deliberation and mitigate social harms on social media. Many believe that the content moderation policies of social media platforms, however self-serving or misguided, are themselves constitutionally protected speech. The First Amendment, consequently, is both a cause of the infection and an antibody that fights off several possible cures. No one should pretend that the First Amendment lights the path forward for many of the most significant problems facing online content moderation.
If we want to build a better speech space online, either the Governors or the Governed will have to lead the way. And if the Governors won’t act, it may be time to withdraw our consent to be governed by them.
© 2018, Enrique Armijo.
Cite as: Enrique Armijo, Meet the New Governors, Same as the Old Governors, 18-06.a Knight First Amend. Inst. (Oct. 30, 2018), https://knightcolumbia.org/content/meet-new-governors-same-old-governors [https://perma.cc/5XVG-VKV5].
See, e.g., Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).
Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 207, 207. Of course, the application of those rules by particular actors is a separate question, which raises its own set of issues of administrability, consistency, and transparency. See, e.g., Jason Koebler & Joseph Cox, The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People, Motherboard (Aug. 23, 2018), https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works; Sandra E. Garcia, Ex-Content Moderator Sues Facebook, Saying Violent Images Caused Her PTSD, N.Y. Times (Sept. 25, 2018), https://www.nytimes.com/2018/09/25/technology/facebook-moderator-job-ptsd-lawsuit.html.
418 U.S. 323 (1974).
Id. at 345.
Id.
See, e.g., Elizabeth Williamson, In Alex Jones Lawsuit, Lawyers Spar over an Online Broadcast on Sandy Hook, N.Y. Times (Aug. 1, 2018), https://www.nytimes.com/2018/08/01/us/politics/infowars-sandy-hook-alex-jones.html.
McDowell v. Paiewonsky, 769 F.2d 942, 949 (3d Cir. 1985); see also, e.g., Schultz v. Reader’s Digest Ass’n, 468 F. Supp. 551, 559 (E.D. Mich. 1979) (concluding there is no such thing as an involuntary public figure, given that the limited public figure category is confined to those who have thrust themselves in the vortex of a controversy); Chafoulias v. Peterson, 668 N.W.2d 642, 653 (Minn. 2003) (“‘The proper question is not whether the plaintiff volunteered for the publicity but whether the plaintiff volunteered for an activity out of which publicity would foreseeably arise.’” (quoting 1 Rodney A. Smolla, Law of Defamation § 2:32 (2d ed. 2002))).
See, e.g., Robert D. Sack, Sack on Defamation § 5.3.3 (3d ed. 2002).
Wells v. Liddy, 186 F.3d 505, 534, 541 (4th Cir. 1999).
343 U.S. 250 (1952).
Am. Booksellers Ass’n, Inc. v. Hudnut, 771 F.2d 323, 331 n.3 (7th Cir. 1985); see also Dworkin v. Hustler Magazine Inc., 867 F.2d 1188, 1200 (9th Cir. 1989).
David A. Anderson, Torts, Speech, and Contracts, 75 Tex. L. Rev. 1499, 1512 (1997); see also Alan E. Garfield, Promises of Silence: Contract Law and Freedom of Speech, 83 Cornell L. Rev. 261, 320–21 (1998) (“[E]ven before the First Amendment was invoked in private-facts cases, the common law recognized a First Amendment-like defense to the tort: no liability arises if the information disclosed is of legitimate concern to the public.” (internal quotation marks omitted)).
Although a court might not ask the same question about Charlottesville victim Heather Heyer based on the longstanding rule that postmortem defamation is not actionable, a social media content moderator could certainly consider these issues when deciding whether to take down the post about Heyer that Klonick describes at the outset of her essay.
514 U.S. 334 (1995).
Id. at 341–42.
See, e.g., Parker Higgins, Hey Twitter, Killing Anonymity’s a Dumb Way to Fight Trolls, Wired (Mar. 13, 2015), https://www.wired.com/2015/03/hey-twitter-killing-anonymitys-dumb-way-fight-trolls.
See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media 24 (2018) (“For years Twitter ha[s] been criticized for allowing a culture of harassment to fester largely unchecked on its service, particularly targeting women, but also the LGBTQ community, racial and ethnic minorities, participants of various subcultures, and public figures.”).
Ian Rowe, Civility 2.0: A Comparative Analysis of Incivility in Online Political Discussion, 18 Info., Commc’n & Soc’y 121 (2014).
See Nancy Scola, Massive Twitter Data Release Sheds Light on Russia’s Trump Strategy, Politico (Oct. 17, 2018), https://www.politico.com/story/2018/10/17/twitter-foreign-influence-operations-910005.
See Matthew Hindman & Vlad Barash, Disinformation, ‘Fake News’ and Influence Campaigns on Twitter, Knight Found. (Oct. 2018), https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/238/original/KF-DisinformationReport-final2.pdf.
Paul Mozur, A Genocide Incited on Facebook, with Posts from Myanmar’s Military, N.Y. Times (Oct. 15, 2018), https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
See Dakota Shane, Research Shows Users Are Leaving Facebook in Droves. Here’s What It Means for You., Inc. (Sept. 11, 2018), https://www.inc.com/dakota-shane/research-shows-users-are-leaving-facebook-in-droves-heres-what-it-means-for-you.html.
Enrique Armijo is a professor of law at the Elon University School of Law.