In Section 230 of the Communications Decency Act, lawmakers thought they were devising a safe harbor for online providers engaged in self-regulation. The goal was to encourage platforms to “clean up” offensive material online. Yet Section 230’s immunity has been stretched far beyond that purpose to immunize platforms that solicit or deliberately host illegality. As Olivier Sylvain’s thoughtful essay shows, it has been invoked to shield from liability platforms whose architectural choices lead ineluctably to illegal discrimination.
Section 230’s immunity provision has secured important breathing space for innovative new ways to work, speak, and engage with the world. But the law’s overbroad interpretation has been costly to expression and equality, especially for members of traditionally subordinated groups.
This response piece highlights Sylvain’s important normative contributions to the debate over Section 230 and offers practical reinforcement for his reading of the statute. Our central disagreement concerns the way forward. Congress should revise Section 230’s safe harbor to apply only to platforms that take reasonable steps to address unlawful activity. I end with thoughts about why it is time for platforms to pair their power with responsibility.
The “Decency” Act and Its Costs to Free Speech and Equality
In the technology world, Section 230 of the Communications Decency Act (CDA) is a kind of sacred cow—an untouchable protection of near-constitutional status.
It is, in some circles anyway, credited with enabling the development of the modern internet. As Emma Llansó recently put it, “Section 230 is as important as the First Amendment to protecting free speech online.”

Before we tackle the current debate, it is important to step back to the statute’s history and purpose. The CDA, which was part of the Telecommunications Act of 1996, was not a libertarian enactment. At the time, online pornography was considered the scourge of the age. Senators James Exon and Slade Gorton introduced the CDA to make the internet safe for kids. Besides proposing criminal penalties for the distribution of sexually explicit material online, the Senators underscored the need for private sector help in reducing the volume of noxious material online. In that vein, Representatives Christopher Cox and Ron Wyden offered an amendment to the CDA entitled “Protection for Private Blocking and Screening of Offensive Material.” The Cox-Wyden amendment, codified in Section 230, provided immunity from liability for “Good Samaritan” online service providers that either over- or under-filtered objectionable content.
Twenty years ago, federal lawmakers could not have imagined how essential to modern life the internet would become. The internet was still largely a tool for hobbyists. Nonetheless, Section 230’s authors believed that “if this amazing new thing — the Internet — [was] going to blossom,” companies should not be “punished for trying to keep things clean.”
Cox recently noted that “the original purpose of [Section 230] was to help clean up the Internet, not to facilitate people doing bad things on the Internet.” The key to Section 230, explained Wyden, was “making sure that companies in return for that protection — that they wouldn’t be sued indiscriminately — were being responsible in terms of policing their platforms.”

Courts, however, have stretched Section 230’s safe harbor far beyond what its words, context, and purpose support. Attributing the broad interpretation of Section 230 to “First Amendment values [that] drove the CDA,” courts have extended immunity from liability to platforms that republished content knowing it violated the law; solicited illegal content while ensuring that those responsible could not be identified; altered their user interface to ensure that criminals could not be caught; and sold dangerous products.
Granting immunity to platforms that deliberately host, encourage, or facilitate illegal activity would have seemed absurd to the CDA’s drafters.
The law’s overbroad interpretation means that platforms have no reason to take down illicit material and that victims have no leverage to insist that they do so. Rebecca Tushnet put it well a decade ago: Section 230 ensures that platforms enjoy “power without responsibility.”

Although Section 230 has been valuable to innovation and expression, it has not been the net boon for free speech that its celebrants imagine. The free expression calculus devised by the law’s supporters fails to consider the loss of voices in the wake of destructive harassment that platforms have encouraged or deliberately tolerated. As ten years of research has shown, cyber mobs and individual harassers shove people offline with sexually threatening and sexually humiliating abuse; targeted individuals are more often women, women of color, lesbian and trans women, and sexual minorities. The benefits that Section 230’s immunity has enabled likely could have been secured at a lesser price.

Discriminatory Design and What to Do About It
Olivier Sylvain wisely urges us to think more broadly about the costs to historically disadvantaged groups wrought by Section 230’s overbroad interpretation. Platforms disadvantage the vulnerable not just through their encouragement of cyber mobs and individual abusers but also through their design choices. As Sylvain argues, conversations about Section 230’s costs to equality should include the ways that a platform’s design can “predictably elicit or even encourage expressive conduct that perpetuates discrimination.”
Sylvain’s focus on discriminatory design deserves the attention of courts and lawmakers. More than twenty years ago, Joel Reidenberg and Lawrence Lessig highlighted code’s role in channeling legal regulation and governance.
A platform’s architecture can prevent illegal discrimination, just as it can be designed to protect privacy, expression, property, and due process rights. As Sylvain has shown, platforms have instead chosen architectures that undermine legal mandates. Airbnb’s site, for instance, asks guests to include real names in their online profiles even though the company knows illegal discrimination is sure to result. As studies have shown, Airbnb guests with distinctively African-American names are 16 percent less likely to be accepted relative to identical guests with distinctively White names.
Facebook’s algorithms mine users’ data to create categories from which advertisers choose, including ones that facilitate illegal discrimination in hiring and housing.

Sylvain’s normative argument is compelling. Platforms are by no means “neutral,” no matter how often or loudly tech companies say so. They are not merely publishing others’ content when their carefully devised user interfaces and algorithms damage minorities’ and women’s opportunities. When code enables invidious discrimination, law should be allowed to intervene.
Facebook has built an advertising system that inevitably results in fair housing violations. Airbnb’s user interface still requires guests to include their names, which predictably results in housing discrimination. Sylvain is right—platforms should not enjoy immunity from liability for architectural choices that violate anti-discrimination laws.

The question, of course, is strategy. Do we need to change Section 230 to achieve Sylvain’s normative ends? In my view, no. Section 230 should not be read to immunize platforms from liability related to user interface or design. Platforms are being sued for their code’s illegality, not for their users’ illegality or the platforms’ subsequent over- or under-removal of content. What is legally significant is the platform’s adoption of a design (such as Facebook’s algorithmic manipulation of user data to facilitate ads) that enables illegal discrimination.
Sylvain’s argument finds support in recent state and federal enforcement efforts. For instance, in a suit against revenge porn operator Craig Brittain, the Federal Trade Commission (FTC) argued that it was unfair—and a violation of Section 5 of the FTC Act—for Brittain to exploit individuals’ personal information shared in confidence for financial gain. The FTC’s theory of wrongdoing had roots in prior decisions related to companies that unfairly induced individuals to betray another’s trust. Theories of inducement focus on acts, not the publication of another’s speech. Section 230 would not bar such actions because they are not premised on platforms’ publication of another’s speech but rather on platforms’ inducing others to breach a trust. So, too, with claims asserting that a platform’s wrongful activity is its design that induces or enables illegal discrimination.
What if courts are not convinced by this argument? Sylvain urges Congress to maintain the immunity but to create an explicit exception from the safe harbor for civil rights violations. He notes that other exceptions could be added, such as those related to combating nonconsensual pornography, sex trafficking, or child sexual exploitation.
A recent example of that approach is the Stop Enabling Sex Traffickers Act, which passed the Senate by an overwhelming vote. The bill would amend Section 230 by rendering websites liable for hosting sex trafficking content.

Congress, however, should avoid a piecemeal approach.
Carving out exceptions risks leaving out other areas of the law that should not be immunized. The statutory scheme would also require updating each time a new problem arose that seemed to demand an exception, and legislation requiring piece-by-piece exemptions would, most likely, not get updated.

Benjamin Wittes and I have offered a broader though balanced legislative fix. In our view, platforms should enjoy immunity from liability if they can show that their response to unlawful uses of their services is reasonable. Accordingly, Wittes and I have proposed a revision to Section 230(c)(1) as follows (revised language is italicized):
No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.
The determination of what constitutes a reasonable standard of care would take into account differences among online entities. Internet service providers (ISPs) and social networks with millions of postings a day cannot plausibly respond to complaints of abuse immediately, or perhaps even within a day or two. On the other hand, they may be able to deploy technologies to detect content previously deemed unlawful.
The duty of care will evolve as technology improves.

A reasonable standard of care will reduce opportunities for abuse without interfering with the further development of a vibrant internet or unintentionally turning innocent platforms into involuntary insurers for those injured through their sites. Approaching the problem as one of setting an appropriate standard of care more readily allows for differentiating among kinds of online actors, setting a different rule for websites designed to facilitate mob attacks or enable illegal discrimination from the one that would apply to large ISPs linking millions to the internet.
Parting Thoughts
We have come to an important inflection point. The public is beginning to understand the extraordinary power that platforms wield over our lives. Consider the strong, negative public reaction to journalistic reports of Cambridge Analytica’s mining of Facebook data to manipulate voters, or Facebook’s algorithms allowing advertisers to reach users who “hate Jews,” or YouTube’s video streams that push us to ever more extreme content.
Social media companies can no longer hide behind the notion that they are neutral platforms simply publishing people’s musings. Their terms-of-service agreements and content moderation systems determine whether content is seen and heard or muted and blocked. Their algorithms dictate the advertisements that are visible to job applicants and home seekers. Their systems act with laser-like precision to target, score, and manipulate each and every one of us.

To return to Rebecca Tushnet’s framing, with power comes responsibility. Law should change to ensure that such power is wielded responsibly. Content intermediaries have moral obligations to their users and others affected by their sites, and companies are beginning to recognize this. As Mark Zuckerberg told CNN, “I’m not sure we shouldn’t be regulated.”
While the internet is special, it is not so special that the ordinary rules of law should not apply to it. Online platforms facilitate expression, along with other key life opportunities, but no more so than do workplaces, schools, and various other civic institutions, which are zones of conversation yet are not categorically exempted from legal responsibility for operating safely. The law has not destroyed expression in workplaces, homes, and other social venues.
When courts began recognizing claims under Title VII for sexually hostile work environments, employers argued that the cost of liability would force them to shutter and, if not, would ruin the camaraderie of workspaces.
That grim prediction has not come to pass. Rather, those spaces are now available to all on equal terms, and bricks-and-mortar businesses have more than survived in the face of Title VII liability. The same should be true for networked spaces. We must make policy for the internet and society that we actually have, not the internet and society that we believed we would get twenty years ago.

© 2018, Danielle Keats Citron.
Cite as: Danielle Keats Citron, Section 230's Challenge to Civil Rights and Civil Liberties, 18-02.b Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/section-230s-challenge-civil-rights-and-civil-liberties [https://perma.cc/V54Q-D3Z9].
See Electronic Frontier Foundation, CDA 230: The Most Important Law Protecting Internet Speech, http://www.eff.org/issues/cda230/legal (last visited Mar. 26, 2018).
See Christopher Zara, The Most Important Law in Tech Has a Problem, Wired (Jan. 3, 2017), http://www.wired.com/2017/01/the-most-important-law-in-tech-has-a-problem.
Alina Selyukh, Section 230: A Key Legal Shield for Facebook, Google Is About to Change, NPR (Mar. 21, 2018), http://www.wbur.org/npr/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change (quoting Llansó).
See Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity, 86 Fordham L. Rev. 401 (2017).
S. Rep. No. 104-23, at 59 (1995). Key provisions criminalized the transmission of indecent material to minors.
Id.
H.R. Rep. No. 104-223, Amendment No. 2-3 (1995) (proposed to be codified at 47 U.S.C. § 230).
47 U.S.C. § 230(c) (2012); see H. Conf. Rep. No. 104-458 (1996).
Selyukh, supra note 3.
Id. (quoting Cox).
Id. (quoting Wyden).
See Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. Rev. 61, 118 (2009). In the landmark Reno v. ACLU decision, the Supreme Court struck down the CDA’s blanket restrictions on internet indecency under the First Amendment. Reno v. ACLU, 521 U.S. 844, 853 (1997). Online expression was too important to be limited to what government officials think is fit for children. Id. at 875. Section 230’s immunity provision, however, was left intact.
Jane Doe No. 1 v. Backpage.com LLC, 817 F.3d 12, 25 (1st Cir. 2016). The judiciary’s insistence that the CDA reflected “Congress’ desire to promote unfettered speech on the Internet” so ignores its text and history as to bring to mind Justice Scalia’s admonition against selectively determining legislative intent in the manner of someone at a party who “look[s] over the heads of the crowd and pick[s] out [their] friends.” Antonin Scalia, A Matter of Interpretation: Federal Courts and the Law 36 (1997).
Shiamili v. Real Estate Group of New York, 2011 WL 2313818 (N.Y. June 14, 2011); Phan v. Pham, 2010 WL 658244 (Cal. Ct. App. Feb. 25, 2010).
Jones v. Dirty World Entertainment Holding, 2014 WL 2694184 (6th Cir. June 16, 2014); S.C. v. The Dirty LLC, No. 11-CV-00392-DW (W.D. Mo. March 12, 2012).
817 F.3d 12 (1st Cir. 2016).
See, e.g., Hinton v. Amazon, 72 F. Supp. 3d 685, 687 (S.D. Miss. 2014).
Cox recently said as much: “I’m afraid . . . the judge-made law has drifted away from the original purpose of the statute.” Selyukh, supra note 3. In his view, sites that solicit unlawful materials or have a connection to unlawful activity should not enjoy Section 230 immunity. Id.
See Citron, supra note 12, at 118; Mark A. Lemley, Rationalizing Internet Safe Harbors, 6 J. Telecomm. & High Tech. L. 101 (2007); Doug Lichtman & Eric Posner, Holding Internet Service Providers Accountable, 14 Sup. Ct. Econ. Rev. 221 (2006).
Rebecca Tushnet, Power Without Responsibility: Intermediaries and the First Amendment, 76 Geo. Wash. L. Rev. 986 (2008).
See Jack M. Balkin, The Future of Free Expression in a Digital Age, 36 Pepp. L. Rev. 427, 434 (2009).
See Citron & Wittes, supra note 4, at 410; Danielle Keats Citron & Neil M. Richards, Four Preconditions for Free Expression in the Digital Age, Wash. U. L. Rev. (forthcoming).
See, e.g., Maeve Duggan, Pew Research Ctr., Online Harassment 2017, at 31 (2017), http://assets.pewresearch.org/wp-content/uploads/sites/14/2017/07/10151519/PI_2017.07.11_Online-Harassment_FINAL.pdf (finding that people “whose most recent incident involved severe forms of harassment are more likely to say they changed their username or deleted their profile, stopped attending offline venues[,] or reported the incident to law enforcement”). The individual and societal costs are considerable when victims go offline, lose their jobs and cannot find new ones, or suffer extreme emotional harm in the face of online abuse. See generally Danielle Keats Citron, Civil Rights in Our Information Age, in The Offensive Internet 31 (Saul Levmore & Martha C. Nussbaum eds., 2010).
See Danielle Keats Citron, Hate Crimes in Cyberspace (2014); Citron, supra note 12; Danielle Keats Citron, Law’s Expressive Value in Combating Cyber Gender Harassment, 108 Mich. L. Rev. 373 (2009); Danielle Keats Citron, Online Engagement on Equal Terms, B.U. L. Rev. Online (2015); Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345 (2014); Mary Anne Franks, Unwilling Avatars: Idealism and Discrimination in Cyberspace, 20 Colum. J. Gender & L. 220 (2011); Mary Anne Franks, Sexual Harassment 2.0, 71 Md. L. Rev. 655 (2012); Danielle Keats Citron, Yale ISP—Reputation Economies in Cyberspace Part 3, YouTube (Dec. 8, 2007), http://www.youtube.com/watch?v=XVEL4RfN3uQ.
Lawrence Lessig, Code and Other Laws of Cyberspace 60 (1999) (“How the code regulates . . . [is a] question[] on which any practice of justice must focus in the age of cyberspace.”); Joel R. Reidenberg, Lex Informatica: The Formulation of Information Policy Rules Through Technology, 76 Tex. L. Rev. 553, 554 (1998) (exploring how system design choices provide sources of rulemaking and make a “useful extra-legal instrument that may be used to achieve objectives that otherwise challenge conventional laws”).
Woodrow Hartzog’s new book Privacy’s Blueprint: The Battle to Control the Design of New Technologies (2018) demonstrates how the design of digital technologies determines our privacy rights.
Jonathan Zittrain, The Future of the Internet—And How to Stop It 101–26 (2008).
See generally Eduardo Moisés Peñalver & Sonia K. Katyal, Property Outlaws: How Squatters, Pirates, and Protesters Improve the Law of Ownership (2010).
Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249 (2008); Danielle Keats Citron, Open Code Governance, 2008 U. Chi. Legal F. 355.
Benjamin Edelman et al., Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment, Am. Econ. J.: Applied Econ., Apr. 2017, at 1.
Cf. Danielle Keats Citron, Mainstreaming Privacy Torts, 98 Cal. L. Rev. 1805, 1836–40 (2010) (considering potential claims for tortious enablement of criminal conduct against platforms that paved the way for foreseeable privacy invasions and cyberstalking, as well as the obstacles to such claims, including Section 230’s immunity provision and decisions like Roommates.com).
Danielle Citron & Woodrow Hartzog, The Decision That Could Finally Kill the Revenge Porn Business, Atlantic (Feb. 3, 2015), http://www.theatlantic.com/technology/archive/2015/02/the-decision-that-could-finally-kill-the-revenge-porn-business/385113 (discussing the FTC’s consent decree with revenge porn operator); see also Complaint for Permanent Injunction and Other Equitable Relief, FTC v. EMP Media, Inc., No. 2:18-cv-00035 (D. Nev. Jan. 9, 2018), http://www.ftc.gov/system/files/documents/cases/1623052_myex_complaint_1-9-18.pdf; Stipulated Order for Permanent Injunction and Monetary Judgment as to Defendant Aniello Infante, FTC v. EMP Media, Inc., 2:18-cv-00035-APG-NJK (D. Nev. Jan. 10, 2018), http://www.ftc.gov/system/files/documents/cases/1623052myexinfanteorder.pdf.
Then-California Attorney General Kamala Harris prosecuted revenge-porn site operators for exploiting confidential nude images for commercial ends. Revenge porn operator Kevin Bollaert, for instance, unsuccessfully raised Section 230 as a defense to state criminal prosecution. Danielle Citron, Can Revenge Porn Operators Go to Prison?, Forbes (Jan. 17, 2015), http://www.forbes.com/sites/daniellecitron/2015/01/17/can-revenge-porn-operators-go-to-jail.
Citron & Hartzog, supra note 31.
I contemplated this possibility in my book Hate Crimes in Cyberspace, supra note 24. Benjamin Wittes and I offered this possibility as an intermediate, though not ideal, step in recent work. Citron & Wittes, supra note 4, at 419.
See Colin Lecher, Senate Passes Controversial Anti-Sex Trafficking Bill, Verge (Mar. 21, 2018), http://www.theverge.com/2018/3/21/17147688/senate-sesta-fosta-vote-anti-sex-trafficking.
Id. The House passed its own version last month. Id. Quinta Jurecic and I will be writing about that proposal in the near future.
Citron & Wittes, supra note 4, at 419.
Id.
What comes to mind is Facebook’s effort to use hashing technology to detect and remove nonconsensual pornography that has already been banned as a terms-of-service violation. I serve on a small task force advising Facebook about the use of screening tools to address the problem of nonconsensually posted intimate images.
Current screening technology is far more effective against some kinds of abusive material than others; progress may produce cost-effective means of defeating other attacks. With current technologies, it is difficult, if not impossible, to automate the detection of certain illegal activity. That is certainly true of threats, which require an understanding of context to determine their objectionable nature.
Citron & Richards, supra note 22. Julia Angwin, writing for the Wall Street Journal and now ProPublica, has been a pioneer in this effort, educating the public on the various and sundry ways that tech companies control crucial aspects of our lives and our personal data.
See Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 Notre Dame L. Rev. 1035 (2018).
See Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 1 (2018).
See generally Ryan M. Calo, Digital Market Manipulation, 82 Geo. Wash. L. Rev. 995 (2014).
Mark Zuckerberg in his Own Words, CNN (Mar. 21, 2018), http://money.cnn.com/2018/03/21/technology/mark-zuckerberg-cnn-interview-transcript/index.html.
See Jeffrey Rosen, The Unwanted Gaze: The Destruction of Privacy in America 107 (2001).
Danielle Keats Citron is the Morton & Sophia Macht Professor of Law at the University of Maryland Francis King Carey School of Law.