Section 230 of the Communications Decency Act of 1996 is widely credited with helping free expression flourish online. With limited exceptions, internet service providers, social networking sites, and other online intermediaries are protected under Section 230 against most civil claims and state criminal claims arising from the third-party content they host. This immunity has allowed intermediaries to publish enormous volumes of speech. Yet in so doing, it has arguably shaped the development of the public sphere in problematic ways: subsidizing digital platforms over analog ones, rewarding reliance on user-generated rather than employee-generated content, and allowing website operators to avoid internalizing many of the social costs of the materials they disseminate. Without the expansive immunity granted by Section 230, the internet might not have become the remarkably rich discursive domain that it is today. It also might not be quite so saturated with racist, misogynistic, defamatory, fraudulent, and otherwise harmful speech.
That, at least, is the premise of Olivier Sylvain’s new paper, “Discriminatory Designs on User Data.” Sylvain worries that Section 230 doctrine has drifted away from the goal of encouraging intermediaries to clean up the tortious and discriminatory content on their sites, and that the human costs of this immunity regime have been borne disproportionately by women and by racial and ethnic minorities, who are subject to myriad forms of online mistreatment and abuse. Sylvain calls attention, in particular, to the ways in which intermediaries’ interface design features may enable or elicit such behaviors. Airbnb’s requirement that users share profile names and photographs, information that can signal a guest’s race, has for example facilitated widespread racial discrimination by its hosts. Civil rights groups have alleged that Facebook’s marketing categories allow advertisers to exclude protected groups in contravention of fair housing statutes.
Although he does not go into detail, Sylvain suggests that intermediaries that knowingly or negligently facilitate the distribution of unlawful content should not benefit from Section 230 immunity, at least when violations of civil rights laws are at issue. Critics of this proposal will worry about chilling effects on lawful speech. But Sylvain maintains that the status quo already chills lawful speech — the speech of members of vulnerable groups — and that a more nuanced approach to intermediary liability could bring internet law into greater harmony with anti-discrimination norms while increasing the vitality and diversity of online expression. One way to read Sylvain’s paper, then, is as a brief against the fatalistic claim that intermediary immunity simply cannot be reined in without destroying the dynamism of the internet.
Sylvain’s argument will evoke, for many readers, the pioneering work of Danielle Citron highlighting law’s complicity in the proliferation of hateful and illicit internet speech, from cyberbullying to revenge pornography. Responding to Sylvain’s paper, Citron embraces his critique of current doctrine and his contention that “platforms should not enjoy immunity from liability for their architectural choices that violate anti-discrimination laws.” Although she agrees with Sylvain that Section 230 can be read to deny immunity in such cases, Citron goes further, proposing a statutory revision that would condition intermediaries’ immunity on their compliance with a reasonable standard of care to prevent or address unlawful behaviors.
James Grimmelmann points out that any intermediary liability rule is likely to be over- or under-inclusive (or both). Without robust immunity, intermediaries can be expected to suppress some “good” speech by third parties; with immunity, they will fail to suppress some “bad” speech. How to weigh these different sorts of mistakes, Grimmelmann explains, depends not only on one’s view of their relative incidence and importance but also on how crisply the categories can be defined and how accurately and cheaply platforms can distinguish between the two. The normative question of whether Section 230 ought to be reformed cannot be divorced from these practical and empirical questions about how any reform would play out.
Daphne Keller sounds an additional note of caution. While sympathizing with Sylvain’s distress about the prevalence of online discrimination, Keller questions whether Section 230 is really an important contributing factor to many of its manifestations. Moreover, in situations where Section 230 does seem to license invidious discrimination, Keller draws on adjacent bodies of law to question the wisdom of tying intermediary liability to the absence of “neutrality” or to a knowledge-based standard.
Keller concludes with an appeal for morally motivated yet legally and institutionally grounded deliberation about the troubling developments that Sylvain describes. This collection of essays models such deliberation and, we hope, can prompt more of it.
© 2018, David Pozen.
Cite as: David Pozen, Intermediary Immunity and Discriminatory Designs, 18-02.d Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/intermediary-immunity-and-discriminatory-designs [https://perma.cc/6GB5-M4Y6].
David Pozen is the Charles Keller Beekman Professor of Law at Columbia Law School and was the Knight Institute’s inaugural visiting scholar, 2017–2018.