In his contribution to the Knight First Amendment Institute’s “Emerging Threats” essay series, Fordham Law School’s Olivier Sylvain critiques a core U.S. internet law, Section 230 of the Communications Decency Act (CDA 230). CDA 230 immunizes platforms like YouTube and Craigslist from most liability for speech posted by their users. By doing so, it protects lawful and important speech that risk-averse platforms might otherwise silence. But it also lets platforms tolerate unlawful and harmful speech.
Sylvain argues that the net result is to perpetuate inequities in our society. For women, ethnic minorities, and many others, he suggests, CDA 230 facilitates harassment and abuse—and thus “helps to reinforce systemic subordination.”
We need not tolerate all this harm, Sylvain further suggests, given the current state of technology. Large platforms’ ever-improving ability to algorithmically curate users’ speech “belies the old faith that such services operate at too massive a scale to be asked to police user content.”
CDA 230 has long been a pillar of U.S. internet law. Lately, though, it has come under sustained attack. In the spring of 2018, Congress passed the first legislative change to CDA 230 in two decades: the Allow States and Victims to Fight Online Sex Trafficking Act, commonly known as FOSTA.
FOSTA has an important goal—protecting victims of sex trafficking. But it is so badly drafted that no one can agree on exactly what it means. It passed despite opposition from advocates for trafficking victims and the ACLU, and despite the Justice Department’s concern that aspects of it could make prosecutors’ jobs harder. More challenges to CDA 230 are in the works. That makes close attention to the law, including both its strengths and its weaknesses, extremely timely.
Supporters of CDA 230 generally focus on three broad benefits. The first is promoting innovation and competition. When Congress passed the law in 1996, it was largely looking to future businesses and technologies. In today’s age of powerful mega-platforms, the concern about competition is perhaps even more justified. When platform liability risks expand, wealthy incumbents can hire lawyers and armies of moderators to adapt to new standards. Startups and smaller companies can’t. That’s why advocates for startups opposed FOSTA, while Facebook and the incumbent-backed Internet Association supported it.
The second benefit of CDA 230 is its protection for internet users’ speech rights. When platforms face liability for user content, they have strong incentives to err on the side of caution and take it down, particularly for controversial or unpopular material. Empirical evidence from notice-and-takedown regimes tells us that wrongful legal accusations are common, and that platforms often simply comply with them. The Ecuadorian government, for example, has used spurious copyright claims to suppress criticism and videos of police brutality. Platform removal errors can harm any speaker, but a growing body of evidence suggests that they disproportionately harm vulnerable or disfavored groups. So while Sylvain is right to say that vulnerable groups suffer disproportionately when platforms take down too little content, they also suffer disproportionately when platforms take down too much.
The third benefit is that CDA 230 encourages community-oriented platforms like Facebook or YouTube to weed out offensive content. This was Congress’s goal in enacting the CDA’s “Good Samaritan” clause, which immunizes platforms for voluntarily taking down anything they consider “objectionable.” Prior to CDA 230, platforms faced the so-called moderator’s dilemma—any effort to weed out illegal content could expose them to liability for the things they missed, so they were safer not moderating at all.
Against these upsides, Sylvain marshals a compelling list of downsides. Permissive speech rules and hands-off attitudes by platforms, especially when combined with what Sylvain calls “discriminatory designs on user content and data,” enable appalling abuses, particularly against members of minority groups. Nonconsensual pornography, verbal attacks, and credible threats of violence are all too common.
Does that mean it is time to scrap CDA 230? Some people think so. Sylvain’s argument is more nuanced. He identifies specific harms, and specific advances in platform technology and operations, that he argues justify legal changes. While I disagree with some of his analysis and conclusions, the overall project is timely and useful. It arrives at a moment of chaotic, often rudderless public dialogue about platform responsibility. Pundits depict a maelstrom of online threats, often conflating issues as diverse as data breaches, “fake news,” and competition. The result is a moment of real risk, not just for platforms but for internet users. Poorly thought-through policy responses to misunderstood problems can far too easily become laws.
In contrast to this panicked approach, Sylvain says we should be “looking carefully at how intermediaries’ designs on user content do or do not result in actionable injuries.” This is a worthy project. It is one that, in today’s environment, requires us to pool our intellectual resources. Sylvain brings, among other things, a deep understanding of the history of communications regulation. I bring practical experience from years in-house at Google and familiarity with intermediary liability laws around the world.
To put my own cards on the table—and surely surprising no one—I am very wary of tinkering with intermediary liability law, including CDA 230. That’s mostly because I think the field is very poorly understood. It was hardly a field at all just a few years ago. A rising generation of experts, including Sylvain, will fix that before long. In the meantime, though, we need careful and calm analysis if we are to avoid shoot-from-the-hip legislative changes.
Whatever we do with the current slew of questions about platform responsibility, the starting point should be a close look at the facts and the law. The facts include the real and serious harms Sylvain identifies. He rightly asks why our system of laws tolerates them, and what we can do better.
CDA 230, though, is not the driver of many of the problems he identifies. In the first section of my response, I will walk through the reasons why. Hateful or harassing speech, for example, often doesn’t violate any law at all for reasons grounded in the First Amendment. If platforms tolerate content of this sort, it is not because of CDA 230. Quite the contrary: A major function of the law is to encourage platforms to take down lawful but offensive speech.
Other problems Sylvain describes are more akin to the story, recently reported, of Facebook user data winding up in the hands of Cambridge Analytica.
They stem from breaches of trust (or of privacy or consumer protection law) between a platform and the user who shared data or content in the first place. Legal claims for breaching this trust are generally not immunized by CDA 230. If we want to change laws that apply in these situations, CDA 230 is the wrong place to start.
In the second section of my response, I will focus on the issues Sylvain surfaces that really do implicate CDA 230. In particular, I will discuss his argument that platforms’ immunities should be reduced when they actively curate content and target it to particular users. Under existing intermediary liability frameworks outside of CDA 230, arguments for disqualifying platforms from immunity based on curation typically fall into one of two categories. I will address both.
The first argument is that platforms should not be immunized when they are insufficiently “neutral.” This framing, I argue, is rarely helpful. It leads to confusing standards and in practice deters platforms from policing for harmful material.
The second argument is that immunity should depend on whether a platform “knows” about unlawful content. Knowledge is a slippery concept in the relevant law, but it is a relatively well-developed one. Knowledge-based liability has problems—it poses the very threats to speech, competition, and good-faith moderation efforts that CDA 230 avoids. But by talking about platform knowledge, we can reason from precedent and experience with other legal frameworks in the United States and around the world. That allows us to more clearly define the factual, legal, and policy questions in front of us. We can have an intelligent conversation, even if we don’t all agree. That’s something the world of internet law and policy badly needs right now.
Isolating Non-CDA 230 Issues
In this section I will walk through issues and potential legal claims mentioned by Sylvain that are not, I think, controlled by CDA 230. Eliminating them from the discussion will help us focus on his remaining important questions about intermediary liability.
Targeting Content or Ads Based on Discriminatory Classifications
Sylvain’s legal arguments are grounded in a deep moral concern with the harms of online discrimination. He provides numerous moving examples of bias and mistreatment. But many of the internet user and platform behaviors he describes are not actually illegal, or are governed by laws other than CDA 230.
As one particularly disturbing example, Sylvain describes how Facebook until recently allowed advertisers to target users based on algorithmically identified “interests” that included phrases like “how to burn Jews” and “Jew hater.” When ProPublica’s Julia Angwin broke this story, Facebook scrambled to suspend these interest categories. Sylvain recounts this episode to illustrate the kinds of antisocial outcomes that algorithmic decisionmaking can generate. However repugnant these phrases are, though, they are not illegal. Nor is using them to target ads. So CDA 230 does not increase platforms’ willingness to tolerate this content—although it does increase their legal flexibility to take it down.
To outlaw this kind of thing, we would need different substantive laws about things like hate speech and harassment. Do we want those? Does the internet context change First Amendment analysis? Like other critics of CDA 230 doctrine, Sylvain emphasizes the “significant qualitative and quantitative difference between the reach of [harmful] offline and online expressive acts.” But it’s not clear that reforming CDA 230 alone would curb many of these harms in the absence of larger legal change.
CDA 230 also has little or no influence on Facebook ads that target users based on their likely race, age, or gender. Critics raise well-justified concerns about this targeting. But, as Sylvain notes, it generally is not illegal under current law. Anti-discrimination laws, and hence CDA 230 defenses, only come into play for ads regarding housing, employment, and possibly credit.
Even for that narrower class of ads, it’s not clear that Facebook is doing anything illegal by offering a targeting tool that has both lawful and unlawful uses. If the Fair Housing Act (FHA) does apply to Facebook in this situation, the result in a CDA-230-less world would appear to be that Facebook must prohibit and remove these ads. But that’s what Facebook says it does already. So the CDA 230 problem here may be largely theoretical.
Sylvain’s more complicated claim is that CDA 230 allows Airbnb to facilitate discrimination by requiring renters to post pictures of themselves. Given Airbnb’s importance to travelers, discrimination by hosts is a big deal. But CDA 230’s relevance is dubious. First, it’s not clear if anyone involved — even a host — violates the FHA by enforcing discriminatory preferences for shared dwellings.
Even if the hosts are liable, it seems unlikely that Airbnb violates the FHA by requiring photos, which serve legitimate as well as illegitimate purposes. Prohibiting the photos might even be unconstitutional: A court recently struck down under the First Amendment a California statute that, following reasoning similar to Sylvain’s, barred the Internet Movie Database from showing actors’ ages because employers might use the information to discriminate. Finally, if Airbnb’s photo requirement did violate the FHA, it seems unlikely that CDA 230 would provide immunity. The upshot is that CDA 230 is probably irrelevant to the problem Sylvain is trying to solve in this case.
None of this legal analysis refutes Sylvain’s moral and technological point: The internet enables new forms of discrimination, and the law should respond. The law may very well warrant changing. But for these examples, CDA 230 isn’t the problem.
Targeting Content Based on Data Mining
Sylvain also describes a set of problems that seem to arise from platforms’ directly harming or breaching the trust of their users. Some of these commercial behaviors, like “administer[ing] their platforms in obscure or undisclosed ways that are meant to influence how users behave on the site,” don’t appear to implicate CDA 230 even superficially.
Others, like using user-generated content in ways the user did not expect, look more like CDA 230 issues because they involve publication. But I don’t think they really fall under CDA 230 either.
In one particularly disturbing example, Sylvain describes an Instagram user who posted a picture of a rape threat she received—only to have Instagram reuse the picture as an ad. An analogous fact pattern was litigated under CDA 230 in Fraley v. Facebook, Inc.
In that case, users sued Facebook for using their profile pictures in ads, claiming a right-of-publicity violation. A court upheld their claim and rejected Facebook’s CDA 230 defense. If that ruling is correct, there should be no CDA 230 issue for the case Sylvain describes.
But there is a deeper question about what substantive law governs in cases like this. The harm comes from a breach of trust between the platform and individual users, the kind of thing usually addressed by consumer protection, privacy, or data protection laws. U.S. law is famously weak in these areas. Compared to other countries, we give internet users few legal tools to control platforms’ use of their data or content.
U.S. courts enforce privacy policies and terms of service that would be void in other jurisdictions, and they are stingy with standing or damages for people claiming privacy harms. That’s why smart plaintiffs’ lawyers bring claims like the right-of-publicity tort in Fraley. But the crux of those claims is not a publishing harm of the sort usually addressed by CDA 230. The crux is the user’s lack of control over her own speech or data — what Jack Balkin or Jonathan Zittrain might call an “information fiduciary” issue. Framing cases like these as CDA 230 issues risks losing sight of these other values and legal principles.
Addressing CDA 230 Issues
Sylvain suggests that platforms should lose CDA 230 immunity when they “employ software to make meaning out of their users’ ‘reactions,’ search terms, and browsing activity in order to curate the content” and thereby “enable[] illegal online conduct.” For issues that really do involve illegal content and potential liability for intermediaries—like nonconsensual pornography—this argument is important. At least one case has reviewed a nearly identical argument and rejected it.
But Sylvain’s point isn’t to clarify the current law. It’s to work toward what he calls “a more nuanced immunity doctrine.” For that project, the curation argument matters.
I see two potential reasons for stripping platforms of immunity when they “elicit and then algorithmically sort and repurpose” user content.
First, a platform might lose immunity because it is not “neutral” enough, given the ways it selects and prioritizes particular material. Second, it could lose immunity because curation efforts give it “knowledge” of unlawful material. Both theories have important analogues in other areas of law—including the Digital Millennium Copyright Act (DMCA), pre-CDA U.S. law, and law from outside the United States—to help us think them through.
Neutrality
All intermediary liability laws have some limit on the platform operations that are immunized—a point at which a platform becomes too engaged in user-generated content and starts being held legally responsible for it. Courts and lawmakers often use words like “neutral” or “passive” to describe immunized platforms. Those words don’t, in my experience, have stable enough meanings to be useful.
For example, the Court of Justice of the European Union has said that only “passive” hosts are immune under EU law. Applying that standard in the leading case, it found Google immune for content in ads, which the company not only organizes and ranks but also ranks based in part on payment.
And in a U.S. case, a court said a platform was “neutral” when it engaged in the very kinds of curation that, under Sylvain’s analysis, make platforms not neutral.
In the internet service provider (ISP) context, neutrality—as in net neutrality—means something very different. Holding ISPs to a “passive conduit” standard makes sense as a technological matter. But that standard doesn’t transfer well to other intermediaries. It would eliminate immunity for topic-specific forums (Disney’s Club Penguin or a subreddit about knitting, for example) or for platforms like Facebook that bar lawful but offensive speech. That seems like the wrong outcome given that most users, seemingly including Sylvain, want platforms to remove this content.
Policymakers could in theory draw a line by saying that, definitionally, a platform that algorithmically curates content is not neutral or immunized. But then what do we do with search engines, which offer algorithmic ranking as their entire value proposition? And how exactly does a no-algorithmic-curation standard apply to social media? As Eric Goldman has pointed out, there is no such thing as neutrality for a platform, like Facebook or Twitter, that hosts user-facing content.
Whether it sorts content chronologically, alphabetically, by size, or by some other metric, it unavoidably imposes a hierarchy of some sort.
All of this makes neutrality something of a Rorschach test. It takes on different meanings depending on the values we prioritize. For someone focused on speech rights, neutrality might mean not excluding any legal content, no matter how offensive. For a competition specialist, it might mean honesty and fair competition in ranking search results.
Still other concepts of neutrality might emerge if we prioritize copyright, transparency, or, as Sylvain does in this piece, protecting vulnerable groups in society.
One way out of this bind is for the law to get very, very granular—like the DMCA. It has multiple overlapping statutory tests that effectively assess a defendant’s neutrality before awarding immunity.
By focusing on just a few values, narrowly defining eligible technologies, and spelling out rules in detail, the DMCA makes it easier to draw the line between immunized and non-immunized behavior.
DMCA litigators on both sides hate these granular tests. Maybe that means the law is working as intended. But highly particular tests for immunity present serious tradeoffs. If every intermediary liability question looked like the DMCA, then only companies with armies of lawyers and reserves of cash for litigation and settlement could run platforms. And even they would block user speech or decide not to launch innovative features in the face of legal uncertainty. Detailed rules like the DMCA’s get us back to the problems that motivated Congress to pass the CDA: harm to lawful speech, harm to competition and innovation, and uncertainty about whether platforms could moderate content without incurring liability.
Congress’s goal in CDA 230 was to get away from neutrality tests as a basis for immunity and instead to encourage platforms to curate content. I think Congress was right on this score, and not only for the competition, speech, and “Good Samaritan” reasons identified at the time. As Sylvain’s discussion of intermediary designs suggests, abstract concepts of neutrality do not provide workable answers to real-world platform liability questions.
Knowledge
The other interpretation I see for Sylvain’s argument about curation is that platforms shouldn’t be able to claim immunity if they know about illegal content—and that the tools used for curation bring them ever closer to such knowledge. This factual claim is debatable. Do curation, ranking, and targeting algorithms really provide platforms with meaningful information about legal violations?
Whatever the answer, focusing on questions like this can clarify intermediary liability discussions.
Like the neutrality framing, this one is familiar from non-CDA 230 intermediary liability. Many laws around the world, including parts of the DMCA, say that if a platform knows about unlawful content but doesn’t take it down, it loses immunity. These laws lead to litigation about what counts as “knowledge,” and to academic, NGO, and judicial attention to the effects on the internet ecosystem. If a mere allegation or notice to a platform creates culpable knowledge, platforms will err on the side of removing lawful speech. If “knowledge” is an effectively unobtainable legal ideal, on the other hand, platforms won’t have to take down anything.
Some courts and legislatures around the world have addressed this problem by reference to due process. Platforms in Brazil,
Chile, Spain, India, and Argentina are, for some or all claims, not considered to know whether a user’s speech is illegal until a court has made that determination. Laws like these often make exceptions for “manifestly” unlawful content that can, in principle, be identified by platforms. This is functionally somewhat similar to CDA 230’s exception for child pornography and other content barred by federal criminal law.
Other models, like the DMCA, use procedural rules to cabin culpable knowledge. Sylvain rightly invokes these as important protections against abuse of notice-and-takedown systems. Claimants must follow a statutorily defined notice process and provide a penalty-of-perjury statement. A DMCA notice that does not comply with the statute’s requirements cannot be used to prove that a platform knows about infringing material.
Claimants also accept procedures for accused speakers to formally challenge a removal or to seek penalties for bad-faith removal demands.
A rapidly expanding body of material from the United Nations and regional human rights systems,
as well as a widely endorsed civil society standard known as the Manila Principles, spells out additional procedures designed to limit over-removal of lawful speech. Importantly, these include public transparency to allow NGOs and internet users to crowdsource the job of identifying errors by platforms and patterns of abuse by claimants. Several courts around the world have also cited constitutional free expression rights of internet users in rejecting—as Sylvain does—strict liability for platforms.
As Sylvain notes, liability based on knowledge is common in pre-CDA tort law. Platforms differ from print publishers and distributors in important respects. But case law about “analog intermediaries” can provide important guidance, some of it mandatory under the First Amendment. The “actual malice” standard established in New York Times Co. v. Sullivan is an example.
Importantly, the Times in that case acted as a platform, not as a publisher of its own reporting. The speech at issue came from paying advertisers, who bought space in the paper to document violence against civil rights protesters. As the Court noted in rejecting the Alabama Supreme Court’s defamation judgment, high liability risk “would discourage newspapers from carrying ‘editorial advertisements’ of this type, and so might shut off an important outlet for the promulgation of information and ideas by persons who do not themselves have access to publishing facilities.” Similar considerations apply online.
Knowledge-based standards for platform liability are no panacea.
Any concept of culpable knowledge for speech platforms involves tradeoffs of competing values, and not ones I necessarily believe we should make. What the knowledge framing and precedent provide, though, is a set of tools for deliberating more clearly about those tradeoffs.
Conclusion
Talk of platform regulation is in the air. Lawyers can make sense of this chaotic public dialogue by being lawyerly. We can crisply identify harms and parse existing laws. If those laws aren’t adequately protecting important values, including the equality values Sylvain discusses, we can propose specific changes and consider their likely consequences.
At the end of the day, not everyone will agree about policy tradeoffs in intermediary liability—how to balance speech values against dignity and equality values, for example. And not everyone will have the same empirical predictions about what consequences laws are likely to have. But we can get a whole lot closer to agreement than we are now. We can build better shared language and analytic tools, and identify the right questions to ask. Sylvain’s observations and arguments, coupled with tools from existing intermediary liability law, can help us do that.
© 2018, Daphne Keller.
Cite as: Daphne Keller, Toward a Clearer Conversation About Platform Liability, 18-02.c Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/toward-clearer-conversation-about-platform-liability [https://perma.cc/8SKV-3Z2X].
The author was formerly Associate General Counsel to Google. The Center for Internet and Society (CIS) is a public interest technology law and policy program at Stanford Law School. A list of CIS donors and funding policies is available at https://cyberlaw.stanford.edu/about-us.
47 U.S.C. § 230 (2012). Under longstanding exceptions, platforms have no CDA 230 immunity for intellectual property law claims, federal criminal claims, and Electronic Communications Privacy Act claims. Id. § 230(e).
These arguments build on work developed by a number of scholars, prominently including Danielle Citron. See, e.g., Danielle Keats Citron, Law’s Expressive Value in Combating Cyber Gender Harassment, 108 Mich. L. Rev. 373 (2009).
H.R. 1865, 115th Cong. (2018).
See Daphne Keller, What Does the New CDA-Buster Legislation Actually Say?, Stanford Law School Center for Internet & Society (Aug. 11, 2017), http://cyberlaw.stanford.edu/blog/2017/08/what-does-new-cda-buster-legislation-actually-say (examining similar language of previous draft).
See, e.g., Shannon Roddel, Online Sex Trafficking Bill Will Make Things Worse for Victims, Expert Says, Notre Dame News (Mar. 28, 2018), http://news.nd.edu/news/online-sex-trafficking-bill-will-make-things-worse-for-victims-expert-says.
ACLU, ACLU Vote Recommendation on FOSTA (2018), http://www.aclu.org/letter/aclu-vote-recommendation-fosta.
See Letter from Stephen E. Boyd, Ass’t Att’y Gen., to Rep. Robert W. Goodlatte 2 (Feb. 27, 2018), http://docs.techfreedom.org/DOJ_FOSTA_Letter.pdf.
See Engine, Startup Advocates Address Implications of Sex Trafficking Legislation on Tech (Feb. 26, 2018), available here.
See Ali Breland, Facebook’s Sandberg Backs Controversial Online Sex Trafficking Bill, Hill (Feb. 26, 2018), http://thehill.com/policy/technology/375680-facebooks-sheryl-sandberg-backs-legislation-to-curb-online-sex-trafficking.
See Internet Association, Statement in Support of Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA) (Dec. 11, 2017), http://internetassociation.org/statement-support-allow-states-victims-fight-online-sex-trafficking-act-2017-fosta.
See Daphne Keller, Empirical Evidence of “Over-Removal” by Internet Companies Under Intermediary Liability Laws, Stanford Law School Center for Internet & Society (Oct. 12, 2015), http://cyberlaw.stanford.edu/blog/2015/10/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws; Jennifer M. Urban et al., Notice and Takedown in Everyday Practice 10–13, 116–17 (unpublished manuscript) (Mar. 2017), http://ssrn.com/abstract=2755628.
See José Miguel Vivanco, Censorship in Ecuador Has Made It to the Internet, Human Rights Watch (Dec. 15, 2014), http://www.hrw.org/news/2014/12/15/censorship-ecuador-has-made-it-internet.
See Daphne Keller, Inception Impact Assessment: Measures to Further Improve the Effectiveness of the Fight Against Illegal Content Online 6–7 (2018), http://cyberlaw.stanford.edu/files/publication/files/Commission-Filing-Stanford-CIS-26-3_0.pdf (describing discriminatory impact of platform efforts to remove “terrorist” content); Tracy Jan & Elizabeth Dwoskin, A White Man Called Her Kids the N-Word. Facebook Stopped Her from Sharing It., Wash. Post (July 31, 2017), http://www.washingtonpost.com/business/economy/for-facebook-erasing-hate-speech-proves-a-daunting-challenge/2017/07/31/922d9bc6-6e3b-11e7-9c15-177740635e83_story.html; Sam Levin, Civil Rights Groups Urge Facebook to Fix ‘Racially Biased’ Moderation System, Guardian (Jan. 18, 2017), http://www.theguardian.com/technology/2017/jan/18/facebook-moderation-racial-bias-black-lives-matter.
47 U.S.C. § 230(c)(2)(A) (2012).
Congress specifically set out to correct this perverse incentive, as embodied in two 1990s internet defamation cases. See H.R. Rep. No. 104-458, at 194 (1996). In one case, a platform that enforced content policies was held liable for a user’s defamatory post. Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995). In another, a platform with no such guidelines was held immune. Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).
See Daphne Keller with Sharon Driscoll, Data Analytics, App Developers, and Facebook’s Role in Data Misuse, SLS Blogs: Legal Aggregate (Mar. 20, 2018), http://law.stanford.edu/2018/03/20/data-analytic-companies-app-developers-facebooks-role-data-misuse.
See generally Fair Housing Act, 42 U.S.C. §§ 3601–19; Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq.; Age Discrimination in Employment Act, 29 U.S.C. §§ 621–34; Equal Credit Opportunity Act, 15 U.S.C. §§ 1691 et seq.
Cf. Sony Corp. of America v. Universal City Studios, Inc., 464 U.S. 417 (1984) (VCR manufacturer not liable for user copyright infringement because of device’s substantial noninfringing uses).
See Mitchell v. Shane, 350 F.3d 39 (2d Cir. 2003) (no FHA violation where defendant was unaware of discrimination). The FHA does recognize respondeat superior liability: A principal cannot avoid liability under the FHA by delegating duties to an agent. See Green v. Century 21, 740 F.2d 460 (6th Cir. 1984). But if a respondeat relationship exists for online ads platforms, presumably the platforms are the agents, not the principals.
Facebook prohibits such ads, removes them upon notice, and has announced efforts to proactively find them. After investigations by ProPublica revealed ongoing problems with discriminatory rental housing ads on its platform, Facebook pledged late last year to make a number of changes, including that “all advertisers who want to exclude groups of users from seeing their ads . . . will have to certify that they are complying with anti-discrimination laws.” See Rachel Goodman, Facebook’s Ad-Targeting Problems Prove How Easy It Is to Discriminate Online, NBC News (Nov. 30, 2017), http://www.nbcnews.com/think/opinion/facebook-s-ad-targeting-problems-prove-how-easy-it-discriminate-ncna825196.
See Fair Hous. Council of San Fernando Valley v. Roommate.com, LLC, 666 F.3d 1216, 1221 (9th Cir. 2012) (FHA and California equivalent do not apply to listings for roommates, based on statutory language and constitutional privacy concerns activated by “a roommate’s unfettered access to the home”).
IMDb.com, Inc. v. Becerra, 16-CV-06535-VC, 2017 WL 772346 (N.D. Cal. Feb. 22, 2017).
If Airbnb’s liability derives from the hosts’ actions, it’s hard to see a publication element that CDA 230 would immunize. If the theory is that Airbnb violates the FHA by de facto requiring users to disclose their race, that’s almost exactly the thing that falls outside CDA 230 immunity under controlling precedent in the Ninth Circuit. See Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc) (no CDA 230 immunity where platform required users to provide FHA-violating information as a condition of using the service).
This behavior and the claimed product design to “hold user attention by inducing something like addictive reliance,” if actionable, sound like some form of fraud or consumer protection violation by the platform itself. Sylvain also says that some platforms “are intentionally deceptive about how they acquire or employ content,” but CDA 230 does not provide immunity for that. In both cases he cites, courts held platforms liable for their actions—and rejected CDA 230 defenses. FTC v. LeadClick Media, LLC, 838 F.3d 158 (2d Cir. 2016); FTC v. Accusearch, Inc., 570 F.3d 1187 (10th Cir. 2009).
830 F. Supp. 2d 785 (N.D. Cal. 2011).
Id. at 801–03.
For a very rough overview, see Mark Scott & Natasha Singer, How Europe Protects Your Online Data Differently than the U.S., N.Y. Times (Jan. 31, 2016), http://www.nytimes.com/interactive/2016/01/29/technology/data-privacy-policy-us-europe.html.
See, e.g., European Commission, Press Release, Facebook, Google and Twitter Accept to Change Their Terms of Services to Make Them Customer-Friendly and Compliant with EU Rules (Feb. 15, 2018), http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=614254.
See, e.g., Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016).
See Jack M. Balkin & Jonathan Zittrain, A Grand Bargain to Make Tech Companies Trustworthy, Atlantic (Oct. 3, 2016), http://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346.
Dyroff v. Ultimate Software Group, Inc., 2017 WL 5665670, at *8–*10 (N.D. Cal. 2017) (assessing allegations that a platform used data mining and machine learning to understand “the meaning and intent behind posts” and target illegal material to individual users).
Platforms could also lose immunity when they effectively create the unlawful communication themselves, by specifically eliciting or changing user content. As discussed above, though, CDA 230 already limits immunity in this situation.
Sylvain suggests that CDA 230 itself immunizes platforms because they are neutral or serve as passive conduits. I don’t think that’s what Congress intended, and it is not how I interpret CDA 230. But the much more important issue he raises is whether intermediary liability law should turn on platform neutrality.
Joined Cases C-236/08 to C-238/08, Google France SARL v. Louis Vuitton Malletier SA, 2010 E.C.R. I-2417.
Dyroff, 2017 WL 5665670, at *8–*10.
Eric Goldman, Social Networking Site Isn’t Liable for User’s Overdose of Drugs He Bought via the Site–Dyroff v. Ultimate Software, Tech. & Marketing L. Blog (Dec. 5, 2017), http://blog.ericgoldman.org/archives/2017/12/social-networking-site-isnt-liable-for-users-overdose-of-drugs-he-bought-via-the-site-dyroff-v-ultimate-software.htm.
See James Grimmelmann, Some Skepticism About Search Neutrality, in The Next Digital Decade: Essays on the Future of the Internet 435 (Berin Szoka & Adam Marcus eds., 2010).
Defendant hosts must offer one of four defined technical services. 17 U.S.C. § 512(a)–(d) (2012). They additionally must not receive a financial benefit directly attributable to infringing activity that they have the right and ability to control. Id. § 512(c)(1)(B).
See Center for Democracy & Tech., Mixed Messages? The Limits of Automated Social Media Content Analysis 18 (2017), http://cdt.org/files/2017/11/Mixed-Messages-Paper.pdf (reporting accuracy rates in the 70 to 80 percent range for commercially available natural language processing filters). See generally Evan Engstrom & Nick Feamster, Engine, The Limits of Filtering: A Look at the Functioning & Shortcomings of Content Detection Tools (2017), available here.
My personal doubts about platform omniscience are reinforced by the ads I see, which routinely feature men’s clothing and software engineering jobs. People with higher expectations about the capabilities of curation and targeting technology must, I assume, be seeing better ads.
Lei No. 12.965, de 23 de Abril de 2014, Diário Oficial da União [D.O.U.] de 24.4.2014 (Braz.).
Law No. 20435 art. 71N, Abril 23, 2010, Diario Oficial [D.O.] (Chile).
See Royo v. Google (Barcelona appellate court judgment 76/2013), 13 February 2013.
See Singhal v. India, A.I.R. 2015 S.C. 1523.
See Corte Suprema de Justicia de la Nación [CSJN] [National Supreme Court of Justice], 29/10/2014, “Rodriguez María Belen c/Google y Otro s/ daños y perjuicios” (Arg.).
In intermediary liability regimes like this, one can move the needle by calling more or fewer things “manifestly unlawful” and thus subject to de facto adjudication by a platform. Such choices involve substantive tradeoffs; they force us to ask what harms are worth risking platform error. One can also move the needle by allowing accelerated proceedings, such as temporary restraining orders or administrative review. This involves tradeoffs between access to justice for victims of speech harms, on the one hand, and due process and expression rights for speakers, on the other. Within those parameters—and subject to the recognition that all these systems attract abuse—I see ample room for intelligent advocacy on all sides.
17 U.S.C. §§ 512(c)(3)(B)(i), (c)(3)(A) (2012).
Id. §§ 512(f), (g)(2)(B).
See, e.g., Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, U.N. Doc. A/HRC/32/38 (May 11, 2016); U.N. Special Rapporteur on Freedom of Opinion & Expression et al., Joint Declaration on Freedom of Expression on the Internet (June 1, 2011), http://www.osce.org/fom/78309.
Manila Principles on Intermediary Liability, http://www.manilaprinciples.org (last visited Apr. 2, 2018).
See, e.g., Singhal v. India, A.I.R. 2015 S.C. 1523; Corte Suprema de Justicia de la Nación [CSJN] [National Supreme Court of Justice], 29/10/2014, “Rodriguez María Belen c/Google y Otro s/ daños y perjuicios” (Arg.); MTE v. Hungary, App. No. 22947/13 (Eur. Ct. H.R. 2016) (rejecting platform monitoring obligation for defamation because of harm to internet user speech rights). But see Delfi AS v. Estonia, 64569/09 Eur. Ct. H.R. (2015) (permitting monitoring obligation for hate speech); Scarlet v. SABAM, Case C-70/10, 2011 E.C.R. I-11959 (rejecting ISP monitoring remedy in a copyright case).
376 U.S. 254 (1964).
Id. at 266; see also Smith v. California, 361 U.S. 147, 153 (1959) (rejecting strict obscenity liability for bookstores and noting that a bookseller subject to such liability “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature”); Bantam Books, Inc. v. Sullivan, 372 U.S. 58 (1963) (rejecting administrative notice obscenity liability for bookstores).
The DMCA, for example, applies a knowledge standard buttressed with procedural protections for accused speakers but still leads to widespread removal of lawful speech. See generally Urban et al., supra note 12.
Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center.