“[I]ts power seems inescapable—but then, so did the divine right of kings.”
— Ursula K. Le Guin
Social media companies are in Congress’s sights. In May 2016, in the wake of allegations that Facebook workers had suppressed pro-conservative viewpoints and links while injecting liberal stories into the newly introduced Trending Topics section, Senator John Thune sent a letter to Mark Zuckerberg demanding, among other things, a copy of the company’s guidelines for choosing Trending Topics, a list of all news stories removed or injected into Trending Topics, and information about what steps the company would take to “hold the responsible individuals accountable.” Facebook complied, with Zuckerberg himself meeting with lawmakers.
During the recent hearings before the Senate and House intelligence committees on Russian interference in the 2016 presidential campaign, Senator Dianne Feinstein told the general counsels of Facebook, Google, and Twitter—whose CEOs were conspicuously absent—“You bear this responsibility. You’ve created these platforms. And now they’re being misused. And you have to be the ones to do something about it. Or we will.” Despite intensive lobbying efforts by these companies, both individually and through their collective trade association, legislation imposing new restrictions on how they operate is, “[f]or the first time in years, . . . being discussed seriously in Washington.” As one reporter put it, “In 2008, it was Wall Street bankers. In 2017, tech workers are the world’s villain.”
That Bay Area tech companies are having something of a PR crisis is clear.
And in the rough and tumble of politics, that these companies would meet with and appease legislators is no great surprise. But if Congress does decide to get tough, how credible and wide-ranging is the regulatory threat under current First Amendment jurisprudence?

Some prominent commentators claim that Facebook is analogous to a newspaper and that its handling of a feature like Trending Topics is analogous to a newspaper’s editorial choices.
As a result, these commentators find congressional scrutiny of such matters to be constitutionally problematic. Moreover, the editorial analogy has been a remarkably effective shield for these tech companies in litigation. In a series of lower court cases, Google and others have argued that their decisions concerning their platforms—for example, what sites to list (or delist) and in what order, who can buy ads and where to place them, and what users to block or permanently ban—are analogous to the editorial decisions of publishers. And like editorial decisions, they argue, these decisions are protected “speech” under the First Amendment. Though the analogy has mostly been wielded against small-fry, often pro se plaintiffs, courts have tended to accept it wholesale.

Large consequences hinge on whether the various choices companies like Facebook and Google make are indeed analogous to editorial “speech.” The answer will partly determine whether and how the state can respond to current challenges ranging from the proliferation of fake news to high levels of market concentration to the lack of ad transparency. Furthermore, algorithmic discrimination and the discrimination facilitated by these platforms’ structures affect people’s lives today and no doubt will continue to do so.
But if these algorithms and outputs are analogous to the decisions the New York Times makes on what to publish, then attempts to extend antidiscrimination laws to deal with such discrimination will face an onslaught of potentially insuperable constitutional challenges. In short, these companies’ deployment of the editorial analogy in the First Amendment context poses a major hurdle to government intervention.

Whether, or to what extent, the editorial analogy should work as a shield against looming legislation and litigation for companies like Facebook and Google is something this historical moment demands we carefully consider. My primary aim in this paper is to do just that. I will engage critically with, and ultimately raise questions about, the near-automatic application of the editorial analogy. The core takeaways are these: (1) we should be cognizant of the inherent limitations of analogical reasoning generally and of the editorial analogy specifically; (2) whether these companies’ various outputs should receive coverage as First Amendment “speech” is far from clear, both descriptively and normatively;
(3) the proposition that regulations requiring these companies to add content (disclaimers, links to competitors, and so on) compel the companies to speak is also far from clear; and, finally and most crucially, (4) given the limits of analogical reasoning, our future debates about First Amendment coverage should focus less on analogy and more on what actually matters—the normative commitments that undergird free speech theory and how our choices either help or hinder their manifestations.

To that end, I start by reviewing some of the cases in which the editorial analogy has been successfully deployed. Next, I lay the groundwork for rethinking the editorial analogy—first, by analyzing its internal weaknesses, and second, by raising other potentially compelling analogical frames. Each new analogy raises far knottier questions than I can address here, so I will briefly mention only a few, ending with the analogy brought to life by the Court’s recent language in Packingham.
There, the Court, either strategically or recklessly, “equate[d] the entirety of the internet with public streets and parks” and declared it “clear” that “cyberspace” and “social media in particular” are now “the most important places (in a spatial sense) for the exchange of views.” The Court found social media to be “the modern public square” and stated that it is a “fundamental principle of the First Amendment . . . that all persons have access” to such a forum. This language casts doubt on whether the editorial analogy will be successful going forward. Its reliance on highly abstract characterizations also serves as a lesson. We should address First Amendment coverage questions through the lens of normative theory and not through a collection of ill-suited analogies.

The Editorial Analogy in Litigation
Zhang v. Baidu.com, Inc. is the case in which a lower court has most fully explained why, in its view, the editorial analogy applies to a search engine’s outputs.
The plaintiffs, New York residents and self-described “promoters of democracy in China,” alleged that Baidu, the dominant Chinese search engine, intentionally delisted their pro-democracy websites from its search results in the United States at the behest of the Chinese government. And in so doing, they further alleged, Baidu violated their First Amendment rights. Baidu replied that its listing decisions were its protected speech. The Southern District of New York agreed, finding that “First Amendment jurisprudence all but compels the conclusion that Plaintiffs’ suit must be dismissed.” With no attention paid to the claim that Baidu was acting on behalf of the Chinese government, the court saw the relevant precedent as Miami Herald Publishing Co. v. Tornillo. There, the U.S. Supreme Court found unconstitutional a statute that required newspapers to provide political candidates a right of reply to critical editorials. The court in Baidu also saw Hurley v. Irish-American Gay, Lesbian, and Bisexual Group of Boston as an extension of Tornillo, equally applicable to Baidu. In Hurley, the Court ruled that requiring parade organizers to permit a pro-LGBT group to participate would entail unconstitutionally compelling the parade organizers to speak.

The Baidu court’s holding followed directly from its analogical reasoning. It saw Baidu as organizing information, which it thought sufficient to make the relevant analogy a “newspaper editor’s judgment of which . . . stories to run.”
The Supreme Court previously found a newspaper’s judgment of which stories to run protected “speech” and struck down as compelled speech a requirement that it include content that went against that judgment. Thus, analogizing Baidu to a newspaper, Baidu’s judgments about which sites to list were also protected “speech,” and requiring Baidu to include sites against its wishes would be unconstitutional compelled speech, too.

The editorial analogy again won out, this time for Google, in e-ventures Worldwide, LLC v. Google, Inc.
e-ventures is a search engine optimization (SEO) firm. Such firms seek to improve the visibility of client websites in organic (i.e., non-paid) search results. Clients like this because the higher their websites rank in organic results, the heavier the flow of traffic to their sites, which in turn enables them to sell advertising space on their sites at higher rates. Search engine companies are not big fans of SEO firms—they see them as trying to game the system for unpaid rankings. More to the point, when SEO firms are successful, companies spend a portion of their advertising budgets with the SEO firms and not with Google for paid placement in search results. As a result, a perpetual game of cat and mouse ensues. Apparently unable to tweak its search algorithm in a way it liked, Google instead manually delisted 231 websites belonging to e-ventures clients. e-ventures attempted to reach out to Google through several channels, with the hopes of getting the sites relisted, but was unsuccessful. As a result, it filed suit, at which point Google relisted the sites.

In its suit, e-ventures alleged that the delisting constituted unfair competition under the Lanham Act, tortious interference with business relations, and a violation of Florida’s Deceptive and Unfair Trade Practices Act. Google responded by asserting, among other things, that e-ventures’ claims were barred by the First Amendment, as Google’s search results were its editorial judgments and opinions. While the court did not grant Google’s motion to dismiss, it ultimately agreed with Google at summary judgment that the First Amendment protects its delisting decisions. And the court did so by squarely analogizing Google to a publisher and its judgments about what to list or delist to a publisher’s decision about what to publish.
e-ventures also alleged that Google’s statements about its search results—that “Google search results are a reflection of the content publicly available on the web” and that “[i]t is Google’s policy not to censor search results”—were false and deceptive in light of its delisting practices. That Google’s actions were commercial and arguably anticompetitive did not matter. That Google was alleged to have made deceptive statements did not matter. On the contrary, the court expressly opined that Google’s free speech rights protect its listing and delisting decisions “whether they are fair or unfair, or motivated by profit or altruism.”
The court’s conclusion that unfair competition laws could not apply if Google’s results were speech is deeply problematic and difficult to square with the obvious fact that laws addressing unfair and deceptive advertising prohibit certain speech all the time. This conclusion underscores the editorial analogy’s powerful influence and what its successful use puts at stake.

That said, while the editorial analogy has proved potent in lower court cases, there is still time to rethink it. First, the Supreme Court has yet to weigh in. As I mentioned before and will discuss below, the Court’s most recent comments in this area come in Packingham.
If we take the majority at its word, that case suggests that it is an analogy to the public square, and not to a publisher, that ought to guide First Amendment thinking about social media. Second, plaintiffs in these prior cases were much more modestly resourced than the search titans they opposed. Some plaintiffs proceeded pro se. As a practical matter, this means that lower courts have been under little pressure to interrogate the cursory analogical-reasoning rationales that favored the defendants.

But this too might change. In what Yelp’s vice president of public policy described as the “most significant enforcement event in consumer tech antitrust” since the action against Microsoft in 2000, Google was fined a record-breaking €2.4 billion by European regulators in June 2017 for abusing its market dominance by giving an illegal advantage to its own products while demoting rivals in its comparison shopping service, Google Shopping. While EU actions do not ensure any movement domestically, they can bring to light information that further tarnishes Silicon Valley’s reputation and thus contributes to the erosion of the basis for its companies’ exceptional treatment to date. Within the United States, moreover, Yelp and TripAdvisor have repeatedly argued that Google deliberately diverts users searching for their sites to Google-owned alternatives. Google has attributed some of these results to bugs, but its competitors argue otherwise. It is at least possible that a major (and well-funded) lawsuit in the United States—and with it, a vigorous battle over First Amendment coverage, the editorial analogy, and unfair competition laws—may yet materialize.

The Limits of the Editorial Analogy
The analogical argument works something like this: A does x and merits treatment y. B does x. Therefore, B is analogous to A, and B also merits treatment y. We can challenge arguments of this form in several ways. First, internal to the argument, we can question the relationship between doing x and getting treatment y. We cannot assume that doing x always merits treatment y. Indeed, we cannot assume that doing x has anything to do with why treatment y is merited. An example will help make this more concrete: Take the action of eating a sundae without permission. If I work at the ice cream shop from which I took that sundae, a reprimand from my employer might be merited. But say instead that I’m a professor. We likely think that it would be absurd for my employer to reprimand me for eating a sundae without permission. In both cases I did the same thing—ate a sundae without permission—but additional facts change what treatment we think that same action merits.
Put simply, even when A and B have some similarities, there can be relevant dissimilarities between them that render treatment y appropriate for one but not the other.

A second challenge, and one I would call external, is to propose a different analogy. Why analogize B to A and not B to C? Consider that newspapers (A) provide people information (x) and that requiring newspapers to provide different information (for example, a right of reply) may be struck down as compelling them to speak (merits treatment y). Search engines (B) also provide people information (x). As a result, search engines are analogous to newspapers (A), and so we might think that requiring a search engine to provide different information should similarly be struck down as compelling it to speak (merits treatment y). Now consider an alternative analogy. Law schools (C) provide information (x) by hosting and organizing recruitment fairs, to which they invite a limited number of employers. Requiring law schools to allow military recruiters into such fairs and to give them equal access to students does not compel the schools to say anything (they remain free to protest the military’s policies), so this requirement is constitutional (merits treatment z).
Search engines (B) provide information (x) via their rankings, in which a limited number of sites are included. Therefore, requiring search engines to allow sites into those rankings and to give them equal access to the search engine’s users similarly does not compel the search engine to speak (it remains free to protest the competitor’s speech). Thus, that requirement is constitutional as well (merits treatment z). Treatments y and z are incompatible. Yet we can construct analogies that call for search engines to get both. That’s a problem.

Like all analogies, the editorial analogy is vulnerable on both the internal and external fronts.
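To put the internal challenge in more schematic terms: the analogical argument is an enthymeme, valid only with a suppressed universal premise, and that premise is exactly what the sundae example refutes. A minimal formalization (my own notation, offered only as a sketch), writing \(X(\cdot)\) for “does x” and \(Y(\cdot)\) for “merits treatment y”:

\[
X(A) \wedge Y(A),\;\; X(B) \;\not\vDash\; Y(B)
\qquad\text{but}\qquad
X(B),\;\; \forall z\,\bigl(X(z) \rightarrow Y(z)\bigr) \;\vDash\; Y(B)
\]

The professor who eats the sundae is a countermodel to the universal premise \(\forall z\,(X(z) \rightarrow Y(z))\): the professor does x yet does not merit y. So whenever the editorial analogy is deployed, the real work is done by the unstated universal premise, and that is where scrutiny belongs.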
Internal Weaknesses of the Analogy
In a white paper paid for by Google at the same time that the Federal Trade Commission was investigating whether the company had abused its market dominance, Eugene Volokh and Donald Falk argue that Google’s organic search results are fully protected speech and, as a result, are insulated from antitrust scrutiny. Relatedly, they argue that requiring Google to change its search results (for example, by placing Yelp higher) would unconstitutionally compel Google to speak in much the same way that a right-of-reply law would unconstitutionally compel a newspaper editor to speak. In making their argument, the authors rely heavily on the editorial analogy. As they put it, companies like Google are “analogous to newspapers and book publishers” in that they both “convey a wide range of information.” They claim that search results are also analogous to editorial publications, as both involve choices about “how to rank and organize content,” “what should be presented to users,” and “what constitutes useful information.” This description of (some of) what Google does is accurate. But, crucially, these analogies do not substantiate the authors’ two claims—namely, that (1) search engines and search results merit the same treatment as publishers and editorial judgments for First Amendment purposes, and (2) requiring Google to modify its search results would compel Google to speak.

Let’s start with the first claim—that Google is analogous to a publisher because it, too, conveys a wide range of information. Consider how that argument fares against a familiar saying: “Actions speak louder than words.” We say this because actions convey a wide range of information, often more truthful information than is conveyed through speech alone. Yet we certainly do not think that whenever people act, they are analogous to newspaper editors under the First Amendment and that their actions are therefore covered as speech. Thus, we can conclude that conveying a wide range of information is not sufficient for being treated like a publisher under the First Amendment. And given this, it straightforwardly follows that pointing out that Google conveys a wide range of information does not yet tell us whether Google should be treated like a publisher under the First Amendment.
Now consider the layout of a grocery store. There are good reasons that pharmacies are in the back, that certain brands are at eye level, and that candy is near the checkout. All those choices convey a wide range of information to consumers. Do we think that for purposes of First Amendment analysis, grocery stores are therefore analogous to publishers, because grocery stores convey a wide range of information through their organizing of products? Is the layout of the grocery store analogous to an editorial for purposes of speech coverage? My guess is most people think the answer is an obvious no.
If any individual or organization that satisfies this “conveys a wide range of information” criterion is deemed analogous to newspaper and book publishers for First Amendment purposes, then we have misunderstood how liberal political theory and free speech theory work. At the heart of liberal political theory is the idea that everyone is free to live according to their own ideals, so long as doing so does not unduly interfere with other people’s ability to do likewise. As a result, the government can legitimately restrict people’s freedom only when doing so is necessary to prevent harm or secure the demands of justice. The idea at the heart of liberal free speech theory is that when it comes to certain communicative acts, a commitment to individual freedom isn’t enough and must be bolstered by extra protections that make what counts as “speech” less liable to regulation than similarly harmful or unjust non-speech. This doesn’t mean that the government can willy-nilly regulate whatever it wants so long as it isn’t speech; it must always show the harm or injustice that results from the object of regulation. What liberal free speech theory says, rather, is that regulating the subset of those harms or injustices that come directly from “speech” should be more difficult, even acknowledging that they are harmful or unjust.
But this whole scheme presupposes that what gets covered as “speech” for this purpose is limited, a special domain of extra protection. We should remember that this special domain comes at a cost. “Free” speech isn’t truly free. When we grant “speech” coverage, we require those who are harmed or treated unjustly by that speech to absorb more of its costs. Once any entity that conveys a wide range of information is suddenly analogous to a newspaper, we have begun making what was supposed to be exceptional treatment the new rule. While some might welcome this libertarian, deregulatory move in the short run, it is not only anathema to liberal theory but also, I suspect, unlikely to yield attractive outcomes in the long run.

Volokh and Falk next say that search results are analogous to editorial publications because both involve choices about “how to rank and organize content,”
“what should be presented to users,” and “what constitutes useful information.” These similarities to publishers fare no better. As I said before, every store organizes and “ranks” content through its layout. Are all store layouts now akin to editorial publications under the First Amendment? Are all stores First Amendment publishers? Again, I think the answer is no. But as an ex-Google product philosopher (and who doesn’t want that title?) points out, companies like Facebook, Google, and Twitter seek to influence users by means of various organizational and content choices in much the same way that grocery stores do by their layout and product placement.

One might respond here by saying that ranking and organizing only counts as analogous to editorial functions if what is ranked and organized is itself speech. But this is implausible. Surely Volokh and Falk think that a restaurant ranking qualifies as speech even though the underlying things ranked and organized—restaurants—are not. Thus, that the thing being ranked and organized is itself speech is not necessary for coverage. Is it sufficient?
Here is an argument for that position: A bookstore selects which books to sell. Wouldn’t we say that its selection of those books is itself speech? And if so, doesn’t that show that curating other people’s speech is necessarily speech itself? Once again, I think the answer is no. First, I hesitate to grant the premise—that we would call a bookseller’s book selections an independent instance of protected speech. I say this because in cases where the state has banned the sale of protected speech, the Court has invoked either the First Amendment rights of speech creators or would-be speech buyers. When sellers challenge these bans, they point to the First Amendment rights of those other parties. Take Brown v. Entertainment Merchants Association, where the Court struck down a law banning the sale of violent video games.
Although its opinion was admittedly not a paragon of clarity, the Court in Brown considered the First Amendment rights of game creators and of the children who would buy the games. Nowhere did the Court consider whether the ban might violate the speech rights of video game sellers. Second, and more fundamentally, even if a bookseller’s choice of which books to sell counts as speech, that still does not show that (1) every time an entity curates third-party speech that curation is itself speech, nor does it show what might ultimately be more crucial—namely, that (2) like the newspaper in Tornillo, requiring a modification of that curation constitutes compelled speech. I have already gone over the reason for (1). To see (2), consider the military recruitment case Rumsfeld v. Forum for Academic and Institutional Rights (FAIR).

In FAIR, a federal statute required law schools to provide military recruiters the same access to students as that given to other recruiters or lose funding.
A group of law schools argued that requiring them to include the military in their fairs would send students the message that the schools endorsed the military’s “don’t ask, don’t tell” policy, which they did not. As a result, the schools argued that the requirement constituted unconstitutional compelled speech. The Court disagreed, holding that requiring law schools to give military recruiters equal access and even sending out scheduling emails to students on behalf of the military recruiters did not compel the law schools to speak at all. As the Court saw it, “schools are not speaking when they host interviews and recruiting receptions.” Even more, the Court thought some of the schools’ compelled-speech claims “trivialize[d] the freedom protected” in its prior compelled-speech cases. Given the Court’s ruling in FAIR, and even granting that the curation of third-party speech is itself speech, it is not the case that requiring an entity to include speech it dislikes within its curation necessarily entails compelling that entity to speak.

A final move someone might suggest to rehabilitate the Volokh and Falk position entails looking at the restaurant ranking differently—it doesn’t rank and organize restaurants but instead information about those restaurants. And so, any entity that makes such rankings is in the business of ranking and organizing information and is relevantly analogous to a publisher making editorial selections.
Two points here. First, I find it difficult to characterize a restaurant ranking as the organization of information about restaurants. It seems more natural to say that it is a ranking of restaurants that also generates information (which restaurants are best and which are worst). Second, as already noted, virtually any activity that involves the creation of information entails some curatorial decisions. Unless we are willing to say that every such activity warrants constitutional protection, we must concede that the fact that newspaper editors and search engines both engage in the curation of information is not sufficient for finding the latter analogous, for First Amendment purposes, to the former.Potentially Relevant Dissimilarities
Though it often goes unnoticed, how convincing we find analogical reasoning depends not only on the presence of relevant similarities but also on the absence of relevant dissimilarities. And as many have already pointed out, there are significant and arguably relevant dissimilarities between the outputs of tech companies like Facebook, Google, and Twitter, on the one hand, and newspapers, on the other.
To make this point about the importance of dissimilarity more concrete, consider the development of oil and gas rights in the United States. Courts were faced with the question of whether landowners had property rights to the oil and gas reservoirs that lay beneath their land. Reasoning by analogy, early American courts were “captured” by the law of capture: if you capture a wild animal while you’re on your own property, it’s yours.
Therefore, analogously, so long as you take out the gas and oil while you’re on your own property, it’s also yours. But of course, while in the grip of this analogy, courts failed to see the relevant dissimilarities between hunting wild animals and extracting oil and gas that made the analogy, and thus the application of the law of capture to oil and gas, problematic. For starters, such a rule incentivized landowners to over-drill so as to extract as much oil and gas as possible before their neighbors could do the same. Eventually we figured out that sometimes the dissimilarities are more important than the similarities and changed the rule.

Returning to editorial publications and tech company outputs, some scholars have argued that the use of algorithms creates a relevant dissimilarity. As Oren Bracha and Frank Pasquale have put it, we should distinguish between dialogical and functional expression and only give First Amendment coverage to the former.
The rough idea is that dialogical expression is perceived by the audience as something with which it can agree or disagree, criticize or support, argue for or against. In contrast, functional expression, while not clearly defined, is expression that the audience does not perceive as speech to which it can respond in these ways. Bracha and Pasquale argue that algorithmically generated search outputs are functional because users do not perceive rankings as expression with which they can dialogically engage.

Volokh and Falk object to claims that algorithms and their outputs are not speech, pointing in part to the fact that algorithms are written by humans and result from engineers’ judgments.
However, put in conversation with Bracha and Pasquale, Volokh and Falk might argue that audiences do perceive these outputs as judgments with which they can critically engage—just consider the public outcry over certain rankings and what does or does not trend. Even if we accept the dialogical/functional methodology, it seems that both sides are only partially right. Bracha and Pasquale are wrong to suggest that algorithmically encoded curation is necessarily functional. As others have suggested, we can conjure up some cases of algorithmic operations that look dialogical. This undermines the claim that the algorithm is what makes Facebook’s and Google’s curation non-speech.

Yet all of this is consistent with the plausible view, contra Volokh and Falk, that in light of how these companies portray themselves and their outputs to the public, outputs like search results, lists of what is trending, and newsfeed fodder are not understood by most members of the public as dialogical expression on the order of the content a newspaper publishes. While newspapers generally stand behind their content, Google,
Facebook, and Twitter have all explicitly disavowed the substance of their results. Newspapers also (and unsurprisingly) hold themselves out as editors, whereas these tech companies do everything they can to run from that categorization. It strikes me that selling themselves to the public in this way does lessen users’ perception that their outputs are dialogical. I doubt many people enter a search query into Google and think, “I now know Google’s views on my query.” And part of the reason for this may well be that these companies expressly tell users not to think the results are their speech (even as they claim the opposite in litigation). Self-presentation as not-a-speaker has another important consequence: Users may not perceive requirements that these companies alter their results as tantamount to compelling the companies to speak.

To see why the public might not perceive these algorithmic outputs as the speech of these companies, let’s turn to a few specific examples.
Google’s Position: Not a Speaker
We can start with the controversy over Google’s autocomplete function. As most reading this will be aware, when you start typing a search query into Google’s search box, Google automatically makes suggestions for how the query should be completed. These suggestions, which are generated algorithmically, depend on several variables, including what you are typing, what you have previously searched for, what others have searched for, and what is currently trending. In 2016, users noticed that when they typed “are Jews” or “are women,” Google suggested “evil” to complete the query. Similarly, when users typed “are Muslims,” Google suggested “bad.”
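To make those mechanics concrete, here is a toy sketch of prefix-based suggestion that blends the kinds of signals just described. Everything in it (names, weights, and data) is my own hypothetical illustration of the general technique, not Google’s actual system:

```python
# A toy illustration of prefix-based query suggestion, blending the signals
# described above: what the user is typing, the user's own history, what
# others have searched for, and what is trending. All names, weights, and
# data are hypothetical; this is a sketch, not Google's algorithm.
from collections import Counter

def suggest(prefix, global_counts, user_history, trending, k=3):
    """Rank candidate completions of `prefix` by a weighted mix of signals."""
    scores = Counter()
    for query, count in global_counts.items():  # others' search volume
        if query.startswith(prefix):
            scores[query] += count
    for query in user_history:                  # this user's past searches
        if query.startswith(prefix):
            scores[query] += 50
    for query in trending:                      # currently trending queries
        if query.startswith(prefix):
            scores[query] += 100
    return [query for query, _ in scores.most_common(k)]

global_counts = {"are jets fast": 40, "are jellyfish immortal": 25}
print(suggest("are je", global_counts,
              user_history=["are jellyfish immortal"], trending=[]))
# -> ['are jellyfish immortal', 'are jets fast']
```

Even this crude version illustrates the point in the text: what gets suggested is driven by aggregate user behavior, not by any sentence-like judgment the company composes.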
In 2011, when a certain Italian citizen’s name was typed into Google’s search box, autocomplete suggestions included the Italian words for “con man” and “fraud.” The individual then sued Google for defamation and won.

If we really think the outputs of Google’s algorithms are its speech, this defamation suit makes sense. But Google argued the opposite. In its statement after losing the suit in Italian court, Google said, “We believe that Google should not be held liable for terms that appear in autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself.”
If you go to Google’s support pages today and look under “Search using autocomplete,” you will see the following: “Note: Search predictions aren’t the answer to your search. They’re also not statements by other people or Google about your search terms.” We should pause to reflect on this. Google is not simply saying that the views of those it ranks are not its speech. More than that, it expressly disavows as its own speech the very rankings and algorithmic outputs it claims in litigation to be its editorial speech.

There are, in fact, numerous situations in which Google disavows as its speech the very rankings that commentators like Volokh and Falk argue are both its speech and analogous to the speech of editorial publications. Stuart Benjamin describes a case in which Google’s top result for the term “Jew” was an anti-Semitic site called “Jew Watch.” When civil rights groups pressured Google to delist the site, Google instead posted a note stating that its results rely on “algorithms using thousands of factors to calculate a page’s relevance to a given query” and that they don’t reflect “the beliefs and preferences of those who work at Google.”
Google thus presented itself as a conduit for the speech of others—not so different from how Google saw internet service providers (ISPs) as conduits, at least when I worked there. Now consider Tornillo, where the newspaper was so intimately tied to the content it published that a mere right of reply was thought to compel the newspaper to speak. The difference between Tornillo and Google’s situation is clear. Google’s point is that its search-related outputs aren’t its speech at all.

Google most recently and explicitly eschewed the editorial analogy in its testimony before the Senate Judiciary Subcommittee on Crime and Terrorism in October of last year. It is worth reproducing in full the relevant dialogue between Louisiana Senator John Kennedy and Richard Salgado, Google’s law enforcement and information security director:
Kennedy: Are you a media—let me ask Google this, to be fair. Are you a media company, or a neutral technology platform?

Salgado: We’re the technology platform, primarily.
Kennedy: That’s what I thought you’d say. You don’t think you’re one of the largest, the largest newspapers in 92 countries?
Salgado: We’re not a newspaper. We’re a platform for sharing of information that can include news from sources such as newspapers.
Kennedy: Isn’t that what newspapers do?
Salgado: This is a platform from which news can be read from news sources.
If we are stuck making First Amendment coverage determinations by analogy, we might want to look beyond the analogy Google explicitly rejected in its congressional testimony.
Facebook’s Position: Not a Speaker
The history of Facebook’s Trending News and the recent controversy surrounding how its architecture facilitates—indeed, encourages—the proliferation of inflammatory and weaponized misinformation and propaganda provide further examples of a company that deliberately disclaims its curatorial products as its speech and itself as editor.
Facebook launched Trending News in January 2014. By this time, Twitter had established itself as the go-to social media site for breaking news and minute-by-minute coverage of live events. As a result, Twitter could “gobble up enormous amounts of engagement during TV premieres, award shows, sport matches, and world news events.”
Twitter also successfully commercialized its Trending Topics feature, selling lucrative advertising space in the form of “promoted trends.” Facebook’s Trending News was viewed as the company’s attempt to emulate and compete with Twitter in this commercial space.

By the summer of 2014, Facebook was already facing criticism for its lack of serious news, both in Trending News and its main news feeds. The civil unrest in Ferguson was considered the year’s “most important domestic news story,” and while Twitter was hailed for its second-by-second coverage of Ferguson, there was scant evidence of the conflict on Facebook, which instead seemed dominated by the ALS ice bucket challenge. Some observers conjectured that Facebook’s feed algorithms were to blame. At one point, a senior Facebook employee said that the company was “actually working on it,” but uncertainty about the nature of the problem and Facebook’s response remained. Should we understand the lack of Ferguson coverage in people’s feeds as the editorial decision of Facebook? Did Facebook see the lack of Ferguson coverage as its own speech? After all, on Volokh and Falk’s reasoning, that absence was clearly the result of algorithmic construction choices, which in turn reflected the judgments of the company’s engineers. And Facebook was criticized for its algorithm’s design, which tended to hide controversial content and show users more universally agreeable content, because the latter is what “keeps people coming back.” But once again, and unsurprisingly, this is not how Facebook saw it. Facebook did not see the resulting absence of Ferguson coverage as its own speech, let alone the product of a deliberate decision akin to the choices made by a newspaper to write about or neglect that same topic. Nor does Facebook’s recognition that it needed to respond to the controversy by tweaking its algorithm, which it did, necessarily suggest that the lack of Ferguson coverage in Facebook feeds was an editorial judgment.
As this episode underscored, Facebook straightforwardly does not see itself as an editor or its curation as its speech. Instead, in a Q&A session, Zuckerberg, much like Google, characterized Facebook as more analogous to a neutral conduit or tool that enables the speech of others:
What we’re trying to do is make it so that every single person in the world has a voice and a channel and can share their opinions, or research, or facts that they’ve come across, and can broadcast that out to their friends and family and people who follow them and want to hear what they have to say.
. . . We view it as our job to . . . giv[e] everyone the richest tools to communicate and share what’s important to them.
This innocent-conduit-for-the-speech-of-others framing has persisted notwithstanding the facts that, by 2014, Facebook was the primary driver of traffic to most of the top news websites
and that, by 2017, 45 percent of U.S. adults were getting at least some of their news from Facebook. Facebook has become “to the news business what Amazon is to book publishing—a behemoth that provides access to hundreds of millions of consumers and wields enormous power.” Nevertheless, Greg Marra, the engineer who oversees Facebook’s News Feed algorithm, said in an interview that he and his team “explicitly view ourselves as not editors. . . . We don’t want to have editorial judgment over the content,” because users are in the best position to decide what they want to see.

Facebook’s response to the 2016 controversy surrounding the curation of Trending Topics further drives home the editorial disanalogy. Back in 2014, Facebook said that its Trending Topics articles were ranked by an algorithm based on metrics like popularity and timeliness. Until the publication of a story by Recode in August 2015, there appears to have been no awareness that this was not the whole truth. That story suggested that Facebook’s workers had some hand in shaping Trending Topics content—not by selecting which articles appeared (“that’s done automatically by the algorithm”) but by writing headlines. But in two explosive pieces on the tech news site Gizmodo in May 2016, Michael Nunez reported that the involvement of workers went much further: Material appearing in Trending News was curated by Facebook contractors who, in addition to writing headlines, selected which topics trended and which sites they linked to. These contractors reported that they were told to link to stories from preferred outlets like the New York Times; that they had a prerogative, which they regularly exercised, to blacklist topics that weren’t covered by multiple traditional news sources or that concerned Facebook itself; and that they were told not to publicize that they were working for Facebook, presumably because the company “wanted to keep the magic about how trending topics work a secret.” Contractors subsequently reported that they had also injected stories about topics like Black Lives Matter into Trending News at the behest of management, who thought certain topics should be trending regardless of algorithmic metrics. Most controversially, the contractors reported that pro-conservative stories were regularly excluded from Trending News, not at management’s instruction, but on account of left-leaning colleagues using their prerogative to blacklist topics. Based on these reports, Nunez argued that Facebook wanted to “foster the illusion of a bias-free news ranking process” and that Facebook was obscuring its workers’ involvement because it “risk[ed] losing its image as a non-partisan player in the media industry” rather than “an inherently flawed curator.” In Nunez’s view, Facebook worked like a newsroom, expressing the views of its staff in its reporting, in “stark contrast” to the company’s depiction of Trending News as merely “topics that have recently become popular on Facebook” or “a neutral pipeline for distributing content.”

This did not sit well with Republicans. Within hours of Nunez’s second report, Republican National Committee Chairman Reince Priebus demanded that Facebook “answer for conservative censorship.” A post on the GOP’s official blog argued (presciently) that “Facebook has the power to greatly influence the presidential election” and objected to its platform “being used to silence viewpoints and stories that don’t fit someone else’s agenda.” Shortly thereafter, Senate Commerce Committee Chairman John Thune—a leading critic of the Federal Communications Commission’s fairness doctrine until it was officially repealed (after years of non-enforcement) in 2011—notified Facebook that his committee was exploring a consumer protection investigation. In his words:
If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular political viewpoints, Facebook’s assertion that it maintains a platform for people and perspectives from across the political spectrum misleads the public.
Thune gave Facebook fourteen days to provide details of its guidelines for preventing the suppression of political views, the training it provided workers in relation to those guidelines, and its methods for monitoring compliance.
Despite the view of lawyers who thought that Facebook could (and perhaps should) invoke the editorial analogy and reject Thune’s demands on First Amendment grounds, the company responded to Thune, explained its practices, and shared its internal Trending Topics review guidelines. Facebook’s senior leaders also met with a number of top Republican leaders to reassure them that it was an impartial platform. In its letter to Senator Thune, Facebook said that it found “no evidence of systematic political bias” but couldn’t rule out occasional biased judgment by its curators. It also identified, and pledged to reform, two parts of its process for generating Trending News. First, it would end its practice of boosting topics being covered by preferred major media players like BBC News, CNN, Fox News, and the New York Times (a change, looking back, that we might wish Facebook had not made). Second, the company stated that it would “take prompt remedial actions” should it find evidence of “improper actions taken on the basis of political bias.”

Facebook’s response to this issue, in the following months and amid a contentious U.S. election cycle, was to replace the Trending News curatorial team with engineers who had a more mechanical role in approving stories generated by the Trending News algorithm. These engineers, as one writer poetically put it, would be “the algorithm’s janitors.” Per its revised guidelines, Facebook removed its own headlines and summaries, and all featured news stories, including their accompanying excerpts, became algorithmically generated, based on “spikes in conversation.” The only non-algorithmic effect on content came when reviewers found clear mistakes—such as duplicate topics, posts about non-news, and posts about fictional events—and when they separated topics that had been automatically clustered under a single heading by the algorithm. Before approving a topic, reviewers also confirmed that each topic contained at least three recently posted articles or five recently published posts, reviewed the keywords associated with the topic, nominated related topics, and set the topic location and category (for example, business, sports, or politics). From this point on, the source of posted articles no longer had a bearing on whether a topic would appear in Trending News.
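The revised process, as described, reads less like editorial judgment than like a mechanical checklist, which is easy to render as a few lines of code. In the toy sketch below, only the three-article/five-post thresholds and the categories of “clear mistakes” come from the account above; every name and structural choice is my own hypothetical illustration, not Facebook’s actual system:

```python
# A toy rendering of the checklist-style review gate described above. Only
# the three-article / five-post thresholds and the "clear mistakes" checks
# come from the reporting; all names and structure are hypothetical.
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    recent_articles: int = 0    # recently posted news articles on the topic
    recent_posts: int = 0       # recently published user posts
    is_news: bool = True        # reviewers weeded out posts about non-news
    is_fictional: bool = False  # ...and posts about fictional events

def approve(topic: Topic, already_trending: set) -> bool:
    """Apply the mechanical checks a reviewer performed before approval."""
    if topic.name in already_trending:           # no duplicate topics
        return False
    if not topic.is_news or topic.is_fictional:  # clear mistakes
        return False
    # a topic needed at least three recent articles or five recent posts
    return topic.recent_articles >= 3 or topic.recent_posts >= 5

print(approve(Topic("solar eclipse", recent_articles=4), set()))  # True
print(approve(Topic("dragon attack", recent_posts=9,
                    is_fictional=True), set()))                   # False
```

A gate like this selects no viewpoint; it only verifies that the algorithm’s output meets minimum mechanical criteria, which is precisely the “janitorial” role the revised process assigned to reviewers.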
Facebook thus changed its practices to become more “neutral,” however amorphous the concept. The company wanted to make clear that its rankings were not its speech. Recall that in the FAIR case, the Court thought that requiring law schools to include military recruiters was not compelled speech, as even “high school students can appreciate the difference between speech a school sponsors and speech the school permits because legally required to do so, pursuant to an equal access policy.”
Facebook is asking users to do this very same thing—to appreciate that what is trending is not Facebook’s speech, even though it is on its platform.

Unfortunately, the more Facebook went out of its way not to be an editor, the more its Trending News algorithm became, as various news outlets characterized it, a “disaster,”
an algorithm “go[ne] crazy.” A few days after the change, Megyn Kelly was trending with a false headline: “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary.” At the same time, four Washington Post journalists ran an experiment on their personal Facebook accounts to look at the sorts of stories in Trending News and uncovered that, from August 31 to September 22, there were “five trending stories that were indisputably fake and three that were profoundly inaccurate.” Throughout all of this, Zuckerberg did not reconsider his prior insistence that Facebook is “a tech company, not a media company.” For better or worse, it is hard to imagine Facebook trying harder to distance itself from both the editorial analogy and any claim that what showed up in Trending News was its speech. Even in the wake of Trump’s win, when “everyone from President Obama to the Pope . . . raised concerns about fake news and the potential impact on both political life and innocent individuals,” Zuckerberg reiterated that he and his company “do not want to be arbiters of truth ourselves, but instead [want to] rely on our community and trusted third parties.”

When we diagnose what went wrong with regard to fake news, we need not conclude that Facebook made the mistake of trying to be too neutral. Instead, we can realize that our (and their) previous conception of what “neutrality” entailed—not privileging certain news sources and treating all sources of “news” the same—was wrong. Facebook, and the rest of us, learned that treating fake news sites on a par with the Wall Street Journal and the New York Times says something decidedly non-“neutral” about how we should treat information from those sites. Just recently, Facebook announced that it will once again rank news sources, but this time it plans to do so based on user evaluations of those sources.
We can debate this method as well, but it represents yet another attempt by Facebook to figure out what “neutral” means and then do it.

Finally, like Google, Facebook and Twitter were asked during recent congressional hearings how they “respond to . . . the growing concerns that [they] and other Silicon Valley companies are putting a thumb on the scale of political debate and shifting it in ways consistent with the political views of [their] employees?”
Facebook General Counsel Colin Stretch replied, “Senator, again, we think of ourselves as a platform of all ideas—for all ideas and we aspire to that.” Stretch then discussed the training Facebook provides to prevent employee bias, saying, “We want to make sure that people’s own biases are not brought to bear in how we manage the platform.” Responding to the same question, Sean Edgett of Twitter insisted that “our goal and . . . one of our fundamental principles at the company is to remain impartial.”

Whatever analogical similarities these companies share with publishers, the companies themselves see the dissimilarities as more salient. Given this, it is hard to see why we should extend First Amendment coverage to the choices they make about how to run their platforms. And perhaps more significantly, these companies’ self-conception dramatically weakens the claim that requirements to change their outputs would unconstitutionally compel them to speak.
Competing Analogies
In addition to delving into some internal weaknesses of the editorial analogy, we can cast further doubt on its near-automatic acceptance by raising rival analogical frameworks that either (1) suggest that these companies’ judgments should not receive free speech coverage or (2) assume some coverage but suggest ways that government regulation would not count as compelling these companies to speak. The ISP-like conduit analogy has already been discussed extensively by others (and briefly by me above),
so here I will mention three other sets: shopping malls or law schools, fiduciaries or public trustees, and public forums or public squares. My goal here is not to convince you of one analogy above the rest but instead to show the limitations and (often unstated) normative judgments inherent in making First Amendment coverage determinations via analogy at all.

Shopping Mall or Law School
In PruneYard Shopping Center v. Robins,
the appellees, a group of high school students, set up a stand to gather signatures for a petition in a privately owned shopping center. Security guards forced the students to leave; the students sued, claiming a right to solicit signatures on the premises under the California Constitution. The California Supreme Court ruled in their favor, but the PruneYard Shopping Center appealed, claiming a violation of its speech rights under the Federal Constitution. Most interestingly for our purposes, in its briefs PruneYard cited Tornillo to argue analogically. That is, PruneYard argued that requiring it to allow the students to petition was analogous to compelling newspapers to publish replies by political candidates they criticize. Now, we can see that there are some similarities between a shopping center and a newspaper—for example, both decide what to present to consumers, and both convey information to those consumers by means of their curatorial decisions (i.e., they share the same similarities Volokh and Falk identified between newspapers and Google). But crucially, the U.S. Supreme Court did not think those similarities were salient. Instead, the Court took a different, and better, methodological approach. It looked at the reasoning underlying Tornillo to see whether that same reasoning was applicable to a shopping mall. As the PruneYard Court saw it, the state cannot force newspapers to publish right-of-reply articles because doing so would deter editors “from publishing controversial political statements” and thus limit the “vigor” and “variety” of public debate. But such concerns did not apply in the case of a shopping center, and so the analogy did not hold sway. The Court ruled that PruneYard’s First Amendment rights were not infringed by the students’ state-given rights of expression and petition on its property. Indeed, the Court did not think allowing the students to petition compelled PruneYard to speak at all.

The Court again discussed and rejected the Tornillo analogy in FAIR.
While the law schools argued that requiring them to treat military and non-military recruiters alike unconstitutionally compelled them to speak—to send a message about their views on a military policy with which they disagreed—the Court thought otherwise. The Court held that, unlike a newspaper engaged in First Amendment–protected editorial choices, “schools are not speaking when they host interviews and recruiting receptions.”

We can analogize both the PruneYard Shopping Center and the law schools to Facebook Trending and Google Search in a way that has prima facie appeal. Like PruneYard and the schools, neither Facebook nor Google is literally a newspaper. Both companies’ platforms, like the shopping center, are generally accessible to all. Like the shopping center’s selecting which retailers to lease space to and the law schools’ selecting which employers to participate in their recruitment fairs, Facebook and Google make curatorial decisions. As I have discussed at length above, Facebook and Google can and do publicly dissociate themselves from the views expressed by people who speak through their platforms and from the products of their own curatorial efforts (such as a particular ranking). The Supreme Court thought it important that PruneYard and the law schools were capable of doing the same. Thus, if we reason by this analogy, Facebook and Google are also not compelled to speak when required to let others speak on their platform.
As in PruneYard, it is also not obvious that regulations preventing Facebook, Google, and Twitter from making certain curatorial and architectural choices—for example, from delisting competitors’ sites or refusing their ads, deactivating user live streams at the behest of police with no judicial oversight,
striking deals with record labels to preemptively block the upload of certain user videos, or relying on monetization models that encourage addictive behaviors and the development of polarized epistemic bubbles that in turn facilitate the viral spread of fake news and propaganda—would limit the vigor or variety of public debate. Indeed, it’s important to remember that even if, as in PruneYard, the state can force these private actors to permit third-party speech in ways that do not require the companies themselves to speak, the First Amendment rights of users remain. The government could not have silenced the high school petitioners in PruneYard, and the same can be said for political dissent on Facebook.

In short, we can plausibly analogize Facebook, Google, and Twitter to the shopping center in PruneYard or the law schools in FAIR, instead of to the newspaper in Tornillo. And when we do, certain regulations don’t look constitutionally problematic after all.
Fiduciary or Public Trustee
An alternative analogical approach conceives of major tech companies as information fiduciaries.
Tim Wu raises a similar idea when he asks whether new laws and regulations should “requir[e] that major speech platforms behave as public trustees, with general duties to police fake users, remove propaganda robots, and promote a robust speech environment surrounding matters of public concern.” As Wu points out, such a move would require a reorientation of the First Amendment so as to renew the concern the Court evinced for the speech rights of listeners (or users) in cases like Red Lion Broadcasting Co. v. FCC.

While this analogy may seem unlikely to be adopted in practice, such a move accords with the Court’s recognition in Packingham of cyberspace as “the most important place” for the exchange of views.
In the recent congressional hearings with social media companies, it was also clear that all the participants were operating on a background assumption that, while dealing with problems like those generated by Russian interference in the election, these companies had to be mindful of First Amendment principles. At one point, Senator Dick Durbin remarked, “Now take the word Russian out of it. A Facebook account that promotes anti-immigrant, anti-refugee sentiment in the United States. I don’t know if you would characterize that as vile. I sure would.” Pursuing this concern, Senator Durbin asked, “How are you going to sort this out, consistent with the basic values of this country when it comes to freedom of expression?”

If we thought of these companies as no different from any other private company, the idea that their solutions need to be consistent with the First Amendment would seem confused. Under existing doctrine, the tech companies don’t need to comply with the First Amendment, nor concern themselves with the First Amendment rights of users, because they aren’t engaged in state action. But even putting aside a finding of state action, members of the government, ordinary citizens, and the companies themselves do seem to see the companies as having a fiduciary-type role, given the importance of their platforms as spaces of public debate.
A shareholder proposal filed with Facebook and Twitter by Arjuna Capital (an activist investment firm) and the New York State Common Retirement Fund (the nation’s third-largest public pension fund) called, in essence, for further movement toward a public trustee role.
And Zuckerberg embraced a public trustee model in his 2018 annual self-challenge and Yom Kippur atonement. Zuckerberg did not commit to turning Facebook into a better newspaper editor; he suggested that the company would “assume the responsibilities implied by [its] power,” much like a public trustee would. And while these latter two are Zuckerberg’s personal commitments, as Facebook’s CEO and a controlling shareholder, he has fiduciary duties of his own to think about.
Like the editorial analogy, analogizing these companies to fiduciaries or public trustees is prima facie plausible. Indeed, even more so than in the case of the editorial analogy, pretty much all of the relevant parties act (at least outside of litigation) as if something like this were the case today. If these companies were analogized to fiduciaries for purposes of First Amendment law, then, as with lawyers and doctors, case law supports the regulation of their fiduciary-related choices, even assuming those choices are speech.
Company Town or Public Forum
When considering the company town or limited public forum analogy, we should distinguish between two positions: (1) the social media sites themselves are like company towns or create limited public forums, such that when the company bans or delists someone, there are First Amendment implications; and (2) government officials who communicate to the public through their pages on these privately owned platforms can violate users’ First Amendment rights by banning the users or deleting their comments.
Until recently, courts rejected the first and were uncertain about the second. As all lawyers know, for the First Amendment to apply, there must be state action. And rarely does a private actor’s power rise to that level. But historical moments—and the nature of emerging threats—matter. As Eric Goldman observes, “We can’t ignore that there is such skepticism towards internet companies’ consolidation of power.” Goldman was focused on antitrust, but the point generalizes. If we combine this skepticism with the Court’s broad language in Packingham, the once off-the-wall theory that these companies should count as state actors for First Amendment purposes is starting to look a bit more on the table. And indeed, both the language of Packingham and its public square analogy have surfaced in recent suits by users alleging that social media companies violated their First Amendment rights. More than that, they have already appeared in court opinions addressing such claims. It seems possible that the Court has signaled a willingness to return to an earlier, more capacious reading of the state action doctrine.
The second question concerns whether government officials’ pages on private social platforms can amount to limited public forums under the First Amendment. And while certain cases suggesting an affirmative answer predate Packingham,
Packingham has already been used to bolster that conclusion. Most obviously, the Knight First Amendment Institute itself has argued, citing Packingham, that Trump’s @realDonaldTrump Twitter account is a designated public forum and that his banishment of seven Twitter users violates their First Amendment rights.
As for the company town or limited public forum analogy, there are two strands of state action doctrine worth mentioning here. The first concerns public function and the second entanglement. And we can make out analogies to cases in both.
The classic public function case is Marsh v. Alabama,
which involved a company town. As happened not infrequently in the early 1900s, companies would build “towns” and then have their workers live and buy within them. Often, companies would use a claim of private property to prohibit certain individuals, particularly union organizers, from entering the town, bringing out the police in the event of any trespass. In Marsh, it was not a union organizer but a Jehovah’s Witness who was arrested for trespass while distributing religious literature on the company-owned sidewalk. The Court held that the company’s actions constituted state action because the entire company town had “all the characteristics of any other American town,” save for the fact that it was privately owned. The company performed a public function, and that meant it could be treated as a state actor for constitutional purposes.
So when it comes to Facebook, Google, and Twitter, what counts as a “public function”? As the history of the state action doctrine attests, the Court has changed its mind on this very issue. In Amalgamated Food Employees Union Local 590 v. Logan Valley Plaza, for instance, the Court held that so long as union picketers used a private shopping center in a manner and purpose “generally consonant” with the use the owners had intended, they could not be banned from it consistent with the First Amendment.
In the Court’s view, the shopping center was “clearly the functionally equivalent of the business district . . . involved in Marsh.” And “because the shopping center serve[d] as the community business block and [was] freely accessible and open to the people in the area and those passing through, the State [could] not delegate the power, through the use of its trespass laws, wholly to exclude those members of the public wishing to exercise their First Amendment rights on the premises.” If Logan Valley Plaza were still good law, it would seem that the platforms run by Facebook, Google, and Twitter could easily be analogized to the plaza, and users and advertisers would have First Amendment claims against these private companies.
But Logan Valley Plaza was overruled in Hudgens v. NLRB.
There, the Court thought itself bound by its earlier decision in Lloyd Corporation v. Tanner, which held that a shopping center did enough to make clear that it was not dedicated to public use, so that members of the public had no First Amendment right to distribute handbills protesting the Vietnam War. In Hudgens, the Court said it was its “institutional duty . . . to follow until changed the law as it now is” and thought the rationale in Logan Valley Plaza could not survive Lloyd. Hudgens re-read Marsh as standing for something narrower: namely, that private entities that are the functional equivalent of a municipality cannot, consistent with the First Amendment, wholly restrict the speech of others on their property.
From these precedents, two questions naturally arise. First, reasoning analogically, we can ask whether platforms such as those run by Facebook, Google, and Twitter are more like municipalities or more like shopping centers. Because I see these platforms as sufficiently different from both (and because I am skeptical of analogical reasoning in this space generally), this framing of the issue strikes me as unattractive. Alternatively, we might instead ask whether a majority of the current Court is open to finding a public forum well before a company has created the equivalent of an entire town. The language in Packingham supports an affirmative answer.
Again, in Packingham the Court “equate[d] the entirety of the internet with public streets and parks”
and declared it “clear [that] cyberspace . . . and social media in particular” are “the most important places (in a spatial sense) for the exchange of views.” It found social media “the modern public square” and suggested it is “[a] fundamental principle of the First Amendment . . . that all persons have access” to it. This might be read as analogizing social media to the company towns of the past. If these spaces are the “modern public square,” they are clearly taking on important government functions.
One might reply—as these companies always do—that users are just a click away from going somewhere else. Two thoughts about this. First, this reply only highlights how open to the public these platforms are. And since Hudgens, when courts have tried to make sense of when private property becomes a public forum, they have found it relevant whether the site has been dedicated to public use.
If people can seamlessly move between social media sites, it may be easier to find these sites dedicated to public use. As with walking into a park or entering a shopping mall, you agree to follow some basic rules upon entry, but such barriers are low. The emphasis that leading social media companies placed on openness and non-bias in their recent congressional testimony buttresses this point. Second, we know that such freedom of online movement would exist only if the costs of switching platforms were zero or close to it. But we (and they) know that this is not true, given, among other things, network effects, switching costs, and first-mover advantages. Moreover, and as the more analogically inclined have put it, even if you do switch, it tends to be a move from one online feudal lord (such as Google) to another (such as Facebook). Like moving from company town to company town, moving from one online feudal lord to another does not obviously diminish the sense in which either engages in the functional equivalent of state action.
A separate strand of cases within the “murky waters of the state action doctrine”
concerns government entanglement. This is considered the “category of exceptions that has produced—and continues to produce—the most confusion.” Given this, how the Court will evolve the doctrine going forward is anybody’s guess. With that said, and putting aside cases concerning state action via judicial enforcement of private contractual agreements (Shelley v. Kraemer being the apex of this), the Court has previously found state action when “[t]he State so far insinuated itself into a position of interdependence with” a private non-state actor “that it must be recognized as a joint participant in the challenged activity.” Relatedly, in Evans v. Newton the Court said that “[c]onduct that is formally ‘private’ may become so entwined with governmental policies or so impregnated with a governmental character as to become subject to the constitutional limitations placed upon state action.”
The government-like character of the leading tech companies has been acknowledged by the companies themselves. Almost a decade ago, Zuckerberg opined, “In a lot of ways Facebook is more like a government than a traditional company. We have this large community of people, and more than other technology companies we’re really setting policies.”
But governments also hold substantial power over these companies, often in ways invisible to the public. Take government “requests” for data, made without judicial oversight. It isn’t hard to see what is technically a private decision by companies like Facebook (to hand over user data to the government) as so entwined with the government that finding state action would be reasonable. Or take the pervasive—and, in most of academia, deeply under-appreciated—informal pressures that governments put on these platforms to regulate certain content: a technique sometimes called “jawboning.” The recent congressional hearings and various letters from congressional committees to these companies underscore how responsive these companies are to the concerns and recommendations of U.S. government officials, even where the government’s legal authority to demand such responsiveness is unclear. If members of the public were more aware of all the ways that the U.S. government works with and makes “requests” of these companies, I suspect findings of state action would be more forthcoming.
The Takeaway
As with the editorial analogy, other proposed analogies highlight certain facts while obscuring others. Yet all these analogies have prima facie purchase. When it comes to programs that organize, rank, and transmit third-party communication to users, some of what they do is similar, in some respects, to some of what publishers or editors do; some of what they do is similar, in some respects, to what fiduciaries do; some of their functions are similar, in some respects, to what shopping malls and law schools do; and some of what they do makes them look analogous to public squares or to state actors. The question that everything hinges on is this: Which similarities and dissimilarities are the ones that matter from the point of view of free speech principles?
In the First Amendment context, to invoke the compelled speech doctrine and cite Tornillo as the relevant precedent, based on the mere fact that both search engines and newspapers rank and organize content, is to beg this question instead of properly addressing it. In asking which similarities and dissimilarities matter from the perspective of free speech principles, we are posing a question whose answer cannot but reside in normative considerations. Analogical methods that respond to questions of free speech coverage by noting similarities between different types of communication, without examining these underlying normative concerns, are at best limited and at worst misleading. The limits of analogical reasoning help explain why some find the concept of “similarity” nearly useless.
Indeed, the very use of analogical reasoning in law remains contested, with some finding it to be the “cornerstone of common law reasoning” while others see it as “mere window-dressing, without normative force.” As I have suggested elsewhere, if analogical reasoning is to be useful at all, we may need to distinguish between types of analogy and recognize the limited value of each.
The above point is focused on the threshold question of First Amendment coverage. There also remains an enormous amount of uncertainty concerning how these different framings, if adopted, would play out in practice. Take the fiduciary analogy. Determining to whom these companies would owe fiduciary obligations is far less clear than some acknowledge. Even among domestic users, interests will conflict, as we see in debates about these companies’ hate speech policies, and on university campuses when the need for open debate runs up against the need for safe spaces. Similarly, while finding these companies to be analogous to public squares or company towns might be straightforward in some respects, it is worth noting that neither government officials nor a majority of users seem to want these companies to be confined by the First Amendment.
Returning to hate speech: it remains protected under the First Amendment, yet there has been a steady stream of controversies surrounding the failure of these platforms to remove hate speech and the users who engage in it. Users expect a level of content moderation that would likely be unachievable by a platform constrained by the First Amendment. Even more than this, applying the First Amendment would likely render each of these companies’ community standards guidelines unconstitutional. If the state can’t eject you from the public square for saying something, these companies wouldn’t be able to do so either.
If the First Amendment rights of users were deployed to overturn content moderation as we know it, I suspect these platforms would witness a mass exodus. If I may analogize a bit myself, there is something to be said for the Nintendo way, where systems are more closed and curated. Such systems often end up creating more value for users (and persisting longer) than alternatives like Sega or MySpace, which try to be too many things to too many people at once, with minimal quality control.
If the First Amendment really did apply to today’s tech giants, it’s not clear to me that they could avoid the latter’s fate.
Normative Beginnings
Instead of focusing on plausible analogies, we need to think through the normative theories undergirding the free speech principle and decide which of them, one or several, we want to privilege when making First Amendment coverage determinations.
Here I will only mention two major contenders—democratic participation theory and thinker-based theory—and leave it to readers to decide whether these theories or others are what ought to be privileged at this historical moment.
Democratic ideals are invoked by many influential First Amendment scholars to explain and defend U.S. free speech doctrine.
Building on this tradition, the democratic participation theory of free speech says that speech must be protected in order to ensure “the opportunity for individuals to participate in the speech by which we govern ourselves.” How do we decide what counts as “speech” using democratic participation as our normative reference point? We cannot construe the ideal too broadly, such that all parts of social life are part of the project of self-government, for in encompassing everything, the ideal would prioritize nothing. Instead, the ideal of democratic participation requires us to conceptually divide society into two domains: public life, where we act as citizens cooperating in collective self-governance, and private life, where we act independently in the service of our own projects. For free speech principles grounded in democratic participation, “speech” denotes whatever forms of communication are integral to collective self-governance. Of course, there will be complications at the margins, but the basic implications of the democratic participation theory are discernible all the same. Free speech principles are not meant to immunize all communication against legitimate regulatory aims. They are meant to support the project of collective self-government by safeguarding the communicative conduct that is essential to that project’s realization.
With those clarifications in place, the pertinent question for our purposes is this: Which sorts of ostensible “speech”—be they algorithmic outputs in the form of rankings, listing decisions, trending topics, and so on—help the project of democratic self-government, and which do not? At this moment, we can certainly appreciate how troll armies, fake accounts, and bots can be anathema to these projects. The economic decisions that companies like Google make in determining which ads to run or whether to privilege their own products against rivals like Yelp and TripAdvisor are, as I said, commercial and need not be seen as worth protecting as “speech” for the sake of democratic self-governance, at least across the board.
That’s not to say that these decisions should necessarily be regulated but instead to show why, under democratic participation theory, they could be, without running afoul of the First Amendment.
The “thinker-based” theory, recently developed by Seana Shiffrin, identifies “the individual agent’s interest in the protection of the free development and operation of her mind” as the normative keystone of free speech. Whereas other theories situate the value of the thinker in relation to extrinsic ideals or desiderata, this theory identifies a direct and non-contingent link between the value of mental autonomy and the justification for the protected status of communicative conduct. Again, however, not all communication is privileged under such a theory. If we prioritize the “fundamental function of allowing an agent to transmit . . . the contents of her mind to others and to externalize her mental content,” then we will need special protections for people sharing all of this “content” with others. This is part of what makes Shiffrin’s theory distinctive: The expression of thoughts about politics and government does not occupy an exalted position relative to the expression of thoughts about everyday life. But crucially, what is especially protected in this theory is not communication as such but the communication of the thought of individuals. And this will tend to assign a less privileged status to much commercial communication. So when we revisit our key question—whether programs that synthesize, organize, rank, and transmit third-party communication to users are implicated in “the fundamental function of allowing an agent to transmit the contents of her mind to others”—the diagnosis is mixed, as in the previous case.
One interesting consequence of the thinker-based theory is that, unlike the democratic participation theory, it suggests that facilitation of everyday online chatter by search engines and social networks may be as much a part of the case for protecting (some of) their operations as their role in facilitating political discourse. But as with the democratic participation theory, much of what these programs do—including running ads and allowing for the creation of bot armies and the spread of fake and inflammatory news—will likely fall outside the scope of free speech coverage by the lights of this normative approach.
Concluding Thoughts
In debates over tech companies and free speech coverage, neither the gravity of the policy stakes nor the complexity of the things being compared has dampened the willingness of courts and scholars to use tenuous analogies in charting the way forward. Most everybody seems to agree that search engines and social media platforms should be covered by principles of a free press if, and to the extent that, the reasons underlying our protection of the press apply to them. But the point of this paper is that casual analogical methods—observing that both types of things “convey a wide range of information” or “rank and organize content”—do not tell us whether or to what extent those reasons apply. There are multiple plausible analogies that might be used, each with different First Amendment implications, and none tells us whether the normative considerations underlying free speech coverage for the one apply to the other. But if those normative considerations are inapplicable, the reason to extend coverage disappears.
+ The following borrows from and builds on prior work, including Heather Whitney, Does the Packingham Ruling Presage Greater Government Control over Search Results? Or Less?, Tech. & Marketing L. Blog (June 22, 2017), http://blog.ericgoldman.org/archives/2017/06/does-the-packingham-ruling-presage-greater-government-control-over-search-results-or-less-guest-blog-post.htm; and Heather M. Whitney & Robert Mark Simpson, Search Engines, Free Speech Coverage, and the Limits of Analogical Reasoning, in Free Speech in the Digital Age (Susan Brison & Kath Gelber eds., forthcoming 2018). For helpful feedback, my sincerest thanks to Adam Shmarya Lovett, Chris Franco, Daniel Viehoff, David Pozen, Eric Goldman, Jameel Jaffer, Jane Friedman, Katie Fallow, Neil Martin, Robert Hopkins, and Robert Mark Simpson. Additional thanks to David Pozen, who also served as editor for this paper, and to Knight First Amendment Institute interns Joseph Catalanotto and Sam Matthews for editorial assistance.
© 2018, Heather Whitney.
Cite as: Heather Whitney, Search Engines, Social Media, and the Editorial Analogy, 18-01 Knight First Amend. Inst. (Feb. 27, 2018), https://knightcolumbia.org/content/search-engines-social-media-and-editorial-analogy [https://perma.cc/Q2U5-DU3X].
Ursula K. Le Guin, Speech in Acceptance of the National Book Foundation Medal for Distinguished Contribution to American Letters (Nov. 19, 2014), available at http://www.theguardian.com/books/2014/nov/20/ursula-k-le-guin-national-book-awards-speech.
Letter from Sen. John Thune to Mark Zuckerberg 2–3 (May 10, 2016), http://www.commerce.senate.gov/public/_cache/files/fe5b7b75-8d53-44c3-8a20-6b2c12b0970d/C5CF587E2778E073A80A79E2A6F73705.fb-letter.pdf [hereinafter Thune Letter].
Craig Timberg et al., Fiery Exchanges on Capitol Hill as Lawmakers Scold Facebook, Google and Twitter, Wash. Post (Nov. 1, 2017), http://www.washingtonpost.com/news/the-switch/wp/2017/11/01/fiery-exchanges-on-capitol-hill-as-lawmakers-scold-facebook-google-and-twitter (emphasis added).
See Mark Bergen et al., Google, Facebook, Twitter Scramble to Hold Washington at Bay, Bloomberg (Oct. 10, 2017), http://www.bloomberg.com/news/articles/2017-10-10/google-facebook-and-twitter-scramble-to-hold-washington-at-bay; Ben Brody & Bill Allison, Lobbying Group for Facebook and Google to Pitch Self-Regulation of Ads, Bloomberg (Oct. 23, 2017), http://www.bloomberg.com/news/articles/2017-10-24/lobby-group-for-facebook-google-to-pitch-self-regulation-of-ads; Sarah Frier & Bill Allison, Facebook Fought Rules That Could Have Exposed Fake Russian Ads, Bloomberg (Oct. 4, 2017), http://www.bloomberg.com/news/articles/2017-10-04/facebook-fought-for-years-to-avoid-political-ad-disclosure-rules.
Timberg et al., supra note 3. In addition, “[i]n a rare act of unanimity, all current FEC commissioners voted [this fall] to reopen public comments” on the Federal Election Commission’s rule exempting Facebook from its political advertising disclosure regulations. Frier & Allison, supra note 4. Since 2011, Facebook has asked for exemptions, arguing that “political ad disclosures could hinder free speech” and that the FEC “should not stand in the way of innovation.” Id.
Erin Griffith, The Other Tech Bubble, Wired (Dec. 16, 2017), http://www.wired.com/story/the-other-tech-bubble.
Growing disenchantment with Bay Area tech companies is certainly not limited to concerns about fake news and foreign propaganda. From their association with massive economic inequality and neighborhood gentrification, to purported attempts at union-busting at Tesla, to sexual harassment and other forms of discrimination both within these companies and facilitated through their platforms, to the recent uproar over Apple’s iPhone-slowdown “misunderstanding,” the last few years have done much to erode the utopian view some once held of Silicon Valley. See, e.g., Brady Dale, The Economic Justice Fight Inside Silicon Valley’s Commuter Buses, Observer (Apr. 6, 2017), http://observer.com/2017/04/teamsters-facebook-google-linkedin; Megan Rose Dickey, In Light of Discrimination Concerns, Uber and Lyft Defend Their Policies to Show Rider Names and Photos, TechCrunch (Dec. 29, 2016), http://techcrunch.com/2016/12/29/uber-lyft-respond-al-franken-about-discrimination; Zac Estrada, Tesla Hit with Labor Complaint on Behalf of Fired Factory Workers, Verge (Oct. 26, 2017), http://www.theverge.com/2017/10/26/16553554/tesla-labor-complaint-fired-factory-workers-elon-musk; Jordan McMahon, Apple Had Way Better Options Than Slowing Down Your iPhone, Wired (Dec. 21, 2017), http://www.wired.com/story/apple-iphone-battery-slow-down; Alexandra Simon-Lewis, What Is Silicon Valley’s Problem with Women?, Wired (June 12, 2017), http://www.wired.co.uk/article/tesla-sexism-lawsuit-harassment-uber.
See Hope King, Is Facebook Protected Under the First Amendment?, CNN (May 12, 2016), http://money.cnn.com/2016/05/12/media/facebook-first-amendment/index.html; Hope King & Brian Stelter, Senate Demands Answers from Facebook, CNN (May 10, 2016), http://money.cnn.com/2016/05/10/technology/facebook-news-senate/index.html; Jeff John Roberts, Like It or Not, Facebook Has the Right to Choose Your News, Fortune (May 10, 2016), http://fortune.com/2016/05/10/facebook-first-amendment.
See, e.g., Julia Angwin, Making Algorithms Accountable, ProPublica (Aug. 1, 2016), http://www.propublica.org/article/making-algorithms-accountable (reviewing mounting concerns over algorithmic secrecy and “bias”); Katharine T. Bartlett & Mitu Gulati, Discrimination by Customers, 102 Iowa L. Rev. 223 (2016) (discussing regulation of online discrimination of customers); Nancy Leong & Aaron Belzer, The New Public Accommodations: Race Discrimination in the Platform Economy, 105 Geo. L.J. 1271 (2017) (arguing that existing public accommodation laws should evolve to regulate businesses in the online platform economy); cf. Heather M. Whitney, The Regulation of Discrimination by Individuals in the Market, 2017 U. Chi. Legal F. 537 (2017) (laying the groundwork for thinking through whether, descriptively and normatively, consumers have a right to discriminate online and online companies a right to help them); Heather M. Whitney, Markets, Rights, and Discrimination by Customers, 102 Iowa L. Rev. Online 346 (2017) (responding to Bartlett and Gulati).
See generally Frederick Schauer, Free Speech: A Philosophical Inquiry (1982) (describing the question of which classes of communicative acts warrant free speech protection as the question of coverage).
Packingham v. North Carolina, 137 S. Ct. 1730 (2017).
Id. at 1738 (Alito, J., concurring in judgment).
Id. at 1735.
Id. at 1737.
Id. at 1735.
10 F. Supp. 3d 433 (S.D.N.Y. 2014).
Id. at 435.
Id. at 436.
Id. at 436–43 (citing Tornillo, 418 U.S. 241 (1974)).
Tornillo, 418 U.S. at 258.
Baidu, 10 F. Supp. 3d at 437–42 (citing Hurley, 515 U.S. 557 (1995)).
For an interesting criticism of the Hurley Court’s analysis, see John Gardner, Case Note, Hurley and South Boston Allied War Veterans Council v. Irish American Gay, Lesbian, and Bi-Sexual Group of Boston, 115 S. Ct. 2338, 1 Int’l J. Discrimination & L. 283 (1996).
Baidu, 10 F. Supp. 3d at 438.
No. 2:14-cv-646-FtM-PAM-CM, 2017 WL 2210029 (M.D. Fla. Feb. 8, 2017).
Note the oddity here. Search engine companies don’t like SEO firms because they don’t like how those firms boost their clients’ sites in the search engine’s own rankings. This fact—that search engine companies cannot fully control their own results—makes it harder to then call those rankings the company’s speech.
See Alexia Tsotsis, Google’s Algorithmic Cat and Mouse Game [Infographic], TechCrunch (Mar. 23, 2011), http://techcrunch.com/2011/03/23/googles-algorithmic-cat-and-mouse-game-infographic (illustrating this game).
e-ventures Worldwide, LLC v. Google, Inc., 188 F. Supp. 3d 1265, 1271 (M.D. Fla. 2016).
Id.
Id. at 1270–71. You might notice that this latter argument is similar in spirit to the one made by Senate Commerce Committee Chairman John Thune when writing Zuckerberg about Facebook’s allegedly liberal-biased Trending Topics. See Thune Letter, supra note 2, at 1–2 (“If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular political viewpoints, Facebook’s assertion that it maintains a ‘platform for people and perspectives from across the political spectrum’ misleads the public.” (quoting Josh Constine & Sarah Buhr, Facebook Now Directly Denies Report of Biased Trends, Says There’s No Evidence, TechCrunch (May 9, 2016), http://techcrunch.com/2016/05/09/facebook-workers)). A similar move has been made against Yelp. Companies have alleged that Yelp deleted their positive reviews, leaving only the negatives, and then blackmailed them into paying for ads. Given that Yelp portrays itself as a neutral platform that accurately reflects the reviews of users (indeed, that’s why people use it), this would also seem like a clear case of false advertising if the allegations are true. But sometimes courts, misapplying Section 230 of the Communications Decency Act, have failed to see this. See, e.g., Levitt v. Yelp! Inc., 2011 WL 5079526 (N.D. Cal. Oct. 26, 2011). Thankfully, however, not all courts have made this mistake. See Demetriades v. Yelp, Inc., 175 Cal. Rptr. 3d 131 (Cal. Ct. App. 2014). For more on this problem, compare Rebecca Tushnet, A Cry for Yelp, Rebecca Tushnet’s 43(B)log (Nov. 1, 2011), http://tushnet.blogspot.com/2011/11/cry-for-yelp.html (agreeing with me), with Eric Goldman, Yelp Gets Complete Win in Advertiser “Extortion” Case—Levitt v. Yelp, Tech. & Marketing L. Blog (Oct. 26, 2011), http://blog.ericgoldman.org/archives/2011/10/yelp_gets_compl.htm (disagreeing).
e-ventures Worldwide, 188 F. Supp. 3d at 1273–74.
e-ventures Worldwide, LLC v. Google, Inc., 2:14-cv-00646-PAM-CM (M.D. Fla. Feb. 8, 2017), 2017 WL 2210029, at *4.
Id. (“A search engine is akin to a publisher, whose judgments about what to publish and what not to publish are absolutely protected by the First Amendment.” (citing Miami Herald Publ’g Co. v. Tornillo, 418 U.S. 241, 258 (1974))).
Id.
The California Attorney General’s Office has also set out arguments for why Google search results can both be covered by the First Amendment and still be open to antitrust scrutiny. See Paula Lauren Gibson, Does the First Amendment Immunize Google’s Search Engine Search Results from Government Antitrust Scrutiny?, 23 Competition: J. Antitrust & Unfair Comp. L. Sec. St. B. Cal. 125, 136 (2014); see also supra note 29.
Packingham v. North Carolina, 137 S. Ct. 1730 (2017).
Klint Finley, Google’s Big EU Fine Isn’t Just About the Money, Wired (June 27, 2017), http://www.wired.com/story/google-big-eu-fine.
Josie Cox, Google Hit with Record EU Fine over “Unfair” Shopping Searches, Independent (June 27, 2017), http://www.independent.co.uk/news/business/news/google-eu-fine-latest-competition-shopping-searches-prepare-online-european-commission-results-a7809886.html.
Mark Bergen, Google Says Local Search Result That Buried Rivals Yelp, TripAdvisor Is Just a Bug, Recode (Nov. 24, 2015), http://www.recode.net/2015/11/24/11620920/google-says-local-search-result-that-buried-rivals-yelp-tripadvisor.
Thanks to Jane Friedman for help with this example.
Rumsfeld v. Forum for Acad. & Institutional Rights, Inc., 547 U.S. 47, 58 (2006).
See Debra Cassens Weiss, Law Prof Volokh Argues Google Has a Free Speech Right to Determine Search Results, ABA J. (May 14, 2012), http://www.abajournal.com/news/article/law_prof_volokh_argues_google_has_a_free_speech_right_to_determine_search_r; Thomas Catan & Amir Efrati, Feds to Launch Probe of Google, Wall St. J. (June 24, 2011), http://www.wsj.com/articles/SB10001424052702303339904576403603764717680.
Eugene Volokh & Donald M. Falk, First Amendment Protection for Search Engine Search Results (2012), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2055364.
Id. at 21.
Id. at 27.
Id. at 11.
Id. at 14.
Id. at 27.
See J.L. Austin, How to Do Things with Words 4–11 (J.O. Urmson & Marina Sbisà eds., 2d ed. 1975) (preliminarily developing the notion of “performatives”—utterances that are the performance of an action, such as promising).
See Paco Underhill, Why We Buy: The Science of Shopping 85–86 (2009).
See id. at 80.
See id. at 205, 209.
A further complication is that, as any law student can tell you, not even all of what we would colloquially call “speech” is necessarily covered under the First Amendment—for instance, lying under oath, yelling “fire” in a theater, or threatening or defrauding someone.
Cf. Stanley Fish, There’s No Such Thing as Free Speech (1994).
Volokh & Falk, supra note 42, at 11.
Id. at 14.
Id. at 27.
See Underhill, supra note 49, at 77–88.
Tristan Harris, How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist, Thrive Global (May 18, 2016), http://journal.thriveglobal.com/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3; see also Paul Lewis, ‘Our Minds Can Be Hijacked’: The Tech Insiders Who Fear a Smartphone Dystopia, Guardian (Oct. 6, 2017), http://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia.
564 U.S. 786 (2011).
547 U.S. 47 (2006).
Id. at 51.
Id.
Id. at 64.
Id. at 62.
I thank Eric Goldman for making this argument to me.
See, e.g., Pierson v. Post, 3 Cai. 175 (N.Y. 1805).
For further discussion, see Richard Posner, How Judges Think 186–87 (2008); Grant Lamond, Analogical Reasoning in the Common Law, 34 Oxford J. Legal Stud. 567, 582–84 (2014).
Oren Bracha & Frank Pasquale, Federal Search Commission? Access, Fairness and Accountability in the Law of Search, 93 Cornell L. Rev. 1149, 1197–1200 (2008).
See Volokh & Falk, supra note 42, at 11. While I will not rehash it here, we can contest Volokh and Falk’s argument that the presence of judgments in constructing algorithms provides a reason why those algorithms’ outputs are analogous to editorial speech. Judgments are ubiquitous; First Amendment coverage is not.
And I am reluctant to do so. The method seems to distinguish communicative acts from acts that communicate information, and I worry that such a divide is as unstable and illusory as philosophers of language have shown the communication-conduct distinction to be. See Austin, supra note 48, at 4–11. Thanks to Rob Hopkins for raising this issue.
See, e.g., Stuart Minor Benjamin, Transmitting, Editing, and Communicating: Determining What “the Freedom of Speech” Encompasses, 60 Duke L.J. 1673, 1704–05 (2011) (exploring, but ultimately rejecting, the possibility that network-design choices that the network operator makes to optimize certain types of communications might be understood as substantive speech protected by the First Amendment).
See, e.g., Tom Warren, Google’s Top Search Results Promote Offensive Content, Again, Verge (Nov. 22, 2017), http://www.theverge.com/2017/11/22/16689534/google-search-results-offensive-material (describing Google’s decision to remove offensive webpages and images that ranked highly in their search results for certain terms).
See, e.g., Jordan Crook, Fake Times, TechCrunch (Mar. 19, 2017), http://techcrunch.com/2017/03/19/facebook-will-never-take-responsibility-for-fake-news (“As part of a nationwide tour, Zuck[erberg] expressed that Facebook doesn’t want fake news on the platform.”).
See, e.g., Marty Swant, Twitter Is Cracking Down on Trolls and Offensive Tweets with These New Tools, Adweek (Feb. 7, 2017), http://www.adweek.com/digital/twitter-is-cracking-down-on-trolls-and-offensive-tweets-with-these-new-tools (describing steps Twitter has taken to identify and minimize content on the platform that is “potentially abusive and low-quality”).
Cf. Rumsfeld v. Forum for Acad. & Institutional Rights, Inc., 547 U.S. 47, 65 (2006) (“Nothing about recruiting suggests that law schools agree with any speech by recruiters, and nothing in the [statute] restricts what the law schools may say about the military’s policies. We have held that high school students can appreciate the difference between speech a school sponsors and speech the school permits because legally required to do so, pursuant to an equal access policy.”).
See Samuel Gibbs, Google Alters Search Autocomplete to Remove “Are Jews Evil” Suggestion, Guardian (Dec. 5, 2016), http://www.theguardian.com/technology/2016/dec/05/google-alters-search-autocomplete-remove-are-jews-evil-suggestion.
David Meyer, Google Loses Autocomplete Defamation Case in Italy, ZDNet (Apr. 5, 2011), http://www.zdnet.com/article/google-loses-autocomplete-defamation-case-in-italy.
Id.
Google, Search Using Autocomplete, Google Search Help, http://support.google.com/websearch/answer/106230?co=GENIE.Platform%3DAndroid&hl=en (last visited Jan. 11, 2018).
Thanks to Daniel Viehoff for encouraging me to emphasize this point.
Stuart Minor Benjamin, Algorithms and Speech, 161 U. Pa. L. Rev. 1445, 1470 (2013) (quoting An Explanation of Our Search Results, Google, http://www.google.com/explanation.html (last visited Apr. 10, 2013)).
Joe Pinsker, Where Were Netflix and Google in the Net-Neutrality Fight?, Atlantic (Dec. 20, 2017), http://www.theatlantic.com/business/archive/2017/12/netflix-google-net-neutrality/548768. As Evgeny Morozov has pointed out, Google has also relied on a mirror metaphor that paints Google as simply and neutrally reflecting back the state of the world, where it is not a speaker and its outputs are not its speech. See Evgeny Morozov, To Save Everything, Click Here 144–46 (2013).
Thanks to Evelyn Douek for most helpfully sharing her summary and transcript excerpts from those hearings. See Evelyn Douek, Summary of Congressional Tech Hearings 16 (2017), http://lawfare.s3-us-west-2.amazonaws.com/staging/2018/DOUEK%20Summary%20of%20Congressional%20Tech%20Hearings.pdf (last visited Jan. 19, 2018).
Josh Constine, Facebook Launches Trending Topics on Web with Descriptions of Why Each Is Popular, TechCrunch (Jan. 16, 2014), http://techcrunch.com/2014/01/16/facebook-trending.
See How Much Does It Cost to Advertise on Twitter?, ThriveHive (Feb. 21, 2017), http://thrivehive.com/how-much-does-it-cost-to-advertise-on-twitter.
Garett Sloane, Facebook Lends Trending Hand to Brands, Adweek (Jan. 17, 2014), http://www.adweek.com/digital/facebook-lends-trending-hand-brands-155060. Accordingly, Facebook contacted advertising agencies to explain how brands could feature in Trending Topics and to explain the incentives for advertising on Facebook’s platform more generally. See id. This raises an important question: How ought the commerciality of these sites and the profit-driven decisions of which content to put before users alter our First Amendment analysis? Volokh and Falk recognized the potential relevance of the commercial divide and limited their analysis to Google’s organic results. See Volokh & Falk, supra note 42, at 5–6 (“We focus in this submission on Google search results for which no payment has been made to Google . . . .”). Given the economic motivations behind organic rankings, it is not obvious to me that they should be considered noncommercial. Then again, given how the current Supreme Court has chipped away at the commercial-noncommercial distinction, it is hard to say how much commerciality will even matter moving forward.
Charlie Warzel, How Ferguson Exposed Facebook’s Breaking News Problem, BuzzFeed (Aug. 19, 2014), http://www.buzzfeed.com/charliewarzel/in-ferguson-facebook-cant-deliver-on-its-promise-to-deliver; see also Ravi Somaiya, Facebook Takes Steps Against “Click Bait” Articles, N.Y. Times (Aug. 25, 2014), http://www.nytimes.com/2014/08/26/business/media/facebook-takes-steps-against-click-bait-articles.html (describing updates to Facebook’s algorithm designed to increase the quality of articles that users see).
See Warzel, supra note 87.
Id.
Gail Sullivan, How Facebook and Twitter Control What You See About Ferguson, Wash. Post (Aug. 19, 2014), http://www.washingtonpost.com/news/morning-mix/wp/2014/08/19/how-facebook-and-twitter-control-what-you-see-about-ferguson. Again we see the tight connection between these companies’ “organic” algorithmic decisions and commercial objectives. Cf. supra note 86.
Martin Beck, Timely Change? Facebook Adjusts News Feed Algorithm to Surface More Trending Stories, Marketing Land (Sept. 18, 2014), http://marketingland.com/facebook-adjusts-news-feed-algorithm-surface-timely-stories-100630. By August, Facebook had already announced tweaks to its algorithm meant to reduce clickbait articles, by considering how long users spent reading an article and whether they shared it (though, as we eventually learned, these metrics did not weed out fake news). See Somaiya, supra note 87.
David Cohen, Mark Zuckerberg Q&A: Dislike Button, Ferguson, Graph Search, News Feed Study Controversy, Adweek (Dec. 12, 2014), http://www.adweek.com/digital/mark-zuckerberg-qa-121114.
Amy Mitchell et al., Pew Research Ctr., Social, Search & Direct: Pathways to Digital News 23–24 (2014), http://assets.pewresearch.org/wp-content/uploads/sites/13/2014/03/SocialSearchandDirect_PathwaystoDigitalNews.pdf.
Elisa Shearer & Jeffrey Gottfried, Pew Research Ctr., News Use Across Social Media Platforms 2017 (2017), http://www.journalism.org/2017/09/07/news-use-across-social-media-platforms-2017.
Somaiya, supra note 87.
Id. A common reply is that conduits can be “neutral” but platforms and their algorithms cannot. See, e.g., Eric Goldman, Search Engine Bias and the Demise of Search Engine Utopianism, 8 Yale J.L. & Tech. 188 (2006) (making an argument along these lines). I take this to be a false dichotomy. There aren’t neutral conduits on one side and non-neutral platforms on the other. Something being “neutral” just means neutral against some baseline. But the threshold choice of that baseline is itself a non-neutral judgment. Many say, for example, that ISPs should be “neutral” with respect to content; this is a cornerstone of the net neutrality movement. But what is “neutral” here? Making all content creators pay the same amount? Treating similar kinds of content similarly? Our definition of “neutral” does not emerge from the sea fully grown—we decide it.
Updates to Trending, Facebook Newsroom (Dec. 10, 2014), http://newsroom.fb.com/news/2014/12/updates-to-trending.
Kurt Wagner, How Facebook Decides What’s Trending, Recode (Aug. 21, 2015), http://www.recode.net/2015/8/21/11617880/how-facebook-decides-whats-trending.
Id.
Michael Nunez, Want to Know What Facebook Really Thinks of Journalists? Here’s What Happened When It Hired Some, Gizmodo (May 3, 2016), http://gizmodo.com/want-to-know-what-facebook-really-thinks-of-journalists-1773916117.
Nunez, supra note 100.
Michael Nunez, Former Facebook Workers: We Routinely Suppressed Conservative News, Gizmodo (May 9, 2016), http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-conser-1775461006.
Id. Nunez reported that the contractors, in addition to having a liberal political bent, had all previously worked in the news industry and mostly came from Ivy League and elite East Coast colleges. See Nunez, supra note 100.
Nunez, supra note 100.
Nunez, supra note 102.
Nunez, supra note 100.
Tony Romm & Hadas Gold, Inside Facebook’s GOP Charm Offensive, Politico (May 16, 2016), http://www.politico.com/story/2016/05/facebook-conservatives-zuckerberg-bias-223244.
Team GOP, #MakeThisTrend: Facebook Must Answer for Conservative Censorship, GOP.com: Blog (May 9, 2016), http://gop.com/makethistrend-facebook-must-answer-for-liberal-bias.
Robinson Meyer, Facebook Doesn’t Have to Be Fair, Atlantic (May 13, 2016), http://www.theatlantic.com/technology/archive/2016/05/facebook-isnt-fair/482610.
Thune Letter, supra note 2, at 1–2 (internal quotation marks omitted).
Id. at 2–3.
See, e.g., Nick Corasaniti & Mike Isaac, Senator Demands Answers from Facebook on Claims of ‘Trending’ List Bias, N.Y. Times (May 10, 2016), http://www.nytimes.com/2016/05/11/technology/facebook-thune-conservative.html; Thomas C. Rubin, Facebook’s Trending Topics Are None of the Senate’s Business, Slate (May 23, 2016), http://www.slate.com/articles/news_and_politics/jurisprudence/2016/05/facebook_s_trending_topics_are_none_of_the_senate_s_business.html.
Colin Stretch, Response to Chairman John Thune’s Letter on Trending Topics, Facebook Newsroom (May 23, 2016), http://newsroom.fb.com/news/2016/05/response-to-chairman-john-thunes-letter-on-trending-topics.
Justin Osofsky, Information About Trending Topics, Facebook Newsroom (May 12, 2016), http://newsroom.fb.com/news/2016/05/information-about-trending-topics. A link to the guidelines can be found in this article.
Romm & Gold, supra note 107.
Letter from Colin Stretch, Facebook Gen. Counsel, to Sen. Thune 1–2 (May 23, 2016), http://www.commerce.senate.gov/public/_cache/files/93a14e98-2443-4d27-bf04-1fc59b8cf2b4/22796A1389F52BE16D225F9A03FB53F8.facebook-letter.pdf.
Id. at 11.
Abby Ohlheiser, Three Days After Removing Human Editors, Facebook Is Already Trending Fake News, Wash. Post (Aug. 29, 2016), http://www.washingtonpost.com/news/the-intersect/wp/2016/08/29/a-fake-headline-about-megyn-kelly-was-trending-on-facebook. For further explanation of these changes, see Search FYI: An Update on Trending, Facebook Newsroom (Aug. 26, 2016), http://newsroom.fb.com/news/2016/08/search-fyi-an-update-to-trending; Sam Thielman, Facebook Fires Trending Team, and Algorithm Without Humans Goes Crazy, Guardian (Aug. 29, 2016), http://www.theguardian.com/technology/2016/aug/29/facebook-fires-trending-topics-team-algorithm.
Trending Review Guidelines, Facebook Newsroom (Aug. 26, 2016), http://fbnewsroomus.files.wordpress.com/2016/08/trending-review-guidelines.pdf.
For example, if “NBA Finals” is a live Trending Topic, the team would not accept “#NBAFinals” as a new topic. See id.
For example, while “#lunch” peaks in usage across the site during lunchtime every day and around the world, curators would not allow #lunch to be a trending topic because it is not tied to a news event. See id.
For instance, when Congressman John Lewis said that he didn’t believe Trump was a legitimate president and Trump tweeted against Lewis in response, the Trending Topics algorithm might pick up a spike in references to Trump and Lewis together, and the two might be clustered together as if they were part of one conversation. The reviewer’s job would be to ensure that the algorithmically suggested clustering is correct. See id.
Rumsfeld v. Forum for Acad. & Institutional Rights, Inc., 547 U.S. 47, 65 (2006).
Will Oremus, Trending Bad: How Facebook’s Foray into Automated News Went from Messy to Disastrous, Slate (Aug. 30, 2016), http://www.slate.com/articles/technology/future_tense/2016/08/how_facebook_s_trending_news_feature_went_from_messy_to_disastrous.html.
Thielman, supra note 118.
See Ohlheiser, supra note 118.
Caitlin Dewey, Facebook Has Repeatedly Trended Fake News Since Firing Its Human Editors, Wash. Post (Oct. 12, 2016), http://www.washingtonpost.com/news/the-intersect/wp/2016/10/12/facebook-has-repeatedly-trended-fake-news-since-firing-its-human-editors.
Giulia Segreti, Facebook CEO Says Group Will Not Become a Media Company, Reuters (Aug. 29, 2016), http://www.reuters.com/article/us-facebook-zuckerberg/facebook-ceo-says-group-will-not-become-a-media-company-idUSKCN1141WN.
Michael Barthel et al., Many Americans Believe Fake News Is Sowing Confusion, Pew Research Ctr. (Dec. 15, 2016), http://www.journalism.org/2016/12/15/many-americans-believe-fake-news-is-sowing-confusion.
Mark Zuckerberg, Facebook (Nov. 19, 2016), http://www.facebook.com/zuck/posts/10103269806149061.
See Deepa Seetharaman, Facebook to Rank News Sources by Quality to Battle Misinformation, Wall St. J. (Jan. 19, 2018), http://www.wsj.com/articles/facebook-to-rank-news-sources-by-quality-to-battle-misinformation-1516394184.
Douek, supra note 83, at 18.
Id.
Id. at 19.
Id.
The New York Times recently used similar language when describing these tech platforms. See Farhad Manjoo, How 2017 Became a Turning Point for Tech Giants, N.Y. Times: State of the Art (Dec. 13, 2017), http://www.nytimes.com/2017/12/13/technology/tech-companies-social-responsibility.html (“Think of these platforms as the roads, railroads and waterways of the information economy—an essentially inescapable part of life for any business or regular person who doesn’t live in a secluded cabin in the woods.”).
447 U.S. 74 (1980).
See Brief of Appellants, PruneYard Shopping Ctr. v. Robins, 447 U.S. 74 (1980) (No. 79-289), 1979 WL 199940, at *13.
This same move was made by the ACLU in its amicus brief. See Brief of the American Civil Liberties Union of Northern California et al. as Amici Curiae, PruneYard Shopping Ctr. v. Robins, 447 U.S. 74 (1980) (No. 79-289), 1980 WL 339574, at *39–*41.
447 U.S. at 88.
See id.
Rumsfeld v. Forum for Acad. & Institutional Rights, Inc., 547 U.S. 47 (2006).
Id. at 64.
As was the case when Korryn Gaines, a black woman, was shot and killed, and her five-year-old son shot twice, during a standoff at her home with police officers. See Hanna Kozlowska, Facebook Is Giving the US Government More and More Data, Quartz (Dec. 19, 2017), http://qz.com/1160719/facebooks-transparency-report-the-company-is-giving-the-us-government-more-and-more-data; Baynard Woods, Facebook Deactivated Korryn Gaines’ Account During Standoff, Police Say, Guardian (Aug. 3, 2016), http://www.theguardian.com/us-news/2016/aug/03/korryn-gaines-facebook-account-baltimore-police; see also Mike Isaac & Sydney Ember, Live Footage of Shootings Forces Facebook to Confront New Role, N.Y. Times (July 8, 2016), http://www.nytimes.com/2016/07/09/technology/facebook-dallas-live-video-breaking-news.html (discussing the Facebook live stream of the killing of Philando Castile by police and how “it was taken down by Facebook for a few hours without explanation. Facebook blamed a technical glitch for the video’s removal, but declined to speak further of the incident”).
See Videos Removed or Blocked Due to YouTube’s Contractual Obligations, YouTube Help, http://support.google.com/youtube/answer/3045545?hl=en (last visited Feb. 2, 2018); Mike Masnick, YouTube Won’t Put Your Video Back Up, Even If It’s Fair Use, If It Contains Music from Universal Music, TechDirt (Apr. 5, 2013), http://www.techdirt.com/articles/20130405/01191322589/youtube-wont-put-your-video-back-up-even-if-its-fair-use-if-it-contains-content-universal-music.shtml.
See generally Adam Alter, Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked (2017); Julian Morgans, Your Addiction to Social Media Is No Accident, Vice (May 19, 2017), http://www.vice.com/en_us/article/vv5jkb/the-secret-ways-social-media-is-built-for-addiction.
See Renee Diresta & Tristan Harris, Why Facebook and Twitter Can’t Be Trusted to Police Themselves, Politico Mag. (Nov. 1, 2017), http://www.politico.com/magazine/story/2017/11/01/why-facebook-and-twitter-cant-be-trusted-to-police-themselves-215775; Kurt Wagner, Facebook Is Making a Major Change to the News Feed that Will Show You More Content from Friends and Family and Less from Publishers, Recode (Jan. 11, 2018), http://www.recode.net/2018/1/11/16881160/facebook-mark-zuckerberg-news-feed-algorithm-content-video-friends-family-media-publishers.
Many have suggested something along these lines. See, e.g., Jack Balkin, Information Fiduciaries and the First Amendment, 49 U.C. Davis L. Rev. 1183 (2016); James Grimmelmann, Speech Engines, 98 Minn. L. Rev. 868 (2014).
Tim Wu, Knight First Amendment Inst., Is the First Amendment Obsolete? 23 (2017), https://knightcolumbia.org/sites/default/files/content/Emerging%20Threats%20Tim%20Wu%20Is%20the%20First%20Amendment%20Obsolete.pdf.
395 U.S. 367 (1969).
Packingham v. North Carolina, 137 S. Ct. 1730, 1735 (2017).
Douek, supra note 83, at 14.
Id.
See also Franklin Foer, Facebook Finally Blinks, Atlantic (Jan. 11, 2018), http://www.theatlantic.com/technology/archive/2018/01/facebook/550376 (“The company finally acted as if it might assume the responsibilities implied by its power . . . .”).
See Nitaska Tiku, Investors Join Calls for Facebook, Twitter to Take More Responsibility, Wired (Jan. 11, 2018), http://www.wired.com/story/investors-join-calls-for-facebook-twitter-to-take-more-responsibility.
Mark Zuckerberg, Facebook (Jan. 4, 2018), http://www.facebook.com/zuck/posts/10104380170714571 (“The world feels anxious and divided, and Facebook has a lot of work to do—whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent.”).
See Ian Sherr, Zuckerberg Says He’s Committed to Fixing Facebook This Year, CNET (Jan. 4, 2018), http://www.cnet.com/news/zuckerberg-says-hes-committed-to-fixing-facebook-hate-harassment-russia (describing Zuckerberg’s apology “for the ways my work was used to divide people rather than bring us together”).
Foer, supra note 154.
See, e.g., Zhang v. Baidu.com, Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014). First Amendment arguments leveled against online platforms go back at least to Cyber Promotions, Inc. v. American Online, Inc., 948 F. Supp. 436 (E.D. Pa. 1996). As for the latter position, while courts have found several government-run websites not to be open public forums, those same courts have also suggested that the outcomes would be different if the sites were more dynamic and open to the public. See, e.g., Sutliffe v. Epping Sch. Dist., 584 F.3d 314, 334–35 (1st Cir. 2009); Page v. Lexington Cty. Sch. Dist. One, 531 F.3d 275, 284 (4th Cir. 2008); Putnam Pit, Inc. v. City of Cookeville, Tennessee, 221 F.3d 834, 842 (6th Cir. 2000). Prior to Packingham, the district court in Davison v. Loudoun County Board of Supervisors, 1:16-cv-932 (JCC/IDD), 2017 WL 1929406 (E.D. Va. May 10, 2017), a case involving comments on a county chair’s Facebook page, found that the question could not be answered at summary judgment.
Joseph Bernstein & Charlie Warzel, Far-Right Activist Charles Johnson Has Sued Twitter over His Suspension, BuzzFeed (Jan. 8, 2018), http://www.buzzfeed.com/josephbernstein/far-right-activist-charles-johnson-has-sued-twitter-over.
See, e.g., id. (complaint embedded in article); Caitlin Dewey, Charles Johnson, One of the Internet’s Most Infamous Trolls, Has Finally Been Banned from Twitter, Wash. Post (May 26, 2015), http://www.washingtonpost.com/news/the-intersect/wp/2015/05/26/charles-johnson-one-of-the-internets-most-infamous-trolls-has-finally-been-banned-from-twitter; Elizabeth Dwoskin & Craig Timberg, Google, Twitter Face New Lawsuits Alleging Discrimination Against Conservative Voices, Wash. Post (Jan. 8, 2018), http://www.washingtonpost.com/news/the-switch/wp/2018/01/08/google-faces-a-lawsuit-over-discriminating-against-white-men-and-conservatives; Ian Lovett & Jack Nicas, PragerU Sues YouTube in Free-Speech Case, Wall St. J. (Oct. 23, 2017), http://www.wsj.com/articles/prageru-sues-youtube-in-free-speech-case-1508811856; Heather Whitney, Does the Packingham Ruling Presage Greater Government Control over Search Results? Or Less?, Tech. & Marketing L. Blog (June 22, 2017), http://blog.ericgoldman.org/archives/2017/06/does-the-packingham-ruling-presage-greater-government-control-over-search-results-or-less-guest-blog-post.htm. Litigants have also used Packingham to bolster arguments that private commercial websites, and not only physical structures, can be “public accommodations” for purposes of Title III of the Americans with Disabilities Act. See, e.g., Andrews v. Blick Art Materials, LLC, 268 F. Supp. 3d 381, 393 (E.D.N.Y. 2017); Del-Orden v. Bonobos, Inc., No. 17 CIV. 2744 (PAE) (S.D.N.Y. Dec. 20, 2017), 2017 WL 6547902, at *10.
See, e.g., hiQ Labs, Inc. v. LinkedIn Corp., No. 17-CV-03301-EMC, 2017 WL 3473663, at *7 (N.D. Cal. Aug. 14, 2017) (“The Court’s analogy of the Internet in general, and social networking sites in particular, to the ‘modern public square,’ embraces the social norm that assumes the openness and accessibility of that forum to all comers.” (quoting Packingham v. North Carolina, 137 S. Ct. 1730, 1737 (2017))); see also Venkat Balasubramani, LinkedIn Enjoined from Blocking Scraper—hiQ v. LinkedIn, Tech. & Marketing L. Blog (Aug. 15, 2017), http://blog.ericgoldman.org/archives/2017/08/linkedin-enjoined-from-blocking-scraper-hiq-v-linkedin.htm (response from Eric Goldman noting that “[c]ollectively, it looks like some courts are reading Packingham to enshrine a general purpose right of users to get content without restriction on the Internet—which would be an interesting and potentially far-reaching implication of the ruling”).
See, e.g., Davison v. Plowman, 247 F. Supp. 3d 767, 776 (E.D. Va. 2017), appeal filed, No. 17-1771 (4th Cir. June 27, 2017) (“The Court has already ruled that the Loudoun County Social Media Comments Policy—both as originally written and as amended—serves to create a limited public forum as applied to the Loudoun County Commonwealth’s Attorney Facebook page.”); see also supra note 159 (citing additional cases).
Complaint for Declaratory & Injunctive Relief at 16, Knight First Amendment Inst. at Columbia Univ. v. Trump, No. 1:17-cv-05205 (S.D.N.Y. July 11, 2017); see also Davison v. Loudoun Cty. Bd. of Supervisors, 267 F. Supp. 3d 703, 715–19 (E.D. Va. 2017) (citing Packingham and holding, without deciding on the nature of the forum, that the banning of a resident from a local politician’s Facebook page for twelve hours violated the First Amendment).
326 U.S. 501 (1946).
For an example of this tactic, see Mary Harris Jones, Autobiography of Mother Jones 156 (Mary Field Parton ed., 1996) (1925) (describing how the author had to drive through a creek bed to reach miners “as that was the only public road and I could be arrested for trespassing if I took any other”).
326 U.S. at 502.
391 U.S. 308, 319–20 (1968).
Id. at 318.
Id. at 319.
424 U.S. 507 (1976).
407 U.S. 551 (1972).
424 U.S. at 518.
Id. at 520.
Packingham v. North Carolina, 137 S. Ct. 1730, 1738 (2017) (Alito, J., concurring in the judgment).
Id. at 1735.
Id. at 1737.
Id. at 1732.
See, e.g., Venetian Casino Resort LLC v. Local Joint Exec. Bd., 257 F.3d 937, 943 (9th Cir. 2001); Freedom from Religion Found., Inc. v. City of Marshfield, 203 F.3d 487, 494 (7th Cir. 2000).
See Douek, supra note 83, at 16–19.
See Tai Liu et al., The Barriers to Overthrowing Internet Feudalism, in Sixteenth ACM Workshop on Hot Topics in Networks (2017), http://cs.nyu.edu/~jchen/publications/hotnets17-liu.pdf. The feudalism analogy has been made for many years. See, e.g., Ben Johnson, How Tech Giants Are Like Feudal Lords and Users Are Like Serfs, Marketplace (Nov. 27, 2012), http://www.marketplace.org/2012/11/27/tech/how-tech-giants-are-feudal-lords-and-users-are-serfs; Bruce Schneier, The Battle for Power on the Internet, Atlantic (Oct. 24, 2013), http://www.theatlantic.com/technology/archive/2013/10/the-battle-for-power-on-the-internet/280824; Bruce Schneier, When It Comes to Security, We’re Back to Feudalism, Wired (Nov. 26, 2012), http://www.wired.com/2012/11/feudal-security.
Fitzgerald v. Mountain Laurel Racing, Inc., 607 F.2d 589, 591 (3d Cir. 1979).
Christopher W. Schmidt, On Doctrinal Confusion: The Case of the State Action Doctrine, 2016 BYU L. Rev. 575, 589.
334 U.S. 1 (1948).
Burton v. Wilmington Parking Auth., 365 U.S. 715, 725 (1961).
382 U.S. 296, 299 (1966) (holding segregation not permitted in a private park maintained by the city and granted a tax exemption); see also Lugar v. Edmondson Oil Co., 457 U.S. 922 (1982) (finding state action where the challenged conduct could fairly be attributed to the state); Jackson v. Metro. Edison Co., 419 U.S. 345, 351 (1974) (holding that state action exists only when “there is a sufficiently close nexus between the State and the challenged actions of the [private] entity” that the state can be deemed responsible for the actions).
David Kirkpatrick, The Facebook Effect 254 (2010).
Derek E. Bambauer, Against Jawboning, 100 Minn. L. Rev. 51 (2015). The term jawboning “became fashionable in the 1960s” and signifies “an effort by the government, usually the president, to persuade companies—through intimidation, bullying or shaming—to do what the president asked in the ‘national interest’ even if it wasn’t in the firms’ immediate self-interest.” Robert J. Samuelson, Trump’s Job “Jawboning” May Be Good Politics—But It’s Not Good Economics, Wash. Post (Jan. 8, 2017), http://www.washingtonpost.com/opinions/trumps-job-jawboning-may-be-good-politics--but-its-not-good-economics/2017/01/08/a1496dc2-d44b-11e6-a783-cd3fa950f2fd_story.html.
I do not mean to suggest that these companies always comply with government requests. Apple’s refusal to unlock the iPhone of Syed Rizwan Farook, an American-born terrorist and one of the two people responsible for the San Bernardino attack, even in the face of a court order, is just one example of company resistance. See Mark Berman & Ellen Nakashima, FBI Director: Victory in the Fight with Apple Could Set a Precedent, Lead to More Requests, Wash. Post (Mar. 1, 2016), http://www.washingtonpost.com/news/post-nation/wp/2016/03/01/fbi-apple-bringing-fight-over-encryption-to-capitol-hill. These companies have reasons both to comply and not to comply in such situations, and users have few ways of knowing what happened one way or the other. See Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World 85 (2015). To the extent that we worry about this, an expansive approach to state action might provide companies an attractive shield against informal government requests.
Nelson Goodman, Seven Strictures on Similarity, in Problems and Projects 437, 437–46 (1972).
Lamond, supra note 67, at 567–68; see also Larry Alexander & Emily Sherwin, Demystifying Legal Reasoning (2008); Posner, supra note 67, at 180–92; Larry Alexander, Bad Beginnings, 145 U. Pa. L. Rev. 57 (1996); Emily Sherwin, A Defense of Analogical Reasoning in Law, 66 U. Chi. L. Rev. 1179 (1999); Cass Sunstein, On Analogical Reasoning, 106 Harv. L. Rev. 741 (1993).
See generally Heather M. Whitney & Robert Mark Simpson, Search Engines, Free Speech Coverage, and the Limits of Analogical Reasoning, in Free Speech in the Digital Age (Susan Brison & Kath Gelber eds., forthcoming 2018).
See, e.g., Thomas Wheatley, Why Social Media Is Not a Public Forum, Wash. Post (Aug. 4, 2017), http://www.washingtonpost.com/blogs/all-opinions-are-local/wp/2017/08/04/why-social-media-is-not-a-public-forum.
Users’ expectations on this score, it bears mention, are compatible both with objections to specific moderation decisions and with calls for these companies’ content moderation policies to be less opaque.
For more on this and confirmation that the Cylons were right (this has all happened before and all of this will happen again), see Blake J. Harris, Console Wars: Sega, Nintendo, and the Battle That Defined a Generation (2014); Adam Gabbatt, Myspace Tom to Google+: Don’t Become a Cesspool Like My Site, Guardian (Dec. 30, 2011), http://www.theguardian.com/technology/2011/dec/30/myspace-tom-google-censorship.
In this way, I agree with Robert Simpson that we should move from a confused and tacit subtractive approach to speech coverage to an additive one. See Robert Mark Simpson, Defining ‘Speech’: Subtraction, Addition, and Division, 29 Canadian J.L. & Jurisprudence 457 (2016); see also Robert C. Post, The Constitutional Status of Commercial Speech, 48 UCLA L. Rev. 1 (2000) (using an additive methodology to discuss the protection of commercial speech).
See, e.g., Alexander Meiklejohn, Free Speech and Its Relation to Self-Government 22–27 (1948); Robert Post, The Constitutional Conception of Public Discourse: Outrageous Opinion, Democratic Deliberation, and Hustler Magazine v. Falwell, 103 Harv. L. Rev. 601 (1990).
James Weinstein, Participatory Democracy as the Central Value of American Free Speech Doctrine, 97 Va. L. Rev. 491, 491 (2011).
See supra notes 37–38 and accompanying text.
Seana Valentine Shiffrin, A Thinker-Based Approach to Freedom of Speech, 27 Const. Comment. 283, 287 (2011). See generally Seana Valentine Shiffrin, Speech Matters: On Lying, Morality, and the Law (2014).
Shiffrin, A Thinker-Based Approach, supra note 200, at 295.
Heather Whitney is a doctoral candidate in philosophy at New York University.