Large internet platforms’ pleas for free expression protection have vexed policymakers and scholars for over a decade. I am presently trying to reconcile the type of intermediary responsibility I call for in recent work with my earlier characterization of platforms as common carriers.
These platforms assume a variety of distinctive roles and responsibilities that arise situationally. Sometimes a platform takes on real editorial responsibility. In other scenarios, it is unable or unwilling to exercise control over users, and regulators are unwilling or unable to formulate rules requiring it to act.

Heather Whitney’s paper Search Engines, Social Media, and the Editorial Analogy challenges these distinctions by highlighting how the media organizations and intermediaries that were the subjects of leading First Amendment precedents are unlike contemporary platforms. Whitney’s subtle intervention carefully parses the role and purpose of media outlets, fiduciaries, and other entities with a longer history of regulation than platforms. It should serve as a vital corrective to the anachronistic metaphors that bog down First Amendment discourse to this day.
The question now is how to craft free expression doctrine capable of addressing platforms that are far more centralized, pervasive, and powerful than the vast majority of entities that pressed free expression claims before 2000.
That is a project worthy of a treatment as expansive as Thomas Emerson’s classic The System of Freedom of Expression. In my brief intervention here, I merely hope to advance a perspective congruent with Whitney’s turn to Seana Shiffrin’s “thinker-based” theory of free expression. I believe that free speech protections are primarily for people, and only secondarily (if at all) for software, algorithms, artificial intelligence, and platforms.

“Free speech for people” is a particularly pressing goal given ongoing investigations into manipulation of public spheres around the world. American voters still do not know to what extent foreign governments, non-state actors, and bots manipulated social media during the presidential election of 2016. The Federal Election Commission failed to require disclosure of the source of much political advertising on Facebook and Twitter. Explosive reports now suggest that the goal of the Russian buyers of many ads “was to amplify political discord in the U.S. and fuel an atmosphere of divisiveness and chaos.”
Social media firms are cooperating with investigators now. But they will likely fight proactive regulation by arguing that their algorithmic feeds are speech. They have already deleted critical information.

Courts are divided on whether algorithmic generation of search results and newsfeeds merits full First Amendment protection.
As Tim Wu has observed, “[c]omputers make trillions of invisible decisions each day; the possibility that each decision could be protected speech should give us pause.” He and other scholars have argued forcefully for limiting constitutional protection of “machine speech.” By contrast, Stuart Benjamin has predicted that courts will expand the coverage of First Amendment protection to artificial intelligence (AI), including algorithmic data processing.

Given the growing concern about the extraordinary power of secret algorithmic manipulation to target influential messaging to persons with little to no appreciation of its ultimate source, courts should not privilege algorithmic data processing in these scenarios as speech. As James Grimmelmann has warned with respect to “robotic copyright,” First Amendment protection for the products of AI could systematically favor machine over human speech.
This is particularly dangerous as bots begin mimicking actual human actors. Henry Farrell paints a vivid picture:

The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. . . . Such fractured worlds are more vulnerable to invasion by the non-human. . . . Twitterbots vary in sophistication from automated accounts that do no more than retweet what other bots have said, to sophisticated algorithms deploying so-called “Sybil attacks,” creating fake identities in peer-to-peer networks to invade specific organizations or degrade particular kinds of conversation.
There is also a growing body of empirical research on the troubling effects of an automated public sphere.
In too many scenarios, bot interventions are less speech than anti-speech, calculated efforts to disrupt democratic will formation and fool the unwary.

To restore public confidence in democratic deliberation, Congress should require rapid disclosure of the data used to generate algorithmic speech, the algorithms employed, and the targeting of that speech. American legislation akin to the “right to explanation” in the European Union’s General Data Protection Regulation would not infringe on, but would rather support, First Amendment values. Affected firms may assert that their algorithms are too complex to disclose.
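No statute yet specifies what such a disclosure would contain. As a purely illustrative sketch, with every field name and value my own assumption rather than anything drawn from existing law or a pending bill, a machine-readable record accompanying a single piece of targeted algorithmic speech might pair the message with the three elements of the mandate: the data used, the algorithm employed, and the targeting applied.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmicSpeechDisclosure:
    """Hypothetical disclosure record for one targeted automated message."""
    message_id: str                # identifier for the automated message
    sponsor: str                   # ultimate source paying for the message
    data_sources: List[str]        # data used to generate the message
    algorithm_description: str     # plain-language account of the algorithm
    targeting_criteria: List[str]  # audience attributes used for targeting

# Example of a record a platform might be required to produce rapidly
# (all values invented for illustration):
record = AlgorithmicSpeechDisclosure(
    message_id="ad-2018-0001",
    sponsor="Example Advocacy Group",
    data_sources=["voter file", "platform engagement history"],
    algorithm_description="model ranking users by predicted persuadability",
    targeting_criteria=["residents of swing districts", "ages 45 to 65"],
)
print(record)
```

The particular format matters far less than the principle it embodies: each element of the mandate is concrete enough to be logged at the moment of delivery and produced on demand.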
If an algorithm genuinely is too complex to disclose, Congress should have the power to ban the targeting and arrangement of information at issue, because the speech protected by the Constitution must bear some recognizable relation to human cognition.

Authorities should also consider banning certain types of manipulation. The UK Code of Broadcast Advertising states that “audiovisual commercial communications shall not use subliminal techniques.”
In a less esoteric mode, there is a long line of U.S. Federal Trade Commission (FTC) guidance forbidding misleading advertisements and false or missing indication of sponsorship. Given the FTC’s manifold limitations, U.S. states will also need to develop more specific laws to govern an increasingly automated public sphere. California State Senator Robert Hertzberg recently introduced the so-called “Blade Runner Bill,” which “would require digital bots, often credited with spreading misinformation, to be identified on social media sites.” Another proposed bill “would prohibit an operator of a social media Internet Web site from engaging in the sale of advertising with a computer software account or user that performs an automated task, and that is not verified by the operator as being controlled by a natural person”; a minimal sketch of such a verification gate appears below. I applaud such interventions as concrete efforts to assure that critical forums for human communication and interaction are not overwhelmed by a posthuman swarm of spam, propaganda, and distraction.

As theorists develop a philosophy of free expression for the twenty-first century, they might take the principles underlying interventions like the Blade Runner Bill as fixed points of considered convictions to guide a reflective equilibrium on the proper balance between the rights of speakers and listeners, individuals and community, technology users, and those subject to technology’s effects.
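The verification gate promised above can be stated in a few lines. The sketch is mine, not the bill’s text; the names and logic are simplified assumptions, offered only to show how mechanically administrable such a prohibition would be.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    is_automated: bool             # the account performs automated tasks (a bot)
    verified_natural_person: bool  # operator has verified human control

def may_sell_advertising(account: Account) -> bool:
    """No ad sales to automated accounts lacking verified human control."""
    return not (account.is_automated and not account.verified_natural_person)

# An unverified bot is refused; a human account and a verified bot are not.
assert may_sell_advertising(Account("@auto_news", True, False)) is False
assert may_sell_advertising(Account("@jane_doe", False, False)) is True
assert may_sell_advertising(Account("@verified_bot", True, True)) is True
```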
Even if free expression protections extend to algorithmic targeting and bot expression, disclosure rules are both essential and constitutionally sound. Courts should avoid intervening to protect “speech” premised on elaborate and secretive human-subject research on internet users. The future of human expression depends on strict rules limiting the power and scope of technological substitutes for real thinkers and real thoughts.

© 2018, Frank Pasquale.
Cite as: Frank Pasquale, Preventing a Posthuman Law of Freedom of Expression, 18-01.c Knight First Amend. Inst. (Feb. 26, 2018), https://knightcolumbia.org/content/preventing-posthuman-law-freedom-expression [https://perma.cc/T5CE-C9CL].
See generally Frank Pasquale, The Automated Public Sphere, in The Politics and Policies of Big Data: Big Data, Big Brother? (Ann Rudinow Sætnan et al. eds., 2018) (translated into German and Portuguese); Frank Pasquale, Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines, 2008 U. Chi. Legal F. 263; Frank Pasquale, Platform Neutrality: Enhancing Freedom of Expression in Spheres of Private Power, 17 Theoretical Inquiries L. 487 (2016) (translated into Hungarian); Frank Pasquale, Reforming the Law of Reputation, 47 Loyola L. Rev. 515 (2016). As the translations suggest, I intend my work for a global audience and do not limit my consideration of free expression policy to First Amendment–dominated U.S. perspectives.
In this sentence, and for the rest of this essay, the term “platform” refers to large internet platforms with over ten million users. As the regulation of platforms evolves, legislators, regulators, and courts should hold large platforms to higher standards than smaller platforms.
On the evolution of the structure of the internet, see Chelsea Barabas et al., Ctr. for Civic Media & Digital Currency Initiative, Defending Internet Freedom Through Decentralization: Back to the Future? 1 (2017), http://dci.mit.edu/assets/papers/decentralized_web.pdf (“[S]ince its development, the Web has steadily evolved into an ecosystem of large, corporate-controlled mega-platforms which intermediate speech online.”).
Thomas I. Emerson, The System of Freedom of Expression (1970).
My position here echoes the name of the anti–Citizens United group Free Speech for People, as I believe the doctrine of “computer speech” could evolve in the same troubling ways that corporate speech doctrine has.
Dylan Byers, Facebook Gives Russian-linked Ads to Congress, CNN (Oct. 2, 2017), http://money.cnn.com/2017/10/01/media/facebook-russia-ads-congress/index.html.
Kieren McCarthy, Facebook, Twitter Slammed for Deleting Evidence of Russia’s US Election Mischief, Register (Oct. 13, 2017), http://www.theregister.co.uk/2017/10/13/facebook_and_twitter_slammed_for_deleting_evidence_of_russian_election_interference.
Scholars compiling cases limiting such coverage include Ashutosh Bhagwat, When Speech Is Not “Speech,” 78 Ohio St. L.J. 839 (2017); Oren Bracha, The Folklore of Informationalism: The Case of Search Engine Speech, 82 Fordham L. Rev. 1629 (2014); and Tim Wu, Machine Speech, 161 U. Pa. L. Rev. 1495 (2013).
Tim Wu, Free Speech for Computers?, N.Y. Times (June 19, 2012), http://www.nytimes.com/2012/06/20/opinion/free-speech-for-computers.html; see also Morgan Weiland, Expanding the Periphery and Threatening the Core: The Ascendant Libertarian Speech Tradition, 69 Stan. L. Rev. 1389 (2017).
Stuart Minor Benjamin, Algorithms and Speech, 161 U. Pa. L. Rev. 1445 (2013); see also Zhang v. Baidu.com, Inc., 932 F. Supp. 2d 561 (S.D.N.Y. 2013).
James Grimmelmann, Copyright for Literate Robots, 101 Iowa L. Rev. 657 (2016).
Henry Farrell, Philip K. Dick and the Fake Humans, Boston Rev. (Jan. 16, 2018), http://bostonreview.net/literature-culture/henry-farrell-philip-k-dick-and-fake-humans.
See Robyn Caplan et al., Data & Soc’y, Dead Reckoning: Navigating Content Moderation After “Fake News” (2018), http://datasociety.net/pubs/oh/DataAndSociety_Dead_Reckoning_2018.pdf; Alice Marwick & Rebecca Lewis, Data & Soc’y, Media Manipulation and Disinformation Online (2017), http://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf.
On the need to limit the scope and power of such “inexplicable” artificial intelligence, see Frank Pasquale, Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society, 78 Ohio St. L.J. 1243 (2017).
The BCAP Code: The UK Code of Broadcast Advertising, App’x 2 (Sept. 1, 2010), http://www.asa.org.uk/uploads/assets/uploaded/e6e8b10a-20e6-4674-a7aa6dc15aa4f814.pdf. The U.S. Federal Communications Commission has twice considered the issue but has done nothing.
Senator Robert Hertzberg, Press Release: Hertzberg Announces Legislation to Encourage Social Media Transparency (Feb. 1, 2018), http://sd18.senate.ca.gov/news/212018-hertzberg-announces-legislation-encourage-social-media-transparency.
An act to add Article 10 (commencing with Section 17610) to Chapter 1 of Part 3 of Division 7 of the Business and Professions Code, relating to advertising, A.B. 1950, 2017–2018 Reg. Sess. (Jan. 29, 2018), available at http://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=201720180AB1950.
Frank Pasquale, Campaign 2020: Bots United, Balkinization (Feb. 14, 2012), https://balkin.blogspot.com/2012/02/campaign-2020-bots-united.html. Like much in my Cassandran oeuvre, this post was too cautious—its title suggested that dynamics that emerged only a few years after publication would take at least eight years to occur.
I draw here on terms developed in Rawlsian theoretical methodology. John Rawls, Political Liberalism 8 (1993).
See, e.g., McConnell v. FEC, 540 U.S. 93, 194–202 (2003) (upholding disclosure provisions of the Bipartisan Campaign Reform Act of 2002); Frank Pasquale, Reclaiming Egalitarianism in the Political Theory of Campaign Finance Reform, 2008 U. Ill. L. Rev. 599 (cited in Citizens United v. FEC, 130 S. Ct. 876, 963 (2010) (Stevens, J., dissenting)).
Frank Pasquale is a professor of law at Brooklyn Law School, an affiliate fellow at the Yale Information Society Project, and the Minderoo High Impact Distinguished Fellow at the AI Now Institute.