Introduction
Media policy is designed in large part to support high-fidelity information—news with a signal-to-noise ratio necessary for self-government. Federal broadcast regulations, the Supreme Court precedents upholding them, investments in public media, and journalistic norms all seek to support an informed citizenry and glorify the predicate values of truth and robust debate. “Signal,” in this context, is information that is truthful and supportive of democratic discourse. “Noise” misinforms and undermines discursive potential. When signal overpowers noise, there is high fidelity in the information environment.
Policymakers and the public are outraged at digital information platforms (“platforms” or “digital platforms”), variously, for their roles in promoting noise via disinformation and hate speech.
This rage is fueling calls to break up the platform companies. Reducing platform size may address some aspects of overweening power, but antitrust law will not correct problematic information dynamics. For one thing, splintered companies are likely to re-consolidate. For another, if more small companies simply replicate the same practices, similar patterns are likely to emerge with different owners. Diverse ownership is a justly enduring value in media policy, but not a panacea.
A distinct approach, often complementary to antitrust, is regulation. Digital platforms have operated largely free from media regulations. So too, they have been untethered by the norms around media responsibility, and associated legal liability, that have constrained publishers. Transparency rules and norms, which have long been useful for fidelity, are among those that have never applied. In analog commercial and political advertising, rules require sponsorship identification on the theory that if people know who is speaking, they will be better able to filter out noise. Because disclosure mandates increase information, rather than suppress it, transparency is a light lift from the free speech perspective. It is thus natural that transparency would top policy agendas as the cure-all or at least a “cure-enough” for online harms. As governments now begin to close the digital loophole and extend analog-era regulations to digital flows of information, we should understand the limits of these moves.
Transparency alone is no match for platform design choices that degrade fidelity. Algorithmic amplification creates a digital undertow that weakens cognitive autonomy and makes it difficult for people to sift signal from noise. Merely importing analog-era regulations into the digital realm will not adequately reckon with how meaning is made online. If the internet is a stack of functions, with data transmission at the bottom, and content at the top, traditional transparency happens at the surface where content emerges. But it is lower down the stack where cascades of individual actions, paid promotions, and platform priorities determine how messages move. Meaning is made where likes and shares and algorithmic optimizing minutely construct audiences, where waves of disinformation swell and noxious speech gathers energy. Increasing fidelity by empowering individual autonomous choice will require both transparency and other interventions at the level of system architecture. To this end, disclosures should cover the reach and targeting of recommended and sponsored messages.
One way to understand disclosure rules is that they create friction in digital flow—friction that opens pathways for reflection. Disclosure is not the only, and may not be the best, frictive intervention. Media policy should introduce other forms of salubrious friction to disincentivize and disrupt practices that addict, surveil, and dull critical functions. New sources of friction can slow the pull of low-fidelity information and equip people to resist it.
The first section of this essay briefly describes the historic relationship between American media policy and information fidelity, focusing on transparency rules and the reliance on listener cognitive autonomy. The second shows how analog-era transparency rules are being adapted for digital platforms with a view toward restoring and protecting autonomy. The third discusses the ways in which these transparency solutions alone cannot cope with algorithmic noise and suggests that more systemic transparency is necessary. The fourth proposes that new sources of friction in information flows may be needed to foster information fidelity amidst the algorithmic production of salience.
High-Fidelity Information and Media Policy
The development of American 20th century media was, as Paul Starr argues, inextricably tied to liberal constitutionalism and its values of truth, reasoned discourse, and mental freedom.
This linkage was reflected in media policies that yoked regulation to safeguarding autonomy and encouraging democratic participation. A principal media policy goal has been to boost information fidelity, or the signal-to-noise ratio, in the service of democratic processes. The signal is information necessary to self-government, characterized by accuracy, relevance, diversity of views, and similar values. As Justice Stephen Breyer put it, “[C]ommunications policy ... seeks to facilitate the public discussion and informed deliberation which ... democratic government presupposes.”
Digital platforms can overwhelm signal with noise. Scale and speed, user propagation, automated promotion, inauthentic and hidden amplification, and the mixture of sponsored and organic speech all make digital discourse different. Alongside these technical differences are sociopolitical ones. Digital platforms emerged from the world of software engineering, not the press. They are not inextricably tied to liberal constitutionalism. They stumbled into media without the norms or bonds of 20th century professionalized press traditions or regulatory pressures. It is therefore not shocking that platform architecture not only tolerates but even favors low-fidelity speech.
Accuracy has little structural advantage in the attention economy. Deepfakes, bot-generated narratives masquerading as groundswell truths, and other social media contrivances amplify disinformation and can create epistemic bubbles. Algorithmic systems deliver content to audiences deemed receptive based on data-inferred characteristics. This delivery system has design features like the infinite scroll or social rewards of provocation that bypass listeners’ cognitive checks and autonomous choice. The result is a noisy information environment that is inhospitable to the production of shared truths and the trust necessary for self-government.
American media policy can do very little to eradicate noise. For the most part, the First Amendment is hostile to bureaucratic judgments about information quality. Aside from defamation actions and outside of advertising, the law generally protects falsehoods from government censure.
So strong is the aversion to policing truth that investigative journalists who break the law in order to reveal truths enjoy less protection than those who misinform in order to deceive. The constitutional tolerance for lies rests on the assumption that people can and will privilege truth if given the chance. This is the classic “marketplace of ideas” formulation of a free speech contest for mindshare. Truth is expected to outperform lies so long as people are equipped to choose it.
A high-fidelity information environment in liberal democracies thus depends heavily on the exercise of cognitive autonomy: people reasoning for themselves.
Respect for autonomy is at the root of the First Amendment’s guarantees of free speech, religion, assembly, and petition. So in a decision interpreting the First Amendment, Justice Louis Brandeis observed, “Those who won our independence ... believed that freedom to think as you will and to speak as you think are means indispensable to the discovery and spread of political truth.” From the heart of the First Amendment, the impulse to safeguard autonomous thought runs straight through the Fourth Amendment’s protection against unreasonable government searches. By impeding entry to the house, the Constitution made it harder for government to enter the mind. Justice Brandeis again: the Founders “sought to protect Americans in their beliefs, their thoughts, their emotions and their sensations.”
Law developed over the 20th century to safeguard the free mind from deceptive messaging conveyed by mass communication, especially via the mechanism of the disclosure requirements discussed below. There were other broadcast law interventions—significant more for their rhetorical weight than their operative force—that sought to prevent manipulation. Federal Communications Commission (FCC) rules prohibit broadcast hoaxes and the intentional slanting of news: “[A]s public trustees, broadcast licensees may not intentionally distort the news ... ‘rigging or slanting the news is a most heinous act against the public interest.’”
Long ago, the FCC banned the broadcast of “functional music”—something like muzak—for fear that it would subliminally seduce the public into a buying mood. What these regulatory examples show is a concern for listener autonomy: that listeners not be deceived or lured into false consciousness. So freed, the listener can presumably ensure for herself a high-fidelity information diet.
The operation of autonomous choice to filter signal from noise, as it developed in the analog world, has to be understood against the backdrop of signal-supporting government policies and industry practices. Broadcasters have been subject to public service requirements of various kinds, affirmative programming requirements for news, and the erstwhile “fairness requirements” to ventilate opposing viewpoints on controversial issues. The Public Broadcasting Act
established subsidies for noncommercial media that would “be responsive to the interests of people, ... constitute an expression of diversity and excellence, [develop] programming that involves creative risks and that addresses the needs of unserved and underserved audiences ... [and] constitute valuable local community resources for utilizing electronic media to address national concerns and solve local problems.” Broadcasters who are inclined to amplify deceptive messages might also be deterred by the spectrum licensing system, which at least formally subjects broadcasters to the risk that they will lose their licenses through petitions to deny renewals.
More signal-boosting work was done by press norms and business structures. Defamation law incentivizes publishers to take care with the truth. Newspaper mastheads lay responsibility for content at the feet of named publishers and editors. The professionalization of the news business led to norms of fact-checking and fidelity. Analog-era media economics tended to reward high-fidelity news production. Local newspapers enjoyed near-monopoly claims on advertising revenue that supported investigative journalists. Because news was bundled with entertainment and sports, media outlets cross-subsidized one with revenue from the other. These economics have of course been upended in the digital world, where content bundles are disaggregated and digital platforms absorb the advertising revenue needed for journalism—without in fact producing it.
One other thing to note about the analog media environment that birthed media policy is that analog information flows were much slower. The task of filtering signal from noise was made easier simply by virtue of analog system constraints. Attention abundance and content scarcity meant that more cognitive resources could be allocated to evaluating a particular piece of content.
The information flow through newspapers and broadcast channels left time enough to absorb disclosures or discriminate among messages. Perhaps most significantly, the flow was not narrowcast. Noise in the form of lies or manipulation would be exposed to a large audience, which was itself a form of discipline and an opportunity for collective filtering.
With this background, we can turn to the transparency rules that developed in the analog environment to safeguard cognitive autonomy and enhance information fidelity. It is the translation of these rules for digital platforms that is the first-order work of platform regulation.
Fidelity of Message—Know Who’s Talking to You
In reaction to the social media disruptions of 2016—including foreign interference in the messaging around the Brexit vote in the United Kingdom and the presidential election in the United States—western democracies are considering or adopting laws to try to limit foreign political advertising and surreptitious messaging of all kinds. These interventions are forward-looking as well, with an eye toward the expected onslaught of disinformation in future campaigns. At the same time, the largest social media platforms, including Facebook, Twitter, and YouTube, have taken voluntary steps to police inauthentic accounts that violate their terms of service and to be more transparent about the sources of political advertising.
For the most part, the notion of transparency reflected in both mandated and self-imposed measures is an old one: Individuals can be manipulated into mistaking noise for signal if they don’t know who is speaking to them. Analog-era transparency requirements took hold at the level of the message. That is, disclosures about a particular advertisement or program were displayed simultaneously with the message in order to allow listeners to exercise autonomous judgment about that message. The following shows how analog-era media transparency rules tried to increase information fidelity and how these rules are being adapted for digital flows.
Analog-Era Transparency Rules
Twentieth-century advertising and media law sought to advance information fidelity by increasing transparency of authorship, essentially to help listeners filter out noise. Without knowing who is behind a message, people might be manipulated into believing what, in the light of disclosure, is unbelievable. Concealed authorship slips messages past cognitive checks that safeguard freedom of mind.
Disclosure mandates aim to restore these checks and enable listeners to apply cognitive resistance.
Most of the analog-era source disclosures are tied to the message itself. For example, print, radio, and television political advertising messages are subject to disclosure requirements under the Federal Election Campaign Act.
A “clear and conspicuous” disclaimer is required to accompany certain “public communications” that expressly advocate for a candidate. The disclaimer identifies who paid for the message and whether it was authorized by the candidate. The Supreme Court, in Citizens United v. FEC, found these requirements to be justified by the government interest in ensuring that “‘voters are fully informed’ about who is speaking.” In an earlier decision, Justice Antonin Scalia celebrated the virtue of transparent political speech, writing, “Requiring people to stand up in public for their political acts fosters civic courage, without which democracy is doomed.”
Disclosure law is also entrusted to the Federal Communications Commission, whose predecessor agency started requiring sponsorship identification under the Radio Act of 1927.
The most notable expansion of these rules followed not a political event but the payola scandals of the 1950s, when record labels bribed DJs to play their music, thus surreptitiously appropriating the editorial role. It was then that Congress authorized the FCC to require broadcasters to disclose paid promotions. Disclosure is required when “any type of valuable consideration is directly or indirectly paid or promised, charged or accepted” for the inclusion of a sponsored message in a broadcast. For controversial or political matters, disclosure is required even when no consideration is paid. Behind this requirement is the idea that faked provenance prevents people from engaging with speech on the level and thereby from exercising cognitive autonomy. As discussed below, these rules apply only to the broadcast media, not to the internet.
Another set of source disclosure rules comes from the Federal Trade Commission. Once Madison Avenue had perfected techniques to bypass critical resistance to commercial messages, it became the job of the FTC to protect consumers from being duped. Section 5(a) of the Federal Trade Commission Act empowers the agency to police sponsored messages for unfairness or deception.
To reduce the likelihood that advertising would deceive by concealing motive or authorship, the FTC issued guidance about source disclosures for paid product endorsements. These disclosures must be “clear and conspicuous ... to avoid misleading consumers.” Here, in theory, there is no digital loophole. Clear and conspicuous guidelines also apply to digital advertisements and to digital influencer sponsorship.
Some analog-era disclosure rules, while still operating at the message level, are meant for information intermediaries rather than the listener. For example, the FCC requires various kinds of “public file” submissions so that the public can be made aware of how broadcasters approach their public interest obligations.
Broadcasters also have to make disclosures about their ownership structure so as to inform the public who really holds their communicative power. So too, the FEC requires this kind of intermediary-focused disclosure about campaign contributions and spending. Though aimed at intermediaries, the objective of these disclosures is still to help listeners understand who is speaking to them.
Adaptation to Digital
The first rounds of proposals to regulate digital platforms more or less adapt analog-era transparency requirements to the internet.
They attack manipulation in the form of source concealment at the level of the message.
Most internet messaging is not covered under the election law term “public communications,”
and therefore there has been no FEC-required sponsorship disclosure on digital platforms. Closing this sort of digital loophole is a straightforward, though still unrealized, policy project. One of the first attempts to translate analog transparency regimes to the digital world in the United States was the Honest Ads Act, introduced for a second time in March 2019. Seeking to uphold the principle that “the electorate bears the right to be fully informed,” the Act would close the digital loophole for online campaign ads. Platforms would have to reveal the identities of political ad purchasers. While the Honest Ads Act is stalled in Congress as of this writing, several states have moved forward to adopt similar legislation, including California, Maryland, and New York.
California’s Social Media DISCLOSE Act of 2018 extends political advertising sponsorship disclosure requirements to social media.
New York’s Democracy Protection Act of 2018 requires paid internet and digital political ads to display disclaimers stating whether the ad was authorized by a candidate as well as who actually paid for the ad. Washington State has altered its campaign finance laws to require disclosure of the names and addresses of political ad sponsors and the cost of advertising. Canada enacted a law requiring that platforms publish the verified real names of advertising purchasers.
New technologies have created new threats to information fidelity. Bots enable massive messaging campaigns that disguise authorship and thereby increase the perceived value or strength of an opinion.
A substantial number of tweeted links originate from fake accounts designed to flood the information space with an opinion expressed so frequently that people believe it. Deepfakes create fraudulent impressions of authorship through ventriloquy, using artificial intelligence to fake audio or video. Proposed and adopted laws to address deepfakes and bot-generated speech are in the same tradition as the political and advertising disclosure requirements advanced to close the digital loopholes. They seek to ensure that people are informed about who is speaking to them (in the case of bots) and whether the speech is real (in the case of deepfakes).
California SB 1001
makes it illegal for a bot to communicate with someone with “the intention of misleading and without clearly and conspicuously disclosing that the bot is not a natural person” and requires removal of offending accounts. It requires that any “automated online [“bot”] account” engaging a Californian on a purchase or a vote must identify itself as a bot. Notably, the law makes clear that it “does not impose a duty on service providers of online platforms.”
At the federal level, the proposed Bot Disclosure and Accountability Act
would clamp down on the use of social media bots by political candidates. Candidates, their campaigns, and other political groups would not be permitted to use bots in political advertising. Moreover, the FTC would be given power to direct the platforms to develop policies requiring the disclosure of bots by their creators/users. Another federal proposal would require platforms to identify inauthentic accounts and determine the origin of posts and/or accounts. Finally, the European Commission’s artificial intelligence ethics guidelines include a provision that users should be notified when they are interacting with algorithms rather than humans.
Deepfakes are another technique to distort democratic discourse by concealing authorship.
Facebook is entreating developers to produce better detection systems for deepfakes. Early legislative efforts at the federal level and the state level would penalize propagators of deepfakes in various circumstances. The most notable federal proposal—the DEEPFAKES Accountability Act—would address the manipulative possibilities of deepfakes by requiring anyone creating synthetic media featuring an imposter to disclose that the media was altered or artificially generated. Such disclosure would have to be made through “irremovable digital watermarks, as well as textual descriptions.” This sort of “digital provenance” only works if the marks are ubiquitous and unremovable—both of which are unlikely. As Devin Coldewey critically observes, “[T]he law here is akin to asking bootleggers to mark their barrels with their contact information.” If it is not effective or enforceable, at the very least the law serves an expressive purpose by stating (or restating) that informational fidelity is worth pursuing.
While most of these proposals deal with direct-to-consumer transparency, there are also new proposed and adopted rules to benefit information intermediaries. There are many versions of an advertising archive requirement.
The Honest Ads Act would require platforms to maintain a political ad repository of all political advertisers that have spent more than $500 on ads or sponsored posts. Canada’s political advertising law also mandates an ad repository. On the state level, the California Disclose Act requires political campaign advertisers to list their top three contributors and requires platforms to maintain a database of political ads run in the state. The New York State Democracy Protection Act mandates that political ads be collected in an online archive maintained by the State Board of Elections. Washington State requires disclosure of who paid for a political ad, how much the advertiser spent, the issue or candidate supported by the ad, and the demographics of the targeted audience.
Much about this adaptation of analog-era transparency rules to digital is good and necessary. But it will not be sufficient, either as a matter of transparency policy or as a more general instrument of digital information fidelity.
Fidelity of System—Know Who the System Is Talking To
Digital platforms serve up content and advertising to listeners to capitalize on cognitive vulnerabilities that have surfaced through pervasive digital surveillance.
The noise problem on digital platforms is different than on analog ones in part because the business model pushes content to soft targets, where cognitive resistance is impaired. Merely updating analog-era transparency rules as an approach to information fidelity misses this fundamental point about how digital audiences are selected for content. Analog mass media and advertising transparency regimes, embodied in such practices as sponsorship identification, seek to combat manipulation at the level of the message. But digital manipulation transcends the message. It is systemic. The actual message is only the end product of a persuasive effort that starts with personal data collection, personal inferences, amplification, and tailoring of messages to the “right” people, all of which happens in the dark.
Advertisers have always tried to target segmented audiences with persuasive messages, but analog technologies offered only scattershot messaging to the masses. System architecture made it impossible to hide where the messages went; distribution was evident. All listeners of channel x were exposed to y content at z moment (give or take some time shifting). On social media platforms like Facebook and Twitter, obfuscation and manipulation are emergent properties of algorithmically mediated speech flows that surface communications based on microtargeting and personal data collection.
In the current environment, no one can easily solve for x, y, and z. Moreover, people are ill equipped to filter out noise in light of digital design features that depress cognitive autonomy, as discussed below. Manipulation in this context resides not only in the individual messages but also in the algorithmic production of salience. Transparency mechanisms designed mainly to strengthen cognitive resistance to discrete messages will not be enough to secure freedom of mind. Policy should boost signal throughout the system, through transparency and other means.
Algorithmic Noise
As Julie E. Cohen observes,
Algorithmic mediation of information flows intended to target controversial material to receptive audiences, ... inculcating resistance to facts that contradict preferred narratives, and encouraging demonization and abuse. ... New data harvesting techniques designed to detect users’ moods and emotions ... exacerbate these problems; increasingly, today’s networked information flows are optimized for subconscious, affective appeal.
She is touching on a complex of problems related to polarization, outrage, and filter bubbles. Platforms systematically demote values of information fidelity. There is a collapse of context between paid advertisements and organic content, between real and false news, between peer and paid-for recommendations. Jonathan Albright describes a “micro-propaganda machine” of “behavioral micro-targeting and emotional manipulation—data-driven ‘psyops’” ... that can “tailor people’s opinions, emotional reactions, and create ‘viral’ sharing (LOL/haha/RAGE) episodes around what should be serious or contemplative issues.”
Platform algorithms boost noise through the system as a byproduct of the main aim: engagement (subject to some recent alterations to content moderation practices). In order to maximize and monetize attention capture, the major digital platforms serve up “sticky” content predicted to appeal based on personal data. The result is what Mark Andrejevic has called “digital enclosure.” Algorithmic promotion is abetted, often unwittingly, by the users themselves, who are nudged to amplify messages that on reflection they might abjure. In this respect, users are manipulated not (or not only) via a specific message but through technical affordances that drive them into message streams without care for message quality. This production of salience happens below the level of the message. Listeners relate to information unaware of the digital undertow.
Dipayan Ghosh writes that “[b]ecause there is no earnest consideration of what consumers wish to or should see in this equation, they are subjected to whatever content the platform believes will maximize profits.” Platforms understand what content will maximize engagement through a process of pervasive data harvesting.
The Council of Europe directly confronted the ways in which platform design undermines cognitive autonomy in its 2019 Declaration on the Manipulative Capabilities of Algorithmic Processes. Machine learning tools, the Council said, have the “capacity not only to predict choices but also to influence emotions and thoughts and alter an anticipated course of action, sometimes subliminally.” The Declaration further states that “fine grained, sub-conscious and personalized levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.”
The Declaration’s supposition is supported by research showing how digital speech flows are shaped by data harvesting and algorithmically driven and relentlessly monetized platform mediation.
Platform priorities and architecture have reshaped public discourse in ways that individual users cannot see and may not want. Platforms flatten out the information terrain so that all communications in theory have equal weight, with high-fidelity messages served up on a par with misinformation of all kinds. This is sometimes called context collapse. Stories posted on social media or surfaced through voice command are often denuded of credibility tokens or origination detail, like sponsorship and authorship, making it hard to distinguish between fact and fable, high fidelity and low.
Listeners face this material in vulnerable states, by design. Platforms in pursuit of engagement may pair users with content in order to exploit users’ cognitive weaknesses or predispositions. Design tricks like the “infinite scroll” keep people engaged while blunting their defenses to credibility signals. YouTube autoplay queues up video suggestions to carry viewers deeper into content verticals that are often manipulative or otherwise low-fidelity.
Social bots exploit feelings of tribalism and a “hive mind” logic to enlist people into amplifying information, again without regard to information fidelity.
Other design features like notifications and the quantification of “likes” or “follows” trigger dopamine hits to hook users to their apps. Gratification from these hits pushes people to share information that will garner a reaction. On top of this, Facebook’s News Feed and YouTube’s Suggested Videos use predictive analytics to promote virality through a user’s network. These tricks are among what are called “dark pattern” design elements. They are hidden or structurally embedded techniques that lower cognitive resistance, encouraging a sort of numb consumption and automatic amplification while at the same time facilitating more data collection, which supports more targeted content delivery, and so on.
That these design features can be responsible for lowering information fidelity is something the platforms themselves recognize. Under pressure from legislators, Facebook in 2017 said that it would block the activity of government and nonstate actors to “distort domestic or foreign political sentiment” and “coordinated activity by inauthentic accounts with the intent of manipulating political discussion.”
In other words, the platform would work to depress noise. But this reference to “distortion” assumes a baseline of signal that the platform has not consistently supported. Its strategies with respect to news zig-zag in ways that have undermined the salience of high-quality information. Emily Bell and her team at Columbia University’s Tow Center for Digital Journalism have chronicled how Facebook policies influence news providers, getting them to invest in algorithmically desirable content (including, for a while, video), only to abruptly change directions, scrambling editorial policies and wasting resources. Facebook decided in 2018 to demote news as compared with “friends and family” posts and then the next year created a privileged place for select journalism outlets in the News Tab. Policies that are both erratic and truth-agnostic allow noise generators, through the canny use of amplification techniques, to manipulate sentiment without resorting to inauthenticity. Facebook’s editorial policies and their fluidity have led to criticisms that the process is lacking in transparency and accountability.
Platform design features have to be understood against the platforms’ background entitlements and resulting norms. The most significant entitlement is their immunity under Section 230 of the Communications Decency Act.
This provision holds platforms harmless for most of the content they transmit, freeing them from the liability that other media distributors may face for propagating harms. It is not surprising, then, given the legal landscape, that the platforms have not developed a strong culture of editorial conscience. They have grown up without anything like a robust tradition of making editorial choices in the public interest, of clearly separating advertising from other content, of considering information needs, or of worrying that they might lose their license to operate.
All of these features—business models, architecture, traditions, and regulation—combine with the sheer volume of message exposure to limit the effectiveness of message-level disclosure in digital flows.
Systemic Transparency
Enhancing digital information fidelity through disclosure will require more than message-level transparency. There are at least two reasons to look further down the stack toward greater system-level transparency.
The first reason is that message labels may not be effective counters to manipulation, given the volume and velocity of digital messaging. In studies of false news, researchers have found that users repeatedly exposed to false headlines on social media perceive them as substantially accurate even when they are clearly implausible.
Warning labels about the headlines being incorrect had no effect on perceptions of credibility or even caused people to share the information more often. The frictionless sharing that digital platforms enable may simply overwhelm signifiers of compromised informational integrity delivered at the point of consumption.
In important ways, by the time the message is delivered to the user, meaning has already been made. The messages on the surface are epiphenomenal of algorithmic choices made below. This is the second reason to push transparency mandates lower down in the stack, where algorithmic amplification decisions reside. How can we render visible the “authorship” of information flows? It’s not enough for the individual to know who is messaging her. What is trending and what messages are reaching which populations are a function of algorithmic ordering and behavioral nudges hidden from view. Salience is a product of these systemic choices.
European governments are trying to address algorithmic manipulation through transparency rules geared to the algorithmic production of salience. Among other regulators, the UK Electoral Commission aspires to fill in the lacunae of campaign ad microtargeting, where “‘[o]nly the company and the campaigner know why a voter was targeted and how much was spent on a particular campaign.’” A report commissioned by the French government has proposed “prescriptive regulation” that obliges platforms to be transparent about “the function of ordering content,” among other features. This includes transparency about “the methods of presentation, prioritisation and targeting of the content published by the users, including when they are promoted by the platform or by a third party in return for remuneration.” Similarly, a UK Parliament Committee report in the aftermath of the Cambridge Analytica scandal has recommended that “[t]here should be full disclosure of the targeting used as part of advertising transparency. ... Political advertising items should be publicly accessible in a searchable repository—who is paying for the ads, which organisations are sponsoring the ad, who is being targeted by the ads.” Maryland’s electioneering transparency law would also have mandated extensive disclosure of election ad reach but was held unconstitutional on First Amendment grounds by the Fourth Circuit Court of Appeals.
Drawing on these and other proposed interventions, we can identify systemic transparency touchstones. Some of these can be addressed by platform disclosure, others only by making data available for third-party auditing. When Facebook was interrogated by the U.S. Congress over Russian interference in the 2016 election, it showed itself capable of disclosing a lot of information about data flows. This is the kind of information that should be routinely disclosed at least with respect to certain categories of paid promotion.
Items that should be made known or knowable by independent auditors include:
- The reach of election-related political advertisements, paid and organic, and revenue figures;
- The reach of promoted content over a certain threshold;
- The platforms’ course of conduct with respect to violations of their own terms of service and community standards, including decisions not to downrank or remove content that has been flagged for violations;
- The use of human subjects to test messaging techniques by advertisers and platforms (also known as A/B testing);
- Change logs recording the alterations platforms make to their content and amplification policies;
- “Know Your Customer” information about who really is behind the purchases of political advertising.
Noise Reduction Via Friction
Alongside new forms of systemic transparency, other changes to system design are needed to promote signal over noise. Of course, investing in and promoting fact-based journalism is important to boosting signal.
Changes to platform moderation, amplification, and transparency policies can help to depress noise. But ultimately, it is the individual who must identify signal; communications systems can only be designed to assist the exercise of cognitive autonomy. I suggest communicative friction as a design feature that supports this autonomy. Indeed, one way to see analog-era transparency requirements is as messaging ballast—cognitive speed bumps of sorts. Slow media, like slow food, may deliver sociopolitical benefits that compensate for efficiency losses. What might such speed bumps look like in the digital realm? This section briefly characterizes the shift to frictionless digital communications and concludes with some ideas for strategically increasing friction in information flows to benefit information fidelity.
From Analog-Era Friction to Digital Frictionlessness
The analog world was naturally frictive in the delivery of information and production of salience. Sources of friction were varied, including barriers to entry to production and distribution, as well as inefficient markets. It was costly to sponsor a message and to distribute content on electronic media. And it was a “drag”—as in, full of friction—for an individual to circulate content, requiring as it did access to relatively scarce distribution media. Friction protected markets for legacy media companies. This was undesirable in all kinds of ways. But one of the benefits was that these companies invested in high-cost journalism and policed disinformation.
Friction was built into the analog-era business models and technology, some of which was discussed earlier. Relatively meager (by comparison to digital) content offerings were bundled for mass consumption and therefore were imperfectly tailored to individual preferences. By dint of this bundling in channels, networks, and newspapers, advertisers ended up supporting high-fidelity information along with reporting on popular topics like sports and entertainment.
Content scarcity, crude market segmentation, and imperfect targeting of advertising support all served as impediments to the most efficient matching of taste and message; technological friction impeded virality. Analog communications system inefficiencies and limitations did not necessarily promote information fidelity. After all, both information and disinformation campaigns, truth-tellers and liars, would have to overcome obstacles to persuasion. But the friction slowed message transmission to allow for rational consideration. Research on polarization suggests that when people have more time for deliberation, they tend to think more freely and resist misleading messaging.
Some of the friction in analog media was regulatory, including the message-level sponsorship disclosure requirements described above.
A message that says “I’m Sally Candidate, and I approved this ad” forces the listener to stop before fully processing the ad to consider its meaning. It is a flag on the field, stalling the flow of information between message and mind. That disclosures have the effect of cluttering speech is a knock against them in the literature on transparency policy. Listeners may be so overloaded with information that they don’t heed the disclosures. Their minds may not be open to hearing whatever it is the disclosure wants them to know. It is nevertheless possible that disclosures can function as salubrious friction, simply by flashing warning. In their paper on online manipulation, Daniel Susser and co-authors note that disclosures serve just such a function, encouraging “individuals to slow down, reflect on, and make more informed decisions.”
Digital platforms dismantle cognitive checkpoints along with other obstacles to information flows. For the engineer, friction is “any sort of irritating obstacle” to be overcome. This engineering mindset converged with democratic hopes for an open internet to produce a vision of better information fidelity. For example, by tearing down barriers to entry, digital could amplify “We the media,” to cite Dan Gillmor’s 2005 book of the same name. Decentralized media authority, it was hoped, would reveal truths through distributed networks, leading to a kind of collaborative “self-righting.” Building on his earlier work on networked peer production, Yochai Benkler conceptualized a “networked Fourth Estate” that took on the watchdog function of the legacy press. Reduced communicative friction did open opportunities for the voiceless. But the optimism of the early 2010s did not account for the collapse of legacy media as a source of signal or for how commercial platforms would amplify noise. Citizen journalists might take advantage of frictionless communications, but not nearly to the same degree as malicious actors and market players, whose objectives were very different.
New Frictions
Digital enclosure seals communicators in feedback loops of data that are harvested from attention and then used to deliver content back to data subjects in an endless scroll. Platforms have bulldozed the sources of friction that were able to disrupt the loop. When 20th century highway builders bulldozed neighborhoods to foster frictionless travel, place-making urbanists like Jane Jacobs articulated how the collision of different uses—something many planners considered inefficient—improves communities.
The sociologist Richard Sennett used “friction” to describe aspects of this urban phenomenon, which he viewed favorably. In communications as in urbanism, a certain degree of friction can disrupt the most efficient matching of message and mind in ways that promote wellbeing. Specifically, new frictions can promote information fidelity. Indeed, given the First Amendment limitations on any regulatory response to noisy communications, the introduction of content-neutral frictions may be one of the very few regulatory interventions that are consistent with American free speech traditions.
The use of friction is already both a public policy and a private management strategy in the digital realm. Paul Ohm and Jonathan Frankle have explored digital systems that implement inefficient solutions to advance non-efficiency values—what they term desirable inefficiency.
The platforms themselves are voluntarily moving to implement frictive solutions. For example, WhatsApp decided in 2019 to limit bulk message forwarding so as to reduce the harms caused by the frictionless sharing of disinformation. The limit imposes higher cognitive and logistical burdens on those who would amplify the noise. At the extreme, friction becomes prohibition, which is one way to think about Twitter’s decision to reject political advertising because it did not want to, or believed it could not, reduce the noise.
Forms of friction that could enhance information fidelity and cognitive autonomy include communication delays, virality disruptors, and taxes.
Communication Delays. The columnist Farhad Manjoo has written, “If I were king of the internet, I would impose an ironclad rule: No one is allowed to share any piece of content without waiting a day to think it over.”
He assumes that people will incline toward information fidelity if encouraged to exercise cognitive autonomy. This intuition is supported by research showing that individuals are more likely to resist manipulative communications when they have the mental space and inclination to raise cognitive defenses. Are there ways to systematize this sort of “pause” to cue consideration? Other examples of intentional communications delays adopted as sources of felicitous friction suggest that there are. For reasons of quality control, for example, broadcasters have imposed a short delay (usually five to seven seconds) in the transmission of live broadcasts. Frictionless communication, when it is only selectively available, can reduce faith in markets. For this reason, the IEX stock exchange runs all trades through extra cable so that more proximate traders have no communications advantage, thereby protecting faith in the integrity of its market.
As discussed above, platforms deploy dark patterns to spike engagement. Businesses routinely ask, “Are you sure you want to unsubscribe?” It should be possible for platforms to use these techniques to slow down communications: “Are you sure you want to share this?” Senator Josh Hawley’s proposed Social Media Addiction Reduction Technology Act would require platforms to slow down speech transmission as a matter of law.
The Act would make it unlawful for a “social media company” to deploy an “infinite scroll or auto refill,” among other techniques that blow past the “natural stopping points” for content consumption. While the bill has problems of conception and execution, it touches on some of the ways that platforms might be redesigned with friction to enhance cognitive autonomy. Commentators have suggested other ways that Congress could deter platform practices that subvert individual choice.
Virality Disruptors. Many forms of noise overwhelm signal only at scale, when the communications go viral. One way to deal with virality is to impose a duty on platforms to disrupt traffic at a certain threshold of circulation. At that point, human review would be required to assess the communication for compliance with applicable laws and platform standards. Pausing waves of virality could stem disinformation, deepfakes, bot-generated speech, and other categories of information especially likely to manipulate listeners. The disruption itself, combined with the opportunity to moderate the content or remove it, could reduce the salience of low-fidelity communication. Another approach is something like the sharing limit that WhatsApp imposed to increase friction around amplification. Substitute volatility for virality, and it’s easy to see how the U.S. Securities and Exchange Commission reserves to itself friction-creating powers. At a certain threshold of volatility in financial markets, it will curb trading to prevent market panic, in effect imposing a trip wire to stop information flows likely to overwhelm cognitive checks.
The New York Stock Exchange adopted these circuit-breakers in reaction to the 1987 market crash caused by high-volatility trading. Other countries quickly followed suit to impose friction on algorithmic trading when it moves so fast as to threaten precipitous market drops. The purpose of these circuit-breakers, in the view of the New York Stock Exchange, is to give investors “time to assimilate incoming information and the ability to make informed choices during periods of high market volatility.” That is, the purpose is expressly to create space for the exercise of cognitive autonomy.
Taxes. Taxes are also sources of friction that can be deployed to disincentivize business practices that boost noise over signal. Tal Zarsky has called data the “fuel” powering online manipulation.
If so, a tax on data could aid in resistance to manipulation. There are a number of nascent proposals to put a price on exploitative data practices. One possibility, for example, would be to impose a “pollution tax” on platform data sharing. Another is to have a transaction tax for advertising on platforms. These kinds of taxes would begin to make companies internalize the costs of exploitative data practices. If set at the right level, they could draw platforms and online information providers away from advertising models that monetize attention and finance the noisy digital undertow. Taxes would have the additional benefit of raising revenue that could be used to support signal-producing journalism, resulting in higher-fidelity speech.
Conclusion
It is long overdue that media transparency requirements from the analog world be adapted for digital platforms. Informing listeners about who is speaking to them—whether candidate, company, or bot—helps them to make sense of messages and discern signal from noise. But this kind of message-level transparency will not suffice either to protect cognitive autonomy or to promote information fidelity in the digital world. The sources of manipulation and misinformation often lie deeper in digital flows. By serving up content to optimize time spent on the platform and segment audiences for advertisers, at a volume and velocity that overwhelms cognitive defenses, digital platform design prioritizes content without regard to values of truth, exposure to difference, or democratic discourse. The algorithmic production of meaning hides not only who is speaking but also who is being spoken to. To really increase the transparency of communications in digital flows, interventions should focus on system-level reach and amplification, along with message-level authorship. Research suggests that transparency may have limited impact, especially in light of the volume and velocity of speech. Thus, in addition to transparency, policymakers and platform designers should consider introducing forms of friction to disrupt the production of noise in a way that respects First Amendment traditions. These could include communications delays, virality disruptors, and taxes.
© 2020, Ellen P. Goodman.
Cite as: Ellen P. Goodman, Digital Information Fidelity and Friction, 20-05 Knight First Amend. Inst. (Feb. 26, 2020), https://knightcolumbia.org/content/digital-fidelity-and-friction [https://perma.cc/97BC-HA5G].
See, e.g., Andrew Marantz, Free Speech Is Killing Us, N.Y. Times (Oct. 6, 2019), https://www.nytimes.com/2019/10/04/opinion/sunday/free-speech-social-media-violence.html [https://perma.cc/8UNW-LMUV] (summarizing public rage at “noxious speech” and comparing it to pollution).
Cf. Carl Shapiro, Protecting Competition in the American Economy: Merger Control, Tech Titans, Labor Markets, J. Econ. Persp., Summer 2019, at 69, 79 (arguing that competition may not “provide consumers with greater privacy, or better combat information disorder: unregulated, competition might instead trigger a race to the bottom, and many smaller firms might be harder to regulate than a few large ones.”).
Paul Starr, The Creation of the Media (2004).
See, e.g., C. Edwin Baker, Human Liberty and Freedom of Speech (1989).
See, e.g., Robert Post, Reconciling Theory and Doctrine in First Amendment Jurisprudence, 88 Cal. L. Rev. 2355 (2000).
See, e.g., In Re Complaints Covering CBS Program “Hunger in America,” 20 F.C.C.2d 143, 151 (1969) (“Rigging or slanting the news is a most heinous act against the public interest—indeed, there is no act more harmful to the public's ability to handle its affairs.”).
Turner Broad. Sys. Inc. v. FCC, 520 U.S. 180, 227 (1997) (Breyer, J., concurring-in-part).
This seems to be truer on the right of the political spectrum than on the left. See Yochai Benkler et al., Network Propaganda (2018). Information gluts and poor-quality information were anticipated, if downplayed, byproducts of low-cost speech distribution. Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805, 1838 (1995) (“But when speakers can communicate to the public directly, it's possible their speech will be less trustworthy: They might not be willing to hire fact checkers, or might not be influenced enough by professional journalistic norms, or might not care enough about their long-term reputation for accuracy.”).
See Claire Wardle, Fake News. It’s Complicated, First Draft (Feb. 16, 2017), https://firstdraftnews.org/latest/fake-news-complicated [https://perma.cc/5J2V-ZXJP].
See, e.g., Peter Pomerantsev, This Is Not Propaganda (2019); Jason Stanley, How Propaganda Works (2015); Tim Wu, Is the First Amendment Obsolete?, Knight First Amend. Inst. (2017), https://knightcolumbia.org/content/tim-wu-first-amendment-obsolete [https://perma.cc/EPY3-SVZG].
United States v. Alvarez, 567 U.S. 709 (2012).
See Michael C. Dorf & Sidney Tarrow, Stings and Scams: ‘Fake News,’ the First Amendment, and the New Activist Journalism, 20 U. Pa. J. Const. L. 1 (2017).
See Abrams v. United States, 250 U.S. 616, 630 (1919) (Holmes, J., dissenting) (“[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market.”). Enlightenment antecedents abound: See, e.g., John Stuart Mill, On Liberty, in On Liberty and Considerations of Representative Government 1, 13–48 (R. McCallum ed. 1948); 2 John Milton, Areopagitica, in Complete Prose Works of John Milton 485 (E. Sirluck ed. 1959).
See Seana Valentine Shiffrin, Paternalism, Unconscionability Doctrine, and Accommodation, 29 Phil. & Pub. Aff. 205, 220 (2000).
Whitney v. California, 274 U.S. 357, 375 (1927) (Brandeis, J., concurring). For autonomy theories of the First Amendment, see generally Thomas Scanlon, A Theory of Freedom of Expression, 1 Phil. & Pub. Aff. 204, 215-22 (1972).
Olmstead v. United States, 277 U.S. 438, 478 (1928) (Brandeis, J., dissenting).
FCC, Media Bureau, The Public and Broadcasting 12 (2019), https://www.fcc.gov/sites/default/files/public-and-broadcasting.pdf [https://perma.cc/B38X-DXDG] (citing In Re Complaints Covering “Hunger in America,” supra note 6, at 151). See also Report on Editorializing by Broadcast Licensees, 13 F.C.C. 1246, 1254-55 (1949). There was a recent congressional request to enforce this rule against Sinclair Broadcasting for alleged misrepresentations in news broadcasts. See Press Release, Sen. Tom Udall, Udall, Cantwell Lead Colleagues in Call for FCC to Investigate Sinclair Broadcasting for News Distortion (Apr. 12, 2018), https://www.tomudall.senate.gov/news/press-releases/udall-cantwell-lead-colleagues-in-call-for-fcc-to-investigate-sinclair-broadcasting-for-news-distortion [https://perma.cc/5PH3-EHUT].
Functional Music Inc. v. FCC, 274 F.2d 543 (D.C. Cir. 1958) (considering a challenge to the FCC regulation of muzak). Years later, the FCC also declared, without so ordering, subliminal advertising unsuitable for broadcast. Public Notice 74-78, Broadcast of Information by Means of “Subliminal Perception” Techniques, 44 F.C.C.2d 1016, 1017 (Jan. 24, 1974) (declaring that attempts “to convey information to the viewer by transmitting messages below the threshold level of normal awareness,” are “contrary to the public interest” because such advertisements are “intended to be deceptive.”).
47 U.S.C. §396.
Id. §396(a)(5)–(8).
See, e.g., Christopher Anderson, Journalism: Expertise, Authority, and Power in Democratic Life, in Media and Social Theory 248 (David Hesmondhalgh & Jason Toynbee eds., 2008).
See generally Victor Pickard, Democracy Without Journalism? Confronting the Misinformation Society (2019).
Cf. Ellen P. Goodman, Media Policy Out of the Box: Content Abundance, Attention Scarcity, and the Failures of Digital Markets, 19 Berkeley Tech. L.J. 1389, 1420–21 (2004) (describing audience and attention fragmentation in the early stages of digitalization); Wu, supra note 10 (“The most important change in the expressive environment can be boiled down to one idea: it is no longer speech itself that is scarce, but the attention of listeners. Emerging threats to public discourse take advantage of this change ... emerging techniques of speech control depend on (1) a range of new punishments, like unleashing “troll armies” to abuse the press and other critics, and (2) “flooding” tactics (sometimes called “reverse censorship”) that distort or drown out disfavored speech through the creation and dissemination of fake news, the payment of fake commentators, and the deployment of propaganda robots.”).
See Micah L. Berman, Manipulative Marketing and the First Amendment, 103 Geo. L. J. 497, 522-24 (2015) (describing manipulative marketing techniques that take advantage of consumers’ cognitive limitations); id. at 518-30 (discussing manipulative effect of covert advertising that evades critical evaluation); Lili Levi, A “Faustian Pact”? Native Advertising and the Future of the Press, 57 Ariz. L. Rev. 647, 696 (2015) (explaining that product placement relies on what cognitive psychology calls System 1 cognitive processes—those rapid and unconscious biases and heuristics that tell us to trust).
52 U.S.C. §§ 30101–12, 30124.
11 CFR § 110.11(c)(1).
2 U.S.C. § 441d(a); 11 CFR § 110.11(a). Political committees must also include a disclaimer in communications sent via email to more than 500 recipients.
Citizens United v. FEC, 558 U.S. 310, 368 (2010) (quoting Buckley v. Valeo, 424 U.S. 1, 76 (1976)). See also id. at 368 (quoting First Nat'l Bank of Boston v. Bellotti, 435 U.S. 765, 792 n.32 (1978)) (“Identification of the source of advertising may be required as a means of disclosure, so that the people will be able to evaluate the arguments to which they are being subjected.”).
Doe v. Reed, 561 U.S. 186, 228 (2010) (Scalia, J., concurring).
For a brief history of sponsorship disclosure laws, see Ellen P. Goodman, Stealth Marketing and Editorial Integrity, 85 Tex. L. Rev. 83, 98 (2006) [hereinafter Stealth Marketing].
Pub. L. No. 86-752, § 8, 74 Stat. 889 (1960) (codified as amended at 47 U.S.C. § 317 (2012)).
Letter from David H. Solomon, Chief, Enforcement Bureau, FCC, to Thomas W. Dean, Litig. Dir., NORML Found., 16 F.C.C. Rcd. 1421, 1423 (Dec. 22, 2000).
47 U.S.C. § 317(a)(2). See also H.R. Rep. No. 86-1800, at 24–25 (1960) (stating that a sponsorship identification announcement may be required for political programs or discussions of controversial issues even if “the matter broadcast is not ‘paid’ matter”).
Stealth Marketing, supra note 30, at 116 (“Whether the speech urges consumption, as in advertising, or urges belief, as in propaganda, it aims to effect audience action through cognitive manipulation, rather than through persuasion.”).
15 U.S.C. § 45(a).
FTC, Guides Concerning the Use of Endorsements and Testimonials in Advertising (Sept. 2017), https://www.ftc.gov/sites/default/files/attachments/press-releases/ftc-publishes-final-guides-governing-endorsements-testimonials/091005revisedendorsementguides.pdf [https://perma.cc/Q9TT-GAH6].
Press Release, FTC, Operation ‘Full Disclosure’ Targets More Than 60 National Advertisers (Sept. 23, 2014), https://www.ftc.gov/news-events/press-releases/2014/09/operation-full-disclosure-targets-more-60-national-advertisers [https://perma.cc/3TYT-G88H].
FTC, .com Disclosures: How to Make Effective Disclosures in Digital Advertising (Mar. 2013), https://www.ftc.gov/sites/default/files/attachments/press-releases/ftc-staff-revises-online-advertising-disclosure-guidelines/130312dotcomdisclosures.pdf [https://perma.cc/Y3WW-P5ZW].
47 CFR §§ 73.1943, 73.3526, 76.1700.
11 CFR § 110.11.
See generally Wesley A. Magat & W. Kip Viscusi, Informational Approaches to Regulation (1992) (discussing the different kinds of disclosure regimes).
For a handful of proposals, see Abby K. Wood & Ann M. Ravel, Fool Me Once: Regulating ‘Fake News’ and Other Online Advertising, 91 S. Cal. L. Rev. 1223, 1255–68 (2018).
11 CFR § 100.26 (defining “public communications”).
Internet Communication Disclaimers and Definition of “Public Communications,” 83 Fed. Reg. 12865 (Mar. 26, 2018) (proposing to modify 11 CFR Parts 100 and 110, either by extending existing political advertising disclaimer regulations to “internet communications” or by adopting a general rule that all online advertising contain a “clear and conspicuous” disclaimer of source).
The For the People Act of 2019, H.R. 1, 116th Cong. (2019), incorporated the Honest Ads Act in sections 4026 and 4028.
Sen. Mark Warner, Summary of proposed Honest Ads Act, https://www.warner.senate.gov/public/index.cfm/the-honest-ads-act?page=1 [https://perma.cc/44HX-EE3Y]. See also Sen. Mark Warner, Potential Policy Proposals for Regulation of Social Media and Technology Firms (2018) (draft white paper) (available at https://regmedia.co.uk/2018/07/30/warner_social_media_proposal.pdf [https://perma.cc/N4TL-GQND]) [hereinafter Warner Policy Proposals]. For a more far-reaching proposal, see Wood & Ravel, supra note 43, at 1264 (proposing disclosures also for unpaid ads and other communications).
Social Media DISCLOSE Act, A.B. 2188, Gen. Assemb., 92nd Sess. (Cal. 2018).
Democracy Protection Act, A.B. 9930, Gen. Assemb. (N.Y. 2018) (amending definition of “political communication” in N.Y. Elec. Law §14-106 (McKinney 2019) and adding §14-107(b) to require digital records of online platform independent expenditures).
H.B. 2938, 65th Leg., Reg. Sess. (Wash. 2018) (amending Wash. Rev. Code 42.17A (2018)).
Elections Modernization Act, Bill C-76 (2018) (enacted as S.C. 2018, c 31 (Can.)) (replacing the Fair Elections Act, S.C. 2014, c 12). Akin to the situation in Washington State (Brunner & Clarridge, infra note 71), Google announced in March 2019 that it would pull or block all ads falling within C-76’s purview ahead of the federal election. Tom Cardoso, Google to Ban Political Ads Ahead of Federal Election, Citing New Transparency Rules, Globe & Mail (Mar. 4, 2019), https://www.theglobeandmail.com/politics/article-google-to-ban-political-ads-ahead-of-federal-election-citing-new/ [https://perma.cc/GF6Q-J8QA].
Renee DiResta, Computational Propaganda: If You Make It Trend, You Make It True, Yale Rev., Oct. 2018, at 12.
Stefan Wojcik et al., Bots in the Twittersphere, Pew Res. Ctr. (Apr. 9, 2018), http://www.pewinternet.org/2018/04/09/bots-in-the-twittersphere/ [https://perma.cc/V6C8-JLW4] (finding that two-thirds of tweeted links to popular websites were posted by automated accounts). See also Madeline Lamo & M. Ryan Calo, Regulating Bot Speech, 66 UCLA L. Rev. 988 (2019).
For a discussion on platform recommendations, see Deep Fakes and the Next Generation of Influence Operations, Council on Foreign Rel. (Nov. 14, 2018), https://www.cfr.org/event/deep-fakes-and-next-generation-influence-operations.
Bolstering Online Transparency Act (B.O.T.), S.B. 1001 (Cal. 2018) (codified in Cal. Bus. & Prof. Code § 17940 (West 2018)).
Selina Wang, California Would Require Twitter, Facebook to Disclose Bots, Bloomberg (Apr. 3, 2018), https://www.bloomberg.com/news/articles/2018-04-03/california-would-require-twitter-facebook-to-disclose-bots [https://perma.cc/S2TR-TQHT] (quoting State Senator Bob Hertzberg, who introduced the bill).
Cal. Bus. & Prof. Code § 17941(a) (West 2018).
Id. § 17942(c).
Bot Disclosure and Accountability Act, S. 3127, 115th Cong. (2018). A similar bill was introduced in the California Assembly by Marc Levine (D-San Rafael). A.B. 1950, Gen. Assemb., Reg. Sess. (Cal. 2018).
Warner Policy Proposals, supra note 46, at 6.
Ethics Guidelines for Trustworthy AI, Eur. Comm’n (Apr. 8, 2019), https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai [https://perma.cc/Z36E-K4PC]. See also Australian Competition & Consumer Comm’n, Digital Platforms Inquiry—Preliminary Report (Dec. 2018), https://www.accc.gov.au/system/files/ACCC%20Digital%20Platforms%20Inquiry%20-%20Preliminary%20Report.pdf [https://perma.cc/R2RP-NG3X]. For a particularly relevant section, see id. § 7.3.
See, e.g., Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 Cal. L. Rev. 1753 (2019) (discussing solutions such as forensic technology, digital provenance, and authenticated alibi services).
Malicious Deep Fake Prohibition Act, S. 3805, 115th Cong. (2018) (would criminalize the creation or distribution of a deep fake that facilitates illegal conduct). See also Deceptive Practices and Voter Intimidation Prevention Act, H.R. 6607/S. 3279, 115th Cong. (2018) (would criminalize the intentional publication of false information about elections within 60 days of an election).
A.B. 8155, Gen. Assemb., Reg. Sess. (N.Y. 2017) (would extend the right of publicity such that “an individual's persona is the personal property of the individual” and “the use of a digital replica for purposes of trade within an expressive work [absent consent] shall be a violation” of the act with exceptions for commentary, etc.); A.B. 1280, Gen. Assemb., Reg. Sess. (Cal. 2019) (would criminalize the creation or distribution of a deep fake that depicts a person engaging in sexual conduct or that intends to coerce or deceive voters within 60 days of an election). See also S.B. 751, 86th Leg. (Tex. 2019) (codified in Tex. Elec. Code Ann. § 255.004 (West 2019)) (criminalizing the creation of a deep fake within 30 days of an election period with the intent to deceive and “influence the outcome of the election”).
The Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to (DEEPFAKES) Accountability Act, H.R. 3230, 116th Cong. (2019).
Devin Coldewey, DEEPFAKES Accountability Act Would Impose Unenforceable Rules—but It’s a Start, TechCrunch (June 13, 2019), https://techcrunch.com/2019/06/13/deepfakes-accountability-act-would-impose-unenforceable-rules-but-its-a-start [https://perma.cc/P355-774C].
See Robert Chesney & Danielle Citron, Deepfakes and the New Disinformation War, Foreign Aff. (Jan./Feb. 2019), https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war [https://perma.cc/5JK8-2QRR] (expressing skepticism toward the “digital provenance” solution).
Coldewey, supra note 65.
See generally Paddy Leerssen et al., Platform Ad Archives: Promises and Pitfalls, Internet Pol’y Rev., Oct. 2019, at 1 (providing a comprehensive review of ad archive developments in the U.S. and Europe).
Elections Modernization Act, supra note 50.
California Disclose Act, Cal. Gov’t Code §§ 84503–04 (West 2017).
Democracy Protection Act, N.Y. Elec. Law § 14-107.
Wash. Rev. Code 42.17A.345 (2018). Hours after the disclosure law went into effect on June 7, 2018, Washington State’s Public Disclosure Commission issued an emergency rule clarifying that platforms like Google were subject to it. Wash. Admin. Code §390-18-050(g) (2018). In response to the new law, Google said it pulled all covered ads. Jim Brunner & Christine Clarridge, Why Google Won’t Run Political Ads in Washington State for Now, Seattle Times (June 7, 2018), https://www.seattletimes.com/seattle-news/google-halts-political-ads-in-washington-state-as-disclosure-law-goes-into-effect/ [https://perma.cc/NYK3-L6SM]. Washington State is suing both Google and Facebook for running political ads without sufficient disclosure. Monica Nickelsburg, Washington State Affirms Rule Requiring Facebook and Google to Make Political Ad Disclosures, GeekWire (Nov. 29, 2018), https://www.geekwire.com/2018/washington-state-affirms-rule-requiring-facebook-google-make-political-ad-disclosures [https://perma.cc/DEP2-MVYJ].
See, e.g., Karen Yeung, Hypernudge: Big Data as a Mode of Regulation by Design, 20 Info., Comm. & Soc’y 118 (2017); Anthony Nadler et al., Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech, Data & Soc’y (2018), https://datasociety.net/wp-content/uploads/2018/10/DS_Digital_Influence_Machine.pdf [https://perma.cc/78S2-W8BN] (showing how platform advertising systems are “used to prioritize vulnerability over relevance”).
Daniel Susser et al., Technology, Autonomy, and Manipulation, Internet Pol’y Rev., June 2019, at 1, 4 (defining online manipulation as “intentionally and covertly influencing their decision-making, by targeting and exploiting their decision-making vulnerabilities”). See also Cass Sunstein, Fifty Shades of Manipulation, 1 J. Marketing Behav. 213, 216 (2016) (“I suggest that an effort to influence people’s choices counts as manipulative to the extent that it does not sufficiently engage or appeal to their capacity for reflection and deliberation.” (emphasis in original)); The Ethics of Manipulation, Stan. Encyclopedia of Phil. (Mar. 30, 2018), https://plato.stanford.edu/entries/ethics-manipulation/ [https://perma.cc/Z3WU-XB6A] (contrasting rational persuasion with manipulation); Tal Zarsky, Privacy and Manipulation in the Digital Age, 20 Theoretical Inquiries Law 157 (2019).
See Nadler et al., supra note 73, at 11 (identifying three stages in the “digital influence machine”: the development of “detailed consumer profiles,” “the capacity to target customized audiences, or publics, with strategic messaging across devices, channels, and contexts,” and “the capacity to automate and optimize tactical elements of influence campaigns, leveraging consumer data and real-time feedback to test and tweak key variables, including the composition of target publics and the timing, placement, and content of ad messages”).
See, e.g., Siva Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (2018).
Julie E. Cohen, Law for the Platform Economy, 51 U.C. Davis L. Rev. 133, 150 (2017).
Jonathan Albright, The #Election2016 Micro-Propaganda Machine, Medium (Nov. 18, 2016), https://medium.com/@d1gi/the-election2016-micro-propaganda-machine-383449cc1fba [https://perma.cc/FC97-6V5H]. See also Nadler et al., supra note 73, at 32 (“many of the most popular social media interfaces are designed in ways that favor the spread of content triggering quick, emotionally intense responses.”) (citing Kerry Jones et al., The Emotional Combinations That Make Stories Go Viral, Harv. Bus. Rev. (May 23, 2016), https://hbr.org/2016/05/research-the-link-between-feeling-in-control-and-viral-content [https://perma.cc/9TT5-794R]); Vaidhyanathan, supra note 76.
See, e.g., Ryan Calo, Digital Market Manipulation, 82 Geo. Wash. L. Rev. 995 (2014).
Dipayan Ghosh, Facebook’s Oversight Board Is Not Enough, Harv. Bus. Rev. (Oct. 16, 2019), https://hbr.org/2019/10/facebooks-oversight-board-is-not-enough [https://perma.cc/9D56-EWA8].
Mark Andrejevic, Privacy, Exploitation, and the Digital Enclosure, 1 Amsterdam L. F., no. 4, 2009, at 47, 53 (“the creation of an interactive realm wherein every action, interaction, and transaction generates information about itself”) (drawing on James Boyle, The Second Enclosure Movement and the Construction of the Public Domain, 66 Law & Contemp. Probs. 33 (2003)).
Declaration by the Comm. of Ministers on the Manipulative Capabilities of Algorithmic Processes, 1337th Mtng. (Feb. 13, 2019), https://search.coe.int/cm/pages/result_details.aspx?objectid=090000168092dd4b [https://perma.cc/7LTU-9CYP].
See Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (2018).
Dipayan Ghosh & Ben Scott, #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet, New America (Jan. 2018); Daniel Susser et al., Online Manipulation: Hidden Influences in a Digital World, 4 Geo. L. Tech. Rev. 1 (2020) [hereinafter Online Manipulation: Hidden Influences]; Alice Marwick & Rebecca Lewis, Media Manipulation and Disinformation Online, Data & Soc'y (2017), https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf [https://perma.cc/HPT2-AN6G]. See also Final Report of the High Level Expert Group on Fake News and Online Disinformation (Mar. 12, 2018), https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation [https://perma.cc/ZE3K-CY5A]; UK Sec'y of State for Digital, Culture, Media & Sport & the Sec'y of State for the Home Dep't, Online Harms White Paper (Apr. 19, 2019), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/793360/Online_Harms_White_Paper.pdf [https://perma.cc/6HEL-VM27]; Claire Wardle & Hossein Derakhshan, Information Disorder, Council of Eur. (Sept. 27, 2017), https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c [https://perma.cc/ZA8S-ZZYL].
Stigler Comm. on Digital Platforms, Final Report 145 (2019), https://research.chicagobooth.edu/-/media/research/stigler/pdfs/digital-platforms---committee-report---stigler-center.pdf [https://perma.cc/VN7R-ZJ7Y].
Rebecca Lewis, Alternative Influence: Broadcasting the Reactionary Right on YouTube, Data & Soc’y (2018), https://datasociety.net/wp-content/uploads/2018/09/DS_Alternative_Influence.pdf [https://perma.cc/9LQ3-NWBE]; Kevin Roose, The Making of a YouTube Radical, N.Y. Times (June 8, 2019), https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html [https://perma.cc/M9AJ-JCJZ]. But cf. Kevin Munger & Joseph Phillips, A Supply and Demand Framework for YouTube Politics (Oct. 1, 2019) (working paper) (on file at https://osf.io/73jys/ [https://perma.cc/36EK-5KQ2]) (casting doubt on the algorithmic radicalization theory that platforms create demand for disinformation, and suggesting instead that they simply supply existing demand).
Chengcheng Shao et al., The Spread of Low-Credibility Content by Social Bots, 9 Nature Comm., Nov. 20, 2018, at 1, 5 (“[B]ots are particularly active in amplifying fake news in the very early spreading moments.”).
See Kyle Langvardt, Habit Forming Technology, 88 Fordham L. Rev. 129, 145 (2019) (“After sinking [energy] into the product, the user becomes ‘internally triggered’ to come back and check on its performance: Who commented? What did they say? How many likes?”); Simon Parkin, Has Dopamine Got Us Hooked on Tech?, The Guardian (Mar. 4, 2018), https://www.theguardian.com/technology/2018/mar/04/has-dopamine-got-us-hooked-on-tech-facebook-apps-addiction [https://perma.cc/7UCT-MSBH]. See generally B.J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do (2003); Ctr. for Humane Tech., Technology is Downgrading Humanity: Let’s Reverse That Trend Now, https://humanetech.com/wp-content/uploads/2019/06/Technology-is-Downgrading-Humanity-Let%E2%80%99s-Reverse-That-Trend-Now-1.pdf [https://perma.cc/U2QD-BC9X].
See Langvardt, supra note 88, at 130–52.
James Grimmelmann, The Platform Is the Message, 2 Geo. L. Tech. Rev. 217, 227 (2018) (“[P]latforms tend to promote content that already has the characteristics that promote virality … With trending topics, this is explicit: these are topics that are already going viral (perhaps on a more limited scale). But even the Facebook News Feed and YouTube Suggested Videos are attempts to predict what will go viral most successfully in a user's network and amplify it with that user.”); Jean-Christophe Plantin et al., Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook, 20 New Media & Soc’y 293, 299–306 (2018). Facebook uses data collected from user activities “to tailor advertising and adjust newsfeed priorities, among other customizations to our personalized walled gardens.” Id. at 304.
Chris Lewis, Irresistible Apps: Motivational Design Patterns for Apps, Games, and Web-based Communities 99–110 (2014).
Id. at 6–7, 103–10; Zeynep Tufekci, YouTube, the Great Radicalizer, N.Y. Times (Mar. 10, 2018), https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html?searchResultPosition=2 [https://perma.cc/UT2F-AJRF].
Satwik Shukla & Tessa Lyons, Blocking Ads From Pages that Repeatedly Share False News, Facebook Newsroom (Aug. 28, 2017), https://newsroom.fb.com/news/2017/08/blocking-ads-from-pages-that-repeatedly-share-false-news/ [https://perma.cc/98UQ-8RYQ]. Two years later, Facebook refused to remove advertisements from the Trump presidential campaign that were widely considered to be misleading. Craig Timberg et al., A Facebook Policy Lets Politicians Lie in Ads, Leaving Democrats Fearing What Trump Will Do, Wash. Post (Oct. 10, 2019), https://www.washingtonpost.com/technology/2019/10/10/facebook-policy-political-speech-lets-politicians-lie-ads/ [https://perma.cc/836T-NY9S].
Nushin Rashidian et al., Friend and Foe: The Platform Press at the Heart of Journalism, Colum. Journalism Rev. (June 14, 2018), https://www.cjr.org/tow_center_reports/the-platform-press-at-the-heart-of-journalism.php [https://perma.cc/XL5A-23RE].
Mark Zuckerberg, Post announcing demotion of publisher content, Facebook (Jan. 11, 2018), https://www.facebook.com/zuck/posts/10104413015393571 [https://perma.cc/W7TJ-JWZG]. For the change’s impact on publishers, see Josh Constine, How Facebook Stole the News Business, TechCrunch (Feb. 3, 2018), https://techcrunch.com/2018/02/03/facebooks-siren-call/ [https://perma.cc/QP57-8J56].
See Casey Newton, A New Facebook News Tab Is Starting to Roll Out in the United States, The Verge (Oct. 25, 2019), https://www.theverge.com/2019/10/25/20930664/facebook-news-tab-launch-united-states-test [https://perma.cc/3BTP-3K5G].
See, e.g., Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598 (2018).
47 U.S.C. §230. See Ellen P. Goodman & Ryan Whittington, Section 230 of the Communications Decency Act and the Future of Online Speech, German Marshall Fund (August 2019), http://www.gmfus.org/sites/default/files/publications/pdf/Goodman%20%20Whittington%20-%20Section%20230%20paper%20-%209%20Aug.pdf [https://perma.cc/735B-R3PR].
But see Mike Ananny & Kate Crawford, Seeing Without Knowing: Limitations of the Transparency Ideal and its Application to Algorithmic Accountability, 20 New Media & Soc’y 973 (2018) (calling into question transparency as an effective policy lever for digital platforms).
See, e.g., Gordon Pennycook & David G. Rand, Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytic Thinking, J. of Personality (Mar. 31, 2019), https://onlinelibrary.wiley.com/doi/full/10.1111/jopy.12476 [https://perma.cc/7V78-DZPD]; Emily Thorson, Belief Echoes: The Persistent Effects of Corrected Misinformation, 33 Pol. Comm. 460 (2015).
Sam Levin, Facebook Told Advertisers It Can Identify Teens Feeling ‘Insecure’ and ‘Worthless’, The Guardian (May 1, 2017), https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens [https://perma.cc/U8W2-YKVM].
See, e.g., Nicolas P. Suzor et al., What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation, 13 Int'l J. of Comm. 1526 (2019). See also Grimmelmann, supra note 90; Vaidhyanathan, supra note 76.
Digital, Culture, Media and Sport Committee, Disinformation and ‘Fake News’: Final Report 2017-19, HC 1791, at 59 (UK), https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf [https://perma.cc/UYJ4-SEC7] [hereinafter UK Fake News Report].
French Sec’y of State for Dig. Affairs, Regulation of Social Networks – Facebook Experiment 20 (May 2019), https://www.numerique.gouv.fr/uploads/Regulation-of-social-networks_Mission-report_ENG.pdf [https://perma.cc/2SLK-PURA].
Id.
UK Fake News Report, supra note 103, at 61.
Online Electioneering Transparency and Accountability Act, Md. Code Ann., Elec. Law § 13-405. Sites hosting online ads were required to disclose “an approximate description of the geographic locations where the [ad] was disseminated,” “an approximate description of the audience that received or was targeted to receive the [ad],” and “the total number of impressions generated by the [ad].” Id. § 13-405(c)(1)–(3).
Washington Post v. McManus, 944 F.3d 506 (4th Cir. 2019).
Some of these suggestions are proposed by Ranking Digital Rights as new indicators for safeguarding digital rights. Ranking Digital Rights, RDR Corporate Accountability Index: Draft Indicators (Oct. 2019), https://rankingdigitalrights.org/wp-content/uploads/2019/10/RDR-Index-Draft-Indicators_-Targeted-advertising-algorithms.pdf [https://perma.cc/MJ7S-UHHH]. See also Ellen P. Goodman & Karen Kornbluh, Bringing Truth to the Internet, Democracy, Summer 2019, https://democracyjournal.org/magazine/53/bringing-truth-to-the-internet/ [https://perma.cc/YK4F-ETET] (“Large platforms should be required to implement Know Your Customer procedures, similar to those implemented by banks, to ensure that advertisers are in fact giving the company accurate information, and the database should name funders of dark money groups rather than their opaque corporate names.”).
See Whitney Phillips, The Toxins We Carry, Colum. Journalism Rev., Fall 2019, at 53 (discussing efforts to increase facts in the information ecosystem).
See generally Pickard, supra note 22.
Bence Bago et al., Fake News, Fast and Slow: Deliberation Reduces Belief in False (But Not True) News Headlines, J. of Experimental Psychology (Jan. 9, 2020) (“[P]eople made fewer mistakes in judging the veracity of headlines and in particular were less likely to believe false claims when they deliberated, regardless of whether or not the headlines aligned with their ideology.”).
52 U.S.C. § 30120(a); 11 CFR § 110.11 (requiring disclaimers on certain content to identify who paid for the ad and, where applicable, whether the communication was authorized by a candidate). See also Disclaimers, Fraudulent Solicitations, Civil Penalties, and Personal Use of Campaign Funds, 67 Fed. Reg. 76962 (Dec. 13, 2002).
See, e.g., Lewis A. Grossman, FDA and the Rise of the Empowered Consumer, 66 Admin. L. Rev. 627, 631 (2014) (“A surfeit of information can overwhelm consumers, leading them to attend to it selectively or to ignore it altogether.”) (citing Barry Schwartz, The Paradox of Choice (2004)); Mario F. Teisl & Brian Roe, The Economics of Labeling: An Overview of Issues for Health and Environmental Disclosure, 27 Agric. & Resource Econ. Rev. 140, 148 (1998) (“increasing the amount of information on a label may actually make any given amount of information harder to extract”). See generally Omri Ben-Shahar & Carl E. Schneider, More Than You Wanted to Know: The Failure of Mandated Disclosure (2014); Archon Fung et al., Full Disclosure: The Perils and Promise of Transparency (2007).
Media reception literature explores how individuals filter information through preexisting epistemic constructs, leading them to ignore or recast mandatory disclosures. See Mark Fenster, The Opacity of Transparency, 91 Iowa L. Rev. 885, 930 (2006) (“At the moment a text ultimately has meaning for its audience, the receiver has decoded the text in a manner framed by individual social and cognitive structures of understanding that are in part determined by race, class, gender, educational background, and the like.”).
Susser et al., supra note 74, at 6.
William McGeveran, The Law of Friction, 2013 U. Chi. Legal F. 15, 51.
Dan Gillmor, We the Media: Grassroots Journalism by the People, For the People 187 (2005).
Yochai Benkler, A Free Irresponsible Press: WikiLeaks and the Battle over the Soul of the Networked Fourth Estate, 46 Harv. C.R.-C.L. L. Rev. 311 (2011).
Jane Jacobs, The Death and Life of Great American Cities 144 (1961) (“A mixture of uses, if it is to be sufficiently complex to sustain city safety, public contact and cross-use, needs an enormous diversity of ingredients.”).
Richard Sennett, Uses of Disorder 139–71 (1970).
Paul Ohm & Jonathan Frankle, Desirable Inefficiency, 70 Fla. L. Rev. 777, 782–83 (2018) (describing how stock exchanges run financial transactions through extra cable in order to slow them down, how Bitcoin requires “proof of work” to engender trust, and how the iPhone builds time delays into its passcode lock to increase security).
Jacob Kastrenakes, WhatsApp Limits Message Forwarding in Fight Against Misinformation, The Verge (Jan. 21, 2019), https://www.theverge.com/2019/1/21/18191455/whatsapp-forwarding-limit-five-messages-misinformation-battle [https://perma.cc/N5T4-YNA2].
Farhad Manjoo, Only You Can Prevent Dystopia, N.Y. Times (Jan. 1, 2020), https://www.nytimes.com/2020/01/01/opinion/social-media-2020.html [https://perma.cc/N4YH-VEAZ].
Kevin Arceneaux & Ryan J. Vander Wielen, Taming Intuition: How Reflection Minimizes Partisan Reasoning and Promotes Democratic Accountability (2017).
Ohm & Frankle, supra note 122, at 781 (“The IEX shoebox, which imposes an artificial delay on all communication, represents a strike against the single-minded law of efficiency.”).
The Social Media Addiction Reduction Technology (SMART) Act, S. 2314, 116th Cong. (2019).
Id. § 3.
See, e.g., Paul Ohm, Manipulation, Dark Patterns, and Evil Nudges, Jotwell (May 22, 2019), https://cyber.jotwell.com/manipulation-dark-patterns-and-evil-nudges/ [https://perma.cc/3JY9-FAHH] (reviewing Online Manipulation: Hidden Influences, supra note 84) (suggesting that manipulation could be added “as a third prohibited act alongside deception and unfairness in section five of the FTC Act”).
SEC, Investor Bulletin, Measures to Address Market Volatility (July 1, 2012), https://www.sec.gov/oiea/investor-alerts-bulletins/investor-alerts-circuitbreakersbulletinhtm.html [https://perma.cc/4QPA-8P3F].
Yong H. Kim & J. Jimmy Yang, What Makes Circuit Breakers Attractive to Financial Markets? A Survey, Fin. Markets, Institutions & Instruments, Aug. 2004, at 109, 121.
See, e.g., Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on Markets in Financial Instruments and Amending Directive 2002/92/EC and Directive 2011/61/EU, 2014 O.J. (L 173) 349 (EU). Indeed, European regulators went further to require that traders disclose algorithms and make available the data necessary to model trading flows. Tilen Čuk & Arnaud van Waeyenberge, European Legal Framework for Algorithmic and High Frequency Trading (Mifid 2 and MAR): A Global Approach to Managing the Risks of the Modern Trading Paradigm, 9 Eur. J. of Risk Reg. 146 (2018).
Donald Bernhardt & Marshall Eckblad, Stock Market Crash of 1987, Fed. Res. Hist. n.12 (Nov. 22, 2013), https://www.federalreservehistory.org/essays/stock_market_crash_of_1987 [https://perma.cc/LN29-ZBEQ].
Zarsky, supra note 74, at 186.
See, e.g., Omri Ben-Shahar, Data Pollution, J. of Legal Analysis, 2019, at 104.
See, e.g., Timothy Karr & Craig Aaron, Free Press, Beyond Fixing Facebook 24 (Feb. 2019), https://www.freepress.net/sites/default/files/2019-02/Beyond-Fixing-Facebook-Final_0.pdf [https://perma.cc/JWD5-LZTC].
Ellen P. Goodman is Professor of Law at Rutgers University and Co-Director and Co-Founder of the Rutgers Institute for Information Policy & Law (RIIPL).