Introduction
Anonymity has emerged in recent years as an important focus of debates about the digital public sphere.
An opinion piece in The Wall Street Journal argued that a solution to the problems besetting social media was to “end anonymity.” Soon after, Senator John Kennedy announced he would introduce a bill to ban anonymity online. In the United Kingdom, anonymity also featured in the discussions about the Online Safety Act. Bills designed to curb anonymity are also frequent in Brazil, including, more recently, one introduced by the select committee investigating the Bolsonaro administration’s handling of the pandemic, as part of the recommendations in the committee’s final report, to punish those who engage in disinformation.

Supporters of proposals targeting anonymity sometimes argue that requiring users to make themselves known will remedy many of the pathologies afflicting the digital public sphere, including misinformation. Identification is seen as a tool for creating a more truth-based discourse, both by inducing speakers to behave more responsibly and by providing listeners with information to assess the credibility of the speaker. The assumption is often that anonymity promotes lies and incivility, while identification induces truth and civility.
Nathaniel Persily sums it up: “If online anonymity is the cause of many of the democracy-related ills of social media, then disclosure might be the best disinfectant.”

In fact, in an environment beset by political polarization, instead of serving as a disinfectant, identification can add fuel to the fire of mis- and disinformation. Not only that, anonymity can also play a role, one that has been underappreciated, in enabling public political deliberation. This paper surveys literature from multiple disciplines and challenges assumptions behind the prevailing stances toward anonymity and mis- and disinformation. It argues that anonymity and identification do not have a fixed function; it speaks instead of the plurality of identification and the plurality of anonymity. “Plurality” is meant to emphasize that both anonymity and identification shape and are shaped by factors such as social norms and platform affordances. As such, whether identification will contribute to a more truth-based public discourse and to a more civic-minded digital sphere is a question that can only be answered if we account for those factors. Considering the identity-based components of the spread of disinformation in polarized contexts, anonymity can serve as a device to create opportunities for conversation and avoid some of the mechanisms triggering those components.

A few notes on terminology and scope should be helpful. In the vernacular, anonymity stands for namelessness, yet conceptually it must be appreciated as going beyond names.
In fact, names are imperfect unique identifiers, as they can often be shared by more than one person. Identification, correspondingly, is not constrained to names. Identification and anonymity can be seen as “different poles of a continuum.” Anonymity is relational: Someone might have knowledge that allows them to identify a speaker, while another person might not.

This relational aspect can be particularly relevant when we are considering illegal content, where it is not just listeners who pass judgment on anonymous speech, but also authorities seeking to hold speakers accountable. That is, the audience not having knowledge that identifies a speaker (because their name is not unique or the speaker uses a pen name) can be a different issue from law enforcement and other officials being able to trace the speech.
Although the questions are connected, this paper will not discuss traceability. It will focus on identification with one kind of identifier, real names, as a lever that commentators and policymakers have turned to with the aspiration of governing legal speech. Combatting mis- and disinformation is one reason why commentators want to expand identification. One shared hope is that both speakers and listeners will be closer to the truth through real-name identification. That is the central concern of this paper.

Part I introduces the concept of the plurality of identification, which the paper uses to call attention to how real names operate differently on social media. Names, which were not previously employed as ubiquitously as they are now (e.g., full names in Facebook profiles), work in markedly transformed ways when they offer an index to massively aggregated, permanent information on every one of us that is accessible through social media and search engines. Calls for identification often rest on an assumption that real names instantiate the same identity regardless of the context in which they are displayed. This ignores the impact of context collapse—the flattening of different social contexts—in impelling individuals to perform their identity to an imagined, unspecified audience with which they engage much like micro-celebrities.

At the same time, anonymity is thought to prevent accountability by disconnecting us from drivers of norm-abiding behavior. Part II shows that this is only sometimes true—and introduces the plurality of anonymity. It surveys research establishing that anonymous settings may produce greater conformity to local, i.e., group-related, social norms (which may or may not be democratically desirable). The paper then argues that the impact of anonymity on user behavior depends on content moderation practices and community norms.
Part III consolidates those points and discusses the role of political polarization and the sharing of false information. Although it is commonly assumed that identification is a means of fostering veracity, as well as civility, this is often not the case. The paper explores findings from psychology and computational social science to argue that real names are part of mechanisms that drive misinformation in settings marked by affective polarization (negative attitudes toward the other party). Anonymity, conversely, has potential as a device for reducing polarization as well as creating opportunities for conversations not infected by those mechanisms.
This paper aims to add to a years-long debate about the place of anonymity in a healthy digital public sphere.
Much work has been done on the disproportionate effects flowing from real-name policies to marginalized communities, to individuals who have legitimate reason to fear for their safety in disclosing their real names, or to those whose names do not match their official government identification. Indeed, in 2011, the announcement of now-defunct Google Plus’s real-name policy prompted considerable backlash along those lines, leading to what were described as the Nymwars; in 2015, a new battlefront opened over changes in Facebook’s enforcement of its policies, which was met with opposition by a collection of civil society organizations gathered around the Nameless Coalition. Scholars have suggested that such concerns can be addressed in specific cases and exceptionally, only “where anonymity is needed to avoid ‘threats, harassment, or reprisals,’” as Justice Scalia argued in McIntyre, a landmark case on the topic. My hope with this paper is to explore the role of anonymity and identification even beyond the risk of speech suppression and disproportionate effects.

I. The Plurality of Identification: Real names and context collapse
Mark Zuckerberg framed real names as the appropriate norm for online interaction when he claimed, in 2010, that “[h]aving two identities for yourself is an example of a lack of integrity.”
The notion seems to be: People hardly ever use assumed names in offline life, so why should they do things differently online?

By implementing a real-name policy for Facebook, and its 3 billion users worldwide, Zuckerberg lent credence to the notion that using real names is what is generally to be expected from people online. He made his assertion a kind of self-fulfilling prophecy.
Discussions of online anonymity often frame it as a deviation from established social norms
—a deviation that is justified by an individual’s legitimate fear of retaliation, or as a legitimate response to surveillance. However, despite the allure of the familiarity of names in a pre-internet, offline world, it is real-name policies that break with longstanding conventions. As Section A shows, the internet did not always presume real names.

Even when users adopt real names online, the online setting significantly alters the function those names play, as Section B explores. This is because those names get indexed on multiple social media platforms and search engines, the result being that users’ multiple audiences, representing a range of social interactions, are flattened into a single one.
This forces them to perform to their “most sensitive [audience] members: parents, partners, and bosses,” as if they were broadcasting for these networked audiences. Real-name identification in such a setting does not mean the same as it would in each social context; this evidences how identification is multifarious.

A. Before real names
In fact, the early days of computers and the internet were marked by identifiers other than real names. As Emily van der Nagel reports,
the earliest usernames were actually numbers: System administrators would assign individuals unique user identification numbers to distinguish their activities from those of others who shared the same computer (then owned only by institutions).

And although early email accounts were at first controlled by institutions, not individuals—institutions tended to use employees’ or students’ full names as their email identification (or a combination of initials and numbers)—as the commercial internet grew, service providers started offering personal email addresses for a fee. Once (institutional and financial) constraints on email creation disappeared, people began to choose usernames creatively, “play[ing] with numbers, nicknames, interests, in-jokes and cultural references.” Those creative email addresses were also a way to establish boundaries between work and personal life, which made more sense before connected portable devices (laptops and phones) eroded those divisions.

Pseudonyms were also a staple of social media precursors such as bulletin boards and IRC (internet relay chat) channels.
As early as the 1970s, people played with usernames at the Electronic Information Exchange System, a computer conferencing bulletin board, so they could adopt “a role in particular conferences, have the freedom to say things they would not want attributed to them or their organisation, signal that the discussion was not to be taken too seriously, and let newcomers experiment with sending messages on the board without fear of revealing their lack of skill in the medium.” Foundational work by Sherry Turkle, writing on the early days of the commercial internet, discussed how users in IRC channels had fluid identities and explored how this helped to create a space in which conventions around gender, age, and race could be redefined and transformed. Those hopes were not borne out as Turkle might have expected, in large part because “forms of discrimination such as racism and sexism are not solely based on appearance.”

The tendency of users to continue to rely on pseudonyms was a consequence of many features of the early internet, which Bernie Hogan discusses. First, pre-Web 2.0, user-generated content was generally text-based, and digital cameras and webcams were not yet widespread. As such, constructing a new identity required less effort. Second, because relatively few people used the internet, communities tended to be interest-based, not based on social ties. This meant there were few costs to using pseudonyms online. Third, the internet was still a mystery to many, even those who used it, and people were wary of exposing their “real-world” identities.

B. Real names in context collapse
The rise of social media altered many of these features of the internet. And because social media platforms were designed to link people to those they were already connected to offline in some way, it made sense for users to employ their real names when they used the platforms.
Indeed, it is worth remembering that Facebook was early on described as an online version of Harvard’s paper face books. Real names made sense for TheFacebook, just as pseudonyms made sense for other websites. Initially, Facebook was limited to the Harvard community; it would later be extended to other universities in the US. Still, it was a walled garden, with social norms appropriate for the context of that community.

An important change happened, however, when the platform became accessible to anyone with an email address. Users could now interact simultaneously with their high school friends, college colleagues, family, coworkers, and so on. This meant that users had no single set of social norms they could rely upon when communicating to these multiple audiences. The opening up of Facebook resulted, in other words, in context collapse, a term which stands for “[t]he lack of spatial, social, and temporal boundaries mak[ing] it difficult to maintain distinct social contexts.”

The consequence was that, although Facebook’s real-name policy stuck around, users’ real names no longer played the role they did offline. For one thing, users’ real names were now persistent and searchable: When users spoke online, their words were not only broadcast to everyone in their online network; they could be found and associated with them at any later point. With context collapse, users would be read by audiences they might not have expected. Attempting a joke after giving the barista one’s name entailed, at most, the risk of looking silly to a handful of people nearby—or the reward of drawing a few chuckles from them. With real-name social media accounts, the embarrassment goes much further, as does the comedy. This is not just a question of reach; it affects how users see themselves and what they post.
Alice Marwick and danah boyd have explored this transformation in how people interact online. They show how collapsed social contexts drive people to engage in the practices of “micro-celebrities,” much as on broadcast television, with the caveat that “unlike broadcast television, social media users are not professional image-makers.” To the extent that each social interaction enacts identity, the collapsing of contexts in social media means that users must present themselves to an imagined audience (who they think might consume their content) that does not share a set of norms regarding what is appropriate.

So, while sticking to real names might seem a continuation of established social practices, it is not, because internet affordances change how names operate socially. Our names are indexed, and with them our café encounters, our workplace banter, our relationships—in short, now that we are visible to all, we have to perform our identities for all those people, or pay the price for not doing so.
In light of that, we can see that pseudonyms in fact make sense online, because they allow people to navigate different contexts, and speak in different registers to different audiences.
This is not to say that real names on social media do not make sense. Billions of users found value in connecting to high school friends, distant family members, former coworkers, etc. The point we should be clear on is that real names online are not a continuation of our pre-digital practices. And, as Part I, Section A showed, the ensuing transformation is not directly a result of technological change. As Bernie Hogan notes, “[t]he real-name web is not a technology; it is a practice and a system of values.” The familiar appeal of using real names, therefore, rests on an inadequate understanding of how internet affordances changed what our names mean. The impact of attaching real names to our speech and actions varies, and this is how we can see the plurality of identification.

II. The Plurality of Anonymity: Norm conformity and the mediation of other affordances
Part I explored how the same form of identification can function differently according to the context. Real names have different implications in digital settings. Part III will explore how this variation frustrates the assumptions of commentators who put faith in identification to combat mis- and disinformation. This Part shows how anonymity can play a part in making behavior conform to social norms, a point that is often neglected. Section A introduces the theoretical model that describes how this works. Section B then transitions from theory to practice. It canvasses some of the ways anonymous communities work to shape identities around their aspirations and goals. Section C discusses quantitative research that has sought to understand the role of anonymity in the quality of online content by studying newspaper comment sections.
A. Anonymity does not mean absence of social norms
It is tempting to think of online anonymity as bringing out the worst in us. If users cannot be held accountable through their offline identities, the argument goes, then incentives to refrain from engaging in abusive behavior are removed, and only incentives to indulge in toxic disinhibition remain. In short, the idea is that when individuals are anonymous, they will flout social norms and behave badly. This tracks classic theories on deindividuation in social psychology.

This familiar view of the impact of anonymity has been challenged in recent decades by scholars in social psychology and communication studies who have developed the social identity model of deindividuation effects (SIDE). This model holds that, in many situations, “group immersion and anonymity le[a]d to greater conformity to specific (i.e., local) group norms, rather than to transgression of general prosocial norms, as deindividuation theory proposed.”
Contrary to classic deindividuation theory, which links the lack of identification with individuals acting in disdain for any social norms, the SIDE model predicts that, when group identity is salient, it will modulate the behavior of anonymous individuals.

Deindividuation theory would see the behavior of individuals in a crowd as irrational and anti-normative, reflecting a loss of identity and of the constraints of self-awareness. The SIDE model sees such behavior as a consequence of individuals conforming to group identity and local norms, acting in accordance with what that group finds normative. In a nutshell, where deindividuation theory “implies a loss of self in the group,” the SIDE model instead recognizes “the emergence of the group in the self”—when individuals perceive each other as “interchangeable group members.” Initially applied to text-based media, the model has been extended to other kinds of media as well (e.g., video-based). The SIDE model has been supported by multiple research findings.

So the notion that online anonymity entails a negation of identity and of any kind of social norms must be revised in light of research showing how, even in conditions of anonymity, identities are still intermediated by norms. We should be careful about what this means. It does not mean that group identity and the corresponding norms will always prevail. Which identity will be salient depends on a wide range of factors; the SIDE model does not say it always will be the group’s. Instead, it rejects a “blanket assumption that people will always act in line with individual self-interest when anonymous.”
It also does not mean that the resulting norms will guide group behavior toward positive social outcomes. Importantly, the norms here are local, i.e., those embraced by the group, and might be in tension with broader social norms or with the law.

Indeed, as noted, SIDE explains (instead of refuting or ignoring) how, in groups such as mobs, individuals can be guided toward extreme conduct. While one might think that the anonymity of the mob (i.e., the fact that individual behavior is less likely to be discerned) releases mob members from social norms, the reverse is often the case: Individuals are dragged along by the mass behavior because they have fused with the (destructive) group identity. The insight borne out by this framework is that this is not a result of the absence of social norms. It is rather the opposite: Groups can become more extreme than the aggregation of their members’ attitudes precisely because group identity exerts such an overwhelming force.
We turn now to 4chan and Reddit to see in practice how group identity can be shaped, with very different results.
The SIDE model shows that we should not assume that anonymity necessarily erodes the constraints of identity and social norms. Identity can play a part in anonymous settings, and identity performance is then not unlike what takes place in non-anonymous settings when we perform not just one but many roles (or, under context collapse, try to negotiate performing those identities for audiences with differing expectations). How we make decisions regarding identity performance in such circumstances is the result of the interplay of digital affordances and social norms, which are reciprocally shaped.
The outcome of this complicated function can affirm or undermine our democratic aspirations for the digital public sphere. The argument here is not that anonymity always yields valuable results. Instead, it is that the role of anonymity in that function is not linearly fixed. Commentators often talk as if it were.
To see how, we can consider platforms that allow users to be anonymous and where anonymity is the norm—and are still markedly different. 4chan and Reddit both enable users to post without any verification.
Users can employ multiple handles and create temporary accounts (which on Reddit are known as throwaway accounts), even one for each post they want to make; 4chan goes a step further and allows the same handle to be shared by multiple users, which is the norm. The two platforms fall roughly on the same extreme of the spectrum running from real-name verified accounts to no identification at all. In spite of that, the 4chan boards and Reddit subreddits that we will consider are starkly contrasting.

Reddit operates with federated community standards and moderation, with site-wide (or federal) policies and practices supplemented by more specific, community-built and enforced (local) subreddit rules. Site-wide policies and their enforcement were significantly stiffened after very visible incidents, particularly the use of the website for the non-consensual sharing of intimate images of celebrities, leading the platform to ban a community that had hosted much of the material. In 2015, Reddit announced an update to its harassment policy that culminated in the banning of “a fatphobic community [targeting] photographs and videos of overweight and/or obese persons.” Other subreddits were later banned, and the platform also started using quarantine as an enforcement instrument.

Once again, this federal level of policies and enforcement sits on top of the communities’ own, which can abide by stringent rules for eligibility to participate (sometimes by obtaining assurances of who the user is without checking any official or institutional forms of identification) and for the manner of participation. This shows that group identity is deliberately and fastidiously molded by the communities, which promulgate and patrol the model of behavior they have elected for themselves. That effort by communities sits within Reddit’s “karma” system and upvote and downvote mechanisms, which affect content visibility and which subreddits can, to an extent, wield as part of their governance strategies (e.g., by instructing users to use the downvote function to enforce community rules, not so much to signal their dislike of the content). There are also other platform affordances that subreddits can use and adjust to their needs, including automation tools for moderation and “flairs,” color tags that can be attached by moderators to both pieces of content and usernames (when displayed in that community). For instance, r/AskHistorians uses flairs as badges of community-verified expertise.

4chan, on the other hand, is decidedly not invested in that kind of meticulously manicured public forum. 4chan message boards such as /b/ are often described as “a well-known trolling stomping ground,”
notoriously accorded the distinction of being one of “the dark corners of the internet.” Like Gab and 8chan, 4chan “engage[s] in little or no moderation of the content posted.”

That might be taken to suggest that group identity and social norms do not play a role. Yet the opposite is true. Meaningful participation in such 4chan boards in fact requires intricate demonstrations of membership, which are designed to cordon off outsiders. These range from the digital equivalent of shibboleths (for instance, being able to post unusual Unicode characters), to particular slang, to a choreography involving sarcastic use of design features (such as “memeflags”), grasp of community tropes regarding current affairs, and textual and nontextual representations. Seasoned users explicitly tell the uninitiated to observe and assimilate the ways of the community. Mastery of social norms is persistently tested, and lack of familiarity prompts chastisement. Archetypes about members and unwanted participants are also upheld.

More specifically, the import from the SIDE model is that we should not assume that anonymity works the same on platforms such as 4chan and Reddit. The former does virtually no moderation; community norms are uncodified, and there is often apparent informal approval of abuse and harm toward out-group users. The latter, in contrast, operates with federated community standards and moderation, as described above. In terms of requiring and validating identifying information, the two might otherwise be seen as quite similar. Yet the differences are striking. While certain 4chan boards are often counted among “the dark corners of the internet,”
researchers have shown how subreddits are able to create vibrant forums for scholarly knowledge, parenting, and intimate content, among others. The SIDE model offers insight as to why: anonymity is employed with patently different goals—and outcomes.

C. Measuring the impact of anonymity: the role of content moderation
Research about the role of anonymity in the comment sections of newspaper websites has been prolific. It provides additional insight into anonymity by showing how forums that are neither interest-specific (like some subreddits) nor extremist (like some 4chan message boards) are affected by it.
Several studies seek to evaluate the role of anonymity by assessing discursive civility, which an influential study notes “has been defined as arguing the justice of one’s own view while admitting and respecting the justice of others’ views.”
Civility is not, of course, the only value that critics of online anonymity argue it threatens. Anonymity has also been linked to hate speech, actual threats, and harassment. Nevertheless, research on civility can help shed light on the extent to which anonymity drives people to behave without respect for social norms, which include, but are not limited to, disapproval of uncivil speech.

What, then, do studies on comment sections tell us about civility and anonymity? The evidence is mixed. A highly cited 2014 study compared 11 online newspapers and found that “over 53 percent of the anonymous comments were uncivil, while 28.7 percent of the non-anonymous comments were uncivil.” The same researcher more recently examined 30 outlets, with similar results. Yet competing explanations were not discussed, and so differences in the audiences of each website, as well as varying content moderation practices, could have interfered with the observed effects. Another study compared comments on The Washington Post website, which “afford[ed] users a relatively high level of anonymity,” with the newspaper’s Facebook page, finding the former had significantly more uncivil discussions than the latter. Yet again, other factors cannot be excluded, and it is plausible that the content moderation practices available to and deployed by Facebook in early 2013 were considerably more effective than those The Washington Post website could make use of. Conversely, a study comparing comments posted to newspaper websites and their respective Facebook pages in Brazil in 2016 (a period of considerable disruption that saw President Dilma Rousseff’s removal from office after her impeachment trial) identified no significant difference in terms of incivility and actually found more intolerance on Facebook.

Knustad and Johansson examined the toxicity of the comments sections of The New York Times and The Washington Post and assessed whether anonymous commenters were more toxic than non-anonymous commenters.
The outlets were selected for comparison because they are both “east-coast, national, fairly mainstream, left-leaning newspapers,” thus reducing “the likelihood of interfering variables, such as the affordances of different platforms, with different rules of conduct, moderation and different comment section cultures.” They found a “small or tiny” correlation between anonymity and toxic comments, but a much larger difference between the two publications: The Post had considerably more toxic comments than The Times. This led the researchers to conclude that “website is a stronger explanation for toxicity than anonymity alone.” The authors speculated that these results might be a product of different content moderation strategies, since both newspapers “have extensive community rules and guidelines that are linked to in the comment sections […] that reflect their desire for civil and well-informed comments, and neither allow personal attacks, vulgarity or off-topic comments,” noting that The Times uses machine learning software developed by Jigsaw, part of the Alphabet conglomerate. The researchers hypothesized The Times’ system might be “better at catching unwanted comments than the system used by The Washington Post,” which boasted about having its own, proprietary machine learning system.

Another potential factor is that The Times also banks on “NYT Picks,” which are selected by the moderators to showcase “high quality comments with exceptional insights that are highlighted in the commenting interface.”
A study found evidence of “the positive impact of highlighting desirable behaviors via NYT Picks to encourage a higher-quality communication in online comment communities.” The Post also highlighted comments, not for their quality, but to call attention to “[u]sers with direct involvement in a particular story.” The Times’ content moderation strategy of spotlighting quality contributions while taking advantage of design features might be an important factor in the differences found by research on anonymous comments.

This reaffirms the centrality of content moderation practices to understanding how anonymous communities work. Policies, strategies, and enforcement are crucial in governing the digital public sphere, and not just as assessed by, e.g., the volume or prevalence of infringing or abusive content. The point here is not that creative content moderation or more efficient systems can keep the anonymous vandals out. Indeed, we should not underestimate issues with automation in content moderation,
particularly with Perspective, the Jigsaw software which was adapted to create The Times’ Moderator. The point is instead that content moderation is a component in shaping the identity of those taking part in a particular digital forum, something that takes place even when identification is not required. Content moderation can do so by modeling positive behavior, as with the NYT Picks (or flairs in some subreddits, as seen in Part II, Section B), as well as by curbing unwelcome content and preventing users from being provoked into emulating it.
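To make the automation discussed above concrete, the sketch below shows how a comment platform might score submissions with Jigsaw’s Perspective API, the service underlying The Times’ Moderator tool. The endpoint and request shape follow Perspective’s public documentation, but this is a minimal illustration rather than any newspaper’s actual pipeline: the API key placeholder, the 0.8 threshold, and the triage logic are assumptions made for the example.

```python
import requests

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder; a real key is issued via Google Cloud


def toxicity_score(comment_text: str) -> float:
    """Ask Perspective to score a comment for toxicity (0.0 to 1.0)."""
    payload = {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(
        PERSPECTIVE_URL, params={"key": API_KEY}, json=payload, timeout=10
    )
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def triage(comment_text: str, threshold: float = 0.8) -> str:
    """Route a comment: publish it, or hold it for human review.

    The threshold is illustrative; a newsroom would tune it against its
    own community rules and its tolerance for false positives.
    """
    score = toxicity_score(comment_text)
    return "hold_for_review" if score >= threshold else "publish"
```

Even a sketch this small makes the editorial choices visible: what counts as “toxic” is delegated to a model, and the threshold encodes the forum’s norms. That is precisely the sense in which content moderation shapes a forum’s identity, identification requirements or not.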
III. False Information, Polarization, and Identity

Identification is often seen as a means for better democratic deliberation. Anonymity is regarded as an abettor of lying.
By contrast, as “more information moves the market closer to truth,” identification makes for an improved marketplace of ideas, by equipping listeners with better information to form their judgment. Identification is taken “as a beneficial and purifying process,” through which “[t]he sense of being exposed to public view spurs us to engage in the actions of the person we would like to be.” “[C]ivil and dignified” discourse is also associated with identification, which furthermore upholds civic virtues needed for democratic decision-making.

The previous Part has shown that categorical statements such as those do not appreciate how anonymous settings can shape identities in different ways. Just as anonymity on Reddit and 4chan results in contrasting outcomes, we should not expect that anonymity will always undermine the democratic values with which commentators are concerned. Anonymity is not intrinsically inferior to identification. That is because the effects of anonymity and, as Part I showed, identification vary. This Part goes further than claiming that anonymity is no worse than identification. I will argue that, in fact, identification can be an agent in the pathologies afflicting social media, particularly dis- and misinformation.
To see how, we need to understand the real-world interplay of identity in its articulation with community and norms. Commentators have assumed that anonymity “facilitate[s] the kind of lying and misrepresentation that undercut a well-informed electorate” because “the speaker bares no cost for repeating lies and promoting false content.”
But research into political polarization paints a different picture. It tells us that, in affectively polarized settings, identified speakers can reap rewards from lies and inaccuracies, instead of being punished by their listeners.

In an affectively polarized landscape where hyperpartisans dominate social media, accuracy and truth do not exert the disciplining force that the commentary about identification assumes. In fact, users have been shown to sever the decision to share from their judgment on the truthfulness or falsity of the news. This indicates that it is not only anonymity but also identification that has been insufficiently conceptualized. The next sections will bring together findings from different strands of scholarly literature to explore the role that identification plays in the sharing of mis- and disinformation.

A. Identity and affective polarization
An important concern about the current state of the online landscape is political polarization, which has been described as “the greatest threat to American democracy”
and one of the “four horsemen of constitutional rot.” Social media has been blamed for reinforcing preexisting beliefs through repeated exposure to homogeneous viewpoints, which in turn further cements beliefs and insulates them from being challenged. It thus contributes to increasingly polarized politics, with each side of the divide living in its own “echo chamber,” according to a popular account of the issue.

In fact, the “echo chambers” theory of social media as a driver of polarization is quite controversial. Researchers have found little empirical support for the thesis, or have concluded that the claim is overstated.
One study found that modest monetary incentives may considerably dissipate the incorrect partisan beliefs about facts that participants report. And even when it comes to opinions, the echo chambers account might fail to consider how partisan attitudes toward policy positions are formed. For instance, in one study, participants voiced support for a policy that aligned with their perception of their party’s ideology, but expressed the contrary view when told the party’s actual stance favored the opposing policy. Furthermore, the echo chamber account of political polarization may dramatically underestimate the role of legacy media actors in driving the phenomenon.

Instead of issue-based, or ideological, polarization, many scholars are increasingly interested in the escalation of affective polarization, described as a “phenomenon of animosity between the parties.”
Affective polarization refers to the process whereby identities get sorted along a cultural divide that predicts where people buy food, what clothes they wear, and what shows they watch on TV. In short, rather than in policy positions (e.g., support for a proposed gun control measure), this kind of polarization manifests itself in more encompassing terms. The divide runs not only along partisan lines, but also along racial, religious, cultural, and geographic ones, all of which are increasingly conflated.

Affective polarization helps explain seemingly paradoxical results from a research intervention that was designed to decrease “echo chamber” insulation (and hence partisan distance). In that study, partisans were paid to follow a Twitter bot account that exposed them to opposing political ideologies.
If greater political polarization is understood as the result of social media reinforcing views and information, and not exposing partisans to different thinking, we would expect that participants who saw more cross-partisan content would hold less polarized attitudes. The study instead found that participants subsequently exhibited more partisan attitudes. The key to unraveling this paradox lies in understanding how identities are shaped on social media in a context of affective polarization.

The echo chamber account sees polarization as a consequence of insulation created by social media. Scholarship highlighting affective polarization instead frames it as being “driven by conflict rather than isolation.”
Exposure to cross-party content such as that offered by the study thus does not break echo chambers, argues the lead author of the study in subsequent work, because it does not breed reflection and deliberation. Rather, it “sharpen[s] the contrasts between ‘us’ and ‘them,’” magnifying affective polarization.

Chris Bail uses the metaphor of a prism to explain how social media plays an important role in shaping political identities through the reflection of a distorted image of society.
In a setting where affective polarization festers, extremists get validation and social support from denigrating the out-party, as well as from disciplining in-party members who stray from in-party views. This sort of behavior is then normalized both for moderates (who are given the impression that their views are less prevalent than they in fact are) and for extremists (who become further entrenched not just in their views but in their tactics).

Affective polarization points to identity, more than policy positions or information, as crucial to understanding political polarization on the internet. Platforms magnify feedback processes in the presentation of the self; they “enable us to make social comparisons with unprecedented scale and speed.”
That is, we can clearly see what sort of content gets positive engagement from other users, and what sort of content brings about the embarrassment of being ignored or the stress of being contested. Given that social media is now so ingrained in everyday life, straying from partisan expectations is very costly, socially and emotionally. This, Bail contends, creates a prism that distorts our sense of the environment, inducing us to see the partisan out-group as more extreme than it actually is, through the rewarding of radical partisan behavior and the silencing of moderate behavior. All of that culminates in “status seeking on social media creat[ing] a vicious cycle of political extremism.”

B. Performing Lies and Misinformation: Identification as a driver
So social media is a cog in a machine that rewards greater affective polarization. Platforms “do not isolate us from opposing ideas; au contraire, they throw us into a national political war.”
The prevalence of dis- and misinformation online must be understood against that background. Once we appreciate this, the connection between identification (particularly the kind established by real-name policies) and misinformation becomes clear.

There is increasing evidence that content employing “moral-emotional language” performs significantly better on social media. Moral psychologists use the term to refer to language that expresses both a moral judgment about what is right and wrong and an emotional state (such as “hate” or “contempt”).
Moral-emotional content shows a propensity to go viral online, in a process researchers have described as “moral contagion” given how “it mimics the spread of disease.” One study of over 500,000 tweets found a 20 percent increase in sharing for each word marked by that kind of language.
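Read as a multiplicative effect, as the study’s regression framing suggests, that estimate compounds quickly. The following back-of-the-envelope calculation is an illustration of that reading, not a figure reported by the study:

```latex
\frac{\mathbb{E}[\text{shares} \mid k]}{\mathbb{E}[\text{shares} \mid 0]}
  \approx (1.2)^{k},
\qquad \text{e.g., } (1.2)^{4} \approx 2.07
```

On this reading, a tweet containing four moral-emotional words would be expected to circulate roughly twice as widely as an otherwise comparable tweet containing none, which helps explain why campaigns gravitate toward such language.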
Disinformation campaigns have leveraged that viral propensity of moral-emotional content. A study that looked at news articles shared on Twitter concluded that false news (established as such through concurring assessments by fact-checking organizations) evoked more disgust than real news.

Research focused on moral outrage (a subcategory of moral-emotional language)
on social media has explored the mechanisms behind the virality of that sort of content. I want to foreground how identification is a part of those mechanisms.

One important component in expressing moral outrage is the reputational gain that can be reaped by signaling to the in-group that we care about serious moral violations.
This logic is valid both offline and online, but whereas expressing outrage at, for instance, how badly a fellow commuter was treated might typically get us credit with others waiting in the subway station, “doing so online instantly advertises your character to your entire social network and beyond,” as M. J. Crockett puts it. This line of research emphasizes how social media “is a context in which our political group identities are hypersalient,” which amplifies motivations to engage in moral-emotional expression to uphold the in-group against perceived out-group threats and to accrue in-group reputational gains. Status seeking (much like Bail described) is an important component of the spread of dis- and misinformation online.

When users are anonymous in online settings, they are less likely to express outrage,
as an important part of their underlying personal motivation is removed: “the need to maintain an image as a good group member in the eyes of other group members.” This is supported by related research on “online aggression” on a German petitions website, which found that non-anonymous users’ comments were more aggressive when engaging in firestorms against public officials and policies, given that aggressiveness would not be something they would want to conceal—on the contrary, they would want to be seen as standing up for their values.

My argument is that real names on social media significantly raise the stakes of the rewards for expressing moral outrage. Granted, pseudonymous users can benefit from reputational gains within a particular digital context. The reputational gains for identified users, however, can yield them material benefits, including offline. The favorable recognition they achieve might be translated, for instance, into media appearances in prestigious legacy publications or into professional opportunities. The fact that they can extract those tangible gains is a function of the affordances for identification on a given platform. A platform that disfavors identification, where users are not identified across different posts, like Yik Yak or Whisper, impedes users who might be willing to claim to be the authors of a viral piece of content; they will have trouble establishing themselves as the genuine posters—anyone would be able to fabricate a screenshot and try to get credit.
The entanglement between identification and moral outrage goes further than the rewards those expressing it online can garner. Moral outrage directed at a member of the out-group upholds in-group norms and thus also affirms group identity, to the detriment of the out-group.
In an affectively polarized setting, moral outrage is an assault on the opposing party and its political capital. Identification again is crucial here. Real names in a polarized setting will enable and invite users to try to establish whether the target of their outrage is a member of the opposing party. Digital encounters in identified settings then provide opportunities, particularly for hyperpartisans, to raid the opposing party at every flank where moral outrage can be expressed. Even if on a given platform users find insufficient cues about the potential targets for moral outrage (such as how they identify through their bios, their profile picture, likes, or follows), other information on the web can be found to try to infer party affiliation.

To be clear, this kind of antagonistic behavior is not exclusive to real-name settings; it can also take place with pseudonyms whenever there are sufficient cues for users to make inferences about others. Yet in an anonymous setting this mechanism is contingent on the norms: It depends, that is, on whether users wear their party affiliations openly or give them away inadvertently. In real-name social media, platform design makes this inescapable. As noted earlier, these in-group-oriented motivations extend to the sharing of fake news, regardless of whether the user “ha[s] a firm belief in” it, as research has found.
Indeed, one line of research highlights that whether people believe false information is separate from whether they condone it—“they recognize it as false, but give it a moral pass.”

And while it might be objected that moral outrage did not start with the internet, M. J. Crockett points to several factors explaining why outrage is amplified by social media.
It multiplies opportunities: There is evidence that in-person observation of violations of moral norms is uncommon. On platforms driven by user engagement, moral outrage is more likely to go viral. And while expressing moral outrage in person is costly (because many will shy away from confrontation or be intimidated by the risk of retaliation from the target of outrage, including with violence), online the costs are lower, and the corresponding positive feedback can be much more immediate. Again, such positive feedback for the individual translates into reputational gains, accrued in terms of virtue signaling to the in-group, which is a function of their identification. Once more, online identification with real names can yield different results compared to offline identification, as discussed in Part I.

While the discussion so far has emphasized the deleterious effects of moral outrage, the literature stresses that such emotional phenomena should not be viewed as intrinsically positive or negative, citing, for instance, the role outrage plays in propelling collective action around social inequality and injustice and in powering fundraising campaigns.
The point here is not to pass judgment on moral outrage but to note its part in the mechanisms that underpin the sharing of misinformation online and to highlight how identification magnifies those mechanisms.

Commentators see identification as beneficial because they believe users will behave better, refraining from toxic speech out of fear of how their actions online will impact their standing in their social circles.
What is generally not accounted for in that narrative is how real names on social media also impel users to perform their context-collapsed identities under a condition of affective polarization. The audience (composed of their friends, family, coworkers, and so on) is watching and will pass judgment on deviations from group loyalties. On real-name platforms, experimentation, self-questioning, and crossing the aisle to try to understand the other side come at a price. Posts and comments supporting the in-group are rewarded; content opposing the in-group will often lead to disciplining. Risks flowing from context-collapsed identities in social media have been described in terms of what users will or will not post. What we are considering here is how norm enforcement will effectively shape not only what users themselves post but also how they consume content by other users. In other words, both the content of the posts users share and how they read posts by others are in part a function of how identities are presented on a platform. This can create a vicious cycle. Conversely, these drivers can be prevented in certain anonymous settings. This is exactly what some researchers have been exploring, and it is the topic of Part III, Section C.

C. Anonymity as a depolarizing, discourse-enabling device
With polarization breaking records
and social media engulfed in a vicious cycle elicited by status seeking based on constant feedback from like-minded individuals, it might sound inane to participate in online communities in order to solicit views contradicting our beliefs on topics such as immigration, gender identity, and the disbandment of one of the main political parties in the U.S. Still, those are examples of conversations at r/ChangeMyView, a subreddit created in 2013 to serve as a venue where users deliberately invite challenges to their opinions.

The community operates within Reddit, which, as discussed above, requires no more than a username and password for account creation and employs a policy that allows and even encourages temporary or “throwaway” accounts.
It illustrates how anonymity, combined with platform design and content moderation strategies, can mold identity so as to help digital spaces overcome the afflictions plaguing much of social media.
at r/ChangeMyView the rules meticulously govern not just what can and cannot be posted but also how. They cover the text, the attitude, and the manner and effort of participation expected from users who submit issues to the community. The rules are accompanied by “indicators of violations,” which give more insight into how the rules are interpreted and applied. There are also rules for commenters, establishing, for example, that top-level comments (i.e., direct responses to the original poster, or OP) “must challenge or question at least one aspect of the submitted view,” whereas comments within a thread may express agreement with the OP.

Second, the subreddit leverages platform design to promote the community’s goals. The rules also set out criteria for when to award and when not to award deltas, which any user is able to do. Deltas are “a token of appreciation towards a user who helped tweak or reshape your opinion.”
Deltas are displayed as community badges within r/ChangeMyView. Like mainstream social media, then, the subreddit makes use of gamification strategies; unlike them, however, it does not optimize for user engagement, and instead leverages platform affordances to “celebrat[e] view changes, [which] is at the core of Change My View.”

Third, policy enforcement. Moderators are active and adopt a range of approaches to steer the subreddit.
An extensive set of “Moderation standards and practices” addresses “procedures for removing posts/comments, how bans are decided and implemented, how the six (6) month statute of limitations is applied for offenses, and how our appeal process works.” Policy enforcement is therefore also tailored to support the community’s goals, including providing explanations for post removals, adapting automation tools, and employing and modifying design features, such as flairs and the delta system.

The extent to which r/ChangeMyView actually vindicates its name is debated. Researchers conducted interviews with 15 participants, reporting users “typically did not change their view completely,”
even though they saw the community as useful. More importantly for affective polarization concerns, they found participants thought “posting on CMV helped them develop empathy towards users they earlier disagreed with.”

Another example of anonymity being put to use to achieve what real names could not is DiscussIt, a “mobile chat app [developed] to conduct a field experiment testing the impact of anonymous cross-party conversations on controversial topics.”
After recruiting 1,200 Democrats and Republicans, DiscussIt matched each of them with a participant from the opposing political party. Participants were issued an androgynous-sounding pseudonym to join a chat with question prompts asking for their views on either immigration or gun control, and they received notifications if they became non-responsive. Comparing surveys of participants who responded before and after the experiment, one of the authors says he is “cautiously optimistic about the power of anonymity,” as “many people expressed fewer negative attitudes to the other parties or subscribed less strongly to the stereotypes about them,” and “many others expressed more moderate views about political issues they discussed or social policies designed to address them.” The study reported findings of changes in sentiment toward opposing party members as well as in views on the issues discussed.

Experiences such as DiscussIt and r/ChangeMyView show us at least two ways that anonymity can be instrumental in creating a more vibrant digital public sphere. One is by attenuating affective polarization, as noted. This is in line with research suggesting that positive contact with the out-group can reduce affective polarization.
There is evidence that partisans have exaggerated perceptions of members of the opposing party, so that engaging in conversation with an actual, average Republican or Democrat can dispel stereotypes and reduce negative attitudes. Anonymous social media can create opportunities for cross-party interaction that do not take place on the battlegrounds of a “national political war,” and where reputation is not gained by scoring points against the opposing party with any available means. We have seen how affective polarization is connected with misinformation, so alleviating one could help with the other.

A further way anonymity can play a part in enacting more truth-based discourse is by enabling conversations that are grounded in facts and guided by what is generally expected of public deliberation—even if it does not move the needle on polarization. Anonymity can lower the stakes of engaging in what could otherwise be seen as heretical partisan equivocation that would be met with disciplining. It can as such facilitate hard conversations for which at least some users have a hunger.
These examples might be too hopeful if thought of as immediate prototypes for replacing Facebook or Twitter. It is true that interest and available time to invest in civic-minded exercises such as DiscussIt or r/ChangeMyView should not be presumed to be universal. Still, they are valuable in creating an alternative environment where people can have sincere conversations about topics that have become battlegrounds of the political divide. More importantly, both r/ChangeMyView and DiscussIt suggest an exciting path for re-engineering platforms, tweaking the levers to steer the digital environment toward democracy-empowering settings, and helping to allay the illnesses afflicting politics instead of exacerbating them. Importantly, both highlight “how the design of our platforms shapes the types of identities we create and the social status we seek.” Tinkering with anonymity and identification is thus an important component of potential experimentation with other social media affordances and should be a focus of attention when considering the other kinds of internet ecosystems that scholars such as Ethan Zuckerman have imagined.

Conclusion
The plurality of identification and the plurality of anonymity emerging from the study of networked communities hold cross-cutting insights. This paper begins the work of setting out what they entail. Identity is not fixed but perennially shapes, and is shaped by, group identity and norms, as Robert Post explicates.
Real names as used in a networked society are not equivalent to how real names work offline. Anonymity has been wrongly conceived as a marker of the absence of communal identity and of community norms. In fact, anonymity is an ingredient in establishing communities, mediated by other affordances, including design, norms, and practices. We should not understand anonymity as operating according to a uniform function. Identification is likewise mediated. This point has overlooked implications for the condition of the digital public sphere, revealing identification as a driver of political polarization and misinformation within a complicated, interconnected machinery. Rather than a piece in that machinery, anonymity may instead afford a disentangling device to respond to the pathologies of political discourse that have concerned commentators.
© 2023, Artur Pericles Lima Monteiro.
Cite as: Artur Pericles Lima Monteiro, Anonymity, Identity, and Lies, 23-13 Knight First Amend. Inst. (Dec. 5, 2023), https://knightcolumbia.org/content/anonymity-identity-and-lies [https://perma.cc/E5XX-SMTH].
Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 203 (2017); Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, 86 Fordham L. Rev. 401 (2017); Mary Anne Franks, Beyond the Public Square: Imagining Digital Democracy, Yale L.J. F. 427 (2021).
Andy Kessler, Online Speech Wars Are Here to Stay, Wall St. J. (Jan. 24, 2021), https://www.wsj.com/articles/online-speech-wars-are-here-to-stay-11611526491.
Mike Masnick, No, Getting Rid of Anonymity Will Not Fix Social Media; It Will Cause More Problems, techdirt (Feb. 1, 2021), https://www.techdirt.com/articles/20210131/01114246154/no-getting-rid-anonymity-will-not-fix-social-media-it-will-cause-more-problems.shtml.
Online Safety Act 2023. The final version creates a duty for providers falling under the most intense requirements (Category 1 services) to “offer all adult users of the service the option to verify their identity.” See id., § 64(1). The Act does not require providers to review official government identification for verification. See § 64(2). While not banning anonymity, the Act also requires providers to offer “features which adult users may use or apply if they wish to filter out non-verified users.” § 15(9). There had been calls for a stronger stance against anonymity, but the government decided to adopt a strategy it described as empowering users and striking a balance. See Nadine Dorries, New Plans to Protect People From Anonymous Trolls Online, Gov.UK (Feb. 22, 2022), https://www.gov.uk/government/news/new-plans-to-protect-people-from-anonymous-trolls-online.
See Senado Federal, CPI da Pandemia, Parecer No. 1, de 26 de outubro de 2021, 1150 (introducing a requirement that “providers of social networks” verify users’ identification including through the use of biometrical data and official taxpayer databases). Under the Brazilian Constitution, “anonymity is forbidden.” Constituição Federal [C.F.] [Constitution] art. 5, IV. What that clause entails is unclear and disputed. See Artur Pericles Lima Monteiro, Online Anonymity in Brazil: Identification and the Dignity in Wearing a Mask (2017) (arguing that neither the Constitution nor Brazilian statutory law create a general identification requirement, contrary to what is often stated).
McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 382 (1995) (Scalia, J., dissenting): “[…] a person who is required to put his name to a document is much less likely to lie than one who can lie anonymously […].” See also Nathaniel Persily, The Internet’s Challenge to Democracy: Framing the Problem and Assessing Reforms 16 (2020): “When it comes to elections, though, the unaccountable speech anonymity facilitates can promote division and deception that hinders the proper functioning of a democracy. It enables extremist voices that seek to undercut the legitimacy of the electoral process and basic constitutional values. Anonymity and pseudonymity (adopting an online persona other than one’s own) also facilitate the kind of lying and misrepresentation that undercut a well-informed electorate.”; see also Enrique Armijo, Meet the New Governors, Same as the Old Governors, in The Perilous Public Square: Structural Threats to Free Expression Today 352, 356–57 (David E. Pozen ed., 2019) (“Anonymity, at least as a First Amendment–informed design principle for communications networks, tends to result in a degraded expressive environment, not an improved one.”); Anne Wells Branscomb, Anonymity, Autonomy, and Accountability: Challenges to the First Amendment in Cyberspaces, 104 Yale L.J. 1639, 1645 (1995) (stating anonymity “strips users of the civility that the face-to-face encounter has engendered in most modern societies” and “facilitates the distribution of false information”).
Persily, supra note 6 at 41.
Because it argues that we must acknowledge that identity is not static, instead of using “identity disclosure” (which seems to imply there is only one identity), this paper prefers the term “identification” to refer to a particular form of identity manifestation. In the context of social media governance and misinformation, this usually refers to adopting real names as an identifier.
I thank Helen Norton for suggesting the phrase “the diversity of anonymity” at Yale Law School’s IX Freedom of Expression Scholars Conference. Reflection on that notion prompted this concept.
See Helen Nissenbaum, The Meaning of Anonymity in an Information Age, 15 Info. Soc'y 141, 141 (1999).
See Gary T. Marx, What’s in a Name? Some Reflections on the Sociology of Anonymity, 15 Info. Soc'y 99, 101 (1999).
Craig R. Scott & Stephen A. Rains, (Dis)connections in Anonymous Communication Theory: Exploring Conceptualizations of Anonymity in Communication Research, 44 Ann. Int'l Comm. Ass'n 1, 392 (2020).
See Kathleen A. Wallace, Anonymity, 1 Ethics & Info. Tech. 23, 24 (1999) (“Anonymity presupposes social relations. In other words, it is relative to social contexts in which one has the capacity to act, affect or be affected by others, or in which the knowledge or lack of knowledge of who a person is is [sic] relevant to their acting, affecting or being affected by others.”).
See Margot E. Kaminski, Real Masks and Real Name Policies: Applying Anti-Mask Case Law to Anonymous Online Speech, 23 Fordham Intell. Prop. Media & Entm’t L.J. 815, 877 (2013) (“Policies that prohibit anonymity apply to all layers of the communication stack: the individual cannot speak without self-identifying to everyone. Policies that address traceability do not mandate that an individual speak under his real name; instead, they require the individual to register identity with at least one party, so that if he commits a crime or a tort, law enforcement will be able to find him.”).
For a discussion of untraceable anonymity, see A. Michael Froomkin, From Anonymity to Identification, 1 J. Self-Regulation & Reg. 120 (2015).
This was one argument in Justice Scalia’s dissent in a leading precedent protecting anonymity. He noted that identification played a part in “promoting a civil and dignified level of campaign debate—which the State has no power to command, but ample power to encourage by such undemanding measures as a signature requirement [for campaign material].” 514 U.S. 334, 382 (1995) (Scalia, J., dissenting).
Seth Kreimer refers to this as “purification by publicity.” Seth F. Kreimer, Sunlight, Secrets and Scarlet Letters: The Tension Between Privacy and Disclosure in Constitutional Law, 140 U. Pa. L. Rev. 1, 89 (1991). See also Part III, infra, notes 104–111 and accompanying text.
See danah boyd, Social Network Sites as Networked Publics: Affordances, Dynamics, and Implications, in A Networked Self: Identity, Community, and Culture on Social Network Sites 39, 49 (Zizi Papacharissi ed., 2011) (describing context collapse).
For legal scholarship, see Jeff Kosseff, The United States of Anonymous: How the First Amendment Shaped Online Speech (2022) for a recent overview. For a theoretical discussion that also explores anonymity regulation beyond the U.S., see Eric Barendt, Anonymous Speech: Literature, Law and Politics (2016). See also, among others, Branscomb, supra note 6; Lee Tien, Who’s Afraid of Anonymous Speech? McIntyre and the Internet, 75 Or. L. Rev. 117 (1996); A. Michael Froomkin, Legal Issues in Anonymity and Pseudonymity, 15 Info. Soc’y 113 (1999); Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. Rev. 61 (2009); Lyrissa Barnett Lidsky & Thomas F. Cotter, Authorship, Audiences, and Anonymous Speech, 82 Notre Dame L. Rev. 1537 (2007); Kaminski, supra note 14; Rebecca Tushnet, The Yes Men and The Women Men Don’t See, in A World without Privacy (Austin Sarat ed., 2014); A. Michael Froomkin, Lessons Learned Too Well: Anonymity in a Time of Surveillance, 59 Ariz. L. Rev. 95 (2017).
See Eva Galperin, 2011 in Review: Nymwars, Deeplinks (2011), https://www.eff.org/deeplinks/2011/12/2011-review-nymwars. See also Jillian C. York, A Case for Pseudonyms, Deeplinks (2011), https://www.eff.org/deeplinks/2011/07/case-pseudonyms.
See Eva Galperin & Wafa Ben Hassine, Changes to Facebook’s “Real Names” Policy Still Don’t Fix the Problem, Deeplinks (2015), https://www.eff.org/deeplinks/2015/12/changes-facebooks-real-names-policy-still-dont-fix-problem.
514 U.S. 334, 385 (1995) (Scalia, J., dissenting) (citing NAACP v. Alabama ex rel. Patterson, 357 U. S. 449 (1958)). See Barendt, supra note 19 at 68-70, 80 (arguing that anonymity should only be protected in circumstances where “its value clearly outweighs the risks”); Helen Norton, Secrets, Lies, and Disclosure, 27 J.L. & Pol. 641, 646 (2012) (urging “that we add an inquiry into why speakers want to keep their identity secret to the factors that we consider when thinking about disclosure requirements' First Amendment autonomy implications.”).
Bernie Hogan, Pseudonyms and the Rise of the Real-Name Web, in A Companion to New Media Dynamics 290, 292 (John Hartley, Jean Burgess, & Axel Bruns eds., 2013).
danah boyd, The Politics of “Real Names,” 55 Comm. ACM 29, 30 (2012) (arguing that “real name” policies rely on an implicit notion of the role of names in offline interactions). For an analysis of the role of “real names” in the early days of Facebook and of statements by Mark Zuckerberg on authenticity, see Oliver L. Haimson & Anna Lauren Hoffmann, Constructing and Enforcing “Authentic” Identity Online: Facebook, Real Names, and Non-Normative Identities, 21 First Monday 6 (2016).
boyd, supra note 24 at 30 (discussing how names shift “how people relate online”).
Paul Bernal, The Internet, Warts and All 220–23 (2018).
Froomkin, supra note 15.
See infra Part II, Section B.
Alice E. Marwick & danah boyd, I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience, 13 New Media & Soc’y 114, 125 (2011).
Id. at 129.
Emily van der Nagel, From Usernames to Profiles: The Development of Pseudonymity in Internet Communication, 1 Internet Histories 312 (2017).
Id. at 315.
Id. at 316.
“Since 1988, IRC has allowed people to exchange text-based messages in dedicated channels modelled after citizens’ band radio, first within Finland, then across the global Internet.” Id. at 319. See also id. at 320–22 for user interface illustrations.
Sherry Turkle, Life on the Screen: Identity in the Age of the Internet (1995).
Alice E. Marwick, Online Identity, in A Companion to New Media Dynamics 355, 357–58 (John Hartley, Jean Burgess, & Axel Bruns eds., 2013).
See Hogan, supra note 23 at 292.
boyd, supra note 24 at 29–30.
Alan J. Tabak, Hundreds Register for New Facebook Website, Harv. Crimson (Feb. 9, 2004), https://www.thecrimson.com/article/2004/2/9/hundreds-register-for-new-facebook-website/.
See boyd, supra note 24 at 29: “At Harvard, Facebook’s launch signaled a safe, intimate alternative to the popular social network sites. People provided their names because they saw the site as an extension of campus life. […] As Facebook spread beyond college campuses, not all new users embraced the ‘real names’ norm. During the course of my research, I found that late teen adopters were far less likely to use their given name. Yet, although Facebook required compliance, it tended not to actively—or at least, publicly—enforce its policy.”
boyd, supra note 18 at 49.
Marwick & boyd, supra note 29 at 123.
See Tushnet, supra note 19 at 86-89.
Hogan, supra note 23 at 291.
See Felipe Vilanova et al., Deindividuation: From Le Bon to the Social Identity Model of Deindividuation Effects, 4 Cogent Psychol. 1308104 (2016).
S. D. Reicher, R. Spears, & T. Postmes, A Social Identity Model of Deindividuation Phenomena, 6 Eur. R. Soc. Psychol. 161 (1995).
Russell Spears, Social Identity Model of Deindividuation Effects 1, 2 (Patrick Rössler, Cynthia A. Hoffner, & Liesbet van Zoonen eds., 2017).
In experiments, group identity salience is often achieved by manipulating the cues available to participants, through design choices that represent them in terms of the identity researchers are trying to emphasize. This can be done, for instance, by user interfaces that provide only cues to make the group identity salient, instead of photos or names that would give participants information about each other. Identity salience is also manipulated by telling participants that they were selected because they share the same characteristics as other group members—because they are science students, as opposed to social science students (and vice versa), in one experiment. See Reicher, Spears, & Postmes, supra note 46 at 177–78 (discussing strategies for operationalizing identity salience).
For previous legal writing discussing deindividuation, see Katherine S. Williams, On-Line Anonymity, Deindividuation and Freedom of Expression and Privacy, 110 Penn St. L. Rev. 687, 693–97 (2006); Diane Rowland, Griping, Bitching and Speaking Your Mind: Defamation and Free Expression on the Internet, 110 Penn St. L. Rev. 519, 531–35 (2006); Julie Seaman, Hate Speech and Identity Politics: A Situationalist Proposal, 36 Fla. St. U. L. Rev. 99, 116–21 (2019).
Russell Spears & Tom Postmes, Group Identity, Social Influence, and Collective Action Online: Extensions and Applications of the SIDE Model, in The Handbook of the Psychology of Communication Technology 23, 27 (S. Shyam Sundar ed., 2015). Somewhat confusingly, despite the name “social identity model of deindividuation effects,” proponents of the SIDE model reject the notion of a deindividuated state (how deindividuation theory describes the lack of social regulation). See Spears & Postmes, id. at 29–30. Instead, they refer to the process through which group identity governs as “depersonalization.” Note, however, that the term does not imply that individuals are then not acting as persons, but instead that their actions are better explained by the impersonal perspective of the group.
Spears, supra note 47 at 3.
See Spears & Postmes, supra note 50 at 34–36 (discussing research beyond text-based media).
See Spears & Postmes, supra note 50 (reviewing evidence supporting the model). See also Guanxiong Huang & Kang Li, The Effect of Anonymity on Conformity to Group Norms in Online Contexts: A Meta-Analysis, 10 Int’l J. Comm. 398, 16 (2016) (meta-analysis reviewing 13 studies, concluding that “[The] result supports the SIDE model, such that anonymous individuals define their identities on a group level, and their behaviors are guided by the norms associated with their salient group memberships.”). This is not to say that the SIDE model has been definitively proven as true and that deindividuation theory has been abandoned. See Vilanova et al., supra note 45 (arguing the SIDE model does not replace but actually supplements deindividuation).
Spears & Postmes, supra note 50 at 32.
See id. at 25, for a brief overview of how social identity theory, on which the SIDE model builds, explains the “group polarization, in which group discussion results in group decisions that are more extreme (or ‘polarized’) than the mathematical average of individual group members’ attitudes.”
For Kyle Langvardt, “[t]he possibility of anonymous speech on the Internet, combined with the ease of ‘one to many’ communications, largely removes the normative and practical constraints that made content-shock rare in the twentieth century.” Kyle Langvardt, Regulating Online Content Moderation, 106 Geo. L.J. 1353, 1361 (2018); Saul Levmore argues “that one cost of Internet anonymity is that a successful site must monitor and censor in order to inhibit what might become overwhelming noise.” Saul Levmore, The Internet’s Anonymity Problem, in The Offensive Internet 50, 59 (Saul Levmore & Martha Nussbaum eds., 2010); Mary Anne Franks mentions anonymity as part of “many characteristics of virtual interactions [that] negatively impact communication and debate.” Franks, supra note 1 at 436.
On 4chan, “[t]here is no registration process or login required.” Lee Knuttila, User Unknown: 4chan, Anonymity and Contingency, 16 First Monday 10 (2011); Reddit “allows one-time use accounts that are easily created by signing up only with a new username, password, and CAPTCHA (even an email address is not required).” Alex Leavitt, “This Is a Throwaway Account”: Temporary Technical Identities and Perceptions of Anonymity in a Massive Online Community, Proc. 18th ACM Conf. on Comput. Supported Coop. Work & Soc. Computing 317, 320 (2015).
See Leavitt, supra note 57.
See Munmun De Choudhury & Sushovan De, Mental Health Discourse on Reddit: Self-Disclosure, Social Support, and Anonymity, 8 Proc. Int’l AAAI Conf. Web & Soc. Media 71, 78 (2014) (2014 study on Reddit finding 61% of the users in a selection of mental health subreddits had posted a single post or comment).
See Knuttila, supra note 57 (“the vast majority of posts fall under the default username: Anonymous.”).
See Shagun Jhaver et al., “Did You Suspect the Post Would Be Removed?”: Understanding User Reactions to Content Removals on Reddit, 3 Proc. ACM Hum.–Comput. Interaction 1, 5 (2019) (“First, there exists a user agreement and content policy similar to the terms and conditions of many websites. Second, a set of established rules defined by Reddit users, called [r]ediquette, guide site-wide behavior. Finally, many subreddits also have their own set of rules that exist alongside site-wide policy and lay out expectations about content posted on the community.”). See also Casey Fiesler et al., Reddit Rules! Characterizing an Ecosystem of Governance, Proc. 12th Int’l AAAI Conf. on Web & Soc. Media 72 (2018); Eshwar Chandrasekharan et al., The Internet’s Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales, 2 Proc. ACM Hum.–Comput. Interaction 1, 2 (2018).
See Julia R. DeCook, R/WatchRedditDie and the Politics of Reddit’s Bans and Quarantines, 6 Internet Histories 206, 212–13 (2022); Adrienne Massanari, #Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures, 19 New Media & Soc’y 329 (2017).
DeCook, supra note 62 at 212.
See id. at 212 (“[The August 2015] policy update also introduced the quarantine function of [R]eddit, where subreddits are not removed but are kept from reaching the front page and require a user to agree to view the content (effectively creating more friction to access the community).”).
See Emily van der Nagel, Embodied Verification: Linking Identities and Bodies on NSFW Reddit, in Mediated Interfaces: The Body on Social Media 47, 58 (2020) (describing verification on sexual exhibitionist subreddits “as an act that proves consent, as including a Reddit username, the date, and the name of the subreddit in a photograph with their body is a way of asserting the person posting their selfie took it with the intention of uploading it to Gonewild”); Emily van der Nagel, Faceless Bodies: Negotiating Technological and Cultural Codes on Reddit Gonewild, 10 Scan – J. Media Arts Culture (2013) (same); Tawfiq Ammari, Sarita Schoenebeck, & Daniel Romero, Self-Declared Throwaway Accounts on Reddit, 3 Proc. ACM Hum.–Comput. Interaction 1, 23–24 (2019) (noting that a subreddit for parents asks that those interested in joining provide a link to a post on Reddit corroborating the user has children as well as a picture displaying the handle of the user next to “items only new fathers would have,” such as a stroller or diapers).
Subreddit rules often govern not just what the community is about (generally a topic or interest), but also how users should engage. See Fiesler et al., supra note 61 (describing subreddit rules on formatting posts, links and outside content, off-topic content, low-quality content and others).
A mixed-methods large-scale study on subreddit rules found that subreddits commonly have rules that seek to model personality that is welcome (or unwelcome) in that community. See id. at 77 (Table 2, reporting 40.15% of the manually coded, qualitative sample of subreddits that had rules included rules on personality, as did 30.39% of the classifier-based, large-scale data set analysis). Tawfiq Ammari, Sarita Schoenebeck, and Daniel M. Romero, who have researched throwaway accounts used in parenting communities, “argue that throwaways provide parents with shared norms and expectations for sharing potentially stigmatizing experiences while still being embedded within their existing online community.” Ammari, Schoenebeck, & Romero, supra note 65 at 3.
See Tim Squirrell, Platform Dialectics: The Relationships Between Volunteer Moderators and End Users on Reddit, 21 New Media & Soc’y 1910, 1922 (2019): “the karma system … allows users to ‘vote’ on content (including ‘submissions’ – links, images, videos and text posts – and comments on these submissions) and influence its visibility to others. The net ‘score’ (‘upvotes’ minus ‘downvotes’) is displayed next to content, and a user’s overall karma from all their submissions and comments is displayed on their (relatively minimal) profile.”
See Sarah A. Gilbert, “I Run the World’s Largest Historical Outreach Project and It’s on a Cesspool of a Website.” Moderating a Public Scholarship Site on Reddit: A Case Study of r/AskHistorians, 4 Proc. ACM Hum.–Comput. Interaction 1, 4 (2020) (“The total number of votes, or karma, is used to determine what content is seen; although the exact algorithm that determines which posts will be promoted to users’ front page or r/all is proprietary, content that is highly upvoted rises to the top, while highly downvoted content is obfuscated.”). See also Squirrell, supra note 68 at 1922 (“The consequences of being downvoted are that users are less likely to accord a post credence, while also creating a ‘bandwagon’ effect, where more users pile in to downvote a post further. Worse, the comment will become invisible to many users: a user-adjustable setting hides posts below a certain point threshold until they are clicked upon.”).
See Squirrell, supra note 68 at 1922–23 (describing how two subreddits dedicated to self-improvement leverage this, and noting communities are constrained by platform-wide affordances and design choices).
See Shagun Jhaver et al., Human–Machine Collaboration for Content Regulation: The Case of Reddit Automoderator, 26 ACM Transactions on Computer-Human Interaction (TOCHI) 1 (2019); Lucas Wright, Automated Platform Governance Through Visibility and Scale: On the Transformational Power of Automoderator, 8 Soc. Media + Soc’y 205630512210770 (2022).
See Gilbert, supra note 69 at 6 (“A key feature of r/AskHistorians is its panel of experts. The panel system was established so that users could identify experts through the use of flair, a coloured line of text adjacent to the username. Those who want flair must provide evidence of their expertise by linking comments made in r/AskHistorians that demonstrate this expertise. Moderators review these submissions and either award flair or provide feedback on how a submission for flair could be improved.”).
Danielle Citron, Hate Crimes in Cyberspace 53 (2014).
Persily, supra note 6 at 21.
Richard Ashby Wilson & Molly K. Land, Hate Speech on Social Media: Content Moderation in Context, 52 Conn. L. Rev. 1029, 1046 (2021).
“To communicate high status in the community, most users tend to turn to textual, linguistic, and visual cues.” Michael Bernstein et al., 4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community, 5 Proc. Int’l AAAI Conf. Web & Soc. Media 50, 56 (2011).
“One example status signal in /b/ is the classic barrier for newcomers called ‘triforcing.’ Triforcing means leaving a post using Unicode to mimic the three-triangle icon of popular video game The Legend of Zelda […] Uninitiated users will then copy and paste an existing triforce into their reply. It will look like a correct triforce in the reply field; however, after posting, the alignment is wrong.” Id.
“Small gags morph into cultural punch lines and simple misspellings become new popular slang.” Knuttila, supra note 57 . “Simply writing in 4chan dialect is non-obvious to outsiders and in dialect writing serves as an entry-level signal of membership and status.” Bernstein et al., supra note 76 at 56.
“[E]ach post has a flag included to it. The default flag will indicate the country to which their IP address is located, labeled as ‘geographic location.’ The other identifiers present are randomly generated thread IDs, which are created for an individual user and persists only within a single thread. Users, however, have other flags available to them, chosen via drop-down menu before the reply is submitted. These alternative options are known as ‘memeflags,’ and represent ideologies and organizations such as LGBT (lesbian, gay, bisexual, and transgender), the United Nations, Nazi, and others.” Dillon Ludemann, Digital Semaphore: Political Discourse and Identity Negotiation Through 4chan’s /Pol/, New Media & Soc'y 2274, 2729 (2021).
See id. (investigating how users demonstrate membership in the 4chan message board /pol/).
“Lack of fluency is dismissed with the phrase ‘LURK MOAR,’ asking the poster to spend more time learning about the culture of the board.” Bernstein et al., supra note 76 at 56.
“This was identified by other users as a ‘shill’ post. In brief, a ‘shill’ in this context is a person who pretends to lean into a conspiracy to absurdity, often with the intention of discrediting the theory or to deter others from participating and can be considered as trolling to an extent. It is also the assumption that shills are being paid to post, and are frequently met with contempt by others, wherein isolating shills here and trolling them back have become political participation.” Ludemann, supra note 79 at 2735.
Persily, supra note 6 at 21. See supra notes 73–75 and accompanying text.
See supra notes 65–69 and accompanying text.
Arthur D. Santana, Virtuous or Vitriolic: The Effect of Anonymity on Civility in Online Newspaper Reader Comment Boards, 8 Journalism Practice 18, 21 (2014).
And neither is civility valuable in every given circumstance; it would be unwarranted to expect civility from those who are faced with abuse. Indeed, recent work has criticized the weight given to civility measures as a proxy for deliberative quality. See Patrícia Rossini, Beyond Incivility: Understanding Patterns of Uncivil and Intolerant Discourse in Online Political Talk, 49 Comm. Res. 399, 400 (2022). The point here is to discuss the incivility-inducing role attributed to anonymity. I make no normative claim about civility, but instead aim to show how those concerned with it would be wrong to assume anonymity is the culprit.
Santana, supra note 85 at 27.
Arthur D. Santana, Toward Quality Discourse: Measuring the Effect of User Identity in Commenting Forums, 40 Newspaper Res. J. 467 (2019).
Ian Rowe, Civility 2.0: A Comparative Analysis of Incivility in Online Political Discussion, 18 Info. Comm. Soc'y 121 (2014).
Rossini, supra note 86 at 416 (“[P]latform was not a significant predictor of incivility … . Differently than incivility, intolerance is more likely to be expressed on Facebook.”).
Magnus Knustad & Christer Johansson, Anonymity and Inhibition in Newspaper Comments, 12 Info. 106 (2021).
Id. at 6.
Id. at 12.
Id.
Bassey Etim, The Times Sharply Increases Articles Open for Comments, Using Google’s Technology, N.Y. Times (Jun. 13, 2017), https://www.nytimes.com/2017/06/13/insider/have-a-comment-leave-a-comment.html.
Knustad & Johansson, supra note 91 at 12.
The Washington Post Leverages Artificial Intelligence in Comment Moderation, Wash. Post (Jun. 22, 2017), https://www.washingtonpost.com/pr/wp/2017/06/22/the-washington-post-leverages-artificial-intelligence-in-comment-moderation/.
Deokgun Park et al., Supporting Comment Moderators in Identifying High Quality Online News Comments, Proc. 2016 CHI Conf. on Hum. Factors Computing Sys. 1114, 1114 (2016).
Yixue Wang & Nicholas Diakopoulos, Highlighting High-Quality Content as a Moderation Strategy: The Role of New York Times Picks in Comment Quality and Engagement, 4 ACM Transactions on Social Computing 1, 3 (2021). “Our findings include the following: (1) Picks are correlated with an improvement in first-time receivers’ next approved comment quality, with the quality boost associated with receiving a Pick attenuating after subsequent Picks; (2) receiving a Pick is associated with commenters early in their tenure on the site (i.e., within their first 2 approved comments) returning to the comment section more quickly to make their next comment; and (3) The quality of the visible commentary is positively associated with the quality of subsequent approved commentary. Exposure to Pick badges is also associated with subsequently writing higher-quality approved reply comments, though to a somewhat lesser degree compared to the impact of the quality of parent comments.” Wang & Diakopoulos, id. at 19.
The Washington Post Leverages Artificial Intelligence in Comment Moderation, supra note 97. It later announced “featured comments,” “picked by Post staff members to highlight thoughtful and diverse contributions to the discussion.” Community Rules, Wash. Post (Apr. 13, 2020), https://www.washingtonpost.com/discussions/2020/04/13/community-rules/.
See Hannah Bloch-Wehba, Automation in Moderation, 53 Cornell Int'l L.J. 42 (2020); James Grimmelmann, The Virtues of Moderation, 17 Yale J. L. & Tech. 42, 63–65 (2015).
See Matthew J. Salganik & Robin C. Lee, To Apply Machine Learning Responsibly, We Use It in Moderation, N.Y. Times (Apr. 30, 2020), https://open.nytimes.com/to-apply-machine-learning-responsibly-we-use-it-in-moderation-d001f49e0644 (discussing biases and limitations of the NYT software); Thiago Dias Oliva, Dennys Marcelo Antonialli, & Alessandra Gomes, Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online, 25 Sexuality & Culture 700 (2020) (finding Perspective to evaluate white nationalist speech as less toxic than drag queens’). See also Aaron Mendon-Plasek, Mechanized Significance and Machine Learning: Why It Became Thinkable and Preferable to Teach Machines to Judge the World, in The Cultural Life of Machine Learning 31, 34–36 (Jonathan Roberge & Michael Castelle eds., 2021) (discussing limitations in the approach that the developers of Perspective API adopted to create a toxicity classifier).
See Tushnet, supra note 19 at 108 (“Instead of focusing on names, online discourse would be better served by comment moderation or other forms of curation that can operate to serve similar purposes as norms of behavior in physical public spaces, where we likewise don’t usually know legal names but nonetheless generally expect certain constraints to hold.”). Note however that Tushnet emphasizes a contrast between pseudonymity and anonymity and sees a role for community building for the former. See id. at 84. The argument here is consistent with her points on persistent pseudonyms but expands them to anonymity. See Monteiro, supra note 5 at 77-81 for sources and a discussion of situated anonymity and anonymous intimacy in settings of non-persistent pseudonyms.
See 514 U.S. 334, 382 (1995) (Scalia, J., dissenting) (“I am sure … that a person who is required to put his name to a document is much less likely to lie than one who can lie anonymously.”). See also Persily, supra note 6 at 16 (“Anonymity and pseudonymity (adopting an online persona other than one’s own) also facilitate the kind of lying and misrepresentation that undercut a well-informed electorate. In the internet world, anonymous and pseudonymous speakers cannot be held to account for the truth of their electorally relevant statements. Consequently, the speaker bears no cost for repeating lies and promoting false content.”).
Kreimer, supra note 17 at 74.
Frederick Schauer, Anonymity and Authority, 27 J.L. & Pol. 597, 606 (2012) (“The identity of a speaker, and the signals about reliability that may be provided by knowing the speaker’s identity, are part and parcel of the content of what a speaker says and of how listeners evaluate it.”).
Kreimer, supra note 17 at 89 (describing arguments in favor of disclosure).
Id. at 92.
See 514 U.S. 334, 382 (1995) (Scalia, J., dissenting): “[T]he usefulness of a signing requirement lies not only in promoting observance of the law against campaign falsehoods (though that alone is enough to sustain it). It lies also in promoting a civil and dignified level of campaign debate—which the State has no power to command, but ample power to encourage by such undemanding measures as a signature requirement.” A Pew Research Center report on the results of “a large-scale canvassing of technology experts, scholars, corporate practitioners, and government leaders” found that “many … attributed [to anonymity] the enabling [of] bad behavior and facilitating [of] ‘uncivil discourse’ in shared online spaces.” Lee Rainie, Janna Anderson, & Jonathan Albright, The Future of Free Speech, Trolls, Anonymity and Fake News Online 3–4 (2017), https://www.pewresearch.org/internet/2017/03/29/the-future-of-free-speech-trolls-anonymity-and-fake-news-online/. Views on anonymity included: “People are snarky and awful online in large part because they can be anonymous, or because they don’t have to speak to other people face-to-face.” “Anonymity (or at least the illusion of it) feeds a negative culture of vitriol and abuse that stifles free speech online.” Rainie, Anderson, & Albright, id. at 36. “Increased anonymity coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities.” Rainie, Anderson, & Albright, id. at 8. See also Persily, supra note 6 at 16.
See Kreimer, supra note 17 at 101–02 (“First, publicity assures the quality of debate by acting as a check on the qualities of the debaters. … Second, publicity improves the character and judgement of the citizenry. Open participation in public life exercises and develops the virtue of courage.”). But see Kreimer, id. at 107 (arguing that such virtues associated with identification require “the concrete analysis of the situations in which claims of anonymity are exerted.”).
Persily, supra note 6 at 16.
Iyengar et al. describe affective polarization in terms of animosity between political parties. Shanto Iyengar et al., The Origins and Consequences of Affective Polarization in the United States, 22 Ann. R. Pol. Sci. 129, 130 (2019) (“Democrats and Republicans both say that the other party’s members are hypocritical, selfish, and closed-minded, and they are unwilling to socialize across party lines, or even to partner with opponents in a variety of other activities. This phenomenon of animosity between the parties is known as affective polarization.”). See also infra notes 123–125.
See, e.g., Chris Bail, Breaking the Social Media Prism 76 (2021) (citation omitted): “A 2019 report from Pew showed that a small group of people is responsible for most political content on Twitter. Specifically, this report found that ‘prolific political tweeters make up just 6% of all Twitter users but generate 20% of all tweets and 73% of the tweets mentioning national politics.’ What is more, extremists represented nearly half of all prolific tweeters. Though people with extreme views constitute about 6 percent of the U.S. population, the Pew report found that ‘55% of prolific political tweeters identify as very liberal or very conservative.’”
See Mathias Osmundsen et al., Partisan Polarization Is the Primary Psychological Motivation Behind Political Fake News Sharing on Twitter, 115 Am. Pol. Sci. Rev. 999, 1012 (2020): “From a partisan-motivated perspective, fake news is not categorically different from other sources of political information. … [P]artisans’ decisions to share both fake and real news sources depend on how politically useful they are in derogating the out-party.”
Gordon Pennycook & David G. Rand, The Psychology of Fake News, 25 Trends Cognitive Sci. 388, 6 (2021) (“… participants who were asked about the accuracy of a set of headlines rated true headlines as much more accurate than false headlines; but, when asked whether they would share the headlines, veracity had little impact on sharing intentions […].”); Gordon Pennycook et al., Shifting Attention to Accuracy Can Reduce Misinformation Online, 592 Nature 590 (2021).
Erwin Chemerinsky, False Speech and the First Amendment, 71 Okla. L. Rev. 1, 14 (2017).
See Jack Balkin, The Cycles of Constitutional Time 49 (2020): “There are four basic causes of constitutional rot—I call them the Four Horsemen of Constitutional Rot. The first is political polarization.” (Citation omitted.)
Cass Sunstein, #Republic (3d ed. 2018).
Pablo Barberá, Social Media, Echo Chambers, and Political Polarization, in Social Media and Democracy: The State of the Field, Prospects for Reform 34, 35–41 (Nathaniel Persily & Joshua A. Tucker eds., 2019); Andrew Guess et al., Avoiding the Echo Chamber About Echo Chambers: Why Selective Exposure to Like-Minded Political News Is Less Prevalent Than You Think (2018).
John G. Bullock et al., Partisan Bias in Factual Beliefs About Politics, Q.J. Pol. Sci. 519 (2015).
See Geoffrey L. Cohen, Party over Policy: The Dominating Impact of Group Influence on Political Beliefs, 85 J. Personality & Soc. Psychol. 808, 819 (2003) (summarizing the findings: “If information about the position of their party was absent, liberal and conservative undergraduates based their attitude on the objective content of the policy and its merit in light of long-held ideological beliefs. If information about the position of their party was available, however, participants assumed that position as their own regardless of the content of the policy.”).
Yochai Benkler, Robert Farris, & Hal Roberts, Network Propaganda 386 (2018) (“There is no echo chamber or filter-bubble effect that will inexorably take a society with a well-functioning public sphere and turn it into a shambles simply because the internet comes to town. The American online public sphere is a shambles because it was grafted onto a television and radio public sphere that was already deeply broken. Even here, those parts of the American public sphere that were not already in the grip of a propaganda feedback loop and under the influence of hyperpartisan media dedicated to a propagandist project did not develop such a structure as a result of the internet’s development.”).
See Iyengar et al., supra note 112 at 130: “Democrats and Republicans both say that the other party’s members are hypocritical, selfish, and closed-minded, and they are unwilling to socialize across party lines, or even to partner with opponents in a variety of other activities. This phenomenon of animosity between the parties is known as affective polarization.” See also Shanto Iyengar, Gaurav Sood, & Yphtach Lelkes, Affect, Not Ideology, 76 Pub. Op. Q. 405 (2012); Lilliana Mason, The Rise of Uncivil Agreement: Issue Versus Behavioral Polarization in the American Electorate, 57 Am. Behav. Scientist 140 (2013).
See Lilliana Mason, Losing Common Ground: Social Sorting and Polarization, 16 The Forum 47, 49 (2018): “[…] American partisans are speaking different languages, misunderstanding one another, and distrusting their fellow Americans on a basic level. Where Democrats and Republicans could at one time discuss last night’s television shows around the water cooler, today they are not only watching different shows, but they are also drinking different beverages.” See also Lilliana Mason, Uncivil Agreement: How Politics Became Our Identity (2018).
See Eli J. Finkel et al., Political Sectarianism in America, 370 Science 533, 535 (2020): “Compared to a few decades ago, Americans today are much more opposed to dating or marrying an opposing partisan; they are also wary of living near or working for one. They tend to discriminate, as when paying an opposing partisan less than a copartisan for identical job performance or recommending that an opposing partisan be denied a scholarship despite being the more qualified applicant.”
The authors “created a liberal Twitter bot and a conservative Twitter bot for each of our experiments. These bots retweeted messages randomly sampled from a list of 4,176 political Twitter accounts (e.g., elected officials, opinion leaders, media organizations, and nonprofit groups).” Christopher A. Bail et al., Exposure to Opposing Views on Social Media Can Increase Political Polarization, 115 Proc. Nat'l Acad. Sci. 9216, 9217 (2018).
The effect was not uniform across political lines: Democratic participants showed slight effects that were not statistically significant, while “Republicans, by contrast, exhibited substantially more conservative views.” Id. Importantly, “[e]xposing people to views of the other side did not make participants more moderate.” Bail, supra note 113 at 20.
Petter Törnberg, How Digital Media Drive Affective Polarization Through Partisan Sorting, 119 Proc. Nat'l Acad. Sci. e2207159119, 10 (2022).
See Bail, supra note 113.
Id. at 39.
See id. at 10 (introducing the social media prism).
See id. at 82–83: “[…] the social media prism makes the other side appear monolithic, unflinching, and unreasonable. While extremists captivate our attention, moderates can seem all but invisible.”
See id. at 66–67 (describing “extremism through the prism”).
Id. at 51.
See id. at 77: “Posting online about politics simply carries more risk than it’s worth. Such moderates [as opposed to with extremists] are keenly aware that what happens online can have important consequences off-line.”
See id. at 53. “Moderates disengage from politics on social media for several different reasons. Some do so after they are attacked by extremists. Others are so appalled by the breakdown in civility that they see little point to wading into the fray. Still others disengage because they worry that posting about politics might sacrifice the hard-fought status they’ve achieved in their off-line lives.” Id. at 83.
Törnberg, supra note 128 at 10.
See William J. Brady et al., Emotion Shapes the Diffusion of Moralized Content in Social Networks, 114 Proc. Nat'l Acad. Sci. 7313, 7313 (2017) (describing moral-emotional language). Note that “moral expression” is employed in the broadest possible terms, with reference to “what is perceived as ‘right’ and ‘wrong.’” Gun control is given as an example of “moralized content,” and contrasted with “a social-media message about cute kittens.” William J. Brady, M. J. Crockett, & Jay J. Van Bavel, The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online, 15 Perspectives on Psychol. Sci. 978, 978–79 (2020).
Brady et al., supra note 138 at 7313.
See id. at 7316: “Using a large sample of tweets concerning three polarizing issues (n = 563,312), the presence of moral-emotional words in messages increased their transmission by approximately 20% per word.”
“[M]oral and emotional appeals that capture attention can be exploited by disinformation profiteers, as in the case of fake news spread around the 2016 U.S. election[.]” Brady, Crockett, & Van Bavel, supra note 138 at 20.
“Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust.” Soroush Vosoughi, Deb Roy, & Sinan Aral, The Spread of True and False News Online, 359 Science 1146, 1146 (2018).
Moral outrage is defined as “an intense negative emotion combining anger and disgust triggered by a perception that someone violated a moral norm.” Jordan Carpenter et al., Political Polarization and Moral Outrage on Social Media, 52 Conn. L. Rev. 1107, 90 (2021).
M. J. Crockett, Moral Outrage in the Digital Age, 1 Nature Human Behav. 769, 770 (2017) (noting that “expressing moral outrage benefits individuals by signalling their moral quality to others”).
Id.
Brady, Crockett, & Van Bavel, supra note 138 at 989.
See id. at 985: “When out-group members pose threats to the moral values of the in-group, out-group derogation is a common in-group response to uphold a positive in-group image […]. In other words, condemning an out-group’s behavior makes one’s in-group appear better by comparison.”
Id. at 987: “[E]xpressing moral emotions that derogate the out-group or bolster the in-group can enhance one’s reputation and increase group belonging.”
See id.: “For example, in online settings people are more likely to express outrage toward policies they oppose when their identity is not anonymous, suggesting that the opportunity to signal to others should be associated with a greater likelihood of expressing outrage online” (emphasis in the original, citation omitted).
Id. at 995. “In other words, expressing moral emotions that derogate the out-group or bolster the in-group can enhance one’s reputation and increase group belonging.” Id.
See Katja Rost, Lea Stahel, & Bruno S. Frey, Digital Social Norm Enforcement: Online Firestorms in Social Media, 11 PLoS ONE 1, 2 (2016) (citations omitted): “In online firestorms, large amounts of critique, insulting comments, and swearwords against a person, organization, or group may be formed by, and propagated via, thousands or millions of people within hours. Social media enable these unleashed phenomena. They allow attacking everywhere at anytime with the potential for an unlimited audience.”
See id. at 17: “[O]nline anonymity does not promote online aggression in the context of online firestorms. There are no reasons for anonymity if people want to stand up for higher-order moral principles and if anonymity decreases the effectiveness of sanctions for norm enforcement.” See also Lea Stahel & Katja Rost, Angels and Devils of Digital Social Norm Enforcement, Proc. 8th Int’l Conf. on Soc. Media & Soc’y 17 1, 6 (2017): “[Users enforcing norms in online firestorms] comment more aggressively […] if they comment non-anonymously […].”
See Brady, Crockett, & Van Bavel, supra note 138 at 985: “When out-group members pose threats to the moral values of the in-group, out-group derogation is a common in-group response to uphold a positive in-group image […]. In other words, condemning an out-group’s behavior makes one’s in-group appear better by comparison.”
See Pennycook et al., supra note 115 at 594 (“[W]e found a dissociation between accuracy judgments and sharing intentions that suggests that people may share news that they do not necessarily have a firm belief in.”); see also Pennycook & Rand, supra note 115 at 6 (“[P]articipants who were asked about the accuracy of a set of headlines rated true headlines as much more accurate than false headlines; but, when asked whether they would share the headlines, veracity had little impact on sharing intentions […].”).
Daniel A. Effron & Beth Anne Helgason, The Moral Psychology of Misinformation: Why We Excuse Dishonesty in a Post-Truth World, 47 Current Op. Psychol. 101375, 1 (2022).
Crockett, supra note 144.
See Wilhelm Hofmann et al., Morality in Everyday Life, 345 Science 1340, 1341 (2014) (describing the results of a study in which participants reported their daily experiences; less than 5% of those reports were for witnessing or being the target of “immoral acts”).
See Crockett, supra note 144 at 770 (“Expressing moral outrage can be costly. Offline, moralistic punishment carries a risk of retaliation. But online social networks limit this risk.”).
See id. (“Of course, online social networks massively amplify the reputational benefits of outrage expression. While offline punishment signals your virtue only to whoever might be watching, doing so online instantly advertises your character to your entire social network and beyond. A single tweet with an initial audience of just a few hundred can quickly reach millions through viral sharing — and outrage fuels virality.”)
This should not be overstated. The claim here is not that signaling dynamics play no part in anonymous settings. In experiments involving one-shot interactions, after observing selfish behavior, some participants (who performed worse on cognitive tests) still reported (to experimenters) anger, a desire for punishment, and moral reprobation. Researchers suggest this is because of reputational heuristics, that is, the reputational stakes individuals assume are typically present (even when absent in a given setting). This was supported by the fact that participants who performed better on cognitive tests were less likely to act on moral outrage when punishing selfishness was costly to them. See Jillian J. Jordan & David G. Rand, Signaling When No One Is Watching: A Reputation Heuristics Account of Outrage and Punishment in One-Shot Anonymous Interactions, 118 J. Personality & Soc. Psychol. 57 (2019). Note, however, that this study did not examine settings dominated by political polarization.
Victoria L. Spring, C. Daryl Cameron, & Mina Cikara, The Upside of Outrage, 22 Trends Cognitive Sci. 1067 (2018); Victoria L. Spring, C. Daryl Cameron, & Mina Cikara, Asking Different Questions About Outrage: A Reply to Brady and Crockett, 23 Trends Cognitive Sci. 79 (2019).
See Persily, supra note 6 at 16 (“The norms of civility, the fears of retaliation and estrangement, as well as basic psychological dynamics of reciprocity that might deter some types of speech when the speaker and audience know each other – all are retarded when the speech is separated from the speaker, as it is online.”).
Marwick & boyd, supra note 29 at 122 (describing how context collapse “creates a lowest-common denominator effect,” where users will avoid topics they think may alienate their followers).
In 1960, about 5% of survey respondents stated they would be displeased if their child married someone from the opposing party. By 2010, roughly one-third of Republicans and half of Democrats expressed they were somewhat upset or very upset by the prospect. See Iyengar, Sood, & Lelkes, supra note 123 at 416–18. Ahead of the 2022 U.S. midterm elections, the Pew Research Center found that 62% of Republicans expressed very unfavorable views of Democrats, with Democrats reporting at 54%. That was up from 21% and 17% respectively in 1994. It also found all-time highs for respondents describing members of the other party as immoral, dishonest, unintelligent, lazy and closed-minded. See Pew Research Center, As Partisan Hostility Grows, Signs of Frustration with the Two-Party System (2022), https://www.pewresearch.org/politics/wp-content/uploads/sites/4/2022/08/PP_2022.09.08_partisan-hostility_REPORT.pdf (last visited Feb. 27, 2023).
See Shagun Jhaver, Pranil Vora, & Amy Bruckman, Designing for Civil Conversations: Lessons Learned from ChangeMyView 4 (2017) (providing examples of posts at r/ChangeMyView).
See Wiki, r/ChangeMyView, Reddit (2018), https://www.reddit.com/r/changemyview/wiki/ (last visited Feb. 27, 2023) (“What is /r/changemyview? […] CMV is the perfect place to post an opinion you’re open to changing.”). See also Chenhao Tan et al., Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-Faith Online Discussions, Proc. 25th Int’l World Wide Web Conf. 613, 613 (2016) (brief description of the subreddit by one of the first works on it); Jhaver, Vora, & Bruckman, supra note 165 at 3–4 (describing the community and quoting the subreddit’s creator as stating that “his goal behind creating CMV was not to facilitate debates but to motivate conversations that help users understand different perspectives.”).
See Leavitt, supra note 57 at 320 (describing throwaway accounts).
See Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1642 (2018) (quoting from an interview with former Facebook employees who developed the first set of policies stating “There are no ‘places’ in Facebook — there are just people with different nationalities, all interacting in many shared forums.”).
Rules, r/ChangeMyView, Reddit (2023), https://www.reddit.com/r/changemyview/wiki/rules/ (last visited Feb. 27, 2023).
Rule A for submissions states that original posters (“OPs”) must “explain the reasoning behind [their] view, not just what that view is,” and that this elaboration requires at least 500 characters. Id. Under Rule C, “Submission titles must adequately sum up your view and include ‘CMV:’ [for ‘change my view’] at the beginning.” Rule D specifies that “[p]osts cannot express a neutral stance, suggest harm against a specific person, be self-promotional, or discuss this subreddit.” Id.
Rule B insists that OPs must “personally hold the view and demonstrate” that they are “open to changing it.” Id.
Rule E requires the OP to be “willing to have a conversation” and “available to do so within 3 hours after posting.” Id.
For instance, a Rule E violation is assessed, resulting in removal of the post, if the OP does respond within the three-hour period yet engages in the conversation only with “[a] small number of one line responses that don’t address the arguments that people are making.” Id.
Id.
See Jhaver, Vora, & Bruckman, supra note 165 at 4 (“The community gamifies the process of changing the view of post submitters by implementing an award mechanism called the delta system”).
Rules, r/ChangeMyView, supra note 169.
See Jhaver, Vora, & Bruckman, supra note 165 at 6 (reporting that many participants interviewed by the authors “felt that a strict enforcement of [the] rules has been critical in maintaining the civil nature of conversations”).
Moderation Standards and Practices, r/ChangeMyView, Reddit (2023), https://www.reddit.com/r/changemyview/wiki/modstandards/ (last visited Feb. 27, 2023).
See Kumar Bhargav Srinivasan et al., Content Removal as a Moderation Strategy: Compliance and Other Outcomes in the ChangeMyView Community, 3 Proc. ACM Hum.–Comput. Interaction 1, 4 (2019) (providing an example of the notice explaining reasons for removal of a post).
See Chandrasekharan et al., supra note 61 at 22 (discussing the use of automated moderation tools by subreddits).
See Jisu Kim et al., Promoting Online Civility Through Platform Architecture, 1 J. Online Trust & Safety 15 (2022) (study on Nextdoor, a location-based social media platform, noting that “more civil interactions among users can be encouraged by altering the design and architecture of the online environment within which the interaction occurs”).
Jhaver, Vora, & Bruckman, supra note 165 .
Id.
Aidan Combs et al., Anonymous Cross-Party Conversations Can Decrease Political Polarization: A Field Experiment on a Mobile Chat Platform, SocArXiv 3 (2022), https://osf.io/preprints/socarxiv/cwgu5/.
See id. at 4–7 (discussing research design).
Bail, supra note 113 at 125.
See Combs et al., supra note 184 at 9 (discussing findings).
See Rachel Hartman et al., Interventions to Reduce Partisan Animosity, 6 Nature Human Behav. 1194, 1197–98 (2022) (citations omitted) (“A rich body of literature in social psychology details the positive effects of contact on intergroup relations across barriers related to race, ethnicity, religion and sexual orientation.”). See also James N. Druckman et al., (Mis)estimating Affective Polarization, 84 J. Pol. 1106 (2022).
See Samantha L. Moore-Berg et al., Exaggerated Meta-Perceptions Predict Intergroup Hostility Between American Political Partisans, 117 Proc. Nat'l Acad. Sci. 14864, 14871 (2020) (concluding that “the degree to which both parties think the other side dislikes and dehumanizes their own group is dramatically overestimated.”).
See James Fishkin et al., Is Deliberation an Antidote to Extreme Partisan Polarization? Reflections on “America in One Room,” 115 Am. Pol. Sci. Rev. 1464 (2021); Magdalena Wojcieszak & Benjamin R. Warner, Can Interparty Contact Reduce Affective Polarization? A Systematic Test of Different Forms of Intergroup Contact, 37 Pol. Comm. 789 (2020); Matthew S. Levendusky & Dominik A. Stecula, We Need to Talk: How Cross-Party Dialogue Reduces Affective Polarization (James N. Druckman ed., 2021).
Törnberg, supra note 128 at 10.
A mixed-methods study that combined interviews and a survey reported results suggesting “a hunger for hard conversations” and that anonymity was valued by users in facilitating those conversations (with one participant quoted as saying that “when you join a social network, […] you’re exposed and you have to watch the things you write, because they can be used against you,” although another said Reddit gave them “the feeling that I am arguing over nothing with nobodies”). See Amanda Baughan et al., Someone Is Wrong on the Internet: Having Hard Conversations in Online Spaces, 5 Proc. ACM Conf. on Hum.–Comput. Interaction 8–9 (2021).
See Joshua Tucker, Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing. By Chris Bail, 127 Am. J. Soc. 1685, 1687 (2022) (“I wonder about the extent to which DiscussIt-like platforms are really an alternative to social media platforms.”).
See Bail, supra note 113 at 132: “Needless to say, not everyone would use a platform where you gain status for bridging political divides.”
Id. at 128.
See Ethan Zuckerman, The Case for the Digital Public Infrastructure, 20-01 Knight First Amend. Inst. (Jan. 17, 2020), https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure.
See Robert C. Post, Constitutional Domains: Democracy, Community, Management 182 (1995): “We can define community, therefore, as a form of social organization that strives to establish an essential reciprocity between individual and social identity. Both are instantiated in social norms that are initially transmitted through processes of primary socialization and are thereafter continually reaffirmed through the transactions of everyday life.”
See Tushnet, supra note 19 at 108. See also supra note 103 and accompanying text.
Artur Pericles Lima Monteiro is a Lecturer and the Schmidt Visiting Scholar on Artificial Intelligence at the Yale Jackson School of Global Affairs. He is also a Resident Fellow with the Information Society Project at Yale Law School.