Today, the Knight Institute is publishing the first essays from our April 2023 symposium, “Optimizing for What? Algorithmic Amplification and Society.” The symposium was the product of a year-long collaboration with Arvind Narayanan, the Institute’s 2022-23 Visiting Senior Research Scientist and a professor of computer science at Princeton, where he also directs the Center for Information Technology Policy.
We wanted to study algorithmic amplification in depth because almost every kind of speech and discourse online is now hosted on algorithmic platforms designed to optimize for engagement. As we wrote in the call for proposals:
Algorithms are not neutral. Compared to nonalgorithmic systems (such as a chronological feed), they amplify some speech and suppress other speech. Platforms are “complex systems,” so amplification is an emergent and hard-to-predict effect of interactions between design and human behavior. Platform companies themselves have repeatedly failed to anticipate the effects of algorithm or design changes. Independent research is stymied both by the inherent difficulty of studying complex systems and by platform companies’ lack of transparency.
The project focused specifically on the algorithms that power social media platforms’ content recommendation systems. Conversations about recommender systems, particularly in the law and policy arena, often fail to accurately describe how these systems operate, relying instead on shorthand references to “the algorithm” as the source of social media’s ills. We wanted to commission papers that would offer more precise explanations of how recommender systems work and propose interventions that would mitigate some of the harms caused by amplification, or allow us to take fuller advantage of its benefits—whether through platforms changing their algorithms and design, or through institutions and individuals adapting to algorithm-mediated information propagation.
We also wanted to engage with some normative questions about algorithmic recommenders: What should they optimize for? What does it mean to design a system to promote a healthier, or more just, public discourse? How can we promote human creativity and innovation in an age when platforms are designed to show us more of what (they think) we already like? Our hope is that these papers will facilitate a more deeply informed debate about these systems’ impact on society and how they could be improved.
The symposium brought together an outstanding group of scholars, nonprofit leaders, and technology industry professionals from a wide range of disciplines, including computer science, psychology, law, and philosophy. In the coming months, we will publish more than a dozen essays written by symposium participants, beginning with the two we are releasing today:
The Algorithmic Management of Polarization and Violence on Social Media
By Jonathan Stray, Ravi Iyer, and Helena Puig Larrauri
Social media is integrated into nearly every aspect of private and public life. How these platforms are designed profoundly affects what people believe and how people interact—both on- and offline. Critically, design choices regarding content moderation, recommendation, and engagement have the potential to escalate violence and destructive conflict.
Stray, Iyer, and Puig Larrauri draw on relevant theories, experiments, and the first-hand experiences of content creators, peacebuilders, and those living in conflict environments to better understand how platforms facilitate harassment, manipulation, and division. The authors conclude by proposing ways in which platforms can better monitor their impact and aid the transformation from destructive to constructive conflict by diverting user attention to certain types of content.
By Luca Belli and Marlena Wisniak
Understanding how social media platforms decide what content is shown, in what order, and in what form is key to promoting meaningful transparency for the public. Yet not enough is known about the tools platforms use to organize and display content online—from amplification to downranking to outright bans—or about the values and conversations driving these decisions. In this paper, Belli and Wisniak propose a set of metrics for evaluating recommendation algorithms. Called “nutrition labels,” these metrics could help ensure transparency and empower users to better understand how algorithms may be affecting them.
These essays, and those to come, will be available on the Algorithmic Amplification and Society home page of the Knight Institute’s website.
Katy Glenn Bass is the Knight Institute’s research director.