The Knight First Amendment Institute invites submissions for its spring symposium, “Optimizing for What? Algorithmic Amplification and Society,” to be held at Columbia University on April 27-28, 2023. A discussion of the theme of the symposium is below, followed by logistical information for those who wish to participate.
Why This Symposium? Why Now?
Most online speech today is hosted on algorithmic platforms that are designed to optimize for engagement. Speech of every conceivable kind is carried by platforms: entertainment, news, politics, social movements, educational content, public health information, scholarship, religious content, sports, cultural products such as music and art, and commercial content about travel, restaurants, shopping, jobs, and more.
Algorithms are not neutral. Compared to nonalgorithmic systems (such as a chronological feed), they amplify some speech and suppress other speech. Platforms are “complex systems,” so amplification is an emergent and hard-to-predict effect of interactions between design and human behavior. Platform companies themselves have repeatedly failed to anticipate the effects of algorithm or design changes. Independent research is stymied both by the inherent difficulty of studying complex systems and by platform companies’ lack of transparency.
The logic of engagement optimization, while arguably well suited for entertainment, has come to profoundly reshape the distribution of other kinds of speech. Some effects are positive, such as the decreased power of gatekeepers in identifying new talent. On the other hand, algorithmic amplification and suppression exert a pervasive distorting effect on everything from the production and dissemination of science to the restaurant market. Each domain has its own notion of quality, refined over decades or centuries, but platform logic rewards unrelated factors. The algorithm-savvy are able to exploit the reach of platforms for their own purposes, while entities traditionally entrusted with information dissemination, such as public health agencies, are struggling to adapt to the new media environment.
This is an urgent topic. Platform companies are facing enormous competitive pressure to make algorithms even more central to their operations, the so-called “TikTokification of everything.” Algorithms are also playing a bigger role in content moderation, including through nontransparent methods such as downranking.
This symposium will bring together technologists with journalists, legal scholars, sociologists, psychologists, and others. It has two main goals: 1) to further the understanding of algorithmic amplification and distortion, and 2) to explore interventions, whether by platforms changing their algorithms and design or by institutions and individuals adapting to algorithm-mediated information propagation.
The two sets of questions listed below are of particular interest. That said, we are happy to consider any submission relevant to the provocation above. Scholars from any discipline are welcome to submit.
Algorithmic amplification and distortion. How can we define amplification in a way that’s coherent, measurable, and useful? What attributes of content and context lead to amplification on prominent platforms today, and how do algorithmic and design features of platforms favor different types of content? What strategies do interest groups use to amplify their reach?
How does algorithmic amplification shape or distort specific domains, markets, or facets of society? (We especially welcome case studies that go beyond well-studied areas such as politics to explore less-studied domains.) When are these distorting effects harmful? How prevalent are specific distorting effects, such as the “rich get richer” effect or demographic biases?
Interventions. How can platforms modify algorithms or design in order to minimize harmful amplifying or distorting effects? How should institutions whose norms around knowledge production and information sharing developed well before the advent of social media (like public health or science) adapt to the era of algorithm-mediated information propagation? What policy levers, such as transparency, are opened up by the perspective that algorithms are central to speech propagation and that their harmful effects manifest differently in different domains? What is the state of the art in algorithm audits, what are the barriers, and what types of access and transparency are most urgently needed?
Logistics, Dates, and Deadlines
The symposium will take place over two days. The first day, April 27, 2023, will consist of a private workshop at which authors of submitted papers will be invited to discuss and improve their papers. There will be a dinner for participants in the evening. The second day, April 28, 2023, will be a public event at Columbia University, open for both in-person and online attendance. It will offer a series of discussions on algorithmic amplification, featuring both authors who have submitted new work for this symposium and scholars presenting recently published papers.
We hope that the public event will be useful to many stakeholders, including policymakers and legal scholars aiming to sharpen their thinking on platform governance by engaging with how platforms operate under the hood; journalists aiming to explain complex concepts to readers and better hold platforms accountable; and technologists who may build the platforms of tomorrow. The talks will be accessible to those with no prior expertise; we welcome anyone who is concerned about the effects of platforms on their own lives or on the health of the public sphere to attend.
If you are interested in workshopping a paper, please send a 250-word abstract of your paper to Katy Glenn Bass at [email protected] by January 3, 2023. We intend to review all abstracts by January 15, 2023, with the goal of commissioning 6-12 papers.
Draft papers will be due April 17, 2023. For analytical papers, such as legal scholarship, we encourage you to keep papers short (5,000 words), but we will consider proposals for longer pieces as well. For empirical papers, work in progress is welcome, anywhere from a formal research plan to a paper in submission. We will circulate these drafts to all participants of the private workshop in advance of the symposium, which will take place on April 27-28, 2023, at Columbia University. Revised drafts will be due after the symposium. Final papers will be published on the Knight Institute’s website, and authors are free to pursue subsequent publication in a journal. Each author will receive an honorarium of $6,000 (divided among co-authors as needed). The Knight Institute will cover participants’ travel and hotel expenses.
Papers will be reviewed by Arvind Narayanan, Professor of Computer Science at Princeton University and Senior Visiting Research Scientist at the Knight Institute, and Katy Glenn Bass, Research Director of the Knight Institute, with the assistance of other Institute staff and scholars. For papers from disciplines outside our areas of expertise, we will solicit additional feedback from external reviewers as necessary.
Arvind Narayanan is a professor of computer science at Princeton University and was the Knight Institute’s visiting senior research scientist for 2022-2023.
Katy Glenn Bass is the Knight Institute’s research director.