The First Amendment offers broad protections for what individuals and companies in the United States can do, and leaves plenty of room for debate about what they should do. That tension shapes how content is regulated and moderated online. Scott Wilkens, senior counsel at the Knight First Amendment Institute at Columbia University, answered our questions about how the First Amendment is evolving in an increasingly digital world.
What is the biggest public misconception about how the First Amendment applies to digital spaces?
Outside legal circles, few people understand that the First Amendment protects social media companies’ decisions about what user content to publish. The companies generally have the right to moderate user content as they see fit, even when doing so limits the ability of the platforms’ users to express themselves online.
In your opinion, what principles should guide social media companies as they try to balance the right to free expression with the desire to control user-submitted content?
The social media companies are of course free to decide for themselves how to moderate their platforms, but we hope they do so in a way that serves the public interest. They should restrict the kind of harassment, intimidation, and hate that can silence some users, especially those from marginalized communities. They should make sure that the public can access and engage with the speech of their public officials. And they should stop amplifying the kind of disinformation that, while boosting engagement, undermines society’s search for truth and common ground.
States including Florida and Texas have passed laws that would penalize social media companies for content moderation decisions that legislators and governors perceived as hostile to conservatives. Enforcement has been blocked pending appeal, but litigation surrounding the laws raises important questions. Where is the line between legitimate regulations furthering government interests and rules unfairly impinging on First Amendment protections?
Florida and Texas say they have broad power to regulate content moderation decisions because those decisions are not protected by the First Amendment. The platforms argue that the government may never regulate content moderation because it is absolutely protected by the First Amendment. The line lies between these all-or-nothing positions. While the government generally cannot tell the platforms what content they must or must not publish, it has the power to impose narrowly drawn transparency, privacy, and due process regulations that serve First Amendment values.
Social media companies—like the internet itself—are global entities. Are there limitations to using the US Constitution’s First Amendment as the primary framework for talking about regulating free expression in the digital world?
It’s true that US companies’ content-moderation policies have been deeply influenced by American free speech norms, and that those norms have been deeply influenced by First Amendment doctrine. But the major social media companies are global, and the rules that make sense in the US don’t necessarily make sense everywhere else. After all, even other democracies conceive of “free speech” very differently than we do here in the US.
Section 230 of the Communications Decency Act has been getting a lot of attention in Congress and the courts. What do journalists and media executives need to know about it?
Although the Supreme Court has not yet interpreted Section 230, the lower federal courts have held that it broadly shields companies from lawsuits over the user content they publish online. This means, for example, that companies cannot be held liable for publishing user content that defames someone. In general, such lawsuits will be dismissed at a very early stage. But this legal shield is not available when a company helps to create the content at issue. If a company wrote the content, or worked with its users to write it, and then published it, it can expect to be held responsible for any fallout that ensues.
There is a case pending before the Supreme Court dealing with Section 230—Gonzalez v. Google LLC—that asks the court to rule on whether a platform that makes targeted recommendations can be held liable for the recommended content, even if that content was first published by someone else. What are the key issues to watch for in that case?
This is the first time the Supreme Court will interpret Section 230 since it was enacted over 25 years ago. The specific issue is whether Section 230 shields internet platforms from lawsuits for using recommendation algorithms to provide users with content that is most likely to be of interest to them. This is an enormously important question because recommendation algorithms are fundamental to the services that search engines and social media platforms provide to users. If the Supreme Court holds that algorithmic recommendations are not protected by Section 230, then search engines and social media platforms will be forced to radically change how they operate in order to avoid the risk of liability—changes that would have profoundly negative consequences for free speech online.
What emerging technologies pose the greatest challenge for protecting free expression in the digital future?
Spyware is becoming more and more powerful and is already being used by governments to closely monitor civil society groups and journalists. This surveillance technology will pose ever greater threats to the digital public sphere. We recently sued NSO Group, the maker of Pegasus spyware, on behalf of journalists at one of Latin America’s most important news organizations. Their case shows that spyware can have devastating consequences for investigative journalism.