When Elon Musk fired Vijaya Gadde, Twitter’s head of legal, public policy, and trust and safety, at the start of an avalanche of firings and departures, the Tesla CEO’s actions were reminiscent of another automaker in crisis. After General Motors closed its Flint, Michigan, factories in 1997, it posted signs on the empty buildings: “Demolition Means Progress.” Even if Twitter limps along after its recent self-immolation, we can expect to see more and more signs of something like urban decay on the platform. As a scientist who has studied online safety and discrimination for more than a decade, I’m deeply worried about the safety risks of half-failing infrastructure.
Yet Twitter’s failure could also lead to something better. As we watch the platform crumble, we are experiencing what science and technology scholars call an “inversion”: failures that jolt us to recognize that infrastructure depends on people and institutions, not just technology.
By recognizing infrastructure for what it is, we plant the seeds of hope. If we simply shop for a replacement app for Twitter, we reduce the social fabric to a product to be traded and sold by data brokers. Instead we could understand our social networks more clearly: as a public good, like clean water, safe roads, or literacy, that confers broad benefits and grows in value as more people participate. Whether or not social media is run by a corporation, recognizing public goods means recognizing shared responsibility.
When residents of Flint learned eight years ago that their water was poisoned, they were also forced to learn the details of a complex system most people shouldn’t have to understand. That recognition comes with an urgent opportunity to imagine things differently. Corporate abandonment helped create the water crisis in Flint, but the crisis was also a failure of democracy. If we’re not careful, the same could happen to our social ecosystems.
How can we stop relying on the whims of billionaires to manage our digital environment’s equivalent of clean water? To start, we can recognize three basic kinds of institutions a livable social web needs: institutions for safety, facilitation, and research. Next, we can commit to listening to the people who are already doing the work, support their efforts, and, when needed, start our own institutions.
Healthy social media needs strong institutions focused on safety. From information security to child protection, we have delegated too much of this work to corporations. The good news is that an ecosystem of nonprofits has developed to coordinate, advocate for, and educate people on online safety. From the National Center for Missing and Exploited Children and Common Sense Media to the Cyber Civil Rights Initiative, dozens of organizations have already become critical infrastructure in online safety. New professional societies including the Trust & Safety Professionals Association and the Integrity Institute are also organizing tech employees who keep us safe. For the rest of us, Right To Be holds online bystander trainings that offer skills for digital safety.
Great social activities also need good facilitation, as any party organizer or parliamentarian knows. Invitations, decorations, introductions, social norms, conversation prompts, and conflict resolution all take skill and work. But tech firms have transformed facilitation, one of humanity’s most nuanced and uplifting skills, into a soul-crushing euphemism for censorship, selling us unreliable AI systems that can only tell us what not to say. Even worse, corporations have forced us to accept them as cut-rate arbiters of our social conflicts and collective futures.
The good news? Hundreds of thousands of volunteers and compensated clickworkers have cultivated deep expertise at facilitating online conversations, despite their mistreatment by companies. Many moderators from marginalized communities have transformed their experience of online violence into caring expertise at protecting and uplifting others. We should support those communities and listen to them. As we escape the burning wreckage of Twitter, we also can prevent future disasters by investing in the internet’s fire brigades and supporting their basic labor rights.
A flourishing digital environment depends on trustworthy research institutions. Reliable evidence helps us spot problems, hold power accountable, imagine a better world, and test our best ideas for achieving the common good.
But most social media research has been dependent on the whims of tech leaders for funding, data, and permission. Driven by profit, vast teams of scientists have invaded people’s privacy to influence market behavior rather than discover ways to make the internet safer, fairer, or more understanding. And when tech leaders find research inconvenient, they threaten researchers and try to suppress embarrassing evidence.
A new crop of institutions is creating the kind of independent research that a healthy internet needs. Journalists at news organizations like The Markup are investigating tech firms for discrimination. Citizen science labs like the Citizens and Technology Lab at Cornell, which I lead, are working alongside communities to measure and improve our digital environments—just like environmental groups that protect our waterways and our air. Communities like Inside Airbnb are building their own research capacities to monitor the impact of platforms on their neighborhoods. Long-standing organizations including Consumer Reports have also developed research labs committed to consumer protection online. This week also sees the launch of the Coalition for Independent Technology Research, an organization I helped start that works to advance, defend, and sustain the right to ethically study technology and society.
The final ingredient of a healthy internet is the kind of conflict that upholds stability. Since advertisers and influencers have different goals than governments or journalists, we will naturally disagree. That’s a good thing. As social scientists have found, successful management of public goods with competing interests depends on an ecosystem of institutions in conflict.
Let’s be honest with ourselves. Social media has been ailing since the beginning. Since the earliest days of the internet, it has not been reliably safe for women, people of color, or anyone facing exclusion or violence. Even before last week, firms had sidelined and fired many of the employees working for the common good. Twitter’s apparent collapse should wake us up to these facts, if only to inspire us to imagine better.
Demolition may not mean progress, but it does create an opportunity. As we look for what’s next after Twitter, let’s not fall for the very 20th-century mistake of believing that white flight will solve systemic problems or that cosmetic urban renewal will revive our illusions of former glory. Instead, we should dedicate our time and money to real solutions: the people and institutions that keep us safe, manage our messy and beautiful social worlds, and generate the knowledge needed for a better internet.
J. Nathan Matias is an assistant professor in the Department of Communication at Cornell University and a Knight Institute visiting associate research scholar.