This article is co-published with Just Security.
WASHINGTON, DC – January 6, 2021: Trump supporters riot around the U.S. Capitol before breaking into the building. Lev Radin/Shutterstock
In late March, we convened a working group of experts on social media, election integrity, extremism, and political violence to discuss the relationship between online platforms and election-related political violence. Our goal was to provide practical and effective recommendations on steps platforms can take to ensure their products do not contribute to the potential for political violence, particularly around the U.S. general election in November but also in elections around the world.
Today, we published a paper summarizing the working group’s consensus, titled “Preventing Political Violence through Technology: What Online Platforms Can Do to Avoid Contributing to Election-Related Violence.” We believe this issue is urgent given the current threat landscape in the United States. While relying on online platforms to “do the right thing” without proper regulation and business incentives may seem increasingly futile, we believe there is still an important role for independent experts to play in shaping the public debate and highlighting areas where these companies can act more responsibly.
Increasing signs of possible political violence
The attack on the U.S. Capitol on January 6, 2021, has significant implications for the 2024 election cycle. Former President Donald Trump and many Republican political elites continue to make false claims about the results of the 2020 election, claims that lay the groundwork for efforts to delegitimize the results of this November's vote.
But this rhetoric is just one of the potential catalysts for political violence in the United States this season. In a cover story on the subject this month, The New York Times noted that “a continuing undercurrent of violence and physical risk has become the new normal” across the country, with public officials and democratic institutions targeted in particular. A survey conducted by the Brennan Center this spring found that 38% of election officials have received violent threats. On top of this already threatening environment, there are conflicts over Israel-Gaza protests on college campuses and in major cities, potentially controversial developments in the former president’s various trials, and warnings from the FBI and Department of Homeland Security about potential threats to LGBTQ+ pride events this summer. The potential for political violence in the United States seems, unfortunately, to be increasing.
Ignoring technology platforms could make the situation worse
What role do online platforms play in this threat environment? It is unclear whether the big platforms are prepared to respond. Many platforms have rolled back moderation policies regarding false claims of election fraud, dismantled trust and safety teams, and appear to be inadvertently feeding the rising tide of threats against judges and election officials. These developments suggest that platforms are ignoring the lessons of the past few years, both in the United States and abroad. For example, two years after January 6, supporters of former Brazilian president Jair Bolsonaro used social media to organize and mobilize attacks on government institutions in Brasília. And a Center for American Progress investigation into the 2022 U.S. midterm elections concluded that “social media companies have again refused to address their complicity in stoking hatred and information chaos…With important exceptions, the companies again offered cosmetic changes and empty promises not backed by adequate personnel or resources.”
The failure of platforms to prepare for election violence suggests that 2024 could resemble 2020 in many ways. Ahead of that election, two of the authors (Eizenstat and Kreis) convened a working group of experts to outline what platforms needed to do to protect elections. Unfortunately, platforms largely ignored these and many other recommendations from independent researchers and civil society groups, including enforcing limits on voting misinformation for all users (including political leaders), clearly refuting election disinformation, and amplifying credible election information. The failure of platforms to adequately follow such recommendations helped create the conditions for January 6, as documented in the draft report on the role of social media in the storming of the Capitol prepared by the investigative staff of the House Select Committee on the January 6 Attack.
Recommendations
To avoid a similar outcome, we propose several steps that platforms can and should take to ensure they do not contribute to political violence. None of these recommendations is entirely new; in fact, many are consistent with numerous papers that scholars and civil society leaders have published over the years. But they are worth repeating, even with limited time left to implement them.
The full seven recommendations and details can be found in our report, but in general they focus on a few themes where online platforms are currently falling short:
- Platforms should develop robust standards for threat assessments and conduct scenario planning, crisis-response drills, and engagement with external stakeholders as transparently as possible.
- Platforms should implement clear, enforceable content moderation policies that protect election integrity year-round and proactively address election denialism and potential threats to election staff.
- Politicians and other politically influential figures should not receive exemptions from or special treatment under platforms’ content policies; platforms must apply their rules uniformly.
- Platforms should clearly explain significant content moderation decisions during election periods and be transparent, especially when moderating high-profile accounts.
During this election cycle, much of the discussion around accountability for technology has shifted to the question of what to do about deceptive uses of AI. But AI-generated content is still distributed primarily through online platforms, the same platforms where users spread the “Stop the Steal” narrative in 2020 and ultimately inspired those who engaged in political violence at the U.S. Capitol. We will continue to focus on these open questions in the hope that growing calls for accountability will encourage platforms to act more responsibly and prioritize the risks of political violence in the U.S. and abroad.