How the major social media companies responded to Jan. 6 — and are preparing for Inauguration Day

Over the four years of the Trump presidency, social media platforms generally took a soft line in enforcing their policies against threats and misinformation, allowing most borderline speech, including the president’s, to stand.
In the wake of a deadly riot at the U.S. Capitol aimed at disrupting the transfer of power, and ahead of an inauguration feared to provoke new attacks around the country, those same social media companies are taking a notably more aggressive approach.
Organized in Facebook groups and other online forums, the Jan. 6 riot was a wake-up call — for Silicon Valley, government officials and the public — that even euphemistic or ambiguous comments made online can fuel real-world violence.
Now, tech companies are on high alert. In the days following the insurrection, Twitter, Facebook, YouTube and other major platforms have imposed stricter measures and deployed new rationales for taking action. Besides suspending or permanently banning President Trump, they’ve also removed content undermining the integrity of the election results or calling for more attacks at the U.S. and state capitols.
“The tech companies have realized this is not an abstract question: These are very real threats to American democracy,” said James Grimmelmann, a professor at Cornell University who focuses on internet law. “They’ve drawn their line,” he said. “I see it as a meaningful new position.”
Tech leaders are also emboldened by the results of the election, Grimmelmann said, no longer having to worry about “vindictive reprisals from Trump and his allies.”
“They all had to go along, to some extent, or he’d drop something like the TikTok ban on them,” Grimmelmann said, referring to an executive order banning the Chinese-owned app. (The fate of that order is in limbo after repeated court-ordered postponements.) “Even if it was legally problematic, simply by having his power, he put serious threats on their businesses. They’re more protected from that now, so they feel more comfortable doing what they think is morally and legally right.”
Two days after the Capitol siege, Twitter banned Trump permanently “due to the risk of further incitement of violence.” That same day, Google announced that Parler, a Twitter alternative seen as a refuge for the extreme content barred by other platforms and as a possible haven for Trump, would no longer be available for download on its app store; Apple and Amazon followed suit, removing Parler from their stores.
On Friday, Facebook — which has suspended Trump’s account through the inauguration — said it was implementing two new measures to “further prevent people from trying to use our services to incite violence”: blocking the creation of new Facebook events near the White House, U.S. Capitol and state capitols through Inauguration Day, and restricting features for U.S. users who have repeatedly violated its policies.
Facebook also said Saturday that it would temporarily stop showing ads for military gear and gun accessories to users in the U.S. after BuzzFeed News reported such ads were being served to people who had viewed content about the Capitol riot.
Snapchat, Twitch and Instagram have also banned or suspended Trump’s accounts, and sites including Reddit, Shopify and Pinterest have removed or limited groups, online stores and hashtags related to him.
That the biggest social media platforms, which dragged their feet for years on enforcing existing rules and implementing additional safeguards, acted in concert “is not surprising,” said Tarleton Gillespie, a senior principal researcher at Microsoft Research.
“Herds respond similarly to real threats, and there is safety in numbers,” he said. “The Capitol riot is an undeniable signal of how dangerous things have become, and of how culpable these platforms may be.
“Once a few make the move, there’s political cover for others to make similar changes,” Gillespie said. Besides, no company wants the “risk of looking like the site that failed to act.”
Some technology industry watchers say the recent measures still fall short, amounting more to an acceleration of changes that were already underway than a fundamentally new approach.
“It’s not a sea change,” said Angelo Carusone, president of Media Matters for America, a nonprofit liberal media watchdog. “The efforts that they’re taking are significant, but they are mostly in the realm of mitigation or reducing some of the potential harms. Most of them totally avoid some of the root concerns; they’re not prevention-focused.”
Facebook in particular could be doing more, he said, pointing out that the world’s largest social network only suspended Trump for possibly as little as two weeks instead of banning him permanently despite repeated rule violations. Facebook Chief Operating Officer Sheryl Sandberg said last week that the company had no plans to reinstate his account.
“It was the lowest possible bar and even then they hedged,” Carusone said. “That to me really underscores what their posture is.”
Steven Renderos, executive director of the nonprofit MediaJustice, said Sandberg’s remarks that the Capitol riot was “largely organized” elsewhere showed that the company is “divorced from reality and still trying to deflect.”
“Internally, the company knows. They’ve known for a long time that toxicity exists on their platform,” Renderos said. “Yet their algorithms are tailor-made to amplify the content that drives the most engagement — and that’s the stuff that upsets people or outrages people.”
He was skeptical of Facebook’s efforts over the last week, accusing the Menlo Park, Calif., company of “trying to play the optics game.”
“Facebook makes a lot of decisions based on trying to win the headlines,” he said, “and not necessarily because it’s the right thing to do.”
But Grimmelmann, the Cornell law professor, said he believed the industrywide moves were well-intentioned and “likely to stick.”
“You rarely see the companies announcing new restrictions on speech and then backing off from them,” he said. “It’s hard to see them retreating.”