How the 'Stop the Steal' movement outwitted Facebook ahead of the Jan. 6 insurrection

A protester unleashes a smoke grenade in front of the U.S. Capitol building on Jan. 6, 2021. (Bloomberg via Getty Images)

Hours after polls closed on Nov. 3, angry Donald Trump supporters on Facebook coalesced around a rallying cry now synonymous with the siege on the U.S. Capitol: "Stop the Steal."

Inside Facebook, employees were watching with concern.

The presidential election may have passed without major incident, but in its wake, "angry vitriol and a slew of conspiracy theories" were brewing, Facebook staff wrote in an internal report on the Stop the Steal movement earlier this year. Supporters perpetuated the lie that the election had been stolen from then-President Donald Trump — a lie that Trump himself had been stoking for months.

By the time Facebook banned the first Stop the Steal group on Nov. 5, for falsely casting doubt on the legitimacy of the election and calling for violence, the group had already mushroomed to more than 360,000 members. Every hour, tens of thousands of people were joining.

Facebook removed the group from its platform. But that only sent Stop the Steal loyalists to other groups on Facebook filled with misinformation and claims the election was stolen. It was a classic game of whack-a-mole that Facebook tried but failed to stay on top of. Droves of Trump fans and right-wing conspiracists had outwitted the world's largest social network.

In the days after the election, researchers at Facebook later noted, "almost all the fastest growing groups were Stop the Steal" affiliated — groups devoted to spreading falsehoods about the vote. Some even continued to use the name.

They were spreading at a pace that outstripped Facebook's ability to keep up, just as company insiders were feeling relief that election night did not devolve into civil unrest. There was no widespread foreign interference or hacking. These had been worst-case scenarios for Facebook. Avoiding them provided solace to the company, even though a pernicious movement was gathering momentum on the platform, something that would only become clear to Facebook after the Jan. 6 Capitol insurrection.

The Stop the Steal report, first reported by BuzzFeed News earlier this year, was included in disclosures made to the Securities and Exchange Commission by Facebook whistleblower Frances Haugen and provided to Congress in redacted form by Haugen's legal counsel. A consortium of news organizations, including NPR, has reviewed the redacted versions received by Congress. Some of the documents have also been reported on by the Wall Street Journal. NPR interviewed experts and former Facebook employees to shed light on the thousands of pages of internal research, discussions and other material.

Facebook rolled out "break the glass" measures for the election

As Facebook prepared for the 2020 election, it consulted its emergency playbook. Internally, staffers called these "break the glass measures" — a list of temporary interventions to keep its platform safe.

They included efforts to slow down the growth of political groups that could be vectors for misinformation and extremism. Facebook demoted posts and comments deemed likely to incite violence so that fewer people would see them. And the company designated the U.S. a "high risk location" so it could more aggressively delete harmful posts.

Facebook knew groups dedicated to politics and related issues — which it calls "civic groups" — presented particular risks, especially when it came to amplifying misinformation and growing more quickly than the company could control.

So ahead of the election, the company stopped suggesting groups it thought users might be interested in joining; it restricted the number of invitations people could send out each day; and in some cases, it put group administrators on the hook for making sure posts didn't break the rules, according to an internal spreadsheet describing the measures.

Despite these interventions, Facebook failed to curb the proliferation of the Stop the Steal movement. Inside the company, warnings about how the platform encouraged groups to grow quickly were getting louder. In its internal report, Facebook acknowledged something striking: It "helped incite the Capitol Insurrection" on Jan. 6.

In a statement on Friday, Facebook spokesman Andy Stone rejected the idea that Facebook bore responsibility for the Capitol siege.

"The responsibility for the violence that occurred on January 6 lies with those who attacked our Capitol and those who encouraged them. We took steps to limit content that sought to delegitimize the election, including labeling candidates' posts with the latest vote count after Mr. Trump prematurely declared victory, pausing new political advertising and removing the original #StopTheSteal Group in November," he said.

"After the violence at the Capitol erupted and as we saw continued attempts to organize events to dispute the outcome of the presidential election, we removed content with the phrase 'stop the steal' under our Coordinating Harm policy and suspended Trump from our platforms."

After this story was published on Friday, Facebook put out a blog post from Guy Rosen, its head of integrity, describing its efforts to protect the election and its aftermath.

But what unfolded on the platform and in Washington was especially disheartening to Haugen, a product manager on Facebook's civic integrity team, and to other members of the unit. That team, dedicated to tackling political misinformation and protecting elections around the world, was disbanded in early December.

Facebook's choices questioned

Haugen and other former employees whom NPR spoke with say the steps Facebook took around the election and the Capitol insurrection show just how much the company knows about the problems endemic to its platform — and how resistant it is to making changes that affect the growth it prizes above all else.

"The thing I think we should be discussing is, what choices did Facebook make to expose the public to greater risk than was necessary?" Haugen says. "We should ask who gets to resolve these tradeoffs between safety and Facebook's profits."

Haugen has filed at least eight complaints with the SEC alleging that Facebook violated U.S. securities law, including one saying that Facebook allegedly misled investors and the public about its role in the Jan. 6 Capitol riot.

While Facebook "has publicized its work to combat misinformation and violent extremism relating to the 2020 election and insurrection," that complaint said, the company "knew its algorithms and platforms promoted this type of harmful content, and it failed to deploy internally-recommended or lasting counter-measures."

But the picture that emerges from a review of the internal documents and interviews with former employees is murkier: Facebook did deploy many of its emergency measures. Some didn't work; others were temporary. After the insurrection, as Facebook and the country reeled from images of the besieged Capitol, employees on the company's internal message board blasted leadership for holding back efforts to make the platform safer.

Facebook, in its statement, said it considered "signals" on the platform and worked in collaboration with law enforcement prior to the election and after to decide what emergency steps to take.

"It is wrong to claim that these steps were the reason for January 6th — the measures we did need remained in place well into February," Stone said, adding that some steps, like not recommending political groups, are still in place.

Company researchers highlighted the risks of political groups for months

Inside Facebook, employees had been ringing the alarms for months. In February 2020, staff flagged private political groups as a "high" risk for spreading misinformation during the election.

That's because posts in private groups are viewable only by people who have been invited by a member, or approved by an administrator. Comments, links and photos in these groups operate in something of a Wild West and are not subject to Facebook's outside fact-checking program — a core part of the company's approach to keeping lies off its platform. Most political content in groups is seen in these private channels, the employees noted.

Internal research has looked at how Facebook's groups recommendations could quickly send users down partisan political rabbit holes. In a 2019 experiment called "Carol's Journey to QAnon," a Facebook employee created a test user named Carol Smith, a 41-year-old "conservative mom in the US south."

After setting up the account to follow mainstream conservative political news and humor, including pages for Fox News and Donald Trump, the researcher found Facebook's automated recommendations for groups and pages it thought Carol might like "devolved toward polarizing content." Within two days, Facebook served her recommendations for more partisan political groups, such as one called "Lock Hillary Up!!!!" Within a week, it suggested she follow a page promoting the baseless QAnon conspiracy.

Company researchers had also warned how the most problematic groups were fueled by supercharged growth. An August 2020 internal presentation, first reported by the Wall Street Journal, warned that 70% of the 100 most active U.S. civic groups were so rife with hate, bullying, harassment, misinformation and other rule violations that Facebook's systems were not supposed to recommend them to other users.

Right-wing groups kept growing and getting around the rules

Facebook removed more posts for violating its hate speech rules in one private Trump-supporting group than in any other U.S. group from June to August, the presentation noted. But groups that were punished for breaking the rules found it easy to reestablish themselves; administrators regularly set up alternate "recidivist groups" that they encouraged members to join in case Facebook shut them down.

The researchers found many of the most toxic civic groups were "growing really large, really fast," thanks in part to "mass inviters" sending out thousands of messages urging people to join.

So, as the election neared, Facebook put a new "break the glass" measure in place: It capped the number of people a group member could invite at 100 a day. As Stop the Steal flourished after the election, the company dropped that limit to 30.

"The groups were regardless able to grow substantially," the internal Stop the Steal report said. "These invites were dominated by a handful of super-inviters," the report concluded, with 30% of invitations coming from just 0.3% of group members — just as researchers had warned about back in August. Many were administrators of other Stop the Steal connected groups, "suggesting cooperation in growing the movement," the report said.

Stop the Steal organizers were also able to elude detection, the internal report said, by carefully choosing their words to evade Facebook's automatic systems that scan content and by posting to its disappearing "Stories" product, where posts vanish after a day.

Some "break the glass" measures did not last

While invitation limits were kept in place, other "break the glass" measures were turned off after the election. For example, the company dialed back an emergency fix that made it less likely users would see posts that Facebook's algorithms predicted might break its rules against violence, incitement and hate.

Haugen and other former employees say those election guardrails were taken away too soon. Many on the integrity team lobbied to keep the safeguards in place longer and even adopt some permanently, according to a former employee.

Facebook says it developed and implemented the "break glass" measures to keep potentially harmful content from spreading before it could be reviewed. But it also says those measures are blunt instruments with trade-offs affecting other users and posts that do not break its rules, so they are only suited for emergencies.

One "break the glass" intervention that some members of the integrity team thought should be made permanent slowed down "deep reshares" of political posts. That's Facebook lingo for a post that is shared so much it makes its way into a user's newsfeed even if they don't know or aren't connected to the person who made the original post.

In April 2019, a Facebook data scientist wrote in an internal report that controls on "deep reshares" would reduce political misinformation in links by 25% and cut in half the number of photos containing political misinformation spreading across the platform.

For example, a doctored video of Nancy Pelosi that distorted her voice so it sounded slurred as if she were drunk went viral on Facebook a week before the 2019 memo. Nearly half of the views, the researcher found, were due to deep reshares — people who shared a post with the video from a friend, who had shared it from another friend, and so on down the chain.

But while Facebook was willing to tamp down reshares temporarily, it resisted making a permanent change, according to a former employee.

"In rare circumstances we reduce how often people see any content that has been shared by a chain of two or more people," Stone, the Facebook spokesman, said. "While we have other systems that demote content that might violate our specific policies, like hate speech or nudity, this intervention reduces all content with equal strength. Because it is so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric, we use it sparingly."

Leadership resisted calls to do more in lead-up to Jan. 6, employees say

As the pro-Trump mob poured into the Capitol on Jan. 6, Facebook employees watched in horror, and the company scrambled to put back in place many of the "break the glass" measures that it had turned off soon after the election.

It also banned then-President Donald Trump for 24 hours (later extended to two years), a move some employees argued was too little too late.

"Do you genuinely think 24 hours is a meaningful ban?" one employee wrote in response to a post on the company's internal message board from Facebook's chief technology officer, Mike Schroepfer, on the day of the Capitol attack.

"How are we expected to ignore when leadership overrides research based policy decisions to better serve people like the groups inciting violence today," the employee wrote. "Rank and file workers have done their part to identify changes to improve our platform but have been actively held back."

"The atrophy occurs when people know how to circumvent our policies and we're too reactive to stay ahead," another employee lamented. "There were dozens of Stop the Steal groups active up until yesterday, and I doubt they minced words about their intentions."

It was only after the events of Jan. 6 and a wave of "Storm the Capitol" events across the country that Facebook realized it was dealing with a coordinated movement, the company's internal Stop the Steal report from March said.

The report concluded that there was a broader failure in Facebook's approach: The company focused on removing individual groups rather than recognizing the systemic way its growth-optimized mechanics enabled misinformation to emerge and flourish.

Its policies were built to root out "inauthentic behavior" — such as networks of fake accounts and Russian trolls impersonating Americans — but had little scope to confront "coordinated authentic harm" — that is, real people, using their real names, undermining confidence in American democracy.

"There's not a single meme or a single post that is going to necessarily cause somebody to bring their zip ties to the Capitol," said Lisa Kaplan, chief executive of the Alethea Group, a company that fights online misinformation and other threats. "It's a slow drip narrative that ultimately changes people's perceptions of reality."

Editor's note: Facebook is among NPR's recent financial supporters.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.
Bobby Allyn is a business reporter at NPR based in San Francisco. He covers technology and how Silicon Valley's largest companies are transforming how we live and reshaping society.