Twitter is Suspending More Than One Million Accounts Per Day in Latest Purge

Twitter has sharply escalated its battle against fake and suspicious accounts, suspending more than 1 million a day in recent months, a major shift to lessen the flow of disinformation on the platform, according to data obtained by The Washington Post.

The rate of account suspensions, which Twitter confirmed to The Post, has more than doubled since October, when the company revealed under congressional pressure how Russia used fake accounts to interfere in the U.S. presidential election. Twitter suspended more than 70 million accounts in May and June, and the pace has continued in July, according to the data.

The aggressive removal of unwanted accounts may result in a rare decline in the number of monthly users in the second quarter, which ended last week, according to a person familiar with the situation who was not authorized to speak. Twitter declined to comment on a possible decline in its user base.

Twitter’s growing campaign against bots and trolls – coming despite the risk to the company’s user growth – is part of the ongoing fallout from Russia’s disinformation offensive during the 2016 presidential campaign, when a St. Petersburg-based troll factory was able to use some of America’s most prominent technology platforms to deceive voters on a mass scale and exacerbate social and political tensions.

The extent of account suspensions, which has not previously been reported, is one of several recent moves by Twitter to limit the influence of people it says are abusing its platform. The changes, which were the subject of internal debate, reflect a philosophical shift for Twitter. Its executives long resisted policing misbehavior more aggressively, for a time even referring to themselves as “the free speech wing of the free speech party.”

Twitter’s vice president for trust and safety, Del Harvey, said in an interview this week that the company is changing the calculus between promoting public discourse and preserving safety. She added that Twitter only recently was able to dedicate the resources and develop the technical capabilities to target malicious behavior in this way.

“One of the biggest shifts is in how we think about balancing free expression versus the potential for free expression to chill someone else’s speech,” Harvey said. “Free expression doesn’t really mean much if people don’t feel safe.”

But Twitter’s increased suspensions also throw into question its estimate that fewer than 5 percent of its active users are fake or involved in spam, and that fewer than 8.5 percent use automation tools that characterize the accounts as bots. (A fake account can also be one that engages in malicious behavior and is operated by a real person. Many legitimate accounts are bots, such as those that report weather or seismic activity.)

Harvey said the crackdown has not had “a ton of impact” on the numbers of active users – which stood at 336 million at the end of the first quarter – because many of the problematic accounts were not tweeting regularly. But moving more aggressively against suspicious accounts has helped the platform better protect users from manipulation and abuse, she said.

Legitimate human users — the only ones capable of responding to the advertising that is the main source of revenue for the company — are central to Twitter’s stock price and broader perceptions of a company that has struggled to generate profits.

Independent researchers and some investors long have criticized the company for not acting more aggressively to address what many considered a rampant problem with bots, trolls and other accounts used to amplify disinformation. Though some go dormant for years at a time, the most active of these accounts tweet hundreds of times a day with the help of automation software, a tactic that can drown out authentic voices and warp online political discourse, critics say.

“I wish Twitter had been more proactive sooner,” said Sen. Mark R. Warner (Va.), the top-ranking Democrat on the Senate Intelligence Committee. “I’m glad that – after months of focus on this issue – Twitter appears to be cracking down on the use of bots and other fake accounts, though there is still much work to do.”

The decision to forcefully target suspicious accounts followed a pitched battle within Twitter last year over whether to implement new detection tools. One previously undisclosed effort called “Operation Megaphone” involved quietly buying fake accounts and seeking to detect connections among them, said two people familiar with internal deliberations. They spoke on the condition of anonymity to share details of private conversations.

The name of the operation referred to the virtual megaphones – such as fake accounts and automation – that abusers of Twitter’s platforms use to drown out other voices. The program, also known as a white hat operation, was part of a broader plan to get the company to treat disinformation campaigns by governments differently than it did more traditional problems such as spam, which is aimed at tricking individual users as opposed to shaping the political climate in an entire country, according to these people. Harvey said she had not heard of the operation.

Some executives initially were reluctant to act aggressively against suspected fake accounts and raised questions about the legality of doing so, said the people familiar with internal company debates. In November, one frustrated engineer sought to illustrate the severity of the problem by buying thousands of fake followers for a Twitter manager, said two people familiar with the episode. Bots can be readily purchased on a gray market of websites.

A person with access to one of Twitter’s “Firehose” products, which organizations buy to track tweets and social media metrics, provided the data to The Post. The Firehose reports which accounts have been suspended and unsuspended, along with data on individual tweets.

Bots, trolls and fake accounts are nearly as old as Twitter, which started operations in 2006. In 2015, Twitter’s then-chief executive Dick Costolo acknowledged the problem in a company memo: “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years.”