Twitter is purging ‘terror accounts’ at a dizzying rate

A finger touching the screen of a handheld device that features a logo of the micro-blogging site Twitter. (DAMIEN MEYER/AFP/Getty Images)

By Avi Asher-Schapiro

Twitter — the online microblogging platform favored by President Donald Trump — has been purging accounts suspected of supporting terrorism at a dizzying pace: nearly 377,000 accounts in the last six months of 2016 alone, almost three times the rate of the previous year.

Those numbers were made public Tuesday, as part of Twitter’s latest transparency report.

Experts who monitor the Islamic State militant group's (ISIS) online presence are pleased. “The median age of an ISIS-supporting account on Twitter is about one day — that’s as long as they survive, and many don’t survive that long,” said J.M. Berger, an associate fellow with the International Centre for Counter-Terrorism. “I think Twitter is handling this appropriately.”

At the same time, free speech and transparency advocates say that Twitter’s apparent zeal to take on problematic accounts could lead to collateral damage — especially when governments lean on the company to censor content. “The speech rules that social networks maintain are notoriously vague, and repeatedly lead to mistaken take-downs of legal images, such as breast feeding, legitimate images of armed conflict, and other matters of public concern,” said Matt Cagle, an attorney with the American Civil Liberties Union’s (ACLU) Technology and Civil Liberties Project. “The worry here is that governments would seek to exploit these rules for their own ends.”

Twitter has multiple pathways for censoring or controlling content on its platform. The most straightforward: a government can issue a formal legal request or a court order to remove something illegal. Outside of those formal legal avenues, Twitter can also use its own internal algorithms — or tips from other users — to surface accounts that may not violate the law, but do violate its own Terms of Service (TOS). The TOS prohibit accounts that “make threats of violence or promote violence, including threatening or promoting terrorism.”

There’s also a gray area: governments can — and often do — pressure Twitter to take down content without officially using legal mechanisms, instead privately pointing out Twitter accounts that may violate the TOS.

Since ISIS began using Twitter to communicate and disseminate its propaganda in 2014, the company has come under serious pressure to purge the group and its supporters from the platform. As ISIS rampaged through Iraq and Syria in the summer of 2014, supportive accounts tweeted out propaganda videos celebrating the group’s exploits. At a counterterrorism conference in 2015, FBI Director James Comey put it starkly: “Twitter works as a way to sell books, as a way to promote movies, and it works as a way to crowdsource terrorism — to sell murder.” That same year, Berger published “The ISIS Twitter Census,” which identified 46,000 major Twitter accounts operating on behalf of the group.

Outside the battle zones in the Middle East, Twitter had become one of the primary breeding grounds for ISIS-related activity. The company — which prides itself on eschewing censorship and guarding the privacy of its users — had a “slow start” purging the terror accounts from its platform, Berger says. But the latest report shows it has hit its stride. “I think it’s probably not reasonable to expect them to push it much further,” Berger says.

As Twitter escalated its campaign against ISIS and other terror groups such as al Qaeda and al-Shabab, the ACLU wasn’t the only group to worry both about creeping censorship and about the extent to which Twitter and other social media companies acquiesced to government pressure outside the public eye.

At the beginning of 2016, the U.S government convened a closed-door summit with social media companies, including Facebook and Twitter, to strategize about combating terror in cyberspace. Civil libertarians warned tech companies to be wary of being too cooperative with authorities in censoring content, given the gray space between legitimate counter-terror activity and censorship. At that summit, BuzzFeed reported, the Pentagon applied pressure on companies to tweak their internal algorithms to bury certain content that the government deemed harmful to national security.

Last May, the ACLU of Northern California filed a Freedom of Information Act (FOIA) request asking for documents that outline the extent of behind-the-scenes cooperation between the federal government and social media companies like Twitter. “We want to see any informal demands they might be sending to social networks to remove content,” Cagle explained.

Twitter’s latest report did shine some light on that dynamic: in terrorism-related take-downs, the company reported that a full 74 percent were flagged by “internal, proprietary spam-fighting tools” and less than 2 percent had been suspended after authorities complained.

In the ACLU’s view, more detail is required. They want to know which governments made those complaints, and what specific sections of Twitter’s terms of service were cited to justify the take-downs. Without that information in the public domain, Cagle says, Twitter’s cooperation with authorities creates a chilling effect that could, in his words, “minimize the legitimate speech of other users that has nothing to do with terrorism.”

At the same time, Twitter’s success pushing ISIS and other foreign terror organizations off its platform does not mean it’s devised a successful formula for dealing with all violent content. “Things are going to be more complicated going forward, as not all violent extremists are as clear-cut as ISIS,” Berger said.

One of the major unresolved challenges: how to deal with content that’s violent, or that documents violence in a war zone, but that emanates from a source primarily engaged in documenting that conflict. “There are questions about how you handle social media accounts related to the Syrian civil war that are much more complex,” Berger says. “[There] we might be less comfortable with a social media company’s qualifications to draw distinctions about who should be allowed to use the platform and who should not.”

Beyond that, Twitter is still finding its footing when it comes to policing newer online phenomena like the alt-right movement in the US — the ultraconservative political strain that has coalesced around Duke University dropout Richard Spencer. While members of the alt-right often preach racial separation, or admire Nazi figures, they may not always directly advocate violence. “We’re seeing huge growth among right-wing extremist movements where the lines between terrorism, violent extremism, and simply offensive content are blurrier,” Berger added. “That’s a looming challenge.”