Twitter Still Can’t Keep Up With Its Flood of Junk Accounts, Study Finds

Credit to Author: Andy Greenberg | Date: Fri, 08 Feb 2019 12:00:00 +0000

Since the world learned of state-sponsored campaigns to spread disinformation on social media and sway the 2016 election, Twitter has scrambled to rein in the bots and trolls polluting its platform. But when it comes to the larger problem of automated accounts on Twitter designed to spread spam and scams, inflate follower counts, and game trending topics, one study argues that the company still isn’t keeping up with the deluge of garbage and abuse.

In fact, the paper's two researchers write that with a machine learning approach they developed themselves, they could identify abusive accounts in far greater volumes and faster than Twitter does—often flagging the accounts months before Twitter spotted and banned them.

In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter's API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns. They write that more than 60 percent of the time, Twitter waited for those apps to send more than 100 tweets before identifying them as abusive; the researchers' own detection method had flagged the vast majority of the malicious apps after just a handful of tweets. For about 40 percent of the apps the pair checked, Twitter seemed to take more than a month longer than the study's method to spot an app's abusive tweeting. That lag time, they estimate, allows abusive apps to cumulatively churn out tens of millions of tweets per month before they're banned.

"We show that many of these abusive apps used for all sorts of nefarious activity remain undetected by Twitter's fraud detection algorithms, sometimes for months, and they do a lot of damage before Twitter eventually figures them out and removes them," says Shafiq. The study will be presented at the Web Conference in San Francisco this May. "They’ve said they’re now taking this problem seriously and implementing a lot of countermeasures. The takeaway is that these countermeasures didn’t have a substantial impact on these applications that are responsible for millions and millions of abusive tweets."

"We found a way to detect them even better than Twitter."

Zubair Shafiq, University of Iowa

The researchers say they've been sharing their results with Twitter for more than a year, but that the company hasn't asked for further details of their method or data. When WIRED reached out to Twitter, the company expressed appreciation for the study's goals but objected to its findings, arguing that the Iowa researchers lacked the full picture of how it's fighting abusive accounts. "Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies," a spokesperson wrote.

Twitter has, to its credit, at least taken an aggressive approach to stopping some of the most organized disinformation trolls exploiting its megaphone. In a report it released last week, the social media firm said that it had banned more than 4,000 politically motivated disinformation accounts originating in Russia, another 3,300 from Iran, and more than 750 from Venezuela. In a statement to WIRED, Twitter noted that it's also working to curb abusive apps, implementing new restrictions on how they're given access to Twitter's API. The company says it banned 162,000 abusive applications in the last six months of 2018 alone.

But the Iowa researchers argue that their findings, based only on the one percent of tweets Twitter makes available through a research-focused API, show that abusive Twitter applications still run rampant. Although their data set only runs through the end of 2017, they ran their machine-learning model on tweets from the last two weeks of January at WIRED's request, and immediately found 325 apps they deemed abusive that Twitter had yet to ban, some with explicitly spammy names like EarnCash_ and La App de Escorts.

In their study, the researchers focused exclusively on finding toxic tweets produced by those third-party apps, given the outsized effects of those automated tools. Sometimes the malicious apps controlled accounts that spammers or scammers themselves created. In other cases, they hijacked the accounts of users who had been tricked into installing the applications, or who had done so in exchange for incentives like a boost in fake followers.

Among the 1.5 billion tweets the researchers started with—Twitter makes only one percent of all tweets available through a research-focused API—457,000 third-party applications were represented. The pair then used that data to train their own machine learning model for tracking abusive apps. They noted which accounts each application posted to, along with factors including the age of the accounts, the timing of tweets, the number of usernames, hashtags, and links the tweets included, and the ratios of retweets to original tweets. Most importantly, they observed which accounts were eventually banned by Twitter during the 16-month period they watched, essentially using those bans to denote abusive accounts.
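The paper does not publish code, but the approach it describes can be sketched roughly as below. The tweet field names, the per-app aggregation, and the choice of a random forest classifier are all assumptions made here for illustration; only the general feature categories (account age, distinct accounts, hashtags, links, retweet ratio) and the use of Twitter's eventual bans as labels come from the description above.

```python
# Illustrative sketch only: field names, aggregation, and classifier choice are
# assumptions, not the authors' actual pipeline.
from statistics import mean

from sklearn.ensemble import RandomForestClassifier


def app_features(tweets):
    """Summarize one app's tweets (dicts with assumed fields) into a feature vector."""
    n = len(tweets)
    return [
        mean(t["account_age_days"] for t in tweets),  # age of the posting accounts
        len({t["user_id"] for t in tweets}),          # distinct accounts used by the app
        mean(t["num_hashtags"] for t in tweets),      # hashtags per tweet
        mean(t["num_links"] for t in tweets),         # links per tweet
        sum(t["is_retweet"] for t in tweets) / n,     # ratio of retweets to all tweets
    ]


def train_app_classifier(tweets_by_app, banned_app_ids):
    """Fit a classifier, using Twitter's eventual bans as (noisy) labels."""
    X = [app_features(tweets) for tweets in tweets_by_app.values()]
    y = [app_id in banned_app_ids for app_id in tweets_by_app]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf
```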

With the resulting machine-learning-trained model, they found they could identify 93 percent of the applications that Twitter would ultimately ban without looking at more than their first seven tweets. "We're in some sense relying on seeing what Twitter eventually labels as malicious apps. But we found a way to detect them even better than Twitter," says Shafiq.
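As a rough illustration of that early-detection step, one could score each app on only its earliest tweets. The sketch below reuses the hypothetical app_features helper and classifier from above and assumes a per-tweet timestamp field; the seven-tweet cutoff mirrors the figure quoted here, everything else is illustrative.

```python
# Illustrative only: builds on the app_features/train_app_classifier sketch above.
def flag_abusive_apps(clf, tweets_by_app, max_tweets=7):
    """Score each app using only its first few tweets, mirroring the
    seven-tweet figure quoted in the article."""
    flagged = []
    for app_id, tweets in tweets_by_app.items():
        earliest = sorted(tweets, key=lambda t: t["timestamp"])[:max_tweets]
        if clf.predict([app_features(earliest)])[0]:
            flagged.append(app_id)
    return flagged
```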

Twitter countered in its statement that the Iowa researchers' machine learning model was faulty, because they couldn't actually say with certainty which applications Twitter had banned for abusive behavior. Since Twitter doesn't make that data public, the researchers could only guess by looking at which applications had their tweets removed. Removed tweets could reflect a ban, but could also simply mean that users or the applications themselves deleted them.

"We think the methods used for this research do not accurately measure or reflect the health of our developer platform—principally because the factors used to train the model in this research are not strongly correlated with whether or not an application in fact violates our policies," a spokesperson wrote to WIRED.

But the Iowa researchers note in their paper that they only marked an application as having been banned by Twitter if 90 percent or more of its tweets had been removed. They observed that for popular, benign apps like Twitter for iPhone or Android, less than 30 percent of tweets are removed. If users of some legitimate app do delete their tweets more often, "these would be a small minority, these apps would not be used by a lot of people, and I don’t expect their results would be affected by that," says Gianluca Stringhini, a researcher at Boston University who has worked on previous studies of abusive social media apps. "So I would expect that their ground truth is reasonably strong."
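In code, that labeling heuristic amounts to something like the sketch below. The 90 percent cutoff is the one the researchers describe; treating removal under 30 percent as a sign of a benign app is an extrapolation from their observation about popular apps, and the function and label names are hypothetical.

```python
def label_app(tweets_posted: int, tweets_remaining: int) -> str:
    """Heuristically label an app from how many of its observed tweets were later removed."""
    removed_fraction = 1 - tweets_remaining / tweets_posted
    if removed_fraction >= 0.9:
        return "likely_banned"   # treated as abusive ground truth in the study
    if removed_fraction < 0.3:
        return "likely_benign"   # the range observed for apps like Twitter for iPhone
    return "ambiguous"           # neither clearly banned nor clearly benign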

Beyond those educated guesses at which apps had been banned, the researchers also honed their definition of abusive apps by crawling sites advertising fake followers and downloading 14,000 applications those sites offered. Of those 14,000, about 6,300 had produced tweets in their 1.5-billion-tweet sample, so they also served as examples of abusive apps in the training data for the researchers' machine learning model.

One drawback to the Iowa researchers' method was its rate of false positives: They admit that about six percent of the apps their detection method flags as malicious are in fact benign. But they argue that the false positive rate is low enough that Twitter could assign human staffers to review their algorithm's results and catch mistakes. "I don't think it would take more than one person to do this kind of review," says Shafiq. "If you don't aggressively target these applications, they’re going to compromise many more accounts and tweets, and cost many more man-hours."
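The six percent figure is, in effect, the share of flagged apps that turn out to be benign. A toy calculation of that metric, with an illustrative function name and made-up sample data, looks like this:

```python
def benign_share_of_flagged(y_true, y_pred):
    """Of the apps a model flags as abusive (y_pred True), what fraction are benign?"""
    flagged = [truth for truth, pred in zip(y_true, y_pred) if pred]
    return sum(1 for truth in flagged if not truth) / len(flagged)


# Three apps flagged, one of them benign -> about 0.33
print(benign_share_of_flagged(
    y_true=[True, False, True, False],
    y_pred=[True, True, True, False],
))
```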

The researchers agree with Twitter that the company is moving in the right direction, tightening the screws on junk accounts and, more importantly in Shafiq's view, on abusive applications. They noticed that around June 2017, the company did seem to be more aggressively banning bad apps. But they say their findings show that Twitter is still not exploiting machine learning's potential to catch app abuse as quickly as it could. "They’re probably doing some of this right now," Shafiq says. "But clearly not enough."
