How Liberals Amped Up a Parkland Shooting Conspiracy Theory

Credit to Author: Molly McKew | Date: Tue, 27 Feb 2018 20:24:19 +0000

The shooting at Florida’s Marjory Stoneman Douglas High School on Valentine’s Day inspired an energetic group of young activists to weigh in on the national debate on guns, safety, and personal freedoms. But as they found their voice, conspiracy theories purporting that they were “crisis actors”—frauds pretending to be students—spiraled across social media and into the mainstream.

As documented elsewhere, this idea of “false flag” operations and actors being used by liberals to stage media stories for political purposes is a long-running narrative in far-right media outlets like InfoWars (and, perhaps worth noting, something Russian propaganda networks have been caught actually doing multiple times in Ukraine). In this case, it is an easy if cynical tactic to discredit the voices of victims and undermine the moral weight behind their message.

In the days that followed the shooting, social media companies scrambled to deal with complaints about the proliferation of the crisis actors conspiracy across their platforms—even as their own algorithms helped to promote that same content. There were new rounds of statements from Facebook, YouTube, and Google about addressing the problematic content, along with assurances that more AI and human monitors would be enlisted in the effort.

But there are a lot of assumptions being made about how this content was amplified, and how it got past controls within the algorithmic star chambers. Russian bots, the NRA echo-chamber, and so-called alt-right media personalities have all been fingered as the perpetrators.

And, as our research group, New Media Frontier—which collects and analyzes social media intelligence using a range of custom and commercial analytical tools—recently outlined in an analysis of the #releasethememo campaign, many factors contribute to the amplification of American far-right content, including foreign and domestic bots and intentional amplification networks. Whether it comes from fully automated bots or semi-automated cyborg accounts, automation is a vital part of accelerating the distribution of content on social media.

But in looking at the case of the Parkland, Florida, shooting and the crisis actors narrative it spawned, there was another important factor that allowed it to leap into mainstream consciousness: People outraged by the conspiracy helped to promote it—in some cases far more than the supporters of the story. And algorithms—apparently lacking the “sentiment sensitivity” needed to read the context of a piece of content and tell whether it is being shared approvingly or in protest—see all that noise the same.
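To make the distinction concrete, here is a minimal sketch—ours, not any platform’s actual ranking code—of how a volume-only trending score treats outrage-shares and supportive shares identically, while a hypothetical stance-aware score would separate them. The Share structure and stance values are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Share:
    text: str      # the quote tweet or comment attached to the share
    stance: float  # hypothetical stance score: +1.0 supportive, -1.0 denouncing

def volume_only_score(shares: List[Share]) -> float:
    """What a sentiment-blind algorithm 'sees': every share counts the same."""
    return float(len(shares))

def stance_aware_score(shares: List[Share]) -> float:
    """A hypothetical sentiment-sensitive score: denunciations subtract
    rather than add, so outrage-sharing stops boosting the story."""
    return sum(s.stance for s in shares)

shares = [
    Share("Proof the students are actors!", stance=+1.0),
    Share("This conspiracy is vile. Stop spreading it.", stance=-1.0),
    Share("Disgusting smear of a shooting survivor.", stance=-1.0),
]

print(volume_only_score(shares))   # 3.0 -- outrage and support look identical
print(stance_aware_score(shares))  # -1.0 -- protest no longer promotes the story
```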

This unintended amplification created by outrage-sharing may have helped put the conspiracy in front of more unsuspecting people. This analysis looks at how one story of the crisis actor conspiracy—the claim that David Hogg, a senior at Marjory Stoneman Douglas High School, was a fraud because he had been coached by his father—gained amplification from both its supporters and its opponents.

The story began as expected. At 5:30 pm EST on February 19, five days after the shooting, alt-right website Gateway Pundit posted a story claiming that student David Hogg was coached on his lines as part of an FBI plot to create false activism against President Trump. On Twitter, this story was initially amplified by right-leaning accounts, some of which are automated.

Of the 660 tweets and retweets of the “crisis actors” Gateway Pundit conspiracy story during the hour after it was posted, 200 (30 percent) came from accounts that have tweeted more than 45,000 times. Human, cyborg, or bot, these accounts are acting with purpose to amplify content (more on this in a moment). And this machinery of curation, duplication, and amplification both cultivates echo chambers that keep human users engaged and impacts how social media companies’ algorithms decide what is important, trending, and promoted to other users—part of triggering a feedback loop to win the “algorithmic popularity contest.”
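As a rough illustration of the heuristic behind that figure—a sketch over assumed data, not our production tooling—one can flag early sharers whose lifetime tweet counts suggest full or partial automation:

```python
# Hypothetical (handle, lifetime tweet count) records for early retweeters;
# the 45,000 cutoff is the one referenced in the analysis above.
early_retweeters = [
    ("example_newswire_1", 187_000),
    ("example_casual_user_1", 3_400),
    ("example_newswire_2", 61_250),
    ("example_casual_user_2", 1_050),
]

LIFETIME_TWEET_THRESHOLD = 45_000

likely_amplifiers = [
    handle for handle, lifetime_tweets in early_retweeters
    if lifetime_tweets > LIFETIME_TWEET_THRESHOLD
]

share = len(likely_amplifiers) / len(early_retweeters)
print(f"{share:.0%} of early sharers exceed {LIFETIME_TWEET_THRESHOLD:,} lifetime tweets")
```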

Some of the better-established networks almost seem to predict what will become a trending story because of the position they occupy in the information architecture of social networks—selecting specific content and then ensuring its amplification.

The crisis actors narrative was being amplified on other platforms as well. The promotion of stories being aggressively pushed by far-right conspiracy sites raised alarms. YouTube had to intervene to remove a video promoting the crisis actor conspiracy that had topped its trending list. Meanwhile, Google and Twitter searches were auto-completing “crisis actors” as a search term. Facebook and Reddit were also being used to promote versions of the story.

However, this trending content was not pushed solely from the right. At 6:21 pm, Frank Luntz (@frankluntz, a prominent pollster and PR executive with almost 250,000 followers) tweeted in protest of the Gateway Pundit story, becoming one of four non-right-wing amplifiers of the story with verified accounts. (In most cases, getting content seen or promoted by verified accounts greatly accelerates its amplification.) The other three were the New York Times’ Nick Confessore, MSNBC producer Kyle Griffin, and former first daughter Chelsea Clinton. Each of them quote-tweeted the Gateway Pundit story to denounce it, but in doing so gave it more amplification.

https://twitter.com/ChelseaClinton/status/965746303388004352

By the next morning, the Gateway Pundit story had been promoted roughly 30,000 times on Twitter. These four influencers were responsible for more than 60 percent of the total mentions of the story.

This is a limited example, but it shows quite clearly that this one conspiracy, on one platform, was amplified not by its supporters but—unintentionally—by its opponents.

On both the right and the left, automated and semi-automated accounts were contributing to the promotion of this story. These accounts serve different functions.

Some act like highly curated, low-quality newswires—posting a heavy volume of content from sources with a wide range of legitimacy, but narrow ideological views. For example, the first account to post the Gateway Pundit story on Twitter, @Tokaise, actually did so before the publisher itself.

https://twitter.com/Tokaise/status/965731833899028480

This is because the account likely relies on automation software to identify, share, and repost content based on a predetermined list of outlets, social media accounts, and keyword designations. It tweets about 190 times a day, and its 4,100 followers include alt-right influencers (Charlie Kirk, Jacob Wohl, and others).
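In broad strokes, that kind of automation is simple rule-following. The sketch below is a hypothetical illustration of the logic, not this account’s actual software; the items list and the poster function stand in for whatever feed reader and posting tool such an account uses.

```python
# Illustrative only: a rule-based repost loop of the kind described above.
SOURCE_OUTLETS = ["thegatewaypundit.com"]           # predetermined outlet list
WATCHED_KEYWORDS = ["crisis actor", "false flag"]   # keyword designations

def should_repost(item: dict) -> bool:
    """Repost anything from a listed outlet or matching a watched keyword."""
    from_listed_outlet = any(domain in item["url"] for domain in SOURCE_OUTLETS)
    keyword_hit = any(kw in item["title"].lower() for kw in WATCHED_KEYWORDS)
    return from_listed_outlet or keyword_hit

def poster(text: str) -> None:
    """Stand-in for the posting tool: here it just prints instead of tweeting."""
    print("POSTED:", text)

# A fake incoming feed, to show the flow end to end.
items = [
    {"title": "EXPOSED: Student Coached On Lines", "url": "https://thegatewaypundit.com/example"},
    {"title": "Local weather update", "url": "https://example.com/weather"},
]

for item in items:
    if should_repost(item):
        poster(f'{item["title"]} {item["url"]}')
```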

These curated newswires are important players in synthetic information networks—parts of social media that are populated by content even when human users are not engaged. The reposted content helps stories trend; it also lays the groundwork for what human users see when they tune in to their Twitter feeds, where Twitter’s algorithms also helpfully provide content you may have missed while you were away.

To get a snapshot of some of the automation in both silos, right and left, we looked at the first 10 accounts to retweet Gateway Pundit founder Jim Hoft’s original tweet of the article (@rlyor, @ahernandez85b, @mandersonhare1, @dalerogersL2528, @topdeserttrader, @jodie4045, @Markknight45, @James87060132, @AIIAmericanGirI, @deplorableGOP13) and at the first 10 accounts to retweet Chelsea Clinton’s denunciation of the story (@DOFReport, @AndrewOnSeeAIR, @TheSharktonaut, @CarolynCpcraig, @guavate86, @NinjaPosition_, @Jjwcampbell, @mikemnyc, @intern07, @maximepo1).

In January 2018, the right-leaning accounts collectively tweeted 42,654 times (that’s an average of about 140 tweets a day per account), a fair indicator that at least some of them are automated amplifiers. The largest of these accounts—@AIIAmericanGirI—has tweeted 542,000 times since 2013 (10,000 tweets a month, or more than 300 per day). Her 115,000 followers include Harlan Hill, Charlie Kirk, Tea Pain, Bill Kristol, Mike Allen, and Sarah Carter—all widely followed individuals who help shape opinion across the political spectrum on social media.
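The per-day figure follows directly from the monthly total; a few lines make the arithmetic explicit (the inputs come from the analysis above, the calculation is ours):

```python
JANUARY_DAYS = 31
ACCOUNT_COUNT = 10

right_january_total = 42_654  # combined January tweets for the 10 right-leaning accounts
per_account_per_day = right_january_total / ACCOUNT_COUNT / JANUARY_DAYS
print(f"~{per_account_per_day:.0f} tweets per account per day")  # ~138, i.e. "about 140"
```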

These known influencers probably don’t follow Girl because she is a self-described “wife, mother, patriot, friend,” or because her avatar is a pistol suggestively positioned in something that might almost qualify as underwear. They follow this account, knowingly or not, because it improves the social media statistics of its followers. The reason: It is embedded in a network that distributes content, adds followers, and garners likes and retweets (some of these techniques are discussed here).

Another of these accounts, “Roy” (@dalerogersL2528), which does little more than retweet Gateway Pundit, promotes and uses Crowdfire, an app that helps users increase their follower count, gain specific kinds of followers, and automate a posting schedule. Roy’s followers include elected Republican officials and candidates for office. Girl and Roy are designed to amplify certain types of content to certain types of users and improve the statistics of those who follow them in ways that are often quite opaque.

On the left, the profiles of automated accounts look similar. In January 2018, the 10 accounts that retweeted Chelsea Clinton’s denunciation collectively tweeted 36,063 times (roughly 116 tweets per day per account). The first retweet was from a self-labelled news aggregator (a newswire-style account that retweets the former first daughter as part of its automated tasking). Another, @TheSharktonaut, which retweets a high volume of left-leaning content, is followed by Democratic lawmakers and candidates—a left version of Roy.

@AndrewOnSeeAIR’s Twitter biography claims he is British and anti-Brexit, but this account uses a hashtag meant to create a “follow-back” network amongst anti-Brexiteers—that is, it’s designed to improve follower counts in both directions. His tweets—more than 200 a day—consist almost entirely of left-leaning American content, despite his claim of being British.

Right and left, there is a pattern of full and partial automation and amplification. But in this case, the accounts on the left have relatively more modest followings and less well-established positions within the broader information architecture of Twitter. The left-leaning accounts have far more verified followers (more than 500); the right-leaning accounts have closer to 200. It is tempting to see the parties’ engagement strategies reflected in these information tactics: one side more focused on broad support, the other more reliant on a tighter group of elites to achieve the same effect.

As time went on, the right and left narratives about the crisis actors diverged. The narrative on the right: This is all a conspiracy and an FBI/Deep State plot to undermine President Trump. The narrative on the left: The crisis actor story is an attack on the victims, Gateway Pundit is Russian propaganda, and the story is being amplified by Russian bots. Both sides see a manipulative villain at play (liberal media lies vs. hostile foreign propaganda). And on both sides, the cognitive impact of these narratives is to harden political beliefs.

Absent more active and accurate sentiment scoring, support and outrage alike can amplify the same content. The bigger question is what to do about automation and computational propaganda—using information and communication technologies to manipulate perceptions, affect cognition, and influence behavior—writ large. Automation operates across borders, serves state and nonstate actors alike, and allows our discourse to be distorted and our democracy poisoned. It is a complex problem, and it affects us as consumers of information in complex ways.

The truth in the crisis actors case was less clear-cut and less glamorous than either side of the debate would like to admit. Bots, including likely Russian bots, were promoting both narratives, and they remain essential elements of computational propaganda, whose tactics are being used ever more frequently on social media.

Automation, in a variety of forms, is deeply entrenched in social media’s information landscape. Automated accounts traffic in information and shape what we see online, either directly or through their impact on algorithms. Algorithms curate and promote information in ambiguous and sometimes unhelpful ways. Over and over, human intervention is needed to correct the “judgment” of algorithms. And that intervention feels, to some audiences, like a new form of censorship.

Social media companies have started to step in to correct the excesses and unintended consequences of automation, but that happens only on a case-by-case basis, particularly in high-profile cases of disinformation and defamation. Responding in this way will increasingly raise questions about who decides which automation is bad automation and which is allowed to continue unchecked. It also leaves regular, everyday users exposed to the same types of defamation campaigns but with far fewer protections or means of recourse.

Sometimes there is the sense that this is just the new way we consume information, and we all need to figure out how to navigate it—that whatever is loudest is somehow most important, and beyond that, you’re on your own. On Reliable Sources this past weekend, David Hogg himself said he wasn’t upset by all the conspiracies because they were great “marketing,” boosting his Twitter following to more than 250,000 people. The younger and more social-media-savvy seem to understand this mercenary approach instinctively. It’s the wild-west landscape that social media platforms have encouraged, knowing that outrage is an effective currency in the so-called attention economy.

This terminology camouflages the war for minds underway on social media platforms, the impact it has on our cognitive capabilities over time, and the extent to which automation is being engaged to gain advantage. It is not necessarily true, for example, that other would-be participants in social media information wars who adopt these same tactics will gain the same capabilities or advantage. This is a playing field that is hard to level: Amplification networks have data-driven, machine learning components that improve with refinement over time. You can’t just turn one on and expect it to work perfectly.

The vast amounts of content being uploaded every minute cannot possibly be reviewed by human beings. Algorithms, and the people who sculpt them, are thus given an increasingly outsized role in shaping our information environment. Human minds are on a battlefield between warring AIs—caught in the crossfire between forces we can’t see, sometimes as collateral damage and sometimes as unwitting participants. In this black-box algorithmic wonderland, we don’t know if we are picking up a gun or a shield.


Molly K. McKew (@MollyMcKew) is an expert on information warfare and the narrative architect at New Media Frontier. She advised Georgian President Mikheil Saakashvili’s government from 2009 to 2013 and former Moldovan Prime Minister Vlad Filat in 2014-15. New Media Frontier co-founder Max Marshall (@maxgmarshall) helped conduct the research for this analysis.
