First Monday

Characterizing social media manipulation in the 2020 U.S. presidential election by Emilio Ferrara, Herbert Chang, Emily Chen, Goran Muric, and Jaimin Patel



Abstract
Democracies are predicated upon the ability to carry out fair elections, free from any form of interference or manipulation. Social media have reportedly been used to distort public opinion in the run-up to elections in the United States and beyond. Drawing on over 240 million election-related tweets recorded between 20 June and 9 September 2020, in this study we chart the landscape of social media manipulation in the context of the upcoming 3 November 2020 U.S. presidential election. We characterize two salient dimensions of social media manipulation, namely (i) automation (e.g., the prevalence of bots), and (ii) distortion (e.g., manipulation of narratives, injection of conspiracies or rumors). Despite being outnumbered by several orders of magnitude, just a few thousand bots generated spikes of conversation around real-world political events that were comparable in volume with the activity of humans. We find that bots also exacerbate the consumption of content produced by users with the same political views, worsening the problem of political echo chambers. Furthermore, we characterize coordinated efforts attributed to Russia, China, and other countries. Finally, we draw a clear connection between bots, hyper-partisan media outlets, and conspiracy groups, suggesting the presence of systematic efforts to distort political narratives and propagate disinformation. Our findings shed light on different forms of social media manipulation that may, altogether, ultimately pose a risk to the integrity of the election.

Contents

Introduction
Methodology
Data analysis
Distortion
Conclusions

 


 

Introduction

The integrity of the upcoming 3 November 2020 U.S. presidential election has been a concern for the U.S. government and the public alike. Ample documentation of foreign interference with the 2016 U.S. presidential election, including social media manipulation, has been surfaced by both official investigations and independent researchers (Bessi and Ferrara, 2016; Guess, et al., 2020; Galdieri, et al., 2018; Bode, et al., 2020). The efforts of the Russian Internet Research Agency (IRA) to sow division and distort social media discussion in 2016 resulted in numerous indictments (Federal Bureau of Investigation and U.S. Department of Justice, 2018) and left behind well documented strategies and tactics employed by trolls and bots (Strudwicke and Grant, 2020; Kriel and Pavliuc, 2019; Bail, et al., 2020; Walter, et al., 2020).

The automation of social media manipulation in politics, often referred to as computational propaganda (Woolley and Howard, 2018), affected countries beyond the United States, including Poland, Germany, Taiwan, and Brazil. Social media manipulation has also been reported in domains beyond politics (Ferrara, 2015), including in public health (Jiang, et al., 2020) and finance (Nizzoli, et al., 2020).

Certainly, manipulation can occur across various media channels, such as news portals and traditional media (Zelenkauskaite and Balduccini, 2017). Known in general as “information warfare,” such manipulation arises from coordinated efforts by state-level actors and everyday users alike (Quandt, 2018). Here, we focus on Twitter, given its significant role in U.S. political discourse (Ott, 2017).

In this paper, we set out to characterize social media manipulation in the context of the 2020 U.S. presidential election. We leverage a unique dataset that we collected and shared with the research community (Chen, et al., 2020), which encompasses over 240 million tweets related to the upcoming election, spanning 20 June to 9 September 2020. The period of observation includes several salient real-world political events, such as the Democratic National Committee (DNC) and Republican National Committee (RNC) conventions. The data at hand provide an unparalleled window into election-related chatter, but also represent a trove of material for investigations into possible social media manipulation. By using a combination of state-of-the-art machine learning technologies and human validation, we investigate a number of research questions pertaining to two signatures of manipulation: (i) automation, that is, evidence of the adoption of automated accounts governed predominantly by software rather than by human users; and (ii) distortion, in particular of salient narratives of discussion of political events, e.g., with the injection of inaccurate information, conspiracies or rumors. In particular, we focus on disinformation (the intentional spread of inaccurate information), and we investigate community-driven conspiracies as direct evidence of disinformation (Faris, et al., 2017).

In the remainder of the paper, we discuss our methodology, including (i) data collection and preparation; (ii) bot detection; and (iii) political leaning inference. We then present our findings drawn from systematic and principled data analyses, including (i) the prevalence of automation and its effects on (ii) campaign-related discourse and (iii) political echo chambers; we also analyze how efforts that have been conclusively associated with Russia, China and other countries targeted users in our data. We then move to studying the distortion of narratives.

 

++++++++++

Methodology

Data collection

We carried out an uninterrupted data collection of election-related tweets starting in May 2019. To access Twitter, we leverage the Twitter streaming API. Mentions of specific keywords, accounts, or hashtags related to the presidential candidates, as well as general-purpose election-related terms, were tracked continuously throughout this period. The source list was manually compiled and consistently updated to track events in the real world that spurred online chatter or trends of discussion. Content about all Democratic nominees was tracked throughout the periods of their campaigns, from announcement to withdrawal. This process is documented in Table 1 and in our recent paper documenting this dataset (Chen, et al., 2020). This data amounts to well over 600 million tweets, resulting in over four TB of raw data.
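As a rough illustration of this kind of continuous keyword tracking, the sketch below shows how a filtered stream might be consumed with the tweepy library (tweepy 3.x interface). The keyword list, credentials, and output file are placeholders, not the actual configuration of our collection pipeline, which is documented in Chen, et al. (2020).

```python
# Minimal sketch of continuous keyword tracking via the Twitter streaming API
# (tweepy 3.x interface). Keywords, credentials, and the output file are placeholders.
import json
import tweepy

TRACK_TERMS = ["#Election2020", "@JoeBiden", "@realDonaldTrump"]  # illustrative subset

class ElectionStreamListener(tweepy.StreamListener):
    def on_status(self, status):
        # Append the raw tweet object to a newline-delimited JSON file.
        with open("election_tweets.jsonl", "a") as fout:
            fout.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        # Returning False on HTTP 420 (rate limiting) disconnects the stream.
        return status_code != 420

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=ElectionStreamListener())
stream.filter(track=TRACK_TERMS)  # blocks and yields tweets matching the tracked terms
```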

For this particular paper, however, we focus on a subset of the full dataset and concentrate on tweets that appeared between 20 June 2020 and 9 September 2020, in order to gain insight on events closer to the election itself, scheduled for 3 November 2020. During this timeframe, we tracked tweets that mentioned any Republican or Democratic candidate’s official campaign or personal account. As a result, the tweets collected and analyzed throughout this paper directly mention the candidates and reflect the events these politicians were involved in or associated with. This subset constitutes approximately 240 million tweets and almost two TB of raw data. All reported results in this study refer to this particular set of data.

In the period under scrutiny, the two major collection targets are, intuitively, incumbent President Donald Trump and Democratic presidential nominee, former Vice President Joe Biden. Incumbent Vice President Mike Pence and Democratic vice presidential nominee Kamala Harris are also central to the data collection.

Descriptive statistics about this dataset are reported in Table 1 of this paper. In Tables 2 and 3, we summarize the top 30 hashtags and mentions and the associated volume of tweets in our data. Please refer to Chen, et al. (2020) for additional details on the dataset. Each tweet has an associated unique ID, or tweet-id, that is part of a tweet’s metadata; during our time collecting tweets, we have noticed that tweets with duplicate IDs are occasionally collected through the Twitter API. Before releasing our dataset (Chen, et al., 2020), we pre-processed it to remove duplicate tweet-ids. However, in this paper we did not remove tweets with duplicate IDs, resulting in approximately 0.77 percent more tweets than in Chen, et al. (2020). Analytical results presented here remain unchanged with or without duplicates.

 

 
Table 1: Descriptive statistics of our datasets.
 

 

 

 
Table 2: List of top 30 hashtags (case insensitive) that occurred in our dataset between 20 June 2020 and 9 September 2020. Official Trump campaign hashtags highlighted in red. Official Biden campaign hashtags highlighted in blue. Conspiracy hashtags highlighted in yellow.
 

 

 

 
Table 3: List of the top 30 mentions that occurred in our dataset between 20 June 2020 and 9 September 2020. Trump campaign and conservative-related accounts in red; Biden campaign and liberal-related accounts in blue.
 

 

Twitter unhashed banned user dataset

As part of its Transparency Center initiative, and of its efforts to catalog information operations by foreign governments and entities, Twitter released a large database of banned users. The countries include Saudi Arabia, China, Russia, Turkey, Honduras, Indonesia, and Nigeria/Ghana. We used the full set of unhashed users for our investigation of foreign interference. Each country is associated with two datasets: (a) the banned users, including metadata, and (b) the tweets of these users. The user-level metadata includes the screen ID and follower count, whereas the tweet-level data include general tweet object metadata. To obtain the targets of these banned accounts, we extracted all users mentioned by each banned account.

Bot detection

The term bot (short for robot) typically refers to an entity operating in a digital space (e.g., the Web, social media, etc.) that is controlled by software rather than by a human operator. Although various taxonomies exist, here we use the term bot as shorthand for social bot, a social media account controlled by software tools that automate its behavior to varying degrees, from predominant to complete automation, in opposition to accounts controlled predominantly or uniquely by human users (Ferrara, et al., 2016).

While some bots declare their artificial nature, most do not; hence, to study their impact, they first need to be detected. Detecting bots, however, becomes increasingly hard as new technologies powered by recent leaps in artificial intelligence make their way into the automation tools used to produce bots. Three recent studies surveyed the state of bot-making and detection tools (Millimaggi and Daniel, 2019; Cresci, 2020; Assenmacher, et al., 2020).

In this study we use Botometer, a bot-detection tool co-developed by our group and Indiana University (Davis, et al., 2016; Yang, et al., 2019); specifically, we leverage both Botometer v.3 and the recently released Botometer v.4 (Yang, et al., 2020), which improves the accuracy of classification of new, unseen types of bots. In our analysis, we adopt a conservative approach and classify as bots the accounts that sit at the top end of the bot score distribution, rather than carrying out a binary classification of all accounts into bots and humans. This addresses the problem of determining the nature of borderline cases, for which detection can be inaccurate, and conversely allows us to focus on accounts that exhibit clear bot traits. The results are manually validated for accuracy.
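For reference, the sketch below shows how accounts can be scored programmatically with the botometer Python package; the credentials and handles are placeholders, and the exact response fields may differ between Botometer versions.

```python
# Minimal sketch of bot scoring with the botometer Python package.
# Credentials and account handles are placeholders.
import botometer

twitter_app_auth = {
    "consumer_key": "CONSUMER_KEY",
    "consumer_secret": "CONSUMER_SECRET",
    "access_token": "ACCESS_TOKEN",
    "access_token_secret": "ACCESS_SECRET",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="RAPIDAPI_KEY",
                          **twitter_app_auth)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical handles
scores = {}
for screen_name, result in bom.check_accounts_in(accounts):
    # The response includes several scores; here we keep the English-model CAP
    # (complete automation probability). Field names may vary across versions.
    scores[screen_name] = result["cap"]["english"]
```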

Characterizing users’ political bias

We characterize each user’s political leaning by measuring the political bias of the media outlets they endorse. Similar to prior work (Bovet and Makse, 2019; Badawy, et al., 2019), we identify a set of 29 prominent media outlets that appear on Twitter. Each outlet is placed on a political spectrum (left, lean left, center, lean right, right) per the ratings provided by the non-partisan service allsides.com (https://www.allsides.com/unbiased-balanced-news). If a user retweeted a URL from one of these 29 media sites without adding a comment, we consider it an endorsement. For each user in the dataset, we record all retweets that contain a domain from this set of prominent media outlets, as well as retweets of tweets published by an outlet’s official Twitter account. A user’s political bias is then calculated as the average political bias of all media outlets they endorsed.
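A minimal sketch of this averaging step is given below; the outlet ratings and the numeric scale (-2 for left through +2 for right) are illustrative assumptions, not the full 29-outlet list.

```python
# Sketch of inferring a user's political bias from the outlets they endorse.
# The outlet ratings and numeric scale are illustrative assumptions.
from statistics import mean

BIAS_VALUE = {"left": -2, "lean left": -1, "center": 0, "lean right": 1, "right": 2}

OUTLET_RATING = {            # hypothetical subset of the 29 rated outlets
    "nytimes.com": "lean left",
    "foxnews.com": "lean right",
    "reuters.com": "center",
}

def user_bias(endorsed_domains):
    """Average bias of the rated outlets a user retweeted without comment."""
    scores = [BIAS_VALUE[OUTLET_RATING[d]] for d in endorsed_domains if d in OUTLET_RATING]
    return mean(scores) if scores else None

print(user_bias(["nytimes.com", "nytimes.com", "reuters.com"]))  # -> about -0.67 (leans left)
```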

 

++++++++++

Data analysis

Automation

In this section, we present our results pertaining to automation. To identify bot activity, we utilize Botometer v.3 to calculate the likelihood of a Twitter account being a human user or a bot, expressed as a botscore between 0 and 1 (with 0 being most likely a human and 1 being most likely a bot). We used Botometer v.3 (Davis, et al., 2016; Yang, et al., 2019) to tag each tweet with the author account’s bot score. Given the large number of accounts, we were only able to tag 32 percent of all users within the dataset. Since Botometer recently released a new version that provides more granular botscores (e.g., astroturf, fake follower, etc.) (Yang, et al., 2020), we also manually query Botometer’s Web interface to derive botscores for manual validation and for the examples provided in this study.

We first isolate the top and bottom decile of accounts that we were able to tag with Botometer scores, and filter our dataset to select the tweets posted by these users. The top decile of users includes accounts with bot scores from 0.485 to 0.988, while the bottom decile of users includes accounts with scores from 0.0144 to 0.0327. This resulted in a dataset of over four million tweets posted by users in the top decile of Botometer scores (more likely to be a bot), and over one million tweets posted by users in the lowest decile of Botometer scores (less likely to be a bot). For the rest of this paper, for simplicity, we will refer to the users in the top decile of Botometer scores as “bots” and users in the bottom decile as “humans”.
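The decile-based labeling can be sketched as follows; the score dictionary is a placeholder standing in for the roughly 32 percent of accounts we were able to tag.

```python
# Sketch of the decile split: top decile of bot scores -> "bots",
# bottom decile -> "humans". Scores here are placeholders.
import numpy as np

scores = {"user_a": 0.02, "user_b": 0.95, "user_c": 0.50, "user_d": 0.01}  # hypothetical

values = np.array(list(scores.values()))
low_cut, high_cut = np.quantile(values, [0.10, 0.90])

bots = {u for u, s in scores.items() if s >= high_cut}    # top decile
humans = {u for u, s in scores.items() if s <= low_cut}   # bottom decile
```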

Next, we examine the most frequently used hashtags (case insensitive) in both the top and bottom deciles, which can be found in Tables 4 and 5 respectively. When looking at the top 15 hashtags for these two groups, there are a few clear categories of hashtags that emerge — Trump campaign (red), Biden campaign (blue) and conspiracy theory-related (yellow) hashtags. We will delve further into the observed conspiracy theories later in this paper, when we discuss narrative distortion. We find that tweets from bots most frequently use hashtags related to the Trump campaign and conspiracy theories, while tweets from humans most frequently use hashtags related to both the Trump and Biden campaigns. We do not see conspiracy theory-related hashtags being used as much by humans compared to hashtag usage by bots.

 

 
Table 4: Top 15 hashtags used in tweets by bots (N=4,388,807).
 

 

 

 
Table 5: Top 15 hashtags used in tweets by humans (N=1,320,394).
 

 

Bots in campaign discourse

We next identify, from the 50 most frequent hashtags used in tweets by users in both the highest and lowest Botometer score deciles, those hashtags that are specifically related to the Democratic and Republican campaigns, and use them as a means to identify whether a tweet is right-leaning or left-leaning. While we recognize that some users may use these hashtags in tweets expressing opposing viewpoints, a large body of research on political polarization suggests that this is relatively infrequent (Jiang, et al., 2020; Bail, et al., 2018). Hence, we selected hashtags that were most relevant to the two campaigns, such as “trump2020” and “bidenharris2020”. The full lists of these hashtags can be found in Tables 6.1 and 6.2; a minimal sketch of the resulting labeling step follows.
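The sketch below illustrates this hashtag-based labeling; the hashtag sets are small illustrative subsets rather than the full lists of Tables 6.1 and 6.2.

```python
# Sketch of labeling a tweet as left- or right-leaning from campaign hashtags.
# The hashtag sets are illustrative subsets of Tables 6.1 and 6.2.
RIGHT_TAGS = {"trump2020", "maga2020", "kag2020"}
LEFT_TAGS = {"bidenharris2020", "joebiden2020", "votebluetosaveamerica"}

def tweet_leaning(hashtags):
    """Return 'right', 'left', 'mixed', or None from a tweet's hashtags."""
    tags = {h.lower() for h in hashtags}
    right, left = bool(tags & RIGHT_TAGS), bool(tags & LEFT_TAGS)
    if right and left:
        return "mixed"
    return "right" if right else ("left" if left else None)

print(tweet_leaning(["Trump2020", "MAGA2020"]))  # -> 'right'
```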

We leverage these campaign-specific hashtags to disaggregate the highest and lowest deciles into left-leaning tweets and right-leaning tweets, and we provide a few examples of the highest and lowest scoring tweets in each category (e.g., right-leaning bots) in Tables 7–10. As mentioned earlier, we used Botometer v.3 to tag our tweets (scores ranging from 0–1), but we include Botometer v.4’s score breakdowns in our examples (Yang, et al., 2020). Botometer v.4’s documentation explicitly describes the six types of Twitter bots evaluated by its new granular bot type scores: “Astroturf: manually labeled political bots and accounts involved in follow trains that systematically delete content, Fake follower: bots purchased to increase follower counts, Financial: bots that post using cashtags, Self declared: bots from botwiki.org, Spammer: accounts labeled as spambots from several datasets, Other: miscellaneous other bots obtained from manual annotation, user feedback, etc.” (Yang, et al., 2020).

We manually inspected a sample of the tweets posted by human and bot accounts. The bots that tweeted right-leaning content almost always posted right-leaning news content, many were highly active, and some accounts’ tweets were consistently structured with the same general combination of hashtags, suggesting some level of automation in the posting process. Others in this category posted highly structured conspiracy theory-related tweets that include a preceding all-caps word (e.g., “BREAKING:” or “SICK:”), with links and references to well known conspiracy theories, conspiracists, and conspiracy-promoting news organizations and Web sites. However, we note that because the top decile covers a wide range of Botometer scores, there are some accounts at the lower end of that range that, upon manual inspection, appear to be hybrid human-bot accounts (semi-automated, a.k.a. cyborgs).

Accounts classified as bots that tweeted left-leaning content seemed, in general, less obviously automated, as their tweets do not exhibit clear pre-structured formats. We also found that some hashtags we classified as left-leaning were also being used by right-leaning users, and vice versa. We note that the Botometer scores for the sample of tweets included in Tables 7 and 8 show that top-decile users engaged in right-leaning discourse score, on average, higher than their counterparts engaged in left-leaning discourse.

Tweets posted by human users (examples in Tables 9 and 10) were, upon manual inspection, all classified as having a low likelihood of originating from a bot according to their botscores. The tweets from both the left-leaning and right-leaning chatter, while differing in content and opinion, showed more variability in sentence structure and appeared organic.

Top hashtags from the bottom and top deciles disaggregated by left- and right-leaning discourse can be found in Tables 11 and 12.

 

 
Table 6.1: Examples of Republican campaign-related hashtags.
 

 

 

 
Table 6.2: Examples of Democratic campaign-related hashtags.
 

 

 

 
Table 7: Examples of tweets from accounts classified as right-leaning bots.
 

 

 

 
Table 8: Examples of tweets from accounts classified as left-leaning bots.
 

 

 

 
Table 9: Examples of tweets from accounts classified as right-leaning human users.
 

 

 

 
Table 10: Examples of tweets from accounts classified as left-leaning human users.
 

 

 

 
Table 11: Top 14 hashtags used in right-leaning discourse. Conspiracy related hashtags highlighted in yellow.
 

 

 

 
Table 12: Top 14 hashtags used in left-leaning discourse. Conspiracy related hashtags in yellow.
 

 

Next, we turn our attention to the tweets in the top and bottom decile that include party-related hashtags (identified in Tables 6.1 and 6.2). Figure 1 shows the volume of tweets from the top 10 percent accounts by bot scores (N=4,388,807 tweets) and the lower 10 percent by bot scores (N=1,320,394 tweets) over time. In general, the volume of activity attributed to the top 10 percent by bot scores (i.e., likely bot users) vastly surpasses that of the bottom 10 percent (i.e., likely human accounts). This is intuitive since bots are typically much more active than human users. However, around specific external events, the two volumes of activity become close and comparable. One such example is the huge peak in bot activity engaging in left-leaning discourse after Biden chose Kamala Harris as his running mate on 11 August 2020. Another notable peak in engagement during and in the aftermath of the Democratic National Convention (DNC) from 17–20 August is exhibited for both left- and right-leaning bots. A manual inspection confirmed that there is indeed a general surge in both organic and suspicious tweets related to real-world events such as Harris being Biden’s VP pick (11 August) and the DNC (17–20 August).

 

 
Figure 1: We plot the volume of tweets disaggregated by most likely human (bottom decile of bot score distribution, accounting for approx. 1.3M tweets) and bot accounts (top decile of the bot score distribution, accounting for approx 4.3M tweets), and left or right leaning content based on hashtags identified in Tables 6.1 and 6.2.
 

 

We observe that the right-leaning discourse exhibits a similar phenomenon surrounding the Republican National Convention (RNC), which took place from 24–27 August. However, what is interesting about those engaging in right-leaning discourse is that we also see a general increase in activity during the DNC, although it should be noted that the proportional increase in activity is much higher for users that fall in the bottom decile of Botometer scores. To understand what these users were talking about on Twitter during the DNC, we isolated tweets from these accounts and compared the content of tweets from a few days preceding the DNC (15–16 August) and during the convention (17–20 August). We do this by examining hashtag, keyword, and bigram usage in the tweets. The discourse from human users does not change significantly: the topics of interest are heavily related to voting both before and during the DNC (e.g., “4moreyears”, “voteredtosaveamerica”). However, we see a greater change in conversation topics in the discourse from bot accounts, where there is a shift in topical coverage from, for example, Trump being endorsed by the NYC Police Benevolent Association to promoting voting for the Republican party. We list a subset of the 16 most used keywords by top-decile users engaging in right-leaning discourse in the days preceding and during the DNC in Table 13.

 

 
Table 13: Most used keywords by users engaging in right-leaning discourse who rank in the top decile of Botometer scores (likely-bots).
 

 

Human-bot interactions and echo chambers

To investigate the extent to which humans interact with each other and with bots, as a function of their political leaning, in Figure 2 we show the retweet behavior between four groups: left-leaning accounts that are likely human, left-leaning accounts that are likely bots, right-leaning accounts that are likely human, and right-leaning accounts that are likely bots. The directionality should be interpreted in a counter-clockwise fashion. For instance, of all retweets, left-leaning humans retweeting right-leaning humans accounts for 5.1 percent (cf., Figure 2a).

We make a few observations. First, accounts that are bots almost exclusively retweet accounts that are humans. This is consistent with prior observations that bots typically do not generate original content, and with the intuition that they tend to borrow credibility by targeting visible human users and rebroadcasting their content. Additionally, right-leaning bot accounts outnumber left-leaning bot accounts at roughly 4:1, whereas the ratio of right-leaning to left-leaning accounts that are likely human is roughly 2:1.
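The meso-flow statistics behind Figure 2 can be computed with a simple counting scheme, sketched below on placeholder data: each retweet is a (retweeter cohort, retweeted cohort) pair, counted and then normalized either by the grand total (Figure 2a) or by each source cohort's total (Figure 2b).

```python
# Sketch of the cohort-level retweet flow computation (placeholder edges).
from collections import Counter

retweets = [                      # (retweeter cohort, retweeted cohort)
    ("left_human", "left_human"),
    ("left_bot", "right_human"),
    ("right_human", "right_human"),
    ("right_bot", "right_human"),
]

flow = Counter(retweets)
total = sum(flow.values())

raw_pct = {pair: 100 * n / total for pair, n in flow.items()}                    # Figure 2a-style
source_totals = Counter(src for src, _ in retweets)
cond_pct = {pair: 100 * n / source_totals[pair[0]] for pair, n in flow.items()}  # Figure 2b-style
```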

 

 
Figure 2: Meso-flow of retweet behavior between four cohorts: left-leaning humans (N=2,614,524), left-leaning bots (N=18,106), right-leaning humans (N=8,699,479), and right-leaning bots (N=84,827). The total number of retweets is 11,416,936. Sample sizes are given in retweet volume, and the bot threshold was taken at a botscore greater than 0.7. Links should be interpreted counter-clockwise (i.e., in Figure 2a, 5.1 percent of retweets are left-leaning humans retweeting right-leaning humans). Figure 2a shows the raw percentages of user interaction. Figure 2b shows the group-based interaction, normalized by the total retweet volume within each sub-group.
 

 

Second, we quantify the presence of echo chambers (Bail, et al., 2018), i.e., users consuming content mostly produced by others with the same political views. Our analysis shows that 35 percent of retweets are left-leaning humans retweeting other left-leaning humans; 53 percent of retweets are right-leaning humans retweeting other right-leaning humans (cf., Figure 2a). The picture becomes clearer when conditioned upon a group’s total retweets (cf., Figure 2b). Left-leaning users retweet other left-leaning users 87 percent of the time, whereas right-leaning users retweet other right-leaning users 89 percent of the time (cf., Figure 2b). This indicates a slightly higher propensity of within-cohort interaction for right-leaning users. Left-leaning users retweet across the aisle around 13 percent of the time, whereas right-leaning users do so 10 percent of the time.

Bot retweet behavior roughly mirrors these statistics. Bots retweet both left-leaning and right-leaning users, but predominantly retweet from their same side of the aisle. However, 18 percent of left-leaning bots’ retweets are of right-leaning humans, compared with 11 percent of right-leaning bots’ retweets being of left-leaning humans. This indicates that left-leaning bots have a more diverse retweet appetite than right-leaning bots do.

In comparison with a similar graph in (Luceri, et al., 2019), this shows a trend of increased partisanship across both humans and bots. Note that we use a more stringent cut-off for bots, which contributes to decreased amounts of humans retweeting bots.

Foreign interference operations

Using Twitter’s unhashed banned user dataset, we tabulated all users that banned accounts had interacted with, by country as per Twitter’s definition. Figure 3 shows the relative propensity of interaction, split by the political affiliation of the user. Explicitly, let N_L be the total number of left-leaning users and N_R be the total number of right-leaning users. The relative propensities of Turkey, as an example, are given as:

 

Equation 1:
\[
p_{\mathrm{Turk},L} = \frac{n(\mathrm{Turk},L)}{N_L},
\qquad
p_{\mathrm{Turk},R} = \frac{n(\mathrm{Turk},R)}{N_R}
\]
 

 

where n(Turk,L) denotes the number of left-leaning users engaged by banned accounts attributed to Turkey, and n(Turk,R), respectively, the number of right-leaning users engaged by those accounts.
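A direct implementation of Equation 1 is sketched below with placeholder counts.

```python
# Sketch of the relative propensities in Equation 1 for one country (Turkey).
# All counts are placeholders, not the measured values.
N_L, N_R = 1_000_000, 2_000_000        # total left- and right-leaning users
n_turk_L, n_turk_R = 5_000, 12_000     # left-/right-leaning users engaged by banned Turkish accounts

p_turk_L = n_turk_L / N_L
p_turk_R = n_turk_R / N_R
print(p_turk_L, p_turk_R)              # relative propensities plotted in Figure 3
```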

According to Twitter, investigations of these campaigns suggest that the Ghanaian and Nigerian information operations targeted the Black Lives Matter movement, while accounts attributed to Saudi Arabia and Turkey both exhibit high levels of engagement with right-leaning users. Russia and China, however, targeted fringe communities and conservative groups more prominently.

 

 
Figure 3: The relative propensity is the political bias of each group divided by the total left- and right-leaning users.
 

 

Next, we plot specific targets of banned users and their relative positions in the Twitter network (cf., Figure 4). Edges are weighted by the number of retweets or quoted tweets between users. For visualization purposes, given the large size of the full network, in Figure 4 we only include users who have shared more than five politically oriented URLs, and links with weights greater than 100, where the weight conveys the total number of pairwise interactions between two nodes. The visualization was generated using the distributed recursive layout algorithm and the ForceAtlas2 algorithm (Jacomy, et al., 2014).
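A sketch of how such a filtered interaction network might be assembled with networkx is shown below; the interaction list, URL counts, and thresholds are placeholders that follow the filtering rules stated above.

```python
# Sketch of building and filtering the retweet/quote network of Figure 4.
# Interactions and per-user URL counts are placeholders.
import networkx as nx

interactions = [("userA", "userB"), ("userA", "userB"), ("userC", "userB")]  # hypothetical
political_urls = {"userA": 12, "userB": 40, "userC": 2}  # politically oriented URLs shared per user

G = nx.DiGraph()
for src, dst in interactions:
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# Keep users with more than five political URLs and edges with weight above 100.
keep = {u for u, n in political_urls.items() if n > 5}
H = G.subgraph(keep).copy()
H.remove_edges_from([(u, v) for u, v, d in H.edges(data=True) if d["weight"] <= 100])
```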

There are a few observations to note. First, there are roughly six left-leaning clusters (in blue) and two right-leaning clusters (in red). This indicates that right-leaning users are more tightly knit than left-leaning users are. In Figure 4, we also show the position of banned Chinese-ops users (green diamonds) and Russian operations (yellow diamonds).

Since banned users often interact with Twitter celebrities, the users shown are ones exclusive to each cohort. That is, yellow diamonds are users who have only been associated with banned Russian accounts. We also observe that Chinese state-sponsored users tend to interact with Republican users more. Russian sponsored interactions also emerged outside of the right-leaning and left-leaning cores. Together, these suggest information operations are targeted toward specific communities based on partisan or ideological leanings.

 

 
Figure 4: Network visualization of users engaging in election discourse. Six left-leaning cores and two right-leaning cores emerge.
 

 

 

++++++++++

Distortion

In this section we focus on narrative distortion in relation to the 2020 U.S. presidential election. Although there is no established definition of distorted narrative in the literature, authors typically use the term for information shared on social media that is most likely false and that appears in a systematic rather than spurious manner. For example, Allcott and Gentzkow (2017) conceptualize fake news as distorted signals uncorrelated with the truth. However, to avoid the conundrum of establishing what is true and what is false in order to qualify a piece of information as fake news (or not), in this study we focus on conspiracy theories, another typical example of distorted narratives. Conspiracy theories are most likely false narratives, oftentimes built upon rumors or unverifiable information, shared on social networks by users or groups with the aim of deliberately deceiving unsuspecting individuals who genuinely believe such claims (van Prooijen, 2019). They are typically used by underground groups as an explanation for an event or situation that invokes a conspiracy, often with a political motive. Some of the most prominent recent Twitter conspiracy theories and groups revolve around topics such as objections to vaccinations, false claims related to 5G technology, a plethora of coronavirus-related false claims, and the flat earth movement (Ferrara, 2020). In the context of the 2020 U.S. presidential election, the proliferation of political conspiratorial narratives could have an adverse effect on political discourse and democracy.

In our analysis we focus on three main conspiracy groups:

1) QAnon: A far-right conspiracy movement that has gained popularity in the run-up to the 2020 election. This group’s theory suggests that President Trump has been battling a Satan-worshipping global child sex-trafficking ring, and that an anonymous source called ‘Q’ is cryptically providing secret information about the ring (Zuckerman, 2019). These users frequently use hashtags such as #qanon, #wwg1wga (where we go one, we go all), #taketheoath, #thegreatawakening and #qarmy. An example of a typical tweet from a QAnon supporter is: “@potus @realDonaldTrump was indeed correct,the beruit fire was hit by a missile, oh and to the rest of you calling this fake,you are not a qanon you need to go ahead and change to your real handles u liberal scumbags just purpously put out misinfo and exposed yourselves,thnxnan”

2) “-gate” conspiracies: Another class of conspiratorial content is signalled by the suffix ‘-gate’, with theories such as obamagate, an unvalidated claim that Barack Obama’s officials conspired to entrap Trump’s former national security adviser, Michael Flynn, as part of a larger plot to bring down the then-incoming president. Another example of a “-gate” conspiracy theory is pizzagate, a debunked claim that connects several high-ranking Democratic Party officials and U.S. restaurants with an alleged human trafficking and child sex ring. An example of a typical conspiratorial tweet related to these two conspiracies is: “#obamagate when will law enforcement take anything seriously? there is EVIDENCE!!!! everyone involved in the trafficking ring is laughing because they KNOW nothing will be done. @HillaryClinton @realDonaldTrump. justice will be served one way or another. literally disgusting.”

3) Covid conspiracies: A plethora of false claims related to the coronavirus pandemic emerged recently. They are mostly about the scale of the pandemic and the origin, prevention, diagnosis, and treatment of the disease. The false claims typically go alongside the hashtags such as #plandemic, #scandemic or #fakevirus. A typical tweet referring to the false claims regarding the origins of the coronavirus is: “@fyjackson @rickyb_sports @rhus00 @KamalaHarris @realDonaldTrump The plandemic is a leftist design. And it’s backfiring on them. We’ve had an effective treatment for covid19, the entire time. Leftists hate Trump so much, they are willing to murder 10’s of thousands of Americans to try to make him look bad. The jig is up.”

Looking at conspiracy-related narratives in our dataset, we observe that QAnon-related material has more highly active and engaged users (as measured by the ratio of tweets to unique users) than -gate narratives. The most frequently used hashtag, #wwg1wga, had more than 600K tweets from 140K unique users; by contrast, #obamagate had 414K tweets from 125K users. This suggests that the QAnon community using hashtags such as #wwg1wga has a more active user base. Some such examples are in Figure 5, which illustrates the volume (number of tweets and number of unique users) of nine popular conspiracies.

Furthermore, we analyzed sentiment by computing the Valence Aware Dictionary and sEntiment Reasoner (VADER; Hutto and Gilbert, 2014) score for each tweet containing one of the tracked hashtags. Interestingly, the mean sentiment of QAnon-associated tweets was much more positive than the mean sentiments for ‘-gate’ and Covid conspiracy theories, which are negative.
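The per-tweet scoring can be sketched as follows with the vaderSentiment package; the tweets and group labels are placeholders.

```python
# Sketch of per-tweet sentiment scoring with VADER; tweets are placeholders.
from statistics import mean
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

tweets_by_group = {
    "qanon": ["Patriots stand together! #wwg1wga"],
    "gate": ["Justice will be served. #obamagate"],
}

for group, tweets in tweets_by_group.items():
    compounds = [analyzer.polarity_scores(t)["compound"] for t in tweets]
    print(group, mean(compounds))  # VADER compound score lies in [-1, 1]
```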

 

 
Figure 5: Volume of conspiracy related hashtags and average sentiment of tweets with those hashtags.
 

 

To demonstrate the usage of conspiratorial hashtags, in Tables 14 to 17 we select two of the three most frequently retweeted tweets for #wwg1wga, #taketheoath, #obamagate and #plandemic. This gives an example of the kind of popular content and misinformation spread by these communities.

 

 
Table 14: Most retweeted #wwg1wga tweets.
 

 

 

 
Table 15: Most retweeted #taketheoath tweets.
 

 

 

 
Table 16: Most retweeted #obamagate tweets.
 

 

 

 
Table 17: Most retweeted #plandemic tweets.
 

 

These tweets were retweeted more than 25K times in total and all convey conspiratorial content; multiple hashtags related to conspiracy theories are often used together. A video link accompanies the #taketheoath tweets, showing individuals taking an oath and pledging support to the QAnon group. Considering the range of topics behind the alleged conspiracies, it is worth analyzing the differences in content and writing style. In QAnon-related tweets, praise and support are common themes directed towards both President Trump and the QAnon community. The -gate related tweets take on a more accusatory tone, claiming that individuals are guilty of crimes. These factors help to explain the observed overall positive and negative sentiment.

 

 
Figure 6: Word clouds of most frequently associated hashtags with #qanon and #obamagate.
 

 

Our methodology for collecting conspiratorial hashtags and identifying proponents in the conspiracy community starts with a known conspiracy hashtag. Word clouds of hashtags co-occurring in the same tweets then surface other hashtags frequently used by the conspiracy population. Additional tweets and conspiracy-related material were identified by iterating this process with further word clouds and neighboring keywords; a sketch of this seed-and-expand step is shown below.
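A minimal sketch of the seed-and-expand step, on placeholder data, follows.

```python
# Sketch of expanding the tracked hashtag set from a single seed hashtag by
# counting hashtags that co-occur with it in the same tweets (placeholder data).
from collections import Counter

SEED = "qanon"
tweets_hashtags = [
    ["qanon", "wwg1wga"],
    ["qanon", "taketheoath"],
    ["election2020"],
]

co_occurring = Counter()
for tags in tweets_hashtags:
    tags = [t.lower() for t in tags]
    if SEED in tags:
        co_occurring.update(t for t in tags if t != SEED)

print(co_occurring.most_common(10))  # candidate hashtags to add to the tracked set
```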

In Figure 7, we illustrate the temporal dynamics of the top four tracked conspiracy theory hashtags throughout the dataset. At the start of the data collection, the conspiracy theory narratives experience a spike in activity. The hashtags #wwg1wga, #qanon and #taketheoath follow very similar usage patterns, concurrently peaking in late June then falling off during July; #obamagate follows that trend but to a lesser degree, with subsequent peaks not reflected in the activity of the other conspiracies. The shape of this observed pattern can be explained by actions taken by Twitter to reduce misinformation on its platform. In July, Twitter engaged in a takedown of over 7,000 QAnon-associated accounts [1], which would account for the reduction in the number of tweets using these hashtags that we see in Figure 7. Similarly, Facebook banned QAnon-associated groups and accounts across all its platforms on 7 October 2020 [2].

 

 
Figure 7: Tweet timeline for most frequently used conspiracy hashtags.
 

 

Twitter users have the option to manually type a location into their profile. In our data collection, 57.6 percent of users report a location. Figure 8 shows the proportion of these users who have used a QAnon-associated hashtag, broken down by state. Perhaps unsurprisingly, given the affinity of this conspiracy with alt-right and far-right narratives, its adoption is most prominent in southern states and historically very conservative states.

 

 
Figure 8: The proportion of users using QAnon hashtags for each state.
 

 

Conspiracies and media bias

We further analyze how conspiratorial narratives are endorsed by users, conditioned upon where they fall on the political spectrum. We assume that a user endorsed a piece of information if they retweeted it without adding a comment, meaning we only consider retweets and not quoted tweets. This is done to avoid the conundrum of interpreting the stance a user takes via their comment (i.e., quote). Each user is given a conspiratory score that takes a value between 0 and 1 and is defined as the proportion of endorsed conspiratory narratives out of all narratives endorsed by that user. We additionally separate the users into two groups: conspiratory and non-conspiratory. The binary classification is based on a threshold value t: all users who have a conspiratory score larger than t are classified as conspiratory users. The threshold value t is calculated as the median of all positive conspiratory scores. That way, we make sure to identify the users who are the strongest endorsers of conspiratory narratives and prevent the misclassification of users who accidentally or occasionally retweeted a tweet containing some of the conspiratory keywords. The zero-inflated distribution of the conspiratory scores justifies the use of the median of all positive values to set the threshold t.
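A sketch of the score and threshold computation, on placeholder counts, follows.

```python
# Sketch of the conspiratory score and the median-based threshold t described above.
# `endorsements` maps a user to (conspiratorial endorsements, total endorsements); placeholders.
from statistics import median

endorsements = {"user_a": (8, 10), "user_b": (0, 25), "user_c": (1, 40)}

score = {u: c / n for u, (c, n) in endorsements.items() if n > 0}
positive_scores = [s for s in score.values() if s > 0]
t = median(positive_scores)                     # threshold over non-zero scores

conspiratory_users = {u for u, s in score.items() if s > t}
```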

To understand the relation between distorted narratives and political discourse, we explore the users’ most likely political leaning by quantifying their endorsements to the prominent media outlets across the political spectrum (see Methodology). Each user in our dataset is therefore characterized by the political leaning that can take any of the following values: left, lean left, center, lean right, right.

Figure 9 illustrates the significantly different distributions of the two groups of users. Conspiratory users tend to skew to the right, suggesting that users who are prone to sharing conspiratory narratives are likely to endorse right-leaning media. Non-conspiratory users, those unlikely to share conspiratory narratives, are distributed more evenly across the political spectrum, with significant proportions on the left and center of the spectrum. A two-sided t-test, performed on both continuous (t=5.17) and discrete (t=7.5) data, confirms that the two distributions are significantly different with p<0.005.

 

 
Figure 9: The distributions of two groups of users across the political spectrum: conspiratory and non-conspiratory.
 

 

The following analysis provides additional insight into the distribution of conspiratory users across the political spectrum. Figure 10 (upper panel) illustrates the numbers of conspiratory and non-conspiratory users for each political affiliation of the media platforms endorsed. Note that the totals pictured are lower than the total number of users in the dataset, as only users who endorsed one of the rated media outlets are included in this analysis. Further, we focus on the proportions of conspiratory users across the political spectrum, illustrated in Figure 10 (lower panel). Almost a quarter of users who endorse predominantly right-leaning media platforms are likely to engage in sharing conspiratory narratives. By contrast, out of all users who endorse left-leaning media, approximately two percent are likely to share conspiratory narratives.

 

 
Figure 10: Distribution of users across the political spectrum (upper). Proportion of users that are likely to share conspiratory content (lower).
 

 

Conspiracies and bots

To connect our analyses through the lens of automation and distortion, we finally study conspiracies pushed by bots. We investigate the role of likely automated accounts (bots) within these conspiracy communities and media sources. Leveraging Botometer scores, we present general trends and observations of bot activity with respect to the tracked conspiracies and specific media outlets. The main questions we seek to answer are: are bots used to target specific groups, and how do they push conspiracy narratives alongside news-related media?

We compare the botscores of four groups, QAnon (27K users), -gate (21K users), Covid (3K users) and non-conspiracy (30K users). If a user does not use any of our tracked conspiratorial hashtags, they are classified as a non-conspiracy user. A user who uses conspiratorial hashtags can be assigned to multiple conspiracy groups if they use keywords from more than one category.

The QAnon, -gate and Covid conspiracy groups have a higher median botscore than non-conspiracy users (cf., Figure 11). For the QAnon hashtag group, this result is unsurprising, as in Table 4 (top bot decile) we find two hashtags associated with the conspiracy. The Covid hashtag group has the highest botscores. It is worth noting that, in our dataset, the collected number of Covid conspiracy botscores is far smaller than for the other groups, as the volume of Covid conspiracy tweets is lower (cf., Figure 5). The boxplots for the QAnon and -gate groups look similar. Previous work established the presence of numerous bots in the Covid community (Ferrara, 2020). Although it is difficult to translate botscore quantiles into an estimated number of bots, it should be recognized that a large portion of the conspiracy-related discourse comes from bots, and that there are tens of thousands of such conspiracy bots.

 

 
Figure 11: Boxplots of Botometer scores for users categorized as QAnon, -gate and Non-Conspiracy.
 

 

To further investigate the conspiracy groups, we examine the overlap of communities. Of the -gate users, 55 percent are identified as also sharing QAnon hashtags. For the Covid community, 78 percent use QAnon hashtags, in line with previous findings (Ferrara, 2020). This overlap suggests that users who share conspiracy related content are prone to adopting multiple conspiracy narratives and that the communities are highly connected.

Hyper-partisan media outlets and bots

News media outlets play an important role in the spread of information on Twitter, since content is shared through embedded or clickable links to their Web domains. This allows us to investigate the activity of users with respect to specific news Web site domains.

An analysis using Botometer scores in Figure 12 shows, for each news Web site, the proportion of users sharing its URLs who have also used QAnon hashtags at any point in our dataset. Hyper-partisan news outlets like One America News Network (OANN) and Infowars are outliers, seeing the greatest proportion of their user base tweeting QAnon material. These two outlets also have the highest bot scores, but the volume of tweets that contain these Web site URLs is fairly low. Left-leaning news outlets such as the New York Times and Washington Post have low average Botometer scores and low proportions of QAnon users, but the volume of tweets mentioning their URLs is much larger (approx. 29 times). The proportion of users using QAnon keywords is highly correlated with the average Botometer score (correlation coefficient: 0.947) across the spectrum of left, right and neutral outlets.
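The reported correlation can be computed, in principle, from the per-outlet statistics; the sketch below uses scipy with placeholder values rather than the measured ones.

```python
# Sketch of the per-outlet correlation between QAnon-user share and mean bot score
# (Figure 12). Values are placeholders, not the measured data.
from scipy.stats import pearsonr

qanon_share = [0.02, 0.03, 0.18, 0.22]    # share of an outlet's sharers using QAnon hashtags
mean_botscore = [0.11, 0.13, 0.41, 0.45]  # mean Botometer score of those sharers

r, p = pearsonr(qanon_share, mean_botscore)
print(r, p)
```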

 

 
Figure 12: Proportion of users using QAnon hashtags and mean botscore for each news outlet, dot size indicates relative number of tweets.
 

 

Additionally, we explore the proportion of bots and compare it to users’ political leaning and usage of conspiratory language (cf., Figure 13). Bots appear across the political spectrum and are likely to endorse polarizing views. The smallest fraction of automated accounts is among the users who endorse centrist media outlets (four percent). This distribution is somewhat to be expected, as bot creators typically seek to increase the visibility of automated accounts by associating them with popular narratives, which are often polarized. We also observe a larger proportion of automated accounts that endorse right-leaning media outlets. Almost 10 percent of all users who share content from right-leaning media are most likely automated, compared to less than six percent of users who share left-leaning news. By performing t-tests on the distributions of bot scores, we confirm that the differences between the pairs (left-center, center-right, and left-right) are all significant with p<0.005.

The proportion of bots also varies between users who are likely to share conspiratory narratives and those who are not. Almost 13 percent of all users that endorse conspiracy narratives are likely automated. This is significantly more than among accounts that never share conspiracy narratives, of which only five percent are likely automated (t=4.4, p<0.005). It is possible that such observations are in part a byproduct of the fact that bots are programmed to interact with more engaging content, and inflammatory topics such as conspiracy theories provide fertile ground for engagement (Stella, et al., 2018). On the other hand, bot activity can inflate certain narratives and make them popular. Automated accounts that are part of an organized campaign can purposely propel some of the conspiracy narratives, further polarizing the political discourse.

 

 
Figure 13: The proportions of bots in different populations grouped by the political leaning (left) and the endorsement of the conspiratory content (right).
 

 

 

++++++++++

Conclusions

In this study, we looked at possible sources of social media manipulation in the context of the upcoming 2020 U.S. presidential election. We focused on two aspects, namely automation and the distortion of narratives. Automation in the form of bots is more common in the run-up to an important political event like an election. When examining partisan discourse disaggregated by the likelihood of a user being a human or a bot, we find that, in general, bots are more likely to include right-leaning and conspiracy-related hashtags in their tweets, while humans tend to use both right- and left-leaning hashtags. This reflects the fact that many of the conspiracy theories tend to be supported by right-leaning media.

When viewing human and bot accounts in the context of political affiliation, right-leaning bots tend to use highly structured tweets and reference known conspiracy theories; left-leaning bots are much less structured in their tweets. We also find that, at a user level, bots typically generate more tweets than human users do. In comparing bot and human activity to real-time events, left-leaning bots and humans seem to follow the same general activity trends, reacting to political events affiliated with the Democratic party. While right-leaning bots and humans also react to Republican events like the RNC, there is also a notable increase in their activity during the DNC. In further investigating this increase in activity, we find that tweets from humans increase in volume but do not necessarily change in topic; the right-leaning bot discourse, however, not only experiences an increase in volume but also shifts from general party support to encouraging party members to vote. We find that the activity of right-leaning bots and humans is not as correlated as that of their left-leaning counterparts.

We observe highly partisan retweeting behavior among both bots and humans. Humans predominantly retweet other humans from the same party. In contrast, while bots mostly retweet humans along party lines, they also retweet humans from across the aisle. In sum, both left- and right-leaning discourse has become more self-reinforcing.

In visualizing the network in full, we find six liberal clusters and two conservative clusters, which suggests that right-leaning users form a larger and denser community, centered around Trump. Information operations from different countries are shown to interact with very different user groups. For instance, Nigerian and Ghanaian accounts engaged with left-leaning users close to four times more than with right-leaning users. In contrast, banned Turkish accounts engaged with left- and right-leaning users at almost a one-to-one ratio. Russia and China targeted predominantly conservative users and some fringe communities.

When we shifted our focus to distorted narratives, three major political conspiracy types emerged: QAnon, “-gate”, and COVID-related conspiracies. We discovered a sizable overlap between users spreading these narratives, but the content and sentiment varied between categories. Users who are prone to sharing conspiratory narratives are likely to endorse right-leaning media. Non-conspiratory users, those unlikely to share conspiratory narratives, are distributed more evenly across the political spectrum. Bots play an important role in spreading these conspiracies, targeting content from hyper-partisan news media outlets. Nearly 13 percent of users engaging with conspiracies are bots, as opposed to just five percent of those engaging with non-conspiracy content. On the one hand, this is good news: not all the popularity of political conspiracies is genuine. On the other hand, since bots can inflate narratives and bring organic attention to unsuspecting users, the high prevalence of bots in conspiracy narratives is a problem that requires urgent attention. Twitter has been taking preventative measures, like suspending QAnon accounts, to hinder the spread of political conspiracies, but our work and others’ suggests this may not be enough [3].

In conclusion, with this work, we urge researchers, computational journalists, and practitioners alike to investigate problematic social media manipulation, including some of the phenomena we highlighted; in that spirit, we publicly released the dataset used in this work for reproducibility as well as to enable future work in this problem space (Chen, et al., 2020). End of article

 

About the authors

Emilio Ferrara is Associate Professor of Communication and Computer Science in the Annenberg School for Communication and Journalism at the University of Southern California.
E-mail: emiliofe [at] usc [dot] edu

Herbert Chang is a Ph.D. student in the Annenberg School for Communication and Journalism at the University of Southern California.
E-mail: hochunhe [at] usc [dot] edu

Emily Chen is a computer science Ph.D. student at the University of Southern California.
E-mail: echen920 [at] usc [dot] edu

Goran Muric is a postdoctoral research associate at the University of Southern California’s Information Sciences Institute.
E-mail: gmuric [at] isi [dot] edu

Jaimin Patel is a computer science M.S. student at the University of Southern California.
E-mail: jaiminpa [at] usc [dot] edu

 

Acknowledgements

This work has not been supported by any funding agency, private organization, or political party.

 

Notes

1. https://www.washingtonpost.com/technology/2020/07/22/twitter-bans-qanon-accounts/.

2. https://www.theguardian.com/technology/2020/oct/06/qanon-facebook-ban-conspiracy-theory-groups.

3. https://www.washingtonpost.com/technology/2020/10/03/twitter-banished-worst-qanon-accounts-more-than-93000-remain-site-research-shows/.

 

References

Hunt Allcott and Matthew Gentzkow, 2017. “Social media and fake news in the 2016 election,” National Bureau of Economic Research, Working Paper, 23089.
doi: https://doi.org/10.3386/w23089, accessed 19 October 2020.

Dennis Assenmacher, Lena Clever, Lena Frischlich, Thorsten Quandt, Heike Trautmann, and Christian Grimme, 2020. “Demystifying social bots: On the intelligence of automated social media actors,” Social Media + Society (1 September).
doi: https://doi.org/10.1177/2056305120939264, accessed 19 October 2020.

Adam Badawy, Kristina Lerman, and Emilio Ferrara, 2019. “Who falls for online political manipulation?” WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference, pp. 162–168.
doi: https://doi.org/10.1145/3308560.3316494, accessed 19 October 2020.

Christopher A. Bail, Brian Guay, Emily Maloney, Aidan Combs, D. Sunshine Hillygus, Friedolin Merhout, Deen Freelon, and Alexander Volfovsky, 2020. “Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter Users in late 2017,” Proceedings of the National Academy of Sciences, volume 117, number 1 (7 January), pp. 243–250.
doi: https://doi.org/10.1073/pnas.1906420116, accessed 19 October 2020.

Christopher A. Bail, Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, M.B. Fallin Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky, 2018. “Exposure to opposing views on social media can increase political polarization,” Proceedings of the National Academy of Sciences, volume 115, number 37 (28 August), pp. 9,216–9,221.
doi: https://doi.org/10.1073/pnas.1804840115, accessed 19 October 2020.

Alessandro Bessi and Emilio Ferrara, 2016. “Social bots distort the 2016 U.S. presidential election online discussion,” First Monday, volume 21, number 11, at https://firstmonday.org/article/view/7090/5653, accessed 19 October 2020.
doi: https://doi.org/10.5210/fm.v21i11.7090, accessed 19 October 2020.

Leticia Bode, Ceren Budak, Jonathan M. Ladd, Frank Newport, Josh Pasek, Lisa O. Singh, Stuart N. Soroka, and Michael W. Traugott, 2020. Words that matter: How the news and social media shaped the 2016 presidential campaign. Washington, D.C.: Brookings Institution Press.

Alexandre Bovet and Hernán A. Makse, 2019. “Influence of fake news in Twitter during the 2016 US presidential election,” Nature Communications, volume 10, article number 7 (9 January).
doi: https://doi.org/10.1038/s41467-018-07761-2, accessed 19 October 2020.

Emily Chen, Ashok Deb, and Emilio Ferrara, 2020. “#Election2020: The first public Twitter dataset on the 2020 US presidential election,” arXiv:2010.00600 (1 October), at https://arxiv.org/abs/2010.00600, accessed 19 October 2020.

Stefano Cresci, 2020. “A decade of social bot detection,” Communications of the ACM (September).
doi: https://doi.org/10.1145/3409116, accessed 19 October 2020.

Clayton Allen Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, 2016. “BotOrNot: A system to evaluate social bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274.
doi: https://doi.org/10.1145/2872518.2889302, accessed 19 October 2020.

Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler, 2017. “Partisanship, propaganda, and disinformation: Online media and the 2016 U.S. presidential election,” Berkman Klein Center, Research Publication, 2017–6 (16 August), at https://cyber.harvard.edu/publications/2017/08/mediacloud, accessed 19 October 2020.

Federal Bureau of Investigation and U.S. Department of Justice, 2018. “United States of America v. Internet Research Agency LLC” (16 February), at https://www.justice.gov/file/1035477/download, accessed 19 October 2020.

Emilio Ferrara, 2020. “What types of COVID-19 conspiracies are populated by Twitter bots?” First Monday, volume 25, number 6, at https://firstmonday.org/article/view/10633/9548, accessed 19 October 2020.
doi: https://doi.org/10.5210/fm.v25i6.10633, accessed 19 October 2020.

Emilio Ferrara, 2015. “Manipulation and abuse on social media,” ACM SIGWEB Newsletter (April), article number 4.
doi: https://doi.org/10.1145/2749279.2749283, accessed 19 October 2020.

Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini, 2016. “The rise of social bots,” Communications of the ACM, volume 59, number 7.
doi: https://doi.org/10.1145/2818717, accessed 19 October 2020.

Christopher J. Galdieri, Jennifer C. Lucas, and Tauna S. Sisco, 2018. The role of Twitter in the 2016 US election. New York: Palgrave Pivot.
doi: https://doi.org/10.1007/978-3-319-68981-4, accessed 19 October 2020.

Andrew M. Guess, Brendan Nyhan, and Jason Reifler, 2020. “Exposure to untrustworthy websites in the 2016 US election,” Nature Human Behaviour, volume 4, pp. 472–480.
doi: https://doi.org/10.1038/s41562-020-0833-x, accessed 19 October 2020.

C.J. Hutto and Eric Gilbert, 2014. “VADER: A parsimonious rule-based model for sentiment analysis of social media text,” Eighth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8109, accessed 19 October 2020.

Mathieu Jacomy, Tommaso Venturini, Sebastien Heymann, and Mathieu Bastian, 2014. “ForceAtlas2, a continuous graph layout algorithm for handy network visualization designed for the Gephi software,” PLoS ONE, volume 9, number 6, e98679 (10 June).
doi: https://doi.org/10.1371/journal.pone.0098679, accessed 19 October 2020.

Julie Jiang, Emily Chen, Kristina Lerman, and Emilio Ferrara, 2020. “Political polarization drives online conversations about COVID-19 in the United States,” Human Behavior and Emerging Technologies, volume 2, number 3, pp. 200–211.
doi: https://doi.org/10.1002/hbe2.202, accessed 19 October 2020.

Charles Kriel and Alexa Pavliuc, 2019. “Reverse engineering Russian Internet Research Agency tactics through network analysis,” Defence Strategic Communications, at https://stratcomcoe.org/ckriel-apavliuc-reverse-engineering-russian-internet-research-agency-tactics-through-network, accessed 19 October 2020.

Luca Luceri, Ashok Deb, Silvia Giordano, and Emilio Ferrara, 2019. “Evolution of bot and human behavior during elections,” First Monday, volume 24, number 9, at https://firstmonday.org/article/view/10213/8073, accessed 19 October 2020.
doi: https://doi.org/10.5210/fm.v24i9.10213, accessed 19 October 2020.

Andrea Millimaggi and Florian Daniel, 2019. “On Twitter bots behaving badly: Empirical study of code patterns on GitHub,” In: Maxim Bakaev, Flavius Frasincar, and In-Young Ko (editors). Web engineering. Lecture Notes in Computer Science, volume 11496. Cham, Switzerland: Springer, pp. 187–202.
doi: https://doi.org/10.1007/978-3-030-19274-7_14, accessed 19 October 2020.

Leonardo Nizzoli, Serena Tardelli, Marco Avvenuti, Stefano Cresci, Maurizio Tesconi, and Emilio Ferrara, 2020. “Charting the landscape of online cryptocurrency manipulation,” IEEE Access, volume 8, pp. 113,230–113,245.
doi: https://doi.org/10.1109/access.2020.3003370, accessed 19 October 2020.

Brian L. Ott, 2017. “The age of Twitter: Donald J. Trump and the politics of debasement,” Critical Studies in Media Communication, volume 34, number 1, pp. 59–68.
doi: https://doi.org/10.1080/15295036.2016.1266686, accessed 19 October 2020.

Thorsten Quandt, 2018. “Dark participation,” Media and Communication, volume 6, number 4 (8 November).
doi: https://doi.org/10.17645/mac.v6i4.1519, accessed 19 October 2020.

Massimo Stella, Emilio Ferrara, and Manlio De Domenico, 2018. “Bots increase exposure to negative and inflammatory content in online social systems,” Proceedings of the National Academy of Sciences, volume 115, number 49 (20 November), pp. 12,435–12,440.
doi: https://doi.org/10.1073/pnas.1803470115, accessed 19 October 2020.

Indigo J. Strudwicke and Will J. Grant, 2020. “#JunkScience: Investigating pseudoscience disinformation in the Russian Internet Research Agency tweets,” Public Understanding of Science, volume 29, number 5, pp. 459–472.
doi: https://doi.org/10.1177/0963662520935071, accessed 19 October 2020.

Jan-Willem van Prooijen, 2019. “Belief in conspiracy theories: Gullibility or rational skepticism?” In: Joseph P. Forgas and Roy Baumeister (editors). Social psychology of gullibility: Conspiracy theories, fake news and irrational beliefs. New York: Routledge.
doi: https://doi.org/10.4324/9780429203787-17, accessed 19 October 2020.

Dror Walter, Yotam Ophir, and Kathleen Hall Jamieson, 2020. “Russian Twitter accounts and the partisan polarization of vaccine discourse, 2015–2017,” American Journal of Public Health, volume 110, number 5, pp. 718–724.
doi: https://doi.org/10.2105/AJPH.2019.305564, accessed 19 October 2020.

Samuel C. Woolley and Philip N. Howard, 2018. Computational propaganda: Political parties, politicians, and political manipulation on social media. New York: Oxford University Press.
doi: https://doi.org/10.1093/oso/9780190931407.001.0001, accessed 19 October 2020.

Kai-Cheng Yang, Onur Varol, Pik-Mai Hui, and Filippo Menczer, 2020. “Scalable and generalizable social bot detection through data selection,” Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, number 1.
doi: https://doi.org/10.1609/aaai.v34i01.5460, accessed 19 October 2020.

Kai-Cheng Yang, Onur Varol, Clayton A. Davis, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, 2019. “Arming the public with artificial intelligence to counter social bots,” Human Behavior and Emerging Technologies, volume 1, number 1, pp. 48–61.
doi: https://doi.org/10.1002/hbe2.115, accessed 19 October 2020.

Asta Zelenkauskaite and Marcello Balduccini, 2017. “‘Information warfare’ and online news commenting: Analyzing forces of social influence through location-based commenting user typology,” Social Media + Society (17 July).
doi: https://doi.org/10.1177/2056305117718468, accessed 19 October 2020.

Ethan Zuckerman, 2019. “QAnon and the emergence of the unreal,” Journal of Design and Science, number 6 (15 July).
doi: https://doi.org/10.21428/7808da6b.6b8a82b9, accessed 19 October 2020.

 


Editorial history

Received 11 October 2020; revised 15 October 2020; accepted 16 October 2020.


Creative Commons License
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Characterizing social media manipulation in the 2020 U.S. presidential election
by Emilio Ferrara, Herbert Chang, Emily Chen, Goran Muric, and Jaimin Patel.
First Monday, Volume 25, Number 11 - 2 November 2020
https://firstmonday.org/ojs/index.php/fm/article/download/11431/9993
doi: https://dx.doi.org/10.5210/fm.v25i11.11431