With people moving out of physical public spaces due to containment measures to tackle the novel coronavirus (COVID-19) pandemic, online platforms become even more prominent tools to understand social discussion. Studying social media can be informative to assess how we are collectively coping with this unprecedented global crisis. However, social media platforms are also populated by bots, automated accounts that can amplify certain topics of discussion at the expense of others. In this paper, we study 43.3M English tweets about COVID-19 and provide early evidence of the use of bots to promote political conspiracies in the United States, in stark contrast with humans who focus on public health concerns.
At the time of this writing (mid-April 2020), the novel coronavirus (COVID-19) pandemic has already put tremendous strain on citizens, resources, and economies around the world. Social distancing measures, travel bans, self-quarantines, and business closures are changing the very fabric of societies worldwide. With people forced out of the safety and comfort of their life routines, social media take center stage, more than ever, as a mirror to global social discussions. It is therefore of paramount importance to determine whether online chatter reflects genuine conversations among people or is instead distorted by the activity of automated accounts, often referred to as bots (a.k.a. social bots, sybil accounts, etc.). The presence of bots has been documented in the context of online political discussion (Bessi and Ferrara, 2016; Ferrara, 2017; Luceri, et al., 2019), public health (Subrahmanian, et al., 2016; Broniatowski, et al., 2018; Sutton, 2018; Hwang, et al., 2012; Allem, et al., 2017), civil unrest (Stella, et al., 2018), stock market manipulation (Ferrara, 2015), and the spread of false news (Vosoughi, et al., 2018; Grinberg, et al., 2019; Shao, et al., 2018), alongside other tools such as troll accounts (Broniatowski, et al., 2018; Sutton, 2018; Bail, et al., 2020; Badawy, et al., 2018; Badawy, et al., 2019; Luceri, et al., 2020).
In this paper, we chart the landscape of Twitter chatter within the context of COVID-19 related conversation, in particular to characterize the presence and activity of bots. We leverage a large Twitter dataset that our group has been continuously collecting since 21 January 2020, when the first COVID-19 case was announced in the United States (Holshue, et al., 2020). We use a combination of machine learning and manual validation to identify bots, and then use computational tools and statistical analysis to describe their behavior, in contrast with human activity, and their focal topics of discussion.
Research questions & contributions of this work
We posit two research questions, formalized below. By leveraging a large COVID-19 Twitter dataset (cf. Methods & data), we provide empirical evidence as well as theoretical grounding to answer them:
- Research question 1 (RQ1): Is there any evidence of the presence of automated accounts (bots) in the online discussion surrounding COVID-19 on Twitter? If so, what is their prevalence and volume of activity compared to that of human accounts? Do bots exhibit any behavioral characteristics that are distinctively theirs and that significantly differ from the behavior of human users?
- Research question 2 (RQ2): Prior work demonstrated how bots are used to push ideologies and political narratives on social media. Do we observe any patterns of preferential behavior in which bots appear to focus on fueling specific political or ideological topics of discussion?
In this work we provide the following contributions:
- First, we combine machine learning and human validation to identify accounts that show signatures of automation (bots) and provide a statistical characterization of their behavior, contrasted with human users, specifically in the English-speaking Twitter-sphere.
- Then, we leverage content and time-series analysis techniques to illustrate the topical focus of bots and human users, highlighting that bots appear to be used to promote conspiracy theories in the United States, in stark contrast with human users who focus on public health and welfare.
Background & related literature
There has already been a wealth of studies that looked into social media dynamics in the context of COVID-19. As of the time of this writing (mid-April 2020), the vast majority of these studies are pre-print papers that provide a timely, yet partial, characterization of online discussion and issues revolving around COVID-19 (Chen, et al., 2020; Alshaabi, et al., 2020; Cinelli, et al., 2020; Pennycook, et al., 2020; Gao, et al., 2020; Schild, et al., 2020; Kleinberg, et al., 2020; Singh, et al., 2020; Gallotti, et al., 2020; Li, et al., 2020).
Various studies presented the concept of a social media infodemic, i.e., the spread of questionable content and sources of information regarding the COVID-19 pandemic, as postulated by Cinelli, et al. (2020). This research illustrates the problem of containing the spread of unverified information about COVID-19, showing that questionable and reliable information spread according to similar diffusion patterns. Along this line, Gallotti, et al. (2020) suggest that low-quality information anticipates epidemic diffusion in various countries, with the peril of exposing those countries’ populations to irrational social behaviors and public health risks. Both studies draw on large-scale data collections from online platforms like Facebook and Twitter, but do not emphasize the importance of information manipulation on such social media. The work by Singh and colleagues also looked at the spatio-temporal dynamics of misinformation spread on Twitter, drawing a picture with similar implications as the two studies above (Singh, et al., 2020).
More research is needed, as the information landscape evolves and more scientific insights are unveiled on the clinical and medical implications of this disease, to understand what qualifies as rumors, misinformation, or disinformation campaigns. For example, information about the possible effectiveness of some treatments could be considered a rumor at a given point in time, in the absence of definitive scientific consensus; yet, as more clinical evidence emerges, such claims may prove false and hence be classified as misinformation. One such example is hydroxychloroquine, a known anti-malaria drug whose effectiveness in treating SARS-CoV-2 (the coronavirus causing the COVID-19 disease) remains debated at this point in time, and whose potentially lethal side effects limit large-scale testing.
The work by Pennycook and colleagues epitomizes the seriousness of this problem by showing, with a social experiment including 1,600 participants, that subjects tend to share misinformation and false claims about COVID-19 predominantly because they are unable to determine whether the content is scientifically sound and accurate or not (Pennycook, et al., 2020).
Other studies investigated collective attention and engagement dynamics concerning COVID-19 on Twitter. For example, Alshaabi, et al. (2020) analyzed 1,000 unigrams (1-grams) posted on Twitter in 24 languages during early 2020, and compared them with the year prior. The authors emphasize how the global shift in attention to the COVID-19 pandemic is concentrated around January 2020, after the first wave of infections in China began to taper off, and peaked again in early March, when the United States and several other Western countries started to be hit more heavily by the pandemic. Their work suggests that social media mirror off-line attention dynamics, and hints at the potential implications that diminished collective attention can have on the perception of the gravity of this pandemic (or lack thereof).
Various studies investigated emotional and sentiment dynamics in social media conversation pertaining to COVID-19 (Gao, et al., 2020; Schild, et al., 2020; Kleinberg, et al., 2020). For example, Kleinberg, et al. (2020) annotated a corpus (N=2,500 tweets and N=2,500 longer texts) producing a ground truth dataset for emotional responses to COVID-19 content. The analysis of this U.K.–centric corpus suggests that issues pertaining to family safety and economic stability are more systematically associated with emotional responses in longer texts, whereas tweets more commonly exhibit positive calls for solidarity.
In contrast, recent work based on large-scale multiplatform data collections encompassing Twitter and 4chan illustrates the endemic prevalence of hate speech, especially sinophobia, on both platforms (Schild, et al., 2020). Furthermore, the cross-platform diffusion of racial slurs and targeted attacks shows how fringe platforms like 4chan are incubators of new hate narratives aimed, in the case of COVID-19, against Asian people; on mainstream social media platforms like Twitter, however, the focus is on putting blame on China for its alleged responsibility in originating the virus and its inability to contain it.
On a different note, Gao, et al. (2020) looked at social media sentiment as expression of potential mental health issues associated with social isolation and other side-effects of containment measures enacted to limit the spread of COVID-19 in China. By means of online surveys to complement observational data collected from popular Chinese social media platforms, the authors suggest that social media exposure to outbreak-related content was correlated with increased odds of reporting issues associated with mental health, including depression and anxiety, across different demographics in their population.
Background on COVID-19
The first cases of a novel coronavirus disease (officially named COVID-19 by the World Health Organization on 11 February 2020) were reported in Wuhan, China in late December 2019; the first fatalities were reported in early 2020. The first case in the United States was announced on 21 January 2020 (Holshue et al., 2020): our Twitter data collection aligns with that date (Chen et al., 2020).
The fast-rising infection rates and death toll led the Chinese government to quarantine the city of Wuhan on 23 January 2020. During this period, other countries began reporting their first confirmed cases of the disease, and on 30 January 2020 the World Health Organization (WHO) declared a Public Health Emergency of International Concern. With virtually every country on Earth reporting cases of the disease, and infections rapidly escalating in some regions of the world, including the U.S., Europe, and the Middle East, the WHO subsequently upgraded COVID-19 to a pandemic. On 13 March 2020 the United States government declared a state of national emergency; our data collection ends on that date. As of the time of this writing (mid-April 2020), COVID-19 has been reported in nearly every country worldwide, leaving governments all over the globe scrambling for ways to contain the disease and lessen its adverse consequences for their people’s health and economy. Infections exceed two million, and fatalities are well over one hundred thousand. There is still no scientific consensus on the effectiveness of any particular treatment, and vaccines are not expected to be available to large swaths of the population for at least a year. COVID-19 has been among the trending topics of discussion on Twitter uninterruptedly since early 2020.
Background on bots
What is a bot. A bot (short for robot) generally refers to an entity operating in a digital space that is controlled by software rather than by a human. Bots have been categorized according to various taxonomies (Gorwa and Guilbeault, 2018; Woolley, 2016). In this article, we use the term bot as shorthand for social bot, a concept that refers to a social media account controlled, predominantly or completely, by computer software (a more or less sophisticated artificial intelligence), in contrast with accounts controlled by human users (Ferrara, et al., 2016); this is in line with the recommendations of Gorwa and Guilbeault (2018), who provided the most comprehensive survey on the typologies of bots (“we suggest that automated social media accounts be called social bots”).
How to create a bot. Early social media bots, in the late 2000s, were created to tackle simple tasks, such as automatically retweeting content posted by a set of sources or finding and posting news from the Web. Today, the capabilities of bots have significantly improved: bots rely on the fast-paced advancements of Artificial Intelligence, especially in the area of natural language generation, and use pretrained multilingual models like OpenAI’s GPT-2 (Radford, et al., 2019) to generate human-like content. This framework allows the creation of bots that generate genuine-looking short texts on platforms like Twitter, making it harder to distinguish between human and automated accounts (Alarifi, et al., 2016).
The barriers to bot creation and deployment, as well as the resources required to create large bot networks, have also significantly decreased: for example, it is now possible to rely upon bot-as-a-service (BaaS) offerings to create and distribute large-scale bot networks using pre-existing capabilities provided by companies like ChatBots.io, and to run them on cloud infrastructures like Amazon Web Services or Heroku, making their detection more challenging (Ferrara, 2019).
Open source Twitter bots. A recent survey discusses readily-available Twitter bot-making tools (Daniel and Millimaggi, 2020): the authors provide an extensive overview of open-source GitHub repositories and describe how prevalent different automation capabilities, such as tweeting or resharing, are across these tools.
According to Daniel and Millimaggi (2020), whose survey focused exclusively on repositories for Twitter bots developed in Python, there are hundreds of such open-source tools readily available for deployment. The authors studied 60 such bot-making tools and enumerated the most common capabilities. Typical automated features of such bots include:
- searching users, trends, and keywords;
- following users, trends, and keywords;
- liking content, based on users, trends, and keywords;
- tweeting and mentioning users and keywords, based on AI-generated content, fixed-templated content, or cloned-content from other users;
- retweeting users and trending content, and mass tweeting;
- talking to (replying to) other users, based on AI-generated content, templated content, or cloned content from other users; finally,
- pausing activity to mimic human circadian cycles and bursty behaviors, as well as to satisfy API constraints, and to avoid suspension.
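The last capability, human-like pausing, is perhaps the least obvious, so we illustrate it with a minimal, self-contained Python sketch of how a bot might schedule its next action to mimic circadian cycles and bursty behavior. All constants, thresholds, and function names below are our own illustrative assumptions, not drawn from any of the surveyed tools.

```python
import math
import random

def next_pause_seconds(hour_utc, rng, base=60.0, burst_prob=0.2):
    """Choose how long a bot sleeps before its next action.

    Mimics a human circadian cycle: long pauses at night, short
    pauses during the day, with occasional bursts of rapid activity.
    All constants are illustrative, chosen for this sketch only.
    """
    # Activity level follows a sinusoid peaking mid-afternoon (15:00 UTC).
    activity = 0.5 + 0.5 * math.sin((hour_utc - 9) / 24 * 2 * math.pi)
    if rng.random() < burst_prob * activity:
        return rng.uniform(1, 5)          # burst: act again within seconds
    # Otherwise wait longer when activity is low (night time).
    return base / max(activity, 0.05)

rng = random.Random(42)
day = next_pause_seconds(15, rng)    # mid-afternoon: short pause
night = next_pause_seconds(3, rng)   # night: long pause
```

A schedule like this also conveniently keeps the account under platform API rate limits, which is why the surveyed tools bundle pausing with their posting features.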
According to Daniel and Millimaggi (2020), these features can enable bots to carry out various forms of abuse, including denigration and smearing, deception and false allegations, the spread of misinformation and spam, and the cloning of users to mimic human interests. We refer the interested reader to the excellent survey by Daniel and Millimaggi (2020) for further details.
Bot behavior research. Numerous studies have been devoted to characterizing the behavior of bots (Stieglitz, et al., 2017; Luceri, et al., 2019b; Pozzana and Ferrara, 2020) and the ethical issues pertaining to their use (de Lima Salge and Berente, 2017). According to Woolley (2016), Howard, et al. (2018), and Woolley and Howard (2018), the behavior of political bots can be categorized with respect to their goals:
- Manufacturing consensus, to enhance the perception of popularity or influence of an entity (political actor, party, organization, etc.);
- Bolstering opinions and voices, to amplify the platform and audience that an entity will receive;
- Cementing polarization, by increasing the inflammatory or divisive nature of an issue or agenda;
- Increasing chaos and confusion, by posting inaccurate information, disinformation, and rumors;
- Manipulating algorithms, to trick recommendation and ranking systems used by social media platforms, and give higher visibility to certain actors, viewpoints, or campaigns.
According to work by Broniatowski, et al. (2018) and Qi, et al. (2018), similar conclusions can be drawn for bots active in public health discussions, with the intent to spread claims contrary to scientific evidence.
How to detect bots. Researchers in cyber-security were among the first to highlight potential challenges associated with the detection of bots (Thomas, et al., 2011; Boshmaf, et al., 2011; Boshmaf, et al., 2013; Abokhodair, et al., 2015; Freitas, et al., 2015). Historically, however, bot detection techniques were pioneered by groups at Indiana University, the University of Southern California, and the University of Maryland, in the context of a program sponsored by DARPA (U.S. Defense Advanced Research Projects Agency) aimed at detecting bots used for anti-science misinformation (Subrahmanian, et al., 2016).
More recently, large bot networks (botnets) have been discovered on Twitter by various research groups (Thomas, et al., 2011; Boshmaf, et al., 2011; Abokhodair, et al., 2015; Echeverria and Zhou, 2017).
The literature on bot detection has become very extensive (Ferrara, et al., 2016; Stieglitz, et al., 2017; Ferrara, 2018; Yang, et al., 2019b; Cresci, et al., 2019).
In Ferrara, et al. (2016), we proposed a simple taxonomy to divide bot detection approaches into three classes: (1) systems based on social network information; (2) systems based on crowd-sourcing and the leveraging of human intelligence; and, (3) machine learning methods based on the identification of highly-predictive features that discriminate between bots and humans. Other recent surveys propose complementary or alternative taxonomies that are worth considering as well (Stieglitz, et al., 2017; Yang, et al., 2019b; Cresci, et al., 2019).
Some openly accessible tools exist to detect bots on platforms like Twitter:
- Botometer, also used here, is a bot-detection tool developed at Indiana University (Davis, et al., 2016);
- BotSlayer is an application to detect and track potential manipulation of information on Twitter;
- the Bot Repository is a centralized database to share annotated datasets of Twitter bots.
Finally, various models have been proposed to detect bots using sophisticated machine learning techniques, such as:
- Deep learning (Kudugunta and Ferrara, 2018);
- Anomaly detection (Minnich, et al., 2017; Gilani, Farahbakhsh, et al., 2017; Gilani, Kochmar, et al., 2017; Echeverria, et al., 2018);
- Time series analysis (Pozzana and Ferrara, 2020; Chavoshi, et al., 2018; Stukal, et al., 2017).
Due to the continuously evolving nature of bots, and the challenges that this poses for detection, in this article we focus on the top and bottom ends of the bot score distribution, rather than carrying out a binary classification of accounts into bots and humans.
This avoids the conundrum of dealing with borderline cases, for which detection can be inaccurate, and allows us instead to focus on accounts that exhibit clear human or bot traits. Furthermore, the results are manually validated for accuracy.
RQ1: Characterizing bot and human behavior
In this section, we address RQ1 with a statistical characterization of bot behavior in our data.
Bot score rank distribution
Bot detection is hard. Bot-making tools continuously evolve, and the capabilities of bots improve faster than available bot detection techniques can catch up. For this reason, it becomes increasingly harder to classify users “in the wild” while preserving high accuracy across the whole spectrum of human-to-bot likeness.
For this reason, in this study we focus on the top and bottom ends of the bot score rank distribution: we isolate accounts in the top decile (i.e., the top 10 percent of the bot score distribution) and flag them as high bot score accounts; conversely, we isolate users in the bottom decile (i.e., the bottom 10 percent of the distribution) and refer to them as low bot score accounts. We only draw distinctions at the aggregate level between these two groups, without making any further inference, either binary or probabilistic, about the nature of any given account.
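This decile segmentation can be sketched in a few lines of standard-library Python; the data shape and all names below are our own, for illustration only.

```python
import statistics

def flag_deciles(bot_scores):
    """Split accounts into high/low bot score groups by decile.

    bot_scores maps user_id -> bot score in [0, 1]. Returns the sets
    of accounts in the top and bottom deciles of the distribution.
    """
    scores = sorted(bot_scores.values())
    # quantiles(..., n=10) yields the 9 cut points between deciles
    cuts = statistics.quantiles(scores, n=10)
    low_cut, high_cut = cuts[0], cuts[-1]   # 10th and 90th percentiles
    high = {u for u, s in bot_scores.items() if s >= high_cut}
    low = {u for u, s in bot_scores.items() if s <= low_cut}
    return high, low

# Toy example: 20 synthetic accounts with evenly spread scores
scores = {f"user{i}": i / 19 for i in range(20)}
high, low = flag_deciles(scores)
```

Everything between the two cut points is deliberately left unclassified, matching the aggregate-only comparison described above.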
Similar approaches have been developed in the domain of information warfare detection: one such example is the CUT (Commenting User Typology) framework, which was proposed to classify comment abuse and divides users into quadrants of behavior along various dimensions of interest (Zelenkauskaite and Balduccini, 2017).
The idea that there exists a continuum of behavior along the dimensions of social media engagement is also in line with two other theories, namely the notion of “super-participation” (Graham and Wright, 2014) and that of “dynamical classes of behavior” (Lehmann, et al., 2012). In our case, we apply the same idea to the dimension of human-to-bot likeness: we divide the continuous spectrum of bot scores into percentiles and focus on the upper and lower deciles of the distribution, which yields two distinct groups defined with respect to the behavior of interest.
In Table 1, we show the percentile rank distribution of bot scores and the average values of a subset of selected user activity features, namely (i) the total number of tweets posted by each user; (ii) the proportion of COVID-19 related tweets observed in our data; and (iii) the account age, measured as the number of days elapsed between the creation of the account and its first COVID-19 tweet in our data. The distribution reports these aggregate statistics at every fifth percentile of the bot scores.
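The construction of such a table can be sketched as follows, on synthetic data; the field order, bucket count, and names are illustrative assumptions, not the paper's actual pipeline.

```python
import statistics

def percentile_table(records, n_buckets=20):
    """Mean account features per bot score percentile bucket.

    records: (bot_score, total_tweets, covid_ratio, age_days) tuples.
    With n_buckets=20, each row summarizes one fifth-percentile band
    of the bot score distribution, mirroring the layout of Table 1.
    """
    ordered = sorted(records, key=lambda r: r[0])
    size = len(ordered) // n_buckets
    rows = []
    for b in range(n_buckets):
        chunk = ordered[b * size:(b + 1) * size]
        # column-wise mean over the bucket
        rows.append(tuple(statistics.mean(col) for col in zip(*chunk)))
    return rows

# Synthetic accounts: higher bot scores come with fewer total tweets
# and younger accounts, qualitatively as observed in Table 1.
records = [(i / 100, 16000 - 150 * i, 0.01 * i, 3000 - 28 * i)
           for i in range(100)]
table = percentile_table(records)
```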
Some striking patterns emerge. As the bot score increases, the total number of tweets posted by users on average decreases. For example, users in the bottom 5th percentile (0.05) have posted on average over 15 thousand tweets. Conversely, accounts in the top 5th percentile have posted on average only about 1,600 total tweets. A similar pattern emerges with account age: accounts at the bottom end of the bot score distribution have been active on average for almost three thousand days (about eight years!), as opposed to accounts with the highest bot scores, whose average age is less than three years.
Table 1: Rank distribution of bot scores and account activity average metrics, along with suspended and verified statistics.
However, the trend is reversed when looking at the fraction of COVID-19 related tweets: accounts with higher bot scores post significantly more COVID-19 tweets than those in the lower end of the distribution. In fact, for accounts in the top 5th percentile of the bot score distribution, the ratio of COVID-19 tweets to their total is 0.81 percent, whereas for the bottom 5th percentile this ratio is 0.03 percent. In other words, accounts with the highest bot scores post about 27 times more about COVID-19 than those with the lowest bot scores.
Suspensions across the bot score spectrum vary from about 2 percent to approximately 3.5 percent, with accounts having bot scores in the 60th–80th percentiles being the most likely to get suspended. Finally, only 81 accounts (0.1 percent) with bot scores in the top decile are verified. Accounts in the bottom decile are on average 20 times more likely to be verified (approximately 2 percent) than those in the top decile (0.1 percent).
Overall, the insights drawn from the bot score rank distribution analysis suggest that an investigation to characterize the behavior of suspicious accounts with high bot scores is warranted.
Bot score distribution validation
To provide additional insights into the bot score distribution, we leverage the annotations of verified and suspended users. In Figure 1, we illustrate the histogram of bot scores for verified and suspended accounts in our dataset. The two distributions are significantly different (Mann-Whitney rank test, p<0.001).
Figure 1: Distribution of bot scores for verified and suspended accounts in our dataset.
Whereas approximately 90 percent of verified accounts have bot scores lower than 0.1, the bot scores of suspended accounts are much more broadly distributed, with approximately half of the suspended accounts exhibiting scores higher than 0.1. While the lower end of the distribution contains both suspended and verified accounts (which is to be expected, since accounts can be suspended for various reasons, not just for being automated), the upper end of the distribution contains almost no verified users, yet hundreds of suspended accounts. This suggests a correlation between account suspension and increased bot likeness.
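For readers unfamiliar with the Mann-Whitney rank test used above, here is a pure-Python sketch of its usual normal approximation, valid for continuous, tie-free samples such as bot scores (in practice one would use `scipy.stats.mannwhitneyu`, which also handles ties and exact p-values).

```python
import math

def mann_whitney(xs, ys):
    """Mann-Whitney U statistic and z-score (normal approximation).

    Sketch without tie or continuity correction; adequate only for
    continuous data with no repeated values.
    """
    n1, n2 = len(xs), len(ys)
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    # rank sum of the first sample (ranks start at 1)
    r1 = sum(rank for rank, (_, grp) in enumerate(pooled, 1) if grp == 0)
    u1 = r1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return u1, (u1 - mu) / sigma

# Fully separated samples give the extreme U values 0 and n1*n2.
u_low, z_low = mann_whitney([0.01, 0.02, 0.03], [0.91, 0.95, 0.99])
u_high, z_high = mann_whitney([0.91, 0.95, 0.99], [0.01, 0.02, 0.03])
```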
Figure 2: Distributions of user activity features for low bot score users (left) versus high bot score accounts (right). Whiskers represent the inter-quartile range of the distributions, and the jitter shows random samples from the underlying distributions.
In line with recent analysis by Yang, et al. (2019a), as well as our prior work (Ferrara, 2017), we report six basic account metadata features that are known to carry predictive power in the differentiation between bot and human users, namely (i) topical tweets (in this case, COVID-19 related tweet count); (ii) total number of tweets; (iii) number of followers; (iv) number of friends; (v) number of favorited tweets; and, (vi) number of times the account was added to a list by other users.
In Figure 2, we show the strip plots of the distributions of all features, for accounts in the bottom decile of the bot score distribution (left plot) as well as for accounts in the top decile (right plot). The strip plots display random samples from the underlying distributions as jitter over the box plots. Visual comparison of the two plots immediately reveals a striking difference in the distributions: in line with Table 1, high bot score accounts have significantly more COVID-19 tweets but fewer total tweets, they have significantly fewer followers (one order of magnitude difference), fewer favorited tweets (one order of magnitude difference), and they were added to fewer public lists. These differences are confirmed by statistical analysis: the feature-pairwise Mann-Whitney rank tests are all strongly significant (all p-values<0.001).
Age and provenance analysis
Our final investigation into the characteristics of high and low bot score accounts centers around the age and provenance of the user accounts. The analysis above suggested that, on average, user accounts with higher bot scores in our COVID-19 dataset also exhibit shorter account age. Account age and prevalence of COVID-19 related activity appear to be very strongly correlated features.
To further investigate this relation, in Figure 3 we show the distributions of account age at the time accounts joined the COVID-19 discussion. The histogram portrays two very distinct stories for accounts in the top and bottom deciles of the bot score distribution. The former appear to join the COVID-19 discussion within the early days of their creation: in fact, the average amount of time elapsed between account creation and first COVID-19 post for high bot score users is less than 100 days. In other words, the vast majority of high bot score accounts were created relatively close to the emergence of the COVID-19 outbreak and jumped on this discussion with high intensity shortly after their creation. For example, over 80,000 high bot score accounts were created between 50 and 100 days prior to their first COVID-19 tweet. Conversely, it is apparent that accounts in the bottom decile of the bot score distribution were created significantly prior to these events.
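As a concrete illustration, the account age feature can be computed directly from the timestamps returned by the Twitter v1.1 API (format `"%a %b %d %H:%M:%S %z %Y"`); the helper name below is ours.

```python
from datetime import datetime

def account_age_days(created_at, first_covid_tweet_at):
    """Days between account creation and its first COVID-19 tweet.

    Both timestamps use Twitter's v1.1 'created_at' string format,
    e.g. 'Tue Jan 21 00:00:00 +0000 2020'.
    """
    fmt = "%a %b %d %H:%M:%S %z %Y"
    created = datetime.strptime(created_at, fmt)
    first = datetime.strptime(first_covid_tweet_at, fmt)
    return (first - created).days

# Account created on the day of the first U.S. case, first COVID-19
# tweet on the day the U.S. national emergency was declared:
age = account_age_days("Tue Jan 21 00:00:00 +0000 2020",
                       "Fri Mar 13 00:00:00 +0000 2020")
```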
The second aspect that we evaluate is the provenance of these accounts. Attributing account provenance is a notoriously challenging task, because the Twitter API does not provide crucial forensic information that would be required for exact provenance attribution, such as the IP address of the machine or VPN server an account connects from. In the absence of such information, the best proxy at our disposal is the ability to reconstruct the server and data center that dispatched each tweet. There are two data centers, namely DC10 and DC11, that dispatch tweets, and 30 servers associated with these data centers. A simple language analysis of the tweets originating from the two data centers clearly suggests that DC10 is used to dispatch tweets originating in Asia, South-East Asia, Russia, and the Middle East. Conversely, DC11 dispatches tweets originating predominantly from Europe and the Americas.
Figure 3: Age distribution of the accounts in the top and bottom decile of the bot score distribution at the time of joining the COVID-19 Twitter discussion.
Figure 4: Distributions of the data center provenance of accounts with high and low bot scores.
Figure 4 illustrates remarkable differences in the data center connectivity patterns between accounts in the top and bottom deciles by bot score. In particular, DC10 (the data center that serves the Eastern world) dispatches over 35 percent more tweets originating from high bot score accounts (blue bars) than from low bot score ones (orange bars). The opposite pattern holds for DC11 (the data center serving the Western world): DC11 dispatches over 55 percent more tweets from low bot score users than from high bot score ones. In the absence of the investigative tools necessary to get a definitive answer, we can only speculate about the origin of this difference: prior investigations carried out by Twitter unveiled the systematic presence of information operations based in countries such as Russia, Iran, North Korea, and China (Roth, 2019), and this pattern seems to emerge again in the COVID-19 discussion (Ching and Seldin, 2020). We speculate that a fraction of these operations may be carried out using bots, which in turn leave a digital trail associated with the data center used to dispatch their tweets.
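One way such data center provenance can be recovered without forensic API access is from the tweet IDs themselves: Twitter IDs follow the publicly documented "snowflake" scheme, which packs a timestamp, a datacenter number, a worker number, and a sequence counter into a 64-bit integer. Below is a sketch assuming that standard layout; whether this is exactly the reconstruction used for DC10/DC11 above is our assumption.

```python
TWITTER_EPOCH_MS = 1288834974657  # snowflake epoch: 4 Nov 2010, in ms

def decode_snowflake(tweet_id):
    """Unpack a tweet ID using the standard snowflake layout:
    41-bit timestamp, 5-bit datacenter, 5-bit worker, 12-bit sequence.
    """
    return {
        "timestamp_ms": (tweet_id >> 22) + TWITTER_EPOCH_MS,
        "datacenter": (tweet_id >> 17) & 0x1F,
        "worker": (tweet_id >> 12) & 0x1F,
        "sequence": tweet_id & 0xFFF,
    }

# Round-trip check on a synthetic ID built from known components
# (datacenter 10, worker 3, sequence 7).
synthetic_id = (123_456_789 << 22) | (10 << 17) | (3 << 12) | 7
parts = decode_snowflake(synthetic_id)
```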
RQ2: Characterizing bot-populated campaigns
Data-driven computational sensemaking can fail when using human communication data (Lazer, et al., 2014). Common reasons include population and sampling bias, platform-design bias, distortion of human and non-human behavior, multiple-hypothesis and multiple-comparison testing, and more, all of which can hamper computational models and bias results (Ruths and Pfeffer, 2014). A hybrid approach combining computational analysis and human validation has been called for in order to make sense of massive-scale communication datasets (Lewis, et al., 2013; DiMaggio, 2015).
Content sensemaking. In this section, we address RQ2 using content and time-series analysis techniques, in combination with manual content analysis and validation, to characterize bot-populated campaigns. We use two distinct strategies, namely keyword and hashtag analysis. Specifically, keywords are obtained by means of n-gram analysis (an n-gram is simply a sequence of n words). Looking at frequently occurring keywords and hashtags is a common approach to surfacing trends of interest in social media datasets (Nazir, et al., 2019). Manual validation for both strategies allows us to corroborate the findings derived from content-based computational analysis and to provide interpretations aimed at answering RQ2.
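A minimal sketch of these two strategies follows; the cleaning regexes and function names are illustrative choices, not the paper's exact pipeline.

```python
import re
from collections import Counter

def ngrams(text, n):
    """Lowercase word n-grams, with hashtags, mentions, URLs stripped."""
    cleaned = re.sub(r"#\w+|@\w+|https?://\S+", " ", text.lower())
    words = re.findall(r"[a-z']+", cleaned)
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def top_terms(tweets, n=2, k=10):
    """Most frequent n-grams and hashtags across a tweet collection."""
    grams, tags = Counter(), Counter()
    for t in tweets:
        grams.update(ngrams(t, n))
        tags.update(h.lower() for h in re.findall(r"#\w+", t))
    return grams.most_common(k), tags.most_common(k)

tweets = ["Stay home, stay safe #COVID19",
          "Please stay home #covid19 https://t.co/x"]
grams, tags = top_terms(tweets)
```

Running the same extraction separately over the high and low bot score tweet sets, then comparing the resulting rankings, surfaces the terms preferentially adopted by each group.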
Surfacing characteristic content produced by bots and humans
The goal of the following analysis is to identify patterns of information production and consumption that are characteristic of the high (resp., low) bot score populations. To this aim, we first isolate all tweets produced by these two groups and carry out a comparative analysis of their keyword and hashtag adoption patterns for sensemaking purposes. Then we surface and discuss the differences that likely-automated accounts exhibit with respect to likely-human users. Next, we detail the preprocessing steps taken to curate the textual content (tweets) produced and consumed by the two groups.
Preprocessing. The first step was to isolate tweets in the English language. The Twitter API provides an estimate of the language of each tweet, which we leveraged to select a subset of approximately 43.3M tweets. Out of this set, we isolated the tweets produced by the high (resp., low) bot score users (i.e., the users in the top and bottom deciles of the bot score distributions). This produced 671,774 total tweets for the high bot score accounts, and 756,940 tweets for the low bot score users. Starting from these tweets, we extract the characteristic hashtags and n-grams preferentially adopted by the two groups of accounts.
Tweet type disaggregation. On Twitter, there are four modalities of posting: (i) original tweets; (ii) reply tweets; (iii) quoted tweets; and, (iv) retweets. Each of these mechanisms is used for different purposes. An original tweet is posted any time a user composes a new tweet from scratch. A reply is an answer to another tweet, typically posted by another user, although it is possible to reply to one’s own tweets. A quote embeds another tweet and adds original text, typically as commentary; it is once again possible to quote one’s own tweets, although a quote typically embeds tweets posted by other users. Finally, a retweet is a one-click operation that allows a user to reshare another tweet on their timeline without any modifications or commentary (again, the retweeted content is typically posted by another user, although it is possible to retweet one’s own tweets). In the following analysis, we disaggregate according to these four communication mechanisms, since they have different aims and can also be abused in specific ways. By means of this disaggregation, we obtain the following subsets of tweets:
- 50,483 and 83,342 original tweets posted by high and low bot score accounts, respectively;
- 10,852 and 50,756 reply tweets posted by high and low bot score accounts, respectively;
- 70,432 and 153,304 quote tweets posted by high and low bot score accounts, respectively; and,
- 540,007 and 468,539 retweets posted by high and low bot score accounts, respectively.
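The four modalities above can be told apart directly from fields of the Twitter API (v1.1) tweet object. The sketch below is an illustrative reconstruction, not the code used in this study; it assumes the standard field names `retweeted_status`, `is_quote_status`, and `in_reply_to_status_id`:

```python
def tweet_type(tweet):
    """Classify a Twitter API (v1.1) tweet object, given as a dict,
    into one of the four posting modalities. Order matters: a retweet
    of a quote tweet still carries is_quote_status, so retweets are
    checked first."""
    if "retweeted_status" in tweet:               # one-click reshare
        return "retweet"
    if tweet.get("is_quote_status"):              # embeds another tweet plus commentary
        return "quote"
    if tweet.get("in_reply_to_status_id") is not None:
        return "reply"
    return "original"                             # composed from scratch

# A plain status with no reply/quote/retweet markers is "original"
example = {"id": 1, "text": "stay safe", "is_quote_status": False,
           "in_reply_to_status_id": None}
print(tweet_type(example))  # original
```

Counting `tweet_type` over each group’s tweets yields per-modality volumes such as those reported above.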
It is worth observing that high bot score accounts disproportionately favor retweets (a one-click, or if you wish a one-line-of-code, operation), whereas low bot score users produce significantly more original, reply, and quoted tweets. For example, low bot score users produce nearly five times as many reply tweets, and more than twice as many quoted tweets, as high bot score accounts.
N-gram extraction. We carry out a systematic n-gram analysis to surface common sub-sentences that tend to occur frequently in the tweets. We extracted n-grams for n=1, 2, and 3, i.e., unigrams, bigrams, and trigrams, respectively. In the following, we discuss the trigram analysis, which provides the most interpretable results.
Each tweet in the corpus is processed according to the following cleaning protocol. First, the tweet text is lower-cased. Then, links (URLs) are removed, alongside user mentions (usernames of other Twitter accounts, preceded by the @ symbol) and hashtags (terms preceded by the # symbol) — a hashtag analysis is carried out separately. Special characters are also removed, to clean the tweet text of non-linguistic symbols such as ampersands. Then, stop-words, i.e., common English-language terms that include short function words as well as non-lexical words, are removed using the “nltk” Python library. Finally, for each n-gram we require that at least one word be longer than four characters, to remove common n-grams typical of online slang, such as “lol” or “ah ah”. The n-grams extracted by this process are analyzed in the next sections.
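The cleaning protocol and n-gram extraction can be sketched as follows. This is an illustrative reconstruction under stated assumptions: the tiny inline stop-word list stands in for the full English list that the study takes from the “nltk” library, and the regular expressions are one plausible implementation of the removal steps.

```python
import re

# Tiny illustrative stop-word list; the study uses the full English
# stop-word list provided by the "nltk" Python library.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and"}

def clean(text):
    """Lower-case the tweet, then strip URLs, @-mentions, #hashtags,
    special characters, and stop-words; return the remaining tokens."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # links (URLs)
    text = re.sub(r"[@#]\w+", " ", text)        # mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)       # special characters
    return [w for w in text.split() if w not in STOPWORDS]

def ngrams(tokens, n):
    """Sliding-window n-grams; keep only those where at least one word
    is longer than four characters (drops slang like 'lol', 'ah ah')."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return [g for g in grams if any(len(w) > 4 for w in g)]

print(ngrams(clean("The #covid19 outbreak is spreading https://t.co/x"), 2))
# [('outbreak', 'spreading')]
```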
Hashtag extraction. The Twitter API provides the list of hashtags included in each tweet, which we leverage to extract all hashtags from all 671,774 tweets in the high bot score accounts group (resp., 756,940 tweets in the low bot score users group). For each tweet, we remove hashtags that contain the keywords used to seed the data collection, which are listed in Table 1. For example, if a tweet contains two hashtags, e.g., #coronavirus and #ncov2019, the former would be removed because it contains the keyword “corona” that we used to seed the data collection. The removal of hashtags containing seed keywords makes surfacing other interesting hashtags easier. We decided to avoid any other preprocessing step, such as removing hashtags shorter than some number of characters or hashtags that do not appear above a certain frequency threshold, in order to avoid skewing the results in any way. This means that at times we will observe low-volume hashtags, including misspellings such as “coronoavirus”.
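This filtering step amounts to a substring check against the seed keywords. A minimal sketch, using a hypothetical two-keyword subset of the Table 1 seed list:

```python
# Hypothetical subset of the seed keywords listed in Table 1.
SEED_KEYWORDS = ("corona", "covid")

def filter_hashtags(hashtags, seeds=SEED_KEYWORDS):
    """Drop any hashtag containing a seed keyword used for the data
    collection, so that other interesting hashtags can surface."""
    return [h for h in hashtags
            if not any(s in h.lower() for s in seeds)]

# The example from the text: #coronavirus is dropped (contains
# "corona"), #ncov2019 is kept.
print(filter_hashtags(["#coronavirus", "#ncov2019"]))  # ['#ncov2019']
```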
Content analysis: Conspiracies & political propaganda
In the following, we present the core findings highlighted by our analysis. We discuss how a subset of high bot score accounts appears to be engaged in the spread of conspiracies and political propaganda, in stark contrast with the comparison group of low bot score users, which is instead engaged in the discussion of public health concerns. We provide examples of both trends later in the Validation section.
The spread of conspiracies on online social media is a well-established issue. Numerous studies have been devoted to understanding, for example, how unscientific claims circulate online (Scheufele and Krause, 2019; Bessi, et al., 2015; Kahan, et al., 2012; Del Vicario, et al., 2016), or how conspiratorial narratives are constructed online (Starbird, 2017; Arif, et al., 2016; Andrews, et al., 2016), especially in the context of political ideology (Ferrara, 2017; Marwick and Lewis, 2017). Our analysis highlights the emergence of similar issues in the context of the conversation related to COVID-19.
Characteristic n-gram analysis. In Figure 5, for illustration purposes, we show the timeseries of the top four characteristic trigrams produced by three populations: (A) bots fueling conspiracy theories; (B) bots posting COVID-19 news; and, (C) human users (i.e., low bot score accounts). An n-gram is considered characteristic in this analysis if it appears in the top 10 of one group (e.g., the bots) but does not appear in the top 10 of the other (e.g., the human users). This allows us to surface the most popular characteristic semantic trends in each group. Each timeseries in Figure 5 shows the daily volume of tweets containing a given trigram in that population exclusively. It is important to underscore, again, that this is the prevalence of the n-grams within each group, not in the whole Twitter population, in order to give perspective on the volume of content produced by each group relative to the other.
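The selection rule for characteristic n-grams reduces to a set difference between the two groups’ top-10 lists. A sketch, using toy frequency counts that are hypothetical and for illustration only:

```python
from collections import Counter

def characteristic(freq_a, freq_b, k=10):
    """Return the n-grams in group A's top-k by frequency that are
    absent from group B's top-k: the 'characteristic' n-grams of A."""
    top_a = {g for g, _ in Counter(freq_a).most_common(k)}
    top_b = {g for g, _ in Counter(freq_b).most_common(k)}
    return top_a - top_b

# Toy counts for illustration only.
bots = {"qanon great awakening": 50, "coronavirus live updates": 40}
humans = {"coronavirus live updates": 90, "wash your hands": 80}
print(characteristic(bots, humans, k=2))  # {'qanon great awakening'}
```

The same rule, applied symmetrically, yields the characteristic n-grams (and, later, hashtags) of the human group.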
Figure 5: (A) Top four characteristic bot-generated conspiracy-related trigrams (first peaks are highlighted by orange lines); (B) Top four trigrams from news bots (baseline); (C) Top four human-generated trigrams.
Figure 5A suggests that there exists a set of bots predominantly posting conspiratorial content of a political nature in the context of the COVID-19 discussion. Figure 5B shows the trigrams discussed by news-posting bots, another type of bot frequently observed in social media discussions pertaining to real-world events, according to Gorwa and Guilbeault (2018) and Woolley and Howard (2018). Both panels A and B show content posted by high bot score accounts. Figure 5C shows trigrams posted by likely human users (low bot score accounts). Similar conclusions hold if one looks at a broader set of n-grams, not just the top four of each group.
By comparing the volume of bot-fueled conspiracies (Figure 5A) against the amount of discussion by human users (Figure 5C), we notice that the activity associated with these bot-driven narratives is altogether comparable with that of low bot score users concerned with public health risks.
Various popular alt-right narratives aimed at pushing divisive political ideology can be isolated.
The top characteristic bot-driven trigrams in this analysis contain keywords such as QAnon, which, according to Wikipedia, is “a far-right conspiracy theory detailing a supposed secret plot by an alleged ‘deep state’ against U.S. President Donald Trump and his supporters”. QAnon has been extensively adopted by alt-right activists to foster participatory advocacy on social media (Zuckerman, 2019), but it has also been abused by the Russian Internet Research Agency (IRA) to push conspiratorial and divisive narratives (Cosentino, 2020a; Cosentino, 2020b). QAnon appears alongside other known alt-right terms, e.g., GreatAwakening, Inf0wars, WWGWGA (Where We Go One We Go All), WeThePeople, and PatriotsFights (de Saint Laurent, et al., 2020).
In Figure 5A, the first peak of each trigram timeseries is highlighted in orange. A trend emerges: these conspiratorial n-grams all appear to trend around the same time, namely during the second and third weeks of February 2020. This may be a clue to an orchestrated effort to push this propaganda in a coordinated manner, which is in line with recent findings on group-coordinated disinformation efforts (Pacheco, Hui, et al., 2020; Pacheco, Flammini, et al., 2020). In detail, it appears that a first wave of bot-driven campaigns pushing content from the hyperpartisan news site Infowars peaked on 12 February (cf., Figure 5A, top timeseries), and paved the way for a second wave of bot-fueled trends including the QAnon, GreatAwakening, and WWGWGA narratives that appeared in the following days (Figure 5A, other timeseries).
The news bots serve as a baseline to illustrate the typical timeseries patterns observed in non-conspiratorial bots. Bursts of activity are characteristic of coordinated operations, and differ from other news-bot trends, such as those of “share link” and “coronavirus live updates” bots that post news about COVID-19 automatically and therefore do not exhibit such bursts of activity (cf., Figure 5B).
As for human users and their activity, it is worth noting that most of the activity appears to be associated with two time periods, namely the early observation phase (the end of January and early February) and the final period (cf., Figure 5C). This is in line with what has already been observed by Alshaabi, et al. (2020): English-speaking human users seemed collectively focused on COVID-19 predominantly when it arrived in the U.S., and after the first fatalities started to occur in the U.S.
On the contrary, bot-fueled conspiracies appeared in between these two spikes and filled the gap, shifting the focus from public health to political ideology for a brief period. The peaks of these conspiracy trends (highlighted in orange in Figure 5A) exhibit the sequential pattern described earlier, where bots pushing Infowars content led the way to the subsequent trending of other alt-right keywords.
The activity of conspiracy-fueling bots appears to dissipate toward the end of our observation window (early March). This type of bot behavior has been documented in the past and is commonly referred to as “trend hijacking” (Metaxas and Mustafaraj, 2012; Hadgu, et al., 2013): bots ride the wave of popularity of a given topic, in this case the trending COVID-19 discussion, to inject a deliberate message or narrative and amplify its visibility.
Characteristic hashtag analysis. In Figure 6, we illustrate the timeseries of the top four hashtags characterizing (Figure 6A) the conspiracy-fueling bots, (Figure 6B) the news bots, and (Figure 6C) the human users, in original tweets. Both Figures 6A and 6B focus on high bot score accounts, whereas panel C captures low bot score users. As in the n-gram analysis, hashtags are deemed characteristic of a group if they appear in its top 10 but not in the top 10 of the other group. We note that the top 10 characteristic hashtags for the human users contain, again, predominantly hashtags associated with the public health aspects of the COVID-19 pandemic (cf., Figure 6C). Further content analysis shows that the most common characteristic hashtags include news-related terms like #breaking, mentions of influential actors (e.g., #trump), organizations (e.g., #who), and countries (e.g., #iran), alongside the COVID-19 hashtags used to characterize the disease-related topic, including #2019ncov, #ncov, and #ncov2019.
On the other hand, for high bot score accounts, we observe once again a picture compatible with the n-gram analysis discussed above. Alongside news-related hashtags such as #news and #smartnews (cf., Figure 6B), we observe alt-right hashtags such as #qanon and #greatawakening (cf., Figure 6A). The peaks of conspiracy-fueling bots (highlighted in orange in Figure 6A) once again occur in the middle of our observation period, whereas news-posting bots appear active throughout the whole period, and human users exhibit activity toward the early and late observation periods, namely late January and early March.
Figure 6: (A) Top four characteristic bot-generated conspiracy-related hashtags (first peaks are highlighted by orange lines); (B) Top four hashtags from news bots (baseline); (C) Top four human-generated hashtags.
Taken together, the n-gram and hashtag analyses suggest that high bot score accounts are injecting content with conspiratorial narratives charged with alt-right ideology. These hypotheses are further validated through a process of manual verification and coding.
The first step in our validation was to determine how many verified accounts in the high bot score population posted about conspiracies and political propaganda. For the sake of illustration, we discuss the results for the trigram analysis above. This validation step determined that only nine verified accounts posted such content (out of 1,803 total, or 0.5 percent).
Conversely, for low bot score users, the number of verified accounts was 215 (out of 1,037 total, or 20.7 percent). In other words, in the trigram analysis above, low bot score users were roughly 41 times more likely to be verified than high bot score accounts. The former population consisted predominantly of established social media accounts that reported on public health concerns related to the pandemic. The high bot score accounts, of which only 0.5 percent are verified, may instead have used the COVID-19 conversation as a vector to promote conspiratorial narratives.
Lastly, we manually investigated the content of tweets in both groups discussed in the trigram analysis of original tweets. We document some findings next. As for the high bot score accounts, the most popular tweets were posted by accounts that exhibit clear automation patterns: they have high friend-to-follower ratios and posted thousands of tweets about COVID-19 following templated formats.
For example, we found hundreds of tweets that contain references to news about COVID-19, followed by sequences of hashtags such as those displayed in Figure 6A, in combination with a link, typically to a YouTube video (many of which had already been taken down by YouTube as of the time of this writing). Besides YouTube, the most referenced sources included various hyperpartisan news sites, such as Infowars and ZeroHedge. Typical sensationalistic headlines suggest that:
- the virus was made in Wuhan labs;
- the virus is a “globalist biological weapon”;
- the virus was imported into China by the U.S. military; and,
- products imported from China may be infected with the virus.
Other hashtags and keywords associated with these narratives include free speech issues (#freespeech, #freezerohedge) and “truther” narratives (#coronavirustruth, #5G) allegedly uncovering globalist conspiracies, with a special emphasis on stories implying a connection between the rollout of 5G wireless technology and the emergence of the virus in Wuhan; one such bot-fueled conspiracy (“5G Launches In Wuhan Weeks Before Coronavirus Outbreak”) emerged in late January 2020.
As for low bot score users, they reference traditional news sources, and the most referenced accounts are those of the U.S. President, the CDC, the WHO, and various established news organizations, including both left- and right-leaning newspapers. As for their focus, in the earlier period (beginning of February) they tend to cover generic health-related hashtags such as #PublicHealth and #Health, as well as keywords related to China-centric events (“Wuhan”, “China”, etc.). Later in the observed period, their focus shifts to preventive measures (#WashYourHands, #StopTheSpread, #Quarantine) and lifestyle (#QuarantineLife); they also focus on policies and interventions enacted by the government (#SocialDistancing, #FlattenTheCurve); finally, they discuss the economy (#economy, #stockmarket, #bitcoin).
Finally, the COVID-19 news bots tend to automatically collect and collate information from the news, and their most tweeted sources exhibit a mix of U.S.-centric (the most popular being Fox News, the New York Times, CNN, and ABC News) and worldwide (e.g., the Guardian) news sources and organizations (e.g., Reuters) in English, as well as Twitter-based news bots (e.g., SmartNews). The typical tweet by a news bot contains the news headline as the body of the tweet text, alongside the URL. The large majority of the observed news-bot tweets were captured because their headlines contain one of the keywords used to seed the real-time data collection (Chen, et al., 2020).
COVID-19 is a global crisis, and with people being pushed out of physical spaces due to containment measures, online conversation on social media has become one of the primary tools to track social discussion. In fact, topics of conversation related to COVID-19 have been trending, almost without interruption, ever since the beginning of the outbreak in early 2020. We leveraged a large-scale data collection tracking COVID-19 tweets in real time since 21 January 2020, the day the first COVID-19 case was reported on U.S. soil. The dataset we adopt here runs through 12 March 2020, the day before the United States government declared a national emergency due to the COVID-19 pandemic.
In this paper, we provided an early characterization of the prevalence of accounts that are likely automated and that post content in relation to the ongoing COVID-19 pandemic. To the best of our knowledge, this is the first study to provide evidence that high bot score accounts are used to promote political conspiracies and divisive hashtags alongside COVID-19 content.
Limitations of this study
Our study has several limitations. First and foremost, despite the sheer size of the dataset at hand, we are only observing a small fraction, approximately one percent, of the overall Twitter conversation, through the lens of the Twitter API. This has been shown to introduce some biases toward over-represented topics (Morstatter, et al., 2013), and COVID-19 has been the most talked-about topic of discussion ever since the beginning of the pandemic. Second, another form of bias is automatically introduced when selecting keywords to follow. For example, although our dataset contains dozens of languages, English content represents over two-thirds of the overall tweets. To mitigate this bias, we concentrated only on English content for this analysis, despite the fact that several interesting phenomena related to the scope of this work may be observed in other languages as well.
The third and most important limitation is related to the challenge of bot detection. Detecting bots is quite hard: even the most sophisticated machine learning tools have varying levels of accuracy, especially when applied “in the wild” to identify bots in live conversations. To mitigate this issue, in this work we carried out three additional forms of validation: collecting data on suspended accounts, collecting data on verified accounts, and manually inspecting content of particular relevance or interest.
However, these solutions also have inherent limitations: for example, the list of suspended accounts only reflects Twitter’s decisions to intervene and ban an account, but does not provide the rationale for each decision. Furthermore, manual assessment is viable for the scrutiny of a few case studies, like those tackled in this paper, but is not a scalable strategy for large-scale studies. Nevertheless, our analysis (cf., Figure 1) illustrates that there is a strong correlation between increased bot score and higher probability of account suspension. Other key indicators, such as the intensity of posting about COVID-19 as a function of account age, can also be informative. This suggests that the suspension algorithms employed by Twitter may benefit from accounting for such behavioral signatures to improve their accuracy.
In this work, we set forth to investigate two research questions, namely whether automated social media accounts were active in the context of COVID-19 related discussion on Twitter (RQ1), and if so, whether they are engaged in bot behavior similar to what has been observed in prior work, e.g., fueling conspiratorial and ideological narratives (RQ2).
Our findings paint a picture where accounts that are likely automated have been used in malicious ways. We observed how high bot score accounts use COVID-19 as a vector to promote the visibility of ideological hashtags typically associated with the alt-right in the United States. We have discussed the implications of this discovery and differentiated it from the behavior of human users, who are predominantly concerned with public health and welfare.
In the future, we will further investigate the behavior of COVID-19 bots, to understand whether they are exclusively used for propaganda purposes, or whether other uses emerge. Preliminary analysis suggests that some bots may have been used as a form of citizen journalism, to unearth information that would otherwise be censored on other platforms in China and surface it on English-speaking Twitter (Ferrara, 2020).
Further evidence is needed to assess whether this is a systematic effort to promote freedom of speech. We will also delve deeper into the types of conspiratorial narratives that bots promote (Shahsavari, et al., 2020), and study the role of bots in the spread of rumors and fake news (Yang, et al., 2020).
About the author
Emilio Ferrara is Research Assistant Professor of Computer Science at the University of Southern California and Research Team Leader at the USC Information Sciences Institute. His research focuses on characterizing information diffusion and manipulation of social media. He is a recipient of the 2016 Complex Systems Society Junior Scientific Award, and he received the 2016 DARPA Young Faculty Award.
E-mail: emiliofe [at] usc [dot] edu
The author is grateful to the members of his lab for their invaluable help and support, in particular Emily Chen for initializing the tweet collection, Shen Yan for collecting the list of suspended accounts, and Ashok Deb for collecting the Botometer scores; the author is also grateful to Jason Baumgartner (PushShift.io) for sharing the list of verified Twitter accounts.
3. Gorwa and Guilbeault, 2018, p. 9.
4. Botometer: https://botometer.iuni.iu.edu/.
5. BotSlayer: https://osome.iuni.iu.edu/tools/botslayer/.
6. Bot Repository: https://botometer.iuni.iu.edu/bot-repository/.
8. Due to privacy requirements and in line with Twitter’s Terms of Service, we here only refer to anonymized examples of tweets, removing any information that could be used for the reidentification of the authors.
N. Abokhodair, D. Yoo, and D.W. McDonald, 2015. “Dissecting a social botnet: Growth, content and influence in Twitter,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 839–851.
doi: http://dx.doi.org/10.1145/2675133.2675208, accessed 18 May 2020.
A. Alarifi, M. Alsaleh, and A. Al-Salman, 2016. “Twitter Turing test: Identifying social machines,” Information Sciences, volume 372, pp. 332–346.
doi: http://dx.doi.org/10.1016/j.ins.2016.08.036, accessed 18 May 2020.
J.–P. Allem, E. Ferrara, S.P. Uppu, T.B. Cruz, and J.B. Unger, 2017. “E-cigarette surveillance with social media data: Social bots, emerging topics, and trends,” JMIR Public Health and Surveillance, volume 3, number 4, e98.
doi: http://dx.doi.org/10.2196/publichealth.8641, accessed 18 May 2020.
T. Alshaabi, J.R. Minot, M.V. Arnold, J.L. Adams, D.R. Dewhurst, A.J. Reagan, R. Muhamad, C.M. Danforth, and P.S. Dodds, 2020. “How the world’s collective attention is being paid to a pandemic: COVID-19 related 1-gram time series for 24 languages on Twitter,” arXiv, 2003.12614 (27 March), at http://arxiv.org/abs/2003.12614, accessed 18 May 2020.
C. Andrews, E. Fichet, Y. Ding, E.S. Spiro, and K. Starbird, 2016. “Keeping up with the tweet-dashians: The impact of ‘official’ accounts on online rumoring,” CSCW ’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 452–465.
doi: http://dx.doi.org/10.1145/2818048.2819986, accessed 18 May 2020.
A. Arif, K. Shanahan, F.–J. Chou, Y. Dosouto, K. Starbird, and E.S. Spiro, 2016. “How information snowballs: Exploring the role of exposure in online rumor propagation,” CSCW ’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 466–477.
doi: http://dx.doi.org/10.1145/2818048.2819964, accessed 18 May 2020.
A. Badawy, K. Lerman, and E. Ferrara, 2019. “Who falls for online political manipulation?” WWW ’19: Companion Proceedings of the 2019 World Wide Web Conference, pp. 162–168.
doi: https://doi.org/10.1145/3308560.3316494, accessed 18 May 2020.
A. Badawy, E. Ferrara, and K. Lerman, 2018. “Analyzing the digital traces of political manipulation: The 2016 Russian interference Twitter campaign,” 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM).
doi: https://doi.org/10.1109/ASONAM.2018.8508646, accessed 18 May 2020.
C.A. Bail, B. Guay, E. Maloney, A. Combs, D.S. Hillygus, F. Merhout, D. Freelon, and A. Volfovsky, 2020. “Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017,” Proceedings of the National Academy of Sciences, volume 117, number 1, pp. 243–250.
doi: http://dx.doi.org/10.1073/PNAS.1906420116, accessed 18 May 2020.
A. Bessi and E. Ferrara, 2016. “Social bots distort the 2016 U.S. Presidential election online discussion,” First Monday, volume 21, number 11, at https://firstmonday.org/article/view/7090/5653, accessed 18 May 2020.
doi: http://dx.doi.org/10.5210/fm.v21i11.7090, accessed 18 May 2020.
A. Bessi, M. Coletto, G.A. Davidescu, A. Scala, G. Caldarelli, and W. Quattrociocchi, 2015. “Science vs conspiracy: Collective narratives in the age of misinformation,” PLoS ONE, volume 10, number 2 (23 February), e0118093.
doi: http://dx.doi.org/10.1371/journal.pone.0118093, accessed 18 May 2020.
Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2013. “Design and analysis of a social botnet,” Computer Networks, volume 57, number 2, pp. 556–578.
doi: http://dx.doi.org/10.1016/j.comnet.2012.06.006, accessed 18 May 2020.
Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2011. “The socialbot network: When bots socialize for fame and money,” ACSAC ’11: Proceedings of the 27th Annual Computer Security Applications Conference, pp. 93–102.
doi: http://dx.doi.org/10.1145/2076732.2076746, accessed 18 May 2020.
D.A. Broniatowski, A.M. Jamison, S. Qi, L. AlKulaib, T. Chen, A. Benton, S.C. Quinn, and M. Dredze, 2018. “Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate,” American Journal of Public Health, volume 108, number 10, pp. 1,378–1,384.
doi: http://dx.doi.org/10.2105/AJPH.2018.304567, accessed 18 May 2020.
N. Chavoshi, H. Hamooni, and A. Mueen, 2018. “DeBot: Twitter bot detection via warped correlation,” 2016 IEEE 16th International Conference on Data Mining (ICDM).
doi: http://dx.doi.org/10.1109/icdm.2016.0096, accessed 18 May 2020.
E. Chen, K. Lerman, and E. Ferrara, 2020. “#COVID-19: The first public coronavirus Twitter dataset,” arXiv, 2003.07372 (16 March), at http://arxiv.org/abs/2003.07372, accessed 18 May 2020.
N. Ching and J. Seldin, 2020. “US pushes back against Russian, Chinese, Iranian coronavirus disinformation,” Voice of America (16 April), at https://www.voanews.com/covid-19-pandemic/us-pushes-back-against-russian-chinese-iranian-coronavirus-disinformation, accessed 18 May 2020.
M. Cinelli, W. Quattrociocchi, A. Galeazzi, C.M. Valensise, E. Brugnoli, A.L. Schmidt, P. Zola, F. Zollo, and A. Scala, 2020. “The COVID-19 social media infodemic,” arXiv, 2003.05004 (10 March), at http://arxiv.org/abs/2003.05004, accessed 18 May 2020.
G. Cosentino, 2020a. “From pizzagate to the great replacement: The globalization of conspiracy theories,” In: G. Cosentino. Social media and the post-truth world order: The global dynamics of disinformation. Cham, Switzerland: Palgrave Pivot, pp. 59–86.
doi: https://doi.org/10.1007/978-3-030-43005-4_3, accessed 18 May 2020.
G. Cosentino, 2020b. “Polarize and conquer: Russian influence operations in the United States,” In: G. Cosentino. Social media and the post-truth world order: The global dynamics of disinformation. Cham, Switzerland: Palgrave Pivot, pp. 33–57.
doi: https://doi.org/10.1007/978-3-030-43005-4_2, accessed 18 May 2020.
S. Cresci, R. Di Pietro, M. Petrocchi, A. Spognardi, and M. Tesconi, 2017. “The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race,” WWW ’17 Companion: Proceedings of the 26th International Conference on World Wide Web Companion, pp. 963–972.
doi: http://dx.doi.org/10.1145/3041021.3055135, accessed 18 May 2020.
F. Daniel and A. Millimaggi, 2020. “On Twitter bots behaving badly: A manual and automated analysis of Python code patterns on GitHub,” Journal of Web Engineering, volume 18, number 8, pp. 801–836.
doi: http://dx.doi.org/10.13052/jwe1540-9589.1883, accessed 18 May 2020.
C.A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, 2016. “BotOrNot: A system to evaluate social bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274.
doi: https://doi.org/10.1145/2872518.2889302, accessed 18 May 2020.
C.A. de Lima Salge and N. Berente, 2017. “Is that social bot behaving unethically?” Communications of the ACM, volume 60, number 9, pp. 29–31.
doi: http://dx.doi.org/10.1145/3126492, accessed 18 May 2020.
C. de Saint Laurent, V. Glaveanu, and C. Chaudet, 2020. “Malevolent creativity and social media: Creating anti-immigration communities on Twitter,” Creativity Research Journal, volume 32, number 1, pp. 66–80.
doi: http://dx.doi.org/10.1080/10400419.2020.1712164, accessed 18 May 2020.
M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H.E. Stanley, and W. Quattrociocchi, 2016. “The spreading of misinformation online,” Proceedings of the National Academy of Sciences, volume 113, number 3 (19 January), pp. 554–559.
doi: http://dx.doi.org/10.1073/pnas.1517441113, accessed 18 May 2020.
P. DiMaggio, 2015. “Adapting computational text analysis to social science (and vice versa),” Big Data & Society (1 December).
doi: http://dx.doi.org/10.1177/2053951715602908, accessed 18 May 2020.
J. Echeverria and S. Zhou, 2017. “Discovery, retrieval, and analysis of the ‘Star wars’ botnet in Twitter,” ASONAM ’17: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 1–8.
doi: http://dx.doi.org/10.1145/3110025.3110074, accessed 18 May 2020.
J. Echeverria, E. De Cristofaro, N. Kourtellis, I. Leontiadis, G. Stringhini, and S. Zhou, 2018. “LOBO: Evaluation of generalization deficiencies in Twitter bot classifiers,” ACSAC ’18: Proceedings of the 34th Annual Computer Security Applications Conference, pp. 137–146.
doi: http://dx.doi.org/10.1145/3274694.3274738, accessed 18 May 2020.
E. Ferrara, 2020. “#COVID-19 on Twitter: Bots, conspiracies, and social media activism,” arXiv, 2004.09531v1 (20 April), at https://arxiv.org/abs/2004.09531v1, accessed 18 May 2020.
E. Ferrara, 2019. “The history of digital spam,” Communications of the ACM, volume 62, number 8, pp. 82–91.
doi: https://doi.org/10.1145/3299768, accessed 18 May 2020.
E. Ferrara, 2018. “Measuring social spam and the effect of bots on information diffusion in social media,” In: S. Lehmann and Y.–Y. Ahn (editors). Complex spreading phenomena in social systems: Influence and contagion in real-world social networks. Cham, Switzerland: Springer, pp. 229–255.
doi: https://doi.org/10.1007/978-3-319-77332-2_13, accessed 18 May 2020.
E. Ferrara, 2017. “Disinformation and social bot operations in the run up to the 2017 French presidential election,” First Monday, volume 22, number 8, at https://firstmonday.org/article/view/8005/6516, accessed 18 May 2020.
doi: http://dx.doi.org/10.5210/fm.v22i8.8005, accessed 18 May 2020.
E. Ferrara, 2015. “Manipulation and abuse on social media,” ACM SIGWEB Newsletter, article number 4.
doi: http://dx.doi.org/10.1145/2749279.2749283, accessed 18 May 2020.
E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016. “The rise of social bots,” Communications of the ACM, volume 59, number 7, pp. 96–104.
doi: http://dx.doi.org/10.1145/2818717, accessed 18 May 2020.
C. Freitas, F. Benevenuto, S. Ghosh, and A. Veloso, 2015. “Reverse engineering socialbot infiltration strategies in Twitter,” ASONAM ’15: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 25–32.
doi: http://dx.doi.org/10.1145/2808797.2809292, accessed 18 May 2020.
R. Gallotti, F. Valle, N. Castaldo, P. Sacco, and M. De Domenico, 2020. “Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics,” arXiv, 2004.03997 (11 April), at http://arxiv.org/abs/2004.03997, accessed 18 May 2020.
J. Gao, P. Zheng, Y. Jia, H. Chen, Y. Mao, S. Chen, Y. Wang, H. Fu, and J. Dai, 2020. “Mental health problems and social media exposure during COVID-19 outbreak,” Preprints with The Lancet (20 February).
doi: http://dx.doi.org/10.2139/ssrn.3541120, accessed 18 May 2020.
Z. Gilani, E. Kochmar, and J. Crowcroft, 2017. “Classification of Twitter accounts into automated agents and human users,” ASONAM ’17: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 489–496.
doi: http://dx.doi.org/10.1145/3110025.3110091, accessed 18 May 2020.
Z. Gilani, R. Farahbakhsh, G. Tyson, L. Wang, and J. Crowcroft, 2017. “An in-depth characterisation of bots and humans on Twitter,” arXiv, 1704.01508 (5 April), at http://arxiv.org/abs/1704.01508, accessed 18 May 2020.
R. Gorwa and D. Guilbeault, 2018. “Unpacking the social media bot: A typology to guide research and policy,” Policy & Internet (10 August).
doi: http://dx.doi.org/10.1002/poi3.184, accessed 18 May 2020.
T. Graham and S. Wright, 2014. “Discursive equality and everyday talk online: The impact of ‘superparticipants’,” Journal of Computer-Mediated Communication, volume 19, number 3, pp. 625–642.
doi: http://dx.doi.org/10.1111/jcc4.12016, accessed 18 May 2020.
N. Grinberg, K. Joseph, L. Friedland, B. Swire-Thompson, and D. Lazer, 2019. “Fake news on Twitter during the 2016 U.S. presidential election,” Science, volume 363, number 6425 (25 January), pp. 374–378.
doi: http://dx.doi.org/10.1126/science.aau2706, accessed 18 May 2020.
A.T. Hadgu, K. Garimella, and I. Weber, 2013. “Political hashtag hijacking in the U.S.,” WWW ’13 Companion: Proceedings of the 22nd International Conference on World Wide Web, pp. 55–56.
doi: http://dx.doi.org/10.1145/2487788.2487809, accessed 18 May 2020.
M.L. Holshue, C. DeBolt, S. Lindquist, K.H. Lofy, J. Wiesman, H. Bruce, C. Spitters, K. Ericson, S. Wilkerson, A. Tural, G. Diaz, A. Cohn, L. Fox, A. Patel, S.I. Gerber, L. Kim, S. Tong, X. Lu, S. Lindstrom, M.A. Pallansch, W.C. Weldon, H.M. Biggs, T.M. Uyeki, and S.K. Pillai, for the Washington State 2019-nCoV Case Investigation Team, 2020. “First case of 2019 novel coronavirus in the United States,” New England Journal of Medicine, volume 382, number 10, pp. 929–936.
doi: http://dx.doi.org/10.1056/NEJMoa2001191, accessed 18 May 2020.
P.N. Howard, S. Woolley, and R. Calo, 2018. “Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration,” Journal of Information Technology & Politics, volume 15, number 2, pp. 81–93.
doi: http://dx.doi.org/10.1080/19331681.2018.1448735, accessed 18 May 2020.
T. Hwang, I. Pearce, and M. Nanis, 2012. “Socialbots: Voices from the fronts,” Interactions, volume 19, number 2, pp. 38–45.
doi: http://dx.doi.org/10.1145/2090150.2090161, accessed 18 May 2020.
D.M. Kahan, E. Peters, M. Wittlin, P. Slovic, L.L. Ouellette, D. Braman, and G. Mandel, 2012. “The polarizing impact of science literacy and numeracy on perceived climate change risks,” Nature Climate Change, volume 2, number 10, pp. 732–735.
doi: http://dx.doi.org/10.1038/nclimate1547, accessed 18 May 2020.
B. Kleinberg, I. van der Vegt, and M. Mozes, 2020. “Measuring emotions in the COVID-19 real world worry dataset,” arXiv, 2004.04225 (14 May), at http://arxiv.org/abs/2004.04225, accessed 18 May 2020.
S. Kudugunta and E. Ferrara, 2019. “Deep neural networks for bot detection,” Information Sciences, volume 467, pp. 312–322.
doi: http://dx.doi.org/10.1016/j.ins.2018.08.019, accessed 18 May 2020.
D. Lazer, R. Kennedy, G. King, and A. Vespignani, 2014. “The parable of Google flu: Traps in big data analysis,” Science, volume 343, number 6176 (14 March), pp. 1,203–1,205.
doi: http://dx.doi.org/10.1126/science.1248506, accessed 18 May 2020.
J. Lehmann, B. Gonçalves, J.J. Ramasco, and C. Cattuto, 2012. “Dynamical classes of collective attention in Twitter,” WWW ’12: Proceedings of the 21st International Conference on World Wide Web, pp. 251–260.
doi: http://dx.doi.org/10.1145/2187836.2187871, accessed 18 May 2020.
S.C. Lewis, R. Zamith, and A. Hermida, 2013. “Content analysis in an era of big data: A hybrid approach to computational and manual methods,” Journal of Broadcasting & Electronic Media, volume 57, number 1, pp. 34–52.
doi: http://dx.doi.org/10.1080/08838151.2012.761702, accessed 18 May 2020.
J. Li, Q. Xu, R. Cuomo, V. Purushothaman, and T. Mackey, 2020. “Data mining and content analysis of Chinese social media platform Weibo during early COVID-19 outbreak: A retrospective observational infoveillance study,” JMIR Public Health and Surveillance, volume 6, number 2, e18700.
doi: http://dx.doi.org/10.2196/18700, accessed 18 May 2020.
L. Luceri, S. Giordano, and E. Ferrara, 2020. “Detecting troll behavior via inverse reinforcement learning: A case study of Russian trolls in the 2016 US election,” Proceedings of the International AAAI Conference on Web and Social Media; version at https://arxiv.org/pdf/2001.10570.pdf, accessed 18 May 2020.
L. Luceri, A. Deb, S. Giordano, and E. Ferrara, 2019a. “Evolution of bot and human behavior during elections,” First Monday, volume 24, number 9, at https://firstmonday.org/article/view/10213/8073, accessed 18 May 2020.
doi: http://dx.doi.org/10.5210/fm.v24i9.10213, accessed 18 May 2020.
L. Luceri, A. Deb, A. Badawy, and E. Ferrara, 2019b. “Red bots do it better: Comparative analysis of social bot partisan behavior,” WWW ’19: Companion Proceedings of the 2019 World Wide Web Conference, pp. 1,007–1,012.
doi: https://doi.org/10.1145/3308560.3316735, accessed 18 May 2020.
A. Marwick and R. Lewis, 2017. “Media manipulation and disinformation online,” Data & Society (16 May), at https://datasociety.net/library/media-manipulation-and-disinfo-online/, accessed 18 May 2020.
P.T. Metaxas and E. Mustafaraj, 2012. “Social media and the elections,” Science, volume 338, number 6106 (26 October), pp. 472–473.
doi: https://doi.org/10.1126/science.1230456, accessed 18 May 2020.
A. Minnich, N. Chavoshi, D. Koutra, and A. Mueen, 2017. “BotWalk: Efficient adaptive exploration of Twitter bot networks,” ASONAM ’17: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 467–474.
doi: https://doi.org/10.1145/3110025.3110163, accessed 18 May 2020.
F. Morstatter, J. Pfeffer, H. Liu, and K.M. Carley, 2013. “Is the sample good enough? Comparing data from Twitter’s streaming API with Twitter’s firehose,” Seventh International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/view/6071, accessed 18 May 2020.
F. Nazir, M.A. Ghazanfar, M. Maqsood, F. Aadil, S. Rho, and I. Mehmood, 2019. “Social media signal detection using tweets volume, hashtag, and sentiment analysis,” Multimedia Tools and Applications, volume 78, pp. 3,553–3,586.
doi: https://doi.org/10.1007/s11042-018-6437-z, accessed 18 May 2020.
D. Pacheco, A. Flammini, and F. Menczer, 2020. “Unveiling coordinated groups behind White Helmets disinformation,” WWW ’20: Companion Proceedings of the Web Conference 2020, pp. 611–616.
doi: https://doi.org/10.1145/3366424.3385775, accessed 18 May 2020.
D. Pacheco, P.–M. Hui, C. Torres–Lugo, B.T. Truong, A. Flammini, and F. Menczer, 2020. “Uncovering coordinated networks on social media,” arXiv, 2001.05658 (16 January), at http://arxiv.org/abs/2001.05658, accessed 18 May 2020.
G. Pennycook, J. McPhetres, Y. Zhang, and D.G. Rand, 2020. “Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention,” PsyArXiv (18 March).
doi: http://dx.doi.org/10.31234/osf.io/uhbk9, accessed 18 May 2020.
I. Pozzana and E. Ferrara, 2020. “Measuring bot and human behavioral dynamics,” Frontiers in Physics, volume 8 (22 April).
doi: http://dx.doi.org/10.3389/fphy.2020.00125, accessed 18 May 2020.
S. Qi, L. AlKulaib, and D.A. Broniatowski, 2018. “Detecting and characterizing bot-like behavior on Twitter,” In: R. Thomson, C. Dancy, A. Hyder, and H. Bisgin (editors). Social, cultural, and behavioral modeling: 11th International Conference, SBP–BRiMS 2018, Washington, DC, USA, July 10–13, 2018, Proceedings. Lecture Notes in Computer Science, volume 10899. Cham, Switzerland: Springer, pp. 228–232.
doi: http://dx.doi.org/10.1007/978-3-319-93372-6_26, accessed 18 May 2020.
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, 2019. “Language models are unsupervised multitask learners,” at https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf, accessed 18 May 2020.
Y. Roth, 2019. “Information operations on Twitter: Principles, process, and disclosure” (13 June), at https://blog.twitter.com/en_us/topics/company/2019/information-ops-on-twitter.html, accessed 18 May 2020.
D. Ruths and J. Pfeffer, 2014. “Social media for large studies of behavior,” Science, volume 346, number 6213 (28 November), pp. 1,063–1,064.
doi: http://dx.doi.org/10.1126/science.346.6213.1063, accessed 18 May 2020.
D.A. Scheufele and N.M. Krause, 2019. “Science audiences, misinformation, and fake news,” Proceedings of the National Academy of Sciences, volume 116, number 16 (16 April), pp. 7,662–7,669.
doi: http://dx.doi.org/10.1073/pnas.1805871115, accessed 18 May 2020.
L. Schild, C. Ling, J. Blackburn, G. Stringhini, Y. Zhang, and S. Zannettou, 2020. “‘Go eat a bat, Chang!’: An early look on the emergence of Sinophobic behavior on Web communities in the face of COVID-19,” arXiv, 2004.04046 (8 April), at http://arxiv.org/abs/2004.04046, accessed 18 May 2020.
S. Shahsavari, P. Holur, T.R. Tangherlini, and V. Roychowdhury, 2020. “Conspiracy in the time of corona: Automatic detection of Covid-19 conspiracy theories in social media and the news,” arXiv, 2004.13783 (28 April), at http://arxiv.org/abs/2004.13783, accessed 18 May 2020.
C. Shao, G.L. Ciampaglia, O. Varol, K.–C. Yang, A. Flammini, and F. Menczer, 2018. “The spread of low-credibility content by social bots,” Nature Communications, volume 9, article number 4787 (20 November).
doi: https://doi.org/10.1038/s41467-018-06930-7, accessed 18 May 2020.
L. Singh, S. Bansal, L. Bode, C. Budak, G. Chi, K. Kawintiranon, C. Padden, R. Vanarsdall, E. Vraga, and Y. Wang, 2020. “A first look at COVID-19 information and misinformation sharing on Twitter,” arXiv, 2003.13907 (31 March), at http://arxiv.org/abs/2003.13907, accessed 18 May 2020.
K. Starbird, 2017. “Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter,” Eleventh International AAAI Conference on Web and Social Media, at https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15603, accessed 18 May 2020.
M. Stella, E. Ferrara, and M. De Domenico, 2018. “Bots increase exposure to negative and inflammatory content in online social systems,” Proceedings of the National Academy of Sciences, volume 115, number 49 (20 November), pp. 12,435–12,440.
doi: http://dx.doi.org/10.1073/pnas.1803470115, accessed 18 May 2020.
S. Stieglitz, F. Brachten, B. Ross, and A.–K. Jung, 2017. “Do social bots dream of electric sheep? A categorisation of social media bot accounts,” arXiv, 1710.04044 (11 October), at http://arxiv.org/abs/1710.04044, accessed 18 May 2020.
D. Stukal, S. Sanovich, R. Bonneau, and J.A. Tucker, 2017. “Detecting bots on Russian political Twitter,” Big Data, volume 5, number 4, pp. 310–324.
doi: http://dx.doi.org/10.1089/big.2017.0038, accessed 18 May 2020.
V.S. Subrahmanian, A. Azaria, S. Durst, V. Kagan, A. Galstyan, K. Lerman, L. Zhu, E. Ferrara, A. Flammini, and F. Menczer, 2016. “The DARPA Twitter bot challenge,” Computer, volume 49, number 6, pp. 38–46.
doi: https://doi.org/10.1109/MC.2016.183, accessed 18 May 2020.
J. Sutton, 2018. “Health communication trolls and bots versus public health agencies’ trusted voices,” American Journal of Public Health, volume 108, number 10, pp. 1,281–1,282.
doi: https://doi.org/10.2105/AJPH.2018.304661, accessed 18 May 2020.
K. Thomas, C. Grier, D. Song, and V. Paxson, 2011. “Suspended accounts in retrospect: An analysis of Twitter spam,” IMC ’11: Proceedings of the ACM SIGCOMM Conference on Internet Measurement, pp. 243–258.
doi: https://doi.org/10.1145/2068816.2068840, accessed 18 May 2020.
S. Vosoughi, D. Roy, and S. Aral, 2018. “The spread of true and false news online,” Science, volume 359, number 6380 (9 March), pp. 1,146–1,151.
doi: https://doi.org/10.1126/science.aap9559, accessed 18 May 2020.
S.C. Woolley, 2016. “Automating power: Social bot interference in global politics,” First Monday, volume 21, number 4, at https://firstmonday.org/article/view/6161/5300, accessed 18 May 2020.
doi: http://dx.doi.org/10.5210/fm.v21i4.6161, accessed 18 May 2020.
S.C. Woolley and P.N. Howard, 2018. Computational propaganda: Political parties, politicians, and political manipulation on social media. New York: Oxford University Press.
doi: http://dx.doi.org/10.1093/oso/9780190931407.001.0001, accessed 18 May 2020.
K.–C. Yang, C. Torres-Lugo, and F. Menczer, 2020. “Prevalence of low-credibility information on Twitter during the COVID-19 outbreak,” arXiv, 2004.14484 (29 April), at http://arxiv.org/abs/2004.14484, accessed 18 May 2020.
K.–C. Yang, O. Varol, P.–M. Hui, and F. Menczer, 2019a. “Scalable and generalizable social bot detection through data selection,” arXiv, 1911.09179 (20 November), at http://arxiv.org/abs/1911.09179, accessed 18 May 2020.
K.–C. Yang, O. Varol, C.A. Davis, E. Ferrara, A. Flammini, and F. Menczer, 2019b. “Arming the public with artificial intelligence to counter social bots,” Human Behavior and Emerging Technologies, volume 1, number 1, pp. 48–61.
doi: http://dx.doi.org/10.1002/hbe2.115, accessed 18 May 2020.
A. Zelenkauskaite and M. Balduccini, 2017. “‘Information warfare’ and online news commenting: Analyzing forces of social influence through location-based commenting user typology,” Social Media + Society (17 July).
doi: http://dx.doi.org/10.1177/2056305117718468, accessed 18 May 2020.
E. Zuckerman, 2019. “QAnon and the emergence of the unreal,” Journal of Design and Science, number 6 (15 July).
doi: http://dx.doi.org/10.21428/7808da6b.6b8a82b9, accessed 18 May 2020.
Received 18 April 2020; revised 5 May 2020; revised 14 May 2020; accepted 14 May 2020.
This paper is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
What types of COVID-19 conspiracies are populated by Twitter bots?
by Emilio Ferrara.
First Monday, Volume 25, Number 6 - 1 June 2020