This paper aims to analyze and compare the discursive strategies used to spread and legitimate disinformation on Twitter and WhatsApp during the 2018 Brazilian presidential election. Our case study is the disinformation campaign used to discredit the electronic ballot used in the election. In this paper, we use a mixed methods approach, combining critical discourse analysis and a quantitative aggregate approach, to discuss a dataset of 53 original tweets and 54 original WhatsApp messages. We focused on identifying the most used strategies on each platform. Our results show that: (1) messages on both platforms used structural strategies to portray urgency and create a negative emotional framing; (2) tweets often framed disinformation as a “rational” explanation; and, (3) WhatsApp messages frequently relied on authorities and shared conspiracy theories, spreading less truthful stories than tweets.
Introduction
The 2018 Brazilian presidential election happened amidst several controversies, especially surrounding Jair Bolsonaro, the candidate of the Social Liberal Party (PSL). Filling his campaign with far-right views and polemical declarations about minorities and opponents, the candidate defeated leftist Fernando Haddad from the Worker’s Party (PT). The election and the campaign itself were extremely polarized (Soares, et al., 2019), with candidates connected to centrist ideologies being almost ignored by the public. The pre-election period, as a whole, was also atypical. Bolsonaro avoided televised debates, arguably the most important moments of the campaign (Vasconcellos, 2011). In September 2018, shortly after the beginning of the official campaign period, Bolsonaro was attacked during a rally, which made it impossible for him to participate in mediated events. Nevertheless, at the same time, he heavily used social media channels to spread his message. The Bolsonaro campaign’s heavy use of social media was also connected to a spread of disinformation by his supporters (Machado, et al., 2018; Soares, et al., 2019). By the end of October 2018, and again in September and October 2019, media outlets and opposing parties publicly denounced his campaign, accusing it of manipulation through a massive spread of disinformation.
In this paper, we focus on uncovering the discursive strategies used to legitimate and spread disinformation on both Twitter and WhatsApp. Our case study is the disinformation campaign that aimed to discredit the electronic ballot and, further, the democratic process as a whole. This campaign was strongly spread in the last week before the second and final round of the election, and was based on several stories questioning the reliability of the electronic ballot, as well as conspiracy theories about how the Worker’s Party and the Supreme Court were working to defeat Bolsonaro through fraud. We decided to look at this particular disinformation campaign because it may have had a strong influence in the country, since the content portrayed the whole Brazilian democratic system as a fraud, thus affecting trust in democracy. Furthermore, this campaign was fueled by Bolsonaro himself, who often criticized the Brazilian Superior Electoral Court and the electronic ballot during the campaign; Facebook and Google even had to remove videos in which Bolsonaro spread disinformation about the electronic ballot, due to a judicial order. This context makes it more important to understand how this particular disinformation campaign resonated on social media and communication apps. We look at one campaign instead of numerous campaigns in order to derive a more in-depth analysis.
We chose to focus on how these strategies were used on both platforms because Twitter and WhatsApp were two important channels for this presidential campaign. Twitter was a key place for politicians and their “official” discourse, while also hosting many supporters and, often, hashtag disputes between different political views (Weber, et al., 2013). WhatsApp, on the other hand, was a more “private” place, often used to spread information to family and friends and to participate in groups around “alternative” media (Resende, et al., 2018). Finally, we propose a mixed methods approach, combining a qualitative critical discourse analysis perspective (Fairclough, 2001), to discuss the disinformation strategies used in the campaign against the ballot, with a quantitative analysis of these strategies in data that we collected from WhatsApp and Twitter. Therefore, our study also contributes to disinformation scholarship as a methodological example of applying discourse analysis to disinformation campaigns. Our dataset is composed of 53 original tweets and 54 original WhatsApp messages about the “fraud” in the electronic ballot. These original messages accounted for a total of 15,257 retweets and 1,134 shares.
Disinformation and social media
There is considerable research examining how disinformation is created and spread through social media channels, and its effects (for example, Bradshaw and Howard, 2018, 2017; Marwick and Lewis, 2017; Derakhshan and Wardle, 2017). Part of these studies examined terminology, focusing on defining disinformation and misinformation. For many, misinformation encompasses false, manipulated or misleading information that was shared without the intention to deceive, or inadvertently; disinformation, on the other hand, carries the intention to deceive (Derakhshan and Wardle, 2017; Nemr and Gangware, 2019; Fallis, 2015, amongst others). In this paper, we will use this notion of disinformation.
Studies have also focused on disinformation campaigns. A disinformation campaign is, roughly, the coordinated spread of disinformation as a means to an end, with the intention of influencing public opinion through social media. Disinformation campaigns are strongly connected to political propaganda, sometimes used as tools to promote specific political views (Jack, 2017; Bastos and Mercea, 2017). These campaigns often rely on trolls and botnets (Ong and Cabañes, 2018), political influencers and activists (Soares, et al., 2018), hyperpartisan outlets (Marwick and Lewis, 2017) and other strategies to coordinate and legitimate the spread of biased and manipulated content.
We understand that disinformation about the electronic ballot constituted a disinformation campaign because it was, to some degree, coordinated. Political influencers, such as Bolsonaro himself and other political leaders who supported him, often said publicly that the electronic ballot was untrustworthy. Videos in which Bolsonaro criticized the electronic ballots and the Brazilian Superior Electoral Court had to be removed. His son Flávio also shared a manipulated video to claim that the electronic ballots were rigged. Bolsonaro’s party, PSL, called on supporters to “inspect” electronic ballots on voting day. Activists also echoed disinformation about the electronic ballot in pro-Bolsonaro demonstrations. As we show in our analysis, this disinformation campaign also had an impact on social media. Opinion leaders and activists spread disinformation on Twitter. On WhatsApp, disinformation was spread through large political groups.
As macro conversations and political activism on social media tend to connect weakly tied users (Bennett and Segerberg, 2012; Bruns and Moe, 2014), we also understand that disinformation campaigns spread through macro conversations among weakly tied users. Thus, in this campaign there was some level of coordination at the outset and an aligned discourse. There was some level of coordination between Bolsonaro, his party and other political leaders central to his campaign, and the discourse of this particular campaign was used to ultimately benefit Bolsonaro. This was also linked to the polarized context of the campaign, so the disinformation campaign was used as a political weapon by Bolsonaro and his political allies.
Disinformation campaigns are also connected to an increase of polarization and toxicity in political conversations on social media (Ong and Cabañes, 2018). Polarized political contexts have been associated with more or less isolated groups that share few connections between them. This structure is connected to homophily (groups tend to aggregate people with similar views) (Gruzd and Roy, 2014; Bastos, et al., 2018), and a reinforcement of like-minded content rather than political debate (Soares, et al., 2018). Because polarization tends to reduce diversity, these groups act to reinforce their own views and are more prone to circulate disinformation (Benkler, et al., 2018). Thus, using content that describes a political context in a polarized way may be strongly connected to strategies of disinformation, particularly in this case study.
Finally, disinformation campaigns have become more common in social media because of the affordances provided by these tools (Gu, et al., 2017). This is important because users seem to act differently on different platforms and, in our case, Twitter is linked to public discourse, while WhatsApp is used for more private conversations (Valeriani and Vaccari, 2018). This is also relevant because unlike traditional media, social media channels rely on actors to spread messages. When actors spread these messages, they often reinforce content and legitimate it. Discursive strategies used in disinformation campaigns may be designed to fuel this spread.
Discourse and legitimation strategies
The phenomenon of disinformation is directly connected to the strategies of validation through which these discourses are composed. For social actors to be able to share disinformation, they need to frame it as valid through legitimation strategies. Legitimation is related to the role discourse plays in promoting the acceptance of social practices, social relations and power structures, often naturalizing domination. It is, thus, a way to justify social order through discourse (Van Leeuwen, 2007; Van Leeuwen and Wodak, 1999).
A discourse strategy is an action used to frame reality, to construct it through a particular perception. Framing is a fundamental operation of discourse, based on the selection and composition of textual elements to produce a desired meaning. Discursive strategies of legitimation are forms through which these actions on the text try to convey credibility or validation to its meaning (Reyes, 2011). Van Leeuwen (2007) proposes four broad categories for legitimation and credibility through discourse:
Authorization (A) — This category is connected to the message being validated by authorities such as institutions, leaders or traditions. Authorization may occur through personal or impersonal authority, tradition or custom, expertise or role models.
Moral evaluation (ME) — This category is associated with legitimation using moral values. It may occur through adjectives or descriptions of the subjects or objects involved; their framing as ‘good’ or ‘bad’ (which Van Leeuwen explains may be very oblique and not direct). Moral evaluation can happen through ‘evaluation’ (when there is a description or an attribution of moral value), ‘abstraction’ (when a practice is ‘moralizing’); or ‘analogy’ (when there is an implicit or explicit comparison between two things that are framed as morally desirable and undesirable).
Rationalization (R) — This category validates discourse through rationalization. It may happen through instrumental rationalization (when the discourse validates the action by reference to goals, uses and effects) or theoretical rationalization (when rationalization takes the form of ‘the way things are’).
Mythopoesis (M) — This category is validation through stories or narratives. These stories may present themselves as moral tales, in which good behavior is rewarded, or cautionary tales, in which bad behavior is punished.
We believe these strategies might be used in disinformation campaigns, as content must secure legitimacy in order to effectively affect social practices. Furthermore, the polarized structures that emerge in political conversations on social media, homophilous and often isolated (Gruzd and Roy, 2014; Bastos, et al., 2018; Soares, et al., 2018; Benkler, et al., 2018), may provide a perfect environment for this type of discourse to thrive. For this reason, we decided to look at one disinformation campaign heavily shared on social media platforms.
For this study, we analyzed the discursive strategies used in a particular disinformation campaign against the Brazilian electronic ballot on Twitter and WhatsApp to understand how discursive strategies were used to legitimate and help spread this content. Our case study is a disinformation campaign that was spread during the last week of the second round of the 2018 Brazilian presidential campaign. This disinformation campaign, spread on WhatsApp and Twitter, focused on conspiracy theories about how the Worker’s Party and supporters of Haddad were going to manipulate the electronic ballot and the election itself. The content would often imply that, if Jair Bolsonaro did not win, the election process was tainted and that supporters of his candidacy should do something about it.
To discuss the strategies used in this case, we analyzed two datasets. One dataset was retrieved from the WhatsApp Electoral Monitor (Resende, et al., 2018), which tracked more than 500 public political WhatsApp groups in Brazil. The other dataset was collected from Twitter, with data derived directly through the streaming API using Social Feed Manager (Prom, 2017). To collect tweets, we used the keyword “urna” (ballot in Portuguese) and gathered public tweets during the last week of the campaign, for a total of 92,632 tweets. This data comprised tweets in several different contexts, as the only filter was the keyword. To refine our dataset, we then selected, from among the thousand most retweeted messages, the original tweets related to our disinformation case study. To collect the WhatsApp messages, we filtered the 20 most shared original messages during each of the seven days before the election date (28 October 2018). These public political groups were often used to massively spread disinformation, as their usage seemed to be more related to sharing content than to actual dialogue (Resende, et al., 2018). The WhatsApp Monitor included some information about messages, such as the number of shares, the number of groups in which a message was shared and the number of users who shared it. We used the number of shares as the criterion to collect messages. Once again, our dataset comprised messages about different topics, so we further filtered this set of 140 WhatsApp messages to select only messages that included disinformation about the electronic ballot. Our final dataset resulted in 53 tweets and 54 WhatsApp messages.
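The filtering steps described above can be sketched roughly as follows. This is a minimal, hypothetical sketch: the record fields, thresholds and keyword list are illustrative assumptions, not the pipeline the authors actually used.

```python
# Hypothetical sketch of the dataset-filtering steps described above.
# Field names, keywords and toy data are illustrative only.

def top_shared(messages, n, count_key="shares"):
    """Return the n most shared (or retweeted) original messages."""
    return sorted(messages, key=lambda m: m[count_key], reverse=True)[:n]

def mentions_ballot(message, keywords=("urna", "fraude")):
    """Crude topical filter: keep messages mentioning the ballot or fraud."""
    text = message["text"].lower()
    return any(k in text for k in keywords)

# Toy data standing in for the 92,632 collected tweets
tweets = [
    {"text": "Fraude na urna eletronica!", "shares": 3187},
    {"text": "Bom dia a todos", "shares": 500},
    {"text": "A urna nao e confiavel", "shares": 19},
]

# Step 1: rank by shares; step 2: keep only on-topic messages
candidates = top_shared(tweets, n=1000)
dataset = [t for t in candidates if mentions_ballot(t)]
print(len(dataset))  # here: 2 of the 3 toy messages
```

In practice, the second step in the paper was a manual topical selection rather than a keyword match; the sketch only illustrates the ranking-then-filtering logic.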
Table 1: Dataset summary.

                              Tweets    WhatsApp messages
Number of original messages   53        54
Total RT/shares               15,257    1,134
Max RT/shares                 3,187     99
Min RT/shares                 19        3
For the analysis, we used Fairclough’s (2001) framework for discourse analysis, with three major categories: 1) the text itself; 2) the discursive practice connected to the text; and 3) the social practice that follows both. We focused on the text as the structure of the message. The text may employ strategies to frame discourse, manipulating individuals to act on it. For this micro-level of messaging, we considered the typical affordances of the Internet and its language use (Herring, 2010). That is, we analyzed how messages used certain linguistic strategies to frame a message, connecting it to certain meanings in order to spread content.
The discursive legitimation strategies, in this framework, are the second level of analysis, the intermediary level, connected to discursive practices. For this level, we used the four categories of legitimation as suggested by Van Leeuwen (2007).
After this first part, we were able to advance analysis to a macro-level and discuss how the platforms were used and affected the spread and validation of disinformation, connected to the last of Fairclough’s (2001) categories, the social practices that were supported and created on these platforms. In this last part, we discuss how these strategies are part of the disinformation ecosystem.
1) Textual structure (micro-level)
To analyze the textual structure, we first looked for content characteristics that were used to frame disinformation. After qualitatively analyzing messages, we were able to focus on five categories: the presence of emojis, hashtags, links, words of urgency and incentives to share.
Use of emojis: Emojis are signs used to express emotion and are common on the Internet (Herring, 2010). They substitute for the non-verbal cues typical of face-to-face conversations. Emojis are also used to give text a certain emotional framing, connected to a general sentiment that one wants to associate with a given message.
Presence of hashtags: A hashtag, or #, followed by a keyword, is mostly used to index a message. Hashtags are very popular on Twitter and are used to create a sense of wider conversation by tagging a message, which might also help to increase its visibility (Bruns and Burgess, 2011; Bode, et al., 2014). Hashtags also have other functions, connected to framing the message, such as creating slogans, mobilizing people or contextualizing content (Recuero, et al., 2015).
Presence of links: A hyperlink is a connection to another Web site that can be used to share news or other resources. As a structural feature, a link also creates a frame of interpretation for content, providing context for a message (boyd, et al., 2010).
Words of urgency: This category focused on words of warning, which were typical in these datasets, such as “urgent”, “extremely important” or “extremely urgent”. These phrases could be used to frame content as significant or necessary.
Incentives to share: Finally, we also found that these messages tried to incentivize users to share, using “call to action” strategies, such as words and phrases like “share”, “pass along” or “viralize”. This terminology was also connected to a framing structure for content, as these words encouraged users to legitimate the content through dissemination.
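The five micro-level features above could, in principle, be coded automatically along these lines. This is a hypothetical sketch: the Portuguese keyword lists and the emoji code-point ranges are illustrative assumptions, not the coding scheme the analysis actually used, which was applied qualitatively.

```python
import re

# Illustrative keyword lists (Portuguese), not the authors' actual codebook
URGENCY_WORDS = ("urgente", "extremamente importante", "atencao", "atenção")
SHARE_WORDS = ("compartilhe", "repasse", "viralize", "divulgue")
# Rough emoji detection via common emoji/symbol code-point ranges
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def code_message(text):
    """Code one message for the five micro-level features."""
    low = text.lower()
    return {
        "emoji": bool(EMOJI_RE.search(text)),
        "hashtag": "#" in text,
        "link": "http" in low,
        "urgency": any(w in low for w in URGENCY_WORDS),
        "share_incentive": any(w in low for w in SHARE_WORDS),
    }

features = code_message("*URGENTE* Compartilhe! #B17 👉 https://example.com")
# all five features are present in this toy message
```

Counting these boolean codes over a corpus would yield frequency tables of the kind reported below.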
Figure 1 summarizes the micro-level characteristics that we found on each dataset.
Figure 1: Comparison of frequency of linguistic strategies used on Twitter and WhatsApp.
The use of emojis in messages containing disinformation was uncommon on both Twitter (21 percent) and WhatsApp (26 percent). Table 2 describes emoji usage on each platform. On Twitter, emojis were mostly used to portray some negative emotion about the content, for example, “(...) TSE authority deceives 😱😡” and “No ballot had trouble typing 13. 🤔 Amazing right?”; and to convey urgency, for example, “📣 ATTENTION (...)”. The Brazilian flag was also used in a few tweets, as a way to reinforce a sentiment that reflected Bolsonaro’s nationalist campaign. On WhatsApp, the most frequent emojis were the “negative” hand (👎), which conveyed something bad about content, and the emoji 👉, used to evoke the finger gun Bolsonaro used as a symbol in his campaign. For example: “👉 *Bolsonaro 77%* 👎 *Haddad 23%*”. The folded hands were also used on WhatsApp, in contexts where they indicated a religious purpose to the message, such as: “(...) Lets work and God bless this nation 🙏” and “(...) our battle is against the Evil. 🙏 (...)”. Both guns and religion were frequent themes in Bolsonaro’s campaign and his own discourse.
Table 2: Emoji usage.

Emoji   Twitter   WhatsApp   Function
👆      0         1          identification
👉      0         8          identification
👎      0         8          negative
🙏      0         4          identification
😨      0         2          negative
🇧🇷      3         1          identification
🚨      0         1          urgency
🔴      0         1          urgency (red circles used to highlight part of the message)
😱      2         0          negative
😡      2         0          negative
📣      3         0          urgency
🤔      3         0          negative
😂      1         0          negative (laughing at something deemed ridiculous)
🚫      1         0          negative
💣      1         0          urgency
💰      1         0          negative (corruption)
💸      1         0          negative (corruption)
The most frequent category on WhatsApp was identification, with 14 occurrences, followed by negative sentiments about content (10 occurrences) and urgency (2). On Twitter, the most frequent category was “negative sentiments about content” (8 occurrences), followed by urgency and identification (3 each). Thus, emoji usage was connected to (1) identification with the candidate, as many of the most used emojis mimicked Bolsonaro’s discourse and the themes used by his campaign (such as nationalism, religion and guns) and, (2) framing the content as urgent and negative, creating an emotional context. In general, emojis were more frequent on WhatsApp and usually had an identification function. On Twitter, emojis were generally used to create a negative emotional framing.
Hashtag use was not frequent on WhatsApp, appearing in only four percent of the messages. On the other hand, hashtags were much more frequent on Twitter (26 percent of the tweets had hashtags). On WhatsApp, the only hashtag used was #B17, which was used to show support for Bolsonaro (17 was his number in the election) and, thus, identification with his discourse. On Twitter, we found that hashtags were mostly used to strengthen and spread a message, by reinforcing distrust in the electronic ballot with statements against the election process, for example, #IDontAcceptFraud and #VoteOnPaperBallot, and threats, for example, #WithFraudBrasilSTOPS, which threatened a general strike if Bolsonaro lost the election (see Table 3). These differences illustrate that hashtags had a role in Twitter discourse, where they conveyed statements that not only supported the disinformation campaign, but also threatened the Worker’s Party and the Supreme Court. Again, hashtags were used to create a negative emotional framing around information. The rallying cries in these hashtags were likely used as a strategy to mobilize people to engage in the campaign (similar to what Recuero, et al. (2015) identified in hashtag usage in protests).
On WhatsApp, on the other hand, hashtags were used to show identification with the candidate and content.
Table 3: Hashtag usage.

Hashtag                      Twitter   WhatsApp   Function
#IDontAcceptFraud            1         0          statement
#dontgiveashitaboutIbope     1         0          statement
#VoteOnPaperBallot           7         0          statement
#NortheastIs17               1         0          identification
#TheSluttyIsOverPT           1         0          threat
#PTsCensorship               1         0          statement
#FraudsOnTheBallots          1         0          statement
#BolsonaroElectedPresident   1         0          statement
#WithFraudBrasilSTOPS        1         0          threat
#B17                         0         2          identification
We also investigated the use of links. Links are important because linking to other sites might indicate that users drew on external sources to legitimize the content that they shared. The use of links was the same on both platforms (13 percent of WhatsApp messages and of tweets). On WhatsApp, the links pointed to Facebook, which served as the “source” of the shared disinformation. One example was a message that pointed to an election poll showing Bolsonaro with a big advantage, 77 percent of voters. In this case, the external source was used to validate a biased poll and to reinforce the idea that fraud was the only way in which Bolsonaro could lose. On Twitter, links were used in the same way: to share an external source to support disinformation. Some tweets, for example, linked to mainstream media, but drew a biased conclusion from the news that they shared. Others linked to hyperpartisan outlets that supported Bolsonaro and were also engaged in disinformation campaigns. Thus, on both platforms, external links, either to Facebook posts or to hyperpartisan outlets, played an important role by clickbaiting individuals into disinformative content (Benkler, et al., 2018). We observed, thus, that the same strategy was used on both platforms.
Textual strategies were also used to create a sense of urgency by including phrases like “extremely important”, which were frequent in WhatsApp messages (50 percent). On Twitter, this strategy was much less prevalent (only seven percent of tweets). On WhatsApp, words of urgency were generally written in capital letters, for example: “*URGENT* The left coup is as follows: (...). *We need to mass-spread this at record speed to undermine the effect before it even happens*.” This example illustrates that urgent phrases in WhatsApp messages frequently encouraged others to share or take action, and were often connected to emotions. On Twitter, urgent phrases were used similarly: “ATTENTION, PASS ALONG: (...)” and “EXTREMELY URGENT! (...)”. These phrases were used to create a sense of urgency for readers and to stimulate action, as on WhatsApp. This can be perceived as a strategy to validate disinformation by constantly appealing to emotions, especially fear (Reyes, 2011). The usage of urgent words was similar on both platforms, but it was much more frequent on WhatsApp. As described in Table 4, “attention” was the most frequent urgent word used on Twitter. On WhatsApp, “urgent” was used in 19 messages, while “extremely urgent” and “extremely important” were also present in some messages.
Table 4: Usage of words or phrases expressing urgency.

Words or phrases      Twitter   WhatsApp
Attention             3         1
Extremely urgent      1         3
Extremely important   0         5
Don’t forget          0         1
Urgent                0         19
Bomb                  0         1
The incentive to share was again most prominent on WhatsApp, present in all but six messages (89 percent). On the other hand, it was present in only seven percent of tweets. Even though both platforms depend on actions by users to spread content, one possible explanation is that WhatsApp content was more private and confined within groups. Thus, it depended on users to pass it along to other groups or users, which may have been the underlying motivation for this strategy (Resende, et al., 2018). This might also be related to how users perceived WhatsApp as a more private environment (Valeriani and Vaccari, 2018), so they needed to share content in order to interact with their contacts. Table 5 describes the words or phrases most used to encourage others to share.
Table 5: Usage of words or phrases encouraging sharing information.

Words or phrases                    Twitter   WhatsApp
Pass along                          3         13
RT it                               1         0
Share it                            0         14
Go viral                            0         8
Spread it                           0         11
Send it                             0         2
Propagate it                        0         3
Copy and paste it to your friends   0         1
As we observed through our analysis, the structure of WhatsApp messages frequently included expressions asking users to spread information, such as: “PASS ALONG TO THE LARGEST NUMBER OF PEOPLE (...). So, it is better to prevent than to repair!” This strategy was also usually associated with a sense of urgency, framing sharing as the logical reaction of a group confronting fear. Thus, the incentive to share validated disinformation by presupposing prophetic scenarios, encouraging people to act as a needed sacrifice in order to prevent a doomed future. As described in Table 5, several different words or phrases were used on WhatsApp to stimulate users to share messages. The use of this strategy on Twitter was less prevalent and less diverse, for example, “Watch it and RT it” and “ATTENTION, PASS ALONG: (...)”.
In this analysis, we observed that hashtags, emojis, and words were used to frame disinformation through (1) negative sentiments, support and identification with Bolsonaro’s campaign, and, mostly, (2) urgency and encouragement for users to share and help spread messages. While there were differences in how each of these strategies was used (sometimes urgency was conveyed through emojis, sometimes through text), both platforms used them. We also saw that links were mostly used to provide some credibility.
2) Legitimation strategies (intermediary level)
We now proceed to an intermediary level of analysis, focusing on the legitimation strategies used to validate disinformation (Figure 2). The categories were not mutually exclusive; a message could therefore use more than one strategy.
Figure 2: Frequency of legitimation strategies used on Twitter and WhatsApp.
We explore the intermediary level of analysis by once again highlighting the frequency of each category on the platforms.
Authorization, that is, validating the message based on the credibility of people and institutions, was a strategy used more frequently on WhatsApp (33 percent) than on Twitter (11 percent). On Twitter, authorization was mostly used by mentioning someone who would lend credit to a message, by citing an “authority” opinion regarding the electronic ballots, by sharing media stories, or through claims of legitimacy from specific users. For example, one user claimed legitimacy by stating: “you’re questioning the electronic ballot, you don’t understand a thing, SHUT UP BECAUSE I’M A F*CKING SYSTEM ANALYST” (56 RT). In this case, the user invoked their own authority to question the electronic ballot.
On WhatsApp, authorization was mostly linked to sharing election polls from “alternative” poll companies to reinforce the idea that the only way Bolsonaro could be defeated was through fraud. In these cases, polling institutions were the authority to legitimate messages, as in the example below:
*DEMORALIZATION OF IBOPE, DATAFOLHA POLL: BTG Tracking POLL* Just published for the financial market: 👉. *Bolsonaro 77%* 👎 *Haddad 23%* (...). They want to *manipulate the polls and defraud the ballot results!* *YOU CAN AVOID THE FRAUD BY PASSING ALONG*. (Shared 44 times, in 33 groups, by 41 users).
On Twitter, authorization was mostly used in direct connection to the electronic ballots. When users mentioned some “authority” or framed themselves as authorities, they were reinforcing a narrative that framed the electronic ballot as unreliable. Similarly, when users linked to information from media outlets, they were using the authority of those outlets to highlight cases in which electronic ballots had technical problems. On WhatsApp, on the other hand, authorization was mostly used when messages shared polls, to reinforce the narrative that only fraud could prevent Bolsonaro from victory. Although the use of hyperlinks was not a prominent structural strategy, links were used in some messages that relied on authorization, as they appeared as a way to lend authority to content as something reliable.
Moral evaluation was used to qualify and describe objects based on moral values. This category of legitimation was found in more than half of the tweets we analyzed (52 percent), and it was mostly used to discredit the Brazilian Electoral Court, described as unreliable and corrupt; polling institutes, generally associated with leftists; and the electronic ballot in general, described as unreliable. The example below illustrates how a tweet tried to discredit the Brazilian Superior Electoral Court (Tribunal Superior Eleitoral, TSE) by describing the court as corrupt, questioning its morality.
HORROR SHOW AT THE EXPERTISE THIS SATURDAY PERFORMED ON THE BALLOTS THAT GAVE PROBLEMS AND WERE SEIZED Ballot used on the first round presents the same error reported by thousands of people during the public test developed at TRE in São Paulo. TSE authority deceives 😱😡 (3187 RT) 
On WhatsApp, we found that this category was less frequent (26 percent of the messages). It was mostly used to discredit polling institutes and media outlets. In the example below, the message tagged traditional polling institutes (Datafolha and Ibope) as liars and other “alternative” institutes as truthful.
BOLSONARO 60.6 x 39.4 HADDAD https://www.oantagonista.com/brasil/bolsonaro-606-x-394-haddad-crusoe-empiricus/ SÃO PAULO: BOLSONARO 68,4% x 31,6% https://www.infomoney.com.br/mercados/politica/noticia/7732004/bolsonaro-tem-684-do-votos-validos-contra-316-de-haddad-em-sao-paulo PEOPLE: LET’S SPREAD THESE TWO RECENT POLLS, from October 25 and 26. It’s very important. Don’t let the liars Datafolha and Ibope set the tone for this election, because all they want is to paint a scenario of proximity and allow the PT fraud. (Shared 14 times, in 13 groups, by 12 users).
At large, moral evaluation was used similarly on both platforms. The main difference was frequency: it was almost twice as frequent on Twitter as on WhatsApp. On Twitter, moral evaluation targeted a number of social actors, including the Brazilian Superior Electoral Court (TSE) and polling institutes, often with emojis (as in the earlier example) and hashtags (for example, #VoteOnPaperBallot was very frequent, as messages associated electronic ballot fraud specifically with the TSE) that reinforced the “scandalous” behavior of a specific institution. On WhatsApp, messages mostly focused on polling institutes and media outlets, inventing conspiracies in which these two social actors colluded with leftists to influence the elections. On WhatsApp, hashtags were not used for moral evaluation, but emojis were applied to portray negative sentiments and to demonstrate identification. In general, moral evaluation was often spread with emojis that showed negative emotions (and thus disapproval), as well as identification hashtags and emojis, to convey that users who shared information were the “good guys” rather than the corrupt “others”. This type of moral evaluation reinforces affective polarization, typical of manipulative discourse (van Dijk, 2006).
Rationalization was a very prominent strategy on Twitter (77 percent). It was also used in almost half of the WhatsApp messages (46 percent). On Twitter, rationalization frequently took the form of argumentation and explanations of why the electronic ballot was untrustworthy and why paper ballots should be used instead. In some of these cases, users mentioned several “possible” ways to tamper with electronic ballots to prove their point. In other messages, the false argument used to frame the electronic ballot as unreliable was that other countries do not use electronic ballots.
On WhatsApp, some messages asked users to photograph their voter receipts, claiming this could prevent fraud by proving their votes for Bolsonaro. This proposal, however, was illegal. Rationalization was also used to share other false or imprecise information about voting procedures, such as the period of time during which users could vote, or the claim that if individuals did not vote for governor (gubernatorial elections occurred on the same day as the presidential one) their votes would not count. False information was used to reinforce a narrative of fraud.
Here are a few examples:
An electronic ballot can be programmed however you want. Including to show the picture of one candidate while the vote is registered for the other.  (109 RT)
The patriotic pro-Bolsonaro groups are asking everyone to take the (individual) result ticket, put the number 17 clearly visible in his [Bolsonaro’s] field, photograph it and send it to Bolsonaro’s representation on social media. That way, they will have the receipt by session, location, etc. If more than 60 million valid votes are proven and do not appear in the result, the fraud is proven, and with attorneys they contest the election and it will have to start over on paper ballots. This is a good tactic. Pass this idea along to as many people as possible. 50+1 already proves the fraud if it happens.  (shared 16 times, on 14 groups by 16 users).
In general, rationalization was used differently on each platform. Tweets mostly focused on technical issues that could supposedly be used to manipulate and hack electronic ballots; the ballots were thus framed as unreliable because of the possibility of fraud. On Twitter, hashtags (such as #VoteOnPaperBallot) were used to reinforce the content of tweets framing the electronic ballot as unreliable. Similarly, emojis were used to portray urgency (📣) and negative emotions (🤔) in order to reinforce content criticizing electronic ballots. There were claims for alternative processes, with suggestions of technical issues regarding the electronic ballot, and links were used to reinforce these technical claims, as users shared stories (usually from hyperpartisan sources) mentioning problems with electronic ballots. This was likely related to the nature of Twitter: conversations on the platform are more “public” and users are more likely to be corrected (Bruns and Moe, 2014; Valeriani and Vaccari, 2018), so disinformation was spread through distorted messages that tried to portray false information as a “technical” argument.
On WhatsApp, rationalization was used to spread false or imprecise information about electoral procedures, asking individuals to photograph their voter receipts in order to prove their vote for Bolsonaro. In these cases, rationalization strategies were aligned with the bigger picture of the fraud narrative. Emojis, hashtags and links were not used in messages relying on rationalization. Urgent words/phrases and incentives to share were present in these messages, but since they appeared in most WhatsApp messages, this is not a particularity of rationalization.
In general, messages on both platforms used rationalization to create a sense of distrust in the electoral process, as this strategy was used to mention technical issues regarding the electronic ballot (mostly on Twitter) and to mislead users with false or imprecise facts about the electoral process (mostly on WhatsApp).
Mythopoesis, legitimation through stories or narratives, was rarely used on Twitter (11 percent). On the other hand, it was used in more than half of the WhatsApp messages (52 percent). On Twitter, mythopoesis connected messages with other parallel narratives or adopted an anecdotal tone when criticizing the electronic ballot. In general, the messages did not rely on elaborate stories. See the following example, in which an anecdote was used to frame the electronic ballot as unreliable:
Have you ever thought: the vote in the electronic ballot is like making a deposit in a bank account and the cashier is not committed to giving a receipt. 
Although mythopoesis was not frequent on Twitter, some messages used structural strategies to reinforce content. Emojis (such as 📣), urgent words (“attention”) and incentives to share (“pass along”) were used to portray urgency and mobilize users to share stories. Furthermore, one message used #VoteOnPaperBallot to reinforce its content.
On WhatsApp, the strategy was based on telling stories demonizing the Workers’ Party or its candidate Fernando Haddad to justify the narrative of electronic ballot fraud, once again appealing to specific emotions, particularly fear (Reyes, 2011). These stories also claimed that media outlets, polling institutes and the Brazilian Superior Electoral Court were involved in a conspiracy to alter the elections. The framing of “good” versus “evil” was a typical legitimation strategy; a framing of “us” versus “them” is often used in manipulative discourse (van Dijk, 2006). Many of the stories on WhatsApp included fabricated conspiracy theories, as in the example below (the most shared message on the platform):
PASS ALONG TO THE LARGEST NUMBER OF PEOPLE Everything Joice Halssemann said so far has happened. So it’s better to prevent than to repair! The PT has one last card up its sleeve. With 3 days to go, there will be a false attack on Haddad and Manuela. They will let themselves be really beaten, to the point of bruising. The bruises are to ensure that the hypothesis that it is a lie is quickly rejected. The alleged perpetrator will be white with blue eyes and will wear a Bolsonaro shirt with a swastika and a Hitler picture. (...) An #elenão manifestation will take place on Saturday; it is going to have the same number [of people] as the first one, but the media will report it as the biggest manifestation in history. (...) On Sunday the ballots will be manipulated: 51% to 49% in favor of Haddad, and at the same time, on all channels, political analysts will say the victory is normal (...). Spread this as much as you can while there’s time. If it doesn’t happen, PT was afraid the leak of the plan could cause bigger indignation than they can bear, and attract the army’s fury. If PT has the courage to put this into practice, the people will already be aware, and may go to war. (...)  (shared 99 times, on 51 groups by 81 users)
As usual on WhatsApp, all but one message that relied on mythopoesis also encouraged users to share and half of them included urgent phrases and words. Emojis were used to portray some negative emotions and reinforce the narrative, to create a sense of urgency and to create a frame of identification. Therefore, mythopoesis messages used emojis in all three functions that we identified.
In general, legitimation strategies were used differently on each platform, likely related to the affordances of the platforms and how users perceive them (Gu, et al., 2017; Valeriani and Vaccari, 2018). Tweets mostly used rationalization to mention technical issues that could be used to alter the elections and to create a sense of distrust of electronic ballots. Moral evaluation was also frequently used, in this case to frame the electronic ballot itself, the Brazilian Superior Electoral Court and polling institutes as untrustworthy. Mythopoesis and authorization were rarely used on Twitter, present in only six messages each. When used, mythopoesis connected a given message with other narratives or used anecdotes to justify disinformation, while authorization relied on some kind of authority to justify distrust in the electronic ballot. In general, tweets tried to use rational or technical ideas to justify a notion of unreliability in the electronic ballot. Legitimation strategies were also related to structural strategies. Moral evaluation and rationalization frequently used hashtags to reinforce their content (#VoteOnPaperBallot, for example, was the most frequent). Similarly, messages relying on moral evaluation and/or rationalization also used emojis, mostly to reinforce their content by framing some negative emotion (usually targeting the Brazilian Superior Electoral Court). Although authorization was rarely used on Twitter, many of these messages included links to portray authority. Mythopoesis was also rarely used, and some of these tweets included statement hashtags and emojis to frame negative sentiments and urgency.
On WhatsApp, mythopoesis was the most frequent strategy, used to spread stories affirming that leftists, media outlets, polling institutes and even the Brazilian Superior Electoral Court were involved in a conspiracy to alter the elections. Many WhatsApp messages also relied on rationalization to share false or imprecise information regarding the electoral system. Moral evaluation was used similarly to Twitter, negatively framing social actors involved in the elections. Authorization was mostly used in messages that included biased polls to prove that the only way Bolsonaro could lose the election was through fraud. In general, on WhatsApp, rationalization, moral evaluation and authorization were used to support the conspiracy narrative presented in messages that relied on mythopoesis.
On WhatsApp, messages using mythopoesis to legitimize content also used emojis in the three functions that we identified (negative emotion, identification and urgency), along with incentives to share. In fact, incentives to share and urgent language were prevalent across all legitimation strategies, as they were very prominent structural strategies on WhatsApp. Emojis were also frequently used along with moral evaluation, mainly to portray negative sentiment and urgency.
We also looked at whether messages used only one or multiple legitimation strategies (Tables 6 and 7).
Table 6: Messages with one legitimation strategy.
Legitimation strategy     Twitter   WhatsApp
Authorization (A)         1         11
Moral evaluation (ME)     8         0
Rationalization (R)       16        9
Mythopoesis (M)           1         13
Table 7: Messages with multiple legitimation strategies.
Legitimation strategies   Twitter   WhatsApp
A/ME                      0         1
A/R                       3         5
A/M                       1         0
ME/R                      18        0
ME/M                      1         3
R/M                       3         2
A/ME/R                    1         0
A/ME/M                    0         1
ME/R/M                    0         9
On Twitter, the predominance of rationalization and moral evaluation was reflected in our results. Messages using both strategies were the most frequent in our dataset (18 tweets, 34 percent). They frequently mentioned technical issues regarding the electronic ballot (R) and framed the Brazilian Superior Electoral Court as untrustworthy (ME). There was also a high frequency of messages using only rationalization (16 tweets, 30 percent) or only moral evaluation (8 tweets, 15 percent), which applied these strategies in similar ways.
On WhatsApp, messages relying only on mythopoesis (13 messages, 24 percent) were the most frequent. These messages mostly shared narratives that leftists, media outlets, polling institutes and electoral organizations had organized to alter the election. Messages using only authorization (11 messages, 20 percent) were also frequent, mostly reproducing biased or false poll results to claim that only fraud could prevent Bolsonaro from winning the elections. Messages using only rationalization (9 messages, 17 percent) usually suggested ways to prevent fraud, such as photographing voter receipts to prove votes for Bolsonaro. We also identified messages combining moral evaluation, rationalization and mythopoesis (9 messages, 17 percent). These nine messages were similar: they created a narrative that the electronic ballot was programmed to change votes at certain hours of election day (M), justified it by claiming that leftists were “capable of everything” to alter the elections (ME), and concluded that it was therefore necessary to vote before 4 PM (R).
3) Social practices (macro-level)
At this level, we discuss how structural and legitimation strategies were used to affect social practices. As we identified, disinformation about the electronic ballot relied on several strategies to spread and legitimize it. Furthermore, these strategies were used differently on each platform that we studied — Twitter and WhatsApp.
Structural strategies were frequently used to enhance a social practice of mobilization. Urgent words/phrases and incentives to share were the most common, as they created a sense of urgency and relied on “call-to-action” appeals to mobilize users to share messages. Although these strategies were more frequent on WhatsApp, they show that disinformation campaigns work to engage users in spreading false content, fueling the visibility of disinformation based on platform designs (Gu, et al., 2017).
We also identified structural strategies used to create emotional framing and reinforce negative sentiments towards leftists and other social actors targeted in messages. This was mostly done through emojis and hashtags, which were used to mobilize users through sentiment. This emotional and negative context seemed to be very important on both Twitter and WhatsApp, even though it was created differently: with emojis on WhatsApp and hashtags on Twitter. This emotional approach helped to legitimize discourse (Reyes, 2011), as well as foster a division between “us” and “them” (van Dijk, 2006). These two strategies seemed to be key to understanding this specific disinformation campaign.
On Twitter, most of the content was legitimized by rationalization, with disinformation often framed as a “rational” explanation. On WhatsApp, the majority of the content was legitimized by authorities and conspiracy theories. While Twitter is a more public tool, where conspiracies are easier to discredit, WhatsApp is more private. These differences affect how users use each platform (Valeriani and Vaccari, 2018). Thus, WhatsApp seemed to be a more suitable place than Twitter to spread less truthful stories. The differences that we found, therefore, may be related to the different affordances and social usages of each platform.
Finally, while each platform relied on different strategies to share and legitimize disinformation, both enhanced a social practice of distrust in the electoral system. They were used to justify that any scenario in which Bolsonaro was not elected could only result from fraud, reinforcing polarization (Ong and Cabañes, 2018). Furthermore, these messages created a frame of distrust towards the entire electoral system. Rationalization tweets, for example, repeatedly mentioned that electronic ballots were unreliable due to technical issues. Similarly, messages using moral evaluation on both platforms criticized the Brazilian Superior Electoral Court, media outlets and polling institutes. Messages using authorization as a legitimation strategy, mostly on WhatsApp, included biased polls to justify a narrative that Bolsonaro could only lose in case of fraud. Finally, mythopoesis was used to spread stories and conspiracy theories claiming that social actors and electoral organizations were aligned with the Workers’ Party to manipulate electronic ballots. Altogether, these different strategies were consistently used to reinforce a frame of distrust towards the elections.
This type of disinformation campaign is highly dangerous for democratic societies, as it directly affects trust in democratic institutions (such as those involved in the elections) and in fundamental social actors (mainstream outlets, for example) (Marwick and Lewis, 2017). Disinformation campaigns do so by sharing diverse pieces of (dis)information that, in the end, are all used to justify one common narrative. In this case, the electoral system was portrayed as untrustworthy because of an imaginary plot by leftists. Although disinformation was shared by many users in different contexts, they all reproduced the same discourse. In particular, in the disinformation campaign that we analyzed, these messages were used as political weapons to reinforce a narrative favorable to Bolsonaro. These connections created an ecosystem of disinformation, in which users shared and reinforced disinformation (Benkler, et al., 2018). As we also identified, this ecosystem spanned multiple platforms, as the campaign spread both on Twitter and WhatsApp. As a result, the sense of distrust remained a social practice during the elections, and even after them, diminishing the democratic sphere of society.
In this study, we aimed to analyze the discursive strategies used in disinformation on Twitter and WhatsApp. We used a critical discourse analysis perspective (Fairclough, 2001) to look at messages from both platforms containing disinformation related to electronic ballots during the last week before the runoff of the 2018 Brazilian elections. The two main contributions of our study are (1) the use of a discourse analysis approach to understand disinformation campaigns; and, (2) the identification of strategies used to spread disinformation.
Regarding the structural level of messages, our main findings suggest that (1) emojis and hashtags were mostly used to create emotional framing and reinforce content; and, (2) urgent words/phrases and incentives to share, especially prevalent on WhatsApp, were used to create a frame of mobilization.
Regarding legitimation strategies, we identified that they were used differently on each platform: (1) Twitter relied mostly on rationalization and moral evaluation to, respectively, mention technical issues regarding the electronic ballot and criticize social actors involved in the election process; and, (2) on WhatsApp, mythopoesis was the most frequent strategy, sharing stories that alleged a conspiracy to affect the election, while authorization was used to reproduce biased polls and rationalization to suggest ways to prevent fraud. Finally, the combination of these results suggests that WhatsApp was used to spread more false content than Twitter, possibly because of the more public nature of Twitter.
There are some limitations to this study. We examined only one case and for a limited period. Future studies might compare the results from this study with other disinformation campaigns. Furthermore, a cross-platform study including other platforms, such as Facebook and YouTube, would contribute to this discussion.
About the authors
Raquel Recuero (Ph.D. Universidade Federal do Rio Grande do Sul) is a professor at both Universidade Federal de Pelotas (UFPEL) and Universidade Federal do Rio Grande do Sul and a researcher for CNPq (National Research Council) and the MIDIARS (Media, Discourse and Social Networks) research lab coordinator. Her research interests include discourse, political conversations, gender, violence, social networks and social media.
E-mail: raquelrecuero [at] gmail [dot] com
Felipe Bonow Soares (Ph.D. Universidade Federal do Rio Grande do Sul) is a researcher at the MIDIARS research lab. His research interests include social network analysis, public sphere and disinformation.
E-mail: felipebsoares [at] hotmail [dot] com
Otávio Vinhas is a Ph.D. student in information and communication studies at University College Dublin. He is a researcher at the MIDIARS research lab. His research interests include social networks, misinformation and sociocybernetics.
E-mail: otavio [dot] vinhas [at] gmail [dot] com
This project is partially funded by FAPERGS (grant number 19/2551-0000688-8), CNPq (grant number 301433/2019-4) and CAPES/PRINT (funding process number 88887.363265/2019-00).
2. https://noticias.uol.com.br/politica/ultimas-noticias/2019/09/19/fake-news-pro-bolsonaro-whatsapp-eleicoes-robos-disparo-em-massa.htm and https://www.theguardian.com/world/2019/oct/30/whatsapp-fake-news-brazil-election-favoured-jair-bolsonaro-analysis-suggests.
8. Disinformation was checked using Brazilian fact-checking outlets, such as Publica (https://apublica.org/tag/fact-checking/), Aos Fatos (https://www.aosfatos.org) and others.
9. Translated from Portuguese: “(...) Técnico do TSE disfarça 😱😡.”
10. Translated from Portuguese: “(...) Nenhuma urna teve problema ao digitar 13. 🤔 incrível né?”
11. Translated from Portuguese: “📣 ATENÇÃO (...)”.
12. Translated from Portuguese: “(...) Bora trabalhar e que Deus abençoe esta nação!!!! 🙏.”
13. Translated from Portuguese: “(...) nossa batalha é contra o Mau. 🙏 (...).”
14. Translated from Portuguese: *URGENTE* O golpe da esquerda é o seguinte: (...). “Precisamos divulgar isto em massa, numa velocidade recorde, para minar o efeito, antes mesmo de ocorrer!”
15. Translated from Portuguese: “ATENÇÃO, REPASSEM”.
16. Translated from Portuguese: “URGENTÍSSIMO!”
17. Translated from Portuguese: “REPASSEM PARA O MAIOR NUMERO DE PESSOAS (...). Então é melhor prevenir do que Remediar!”
18. Translated from Portuguese: “Vejam e dêem RT”.
19. Translated from Portuguese: “ATENÇÃO, REPASSEM”.
20. Translated from Portuguese: “Ain mas você está questionando a urna eletrônica, você não entende nada e CALA A BOCA PORQUE EU SOU ANALISTA DE SISTEMAS PORRA”.
21. Translated from Portuguese: “DESMORALIZAÇÃO DA PESQUISA IBOPE, DATAFOLHA: PESQUISA Tracking BTG* Acabou de Sair para o Mercado Financeiro: 👉 *Bolsonaro 77%* 👎 *Haddad 23%* (...). Querem *Manipular as Pesquisas para Fraudar os Resultados das Urnas!* *VOCÊ PODE EVITAR A FRAUDE, REPASSANDO*.”
22. Translated from Portuguese: “SHOW DE HORRORES NA PERÍCIA REALIZADA NESTE SÁBADO NAS URNAS QUE DERAM PROBLEMAS E FORAM APREENDIDAS Urna utilizada no 1º turno apresenta o mesmo erro denunciado por milhares de pessoas, durante teste público ocorrido no TRE de São Paulo. Técnico do TSE disfarça.”
23. Translated from Portuguese: “BOLSONARO 60,6 x 39,4 HADDAD https://www.oantagonista.com/brasil/bolsonaro-606-x-394-haddad-crusoe-empiricus/ SÃO PAULO: BOLSONARO 68,4% x 31,6% https://www.infomoney.com.br/mercados/politica/noticia/7732004/bolsonaro-tem-684-do-votos-validos-contra-316-de-haddad-em-sao-paulo PESSOAL: VAMOS ESPALHAR ESTAS DUAS PESQUISAS RECENTES, de 25 e 26 de outubro. É importantíssimo. Não deixem os mentirosos Datafolha e Ibope darem o tom desta eleição, pois tudo que querem é pintar um quadro de proximidade e permitir a fraude do PT.”
24. Translated from Portuguese: “Uma urna eletrônica você pode programá-la como quiser. Inclusive para aparecer a foto de um candidato e o voto ser computado para o outro.”
25. Translated from Portuguese: “Os grupos Patrióticos Pro-Bolsonaro estão pedindo pra todo pessoal pegar o Bilhetinho do Resultado (individual) colocar o n° 17 bem visível no campo dele, fotografá-lo e enviá-lo para à Representação Bolsonaro pelas Redes Sociais. Assim, eles terão o comprovante por sessão, local, etc. Se comprovar mais de 60 milhões de votos válidos e não aparecerem no resultado, comprova-se à fraude e, com os Advs eles impugnação à Eleição e terão que re-faze-la em células de papel. Essa é uma boa tática. Repassar para o máximo de pessoas essa idéia. 50+1 já comprova à fraude se houver.”
26. Translated from Portuguese: “Já pararam pra pensar: Que o voto na urna é como fazer um depósito na conta e o caixa não ter o comprometimento de dar o recibo.”
27. Translated from Portuguese: “REPASSEM PARA O MAIOR NUMERO DE PESSOAS Tudo Que a Joice HALSSEMANN falou até agora aconteceu. Então é melhor prevenir do que Remediar! O PT tem uma última carta na manga. Faltando 3 dias para eleições terá um falso ataque contra o Haddad e Manuela. Eles se deixarão bater de verdade até causar hematomas. Esses hematomas é para garantir que a hipótese de que seja mentira seja logo rechaçada. O suposto agressor será Branco dos olhos azuis e usará uma camisa do Bolsonaro com uma suástica e foto de Hitler. (...) Uma manifestação #elenão acontecerá no sábado, essa manifestação vai ter o mesmo número da primeira, mas a mídia divulgará como a maior manifestação da história. (...) No domingo as urnas serão manipuladas. 51% a 49% para o Haddad, e na mesma hora, e em todos os canais, analistas políticos vão dizer que é normal (...). Divulgue o máximo que puder, enquanto ainda é tempo. Se não acontecer, o PT ficou com medo do vazamento do plano causar uma indignação maior do que eles possam suportar, e atrair a fúria do exército. Se o PT tiver coragem e colocar em prática, o povo já vai estar sabendo, e poderá partir para a guerra. (...)”.
M.T. Bastos and D. Mercea, 2017. “The Brexit botnet and user-generated hyperpartisan news,” Social Science Computer Review, volume 37, number 1, pp. 38–54.
doi: https://doi.org/10.1177/0894439317734157, accessed 21 December 2020.
M. Bastos, D. Mercea, and A. Baronchelli, 2018. “The geographic embedding of online echo chambers: Evidence from the Brexit campaign,” PLoS ONE, volume 13, number 11 (2 November), e0206841.
doi: https://doi.org/10.1371/journal.pone.0206841, accessed 21 December 2020.
Y. Benkler, R. Faris, and H. Roberts, 2018. Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York: Oxford University Press.
doi: https://doi.org/10.1093/oso/9780190923624.001.0001, accessed 21 December 2020.
W.L. Bennett and A. Segerberg, 2012. “The logic of connective action: Digital media and the personalization of contentious politics,” Information, Communication & Society, volume 15, number 5, pp. 739–768.
doi: https://doi.org/10.1080/1369118X.2012.670661, accessed 21 December 2020.
L. Bode, A. Hanna, J. Yang, and D.V. Shah, 2014. “Candidate networks, citizen clusters, and political expression: Strategic hashtag use in the 2010 midterms,” Annals of the American Academy of Political and Social Science, volume 659, pp. 149–165.
doi: https://doi.org/10.1177/0002716214563923, accessed 21 December 2020.
d. boyd, S. Golder, and G. Lotan, 2010. “Tweet, tweet, retweet: Conversational aspects of retweeting on Twitter,” HICSS ’10: Proceedings of the 2010 43rd Hawaii International Conference on System Sciences, pp. 1–10.
doi: https://doi.org/10.1109/HICSS.2010.412, accessed 21 December 2020.
S. Bradshaw and P.N. Howard, 2018. “Challenging truth and trust: A Global inventory of organized social media manipulation,” Computational Propaganda Project Report, University of Oxford, at https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/07/ct2018.pdf, accessed 21 December 2020.
S. Bradshaw and P.N. Howard, 2017. “Troops, trolls and troublemakers: A global inventory of organized social media manipulation,” Computational Propaganda Project, Working Paper, number 2017.12, at https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-Trolls-and-Troublemakers.pdf, accessed 21 December 2020.
A. Bruns and H. Moe, 2014. “Structural layers of communication on Twitter,” In: K. Weller, A. Bruns, J. Burgess, M. Mahrt, and C. Puschmann (editors). Twitter and society. New York: Peter Lang, pp. 15–28.
A. Bruns and J.E. Burgess, 2011. “The use of Twitter hashtags in the formation of ad hoc publics,” Proceedings of the Sixth European Consortium for Political Research (ECPR) General Conference 2011, pp. 1–9.
H. Derakhshan and C. Wardle, 2017. “Information disorder: Definitions,” Understanding and Addressing the Disinformation Ecosystem, Annenberg School for Communication University of Pennsylvania, pp. 5–12, at https://pdfs.semanticscholar.org/9d91/58807cbf03fff609e74ef9e0e61c2e6088d8.pdf, accessed 21 December 2020.
N. Fairclough, 2001. Discurso e mudança social. Brasília: Editora UnB.
D. Fallis, 2015. “What is disinformation?” Library Trends, volume 63, number 3, pp. 401–426.
doi: https://doi.org/10.1353/lib.2015.0014, accessed 21 December 2020.
A. Gruzd and J. Roy, 2014. “Investigating political polarization on Twitter: A Canadian perspective,” Policy & Internet, volume 6, number 1, pp. 28–45.
doi: https://doi.org/10.1002/1944-2866.POI354, accessed 21 December 2020.
L. Gu, V. Kropotov, and F. Yarochkin, 2017. “The fake news machine: How propagandists abuse the Internet and manipulate the public,” at https://documents.trendmicro.com/assets/white_papers/wp-fake-news-machine-how-propagandists-abuse-the-internet.pdf, accessed 25 January 2019.
S. Herring, 2010. “Computer-mediated conversation: Introduction and overview,” Language@Internet, volume 7, at http://www.languageatinternet.org/articles/2010/2801, accessed 21 December 2020.
C. Jack, 2017. “Lexicon of lies: Terms for problematic information,” Data & Society Research Institute, at https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf, accessed 21 December 2020.
C. Machado, B. Kira, G. Hirsch, N. Marchal, B. Kollanyi, P.N. Howard, T. Lederer, and V. Barash, 2018. “News and political information consumption in Brazil: Mapping the first round of the 2018 Brazilian presidential election on Twitter,” Project on Computational Propaganda, Data Memo, 2018.4, at https://comprop.oii.ox.ac.uk/research/posts/news-and-political-information-consumption-in-brazil-mapping-the-2018-brazilian-presidential-election-on-twitter/, accessed 21 December 2020.
A. Marwick and R. Lewis, 2017. “Media manipulation and disinformation online,” Data and Society Research Institute (15 May), at https://datasociety.net/library/media-manipulation-and-disinfo-online/, accessed 21 December 2020.
C. Nemr and W. Gangware, 2019. “Weapons of mass distraction: Foreign state-sponsored disinformation in the digital age,” Park Advisors, at https://static1.squarespace.com/static/5714561a01dbae161fa3cad1/t/5c9cb93724a694b834f23878/1553774904750/PA_WMD_Report_2019.pdf, accessed 21 December 2020.
J.C. Ong and J.V.A. Cabañes, 2018. “Architects of networked disinformation: Behind the scenes of troll accounts and fake news production in the Philippines,” at http://eprints.whiterose.ac.uk/127312/, accessed 21 December 2020.
C.J. Prom, 2017. “Social feed manager: Guide for building social media archives” (7 June), at https://gwu-libraries.github.io/sfm-ui/resources/SFMReportProm2017.pdf, accessed 21 December 2020.
R. Recuero, G. Zago, M.T. Bastos, and R. Araújo, 2015. “Hashtags functions in the protests across Brazil,” SAGE Open (11 May).
doi: https://doi.org/10.1177/2158244015586000, accessed 21 December 2020.
G. Resende, J. Messias, M. Silva, J. Almeida, M. Vasconcelos, and F. Benevenuto, 2018. “A system for monitoring public political groups in WhatsApp,” WebMedia ’18: Proceedings of the 24th Brazilian Symposium on Multimedia and the Web, pp. 387–390.
doi: https://doi.org/10.1145/3243082.3264662, accessed 21 December 2020.
A. Reyes, 2011. “Strategies of legitimization in political discourse: From words to actions,” Discourse & Society, volume 22, number 6, pp. 781–807.
doi: https://doi.org/10.1177/0957926511419927, accessed 21 December 2020.
F.B. Soares, R. Recuero, and G. Zago, 2019. “Asymmetric polarization on Twitter and the 2018 Brazilian presidential elections,” SMSociety ’19: Proceedings of the Tenth International Conference on Social Media and Society, pp. 67–76.
doi: https://doi.org/10.1145/3328529.3328546, accessed 21 December 2020.
F.B. Soares, R. Recuero, and G. Zago, 2018. “Influencers in polarized political networks on Twitter,” SMSociety ’18: Proceedings of the Ninth International Conference on Social Media and Society, pp. 168–177.
doi: https://doi.org/10.1145/3217804.3217909, accessed 21 December 2020.
A. Valeriani and C. Vaccari, 2018. “Political talk on mobile instant messaging services: A comparative analysis of Germany, Italy, and the UK,” Information, Communication & Society, volume 21, number 11, pp. 1,715–1,731.
doi: https://doi.org/10.1080/1369118X.2017.1350730, accessed 21 December 2020.
T.A. van Dijk, 2006. “Discourse and manipulation,” Discourse & Society, volume 17, number 3, pp. 359–383.
doi: https://doi.org/10.1177/0957926506060250, accessed 21 December 2020.
T. Van Leeuwen, 2007. “Legitimation in discourse and communication,” Discourse & Communication, volume 1, number 1, pp. 91–112.
doi: https://doi.org/10.1177/1750481307071986, accessed 21 December 2020.
T. Van Leeuwen and R. Wodak, 1999. “Legitimizing immigration control: A discourse-historical analysis,” Discourse Studies, volume 1, number 1, pp. 83–118.
doi: https://doi.org/10.1177/1461445699001001005, accessed 21 December 2020.
F. Vasconcellos, 2011. “Quem se importa com os debates eleitorais na TV?” Proceedings of IV Encontro da Compolítica, at http://www.compolitica.org/home/wp-content/uploads/2011/03/FabioVasconcellos-1.pdf, accessed 21 December 2020.
I. Weber, V.R.K. Garimella, and A. Teka, 2013. “Political hashtag trends,” In: P. Serdyukov, P. Braslavski, S.O. Kuznetsov, J. Kamps, S. Rüger, E. Agichtein, I. Segalovich, and E. Yilmaz (editors). Advances in information retrieval. Lecture Notes in Computer Science, volume 7814. Berlin: Springer, pp. 857–860.
doi: https://doi.org/10.1007/978-3-642-36973-5_102, accessed 21 December 2020.
Received 29 February 2020; revised 9 September 2020; accepted 21 December 2020.
“Discursive strategies for disinformation on WhatsApp and Twitter during the 2018 Brazilian presidential election” by Raquel Recuero, Felipe Soares, and Otávio Vinhas is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Discursive strategies for disinformation on WhatsApp and Twitter during the 2018 Brazilian presidential election
by Raquel Recuero, Felipe Soares, and Otávio Vinhas.
First Monday, Volume 26, Number 1 - 4 January 2021