Computational Propaganda and Social Media: Global Perspectives

Authors

  • Jeff Hemsley, Syracuse University
  • Gillian Bolsover, Oxford University
  • Samantha Bradshaw, Oxford University
  • Mark Owen Jones, University of Exeter
  • Alex Hogan, ETIC Lab
  • Saiph Savage, ETIC Lab

Keywords

Computational Propaganda, Social Media, Political Bots, Digital Public Life

Abstract

The Internet was initially seen as a democratising technology that would redistribute power to networked individuals, break the control of information gatekeepers and thereby diversify available information, and foster an online public sphere in which networked individuals could discuss issues and reach conclusions that would contribute to the political process (Rheingold, 1993; Shapiro, 2000). However, with the rise of social media, control of online distribution channels has been concentrated in a small number of hands, with new and less accountable platforms replacing traditional media gatekeepers (Lovink, 2011). As online spaces have become domesticated, their participatory potential has been undermined by colonisation by the market, censorship by organisations, states and industries, and appropriation by political and cultural elites (Cammaerts, 2008).

Social media has become an increasingly important source of news for Internet users. In the US and UK, direct entry to a news provider's website remains the most common way of accessing online news, but social media accounts for a substantial share, with 35% of people in the US and 25% in the UK accessing news this way; in Hungary, Greece and Brazil, social media is the most common route to online news, used by more than 50% of people (Newman, Fletcher, Levy, & Nielsen, 2016). Beyond supplying news, users' posts are seen as important predictors of public sentiment in elections and other political issues (Gayo-Avello, 2012), and social information about one's online connections' political opinions and intentions has been shown to influence offline voting behaviour (Bond et al., 2012).

This situation has created a structure ripe for exploitation. Rather than becoming a place for the empowerment of networked individuals, rational debate and diverse information, recent events point to the acceleration of online echo chambers and to concerted efforts to distribute misleading or false information or otherwise manipulate the online information environment for political purposes. These issues came to a head in the 2016 U.S. presidential election, when automated accounts contributed between 20 and 25 percent of Twitter traffic about the election in the days leading up to the vote; there was evidence of much greater automation among pro-Trump than pro-Clinton accounts, with highly automated pro-Trump activity outnumbering pro-Clinton activity five to one (Howard, Woolley, & Kollanyi, 2016). By producing large numbers of tweets through automation, these accounts, some of which are designed to mimic regular users, flood the public opinion environment, game the mechanisms by which platforms identify popular or trending content, and spread particular ideas with the force of automated technology.

There have, similarly, been recent concerns about the spread of misinformation (dubbed "fake news") online. It was widely reported that fake news stories generated more engagement on Facebook than stories from major news outlets during the U.S. election (Silverman, 2016). Prominent online platforms such as Facebook, Twitter and Google have announced measures to tackle false information, automation and online harassment (Solon & Wong, 2016), and in January 2017 the UK government announced an inquiry into fake news distributed on social media (UK Parliament, 2017).
These issues have recently sprung into the headlines, but there is still a great deal of misinformation about this misinformation. Furthermore, the vast majority of discussion has focused on U.S. (and U.K.) politics, ignoring how these issues might play out in different political or media systems. Our proposed panel draws together four papers that address computational propaganda on social media from a global perspective, taking both citizen-centred and state-centred approaches. The first paper presents a study of the Chinese state's social media propaganda strategy, showing, contrary to established wisdom, that this strategy focuses on distraction and positive propaganda rather than on attacking critics. The second paper turns attention to the Middle East, documenting how anti-Shia and anti-Iranian hate bots have flooded conversations about political issues on Twitter in Gulf states, jeopardising free speech and drowning out legitimate debate. The third paper presents a grounded study of alt-right communities on 4chan in the lead-up to the U.S. presidential election, offering a model of how communication in these social media communities can lead to collective action and showing how active participants have since moved into other political spaces, such as the French election. The fourth paper zooms out to provide a global picture of social media manipulation by state actors, comparing the scale and extent of this practice in the 25 countries with the highest military expenditures. Together, these four papers provide a global overview of the burgeoning practice of computational propaganda: techniques for influencing human action through the manipulation of emotions and representations using automated, technological or online means.

Published

2017-10-31

How to Cite

Hemsley, J., Bolsover, G., Bradshaw, S., Jones, M. O., Hogan, A., & Savage, S. (2017). Computational Propaganda and Social Media: Global Perspectives. AoIR Selected Papers of Internet Research. Retrieved from https://spir.aoir.org/ojs/index.php/spir/article/view/10032

Section

Panels