SAFE FROM “HARM”: THE GOVERNANCE OF VIOLENCE BY PLATFORMS

Authors

  • Julia Rose DeCook, Loyola University Chicago, United States of America
  • Kelley Cotter, Pennsylvania State University, United States of America
  • Shaheen Kanthawala, University of Alabama, United States of America

DOI:

https://doi.org/10.5210/spir.v2021i0.12160

Keywords:

harm, platform governance, discourse analysis, violence, moderation

Abstract

Platforms have long been under fire for how they create and enforce policies around hate speech, harmful content, and violence. In this study, we examine how three major platforms (Facebook, Twitter, and YouTube) conceptualize and implement policies for moderating “harm,” “violence,” and “danger.” Through a feminist discourse analysis of public-facing policy documents from official blogs and help pages, we find that the platforms often define harm and violence narrowly, in ways that perpetuate ideological hegemony around what violence is, how it manifests, and who it affects. Through this governance, they continue to control normative notions of harm and violence, deny their culpability, effectively manage perceptions of their actions, and direct users’ understanding of what is “harmful” versus what is not. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm.

Published

2021-09-15

How to Cite

DeCook, J. R., Cotter, K., & Kanthawala, S. (2021). SAFE FROM “HARM”: THE GOVERNANCE OF VIOLENCE BY PLATFORMS. AoIR Selected Papers of Internet Research, 2021. https://doi.org/10.5210/spir.v2021i0.12160

Section

Papers D