Online content moderation and the Dark Web: Policy responses to radicalizing hate speech and malicious content on the Darknet

Authors

Eric Jardine, Virginia Tech

DOI:

https://doi.org/10.5210/fm.v24i12.10266

Keywords:

Dark Web, Darknet, Radicalization, Content Moderation

Abstract

De-listing, de-platforming, and account bans are just some of the increasingly common steps that major Internet companies take to moderate their online content environments. Yet these steps are not without unintended effects. This paper proposes a surface-to-Dark Web content cycle. In this process, malicious content is initially posted on the surface Web and is then moderated by platforms. Moderated content does not necessarily disappear when major Internet platforms crack down; it simply shifts to the Dark Web. From the Dark Web, malicious informational content can then percolate back to the surface Web through three pathways. The implication of this cycle is that managing the online information environment requires careful attention to the whole system, not just the content hosted on surface Web platforms. Both government and private-sector actors can more effectively manage the surface-to-Dark Web content cycle through discrete practices and policies implemented at each stage of the wider process.

Author Biography

Eric Jardine, Virginia Tech

Assistant Professor, Political Science, Virginia Tech, and Senior Fellow, Centre for International Governance Innovation (CIGI)

Published

2019-12-02

How to Cite

Jardine, E. (2019). Online content moderation and the Dark Web: Policy responses to radicalizing hate speech and malicious content on the Darknet. First Monday, 24(12). https://doi.org/10.5210/fm.v24i12.10266