Caught in a feedback loop? Algorithmic personalization and digital traces

Wiebke Loosen, Marco T. Bastos, Cornelius Puschmann, Uwe Hasebrink, Sascha Hölig, Lisa Merten, Jan-Hinrik Schmidt, Katharina E. Kinder-Kurlanda, Katrin Weller

Abstract


Algorithmically calculated decisions about relevance and news play an increasingly important role in how we perceive the world. This panel introduces new theoretical and methodological approaches to algorithmic public spheres and the digital trace data that enables them. It focuses on shifts in editorial decision making, user choices in algorithmically driven environments, and the epistemological implications of researching these topics, and it suggests potential remedies for the normative problems of a contemporary, computer-generated view of the world.

The terms “algorithm”, “big data”, and “digital traces” are increasingly used as convenient blanket labels for a range of developments that reshape our understanding of fundamental concepts such as “public”, “relevance”, and “news”. Algorithms operating on large amounts of user- and system-generated data construct spheres of public communication, for example by identifying and connecting users with compatible attributes, interests, and activity patterns (e.g. Beam, 2014), or by filtering content based on algorithmically constructed indicators of relevance (e.g. Eslami et al., 2015). It is therefore increasingly important to investigate the practices, mechanisms, power structures, and dynamics of such algorithmic public spheres. Approaches that combine the study of the construction and inscribed mechanisms of algorithms with a perspective on their societal consequences include “algorithmic accountability” (Diakopoulos, 2014), “algorithmic ideology” (Mager, 2012), and “algorithmic harms” (Tufekci, 2015).

If, as Ananny (2015) argues, algorithms possess the ability to convene people by inferring associations and the power to suggest probable actions, this necessitates reformulating questions at the heart of research on journalism and editorial decision making: How do algorithms define relevance? What criteria underlie their selection mechanisms, and how “objective” are these criteria? Novel methodological challenges follow: How can these mechanisms be studied when we have no direct access to the algorithms involved, but can only infer their workings from the digital traces they make visible or accessible? And, more generally, how should digital traces be interpreted at scale?

*References*

Ananny, M. (2015). Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness. _Science, Technology & Human Values_, 1–25. doi: 10.1177/0162243915606523

Beam, M.A. (2014). Automating the news. How personalized news recommender system design choices impact news reception. _Communication Research_, 41(8), 1019–1041. doi: 10.1177/0093650213497979

Diakopoulos, N. (2014). Algorithmic accountability reporting: On the investigation of black boxes. Tow Center for Digital Journalism. Retrieved from http://towcenter.org/wp-content/uploads/2014/02/78524_Tow-Center-Report-WEB-1.pdf

Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., Hamilton, K., & Author 1. (2015). “I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in the news feed. In _Proceedings of the 33rd Annual SIGCHI Conference on Human Factors in Computing Systems_ (pp. 153–162). ACM.

Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. _Information, Communication & Society_, 15(5), 769–787. doi: 10.1080/1369118X.2012.676056

Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. _Colorado Technology Law Journal_, 13(2), 203–217.

Keywords


personalization, glass-boxing, algorithm audit, big data, audiences
