First Monday

DIY videos on YouTube: Identity and possibility in the age of algorithms by Christine T. Wolf



Abstract
This paper analyzes interviews with individuals discussing their experiences of searching for and watching DIY videos on YouTube. By exploring the entanglement of individuals’ search practices and the algorithmic underpinnings of the platform, this paper examines how experiences on Web 2.0 platforms can work to narrow, rather than widen, information worlds. Contributing to ongoing conversations in critical algorithm studies, this paper illustrates how even mundane practices like watching home repair videos on YouTube can play a role in identity-making and the shaping of modern subjectivities.

Contents

Introduction
Making worlds small — The rise of algorithmic spaces
Methods
The case of DIY home repair videos on YouTube
Recommendations on YouTube
Big data, small worlds

 


 

Introduction

The user-generated content characteristic of Web 2.0 platforms has, from its inception, held the promise of transforming the nature, form, and impact of information and knowledge sharing among peers. In the words of Davis (2005), “Web 2.0 is an attitude not a technology,” focused on “opening” processes of social participation. Allowing users to generate and distribute their own content can support informal and self-directed learning (e.g., Bower, et al., 2010; McLoughlin and Lee, 2010; Redecker, et al., 2009), working to erode historical divisions between “expert” and “lay” knowledge by allowing local knowledge, such as experience and know-how, to serve as alternative bases of authority.

One domain where this can be clearly seen is in do-it-yourself (DIY) culture, an arena in which YouTube’s impact has been foundational (Gauntlett, 2011). DIY projects may span a broad range of topics and include things like home life (such as home repair, decoration, cooking, and gardening), crafting (such as knitting, sewing, and scrapbooking), personal fashion and style (such as jewelry, make-up, and hair techniques), making and tinkering with computers, and so on. The common thread is that individuals “do-it-yourself,” meaning amateur, untrained individuals learn how to do specialized, expert tasks. Although DIY endeavors today are not explicitly linked to the anti-capitalist, punk heritage from which the term originates (e.g., Dale, 2009), the potential of DIY cultures to destabilize traditional knowledge systems is undeniable. Adding in Web 2.0 platforms — infused with discourses of increased participation and empowerment — the possibility of DIY’s disruptive potential to “scale up” becomes compelling.

Emergent technologies have often generated “hype” (Silver, 2008), with some even able to capture certain imaginaries about progress and innovation, evoking charismatic or even religious properties (Ames, 2015; Ames, et al., 2015). Exploring the ways in which Web 2.0’s “hype” has played out, many scholars have critically pointed to the broader economic and business interests at play, noting that users (and, more importantly, their metadata) are the real products from which platforms profit (van Dijck and Nieborg, 2009; Scholz, 2008). Despite the harsh realities of market incentives, individuals experience Web 2.0 platforms as sites of social media, implicating not only practices of information retrieval, but also entertainment (e.g., Haridakis and Hanson, 2009), identity-seeking (e.g., Wesch, 2010), and affiliation-building (e.g., Rotman and Preece, 2010). Thus, these platforms offer rich sites to explore the entanglement of the personal, social, and economic realms of everyday life.

This paper focuses on the experience of watching videos on YouTube. Practices of watching media content can be instructional and educational, but also transformative in shaping an individual’s perceptions of what is or might be possible. I focus on one particular type of DIY activity, home repair. I conducted semi-structured interviews with individuals who engaged in DIY home repair activities to broadly understand their use of information in these activities. The prominence of YouTube as a central source of information emerged during data collection; participants were not purposively recruited for their YouTube use. Through interview accounts, we see how watching videos is a process of information seeking that involves knowledge acquisition, but also involves personal assessments of ability, risk, and self-confidence. Participants describe a preference for videos created by other DIYers, often describing them as “straightforward” and reflecting seemingly obvious “common sense.” These accounts highlight the systematic, though subtle, influence of homophily, a preference for “sameness,” that can be further perpetuated by algorithmic sorting.

By examining how the social and material aspects of YouTube are entangled in search practices, we can see how these experiences might work to narrow, rather than widen, individuals’ information worlds. Contributing to ongoing conversations in critical algorithm studies, this paper demonstrates how even mundane practices like looking for home repair videos on YouTube can play a role in sense- and identity-making, highlighting the role of computational systems in the shaping of modern subjectivities.

 

++++++++++

Making worlds small — The rise of algorithmic spaces

Algorithmic systems have become an increasingly prevalent part of contemporary life, with growing influence in a wide swath of activities — even generating specific styles of music (Wilf, 2013) or artistic paintings (Kasao and Miyata, 2005). Algorithms are curious sources of novel creative or artistic engagement, but they are also increasingly involved in complex decision-making processes. Their seemingly straightforward computational processing and execution are often perceived as “objective” and unbiased, what Lustig and Nardi (2015) have coined “algorithmic authority.” The decisions undergirded by automatic, algorithmic outputs can, at times, have consequential outcomes on lives and livelihoods, with algorithms now routinely making determinations as serious as creditworthiness (Citron and Pasquale, 2014; Pasquale, 2015b), financial trades (Kirilenko and Lo, 2013), and even assessments of “foreignness” (Cheney-Lippold, 2016). The computational decision-making enabled by the algorithmic processing of “big data” is also transforming the nature of penology and policing in the United States (Berk, 2013; Brennan and Oliver, 2013).

Although algorithmic decision-making systems are taking on deeper and more consequential roles in contemporary life at a rapid pace, the scope of their power is often obscured — frequently opaque or at times actively hidden, creating what Pasquale (2015b) has called a “black box society.” Many factors contribute to the opacity of these systems. Algorithms have traditionally been defined as simply “a sequence of computational steps that transform the input into the output” [1]. But the “big data” algorithms embedded in the complex computational systems we encounter today are often learning algorithms acting amid and in concert with many other algorithms over vast and dynamic datasets. They are difficult to “know” in the traditional sense, due to the sheer scale and velocity of their complexity (Burrell, 2016; Seaver, 2013). Beyond technical knowing, the influence of algorithmic systems is often invisible. Many individuals are unaware that their online experiences are algorithmically curated, often attributing how and when content is presented to the actions of other users rather than the platform itself (Eslami, et al., 2015; Rader and Gray, 2015).

a. Algorithmic spaces online

The influence of algorithms in shaping online spaces has been a concern since the early Web, particularly with their use in search engine results. A broad body of literature has explored this issue in the context of Google.com, which introduced the PageRank algorithm to its search engine in the late 1990s (Page, et al., 1999). PageRank garnered attention and concern in diverse fields, but a close examination of this literature is outside the scope of this paper. These early concerns over algorithmic curation highlight the powerful, yet typically invisible, control these systems can wield in determining information access. Their invisibility is a concern not only because it denies individuals the opportunity for informed participation in these systems of partiality, but also because the invisibility removes the possibility for parties to hold these systems accountable for their profound and, at times, biased effects (Tufekci, 2015a).
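The core idea behind PageRank is public (Page, et al., 1999): a page’s score reflects how likely a “random surfer” following links is to land on it, computed by iterating over the link graph. The Python sketch below is a minimal illustration of that iteration, included here only to make “algorithmic curation of search results” concrete; the toy graph and function names are mine, not drawn from any production system.

def pagerank(links, damping=0.85, iterations=50):
    """Power iteration over a link graph {page: [pages it links to]}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page starts with the "teleport" share, then receives a
        # damped share of rank from every page that links to it.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Toy graph: both B and C link to A, so A accumulates the highest rank.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))

Even in this stripped-down form, the ordering of results is a product of design choices (the damping factor, the treatment of links) that remain invisible to the searcher.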

In addition to their role in the display of search results, algorithms also shape individuals’ experiences with Web 2.0 platforms in many ways. Scholars have pointed to ways in which many Web 2.0 platforms have disciplinizing effects on individuals — for example, by narrowly dictating what and how content may be generated to fit into a platform’s “template” sociality (Arola, 2010). Web 2.0 platforms also shape individuals’ experiences through the cultural expectations they create for certain comportment and presentation, mandating particular forms of participation and interactivity (Jarrett, 2008) and creating broader imperatives for self-promotion and branding (Marwick, 2013).

In tandem with the social pressures of Web 2.0 interactivity, the elaborate algorithmic systems that govern what content is displayed, when, and to whom also influence individuals’ experiences in these online spaces. Often displayed by way of “recommended” or “trending” content, a hallmark of Web 2.0 platforms today is their ability to shape notions of “relevance” by algorithmic sorting and customization (Gillespie, 2014).

While these filtering mechanisms can help individuals make sense of the information deluge online, they are neither value-free nor without consequence. Algorithmic processing has the power — whether intentional or not — to manipulate social relationships and institutions. Algorithms can do so by, for example, bolstering or snuffing out social movements through the regulation of visibility, implicitly legitimating some groups and not others (Tufekci, 2015a, 2015b), or by eroding traditional democratic processes through the manipulation of political elections (Zittrain, 2014). Algorithms can also manipulate labor markets — whether by creating entirely new markets in their wake (as we have seen with the rise of search engine optimization (SEO) in response to PageRank and other search engine sorting algorithms; for a history of search engines, see Seymour, et al., 2011) or by drastically re-configuring and “disrupting” traditional markets through mandated accountability to opaque “ratings” schemes, as we have seen with the rise of “algorithmic management” in hospitality and other service industries (Orlikowski and Scott, 2015).

Algorithms can also have reverberations for individuals on a micro-level. They can undermine psychological health and well-being by manipulating moods (Kramer, et al., 2014). They can also reconfigure interpersonal relationships and feelings of social closeness — in the context of the Facebook NewsFeed, which tailors the content displayed for each user, Eslami, et al. (2015) found that individuals interpreted missed posts as an indication of a weak relationship — missed posts must mean we’re just not “that close.” Curating algorithms in social media can also influence content creation; as Bucher (2012) notes, algorithms can create the “constant possibility of disappearing and becoming obsolete” by algorithmic decree [2]. Others have taken a more optimistic view on algorithms as social objects — because they represent a source of uncertainty, they can evoke speculative imaginaries (Bucher, 2016) or spark playful attempts to “game the algorithm” (Mahnke and Uprichard, 2014).

b. Homophily and preferences for sameness

The rise of algorithmic systems is situated within broader discourses of human information behavior. Homophily, the concept that people favor those perceived to be similar to themselves, has long been an established area of interest in sociological research (McPherson, et al., 2001). Often examined through social connections and the formation of networks, homophily provides an important frame for understanding how information and ideas spread: despite the natural propensity to favor those similar to us, it is weak ties that provide exposure to novel information (Granovetter, 1973). Thus, while we prefer similarity, it is through difference that new ideas are encountered.

In online contexts, the issue of homophily has garnered attention from scholars for the role it plays in shaping social networks generally (e.g., Bisgin, et al., 2010; De Choudhury, et al., 2010; Thelwall, 2009). While a review of this extensive literature is outside our scope here, several points are relevant to information behavior in participatory platforms. Homophily plays a role in how individuals assess the credibility and utility of user-generated content, for example on platforms like TripAdvisor (Ayeh, et al., 2013). Homophily can also affect individuals’ behavior around online information, influencing their likelihood to retweet on Twitter (Macskassy and Michelson, 2011), pin on Pinterest (Chang, et al., 2014), or link to in blogs (Gilbert, et al., 2009) — with a preference for similarity creating what some have dubbed online “echo chambers.” The influence of homophily can go further still, shaping content creation in addition to content sharing. In a study of the photo-sharing platform Flickr, researchers found that after a new social tie was created, users would begin to upload more similar photos (Zeng and Wei, 2013). This highlights the potential for homophily to shape social reality by influencing future behavior — in the case of Flickr photos, even shaping traditionally “personal” practices of creativity and aesthetic preference.

c. Information poverty and small information worlds

Homophily is similar to the concept of a “small world,” explored in the field of library and information science. The “small” or “impoverished” information world perspective is helpful in highlighting the role of information behavior in shaping an individual’s social reality. Developing this perspective through empirical work in settings like retirement communities (Chatman, 1996), low-wage work (Chatman, 1990), and prisons (Chatman, 1999), Chatman observed practices of information avoidance in small groups as a way of protecting against outsiders. Keeping one’s information world small is achieved through secrecy and avoidance of exposure to novel or horizon-broadening sources of information:

Within a small world, most of the information deriving from the larger outside world has little lasting value ... [outside information] might simply be to measure the overall soundness of the world “out there,” to maintain a connection, or to engage in “small-talk.” [3]

Although narrowing, these practices provide individuals with powerful forms of coping and control, by allowing them to construct the social order and coherence of their everyday lives. Despite these coping mechanisms, these “small worlds” leave individuals in states of information poverty. Even in the age of “information overload” online, individuals can still experience forms of information poverty. Lingel and boyd (2013) explored how specific facets of an individual’s world might be “small” or impoverished, while others may not be as narrow (in their case, exploring how stigma associated with particular topics or arenas may deprive individuals of needed information). When considered together with homophily, this perspective illustrates how individuals’ strong preference for social similarity can have profound reverberations not only for information seeking and searching behavior but also for the shaping of their social realities — similarity breeds preference and comfort, favoring the self-protection gained from avoidance and making worlds and realities small.

d. Algorithms, identity, and narrowing selves

In online spaces, these effects are amplified by the introduction of systems that automatically sort content and make customized recommendations for each individual user, creating what Pariser (2011) calls the “filter bubble”:

The new generation of Internet filters looks at the things you seem to like — the actual things you’ve done, or the things people like you like — and tries to extrapolate. They are prediction engines, constantly creating and refining a theory of who you are and what you’ll do and want next. Together, these engines create a unique universe of information for each of us — what I’ve come to call a filter bubble — which fundamentally alters the way we encounter ideas and information. [4]

Implicit in Pariser’s definition is the identity modeling taking place within algorithmic systems — these systems are “constantly creating and refining a theory of who you are.” The entanglement of self and computational systems is undeniable. Modern selves are, in the words of Horning (2012), “data selves” — subjectivities constructed in relation to and co-constituted through the data meant to represent them. No longer tied to traditional marketing segmentation or demographic categories, “big data” identities are extrapolated from data traces and numerical predictions, capturing what some call the “pre-personal” — inarticulable facets of taste and preference individuals themselves are not even aware of (Hallinan and Striphas, 2016).
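To make the mechanics of Pariser’s “prediction engine” concrete, consider a deliberately simplified sketch: a running profile of topics a user engages with, used to rank what is shown next. All names and data here are invented for illustration; actual platforms use far more elaborate models.

from collections import Counter

profile = Counter()  # the system's evolving "theory of who you are"

def record_watch(topics):
    """Refine the profile each time the user engages with an item."""
    profile.update(topics)

def rank(candidates):
    """Order candidate items by their overlap with the inferred profile."""
    return sorted(candidates,
                  key=lambda item: sum(profile[t] for t in item["topics"]),
                  reverse=True)

record_watch(["diy", "plumbing"])
record_watch(["diy", "flooring"])

items = [{"title": "Replace a faucet yourself", "topics": ["diy", "plumbing"]},
         {"title": "Opera highlights", "topics": ["music"]}]

# DIY content rises to the top; unrelated content sinks from view,
# and the "unique universe of information" narrows accordingly.
print([item["title"] for item in rank(items)])

Even this toy loop exhibits the feedback Pariser describes: each watch strengthens the profile, which in turn biases what is surfaced, which biases what gets watched next.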

The danger lies in the potential for algorithmic living to “engineer out of daily experience all manner of ‘inconvenient’ cultural and social practices” (Pasquale, 2015a). Algorithms enact narrow experiences, privileging notions of preference and predictive “satisfaction,” while eliding opportunities for the generative potential of serendipity, defamiliarity, difference, or even distaste. Exposure to novelty is not only important for new knowledge acquisition, but can also be instrumental in shaping the self — “reading” or watching media involves active processes of interpretation and imagination (Hall, 1973), processes through which audiences or publics are created (Warner, 2002). Media artifacts not only convey information at face value (e.g., how to complete this DIY task), but also carry cultural implications by inviting viewers to imagine whom it was created by and for, what the intended uses might be, what the possible ways to contribute or reuse it are, and so on. Thus, just as “sharing” media is a form of participation in Web 2.0 platforms (John, 2013), interpreting or “reading” content is also a powerful type of social engagement through which users come to imagine the possibilities of the world and their place in it.

 

++++++++++

Methods

This paper analyzes data from an exploratory, qualitative study looking at the information behavior of individuals who engage in DIY home projects of improvement and repair work. Home renovation is a common activity of everyday life (Allon, 2008), with many homeowners attempting to renovate themselves (Williams, 2008, 2004; Wolf and McQuitty, 2011). Informal information seeking is a prominent feature of everyday life (Savolainen, 1995), with DIY activities in particular requiring extensive information gathering efforts. DIY is a particularly appropriate topic through which to explore the entanglement of Web 2.0 platforms and everyday life, given the prominent role of peer-to-peer and user-generated content in DIY culture (Gauntlett, 2011; Miller and Sinanan, 2014).

I recruited participants at a large public research university in a Midwestern state in the United States. To capture a variety of experiences, recruitment criteria were broad, requiring only that participants be 18 years or older and have completed some type of home improvement or repair project in the last 12 months. I conducted in-depth, semi-structured interviews during March and April 2013. Interviews were audio-recorded and transcribed. In total, I interviewed 21 individuals, with interviews ranging in length from 35 minutes to 120 minutes, with a mean time of 78 minutes and a median time of 90 minutes. Of these 21 participants, 11 identified as female and 10 as male. Their ages ranged from 24 to 75 years, with a mean age of 41 and a median age of 34 years. The recent projects participants discussed spanned a wide range of home renovation work, including small repairs and improvements like replacing fixtures or painting rooms, medium projects like installing flooring, and large projects such as a full kitchen or even a full house renovation. The prominence of YouTube as a source of information used in home repair activities emerged during data collection; participants were not screened beforehand to specifically sample for YouTube use. Attesting to the prominence of the platform, 20 of the 21 participants reported using YouTube videos in some way during their home repair information practices.

I analyzed data inductively. Although not adopting the formalities of traditional grounded theory, my approach was informed by Charmaz’s (2014) constructivist approach to inductive analysis. To begin, I used an open coding method to organize a sample of the interview data, specifically the first seven interview transcripts and their accompanying notes, to understand the breadth of individual experiences discussed. Based on these codes, I wrote analytical memos to identify dominant sources of information, specifically around the topics of online and off-line information seeking. I then used focused and axial coding to explore these themes in more depth, re-coding the initial seven interviews along with the remaining fourteen. I wrote analytical memos after coding each interview, letting me stay close to the data while comparing emergent themes. I deepened these analyses by reading relevant media, including numerous news and blog articles; interviews with a member of YouTube’s Search and Discovery team [5]; and closely examining the YouTube platform itself by searching for and watching videos related to DIY home repair.

 

++++++++++

The case of DIY home repair videos on YouTube

The following section explores three main themes from the interview data. These cover how YouTube videos have come to transform information practices, how participants describe using these videos to assess their abilities and self-confidence in attempting DIY repair activities, and the role of “common sense” in determining the credibility of videos. Finally, the section closes by exploring the role of recommendations in the YouTube user experience.

a. Transforming information practices

Many participants talked about the prominent role YouTube played in their DIY information practices, often noting how the videos have changed information routines, impacting the perceived relevance of other media sources, such as books:

YouTube’s great because you can watch stuff actually being done. I’ve got a couple of home improvement books, but I’ve never even cracked them open because it’s so much easier to just search for what I want electronically. (P7)

This participant reflected on the ease of retrieving information digitally (i.e., search) versus the organization schemes of other forms, such as books. Another participant reflected on the felt obsolescence of books in the wake of YouTube videos, noting the change in his own practice:

I think YouTube has probably eliminated most of these home improvement books ... I’ve got a couple, I haven’t looked at them in years because you can just go to YouTube and watch a guy do it ... And that’s a lot easier than reading a printed page with maybe one picture to know how to do it. (P1)

This participant discusses how the dynamic nature of video — watching someone do the task is easier than reading a printed page — makes it a preferred source of information. Another participant elaborated on this:

I think videos are easier to understand. Well, first it involves a lot of terms related to either the components and tools related to this task. It’s easier to see what they look like, so I could later locate those things in a store. And I want also to kind of see the process, how you actually do this. And I can see if they have the same type of faucet that I had, so I can see whether this situation, this tutorial actually apply to me. So I think it’s just easier to understand from the video. (P3)

This reflection also highlights the role videos can play in expanding vocabularies: beyond the DIY task itself, videos can also offer contextual information about tools and supplies. Here, the video helped the participant identify not only what tools are needed, but also the names of those tools and what they look like. Another participant shared a similar insight, discussing how watching a video helped her understand how a particular tool should be used:

“... the videos were really helpful ... [to see] like how do you spread the adhesive on the floor to glue the tiles? When we did the fake linoleum tile in the bathroom, being able to see someone actually doing that 45 degree angle as they’re dragging the adhesive across the floor, how to use the teeth on the tool, that was helpful.” (P15)

b. Assessing ability and self-confidence

Through these excerpts, we are able to see the growing role of YouTube videos in DIY information practices and their perceived superiority in relevance and ease over other media formats. Beyond being more convenient or easier to digest, participants also talked of YouTube impacting their feelings of confidence or self-efficacy when thinking about DIY projects:

I searched the videos on YouTube on replacing faucet. I wanted to determine whether this is something I could do myself or I should hire someone to do it because I don’t have much experience to fix this type of things before so I wanted to see what would be involved. After watching a video ... I feel confident that this was something I could do. (P3)

Another participant described the video-watching process as a type of risk management, carefully weighing the perceived complexity of a given project against her comfort level:

To me, it’s like you’re not actually doing it yourself, but if you look at enough stuff, to me, the way I operate is like, “Oh, that seems straightforward enough that ...” So you’re kind of doing risk management as you’re doing this research and watching other people do it and be like, “Okay, I can... Yeah. I get that and can do that.” (P21)

Another participant described her evaluation process when watching videos, which involved deciding both whether the activity depicted in the video was too complicated and whether it was unlikely to match the particular home repair problem she was experiencing:

There’s this threshold point that is often indicative in the videos that, “Okay, that is too involved for us,” or that is maybe only like 30% likelihood of the problem and really could be these other much more involved things as well. So we’re just not going to deal and have someone else come in and take a look at it. (P19)

Videos were seen as helpful, even if participants ultimately did not attempt the DIY work depicted. As one participant shared, after watching several videos on how to replace a sump pump (a pump used to remove excess water from a basement) she decided to hire a professional: “there were too many steps, it was too much. It would have involved digging up half our yard and bailing the water out [of the basement] ... and it had to be done [now], so we just hired someone.” (P11). Here, we see the participant recall how she imagined and considered trying to do the work herself: “it would have involved ... ” she explains, as she lists the steps. The standing water in her basement meant time was of the essence. Although she understood the steps involved in the project, and perhaps would have attempted them under different circumstances, the temporal factor here — that the work had to be done now — caused her to decide it would have been “too much” for her to try herself.

The practice of watching videos was a way for participants to configure who they are, who they might be, and who they want to be. By imagining themselves doing the work, participants illustrate the objectified self, the “me” that Mead (1934) describes. Then, when talking about the actual decision to DIY or not, participants embody the subjective self, Mead’s “I.” Toggling back and forth between the self as object and the self as subject, participants actively work to organize their sense of self and their identity as a DIYer. The person in the video is a DIYer, and I’m a DIYer, but is the task something I feel comfortable doing? The experience of watching a video enables participants to play with what Markus and Nurius (1986) call their possible selves, their ideas about who they might become, who they would like to become, and who they fear becoming. Self-reliance and a can-do/will-do attitude are the bedrock of the DIY ethos. But this ethos reflects a type of aspirational identity (Thornborrow and Brown, 2009), the pursuit of which must be carefully tempered by the demands of safety and comfort, as well as the recognition (and fear) of potential danger.

The way participants describe videos shows us that they prefer YouTube because video is a medium capable of conveying information in dynamic ways. But beyond these more instructional or cognitive purposes, videos also play a role in processes of identity-making, impacting whether participants perceive activities and tasks as something they are capable of doing. This implicates feelings of self-confidence, self-efficacy, and the imagined possibilities of what one can or might possibly do.

c. Common sense and credibility

Many participants described a “you can just tell” heuristic when evaluating the credibility of videos. One participant described sometimes encountering funny, prank, or viral videos, which can easily be dismissed as irrelevant: “Sometimes you don’t find helpful things on [YouTube] ... Maybe situations where you’re searching for the terms and things come up with and they ended up blowing up the water heater or something. That’s more of a funny thing.” (P4)

Beyond these types of irrelevant videos, participants described a video as “speaking for itself,” with the finished project depicted in the video signifying the content’s credibility or accuracy:

Sometimes I just watch the YouTube video of someone doing something and I can tell if they’re doing a good job or not ... You can see as they’re doing it if anything they’re doing would cause a problem. You can see when they’re done with it that it works and everything. (P7)

Another participant expressed a similar sentiment, holding up first-person perspective and demonstration in the videos as an indicator of their credibility: “... If you make a video and you’re under a sink and you’re cleaning it then that lends some credibility to me.” (P17)

This “speaking for itself” heuristic is a naturalization of common sense; the evaluation of videos as commonsensical or not feels like a natural and normal part of video watching. This naturalization speaks to the virtual “first-hand experience” that watching videos seems to elicit, pointing to their importance in the shaping of possible selves.

Many large, commercial hardware and supply stores also post DIY videos on YouTube. These stores, like Lowes or Home Depot, are frequently referred to as “big box” stores, given the large, box-like layout their physical stores follow. As one participant shared, there can be tension in discerning the credibility of videos, particularly user-generated videos in comparison to store videos: “I think the user-generated videos have a varying degree of quality. Some of them are really kind of detailed and some of them are not very useful ... But the store-maintained ones sometimes they might look like a commercial, so it also depends.” (P3)

This skepticism towards commercialization was echoed in many interviews, with several participants expressing concern over commercial videos making repairs seem overly complicated through the use of jargon or specialized tools when more common ones would suffice. Another common concern was the use of time-lapse editing, which could make projects seem overly simplified:

I think sometimes my wife watches the professional ones, she feels we could do every project in the house. And then, they have sliced there behind the scenes, they have like 50 contractors working on this project. So, it looks easy to us, but they cut out the other parts of where they had a bunch of contractors coming in. (P18)

This skepticism towards professional videos hints at a desire for transparency and authenticity. As one participant phrased it, knowing those in the videos were working on their own homes reflected a certain level of care and craft:

I’d say it’s more important to know what they did in their own home, because when you ... I’ll say, if you’re doing it in your own home, you really do not wanna mess it up. So I thought that them giving their personal experience of what they did in their own home is very valuable to me. And that always helped me out a lot, without question. (P13)

While these excerpts highlight the prominent role videos play in participants’ DIY information behavior, strikingly absent from these discussions are mentions of the YouTube platform itself or of how participants go about finding videos. When asked how they searched for and found relevant videos, many participants described entering general keyword searches and then browsing through videos. In the words of one participant, “Sometimes I make it a question or just put the one word in and then see what comes out.” (P11) Despite participants’ perception of the straightforwardness of searching and waiting to “see what comes out,” what content appears and actually gets watched on YouTube is heavily dependent on the recommended or “related” video feature (Zhou, et al., 2010), a feature many participants described simply, and often fondly, as “helpful,” “great,” or “useful” in finding relevant videos. The absence of the platform itself — its seeming invisibility — in interviews is concerning, though not entirely surprising. Many systems do not make their underlying algorithmic curation transparent, leaving many users unaware of the amount of manipulation or “customization” shaping their interaction (e.g., Eslami, et al., 2015; Rader and Gray, 2015). Surreptitious curation is concerning because it eliminates the opportunity for individuals to participate in these systems with informed consent. While searching for videos may seem “straightforward” and of little lasting consequence if the “right” one is found, algorithmic platforms can easily manipulate individuals’ moods (Kramer, et al., 2014) and decrease feelings of closeness and affiliation with others (Eslami, et al., 2015). As individuals enlist YouTube to help scope and envision DIY tasks, as we see here, they also imagine their own ability and make self-assessments of risk, comfort, and possibility. These assessments are, unbeknownst to individuals, crafted by the platform’s elaborate recommendation systems, which not only shape what information is presented to them and in what order, but can also, in turn, constrain how they conceive of or imagine what is possible.

For these participants, searching for and watching videos on YouTube is first and foremost an experience of content. They describe in careful detail how they go about assessing the credibility of the people and practices depicted in the videos. They describe how watching videos is helpful, not only in the visual and spatial aspects of film (versus print, for example) but also in imagining themselves doing the work. The particular mechanics of the platform — the how and why of what videos are presented to them — sink into the background. Given the central role media like these videos play in constructing notions of self, ability, and confidence, the seeming invisibility of the platform — particularly the algorithmic sorting that provides a heavily customized experience — raises concerns over the potential power algorithms wield in shaping social realities.

 

++++++++++

Recommendations on YouTube

In an interview with the Computerphile Project, Cristos Goodrow [6], part of YouTube’s Search and Discovery team, explained the basic idea of how the platform generates recommended videos:

The simplest thing is that if we see you watch one of a certain video then we know that other people who have watched that video in the past went on to watch this other video. So it’s quite natural that if we’ve seen you watch this first one, we might think that you’d want to watch the other one too. That’s mostly just a matter of accounting more than anything else. It’s just keeping track of which videos get watched together. So that’s the simplest way we do it.
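Read computationally, the “accounting” Goodrow describes amounts to item-to-item co-occurrence counting. The Python sketch below is a minimal illustration of that idea under that reading — not YouTube’s actual implementation; the watch histories and names are invented for illustration.

from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical watch histories: one list of video IDs per user.
histories = [
    ["fix_faucet", "replace_washer", "sump_pump"],
    ["fix_faucet", "replace_washer"],
    ["tile_floor", "spread_adhesive", "fix_faucet"],
]

# "Keeping track of which videos get watched together":
# count how often each pair of videos co-occurs in a history.
co_watch = defaultdict(Counter)
for history in histories:
    for a, b in combinations(set(history), 2):
        co_watch[a][b] += 1
        co_watch[b][a] += 1

def recommend(video_id, k=2):
    """Return the k videos most often co-watched with video_id."""
    return [video for video, _ in co_watch[video_id].most_common(k)]

print(recommend("fix_faucet"))  # e.g., ['replace_washer', 'sump_pump']

The “sophistication” Goodrow describes next, inferring why a video was watched, goes well beyond such raw counts.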

He goes on further to elaborate on how the more sophisticated recommendations attempt to discern why a user is watching a particular video:

I think the sophistication comes from figuring out what’s important about the video that indicates your interest in it. So for instance you might have watched some soccer videos but it was because those are the most amazing goals and you’re not really interested in soccer, you’re just interested in amazing sporting things. Or you might have watched those videos because they are a team that you follow because they are from an area of the world that you’re from. And so the sophistication comes in separating those two kinds of associations.

Noting the shortcomings of their approach, Goodrow explains:

It’s easy for our systems to become confused about the fact that you happened to watch it at one time, but you’re not really interested in it very much. As opposed to well, you watched a little bit less of something, but it’s something you’d like to see more of. The information we have to go on is what you and other people have watched in the past.

When asked about the role of randomness and how YouTube attempts to address homogeneity, he responds:

We do work on trying to increase diversity. And the challenge there is that we have an intuitive belief that increasing diversity will lead to the opportunity to get even more viewership in the future. But every time we try to increase diversity we tend to reduce the amount of viewership we have in the short term. I think it’s because we haven’t quite gotten it right yet. Until we can recommend unexpected things that are almost always right, people will just go away. They won’t see as much of what they were looking for or what they expected and they will just go away. So I guess it’s quite a high bar to add some diverse things there. So even though we have this intuition and our experiments tend to make us more conservative with it, that’s the way we want to go about trying to add, I wouldn’t call it, I would say that the naïve approach to diversity would be trying to add a random element. And I can assure you that that’s not going to work because you give up too much viewership in the short term. And so what we’ve learned is that random isn’t good enough and we need to make it so that we’re almost always right when we add some diverse or unexpected element.

His response to the problem of homogeneity is telling: while YouTube recognizes the benefit of diversity, it is in the business of retaining viewers. YouTube’s revenue comes from advertising, and if users “go away” then the platform is not making money. Diversity or heterogeneity — elements that “tend to reduce the amount of viewership we have in the short term” — is therefore set in opposition to its business interests. This aligns with ongoing critical inspections of Web 2.0 platforms that note the economic and market forces at play, which often stand in contrast to the idealistic hype (Lesage and Rinfret, 2015; Scholz, 2008; Zimmer, 2008). It is not just that Web 2.0 platforms pervert notions of “participation” by creating what Petersen (2008) calls “infrastructures of exploitation.” As platforms like YouTube become more and more embedded in everyday information behavior — and seen as natural sources of straightforward and authentic information — they are also shaping users’ subjectivities and senses of self. At the same time that Web 2.0 platforms are creating the markets they then exploit, they are also shaping who users are and might become by dramatically tailoring the information users are exposed to.

 

++++++++++

Big data, small worlds

By analyzing these interviews, we see how Web 2.0 platforms like YouTube have transformed information practices. This is due in part to the dynamic nature of videos as a media form and their perceived superiority when compared to traditional forms of information (e.g., books). Videos also make the activities portrayed visible in a central way that enables users to engage in identity work by watching others and then assessing their own capabilities and possibilities. This case is significant because it allows us to empirically examine how selves are entangled in sociotechnical systems, extending concepts of information retrieval and recommendation to also encompass the shaping of modern subjectivities and feelings of self-efficacy.

Drawing out these points contributes to ongoing conversations in social computing and science and technology studies on the power of algorithmic sorting and the growing reach (and increasing opacity) of computational processing in everyday life. Through everyday information practices, people are continually made and remade through their exposure to ideas — these ideas shape identity-making by influencing perceptions of what is or might be possible. Calling attention to the role of self in media consumption — and the inevitable entanglement of self and algorithmic recommender systems — creates new provocations for researchers examining the cultural implications of Web 2.0 platforms. Just as system constraints on how content can be created or input can have disciplinizing effects on users (Bucher, 2012; Jarrett, 2008; Marwick, 2013), so too can these platforms reconfigure and otherwise constrain users’ very senses of self and subjectivity by narrowing, rather than widening, their information worlds.

 

About the author

Christine T. Wolf is a Ph.D. candidate in Informatics at the School of Information and Computer Sciences at University of California, Irvine and a student co-op researcher at IBM Research, Almaden Research Center. She holds an M.S. in Information from the University of Michigan, Ann Arbor, and a J.D. from Southern Methodist University. Her research focuses broadly on understanding big data as an everyday experience.
E-mail: wolfct [at] uci [dot] edu

 

Acknowledgments

Thank you to the participants in this study who generously shared their DIY experiences with me. Many thanks to Paul Dourish, whose guidance and feedback on early drafts was instrumental in the shaping of this work. Also thanks to Morgan Ames, Kathryn Ringland, and many helpful members of LUCI who gave early feedback, as well as the anonymous First Monday reviewers whose comments strengthened this work. This project was supported, in part, by a small grant from the University of Michigan School of Information, as well as an Achievement Rewards for College Scientists (ARCS) fellowship. All opinions are my own and do not reflect any institutional endorsement.

 

Notes

1. Cormen, et al., 2009, p. 6.

2. Bucher, 2012, p. 1,164.

3. Burnett, et al., 2001, p. 537.

4. Pariser, 2011, p. 18.

5. “YouTube’s secret algorithm,” at https://www.youtube.com/watch?v=BsCeNCVb-d8 (24 April 2014).

6. “YouTube search & discovery,” at https://www.youtube.com/watch?v=JCtV7TmLTqQ (2 May 2014).

 

References

F. Allon, 2008. Renovation nation: Our obsession with home. Sydney: New South.

M. Ames, 2015. “Charismatic technology,” Aarhus Series on Human Centered Computing, volume 1, number 1, at http://ojs.statsbiblioteket.dk/index.php/ashcc/article/view/21199, accessed 31 May 2016.

M. Ames, D. Rosner, and I. Erickson, 2015. “Worship, faith, and evangelism: Religion as an ideological lens for engineering worlds,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 69–81.
doi: http://dx.doi.org/10.1145/2675133.2675282, accessed 31 May 2016.

K. Arola, 2010. “The design of Web 2.0: The rise of the template, The fall of design,” Computers and Composition, volume 27, number 1, pp. 4–14.
doi: http://doi.org/10.1016/j.compcom.2009.11.004, accessed 31 May 2016.

J. Ayeh, N. Au, and R. Law, 2013. “‘Do we believe in TripAdvisor?’ Examining credibility perceptions and online travelers’ attitude toward using user-generated content,” Journal of Travel Research, volume 52, number 4, pp. 437–452.
doi: http://doi.org/10.1177/0047287512475217, accessed 31 May 2016.

R. Berk, 2013. “Algorithmic criminology,” Security Informatics, volume 2, number 5, at http://security-informatics.springeropen.com/articles/10.1186/2190-8532-2-5, accessed 31 May 2016.
doi: http://doi.org/10.1186/2190-8532-2-5, accessed 31 May 2016.

H. Bisgin, N. Agarwal, and X. Xu, 2010. “Investigating homophily in online social networks,” 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), volume 1, pp. 533–536.
doi: http://doi.org/10.1109/WI-IAT.2010.61, accessed 31 May 2016.

M. Bower, J. Hedberg, and A. Kuswara, 2010. “A framework for Web 2.0 learning design,” Educational Media International, volume 47, number 3, pp. 177–198.
doi: http://doi.org/10.1080/09523987.2010.518811, accessed 31 May 2016.

T. Brennan and W. Oliver, 2013. “The emergence of machine learning techniques in criminology,” Criminology & Public Policy, volume 12, number 3, pp. 551–562.
doi: http://doi.org/10.1111/1745-9133.12055, accessed 31 May 2016.

T. Bucher, 2016. “The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms,” Information, Communication & Society.
doi: http://doi.org/10.1080/1369118X.2016.1154086, accessed 31 May 2016.

T. Bucher, 2012. “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook,” New Media & Society, volume 14, number 7, pp. 1,164–1,180.
doi: http://doi.org/10.1177/1461444812440159, accessed 31 May 2016.

G. Burnett, M. Besant, and E. Chatman, 2001. “Small worlds: Normative behavior in virtual communities and feminist bookselling,” Journal of the American Society for Information Science and Technology, volume 52, number 7, pp. 536–547.
doi: http://doi.org/10.1002/asi.1102, accessed 31 May 2016.

J. Burrell, 2016. “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data & Society, at http://bds.sagepub.com/content/3/1/2053951715622512, accessed 31 May 2016.
doi: http://doi.org/10.1177/2053951715622512, accessed 31 May 2016.

S. Chang, V. Kumar, E. Gilbert, and L. Terveen, 2014. “Specialization, homophily, and gender in a social curation site: Findings from Pinterest,” CSCW ’14: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 674–686.
doi: http://doi.org/10.1145/2531602.2531660, accessed 31 May 2016.

K. Charmaz, 2014. Constructing grounded theory. Second edition. Thousand Oaks, Calif.: Sage.

E. Chatman, 1999. “A theory of life in the round,” Journal of the American Society for Information Science, volume 50, number 3, pp. 207–217.

E. Chatman, 1996. “The impoverished life-world of outsiders,” Journal of the American Society for Information Science, volume 47, number 3, pp. 193–206.

E. Chatman, 1990. “Alienation theory: Application of a conceptual framework to a study of information among janitors,” Reference Quarterly, volume 29, number 3, pp. 355–368.

J. Cheney-Lippold, 2016. “Jus algoritmi: How the National Security Agency remade citizenship,” International Journal of Communication, volume 10, at http://ijoc.org/index.php/ijoc/article/view/4480, accessed 31 May 2016.

D. Citron and F. Pasquale, 2014. “Scored society: Due process for automated predictions,” Washington Law Review, volume 89, number 1, pp. 1–33, and at https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1318/89WLR0001.pdf, accessed 31 May 2016.

T. Cormen, C. Leiserson, R. Rivest, and C. Stein, 2009. Introduction to algorithms. Third edition. Cambridge, Mass.: MIT Press.

P. Dale, 2009. “It was easy, it was cheap, so what? Reconsidering the DIY principle of punk and indie music,” Popular Music History, volume 3, number 2, at https://journals.equinoxpub.com/index.php/PMH/article/view/7044, accessed 31 May 2016.
doi: http://doi.org/10.1558/pomh.v3i2.171, accessed 31 May 2016.

I. Davis, 2005. “Talis, Web 2.0 and all that” (4 July), at http://blog.iandavis.com/2005/07/talis-web-2-0-and-all-that/, accessed 14 May 2016.

M. De Choudhury, H. Sundaram, A. John, D. Seligmann, and A. Kelliher, 2010. “‘Birds of a feather’: Does user homophily impact information diffusion in social media?” arXiv.org (9 June), at http://arxiv.org/abs/1006.1702, accessed 31 May 2016.

J. van Dijck and D. Nieborg, 2009. “Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos,” New Media & Society, volume 11, number 5, pp. 855–874.
doi: http://dx.doi.org/10.1177/1461444809105356, accessed 31 May 2016.

M. Eslami, A. Rickman, K. Vaccaro, A. Aleyasen, A. Vuong, K. Karahalios, K. Hamilton, and C. Sandvig, 2015. “‘I always assumed that I wasn’t really that close To [her]’: Reasoning about invisible algorithms in the News Feed,” CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 153–162.
doi: http://dx.doi.org/10.1145/2702123.2702556, accessed 31 May 2016.

D. Gauntlett, 2011. Making is connecting: The social meaning of creativity from DIY and knitting to YouTube and Web 2.0. Cambridge: Polity Press.

E. Gilbert, T. Bergstrom, and K. Karahalios, 2009. “Blogs are echo chambers: Blogs are echo chambers,” HICSS ’09: 42nd Hawaii International Conference on System Sciences, pp. 1–10.
doi: http://doi.org/10.1109/HICSS.2009.91, accessed 31 May 2016.

T. Gillespie, 2014. “The relevance of algorithms,” In: T. Gillespie, P. Boczkowski, and K. Foot (editors). Media technologies: Essays on communication, materiality, and society. Cambridge, Mass.: MIT Press, pp. 167–194.
doi: http://dx.doi.org/10.7551/mitpress/9780262525374.003.0009, accessed 31 May 2016.

M. Granovetter, 1973. “The strength of weak ties,” American Journal of Sociology, volume 78, number 6, pp. 1,360–1,380.

S. Hall, 1973. “Encoding and decoding in the television discourse,” Birmingham: paper for the Council of Europe Colloquy on Training in the Critical Reading of Television Language, at http://www.birmingham.ac.uk/Documents/college-artslaw/history/cccs/stencilled-occasional-papers/1to8and11to24and38to48/SOP07.pdf, accessed 31 May 2016.

B. Hallinan and T. Striphas, 2016. “Recommended for you: The Netflix Prize and the production of algorithmic culture,” New Media & Society, volume 18, number 1, pp. 117–137.
doi: http://dx.doi.org/10.1177/1461444814538646, accessed 31 May 2016.

P. Haridakis and G. Hanson, 2009. “Social interaction and co-viewing with YouTube: Blending mass communication reception and social connection,” Journal of Broadcasting & Electronic Media, volume 53, number 2, pp. 317–335.
doi: http://doi.org/10.1080/08838150902908270, accessed 31 May 2016.

R. Horning, 2012. “Notes on the ‘data self’,” New Inquiry (2 February), at http://thenewinquiry.com/blogs/marginal-utility/dumb-bullshit/, accessed 15 May 2016.

K. Jarrett, 2008. “Interactivity is evil! A critical investigation of Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2140/1947, accessed 1 September 2013.

N. John, 2013. “Sharing and Web 2.0: The emergence of a keyword,” New Media & Society, volume 15, number 2, pp. 167–182.
doi: http://doi.org/10.1177/1461444812450684, accessed 31 May 2016.

A. Kasao and K. Miyata, 2005. “Algorithmic painter: A NPR method to generate various styles of painting,” Visual Computer, volume 22, number 1, pp. 14–27.
doi: http://doi.org/10.1007/s00371-005-0353-8, accessed 31 May 2016.

A. Kirilenko and A. Lo, 2013. “Moore’s Law versus Murphy’s Law: Algorithmic trading and its discontents,” Journal of Economic Perspectives, volume 27, number 2, pp. 51–72.
doi: http://doi.org/10.1257/jep.27.2.51, accessed 31 May 2016.

A. Kramer, J. Guillory, and J. Hancock, 2014. “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences, volume 111, number 24 (14 June), pp. 8,788–8,790.
doi: http://doi.org/10.1073/pnas.1320040111, accessed 31 May 2016.

F. Lesage and L. Rinfret, 2015. “Shifting media imaginaries of the Web,” First Monday, volume 20, number 10, at http://firstmonday.org/article/view/5519/5000, accessed 31 May 2016.
doi: http://doi.org/10.5210/fm.v20i10.5519, accessed 31 May 2016.

J. Lingel and d. boyd, 2013. “‘Keep it secret, keep it safe’: Information poverty, information norms, and stigma,” Journal of the American Society for Information Science and Technology, volume 64, number 5, pp. 981–991.
doi: http://doi.org/10.1002/asi.22800, accessed 31 May 2016.

C. Lustig and B. Nardi, 2015. “Algorithmic authority: The case of Bitcoin,” 2015 48th Hawaii International Conference on System Sciences (HICSS), pp. 743–752.
doi: http://doi.org/10.1109/HICSS.2015.95, accessed 31 May 2016.

S. Macskassy and M. Michelson, 2011. “Why do people retweet? Anti-homophily wins the day!” Fifth International AAAI Conference on Weblogs and Social Media, at http://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2790, accessed 31 May 2016.

M. Mahnke and E. Uprichard, 2014. “Algorithming the algorithm,” In: R. König and M. Rasch (editors). Society of the query reader: Reflections on Web search. Amsterdam: Institute of Network Cultures, pp. 256–271, and at http://networkcultures.org/query/wp-content/uploads/sites/4/2014/06/19.Mahnke_Uprichard.pdf, accessed 31 May 2016.

H. Markus and P. Nurius, 1986. “Possible selves,” American Psychologist, volume 41, number 9, pp. 954–969.
doi: http://dx.doi.org/10.1037/0003-066X.41.9.954, accessed 31 May 2016.

A. Marwick, 2013. Status update: Celebrity, publicity, and branding in the social media age. New Haven, Conn.: Yale University Press.

C. McLoughlin and M. Lee, 2010. “Personalised and self regulated learning in the Web 2.0 era: International exemplars of innovative pedagogy using social software,” Australasian Journal of Educational Technology, volume 26, number 1, pp. 28–43, and at http://ajet.org.au/index.php/AJET/article/view/1100, accessed 31 May 2016.
doi: http://dx.doi.org/10.14742/ajet.1100, accessed 31 May 2016.

M. McPherson, L. Smith-Lovin, and J. Cook, 2001. “Birds of a feather: Homophily in social networks,” Annual Review of Sociology, volume 27, pp. 415–444.
doi: http://doi.org/10.1146/annurev.soc.27.1.415, accessed 31 May 2016.

G. Mead, 1934. Mind, self & society from the standpoint of a social behaviorist. Chicago: University of Chicago Press.

D. Miller and J. Sinanan, 2014. Webcam. Cambridge: Polity Press.

W. Orlikowski and S. Scott, 2015. “The algorithm and the crowd: Considering the materiality of service innovation,” MIS Quarterly, volume 39, number 1, pp. 201–216, and at http://aisel.aisnet.org/misq/vol39/iss1/12/, accessed 31 May 2016.

L. Page, S. Brin, R. Motwani, and T. Winograd, 1999. “The PageRank citation ranking: Bringing order to the Web” (11 November), at http://ilpubs.stanford.edu:8090/422/, accessed 15 May 2016.

E. Pariser, 2011. The filter bubble: What the Internet is hiding from you. New York: Penguin Press.

F. Pasquale, 2015a. “The algorithmic self,” Hedgehog Review, volume 17, number 1, at http://www.iasc-culture.org/THR/THR_article_2015_Spring_Pasquale.php, accessed 31 May 2016.

F. Pasquale, 2015b. The black box society: The secret algorithms that control money and information. Cambridge, Mass.: Harvard University Press.

S. Petersen, 2008. “Loser generated content: From participation to exploitation,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2141/1948, accessed 30 July 2012.
doi: http://dx.doi.org/10.5210/fm.v13i3.2141, accessed 31 May 2016.

E. Rader and R. Gray, 2015. “Understanding user beliefs about algorithmic curation in the Facebook news feed,” CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 173–182.
doi: http://dx.doi.org/10.1145/2702123.2702174, accessed 31 May 2016.

C. Redecker, K. Ala-Mutka, M. Bacigalupo, A. Ferrari, and Y. Punie, 2009. “Learning 2.0: The impact of Web 2.0 innovations on education and training in Europe,” at http://ftp.jrc.es/EURdoc/JRC55629.pdf, accessed 31 May 2016.

D. Rotman and J. Preece, 2010. “The ‘WeTube’ in YouTube — Creating an online community through video sharing,” International Journal of Web Based Communities, volume 6, number 3, pp. 317–333.
doi: http://doi.org/10.1504/IJWBC.2010.033755, accessed 31 May 2016.

R. Savolainen, 1995. “Everyday life information seeking: Approaching information seeking in the context of ‘way of life’,” Library & Information Science Research, volume 17, number 3, pp. 259–294.
doi: http://doi.org/10.1016/0740-8188(95)90048-9, accessed 31 May 2016.

T. Scholz, 2008. “Market ideology and the myths of Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2138/1945, accessed 31 May 2016.

N. Seaver, 2013. “Knowing algorithms,” Media in Transition 8; version at http://nickseaver.net/s/seaverMiT8.pdf, accessed 31 May 2016.

T. Seymour, D. Frantsvog, and S. Kumar, 2011. “History of search engines,” International Journal of Management & Information Systems (IJMIS), volume 15, number 4, at http://cluteinstitute.com/ojs/index.php/IJMIS/article/view/5799, accessed 31 May 2016.
doi: http://doi.org/10.19030/ijmis.v15i4.5799, accessed 31 May 2016.

D. Silver, 2008. “History, hype, and hope: An afterward,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2143/1950, accessed 31 May 2016.
doi: http://dx.doi.org/10.5210/fm.v13i3.2143, accessed 31 May 2016.

M. Thelwall, 2009. “Homophily in MySpace,” Journal of the American Society for Information Science and Technology, volume 60, number 2, pp. 219–231.
doi: http://doi.org/10.1002/asi.20978, accessed 31 May 2016.

T. Thornborrow and A. Brown, 2009. “‘Being regimented’: Aspiration, discipline and identity work in the British parachute regiment,” Organization Studies, volume 30, number 4, pp. 355–376.

Z. Tufekci, 2015a. “Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency,” Colorado Technology Law Journal, volume 13, pp. 203–217, and at http://ctlj.colorado.edu/wp-content/uploads/2015/08/Tufekci-final.pdf, accessed 31 May 2016.

Z. Tufekci, 2015b. “Algorithms in our midst: Information, power and choice when software is everywhere,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, p. 1,918.
doi: http://doi.org/10.1145/2675133.2697079, accessed 31 May 2016.

M. Warner, 2002. Publics and counterpublics. New York: Zone Books.

M. Wesch, 2010. “YouTube and you: Experiences of self-awareness in the context collapse of the recording Webcam,” Explorations in Media Ecology, volume 8, number 2, pp. 19–34.

E. Wilf, 2013. “Toward an anthropology of computer-mediated, algorithmic forms of sociality,” Current Anthropology, volume 54, number 6, pp. 716–739.
doi: http://doi.org/10.1086/673321, accessed 31 May 2016.

C. Williams, 2008. “Re-thinking the motives of do-it-yourself (DIY) consumers,” International Review of Retail, Distribution and Consumer Research, volume 18, number 3, pp. 311–323.
doi: http://doi.org/10.1080/09593960802113885, accessed 31 May 2016.

C. Williams, 2004. “A lifestyle choice? Evaluating the motives of do–it–yourself (DIY) consumers,” International Journal of Retail & Distribution Management, volume 32, number 5, pp. 270–278.
doi: http://doi.org/10.1108/09590550410534613, accessed 31 May 2016.

M. Wolf and S. McQuitty, 2011. “Understanding the do-it-yourself consumer: DIY motivations and outcomes,” AMS Review, volume 1, numbers 3–4, pp. 154–170.
doi: http://doi.org/10.1007/s13162-011-0021-2, accessed 31 May 2016.

X. Zeng and L. Wei, 2013. “Social ties and user content generation: Evidence from Flickr,” Information Systems Research, volume 24, number 1, pp. 71–87.
doi: http://doi.org/10.1287/isre.1120.0464, accessed 31 May 2016.

R. Zhou, S. Khemmarat, and L. Gao, 2010. “The impact of YouTube recommendation system on video views,” IMC ’10: Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, pp. 404–410.
doi: http://doi.org/10.1145/1879141.1879193, accessed 31 May 2016.

M. Zimmer, 2008. “The externalities of search 2.0: The emerging privacy threats when the drive for the perfect search engine meets Web 2.0,” First Monday, volume 13, number 3, at http://firstmonday.org/article/view/2136/1944, accessed 31 May 2016.
doi: http://dx.doi.org/10.5210/fm.v13i3.2136, accessed 31 May 2016.

J. Zittrain, 2014. “Engineering an election: Digital gerrymandering poses a threat to democracy,” Harvard Law Review Forum, at http://harvardlawreview.org/2014/06/engineering-an-election/, accessed 31 May 2016.

 


Editorial history

Received 27 May 2016; accepted 28 May 2016.


Creative Commons License
“DIY videos on YouTube: Identity and possibility in the age of algorithms” by Christine T. Wolf is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

DIY videos on YouTube: Identity and possibility in the age of algorithms
by Christine T. Wolf.
First Monday, volume 21, number 6 (June 2016).
https://firstmonday.org/ojs/index.php/fm/article/download/6787/5517
doi: http://dx.doi.org/10.5210/fm.v21i6.6787