Contents
Introduction — Yahoo! and the dangers of convergence
Web 2.0 — What does it mean?
Convergence — Where Web 2.0 sits
The history of data services — The origins of Web 2.0?
Web 2.0 as an argument against convergence
The recent announcement by Microsoft of a bid to acquire Yahoo! in a hostile takeover provides stark evidence of the continuing complexity of the intersection of computing and media businesses battling for dominance in the global market. Just as in the case of Time Warner and AOL (Klein, 2003), the proposed Microsoft-Yahoo! deal is about convergence. The big difference, however, is the new context of threats and opportunities which has led to Redmond’s latest effort to deploy its legendary financial muscle in pursuit of corporate goals of market domination. This difference emerges from changing conditions of networked media-computing which are in part associated with the rise of Web 2.0, and which provide an essential clue to understanding why Web 2.0 occupies such an important position in contemporary thinking about the Internet. As I will explain in this paper, Web 2.0 can itself be understood fully only by locating its emergence and significance within the broad movement of convergence of old and new media forms.
To introduce this topic, consider why Yahoo! is vulnerable to, or perhaps even welcoming of, a takeover by its long-time Internet competitor. This vulnerability stems from decisions, taken in the first years of this century, that reoriented Yahoo! away from being an Internet search and portal company towards promoting new channels to market for traditional media products via the Internet, or supported by the Internet (e.g., Koman, 2006). This change was clear in the appointment of media industry heavyweight Terry Semel as CEO in 2001 and in subsequent appointments and business planning that prioritised the relationship of Yahoo! to television and film producers. It can be argued that the problems Yahoo! has had with its underlying network technologies (affecting the utility of its search-advertising connection, but also its Messenger service and “360” profile/networking service) are a consequence of this shift in organisational focus. Yahoo! has suffered from its efforts to change from being a dot-com innovator (which had made Yahoo! such a significant Internet business by the time Semel arrived) to also being a provider of online media services. Notably, Semel’s departure in mid-2007 was greeted with relief by many Internet analysts and commentators precisely because it was thought to signal a return to a focus on core networked computing functionality. One summed up the change: “The valley [Silicon Valley] will take over Hollywood. Not the other way around” (Arrington, 2007).
The recent weakness of Yahoo! can be ascribed to an ill-conceived ambition to be a media company, rather than an Internet company. Microsoft’s primary goal in the proposed takeover is to strengthen itself against Google, the most obvious competitor for its global dominance of computing services and the most successful Web 2.0 business to date. The legacy of Yahoo!’s media strategy is unlikely to be of much appeal to Microsoft, which has its own largely unhappy history of attempts to ‘converge’ with media provision. Indeed, Yahoo!’s vulnerability results from its failure to understand the relationship between the Internet and traditional media, just as was the case in the failure of Time Warner/AOL. Convergence, it seems, is not quite as simple or obvious as we might first believe. Equally, Yahoo!’s current vulnerability to takeover reflects the company’s relative failure in recent years to move successfully into the era of Web 2.0, for the concentration on media partnerships has left Yahoo! struggling with technological approaches and assumptions about the Internet that date from the 1990s (Koman, 2006). With this in mind, let me now turn to the main focus of this paper: what Web 2.0 means, and what it tells us about the adoption and development of the Internet more generally.
Web 2.0 is a curious term, laden with uncertainty. As I have explored elsewhere (Allen, 2007), this situation results largely from the way the term was initially promoted by Tim O’Reilly via his Web 2.0 conference in 2004, as well as from the influence of the Internet itself in hosting and spreading debate about Web 2.0 (see especially Graham, 2006 and Doctorow, 2006 for interesting views on O’Reilly’s role). It is not the case, however, that there are competing definitions for the term: rather, Web 2.0 is a shorthand for many different things, some in conflict, some overlapping, but marked especially by the fact that they are ontologically incompatible. In short, Web 2.0 is about ideas, behaviours, technologies and ideals all at the same time. Moreover, its distinctive assertion of a change in state, from Web 1.0 (a term that was never used in any case) to Web 2.0, raises the question of the degree to which this change has actually occurred, or may be occurring, because of something new, or simply involves a re-expression of things previously understood as ‘the Web’, but placed in a new arrangement or seen in a new light.
I would argue that while many current Internet developments, activities, applications and the like can be understood as examples of ‘Web 2.0’, they do not themselves constitute it. Rather, Web 2.0 is a conceptual frame within which we can correlate and make sense of those diverse events, even as we use it as a convenient shorthand. What, then, are the main elements of this frame, the pieces which, together, create the boundaries around the picture of the current state of the World Wide Web? There are four – technology, economy, users and philosophy – and I will outline each in turn, drawing on both the originating material for the idea of Web 2.0 and on subsequent debates (O’Reilly, 2005; Musser, 2006; Madden and Fox, 2006; and see Allen, 2007 for further details).
First, Web 2.0 is a term applied to approaches to the design and functionality of Web sites and the services they offer, emerging in recent years, and essentially describing technological implementations that prioritise the manipulation and presentation of data through the interaction of both human and computer agents. One example would be a mashup of data from one Web site with data from another site, which is then seamlessly presented to users through a third site, in which users can control how and what data combinations occur. Another example might be the automated collation of data about one user of a Web site which is then utilised to enable other users to contact or interact with that first user. The concepts behind these technologies are not new, but their application to the Web has become significant in recent years, thus justifying a description – Web 2.0 – that implies a different kind of World Wide Web from that of the 1990s.
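The mashup pattern described above can be sketched in a few lines of Python. The data sources, field names and values here are entirely invented for illustration (echoing the classic combination of property listings with map coordinates); a real mashup would fetch each feed over HTTP from the respective sites’ public APIs rather than from local stub functions.

```python
# A minimal sketch of the "mashup" pattern: data from two independent
# sources is joined and re-presented through a third service.

def fetch_listings():
    """Stand-in for one site's data feed: property listings (invented data)."""
    return [
        {"id": 1, "address": "12 High St", "price": 350000},
        {"id": 2, "address": "7 Park Ave", "price": 420000},
    ]

def fetch_coordinates():
    """Stand-in for a second site's data feed: geocoded addresses (invented data)."""
    return {
        "12 High St": (51.5074, -0.1278),
        "7 Park Ave": (51.5155, -0.0922),
    }

def mashup(listings, coords):
    """The 'third site': joins the two feeds into one combined view."""
    combined = []
    for item in listings:
        # Attach the second feed's coordinates to each listing, keyed on address.
        combined.append({**item, "location": coords.get(item["address"])})
    return combined

result = mashup(fetch_listings(), fetch_coordinates())
for row in result:
    print(row["address"], row["location"])
```

The point of the sketch is that neither source site need know about the other: the combining service consumes both feeds and controls the presentation, which is precisely what distinguishes the mashup from ordinary content syndication.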
Second, Web 2.0 refers to a business model for financial success in using the Internet to put people and data together in meaningful exchanges. At its simplest, Web 2.0 business approaches are those in which Web companies offer services that allow advertisers to reach consumers with marketing communications that are precisely targeted to the specific users most likely to engage with that marketing and at a time when the advertising will have its greatest effect (e.g., when a purchasing decision is about to be made). By offering a free and attractive service to users, these Web companies will create an audience that can be addressed very effectively by advertisers who will, in return, provide the revenues necessary for financial success. The attention economy was often proposed as the basis for business success for the Web in the 1990s (Goldhaber, 1997 amongst others); but Web 2.0 is presumed to be a radical advance in that it involves more sophisticated technologies to permit acquisition of detailed data about users, the delivery of more precisely targeted advertisements, and a closer integration between online behaviour and marketing communication.
Third, Web 2.0 attempts to describe services and activities that permit or create a new kind of media consumer who is more engaged, active and a participant in the key business of the Internet: creating, maintaining and expanding the ‘content’ which is the basis for using the Internet in the first place (Hinchcliffe, 2006). Once again, as with the other two approaches to defining Web 2.0, this situation is not entirely new: Internet users have always been thought of as ‘different’ to traditional media consumers, as being users and not audiences. However, Web 2.0 implies a significant acceleration in the number of active participants and the quality and attractiveness of their contributions to the point where provision of services to these ‘produsers’ (see Bruns, 2008), rather than delivery of content, is the key element in harnessing the technologies to achieve the successful implementation of the business approach outlined above.
Fourth and finally, Web 2.0 is a political statement of a kind of libertarian capitalism that appears to suit an era in which societies are more and more intensely ‘mediated’ by all forms of entertainment and information media, particularly in the economic first-world nations, such as the U.S., that are driving Internet development, but elsewhere as well. The politics of Web 2.0 are expressed in traditional democratic terms, emphasising freedom of choice and the empowerment of individuals through what O’Reilly (2005) has termed the “architecture of participation”. However, crucially, this freedom and empowerment relate to a more democratic form of media consumption and production, of making the Internet itself ‘democratic’. Web 2.0, it is claimed, positions users of the Internet, both large and small, as relatively equal and equally engaged participants. Of course, there have been similar claims about the democratising potential of the Internet for many years, well prior even to the emergence of the World Wide Web; what is different now, however, is that, within Web 2.0, the focus is on ‘democracy’ as a state of affairs within the Internet itself, rather than as a term suggesting ideals of equality in society as a whole that might be achieved through the democratising possibilities of networked communications.
And what ties all four of these elements together is that their current valence and application depend on articulating the renewed relevance of longer-established elements and principles of the Internet, despite the apparent decline caused by the dot-com crash at the turn of the century. Thomas, for example, concludes that “Web 2.0 has galvanised some sectors of the digital community to philosophize about wiredness [sic] in a way that was notably missing for quite some years, ever since the initial euphoria of pre-Web Internet cultures was damped down by the dot-com boom-and-bust in the rush to monetize every pixel”. More prosaically, O’Reilly (2007) identified that Web 2.0 seemed to make sense for him in 2004 because “a lot of programmers were out of work, and there was a general lack of interest in web applications … [but] we saw a resurgence coming… .” Indeed, O’Reilly’s founding assumption, as he came to formulate and promote Web 2.0, was that many Internet companies had survived and prospered despite the crash; thus, the crash provided a brutal but very accurate assessment of the kinds of approaches and business thinking that would work, and these should drive the next phase of Internet development (O’Reilly, 2005).
Proponents of Web 2.0 variously claim it to be a continuation of the World Wide Web but in a better way, or a return to the approaches and possibilities of the Internet before the World Wide Web, or even a second attempt at what the Web was originally trying to achieve. In many ways, all of them are correct, because there is no clear logical path of development that can be followed here. As noted above, all four key elements of Web 2.0 refer directly to approaches and behaviours before 2004; yet they all also claim to provide something which breaks with the assumed first version of the Web. This contradiction, of course, is precisely what makes Web 2.0 a marketable meme and, for some, a compelling statement of the direction of Web development – it is the past, present and future, all at once. O’Reilly, the father of Web 2.0 as much as Berners-Lee was the father of the World Wide Web, has consistently used the term (as have many others) to reinvigorate our collective enthusiasm for the ‘Web’, recognising both current and past explorations and initiatives, and harnessing their validity in a clarion call for the future.
Web 2.0 emerged from 2004 onwards, rapidly – though not without criticism – becoming a ‘given’ in debates and discussions of the Internet, so much so that people now use the rhetoric of Web 3.0 if they are attempting to look into the future (e.g., Martin, 2008). Web 2.0 is, as I have demonstrated above, a concept that, by framing many individual elements, allows for the development of a general and coherent picture of the Web that highlights ‘good’ Web practice (where good means one or more of profitable, useful, technologically clever, and socially valuable). In this manner, Web 2.0 fulfils the original aim of its chief promoter, Tim O’Reilly, who wished to find a way to ‘sell’ the Internet by reinventing the Web for a new round of investment and development. This selling process came at a time of continued weakness in the Internet industry, as it recovered from the crash, when it was believed that, without the kind of proselytising inherent in Web 2.0, there was a risk that society might not gain the kind of sustainable, useful data networking services which had been the promise of the boom in the 1990s.
What I now wish to consider is the way in which the general picture of the Web proposed by Web 2.0 can be related to the broader field of Internet and media development. In doing so, I attempt to show how Web 2.0 is a quite specific challenge to some orthodox assumptions about the general state of the Internet, media and telecommunications that is framed by the equally conceptual term, convergence. It may seem odd to imagine that Web 2.0 can be an argument against convergence. Surely Web 2.0, just like the World Wide Web in the 1990s and pre-Web Internet services before then, is part of the overall ‘convergence’ of media and information? To answer yes, however, fails to account for the fact that convergence is not simply a vague delineation of broad assumptions about the likely shape of infomedia futures, but a highly significant, historically located struggle for control over the particular form of that infomedia future, a struggle being waged by various competing elements seeking to shape that future primarily for their own profit.
Much has been written about convergence (both as part of the process of creating convergence and as critical commentary upon it) and I do not wish to rehearse these debates here. However, it is a reasonable summary to say that convergence has come to mean the process by which various instantiations of human behaviour involving information transfers and exchanges, previously separate, come together to occur in a comprehensive, interlinked manner. Convergence would break down the boundaries between distinct worlds of electronic media (both broadcast television and radio, and their subscription equivalents), print media (both newspapers and magazines), screened movies (both in cinemas and at home), computer gaming (both games as normally understood and also gambling), multimedia (as understood in its 1980s form) and, last but not least, voice and text telecommunications (both electronic and print-based forms such as the fax). More significantly, however, and as implied by the term itself, convergence would involve the confluence of several different activities, unified within a single new converged form. Thus, at one level, convergence might be thought to mean the delivery of movies, once available only in cinemas, over the same communications links used for voice communication; but, in the end, convergence would mean combinations such as watching a ‘television’ program while chatting, Internet-style, to other viewers at the same time, while also playing games with those viewers involving the content of that program, all via the same system of reception and transmission.
Convergence is best thought of as a reorganisation of the economic structures and social practices for the provision and consumption of a broad range of communication and information services, enabled by technological advances that lead to the digitisation of data and its circulation at ever-increasing speeds over computer-based networks involving direct connections through telecommunications links. While these technologies provide the conditions for the possibility of convergence, they do not determine its particular forms, because the technologies come to be applied only in ways that, in a predominantly free-market global economy, serve the needs of private corporations and their financial interests in the media and information industries.
Those needs are not necessarily coherent. Not only is there competition between various corporations, with different kinds of convergence suiting different interests, but corporations must also take account of the behaviour of media users, even as they attempt to shape that behaviour so it better suits their profit-oriented needs. Convergence is, therefore, a site for the ongoing struggles within capitalism to harness technological advance in pursuit of private profit by exploiting the desires of consumers. Essentially, convergence is making media consumers ‘work’ in the service of capital investment in the media. However, the particular characteristics of the technologies which drive convergence also undermine this process, by creating opportunities for capital investment outside of established structures and by empowering consumers/users so that they are much less readily shaped to suit established corporate goals.
Convergence, while often promoted, praised, and assumed, is not, however, a straightforward process; it is as much a source of confusion about the future of media and information in society as it is a clear guide to intended outcomes. Why is this? First of all, while we popularly date convergence as a phenomenon of the late 1990s, there is no doubt that the industries associated with all of the media and information activities which might ‘converge’ were already involved with convergence before this time. Thus, while the Internet, as it emerged in the 1990s, was characterised as the harbinger of convergence, it also disrupted existing assumptions about the trajectory of convergence and interrupted efforts already underway to achieve it. Equally, the ability to debate and define sensibly the relationship between convergence and the Internet was hampered by the fact that ‘Internet’ could mean variously (and simultaneously) the underlying protocols for interconnection; the infrastructure of computerised switches, routers and cable or wireless networks by which the Internet worked; the services that it provided; and the culture and experience of using the Internet.
While the ambiguity of the word ‘Internet’ has led to contradictory approaches to convergence, the underlying reason for that ambiguity also provides a way to analyse and understand why Web 2.0 is an argument against convergence. The ‘Internet’ is both the infrastructure by which data services are possible and also those data services themselves; these two aspects of the Internet are owned and controlled by different entities in the struggle of convergence, and involve different possibilities for economic exploitation, not least because, while the ‘Web’ as a data service requires that infrastructure, the infrastructure can be (and is) put to other purposes. Moreover, while some elements of the Internet’s architecture enable convergence in a theoretical sense (for example, the interconnection of disparate networks and devices, and the equal treatment of data packets), those elements can also inhibit the actual occurrence of convergence by complicating the degree of control which convergence competitors can exert.
The dot-com crash has already been noted for its importance in generating both the need for, and some of the underlying logic of, the development of Web 2.0. It was also significant for exposing the substantial oversupply of data traffic capacity in networks, representing a very large sunk investment for telecommunications providers from which they were unable to profit or even cover costs (Schaff, 2002). As a result, after the crash, one of the key tasks facing the owners of this infrastructure was to find ways to make it profitable. At the same time, there were continued pressures to extend, develop and improve the network – especially at the consumers’ end – so as to create a true broadband Internet. These pressures were in part a reflection of the rhetoric of government policy-makers pursuing rather idealistic goals of information economies and increased citizen engagement through high-speed networks (Allen, 2006). The pressures also stemmed from demands by consumers for higher-speed, always-available Internet access. Yet, at base, the business of telecommunications infrastructure providers remained the same: to invest money in creating networks that would allow those companies to extract significant surpluses from the use of the networks by others.
In this environment, the focus of predictions and developments began to fall more on the provision of voice and audiovisual program services, two traditional media and communications forms that had long preceded the ‘Internet’ but that could now be refashioned utilising Internet infrastructure. In this moment of convergence, which involved both traditional telephone-service-oriented telecommunications companies and newer subscription television corporations (which, in some countries, were one and the same), there emerged a necessary struggle over the real meaning of the Internet. The owners of network infrastructure, while promoting the Internet and claiming to be its pioneers, sought to emphasise the particular uses (voice and video communication; and television and movie presentation) which would suit their position as controllers of the infrastructure, because they would be the ones to benefit from and arbitrate the uses of this kind of Internet. The future of converged telecommunications was thought to lie in the so-called ‘triple play’ of Internet telephony (or VoIP), Internet television, and general Internet access (e.g., OECD, 2006).
However, in this triple play, the last use was the least significant for those wishing to promote investment in high-speed networking. The Web was not an opportunity here, but a threat – a kind of Internet use which excluded the providers of the infrastructure, and which also threatened the audience base of the media organisations which might partner those providers in the delivery of audiovisual content. The Web was, primarily, profitable to commercial Web users, not network or content owners. Moreover, because the Web of this era had been designed for largely lower-speed connectivity, it did not appear to offer a compelling reason for investment in, or purchase of, the high-speed networks which the telecommunications providers wished to build. Essentially, since the Internet had come to be popularly equated with the World Wide Web, a data service that did not require broadband (even though such access made it more usable), telecommunications providers saw, after the crash, both the need and an opportunity to reorient the marketable and consumable Internet away from this form, towards one more suitable to their long-term plans and ambitions.
To understand how this situation came to be requires us to consider the long-running history – from the 1970s at least – of the engagement with networked data services by telecommunications providers (see Carlson, 2007). Technologically speaking, this history runs in parallel with the development of the Internet up until the mid-1990s, with the possibilities of packet-switched data services delivered over telecommunications infrastructure being explored by two groups who, while overlapping, had distinctly different approaches. Existing telecommunications providers and their partners in traditional media were just as interested in data services via telecommunications as the computer scientists and engineers who are normally understood to be the heroic originators of the Internet, whether from public research institutions, other non-profit initiatives, or the emerging commercial data-services world.
By the early 1990s, there had emerged several different kinds of data services which prefigured the World Wide Web (see Kyrish, 1996; Abbate, 1999; Hauben and Hauben, 1997; Hafner and Lyon, 1996; Rheingold, 2000; Grossman, 1997; Herndon, 2007; Wikipedia also provides good summaries of these developments, as does Carlson, 2007). There were, variously, hobbyists and enthusiasts providing bulletin board services, some of which were marginally commercial and others private but free (most famously, the WELL in California). Activists were developing and using services such as Pegasus. There were a small number of relatively successful commercial services, such as Prodigy, CompuServe and AOL. Libraries offered online searching and other data services. The Internet (collectively describing numerous national networks such as AARNet in Australia and NSFNET in the U.S.) was established within universities, but increasingly with availability to, or interconnection with, other locations of use. Local public government initiatives were underway as well, bringing services to people within a particular community. UUNET provided a kind of networked service via USENET newsgroups. There had also been efforts to provide online text media – primarily newspaper content – via commercial services such as Viewtron in the U.S. and Prestel in the U.K., though many were no longer in operation by this time. And, of course, there were emerging possibilities for new kinds of data services, of which the last, and equally unsuccessful, was the Microsoft Network concept, briefly implemented in Australia, for example, as “On Australia”, in partnership with the dominant telecommunications company Telstra.
This complex array of competing, overlapping, and duplicating data services, all of which existed side by side and were largely concerned with networking a specific community based on either interest or locality, soon gave way to the world of the Web. Berners-Lee and others provided the technical breakthrough of HTTP and HTML (Berners-Lee, 1999), enabling the Internet to become much more usable and scalable, for both developers and users, as well as mimicking the graphical interfaces which dominated personal computing. Equally, the public became conscious of the Web through the combined, if competing, efforts of Microsoft in attempting to sell the Windows 95 operating system and of Netscape in establishing its Web browser and server software as the ‘must-have’ applications. Yet this was not ordained, nor inherent in those earlier developments. For example, long-time data services proponent John S. Quarterman defined the emerging pattern of network services as a ‘Matrix’, of which the Internet – and indeed the World Wide Web – was just one component. While, retrospectively, the Internet appears to encompass all of these lines of development, it was by no means inevitable that we would end up with a largely free, publicly oriented and highly distributed data network service of the kind which the Web represents.
Whatever the technical similarities amongst these developments, there were two different sets of assumptions about the business of delivering these services. First, telephone companies understood data exchange via telecommunications as an opportunity to sell different services to subscribers, utilising their existing infrastructure. Data services allowed telephone companies, in partnership with media organisations – particularly newspapers, which were threatened by consumers’ increasing use of broadcast electronic media and which were already moving into computer-based content production – to extract additional revenues from the phone services they already provided and maintained, with little further investment in hardware. Indeed, in the largely unsuccessful deployment of these services, consumers were expected to cover the cost of the specialised equipment as well as to pay for the data and for the communications costs. Media organisations, which had invested in producing the news and related content in the first place, also saw these services as a mechanism to generate increased returns on that initial investment. In other words, data services were not a radical break with current practice for these commercially oriented organisations but, instead, a new field for capitalisation and profit. Crucially, for telecommunications providers, the profit was to be made by exploiting their existing network. Notably, too, in their partnerships with media organisations, the focus fell upon content delivery, rather than the more diverse array of data services which have played such a central role in the subsequent popularisation of the Internet.
Developers working outside of the telecommunications industries took a different approach. As well as being more open to the possibility that data services such as email might supplement or even replace traditional communications, these developers treated the existing telecommunications infrastructure as a given, a service that was already provided and could be repurposed. The Internet as we know it emerged because the presumptions about the purpose and use of the infrastructure held by those who were more interested in connectivity over existing networks won out over the plans and aspirations of those who actually owned that infrastructure. That success was due in part to the fact that providers who emphasised email, chat and other computer-based communication – precisely because they were not in the business of providing voice and related communications – were more successful than those who emphasised pay-per-view content that was essentially the same as was available in print, but simply delivered in a different form.
The rapid emergence of the World Wide Web in the 1990s as a popular, public mechanism for data interchange, drawing on existing Internet applications and creating new ones, meant an end to the few existing data services that, using the same infrastructure, provided pay-per-use content with limited or no interconnection between systems. Moreover, the particular way in which the Web became, effectively, the sole public data service established two assumptions about such services that challenged the profitability of any future development that might attempt to return to the plans of telecommunications and media providers for pay-per-use systems. First, the academic origins of the Internet, which privileged non-proprietary interconnection and largely uncontrolled data exchange, made it all but impossible for telecommunications providers to utilise their network ownership to continue with, or reintroduce, proprietary access arrangements that would monetise the actual content rather than the service. Second, Internet consumers had come to presume that the services to be found online were, largely, there for free, once the costs of the initial connection were met. Aside from exceptions involving unusually high-demand, low-availability content, Internet content and applications providers also operated on the assumption that users would not pay and, instead, embraced an advertising model (as found in free-to-air television) rather than a pay-per-use model (as was normal in telecommunications).
Thus the Internet of the 1990s, through the dominance of the World Wide Web, challenged the capacity of the owners of both content and, more importantly, infrastructure, to make any significant exploitation of that ownership. At first, telecommunications providers were more than happy to promote and support the Web, accessed mainly via dial-up Internet access. For these companies, additional revenues could be earned from the provision of additional fixed telephone lines, timed calls to Internet Service Providers (ISPs) in many countries, and of course the wholesale charging of ISPs themselves. While the emergence of the Web had undone some of their plans for different kinds of data services, it was obviously a source of unexpected short-term profits. However, the economics of this provision were largely based on exploiting the existing public-switched telephone network in new ways, without the need for additional investment, effectively layering the Internet on top of existing voice services.
At the same time, while traditional media companies were caught up in the enthusiasm for online media generated by the World Wide Web, there was a fundamental incompatibility between the old and the new. In the form of the World Wide Web, the Internet seemed likely to draw attention (and thus advertising revenue) away from traditional media consumption; and, through file-sharing programs, actual content might escape from the control of the major media providers in a manner that would undermine its profit-making potential. Moreover, in some countries, media organisations had come to own infrastructure not dissimilar to that of telecommunications providers by which to provide subscription television services. While this infrastructure was, soon enough, turned to the task of giving Internet access to consumers (and at speeds much higher than available through dial-up), the content provided over this infrastructure was, in almost all cases, not actually monetised by the companies concerned. Efforts (notably in the U.S. by Excite@Home) to create Internet content services for cable subscribers failed (e.g., Rose, 2002), leaving those media companies involved in subscription television services in a similar position to the large telecommunications companies: their ownership of the network infrastructure was simply a means by which their customers could access other products and services, providing a revenue stream to companies offering Web services, rather than being a source of significant financial advantage.
The result of this situation, in recent years, has been a very strong emphasis by providers of network infrastructure on the importance, future significance and general relevance of services other than the Web. In doing so, they have argued strongly for convergence of a kind in which traditional owners of media content, along with traditional owners of infrastructure, create partnerships for an Internet dominated by high-bandwidth circulation of audiovisual products, including communications. They have also been very active in emphasising that future investment in network development – to achieve the kind of broadband networks which policy-makers have claimed to be necessary for economic prosperity – can only proceed if there is a clear mechanism by which that investment can be recouped. In America, this insistence has found expression through the arguments for and against network neutrality (e.g., Wu, 2003; Federal Trade Commission, 2007). In this debate, essentially, infrastructure owners claim that they deserve a share of the profits earned by Web-based companies (via their advertising revenues). These owners are also attempting to create arrangements to collect additional payments from users (both providers and consumers) for the content transmitted over the network, above and beyond normal connection charges, based on proposed quality-of-service guarantees and the like.
The first few years of this decade have, therefore, seen a different kind of argument for convergence than that proposed, loosely, in the 1990s. This argument returns to the origins of the telecommunications sector’s interest in exploiting its ownership of infrastructure but, because the possibility of monetised data services was lost when the World Wide Web emerged, it now focuses on voice and video communication rather than data services. The network neutrality debate has brought into clear sight the underlying conflict between telecommunications providers, on the one hand, and, on the other, the kind of Web-based industries that have prospered via the former’s infrastructure. Convergence is now defined by the harmonising of interests between those telecommunications providers and the media corporations who wish to utilise that infrastructure for financially sustainable content delivery.
In this situation, we can see that Web 2.0, as it emerged in 2004 and since, is not just a marketing move designed to return investment to the Internet industries, but also an effort to promote those industries and their role in networked society in the face of threats from the traditional giants of media and telecommunications, who see convergence as implying the centrality of their approaches, to the detriment of the Internet as a distinct industrial sector. Web 2.0 addressed the perceived weakness of the Web as the least significant element in the discourse of the ‘triple play’ of voice, video and data which has dominated the past several years of debate about the future of the Internet.
The surrounding context which I have just discussed makes clear the significance of the timing of Web 2.0’s emergence as a conceptual frame by which to promote and describe the benefits of the Web as an Internet-delivered data service. Web 2.0 is, in this sense, a general proposition that the Web is not just one of many components of the convergence of media and Internet. Were the Web merely one such component, telecommunications and media corporations could greatly increase their ability to regain control over the trajectory of development of integrated, interactive media (as anticipated in the 1980s, but severely disrupted by the World Wide Web in the 1990s) by sidelining the Web and focusing instead on Internet applications that were not ‘the Web’. To demonstrate the importance of Web 2.0 in this respect, I want to conclude by returning to the definition that I advanced above, examining the relationship between the four key elements of Web 2.0, and showing how each constitutes a specific critique of, and argument against, convergence where that term is understood to mean the domination of the Internet by media and telecommunications providers.
Firstly, Web 2.0 emphasises technologies for the creation and operation of Web sites and services that make the Web more and more like a computer program, rather than a collection of media channels or forms. Rather than making a Web that is based on the precepts of print and electronic media, Web 2.0 emphasises that the Web involves cybernetic programs that offer and take input from the world and process it so as to achieve the goals of users. Sites involve programming in the computer science sense of the word and thus are not at all like television ‘programs’; nor can they be understood by those whose business is to create such ‘programs’.
Secondly, Web 2.0 proposes targeted advertising as the basis for profitability, implicitly claimed to be far superior to traditional media advertising because it creates a climate in which advertising is part of the activities of users, rather than simply being collocated with their media consumption. In this respect, Web 2.0 claims that the Internet, far from converging with traditional media, offers a significantly superior kind of advertising appeal precisely because it is not about consuming media products (and the advertising with them) but about doing things which are directly connected to marketing communications (such as searching for information on products).
Thirdly, Web 2.0 promotes an understanding of the user of networked computing services that is completely different to the traditional notion of media audiences. Within Web 2.0, users are primary producers of content. Web 2.0 therefore emphasises applications development, rather than content development, undercutting the power of the media corporations who dominate the creative content industries. It also proposes that the model of ideal ‘media’ behaviour is producing one’s own content, rather than consuming someone else’s, unless that other content has been produced by users like oneself.
Finally, Web 2.0 is also about the politics of networked services and claims to be the vehicle for increased democratisation. It legitimises this claim by favourably contrasting the equality and engagement of users and service providers within Web 2.0 with the apparently undemocratic relations between audiences and producers/broadcasters in the traditional media that create a kind of hegemony in which the latter come to choose for those audiences. Of course, Web 2.0 limits its understanding of democracy to the freedoms we might wish for in using ‘the media’, but nevertheless, can claim considerable authority through this easy contrast. Further, the particular politics of Web 2.0 helps bring together the elements of technology, economics and culture of use already outlined – privileging the fusion of humans and technologies in ways that promote users’ liberation from ‘the media’ as a corporate monolith, in harmony with the freedom of web businesses to profit from those users through their technological engagement.
Web 2.0, in its promotion of these features of Internet use, is itself open to significant criticism for the way it validates a kind of advanced, promotional, entrepreneurial capitalism that binds users to profit-making service providers via the exploitation of those users’ immaterial labour. Web 2.0 also serves as an ideology for the creation of new forms of dependence between individual humans and corporations who, by monopolising and controlling the network activities through which key forms of human sociality become possible, can benefit disproportionately from that dependence. Yet, as an argument against convergence, Web 2.0 suggests that, in the struggles for corporate domination and control over the technologies that circulate information through our cultures, and make meanings from that information, there is no unanimity of interests between those who see themselves as old media making themselves new, and those who, by adopting and working within the frame of Web 2.0, seek to make media something else altogether.
About the author
Matthew Allen is Associate Professor of Internet Studies at Curtin University of Technology in Australia.
Notes
1. One sign of that change, for example, was the partnership in Australia between Yahoo! and television network Channel Seven that created a new, television-oriented version of Yahoo! Australia (Yahoo!7); many other arrangements were put in place to tie Yahoo! to specific audiovisual content linked with the traditional providers of that content in the television, movie and music industries.
2. E.g., Raab, et al., 1996, pp. 283–285.
3. Thomas, 2006, p. 389.
4. At the moment, at least, this use of Web 3.0 is purely a rhetorical move whose significance lies more in what it says about the prevalence of its predecessor term.
5. See Kitchin, 1998, p. 4; and Salus, 1995.
6. My thanks to Keith Herndon, a graduate student of mine, for information on this issue.
References
Janet Abbate, 1999. Inventing the Internet. Cambridge, Mass.: MIT Press.
Matthew Allen, 2007. “Web 2.0: Discursive entrapment, empowerment or both?” Internet Research 8.0 – Let’s Play! Annual Conference of the Association of Internet Researchers, Vancouver, B.C. (October). Available from the author.
Matthew Allen, 2006. “Broadband technologies and techno-optimism and the hopeful citizen,” In: Joël Weiss, Jason Nolan, Jeremy Hunsinger and Peter Trifonas (editors). International handbook of virtual learning environments. Dordrecht: Springer, pp. 1525–1548.
Michael Arrington, 2007. “Breaking: Yahoo’s Terry Semel Quits,” TechCrunch (18 June), at http://www.techcrunch.com/2007/06/18/yahoo-ceo-terry-semel-resigned/, accessed 1 February 2008.
Tim Berners-Lee, 1999. Weaving the Web. San Francisco: HarperCollins.
Axel Bruns, 2008. Blogs, Wikipedia, Second Life and Beyond: From production to produsage. New York: Peter Lang.
David Carlson, 2007. “The Online Timeline,” David Carlson’s Virtual World, at http://iml.jou.ufl.edu/carlson/timeline.shtml, accessed 18 January 2008.
Cory Doctorow, 2006. “Can Anyone own ‘Web 2.0’?” BoingBoing (26 May), at http://www.boingboing.net/2006/05/26/can-anyone-own-web-2.html, accessed 10 October 2007.
Federal Trade Commission (FTC), 2007. Broadband connectivity competition policy, Washington D.C.: FTC, at http://www.ftc.gov/reports/broadband/v070000report.pdf, accessed 4 February 2008.
Michael Goldhaber, 1997. “The Attention Economy and the Net”, First Monday, volume 2, number 4 (February), at http://journals.uic.edu/fm/article/view/519/440, accessed 3 February 2008.
Paul Graham, 2006. Interview about Web 2.0, at http://paulgraham.com/web20interview.html, accessed 1 October 2007.
Wendy Grossman, 1997. net.wars. New York: New York University Press, at http://www.nyupress.org/netwars/contents/contents.html, accessed 10 February 2008.
Katie Hafner and Matthew Lyon, 1996. Where wizards stay up late: The origins of the Internet. New York: Touchstone.
Michael Hauben and Rhonda Hauben, 1997. Netizens: On the history and impact of Usenet and the Internet. Los Alamitos, Calif.: IEEE Computer Society Press.
Keith Herndon, 2007. “The history and hype of early online media endeavors,” Internet Research 8.0 – Let’s Play! Annual Conference of the Association of Internet Researchers, Vancouver, B.C. (October).
Dion Hinchcliffe, 2006. “The State of Web 2.0”, Social Computing Magazine (2 April), at http://web2.socialcomputingmagazine.com/the_state_of_web_20.htm, accessed 4 January 2008.
Rob Kitchin, 1998. Cyberspace: The world in the wires. New York: Wiley.
Alec Klein, 2003. Stealing time: Steve Case, Jerry Levin, and the collapse of AOL Time Warner. New York: Simon & Schuster.
Richard Koman, 2006. “Should Semel go? Is Yahoo a media company? Is that a good thing to be? (Yes, Yes, No.).” SiliconValleyWatcher (10 December), at http://www.siliconvalleywatcher.com/mt/archives/2006/12/should_semel_go.php, accessed 1 February 2008.
Sandy Kyrish, 1996. From Videotex to the Internet: Lessons from online services 1981–1996. Melbourne: La Trobe University Online Media Program, at http://www.latrobe.edu.au/teloz/reports/kyrish.pdf, accessed 13 January 2008.
Mary Madden and Susannah Fox, 2006. Riding the Waves of “Web 2.0”: More than a buzzword but still not easily defined. Washington, D.C.: Pew Internet and American Life Project, at http://www.pewinternet.org/pdfs/PIP_Web_2.0.pdf, accessed 10 October 2007.
Rebecca Martin, 2008. “Webvolution,” Catapult: Making ideas happen (Australian Broadcasting Corporation), at http://www.abc.net.au/catapult/indepth/s2091002.htm, accessed 12 February 2008.
John Musser, 2006. Web 2.0 – Principles and practices [Executive summary], O’Reilly Media, at http://www.oreilly.com/catalog/web2report/chapter/web20_report_excerpt.pdf, accessed 10 October 2007.
Organisation for Economic Cooperation and Development (OECD), 2006. Multiple play: Pricing and policy trends, Report from OECD Working Party on Telecommunication and Information Services Policies, at http://www.oecd.org/dataoecd/47/32/36546318.pdf, accessed 10 December 2007.
Tim O’Reilly, 2007. “Today’s Web 3.0 nonsense blogstorm,” O’Reilly Radar (4 October), at http://radar.oreilly.com/archives/2007/10/web_30_semantic_web_web_20.html, accessed 1 February 2008.
Tim O’Reilly, 2005. “What is Web 2.0: Design patterns and business models for the next generation of software,” O’Reilly Group, at http://www.oreilly.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html, accessed 1 October 2007.
Charles Raab, Christine Bellamy, John Taylor, William H. Dutton and Malcolm Peltu, 1996. “The information polity: Electronic democracy, privacy, and surveillance,” In: William H. Dutton (editor). Information and communication technologies: Visions and realities. Oxford: Oxford University Press, pp. 283–299.
Howard Rheingold, 2000. The virtual community: Homesteading on the electronic frontier. Revised edition. Cambridge, Mass.: MIT Press, at http://www.rheingold.com/vc/book/, accessed 3 February 2008.
Frank Rose, 2002. “The $7 billion delusion,” Wired (January), at http://www.wired.com/wired/archive/10.01/excite_pr.html, accessed 19 February 2008.
Peter H. Salus, 1995. Casting the net: From ARPANET to Internet and beyond. New York: Addison-Wesley.
William Schaff, 2002. “Taking stock: Oversupply and competition keep telecom equipment stocks down,” Information Week (18 March), at http://www.informationweek.com/story/showArticle.jhtml?articleID=6501412, accessed 1 February 2008.
Sue Thomas, 2006. “The end of Cyberspace and other surprises,” Convergence, volume 12, number 4, pp. 383–391.
Timothy Wu, 2003. “Network neutrality, broadband discrimination,” Journal of Telecommunications and High Technology Law, volume 2, pp. 141–179, and at http://ssrn.com/abstract=388863, accessed 2 February 2008.
Copyright © 2008, First Monday.
Copyright © 2008, Matthew Allen.
Web 2.0: An argument against convergence
by Matthew Allen
First Monday, Volume 13, Number 3 - 3 March 2008