
E-books: Histories, trajectories, futures by Michael M. Widdersheim



Abstract
This essay traces the historical trajectory of e-books in the U.S. and imagines their possible futures. Legal, economic, and technical developments that led to contemporary e-books reveal a tension between commercial and non-commercial programming. Commercial e-book designs control end uses, reduce production and distribution costs, stimulate consumption, and monitor user behaviors; however, alternative producers and users on the periphery continue to challenge these centralizing tendencies.

Contents

Introduction
Technological stirrings of e-books
Pre-millennial U.S. publishing industry
Commercial strengths and weaknesses for online e-books
Managing digital rights
Deregulating new e-book markets
DMCA: Helping the invisible hand
1999 and the turning point for commercial e-books
E-books and hyper-commercialized reading
E-books and the student surveillance economy
Open access counter-imaginaries
Conclusion

 


 

Introduction

New technology is itself a product of a particular social system, and will be developed as an apparently autonomous process of innovation only to the extent that we fail to identify and challenge its real agencies. — Raymond Williams [1]

 

Like all the other things human beings build and discover, computers can only be understood productively when they are seen as part of the cultural and historical contexts out of which they emerge. — David Golumbia [2]

 

We must criticize technical images on the basis of their program. We must start not from the tip of the vector of meaning but from the bow from which the arrow was shot. Criticism of technical images requires an analysis of their trajectory and an analysis of the intention behind it. — Vilém Flusser [3]

 

E-books (electronic books), though a seemingly new media type, already have a long and complex history. The story of e-books becomes complicated once one realizes that, as with all digital media, there is no ur-e-book, no single kind detached from its environment and stripped of its politics. To understand e-books as a media type therefore means to understand them within particular contexts and under certain conditions.

An e-book, as I define it in this essay, is an assemblage of four necessary components: 1) a book-length file with book-like content, including text or images; 2) a hardware and software platform that is used to create, send, receive, view, and manipulate the text or image file; 3) a communication infrastructure that may include discs, protocols, cables, wireless signals, or other carriers that preserve e-book files and allow them to circulate among networks of machines; and, 4) wetware (people) who create, use, and share e-book files. According to my definition, a standalone book-like file is not an e-book without having been created by people who use hardware and software and transmit e-book files to nodes across a physical or virtual network. With this definition in mind, any cultural understanding of e-books must begin by examining each of the four components: files, platform, network, and people. Beginning the exploration in this way uncovers when something distinctively e-book-like began to appear, for what purposes e-book technologies were created, and by whom. Answering these questions may provide a sense of the programming and trajectories that influence e-book futures.

In this essay, I explore the nature of e-book components using a historical, legal, technical, and economic matrix. I show how e-books developed to become a largely space-biased media type conducive to central administration and control. Commercial book publishers have endeavored to create a monopoly of knowledge with respect to e-books by controlling how e-books are circulated and used (Innis, 2007; 2008). This monopoly was forestalled by a number of factors, including the appearance of third-party online vendors, non-commercial e-book forms, and self-publishing. I also show how the programming of e-books developed within particular apparatuses by certain envisioners in order to serve specific purposes (Flusser, 2011). Commercial e-books are programmed to elicit data capture and increase consumption. The programming of these forms contrasts sharply with that of freely distributed e-books.

Based on the trajectories of e-books that I identify, I make concrete forecasts about what future commercial e-books might look like and what sectors will likely adopt them. A strong contender for e-book adoption is education. The distant future of e-books, however, depends on how well space-biased, commercial e-book forms remain counterbalanced by time-biased, non-commercial alternatives. There is some evidence to suggest that balance is being restored.

 

++++++++++

Technological stirrings of e-books

Manley and Holley (2012) begin their story of e-books in the U.S. in the interim between the World Wars. They describe how elements of the technical apparatus of e-books, the circuitry and interfaces of portable e-reader devices, branched out from and drew upon developments that led to the personal computer. They follow the idea of the computer — and thus the e-book — to the writings of Vannevar Bush. Bush (1945) imagined a hypertext-like device that would store, find, and display informational sources together in a single apparatus. He called this device the memex: “a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” The memex that Bush imagined would link rhizomatically organized data in a way that would assist researchers in finding and synthesizing knowledge amidst a deluge of information stored on physical, disparate media. Bush’s idea was a mash-up of existing media technologies of his time: cameras, film, stereoscopes, the Vocoder, facsimile, punch cards, keyboards, microfilm, and television. The impetus and pressure that led to Bush’s vision were the massive, imminent destruction that Bush sensed would result from science’s militaristic campaigns in the European and Pacific theaters. Legacy e-book hardware, at least in part, was first imagined as a peace-making technology during a time of global war.

Bush’s idea of an automated, electronic, information-processing medium was later re-imagined and co-developed by Alan Kay and other researchers at Xerox’s Palo Alto Research Center (PARC) in the late 1960s and early 1970s, culminating in the Alto in 1973. Many of the first microcomputers were being developed around the world during this time, and mainframe supercomputers were already used by many research facilities. The Alto is distinctively related to e-books because it was envisioned as a portable, notebook-sized machine like a print codex [4]. The Alto was never miniaturized beyond desktop size, however, and the first actual laptop-sized computer — the Dynabook, a portmanteau of “dynamic book” — was later developed by William Moggridge at GRiD Systems Corporation and manufactured by Toshiba in 1986 [5]. The word Dynabook, coined by Kay, was “influenced by the writings of Marshall McLuhan who described the profound cultural impact of the Gutenberg printing press” [6]. Thus, the technological base for a portable e-book platform — its digital processing, memory, and visual display — was established by the late 1980s, drawing from the imaginations of Bush, Kay, and to some extent McLuhan, and it was sponsored by a commingling of commercial and basic research interests. Kay imagined that in the 1990s there would be “millions of personal computers ... the size of notebooks” [7]. These ideas would transform reading and office work worldwide.

E-book components are not limited to physical hardware. While e-books are read or viewed using computers or other digital electronic devices, their most basic component is the file that holds the content. The first e-book file format was .txt — plain text. Plain text is a near-universal file type for exchanging alphanumeric symbols; its character encoding was standardized as the American Standard Code for Information Interchange (ASCII) in 1963 to “facilitate the exchange of digital information” [8]. Based on telegraphic encodings, ASCII encodes characters as seven-bit binary integers [9]. ASCII was agreed upon by representatives from many U.S. industries for the purposes of interoperability. Michael Hart was the first to create and circulate plain text files of book-like content in 1971 in what would become known as Project Gutenberg [10]. The philosophical principles as stated by Hart (1992) were ease of use, accessibility, cheap reproduction, and interoperability. Hart chose and continued to use .txt files because they were usable by nearly all digital computing devices.
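To make the seven-bit encoding concrete, the short Python sketch below prints the ASCII code points of a few characters in decimal and binary form. It is an illustration only and assumes nothing beyond a standard Python 3 interpreter.

    # Illustrative sketch: characters and their seven-bit ASCII code points.
    for char in "Ebook":
        code = ord(char)  # numeric code point of the character
        print(f"{char!r} -> {code:3d} -> {code:07b}")  # decimal and seven-bit binary

    # Output:
    # 'E' ->  69 -> 1000101
    # 'b' ->  98 -> 1100010
    # 'o' -> 111 -> 1101111
    # 'o' -> 111 -> 1101111
    # 'k' -> 107 -> 1101011

Because every machine that implements ASCII agrees on these code points, a .txt file written on one system can be read on virtually any other, which is precisely the interoperability Hart valued.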

Hart used his computer hours at the University of Illinois Materials Research Lab to transmit e-book files to other computers. Until the mid-1990s, however, e-book files were largely circulated to machines and people through physical media carriers. Novels by Michael Crichton, Lewis Carroll, and Douglas Adams, for example, were published as text files by Voyager on floppy disks (Victoria and Albert Museum, 1995). Similarly, art books were published as image files on videodiscs, precursors to DVDs. The storage and circulation of e-book texts on physical media such as magnetic disks, laserdiscs, and CD-ROMs was eventually superseded by storage and circulation on servers and digital networks. The Transmission Control Protocol (TCP), Internet Protocol (IP), other network standards, and physical network infrastructures were also necessary precursors to contemporary e-books. Networking standards and infrastructure were developed throughout the 1970s, 1980s, and 1990s largely by U.S. agencies and research labs, including the Defense Advanced Research Projects Agency (DARPA), Xerox PARC, the Department of Defense (DoD), and the National Science Foundation (NSF) (Greenstein, 1998). Network technologies were imagined primarily by military developers as a robust and secure communication medium.

A number of proprietary, handheld devices for reading and manipulating e-book texts were developed throughout the 1980s and 1990s, including the Sony Data Discman and Sony Bookman, SoftBook Press’s SoftBook reader, NuvoMedia’s Rocket eBook Reader, and the EveryBook by World Electronics (Manley and Holley, 2012). The Discman and Bookman read e-book text files using a CD-ROM drive; the SoftBook reader communicated with SoftBook Press’s online store via telephone jack and modem; and the other devices received e-book files from online sellers via the Internet. Pre-millennium e-book commercialization was largely unsuccessful because protective legislation was not yet in place and e-book technologies were still developing.

There is no clear way to summarize the programming and values inherent in early e-books. Visionaries such as Hart and Kay imagined that microcomputing technologies would soon be ubiquitous, and indeed they would be. Bush imagined that personal libraries would lead to peace and enlightenment. Shortly thereafter, the DoD and DARPA sponsored the development of a research communication network. Commercial publishers, manufacturers, and distributors began to market e-book files and reading devices even while Project Gutenberg created, accumulated, and circulated public domain texts for free. Personal computing and the Internet expanded significantly in the 1990s, but digital e-book files did not circulate widely, nor were handheld e-reading devices successful as products. Personal computing technologies were developed within a concatenation of quasi-commercial basic research labs and commercial R&D facilities with funding from both private and public agencies. E-book beginnings were messy and contradictory, and no clear trajectory had yet emerged. By the mid-1990s, it may have only been possible to see that computing technologies would continue to become smaller, faster, and more sophisticated, and that the Internet infrastructure would continue to grow in connectedness, content, and speed. Energy from other areas was necessary to push e-books in new directions as personal, networked computing devices continued to become a powerful cultural force.

 

++++++++++

Pre-millennial U.S. publishing industry

Computing and networking technologies are only one side of the e-book story. In the mid-1990s, developments of computing and the Internet intersected with the trajectory of a U.S. commercial publishing industry that had been gaining momentum since its inception in the colonies in the 1600s [11]. Early publishers in the American colonies colluded to maintain artificially high prices, but after being undersold by authors’ unlicensed works, publishers began to seek copyright protection both from Britain and from local colonial authorities. Publishers deployed the notion of an “author with rights” to establish a limited monopoly and licensing law in Britain in 1710 — the Statute of Anne [12]. Rice (1997) traces how notions of authorship also changed in America in the latter half of the eighteenth century to “focus on the materiality rather than the activity of authorship” [13]. Copyright, then, was presented as an incentive for authors to produce creative works. Following the passage of the U.S. Constitution in 1787, which guaranteed rights to authors and inventors, the 1790 Federal Copyright Act defined authorship within an economic framework — the “law’s attempt to extend the ideology of the market to the communicative and intentional activity of authorship” [14]. While many authors supported this transformation, not all conceived of authorship in economic terms; Washington Irving was one exception (Rice, 1997). Despite copyright protections, many early writers were still financially unsuccessful (Charvat, 1992). Authors were granted copyright protections, but the laws were in fact instituted to protect publishers’ and printers’ rights, not authors’ [15]. Because authors’ rights were transferable, publishers often benefited more from copyright legislation than authors.

Following the war with and separation from Britain, the publishing industry in the U.S. grew during the 1800s as a result of population increase, technological advancements in the steam press and typesetting, and the growth of a reading public. Publishers increasingly wanted to import, copy, and resell foreign works, especially British ones, cheaply without paying royalties. Earlier emphases by publishers on copyright and authorship were overturned by the courts in response to publishers’ civic republicanist rhetoric and insistence on intellectual freedom [16]. Domestic publishers continued to collude to keep prices high and to sell foreign works without paying royalties, until fringe publishing houses appeared in the second half of the 1800s and destabilized these oligopolistic practices. In order to reestablish stability, the most powerful publishing houses in the industry pushed again for copyright protections, resulting in the International Copyright Act of 1891 [17]. This act granted U.S. protections to copyright holders from other countries.

Commodification of books, oligopolies of book sellers and publishers, and a growing reading public continued into the 1900s. The Copyright Act of 1909 evidenced how Congress would bend to the interests of industry and revise copyright law to account for new technologies and new methods of exploitation. The 1909 Act inaugurated the notion of corporate authorship. The interests and rhetoric of the publishing industry were clear: “Authorship could not be considered mystical or romantic after 1909. It was simply a construct of convenience, malleable by contract.” [18] There were real authors who benefitted financially from book sales, but authorship was used as a construct to serve the interests of publishers.

The visible hand of the publishing industry continued to manage and manipulate market conditions favorable to its own interests. In one instance in the early 1930s, publishers led by Knopf conducted a propaganda campaign to keep book prices artificially high and to stigmatize informal and library lending of books [19]. This campaign resembled later polemics against “piracy.” Photocopying was the next technological threat to the publishing industry. Other industries including film and music also felt threatened by new recording devices, such as magnetic tapes and audio and video cassettes [20]. Lobbying by the media industries resulted in the Copyright Act of 1976, which, due to the prevalence of copying technologies, more explicitly outlined fair use exceptions. Litman (1987) describes the process of drafting the 1976 bill:

A review of the 1976 Copyright Act’s legislative history demonstrates that Congress and the Registers of Copyrights actively sought compromises negotiated among those with economic interests in copyright and purposefully incorporated those compromises into the copyright revision bill, even when they disagreed with their substance. Moreover, both the Copyright Office and Congress intended from the beginning to take such an approach, and designed a legislative process to facilitate it. [21]

In the 1960s when negotiations for the bill began, the publishing industry was a powerful economic and political force capable of shaping legislation tailored to its interests. Other core copyright industries at this time included those involved with motion pictures, sound recordings, music publishing, theater, advertising, radio, television, and cable broadcasting. Computer software businesses would later shape copyright statutes [22]. Copyright legislation was not drafted and initiated by common citizens, but instead resulted from interest group politics:

As the economic power of the core copyright industries has grown, so has their political power. Each of these industries is represented on Capitol Hill by at least one, and often more than one, major trade association. These associations have large budgets and considerable clout ... . New copyright legislation typically is initiated at the copyright industries’ request, and the relevant legislative committees routinely invite representatives of the copyright industries to submit proposed statutory language. Trade associations representing the major copyright industries also play a significant role in both international trade negotiations and unilateral trade proceedings. [23]

Despite the supposed democratic nature of government in the U.S., a critical view of bill formation shows that only the interests of a limited few were programmed into the legislation that affected how and what people read:

The amorphous “public” comprises members whose relation to copyright and copyrighted works varies with the circumstances ... . Although a few organizations showed up at the conferences purporting to represent the “public” with respect to narrow issues, the citizenry’s interest in copyright and copyrighted works was too varied and complex to be amenable to interest group championship. Moreover, the public’s interests were not somehow approximated by the push and shove among opposing industry representatives. To say that the affected industries represented diverse and opposing interests is not to say that all relevant interests were represented. [24]

The publishing industry was therefore in a position of power that could dictate the forms of reading practices. The industry could effectively shape the direction of books and publication by determining how citizens could use and distribute legally purchased books.

The publishing industry that developed in the U.S. beginning in the nineteenth century and continuing through the twentieth was interested in perpetuating books as commodity forms. The industry used copyright legislation to its advantage wherever it could, strategically lobbying Congress along the way to change the legislation to better suit its own purposes. Copyright protections developed due primarily to publishers’ efforts and primarily benefitted them, not authors. In the case of hardcover book sales, for example, publishers received 50 percent of the sale price and authors received 15 percent (Trachtenberg, 2010). Throughout the twentieth century, the publishing industry struggled to combat threats to sales that were lawful and built into copyright exceptions, including the right of first sale. The first sale doctrine, an exception built into copyright law, allowed social circles and libraries to circulate legally purchased books through lending and borrowing practices and to resell books second-hand. The publishing industry was unsuccessful in eliminating these common practices — lending and resale — both in its lobbying efforts to change laws and in its propaganda campaigns to change sociocultural practices. Exceptions remained in copyright law due at least in part to considerations of libraries, educators, and second-hand resellers.

One of the primary concerns from the beginning of the U.S. publishing industry was how to control the uses of books after they were bought. Resale and lending, in the eyes of publishers, were lost revenue both for publishers and authors. Publishers believed that this revenue could be recaptured by criminalizing resale, lending, and copying. Each iteration of copyright legislation represented an intensification of this desire to control the uses of books.

 

++++++++++

Commercial strengths and weaknesses for online e-books

As the publishing industry entered the last quarter of the twentieth century, the threat of unlawful copying and distribution increased as a result of the emergence of recording technologies, such as the photocopier and audio and video cassettes. However, the near ubiquity and inherent characteristics of the combination of personal computing and networking technologies also offered book publishers, sellers, and authors opportunities.

The first strength of e-book files, as discovered by Michael Hart of Project Gutenberg, was their infinite replicability. Unlike copies of analog materials, digital files were identical, their quality did not change, they were easily and quickly made, and they required near-zero material costs. Digital books offered the industry a solution that would minimize production and shipping costs. The industry could utilize an already existing digital infrastructure — the Internet — to distribute its product. Low material costs and small size meant that the industry could avoid the problems of overproduction and warehousing it faced with print materials. E-books could also be sold at lower cost than physical books. This market advantage benefitted publishers, who could sell e-books at competitive rates, but it raised concerns for authors, who collected smaller royalties under such deals. Trachtenberg (2010) clarifies how individual e-book sales returned fewer royalties to authors than traditional book sales:

A new $28 hardcover book returns half, or $14, to the publisher, and 15 percent, or $4.20, to the author. Under many e-book deals currently, a digital book sells for $12.99, returning 70 percent, or $9.09, to the publisher and typically 25 percent of that, or $2.27, to the author.

E-book contract terms caused tension between authors and publishers and increased competition among authors to gain publisher contracts (Trachtenberg, 2010).
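The arithmetic in the Trachtenberg quotation can be restated compactly. The Python sketch below simply reproduces the quoted figures; the prices and percentage splits come from the quotation itself and are not a general industry formula.

    # Restating the Trachtenberg (2010) figures quoted above.

    # Hardcover: $28 list price; 50% to the publisher, 15% of list to the author.
    hardcover_price = 28.00
    hardcover_publisher = hardcover_price * 0.50   # $14.00
    hardcover_author = hardcover_price * 0.15      # $4.20

    # E-book: $12.99 price; 70% to the publisher, and the author typically
    # receives 25% of the publisher's share.
    ebook_price = 12.99
    ebook_publisher = ebook_price * 0.70           # about $9.09
    ebook_author = ebook_publisher * 0.25          # about $2.27

    print(f"Hardcover author royalty: ${hardcover_author:.2f}")   # $4.20
    print(f"E-book author royalty:    ${ebook_author:.2f}")       # $2.27

On these terms, the author's per-copy royalty drops by nearly half even though the publisher's percentage share of the sale rises.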

The strengths of e-books — cheap and rapid production and distribution — were also their weaknesses. Not just publishers, but any amateur “desktop publisher” could cheaply and rapidly copy and transmit e-book files. Digital files were not scarce resources like physical books. Merely criminalizing most kinds of reproduction and distribution of copyrighted works was not enough for the publishing industry to invest in e-books, because the 1976 copyright law was not practically enforceable. Authors who wished to self-publish and profit also could not do so because the files could easily be copied unlawfully. Investment in an e-book sector still posed too high a risk because the likelihood of unlawful duplication and distribution was high. The publishing industry thus sought a technological solution that might inhibit piracy and ensure revenue by preventing uncontrolled circulation. The solution was digital rights management coding.

 

++++++++++

Managing digital rights

Digital rights management (DRM) technologies are a lock-and-key system: an envelope-like program that restricts what can and cannot be done with the digital file it encases. DRM technologies for digital files began to appear in the mid-1990s in response to the networked computing that grew in the late 1980s and early 1990s. DRM technologies were first designed for CD-ROM discs, but they later expanded to encrypt music files (MP3s), text files (PDFs), movie files (iTunes), and e-book files (Amazon) (Rosenblatt, 2009). The first DRM developers were IBM and Electronic Publishing Resources (EPR). EPR was the first to develop an “end-to-end” system for DRM (Rosenblatt, et al., 2002). Another significant event for the book publishing industry was the publication of Stefik’s (1996) “Letting loose the light: Igniting commerce in electronic publication.” In this article, Stefik outlined his vision for a “trusted system,” a combination of hardware and software that would allow owners of digital objects to automate the remote uses of those objects according to the rules they set. DRM as a trusted system would prohibit users across a network from viewing a digital resource without first paying, for instance, or from copying and distributing the resource unlawfully. The upshot of trusted-system DRM for e-book publishers and potential self-publishing authors was that it became technically possible to control uses of e-book files on the Internet. Publishers’ control of e-book uses seemed close at hand.
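Stefik's trusted-system idea can be pictured, in very rough terms, as a rights check that a compliant device performs before it will render an encrypted file. The Python sketch below is a hypothetical illustration of that logic only; the class, field names, and rules are invented for this essay and do not describe any actual vendor's DRM implementation.

    # Hypothetical sketch of a trusted-system-style rights check, loosely after
    # Stefik (1996). All names, fields, and rules here are invented for illustration.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class License:
        device_id: str    # the single device authorized to render the file
        expires: date     # expiration date set by the vendor (rental/loan model)
        allow_copy: bool  # whether export or copying is permitted
        allow_lend: bool  # whether lending to another device is permitted

    def can_render(lic: License, requesting_device: str, today: date) -> bool:
        """Return True only if the vendor-set rules permit display on this device."""
        return requesting_device == lic.device_id and today <= lic.expires

    # Example: an e-book licensed to one reader device, with copying and lending off.
    lic = License(device_id="reader-001", expires=date(2016, 1, 1),
                  allow_copy=False, allow_lend=False)

    if can_render(lic, requesting_device="reader-001", today=date(2015, 6, 1)):
        pass  # a compliant device would decrypt and display the text here
    else:
        pass  # otherwise the file remains an opaque, encrypted envelope

The point of the sketch is that enforcement happens on the user's own device: the hardware and software are "trusted" by the rights holder, not by the reader.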

The development of DRM technology was a response to industry needs. This kind of innovation process paralleled the development of technology Williams (1975) identified in the television industry:

The key question, about technological response to a need, is less a question about the need itself than about its place in an existing social formation. A need which corresponds with the priorities of the real decision-making groups will, obviously, more quickly attract the investment of resources and the official permission, approval or encouragement on which a working technology, as distinct from available technical devices, depends. [25]

DRM was not a necessary technology, but one that was called for by the trajectory of the book publishing industry. The construction of DRM technology was the final technological advancement that book publishers needed to sell and distribute e-books online at low risk and high return.

 

++++++++++

Deregulating new e-book markets

Like other businesses and entrepreneurs in the early 1990s, book publishers saw the rise of personal computing and the Internet as an innovative sales and distribution infrastructure. DRM potentially allowed publishers to control digital resources such as e-books and to make them artificially scarce. Up until 1993, however, the Internet was not an openly commercial landscape, and sales and advertising were ostensibly prohibited there by the NSF’s acceptable use policy, though exceptions did exist [26]. E-book files — the relatively few that existed — were either distributed across virtual networks for free, as in the case of Project Gutenberg, or stored, sold, and transported on physical media such as CDs or laserdiscs. The non-commercialized nature of the Internet was due to both technological and administrative reasons.

Throughout the 1980s, the network of networks that would become known as the Internet was administered by the DoD and used extensively by research universities within the U.S. for government and research purposes. In 1986, oversight of the ARPANet backbone moved to the NSF, which continued to prohibit commercial activities over the Internet but at the same time began working with private corporations such as IBM and MCI to develop networking standards [27]. Non-commercial activities over the Internet continued into the 1990s, but companies such as America Online, CompuServe, and Prodigy began to offer Internet services. In 1994, the NSF initiated plans to gradually transfer management of the Internet to commercial services and phase out its funding (National Science Foundation, 1993). The new privately-managed system was implemented in 1995 [28]. Mosaic, the first widespread browser to use Web technology, was circulated as shareware in 1993–94, making it possible to share pictures and — importantly for companies — visual ads. Netscape, a commercial successor to Mosaic, and Microsoft Windows 95 both went on the market in 1995, signaling what might be called the commercialization of the Internet [29]. Consumers could navigate the Web using hypertext markup language (HTML), hypertext transfer protocol (HTTP), and uniform resource identifiers (URIs), technologies developed by Tim Berners-Lee. Early search engines like Yahoo! began to place advertisers’ paid spots at the top of search results, and text files called cookies began to be used by Netscape to enable uninterrupted communication between e-commerce host sites and consumer clients (Kreider, 2000).

Why did the Internet and the Web become commercialized? One explanation points to the mission and objectives of the NSF to fund R&D, not manage infrastructure (National Science Foundation, 1993). This view notwithstanding, the commercial structuring of this new technology seemed to follow the same trajectory that Williams (1975) identified in the development of television. In the case of TV, he explained:

How the technology develops from now on is then not only a matter of some autonomous process directed by remote engineers. It is a matter of social and cultural definition, according to the ends sought. From a range of existing developments and possibilities, variable priorities and variable institutions are now clearly on the agenda. Yet this does not mean that the issue is undetermined; the limits and pressures are real and powerful. Most technical development is in the hands of corporations which express the contemporary interlock of military, political and commercial intentions. Most policy development is in the hands of established broadcasting corporations and the political bureaucracies of a few powerful states. [30]

Such was the case with the Internet. By 1995, the Internet and Web became fertile ground for book publishers to begin to sell and circulate e-book files. Many Americans with purchasing power owned personal computers, were connected to the Internet, and had acquired Web navigation skills. The Internet had become a commerce-friendly space, though its commercial forms had yet to take tangible shape. DRM technology provided a technical solution to the threat of illegal copying. E-book producers could begin to sell dedicated software and hardware to create the end-to-end, trusted system approach envisioned by Stefik.

The 1980s and 1990s were an era of deregulation, privatization, and faith in the invisible hand that was, in fact, a very visible hand of oligopolies and corporate lobbying efforts, especially those from the core copyright industries. The move to Web-based e-book sales posed many uncertainties for publishers and authors. Web infrastructure gradually became “fenced” and controlled by commercial interests, leading to what some commentators rhetorically called part of the “enclosing of the commons” (Boyle, 2008). The commodification of information, data, spaces, and domains was due to concentrated efforts by a variety of corporate sectors that viewed the Internet as a huge potential market. “No trespassing” signs, rent-based practices, proprietary technologies, ownership, paywalls, and surveillance came to overtake the existing culture of the Internet due to the efforts of multiple industries. This shift in the 1990s also spurred interest in commercial e-book forms.

 

++++++++++

DMCA: Helping the invisible hand

The publishing industry had to overcome one final obstacle before it could capitalize on digital technologies and exploit e-book files using the seemingly unlimited potentials inherent in the Web-based market. Copyright legislation criminalized unlicensed copying and distribution of copyrighted files, and DRM created a technical lock-and-key system that might prevent some forms of unauthorized sharing, but no legal assurances for potential e-book publishers yet existed that would deter unauthorized “hacking” of DRM-encased files. In other words, copyright was not enforceable, DRM encryptions were breakable, and decryption of DRM-encased copyrighted files was not illegal. Book publishers, along with other core copyright industries, lobbied Congress for a new statute that would reduce the risk and increase corporate control of digital property transmitted across the Internet. The result of their lobbying efforts was the Digital Millennium Copyright Act (DMCA), which took effect in 1998. DMCA was anti-circumvention legislation that prohibited the “manufacture or distribution of any device, product, or service with the ‘primary purpose or effect’ of deactivating or circumventing a technological protection measure designed to protect a copyright owner’s copyright rights” [31]. The Act was passed through interest group bargaining and despite significant opposition, including that from librarians, educators, software companies, and home recorder manufacturers [32]. The Act was a response in part to the perceived threats of 1980s-era disruptive technologies that could, for example, disable Macrovision encryption on videocassettes and DVDs; but DMCA was in large part an initiative on the part of core copyright industries, especially Hollywood and publishers, to control the distribution and uses of copyrighted works on the Internet (Samuelson, 1999).

DMCA, combined with copyright statute and contract law, significantly limited the legal uses of e-books. One result of DMCA was the shift from ownership of e-books to licensing of e-books. The transaction that occurred between e-book providers and e-book consumers was often accompanied by licensing terms that characterized the transaction as a rental, not a sale. This shift away from personal ownership was significant because it marked the culmination of the publishing industry’s efforts to effectively eliminate the right of first sale, and with it, the pass-along book trade, lending, and resale. Libraries, for example, could often no longer lend an e-book to other libraries through inter-library loan (ILL) without violating the e-book vendor’s terms of use. There could be no second-hand, used e-book market as there was with physical print books. Sales were instead directed to copyright holders. While there were many technical ways to decrypt DRM coding and strip it from an e-book, and in some cases the licensing terms allowed this decryption, there was little incentive for publishers and distributors to allow for broad uses of e-books in the licenses they created. In 1998, through a combination of technological means (DRM and trusted systems), technological legislation (DMCA), and intellectual property legislation (copyright and licensing), power tectonics supporting the publishing industry aligned in such a way as to make commercial e-book publishing possible. The U.S. book publishing industry could, at least legally, control the uses of its products to an extent it had sought since the advent of mass publishing in the colonies.

 

++++++++++

1999 and the turning point for commercial e-books

Following the passage of DMCA, in 1999 a bevy of e-book providers and e-book reading technologies flooded the market (Manley and Holley, 2012). Dedicated e-reader technologies had not developed at the same rate as DRM software, the Internet, and the Web, so their features were limited. Herther (2008) reports that by 2001 there were 20 e-reading devices on the market. Despite the dedicated e-reading devices available, e-book reading was still largely restricted to liquid crystal display (LCD) computer screens, and dedicated handheld e-reader devices did not offer the affordances that many readers expected. Some dedicated e-readers used e-ink technology to reduce glare and eye strain, or tried to mimic the weight and tactility of print book reading. Publishers attempted to tether readers to certain devices, both to establish the end-to-end trusted system design and to maintain artificial scarcity of digital resources. In the early 2000s, e-publishing was still limited largely to universities and academic researchers attempting to build open access repositories that might allow scholarly communication to continue despite high journal prices [33]. After a brief interlude following the early e-reading devices, a second wave of e-readers began in 2007 with the Amazon Kindle, Barnes & Noble Nook, and Kobo [34]. Numerous usability issues still plagued e-book readers, including lack of color graphics on dedicated e-readers, lack of adequate annotation features, and a confusing array of proprietary formats due to lack of coordination among publishers. The commercial e-book market was largely split between popular consumer titles, which were mainly fiction and were read on dedicated e-readers or multimedia tablets, and educational titles, such as textbooks or other non-fiction purchased by K-12 schools and universities and read online using computers or tablets.

While it seemed that publishing houses had finally manufactured the perfect mixture of technological, legal, and infrastructural conditions for selling and delivering their e-book products, and while they seemed poised to establish an e-book oligopoly, book publishers encountered a significant wrinkle: they had to deal with third-party online booksellers and e-reader manufacturers such as Amazon and Barnes & Noble. While book publishers and authors often owned the licensing rights to the e-book titles, online booksellers drew the most consumers to their sites, and vendors could ensure a trusted system architecture. Book publishers were forced to negotiate unsatisfying terms with the sellers, who controlled the market and the popular reading devices. Sellers kept prices down. Book publishers could license their e-books through the vendors at cheap rates or risk not profiting at all. Authors’ e-book royalties were only 25 percent of the sale of an already reduced price (Trachtenberg, 2010). Some authors bypassed publishers to instead publish directly with online vendors. Control of e-book distribution by vendors disrupted to some extent the knowledge monopoly by book publishers that had seemed imminent. Other factors also contributed to counterbalancing space-biased commercialism, including non-commercial publishing and self-publishing.

 

++++++++++

E-books and hyper-commercialized reading

Computing and networking technologies combined with DRM coding offered book publishers and distributors numerous affordances that aligned with their commercial imperatives. These characteristics marked a significant divergence from previous reading practices.

E-reader devices were attractive for consumers because they mimicked the experience of print book reading. At the same time, they allowed for thousands of titles to be stored on a single device or to be instantly downloaded to the device from remote servers. Multiple features of e-readers, such as embedded dictionaries, text-to-speech software, and text enlargement, were attractive for diverse reading needs. These features aside, the primary purposes of e-readers for commercial vendors were 1) to complete the end-to-end trusted system design in order to facilitate the vendor’s control over the digital resource; and, 2) to entice readers to buy more e-books. With the diffusion of wireless local area network (WLAN, or Wi-Fi) connections in 1999 and 2003 (Suit Staff, 2014), e-reading devices became not just reading devices, but also mobile storefronts. E-readers, e-reading software, and e-reading apps were spaces that channeled readers into consumptive practices.

The interactive nature of networked computing allowed not only for e-book readers to access information from the e-books they read, but also for device vendors to harvest data about readers’ e-book reading habits. E-book software and e-reading devices continuously collected and transmitted readers’ data to be stored and aggregated by vendors. This data included what the reader read, how long the reader spent using the reading app, where the reader left off in a book, how many stars the reader gave the book, what the reader highlighted, and any comments or notes the reader made in the book. E-books enabled tracking, monitoring, and surveillance. E-books were not only read by readers, but also actively read the reader. The data that vendors collected about consumer e-book reading became the vendors’ private property. This aggregate data could be sold to other third parties as market research data, or used by the vendor for advertising purposes.
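The kinds of data just listed can be pictured as a simple event record that a reading app transmits to a vendor's servers. The Python sketch below is hypothetical: the field names, values, and structure are invented for illustration and do not reflect any particular vendor's telemetry format.

    # Hypothetical sketch of the reading data described above, expressed as a
    # simple event record. Field names are invented; no vendor format is implied.
    import json
    from datetime import datetime, timezone

    reading_event = {
        "user_id": "reader-12345",
        "book_id": "isbn-9780000000000",
        "event": "session_end",
        "timestamp": datetime(2015, 6, 1, 20, 15, tzinfo=timezone.utc).isoformat(),
        "session_minutes": 42,          # how long the reading app was used
        "last_position": "chapter-7",   # where the reader left off
        "rating_stars": 4,              # the rating the reader gave the book
        "highlights": ["It was the best of times..."],
        "notes": ["Compare with the opening of chapter 1"],
    }

    # A reading app might serialize such a record and send it to the vendor.
    print(json.dumps(reading_event, indent=2))

Aggregated across millions of readers, records of this kind are exactly the sort of market research data described above.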

Given these historical trajectories, the cybernetic character of e-reading today makes new reading experiences imaginable. It may be the case, for instance, that in the near future vendors will distribute surveys and polls through the books that they sell. Vendors may make the surveys optional for those who pay extra; for those who cannot afford to pay or choose not to, completing the surveys might be required in order to move on in the book. The surveys would be a form of market research. The questions might ask readers what they thought of the book’s characters, plot, or setting. Survey questions might ask about the behaviors of certain characters, for instance, “Should Katniss be with Peeta?” The results from the polls could be used by vendors for a number of purposes, including the development of new books. Such market research could be used by publishers to decide which books to create and publish. It may soon be the case that books are created mechanically based on market research of this sort, or by teams of humans who co-assemble book plots based on market demands. Books may become more and more homogenized as publishers strive to create books that appeal to the widest audiences. Alternatively, the market research in these kinds of polls could lead to hyper-specific, individualized market segmentation and the creation of a “personalized” e-book, created specifically for an individual based on her preferences.

Another commercial feature of e-books that diverges from print reading is advertisements. While book ads date to Victorian-era England and were later used in the 1950s, 1960s, and 1970s by the U.S. tobacco industry (Collins, 2007), book ads were largely abandoned due to author backlash and author contracts. The success of book ads was also limited because books did not guarantee a large audience, the audience demographic for each book was difficult to predict, ads required timeliness, and the ads would quickly become dated (Adner and Vincent, 2010; Carr, 2010). Real-time monitoring and cybernetic reading, however, open new possibilities for advertising because ads can be tailored to readers’ preferences and book content. Microsoft has already patented contextual e-book ad technology (Fingas, 2014). Ads could take the form of pop-ups, sponsored activities, or sidebars.

Besides explicit, pop-up-like ads, commercial advertisements could also be created in an embedded way as they are in movies. Books could be created at least partly for the purposes of product placement. Carr (2010) reports that this has already happened in the case of The Bulgari Connection, where “the publisher and author received a five figure sum from jewelers Bulgari in exchange for mentioning the company twelve times in the book’s narrative.” The line between books and ads may continue to blur.

As commercial e-books become an attractive media for advertising, e-books may come to resemble other commercial media forms such as television, newspapers, radio, magazines, and much of the Web. E-books seem poised to become “attention-capturing” media whose primary commercial purpose is not necessarily sales or subscriptions, but audience attention. E-books may be distributed to “attract eyeballs” for the purposes of advertising. E-books may become, like other media, a channel for connecting potential consumers to advertisers’ products. If e-books follow this trajectory, then economic models may change as a result. E-books may become free for consumers so long as readers agree to view ads. Readers who can afford to or who choose to can buy ad-free e-books at extra cost. The content of e-books, like the content of newspaper ads or television shows, may come to be dictated by the interests of advertising companies. Only e-books with the greatest audience-generating potential will be published. Authors have staunchly opposed ads in books, but given the large amount of data that e-books collect about reading trends and preferences, and given the emerging ways that e-books can be automatically tailored to match these preferences, the usefulness of human authors for the publishing industry may in many instances become questionable.

 

++++++++++

E-books and the student surveillance economy

The cybernetic, surveillance-friendly nature of e-books aligns not only with the interests of book vendors, publishers, and some authors, but it also appeals to other economic sectors. One sector in the U.S. whose interests closely align with the data-capturing capabilities of e-books is education, especially K-12 schools. There have already been substantial partnerships between educational institutions, e-book publishers, and device manufacturers. In higher education, for example, Amazon partnered with several institutions to pilot test its new e-reading device, the Kindle DX [35]. The device did not fare favorably in terms of usability and accessibility for blind students, but the studies demonstrate the e-publishing industry’s interest in the education market. K-12 schools have also begun experimenting with e-books. E-books are an attractive option for school classrooms and libraries because, assuming students have proper technology and network access, e-books are more portable than print books, and a single e-book file can be accessed by multiple users simultaneously. Economic reasons aside, the aspect of e-books that may be most attractive to school districts and teachers is their capability to monitor students’ reading behaviors. E-books align with what I call the student surveillance economy.

The student surveillance economy is supported by a robust apparatus of techniques, practices, and technologies for student monitoring. Education has, in a sense, always been concerned with monitoring students’ behaviors, but recent political, economic, and technological developments have heightened the surveillance apparatus in education and qualitatively transformed data collection about student performance.

Several items of U.S. legislation have contributed to a heightened emphasis on student surveillance. First, the U.S. No Child Left Behind (NCLB) Act of 2001 makes states accountable for student achievement as measured against state-established standards. In order to receive federal funding, states are required to develop challenging standards and assess students’ adequate yearly progress using an “accountability system” [36]. In cases where schools do not meet the standards, students may transfer to a different public or charter school, and the underperforming school must fund their transportation and “supplemental educational services” [37]. Schools are thus framed as competitors for student retention. Consistent underperformance may result in complete staff replacement or private restructuring [38]. NCLB encourages states to use “enhanced assessment instruments” to measure, chart, and evaluate student achievement [39]. The legislation was enacted during a time of fiscal challenges for states, exacerbating pressures on schools to achieve adequate yearly progress (Sunderman, et al., 2005). Adaptive e-books might be marketed to schools as a way to capture student performance data, retain students, and reduce the risk of restructuring.

Also shaping the public K-12 surveillance landscape is the Individuals with Disabilities Education Improvement Act (IDEIA) of 2004. IDEIA guarantees public school services to all children with disabilities [40]. With parental consent, children may be tested for disabilities, and those with disabilities are given individualized education programs, or IEPs [41]. Evaluation, monitoring, and intervention involving children with IEPs requires significant data collection [42]. Further, IDEIA requires the inclusion of children with disabilities in regular education environments as far as possible, that is, placement in their “least restrictive environment” [43]. Classroom teachers must therefore often accommodate IEP students in regular classrooms by differentiating instruction — by providing individualized instruction to multiple students simultaneously. This poses challenges, but these challenges could be alleviated using distributed interactive technologies. Under NCLB, students with IEPs must still undergo assessment and demonstrate adequate yearly progress [44]. Teachers must therefore conceive of ways to synchronously and asynchronously monitor the learning of students of varying abilities to ensure that they both meet NCLB-based standards and align with IDEIA-based IEPs. All this must occur in a fiscally insecure environment. Mobile, adaptive, data-based technologies like interactive e-books may seem like an attractive solution to teachers given their circumstances. Student reading can be monitored anywhere, data is automatically harvested for reporting purposes, and readings can automatically adapt to students’ abilities and interests.

Schools and teachers are actively encouraged by the U.S. government to implement student surveillance techniques. Another federal initiative, Race to the Top (RTT), is a school incentive program begun in 2009 that rewards states that develop “robust data systems to track student achievement and teacher effectiveness” (Duncan, 2009). Highly granular reading data from interactive e-book devices could be linked to these databases. Data collection systems are promoted so that “hopefully, someday, we can track children from preschool to high school and from high school to college and college to career” (Duncan, 2009).

Multiple data-gathering technologies are already in place in schools to support a student surveillance economy. For example, schools nation-wide use a progress-monitoring tool called Study Island, testing software designed to “monitor student progress and differentiate instruction” (Edmentum, n.d.). The questions conform to those on statewide standardized tests. Data from the testing software gives teachers and administrators an idea of where students stand in terms of academic achievement.

Accelerated Reader (AR) is another type of monitoring technique also tied to software. AR is a level-reading program often used in elementary schools. In an AR program, students first take a test that tells them their reading “level.” Students then read books at that level and take comprehension tests about the books. Students gradually acquire points based on the number of tests taken, the number of points each book test is worth, and the number of questions they answer correctly (Renaissance Learning, 2015). AR was designed as a reading incentive tool, but the data from the quizzes can also potentially tell teachers what students are reading, how well they understand it, and whether they are reading at a desired level. This cultural practice meshes well with e-book features.
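A rough sketch of how point accrual in a program like AR might work follows. The exact formula used by Renaissance Learning is not reproduced here; the calculation below is a hypothetical approximation based only on the factors named above, namely each book's point value and the share of quiz questions answered correctly.

    # Hypothetical sketch of level-reading point accrual, based only on the factors
    # named above. This is not Renaissance Learning's actual formula.
    def quiz_points(book_point_value: float, correct: int, total_questions: int) -> float:
        """Award a share of the book's point value proportional to quiz performance."""
        if total_questions == 0:
            return 0.0
        return book_point_value * (correct / total_questions)

    # Example: a 5-point book quiz with 8 of 10 correct, then a 12-point book with 10 of 10.
    running_total = 0.0
    running_total += quiz_points(5.0, correct=8, total_questions=10)    # 4.0 points
    running_total += quiz_points(12.0, correct=10, total_questions=10)  # 12.0 points
    print(f"Points earned so far: {running_total}")                     # 16.0

Whatever the precise formula, the important point for the argument here is that every quiz attempt produces a small, quantified record of what a student read and how well the student understood it.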

Cyberschools and online learning in general generate a large amount of student data. Blackboard and Edmodo, for instance, which are online course management platforms, record how often students access the course site, what files they view and download, how long they view presentations, when they post discussions, what they post, and when they complete assignments. Massive Open Online Courses (MOOCs) are online courses taken by many students at a time. The open, large-scale nature of these courses means that a massive amount of data is generated. Using the data generated from quizzes, designers can pinpoint where and why course material was poorly explained or misunderstood.

Finally, there are basic surveillance techniques in schools such as ID badges, mirrors, and cameras. Schools have also followed workplaces in their use of remote desktop monitoring software that allows teachers to view students’ computer desktops in real time and even manipulate the cursor. It is evident from these examples that educational institutions already support student surveillance through a range of techniques and technologies. The use of surveillance is promoted and even required by national legislation. Cybernetic e-books align with current technological, legislative, and economic trends. Given the current educational environment, e-books with data capturing capabilities could be adopted by schools in a variety of ways.

First, e-books are attractive to teachers because they capture what students read, where they are in a book, what they’ve highlighted, what notes they’ve taken, and what words they’ve looked up. Teachers who assign e-books to their classes could capture and analyze this data both in the aggregate and for individual students in order to better tailor lessons. This data is collected automatically and can be viewed by teachers anytime.

One potential feature not yet found in e-books is embedded comprehension quizzes. Like AR, e-books could periodically test student comprehension of the e-book’s content in order to monitor student progress and provide feedback to the teacher. Based on a student’s test score, she may have to re-read a section or retake the quiz before moving on to the rest of the book. Tutoring technology could accompany the text. Students who can afford to do so might purchase supplemental instructional materials, like CliffsNotes, historical background information, expert commentary, or images. Students might take warm-up quizzes or review quizzes each time they open up the book. The annotation and highlighting features of e-books could facilitate group learning. Students could potentially post questions to classmates about book content. These features may seem to reduce the burden of differentiated instruction on teachers.
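The quiz-gating behavior imagined above, in which a student must re-read a section or retake a quiz before the next section unlocks, could be expressed as a simple rule inside the reading software. The sketch below is purely speculative; the passing threshold, names, and flow are invented to illustrate the idea and do not describe any existing product.

    # Purely speculative sketch of the embedded-quiz gating imagined above.
    # The threshold and names are invented; no existing e-book product is described.
    PASSING_SCORE = 0.7  # assumed minimum share of correct answers needed to advance

    def next_step(quiz_score: float, attempts: int) -> str:
        """Decide what the reading app asks the student to do next."""
        if quiz_score >= PASSING_SCORE:
            return "unlock_next_section"
        if attempts < 2:
            return "retake_quiz"
        return "reread_section"  # after repeated failures, send the reader back

    # Example: a student scores 60% on the first attempt, then 80% on the second.
    print(next_step(0.6, attempts=1))  # retake_quiz
    print(next_step(0.8, attempts=2))  # unlock_next_section

Each decision the rule makes is also a data point that could be reported back to the teacher, which is what makes such a feature attractive within the surveillance economy described here.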

Another feature of e-books that is to some extent already possible and would be extremely attractive to schools is the capability of modifying or individualizing the text based on the student. Texts could be modified in vocabulary, structure, length, or language based on student learning needs. E-books already offer read-aloud features for sight-impaired students and can support Braille output devices. Similarly, more efficient, mechanized instruction becomes possible with interactive e-books, which enable teachers both to monitor students and to reduce the demands of inclusion-based, individualized instruction. E-books can modify themselves and adapt to student performance based on feedback.

It seems likely that e-book technologies will develop in ways such as these that make them attractive to the education market. Whether or not these particular features develop to suit the student surveillance economy, it seems likely that e-book publishers and device manufacturers will continue to use schools and libraries as marketing test beds and will continue to market to public, non-profit sectors (Buschman, 2012; Stevenson, 2012).

 

++++++++++

Open access counter-imaginaries

Accounts of e-books that focus exclusively on commercial products such as the Kindle or Nook have a limited perspective on what e-books are or might be. The current focus on commercial devices suggests that corporate marketing strategies have been successful at branding the e-book and defining what it is, what is imaginable, and what kinds of e-books are possible. Despite the commercialization of e-books and their cybernetic potentialities that align with commercial interests, several movements have resisted commercial e-book trends. For example, Project Gutenberg e-book texts are still freely available for download. The commodity form of e-book that has become popular today is a historical product that was strategically constructed by the publishing industry. While early e-book files circulated via disks and CDs were bought and sold, the first online e-books were non-commercial. Project Gutenberg e-texts were created by collectives of volunteers who cared about the free and equitable dissemination of literature. The first works that were published online in e-book form were works in the public domain, and they thus generated no royalties for their copying and distribution. The first e-book publishers shared a philosophy with other early Web users of that time, one which emphasized the public good, openness, collaboration, and the value of communally-shared resources.

The philosophy of the digital commons has also animated counter-copyright movements, anti-DRM activists, and the open access publishing movement. Drawing on the Free Software Movement begun by Richard Stallman in the 1980s in response to the closure of software source code, the Creative Commons movement was started by Lawrence Lessig in 2001 in response to a copyright regime perceived as hyper-restrictive (Lessig, 2005). Creative Commons licenses actively attempt to broaden the permitted uses of digital works. Several national institutions are working to preserve and make accessible digital works such as e-books, including HathiTrust, the Digital Public Library of America, the Library of Congress, and the Internet Archive. Unglue.it (https://unglue.it/) is a non-profit initiative that uses crowdfunding to give authors an incentive to apply Creative Commons licenses to their e-book works, making them freely available. Given how cheaply and easily e-books can be shared, alternative incentive systems such as this could challenge authors’ dependence on publishers by supporting authors financially.

Several types of e-books do not slot neatly into either commercial or non-commercial categories. Google Books, for example, makes searchable over 30 million scanned books out of the roughly 130 million in the world (Darnton, 2013; Jackson, 2010), and some of these books are freely viewable. It is not yet clear how this e-book form benefits authors, vendors, and publishers. While Google Books does not include advertisements, books that cannot be viewed in full display a link for easy purchase. There are also e-books that are made freely available after a two-year commercial print run, and free e-books that are later printed and published commercially as print books. In cases such as these, the commercial/non-commercial dichotomy breaks down.

 

++++++++++

Conclusion

Commercial e-books are a product of history, the result of concerted efforts by the publishing industry to control products and customers. The book publishing industry sought ways to reduce production and shipping costs, avoid overproduction, reduce storage costs, increase revenue, maintain scarcity, and lower risk. From its inception in the colonies, the U.S. book publishing industry struggled to counteract pass-along social lending, library lending, and second-hand resale of books, both through social engineering and through legislation. Copyright was designed to benefit publishers and printers, not authors, though authors also profited. The rise of personal computing, the rapid adoption of digital network infrastructure by the U.S. populace, and the commercialization of this infrastructure in the late twentieth century created threats and opportunities for the publishing industry and for authors. The industry sought to use the commercialized digital infrastructure to distribute its products at near-zero marginal cost without losing control of the product after sale. It nearly achieved this objective through both technological and legal means: first by adopting DRM coding to create a secure, end-to-end trusted system, and then by lobbying Congress to criminalize attempts by consumers or competitors to circumvent that security and create or distribute unlicensed digital copies. Authors received fewer royalties from individual e-book sales, but their opportunities for self-publishing increased, and alternative incentive systems have begun to develop.

The year 1999 marked a turning point in the history of book publishing because, through a combination of DRM, copyright, and licensing, commercial publishers and distributors could control consumers’ uses of books like never before. Besides establishing publisher control over how books are used, e-books also created a cybernetic loop, though not in the sense Wiener (1950) envisioned. The interactive nature of e-books created new commercial opportunities for publishers and distributors to shape consumer behavior through data collection techniques, advertising, and market research.

The book publishing industry did not, however, establish a monopoly over e-book control. Publishers depended on online vendors to distribute and deliver their e-book products to consumers. Publishers and vendors battled over pricing, and publishers were ultimately forced to concede lower prices and less control over distribution. Authors generally earned less from e-book royalties than from traditional book royalties because of lower sale prices, but digital technologies have also destabilized author-publisher relations.

The technologies inherent in e-books mean that the lines between books, advertisements, and marketing will continue to blur. E-books will likely continue to intersect with the education industry, whose interest in the student surveillance economy calls for control and prediction, exactly what the data-capturing potential of e-books offers. Because commercial e-books remain largely proprietary and closed, their files may continue to disappear and become irretrievable. Commercial e-publishing in its current form thus threatens the preservation and archiving of cultural heritage. Striphas [45] argues that DRM and intellectual property law trouble the notion of ownership of digital objects, suggesting a refeudalization of digital cultural goods. Some authors suggest that digital media has regressed into a “digital dark ages” because of the destruction and inaccessibility of knowledge (Kuny, 1997). Digital libraries and archives have recognized these threats and have begun to focus on e-book preservation.

DRM technologies will likely become more fine-tuned and more resistant to circumvention. Still, calling the current moment a dark age obscures the fact that, unlike in the medieval dark ages, media production technologies such as the printing press and the Web are now widely available. Unlike then, these technologies can be used to counterbalance commercial imperatives. At the margins of society exist counter-commercial movements that seek to revise copyright and DRM restrictions or to develop alternatives to them. These movements include digital preservation initiatives, libraries, archives, political movements, and funding campaigns.

The programming and trajectories of commercial e-books suggest that central administration of e-books has shifted away from commercial book publishers and toward commercial distributors. Vendors, and to some extent publishers, will continue to control commercial e-book uses and monitor readers. Commercial e-books are attractive for industries such as education that also seek techniques of control and prediction. In opposition to commercial e-book models, an array of opportunities for counteraction still seems possible, such as open-access publishing and establishing personal data rights. Personal data rights would allow consumers to control how their data is used. The privacy of personal data could come to resemble that of medical records. Rather than have their reading habits monitored, consumers and students using e-books could return cybernetic commodities from private to public governance.

Complex histories and materialities combined to form current e-book artifacts. E-books recast perennial questions about authorship and texts: the history of the book is replete with such questions, and with their adaptive, interactive features, e-books continue to challenge received understandings. E-books also raise questions about the relationships of power between text and reader. Given the complex politics and histories of e-books, readers might consider how this particular media type potentially shapes intellectual life and whether its current trajectories are desirable. End of article

 

About the author

Michael M. Widdersheim is a Ph.D. student in the School of Information Sciences at the University of Pittsburgh.
E-mail: mmw84 [at] pitt [dot] edu

 

Notes

1. Williams, 1975, p. 135.

2. Golumbia, 2009, p. 2.

3. Flusser, 2011, p. 49.

4. Manley and Holley, 2012, p. 295.

5. Manley and Holley, 2012, p. 296.

6. Barnes, 2007, p. 23.

7. Ibid.

8. American Standards Association, 1963 (17 June), p. 3.

9. American Standards Association, 1963 (17 June), p. 5.

10. Manley and Holley, 2012, p. 296.

11. Vaidhyanathan, 2001, p. 38.

12. Vaidhyanathan, 2001, pp. 40–42.

13. Rice, 1997, p. 92.

14. Rice, 1997, p. 73.

15. Rose, 1993, pp. 34–48.

16. Striphas, 2006, p. 239.

17. Vaidhyanathan, 2001, pp. 53–55.

18. Vaidhyanathan, 2001, p. 102.

19. Striphas, 2006, p. 242.

20. Striphas, 2006, p. 244.

21. Litman, 1987, p. 879.

22. Cohen, et al., 2010, pp. 29–30.

23. Cohen, et al., 2010, p. 30.

24. Litman, 1989, p. 312.

25. Williams, 1975, p. 19.

26. National Science Foundation, 1992; Office of Inspector General, National Science Foundation, 1993, p. 21.

27. National Science Foundation, 1992; Greenstein, 1998, p. 6.

28. Harris and Gerich, 1996; Jamison, et al., 1998, p. 39.

29. Greenstein, 1998, p. 7.

30. Williams, 1975, p. 134.

31. Cohen, et al., 2010, p. 661.

32. Cohen, et al., 2010, pp. 661–662.

33. MacFadyen, 2011, p. 4.

34. MacFadyen, 2011, p. 5.

35. Widdersheim, 2014, p. 99.

36. No Child Left Behind Act, 2001, § 1111.

37. No Child Left Behind Act, 2001, § 1116 (b)(8)(A).

38. No Child Left Behind Act, 2001, § 1116 (b)(8)(B).

39. No Child Left Behind Act, 2001, § 6112.

40. Individuals with Disabilities Education Improvement Act, 2004, § 601 (d).

41. Individuals with Disabilities Education Improvement Act, 2004, § 614.

42. Individuals with Disabilities Education Improvement Act, 2004, § 614 (c).

43. Individuals with Disabilities Education Improvement Act, 2004, § 612 (a)(5).

44. No Child Left Behind Act, 2001, § 1111.

45. Striphas, 2006, p. 249.

 

References

Ron Adner and William Vincent, 2010. “Get ready for ads in books,” Wall Street Journal (19 August), at http://online.wsj.com/news/articles/SB10001424052748704554104575435243350910792, accessed 12 November 2014.

American Standards Association. Sectional Committee on Computers and Information Processing, X3, 1963. American standard code for information interchange. New York: American Standards Association.

Susan B. Barnes, 2007. “Alan Kay: Transforming the computer into a communication medium,” IEEE Annals of the History of Computing, volume 29, number 2, pp. 18–30.
doi: http://dx.doi.org/10.1109/MAHC.2007.17, accessed 19 May 2015.

James Boyle, 2008. The public domain: Enclosing the commons of the mind. New Haven, Conn.: Yale University Press.

John Buschman, 2012. Libraries, classrooms, and the interests of democracy: Marking the limits of neoliberalism. Lanham, Md.: Scarecrow Press.

Vannevar Bush, 1945. “As we may think,” Atlantic (July), at http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/, accessed 19 May 2015.

Paul Carr, 2010. “Forget ads in books, lit-lovers face an even more hideous prospect,” TechCrunch (20 August), at http://techcrunch.com/2010/08/20/eat-pay-love/, accessed 12 November 2014.

William Charvat, 1992. The profession of authorship in America, 1800–1870. New York: Columbia University Press.

Julie E. Cohen, Lydia Pallas Loren, Ruth L. Okediji, and Maureen A. O’Rourke, 2010. Copyright in a global information economy. Third edition. New York: Aspen Publishers.

Paul Collins, 2007. “Smoke this book,” New York Times (2 December), at http://www.nytimes.com/2007/12/02/books/review/Collins-t.html, accessed 15 November 2014.

Robert Darnton, 2013. “The National Digital Public Library is launched!” New York Review of Books (25 April), at http://www.nybooks.com/articles/archives/2013/apr/25/national-digital-public-library-launched/, accessed 12 November 2014.

Arne Duncan, 2009. “Robust data gives us the roadmap to reform,” U.S. Department of Education (8 June), at http://www2.ed.gov/news/speeches/2009/06/06082009.html, accessed 14 April 2015.

Edmentum, n.d. “Study Island,” at http://www.edmentum.com/products-services/study-island, accessed 14 April 2015.

Jon Fingas, 2014. “Microsoft patents contextual ads in e-books, whether we like it or not,” Engadget (7 August), at http://www.engadget.com/2012/08/07/microsoft-patents-contextual-ads-in-e-books/, accessed 12 November 2014.

Vilém Flusser, 2011. Into the universe of technical images. Translated by Nancy Ann Roth. Minneapolis: University of Minnesota Press.

David Golumbia, 2009. The cultural logic of computation. Cambridge, Mass.: Harvard University Press.

Shane Greenstein, 1998. “Commercializing the Internet,” IEEE Micro, volume 18, number 6, pp. 6–7.
doi: http://dx.doi.org/10.1109/40.743678, accessed 19 May 2015.

Susan R. Harris and Elise Gerich, 1996. “Retiring the NSFNET backbone service: Chronicling the end of an era,” ConneXions, volume 10, number 4, at http://www.merit.edu/research/nsfnet_article.php, accessed 13 April 2015.

Michael Hart, 1992. “The history and philosophy of Project Gutenberg,” at https://www.gutenberg.org/wiki/Gutenberg:The_History_and_Philosophy_of_Project_Gutenberg_by_Michael_Hart, accessed 9 November 2014.

Nancy K. Herther, 2008. “The ebook reader is not the future of ebooks,” Searcher, volume 16, number 8, pp. 26–40.

Individuals with Disabilities Education Improvement Act, 108 U.S.C., 2004, at http://idea.ed.gov/download/statute.html, accessed 19 May 2015.

Harold A. Innis, 2008. The bias of communication. Second edition. Toronto: University of Toronto Press.

Harold A. Innis, 2007. Empire and communications. Toronto: Dundurn Press.

Joab Jackson, 2010. “Google: 129 million different books have been published,” PCWorld (6 August), at http://www.pcworld.com/article/202803/google_129_million_different_books_have_been_published.html, accessed 12 November 2014.

John Jamison, Randy Nicklas, Greg Miller, Kevin Thompson, Rick Wilder, Laura Cunningham, and Chuck Song, 1998. “vBNS: Not your father’s Internet,” IEEE Spectrum, volume 35, number 7, pp. 38–46.
doi: http://dx.doi.org/10.1109/6.694354, accessed 19 May 2015.

Aaron Kreider, 2000. “Increasing commercialization on the electronic frontier” (11 May), at http://www.campusactivism.org/akreider/essays/ecommercialization.htm, accessed 11 November 2014.

Terry Kuny, 1997. “A digital dark ages? Challenges in the preservation of electronic information,” 63rd IFLA Council and General Conference (Copenhagen, Denmark), at http://archive.ifla.org/IV/ifla63/63kuny1.pdf, accessed 19 May 2015.

Lawrence Lessig, 2005. “CC in review: Lawrence Lessig on how it all began” (12 October), at http://creativecommons.org/weblog/entry/5668, accessed 12 November 2014.

Jessica D. Litman, 1989. “Copyright legislation and technological change,” Oregon Law Review, volume 68, number 2, pp. 275–361, and at https://user-content.perma.cc/media/2014/8/30/23/8/W46W-NDNZ/cap.pdf, accessed 19 May 2015.

Jessica D. Litman, 1987. “Copyright, compromise, and legislative history,” Cornell Law Review, volume 72, number 5, pp. 857–904, and at http://repository.law.umich.edu/articles/224/, accessed 19 May 2015.

Heather MacFadyen, 2011. “The reader’s devices: The affordances of ebook readers,” Dalhousie Journal of Interdisciplinary Management, volume 7, number 1, at https://ojs.library.dal.ca/djim/article/view/2011vol7MacFadyen, accessed 19 May 2015.
doi: http://dx.doi.org/10.5931/djim.v7i1.70, accessed 12 November 2014.

Laura Manley and Robert P. Holley, 2012. “History of the ebook: The changing face of books,” Technical Services Quarterly, volume 29, number 4, pp. 292–311.
doi: http://dx.doi.org/10.1080/07317131.2012.705731, accessed 12 November 2014.

National Science Foundation, 1993. “NSF 93-52 — Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and the NREN(SM) Program” (6 May), at https://w2.eff.org/Infrastructure/Govt_docs/nsf_nren.rfp, accessed 13 April 2015.

National Science Foundation, 1992. “The NSFNET Backbone Services Acceptable Use Policy,” at https://w2.eff.org/Net_culture/Net_info/Technical/Policy/nsfnet.policy, accessed 13 April 2015.

No Child Left Behind Act, 107 U.S.C., 2001, at http://www2.ed.gov/policy/elsec/leg/esea02/index.html, accessed 19 May 2015.

Office of Inspector General, National Science Foundation, 1993. “Review of NSFNET” (23 April), at http://www.nsf.gov/pubs/stis1993/oig9301/oig9301.txt, accessed 13 April 2015.

Renaissance Learning, 2015. “Accelerated reader,” at https://www.renaissance.com/products/accelerated-reader, accessed 14 April 2015.

Grantland S. Rice, 1997. The transformation of authorship in America. Chicago: University of Chicago Press.

Mark Rose, 1993. Authors and owners: The invention of copyright. Cambridge, Mass.: Harvard University Press.

Bill Rosenblatt, 2009. “The trajectory of DRM technologies: Past, present, and future,” at http://www.virtualgoods.org/2009/TheTrajectoryofDRMTechnologies.pdf, accessed 11 November 2014.

Bill Rosenblatt, William Trippe, and Stephen Mooney, 2002. Digital rights management: Business and technology. New York: M&T Books.

Pamela Samuelson, 1999. “Intellectual property and the digital economy: Why the anti-circumvention regulations need to be revised,” Berkeley Technology Law Journal, volume 14, number 2, at http://people.ischool.berkeley.edu/~pam/papers/Samuelson.pdf, accessed 19 May 2015.

Mark Stefik, 1996. “Letting loose the light: Igniting commerce in electronic publication,” In: Mark Stefik (editor). Internet dreams: Archetypes, myths, and metaphors. Cambridge, Mass.: MIT Press, pp. 219–253.

Siobhan Stevenson, 2012. “W(h)ither the public librarian: Labour in libraries 2.0,” at https://www.youtube.com/watch?v=26WLs7KlV88, accessed 11 November 2014.

Ted Striphas, 2006. “Disowning commodities: Ebooks, capitalism, and intellectual property law,” Television & New Media, volume 7, number 3, pp. 231–260.
doi: http://dx.doi.org/10.1177/1527476404270551, accessed 11 November 2014.

Suit Staff, 2014. “Wireless revolution: The history of WiFi,” The Suit (17 July), at http://www.thesuitmagazine.com/technology/web-a-internet/22360-wireless-revolution-the-history-of-wifi.html, accessed 11 November 2014.

Gail L. Sunderman, James S. Kim, and Gary Orfield, 2005. NCLB meets school realities: Lessons from the field. Thousand Oaks, Calif.: Corwin Press.

Jeffrey A. Trachtenberg, 2010. “Authors feel pinch in age of e-books,” Wall Street Journal (26 September), at http://www.wsj.com/articles/SB10001424052748703369704575461542987870022, accessed 15 April 2015.

Siva Vaidhyanathan, 2001. Copyrights and copywrongs: The rise of intellectual property and how it threatens creativity. New York: New York University Press.

Victoria and Albert Museum, 1995. “The book and beyond: Electronic publishing and the art of the book,” at http://www.vam.ac.uk/vastatic/wid/exhibits/bookandbeyond/case3.html, accessed 9 November 2014.

Michael M. Widdersheim, 2014. “E-Lending and libraries: Toward a de-commercialization of the commons,” Progressive Librarian, volume 42, pp. 95–114, and at http://www.progressivelibrariansguild.org/PL_Jnl/contents42.shtml, accessed 19 May 2015.

Norbert Wiener, 1950. The human use of human beings: Cybernetics and society. Boston: Houghton Mifflin.

Raymond Williams, 1975. Television: Technology and cultural form. New York: Schocken Books.

 


Editorial history

Received 15 January 2015; revised 24 April 2015; accepted 21 May 2015.


CC0
To the extent possible under law, Michael M. Widdersheim has waived all copyright and related or neighboring rights to his paper “E-books: Histories, trajectories, futures.”

E-books: Histories, trajectories, futures
by Michael M. Widdersheim.
First Monday, Volume 20, Number 6 - 1 June 2015
http://journals.uic.edu/ojs/index.php/fm/article/view/5641/4575
doi: http://dx.doi.org/10.5210/fm.v20i6.5641




