What is the future for hypertext? This article attempts to answer this fundamental question by examining the technological and commercial development of the World Wide Web. What do the experiences of electronic publishers on the Web reveal about the strengths and weaknesses of hypertext? Based on these experiences, some promising avenues for future research are outlined.
Contents

New Worlds, Ancient Texts
What Is Hypertext?
Strengths and Flaws of Hypertext Theory
An Agenda for Future Work
New Worlds, Ancient Texts

Historians and critics do not usually look to the contemporary business world for research material. According to the conventional wisdom, important historical moments can't be identified, much less analyzed, except at a distance: today's major event is tomorrow's epiphenomenal froth. Despite this, some events, because of their complexity, wide social impact, or resonance with the past, recommend themselves as worthy of the phrase "history in the making." The growth of the Internet, electronic commerce (buying physical goods and services over the Web), and electronic publishing (using the Internet as a publication and distribution medium) are all heralded as developments carrying profound implications for our economic, social, and cultural lives.
Electronic publishing is also one area of contemporary business that seems to offer rich opportunities for the fruitful application of academic theory, and has attracted authors who seek to chart its impact. The first wave of these studies, George Landow's Hypertext: The Convergence of Contemporary Critical Theory and Technology, Jay David Bolter's Writing Space, and Myron Tuman's Literacy Online: The Promise (and Peril) of Reading and Writing with Computers, all appeared in 1992. Building on writers as diverse as Jacques Derrida and Theodor Nelson, they argued that hypertext systems embodied ideas first advanced in contemporary literary theory. The following year saw volumes that examined the impact-- or potential impact-- of hypertext and electronic publishing on literacy and education: Richard Lanham's The Electronic Word: Democracy, Technology, and the Arts and Myron Tuman's Word Perfect. These works received an excellent summary in 1996, in Ilana Snyder's introductory work Hypertext: The Electronic Labyrinth.
Together, these works constitute a body of literature that I will refer to as hypertext theory. Differences exist between them, of course, but they share several important features. They are all works of academic humanists, who see in the development of electronic writing the realization and popularization of phenomena described in literary theory. They also represent the only serious attempts to date to apply ideas developed in academic circles to electronic publishing. However, the rapid evolution of new media since 1992 and 1993 raises a simple but important question: How well do they explain today's world? More precisely, how does the technological and commercial development of the World Wide Web challenge hypertext theory? What do the experiences of electronic publishers reveal about the strengths and weaknesses of the literature? Finally, what are the most promising avenues of future research, and what scholarly tools and theories might be profitably applied in their exploration?
This essay seeks to answer these questions. It is organized as follows. In the next section, I review the critical claims of the literature on hypertext. "Hypertext" is not just an abstract or theoretical term: its primary writers speak of it as a real technology, and base their predictions on the assumption that most, or all, of what they describe already exists. For this reason, it is worth examining just how accessible the key features of hypertext are in the real world, and how likely it is that the missing parts will be filled in. I then consider hypertext theory's strengths and flaws, its arguments about how hypertext differs from print, and how it changes notions of reading and authorship. In the final section, I chart some lines for future research in the field. I first outline the virtues of applying the tools of science and technology studies (STS) to studies of hypertext and multimedia. I then describe the research avenues that such an approach would open. It would allow us to examine the technological, economic, and cultural continuities between print and electronic publishing. It would encourage attention to the materialities of multimedia development, and the work that goes into creating electronic content. Finally, it would give us the tools to examine the moral economy of the multimedia workshop, and to unravel the forces that influence the shape and content of new media.
What Is Hypertext?

The literature on hypertext presents a series of stark contrasts between the worlds of print and electronic publishing. The two are fundamentally different technologies which possess very different properties. Modern print culture - the complex of technologies, social roles, and economic structures that define the production and consumption of the written word - has its origins in the late eighteenth and nineteenth centuries. The printed work is stable, static and linear. A newspaper article or book offers a single path for readers to follow, defined by a single author. Reading is an essentially passive and private activity: unlike participants in oral culture, readers are isolated from one another. Printed works are also physically isolated from one another, each its own separate world.
Hypertext is a late 20th-century invention drawing on the ideas of engineers Vannevar Bush and Theodor Nelson. Hypertext began as electronic versions of conventional texts. Unlike printed works, whose fixity defines them as forever finished and whole, electronic texts are infinitely malleable (or "unstable"): they can be updated, reedited, or completely rewritten at any time by their creators. Electronic texts do not have page numbers, title pages, or any of the markers that give books their shape and order. As George Landow writes, "the text appears to fragment, to atomize, into constituent elements.... [T]hese reading units take on a life of their own as they become more self-contained," and lose their intimate connection both with their authors and with other parts of a formerly integrated work. 
New methods take the place of traditional and obsolete means of ordering texts. Most important, hypertexts can be connected to one another via hyperlinks, which when activated call up a related text, picture, video, or other object. A hypertext on the American Civil War, for example, may have hyperlinks to biographies of generals, orders of battle, monographs on international relations in the mid-19th century, pictures of cotton fields, pro- and anti-slavery tracts and sermons, topographical maps, and responses by contemporary readers, to name but a tiny number of possible links. Readers can move very easily from one text to another. They do not experience different works as separate, but instead as interconnected. Electronic texts lose their individual identities, merging together into vast networks of texts at once atomized and integrated, exchanging their autonomy and organic whole for postmodern connectedness.
Hypertexts are accessed by readers via a computer, and readers often use software that is identical to - or shares many properties with - that used by authors. A reader may enter a work through a search engine, by browsing a hierarchical tree, or by selecting from a table of contents. Readers proceeding in this manner are less likely to begin at the beginning of a work and read through to the end. Indeed, because they can be accessed at any point, hypertexts do not even have beginnings and ends. Likewise, they do not have clear narrative threads. Readers are guided through hypertexts not by an author's interests, but by their own: they choose links that interest them and pass over links that seem less promising. The trail of links a reader chooses to follow becomes more important than the original work. Authors cannot define what a reader will encounter, but can only offer possibilities to be accepted or rejected. It is the reader who chooses what path to take, and whether even to take the same path twice.
Not only can readers move freely through hypertext; they can add to it. Annotation tools (again, possibly the same tools used by authors) allow readers to create and publish responses to published writings, adding their own insights and perspectives to the range of possible texts other readers may encounter. Readers can also add their own links between extant works, making connections that the original authors did not, or creating entirely new links based on completely different principles. (They cannot modify already-published writing, however.) This ability of readers to contribute to hypertexts, combined with the ability of authors to modify originals, makes it impossible to speak of hypertexts as "finished": rather, they are inherently unstable.
This give-and-take between authors and readers has several consequences. The body of work that constitutes the hypertext field is unstable and ever-changing: it is constantly growing, acquiring new texts that are connected by an evolving network of links. Because hypertext "offers the reader and writer the same environment," the two actors are no longer as far apart as they used to be.  While "print literacy was... organized in the service of a dominant author, a god-like figure who was normally male," hypertext "gives to readers the power that once had been the prerogative of the author."  Readers who can post their own works are no longer passive recipients of received wisdom, but more like critics and co-authors. The dominance of writers over literary culture disappears: they no longer specify the beginnings and ends of their works, no longer stand above criticism by readers, and may not even be identifiable as the creators of a specific text.
Hypertext thus radically challenges traditional print-based notions of authorship, textual integrity, and reading. It also threatens conventional pedagogy. Traditional teaching, like traditional writing, is linear and top-down. Lectures are exercises in the delivery of canonical bodies of knowledge, while seminars are intellectual plays directed by academic authorities, and both are directed by god-like (or author-like) professors. Hypertext does not lend itself to such teaching methods. It requires a more egalitarian and sophisticated pedagogy, in which the distinction between professors and students, indeed between "central" and "peripheral" works, is rendered irrelevant.
Strengths and Flaws of Hypertext Theory

The previous section summarized the main points of hypertext theory. This section considers the strengths and flaws of that work (what I think of as first-generation hypertext theory). I first discuss its success in explaining how the structure of texts changes when they move from print to electronic form. I then consider arguments about how authorship and reading change with the shift from print to electronic media, arguments that I think are much less successful. I conclude with a discussion of the foundational issues that these problems reveal.
The experience of publishing an electronic version of Britannica suggests that hypertext theory is at its most astute when describing the impact of digitization on texts. Hypertext does indeed break down previously organic works into component parts, and dissolves some of the hierarchies and ordering schema of print. Online versions of long encyclopedia articles lose much of the order given to them by page layout. Even though they have cues to suggest that there are "next" and "previous" sections, readers have trouble knowing where they are in an article (near the beginning, exactly in the middle, at the end), or whether they have found an entire article or a section. Britannica's search engine adds to this confusion, by pointing readers to specific sections of long articles, rather than to their beginnings. Specificity in searching comes at the expense of literary order.
Likewise, some style rules that were quite sensible in print become harder to enforce online. For example, Britannica gives an individual's full name on first mention in an article, and only the last name subsequently. But what if there is no single starting-point? This question came up while editing a set of timelines for Britannica CD. In that environment, readers are encouraged to start the timeline wherever they want, and they can move forward or backward with equal ease. In print there is a common beginning, a common frame of reference against which the style rule could be applied. Not so with the timelines.
Finally, some larger schema intended to give intellectual order to the encyclopedia have not survived the jump to cyberspace. In 1974, Encyclopaedia Britannica published a new edition divided into two major sections, the Macropaedia and the Micropaedia. The Macropaedia consisted of several hundred articles, some hundreds of pages long, that communicated "Knowledge in Depth" on broad topics like "The Cosmos," "Digestion and Excretion," "The Social Sciences," and "Public Works." The Micropaedia's 72,000 articles were a "Ready Reference" of material on more specific subjects, such as plants and animals ("cinchona," "tree frog"), places ("Izmir, Turkey," "Marine Biological Laboratory"), and people ("Darius III," "Tallis, Thomas"). Whatever one may think of the success or failure (or mere folly) of such an effort in print, online the Macropaedia - Micropaedia division has been utterly mystifying. In print, the Macropaedia and Micropaedia could be distinguished by differently-colored spines, by different layouts (two-column, with few illustrations, versus three-column, with pictures on almost every page), and by the fact that you could see how many articles were on a page. In other words, the printed volumes' physical qualities contained information that readers could interpret to figure out where they were. That information is stripped away, or at best incompletely replicated, in the electronic Britannica.
While hypertext theory is very astute in its analysis of how the shift to electronic publishing affects the structure of texts, its ability to make sense of the impact of hypertext on other aspects of literary experience and production is more uneven. In an effort to draw clear distinctions between print and electronic publishing, hypertext writings have tended to stereotype print, authorship, and reading, and to exaggerate the differences between printed and electronic texts, authorship, and reading. Drawing on Roland Barthes' S/Z and Michel Foucault's widely-reprinted "What is an Author?" hypertext theory declares that the "author" is a social category that ceases to exist in electronic publishing.
But the mechanics of authorship vary greatly already in the printed world. Consider two (of countless) examples: scientific articles and legal briefs. In scientific and engineering journals, multiple authorship is the rule, not the exception. Authorship is used as a mechanism for distributing credit for research, rather than as a way of identifying the manipulator of words. The real creativity in science happens at the lab bench or blackboard, not the word processor, and the activity of "writing up results" is generally seen as very different from real research. Articles can have a few, a dozen, or even hundreds of coauthors, and even the rules for determining the order in which coauthors are listed - alphabetically, by seniority, etc.- vary from field to field.
Legal opinions cast the links between writing, creativity, and credit differently. Opinions, which explain the thinking behind a court ruling, are written by clerks, following a judge's general instructions. They may be lightly or heavily revised, but the final versions are published under the judge's name; the identity of the clerk, and the amount of work judges generally spend on revisions, is an open secret, but it is not acknowledged in the documents themselves. (A few very influential judges write their own opinions, but they are as well-known for deviating from the norm as a famous painter who worked without assistants, and painted his own sky and backgrounds, would have been in Renaissance Italy.) 
The examples of scientific and legal authorship share an important characteristic: they break apart the link between writing and creating that is at the very core of humanistic scholarship, which hypertext theorists have taken to be the print world's norm. Finally, some print works ascribe authorship to corporate entities rather than to individuals. Reference works often do not have authors, but base their authority on historical reputation and the strength of the brand name. An encyclopedia has no single author, but hundreds of contributors, ranging from staff writers to eminent scholars. 
Hypertext theory is thus entirely correct to say that the concept of the solitary author is a construct based on patterns of labor, ideas about intellectual property, and the relationship between publication and professional advancement. One need only look at the ecology of printed works to see a number of different species of authors.
Hypertext theory claims that reading a printed text is a passive, solitary activity that literally disciplines and silences the reader. "The traditional practice of literacy, with its authors and readers sharing texts in isolation," Myron Tuman argues, is "a totality envisioned as rigidly hierarchical in its formulation and distribution of knowledge, a top-down system that guarantees the dependence and isolation of individuals." 
But historians and ethnographers of reading show that reading practices vary profoundly across cultures and time. In early medieval monasteries, reading was a public, oral exercise, not a solitary one.  Reading occurs in a diversity of spaces, ranging from the intensely private to the highly public. As Janice Radway's classic Reading the Romance showed, avid readers of romance novels use these apparently sexist and patriarchal books to create privacy and leisure time - a powerful and precious thing for overworked housewives and mothers.  Book groups and journal clubs are devoted to a kind of collective reading that is most definitely not private, passive, or hegemonic. Other reading environments combine the private and public. In the cafes surrounding the University of California, Berkeley, people move back and forth between private and public forms of reading, sharing tables but reading silently, talking across tables about articles or chapters, joining with classmates to discuss assignments. This is an active, engaged and complicated space that affords opportunities for everything from solitary contemplation to group discussion. 
These objections point in turn to several more basic flaws in the hypertext literature's approach to both print and electronic publishing.
The first is that hypertext, while a breathtakingly radical and even noble technology, doesn't actually exist, though hypertext theory is based on the assumption that it does. Versions of some of hypertext's features can be found in software programs like Storyspace and Intermedia, on CDs, and on the World Wide Web. But none of these is hypertext as described in the literature. The Web - by far the most popularly accessible technology incorporating elements of hypertext - does not allow readers to create their own links between documents, nor does it give readers the ability to publish responses (except by publishing their own separate documents). Until these capabilities are widely (much less universally) available, readers cannot be transformed into writers, and authorial power cannot be diffused; the social system of traditional print culture can survive, more or less intact, in online publishing.
Likewise, while hypertext theory assumes that readers have access to the totality of printed work (or at least literature) in electronic format, no new media system comes close to fulfilling this feature. There is a dazzling amount of material in CDs and available online, and it is growing tremendously; but it is not, and it shows no signs of becoming, the entire world of writing. Until that happens, the world described by hypertext will not come to pass.
The fact that hypertext theory writes about the revolutionary impact of a nonexistent technology wouldn't matter if its proponents recognized that fact. But they don't. So far as I can tell, never once does a sentence in George Landow's Hypertext, to take one example, start with "Hypertext might" or "Hypertext could." Instead, the language is declarative: hypertext will, does, shall, forces, demands, compels, evinces, or leads to, revolutionary changes in authorship, texts, education, the canon, and the politics of knowledge. There are no things that "might happen," no fuzziness or historical contingency, no space for the author or reader to act as free agents, no possibility for the future to develop one way versus another. Technological determinism is bad enough, but determinism caused by nonexistent technology is worse still. At the least, this raises the question of how useful hypertext theory is as a guide to understanding those technologies that do exist, and opens the possibility that more mature theories may be needed to make sense of new media.
Another set of problems derives from hypertext theory's reduction of the complicated, heterogeneous technology of print into a single kind of "text." In his analysis of print culture, George Landow writes that "The object with which one reads the production of print technology is, of course, the book" - a very revealing "of course."  Throughout Hypertext, the words "text" and "book" are essentially interchangeable, and "books" are taken to be academic monographs or novels. But the book is not the only kind of print technology: newspapers, magazines, brochures, coupons, street signs, stickers, t-shirts, and a million other printed artifacts form a kind of textual white noise background to our lives. Indeed, most of us probably spend more time reading these ephemeral texts than we spend reading books. Further, the physical entities that we call "books" are really a myriad of things. A novel, a supply-parts catalog, and a reference work all consist of pages bound between covers. In a strict material sense they are all books, but they are very different from one another. Indeed, a single book can reflect in microcosm the diversity of the macrocosm: a volume of the Encyclopaedia Britannica can consist of hundreds of articles, ranging from a few sentences to several hundred pages - essentially books within the book.
This reductionist attitude toward the products of print culture is symptomatic of a larger problem, the tendency to overstate the differences between electronic and printed media. The rhetoric of disruption, of total difference, of an imminent (and, to some, apocalyptic) conflict between old media and new, leads to an oversimplified view of both electronic and print media, blinds us to the many important and subtle continuities between them, and obscures some very promising avenues of future study.
Still, despite these flaws, hypertext theory has considerable value as a starting-point for studies of multimedia and electronic publishing. Its analysis of changes that occur in texts as they become hypertexts is of great use to authors and designers; only when moving beyond the literary work itself do problems occur. Read as specimens of futurist literature, books on hypertext published in the early 1990s have held up better than the countless articles heralding a bright future for interactive TV, push technology, and online communities. Finally, hypertext theory deserves credit for trying to make sense of technologies and forms of commerce that could have a terrific impact on literature, society, and culture. The task for the next generation of scholars of hypertext is not to disavow this work, but to build on it.
An Agenda for Future Work

The previous section outlined the strengths and flaws of the first generation of hypertext theory. What should the second generation of hypertext scholars do? What questions should it ask, and what tools should it employ? In this section, I will discuss the research avenues opened by adopting the tools of science and technology studies (STS). An STS approach to hypertext and multimedia would provide a solid methodological foundation for examining the interaction of technology and content that is central to making sense of new media. It would allow us to move past the rhetoric of radical difference that characterizes much comparison of print versus electronic media, to appreciate the similarities between the two enterprises, and to explore the continuities between the practices of print and electronic production. Many electronic publishers are also print publishers, and the content that appears online or on CD is leveraged from the print world.
It would also help us understand the materiality of multimedia. Contrary to the arguments of hypertext theorists, who see electronic media as perfectly malleable and moveable ones and zeros - content as a kind of frictionless superfluid - multimedia content is often more tightly connected to the material world of technology and software than the printed word is to the page. Understanding this would call attention to the extremely complicated relationship between technology, content, and design that defines much multimedia development. It would also make clear the importance of surveying the full range of skilled labor and judgment that goes into creating multimedia, and how multimedia development draws together the skills of editors, writers, software designers, database managers, and artists. Finally, it would make clear the need to get onto the shop floor of the new electronic workplace, and reconstruct the moral economy of multimedia and hypertext development, to see how issues of labor, skill, and authority are negotiated in it, and to determine how those negotiations affect the character of new media.
Science and technology studies (STS) is a relatively new and heterogeneous field, but its major works share several core assumptions. First, they deal not just with technologies, but with technological systems, consisting of physical artifacts (e.g., electric power grids, railroad lines, computers and networks), institutions (manufacturers, distributors, regulatory agencies, universities), people (designers, managers, consumers), and culture. Systems, rather than discrete pieces of hardware or software, are the essential units of analysis, as the parts of a system interact in ways that produce behavior that could not be predicted by examining each component in isolation. STS argues for a mutual influence between technology and society: technology can be a powerful agent of change, but culture and politics shape technology just as powerfully. The design of machines reflects political interests, aesthetics, and economics as strongly as it does the laws of physics. This raises the possibility that, contrary to the claims of determinists and enthusiasts, technologies are not simply solutions to technical problems, but can be solutions to social ones as well. Eschewing technological determinism and emphasizing the contingency of technological change forces one to look closely at the agents - engineers, companies, government agencies, consumers - who are involved in shaping technologies, and to reconstruct the histories of their interaction. It requires students to ask how what constitutes a desirable technology, or an objective solution, is defined. Finally, it suggests that the history of technology is marked as much by continuities as by radical change. Even apparently revolutionary technologies may reinforce existing political arrangements or serve the interests of established powers; indeed, the rhetoric of radical change may be employed to obscure underlying continuities.
The first virtue of an STS approach to hypertext and multimedia development is that it would let us see the continuities - technical, intellectual, and cultural - between print and electronic publishing. This is an aspect of the history of new media that has been obscured by the current literature's rhetoric of radical difference. The more recent literature on "the future of the book" has looked to the past to chart the trajectory of book culture, and demonstrates the possibilities opened up by a greater degree of respect for the history of media. Frederick Kilgour's Evolution of the Book, for example, adapts the "punctuated equilibrium" evolutionary model of biologists Stephen Jay Gould and Niles Eldredge to develop a history of writing and printing from Mesopotamian times to the present divided into seven punctuations. The first occurs around 2500 BC, with the development of clay tablets in Mesopotamia, which combined an abundant natural resource (clay) with a writing system recently developed from an older system of tokens. The second occurs five hundred years later in Egypt, with the development of the papyrus roll, on which scribes could write more quickly and easily, and which was easier to handle, store, and transport. The codex book, which makes its appearance in the second century AD, marks another evolutionary jump. The fourth begins in the 13th century, with the invention of word spacing, pagination, and indexing, and culminates around 1450 with Gutenberg's printing press. Gutenberg's system remained unchanged in its essentials until the nineteenth century, when steam-powered presses appeared. Freed from the limits imposed by human strength and endurance, capable of printing on both sides of a sheet, they turned out thousands of pages per hour, supplying the cheap fiction, newspapers, and magazines that made the printed word into a mass-market commodity.
Each of these shifts had radical consequences, but Kilgour shows that they are best understood within an evolutionary rather than revolutionary framework. This is as true for the most recent shifts - offset printing in the 1970s, and electronic publishing in the 1990s - as for the most ancient. Decades before the Internet and computers became a medium for distributing and reading texts, publishers invested billions of dollars in computer hardware, proprietary publishing systems, databases, and other equipment electric and ephemeral. (More recently, conglomerates that have absorbed small publishing houses have had to deal with the chaos of having dozens of different systems tracking payments to authors, copyright, printing, advertising, and inventory.) These preinstalled technological bases represent tremendous investments of money and company time, and publishers are far more likely to try to adapt them to the online world than to abandon them altogether. The capabilities of existing technology may have a serious impact on the character of a publisher's online venture. Until recently, however, these systems have been designed to print books, newspapers, magazines, pamphlets, and other hard-copy documents. Content has long existed in electronic form within publishing companies. What's new is that it can now be delivered to customers without first being printed. What's revolutionary is that, thanks to Web browsers (and, in the near future, electronic books), readers have the power to read electronic documents.
Some publishers work in both print and electronic publishing, creating yet another line of continuity between the two media. Despite predictions that electronic publishing would spell the death of print, most publishers have found ways to minimize competition between printed and electronic versions of their products. Free magazine sites such as Fast Company and Wired (and the Web sites for the cartoons Dilbert and Doonesbury) publish content online a couple of weeks after it has appeared in print, so as not to erode their print-subscriber base. On the other hand, subscription-only sites like the Chronicle of Higher Education may offer members advance copies of articles online. Newspapers are experimenting with other business models. Unlike long-lead journalism, their content is extremely time-sensitive-- "yesterday's news" is an apt epithet, and no one knows what tomorrow's news will be-- so some are hoping to make money by offering additional content available only online, or by selling material from online archives. However, in all cases, the divergence between the printed and online versions of periodicals is relatively small. This means that the two will resemble one another in organization and content, at least until online publishing is profitable enough to exist as a self-sustaining enterprise. The journalism backgrounds of online writers combine with the commercial and literary realities of print publishing to guarantee that the intellectual habits of newspaper and magazine writing define the style and voice of online content-- even in electronic-only venues like Slate and Salon.
Further, online publishing offers some economic advantages that are attractive to print publishers. Lowering the technical barriers to online commerce and convincing the public to pay for content are big challenges; but once they're solved - and it is an article of faith that they will be - existing publishers could benefit handsomely. The publishing industry's investment in integrated computer systems, which allow publishers to track everything from authors' advances and royalties to distribution and sales, is justified as necessary to make the industry faster, more cost-effective, and more efficient. The Web, by giving publishers the ability to eliminate large production costs, speed delivery of new content to market, and reach new markets at an extraordinarily low price, could be the ultimate cost-cutting device. Electronic publishing, whether on CD, DVD, or online, offers another medium for selling copyrighted works. Many CDs are built around material that appeared previously in print, supplemented (or "enhanced," as marketing would put it) with search engines, video and audio, glossaries or dictionaries, or primary documents. Voyager and Byron Preiss Multimedia, two leading CD companies, publish a number of discs built around printed books, ranging from Stephen Jay Gould's Bully for Brontosaurus to Bruce Catton's American Heritage Picture History of the Civil War, while Encyclopaedia Britannica and Oxford University Press offer digitally-delivered versions of printed reference works. Other CDs are digitized versions of documentary films or movies-- or, in one case, the "rockumentary" This Is Spinal Tap-- with text enhancements, alternate takes, or interviews. Putting newspapers and magazines online gives publishers another opportunity to sell advertising.
In short, at several important levels - technological, commercial, and intellectual - the advent of electronic publishing is not quite the revolution that hypertext theory assumes. In technical terms it is somewhat more like a step in an evolutionary process. Many online publishers continue to publish in print, while others have strong roots in traditional publishing. In commercial terms, online products and strategies may complement, rather than compete with, their printed counterparts. Finally, many CDs and Web sites are built around content that was previously published - not created specifically for electronic presentation.
Too little of the current literature on hypertext appreciates the complexity of electronic publishing, or the kind and amount of labor it requires. John Perry Barlow, for example, has constructed an elaborate and influential theory of authorship, intellectual property, and the economics of content around the premise that the Internet is "one metabottle" whose content is "highly liquid patterns of ones and zeros," capable of endless duplication and distribution, rather than ink on a physical page. In fact, more often than users realize, the relationship between design, technology, and content in multimedia is remarkably intimate and complicated-- far more so than the relationship between content and print. As Richard Grusin notes, most hypertext theory assumes that electronic content has escaped the mortal coil of materiality for an ether of instant communication and effortless replication, and develops an "analysis of electronic writing [that] relies heavily on its unproblematic characterization as immaterial, ephemeral, evanescent." However, he continues, "The problem with this characterization... is that these ephemeral electromagnetic traces are dependent on extremely material hardware, software, communication networks, institutional and corporate structures, support personnel, and so on."
The reality of new media is very different from the world described by Barlow and hypertext theory. The raw materials of Web sites and CDs - text documents, JPEGs and GIFs, QuickTime movies, audio files - are highly fungible, and they can be copied and shared with great ease. But as anyone who has tried something as simple as opening a document written in one word processor with a different program - or opening a Windows document on a Macintosh - has learned, software does not speak in a universal language of ones and zeros. The fact that converting documents from one format to another can be a formidable task gives readers some sense of how different a multimedia CD can be from the basic text, pictures, and video that go into it. The finished products are very different from the raw material. Indeed, the work of finishing often introduces layers of custom markup, translation into proprietary formats, or integration into special software packages. All this makes the final work very difficult to pour back into the "metabottle."
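The point about formats can be made concrete with a small illustration (a hypothetical sketch, not drawn from any particular publishing system): even at the level of plain text, the "same" words are stored as different ones and zeros depending on the convention a program uses, and bytes written under one convention are mangled when read under another.

```python
# A minimal sketch: identical text, stored under two different
# encoding conventions, is not the same ones and zeros.
text = "résumé"
as_utf8 = text.encode("utf-8")      # one common storage format
as_latin1 = text.encode("latin-1")  # another

# Different bytes on disk for identical content:
print(as_utf8 == as_latin1)         # False

# Reading one format as if it were the other mangles the text:
print(as_utf8.decode("latin-1"))    # "rÃ©sumÃ©", not "résumé"
```

Scaled up from a six-character word to an archive of articles, images, and video, this is the conversion problem that makes Barlow's frictionless "metabottle" so elusive in practice.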
So even in electronic form, content - the Platonic realm of ideas - is still constrained by the prison-house of materiality: not the printed page this time, but the electronic structures and formats in which content is encoded. If anything, multimedia content is actually more tightly interconnected with design and programming than it is with print, and the relationship between the three is more unpredictable and mutually influential than most people realize. Print has influenced writing and reading in numerous subtle ways, but (at least after several centuries of learning to deal with the linearity imposed by the medium) the expression of ideas in print is independent of printing technologies: a handwritten essay and a printed one are the same, and authors don't have to worry that the choice of printing press will affect what they can say. The content of multimedia, in contrast, can be powerfully influenced by the capabilities of the technology, the flexibility of a design, the ingenuity of an artist. Far from escaping the last bounds that tied it to the page, electronic text falls back into the prison of materiality, and joins pictures, sound and video in being heavily influenced by recording (e.g. HTML, Shockwave, Java) and playback (e.g., browsers, CD interfaces) technologies. 
The experience of creating a set of timelines for the 1998 Britannica CD illustrates this relationship. The basic screen interface consists of a column with years, flanked by two large columns for text; on the outside are two small columns for pictures. Navigational buttons and pull-down menus surround this main body. This design allows readers to load into each text column a different subject, such as science and literature, or architecture and women's history, and thus to compare them. The timeline was created in Shockwave. Since the program is four megabytes, this is something of a technical tour de force, as Shockwave was designed for creating small animations and programs. All the data - pictures, writing, everything - are in a special Shockwave-only format.
As the project unfolded, we discovered a variety of things that required us to make changes in the design, text, or programming. The need to keep loading times short - a Shockwave animation reloads every time you go to it, rather than being cached - forced us to keep the pictures relatively small, and put a 250-entry limit on each subject. The original design called for a timeline that displayed ranges of dates, as many printed timelines do. However, the sorting engine that figured out which entries belonged where, and wove together entries from two different subjects, couldn't handle such complexity. This in turn required revising the timeline text. Entries on things as broad as the development of Gothic art or the start of the Renaissance had to be rewritten to refer to specific years or events. Since each timeline entry occupied a fixed amount of space, no entry could be longer than a certain number of characters. Any change to the design - such as resizing the space, or changing the point size of the type, or switching to a new font - would affect that limit, and force yet more changes in the text. However, readability had to be balanced against content: the easiest-to-read entries would have been so short as to be almost meaningless.
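The interplay between design and text budget can be sketched in a few lines of code. The function and numbers below are illustrative assumptions, not the actual values or tools used on the Britannica timeline; the point is simply that any change to the type or the layout ripples into a new character limit, and from there into the text itself.

```python
# Hypothetical sketch of a fixed-space text budget. The width, point
# size, glyph-width ratio, and line count are invented for illustration.

def max_chars(column_width_px, point_size, avg_char_width_ratio=0.5, lines=3):
    """Estimate how many characters fit in a fixed text box.

    avg_char_width_ratio: average glyph width as a fraction of the point
    size -- a rough per-font constant, assumed here for the example.
    """
    chars_per_line = column_width_px // int(point_size * avg_char_width_ratio)
    return int(chars_per_line * lines)

# A design change ripples into the editorial budget:
print(max_chars(200, 10))  # 120 characters at 10-pt type
print(max_chars(200, 12))  # 99 characters after switching to 12-pt
```

Resize the column, change the font, or add a line, and every entry that no longer fits must be rewritten - which is exactly the cascade of revisions described above.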
This complexity is a function of the fact that multimedia development is both high-tech and labor-intensive. The fact that the work is done on computers doesn't mean (as some assume) that the computers do the work: behind every CD or Web site is an immense investment of human time and energy. Like many forms of automation, which promise error-free work devoid of human involvement but in fact require skilled labor to succeed, the computerization of printed publications and databases is an expensive and time-consuming process.  The conversion of library cards into electronic records, as Nicholson Baker shows in his passionate essay "Discards," is not a "mechanical" process, in the sense of being easy and error-free. Computer databases lack the flexibility of human catalogers to deal with variant names (such as "Alfred Lord Tennyson" versus "Alfred, Lord Tennyson"), while digitization leaves out subtle but potentially important kinds of data contained on cards, such as acquisition dates, accession numbers, and the physical evidence of a subject's popularity manifest in a card's repeated handling.  Anyone who has encountered online versions of literary works with misspelled words, typos, or other problems has glimpsed the challenges of creating these texts. As one review noted, "the rigor, selectivity and practices that good publishers have developed to ensure the accuracy of their documents" is often missing from online databases and literature collections, to their considerable detriment. 
The numerous online versions of Shakespeare's plays give a sense of the work required to create even simple electronic versions of printed texts. If any body of work could support a multitude of sites - hosting variant editions, allowing readers to compare changes in punctuation, annotation, etc. - it's the Bard's. However, rather than develop a robust ecology of multiple editions and texts, most online Shakespeare sites are apparently built around a digitized version of a single late 19th-century edition. This source has exhibited weedy behavior, establishing itself in a variety of locations. It is not technically difficult to create a digital version of a printed work: a scanner, OCR software, and a word processor are all you need to create text versions of long works, while short ones can be retyped. But one still has to correct mistakes created during the OCR stage, give the document a uniform format, and add hyperlinks (among many other things) to create a finished text. Especially for someone developing a Web site in her spare time, the decision to use an existing digital copy can make good sense.
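A sketch of the post-OCR hand-work described above: the "corrections" below are invented examples of common OCR confusions, not drawn from any real Shakespeare edition, but they show why cleanup cannot be a purely mechanical step - every entry in the substitution table embodies a human judgment about what the page really said.

```python
import re

# Illustrative substitution table for known OCR misrecognitions.
# These particular fixes are assumptions made up for the example.
OCR_FIXES = {
    "tbe": "the",        # 'h' misread as 'b'
    "Macbetb": "Macbeth",
}

def clean_line(line):
    # Normalize whitespace, then fix known misrecognitions word by word.
    words = re.sub(r"\s+", " ", line).strip().split(" ")
    return " ".join(OCR_FIXES.get(w, w) for w in words)

print(clean_line("To  be or not to be, that is tbe question"))
```

A real project multiplies this by thousands of lines, plus formatting, hyperlinking, and proofreading against the printed source - which is why borrowing an existing digital copy is so tempting.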
These examples illustrate the very human choices that lie at the foundation of even the most automated information-management and -retrieval systems. The fact that so much of the experience of multimedia involves interacting with interfaces, search engines, and software - sometimes so much so that it is hard to focus on the content underneath - draws attention away from the human judgment that defines how these systems will perform. Consider two technologies central to hypertext: search engines and hyperlinking. The Britannica search engine is a sophisticated WAIS-based engine that works by running search words entered by the user against a database of index terms, and returning a relevancy-ranked hit list. It is an impressive piece of programming; but it is worth remembering that the index terms are created by humans who read each article and decide what terms should be attached to it. Likewise, hyperlinks - the very stuff of hypertext - are set by editors, not by a program. In both cases, different people would perform the same tasks differently, depending on what they knew about the subject and their ideas about what constitutes good indexing or hyperlinking. This is not to say that these systems are flawed because of this human element, or that they would perform better if they were completely automated. The history of technology shows that there is no such thing as a system that performs entirely without human judgment. Even high-tech systems, it turns out, rest on bedrocks of human thought and choice.
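The principle - automated retrieval resting on human indexing - can be shown with a deliberately minimal sketch. The articles and index terms below are invented, and the actual Britannica engine was a far more sophisticated WAIS-based system; what matters is that the ranking, however it is computed, operates on terms a human indexer chose after reading each article.

```python
# Invented index: each article maps to terms a human indexer assigned.
INDEX = {
    "Gothic architecture": {"gothic", "cathedral", "architecture", "medieval"},
    "Renaissance art":     {"renaissance", "art", "painting", "italy"},
}

def search(query):
    """Return article titles ranked by overlap with the query words."""
    words = set(query.lower().split())
    # Score by how many query words match the human-assigned index terms.
    hits = [(len(words & terms), title) for title, terms in INDEX.items()]
    return [title for score, title in sorted(hits, reverse=True) if score > 0]

print(search("medieval cathedral architecture"))  # ['Gothic architecture']
```

A different indexer, choosing different terms for the same articles, would make the "same" engine return different results - the human judgment beneath the automation.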
This shows that the work and technical challenges of publishing electronic content are quite substantial. The work of converting data from one format to another (a serious challenge for any company, as the income of consulting firms specializing in converting "legacy data" demonstrates); of running index routines and search terms; of adding metadata; even of keying in printed works - all are important, and all need to be studied more closely.
It will also be essential for future students of hypertext to look beyond the electronic writer's garret - which has been the exclusive focus of hypertext theorists to date - and consider the work of all the varied contributors to the enterprise of electronic content creation, be they authors, editors, artists, programmers, or interface designers. Decades ago, historians of the book discovered that looking at the world of printers, engravers, and book-sellers could yield a richer and more interesting picture of intellectual life than one based exclusively on writers. Adopting this kind of wide view, and treating the work of artists, programmers, and others as seriously as that of writers, will be essential to improving our understanding of the world of new media. To paraphrase British labor historian E.P. Thompson's The Making of the English Working Class, a broader perspective of electronic content production will save those other workers from the enormous condescension of theory.  The continuing importance of skill and choice raises the question of how tacit assumptions, cultural norms, and other factors shape the data that serve search engines and compression algorithms - and, indeed, how those factors shape the software itself. In his controversial but well-known study of numerically-controlled machine tools, David Noble argued that automated - and apparently objective and scientific - systems reflect the political and economic interests of their creators. The next section follows one avenue into this sort of inquiry. 
Not only does multimedia development require the combined skills of many different kinds of people; it casts the relationships between those people differently than print does. Print production is mainly organized as an assembly line, with clearly-defined roles and divisions of labor. The creation of multimedia is, in contrast, a fundamentally collective affair. Such collectives are normally called "teams," but that title imposes on them a coherence and uniformity that they almost never possess. Members of a true team possess common skills, share a certain outlook, play by the same rules, and agree on what constitutes winning and losing. The phrase "collaboration," with its overtones of power struggles, ambiguities, and temporary alliances with alien groups, does a better job of capturing the emotional texture of development work. (It also calls attention to the fact that multimedia development, unlike sports, does not have clear rules and fixed playing periods.)
At this early stage in its history, different groups are still struggling to establish places for themselves in the development hierarchy. Authors, whose creative vision once determined the alpha and omega of publishing, are both critical and marginal: they're valued for their intelligence and manuscripts, but generally are not major participants in development work. Paul Roberts' experience on "Virtual Grub Street" is somewhat extreme, but only because it reflects so vividly the forces acting on all multimedia writers.  Programmers reduce multimedia to a precise world of pixels and numbers and instructions; for them development consists of long stretches of programming broken by fights to minimize feature creep. Animators and artists think in terms of storyboards and images rather than numbers or words. Interface designers think in terms of architecture, screen layouts, and look and feel. For them, the creative process is a constant fight to maintain visual coherence and style, and to keep the pieces unified, manageable, and accessible. The characters of the working lives of different multimedia laborers are as varied as their work: the stereotypes that art editors work together in open spaces (the more loft-like the better), text editors labor in absolute silence, and programmers work in a kind of isolated chaos, have some basis in reality.  As a result, someone has to process and filter information to members of the group: specs to programmers, lists of pictures to art editors, chronologies of action and storyboards to animators, geocoded data and xeroxes of maps to cartographers.
Learning to speak these multiple languages, to translate information into those forms that one's collaborators will not only use but tolerate, is an essential skill for producers to master. Before information can become electronic liquid, pouring effortlessly from words into pictures into databases, it must first move between the various cultures of its creators. The world of multimedia development, in short, is a trading zone that brings together people with vastly different backgrounds and expertise, their own local cultures and career trajectories, and diverse professional expectations. Balancing these different styles of working and thinking, and the demands each makes, can be a tremendous challenge. 
Case studies suggest that companies deal with these tensions in various ways. According to Fred Moody's I Sing the Body Electronic: A Year with Microsoft on the Multimedia Frontier, Microsoft's multimedia efforts are dominated by two major groups: "designers," who are responsible for creating the vision of a product, and "developers," who do the work of transferring it from the spec sheet and blackboard into the precise universe of C++ and CDs. Between the two are great divides of background and culture. As one seasoned manager put it, "Designers are invariably female, are talkative, live in lofts, have vegetarian diets, and wear found objects in their ears. Developers are invariably male, eat fast food, and don't talk except to say 'Not true.'" The existence of these two cultures creates a tension that is alternately infuriating and exhilarating; both cultures also reflect the high-pressure, eccentric yet analytical culture of the division's software-manufacturer parent.
The situation at Britannica is different, because it got into electronic publishing from print rather than software.  Programmers have less influence on the day-to-day work of editing and developing content, and there is no equivalent to developers: that work is split between marketers and product managers. Instead, editors (who hardly figure in Moody's story) have turned into producers. For years they were responsible for developing article outlines and working with authors, and this role has evolved into one in which they develop the basic vision for a project, then work with art editors, cartographers, and contributors to fashion and fit together its various pieces.
The contrasting examples of Microsoft and Britannica suggest that there is a range of working styles in commercial multimedia development. All of them are probably different from the working style of academia, whose culture is dominated by faculty and whose priorities are education and learning, with the result that multimedia development serves pedagogical and research functions. Sites like Rice University's Galileo Project, Virginia's In the Valley of the Shadow, and Tufts' Perseus Project were started and controlled by faculty. Academic multimedia development seems to adopt the rules of precedence and deference that structure relations among professors, graduate students, undergraduates, and staff. It has also undoubtedly profited from the extension of the traditional freedom accorded to faculty in teaching and research into the realm of electronic content development. Indeed, the debate over institutional requirements that faculty put course material online, and the question of who owns that material once it goes online, can be seen as a struggle over whether the traditional moral economy of academia will define the creation of virtual universities and courseware, or whether a new moral economy (dominated by administrators and managers, profit- and product-oriented, and turning teachers from central players into interchangeable human resource inputs) will be incubated in cyberspace - and someday overrun academia.
Existing moral economies have an effect on the way multimedia development is organized, what goals it pursues, and who controls it. Technology-- especially development tools-- can likewise have a strong influence on how work is organized within a group or institution, and where creative power resides. Database, graphic, animation, and interface development are all currently specialized activities. The tools required for each are different, relatively expensive, and require considerable training and experience to use skillfully. Anyone familiar with word processing can use an HTML editor; but Access, Photoshop, Shockwave, Java, and Perl require entirely different levels of technical competence, substantial capital investment, and expensive employees. This gives definite advantages to well-funded companies over individual content creators, and affects the size of development groups and the complexity of relationships within them. Changes in key development tools that seek to lower the technical bar for multimedia authoring and streamline development (like Night Kitchen's TK3) could affect the composition and organization of the multimedia workplace.
An illustrative parallel can be drawn with the history of rock music recording. Until the 1950s, music-recording technology was complicated and expensive, which limited the number of recording companies and rendered studio time a precious commodity. Recording companies and their delegates (A&R men and producers) had great control over musicians, and the studio itself operated in "craft-union mode," characterized by "strict division[s] of labor" between musicians, recording engineers, and producers. The development of multitrack recording technology changed the economics, and the moral economy, of the studio. By lowering investment costs and simplifying the recording process, multitrack made it possible in the 1950s for entrepreneurs to open studios in such far-flung places as Memphis, Detroit, and Clovis, New Mexico. It also allowed them to develop a new way of working in the studio. Smaller studios allowed "an interchange of skills and ideas among the musicians, technicians, and music market entrepreneurs," gave musicians more control over the recording process, and broke down the craft-union mode of production. In effect, the studio was transformed from a space in which songs were simply recorded to a space in which musicians experimented with microphone placements, arrangements, and sound effects - a space supporting creativity as well as productivity. Since the 1960s, the steadily falling cost of recording equipment has made it possible for groups to own their own studios, and control the recording process even more completely. It has also changed the role and self-image of recording engineers. In the immediate postwar years, the height of technical craft was to create a recording that sounded like it came from the dance hall or auditorium, rather than the studio. 
However, as the studio itself became a performance space, engineers began to represent themselves less as craftsmen than as artists whose instrument is the studio itself, and whose expertise is defined by their ability to manipulate the studio's acoustic properties.
It will be important for students of multimedia to analyze and compare these different organizations, divisions of labor, and cultures, for they have a subtle but profound influence on the kinds of products - the kind of content - companies create. The fluid character of the multimedia environment means that developers face lots of problems in their work. But the fact that content, technology, and design are interdependent - and all are dependent on human labor, choices, and judgment - means that any given problem can have a number of different solutions. Indeed, it is misleading to think of a problem as being a "technology" problem, or a "content" one. A graphics-intensive but slow-loading piece of multimedia could be made faster by rewriting the code, by changing the structure of the underlying database to speed up (or reduce the number of) calls a program makes, or by shrinking the pictures. Under these circumstances, the nature of relations within groups - how the various members of a group interact, how they assess the value of different features and skills, and how they understand and solve problems - can strongly influence what solutions are chosen. To put it another way, the moral economy of the multimedia workplace - the rules that define the interactions between and contributions of different players in the development process - is an essential factor in determining the shape of new media. The term "moral economy" comes from E. P. Thompson, who argued that precapitalist economies were "moral" economies, in which actors were driven more by social norms and class-defined expectations about roles and obligations than by the impersonal rules of the marketplace. The moral economy of the multimedia workplace has not yet received much attention.
By paying attention to such moral economies, we will be in a better position to understand how roles are created, how authority is established and maintained between competing forms of skill and expertise, and how trade-offs between content and technology are decided.
Conclusion
In Hypertext: The Convergence of Technology and Contemporary Critical Theory, George Landow argued that hypertext systems could serve as laboratories for testing critical theory. In that spirit, this article has reported on the experimental evidence gathered from the new media marketplace to test the current viability of hypertext theory. The first generation of academic writing on hypertext concentrated on applying the tools of literary criticism to emerging technologies like Intermedia and Storyspace. This work has proved quite useful in making sense of certain aspects of multimedia, but the problematic character of its analyses of authorship and reading reveals a number of weaknesses. It theorizes about an ideal rather than a real technology, exaggerates the differences between print and electronic publishing, considers books to be the natural traditional form of "text," and focuses too narrowly on academic work and writing.
Adopting the tools of STS in future studies, and analyzing a wider variety of development environments, would allow us to build on the foundation laid by this first generation of work. It would yield a perspective that gives equal attention to the important continuities between print and electronic publishing, as well as to the differences. It would give solid reasons for examining the challenges involved in making electronic content - or making content electronic - and a better appreciation of the real strengths and limitations of the medium, particularly the degree to which electronic texts are malleable and fungible. It would widen our focus away from the sharp end of writing to include designers, database managers, programmers, and all the other people who create electronic texts. It would give us the means to appreciate the complicated relationship between content, design, and programming. Finally, it would provide the foundation for studying the moral economy of the electronic workplace - the ways in which authority and labor are divided between editors, artists, software designers, and producers - and seeing how moral economies shape the creation of digital content.
Acknowledgements
My thanks to Amy Huberman, David Kaiser, Robert Kohler, Christopher Kutz, and Heather Pang for their advice and encouragement. The arguments and opinions expressed in this paper are entirely my own, and do not represent those of Encyclopaedia Britannica. Author's address: Encyclopaedia Britannica, 310 South Michigan Avenue, Chicago, IL 60604; email@example.com.
About the Author
Alex Soojung-Kim Pang has created electronic content at Encyclopaedia Britannica since 1996. He holds a Ph.D. in History and Sociology of Science from the University of Pennsylvania, and has conducted research on visual representation in science, the history of American technology, and science in Victorian culture. His article on "The Work of the Encyclopedia in the Age of Electronic Reproduction" appeared in the September issue of First Monday.
Notes
George Landow, Hypertext: The Convergence of Technology and Contemporary Critical Theory (Baltimore: Johns Hopkins University Press, 1992), chapter 1 of which is available online; Jay David Bolter, The Writing Space (Hillsdale, NJ: Lawrence Erlbaum, 1992); Myron Tuman, ed., Literacy Online: The Promise (and Peril) of Reading and Writing with Computers (Pittsburgh: Pittsburgh University Press, 1992); Richard Lanham, The Electronic Word: Technology, Democracy, and the Arts (Chicago: University of Chicago Press, 1993); Myron C. Tuman, Word Perfect: Literacy in the Computer Age (Pittsburgh: Pittsburgh University Press, 1993).
 Because I will be concentrating on the common themes in this literature, I want to acknowledge that there are differences of approach, and some differences of opinion, between the various books under discussion. They do not present a completely unified front. George Landow's Hypertext argues that developments in hypertext represent the realization of poststructuralist and postmodernist theories about texts, authors, readers, interpretation, and meaning. Jay David Bolter's The Writing Space is a more technically-oriented coverage of the same issues. Richard Lanham's The Electronic Word and Myron Tuman's Word Perfect both offer definitions of hypertext, but they are mainly concerned with the pedagogical implications of the new technology. Lanham examines the impact of electronic texts in the context of the history of rhetoric, rather than modern literary theory; Tuman approaches the subject from the perspective of educational theory. Finally, Ilana Snyder's Hypertext: The Electronic Labyrinth draws upon all of these works, presenting and comparing their positions. However, these works are often dealt with as a group, and my interest is in reexamining their common, core assumptions. See Richard Grusin, "What is an Electronic Author? Theory and the Technological Fallacy," Configurations 3 (1994), 469-483, and Lanham's review of Bolter, Landow, Tuman, and other authors in his The Electronic Word: Democracy, Technology and the Arts (Chicago: University of Chicago Press, 1993), chap. 8.
 Michel Foucault, "What is an Author?" in Chandra Mukerji and Michael Schudson, eds., Rethinking Popular Culture: Contemporary Perspectives in Cultural Studies (Berkeley: University of California Press, 1991), 446-464.
 Ethnographies of reading include Jonathan Boyarin, ed., The Ethnography of Reading (Berkeley: University of California Press, 1993); Brian Street, ed., Cross-Cultural Approaches to Literacy (Cambridge: Cambridge University Press, 1993). On medieval reading, see Paul Saenger, Spaces Between Words: The Origins of Silent Reading (Stanford: Stanford University Press, 1997).
 Theoretical overviews can be found in Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, eds., The Social Construction of Technological Systems (Cambridge: MIT Press, 1987); and Hughes, American Genesis: A Century of Invention and Technological Enthusiasm (New York: Viking, 1989), esp. chap. 1. The most ambitious history of technological systems is Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore: Johns Hopkins University Press, 1983).
 On the rhetoric of radical difference, see Paul Duguid, "Material Matters: The Past and Futurology of the Book," in Geoffrey Nunberg, ed., The Future of the Book (Berkeley: University of California Press, 1996), 63-101. On the history of print, see Frederick Kilgour, The Evolution of the Book (Oxford: Oxford University Press, 1998).
 On systems integration, see John Marks, "Publish and Don't Perish: To Survive, Publishing Houses Embrace New Business Techniques," Online U.S. News (12 January 1998); on electronic books, see Janelle Brown, "Look Ma, No Ink!" Salon (2 September 1998); Steve Silberman, "Ex Libris: The Joys of Curling Up With a Good Digital Reading Device," Wired 6.07 (July 1998).
 Less reputable publishers, most notably in the X-rated and adult entertainment industry, have also leveraged back files and portfolios into online services: see Janelle Brown, "Has the Web Made Porn Respectable?" Salon (20 October 1998). Paul Franson, "The Net's Dirty Little Secret: Sex Sells," Upside (2 March 1998), discusses the technical aspects of the industry.
 John Perry Barlow, "The Economy of Ideas," Wired 2.03 (March 1994), 84-90, 126-129; see also Esther Dyson, "Intellectual Value," Wired 3.07 (August 1995), 136-141, 181-185; Richard Grusin, "What is an Electronic Author? Theory and the Technological Fallacy," Configurations 3 (1994), 469-483, quote on 476.
 The current attitude continues a century's tendency to ignore the very hard work on the part of printers, engravers, and other artisans that goes into the creation of new media, whether it be photography, television, or computing: see Alex Soojung-Kim Pang, "Victorian Observing Practices, Printing Technology, and Representations of the Solar Corona," Journal for the History of Astronomy 25 (November 1994), 249-274; 26 (February 1995), 1-13.
 On the hidden costs of automation and mechanization, see Harley Shaiken, Work Transformed: Automation and Labor in the Computer Age (Lexington, MA: Lexington Books, 1984); Ruth Schwartz Cowan, More Work for Mother (New York: Basic Books, 1983); Deborah Fitzgerald, "Farmers Deskilled: Hybrid Corn and Farmers' Work," Technology and Culture (1993), 324-343.
 Robert Darnton, The Literary Underground of the Old Regime (Cambridge: Harvard University Press, 1980); Alvin Kernan, Samuel Johnson and the Impact of Print (Princeton: Princeton University Press, 1989). E. P. Thompson, The Making of the English Working Class (London: Vintage, 1963), quote on p. 12: "I am seeking to rescue the poor stockinger, the Luddite cropper, the 'obsolete' hand-loom weaver, and even the deluded follower of Joanna Southcott, from the enormous condescension of posterity."
 Paul Roberts, "Virtual Grub Street: Sorrows of a Multimedia Hack," Harper's Magazine (June 1996), 71-77; see also Carina Chocano, "Don't Worry, Be Hacky," Salon.
 Ellen Ullman, Close to the Machine: Technophilia and its Discontents (San Francisco: City Lights Books, 1997); see also Ullman, "Disappearing Into the Code," Salon 21st.
 Fred Moody, I Sing the Body Electronic: A Year With Microsoft on the Multimedia Frontier (New York: Penguin, 1993), quote on 31. Another useful data point is Cate C. Corcoran, "Web Producer," Hotwired (14 October 1998).
 This is described more fully in Pang, "The Work of the Encyclopedia in the Age of Electronic Reproduction," First Monday (September 1998).
 David Noble, "Digital Diploma Mills: The Automation of Higher Education," First Monday (January 1998).
 Simon Frith, "Art v. Technology," Media, Culture, and Society 8 (1986), 263-279; Frith, "The Industrialization of Music," in Music for Pleasure: Essays in the Sociology of Pop (London: Routledge, 1998), 11-23; Edward Kealy, "From Craft to Art: The Case of Sound Mixers and Popular Music," in Simon Frith and Andrew Goodwin, eds., On Record: Rock, Pop, and the Written Word (New York: Pantheon, 1990), 207-220, quotes on 213.
 E. P. Thompson, "The Moral Economy of the English Crowd in the Eighteenth Century," Past and Present 50 (1971), 76-136. My application of it to this very different situation is inspired by Robert Kohler's use of the concept in his study of modern genetics, Lords of the Fly: Drosophila Genetics and the Experimental Life (Chicago: University of Chicago Press, 1994).
References
Nicholson Baker, "Discards," in Baker, The Size of Thoughts (New York: Vintage, 1997), 125-181.
John Perry Barlow, "The Economy of Ideas," Wired 2.03 (March 1994), 84-90, 126-129.
Wiebe E. Bijker, Thomas P. Hughes, and Trevor Pinch, eds., The Social Construction of Technological Systems (Cambridge: MIT Press, 1987).
Jay David Bolter, The Writing Space (Hillsdale, NJ: Lawrence Erlbaum, 1992).
Jonathan Boyarin, ed., The Ethnography of Reading (Berkeley: University of California Press, 1993).
Janelle Brown, "Look Ma, No Ink!" Salon (2 September 1998).
Janelle Brown, "Has the Web Made Porn Respectable?" Salon (20 October 1998).
Carina Chocano, "Don't Worry, Be Hacky," Salon.
Cate C. Corcoran, "Web Producer," Hotwired (14 October 1998).
Ruth Schwartz Cowan, More Work for Mother (New York: Basic Books, 1983).
Robert Darnton, The Literary Underground of the Old Regime (Cambridge: Harvard University Press, 1980).
Paul Duguid, "Material Matters: The Past and Futurology of the Book," in Geoffrey Nunberg, ed., The Future of the Book (Berkeley: University of California Press, 1996), 63-101.
Esther Dyson, "Intellectual Value," Wired 3.07 (August 1995), 136-141, 181-185.
Joseph J. Esposito, "Very Like a Whale: The World of Reference Publishing," Logos 7:1 (1996), 73-79.
Deborah Fitzgerald, "Farmers Deskilled: Hybrid Corn and Farmers' Work," Technology and Culture 34 (1993), 324-343. http://dx.doi.org/10.2307/3106539
Michel Foucault, "What is an Author?" in Chandra Mukerji and Michael Schudson, eds., Rethinking Popular Culture: Contemporary Perspectives in Cultural Studies (Berkeley: University of California Press, 1991), 446-464.
Paul Franson, "The Net's Dirty Little Secret: Sex Sells," Upside (2 March 1998).
Simon Frith, "Art v. Technology," Media, Culture, and Society 8 (1986), 263-279. http://dx.doi.org/10.1177/016344386008003002
Simon Frith, "The Industrialization of Music," in Music for Pleasure: Essays in the Sociology of Pop (London: Routledge, 1998), 11-23.
Peter Galison, Image and Logic (Chicago: University of Chicago Press, 1997).
Richard Grusin, "What is an Electronic Author? Theory and the Technological Fallacy," Configurations 3 (1994), 469-483. http://dx.doi.org/10.1353/con.1994.0039
Thomas P. Hughes, Networks of Power: Electrification in Western Society, 1880-1930 (Baltimore: Johns Hopkins University Press, 1983).
Thomas P. Hughes, American Genesis: A Century of Invention and Technological Enthusiasm (New York: Viking, 1989).
Edward Kealy, "From Craft to Art: The Case of Sound Mixers and Popular Music," in Simon Frith and Andrew Goodwin, eds., On Record: Rock, Pop, and the Written Word (New York: Pantheon, 1990), 207-220.
Alvin Kernan, Samuel Johnson and the Impact of Print (Princeton: Princeton University Press, 1989).
Frederick Kilgour, The Evolution of the Book (Oxford: Oxford University Press, 1998).
Robert Kohler, Lords of the Fly: Drosophila Genetics and the Experimental Life (Chicago: University of Chicago Press, 1994).
George Landow, Hypertext: The Convergence of Technology and Contemporary Critical Theory (Baltimore: Johns Hopkins University Press, 1992).
Richard Lanham, The Electronic Word: Technology, Democracy, and the Arts (Chicago: University of Chicago Press, 1993).
Karen Lunsford, "Electronic Texts and the Internet: A Review of The English Server," Computers and the Humanities 29 (1995), 297-305. http://dx.doi.org/10.1007/BF01830398
John Marks, "Publish and Don't Perish: To Survive, Publishing Houses Embrace New Business Techniques," Online U.S. News (12 January 1998).
Fred Moody, I Sing the Body Electronic: A Year With Microsoft on the Multimedia Frontier (New York: Penguin, 1993).
David Noble, The Forces of Production (Oxford: Oxford University Press, 1984).
David Noble, "Digital Diploma Mills: The Automation of Higher Education," First Monday (January 1998).
Alex Soojung-Kim Pang, "Victorian Observing Practices, Printing Technology, and Representations of the Solar Corona," Journal for the History of Astronomy 25 (November 1994), 249-274; 26 (February 1995), 1-13.
Alex Soojung-Kim Pang, "The Work of the Encyclopedia in the Age of Electronic Reproduction," First Monday (September 1998).
Janice Radway, Reading the Romance: Women, Patriarchy, and Popular Fiction (Chapel Hill: University of North Carolina Press, 1984).
Paul Roberts, "Virtual Grub Street: Sorrows of a Multimedia Hack," Harper's Magazine (June 1996), 71-77.
Paul Saenger, Spaces Between Words: The Origins of Silent Reading (Stanford: Stanford University Press, 1997).
Harley Shaiken, Work Transformed: Automation and Labor in the Computer Age (Lexington, MA: Lexington Books, 1984).
Steve Silberman, "Ex Libris: The Joys of Curling Up With a Good Digital Reading Device," Wired 6.07 (July 1998).
Ellen Slezak, ed., The Book Group Book (Chicago: Chicago Review Press, 1995).
Ilana Snyder, Hypertext: The Electronic Labyrinth (New York: New York University Press, 1996).
Brian Street, ed., Cross-Cultural Approaches to Literacy (Cambridge: Cambridge University Press, 1993).
E. P. Thompson, The Making of the English Working Class (London: Vintage, 1963).
E. P. Thompson, "The Moral Economy of the English Crowd in the Eighteenth Century," Past and Present 50 (1971), 76-136. http://dx.doi.org/10.1093/past/50.1.76
Myron C. Tuman, ed., Literacy Online: The Promise (and Peril) of Reading and Writing with Computers (Pittsburgh: University of Pittsburgh Press, 1992).
Myron C. Tuman, Word Perfect: Literacy in the Computer Age (Pittsburgh: University of Pittsburgh Press, 1993).
Ellen Ullman, Close to the Machine: Technophilia and its Discontents (San Francisco: City Lights Books, 1997).
Ellen Ullman, "Disappearing Into the Code," Salon 21st.
Copyright © 1998, First Monday
Hypertext, the Next Generation: A Review and Research Agenda by Alex Soojung-Kim Pang.
First Monday, Volume 3, Number 11 - 2 November 1998