First Monday

Reinventing academic publishing online, Part II: A socio–technical vision



Abstract
Part I of this paper outlined the limitations of feudal academic knowledge exchange and predicted its decline as cross–disciplinary research expands. Part II now suggests the next evolutionary step is democratic online knowledge exchange, run by the academic many rather than the few. Using socio–technical tools it is possible to accept all, evaluate all and publish all academic documents. Editors and reviewers will remain, but their role will change, from gatekeepers to guides. However, the increase in knowledge throughput can only be supported by activating the academic community as a whole. Yet that is what socio–technical systems do — activate people to increase common gains. Part I argued that scholars must do this or be left behind in the dust of progress. The design proposed here is neither wiki, nor e–journal, nor electronic repository, nor reputation system, but a hybrid of these and other socio–technical functions. It supports print publishing as a permanent archive byproduct useful to a living, online knowledge exchange community. It could also track academic submissions, provide performance transcripts to promotion committees, enable hyperlinks, support attribution, allow data–source sharing, retain anonymous reviewing and support relevance and rigor in evaluation. Rather than a single “super” KES, a network of online systems united by a common vision of democratic knowledge exchange is proposed.

Contents

Socio–technical design
Current knowledge exchange systems
Democratic online knowledge exchange
Discussion
Conclusion

 


 

Socio–technical design

Systems theory

Part I of this paper defined a knowledge exchange system (KES) as one that aims to develop, discriminate and disseminate human knowledge. An electronic KES that works across a computer network is a socio–technical system (STS) — a social system sitting on a technical base. E–mail is a simple example, where people communicate socially using computer technology. More complex examples include social networks like Facebook, wikis like Wikipedia, and electronic trade systems like eBay. In contrast, sending a physical letter by the post is a socio–physical system — social communication on a physical base. In both cases the social system is characterised by people interacting with people, whether electronically or physically.

In general systems theory (Bertalanffy, 1968) autonomous (self–directing) parts interact to create equally autonomous wholes. Such systems don’t reduce easily to their parts, because their creation involved not just those parts but also complex feedback and feed–forward interactions. These interactions produce emergent holistic properties like the ability to self–organize and self–maintain, which are still poorly understood (Maturana and Varela, 1998). For example, a person is made up of autonomous biological cells, but if the cells separate, even if each cell is kept alive, “the person” no longer exists. Such a system is more than the sum of its parts, so reductionism (dividing it into parts) does not fully explain it. Higher systems emerge from lower level interactions.

Socio–technical levels

Socio–technical systems exist on four levels — hardware, software, personal and communal (Whitworth, 2009a). Each system level emerges from the previous, with software data flows emerging from computer hardware events, human cognitions arising from neuronal information flows, and communities arising from people interacting socially. This gives hardware systems, software systems, human–computer interaction (HCI) systems and socio–technical systems (Table 1). Each level adds a new discipline perspective, from engineer to computer scientist to psychologist to sociologist.

 

Table 1: Socio–technical levels.
Level | System | Discipline(s) | Examples
Social (communal) | STS | Sociology, psychology, computing, engineering | Add norms, culture, laws, zeitgeist, group identity, customs, myths, etc. to below
Personal | HCI | Psychology, computing, engineering | Add semantics, attitudes, beliefs, opinions, ideas, etc. to below
Technical (information) | IT (hardware plus software) | Computing, engineering | Add programs, data, code, bandwidth, etc. to below
Physical | Hardware | Engineering | Circuits, lines, voltages, heat, matter, energy, etc.

 

Note that emergent levels redefine the entire system. System levels are not system parts but whole system views, e.g., a mobile phone cannot be divided into hardware and software parts. A hardware description describes the whole phone, as does a software description. Hardware and software are both ways to view the whole system, like seeing the same object from a different vantage point.

For example, a pilot flying a plane is a single system with various levels, not a human part (pilot) alongside a technical part (plane), though one may see it that way. From a physical perspective, the bodies of the human crew are just as physical as the plane body, with weight, volume, etc. The information level describes not only the onboard computers that control the plane’s mechanics but also the neuronal information processing in the pilot’s brain. From a personal view, the pilot sees the plane as an extension of him or herself, just as hands and legs are. Adding a pilot to a plane makes the whole system strategize and predict, in an aerial dogfight say, quite differently from a computer drone. Finally, if the plane formed part of a squadron it might do things it would not do working alone, such as expose itself as a decoy.

Socio–technical requirements

In general, as systems evolve to higher emergent levels, system success is defined by performance at the highest level, regardless of lower level performance, e.g., computers “crash” when a software infinite loop error forces users to reboot the machine. We say the system “failed” even though the hardware is in fact working perfectly. Likewise a Web site that people cannot use “fails” even if the software runs perfectly.

In contrast, failure at the lowest level causes the entire system to fail, regardless of higher level performance, e.g., if a system fails as hardware it doesn’t matter how good the software is, how usable the HCI, or how high the community’s morale.

A socio–technical system can fail as hardware, as software, as a user interface, or be a social failure, e.g., a community Web site that no one visits. While social and technical failures seem different they have one thing in common — the system doesn’t run.

To succeed, socio–technical systems must meet both social and technical design requirements. To traditional technical (hardware and software) requirements are added both HCI needs like usability and community needs like fairness. This increases the demands put on developers, but also increases the system’s potential, as the hundreds of millions of users of Facebook, MySpace and Wikipedia testify.

Socio–technical design adds social requirements to technical ones. It also puts social before technical, not after, i.e., one first defines the human and social requirements, then designs the technology to fit. This ensures the system performs successfully at its highest level. In contrast, “technology driven” design often leaves a socio–technical gap between what society wants and what the technology does (Whitworth and Whitworth, 2004).

Academic requirements

The design of a successful KES must begin with the social principles upon which it is founded. Part I argued that the goal of academic knowledge exchange, to develop, discriminate and disseminate knowledge, is better served by a democratic social system rather than the current feudal one. The social contrast is between an elite few guarding limited knowledge riches from the ignorant many who may corrupt it, and open knowledge exchange between equal and free citizens, where truth prevails because the community wishes it so. From the different social philosophies we call feudal and democratic arise vastly different technology designs. The first STS choice is its social basis, and only after that is decided is the technology chosen. Democratic interaction is thus a social requirement of academic knowledge exchange.

In addition, academia demands attribution — that name(s) crediting a work’s creator(s) appear whenever it is published or quoted. Scholars give copyright ownership of their work to publishers but retain attribution rights — their name stays on the published paper. Conversely plagiarism (giving another’s work under one’s own name) is an academic sin. Attribution allows social accountability — credit for intellectual work accrues to its creator. This in turn lets academics freely give away their research to others, as their university rewards them for valued work credited to their name by promoting them.

The above suggests that innovative development, quality discrimination, effective dissemination, democratic participation and author attribution are academic KES requirements. If so, how do existing systems satisfy these specifications?

 

++++++++++

Current knowledge exchange systems

Traditional journals

Part I of this paper argued that traditional print–based academic journals discriminate and attribute well but are weak on cross–disciplinary innovation, as experts in one field cannot breach the intellectual defenses of another. In addition, the low readership, long publishing times and inaccessibility of journals mean that they disseminate knowledge poorly. When academic papers are locked away in exclusive journals accessible only to wealthy university libraries, the knowledge may as well not exist for those without pay–per–view privileges. Nor is it a particularly democratic system, as the majority contribute little to mainstream information flows, whose content is decided by processes largely hidden from view. So traditional journals seem to satisfy two KES criteria well (discrimination and attribution) but three poorly (democracy, dissemination and development).

Electronic journals

At first electronic publishing seemed simple — build the technology and they will come. E–journals were expected to reduce cycle times, increase throughput and support multimedia formats, i.e., improve dissemination. It was therefore a surprise when they did not succeed as expected. One review even questioned the “… extent introducing advanced technologies supports the ultimate objective of research — creating knowledge” (Hovav and Gray, 2004).

Traditional page costs make journal editors the scrooges of academic knowledge, as print journals cannot publish pages they cannot pay for. Electronic publishing reduces printing, binding and shipping costs, making it economically feasible to publish all submissions. If memory is cheap, and it is, one can Web publish 100 percent of submissions for the same cost as physically publishing five percent of them. So if e–journals improve dissemination, why haven’t they replaced printed journals?

The problem is social rather than technical. E–journals published more but were seen as of lower quality, and so attracted fewer and poorer submissions. As an IEEE proceedings editor–in–chief notes: “Lack of quality control [1], in fact, proves to be the most serious charge leveled against e–journals.” (Ulaby, 2006). Improving dissemination at the expense of discrimination merely trades one criterion for another, as easy publishing effectively devalues the academic currency of “being published”. E–journals improved one KES criterion (dissemination) but reduced another (discrimination), so overall performance did not increase as expected.

E–journals can of course just become more rigorous. Some IS conferences today are so selective they are rated higher than journals (Grudin, 2005). However this would simply recreate the rigor problem online. Computerizing traditional publishing by adding e–mail speeds up transmissions but leaves social structures unaltered — it is still authors petitioning powerful publishers. To really change an STS one must change its social structure as well.

Wikis

Given the amazing success of Wikipedia, why not exchange research knowledge on a wiki base? While wikis democratically engage community participation, disseminate freely and quickly to all and allow innovation, they neither guarantee quality nor attribute well. For example, while some wiki articles may be excellent others may not be, depending on who contributes. There is no overall quality control standard. Also, if a complex article is produced by hundreds of people, who then is “the author”? However, some developers are already changing the traditional wiki structure to add attribution and review, e.g., Scholarpedia (http://scholarpedia.org/), Wikigenes (http://www.wikigenes.org/) and Citizendium (http://en.citizendium.org/).

Online repositories

Yet the electronic repository has added value. For example, arXiv (http://xxx.lanl.gov), created in 1991 by physicist Paul Ginsparg to share knowledge, has been a successful electronic KES. Every morning theoretical physicists download new papers in their field and discuss them over morning coffee, long before they get into print. While journals like Nature initially objected to this copying of “their” material, author pressure forced them to relent, as “when the cows leave there is no milk”. Nature is now a leader in scientific social networking with its Nature Network (http://network.nature.com).

When asked why this knowledge exchange advance did not quickly spread to other fields, Ginsparg suggested that: “… physicists are self–selected to value eccentricity and novelty of ideas above all else, even at considerable professional risk to themselves.” [2] In other words, physicists are socially less conforming.

Slowly, archives have arisen in other disciplines, among them CoRR (the Computing Research Repository) for computer science and MERLOT for online learning and teaching materials.

The latter uses a Creative Commons license and links to the open access, peer–reviewed Journal of Online Learning and Teaching (JOLT, at http://jolt.merlot.org/). Here the repository system combines with an academic journal for mutual gain.

Repositories improve dissemination and allow creative ideas but, like wikis, have no discrimination function and still struggle to activate the academic community. Even so, while e–journals were expected to replace print journals and did not, online repositories succeeded by running in parallel to existing journal systems. Their success suggests the value of open dissemination, and the continuation of print journals suggests that quality discrimination also has value. The conclusion that both selection and dissemination are important in academic knowledge exchange is explored in the next section.

Socio–technical change

One can seek to change the academic publishing system at any STS level (Table 1). Using e–mail to submit papers and reviews changes the technology but not the people or social process. General calls to authors to improve relevance (Lee, 1999) or article quality (Paul, 2005) try to change the people in the system, but not the technology or processes. Proposed changes to journal policies address the social rules but typically ignore personal and technical issues, e.g., that journals adopt an innovation “affirmative action” policy of publishing a first–time author each issue, or stating that none was found.

Such approaches are useful, but STS design aims to change all levels at once, in an integrated fashion, not just technical architectures but also human and social goals, roles and rules. If technology is built first and used later, social issues become an afterthought, which is perhaps why social problems like spam and scams plague many technical systems today. In STS design, under discussion are not just technology forms but also social forms. Social requirements are not just considered, but considered first, as they create “flow–down” requirements for the technology, e.g., this approach can avoid e–mail spam (Whitworth and Liu, 2009a). We now consider a socio–technical synthesis to support the democratic development, selection and dissemination of attributed knowledge.

 

++++++++++

Democratic online knowledge exchange

Much debate on electronic publishing is framed by assumed opposites, like rigor vs. relevance, online vs. print publishing, and selection vs. dissemination. Part I rejected the view that rigor and relevance are mutually exclusive, arguing that good science requires both. We now also drop the view that print and electronic publishing are competing options, as an online system can generate a print publication. Finally, the print tradition confounds dissemination and selection only because high publishing costs require selective runs; online publishing has no such constraints, so electronic knowledge exchange can increase both dissemination and discrimination at once.

Publish all and rate all

Electronic repositories like arXiv increase knowledge dissemination but not discrimination, as there are no reader quality guidelines. More people publishing more inevitably means more bad papers as well as more good ones. Yet such a system could also discriminate good from bad, by allowing:

  1. Higher rating discrimination (a many–point scale, not just accept/reject);
  2. More submissions to be rated (rate all);
  3. More people to rate (more community involvement); and,
  4. Different ways of rating (formal review vs. informal use ratings).

Figure 1 is a KES design that publishes all and assesses all. Print journals are limited to an accept/reject dichotomy, which implies that quality is an all or nothing thing. In contrast, an open KES can rank papers on a many–point scale, which conveys more information to the reader. The Figure 1 pyramid represents a 1–5 rating system (Limited to Excellent), plus a 0 Not Yet Rated category, and a -1 Not Recommended category. The actual scale would be a ten–point semantic differential, plus a reject option (-1). Ratings could be broken down by criteria like relevance, rigor, writing, comprehensiveness, logical flow and originality.
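To make this concrete, the rating logic could be as simple as the following minimal sketch (in Python; the averaging rule and the intermediate labels “Fair” and “Very Good” are illustrative assumptions, since the text names only Limited, Good and Excellent):

    # A sketch of the Figure 1 rating model: papers are scored per criterion
    # on a ten-point scale, averaged, then mapped to display categories.
    CRITERIA = ["relevance", "rigor", "writing", "comprehensiveness",
                "logical flow", "originality"]

    def overall_score(ratings):
        """Average a paper's per-criterion ratings (each 1-10) into one score."""
        scores = [ratings[c] for c in CRITERIA if c in ratings]
        return sum(scores) / len(scores) if scores else None

    def display_category(score, not_recommended=False):
        """Map a 1-10 score to the pyramid categories of Figure 1."""
        if not_recommended:
            return (-1, "Not Recommended")
        if score is None:
            return (0, "Not Yet Rated")
        band = min(5, max(1, round(score / 2)))  # ten-point scale -> 1-5 bands
        labels = {1: "Limited", 2: "Fair", 3: "Good",  # 2 and 4 are assumed labels
                  4: "Very Good", 5: "Excellent"}
        return (band, labels[band])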

 

Figure 1: A democratic KES design.

 

The top white triangle of the pyramid represents the, say, 10 percent of submissions that a top journal currently prints, while the remaining 90 percent of “rejected” knowledge is not available to readers. In this system, however, all the knowledge a reader chooses to make visible is available for use.

A natural initial response is that this involves too much work. Yet to reject even the worst paper someone must already read it to some degree, i.e., traditional systems already assess every submission, as otherwise how would the decision to reject be made? The only difference is that while print journals reject in secret, an open KES displays the papers it “rejects”, i.e., it is transparent rather than opaque. The difference is not how many papers are assessed, but whether the assessment is visible. If all submissions must be assessed anyway, why not do it openly?

Another response is that we already have too much to read without letting in more, but blame the Internet for that. As academic journals try to deny the rising flood of new knowledge, the 10 percent of stale knowledge that filters through their walls years later is becoming undrinkable. If there really is that much to know out there, isn’t it better to see it than not see it, and to choose the 10 percent you can read? Isn’t it better to be an academic citizen than an academic serf?

Access registration

A system where anyone can publish invites the problem of spam — unwanted additions by people abusing the system for their own ends. Hence to submit to a repository one must register as a university academic, perhaps even in the right field, i.e., have a good reputation. Spam is diverted by checking the credentials of who is submitting, not by censoring what is submitted. Once one becomes a member of the contributing community all are equal, though those misusing its privileges can be banned. Unlike arXiv and CoRR, where anonymous gatekeepers can exclude a paper they “deem inappropriate” if they dislike its content, we propose a truly open system without any content censorship. Author concern for public reputation is a better quality control than censorship, so the submitter need only be a recognized academic citizen. Note that citizenship is a privilege a community can revoke if it is abused.

Anonymous review

While authors can publish freely, their work reflects upon them, so anonymous review is offered to let authors improve their paper in private before going public. If authors choose to be reviewed, the paper stays invisible to readers while an editor invites expert review. This almost always improves quality, with the added advantage that reviewed papers can enter the KES at their rated level, e.g., a paper with good reviews can immediately enter as Good (3), while non–reviewed papers enter initially as Not Yet Rated (0). The decision to publish always rests with the author(s), e.g., an author with bad reviews may decide to publish anyway, even with a “Not Recommended” (-1) rating.

Reader ratings

Complementary to anonymous expert ratings are general reader ratings. While experts may lean toward rigor, readers may lean toward relevance. This rebalances the rigor–relevance skew identified in Part I. Expert ratings and reader ratings are complementary views, not mutually exclusive options. Each represents a different perspective, and the reader is free to choose which to follow.

Readers could formally vote for papers, or informally “vote” by their mouse clicks, e.g., number of views or downloads (as Communications of the ACM currently reports). Reader interest would identify papers that raise important issues, whether rigorous or not. A paper disliked by specialist experts could rise in the KES hierarchy by popular acclaim, or good ratings by respected reviewers could direct readers to useful papers that are hard to read.

The overall rating could be a 50:50 combination of rigor and relevance, but some KES systems could have a different ratio, depending on target audience. Equally the reader could set their own preference, e.g., select papers by 60 percent rigor and 40 percent relevance.
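As a minimal sketch of this weighting (Python; the function name and the ten–point scale are illustrative assumptions):

    def combined_rating(rigor, relevance, rigor_weight=0.5):
        """Blend expert (rigor) and reader (relevance) ratings.

        The 0.5 default gives the 50:50 mix; a reader preferring 60 percent
        rigor and 40 percent relevance would pass rigor_weight=0.6.
        """
        return rigor_weight * rigor + (1 - rigor_weight) * relevance

    # Example: a paper experts rate 8/10 and readers rate 5/10.
    print(combined_rating(8, 5))        # 6.5 under the 50:50 default
    print(combined_rating(8, 5, 0.6))   # 6.8 for a rigor-leaning reader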

This system supports both pre– and post–publication metrics, as some papers are reviewed then published while others are published then rated. While excellent papers may rise immediately to the top on review, others may rise only slowly, after years of reader comment and many versions. As in gardening, not all plants grow at the same rate.

Community participation

If publishing all increases throughput, how then to sustain quality? Already good reviewers are hard to find, and short reviews of a few vague lines are increasingly common. As an editor, it is embarrassing to issue an “Accept with Revisions” based on single–line reviews like “Clarify focus”.

Systems like Wikipedia solve the non–participation problem by activating an online community that motivates and rewards its members, e.g., Wikipedia has a social hierarchy of “stewards”, “bureaucrats” and “sysops”. It is democratic as all community members can aspire to these roles by good acts, e.g., Slashdot’s automated rating system lets readers become moderators if they act well (Benkler, 2002): they must be registered (not anonymous), regular users (used the site for a time) with positive “karma” (based on how others rate their comments). Registered readers have five “influence points” to spend on others’ comments as they wish over a three–day period (or they expire). Highly rated commentators get more points and hence a louder “voice”. This democratically spreads influence among many rather than restricting it to a few, avoiding gatekeeper bottlenecks. Wikipedia has challenged the experts of Encyclopaedia Britannica by tapping the power of the community. It gives better knowledge on a wider range of subjects at a faster rate because it has a bigger knowledge engine — everybody.
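The mechanics described are simple enough to sketch (Python; the 30–day “regular user” cut–off is an assumption, the other numbers follow the description above):

    from datetime import datetime, timedelta

    def grant_influence(user):
        """Give an eligible user five influence points, expiring in three days."""
        eligible = (user["registered"]              # not anonymous
                    and user["days_active"] >= 30   # a regular user (assumed cut-off)
                    and user["karma"] > 0)          # rated positively by others
        return ({"points": 5, "expires": datetime.now() + timedelta(days=3)}
                if eligible else None)

    def spend_point(grant, comment):
        """Spend one unexpired point to raise a comment's rating."""
        if grant and grant["points"] > 0 and datetime.now() < grant["expires"]:
            grant["points"] -= 1
            comment["score"] += 1
            return True
        return False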

A similar KES function could offer readers a natural path to associate reviewer, reviewer, senior reviewer or associate editor, using base functions like rating and commenting to recruit and assess reviewers. The assumption that only an exclusive few can review is replaced by a democratic meritocracy, where what you know, not who you know, determines who reviews. Indeed, the IGI handbooks solve the reviewer bottleneck by getting chapter authors to review each other, i.e., by democratizing the reviewing process. More reviewers means less expertise per review but more points of view, and in a large KES differences wash out over a few reviews. The approach is not recommended for small groups, e.g., asking conference mini–track applicants to review each other could be unwise, as in a small group faulting another increases one’s own acceptance chances.

Interaction could also increase review quality. While during a review assessments must be done independently to avoid group order effects, after the reviews are in reviewers could see the anonymous reviews of others, as MIS Quarterly permits. This could allow a second review cycle, where reviewers can revise to improve group decision performance (Whitworth, et al., 2001).

If authors are rated by their readers and university professors by their students, why not let authors comment on their reviews? One concern is that while such feedback might improve review quality, it might also bias reviews positively. The same concern was raised when students first rated their teachers, namely that it would pressure teachers to give all A’s. If students can assess teaching quality apart from the grade they receive, authors can assess review quality apart from the review decision.

Print journal archive

To enable continuous growth the KES could each month move top rated papers to a paginated permanent journal archive that cannot be amended, which could appear in print form. Archiving removes papers from the dynamic process, making room for others to rise. Equally the bottom 10 percent of un–accessed papers could also be deleted. The print journal would be seamless with its online base, like the visible tip of a large knowledge iceberg. As groups like the ACM already have repository and journal functions, it makes sense to merge them. In this view print journals will not disappear, but simply become a part of a larger online KES community.
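The monthly cycle might be sketched as follows (Python; the rating cut–off and data fields are illustrative assumptions):

    def monthly_cycle(live_papers, archive, top_rating=5, cull_fraction=0.10):
        """Archive top-rated papers; delete the least-accessed 10 percent."""
        for paper in [p for p in live_papers if p["rating"] >= top_rating]:
            paper["frozen"] = True        # archived papers cannot be amended
            archive.append(paper)
            live_papers.remove(paper)
        live_papers.sort(key=lambda p: p["accesses"])
        del live_papers[:int(len(live_papers) * cull_fraction)]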

Reader recommendations

Online commenting is like “Letters to the Editor” except easier to do. Allowing comments encourages input from practitioners, for whom the format and reference demands of academic writing are often excessive. Comments could be filtered as Slashdot does or managed by editors.

A wiki–type editing tool could let others show rather than tell proposed changes by directly editing the document source. In moderated publishing with attribution, authors retain ownership but attribute accepted reader changes. Google Knol is an example (http://knol.google.com/k). Readers directly propose changes to the document, but the author/editor can accept or reject them one at a time, as one does with spell checker suggestions. This allows social exchanges like “Thank you, I never considered that” or “No thanks — we tried it and it didn’t work.” Useful reader changes could be acknowledged in the paper by name, and helpful contributors even invited to become co–authors. Such systems recognize the value of comments by others (Hendler, 2007).

The KES could also calculate differences as version control systems do, comparing before and after documents to estimate the percentage of words changed. Such estimates would need author confirmation, but suggest a system to recognize micro–contributions — the words in a document usefully contributed by others. A future paper could be 90 percent written by the original authors and 10 percent by 200 others. For an individual these micro–contributions could aggregate over many papers.
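A version–control comparison of this kind is readily available, e.g., this sketch uses Python’s standard difflib (the example sentences are invented):

    import difflib

    def percent_changed(before, after):
        """Rough percentage of words changed between two document versions."""
        matcher = difflib.SequenceMatcher(None, before.split(), after.split())
        return round((1 - matcher.ratio()) * 100, 1)

    # Example: a reader's suggested edit to one sentence.
    v1 = "Academic knowledge exchange is feudal in structure"
    v2 = "Academic knowledge exchange is currently feudal in its structure"
    print(percent_changed(v1, v2))  # a small percentage, creditable to the reader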

Audit trail

While who is doing what may be private, when and how often things happen need not be, e.g., a turnstile at a sports game counts the people entering without privacy concerns. A computer can register when a paper is received, when put to review, when reviewed and when an editorial decision is made, without recording who the people are. Making audit trails transparent lets authors check paper status at any time (cf. package delivery tracking). It avoids submitting a paper by e–mail, waiting three months, then finding it has been forgotten about. Reviewers could set a review availability calendar, with due dates that suit them, given say a 30–day review cycle. In contrast, in editor–driven review scheduling, many requests come when one is busy and none when one is free. If the result is that certain months are unpopular, then at least editors can plan for this.
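A minimal sketch of such a trail (Python; the event names and paper identifier are invented for illustration):

    from datetime import datetime

    trail = []

    def log_event(paper_id, event):
        """Record that something happened to a paper, and when, but not by whom."""
        trail.append({"paper": paper_id, "event": event,
                      "at": datetime.now().isoformat()})

    def status(paper_id):
        """Public view: the full timestamped history of one submission."""
        return [e for e in trail if e["paper"] == paper_id]

    log_event("2009-1031", "received")
    log_event("2009-1031", "sent to review")
    # Authors can call status("2009-1031") at any time, as with package tracking.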

Publishing control

An online KES does not threaten publishers, only unwarranted publishing powers. There will always be a publishing role, but currently authors give all rights to their work to publishers, who can then do as they wish with it. For example, a chapter published in one IGI book (Whitworth and Liu, 2008) was reprinted in two other books (Whitworth and Liu, 2009b; Whitworth and Liu, 2009c) without the “Editors” of these books advising the authors. Publishers need the right to publish at a given time but not the right to publish for all time.

A democratic KES would leave future publishing rights with the author community, where they belong. To republish a document, publishers would then need the author’s permission again. This also lets authors fix errors and update the work if they want to, which better fits the original social goal of copyright — to encourage creativity (not suck it dry).

Many publishers take the research of scholars, format and copyright it, and then sell this work back to the universities who paid for it in the first place. Academics edit, write, review, revise, copy–edit and help market highly successful research handbooks, and in return get only an electronic copy of “their” chapter, which they may put on their Web site if they complete a request form and get e–mail permission. Yet the privilege of publishing that publishers offer authors is actually provided by the academic institutions that reward and promote based on a publication record, not the publishers who distribute the work.

The cost of research publications is now so high that university libraries face a “serials crisis”. What explains this high cost when the cost to make the product is largely borne by those who buy scholarly books and journals? Is it just what the market will bear? It seems like taking a bakery’s free cakes, icing them, then selling them back to the same bakery’s hungry staff. One wonders why the bakeries don’t distribute their own products.

Actually they tried to. In developed regions almost 100 percent of universities have digital repositories where in theory academics can share their work (van Westrienen and Lynch, 2005). Yet largely they don’t, as despite open archive efforts (http://www.openarchives.org/) individual contribution rates are only 10–15 percent. Although over 100,000 physics papers were self-archived on arXiv by authors in a decade, this was a small percentage of all physics papers published in the period (Harnad, 1999). Reasons given include the extra effort, no personal benefit, copyright concerns and that free sharing is not the academic norm (Davis and Connolly, 2007). Higher rates are possible, e.g., chapter sharing of the Handbook of Research on Socio-Technical Design and Social Networking Systems is over 50 percent (see http://brianwhitworth.com/STS/).

This paper argues that if freely sharing research publications is not the norm, then we need to make it so. The democratic KES proposed builds both open sharing and personal recognition into the basic architecture, as social axioms of the KES design.

Source data sharing

Research is as much about gathering qualitative and quantitative information as analyzing it. In fields as diverse as Web analytics, genetics, archeology and astronomy it pays to share expensive or unique information sources, e.g., making the Dead Sea scrolls available online to all scholars. An online KES that accepts various binary forms of source data is needed, and open archive document interoperability is critical. Research data sharing already occurs in disciplines like astronomy, e.g., the search for extra–solar planets using gravitational micro–lensing involves observations from telescopes around the globe (Bond, et al., 2004).

A “technical report” of a few pages may describe an attached data set that others can use provided they cite it. A KES data source category could be ranked by citations, not reviews. This would allow universities to recognize data source contributions for tenure and promotion, and give those who create hard–to–collect data sets a reason to make them available to others.

Links

A KES’s internal links let readers “drill down” into a paper they liked, to the author’s other works, to those they collaborate with, to papers they reference, and to previous versions, reviews, reader comments and author replies. Links also let readers “drill out” to an author’s home page or online Web sources. The contrast is important. A problem for librarians trying to store content is the cost of platforms and staff to support different data formats, and of securing copyright permissions and license agreements (Thompson, 2005). If libraries link to information rather than store it, they will become knowledge portals rather than knowledge repositories, and librarians knowledge guides rather than custodians. A knowledge system need not contain all the knowledge it uses.

Performance reports

Just as students request a transcript from a university, so KES contributors could request reports not just of their publications, but of the citations, comments, downloads and views those publications generated, as well as their own reviews, data sources, comments and service contributions to the community. Aggregating micro–contributions over many papers could recognize the work of those who amend as well as those who create.

Only the person concerned could request reports of anonymous contributions like reviewing. Just as in traditional reviewing, anonymous reviewers are known to the editor, so KES anonymity means the review is not signed for others to see, not that it is unknown to the system. That reviewers can request reports summarizing their work would increase the recognition of reviewer contributions. To ensure veridicality, individuals could give the KES system permission to send verified reports on them directly to institutions, as students can now permit universities to send transcripts directly to prospective employers.

Increasing knowledge flows

Democratic knowledge exchange would radically change the old knowledge flows. In the traditional paradigm (Figure 2a) authors submit to editors (a), who allocate reviewers (b) whose reviews (c) help editors decide what readers see [3] (d). In this feudal KES a few decide what everyone else reads. While this has worked in the past, in cross–disciplinary research so many people pursue so many research directions that knowledge gatekeepers become, if they are not already, knowledge bottlenecks.

In a democratic design (Figure 2b) authors can still submit privately to editors (a1) who ask reviewers (b) to create reviews (c) that influence what readers read (d), but they can also submit directly to the public (a2). This changes the role of editors and reviewers from knowledge gatekeepers to knowledge guides. Readers decide for themselves whether something is worth reading (rather than editors deciding for them). Yet they will still welcome expert guides to quality, just as Zagat’s restaurant reviews advise diners where to find good food.

The main effect of democratic knowledge exchange is more knowledge flows, as in Figure 2b:

  1. Readers to Readers (m:n): An online reader–to–reader rating system (e.g., Amazon).

  2. Reader to Reader (1:1): Reader bios, photos or e–mail details (e.g., ACM members).

  3. Reader to Readers (1:m): A “Letters to the Editor” where readers can opine.

  4. Readers to Author (m:1): Authors open their article to public or private comments.

  5. Author to Readers (1:m): Discretionary author asides to readers outside the main text.

  6. Author to Author (1:1): A community of journal authors who help each other.

  7. Authors to Authors (m:n): Reports on author citation rates by other authors.

  8. Authors to Editor (m:1): If authors can grade their reviews, then bad reviewers may not drive away good authors.

  9. Editor to Editor (1:1): Editors could network to place a paper appropriately. Currently, sending good work to the wrong journal or conference mini–track means authors wait months to find it rejected not by quality but type. KESs with different audiences could form a distributed collective to exchange misplaced papers.

  10. Reviewer to Reviewer (1:1): Many authors wish the reviewer who loves their paper would talk to the one who hates it, and resolve their differences.

  11. Document to document links (m:n): Document hyperlinks to other documents.

The detailed KES design is more than can be described here, but it should be clear that it would exchange more knowledge than current systems.

 

Figure 2a: Feudal information flows.

 

 

Figure 2b: Democratic information flows.

 

Socio–technical tools

The above expansion requires new socio–technical tools, but one cannot move tools from one domain to another without adaptation, e.g., wikis allow readers to copy and use while academics must quote and reference. STS functions that could be adapted to an academic KES design include:

  1. Reader comments. Commenting is useful when the sum of the knowledge of many readers exceeds that of a few experts, as readers can correct errors of fact, supply references and suggest examples. Bulletin board forums like AnandTech (http://www.anandtech.com/) illustrate the power of the many to solve problems.

  2. Reputation ratings. Reputation systems are a community–based form of quality control used by systems like Amazon. Rather than restricting assessment to an elite few, such systems let everyone vote democratically and average the scores. Where the number of participants is large, individual variances tend to wash out. Such tools process many–to–many interactions, from the group to the group, and overcome group information exchange bottlenecks (Whitworth, 2009a).

  3. View filters. Rating systems allow view filters, as in Slashdot where readers adjust their view filter from -1 to 5 to see higher or lower quality comments. Anonymous comments are rated at 0, so if a reader’s filter is set to 1 they don’t see them. Likewise KES readers could set their view filter to any quality level, from low to high. Readers who set their view filter to 5 in Figure 1 would see only the highest rated papers, much as current print journals present (except here by choice, not necessity). One would expect most readers to access top rated papers, but some may choose to “bottom feed” (a minimal filter sketch follows this list).

  4. Same again functions. Same again functions help readers who find something valuable to find more of the same, e.g., Amazon lets readers jump directly to other books liked by those who liked the book they are looking at. Some systems let one find other documents rated highly by people whose ratings match one’s own. KES readers could use the papers they value as ways to find similar others.

  5. Social bookmarks. Systems like Digg (http://digg.com/), del.icio.us (http://delicious.com/) and StumbleUpon (http://www.stumbleupon.com/) use community–based tagging to let users share favorite links. A similar KES tool could show the favorite papers of scholars broken down by field. As in StumbleUpon, individual scholars could also personally recommend favorite works as links.

  6. Social networks. Systems like Facebook succeed by letting people network, and scholarly groups like the ACM allow members to present biographies to each other. A similar system within a KES would permit academics to connect personally as well as by paper content. Different STS functions need not be different technical systems.

  7. Version control. Wikis are version control systems that let readers update versions, and most repositories also accept new versions from registered author(s). The social principle is that authors should be able to update their own work to a later version provided the original is not deleted, cf. current print publishing, where errors made are irrevocable. Version control lets authors update their work as it evolves, without the criticism that they are “self–plagiarizing”.
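For instance, the view filter of point 3 reduces to a one–line threshold test (Python; the data fields are invented for illustration):

    def visible(items, threshold):
        """Return only items rated at or above the reader's view filter."""
        return [i for i in items if i["rating"] >= threshold]

    papers = [{"title": "A", "rating": 5}, {"title": "B", "rating": 2},
              {"title": "C", "rating": 0}]   # e.g., a not-yet-rated submission
    print(visible(papers, 5))   # a print-journal-like view: top papers only
    print(visible(papers, 0))   # a "bottom feeder" chooses to see everything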

Some examples

Systems with some of the features proposed already exist in specialist fields. The Pool displays new media projects on a graph, where they rise or fall by reader ratings weighted by rater reputation (http://pool.newmedia.umaine.edu/). The democratic publishing concept for documents in general is illustrated by efforts like Scribd (http://www.scribd.com), where people freely Web publish in a variety of document formats. Every word is indexed for searching, and it has over 50 million readers a month. Academic publishing projects, like the Public Knowledge Project (http://pkp.sfu.ca/) and Connotea (http://www.connotea.org/), still seek acceptance, but systems based on academic communities, like the Social Science Research Network (SSRN at http://www.ssrn.com/), are growing. If academia’s needs are not addressed by mainstream knowledge exchange, universities must use their own information technology assets to improve their knowledge productivity.

 

++++++++++

Discussion

Democratic information exchange

A social system can be controlled by one person (autocracy), by a few people (aristocracy) or by many people. While democracy is theoretically government “by the people for the people”, how exactly that occurs is open to interpretation. For example, if the people govern through elected representatives, questions like “How are they elected?”, “How often are they elected?” and “For how long are they elected?” are not trivial. Though there is no agreed detailed definition of democracy, the democratic dimension of sharing rather than centralizing social control is an accepted concept that technology systems can support. Some aspects of this ideal seem to be:

  1. Legitimate rights. A people that governs itself will usually give itself legitimate rights, defined elsewhere as interactions that benefit the society as a whole and are also fair to the individuals involved (Whitworth and de Moor, 2003). Rights here are essentially permissions the society gives to individuals to do things (Freeden, 1991). Legitimacy analysis seeks to specify these rights as (actor, object, method, context) quartets, which code can support (Table 2), where a context can be a container, a group or a higher right (Whitworth, et al., 2006). An information right then lets some actor apply some program method to some information object in a given context, e.g., an item’s owner can delete it if it is not in an archived container and if the owner is not banned (see the sketch after this list). Such specifications require the social rules to be conceptually clear before design begins, e.g., can readers always comment on posted items, or is “Accept Comments? Yes/No” a settable item property?

  2. Transparency. People cannot govern themselves if they don’t know what’s happening, so all democracies have the equivalent of a free press. With transparency justice is not only done but also seen to be done. Both Wikipedia and Slashdot are transparent, as editors can view a contributor’s history and ban those who abuse the system. In a transparent KES others can see what is going on, and that others in the community are watching encourages good behavior.

  3. Freedom. If a society transparently gave all its citizens fair rights by force it would still be slavery, not democracy. If individuals are not free then the people are not free, and if the people are not free how can they be said to govern themselves? Hence technology systems should be designed to offer people choices, not to take choices from them (Whitworth and Liu, 2008). Online freedom means people can choose where they go online, which is why participation is the main yardstick of socio–technical system success. STSs invite rather than coerce participation, e.g., if there are too few reviewers, rather than forcibly rejecting more authors why not encourage more citizens to volunteer?

  4. Order. As physical society supports order by systems of “justice”, so socio–technical systems need defenses against anti–social acts. Both Wikipedia and Slashdot use software mechanisms to oppose “trolls”, e.g., Slashdot prevents users from posting more than once in sixty seconds. Democracy increases knowledge productivity but is also more open to social hijacking, so it needs mechanisms to prevent that, e.g., that participants register. The ignorant try to appropriate the productivity of society for personal goals of profit, power or domination, unaware that this inevitably chokes the source of that productivity, which is social synergy. If the corrupt “succeed” then social synergy collapses and everyone becomes poor, as nations like Zimbabwe illustrate today. In physical society the corrupt who seek something for nothing can be denied by good citizens supported by laws and government. In online society the same principles apply, except now the software must support legitimacy, as justice systems don’t work well online. To avoid social collapse a socio–technical system must be legitimate by design.
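To illustrate point 1, the deletion right cited there could be coded as one (actor, object, method, context) check (Python; the field names are illustrative assumptions):

    def permitted(actor, obj, method, context):
        """One legitimate-rights rule; a real KES would hold a table of many."""
        if method == "delete":
            return (actor["id"] == obj["owner"]      # only the item's owner
                    and not context["archived"]      # not in an archived container
                    and not actor["banned"])         # and the owner is not banned
        return False                                 # no right defined: not permitted

    alice = {"id": "alice", "banned": False}
    item = {"owner": "alice"}
    print(permitted(alice, item, "delete", {"archived": False}))  # True
    print(permitted(alice, item, "delete", {"archived": True}))   # False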

A democratic online KES set up to support legitimacy, transparency, freedom and order will prosper. Examples include the Internet itself, where knowledge flows freely through open channels in unexpected ways, and open source software groups, where critics work with innovators to develop community tools (Ljungberg, 2000).

 

Table 2: Socio–technical actors, objects and methods.
Actor | Object | Method
Social (exists outside the STS) | Persona/avatar (represents a person) | Create/(Un)Delete/Rename
Group (a list of people with a group identity) | Group (a list of personae) | Join/Resign/Include/Exclude, Allocate Roles
Agent (for people/groups) | Container | Create/Rename
 | – Meeting | Logon/Logoff/Open/Close
 | – Heading (contains items) | Move/(Un)Archive
 | – Item (conveys meaning) | Create/Delete/Attribute
 | – Content item | Edit/Revert/Order/Move
 | – Comments (dependent meaning) | View/Hide/Order/Move
 | – Mail (transmit meaning) | Send/Receive/Permit/Move
 | – Votes (choice position meaning) | Vote/Abstain/DisplayAll
 | – Rights (recursive objects) | Transfer/(Un)Delegate

 

Privacy

Social concepts are complex, e.g., privacy is less about keeping personal data secret than about the right to own personal information. If freedom is the right to own one’s physical self, then privacy is the right to own one’s information self. If so, then privacy means one can choose to make one’s personal data public! Privacy is about control not secrecy, e.g., when the NSF grant system requires that grants are “private”, regardless of what authors or reviewers want, that is autocratic control not privacy. If author(s) and reviewer(s) freely agree to release a grant, the KES should respect that choice and let them. Equally the opposite suggestion, that all reviews must be made public to improve quality (Weber, 1999), is also anti–democratic. Neither forcing disclosure nor forcing non–disclosure supports privacy, which is about choice not coercion. Whether a review is made public should rest with the authors and the reviewers who created it. If reviewer and author both agree, the KES should release reviews, which could help new grant writers.

Social principles like privacy are not absolute but contextual to other social needs, e.g., privacy may give way to security needs but later reassert itself. During a review anonymity is critical to avoid response sequence bias, but after the decision is made there is no reason a reviewer cannot reveal themselves — if they wish to. Again, privacy is about the choice to reveal oneself, not simply about secrecy.

Who pays?

To the question of who will pay for the open exchange of research knowledge, the sensible answer is “those who gain.” Current business models offer these options:

  1. Reader pays. Readers pay to subscribe to journals that publish research.

  2. Author pays. Common in medicine, where author grants can pay publishing costs.

These models assume the main publishing stakeholders are readers and authors, with publishers the “arms dealers” in the middle (Esposito, 2004). Yet both models are today struggling to remain viable. The socio–technical model suggests a new stakeholder beyond the individuals involved: the academic community.

This new “player”, the community, exists on the fourth emergent level of Table 1. For many years, seeing only individual level gains, and failing to see community level gains as “real”, denied the world the global benefits of the World Wide Web. On an individual level it did not seem profitable, yet if a computer virus destroyed the World Wide Web today, humanity would start building it again tomorrow. The question “who will pay” would not be an issue — funds would be found, as we all know its value. Yet before the Web was developed, cost was a huge barrier, e.g., Microsoft rejected Berners–Lee’s (2000) proposal because it was uneconomic, yet now they too make money from it. The problem was that the Web was being evaluated on the wrong level. It took someone like Berners–Lee to see the potential value for everyone, not just “us”. Indeed the existence of such community opportunities suggests how Kant’s idea of doing what is categorically right can be pragmatic as well as idealistic (Hare, 1997).

Framing the “who pays to share knowledge online” issue in individual terms is a red herring that one can chase into the oceans of e–commerce. We prefer to frame social gains in social terms, based on the Part I argument that the production and exchange of knowledge is at heart a community gain. The logic that what benefits the community should be paid for by the community is reasonable. Social systems of grants and promotions can be seen as essentially mechanisms to induce individuals seeking advantage to benefit their community.

Socio–technical systems like Wikipedia do the same, but just ask people directly to help the community. What has surprised many is that even if community service is not enforced, enough people still choose to serve to create social synergy. To be clear, these people help others not for direct physical gain such as pay but just to help the community they belong to. Apparently, before socio–technical systems, no governance system thought that simply asking free people to be good citizens would work. Certainly no one predicted that Wikipedia’s unpaid volunteers could challenge the paid experts of Encyclopaedia Britannica, yet it has.

While the idea of community resources funding community gains sounds vague, the Internet is an example. It was originally funded by the U.S. Department of Defense, a government agency, and is today funded by international organizations. Support can come from the grassroots, as with Wikipedia; an institute or consortium can pay on behalf of the community, as Cornell does for arXiv; or the community can form a social entity and charge membership fees. The example of open source software development is not naïve, given recorded instances of commercial enterprises adopting open source methods to produce both better products and profits (Boehm and Ross, 1989; Lerner and Tirole, 2002; Nambisan and Wilemon, 2000; von Hippel and von Krogh, 2003; West, 2003). Open source products created by unpaid communities now challenge commercial products, e.g., Open Office (http://www.openoffice.org/). These proven systems succeed by openly inviting people to be “small heroes” — to do good acts without social bribery or coercion. The hard part, it seems, is to believe enough in people to ask them to do this.

The logic behind the success of socio–technical systems, and perhaps human society itself, is that communities do what benefits them just as individuals do. Both are systems, and for communities the benefit arises from social synergy — productivity gains above those members gain by working alone (Whitworth, 2009b). As synergy derives from interactions it increases geometrically with group size, and in large groups non–zero sum synergy gains increase the productive pie more than zero sum profit increases the slices (Wright, 2001). In very large groups, as technology now allows, synergy effects dominate, e.g., businesses like eBay work better the bigger they are. It is the size factor that changes the preferred economic model from one of individual profit to one of individual service and community profit.

Democratic knowledge exchange will evolve not because it is right but because it works. Firstly, publishers will not openly oppose community publishing, as prosecuting authors for putting their work on their own Web sites is not good for their reputation. Communities may be suspicious, divided and fickle, but united they prevail. For example, Gerald Ratner had a multimillion dollar jewelry business in England until a 1991 speech in which, asked how he could sell a decanter so cheaply, he said, “because it’s total crap” (Weir, 2005). After the publicity that followed, the company’s shares dropped £500 million in a few days and he was forced to resign. What publisher can risk being branded “anti–author”?

Secondly, academic organizations that support a community gain a good reputation within it, which in academia means attracting better staff and students. The driving force of community gain and loss will manifest on the individual level as fame and shame, from which profit and loss derive. Ultimately, in a highly socialized world, community feedback will prove to be a greater driving force than profit: “On this point Samuel Johnson was simply wrong when he famously said that no one but a blockhead ever wrote except for money. The truth is that recognition is the greater motivator.” (Esposito, 2004)

 

++++++++++

Conclusion

The KES design proposed in Figure 1 is a hybrid of electronic repository, e–journal, print journal, reputation system and other socio–technical systems. It is intentionally generic as what is envisioned is not a monolithic structure, but a loose confederation of different systems linked by common open knowledge exchange principles. Like Shirky (2008), we believe the future of the semantic Web is in bottom up folksonomies, not top–down ontologies.

Socio–technology is what Hovav (2008) calls a competency–destroying innovation — a new technology that redefines the competencies that underlie social power. For example, the invention of printing eventually made the competency of scribes obsolete, which changed medieval power structures from the bottom up. As the Internet changes knowledge exchange, academia must either reinvent itself or withdraw into exclusive irrelevance. The current medieval structures of higher education are, as Peter Brantley (2009), executive director of the Digital Library Federation, notes, obstacles preventing online collaborations between educational institutions. The demands of cross–disciplinary research will drive the evolution of a new range of academic competencies. When integration is as valued as specialization, availability as valued as exclusivity, guides as valued as gatekeepers, innovation as valued as stability and relevance as valued as rigor, then the shift from feudal to democratic knowledge exchange will be afoot.

Naturally change will be opposed by those vested in academic power structures and feared by those dependent upon them. Yet fear of change is not, and has never been, a good reason to avoid progress. The study of hard drive technology evolution illustrates how initially disruptive innovations morph into constructive successes (Christensen, 1997). The lesson is that so–called disruptive change is an opportunity if one does not try to deny it. So let us not try to impose on the age of electronic knowledge exchange the rules of the previous printed age (Pinter, 2008).

Universities currently outsource the marketing and distribution of their knowledge to publishers with little interest in their communities. If knowledge distributors who create no knowledge dominate its exchange, then the publishing tail is wagging the academic dog. This is not good for anyone, publishers included, since as we have argued here, it is a recipe for academic decay. If universities let publishers kidnap their knowledge and hold them to copyright ransom (Willinsky, 2000), they fail their public duty of knowledge guardianship.

The blueprint for change that this paper presents can only be realized by the academic community. Technologies enable communities, but only communities can make a technology come alive. Modern socio–technology has the tools; the social will to use them is still needed. If academics reject this option, it is not unthinkable that the greater community will bring an end to the university as we know it (Taylor, 2009). Like the aristocrats of the past, universities may not disappear, but they could fade into irrelevance. Academia is powerful but not invulnerable to the Web 2.0 world that O’Reilly and others envision: “… if scholarly output is locked away behind fire walls, or on hard drives, or in print only, it risks becoming invisible to the automated Web crawlers, indexers, and authority–interpreters that are being developed. Scholarly invisibility is rarely the path to scholarly authority.” (Jensen, 2007)

An online KES that accepts all, reviews all and publishes all would reinvent the original spirit of academic publishing. It would also help promotion and tenure committees select better, by giving more detail on publishing, reviewing, citations, contributions, comments, downloads, views and online service roles (a toy sketch follows below). The current state of information overload arises from too many isolated facts and not enough integration. Over sixty years ago Nicholas Murray Butler, then President of Columbia University, observed that experts know more and more about less and less. As specialization increases, each “expert” sees less and less of the whole picture. Unless the specialist stranglehold on knowledge exchange is broken, knowledge cannot flow across the subject boundaries where the breakthroughs lie. The future of knowledge growth needs not only intelligence, people using their own brains, but also extelligence, the social use of the brains of others. Socio–technology lets us integrate as well as specialize, connect as well as isolate, and merge as well as purify knowledge. It can support the evolution of academic knowledge exchange into an electronic democracy.
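As a toy illustration of such a transcript (ours alone; the event names are hypothetical, not a prescribed format), the sketch below counts every recorded act of exchange per scholar, so that reviewing, commenting and service become as visible to a committee as publishing:

# A toy sketch of a KES "performance transcript": count each scholar's
# recorded acts of exchange by kind. Event names are hypothetical.
from collections import Counter
from typing import Iterable, Tuple

def performance_transcript(events: Iterable[Tuple[str, str]],
                           scholar: str) -> Counter:
    """Summarize one scholar's exchange activity by event kind."""
    return Counter(kind for who, kind in events if who == scholar)

# Example usage with a tiny hypothetical event log of (scholar, event) pairs:
log = [("ada", "publish"), ("ada", "review"),
       ("bob", "publish"), ("ada", "comment"), ("ada", "review")]
print(performance_transcript(log, "ada"))
# Counter({'review': 2, 'publish': 1, 'comment': 1})

End of article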

 

About the authors

Brian Whitworth is a Senior Lecturer at Massey University (Albany), Auckland, New Zealand. He holds a B.Sc. (Mathematics), B.A. (Psychology), M.A. (Neuro–psychology), and an Information Systems Ph.D. He has published in journals including Small Group Research, Group Decision & Negotiation, Database for Advances in IS, Communications of the AIS, IEEE Computer, Behaviour & Information Technology (BIT), Communications of the ACM and IEEE Transactions on Systems, Man and Cybernetics. Topics include generating online agreement, voting before discussing, legitimate by design, spam and the socio–technical gap, and the web of system performance. With Aldo de Moor, he edited the Handbook of Research on Socio–Technical Design and Social Networking Systems (Hershey, Pa.: Information Science Reference, 2009). See http://brianwhitworth.com.

Rob Friedman is an associate professor of humanities and information technology and directs the science, technology and society program at the New Jersey Institute of Technology. His research examines science and culture, socio–technical systems design, and technology’s role in education. He is first author of a reference guide to the theory and research supporting the field of technology and innovation management, Principle Concepts of Technology and Innovation Management: Critical Research Models, published in August 2008 by IGI Publishing. Friedman serves as editor–in–chief of the SIGITE Newsletter, the peer–reviewed newsletter of the ACM’s Special Interest Group for Information Technology Education. He teaches graduate and undergraduate courses on socio–technical systems in their cultural contexts.

 

Acknowledgements

Brian thanks his son Alex at Princeton for valuable ideas and discussion. Thanks also to Guy Kloss, Massey University, and to Daniel Mietchen, Friedrich Schiller University, for useful comments and links.

 

Notes

1. The spelling error is in the original.

2. Laughlin, 2005, p. 179.

3. The “reader” here is a role not a person, so one person can be both a reader and an author.

 

References

Y. Benkler, 2002. “Coase’s penguin, or, Linux and the nature of the firm,” Yale Law Journal, volume 112, number 3, pp. 369–446, and at http://www.benkler.org/CoasesPenguin.html. doi: http://dx.doi.org/10.2307/1562247.

T. Berners–Lee, 2000. Weaving The Web: The original design and ultimate destiny of the World Wide Web. New York: HarperCollins.

L. von Bertalanffy, 1968. General system theory: Foundations, development, applications. New York: Braziller.

B. Boehm and R. Ross, 1989. “Theory–W software project management: Principles and examples,” IEEE Transactions on Software Engineering, volume 15, number 7, pp. 902–916. doi: http://dx.doi.org/10.1109/32.29489.

I. Bond, A. Udalski, M. Jaroszyński, N. Rattenbury, B. Paczyński, I. Soszyński, L. Wyrzykowski, M. Szymański, M. Kubiak, O. Szewczyk, K. Źebruń, G. Pietrzyński, F. Abe, D. Bennett, S. Eguchi, Y. Furuta, J. Hearnshaw, K. Kamiya, P. Kilmartin, Y. Kurata, K. Masuda, Y. Matsubara, Y. Muraki, S. Noda, K. Okajima, T. Sako, T. Sekiguchi, D. Sullivan, T. Sumi, P. Tristram, T. Yanagisawa and P. Yock, 2004. “OGLE 2003–BLG–235/MOA 2003–BLG–53: A planetary microlensing event,” Astrophysical Journal, volume 606, number 2, pp. L155–L158. doi: http://dx.doi.org/10.1086/420928.

P. Brantley, 2009. “An interview with Peter Brantley at CNI’s 2007 spring task force meeting,” at http://www.educause.edu/blog/gbayne/AnInterviewwithPeterBrantleyat/166897, accessed 23 April 2009.

C. Christensen, 1997. The innovator’s dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.

P. Davis and M. Connolly, 2007. “Institutional repositories: Evaluating the reasons for non–use of Cornell University’s installation of DSpace,” D–Lib Magazine, volume 13, numbers 3/4, at http://www.dlib.org/dlib/march07/davis/03davis.html.

J. Esposito, 2004. “The devil you don’t know: The unexpected future of open access publishing,” First Monday, volume 9, number 8, at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1163/1083. doi: http://dx.doi.org/10.5210/fm.v9i8.1163.

M. Freeden, 1991. Rights. Minneapolis: University of Minnesota Press.

J. Grudin, 2005. “Human factors, CHI and MIS,” In: P. Zhang and D. Galletta (editors). Human–computer interaction and management information systems: Foundations. Advances in management information systems, number 5. Armonk, N.Y.: M.E. Sharpe, pp. 402–421.

R. Hare, 1997. “Could Kant have been a utilitarian?” In: R. Hare (editor). Sorting out ethics. Oxford: Oxford University Press.

S. Harnad, 1999. “Free at last: The future of peer–reviewed journals,” D–Lib Magazine, volume 5, number 12, at http://www.dlib.org/dlib/december99/12harnad.html, accessed 23 April 2009.

J. Hendler, 2007. “Reinventing academic publishing — Part 1,” IEEE Intelligent Systems, volume 22, number 5, pp. 2–3. doi: http://dx.doi.org/10.1109/MIS.2007.4338485.

A. Hovav, 2008. “The socially driven life cycle of academic scholarship: A longitudinal study of six electronic journals,” IEEE Transactions on Professional Communication, volume 51, number 1, pp. 79–94. doi: http://dx.doi.org/10.1109/TPC.2007.2000045.

A. Hovav and P. Gray, 2004. “Managing academic e–journals,” Communications of the ACM, volume 47, number 4, pp. 79–82. doi: http://dx.doi.org/10.1145/975817.975821.

M. Jensen, 2007. “The new metrics of scholarly authority,” Chronicle Review, volume 53, number 41 (15 June), p. B6, and at http://chronicle.com/article/The-New-Metrics-of-Scholarly/5449, accessed 23 April 2009.

R. Laughlin, 2005. A different universe: Reinventing physics from the bottom down. New York: Basic Books.

A. Lee, 1999. “Rigor and relevance in MIS research: Beyond the approach of positivism alone,” MIS Quarterly, volume 23, number 1, pp. 29–33. doi: http://dx.doi.org/10.2307/249407.

J. Lerner and J. Tirole, 2002. “Some simple economics of open source,” Journal of Industrial Economics, volume 50, number 2, pp. 197–234. doi: http://dx.doi.org/10.1111/1467-6451.00174.

J. Ljungberg, 2000. “Open source movements as a model for organizing,” European Journal of Information Systems, volume 9, number 4, pp. 208–216. doi: http://dx.doi.org/10.1057/palgrave.ejis.3000373.

H. Maturana and F. Varela, 1998. The tree of knowledge: The biological roots of human understanding. Revised edition. Boston: Shambala.

S. Nambisan and D. Wilemon, 2000. “Software development and new product development: Potentials for cross–domain knowledge sharing,” IEEE Transactions on Engineering Management, volume 40, number 2, pp. 211–220. doi: http://dx.doi.org/10.1109/17.846788.

R. Paul, 2005. “Editor’s view: An opportunity for editors of IS journals to relate their experiences and offer advice. The editorial view of Ray J. Paul. First in a series,” European Journal of Information Systems, volume 14, pp. 207–212. doi: http://dx.doi.org/10.1057/palgrave.ejis.3000542.

F. Pinter, 2008. “The transformation of academic publishing in the digital era,” Oxford Internet Institute, at http://webcast.oii.ox.ac.uk/?view=Webcast&ID=20081121_268, accessed 23 April 2009.

C. Shirky, 2008. Here comes everybody: The power of organizing without organizations. New York: Penguin Press.

M. Taylor, 2009. “End the university as we know it,” New York Times (26 April), p. A23, and at http://www.nytimes.com/2009/04/27/opinion/27taylor.html, accessed 26 April 2009.

J. Thompson, 2005. “Survival strategies for academic publishing,” Chronicle of Higher Education, volume 51, number 41 (17 June), p. B6.

F. Ulaby, 2006. “Electronic journals versus print: Publishing in the electronic age,” Proceedings of the IEEE, volume 94, number 6, pp. 1,043–1,044.

E. von Hippel and G. von Krogh, 2003. “Open source software and the ‘private–collective’ innovation model: Issues for organization science,” Organization Science, volume 14, number 2, pp. 209–223. doi: http://dx.doi.org/10.1287/orsc.14.2.209.14992.

R. Weber, 1999. “The journal review process: A manifesto for change,” Communications of the Association for Information Systems, volume 2, number 12, pp. 1–24.

S. Weir, 2005. History’s worst decisions and the people who made them. Millers Point, N.S.W., Australia: Pier 9.

J. West, 2003. “How open is open enough? Melding proprietary and open source platform strategies,” Research Policy, volume 32, number 7, pp. 1,259–1,285.

G. van Westrienen and C. Lynch, 2005. “Academic institutional repositories: Deployment status in 13 nations as of mid 2005,” D–Lib Magazine, volume 11, number 9, at http://www.dlib.org/dlib/september05/westrienen/09westrienen.html, accessed 23 April 2009.

B. Whitworth, 2009a. “The social requirements of technical systems,” In: B. Whitworth and A. de Moor (editors). Handbook of research on socio–technical design and social networking systems. Hershey, Pa.: Information Science Reference; and at http://brianwhitworth.com/STS/STS-chapter1.pdf.

B. Whitworth, 2009b. “A social environment model of socio–technical performance,” at http://brianwhitworth.com/social-environment-model.pdf.

B. Whitworth and T. Liu, 2009a. “Channel e–mail: A sociotechnical response to spam,” Computer, volume 42, number 7, pp. 63–72. doi: http://dx.doi.org/10.1109/MC.2009.214.

B. Whitworth and T. Liu, 2009b. “Politeness as a social computing requirement,” In: E.J. Szewczak (editor). Selected readings on the human side of information technology. Hershey, Pa.: Information Science Reference, pp. 425–442.

B. Whitworth and T. Liu, 2009c. “Politeness as a social computing requirement,” In: P. Zaphiris and C.S. Ang (editors). Human computer interaction: Concepts, methodologies, tools, and applications. Hershey, Pa.: Information Science Reference, pp. 2,675–2,690.

B. Whitworth and T. Liu, 2008. “Politeness as a social computing requirement,” In: R. Luppicini (editor). Handbook of conversation design for instructional applications. Hershey, Pa.: Information Science Reference, pp. 419–436, and at http://brianwhitworth.com/polite2.pdf.

B. Whitworth, A. de Moor, and T. Liu, 2006. “Towards a theory of online social rights,” In: R. Meersman, Z. Tari and P. Herrero (editors). On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops. Lecture Notes in Computer Science, number 4277. Berlin: Springer–Verlag, pp. 247–256, and at http://brianwhitworth.com/cominf06-final.rtf.

B. Whitworth and E. Whitworth, 2004. “Spam and the social–technical gap,” Computer, volume 37, number 10, pp. 38–45, and at http://brianwhitworth.com/spam-computer.pdf. doi: http://dx.doi.org/10.1109/MC.2004.177.

B. Whitworth and A. de Moor, 2003. “Legitimate by design: Towards trusted virtual community environments,” Behaviour & Information Technology, volume 22, number 1, pp. 31–51, and at http://brianwhitworth.com/legitimacy2002.pdf. doi: http://dx.doi.org/10.1080/01449290301783.

B. Whitworth, B. Gallupe, and R. McQueen, 2001. “Generating agreement in computer–mediated groups,” Small Group Research, volume 32, number 5, pp. 625–665, and at http://brianwhitworth.com/sgr01.pdf. doi: http://dx.doi.org/10.1177/104649640103200506.

J. Willinsky, 2000. “Proposing a knowledge exchange model for scholarly publishing,” Current Issues in Education, volume 3, number 6, at http://cie.asu.edu/volume3/number6/.

R. Wright, 2000. NonZero: The logic of human destiny. New York: Pantheon.

 


Editorial history

Paper received 20 December 2008; revised 25 April 2009; revised 14 July 2009; revised 24 August 2009; accepted 26 August 2009.


Creative Commons License
This paper is licensed under a Creative Commons Attribution–Noncommercial–Share Alike 3.0 United States License.

Reinventing academic publishing online. Part II: A socio–technical vision
by Brian Whitworth and Rob Friedman.
First Monday, Volume 14, Number 9 - 7 September 2009
https://firstmonday.org/ojs/index.php/fm/article/download/2642/2287