First Monday

Science 2.0 (change will happen ...) by Jean-Claude Burgelman, David Osimo, and Marc Bogdanowicz

In this paper we outline some of the main trends and changes we believe will affect science over the next 20 years, driven mainly by a new socio–technological paradigm resulting from the use of information and communication technologies. We first analyze three main trends (growth of scientific authorship; growth in scientific publishing; growth in data availability and processing) which are already visible now but will grow exponentially in the coming decades and will thus affect the dynamics of science.

We then frame the above changes in the context of the transformation of scientific production and publication conditions — seen as the production process of a cultural good — which then feed back into the nature of science itself. Finally, we bring together these interrelated growth trends of authors, publications and data and pinpoint their profound and multiple impacts on the very nature of scientific work and its professional dynamics, in terms of increased openness, instability and inequality.


1. The three observable trends
2. More (scientific) publishing but less (legitimate) publishers?
3. Science 2.0 is not per se better
4. Does future science need a different science policy?



1. The three observable trends


Figure 1: The three dimensions of change in science.


Innovation and research are at the top of the policy agenda of developed as well as developing countries, especially in times of crisis and restructuring. Global competition puts pressure on companies and regions to enhance their competitiveness and avoid commoditization through knowledge, high skills and research. Yet governments struggle to design appropriate research policies for a fast–changing and unpredictable world.

There are in particular three key areas of fast change that are likely to lead to a systemic change in science, with their attendant opportunities and threats: the growth of authorship, the explosion of publication, and the availability of data.

1.1. The growth of authorship

From universal authorship …
The explosion of user–generated content on the Web is modifying the habits and expectations of many individuals. We are witnessing an unprecedented explosion of authorship, as described by Pellis and Bigelow (2009): “nearly universal literacy is a defining characteristic of today’s modern civilization; nearly universal authorship will shape tomorrow’s.” Even though this is a rather optimistic and ethnocentric view of the Western world [1], it is a fact that authorship, when new media are included, is growing 10–fold each year, where it once grew 10–fold each century (see Figure 2). And the human resources available are not that scarce: with the time freed by a one percent reduction in yearly television consumption in the U.S., 100 Wikipedias could be built every year (Shirky, 2008).


Figure 2: Number of authors published each year. For new media, the number refers to authors with at least 100 readers. Source: Pellis and Bigelow (2009).


The main reason for such evolution lies in the drastic reduction of barriers to entry, in particular to the stages of publishing and distribution, as we will describe later. Because of these lowered barriers to entry, publishing has become free, Web–based, or self–produced [2].

Hence, there is a behavioural change where individuals see authorship as a benefit rather than as a cost, perhaps because of the intellectual satisfaction given by immediate publishing.

Further, it is the readership, rather than publishers, that decides the relevance and importance of published material — after it is published. It is a publish–then–filter approach (Shirky, 2008), as contrasted with the traditional filter–then–publish approach. This often immediate acknowledgement of the author may also reinforce the behavioural change. And readership follows and grows, for reasons still under debate: fame–of–the–day, success of the anonymous, disenchantment with traditional media, generational change.

… to “everyone a scientific author”?
We can easily expect this trend to become reality in science too, and this will have clear repercussions for research. The boundary between amateur and professional researcher would then blur, and the number of scientific authors in the traditional sense, as well as of extra–institutional authors claiming scientific status for their work, will grow [3].

One key driver of this explosion of authorship is the “democratisation of software”. Software for data processing has become much cheaper or even free thanks to open source, cloud computing and user communities, which provide free or low–cost software solutions and the support networks to use them profitably.

For example, platforms such as Many Eyes [4] and Google Maps [5] enable amateurs to visualize and explore data with tools previously available only to professionals.

Collaborative Web–based platforms enable small contributions to data analysis [6] and continuous collaboration not framed in articles and papers, through tools similar to Google Wave [7].

Statistical packages [8] have generated, often with the strong support of their (commercial) initiators, large worldwide communities of users who not only share basic information but also develop together, often for free, large mathematical programs that are improved in the mature style of open source software communities.

This release of tools also affects the review and assessment of scientific output. In a future science scenario as we envisage here, scientific output can easily be reviewed, assessed, rated and commented upon by nearly anyone. This can already be observed in such official initiatives as the Peer–to–Patent project that allows anyone to comment on patent applications and to submit relevant evidence that allows the U.S. Patent Office to assess the originality of a claim [9].

Such explosion in the number of potential scientists as authors, publishers, contributors and evaluators, will not only affect the source and legitimacy of science but also its quality and nature, as we further describe in this paper.

1.2. The growth of (beta) publication?

From the fragmentation of scientific outputs …
Simultaneously with the explosion in the number of authors, we note the fragmentation of scientific output. For more than 300 years the scientific article has been the central piece of consolidation, communication and interaction in scientific debate, and it has proven a formidable method for encouraging scientists to disclose their results and ensuring their social legitimacy. The key advice Faraday gave to young researchers — “Work. Finish. Publish.” — has been a pillar of modern science. But publishing in scientific journals is perceived as a potential bottleneck and a limit to knowledge sharing, because of the long time frame for writing, reviewing and publication, as well as because of what is sometimes labelled the inherent conservatism of peer reviewing (the system tending towards “more of the same” output).

Today, and increasingly in the future, a plurality of smaller, less formal outputs is used by researchers to communicate and exchange ideas, such as blog entries, draft work and short analyses (Nielsen, 2008). Such behaviour can be seen as the modern form of the personal, sometimes almost intimate, correspondence between scientists witnessed in many biographies of famous researchers. Today, this informal correspondence about intermediate results happens on the open Web, through blogs and similar open tools, not as a one–to–one but as a many–to–many discussion. Ultimately, this mechanism expands multidisciplinary and unexpected exchanges. In fact, blogs are very much a reality in the academic world. ScienceBlogs [10] is a portal dedicated exclusively to scientific blogs, and the Academic Blog Portal [11] lists hundreds of blogs for every scholarly domain, from mathematics to law.

… to “liquid science” …
This trend towards the publication of drafts and non–finalized output is visible across different Web applications. On the Web, innovative services are released publicly before being finalized, through the so–called “permanent beta” approach, because large–scale deployment brings insights that are not possible to replicate in a protected environment [12]. A parallel change is likely to happen in science: results are delivered not only as a finalized product (publication in a journal or a book) but as draft products, in order to enable shorter and more frequent feedback mechanisms and continuous improvement, enabled by the sharing of rough output before publication.

Bibliographies become fundamental tools for identifying oneself and for getting in touch with others with similar interests. Mendeley [13] is an example of a blend of a social network of researchers and a bibliographic management system.

Data are released in rough format to enable others to devise alternative interpretations: every chart in the policy publications of the OECD has a link to a data repository for readers to create their own alternative interpretations of the data (see for example,3352,en_2825_293564_1_1_1_1_1,00.html).

Scientists use blogs to exchange early views on their work, such as Tao’s posts on the proof of Poincaré’s conjecture in mathematics [14]. The so–called “Open Notebook” makes the working notes of scientists available [15]. arXiv allows researchers to publish articles once submitted to journals, without waiting for the full review and editorial process [16].

Comments become a prominent mode of interaction and a genre in themselves. Any form of scientific output will be directly available for comment [17], partially or totally breaking with the confidentiality and exclusivity rights traditionally associated with research output.

In other words, researchers increasingly allow other researchers to look at their background notes and rough material. Indeed, we are witnessing the emergence of “open science”. This change is consistent with the theory about non–linear research and development processes (Rosenberg, 1982), emphasizing the importance of early feedback mechanisms in research and development cycles. Table 1 illustrates the different degrees of openness in scientific output.


Table 1: Degrees of openness in science, for the years 2010 and 2030.

Time | Bibliography | Data       | First analysis, working notes | Draft paper | Article | Comments on other work
2010 | Not public   | Not public | Not public                    | Not public  | Public  | Internal, public only through articles
2030 | Public       | Public     | Public                        | Public      | Public  | Public by all means and at all stages of work


Consequently, the granularity of research is changing, through the rise of micro–science. Individuals will come up with ideas and interpretations of data, framed not in a fully–fledged research article as part of a systematic theory but around a specific insight, such as a chart or a micro–analysis.

This could have a disruptive impact on the rate of innovation. Key scientific findings usually appear unplanned, through continuous exchanges of views between researchers with similar interests in different disciplines, and between relevant individuals at different levels of the innovation cycle. Even Albert Einstein benefited from the knowledge of his mathematician friend Marcel Grossman, who pointed him to the work of Bernhard Riemann (Nielsen, 2008).

The sharing of intermediary products, whether voluntary or automated through recommendation systems, facilitates unexpected spillovers and the serendipitous cross–fertilisation of interdisciplinary research activities. Web–based tools help augment this cross–fertilisation by providing the means for those with similarly unique research interests to connect, for example by mapping reciprocal bibliographies and providing a recommendation service similar to Amazon’s recommendation engine.
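The bibliography-mapping idea can be sketched in a few lines. This is only an illustrative toy, not an actual service: the researcher names, cited works and the `recommend_contacts` helper are all hypothetical, and a real system would use far richer similarity measures than plain set overlap.

```python
def jaccard(a, b):
    """Overlap between two bibliographies, each a set of cited works."""
    return len(a & b) / len(a | b)

# Hypothetical bibliographies: each researcher's set of cited works.
bibliographies = {
    "alice": {"riemann1854", "einstein1916", "hilbert1915"},
    "bob":   {"einstein1916", "hilbert1915", "noether1918"},
    "carol": {"darwin1859", "mendel1866"},
}

def recommend_contacts(researcher, min_overlap=0.3):
    """Suggest researchers whose bibliographies overlap enough
    with the given researcher's, most similar first."""
    own = bibliographies[researcher]
    scores = {
        other: jaccard(own, refs)
        for other, refs in bibliographies.items()
        if other != researcher
    }
    return sorted(
        (o for o, s in scores.items() if s >= min_overlap),
        key=lambda o: -scores[o],
    )
```

In this toy, “alice” is pointed to “bob” (their bibliographies share two of four distinct references) but not to “carol”, who shares none.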

1.3. The growth of data and processing

From more data available …
The data landscape is transforming because of several interrelated trends. First, the cost of accessing data has been dramatically lowered: much of the useful statistics and more general data from (often publicly funded) research are now published and freely accessible in raw format on the Web.

For example, government data are increasingly being published as raw data [18], in order to enable third parties to build applications and services (Osimo, 2009). By doing so governments are allowing users to dig into data, not only browse final products (see Figure 3 below).


Figure 3: U.K. model of open government data. Adapted from POIT (2009).


Secondly, there is much more data collected and archived today than ever before, and the volume is growing at an exponential rate. The digitisation of many records (such as individual medical records that were earlier hand–written and archived on paper) now allows almost costless archiving, transmission, reproduction and processing. Most importantly, the digital recording of our collective behaviour through pervasive and connected sensors and devices such as RFID and embedded systems (the so–called Internet of things) allows for unprecedented levels of data collection.

Interestingly, this inflation in data collection is driven by both public and private actors, generating a large debate about the legitimacy of using so–called “non–official” data for research. Where earlier practice clearly pointed to the legitimacy of public and acknowledged sources, today’s practices progressively encourage the use of alternative sources, from real organisations such as hospitals, schools, municipalities and banks to virtual and automated ones such as Web sites.

Thirdly, the rapid growth of official and unofficial sources of data has generated an immense reservoir of data potentially useful to research. The abundance of and greater access to such data, often more timely and by nature far more detailed than usual aggregated statistics, offers huge opportunities to answer new and old questions. Much of it is micro–level data, such as company or individual data. Consequently, the use of micro–level data to generate meso– and macro–level analysis has become a central methodological challenge for many researchers (De Panizza and De Prato, 2009).

… to a new kind of science?
In several scientific domains, including the “hard” sciences, a new type of scientific proof is emerging, competing with the traditional experimental methodologies that science has supported and developed for centuries. Current competencies in data mining, database bridging and mathematical algorithmic analysis seem to allow for a competing model of scientific research, based on correlations and probabilistic results developed through the use of heterogeneous macro–databases rather than on traditional experimentation. This seems to be a new way to create science.
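As a minimal illustration of this correlation-driven style, one can compute a plain correlation coefficient across records merged from heterogeneous sources. The figures below are invented for the sketch; the point is precisely that a strong coefficient emerges without any causal or mechanistic model behind it.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical records bridged from two unrelated databases,
# keyed by region: hospital admissions and an air quality index.
admissions = [120, 150, 180, 210, 260]
air_quality = [40, 55, 62, 75, 90]
r = pearson(air_quality, admissions)  # strong correlation, no causal model
```

A high value of `r` here says nothing about mechanism; it only flags a pattern worth theorising about, which is exactly the shift the paragraph above describes.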

Only a robust conceptual framework allows serious investigation, be it under one or the other of the aforementioned procedures. As stated boldly by Anderson (2007):

“The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.” [19]

Truly, the nature of science is affected by new possibilities.

Basically, there is an abundance of both official and non–official data as well as data processing tools. Both will provide new opportunities to various actors in science production and publication processes.



2. More (scientific) publishing but less (legitimate) publishers?

Can we integrate most of our observations, even if based on weak signals, into an analytical framework that would help us better understand potential future changes?

If we consider science production as a specific case of cultural goods production [20] — in particular its dissemination through publication, since the scientific publication is the de facto cultural good — we can borrow from this framework to identify the mechanisms and impacts of digitisation on science.

This framework, well documented in recent literature [21], has usually been applied to goods such as those produced by the media or entertainment industries. We will illustrate that scientific publication is confronted with very similar changes to those that have affected the general press as well as the book and music industries.

Cultural goods markets are usually characterised by a specific set of characteristics: (i) uncertain demand; (ii) a short period of profitability; (iii) infinite variety of supply; and, (iv) vertical differentiation of markets. Such characteristics are also present in science production and publication. There is no clearly defined demand, even less so beyond narrow scientific specialisations. Publications are not the result of strict “market analysis”, but rather of a serendipitous process (research) marked at best by occasional and often short–lived peaking events, which in turn create a momentary demand with, potentially, a period of higher profitability for publication [22]. The infinite variety of supply simply corresponds to the infinite number of research questions within scientific disciplines, each segmented into a variety of specialisations favouring the creation of small niche markets for publication and the consequent vertical differentiation of the market.

Last but not least, cultural goods have their production characterised by high fixed costs and low marginal costs. The initial investment to create the first “copy” — the very mission of the creator — is extremely high, but then additional copies can be (re)produced at little additional cost. This need for an early stage investment can affect power relations in the value chain, depending on who ensures financing of the creative process. Research is an expensive and uncertain investment, while the dissemination of its results is comparatively done at very low marginal cost.

The value chain of cultural goods is a classical and linear retail distribution. The product, from its creation to consumption, goes through a series of necessary intermediaries to allow for commercialisation, with each of the intermediaries exercising a specific role (or cumulating several in case of vertical integration) and aiming at optimising its benefit and position. This value chain is complemented with the supply of intermediary inputs at most stages of the chain. This supply includes the design, development and production of all the inputs necessary for the undertaking of the production and commercialisation of the cultural good (e.g., access to content, content processing tools, publishing material, machinery or platform and network access which enable content creation, publication, distribution and retail). In the case of science production, we have already noted that data and data processing tools are important intermediary inputs in the creation stage.

Basically, different actors (mainly creators, publishers, distributors and retailers) with different objectives and competences are occupying various positions of this value chain.

A dominant actor in the cultural goods value chain is the publisher. The publisher is responsible for (re)producing a physical or virtual good, managing the corresponding intellectual rights, and handling marketing and often distribution itself. It is fair to say that the publisher takes most of the financial risks when it comes to commercializing the output of the creative process. Therefore, the publisher often tries to position itself as a gate–keeper, through the management of exclusivity rights, the pre–financing (through royalties paid in advance) of the creative process, its more or less authoritative guidance (“on order” creation), or even the integration of the creative stage within its own organizational structure (vertical integration).

Publishers are often presented as the central economic actors in the cultural goods value chain, ruling the overall organisation of the market and often operating within an oligopolistic structure.

Given the characteristics of the market and its value chain, the mutual relations and interests of the actors will generate a specific pattern to creative and publishing processes. Transformations of these processes can be disruptive, affecting the nature of the creative process itself. Our claim is that scientific activity is currently undergoing such a transformation.

First, the relevance of the above framework for the interpretation of scientific activity is straightforward: a creator (scientist) processing content (data) in a creative way (research concepts and methods) generates scientific output (an article) that she wants to publish. The publisher is in a dominant position as it exerts all the competences described above (exclusivity, quality control, management of rights, marketing, distribution logistics). Access to distribution channels is hence organised and clearly restricted.

With the current changes we portrayed earlier in this paper, a near infinite number of potential creators (universal literacy and authorship) with lowered barriers to entry (cost of initial investment in data and data processing tools as free intermediary inputs) is enabled to publish directly on the Web or any other mass digital platform (free channels of distribution) benefiting from a wide and differentiated audience (universal access to the Internet; multiplication of niche markets).

This in turn affects the nature of scientific work itself, from the creative stage (conceptual frameworks, quality control of data, perpetual beta) to the consumption stage (free infinite access, publish–then–filter approach), by way of the weakened role of publishers as “conservative” quality control institutions. These changes allow, or even favour, the emergence of a new praxis in science, resembling similar transformations in other cultural goods industries such as media or music — more creators, more trials, more inputs and tools.

However, scientific activities are different from other cultural goods industries. We have chosen to point at two differences, seen as most relevant to the purposes of this paper.

First, scientific activity is not financed strictly like other cultural goods. In the vast majority of cases, pre–financing is not managed by scientific publishers. The creative stage in science belongs to another value chain — that of research, marked by the needs and opportunities offered by large multinational companies and public authorities funding R&D. In that sense, scientific publishers are in a much weaker position in the value chain.

Second, scientific activities in society are not purposeless, unlike some cultural goods [23]. Briefly, science is purposeful because its outputs are at the core of our contemporary notions of life and society. Science, boldly speaking, is a search for “truth”.

We have seen scientific results competing with other results, on creationism, climate change, epidemics, etc. Some claim that Western societies have entered a deep crisis about the notion of human progress and its relation to science and technology.

Scientific publishers, with their acknowledged peer–reviewing systems, are expected to ensure the validity of scientific statements. Current changes put pressure on publishers. With any failure of the peer–reviewing process, one of their best assets as natural and acknowledged gatekeepers could be challenged. Hence, the reduction of the peer–reviewing (publish–then–filter) power of publishers and their exclusive rights (perpetual beta) further weaken their positions in the market [24].

Hence the differences we noted earlier between scientific activities and cultural goods in general both point to a possible weakening of the role of publishers in the value chain.

The rise of universal authorship, lowering of entry barriers such as the access cost to intermediary inputs, threats to the publishers’ gate–keeping power and the opening of new channels of distribution all affect the scientific publication value chain. These transformations affect not only scientific publishing, but also the very nature of scientific work by providing additional creative production inputs, processes and output.

In the next section, we detail some of the consequences of these transformations.



3. Science 2.0 is not per se better

The three trends described earlier, and the way that they might affect the rules of the game, can be seen as three dimensions of change, in turn having a major impact on the quantity and very nature of science.

These changes are already visible but the result is not necessarily a linear progress towards a democratisation of science and a more equal knowledge society. Indeed, we are likely to see an increase in inequality and concentration.

The explosion of scientific authorship, content and data will eventually run up against the limited capacity of human attention. Trusted filters and services will have to help readers make sense of the overwhelming volume of content, relying on reputation management systems based on actual referrals and links to contributions. As Sterling (2005) noted: “We (the scientists of tomorrow) never bother to ‘publish’: we just post our findings on weblogs, and if they get a lot of links, hey, we’re the most frequently cited.”

Reputation will become the most precious resource, generating attention, power and funding (Picci, 2007). Reputation management systems will be crucial: new approaches, more open, flexible and transparent, are emerging, based on implicit and explicit data (such as incoming links, page views or ratings). Yet no clear model is in sight, and current systems can be gamed. Reputation management will certainly follow an evolving model, based on qualitative and quantitative data, standing in contrast with the existing “impact factor” model. However, this does not mean that an organic, bottom–up, peer–to–peer approach is the only scenario. It is quite possible that a third–party reputation management system could be developed, one that assesses and rates researchers in a dynamic way, combining qualitative and quantitative input, an emphasis on network analysis, and implicit and explicit data — call it a Google of reputation. Trust in these reputation systems will be key, and their power will be immense — but easily contested by competitors. Existing indexes will remain important, but will become more open and transparent, thanks to a diversification of publication and access to readers’ feedback.

Certainly, in this field of reputation and quality assessment networks, individuals will become more important and more exploited. Personal, transparent referral, endorsement and recommendation will be the main tools for developing new contacts and collaborations, leading to new sources of funding. Software will be developed that enhances reputation–based management with endorsements, crossing it with network analysis of recommendations, to build a system we could call “PeopleRank”, a personal version of Google’s PageRank algorithm [25].
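A hypothetical “PeopleRank” of this kind could score researchers over a graph of public endorsements, in the spirit of PageRank: an endorsement from a highly ranked researcher counts for more. The sketch below is our own illustration, not an existing system; the endorsement graph, the damping value and the function name are all invented.

```python
def peoplerank(endorsements, damping=0.85, iters=100):
    """Score nodes by incoming endorsements, PageRank style:
    an endorsement from a highly ranked node counts for more."""
    nodes = set(endorsements) | {t for ts in endorsements.values() for t in ts}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for src in nodes:
            targets = endorsements.get(src, [])
            if targets:
                # Split the endorser's weight among those it endorses.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Node endorsing no one: spread its weight evenly.
                for node in nodes:
                    new[node] += damping * rank[src] / n
        rank = new
    return rank

# Hypothetical endorsement graph: who publicly endorses whom.
graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = peoplerank(graph)
```

In this toy graph, researcher “c”, endorsed by two others, ends up with the highest score, while “d”, endorsed by no one, ends up with the lowest.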

Bottom–up collaborative filtering creates strong positive feedback mechanisms. In a world of extreme abundance and free horizontal exchange of preferences among users, the choices of one user are likely to influence the choices of other users. In systems where many individuals can choose among many options, distributions of preferences tend to be more unequal, showing steep power law distributions (Barabasi, 2003). In any system sorted by rank, the value for the Nth position will be 1/N. For whatever is being ranked — income, links, traffic — the value of second place will be half that of first place, and tenth place will be one–tenth of first place. For example, Clay Shirky in 2003 observed that the distribution of readership of blogs follows a power law curve and is much more unequal than traditional publishing (see Figure 4): “diversity plus freedom of choice creates inequality, and the greater the diversity, the more extreme the inequality.”
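Shirky’s 1/N rank rule can be checked numerically. The sketch below, with invented figures, shows how strongly such a distribution concentrates value: among 10,000 ranked items, the top 10 (only 0.1 percent of them) capture roughly 30 percent of the total.

```python
def zipf_shares(n_items, top=10):
    """Under the 1/N rank rule (value of rank k is 1/k),
    return the share of total value captured by the top positions."""
    weights = [1.0 / k for k in range(1, n_items + 1)]
    return sum(weights[:top]) / sum(weights)

# Second place is worth half of first, tenth place one-tenth of first.
share = zipf_shares(10_000, top=10)  # roughly 0.30
```

The same computation with larger `n_items` shows the share of the top 10 shrinking only slowly, which is why the inequality persists as the system grows.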


Figure 4: Most popular weblogs in 2003 arranged in rank order by number of outbound links.


In the sciences, this will create a “star system” of researchers, universities, research centres, journals and publishers [26]. These will increase their influence, power and revenue through positive feedback loops. The rest will suffer strongly from attention deficits and possibly funding reductions, given the abundance of available research. Excellence by 2030 will most probably be determined by the same basic mechanism: peers. But there will no longer be a few gatekeepers of this process. Being “excellent” will depend on “objective” measures, reputation, and alerting processes.

Second–level research — the type so often supported by traditional government funding — will be crowded out of the market, as existing institutional barriers will not be strong enough to resist open competition.

Because the reputation of researchers will be more openly managed, it will become crucial for researchers to reach a wider public. The capacity to communicate effectively will become a core competence of researchers.

But the openness of the reputation system will also bring growing risks of non–scientific theories and less rigorous scientists attracting a disproportionate amount of attention because of their capacity to communicate. In such an open system, a lack of scientific literacy is likely to worsen the quality of scientific output (Mooney, 2009). The degree of scientific literacy of the population will be a key variable in whether science 2.0 leads to improved or less desirable scientific production.

A particular effort will be needed to appreciate and reward the contributions of researchers in open, collaborative science initiatives [27]. One solution could build on existing work on evaluating the contributions of developers to open source projects. As shown in Figure 5, IBM developed a visualization tool that clearly presents the different contributions to open source projects. At a glance, this tool can summarize the overall level of coding and discussion in a project, illustrate which individuals are recent key contributors, and allow comparisons across multiple projects [28].


Figure 5: Example of reputation management systems designed for open collaboration.


This unequal distribution is also likely to be seen in the geographical distribution of research, which will concentrate in a limited number of world–class centres. Based on network theory, we favour the idea that geographical location will become increasingly important, with research concentrating in a few hotspots. These hotspots, however, will be much less stable, with new ones quickly emerging and disappearing, and they will have a global reach [29]. The role of national systems of innovation would then be in question: larger regional areas could become much more important than today. Government agencies already seem to embrace this change: policy discourse is focussing increasingly on the creation of world–class research centres, because of the global nature of competition (European Commission, 2009).



4. Does future science need a different science policy?

As we have tried to show in this paper, science will undergo deep changes in the years leading up to 2030, and the speed of change is likely to accelerate. In particular, we envisage that the proliferation of scientific authorship, the fragmentation of research output, and the increased availability of data will lead to greater openness, instability and inequality in scientific work.

Obviously, this change is not per se better, and there is a risk of negative impacts on the overall welfare of societies. The focus on world–class research could threaten the very existence of second–tier research centres, which are necessary for ensuring a sufficient scientific culture among citizens. The disruption of the publishing value chain will be painful not only in terms of job losses, but also for the sustainability of scientific production. The exploitation of reputation could pave the way for less scientific theories to become widespread.

The disruptive and pervasive nature of these changes suggests that this is not a linear evolution but a systemic change. The risks attached to it should make us reconsider the appropriateness of current policy tools for this new framework.


About the authors

Jean–Claude Burgelman joined the European Commission in 1999 as a Visiting Scientist in the Joint Research Centre (Institute for Prospective Technological Studies — IPTS), where he became Head of the ICT unit in 2005. In January 2008, he joined the Bureau of European Policy Advisers as adviser for innovation policy. Since 1 October 2008 he has been adviser for ERA governance at DG RTD, in charge of setting up the European Research Area Board. Until 2000 he was full professor of communication technology policy at the FUB, where he was involved in science and technology assessment.

He has been a visiting professor at the University of Antwerp, the European College of Bruges and the University of South Africa.

He chairs the World Economic Forum’s Global Agenda Council on Innovation.

D. Osimo joined IPTS in 2005 to coordinate research activities on e–government. His current research interests cover future models of government, mapping e–government research, and the proactive role of users in delivering public services.

M. Bogdanowicz manages a techno–economic research team within the “Information Society” Unit of the Institute for Prospective Technological Studies, a research institute that is part of the JRC of the European Commission.



1. In this evolution, language has become key. The generalized use of English today marks the authorship of millions of Western authors, reflecting, as it always has throughout history, the presence of a dominant economic system and of an advanced, standardising educational system. The emergence of new dominant players, in particular in Asia, might radically affect this linguistic predominance.

2. Through tools such as

3. Also probably supported by the ironic “publish or perish” norm internal to the scientific community.



6. E.g.,


8. Such as Stata. For more see




12. In the words of Tim O’Reilly, “It’s no accident that services such as Gmail, Google Maps, Flickr, and the like may be expected to bear a ‘Beta’ logo for years at a time.” See “What is Web 2.0,” at





17. For example using free Web–based collaborative annotating tool such as

18. There is currently a wave of public data catalogues being released, ranging from house prices to suicide rates or traffic data, inviting citizens to take advantage of such public information and to develop applications that process the data in any purposeful way. This is being implemented in the U.S., in the U.K. and in Australia, as well as at regional level in Asturias and the Basque Country, Spain.

19. In economic science and econometrics, the popular success of the book Freakonomics (S.D. Levitt and S.J. Dubner. Freakonomics: A rogue economist explores the hidden side of everything. New York: William Morrow, 2005) offers a good example for debate.

20. We do not discuss here the rationale for such a categorisation. Basically, science is produced by (highly) qualified human resources, delivering an intangible and knowledge–intensive output. It results from a creative process which, contrary to some other creative content goods, is not purpose–free. We will come back to this debate at the end of this section.

21. For more see, for example, R. Caves, 2000. Creative industries: Contracts between art and commerce. Cambridge, Mass.: Harvard University Press. See also: J. Mateos-Garcia, A. Geuna, and W.E. Steinmueller, 2008. “State of the art of the European creative content industry and market and national/industrial initiatives,” In: F. Abadie, I. Maghiros, and C. Pascu (editors). The future evolution of the creative content industries: Three discussion papers, Sevilla, Spain: European Communities, pp. 16–17, and at

22. Be it in terms of sales or scientific fame.

23. Typically cultural goods ranked as creative content goods and close to entertainment are categorized as “without purpose”, in contrast for example to “serious gaming” addressing corporate or educational purposes.

24. The recent “Climategate” case, a strong attack against the Intergovernmental Panel on Climate Change before and again after the Copenhagen U.N. Summit of December 2009, based initially on illegal access to the correspondence of U.K. researchers (Prof. P. Jones, University of East Anglia), is an interesting case that shows a mixture of hacker culture, criticism of scientific practices and de–legitimisation of scientific publishers (see Nature’s editorial, “Climatologists under pressure”). The details of the case draw on a variety of elements, including the role of peer review. It is closer to a reputation–focused attack than to a scientific debate, and illustrates some of our later conclusions.

25. The Google PageRank algorithm filters content according to a multitude of variables, including the number of incoming links to a given page. A “PeopleRank” system would recommend and filter contacts according to the feedback, endorsements and recommendations they have received from other people, weighting each contribution according to the value of the recommender.
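To make the analogy concrete, a “PeopleRank” could reuse PageRank’s iterative scheme directly, replacing pages with people and links with endorsements. The sketch below is purely illustrative: the function name, parameters and data layout are our own assumptions, not an existing system.

```python
def people_rank(endorsements, iterations=20, damping=0.85):
    """Score people by the endorsements they receive, weighting each
    endorsement by the current score of the endorser, in direct
    analogy to PageRank's treatment of incoming links.

    endorsements: dict mapping each endorser to the list of people
    they endorse (hypothetical data layout).
    """
    # Collect everyone who endorses or is endorsed.
    people = set(endorsements)
    for targets in endorsements.values():
        people.update(targets)
    n = len(people)
    score = {p: 1.0 / n for p in people}

    for _ in range(iterations):
        # Everyone keeps a small baseline, as in PageRank's damping term.
        new = {p: (1.0 - damping) / n for p in people}
        for endorser, targets in endorsements.items():
            if not targets:
                continue
            # An endorser's influence is split across those they endorse,
            # so endorsements from highly ranked people count for more.
            share = damping * score[endorser] / len(targets)
            for target in targets:
                new[target] += share
        score = new
    return score
```

A real system would, as suggested above, also weight individual endorsements by feedback strength; here every endorsement counts equally for simplicity.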

26. In practice, this trend is already present, with various actors probably abusing the initial objectives of performance rankings such as the Academic Ranking of World Universities (ARWU) or the ISI citation indices of E. Garfield, currently published through Thomson Reuters’ Web of Knowledge.

27. See also M. Nielsen, “The future of science,” at

28. See L.–T. Cheng and B. Kerr, “A collaborative user experience project.”

29. In a series of recent studies, Prof. F. Malerba (Bocconi University, Italy) develops the concept of knowledge hubs, a concept very close to what we allude to here, and one that quite logically attracts considerable interest among private and public decision–makers.



C. Anderson, 2008. “The end of theory: The data deluge makes the scientific method obsolete,” Wired, volume 16, number 7, at

A.–L. Barabási, 2003. Linked: How everything is connected to everything else and what it means for business, science, and everyday life. New York: Plume.

A. De Panizza and G. De Prato, 2009. Streamlining microdata for the analysis of ICT, innovation and performance. Seville, Spain: European Commission, Joint Research Centre, Institute for Prospective Technological Studies, at

European Commission, 2009. Preparing Europe for a new Renaissance: A strategic view of the European research area. First report of the European Research Area Board — 2009. Brussels: European Commission.

C. Mooney and S. Kirshenbaum, 2009. Unscientific America: How scientific illiteracy threatens our future. New York: Basic Books.

M. Nielsen, 2008. “The future of science,” at

D. Osimo, 2009. “A short history of eGovernment: From cool projects to policy impact,” In: J. Gøtze and C.B. Pedersen. State of the eUnion: Government 2.0 and onwards, at

POIT, 2009. “Power of Information Taskforce Report,” at

D.G. Pelli and C. Bigelow, 2009. “A writing revolution,” Seed (20 October), at

L. Picci, 2007. “Reputation–based governance,” First Monday, volume 12, number 9, at

N. Rosenberg, 1982. Inside the black box: Technology and economics. Cambridge: Cambridge University Press.

C. Shirky, 2008. “Gin, television, and social surplus” (26 April), at

C. Shirky, 2003. “Power laws, weblogs, and inequality” (8 February), at

B. Sterling, 2005. “Ivory tower,” at


Editorial history

Paper received 23 April 2010; accepted 21 June 2010.

Copyright © 2010, First Monday.
Copyright © 2010, Jean–Claude Burgelman, David Osimo, and Marc Bogdanowicz.

Science 2.0 (change will happen ...)
by Jean–Claude Burgelman, David Osimo, and Marc Bogdanowicz.
First Monday, Volume 15, Number 7 - 5 July 2010