The World is Not Flat: Expertise and InPhO
First Monday

by Colin Allen, Cameron Buckner, and Mathias Niepert



Abstract
The Indiana Philosophy Ontology (InPhO) is a “dynamic ontology” for the domain of philosophy derived from human input and software analysis. The structured nature of the ontology supports machine reasoning about philosophers and their ideas. It is dynamic because it tracks changes in the content of the online Stanford Encyclopedia of Philosophy. This paper discusses ways of managing the varying expertise of people who supply input to the InPhO and provide feedback on the automated methods.

Contents

Introduction
Stratified collaboration
Application and motivation
Discussion
Concluding remarks

 


 

Introduction

The rise of collaborative projects on the World Wide Web has been of major significance to Internet users, creating some of the most visited sites on the Web. In Wikipedia, every reader is also potentially an author, and YouTube similarly turns everyone into a potential supplier of video content. A key idea behind Web 2.0 is to exploit mass collaboration for useful purposes. The resulting democratization of Web content seems to suggest that the world is flat. And yet we know that domain expertise is still a valued commodity even on the Web. Despite the success of Wikipedia and similar projects, alternative models for presenting and protecting the work of domain experts have an important role to play on the Web. Dissatisfaction with the “anything–goes” frontier of Wikipedia is a major driving force in the search for new models. The world is not flat. This does not mean we should retreat to mountain–top sanctuaries of expert–generated and refereed content. Rather, we need expert–centered models for knowledge and dissemination that can incorporate aspects of Web 2.0 to good effect.

In this paper, we describe how the Indiana Philosophy Ontology project (InPhO) proposes to do this in support of the Stanford Encyclopedia of Philosophy (SEP). The SEP is an online, open access encyclopedia of philosophy that is written and reviewed by academically qualified authors and editors. It currently has over 1,000 published articles that are available at http://plato.stanford.edu/ entirely free of charge to anyone with an Internet connection. Over 1,350 philosophers are involved in writing and editing the entries. Entries are refereed by members of the editorial board prior to first publication, and again reviewed whenever they are substantively revised. Authors are free to revise their entries at any time, and a system of automated reminders ensures that entries are revised at least once every three–five years, so that the articles are kept up to date. The SEP serves nearly one million individual requests for entries each week, and this number has been steadily growing since its inception in 1996. It is the most highly used resource for philosophy on the Web, partly due to its high visibility in Google’s search results and easy accessibility, but also due to its reputation for being a high–quality resource. Readers span a wide range, from interested members of the public, through high school and college students, to academic specialists in a wide range of humanities and science disciplines. Currently containing over 11.5 million words and growing at over 10,000 words/month, the entire Encyclopedia is beyond the comprehension of any single individual.

The SEP provides the launching pad for the InPhO Project. Our goal is to take this constantly evolving, dynamic resource in philosophy and use it to build a dynamic representation of the entire discipline of philosophy (Niepert, et al., 2007). The resulting representation of the entire field can support many purposes for scholars and learners in philosophy, as well as for readers of the SEP more generally, for instance by enhancing search and navigation and facilitating research and discovery in a variety of digital philosophy applications.

Projects in the digital humanities present computer scientists with unique scientific and technological challenges. Software is needed to aid the construction, management, and presentation of machine–readable representations of complex ideas. The task of information integration and extraction in the context of the humanities is particularly challenging because the humanities use abstract language that demands the kind of subtle interpretation often thought to be beyond the scope of artificial intelligence. Nevertheless, the viability of digital humanities depends on having tools for automatically extracting the semantic relationships that hold within and between different texts. Ordinary statistical methods of “latent semantic analysis” are alone inadequate to the task. Such techniques will need to be enhanced by expert knowledge. This knowledge can be gathered directly from domain experts, although a major challenge that we address in the InPhO Project is how to gather this information in a way that is not a burden to the experts themselves. The knowledge of experts can also be inferred indirectly from other sources. For instance, representations of the organizational principles that experts impose upon their professional work can be derived from the tables of contents of textbooks and anthologies, and from the sections found in conference programs. Digital tools for the humanities will also need to be capable of dynamically tracking the introduction of new ideas and interpretations and applying them to older texts in ways that foster novel understanding. At the InPhO portal (http://inpho.cogs.indiana.edu) we are building an interactive taxonomy of philosophical ideas. This taxonomy can be used to explore subject areas and terms that have been extracted from various digital sources of information about philosophy. We are also organizing information about philosophers, mapping relationships among them and their relationships to the ideas that make up the discipline.

An initial motivation for the InPhO Project was the problem of maintaining proper cross–references between the articles of the SEP. Cross–referencing has traditionally been regarded as an important function of encyclopedias. In the SEP, no individual author or editor knows enough about the entire Encyclopedia to be able to generate comprehensive cross–references. Furthermore, the articles themselves are being revised continually: even if one believes one has a handle on the content of an article, it could change at any moment, and its relationships to other articles are changing along with it. In a traditional encyclopedia the human effort required to generate cross–references is a worthwhile investment because of the fixed nature of a printed edition. In the dynamic world of Web–based encyclopedias, the effort would be interminable.

 

++++++++++

Stratified collaboration

We have therefore set out to develop a system for combining information made available by data mining the SEP itself, by mining other philosophical resources, and by gathering human input from individuals of varying levels of expertise. The idea is that the SEP provides us with high–quality content and direct access to the experts who created that content. In addition, other sources of information are available, but these are of unknown and perhaps lesser quality. The goal is to combine those information sources in a way that yields a structured representation of the discipline.

 

Figure 1: The InPhO Architecture (reprinted from Niepert, et al., 2007 where it is explained in more detail).

 

Figure 1 shows the basic scheme for integrating these diverse sources of information. We start with the SEP content provided by authors. We next use software to carry out latent semantic analysis of the terms appearing in the SEP, tracking statistically how terms in the Encyclopedia relate to one another. These statistical techniques are far from perfect, so we need some way of verifying that information — some way, that is, of making sure that the information is up to the editorial standards of the SEP, and scholarly standards in general. Other sources of information are available, such as Wikipedia, WordNet, and the Philosophy Family Tree. Those sources are, of course, unverified by us, so again we need some way of verifying the information they provide. It is at this verification step that the SEP community has a role. Authors, subject editors, and many readers are all capable of rendering judgments about the information that has been extracted by automatic means. However, an important part of our Project is to recognize that the potential verifiers may be stratified with respect to their expertise.

Our ultimate goal is to create a “dynamic ontology” — a machine–readable taxonomy of philosophical ideas that is revisable as the SEP evolves. However, ontology design is a complicated task, and we cannot undertake to train all the potential evaluators in its principles. Authors and editors for the SEP are, of course, especially busy people. So, we have to find some way to collect the information we need from them as unobtrusively as possible. To this end we have developed interfaces illustrated in Figures 2 and 3. The idea behind these interfaces is that we can gather some simple judgments that people make about parts of the software–generated material and use these as data for software that can construct a model for the best way of arranging the philosophical ideas into a computationally–tractable ontology.

 

Figure 2: The evaluation interface for the philosophical ideas taxonomy.

 

 

Figure 3: The philosopher evaluation interface.

 

Because content experts are busy people, they don’t want to be bothered with the output of bad programs. Nor do they want their hard work to be messed up by amateurs or otherwise unknowledgeable people. We surmise that one of the main reasons one doesn’t find many of the foremost experts writing for, e.g., Wikipedia is that others may change what they have written at any moment. It’s not just an issue of being able to claim authorship of what’s there, although that’s certainly an incentive for most academic writing; it’s also the fact that anything one has written could be gone in a moment. Any expertly crafted ontology is likely to foster the same feelings toward its structure and content. There are, of course, knowledgeable amateurs who have the time and motivation to contribute. With the InPhO, we aim to identify those people and build a community with them. There are also well–intentioned amateurs who are relatively plentiful, and who are motivated to contribute their time and effort, but who make mistakes or may be uninformed about critical issues. Our challenge is how to use the information that is provided by people with varying levels of expertise, while making sure that erroneous information doesn’t get incorporated into the core presentation of the SEP.

This issue must be tackled in a way that doesn’t require the known experts to verify everything that has been submitted. Such verification is a task for which they have neither time nor inclination. Software, in contrast, has lots of time and no motivation problems, but it’s clueless. One aspect of the InPhO Project is to make the software a bit more clueful. Can we use software in a way that will tie these three kinds of communities together — the experts, the knowledgeable amateurs, and the general readership — in ways that will serve the needs and desires of all?


Let us re–emphasize the central theme of this paper: the world is not flat. The rhetoric surrounding Web 2.0 gives the impression that there should be opportunities for everyone to contribute to everything. That is not in fact the way things are. There are people with different levels of expertise. There are people with different degrees of control over what is on the Web. There are communities of experts, knowledgeable amateurs, and enthusiastic readers. We should be thinking about how to use them differently.

Because the world is not flat, we want to capture the different levels of knowledge that are already out there among the various communities that have found the SEP useful. There is also variation among the digital resources and software that are available. We’ll tentatively call our model for combining these sources “Web 2.01” — although the innovation may be very minor, it seems necessary to think in terms of encouraging mass participation while tracking the varying degrees of reliability of contributors. There is a need for stratified collaboration.

 

Figure 4: The InPhO layer cake
Figure 4: The InPhO layer cake: The SEP provides expert generated content at the bottom, and expert review at the top, while in between are layers of software and people with less expertise, who nevertheless make valuable contributions.

 

Figure 4 shows the layer cake approach that we have for the InPhO. At the foundation there is the SEP itself and the individual entries that make up the Encyclopedia. We can extract some useful information from that using semantic analysis software. These statistical relationships provide hypotheses about which terms are meaningfully related to which. But we know that software is not very good at reading text. So, these are just tentative judgments about possible relationships.

Above this we have a community of motivated readers of the SEP, of varying levels of expertise, who can look at that software output and tell us whether they agree or disagree with the hypothesis that the terms are semantically related. They can rate how related they think these terms are, and whether one is a more general term or more specific term than another. In other words, people can look at this output and with a set of fairly simple questions provide feedback on what the software is suggesting. That feedback can be combined with the original statistical data and fed into another layer of software, which attempts to fit it all together. One expects to get inconsistent judgments from the various people looking at it. One also expects the statistical analysis and the human raters sometimes to disagree. But if we are clever enough, it should be possible to write programs that will help us figure out what the relationships among these terms might be — to try to capture the best structural model of the data that has been gathered. The technical details of our approach (described in Niepert, et al., 2008) involve a non–monotonic reasoning technique called “answer set programming” which is robust in the face of inconsistent information. This allows us to accommodate the conflicting judgments of experts and non–experts, while giving the expert judgments more credence.
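The machinery of answer set programming is beyond the scope of this paper, but the core intuition of reconciling conflicting judgments while giving experts more credence can be sketched in a few lines of Python. This is a toy illustration only; the term pair, user levels, and weights are assumptions, not the project’s actual reasoning engine:

```python
# Toy sketch (not InPhO's actual answer set program) of reconciling
# conflicting relatedness judgments by weighting expertise levels.
from collections import defaultdict

# Each judgment: (user_level, term_a, term_b, agrees_that_related).
# Data and levels are illustrative assumptions.
JUDGMENTS = [
    ("expert",  "divine illumination", "mental content", True),
    ("amateur", "divine illumination", "mental content", False),
    ("amateur", "divine illumination", "mental content", True),
]

LEVEL_WEIGHT = {"expert": 3.0, "amateur": 1.0}  # assumed weights

def reconcile(judgments):
    """Weighted vote per term pair; a single expert can outweigh
    several dissenting amateurs."""
    scores = defaultdict(float)
    for level, a, b, agrees in judgments:
        scores[(a, b)] += LEVEL_WEIGHT[level] * (1 if agrees else -1)
    return {pair: score > 0 for pair, score in scores.items()}

print(reconcile(JUDGMENTS))
```

Here the expert’s positive vote (weight 3) plus one agreeing amateur outweighs the single dissent, so the pair is accepted as related. The real system replaces this simple vote with defeasible rules that remain consistent even when the inputs are not.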

 

++++++++++

Application and motivation

How might the resulting populated ontology be applied to the goal of maintaining cross–references? Currently, this task is carried out manually by authors and editors. They are asked at publication time to look through the table of contents and to use the SEP’s search engine to try to identify related entries. This process is rather haphazard. Instead, the structured information in the InPhO can be used to provide suggestions that have been partly filtered through people and through the software. We can then see which of these suggestions are selected by the authors and editors, and use that information to further train the software doing the extraction. Thus, the judgments of the authors and editors, gathered from a variety of practical applications, can be fed back into the software.

By replacing the open–ended task of searching for related entries with the simpler task of evaluating relationships among a few key terms in their articles, authors and editors are being asked to do rather less work than currently. The supporting software should, therefore, make their task easier so long as they are being given reasonably good suggestions. The software–generated suggestions will not be perfect, of course — otherwise there would be no need for human evaluation — but we believe that our experts will be happy with something like 90 percent accuracy in producing relevant terms, and with the knowledge that their input is helping to improve that percentage. The connections that have been verified at the top level by our content experts then become a trusted part of the ontology, and eventually feed back down into the content of the Encyclopedia itself. The authors are constantly revising their entries, new associations become established, and so the whole cycle repeats itself.

An example of the kind of connection that can be discovered with this approach arose when we were looking over some of the output of the statistical analysis for debugging purposes. We were surprised to find that the entry on “divine illumination” in the philosophy of religion section of the SEP was flagged as being of potential relevance to the entry on “mental content” from the philosophy of mind. Those of us working on the InPhO Project have considerable expertise in the philosophy of mind, but rather little in the philosophy of religion, and we thought this had to be a bug in the software. What is the connection between these two topics? We dug in and found that it was a reasonable suggestion because medieval philosophers discussed “the problem of universals” — the question of how it is possible that human minds are able to grasp abstract concepts (e.g., DOG) despite experience only with particular individuals (i.e., individual dogs). A popular medieval view was that mental access to abstract concepts was made possible by God providing humans with those abstract concepts — that’s divine illumination. Perhaps a handful of experts could have explained this connection, but our software indicated it and made it very easy for us to discover what that connection was.

 

++++++++++

Discussion

Our goal is to derive structured data from the unstructured data that is available from a variety of sources. The structured data, the “ontology”, is a machine–readable specification of the types of entities in a domain, and the relationships between them. For example, we are using these methods to extract basic biographical data about philosophers and to map various relations between them. We want to know what kinds of ideas there are in philosophy. We want to know what documents thinkers wrote and edited. We want to know where these thinkers worked and studied, what kinds of problems they worked on, how their ideas relate to each other, and so on.

Many of the basic digital tools that we need to do this smoothly still don’t exist. For instance, there are very many unstructured bibliographies on the Web. In fact the SEP is one source of them, in that every entry has a bibliographic section. There are approximately 70,000 citations in the entire Encyclopedia, but we can’t tell you how many of them are unique. They are unstructured, flat–text bibliographic records that are not even in any standardized citation format. It’s an enormous problem to figure out who the authors of all those citations are. What are the titles? When were they published? In which journals did they appear? Casting their eyes over this kind of data is what an army of volunteers can do well.
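To give a feel for why automated parsing alone falls short, here is a deliberately naive sketch that pulls author, year, and title out of one common citation shape. The example citation and the single pattern are our own assumptions; real flat–text bibliographies mix enough formats to defeat any one regular expression, which is exactly where volunteer eyes help:

```python
import re

# Hypothetical example of one common flat-text citation shape:
# "Surname, Initials, YEAR. Title. Place: Publisher."
CITATION = "Quine, W.V., 1960. Word and Object. Cambridge, Mass.: MIT Press."

# A single assumed pattern; many real citations will not match it.
PATTERN = re.compile(
    r"^(?P<author>[^,]+,\s*[^,]+),\s*(?P<year>\d{4})\.\s*(?P<title>[^.]+)\."
)

m = PATTERN.match(CITATION)
if m:
    print(m.group("author"), "|", m.group("year"), "|", m.group("title"))
    # → Quine, W.V. | 1960 | Word and Object
```

A production pipeline would need many such patterns, fallbacks, and human review of the records that match none of them.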

More sophisticated judgments about philosophical content require more expertise. We have built an interface where anybody can request an InPhO account and provide feedback on the output of the software. We are going to do our best to recruit as many graduate students as we can and harness their efforts (as well as those of anyone else who is interested). These users will be invited to navigate through the idea taxonomy to evaluate terms wherever they wish. By asking volunteers, when they register, to identify a couple of areas of philosophy in which they are knowledgeable, we can track and use information about their relative expertise. A similar interface will be engineered into the publication process of the SEP. Authors of SEP articles will be asked about the terms that specifically appear in their entries. Their judgments will be explicitly tracked as coming from domain experts. The range of users will provide a range of data that can then be fed into the answer set program to figure out where to put specific terms into the bigger structure. Given that the identity of those making the judgments is known, it is possible to calculate the correlations between individuals when they rate the same items. We can then use that information in our rules for the answer set program. Given a set of ratings of paired terms, if the ratings given by a particular individual are well correlated with those of experts, then it is reasonable to suppose that they are providing reliable information for other items in the same area of philosophy. Ratings from a non–expert user who is generally inconsistent with others should be given less weight, but may nevertheless be used in the absence of other sources of information. It becomes possible to use the behavior of our communities to help us make sophisticated evaluations of the data that we have.
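The correlation idea can be illustrated concretely. The sketch below uses hypothetical ratings on a 1–4 relatedness scale and a plain Pearson correlation; the actual InPhO rules operate inside the answer set program rather than as a standalone score like this:

```python
# Illustrative sketch (hypothetical data): estimate a volunteer's
# reliability in an area from how well their ratings of shared items
# correlate with an expert's ratings of the same items.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Ratings of the same five term pairs (1-4 relatedness scale).
expert_ratings = [4, 1, 3, 2, 4]
volunteer_ratings = [4, 2, 3, 1, 4]  # mostly agrees with the expert

r = pearson(expert_ratings, volunteer_ratings)
weight = max(r, 0.0)  # never give anti-correlated raters positive weight
print(round(r, 2), round(weight, 2))
```

A volunteer whose ratings track the expert’s earns a weight near 1 for that area of philosophy; an inconsistent rater drifts toward 0 but can still be consulted when no better source exists.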

What goes for philosophical ideas can also be applied to information about a particular individual thinker or philosopher. For instance, users can rate the influence of one philosopher on another, or the degree of agreement or disagreement between them with regard to specific philosophical ideas. The authors and editors of the Encyclopedia articles use the same interfaces as other users, but are asked only to cover material that is pertinent to their own article. Again, using the techniques of semantic analysis, we can feed the authors and editors a few questions that are related to their expertise and which are fairly straightforward to answer. We can have a very high degree of confidence in the responses that they give and write the rules such that information obtained in this way will normally override the shakier information obtained from other sources.

 

++++++++++

Concluding remarks

We end with a few ideas and lessons about communities, repeating some of the points that have already been made. We want to emphasize that the world is not flat. There are different communities: there are communities of readers, there are communities of authors and editors, and so on. This provides the opportunity to work with these different communities in a stratified and multi–layered way. It is not the flat two–dimensional Wiki world, nor even the two–and–a–half–dimensional world that results when Wikipedia and other wikis allow some hierarchical control over editing. These are still basically flat worlds with just a few people sitting on top looking over the rest. We want a much richer structure than that, to capture the full range of expertise that is available on the Web.

The bottom line for any academic project on the Web is that it is very important to protect the expert assets. One has to ensure that expert–generated content is something that cannot be messed up and the resulting product is one in which the experts will take pride and feel invested. Simultaneously it should be possible to use these expert assets to ground more speculative or experimental applications of the data. Public participation can also be used in various ways to help leverage the expert assets. The way to manage this is to keep stratified data — track who is who, where they are coming from, and what kind of reliability they have on various topics. Then, one can use software to find structure in the data, and use the structure to collect the feedback. This feedback can be used to generate yet more structure. It is only in this iterative fashion that we are going to get to something that realizes the full potential that Web 2.0 really has for scholarly disciplines such as philosophy.

 

About the authors

Colin Allen is Professor of History & Philosophy of Science and Professor of Cognitive Science at Indiana University, Bloomington. Cameron Buckner is a doctoral student in Philosophy at Indiana University. Mathias Niepert is a doctoral student in Computer Science, also at Indiana University.

Acknowledgments

The InPhO Project is supported by a Digital Humanities Startup Grant from the National Endowment for the Humanities.

Any views, findings, conclusions or recommendations expressed in this paper do not necessarily represent those of the National Endowment for the Humanities.

References

M. Niepert, C. Buckner, and C. Allen, 2008. “Answer set programming on expert feedback to populate and extend dynamic ontologies,” Proceedings of the 21st FLAIRS Conference. Menlo Park, Calif.: AAAI Press, and at http://inpho.cogs.indiana.edu/Papers/2008-InPhO-flairs.pdf, accessed 20 August 2008.

M. Niepert, C. Buckner, and C. Allen, 2007. “A dynamic ontology for a dynamic reference work,” In: E.M. Rasmussen, R.R. Larson, E. Toms, and S. Sugimoto (editors). Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries (Vancouver, B.C., Canada, 18–23 June), pp. 288–297, and at http://inpho.cogs.indiana.edu:16080/Papers/2007NiepertBucknerAllen.pdf, accessed 20 August 2008.

 


Editorial history

Paper received 7 July 2008.


Copyright © 2008, First Monday.

Copyright © 2008, Colin Allen, Cameron Buckner, and Mathias Niepert.

The World is Not Flat: Expertise and InPhO
by Colin Allen, Cameron Buckner, and Mathias Niepert
First Monday, Volume 13 Number 8 - 4 August 2008
http://journals.uic.edu/ojs/index.php/fm/article/view/2214/2023




