Kialo is a novel peer production system focused on pro/con debate construction. Distributed moderator teams vet and accept claims submitted by writers. Moderators also edit and refactor debates as they grow. Thus, moderators play a critical role in cultivating and maintaining debates. Conflict between moderators is typical; it is a feature of argumentation and debate. However, not all conflict is productive. Conflict between moderators can undermine collaboration (by distracting from the task of managing debates) and drive attrition (by discouraging participation on the site altogether). Based on a fourteen-month participant observation on Kialo, we identify a common source of conflict between moderators: adversarial beliefs and values. Moderators are not neutral participants on Kialo. They take positions on debate topics. We suggest foregrounding these positions, which are potential sources of conflict, through interface design as a scalable way to facilitate conflict management.
Kialo (www.kialo.com) is a novel online debate platform (Margolis, 2018; June, 2018) that can be understood as a community of practice (Wenger, 1998). Its participants cooperate to develop complex pro/con debates about a variety of topics, including Internet neutrality, ad blockers, artificial general intelligence, climate change, gender, and reproductive rights. A given debate may involve hundreds of participants and thousands of pro or con claims. Debate participants — regardless of their role — engage in continual learning processes about (1) a topic (e.g., Internet neutrality); (2) the theory and practice of argumentation; and, (3) how to use Kialo. Admins and editors, whom we refer to in this paper as “moderators,” play a central role in the development of a debate. Moderators make decisions about who gets to participate, which claims are acceptable, and how to revise claims in order to strengthen their overall contribution. Like other researchers studying online peer production systems (Luther, et al., 2013; Kim, et al., 2014), we believe that the moderator role is vital to the growth and success of Kialo debates.
Kialo provides moderators with a set of tools that enable collaboration. These tools shape the ways moderators interact with each other. For example, moderators can “flag” problematic claims for a finite number of predetermined reasons, they can provide feedback to other participants, and they can discuss problematic claims with other moderators and writers via two separate chat tools. However, the Kialo toolset changes with some regularity.
In some cases, these changes cause conflicts to arise between moderators and inspire moderators to re-negotiate the norms and conventions of their practice. On the basis of an ongoing (14-month) participant observation, we discuss in this paper one such interface change and the resulting conflicts and negotiations between moderators. As it turns out, debate moderation on Kialo is not value-free. Moderators have stakes in the debates they moderate, and these stakes influence their decisions and actions as moderators. We argue that there is utility and value in knowing what these stakes are and propose that Kialo — and other peer production communities — make personal stakes more visible so that other members of these communities can leverage this knowledge.
Moderators play various roles in online communities. They can serve as defenders against trolling and flaming, they can mediate “comment wars,” or they can function as “project managers” establishing timelines and delegating tasks (Luther, et al., 2013; Settles and Dow, 2013). Moderation has been studied in peer production communities (Zhu, et al., 2012; Zhu, et al., 2011; Arazy, et al., 2015), creative collaborations (Kim, et al., 2014), and on online news sites (Park, et al., 2016). More recently, there have been opportunities to examine moderation in the context of online debate and deliberation systems (Kriplean, et al., 2014, 2011). A particular interest has been the roles moderators play in reducing barriers to participation, including conflict and dispute resolution.
Moderation in peer production communities. Moderators can perform a variety of roles in online peer production communities, which are distinguished by facilitating and coordinating the work of a large number of people toward a shared outcome. Common examples of successful peer production communities are Wikipedia and Linux. In these communities, moderators can be responsible for managing participants (Krieger, et al., 2009), crafting creative or intellectual project visions, helping maintain quality standards (Liu and Ram, 2011) and “gatekeeping” (Keegan and Gergle, 2010) against vulgar or abusive participants. Regardless of the community under examination, existing studies of online moderation share an assumption that moderators are crucial to online peer production and attempt to do one of two things: (1) Theorize online moderation by identifying barriers and analyzing cases; and, (2) propose policies or technical solutions to make online moderation more effective.
Challenges to effective moderation. Researchers have identified a number of challenges that undermine moderation efficacy. Some are related to the number and type of tasks. For example, Luther, et al. (2013) found that moderators in a flash animation community became overburdened with too many tasks and responsibilities, which caused projects to stall and fail. Providing timely, quality feedback to participants can also be challenging (Dow, et al., 2012), especially given that most moderation is done on a volunteer basis. This means that moderating competes with other personal and professional responsibilities for time and attention.
Conflict is another key challenge for moderators. Peer production systems, such as Wikipedia, involve interaction between humans, which means that conflict is bound to happen. Moderators may encounter conflict between participants or with other moderators, and it can lead to a few possible outcomes. Conflict can be detrimental to peer production communities by causing projects to “stall or fail” (Billings and Watts, 2010), by discouraging participation (Huang, et al., 2016), and by leading to the production of low-quality content. Menking and Erickson (2015), drawing on Hochschild (1979), characterize certain kinds of conflict as challenging, “if not caustic,” in online peer production communities and suggest that emotional labor enables participants to cope and persevere in such situations.
Conflict can also be productive (Kittur, et al., 2009). For example, it can enable people to challenge their own perspectives on a social or political issue (Kriplean, et al., 2012). Moreover, when it comes to peer production, conflict (in the form of debate or argumentation) can result in higher quality outputs. For example, Arazy, et al. (2013) have studied the relationship between editor debates and the quality of Wikipedia articles. These possible outcomes, and others, have led researchers to develop an interest in understanding online conflict and conflict management (Filippova and Cho, 2016; Fréard, et al., 2010; Grevet, et al., 2014).
Understanding and addressing online conflict. Researchers have proposed different factors that contribute to online conflicts. Some have argued, for example, that task interdependence and geographical distribution in some cases increase conflict in free and open-source software development teams. Schneider, et al. (2013) found that arguing with Wikipedia collaborators on the basis of “personal preference and inappropriate analogy to other cases,” rather than adhering to community norms and conventions, can be seen by others in the community as problematic, and, thus, fuel conflicts. In addition, disagreements among leaders (e.g., debate moderators) about processes and procedures can be interpreted as, or become, personal, which can distract from the tasks at hand. Conflicts in open collaboration and peer production communities also arise due to malleable or poorly defined policies and/or ideological issues (Filippova and Cho, 2015).
Resolving or managing online conflict in these settings can be consequential both for the work being done (e.g., authoring an article or constructing an argument) and for the general well-being of the community (e.g., people enjoy participating).
Scalable conflict management strategies. Different proposals for managing conflict have been put forth, and some have been deployed. On Wikipedia, some participants work as “mediators ... [helping] conflicting parties to express, recognize, and respond positively to their personal and substantive differences.” Others have proposed that responding to other contributors/collaborators with constructive suggestions for improvement is more effective than rational explanations of problems or generic social encouragement when it comes to managing conflict (Huang, et al., 2016). Finally, participatory decision-making and certain leadership styles have been shown to mitigate conflict in free and open source software development communities (Filippova and Cho, 2016). There are few examples of conflict management in online argumentation systems. However, we interpret Kriplean, et al.’s (2014) work as an example of conflict management.
Kriplean, et al. developed ConsiderIt to support public deliberation. Users work together to create pro/con debates about pertinent local social and political issues. Citizens contribute claims about real issues, such as the first-ever Washington state income tax. The researchers recognized the importance of having reliable information about these kinds of issues. For example, if a user claimed, “The state legislature may expand the income tax to the middle class in two years,” it would be important to vet this claim. At the same time, leaving the vetting up to other users could produce contentious arguments (conflict). Anticipating the possibility of these conflicts, the researchers recognized the need to involve moderators who would be both reliable and seemingly neutral. So, they recruited public librarians to work as on-demand fact-checkers. Users responded favorably to the librarians’ role even when they disagreed with the fact-check.
This is a promising outcome, but the authors draw attention to the issue of scalability. Just as other volunteer moderators are pressed for time, so too were the public librarians. A key question, then, is how to devise conflict management and resolution strategies that leverage interface and interaction design.
Summary. Although a number of peer production systems exist to support argumentation, to our knowledge, none have been studied directly in terms of conflict between participants. However, it is also clear that conflict between participants might be of significant interest. Kriplean, et al. (2012), for example, describe normatively desirable activities on ConsiderIt to include “crafting positions that recognize pros and cons as well as points written by people who do not agree with them.” That is, it is desirable for users to manage conflict such that they engage with disagreeable points of view. Similarly, Widescope (Burbank, et al., 2011) aimed to facilitate dialogue between people with divergent points of view with the goal of achieving some objective, such as arriving “at a mutually acceptable compromise.” Managing or resolving conflicts is crucial to achieving such an objective.
Of the existing solutions to conflict management and resolution in peer production systems, most tend to involve policy prescriptions for how human actors ought to behave towards one another. For example, providing constructive suggestions, using language that adheres to the norms and conventions of an online community, and developing special roles (e.g., mediators) for participants to adopt and perform are all descriptions of human actions. These are good and reasonable suggestions. In this paper, we propose that it is also important to explore ways that peer production systems might be designed to support or augment human actions.
Kialo is a novel online debate platform designed to facilitate dialectical reasoning, which involves articulating multiple claims, or positions, on a topic and then debating, deconstructing, and analyzing their strengths and weaknesses to arrive at new perspectives (Cooner, 2005; Moshman, 1982; O’Donnell, 2012). It has been framed as a tool capable of “promoting enlightened debate online” (June, 2018) as well as “a hub for civilized debate” (Margolis, 2018). Kialo is free. Anyone with the time, interest, and access to a computer can sign up for an account and start or contribute to a debate. Given that, as of this writing, our study might be the first of its kind on Kialo, we first provide a general overview of how Kialo works. We pay special attention to how participants suggest new claims, the importance of evaluating those claims, and the ways moderators conduct evaluations.
Kialo uses the structure of dialectical reasoning to explore different sides of an issue. Each debate has a main thesis, or several main theses, which are elaborated through ‘pro’ and ‘con’ claims. Anyone with a Kialo account can start a debate on any topic. To our knowledge, there are no restrictions on what topics are up for debate, and, in fact, there is a somewhat burdensome process to go through to delete a public debate from the site. Debates can be public or private. Private debates are invite-only whereas public debates are in principle visible to anyone with an Internet connection, the knowledge that Kialo exists, and the time and interest to search its growing set of debates. Kialo has established a set of roles that participants can play in a given debate. These are described briefly in Table 1.
Table 1: Summary of various user roles on Kialo.

Admin: Admins can modify discussion settings, change the rights of users, and invite new members. Admins control whether a discussion is private or public, can change the discussion between single- and multi-thesis forms, write tags, and change the cover image. They are also able to accept suggested claims or mark them for review.

Editor: Editors have full rights to create, edit, and delete [all] claims [in a debate], as well as to comment or mark claims for review.

Writer: Writers have the rights to create, edit, and delete [their own] claims, as well as to comment or mark [all] claims for review.

Viewer: Viewers can see all the content in a debate, and they do not need a Kialo account. They have no rights to do anything in a debate except view its content, [which includes any/all communication between active participants in a debate].
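The role hierarchy summarized in Table 1 can be read as a simple permission mapping. The following sketch is our own illustrative reconstruction for exposition, not Kialo’s actual implementation; the role and action names are our inventions:

```python
# Illustrative sketch of the role/permission model in Table 1.
# This is a reconstruction for exposition, not Kialo's implementation;
# role and action names are assumptions.

ROLE_PERMISSIONS = {
    "admin": {
        # Admins manage the discussion itself...
        "modify_settings", "change_user_rights", "invite_members",
        # ...and hold all moderation and authoring rights.
        "accept_claim", "mark_for_review", "comment",
        "create_claim", "edit_any_claim", "delete_any_claim", "view",
    },
    "editor": {
        # Editors moderate content but not discussion settings or membership.
        "accept_claim", "mark_for_review", "comment",
        "create_claim", "edit_any_claim", "delete_any_claim", "view",
    },
    "writer": {
        # Writers author their own claims and can flag or discuss any claim.
        "create_claim", "edit_own_claim", "delete_own_claim",
        "comment", "mark_for_review", "view",
    },
    # Viewers can only read (no account required).
    "viewer": {"view"},
}

def can(role: str, action: str) -> bool:
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

This framing makes the gatekeeping discussed below concrete: only admins and editors hold the `accept_claim` right, so every suggestion from a writer or viewer must pass through them.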
Viewers can suggest claims. However, they have to wait for moderators — admins or editors in Kialo’s terms — to vet and accept their suggestions. This means that Kialo debates are not open in the way that Wikipedia articles are, where, in principle, anyone with access to an Internet-connected computer can make edits. In order to participate in a Kialo debate, one has to get past the gatekeepers. Once a person has been granted Writer status, however, they can add claims without vetting. This does not mean that their claims are exempt from assessment and revision — only that they are accepted into the debate without any moderation up front. Vetting claims seems to be one of the most important parts of moderating a debate. As one moderator explained to us, “Badly worded claims ... invite more badly worded claims” (@libre). Vetting claims is thus seen as directly contributing to the overall quality of a debate.
Moderators perform a variety of tasks. For example, they guide other participants’ learning — especially with regard to how to make good claims. This might involve critiquing suggested claims for their lack of clarity or lack of support. They also monitor debates for duplicate claims, insincere contributions, and vulgar/abusive content. Moderators work in teams, which means that collaboration is a key aspect of their practice. Few (if any) actions a moderator can take in a debate are carried out in isolation. Moderators frequently discuss suggested claims with each other before deciding to accept them in a debate. This is a kind of collaborative “gatekeeping” (Keegan and Gergle, 2010), and it is ultimately visible to the Kialo community in the form of public-facing chat logs. Moderators also deliberate over accepted claims, i.e., whether they need supporting evidence or lack clarity, and/or whether they remain relevant as a debate changes over time. It is common that, as a debate grows, the framing or main thesis may change, which motivates the moderator team to reevaluate and refactor the entire debate.
Figure 1: Moderators discuss a flagged claim on Kialo.
When we started our participant observation in October 2017, shortly after the site “went live” in August, vetting suggested claims was an individual process. Moderators could accept or “send back” a claim without consulting other moderators. Sending claims back to a writer was the only aspect of moderation that was hidden from the broader community. Only after a claim had been accepted would any moderator action on it (e.g., flagging or commenting) become publicly visible. In early 2018, however, Kialo implemented a change that kept suggested claims in place and visible to all moderators, which turned suggested claim vetting into a collaborative activity (Figure 1). It was no longer possible to send claims back. Instead, a moderator could ‘reply’ to the claim with comments, questions, or revision requests, all of which would be visible to the moderator team.
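The shift just described can be summarized as a change in the suggested-claim workflow: from a private, individual accept-or-send-back decision to a public, threaded review visible to the whole moderator team. The following sketch is our own modeling of that workflow, under our assumed state and method names, not Kialo’s code:

```python
# Sketch of the suggested-claim workflow before and after the early-2018
# redesign. Our own modeling for exposition; names are assumptions.
from dataclasses import dataclass, field

@dataclass
class SuggestedClaim:
    text: str
    author: str
    status: str = "suggested"  # becomes "accepted" once a moderator accepts it
    # Public thread of (moderator, comment) pairs, visible to the whole team.
    thread: list = field(default_factory=list)

    def send_back(self, moderator: str, note: str) -> None:
        # Pre-2018 workflow: a single moderator could privately return a
        # claim to its author, invisibly to the rest of the community.
        # The redesign removed this path entirely.
        raise NotImplementedError("Send-back was removed in the early-2018 redesign")

    def reply(self, moderator: str, comment: str) -> None:
        # Post-change workflow: moderators reply publicly instead of sending
        # back, so the claim stays in place while the team discusses it.
        self.thread.append((moderator, comment))

    def accept(self, moderator: str) -> None:
        # Any moderator can still accept; the conflicts discussed below arise
        # when this happens while the thread is still active.
        self.status = "accepted"
```

Note that nothing in this workflow coordinates `accept` with an open `thread`: any moderator may accept a claim mid-discussion, which is precisely the gap the conflicts reported in our findings exploit.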
In line with existing studies of online communities (Boellstorff, 2012), we made the decision to use virtual ethnographic methods to explore moderation practices on Kialo. We are engaged in ongoing (14 months) participant observation, which means that we are active writers and moderators in several debates on Kialo.
Data collection. We have been developing a thick record (Carspecken, 1996) of our interactions on Kialo. This consists of (1) low-inference summaries of interactions and experiences on Kialo; and, (2) relevant, publicly visible user-generated text on Kialo. This publicly visible text comes either from a discussion chat or a claim chat. The discussion chat facilitates talk about high-level issues pertinent to a debate (e.g., are there too many top-level claims, does the main thesis need to change), onboarding new participants (e.g., by explaining to them the nuances of a debate, how Kialo works, etc.), as well as casual talk (e.g., who has been on vacation/holiday recently, whether someone has gotten busy at work, and so forth). Claim chats tend to focus on the issues with a particular claim (e.g., whether it is unclear, unsupported, or irrelevant), though people discuss higher-level issues here, too. Both discussion and claim chat records are publicly visible, and we collect and organize them as part of our thick record.
Data analysis. We iteratively read and discussed our thick record, which drew our attention to the way moderation practices changed when Kialo rolled out design updates. In particular, we became interested in the ways that moderators came into conflict with each other as a result of those changes. This led us to examine our thick record through the lens of conflict and to consider the ways in which conflict could be said to detract from or contribute to moderation practice. We continued our observation work as we performed data analysis, and became aware of the importance of claim vetting, which, in turn, led us to re-examine our data in terms of how conflict between moderators affects claim vetting.
First, we describe how claim vetting involves argumentation between moderators. Second, we describe how constructive dialogue between moderators can produce higher quality claims. Higher quality claims can mean that the claims are clearer, that they have stronger support, or that they become more relevant to a parent claim or main thesis. Although the interactions and text we describe are publicly visible on Kialo, we have changed all user names and edited text in an effort to maintain user privacy.
Claim vetting involves reasoning between moderators
Conflicts arise when one moderator initiates a discussion about a claim and another moderator accepts it before the discussion resolves. Since Kialo does not have an official policy on conflict management, moderators take different approaches in response to what they see as a conflict.
An illustrative case in the climate change debate, for example, played out between several moderators across multiple claim chat threads. It began with what one moderator perceived as a breach of protocol by another. @sodanotpop had been workshopping a suggested claim with an author when another moderator, @blueteam, accepted the claim into the debate. @sodanotpop subsequently flagged the claim and engaged @blueteam: “[It] was not appropriate to accept a suggestion still under discussion. i engaged the author in order to strengthen it before accepting it into the debate.” This comment initiated a lengthy argument that played out in three separate claim chat threads, which meant that these two moderators were moving between different claims in the debate while arguing with each other about the proper protocol for collaborative claim vetting.
Some of this argumentation was pertinent to the claims themselves. For example, @blueteam discussed newly provided support as justification for accepting claims. “I accepted it because the author’s claim was cited as unsupported, they then supported the claim so i marked it as supported.” They also pointed to the lack of support for other claims as justification for accepting a new claim before support had been provided — arguing that double standards were applied. “Where is the evidence or anything else substantial that backs up this claim?? there isn’t any.” @sodanotpop argued, in one case, that a source of support itself was not valid: “the claim contains a link to scientific work that has been disproven (shown to be false) by other members of the scientific community.” Their interaction is civil enough. They provide rationales for certain decisions and politely discuss the value of the grounds intended to support a claim.
However, they also argued over how to go about collaborative claim vetting. Whereas @blueteam felt justified in accepting a claim that had been marked and was apparently in the process of being workshopped, @sodanotpop believed that it was inappropriate for another moderator to accept a claim that they were workshopping. @sodanotpop could have been echoing the perspective of another moderator in the debate, @libre, who, in a separate thread, called out a user for accepting a claim under discussion. “[I] think it might have been better to not accept this when @saskatoon @sodanotpop and me discuss it.” This comment did not lead to a long argument between moderators. In fact, the person who accepted @libre’s claim did not respond again in the thread.
This could have been due to the way @libre “softened” their comment by acknowledging Kialo’s interface change. “That’s a fairly recent change Kialo seems to have made and we’re still trying to figure out how to use it best.” Following this comment, @libre’s attention returns to the suggested claim. However, in their thread, @sodanotpop and @blueteam continued to argue about moderation policies, including the use of more blunt and direct criticisms.
For example, @blueteam suggested that @sodanotpop did not have a solid grasp of their actions as a moderator on Kialo: “You’re not comprehending what you’re doing here.” They also trivialized an objection as a gripe, “You then had another gripe with the supporting evidence — for another reason, so mark it again,??? so what?” and characterized several accepted claims as “out of touch with reality.” These comments could be construed as moving away from civil discourse and towards something akin to trolling. During the latter stages of this argument, @sodanotpop posted the following comment in the discussion chat:
To all the mods: we need to discuss accepting suggested claims. There are several such claims where I’ve initiated discussions with the authors in order to fix problems *before* accepting them into the debate only to have another moderator come along and accept the claim. This isn’t a great way to collaborate, nor is it a good way to grow the debate. So, I think we need to agree that best practice is to check and see if another moderator has already started discussing a suggested claim before clicking accept. Assume that there is reason that other moderator didn’t accept the claim and at least engage in some discussion before acting.
This proposal parallels an earlier insight that @Choco shared with us in a separate debate when we asked about the process of flagging claims. “Any editor can unflag a claim, but it [sic] generally accepted as a practice that the one who flags it should unflag it.” After Kialo changed its interface, we have seen some moderators apply this protocol or some version of it. For example, we have seen moderators propose and vote on changes to suggested claims and flagged claims. However, we have also experienced firsthand, and witnessed multiple cases of, moderators accepting claims without waiting for resolution to an active discussion.
@sodanotpop’s proposal did not result in any broader discussion amongst the moderators with regard to collaborative claim vetting. However, @blueteam posted what seemed to be an antagonistic critique of how some moderators in the debate vet and accept claims:
Claims are being accepted that are bordering on insane, with no thought or logic behind them ... for example saying that ‘technically/practically, the tools [to provide energy without burning fossil fuels] exist ... in principle there is no obstacle to [stop] burning fossil fuels’ ... i mean really?? come on guys. It’s like just claiming martians put CO2 in the atmosphere.
On the one hand, @blueteam issues a meaningful call for “thought or logic” to be employed when it comes to moderating suggested claims. On the other hand, they criticize other, unspecified moderators for accepting “insane” claims into the debate — illustrating their point with what could be interpreted as a disparaging analogy.
After @blueteam posted this remark, @sodanotpop seemed to withdraw, without comment, from the debate. Though they still appear on Kialo as a moderator in several debates, they have not accepted claims or participated in discussions of flagged claims since around the time their argument with @blueteam wound down. It does not appear that any other moderators took up the discussion of “best practices” for collaborative claim vetting. However, it is possible to find many examples of arguments between moderators — especially in the climate change debate — where the arguments seem not to affect the claim under discussion.
Instead, they involve rhetoric like “what a blustering bunch of nonsense,” “it’s completely useless to discuss with you ...,” “I’ll explain in case anyone with an open mind is reading ...” and “all you've done is change a definition to suit your narrative. whatever.” These comments are all part of a single thread. At no point does anyone interject or attempt to mediate this exchange, which, as of this writing, is still active, and the claim from which it stems remains flagged and unedited. When people get together to argue, there is no magic bullet for mitigating rudeness, shouting, or trolling. However, these phenomena ought to be considered alongside constructive dialogues on the site — ones that result in additions and revisions to debate content that would appear to make it better.
Constructive dialogue between moderators produces higher quality claims
Dialogue is the primary way moderators resolve issues pertinent to the overall structure of a debate, to particular (problematic) claims, and to moderation practices. While it is possible to work in isolation, in our experience moderators rarely do. In fact, the two most important elements of the interface might be the discussion and claim chats since these provide the forums for moderator dialogue. Two important features of these chats are: (1) they are public and thus visible to the entire Kialo community; and, (2) they are continuous. Public visibility may strengthen civility between participants on the site, and a living historical record provides insight into how ideas may have evolved over time.
Initially, however, the process of evaluating and accepting suggested claims was neither public nor continuous. Moderators requested revisions to suggested claims via direct messages, which were invisible to the broader community. If an author suggested a new claim, the evaluation process began anew. Assuming the same moderator examined the new claim, they would have to recall (from memory) the previous claim as well as their revision request. There was no accessible living record of this interaction.
We are unsure of when exactly Kialo changed things, since we did not receive any formal communication describing changes to the platform. However, in early 2018 it became possible to interact with suggested claims as though they had already been accepted into the debate. Kialo made it so that moderators could make public comments on claims in a continuous chat thread. Aspiring participants could make revisions based on these requests or they could engage with moderators in a dialogue about why they might not want to make a revision. Moreover, multiple moderators could see and join the evaluation process, which created the conditions for dialogue to emerge around suggested claims.
For the most part, these dialogues are productive exchanges of ideas. Moderators weigh in with their thoughts on a particular claim, ask others for their perspective, and render judgments on suggested claims that can be taken into consideration by others when deciding to accept a claim or continue to workshop it. In the following exchange, for example, @libre solicits perspectives from another moderator (@qed) about a suggested claim from a new user @tennisC:
@libre: @qed what do you think, and @tennisC why do you think the parent is unrelated?
@hollywood: @libre @tennisC Interesting. Either this is a con to the parent claim as that claim says the parent does not address the thesis ... I understand the fact that the claim, “The earth is fine,” is right but does not ‘con’ the parent. I think it’s fine to argue that the claim is irrevelent or is not a good Con to the thesis.
@hollywood: @canoe Rewrite and explain why the parent isn’t good in the context of the thesis and I vote to accept
@sodanotpop: @reply the parent rebuts the thesis if we accept that the ends of fighting climate change ought to be [preservation] ... There are underlying issues that could stand to be teased out.
There are many examples of dialogue between moderators and writers resulting in concrete improvements to the clarity, relevance, or grounding of a claim. These dialogues tend to include civil language and a respect for other perspectives and approaches — even those that deviate from site-wide conventions for conduct. Indeed, these dialogues exemplify what the site aspires to achieve: a civilized space where it is OK to disagree and where participants are encouraged to reason about contentious issues in constructive ways (June, 2018). In a debate about gender as a social construct, for example, someone changed the form of the main thesis without consulting others who had been actively working in the debate. This resulted in a discussion of the merits of the change and, ultimately, a decision to revert the thesis back to its previous form:
@originator: I’ll tag @jolene @abcdefg and @grasshopper to see if they agree with the changes.
@jolene: Some of the reasons expressed have a point. But, I feel the first formulation was clearer for most readers (with little background knowl) and as objective as possible.
@abcdefg: I think the current wording communicates that gender and sex are the same, and the suggested claims just now coming in reflect this.
@abcdefg: I’m going to re-draft it similar to the original for now; we can continue discussing this to get something stronger. Hope that’s okay!
Kialo currently hosts several debates addressing contentious issues, such as the recent “stand or kneel” NFL controversy in the United States, abortion rights, and racial profiling. It is understandable that participants in these debates, including moderators, would have strong feelings about these topics. It is also understandable that these feelings would inform participants’ interactions with each other. For writers, this might mean posting more “pros” in support of a topic in accordance with their views. For moderators, this could mean holding certain sides of a debate to higher standards. One participant called out this style of moderation in a popular debate about climate change: “This is a clearly biased discussion. You have multiple pro claims that have no support and most of the skeptical ones are challenged repeatedly (to the point that the average contributor would give up).” Such bias is characterized as a liability on account of how it excludes certain perspectives from the debate.
While there can be drawbacks to moderators having different points of view, it is not necessary to frame points of view as liabilities. There are examples of how different — even opposing — points of view can be used to strengthen claims and debates on Kialo. On the other hand, there appear to be more scenarios in which clashing points of view devolve into arguments that lead to no concrete improvements in a debate. In some cases, arguments have concrete, negative consequences: participants may withdraw from a debate or decide to stop using Kialo altogether. The key seems to be managing these differences so as to facilitate constructive dialogue between adversarial points of view.
Conflict in an online community like Kialo can be problematic if it takes attention away from the primary activity of moderation and negatively affects users’ experiences. Our experience as participant observers on Kialo motivated us to explore the possibility that amplifying moderators’ beliefs and values might be conducive to constructive dialogue rather than to the more volatile arguments we observed between @sodanotpop and @blueteam. Making moderators’ perspectives on a debate topic visible could be an effective strategy for facilitating constructive, civil interactions between moderators. In addition, we discuss the value of providing a space for moderators to document and iterate on their practice.
Foreground moderators’ beliefs and values
One interesting and potentially valuable feature on Kialo is the ‘Perspectives’ tool. The tool enables participants to see a debate “from [another] participants’ perspectives,” including moderators. Participants can cast a vote on the ‘veracity’ of the main thesis (whether they agree with it) and the ‘impact’ of the claims in a debate (whether a claim effectively supports or rebuts the main thesis or claim in a debate). The combination of votes on the main thesis and claims forms a participant’s perspective on the debate.
Voting on theses and claims is not mandatory, nor is it encouraged, and many participants simply do not vote. Thus, it is not possible to see their perspective. Moreover, we believe users have the power to opt out of sharing their perspective. After observing the arguments between @sodanotpop and @blueteam, we became interested in their perspectives on the climate change debate. We were able to see how @sodanotpop voted on several claims, and the votes indicate a favorable view of the main thesis and several supporting claims. Notably, @sodanotpop did not vote on many con claims. When we tried to see the debate from @blueteam’s perspective, we were unable to locate their user name on the list of active participants in the perspectives tool, which we took to mean that they opted out of sharing their perspectives. Their absence from the perspectives tool could also mean that, if a participant does not vote at all, then they do not appear as an active participant in the perspectives tool.
Accessing and making sense of the perspectives tool is neither intuitive nor efficient. It is located in the discussion menu, which is itself difficult to find, and there is no indication of its functionality. In addition, even if a participant filters a debate through another participant’s perspective (Figure 2), they then have to navigate the debate, claim by claim, in order to “see” that perspective. Navigating a debate, even a small one, can be time-consuming.
Figure 2: The perspectives tool. The dark grey column on the left lists discussion participants. The blue dot signifies whose perspective we see, and the green and red/orange bars reflect how they voted on a claim.
Knowing other moderators’ points of view can be useful in dialogues about suggested claims. For example, if we know that another moderator strongly disagrees with the claim, “Humans should act to fight climate change,” we might calibrate our comments and questions to take this into account. We became frustrated, as participant observers, when we encountered a moderator who stated, on several separate occasions, that their claim vetting process was guided by logic and reason. This gave the impression of a disinterested third party committed to equitable claim vetting on both sides of the debate, which is a laudable goal. However, as participants, we interpreted many of their actions as favoring one side of the debate, and, when we raised this publicly, they were quick to rebut our interpretation in what we perceived as a caustic manner.
On the other hand, our reluctance to accept their stance could indicate that awareness of other moderators’ points of view might hamper dialogue. Our belief that another moderator acts on the basis of a point of view — as distinct from reason and logic — might undermine our ability to see that, in some cases, their arguments about certain suggested claims are reasonable. We drew conclusions about @blueteam’s standpoint, for instance, on the basis of a limited set of interactions that, perhaps, do not represent the whole of their experience. However, these conclusions influence our interactions. We are more confrontational, which could be good or bad. In some cases, we believe that our confrontations have yielded stronger claims being accepted into the debate. This is good, and it reflects ways in which conflict can be leveraged to serve the primary tasks at hand. Moreover, understanding other moderators’ perspectives, and doing some emotional work in anticipation of caustic interactions (Menking and Erickson, 2015), seems critical here. Knowing that @blueteam is more antagonistic, for example, helps us make sense of their reactions to our decisions and to strategize ways to engage with them more productively. In other cases, however, conflicts have fueled circular discussions that do not yield any concrete changes to a debate. To the contrary, they devolve into emotionally charged disagreements, characterized by rude comments and trolling.
Public-facing claims by moderators about reason and logic as primary motivating factors for their work can serve to minimize the perception of the influence that personal beliefs and values have on moderation. But we believe that it is neither possible, nor desirable, for moderators to operate independently of their beliefs and values. Instead, we believe that moderators can leverage these perspectives to cultivate better debates and engage in more productive discussions. We are not suggesting that surfacing personal beliefs and values about debate topics would be a magic bullet for conflict management in peer production communities. However, our own experiences on Kialo suggest that doing so can be an asset in some cases and a detriment in others.
Kialo could require moderators to vote on the main thesis in a debate to indicate whether they agree or disagree. That stance could then be visualized in a public-facing way so that in any interaction with other moderators or writers it would be possible to glean the moderator’s standpoint on the debate topic. Moderators need not be “locked in” to their vote. Other researchers have implemented voting systems before and after participation in a debate (Kriplean, et al., 2011), which makes clear that perspectives can, and in some cases, should change over time. Moderators could vote on a main thesis as often as they like, with each new vote informing a change in their standpoint visualization in real time.
Toward constructive dialogue between adversarial points of view
We have observed and interacted with moderators whose interests seem to diverge from our own. In one notable case, for example, we observed a clash between moderators that came to a head with the acknowledgement that they had reached an impasse whereby neither moderator was prepared to entertain or accept a point of view other than their own. Maintaining participation during and after interactions such as these can be challenging. In fact, the clash seemed to result in one moderator withdrawing from the debate altogether, while the other moderator appears active as of the writing of this paper. Moreover, the claim(s) that served as the sites for this clash remained problematic and unresolved until, after several weeks of inattention, other moderators resumed evaluating and workshopping them.
This is a good illustration of what Filippova and Cho (2016) characterized as ideological issues distracting from the task at hand. Problematic claims in the debate remained unresolved because moderators were arguing about their differing beliefs and values, which leads us to suggest that, given the purpose of moderation on Kialo, this dialogue was not constructive.
A simple way to distinguish a constructive dialogue about a suggested claim on Kialo, then, could be its outcome: do any of the moderators involved recognize or appreciate the limitations of their own perspective? Do they acknowledge that other moderators can or should have different perspectives? Finally, and perhaps most crucially, does the dialogue lead to the revision and acceptance (or rejection) of a suggested claim? It seems reasonable to suggest that, in the end, if a dialogue results in some decision about a suggested claim then it could be said to be constructive. Whether moderators’ perspectives change could be a secondary concern.
Moderators who do not know (or care) much about a particular topic are unlikely to convince others to reevaluate their own positions. Moderators who know or care a lot about a debate topic — regardless of which side they are on — are, in our experience, more likely to ask questions and raise challenges, which, in the long run, might be better for the debate. These kinds of moderators force others to “know their stuff,” develop stronger arguments, and remain engaged even when doing so seems counterproductive or frustrating. So, creating an environment conducive to civil, adversarial interactions between moderators might be quite important, and foregrounding their personal standpoints on an issue could contribute to that goal.
Once we drew conclusions about how other moderators felt about climate change or about “shadow banning” conservative-run social media accounts, we modified our expectations and rhetorical approach to arguing about suggested and flagged claims. We worked harder at ignoring caustic comments, ad hominem attacks, and trolling by other moderators and writers. Moreover, we made the decision to engage with them at every opportunity — rather than withdraw — thinking that other participants on the site would benefit from seeing two conflicting points of view engage with each other in order to strengthen a debate. It might be these kinds of interactions that produce the strongest accepted claims, and thus the strongest dialectics, on Kialo.
However, it is also clear that @sodanotpop and @blueteam were engaged in a conflict that seemed to straddle what Filippova and Cho (2016), citing Arazy, et al. (2013), characterize as “process” and “affective” conflict. It was a process conflict in the sense that both moderators argued about how to vet and accept claims as part of a team, and it was affective conflict in the sense that both moderators took issue with the other’s personal beliefs and values. Each characterized these as “biases” influencing the other’s participation in negative ways. @sodanotpop’s withdrawal from the debate following their conflict with @blueteam suggests that the circumstances had become too challenging, or caustic, for them, and that they had not engaged in emotional labor such that they felt comfortable maintaining their participation on Kialo.
Throughout their disagreement, both moderators commented not only on the content of the claims under examination but also on the expected behaviors of moderators when it comes to vetting suggested claims. @sodanotpop, adopting a perspective shared by some other moderators, argued that the moderator who flags a claim, or initiates a workshop with authors and other moderators, should be the one who unflags or decides to accept the claim. While @blueteam did not disagree with this directly, their actions suggest that they see these decisions as open to the moderator team. That is, if a moderator finds a flagged claim and sees a strong rationale for unflagging it, they can remove the flag without consulting others on the team.
We are not making any judgment about which of these two positions is right but, rather, we want to examine ways that they might be put into constructive dialogue with one another. Remaining firmly committed to one or the other position would seem to run counter to the tenets of dialectical reasoning, the purpose of which is to explore two contrasting points of view about an issue in order to produce new knowledge. This could mean appreciating the richness and complexity of an issue that previously seemed black and white or it could mean changing one’s mind entirely, and this would seem to be one of the underlying purposes of Kialo.
A key issue with regard to process conflict on Kialo is a lack of policy describing practical strategies for interacting with other users. One thing we took away from observing how moderators negotiate their practice is that these negotiations happen ad hoc and are distributed throughout a debate. For example, moderators may work out the “norms” of participation via several disparate claim chats as well as the discussion chat, but no concrete sets of “rules” or “guidelines” ever materialize. @sodanotpop and @blueteam, for example, spread their argument between three separate claim chat threads as well as the main discussion chat thread. They made several proposals with regard to what the norms for collaborative claim vetting should be, but they did not compare or reconcile these suggestions. Nor did they compile these suggestions in a location that would be widely visible to other moderators.
The discussion chat, which could in principle serve as such a location, is challenging to parse. Moderators and writers post greetings, introductions, links to relevant external content (e.g., journal articles, YouTube videos, etc.), comments on the debate, and personal details. It is organized in reverse chronology, which is typical of many chat logs. There is no low-cost method for searching or filtering content (e.g., by hashtag). The cost of finding meta-level comments and questions by sifting through all other comments might discourage people from trying. We have experienced this frustration firsthand. But perhaps it would be possible to introduce hashtags as a first step to make the discussion chat searchable by content type or to create a singular meta-discussion for the Kialo community where the purpose is to establish and revise a set of clearly defined “rules” for moderating and writing claims. This might have the dual benefit of mitigating disagreements about how to moderate and, thus, within the boundaries of a debate, keeping dialogue between moderators focused on the quality of theses and claims.
We see value in categorizing different kinds of conflict on Kialo and determining which of these kinds could be beneficial. This would require distinguishing good claims from bad ones, and Kialo already has a framework for this purpose. For example, bad claims are those deemed to be unsupported, unclear, vulgar/abusive, unrelated, not a claim, or duplicative. However, we believe it would be possible and useful to apply a framework such as Toulmin’s to assess claim quality, which would be a crucial step in a project exploring the relationship between conflict and claim quality. We are also interested in possible ways to iterate on Kialo’s interface and interaction design to help writers craft stronger, more effective claims. Finally, we have already observed how some moderators characterize their actions as driven primarily by reason and logic. This struck us as an interesting discursive strategy that could be part of a broader project aimed at maintaining the authority and power of their position as a moderator. Moreover, it would be interesting to compare how moderators talk about their process with how they carry it out. Hence, we see value in using discourse analysis to examine moderators’ talk about their role and then to compare this talk with an analysis of their actions (accept/reject decisions, edit decisions, and arguments with other moderators).
Kialo is a novel online debate platform supporting teams of moderators and writers in the construction of pro/con debates about different topics of interest. We situate our ongoing participant observation in relation to existing research on moderation and online conflict and dispute resolution. We found that moderators with different points of view can clash over suggested claims or accepted claims that have been flagged as problematic, and we explained that these clashes can result in no ostensible improvements to the debate or to the Kialo community. In some cases, these clashes can discourage participation in certain debates and even result in participants leaving the site altogether. On the other hand, constructive dialogue between moderators has the dual benefit of encouraging participation and strengthening the quality of debates.
During our study, we observed how an interface change facilitated conflicts between moderators and speculated that one possible source of these conflicts could be a lack of awareness of other moderators’ points of view. Moderators themselves frame their approach as driven by logic and reason rather than by beliefs and values even when those beliefs and values seem to become visible through interactions with other moderators. We propose that foregrounding moderators’ beliefs and values — by foregrounding their position on a given debate topic — could be an effective way to anticipate and preempt conflict. An important next step in our research will examine the way that moderators use language to construct the role that values and assumptions might play in their actions on Kialo.
About the authors
Jordan Beck is Assistant Research Professor in the College of Information Sciences and Technology at Pennsylvania State University.
E-mail: jeb560 [at] psu [dot] edu
Bikalpa Neupane is a Ph.D. candidate in the College of Information Sciences and Technology at Pennsylvania State University.
E-mail: bikalpaneupane [at] ist [dot] psu [dot] edu
John M. Carroll is Distinguished Professor of Information Sciences and Technology in the College of Information Sciences and Technology at Pennsylvania State University.
E-mail: jmcarroll [at] psu [dot] edu
1. Billings and Watts, 2010, p. 1,447.
2. Kriplean, et al., 2012, p. 267.
4. Kriplean, et al., 2012, p. 1.
5. Filippova and Cho, 2016, p. 707.
6. Toulmin, 2003, pp. 92–97.
O. Arazy, L. Yeo, and O. Nov, 2013. “Stay on the Wikipedia task: When task-related disagreements slip into personal and procedural conflicts,” Journal of the American Society for Information Science and Technology, volume 64, number 8, pp. 1,634–1,648.
doi: https://doi.org/10.1002/asi.22869, accessed 24 June 2019.
O. Arazy, F. Ortega, O. Nov, L. Yeo, and A. Balila, 2015. “Functional roles and career paths in Wikipedia,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 1,092–1,105.
doi: https://doi.org/10.1145/2675133.2675257, accessed 24 June 2019.
M. Billings and L.A. Watts, 2010. “Understanding dispute resolution online,” CHI ’10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1,447–1,456.
doi: https://doi.org/10.1145/1753326.1753542, accessed 24 June 2019.
T. Boellstorff, B. Nardi, C. Pearce, and T.L. Taylor, 2012. Ethnography and virtual worlds: A handbook of method. Princeton, N.J.: Princeton University Press.
N. Burbank, D. Dutta, A. Goel, D. Lee, E. Marschner, and N. Shivakumar, 2011. “Widescope — A social platform for serious conversations on the Web,” arXiv (8 November), at https://arxiv.org/abs/1111.1958, accessed 24 June 2019.
P.F. Carspecken, 1996. Critical ethnography in educational research: A theoretical and practical guide. New York: Routledge.
T.S. Cooner, 2005. “Dialectical constructivism: Reflections on creating a Web-mediated enquiry-based learning environment,” Social Work Education, volume 24, number 4, pp. 375–390.
doi: https://doi.org/10.1080/02615470500096902, accessed 24 June 2019.
S. Dow, A. Kulkarni, S. Klemmer, and B. Hartmann, 2012. “Shepherding the crowd yields better work,” CSCW ’12: Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, pp. 1,013–1,022.
doi: https://doi.org/10.1145/2145204.2145355, accessed 24 June 2019.
A. Filippova and H. Cho, 2016. “The effects and antecedents of conflict in free and open source software development,” CSCW ’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 705–716.
doi: https://doi.org/10.1145/2818048.2820018, accessed 24 June 2019.
A. Filippova and H. Cho, 2015. “Mudslinging and manners: Unpacking conflict in free and open source software,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 1,393–1,403.
doi: https://doi.org/10.1145/2675133.2675254, accessed 24 June 2019.
D. Fréard, A. Denis, F. Détienne, M. Baker, M. Quignard, and F. Barcellini, 2010. “The role of argumentation in online epistemic communities,” ECCE ’10: Proceedings of the 28th Annual European Conference on Cognitive Ergonomics, pp. 91–98.
doi: https://doi.org/10.1145/1962300.1962320, accessed 24 June 2019.
C. Grevet, L.G. Terveen, and E. Gilbert, 2014. “Managing political differences in social media,” CSCW ’14: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 1,400–1,408.
doi: https://doi.org/10.1145/2531602.2531676, accessed 24 June 2019.
A.R. Hochschild, 1979. “Emotion work, feeling rules, and social structure,” American Journal of Sociology, volume 85, number 3, pp. 551–575.
doi: https://doi.org/10.1086/227049, accessed 24 June 2019.
W. Huang, T. Lu, H. Zhu, G. Li, and N. Gu, 2016. “Effectiveness of conflict management strategies in peer review process of online collaboration projects,” CSCW ’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pp. 717–728.
doi: https://doi.org/10.1145/2818048.2819950, accessed 24 June 2019.
A.W. June, 2018. “How to promote enlightened debate online,” Chronicle of Higher Education (25 March), https://www.chronicle.com/article/How-to-Promote-Enlightened/242905, accessed 24 June 2019.
B. Keegan and D. Gergle, 2010. “Egalitarians at the gate,” CSCW ’10: Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pp. 131–134.
doi: https://doi.org/10.1145/1718918.1718943, accessed 24 June 2019.
J. Kim, J. Cheng, and M.S. Bernstein, 2014. “Ensemble: Exploring complementary strengths of leaders and crowds in creative collaboration,” CSCW ’14: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 745–755.
doi: https://doi.org/10.1145/2531602.2531638, accessed 24 June 2019.
A. Kittur, B. Pendleton, and R.E. Kraut, 2009. “Herding the cats,” WikiSym ’09: Proceedings of the Fifth International Symposium on Wikis and Open Collaboration, article number 7.
doi: https://doi.org/10.1145/1641309.1641321, accessed 24 June 2019.
M. Krieger, E.M. Stark, and S.R. Klemmer, 2009. “Coordinating tasks on the commons,” CHI ’09: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1,485–1,494.
doi: https://doi.org/10.1145/1518701.1518927, accessed 24 June 2019.
T. Kriplean, C. Bonnar, A. Borning, B. Kinney, and B. Gill, 2014. “Integrating on-demand fact-checking with public dialogue,” CSCW ’14: Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 1,188–1,199.
doi: https://doi.org/10.1145/1518701.1518927, accessed 24 June 2019.
T. Kriplean, J. Morgan, D. Freelon, A. Borning, and L. Bennett, 2012. “Supporting reflective public thought with ConsiderIt,” CSCW ’12: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 265–274.
doi: https://doi.org/10.1145/2145204.2145249, accessed 24 June 2019.
T. Kriplean, J.T. Morgan, D. Freelon, A. Borning, and L. Bennett, 2011. “ConsiderIt,” CHI EA ’11: CHI ’11 Extended Abstracts on Human Factors in Computing Systems, pp. 1,831–1,836.
doi: https://doi.org/10.1145/1979742.1979869, accessed 24 June 2019.
J. Liu and S. Ram, 2011. “Who does what,” ACM Transactions on Management Information Systems, volume 2, number 2, article number 11.
doi: https://doi.org/10.1145/1985347.1985352, accessed 24 June 2019.
K. Luther, C. Fiesler, and A. Bruckman, 2013. “Redistributing leadership in online creative collaboration,” CSCW ’13: Proceedings of the 2013 Conference on Computer Supported Cooperative Work, pp. 1,007–1,022.
doi: https://doi.org/10.1145/2441776.2441891, accessed 24 June 2019.
J. Margolis, 2018. “Meet the start-up that wants to sell you civilised debate,” Financial Times (24 January), at https://www.ft.com/content/4c19005c-ff5f-11e7-9e12-af73e8db3c71, accessed 22 April 2019.
A. Menking and I. Erickson, 2015. “The heart work of Wikipedia: Gendered, emotional labor in the world’s largest online encyclopedia,” CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 207–210.
doi: https://doi.org/10.1145/2441776.2441891, accessed 24 June 2019.
D. Moshman, 1982. “Exogenous, endogenous, and dialectical constructivism,” Developmental Review, volume 2, number 4, pp. 371–384.
doi: https://doi.org/10.1016/0273-2297(82)90019-3, accessed 24 June 2019.
A.M. O’Donnell, 2012. “Constructivism,” In: K.R. Harris, S. Graham, T. Urdan, C.B. McCormick, G.M. Sinatra, and J. Sweller (editors). APA educational psychology handbook. Volume 1. Washington D.C.: American Psychological Association, pp. 61–84.
doi: http://dx.doi.org/10.1037/13273-003, accessed 24 June 2019.
D. Park, S. Sachar, N. Diakopoulos, and N. Elmqvist, 2016. “Supporting comment moderators in identifying high quality online news comments,” CHI ’16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 1,114–1,125.
doi: http://dx.doi.org/10.1145/2858036.2858389, accessed 24 June 2019.
J. Schneider, K. Samp, A. Passant, and S. Decker, 2013. “Arguments about deletion,” CSCW ’13: Proceedings of the 2013 conference on Computer supported cooperative work, pp. 1,069–1,080.
doi: http://dx.doi.org/10.1145/2441776.2441897, accessed 24 June 2019.
B. Settles and S. Dow, 2013. “Let’s get together: The formation and success of online creative collaborations,” CHI ’13: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2,009–2,018.
doi: http://dx.doi.org/10.1145/2470654.2466266, accessed 24 June 2019.
S.E. Toulmin, 2003. The uses of argument. Updated edition. Cambridge: Cambridge University Press.
E. Wenger, 1998. Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge University Press.
H. Zhu, R. Kraut, and A. Kittur, 2012. “Effectiveness of shared leadership in online communities,” CSCW ’12: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, pp. 407–416.
doi: http://dx.doi.org/10.1145/2145204.2145269, accessed 24 June 2019.
H. Zhu, R.E. Kraut, Y.-C. Wang, and A. Kittur, 2011. “Identifying shared leadership in Wikipedia,” CHI ’11: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3,431–3,434.
doi: http://dx.doi.org/10.1145/1978942.1979453, accessed 24 June 2019.
Received 12 December 2018; revised 4 June 2019; accepted 5 June 2019.
This paper is licensed under a Creative Commons Attribution 4.0 International License.
Managing conflict in online debate communities
by Jordan Beck, Bikalpa Neupane, and John M. Carroll.
First Monday, Volume 24, Number 7 - 1 July 2019