In 2011, the respective roles of higher education institutions and students worldwide were brought into question by the rise of the massive open online course (MOOC). MOOCs are defined by signature characteristics that include: lectures formatted as short videos combined with formative quizzes; automated assessment and/or peer and self–assessment; and an online forum for peer support and discussion. Although not specifically designed to optimise learning, claims have been made that MOOCs rest on pedagogical foundations that are at the very least comparable with those of courses offered by universities in face–to–face mode. To test these claims, we examined the literature for empirical evidence substantiating them. Although empirical evidence directly related to MOOCs was difficult to find, the evidence that exists suggests there is no reason to believe that MOOCs are any less effective a learning experience than their face–to–face counterparts. Indeed, in some respects, they may actually improve learning outcomes.
The efficacy of online learning
The importance of retrieval and testing for learning
Peer and self–assessment
Short format videos
Online forums and video discussions
In 2011, the respective roles of higher education institutions and students worldwide were brought into question by the rise of the massive open online course (MOOC). MOOC platforms Coursera (2012a), edX (2012) and Udacity (2012) have partnered with 33 universities, offering more than 200 courses to over two million students in 196 countries (Coursera, 2012b). Individual courses have attracted enrolments of up to 160,000 students (Fazackerley, 2012), justifying the “massive” in the name MOOC. These courses are also free, or “open”. Given that they are offered by some of the most prestigious universities, the potentially disruptive nature of MOOCs was recognised early on. After all, if a student could take a course from Princeton University for free, why would they pay for an identical course given by their local (less famous) institution? Given the growth in availability of MOOCs, the question could be extended to why someone would not complete an entire degree programme in this way. Of course, a number of practical issues need to be resolved before this happens, the most salient being the provision of proctored examinations to students whose identities have been verified. However, a more fundamental question has been raised on both sides of the argument: do MOOCs represent a pedagogically sound format for learning at a tertiary level? Claims for and against the pedagogical foundations of MOOCs have been made by a variety of interested parties (Association for Learning Technology, 2012; Baker, 2012; Moe, 2012), but these claims have been backed with only scant evidence, or indeed agreement, as to the defining characteristics of a MOOC and the pedagogical foundations on which it rests.
For the purposes of our study, we have taken the representative format of MOOCs as they exist on sites such as Udacity (2012), Coursera (2012a) and edX (2012). These courses exhibit common defining characteristics that include: massive participation; online and open access; lectures formatted as short videos combined with formative quizzes; automated assessment and/or peer and self–assessment; and online fora for peer support and discussion.
There is no absolute definition of each of these characteristics, however. Even the concept of massive is open to interpretation. Although claims have been made of registrations as large as 160,000 participants (Fazackerley, 2012), the number who complete a course is typically much lower, of the order of 5–15 percent of initial enrolees (Korn and Levitz, 2013). Realistically, to qualify as massive, participation at any point during the running of the course should be large enough that the course could not be run in a conventional face–to–face manner.
The pedagogical foundations claimed for MOOCs follow on from their attributes and in part are justifications for those attributes. So it has been argued that online learning is particularly effective, that formative quizzes enhance learning through the mechanism of retrieval practice, that short video formats with quizzes allow for mastery learning, and that peer and self–assessment enhance learning. Further claims have been made that short videos complement the optimal attention span of students (Khan, 2012) and that discussion forums provide an adequate replacement for the direct teacher–student interaction that would be considered normal for a class delivered on campus.
The justification of the pedagogical benefits of MOOCs is in all likelihood teleological: the benefits have been retrofitted after the fact to a course format pioneered by Sebastian Thrun and Peter Norvig (2012). The fact that their original course, and others that have followed, have proved so popular would suggest, however, that there are positive aspects to the way they have been presented. The structure and format of MOOCs are being adapted as more experience is gained with their delivery, and so it is important to understand their benefits and shortfalls in a systematic manner.
The purpose of this review is to examine the evidence regarding the pedagogical foundations of MOOCs and indeed validate that these foundations actually relate to the attributes of MOOCs as they are currently envisioned. These attributes and their pedagogical consequences are shown in Table 1.
Table 1: Characteristics of MOOCs and their related pedagogical benefits.

MOOC characteristic | Pedagogical benefits
Online mode of delivery | Efficacy of online learning
Online quizzes and assessments | Retrieval learning
Short videos and quizzes | Mastery learning
Peer and self–assessment | Enhanced learning through this assessment
Short videos | Enhanced attention and focus
Online forums | Peer assistance, out–of–band learning
A difficulty with the analysis of MOOC structure and its pedagogical foundations is the question of how similar a MOOC is to existing online courses offered for distance learning, or as an extension of face–to–face delivery as part of so–called blended delivery. In many respects they are not dissimilar, and so the analysis of MOOCs is not inherently different from research examining the benefits of online delivery of courses generally. The difference lies in the particular combination of the underlying characteristic components of MOOCs, their massive participation and the fact that they are open. The subtlety of the novelty of MOOCs is not the point of this paper, however, and will be left for exploration in future work.
A narrative analysis of research related to the proposed pedagogical foundations of MOOCs was conducted using Google Scholar (2012), Web of Knowledge (2012), Education Resources Information Center (ERIC) (2012) and PsycINFO (2012). The keywords used when searching were as follows: “online learning”, “retrieval learning”, “mastery learning”, “peer assessment”, “short video AND lectures”, “short videos AND education” and “online forums AND learning”. There was no specific date period attached to the keyword searches.
Studies were included in the analysis only if they provided empirical evidence on the impact of the characteristics and pedagogical foundations under study. We excluded research that used only self–reported data from student surveys as a means of determining learning outcomes, as this was considered a highly subjective and unreliable measure of whether a particular learning strategy is effective. Inclusion focused on research in higher education environments; studies from other age groups and settings were included only where there was insufficient research within the tertiary education sector.
There were limitations to using ERIC: many full–text documents were taken off–line and made inaccessible while under review, following the discovery that sensitive information had been published in some of them. As a result, ERIC was unavailable as a source for the online learning section of this paper. Across all searches there was a total of 8,614 keyword matches; 138 articles met the criteria and were deemed suitable for this analysis.
The principal feature of MOOCs is that they largely take place online. The prevailing argument is that online courses are at least as effective as face–to–face courses. In many ways the comparison is fraught. Online learning offers flexibility of access to course materials from anywhere at any time (Allen and Seaman, 2005; Means, et al., 2010), which is not possible in a solely face–to–face environment. Face–to–face courses also become largely impractical when class sizes exceed available physical room capacities. There are few face–to–face courses that do not include the flexibility of online access to lecture materials and recordings. It has also been shown that if lecture videos and materials are provided to students, attendance at the actual lectures declines (Traphagan, et al., 2010). One reason for this is that students perceive the recorded and live experiences as equivalent.
Several meta–analyses found no significant differences in student achievement when university students accessed content online or face–to–face (Bernard, et al., 2004; Cavanaugh, et al., 2004). Cavanaugh and colleagues determined that achievement in distance education for high school students is comparable to traditional instruction and concluded that educators should not anticipate any significant differences in performance as a result of online learning. These findings were supported by comparative studies that also found no difference in academic achievement (Barker and Wendel, 2001; Kozma, et al., 2000; Summers, et al., 2005).
Lapsley, et al. (2008) found that online learning pedagogy may even be superior in its overall effect on student performance. Indeed, this is supported by a meta–analysis conducted by Shachar and Neumann (2003), who found that distance education actually surpasses the more traditional teaching format. In two–thirds of the studies reviewed by Shachar and Neumann, students taking courses by distance education outperformed their face–to–face counterparts. In a meta–analysis of online learning studies prepared by the U.S. Department of Education, Means and associates (2010) found that students who engaged in online learning performed modestly better on average than students engaged in face–to–face instruction. Maki, et al. (2000) also found that Web–based students outperformed students enrolled in face–to–face classes.
Online learning is not without its disadvantages, however. Some researchers argue that interaction and timely feedback are quite often absent in online instruction (El–Tigi and Branch, 1997; Olson and Wisher, 2002). It has also been widely recognized that online courses experience much higher attrition rates than classroom–based courses (El–Tigi and Branch, 1997; Olson and Wisher, 2002; Merisotis and Phipps, 1999). In addition, specialised skills are required to work with the technology, often resulting in sound and video production that is less than broadcast quality (Kerka, 1996). Students must also display greater learner initiative, as there is less supervision than in a classroom environment, and there is the potential for online students to experience social isolation (Kerka, 1996).
Despite these criticisms, the majority of the literature does support the notion that online learning is as effective, if not more so, than traditional classroom teaching (Gagné, 1985; Joy and Garcia, 2000; McDonald, 2002; McKissack, 1997; Russell, 1999; Wegner, et al., 1999). With class sizes increasing as universities try to rationalise the number of courses offered, online delivery is the only way of maintaining the quality of learning outcomes with the available resources of space and teaching staff (Means, et al., 2010).
A common format for MOOCs is the short video interspersed or associated with multiple–choice quizzes (Orn, 2012). The argument made is that the quizzes provide students with an opportunity for retrieval learning (Agarwal, et al., 2012; Karpicke and Roediger, 2007). Retrieval practice is the act of strengthening long–term memory of facts by recalling information from memory, and it is claimed to enhance learning itself (Karpicke and Blunt, 2011). According to Karpicke and Grimaldi (2012), retrieval is not just a neutral assessment of a learner’s knowledge; learning occurs through the act of retrieval. Every time we retrieve knowledge, that knowledge is altered, and the ability to reconstruct that knowledge again in the future is strengthened. Recent studies have shown retrieval practice to also enhance meaningful learning (producing organised, coherent, and integrated mental models that allow people to make inferences and apply knowledge) (see Karpicke and Roediger, 2008).
There is much evidence for the benefit of retrieval practice, or retrieval learning. In one study involving university students in the humanities (Karpicke and Grimaldi, 2012), retrieval practice improved students’ ability to memorise lists of words over repeated study alone, resulting in a 50 percent improvement in long–term retention scores. In another study, retrieval practice produced more learning than elaborative studying with concept mapping (Karpicke and Blunt, 2011): students in one group studied the text in a single period while students in another studied it in four consecutive study periods, and short answer tests conducted one week later found that retrieval practice produced the best learning. Agarwal and colleagues (2012) have also shown that retrieval practice implemented through quizzes improves long–term retention of information over simply listening to the teacher and completing standard homework assignments.
A study conducted by Karpicke and Roediger (2008) demonstrates the fundamental role of retrieval practice in enhancing learning and shows that university students remain largely unaware of this fact. Students in the first group learned foreign language vocabulary words via the classic model of repeated study–test trials. Under the other three conditions, once a vocabulary item was successfully produced by the student it was (1) studied repeatedly but dropped from further testing; (2) tested repeatedly but no longer studied; or (3) dropped from both study and testing. Repeated studying after learning produced no significant effect on recall; repeated testing resulted in a large positive effect. Students’ predictions of their own performance proved to be uncorrelated with actual achievement.
In a study conducted by Karpicke and Roediger (2007), participants learned lists of words and took a recall test one week after learning. Repeated study of previously recalled items did not result in increased retention compared to dropping those items from additional study. Repeated recall of items previously recalled, however, improved retention by more than 100 percent compared to dropping those items from further testing. Similarly, Roediger and Butler (2011) concluded that retrieval practice serves as a powerful mnemonic enhancer, often leading to significant improvements in long–term retention relative to repeated studying. They argue that retrieval practice is often effective even without feedback; feedback, however, enhances the benefits of testing.
While the studies discussed thus far established the effectiveness of retrieval learning in enhancing long–term memory and deepening understanding, Storm, et al. (2010) argue that tests given immediately after exposure to content (as they are in many MOOCs at the end of videos) are nowhere near as effective as tests that are delayed. However, with delay comes the risk that information will be forgotten before the tests take place. The authors, therefore, suggest that initial testing should take place once information has been disseminated and should then be followed up with a series of delayed tests. Testing and spacing both have the potential to enhance long–term retention of information (Bjork, 1999; Roediger and Karpicke, 2006); what is not clear is how the two manipulations can be most effectively combined (Storm, et al., 2010).
It has been claimed that the online open course format provides an opportunity for students to engage in “Mastery Learning (ML)” (Koller, 2012). Mastery learning, as first envisaged by Bloom (1968), allows students to achieve mastery of a concept before moving on to the next. This contrasts with a more traditional approach of presenting material and concepts and moving everyone along at the same pace regardless of their understanding. Bloom (1984) went further, arguing that the introduction of ML could result in improvements of one standard deviation over a conventionally taught group, while students who were tutored individually achieved a two standard deviation improvement on summative achievement scores. Bloom framed this as the “2 sigma problem”: the challenge of making group teaching approach the outcomes of individual tutoring.
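The improvements quoted here, and in the mastery learning studies discussed later, are standardised effect sizes, expressed in units of the comparison group’s standard deviation. As a sketch (using generic symbols rather than Bloom’s own notation), the two sigma claim for one–to–one tutoring amounts to:

```latex
d = \frac{\bar{X}_{\mathrm{tutored}} - \bar{X}_{\mathrm{conventional}}}{\sigma_{\mathrm{conventional}}} \approx 2
```

On this scale, an effect size of 0.52 means that the average student in the treatment group scored about half a standard deviation above the average conventionally taught student.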
One approach to this is the provision of short videos that cover a concept in conjunction with quizzes that provide formative assessment. This method was adopted by Peter Norvig (2012) from Stanford University. Norvig developed his artificial intelligence class into a MOOC and within two weeks, 50,000 people had signed up. One student went on to comment: “This class felt like sitting in a bar with a really smart friend who’s explaining something you haven’t grasped, but are about to.” Norvig was inspired by the work of Salman Khan of the Khan Academy (2012) whose videos emulate the one–on–one tutoring experience. Khan reports that improvements of 10–40 percent were achieved in a K–12 class in Los Altos after using the Khan Academy for mathematics instruction. There is unfortunately no formally published evidence for the effectiveness of the Khan Academy, or the use of short videos in enhancing student learning.
Other studies on ML have produced mainly positive results. A meta–analysis of 108 controlled evaluations showed mastery learning programs to have positive effects on the examination performances of students in university, high school, and the upper grades of primary school (Kulik, et al., 1990). Indeed, the average effect of mastery learning was to raise student achievement scores by 0.52 standard deviations. Even larger improvements were reported by Guskey and Gates (1986), who claim effect sizes of 0.65 at the college level. Walberg (1984) reports a mean effect size of 0.81 for “science mastery learning”. In addition, Burrows and Okey (2006) found that when low–aptitude fourth graders were taught using mastery learning, they achieved results as high as fifth graders who were taught in a more conventional manner.
The possibility of higher levels of engagement is also a feature of ML. Research conducted by Clark and colleagues (1983) found that the mastery group demonstrated higher levels of achievement than their peers trained in a typical lecture approach. The significantly fewer absences in the mastery learning group also suggested that, concomitant with their superior achievement, these students were more interested in their coursework. Aviles (2001) reported that while students in a junior–level introductory social work course did not perform better under ML, they did prefer it over non–ML instruction.
A prominent feature of running a course with large numbers of students is the impossibility of providing marking and feedback that is not either automated or peer assessed. Automated marking provides the instant feedback that can be combined with formative quizzes to enhance learning. There is also the possibility that peer and self–assessment may lead to enhanced learning outcomes. The principal concern with peer assessment is whether it can be adopted in the MOOC environment in a reliable and accurate way that approaches the accuracy of tutor or instructor marking. Initial data from a peer–assessed exam conducted for an introduction to sociology course on Coursera (Lewin, 2012) indicated a high degree of correlation between the average of five peer–assessed marks for a final exam and the marks of the teaching staff.
Accuracy aside, there is less consensus regarding the learning benefits for students engaged in peer assessment. Sluijsmans and colleagues (2004) found there to be no difference in performance between two groups of students in the same college course, one of which adopted peer assessment and one of which did not. In addition, Topping (1998) came to the rather vague conclusion that “peer assessment seems equally likely to contribute to or not contribute to the assessee’s final official grade” [1]. Similarly, Bloxham and West (2004) found no evidence of a relationship between a college student’s ability to grade their peers and their own performance on an assessment. The vast majority of the literature, however, does support the premise that there are additional learning benefits to the peer assessment process (Crooks, 1988; Falchikov, 2001; Lu and Law, 2012; Stiggins, 2002; Strijbos, et al., 2010; Topping, 1998).
Indeed, the literature reported many additional learning benefits for the peer assessor. Learning benefits can occur from exposure to other students’ approaches, not to mention through access to the assessment marking criteria (Topping, 1998). Peer assessment is also said to develop a student’s ability for self–learning, help them to recognise strengths and weaknesses, assist in developing professional skills and enhance reflective and critical thinking abilities (Sluijsmans, et al., 1998; Smith, et al., 2002; Topping, 1998). Nelson and Schunn (2009) argue that the potential cognitive benefits of peer assessment arise through summarisation, the identification of problems, localisation, and the provision of solutions. While Sadler and Good (2006) found no evidence of improved learning as a result of being a peer assessor, they did conclude that students who corrected their own work improved dramatically.
Self–assessment is said to facilitate greater autonomy in learning and is particularly effective in developing the self–learning skills (Boud and Falchikov, 1989) required for achievement in an online learning environment (Garrison, 2003). The cognitive benefits of self–assessment include improved understanding, performance, and ability for self–analysis (Gordon, 1992). There are also the long–term benefits of self–assessment. According to Oscarson (1989), we must not underestimate the importance of students being able to monitor and assess their own progress. The ability to self–assess is considered one of the most important skills that students require for effective and lifelong learning and for future professional development (Stefani, 1998; Taras, 2010).
Peter Norvig’s (2012) decision to use short videos was inspired by Salman Khan’s Khan Academy (2012). As mentioned earlier, the short video, interspersed with quizzes, emulates one–on–one tutoring. This draws on Bloom’s (1984) findings that tutoring resulted in an average improvement of two standard deviations over teaching using standard lectures. With short videos, the students have the ability to control the pace, pause, rewind, explore and return to the content. They are unable to do this with standard lectures or with video recordings which may be one to two hours long. Videos are kept deliberately short following Khan’s (2012) claim that videos of 10–15 minutes fit into an optimal period of time that students can maintain attention.
Online forums serve a number of roles. The first is as a mechanism for obtaining direct help with a problem, assessment or understanding of a concept (Darabi, et al., 2011). The second is as another mode of teaching to replace the face–to–face tutorial (Walker, 2007). Finally, the online forum creates a space for exploring the subject matter, forming relationships and collaborating on project work and other assignments (Graham and Misanchuk, 2005). Forums play a vital role in online courses as they help establish a learning community through which learners generate knowledge (Li, 2004). Indeed, Thomas (2002) argues that students learn just as much from their interactions with each other as they do from the course materials.
Whether online forums and video discussions work in lieu of direct contact depends on which aspect is being considered. It is probably beyond question that the Internet provides an effective means of providing resources for problem solving and assistance. MOOC forums have illustrated (Young, 2012) that peer assistance is an effective way of dealing with student questions and issues. Koller (2012) discovered that more often than not students were responding to each other’s posts before a moderator was able to. Due to the high volume of students enrolled in the course, the median response time for a forum question was 22 minutes, a level of service, Koller said, that her Stanford students certainly do not receive. This process could be enhanced by promoting trusted students who show ability in assisting others.
Whether forums and online videoconferencing are an effective substitute for face–to–face tutorials is unclear. On the affirmative side, Cartwright (2000) found that the online format promoted excellent content–discussion and reflection in an online nursing course. In addition, the researchers/instructors found that the online discussion showed more student–initiated activity, higher quality, and better application of concepts than the face–to–face discussion. Indeed, their results indicated that the online discussion was so effective that subsequent offerings of this course used online discussion as the only method for case discussion. Jeong (2003) concluded that online discussion board postings at a major Midwestern university did exhibit strong evidence of the critical thinking necessary for higher learning and the creation of new knowledge (Walker, 2007) and Han and Hill (2007) concluded that online forums, when properly designed, can be a significant facilitator of collaborative discourse that leads to higher–level learning (Darabi, et al., 2011).
Against online discussion, Kanuka and Anderson (2007) found that most of the online interaction among the 25 corporate managers invited to participate in their study proved merely to be an acquisition of information already compatible with existing knowledge. While the students’ overall knowledge base increased, there was little evidence of new knowledge creation. Gustafson and Gibbs (2000) found that while individual student posts were of high quality and exhibited reflective thinking about the content, the students failed to actively engage with one another’s ideas (Walker, 2007). Criticisms of the ability of online forums to serve as a sufficient substitute for face–to–face discussion appear to be based on the assumption that higher learning occurs in face–to–face tutorials, which may not always be the case.
While it was clear that the instructor plays an integral role in ensuring successful forum outcomes, there was less consensus regarding just how much of an online presence they should maintain. According to Mazzolini and Maddison (2007), the role of the instructor as moderator can vary from being the ‘sage on the stage’, to the ‘guide on the side’, to the ‘ghost in the wings’ [2]. Dysthe (2002) argues that the role of the instructor should be to intervene to motivate discussion and keep it on track. Guldberg and Pilkington (2007) argue, however, that if the tutor structures discussion and chooses questions carefully there may be less need to intervene to stimulate discussion or keep it on track than is sometimes assumed. This shifts the role of the tutor somewhat toward more preparatory and plenary work, with less direct participation required to support the development of discussion skills amongst students, particularly in the later stages of a course.
As with many things related to online and face–to–face learning, successful outcomes and high levels of student engagement do not just happen by accident. Knowledge construction occurs only as a result of careful planning: clear, well–defined, well–crafted questions and discussion topics. Without such planning and subsequent guidance, only lower levels of cognitive engagement will occur (Andresen, 2009). This is the case in any format of teaching and is not restricted to online learning environments.
MOOCs are in essence a restatement of online learning environments that have been in use for some time. What is new is the number of participants, and the fact that the format concentrates on short form videos, automated or peer/self–assessment, forums and, ultimately, open content from some of the world’s leading higher educational institutions. This review has demonstrated that MOOCs have a sound pedagogical basis for their formats. What we have not addressed, however, are the larger questions around whether taking a collection of MOOCs could replace obtaining an education on campus at a university, in all of its facets of personal development and education. To many people taking MOOCs, this point is moot: they simply do not have the opportunity to attend a university in person, whether because of a lack of prerequisite qualifications, geographical distance, or financial means. To those who have the choice of either attending a university or undertaking a series of MOOCs that could one day represent a degree equivalence, the evidence suggests that their experience will not necessarily be any less rich in either case. What MOOCs present, however, is an opportunity to conduct educational research and to examine the potential use of their elements in on–campus settings as a form of flipped classroom or blended learning. Whatever the outcomes, the nature of higher education will have changed as a result of this phenomenon.
About the authors
Associate Professor David Glance is director of the University of Western Australia (UWA) Centre for Software Practice, a UWA research and development centre. His research interests include open source software, technology and society and health informatics.
E–mail: david [dot] glance [at] uwa [dot] edu [dot] au
Associate Professor Martin Forsey teaches at the University of Western Australia and has written about neoliberal reform of schooling, school choice and supplementary education.
E–mail: martin [dot] forsey [at] uwa [dot] edu [dot] au
Myles Riley is a Research Assistant at the University of Western Australia.
The authors recognise the assistance given by the University of Western Australia in providing funding for this research.
1. Topping cited in Aoun, 2008, p. 1.
2. Mazzolini and Maddison, 2007, p. 194.
P.K. Agarwal, P.M. Bain, and R.W. Chamberlain, 2012. “The value of applied research: Retrieval practice improves classroom learning and recommendations from a teacher, a principal, and a scientist,” Educational Psychology Review, volume 24, number 3, pp. 437–448.
http://dx.doi.org/10.1007/s10648-012-9210-2
I.E. Allen and J. Seaman, 2005. Growing by degrees: Online education in the United States, 2005. Newburyport, Mass.: Sloan Consortium, at http://sloanconsortium.org/resources/growing_by_degrees.pdf, accessed 3 May 2013.
M.A. Andresen, 2009. “Asynchronous discussion forums: Success factors, outcomes, assessments, and limitations,” Educational Technology & Society, volume 12, number 1, pp. 249–257, and at http://www.ifets.info/journals/12_1/19.pdf, accessed 3 May 2013.
C. Aoun, 2008. ”Peer–assessment and learning outcomes: Product deficiency or process defectiveness?” Proceedings of the 34th International Association for Educational Assessment (IAEA) Conference, at http://www.cambridgeassessment.org.uk/ca/digitalAssets/154288_Aoun.pdf, accessed 3 May 2013.
Association for Learning Technology, 2012. “MOOC pedagogy: The challenges of developing for Coursera,” Association for Learning Technology Newsletter, number 28, at http://newsletter.alt.ac.uk/2012/08/mooc-pedagogy-the-challenges-of-developing-for-coursera/, accessed 3 May 2013.
C.B. Aviles, 2001. “A study of mastery learning versus non–mastery learning instruction in an undergraduate social work policy class,” Washington, D.C.: ERIC (Educational Resources Information Center), at http://www.eric.ed.gov/, accessed 3 May 2013.
T.J. Baker, 2012. “MOOC pedagogy: Theory & practice,” Profesorbaker’s ELT Blog (1 October), at http://profesorbaker.com/2012/10/01/mooc-pedagogy-theory-practice/, accessed 3 May 2013.
K. Barker and T. Wendel, 2001. E–learning: Studying Canada’s virtual secondary schools. Kelowna, B.C.: Society for the Advancement of Excellence in Education.
R.M. Bernard, P.C. Abrami, Y. Lou, E. Borokhovski, A. Wade, L. Wozney, P.A. Wallet, M. Fiset, and B. Huang, 2004. “How does distance education compare with classroom instruction? A meta–analysis of the empirical literature,” Review of Educational Research, volume 74, number 3, pp. 379–439. http://dx.doi.org/10.3102/00346543074003379
R.A. Bjork, 1999. “Assessing your own competence: Heuristics and illusions,” In: D. Gopher and A. Koriat (editors). Attention and performance XVII: Cognitive regulation of performance, interaction of theory and application. Cambridge, Mass.: MIT Press, pp. 435–459.
B.S. Bloom, 1984. “The 2 sigma problem: The search for methods of group instruction as effective as one–to–one tutoring,” Educational Researcher, volume 13, number 6, pp. 4–16. http://dx.doi.org/10.3102/0013189X013006004
B.S. Bloom, 1968. Learning for mastery. Durham, N.C.: Regional Education Laboratory for the Carolinas and Virginia.
S. Bloxham and A. West, 2004. “Understanding the rules of the game: Marking peer assessment as a medium for developing students’ conceptions of assessment,” Assessment & Evaluation in Higher Education, volume 29, number 6, pp. 721–733. http://dx.doi.org/10.1080/0260293042000227254
D. Boud and N. Falchikov, 1989. “Quantitative studies of student self–assessment in higher education: A critical analysis of findings,” Higher Education, volume 18, number 5, pp. 529–549. http://dx.doi.org/10.1007/BF00138746
C. Burrows and J.R. Okey, 2006. “The effects of a mastery learning strategy on achievement,” Journal of Research in Science Teaching, volume 16, number 1, pp. 33–37. http://dx.doi.org/10.1002/tea.3660160106
J. Cartwright, 2000. “Lessons learned: Using asynchronous computer–mediated conferencing to facilitate group discussion,” Journal of Nursing Education, volume 39, number 2, pp. 87–90.
C. Cavanaugh, K.J. Gillan, J. Kromrey, M. Hess, and R. Blomeyer, 2004. “The effects of distance education on K–12 student outcomes a meta–analysis,” at http://faculty.education.ufl.edu/cathycavanaugh/docs/EffectsDLonK-12Students1.pdf, accessed 3 May 2013.
K. Cho, C.D. Schunn, and R.W. Wilson, 2006. “Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives,” Journal of Educational Psychology, volume 98, number 4, pp. 891–901. http://dx.doi.org/10.1037/0022-0663.98.4.891
C.R. Clark, T.R. Guskey, and J.S. Benninga, 1983. “The effectiveness of mastery learning strategies in undergraduate education courses,” Journal of Educational Research, volume 76, number 4, pp. 210–214.
Coursera, 2012a. “Course Explorer,” at https://www.coursera.org/, accessed 19 November 2012.
Coursera, 2012b. “Coursera hits 1 million students across 196 countries,” at http://blog.coursera.org/post/29062736760/coursera-hits-1-million-students-across-196-countries, accessed 3 May 2013.
T.J. Crooks, 1988. “The impact of classroom evaluation practices on students,” Review of Educational Research, volume 58, number 4, pp. 438–481. http://dx.doi.org/10.3102/00346543058004438
A. Darabi, M.C. Arrastia, D.W. Nelson, T. Cornille, and X. Liang, 2011. “Cognitive presence in asynchronous online learning: A comparison of four discussion strategies,” Journal of Computer Assisted Learning, volume 27, number 3, pp. 216–227. http://dx.doi.org/10.1111/j.1365-2729.2010.00392.x
C. Doyle, 2012. “Professor Keith Devlin on teaching his first MOOC,” Technapex (27 November), at http://www.technapex.com/2012/11/professor-keith-devlin-on-teaching-his-first-mooc/, accessed 3 May 2013.
O. Dysthe, 2002. “The learning potential of a Web–mediated discussion in a university course,” Studies in Higher Education, volume 27, number 3, pp. 339–352. http://dx.doi.org/10.1080/03075070220000716
M. El–Tigi and R.M. Branch, 1997. “Designing for interaction, learner control, and feedback during Web–based learning,” Educational Technology, volume 37, number 3, pp. 23–29.
ERIC, 2012. “Education Resources Information Center,” at http://www.eric.ed.gov/, accessed 3 May 2013.
N. Falchikov, 2001. Learning together: Peer tutoring in higher education. New York: RoutledgeFalmer.
A. Fazackerley, 2012. “UK universities are wary of getting on board the mooc train,” Guardian (3 December), at http://www.guardian.co.uk/education/2012/dec/03/massive-online-open-courses-universities, accessed 3 May 2013.
R. Gagné, 1985. The conditions of learning and theory of instruction. Fourth edition. New York: Holt, Rinehart and Winston.
D.R. Garrison, 2003. “Cognitive presence for effective asynchronous online learning: The role of reflective inquiry, self–direction and metacognition,” Elements of quality online education: Practice and direction, volume 4, pp. 47–58.
Google Scholar, 2012. “Google Scholar,” at http://scholar.google.com/, accessed 9 December 2012.
M.J. Gordon, 1992. “Self–assessment programs and their implications for health professions training,” Academic Medicine, volume 67, number 10, pp. 672–679. http://dx.doi.org/10.1097/00001888-199210000-00012
C.R. Graham and M. Misanchuk, 2005. “Computer–mediated learning groups,” In: M. Khosrow–Pour (editor). Encyclopedia of Information Science and Technology, pp. 502–507.
K. Guldberg and R. Pilkington, 2007. “Tutor roles in facilitating reflection on practice through online discussion,” Educational Technology & Society, volume 10, number 1, pp. 61–72, and at http://www.ifets.info/journals/10_1/ets_10_1.pdf, accessed 3 May 2013.
T.R. Guskey and S.L. Gates, 1986. “Synthesis of research on the effects of mastery learning in elementary and secondary classrooms,” Educational Leadership, volume 43, number 8, pp. 73–81.
P. Gustafson and D. Gibbs, 2000. “Guiding or hiring? The role of the facilitator in online teaching and learning,” Teaching Education, volume 11, number 2, pp. 195–210. http://dx.doi.org/10.1080/713698967
S.Y. Han and J.R. Hill, 2007. “Collaborate to learn, learn to collaborate: Examining the roles of context, community, and cognition in asynchronous discussion,” Journal of Educational Computing Research, volume 36, number 1, pp. 89–123. http://dx.doi.org/10.2190/A138-6K63-7432-HL10
A.C. Jeong, 2003. “The sequential analysis of group interaction and critical thinking in online threaded discussions,” American Journal of Distance Education, volume 17, number 1, pp. 25–43. http://dx.doi.org/10.1207/S15389286AJDE1701_3
E.H. Joy and F.E. Garcia, 2000. “Measuring learning effectiveness: A new look at no–significant–difference findings,” Journal of Asynchronous Learning Networks, volume 4, number 1, pp. 33–39.
H. Kanuka and T. Anderson, 2007. “Online social interchange, discord, and knowledge construction,” Journal of Distance Education, volume 13, number 1, pp. 57–74.
J.D. Karpicke and P.J. Grimaldi, 2012. “Retrieval–based learning: A perspective for enhancing meaningful learning,” Educational Psychology Review, volume 24, number 3, pp. 401–418. http://dx.doi.org/10.1007/s10648-012-9202-2
J.D. Karpicke and J.R. Blunt, 2011. “Retrieval practice produces more learning than elaborative studying with concept mapping,” Science, volume 331, number 6018 (20 January), pp. 772–775.
J.D. Karpicke and H.L. Roediger, 2008. “The critical importance of retrieval for learning,” Science, volume 319, number 5865 (15 February), pp. 966–968.
J.D. Karpicke and H.L. Roediger, 2007. “Repeated retrieval during learning is the key to long–term retention,” Journal of Memory and Language, volume 57, number 2, pp. 151–162. http://dx.doi.org/10.1016/j.jml.2006.09.004
S. Kerka, 1996. “Distance learning, the Internet, and the World Wide Web,” at http://www.ericdigests.org/1997-1/distance.html, accessed 3 May 2013.
Khan Academy, 2012. “Khan Academy,” at http://www.khanacademy.org/, accessed 3 May 2013.
S. Khan, 2012. The one world schoolhouse: Education reimagined. London: Hodder & Stoughton.
D. Koller, 2012. “What we're learning from online education,” at http://www.ted.com/talks/daphne_koller_what_we_re_learning_from_online_education.html, accessed 3 May 2013.
M. Korn and J. Levitz, 2013. “Online courses look for a business model,” Wall Street Journal (1 January), at http://online.wsj.com/article/SB10001424127887324339204578173421673664106.html?mod=googlenews_wsj, accessed 3 May 2013.
R. Kozma, A. Zucker, C. Espinoza, R. McGhee, L. Yarnall, D. Zalles, and A. Lewis, 2000. “The online course experience: Evaluation of the Virtual High School’s third year of implementation, 1999–2000,” Menlo Park, Calif.: SRI International, at http://www.govhs.org/Images/SRIEvals/$file/SRIAnnualReport2000.pdf, accessed 3 May 2013.
C.–L.C. Kulik, J.A. Kulik, and R.L. Bangert–Drowns, 1990. “Effectiveness of mastery learning programs: A meta–analysis,” Review of Educational Research, volume 60, number 2, pp. 265–299. http://dx.doi.org/10.3102/00346543060002265
R. Lapsley, B. Kulik, R. Moody, and J.B. Arbaugh, 2008. “Is identical really identical? An investigation of equivalency theory and online learning,” Journal of Educators Online, volume 5, number 1, at http://www.thejeo.com/Archives/Volume5Number1/LapsleyetalPaper.pdf, accessed 3 May 2013.
T. Lewin, 2012. “College of future could be come one, come all,” New York Times (19 November), at http://www.nytimes.com/2012/11/20/education/colleges-turn-to-crowd-sourcing-courses.html, accessed 3 May 2013.
Q. Li, 2004. “Knowledge building community: Keys for using online forums,” TechTrends, volume 48, number 4, pp. 24–29. http://dx.doi.org/10.1007/BF02763441
J. Lu and N. Law, 2012. “Online peer assessment: Effects of cognitive and affective feedback,” Instructional Science, volume 40, number 2, pp. 257–275. http://dx.doi.org/10.1007/s11251-011-9177-2
R.H. Maki, W.S. Maki, M. Patterson, and P.D. Whittaker, 2000. “Evaluation of a Web–based introductory psychology course: I. Learning and satisfaction in on–line versus lecture courses,” Behavior Research Methods, Instruments, & Computers, volume 32, number 2, pp. 230–239. http://dx.doi.org/10.3758/BF03207788
M. Mazzolini and S. Maddison, 2007. “When to jump in: The role of the instructor in online discussion forums,” Computers & Education, volume 49, number 2, pp. 193–213. http://dx.doi.org/10.1016/j.compedu.2005.06.011
J. McDonald, 2002. “Is ‘as good as face-to-face’ as good as it gets?” Journal of Asynchronous Learning Networks, volume 6, number 2, pp. 10–23, and at http://sloanconsortium.org/jaln/v6n2/quotas-good-face-facequot-good-it-gets, accessed 3 May 2013.
C.E. McKissack, 1997. “A comparative study of grade point average (GPA) between the students in traditional classroom setting and the distance learning classroom setting in selected colleges and universities,” ETD Collection for Tennessee State University, paper AAI9806343, at http://digitalscholarship.tnstate.edu/dissertations/AAI9806343/, accessed 3 May 2013.
B. Means, Y. Toyama, R. Murphy, M. Bakia, and K. Jones, 2010. “Evaluation of evidence–based practices in online learning: A meta–analysis and review of online learning studies,” Washington, D.C.: U.S. Department of Education, at http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf, accessed 3 May 2013.
J.P. Merisotis and R.A. Phipps, 1999. “What’s the difference? A review of contemporary research on the effectiveness of distance learning in higher education,” Washington D.C.: Institute for Higher Education Policy, at http://www.ihep.org/Publications/publications-detail.cfm?id=88, accessed 3 May 2013.
R. Moe, 2012. “MOOC pedagogy — Waiting for big data?”, at http://allmoocs.wordpress.com/2012/10/30/mooc-pedagogy-waiting-for-big-data/, accessed 3 May 2013.
M.M. Nelson and C.D. Schunn, 2009. “The nature of feedback: How different types of peer feedback affect writing performance,” Instructional Science, volume 37, number 4, pp. 375–401. http://dx.doi.org/10.1007/s11251-008-9053-x
P. Norvig, 2012. “Peter Norvig: The 100,000–student classroom,” at http://www.ted.com/talks/peter_norvig_the_100_000_student_classroom.html, accessed 3 May 2013.
T. Olson and R.A. Wisher, 2002. “The effectiveness of Web–based instruction: An initial inquiry,” International Review of Research in Open and Distance Learning, volume 3, number 2, at http://www.irrodl.org/index.php/irrodl/article/view/103/182, accessed 3 May 2013.
S. Orn, 2012. “Napster, Udacity, and the Academy Clay Shirky,” at http://www.kennykellogg.com/2012/11/napster-udacity-and-academy-clay-shirky.html, accessed 3 May 2013.
M. Oscarson, 1989. “Self–assessment of language proficiency: Rationale and applications,” Language Testing, volume 6, number 1, pp. 1–13. http://dx.doi.org/10.1177/026553228900600103
PsycINFO, 2012. “PsycINFO,” at http://www.apa.org/pubs/databases/psycinfo/index.aspx, accessed 3 May 2013.
H.L. Roediger and A.C. Butler, 2011. “The critical role of retrieval practice in long–term retention,” Trends in Cognitive Sciences, volume 15, number 1, pp. 20–27. http://dx.doi.org/10.1016/j.tics.2010.09.003
H.L. Roediger and J.D. Karpicke, 2006. “The power of testing memory: Basic research and implications for educational practice,” Perspectives on Psychological Science, volume 1, number 3, pp. 181–210. http://dx.doi.org/10.1111/j.1745-6916.2006.00012.x
T.L. Russell, 1999. The no significant difference phenomenon: As reported in 355 research reports, summaries and papers. Raleigh, N.C.: North Carolina State University.
P.M. Sadler and E. Good, 2006. “The impact of self– and peer–grading on student learning,” Educational Assessment, volume 11, number 1, pp. 1–31. http://dx.doi.org/10.1207/s15326977ea1101_1
M. Shachar and Y. Neumann, 2003. “Differences between traditional and distance education academic performances: A meta–analytic approach,” International Review of Research in Open and Distance Learning, volume 4, number 2, at http://www.irrodl.org/index.php/irrodl/article/view/153, accessed 3 May 2013.
D. Sluijsmans, F. Dochy, and G. Moerkerke, 1998. “Creating a learning environment by using self–, peer– and co–assessment,” Learning Environments Research, volume 1, number 3, pp. 293–319. http://dx.doi.org/10.1023/A:1009932704458
D. Sluijsmans, S. Brand–Gruwel, J. van Merriënboer, and R.L. Martens, 2004. “Training teachers in peer–assessment skills: Effects on performance and perceptions,” Innovations in Education and Teaching International, volume 41, number 1, pp. 59–78. http://dx.doi.org/10.1080/1470329032000172720
H. Smith, A. Cooper, and L. Lancaster, 2002. “Improving the quality of undergraduate peer assessment: A case for student and staff development,” Innovations in Education and Teaching International, volume 39, number 1, pp. 71–81. http://dx.doi.org/10.1080/13558000110102904
L.A.J. Stefani, 1998. “Assessment in partnership with learners,” Assessment & Evaluation in Higher Education, volume 23, number 4, pp. 339–350. http://dx.doi.org/10.1080/0260293980230402
R.J. Stiggins, 2002. “Assessment crisis: The absence of assessment for learning,” Phi Delta Kappan, volume 83, number 10, pp. 758–765.
B.C. Storm, R.A. Bjork, and J.C. Storm, 2010. “Optimizing retrieval as a learning event: When and why expanding retrieval practice enhances long–term retention,” Memory & Cognition, volume 38, number 2, pp. 244–253. http://dx.doi.org/10.3758/MC.38.2.244
J.–W. Strijbos, S. Narciss, and K. Dünnebier, 2010. “Peer feedback content and sender’s competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency?” Learning and Instruction, volume 20, number 4, pp. 291–303. http://dx.doi.org/10.1016/j.learninstruc.2009.08.008
J.J. Summers, A. Waigandt, and T.A. Whittaker, 2005. “A comparison of student achievement and satisfaction in an online versus a traditional face–to–face statistics class,” Innovative Higher Education, volume 29, number 3, pp. 233–250. http://dx.doi.org/10.1007/s10755-005-1938-x
M. Taras, 2010. “Student self–assessment: Processes and consequences,” Teaching in Higher Education, volume 15, number 2, pp. 199–209. http://dx.doi.org/10.1080/13562511003620027
M.J.W. Thomas, 2002. “Learning within incoherent structures: The space of online discussion forums,” Journal of Computer Assisted Learning, volume 18, number 3, pp. 351–366. http://dx.doi.org/10.1046/j.0266-4909.2002.03800.x
K. Topping, 1998. “Peer assessment between students in colleges and universities,” Review of Educational Research, volume 68, number 3, pp. 249–276. http://dx.doi.org/10.3102/00346543068003249
T. Traphagan, J.V. Kucsera, and K. Kishi, 2010. “Impact of class lecture webcasting on attendance and learning,” Educational Technology Research and Development, volume 58, number 1, pp. 19–37. http://dx.doi.org/10.1007/s11423-009-9128-7
Udacity, 2012. “Udacity,” at http://www.udacity.com, accessed 3 May 2013.
H.J. Walberg, 1984. “Improving the productivity of America’s schools,” Educational Leadership, volume 41, number 8, pp. 19–27, and at http://www.ascd.org/ASCD/pdf/journals/ed_lead/el_198405_walberg.pdf, accessed 3 May 2013.
B.K. Walker, 2007. “Bridging the distance: How social interaction, presence, social presence, and sense of community influence student learning experiences in an online virtual environment,” unpublished Ph.D. dissertation, University of North Carolina, at http://libres.uncg.edu/ir/uncg/f/umi-uncg-1472.pdf, accessed 3 May 2013.
Web of Knowledge, 2012. “Web of Knowledge,” at http://apps.webofknowledge.com/, accessed 3 May 2013.
S.B. Wegner, K.C. Holloway, and E.M. Garton, 1999. “The effects of Internet-based instruction on student learning,” Journal of Asynchronous Learning Networks, volume 3, number 2, pp. 98–106, and at http://sloanconsortium.org/jaln/v3n2/effects-internet-based-instruction-student-learning, accessed 3 May 2013.
J.R. Young, 2012. “Providers of free MOOC’s now charge employers for access to student data,” Chronicle of Higher Education (4 December), at http://chronicle.com/article/Providers-of-Free-MOOCs-Now/136117/, accessed 3 May 2013.
Received 27 January 2013; accepted 19 February 2013.
To the extent possible under law, David Glance has waived all copyright and related or neighboring rights to “The pedagogical foundations of massive open online courses”. This work is published from Australia.
The pedagogical foundations of massive open online courses
by David George Glance, Martin Forsey, and Myles Riley.
First Monday, Volume 18, Number 5 - 6 May 2013