The Gamification of Learning: a Meta-analysis
Meta-Analysis (Open access). Published: 15 August 2019. Volume 32, pages 77–112 (2020).
Michael Sailer (ORCID: orcid.org/0000-0001-6831-5429) and Lisa Homner
This meta-analysis was conducted to systematically synthesize research findings on effects of gamification on cognitive, motivational, and behavioral learning outcomes. Results from random effects models showed significant small effects of gamification on cognitive (g = .49, 95% CI [0.30, 0.69], k = 19, N = 1686), motivational (g = .36, 95% CI [0.18, 0.54], k = 16, N = 2246), and behavioral learning outcomes (g = .25, 95% CI [0.04, 0.46], k = 9, N = 951). Whereas the effect of gamification on cognitive learning outcomes was stable in a subsplit analysis of studies employing high methodological rigor, effects on motivational and behavioral outcomes were less stable. Given the heterogeneity of effect sizes, moderator analyses were conducted to examine inclusion of game fiction, social interaction, learning arrangement of the comparison group, as well as situational, contextual, and methodological moderators, namely, period of time, research context, randomization, design, and instruments. Inclusion of game fiction and social interaction were significant moderators of the effect of gamification on behavioral learning outcomes. Inclusion of game fiction and combining competition with collaboration were particularly effective within gamification for fostering behavioral learning outcomes. Results of the subsplit analysis indicated that effects of competition augmented with collaboration might also be valid for motivational learning outcomes. The results suggest that gamification as it is currently operationalized in empirical studies is an effective method for instruction, even though factors contributing to successful gamification are still somewhat unresolved, especially for cognitive learning outcomes.
In recent years, the concept of gamification, defined as “the use of game design elements in non-game contexts” (Deterding et al. 2011 , p. 9), has received increased attention and interest in academia and practice, with education among the top fields of gamification research (Dichev and Dicheva 2017 ; Hamari et al. 2014 ; Seaborn and Fels 2015 ). Its hypothesized motivational power has made gamification an especially promising method for instructional contexts. However, as the popularity of gamification has increased, so have critical voices describing gamification as “the latest buzzword and the next fad” (Boulet 2012 , p. 1) or “Pavlovication” (Klabbers 2018 , p. 232). But how effective is gamification when it comes to learning, and what factors contribute to successful gamification?
Even though considerable research efforts have been made in this field (Hamari et al. 2014 ; Seaborn and Fels 2015 ), conclusive meta-analytic evidence on the effectiveness of gamification in the context of learning and education has yet to be provided. Therefore, the aim of this analysis was to statistically synthesize the state of research on the effects of gamification on cognitive, motivational, and behavioral learning outcomes in an exploratory manner. Furthermore, with this meta-analysis, not only did we try to answer the question of whether learning should be gamified but also how . Thus, we also investigated potential moderating factors of successful gamification to account for conceptual heterogeneity in gamification (Sailer et al. 2017a ). What is more, we included contextual, situational, and methodological moderators to account for different research contexts and study setups as well as methodological rigor in primary studies. To further investigate the stability and robustness of results, we assessed publication bias and computed subsplit analyses, which included studies with high methodological rigor.
Effects of Gamification on Learning
Gamification in the context of learning can be referred to as gamified learning (see Armstrong and Landers 2017 ; Landers 2014 ). Even though gamified learning and game-based learning have overlapping research literatures, a common game design element toolkit (Landers et al. 2018 ), and the same focus on adding value beyond entertainment, that is, using the entertaining quality of gamification interventions or (serious) games for learning (see Deterding et al. 2011 ; Zyda 2005 ), they are different in nature. Whereas game-based learning approaches imply the design of fully fledged (serious) games (see Deterding et al. 2011 ), gamified learning approaches focus on augmenting or altering an existing learning process to create a revised version of this process that users experience as game-like (Landers et al. 2018 ). Thus, gamification is not a product in the way that a (serious) game is; gamification in the context of learning is a design process of adding game elements in order to change existing learning processes (see Deterding et al. 2011 ; Landers et al. 2018 ).
Although many studies that have examined gamification have lacked a theoretical foundation (Hamari et al. 2014 ; Seaborn and Fels 2015 ), some authors have attempted to explain the relationship between gamification and learning by providing frameworks such as the theory of gamified learning (Landers 2014 ). This theory defines four components: instructional content, behaviors and attitudes, game characteristics, and learning outcomes. The theory proposes that instructional content directly influences learning outcomes as well as learners’ behavior. Because gamification is usually not used to replace instruction, but rather to improve it, effective instructional content is a prerequisite for successful gamification (Landers 2014 ). The aim of gamification is to directly affect behaviors and attitudes relevant to learning. In turn, these behaviors and attitudes are hypothesized to affect the relationship between the instructional content and learning outcomes via either moderation or mediation, depending on the nature of the behaviors and attitudes targeted by gamification (Landers 2014 ). The theory of gamified learning proposes a positive, indirect effect of gamification on learning outcomes. However, it is important to note that this theory provides no information about effective learning mechanisms triggered by game design elements. Such mechanisms can be found in well-established psychological theories such as self-determination theory (Ryan and Deci 2002 ).
Self-determination theory has already been successfully applied in the contexts of games (see Rigby and Ryan 2011 ) and gamification (see Mekler et al. 2017 ; Sailer et al. 2017a ). It postulates psychological needs for competence, autonomy, and social relatedness. The satisfaction of these needs is central for intrinsic motivation and subsequently for high-quality learning (see Ryan and Deci 2000 ), with self-determination theory emphasizing the importance of the environment in satisfying these psychological needs (Ryan and Deci 2002 ). Enriching learning environments with game design elements modifies these environments and potentially affects learning outcomes. From the self-determination perspective, different types of feedback can be central learning mechanisms triggered by game design elements. Constantly providing learners with feedback is a central characteristic of serious games (Wouters et al. 2013 ; Prensky 2001 ) and of gamification (Werbach and Hunter 2012 ). Although the effectiveness of feedback has been shown to vary depending on a variety of criteria such as the timing of feedback (immediate, delayed), frame of reference (criterial, individual, social), or level of feedback (task, process, self-regulation, self), feedback is among the most powerful factors in the relationship between educational interventions and learning in general (Hattie and Timperley 2007 ). Based on the theory of gamified learning as well as self-determination theory, gamification might influence learning outcomes in a positive way.
Previous research that has attempted to synthesize effects of gamification on learning outcomes has done this almost exclusively with (systematic) reviews. An exception is an early meta-analysis on gamification in educational contexts that was based on data from 14 studies published between 2013 and 2015 (Garland 2015 ). Under a random effects model, this meta-analysis identified a positive, medium-sized correlation between the use of gamification and learning outcomes ( r = .31, 95% CI [0.11, 0.47]). It should be noted that learning outcomes in this case almost exclusively pertained to motivational outcomes, except for one study that investigated knowledge retention. Though promising, these results should be viewed with caution because they were methodologically limited. The analysis was highly likely to be underpowered and had only limited generalizability due to the small sample size. Furthermore, r as an estimator of effect size cannot correct for the bias caused by small samples, which can lead to an overestimation of the effect. Also, no evaluation of publication bias was attempted.
Apart from this analysis, a series of reviews providing at least the general tendency of research findings in the field have been conducted. A multidisciplinary literature review identified 24 peer-reviewed empirical studies published between 2008 and 2013 (Hamari et al. 2014). Nine of the 24 studies were conducted in the context of education and learning. Overall, the results were mixed despite a positive tendency, which suggests the existence of confounding variables. Seaborn and Fels (2015) reported concordant findings in their review of eight empirical studies (published between 2011 and 2013) from the domain of education; they also identified a positive tendency but predominantly found mixed results. Results from these reviews were attributed to differences in gamified contexts and participant characteristics, but effects of novelty and publication bias were also discussed as possible reasons for these mixed findings.
A series of reviews specifically focusing on educational contexts was conducted by Dicheva et al. ( 2015 ), Dicheva and Dichev ( 2015 ), and Dichev and Dicheva ( 2017 ). The review by Dicheva et al. ( 2015 ) consisted of 34 empirical studies published between 2010 and 2014. The majority of experiments ( k = 18) reported positive results for gamification on various motivational, behavioral, and cognitive variables. Using the same search strategies as Dicheva et al. ( 2015 ), a subsequent review by Dicheva and Dichev ( 2015 ) resulted in 41 reports published between July 2014 and June 2015. The majority of studies were identified as inconclusive due to methodological inadequacies. Only 10 reports provided conclusive positive evidence, and three studies showed negative effects. Dichev and Dicheva ( 2017 ) followed up by conducting another subsequent literature search focusing on studies published between July 2014 and December 2015. They identified 51 additional studies, of which 41 experiments investigated the effect of gamification on motivational, behavioral, and cognitive outcomes. Whereas 12 experiments reported positive results, three reported negative results. Again, the majority of the experiments ( k = 26) were inconclusive. The large numbers of inconclusive studies in these reviews point toward a general problem: Gamification research lacks methodological rigor.
According to meta-analytic results and several reviews, gamification tends to have positive effects on different kinds of learning outcomes, albeit with mixed results. Thus, in this meta-analysis, we statistically synthesized the state of current research on gamification to investigate the extent to which gamification affects cognitive, motivational, and behavioral learning outcomes compared with conventional instructional methods.
Moderating Factors
Gamification applications can be very diverse, and research has often failed to acknowledge that there are many different game design elements at work that can result in different affordances for learners, modes of social interaction, and learning arrangements (Sailer et al. 2017a). Thus, we included different moderating factors to account for conceptual heterogeneity in gamification. Further, because contextual and situational factors might influence the effects of gamification on learning outcomes (Hamari et al. 2014), and gamification research lacks methodological rigor (Dicheva and Dichev 2015; Dichev and Dicheva 2017), we also included contextual, situational, and methodological moderators. The process of choosing potential moderating factors for the effects of gamification on learning outcomes was iterative in nature. We included moderating factors that were theoretically interesting and for which the body of literature was large enough to support their inclusion.
Inclusion of Game Fiction
From a self-determination theory perspective, game contexts can potentially satisfy the need for autonomy and relatedness by including choices, volitional engagement, sense of relevance, and shared goals (Rigby and Ryan 2011 ). These are assumed to be triggered by meaningful stories, avatars, nonplayer characters, or (fictional) teammates (Rigby and Ryan 2011 ; Sailer et al. 2017a ). A shared attribute of these elements is that they provide narrative characteristics or introduce a game world, both of which include elements of fantasy (Bedwell et al. 2012 ; Garris et al. 2002 ). In general, they focus on game fiction , which is defined as the inclusion of a fictional game world or story (Armstrong and Landers 2017 ). Inclusion of game fiction is closely related to the use of narrative anchors and has been shown to be effective for learning as it situates and anchors learning in a context (Clark et al. 2016 ) and can further serve as a cognitive framework for problem solving (Dickey 2006 ). In the context of games, results on the effectiveness of the inclusion of game fiction have been mixed (Armstrong and Landers 2017 ). Whereas Bedwell et al. ( 2012 ) found positive effects of the inclusion of game fiction on knowledge and motivation in their review, meta-analyses by others such as Wouters et al. ( 2013 ) found that serious games with narrative elements are not more effective than serious games without narrative elements. Thus, we investigated whether the use of game fiction moderates the effects of gamification on cognitive, motivational, and behavioral learning outcomes.
Social Interaction
The impact of relatedness in interpersonal activities can be crucial (Ryan and Deci 2002 ). Therefore, the type of social interaction that is likely to occur as a result of gamification could affect its relationship with learning outcomes. Collaboration and competition can be regarded as particularly important in this context (Rigby and Ryan 2011 ).
The term collaboration in this meta-analysis subsumes both collaborative and cooperative learning arrangements (i.e., situations in which learners work together in groups to achieve a shared goal), while being assessed either as a group (collaborative) or individually (cooperative; Prince 2004 ). In the broader context of games, collaboration has the potential to affect the needs for both relatedness and competence (Rigby and Ryan 2011 ). Collaboration not only allows for team work and thus the experience of being important to others, but it also enables learners to master challenges they otherwise might not be able to overcome on their own, which can result in feelings of competence.
Competition can cause social pressure to increase learners’ level of engagement and can have a constructive effect on participation and learning (Burguillo 2010 ). However, it also has the potential to either enhance or undermine intrinsic motivation (Rigby and Ryan 2011 ). In this context, two types of competition can be distinguished. On the one hand, destructive competition occurs if succeeding by tearing others down is required, resulting in feelings of irrelevance and oppression. On the other hand, constructive competition occurs if it is good-natured and encourages cooperation and mutual support (i.e., if competition is aimed at improving everyone’s skills instead of defeating someone). In this sense, constructive competition has the potential to foster feelings of relatedness, thereby enhancing intrinsic motivation (Rigby and Ryan 2011 ).
Collaboration, as well as competition augmented by aspects of collaboration (i.e., constructive competition), can have additional beneficial effects on intrinsic motivation compared with solitary engagement in an activity, because in both cases the need for relatedness is additionally fostered. Mere competition, however, can thwart feelings of relatedness when the goal is to defeat each other rather than to improve skills together (see Rigby and Ryan 2011).
Findings from the context of games have shown that collaborative gameplay can be more effective than individual gameplay (Wouters et al. 2013 ). Clark et al. ( 2016 ) included competition in their meta-analysis on digital games and found that combinations of competition and collaboration as well as single-player games without competitive elements can outperform games with mere competition. In this meta-analysis, we investigated whether different types of social interaction moderate the effects of gamification on cognitive, motivational, and behavioral learning outcomes.
Learning Arrangement of the Comparison Group
Active engagement in cognitive processes is necessary for effective and sustainable learning as well as deep levels of understanding (see Wouters et al. 2008 ). This emphasis on active learning in educational psychology is aligned with the (inter)active nature of games (Wouters et al. 2013 ). Similar to games, gamification also has high potential to create instructional affordances for learners to engage in active learning. Based on the theory of gamified learning, gamification is assumed to affect learning outcomes by enhancing the attitudes and behaviors that are relevant for learning (e.g., when rewards for taking high-quality notes are provided in gamification; Landers 2014 ). A prerequisite is that the behavior or attitude that is targeted by gamification must itself influence learning (Landers 2014 ) and thus create instructional affordances for learners to actively engage in cognitive processes with the learning material. However, how learners interact with the environment has to be considered because learners may interact with the environment in different ways and carry out certain learning activities whether or not they are intended by gamification designers or researchers (see Chi and Wylie 2014 ; Young et al. 2012 ).
Further, in between-subject studies, which were included in this meta-analysis, the learning arrangement of the comparison condition, against which gamification was contrasted, is crucial (see Chi and Wylie 2014 ). Learners in a comparison condition can receive different prompts or instructions to engage in different learning activities and thus bias the effects of gamification. Therefore, it is important to differentiate between the passive and active instructions of comparison groups. Whereas passive instruction includes listening to lectures, watching instructional videos, and reading textbooks, active instruction involves explicitly prompting the learners to engage in learning activities (e.g., assignments, exercises, laboratory experiments; Sitzmann 2011 ; Wouters et al. 2013 ). Similar to the approach used by Wouters et al. ( 2013 ) in the context of games and Sitzmann ( 2011 ) in the context of simulation games, we included the comparison group’s learning arrangement as a potential moderator of effects of gamification on cognitive, motivational, and behavioral learning outcomes.
Apart from these moderators, situational and contextual moderators were also included in the analysis. Thus, we included the period of time in which gamification was applied and the research context as potential moderators.
Period of Time
Previous reviews have indicated that the period of time during which gamification was used and investigated in primary studies has varied substantially (Dichev and Dicheva 2017; Seaborn and Fels 2015). On the one hand, reviews have raised the question of whether effects of gamification persist in the long run (Dichev and Dicheva 2017; Hamari et al. 2014; Seaborn and Fels 2015). On the other hand, research in the context of games has indicated that the effects of games are larger when players engage in multiple sessions and thus play over longer periods of time (Wouters et al. 2013). Thus, we included the period of time gamification was used as a potential moderator of its effects on cognitive, motivational, and behavioral learning outcomes.
Research Context
Gamification has been studied in different research contexts. Whereas the majority of studies found in reviews focusing on education were conducted in higher education settings, some of them were performed in primary and secondary school settings (see Dichev and Dicheva 2017 ). Further, some studies have been deployed in the context of further education or work-oriented learning (e.g., Lombriser et al. 2016 ) or informal settings with no reference to formal, higher, or work-related education (e.g., Sailer et al. 2017a ). Therefore, this meta-analysis includes the context of research as a moderating factor for effects of gamification.
As meta-analytic methods allow for a synthesis of studies using different designs and instruments, the degree of methodological rigor can vary between primary studies and thus jeopardize the conclusions drawn from meta-analyses (Wouters et al. 2013 ). Because studies in the context of gamification have often lacked methodological rigor (Dicheva and Dichev 2015 , 2017 ), we included methodological factors to account for possible differences in methodological study design and rigor across primary studies.
Randomization
Randomly assigning learners to experimental conditions allows researchers to rule out alternative explanations for differences in learning outcomes between different conditions. However, quasi-experimental studies do not allow alternative explanations to be ruled out (Wouters et al. 2013 ; Sitzmann 2011 ). For this reason, we included randomization as a moderating factor to account for methodological rigor.
Design
Besides randomization, the design of the primary studies can indicate methodological rigor. Primary studies using posttest-only designs cannot account for prior knowledge or initial motivation. The administration of pretests is particularly relevant for quasi-experimental studies because effects can be biased first by not randomly assigning learners to conditions and second by not controlling for learners' prior knowledge and motivation. Thus, we included design as a moderating factor to further account for methodological rigor in primary studies.
Instruments
Previous reviews have indicated that primary studies investigating gamification have often not used properly validated psychometric measurements to assess relevant outcomes (Dichev and Dicheva 2017 ; Hamari et al. 2014 ; Seaborn and Fels 2015 ). The use of standardized instruments can help to ensure the comparability of study results and further ensure the reliable measurement of variables (see Hamari et al. 2014 ). Thus, we included the type of instruments used in primary studies as a moderating factor of effects of gamification on cognitive, motivational, and behavioral learning outcomes.
To sum up, in this meta-analysis, we aimed to statistically synthesize the current state of research on the effects of gamification on cognitive, motivational, and behavioral learning outcomes. We also included the moderating factors inclusion of game fiction, social interaction, learning arrangement of the comparison group, period of time, research context, randomization, design, and instruments, which potentially affect the relationships between gamification and these outcomes. Further, we investigated publication bias and the stability of the findings by performing subsplit analyses that included only primary studies applying high methodological rigor.
Literature Search
To maximize the sensitivity of the search, and adopting the search criteria used in the review by Seaborn and Fels (2015), the terms gamification and gamif* were used in the academic literature search in all subject areas. Specific game design elements were not used as search terms because gamification research lacks consistent terms, definitions, and taxonomies for game design elements, and agreed-upon lists of game design elements that claim to be exhaustive are lacking (see Landers et al. 2018; Sailer et al. 2017a). Including specific game design elements as search terms would have put the analysis in danger of being biased toward specific game design elements while leaving out others that are less common and thus potentially not part of the search terms. The years of publication were not restricted. The literature search was conducted on March 3, 2017. We searched the following academic databases: ACM Digital Library, ERIC, IEEE Xplore, JSTOR, PubMed, ScienceDirect, and SpringerLink. Citations from these databases were exported directly from the websites. Aside from this search in academic databases, we conducted a Google Scholar search with the same terms to further maximize the scope of our literature search. To retrieve citations from Google Scholar, we used Publish or Perish (version 5), a software tool that allows all accessible search results to be downloaded automatically. We also screened the reference lists from Garland's (2015) meta-analysis and the reviews by Hamari et al. (2014), Seaborn and Fels (2015), Dicheva et al. (2015), Dicheva and Dichev (2015), and Dichev and Dicheva (2017). After removing duplicates, the literature search resulted in a total of 5548 possibly eligible records. An overview of the total number of search results per database is found in Fig. 1. Google Scholar shows a maximum of only 1000 search results per search query; therefore, 1100 denotes the number of records identified and retrieved from separate queries using gamification and gamif*.
Fig. 1. Study flow diagram
Inclusion and Exclusion Criteria
Gamification
Studies were required to include at least one condition in which gamification, defined as “the use of game design elements in non-game contexts” (Deterding et al. 2011 , p. 9), was used as an independent variable. Accordingly, studies describing interventions using the term gamification were excluded if the definition of gamification used in this meta-analysis did not apply to the intervention.
Learning Outcomes
Eligible studies were required to assess at least one learning outcome. Learning outcomes were divided into three categories: cognitive, motivational, and behavioral learning outcomes. Cognitive learning outcomes refer to conceptual knowledge or application-oriented knowledge. Conceptual knowledge contains knowledge of facts, principles, and concepts, whereas application-oriented knowledge comprises procedural knowledge, strategic knowledge, and situational knowledge (de Jong and Ferguson-Hessler 1996). Adopting a broad view of motivation (see Wouters et al. 2013), motivational learning outcomes encompass (intrinsic) motivation, dispositions, preferences, attitudes, engagement, as well as feelings of confidence and self-efficacy. Behavioral learning outcomes refer to technical skills, motor skills, or competences, such as learners' performance on a specific task, for example, a test flight after aviation training (Garris et al. 2002).
Language
Eligible studies were required to be published in English.
Research Design
Only primary studies applying quantitative statistical methods to examine samples of human participants were eligible. Furthermore, descriptive studies that did not compare different groups were excluded because the data obtained from such studies does not allow effect sizes to be calculated.
Control Group
Studies were required to use a between-subject design and to compare at least one gamification condition with at least one condition involving another instructional approach. As the goal of this meta-analysis was to investigate the addition of game design elements to systems that did not already contain them, studies comparing gamification with fully fledged games were ineligible.
Availability of Statistical Data
Studies were required to report sufficient statistical data to allow for the application of meta-analytic techniques.
Coding Procedure
First, we screened all titles for clearly ineligible publications (e.g., publications published in languages other than English). Next, we coded all remaining abstracts for eligibility. Finally, the remaining publications were retrieved and coded at the full-text level. An overview of the number of search results excluded per step is found in Fig. 1 .
Moderator coding for eligible studies was then performed by two independent coders on a random selection of 10 studies (approximately 25%), with interrater reliability ranging from κ = .76 to perfect agreement. Remaining coding discrepancies were discussed between the authors until mutual agreement was reached before coding the remaining studies. Furthermore, all statistical data were double-coded, and differing results were recalculated by the authors. Finally, data on moderator variables were extracted. The previously introduced moderators, which potentially influence the effectiveness of gamification on learning outcomes, were coded as follows.
Studies using game fiction by providing a narrative context or introducing a game world (e.g., meaningful stories or avatars) were coded yes , whereas studies that did not use game fiction were coded no . For this moderator, interrater reliability was κ = .76.
Studies in which learners competed against each other or nonplayer characters during gamified interventions were coded as competitive , whereas studies in which learners collaborated with each other or nonplayer characters were coded as collaborative . If a study included both competitive and collaborative elements, it was assigned the code competitive-collaborative . Studies in which learners engaged in a learning activity entirely on their own were coded as none . For this moderator, interrater reliability was κ = .83.
Passive instructional methods of the comparison group include listening to lectures, watching instructional videos, and reading textbooks. Active instruction refers to learning arrangements that explicitly prompt learners to engage in learning activities (e.g., assignments, exercises, laboratory experiments; see Sitzmann 2011 ). Mixed instruction refers to a combination of passive and active instructional methods (see Wouters et al. 2013 ). Studies using a waitlist condition as a control group were coded as untreated. For this moderator, interrater reliability was κ = .86.
The duration of the intervention was operationalized as the period of time over which the intervention took place. Studies were assigned to one of the categories: 1 day or less , 1 week or less (but longer than 1 day), 1 month or less (but longer than 1 week), half a year or less (but longer than 1 month), or more than half a year . For this moderator, coders achieved perfect agreement.
Depending on the research context, studies were coded as school setting or higher education setting . Studies in the context of further education or work-oriented learning were coded as work-related learning setting. Studies with no reference to formal, higher, or work-related education were coded as informal training setting . For this moderator, coders achieved perfect agreement.
If participants were assigned to the experimental and control groups randomly, the study was coded as experimental , whereas publications using nonrandom assignment were assigned the value quasi-experimental . For this moderator, coders reached perfect agreement.
Furthermore, studies were coded on the basis of whether they exclusively used posttest measures ( posttest only ) or also administered a pretest ( pre- and posttest ). For this moderator, coders achieved perfect agreement.
Studies using preexisting, standardized instruments to measure the variables of interest were coded as standardized , whereas studies using adapted versions of standardized measures were coded as adapted . If the authors developed a new measure, studies were assigned the value self-developed . For this moderator, coders reached perfect agreement.
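The interrater reliability values reported throughout this section are Cohen's κ. Purely as an illustration of how such chance-corrected agreement between two independent coders can be computed, here is a minimal Python sketch; the coder labels below are invented and not taken from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders' nominal codes."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# hypothetical game-fiction codes ("yes"/"no") assigned by two independent coders
print(cohens_kappa(["yes", "no", "no", "yes", "no"],
                   ["yes", "no", "yes", "yes", "no"]))  # ≈ .62
```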
Final Sample
Application of the exclusion criteria detailed above resulted in a final sample of 38 publications reporting 40 experiments. Three of the studies (Wasson et al. 2013; Grivokostopoulou et al. 2016; Sanmugam et al. 2016) were excluded from the analysis to avoid bias after the forest plot displaying the effect sizes for the analysis of cognitive learning outcomes showed that the corresponding effect sizes were extraordinarily large (see Fig. 2). An examination of the forest plots for motivational and behavioral learning outcomes revealed no remarkable outliers (see Fig. 2). Furthermore, the studies by de-Marcos et al. (2014) and Su and Cheng (2014) as well as the first of two studies reported by Hew et al. (2016) were excluded from the analysis because they reported the same experiments as previously published or more extensive studies included in this sample.
Fig. 2. Forest plots showing the distribution of effect sizes for cognitive (a), motivational (b), and behavioral (c) learning outcomes. Points and lines display effect sizes (Hedges' g) and confidence intervals, respectively. The overall effect size is shown by the diamond at the bottom, with its width reflecting the corresponding confidence interval.
The final sample of studies reporting cognitive learning outcomes comprised 19 primary studies reporting 19 independent experiments. They were published between 2013 and 2017 and examined a total of 1686 participants. Considering motivational learning outcomes, the final sample of studies consisted of 16 primary studies reporting 16 independent experiments that were published between 2013 and 2017 and examined a total of 2246 participants. Finally, the final sample of studies examining behavioral learning outcomes consisted of nine primary studies reporting 10 independent experiments. These experiments were published between 2014 and 2017 and examined a total of 951 participants.
Statistical Analysis
Effect sizes were estimated using the formulas provided by Borenstein et al. ( 2009 ). First, Cohen’s d was determined by dividing the difference between the means of the experimental and control groups by the pooled standard deviation. If no means and/or standard deviations were available, effect sizes were estimated on the basis of the t , F , or r statistics. To correct for possible bias caused by small samples, Cohen’s d was then used to calculate Hedges’ g (Hedges 1981 ).
For studies with pretests, effect sizes were adjusted to allow for a more accurate estimation of the effect by controlling for pretest effects. In these cases, Hedges’ g was calculated for both pre- and posttest comparisons between groups. Posttest values were then adjusted by subtracting pretest effect sizes from posttest effect sizes, while the respective variances were added up (see Borenstein et al. 2009 ).
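The authors computed their effect sizes in R; the following Python sketch only illustrates the standard formulas referenced above (pooled standard deviation, Cohen's d, Hedges' small-sample correction, and the pre-post adjustment from Borenstein et al. 2009). All group statistics in the example are invented.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                                   # Cohen's d
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)                  # correction factor J
    return j * d, j ** 2 * var_d                         # Hedges' g and its variance

# hypothetical posttest and pretest comparisons for one pre-post study
g_post, v_post = hedges_g(14.2, 3.1, 30, 12.8, 3.4, 28)
g_pre,  v_pre  = hedges_g(10.1, 3.0, 30, 10.0, 3.2, 28)

# pre-post adjustment: subtract the pretest effect, add the variances
g_adj, v_adj = g_post - g_pre, v_post + v_pre
```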
Another issue concerns studies reporting multiple effect sizes. This can occur when studies use multiple outcome measures or compare the same control group with more than one experimental group. To avoid the bias caused by dependent effect sizes, they were synthesized by calculating the mean effect size per study and the respective variance, taking into account the correlation between outcomes as recommended by Borenstein et al. ( 2009 ). The aggregation was performed using the R statistical environment (version 3.4.1) and Rstudio (version 1.0.143) with the MAd package.
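The aggregation itself was performed with the MAd package in R. As a sketch of the underlying composite formula from Borenstein et al. (2009) for dependent (correlated) effect sizes within one study, assuming a placeholder between-outcome correlation of r = .5 (the value actually used is not reported in this excerpt):

```python
import math

def composite_effect(effects, variances, r=0.5):
    """Mean of m correlated effect sizes and its variance (Borenstein et al. 2009)."""
    m = len(effects)
    mean_g = sum(effects) / m
    var_sum = sum(variances)
    for i in range(m):
        for j in range(m):
            if i != j:                                    # add covariance terms
                var_sum += r * math.sqrt(variances[i] * variances[j])
    return mean_g, var_sum / m ** 2

# two hypothetical, correlated outcomes reported by the same study
g_study, v_study = composite_effect([0.40, 0.30], [0.020, 0.030], r=0.5)
```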
Some studies reported data on different experimental groups or outcomes that, even though they belonged to the same study, required assignment to different moderator levels. In these cases, aggregation would have led to a loss of data and would not have allowed these studies to be included in the respective subgroup analyses because they could not be unambiguously assigned to a single moderator level. Therefore, such groups were treated separately and not aggregated for the moderator analyses but were aggregated for all analyses not affected by this problem.
Because heterogeneity among the participant samples and the experimental conditions could be presumed, we used a random effects model for the main analysis (Borenstein et al. 2009), which was also conducted using R and RStudio with the metafor and Matrix packages. Because true effect sizes can vary under a random effects model, it is important to identify and quantify this heterogeneity. The degree of homogeneity of the effect sizes was assessed using the Q statistic (Borenstein et al. 2009), for which the p value indicates the presence or absence of heterogeneity. Additionally, I² was used to quantify the degree of inconsistency in the results of the included studies on a scale ranging from 0 to 100%.
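The authors fit their random effects models with metafor in R. As an illustrative sketch of the quantities described above (random effects summary, Q, and I²), here is a plain Python implementation using the simple DerSimonian-Laird estimator of the between-study variance; the exact estimator used in the paper is not specified in this excerpt, and the input values below are invented.

```python
def random_effects(effects, variances):
    """Random effects summary, Q statistic, and I^2 (DerSimonian-Laird tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = [1 / (v + tau2) for v in variances]      # random effects weights
    summary = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    se = (1 / sum(w_star)) ** 0.5
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return summary, se, q, i2

# hypothetical per-study Hedges' g values and their variances
g_bar, se, q, i2 = random_effects([0.6, 0.2, 0.5, 0.1], [0.04, 0.05, 0.03, 0.06])
```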
Meta-regressions under a mixed effects model (Borenstein et al. 2009) were conducted for the moderator analyses using the metafor and Matrix packages in R and RStudio. This analysis uses a Q test as an omnibus test to estimate the heterogeneity among studies explained by the moderator as well as the residual heterogeneity (i.e., whether unexplained between-study variance remained). If a moderator level had a sample size smaller than two, it was excluded from the analysis. For categorical moderators, post hoc comparisons under a random effects model were calculated with the MAd package if the omnibus test indicated significance.
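The moderator analyses in the paper are meta-regressions fitted with metafor. A simplified way to express the same idea for a categorical moderator is the subgroup Q-between test from Borenstein et al. (2009): compute one summary effect per moderator level (e.g., with the random_effects() sketch above) and test whether those summaries differ. The numbers below are invented.

```python
from scipy.stats import chi2

def q_between(level_means, level_variances):
    """Omnibus test of a categorical moderator: do the subgroup summaries differ?"""
    w = [1 / v for v in level_variances]
    grand = sum(wi * m for wi, m in zip(w, level_means)) / sum(w)
    q = sum(wi * (m - grand) ** 2 for wi, m in zip(w, level_means))
    df = len(level_means) - 1
    return q, df, 1 - chi2.cdf(q, df)                 # Q, df, p value

# e.g., two hypothetical moderator levels (summary effect, variance of that summary)
q, df, p = q_between([0.55, 0.10], [0.012, 0.020])
```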
Because the reporting of a sufficient amount of statistical data on at least one outcome of interest was used as an eligibility criterion, there were no missing data in the summary effect analyses. However, there were missing data for the moderation analyses because not all studies reported sufficient information about all variables, and thus, they could not be unambiguously coded. These cases were excluded from the respective moderator analyses.
Furthermore, bivariate correlations between the presence of learning outcomes and moderator levels were computed for all studies. Therefore, all learning outcomes and moderator levels were dummy coded and included in a correlation matrix. The moderators inclusion of game fiction , randomization , and design were already initially dichotomously coded, and thus, only the moderator levels that indicated the presence of game fiction and randomization and the application of pre-posttest designs were included in the matrix. The database for this analysis is the final sample detailed above, excluding studies reporting the same experiment as other studies and the outliers described above.
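A minimal sketch of this dummy-coding step using pandas follows; the coding table is entirely hypothetical. Pearson correlations between dummy-coded (0/1) variables correspond to the bivariate correlations reported in Table 1.

```python
import pandas as pd

# hypothetical study-level coding: 1 = present, 0 = absent
coding = pd.DataFrame({
    "cognitive_outcome":    [1, 1, 0, 0, 1],
    "motivational_outcome": [0, 1, 1, 1, 0],
    "game_fiction":         [0, 1, 1, 0, 0],
    "randomization":        [1, 0, 1, 1, 0],
})
corr_matrix = coding.corr()   # correlation matrix of the dummy-coded variables
```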
Publication Bias
The absence of publication bias is highly unlikely unless a set of preregistered reports is examined (Carter et al. 2019 ). Because this was not the case for the primary studies included in this meta-analysis, the possibility of publication bias was explored by using a funnel plot, a selection model, and the fail-safe number. Evaluations of the funnel plots showed no obvious asymmetries for the samples of studies reporting cognitive, motivational, and behavioral learning outcomes (see Fig. 3 ), initially indicating the absence of strong publication bias.
Fig. 3. Funnel plots for cognitive (a), motivational (b), and behavioral (c) learning outcomes. Black dots indicate studies from the present sample, positioned by their respective estimated effect size and standard error.
The selection model indicated that there was no publication bias for the subsets of studies reporting cognitive, χ²(1) = .45, p = .50, motivational, χ²(1) = 1.04, p = .31, and behavioral learning outcomes, χ²(1) = .44, p = .50. For this reason, estimates from the initial random effects model were computed for the cognitive, motivational, and behavioral learning outcomes.
Further, Rosenberg’s ( 2005 ) fail-safe number was used as an estimate of the degree to which publication bias existed in the sample. The fail-safe number indicates the number of nonsignificant studies that would need to be added to reduce a significant effect to a nonsignificant effect. It was computed for all significant summary effects as well as for every moderator level that showed a statistically significant effect. The fail-safe number can be considered robust when it is greater than 5 n + 10, with n standing for the number of primary studies (Rosenthal 1991 ).
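As a worked example of this criterion, the fail-safe N of 469 reported below for cognitive learning outcomes can be checked against Rosenthal's threshold of 5n + 10 with n = 19 primary studies; a tiny sketch:

```python
def failsafe_is_robust(failsafe_n, n_studies):
    """Rosenthal's criterion: the fail-safe number should exceed 5n + 10."""
    return failsafe_n > 5 * n_studies + 10

failsafe_is_robust(469, 19)   # threshold is 5*19 + 10 = 105 -> True (robust)
```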
To further investigate the stability of the effects, subsplits were computed for the summary effect and moderator analyses for conceptual, situational, and contextual moderators. In these subsplits for cognitive, motivational, and behavioral learning outcomes, only studies applying experimental designs or using quasi-experimental designs with pre- and posttests were included. Thus, we assumed that the studies included in these subsplit analyses were characterized by high methodological rigor.
Before reporting the summary effect and moderator analyses, we report correlations between the presence of different learning outcomes and moderator levels in the gamification studies included in the final sample to illustrate possible co-occurrences of outcomes and moderator levels. The resulting correlation matrix, including the number of studies for each cell, is shown in Table 1. All of the following results refer to significant correlations. Studies investigating cognitive learning outcomes were less likely to also observe motivational (r = −.47) and behavioral learning outcomes (r = −.60). Further, studies observing cognitive learning outcomes were more likely to use a mixed instruction comparison group (r = .41) and self-developed instruments (r = .35), and less likely to last 1 day (r = −.53), indicating that these studies often took longer periods of time. Studies investigating motivational learning outcomes were more likely to use standardized (r = .44) or adapted instruments (r = .54). Investigating behavioral learning outcomes was positively correlated with using an active comparison group (r = .35).
Studies showed some specific patterns for their respective research context: Studies performed in school settings were more likely to use passive instruction for the comparison group ( r = .43). Higher education studies were more likely to investigate cognitive learning outcomes ( r = .35) that were compared with a mixed instruction control group ( r = .45) over a period of half a year ( r = .48). Some kind of social interaction was more likely to occur in higher education contexts (correlation with no social interaction r = − .39). In work-related training settings, studies were more likely to observe behavioral learning outcomes ( r = .37), and results were likely to be compared with a comparison group receiving active instruction ( r = .34). The use of no social interaction was positively correlated with work settings ( r = .49), and no study investigated competitive or collaborative gamification implementation in this setting. Studies in informal training settings were likely to apply randomization ( r = .35).
Further significant correlations were shown for the inclusion of game fiction: Studies including game fiction were less likely to apply randomization ( r = − .41) and were more likely to include collaborative modes of social interaction ( r = .45).
Summary Effect Analyses
Cognitive Learning Outcomes
The random effects model yielded a significant, small effect of gamification on cognitive learning outcomes (g = .49, SE = .10, p < .01, 95% CI [0.30, 0.69]). Homogeneity estimates showed a significant and substantial amount of heterogeneity for cognitive learning outcomes, Q(18) = 57.97, p < .01, I² = 72.21%. The fail-safe number could be considered robust for cognitive learning outcomes (fail-safe N = 469).
Motivational Learning Outcomes
The results of the random effects model showed a significant, small effect of gamification on motivational learning outcomes (g = .36, SE = .09, p < .01, 95% CI [0.18, 0.54]) and a significant amount of heterogeneity, Q(15) = 73.54, p < .01, I² = 75.13%. The fail-safe number indicated a robust effect for motivational learning outcomes (fail-safe N = 316).
Behavioral Learning Outcomes
The random effects model showed a significant, small effect of gamification on behavioral learning outcomes (g = .25, SE = .11, p < .05, 95% CI [0.04, 0.46]). Results showed a significant and substantial amount of heterogeneity for behavioral learning outcomes, Q(9) = 22.10, p < .01, I² = 63.80%. The fail-safe number for behavioral learning outcomes could be interpreted as robust (fail-safe N = 136).
Moderator Analyses
As the homogeneity estimates showed a significant and substantial amount of heterogeneity for cognitive, motivational, and behavioral learning outcomes, moderator analyses were conducted to determine whether additional factors could account for the variance observed in the samples. Not all of the following comparisons contained all possible levels of the respective moderator because levels with k ≤ 1 were excluded from these analyses. Summaries of the moderator analysis are shown in Table 2 for cognitive, Table 3 for motivational, and Table 4 for behavioral learning outcomes.
The mixed effects analysis concerning game fiction resulted in no significant effect size differences for cognitive learning outcomes, Q(1) = 0.04, p = .85, and the residual variance was significant (p < .01). For motivational learning outcomes, effect size magnitude did not vary significantly, Q(1) = 0.13, p = .72, with significant residual variance remaining (p < .01). Finally, the results of the mixed effects analysis for behavioral learning outcomes showed a significant difference in the magnitudes of the effect sizes for the game fiction moderator, Q(1) = 5.45, p < .05, with no significant residual variance remaining (p = .08). An evaluation of the separate levels showed that the inclusion of game fiction yielded a significant, small effect on behavioral learning outcomes, in contrast to a nonsignificant result for not including game fiction.
Concerning social interaction, the results of the mixed effects analysis showed no significant difference in effect sizes between different forms of social interaction for cognitive learning outcomes, Q (3) = 0.80, p = .85, or for motivational learning outcomes, Q (3) = 3.85, p = .28. For both outcomes, significant residual variance remained ( p < .01). However, for behavioral learning outcomes, there was a significant difference between competitive, competitive-collaborative, and no social interaction, Q (2) = 12.80, p < .01, with no significant residual variance remaining ( p = .05). Post hoc comparisons under a random effects model showed a significant difference between competitive-collaborative interaction and no social interaction ( p < .05), with the former outperforming the latter.
As for the learning arrangement of the comparison group, no difference in effect sizes could be found regarding cognitive learning outcomes, Q (2) = 1.49, p = .48, or for motivational learning outcomes, Q (2) = 1.17, p = .56. Significant residual variance remained for both outcomes ( p < .01). We could not assess the effect of the learning arrangement of the comparison group for behavioral learning outcomes because there was only one subgroup with more than one study.
There was no significant difference between interventions with different durations for cognitive, Q (2) = 0.17, p = .92, or behavioral learning outcomes, Q (1) = 0.28, p = .59. For both outcomes, significant residual variance remained ( p < .01). For motivational learning outcomes, there was a significant difference between gamification interventions lasting 1 day or less and interventions lasting half a year or less, Q (1) = 4.93, p < .05. Gamification interventions lasting half a year or less showed significantly larger effects on motivational learning outcomes than interventions lasting 1 day or less. Significant residual variance remained ( p < .01).
Mixed effects analyses showed no significant difference between different research contexts for motivational, Q (3) = 5.09, p = .17, or behavioral learning outcomes, Q (2) = 0.67, p = .71. For both outcomes, significant residual variance remained ( p < .01). For cognitive learning outcomes, there was a significant difference between school, higher education, and informal training settings, Q (2) = 12.48, p < .01, with significant residual variance remaining ( p < .01). Post hoc comparisons under a random effects model showed a significant difference between studies performed in school settings and studies performed in higher education settings ( p < .01) or informal training settings ( p < .05). The effects found in school settings were significantly larger than those found in either higher education settings or informal education settings.
Results from the mixed effects analyses showed no significant difference between experimental and quasi-experimental studies regarding cognitive, Q (1) = 1.18, p = .28, or behavioral learning outcomes, Q (1) = 1.63, p = .20. The residual variance was significant ( p < .01). However, for motivational learning outcomes, there was a significant difference in the magnitude of the effect size between experimental and quasi-experimental studies, Q (1) = 4.67, p < .05. Quasi-experimental studies showed a significant medium-sized effect on motivational learning outcomes, whereas experimental studies showed a nonsignificant effect. The residual variance was significant ( p < .01).
Mixed effects analyses showed no significant effects of applied designs for cognitive learning outcomes, Q (1) = 0.05, p = .82, and the residual variance was significant ( p < .01). An evaluation of the influence of the applied design was not possible for the motivational and behavioral learning outcomes because there was only one subgroup with more than one study.
An evaluation of the influence of the instrument that was used was not possible for cognitive or behavioral learning outcomes because there was only one subgroup with more than one study in each of these areas. Moreover, there was no significant difference in effect sizes between studies using standardized, adapted, or self-developed instruments for motivational learning outcomes, Q (2) = 2.85, p = .24, with significant residual variance remaining ( p < .01).
Subsplit Analyses Including Studies with High Methodological Rigor
Subsplits were performed for studies with high methodological rigor. Thus, in the following analyses, only studies with experimental designs or quasi-experimental designs that used pre- and posttests were included. In line with the summary effect analysis above, the subsplit for cognitive learning outcomes showed a small effect of gamification on cognitive learning outcomes (g = .42, SE = .14, p < .01, 95% CI [0.14, 0.68], k = 9, N = 686, fail-safe N = 51), with homogeneity estimates showing a significant and substantial amount of heterogeneity, Q(8) = 19.90, p < .05, I² = 51.33%. In contrast to the summary effects analysis above, results of the subsplit for motivational learning outcomes showed no significant summary effect of gamification on motivational learning outcomes (g = .22, SE = .17, p = .20, 95% CI [−0.11, 0.56], k = 7, N = 1063). Significant residual variance was left, Q(6) = 35.99, p < .01, I² = 84.23%. Further, the subsplit summary effect of gamification on behavioral learning outcomes was not significant (g = .27, SE = .22, p = .22, 95% CI [−0.16, 0.70], k = 5, N = 667). The residual variance was significant, Q(4) = 17.66, p < .01, I² = 78.59%.
Moderator analyses for the subsplit for cognitive learning outcomes could only be performed for social interaction , learning arrangement of the comparison group , and period of time because the subgroups were too small for several other moderator levels (see Table 5 ). In line with the moderator analysis described above (see Table 2 ), social interaction, learning arrangement of the comparison group, and period of time did not significantly moderate the effects of gamification on cognitive learning outcomes in the subsplit analysis.
For motivational learning outcomes, moderator analyses were conducted for inclusion of game fiction , social interaction , and research context . Again, other moderator analyses were not possible because the subgroups were too small (see Table 6 ). In line with the initial moderator analyses for motivational learning outcomes (see Table 3 ), game fiction and research context did not significantly moderate the effects of gamification on motivational learning outcomes. Contrary to the initial moderator analysis, the subsplit analysis for social interaction showed significant differences in the magnitude of the effect size, Q (2) = 7.20, p < .05, with significant residual variance remaining ( p < .01). Gamification with combinations of competition and collaboration showed a medium-sized effect on motivational learning outcomes and thus outperformed the gamification environments that solely used competition.
Subsplit moderator analyses for behavioral learning outcomes were only possible for the social interaction moderator (see Table 7 ). The results of this analysis were in line with the initial moderator analysis (see Table 4 ) in showing a significant moderating effect of social interaction on the relationship between gamification and behavioral learning outcomes, Q (1) = 6.87, p < .01. Gamification with competitive-collaborative modes of social interaction outperformed gamification that solely used competitive modes of social interaction.
The aim of this meta-analysis was to statistically synthesize the current state of research on the effects of gamification on cognitive, motivational, and behavioral learning outcomes, taking into account potential moderating factors. Overall, the results indicated significant, small positive effects of gamification on cognitive, motivational, and behavioral learning outcomes. These findings provide evidence that gamification benefits learning, and they are in line with the theory of gamified learning (Landers 2014) and self-determination theory (Ryan and Deci 2002). In addition, the results of the summary effect analyses were similar to results from meta-analyses conducted in the context of games (see Clark et al. 2016; Wouters et al. 2013), indicating that the power of games can be transferred to non-game contexts by using game design elements. Given that gamification research is an emerging field of study, the number of primary studies eligible for this meta-analysis was rather small, and the effects found in this analysis were in danger of being unstable. Therefore, we investigated the stability of the summary effects. Fail-safe numbers, as an estimate of the degree to which publication bias may exist in the samples, indicated that the summary effects for cognitive, motivational, and behavioral outcomes were stable. However, subsplits exclusively including studies with high methodological rigor only supported the summary effect of gamification on cognitive learning outcomes. Although these analyses were underpowered, the summary effects for motivational and behavioral learning outcomes were not significant. Thus, according to the subsplit analyses, the summary effects of gamification on motivational and behavioral learning outcomes are not robust. For both motivational and behavioral learning outcomes, the subsplits indicate that the effects of gamification depend on the mode of social interaction. In a nutshell, gamification can be effective for motivational and behavioral learning outcomes when applied in competitive-collaborative settings, in contrast to merely competitive settings.
The significant, substantial amount of heterogeneity identified in all three subsamples was in line with the positive, albeit mixed findings of previous reviews. For this reason, moderator analyses were computed to determine which factors were most likely to contribute to the observed variance.
Moderators of the Effectiveness of Learning with Gamification
For both cognitive and motivational learning outcomes, there was no significant difference in effect sizes between the inclusion and the exclusion of game fiction. The mechanisms that are supposed to be at work according to self-determination theory (Rigby and Ryan 2011 ) were not fully supported in this analysis. However, the results on cognitive and motivational learning outcomes were in line with meta-analytic evidence from the context of games, which found that including game fiction was not more effective than excluding it (Wouters et al. 2013 ). Nevertheless, for behavioral learning outcomes, the effects of including game fiction were significantly larger than the effects without game fiction. The fail-safe number for the effect of game fiction on behavioral learning outcomes indicated a stable effect that did not suffer from publication bias. However, studies including game fiction were less likely to use experimental designs, which can be a confounding factor.
These results raise the question as to why the use of game fiction seems to matter only for behavioral learning outcomes. A striking divergence between the data representing behavioral learning outcomes and cognitive or motivational learning outcomes is the point in time at which the data were collected: Whereas cognitive and motivational learning outcomes were almost exclusively measured after interventions, behavioral learning outcomes were almost exclusively measured during interventions (i.e., the measurement was to some extent confounded with the intervention itself). Therefore, it makes sense to ask whether the significant difference in behavioral learning outcomes regarding the use of game fiction really reflects a difference in effectiveness for learning or rather points toward an effect of gamification with game fiction on assessment. Previous research has in fact shown that gamification can affect the data collected (e.g., the number of questions completed or the length of answers produced; Cechanowicz et al. 2013 ). Even though behavioral learning outcomes were typically measured by participants completing specific tasks, such as assembling Lego® cars (Korn et al. 2015 ), it may well be the case that these findings also transfer to the number of completed tasks or the time and effort invested in completing a task. From a theoretical point of view, the present results might merely reflect the idea that including game fiction was more effective in getting learners to invest more effort in completing tasks than not including game fiction. It remains uncertain whether these differences would also appear if performance were assessed without gamification.
We attempted to include the time of assessment as a moderator in this analysis; however, there were not enough studies in the subgroups to allow for a conclusive evaluation. Future studies should consider assessing behavioral learning outcomes not only as process data during the intervention but also after the intervention to avoid confounding effects.
Additionally, as gamification can be described as a design process in which game elements are added, the nonsignificant result of this moderator for cognitive and motivational learning outcomes could be explained by the quality with which the (game) design methods were applied: Most learning designers who apply and investigate gamification in the context of learning are not trained as writers and are probably, on average, not successful at applying game fiction effectively. Further, the findings could also be affected by how the moderator was coded. For example, the effectiveness of gamification might depend on whether game fiction is used only at the beginning of an intervention to provide initial motivation and becomes irrelevant afterwards (e.g., avatars that cannot be developed further), or whether it remains relevant throughout the intervention (e.g., meaningful stories that continue to unfold). These possible qualitative differences in the design and use of game fiction could have contributed to the mixed results found in the present analysis. Due to the small sample size, a more fine-grained coding was not possible because it would have led to subgroups that were too small to allow any conclusive comparisons. Further, subsplit analyses regarding the moderator inclusion of game fiction were not possible for behavioral learning outcomes because the subgroups were too small.
For cognitive and motivational learning outcomes, no significant difference in effect sizes was found between the different types of social interaction. For behavioral outcomes, a significant difference was found between competitive-collaborative interaction and no interaction, in favor of the former. As mentioned previously, behavioral learning outcomes, as opposed to cognitive and motivational learning outcomes, consisted almost exclusively of process data. Therefore, the results of this analysis suggest that different types of social interaction affect learners’ behavior within gamification. Evoking social interaction via gamification in the form of combinations of collaboration and competition was most promising for behavioral learning outcomes. This result for behavioral learning outcomes is in line with evidence from the context of games showing that combinations of competition and collaboration in games are promising for learning (Clark et al. 2016 ).
Although the positive effect of competitive-collaborative modes of interaction on behavioral learning outcomes was in danger of being unstable with respect to its fail-safe number, the subsplit of studies with high methodological rigor confirmed the advantage of the combination of competition and collaboration, but here, the advantage was over competition by itself. Interestingly, the findings from the subsplits of motivational and behavioral learning outcomes showed parallels as both showed that combinations of competition and collaboration outperformed mere competition. These results can be interpreted to mean that mere competition might be problematic for fostering learners’ motivation and performance—at least for some learners under certain circumstances.
A factor that might have contributed to these results is learners’ subjective perceptions. As Rigby and Ryan ( 2011 ) pointed out, satisfying the need for relatedness, especially in competitive situations, largely depends on how competition is experienced. Perceiving competition as constructive or destructive is “a function of the people involved rather than the activity itself” (Rigby and Ryan 2011 , p. 79). The interaction between learner characteristics and gamification should therefore be investigated in primary studies. A closely related problem concerns whether learners engaged in the interventions in the manner in which they were intended. Most of the studies did not provide information about whether or not learners took advantage of the affordances to engage in collaborative and/or competitive interactions. These issues can only be resolved by conducting future primary studies with data on learning processes, which will allow for investigations of these aspects on a level more closely related to what actually occurs during interventions and checks for implementation fidelity. Besides observations of learners’ interactions with learning environments, gamification research should take advantage of log-file analyses and learning analytics to analyze the human-environment interaction in gamified environments (see Young et al. 2012 ).
Additionally, differences in learners’ skill levels may also have contributed to the mixed results found in this analysis. Competition may be problematic if it occurs between learners with widely different skill levels because succeeding can be unattainable for learners with lower skill levels (Slavin 1980 ; Werbach and Hunter 2012 ). Only a few primary studies have considered participants’ skill levels. Future studies applying competitive or competitive-collaborative interventions should take prior skill level into account and report it. Further, the effectiveness of different types of social interaction could be influenced by the way the gamification environment and the specific game design elements were designed. Landers et al. ( 2017 ) showed that leaderboards implemented to foster competition can be interpreted as goals, and that leaderboards setting easy goals are likely to be less effective than leaderboards setting difficult goals. This suggests that, even within single moderator levels of our meta-analysis, effects could vary depending on the specific design of the game elements.
Results of the moderator learning arrangement of the comparison group did not show significant differences between gamification when compared with different types of instruction (i.e., passive, active, or mixed instruction) in the comparison group for cognitive and motivational learning outcomes. For behavioral learning outcomes, the analysis was not possible because the moderator subgroups were too small. On the one hand, these results may indicate that gamification is not yet used in a way that focuses on fostering high-quality learning activities and, thus, does not take full advantage of the possibilities that gamification might have. As proposed by the theory of gamified learning, gamification can affect learning outcomes by enhancing activities that are relevant for learning and might thus create instructional affordances for learners to actively engage in cognitive processes with the learning material. Several primary studies included in this analysis could have failed to specifically provide affordances for high-quality learning activities.
On the other hand, learners might not actually take advantage of the instructional affordances provided by the gamified system. Similar to the social interaction moderator, learners probably did not engage in certain (high-quality) learning activities, as intended by the gamification environment (see Chi and Wylie 2014 ). Primary studies in this meta-analysis did not report sufficient data on the learning processes to clarify this issue. Therefore, future primary studies should use the human-environment interaction as the unit of analysis to account for learners interacting in different ways with the gamified environment (see Young et al. 2012 ). This would allow researchers to investigate different levels of learning activities that are fostered by certain game design elements while taking into account different situational factors and learner characteristics. Differentiating actual learning activities on a more fine-grained level into passive, active, constructive, and interactive learning activities, as suggested by Chi and Wylie ( 2014 ), would enable gamification research to find answers about how, for whom, and under which conditions gamification might work best.
Situational, Contextual, and Methodological Moderators
Besides the moderators discussed above, period of time, research context, randomization, design, and instruments were included as moderators to account for situational and contextual factors as well as methodological rigor. Results regarding period of time for cognitive and behavioral learning outcomes indicated that gamification can be effective in both the short and the long term. For motivational learning outcomes, interventions lasting half a year or less (but more than 1 month) showed a medium-sized effect, whereas interventions lasting 1 day or less showed a nonsignificant result. These results can allay the fear that effects of gamification might not persist in the long run, and they thus contradict interpretations presented in earlier reviews (see Hamari et al. 2014 ; Seaborn and Fels 2015 ). For motivational outcomes, it might even take longer interventions to affect motivation. However, this does not allow for any conclusions about how enduring the obtained effects will be. Including the time of the posttest as an additional methodological moderator was not possible across all three subsamples because only one study reported a delayed posttest. Future primary studies should therefore include follow-up tests so that conclusions about the endurance of the effects of gamification can be drawn.
Regarding the research context, for cognitive learning outcomes a significant difference was found between school settings on the one hand and higher education and informal settings on the other, in favor of the school setting. Gamification thus appears to work especially well for school students’ knowledge acquisition. No significant differences were found for motivational and behavioral learning outcomes. It should be noted that studies in school contexts were more likely to compare gamification groups with groups receiving passive instruction, whereas studies from higher education were more likely to use control groups that received mixed instruction. This could have biased the effects.
Randomization did not affect the relationship between gamification and cognitive learning outcomes but did so for motivational and behavioral learning outcomes, indicating that methodological rigor might moderate the size of the effect. For both outcomes, effect sizes for quasi-experimental studies were significantly larger than for experimental studies. In cases in which evaluation of the other methodological moderators, namely design and instruments , was possible, no influence on the size of the effect for the respective learning outcomes could be detected.
Limitations
One limitation of our study was that the sample size was rather small, especially for behavioral learning outcomes and all subsplit analyses. This limits the generalizability of the results and is also problematic for statistical power because, for random effects models, power depends on the total number of participants across all studies and on the number of primary studies (Borenstein et al. 2009 ). If there is substantial between-study variance, as in this meta-analysis, power is likely to be low. For this reason, nonsignificant results do not necessarily indicate the absence of an effect but could be explained by a lack of statistical power, especially if effects are rather small.
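To make the power argument above concrete, the sketch below approximates the power of the random-effects summary test for a scenario loosely resembling a small subsample (few studies, small true effect, nonzero between-study variance). The per-study sample size, true effect, and tau-squared are assumptions chosen for illustration, not estimates from this analysis.

```python
# Rough power approximation for the random-effects summary test, following the
# logic in Borenstein et al. (2009): power depends on the assumed true effect,
# the per-study sampling variance, the number of studies k, and the
# between-study variance tau^2. All inputs are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def re_power(true_effect, n_per_group, k, tau2, alpha=0.05):
    # Approximate sampling variance of a standardized mean difference
    # with n_per_group participants in each of two conditions.
    v_within = 2.0 / n_per_group + true_effect ** 2 / (4.0 * n_per_group)
    var_summary = (v_within + tau2) / k           # variance of the summary effect
    lam = true_effect / np.sqrt(var_summary)      # noncentrality parameter
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - lam) + norm.cdf(-z_crit - lam)

# Hypothetical scenario: 9 studies, ~50 participants per arm,
# true effect g = 0.25, moderate heterogeneity.
print(round(re_power(0.25, 50, 9, tau2=0.05), 2))
```

With these assumed inputs the approximate power is only about .70, illustrating why nonsignificant subgroup results in small samples should not be read as evidence of no effect.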
A common criticism of meta-analytic techniques is that they combine studies that differ widely in certain characteristics, often referred to as comparing apples and oranges (Borenstein et al. 2009 ). This issue can be addressed, to some extent, by including moderators that can account for such differences. In the present meta-analysis, we therefore investigated a set of moderating factors, even though only some of them were shown to be significant moderators of the relationship between gamification and learning. Given the power issues discussed above, it is important for future research not to disregard the factors that did not show any significant influence. Furthermore, this set of factors is not exhaustive, and other variables could also account for the observed heterogeneity. A few such factors (e.g., a more fine-grained distinction of game fiction or the time of the posttest) were discussed above, but other factors such as participant characteristics (e.g., familiarity with gaming or differences in the individual perception of game design elements, personality traits, or player types; see Hamari et al. 2014 ; Seaborn and Fels 2015 ) or the different types of feedback addressed by certain game design elements may also account for heterogeneity in effect sizes. These aspects could not be investigated in the present analysis because most primary studies did not examine or report them. Further, some of the aspects mentioned above refer to design decisions made by the developers of gamification interventions. Such design decisions can lead to variance in the effect sizes of certain game design elements, as shown for leaderboards (Landers et al. 2017 ) and for achievements, whose effectiveness strongly depends on their design (i.e., the quantity and difficulty of achievements; Groening and Binnewies 2019 ). Our meta-analysis did not include such aspects of design but rather synthesized effects of gamification as it is currently operationalized in the literature; by doing so, a mean design effect was assumed. Nevertheless, our meta-analysis offers an interpretive frame for applying and further investigating gamification, a frame that is especially needed in such a young field. However, research investigating different aspects of certain game design elements is valuable for fully capturing the role of design in the process of applying gamification.
Another limitation concerns the quality of the primary studies in the present analysis, a problem often described with the metaphor garbage in , garbage out (Borenstein et al. 2009 ). As mentioned earlier, gamification research suffers from a lack of methodological rigor, which, from the perspective of meta-analyses, can be addressed either by assessing methodological differences as moderators or by excluding studies with insufficient methodological rigor. In this analysis, both approaches were applied: methodological factors were included as moderators, and subsplits involving studies with high methodological rigor were performed. For motivational and behavioral learning outcomes, quasi-experimental studies found significant effects, whereas experimental studies showed nonsignificant results, emphasizing the need for more rigorous primary study designs that allow alternative explanations for differences in learning outcomes between conditions to be ruled out. The subsplit analyses showed that the summary effects for the motivational and behavioral outcomes were not robust. However, given the small sample sizes in the subgroup analyses, these findings are likely to be underpowered and should be viewed with caution.
Outcome measures are a particular aspect affected by both these problems: A lack of uniformity in measurement leads to differences in reliability and validity that are not considered in the calculation of the mean effect sizes (Walker et al. 2008 ). The use of agreed-upon measurement instruments with good psychometric properties is therefore needed to increase both the comparability of studies and methodological rigor.
Gamification in the context of learning has received increased attention and interest over the last decade for its hypothesized benefits on motivation and learning. However, some researchers doubt that effects of games can be transferred to non-game contexts (see Boulet 2012 ; Klabbers 2018 ). The present meta-analysis supports the claim that gamification of learning works because we found significant, positive effects of gamification on cognitive, motivational, and behavioral learning outcomes. Whereas the positive effect of gamification on cognitive learning outcomes can be interpreted as stable, results on motivational and behavioral learning outcomes have been shown to be less stable. Further, the substantial amount of heterogeneity identified in the subsamples could not be accounted for by several moderating factors investigated in this analysis, leaving partly unresolved the question of which factors contribute to successful gamification. More theory-guided empirical research is needed to work toward a comprehensive theoretical framework with clearly defined components that describes precise mechanisms by which gamification can affect specific learning processes and outcomes. Future research should therefore explore possible theoretical avenues in order to construct a comprehensive framework that can be empirically tested and refined.
The theory of gamified learning offers a suitable framework for doing so (see Landers 2014 ; Landers et al. 2018 ). Combined with evidence from well-established educational and psychological theories that provide clear starting points for effective gamification, primary studies should work toward an evidence-based understanding of how gamification works: focusing on specific game design elements affecting specific psychological needs postulated by self-determination theory (see Ryan and Deci 2002 ) and exploring ways to create affordances for high-quality learning activities, namely, constructive and interactive learning activities (see Chi and Wylie 2014 ). Future research has to take into account how psychological needs and high-quality learning activities can be fostered by gamification and to what extent as well as under what conditions learners actually take advantage of these affordances by focusing on the human-environment interaction of learners in gamified interventions (see Young et al. 2012 ).
In general, more high-quality research, applying experimental designs or quasi-experimental designs with elaborated controls for prior knowledge and motivation, is needed to enable a more conclusive investigation of the relationship between gamification and learning, as well as possible moderating factors. As the abovementioned issues concerning the moderators are merely post hoc explanations, future research should specifically investigate issues such as learners’ experiences and perceptions of gamification, their actual activities in the interventions, the role of learners’ skill level in competition, the influence of learners’ initial motivation, the adaptiveness of gamified systems, other individual characteristics, and the endurance of effects after interventions. Nevertheless, the moderators considered in this analysis should also not be disregarded yet due to the abovementioned limitations of this meta-analysis.
The fact that the significant results in the analyses of the conceptual moderators were mostly found for behavioral learning outcomes is of particular interest for future investigations. Because these outcomes, in contrast to the cognitive variables, were almost exclusively measured during the interventions, the results could indicate that certain variables affect behavior and performance in the immediate situation, which does not necessarily transfer to situations outside the gamified context. Transparent reporting of study characteristics, control group arrangements, and combinations of learning process and learning outcome data, as well as investigating several outcome types within single studies, would enable a more comprehensive (meta-analytic) investigation of the factors that contribute to the effectiveness of gamification.
The results of the present meta-analysis offer an interpretive frame for applying gamification and the following practical implications. First, the results suggest that, in general, gamification has the potential to serve as an effective instructional approach for interventions focusing on cognitive, motivational, and behavioral learning outcomes. Second, when considering interventions focusing on behavioral learning outcomes, including game fiction is promising; for example, introducing a fictional game world, which is relevant throughout the gamified intervention, combined with an avatar system, which allows for developing an avatar over time, can help to foster learners’ skills (e.g., Sailer et al. 2017b ). Further, creating a gamification environment that allows learners to engage in both competitive and collaborative interaction can be beneficial: Letting learners work together in teams, while competing with other teams, can help to improve learners’ quality of performance and skills (e.g., Sailer et al. 2017b ). There is evidence from the subsplit analysis of studies using high methodological rigor that this also holds true for motivational learning outcomes. For example, implementing a transparent badge system that allows peers to challenge themselves as well as awarding badges for helping and collaborating with others can help to foster motivation (e.g., Yildirim 2017 ). Both subsplits for motivational and behavioral learning outcomes further indicate that mere competition within gamification might be suboptimal, but competition augmented with collaboration can be effective. These aspects should be considered when designing gamified interventions, particularly when targeting behavioral learning outcomes, although these effects may especially apply to experiences and performance during the gamified intervention and might not affect performance in contexts that are devoid of gamification.
The initial question was whether gamification is effective for learning. The results from the present meta-analysis suggest that, yes , gamification might in fact be effective when it comes to learning. However, the question of which factors contribute most to successful gamification remains partly unresolved, at least for cognitive learning outcomes.
References marked with an asterisk indicate studies included in the meta-analysis. References marked with two asterisks indicate studies included in the subsplit analysis.
Armstrong, M. B., & Landers, R. N. (2017). An evaluation of gamified training: using narrative to improve reactions and learning. Simulation & Gaming, 48 (4), 513–538. https://doi.org/10.1177/1046878117703749 .
Bedwell, W. L., Pavlas, D., Heyne, K., Lazzara, E. H., & Salas, E. (2012). Toward a taxonomy linking game attributes to learning: an empirical study. Simulation & Gaming, 43 (6), 729–760. https://doi.org/10.1177/1046878112439444 .
*Bernik, A., Bubaš, G., & Radošević, D. (2015). A pilot study of the influence of gamification on the effectiveness of an e-learning course. In T. Hunjak, V. Kirinić, & M. Konecki (Eds.), Central european conference on information and intelligent systems. CECIIS 26th International Conference (pp. 73–79). Varaždin: Faculty of Organization and Informatics, University of Zagreb.
*Bonde, M. T., Makransky, G., Wandall, J., Larsen, M. V., Morsing, M., Jarmer, H., & Sommer, M. O. (2014). Improving biotech education through gamified laboratory simulations. Nature Biotechnology, 32 (7), 694–697. https://doi.org/10.1038/nbt.2955 .
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2009). Introduction to meta-analysis . Chichester: Wiley. https://doi.org/10.1002/9780470743386 .
Boulet, G. (2012). Gamification: the latest buzzword and the next fad. ELearn Magazine, 2012 (12). https://doi.org/10.1145/2407138.2421596 .
Burguillo, J. C. (2010). Using game theory and competition-based learning to stimulate student motivation and performance. Computers & Education, 55 (2), 566–575. https://doi.org/10.1016/j.compedu.2010.02.018 .
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: a comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2 (2), 115–144. https://doi.org/10.1177/2515245919847196 .
Cechanowicz, J., Gutwin, C., Brownell, B., & Goodfellow, L. (2013). Effects of gamification on participation and data quality in a real-world market research domain. In L. E. Nacke, K. Harrigan, & N. Randall (Eds.), Proceedings of the First International Conference on Gameful Design, Research, and Applications (pp. 58–65). New York: ACM. https://doi.org/10.1145/2583008.2583016 .
*Chen, C.-H., & Chiu, C.-H. (2016). Employing intergroup competition in multitouch design-based learning to foster student engagement, learning achievement, and creativity. Computers & Education, 103 , 99–113. https://doi.org/10.1016/j.compedu.2016.09.007 .
*Chen, C.-H., Liu, G.-Z., & Hwang, G.-J. (2015). Interaction between gaming and multistage guiding strategies on students’ field trip mobile learning performance and motivation. British Journal of Educational Technology, 47 (6), 1032–1050. https://doi.org/10.1111/bjet.12270 .
Chi, M. T., & Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning outcomes. Educational Psychologist, 49 (4), 219–243. https://doi.org/10.1080/00461520.2014.965823 .
**Christy, K. R., & Fox, J. (2014). Leaderboards in a virtual classroom: a test of stereotype threat and social comparison explanations for women’s math performance. Computers & Education, 78 , 66–77. https://doi.org/10.1016/j.compedu.2014.05.005 .
Clark, D. B., Tanner-Smith, E. E., & Killingsworth, S. S. (2016). Digital games, design, and learning: a systematic review and meta-analysis. Review of Educational Research, 86 (1), 79–122. https://doi.org/10.3102/0034654315582065 .
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining “gamification”. In A. Lugmayr (Ed.), Proceedings of the 15th International Academic Mindtrek Conference: Envisioning Future Media Environments (pp. 9–15). New York: ACM. https://doi.org/10.1145/2181037.2181040 .
Dichev, C., & Dicheva, D. (2017). Gamifying education: what is known, what is believed and what remains uncertain: a critical review. International Journal of Educational Technology in Higher Education, 14 (9), 1–36. https://doi.org/10.1186/s41239-017-0042-5 .
Dicheva, D., & Dichev, C. (2015). Gamification in education: where are we in 2015? In C. Ho & G. Lin (Eds.), Proceedings of E-Learn: World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 1445–1454). Waynesville: Association for the Advancement of Computing in Education.
Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: a systematic mapping study. Educational Technology & Society, 18 (3), 75–88.
Dickey, M. D. (2006). Game design narrative for learning: appropriating adventure game design narrative devices and techniques for the design of interactive learning environments. Educational Technology Research and Development, 54 (3), 245–263. https://doi.org/10.1007/s11423-006-8806-y .
**Diewald, S., Lindemann, P., Möller, A., Stockinger, T., Koelle, M., & Kranz, M. (2014). Gamified training for vehicular user interfaces - effects on drivers’ behavior. In R. Pfliegl (Ed.), 2014 International Conference on Connected Vehicles and Expo (ICCVE) Proceedings (pp. 250–257). Piscataway: IEEE. https://doi.org/10.1109/iccve.2014.7297551 .
*Domínguez, A., Saenz-de-Navarrete, J., de-Marcos, L., Fernández-Sanz, L., Pagés, C., & Martínez-Herráiz, J.-J. (2013). Gamifying learning experiences: practical implications and outcomes. Computers & Education, 63 , 380–392. https://doi.org/10.1016/j.compedu.2012.12.020 .
*Frącz, W. (2015). An empirical study inspecting the benefits of gamification applied to university classes. In 2015 7th Computer Science and Electronic Engineering Conference (CEEC). Conference Proceedings (pp. 135–139). Piscataway: IEEE. https://doi.org/10.1109/ceec.2015.7332713 .
*Frost, R. D., Matta, V., & MacIvor, E. (2015). Assessing the efficacy of incorporating game dynamics in a learning management system. Journal of Information Systems Education, 26 (1), 59–70.
Garland, C. M. (2015). Gamification and implications for second language education: a meta analysis (Master’s thesis). St. Cloud State University, St. Cloud.
Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: a research and practice model. Simulation & Gaming, 33 (4), 441–467. https://doi.org/10.1177/1046878102238607 .
Grivokostopoulou, F., Perikos, I., & Hatzilygeroudis, I. (2016). An innovative educational environment based on virtual reality and gamification for learning search algorithms. In V. Kumar, S. Murthy, & Kinshuk (Eds.), Proceedings IEEE Eighth International Conference on Technology for Education. T4E 2016 (pp. 110–115). IEEE: Los Alamitos. https://doi.org/10.1109/t4e.2016.029 .
Groening, C., & Binnewies, C. (2019). “Achievement unlocked!” - the impact of digital achievements as a gamification element on motivation and performance. Computers in Human Behavior, 97 , 151–166. https://doi.org/10.1016/j.chb.2019.02.026 .
Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? - a literature review of empirical studies on gamification. In R. H. Sprague Jr. (Ed.), Proceedings of the 47th Annual Hawaii International Conference on System Sciences (pp. 3025–3034). Washington, DC: IEEE. https://doi.org/10.1109/hicss.2014.377 .
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112. https://doi.org/10.3102/003465430298487 .
Hedges, L. V. (1981). Distribution theory for Glass’s estimator of effect size and related estimators. Journal of Educational Statistics, 6 (2), 107–128. https://doi.org/10.3102/10769986006002107 .
**Hew, K. F., Huang, B., Chu, K. W., & Chiu, D. K. (2016). Engaging Asian students through game mechanics: findings from two experiment studies. Computers & Education, 92-93 , 221–236. https://doi.org/10.1016/j.compedu.2015.10.010 .
*Hong, G. Y., & Masood, M. (2014). Effects of gamification on lower secondary school students’ motivation and engagement. International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 8 (12), 3448–3455.
*Jang, J., Park, J. J., & Yi, M. Y. (2015). Gamification of online learning. In C. Conati, N. Heffernan, A. Mitrovic, & M. F. Verdejo (Eds.), Artificial intelligence in education. AIED 2015. Lecture notes in computer science, vol. 9112 (pp. 646–649). Cham: Springer. https://doi.org/10.1007/978-3-319-19773-9_82 .
de Jong, T., & Ferguson-Hessler, M. G. (1996). Types and qualities of knowledge. Educational Psychologist, 31 (2), 105–113. https://doi.org/10.1207/s15326985ep3102_2 .
**Kelle, S., Klemke, R., & Specht, M. (2013). Effects of game design patterns on basic life support training content. Educational Technology & Society, 16 (1), 275–285.
*Kim, E., Rothrock, L., & Freivalds, A. (2016). The effects of gamification on engineering lab activities. In 2016 IEEE Frontiers in Education Conference Proceedings . Piscataway: IEEE. https://doi.org/10.1109/fie.2016.7757442 .
Klabbers, J. H. (2018). On the architecture of game science. Simulation & Gaming, 49 (3), 207–245. https://doi.org/10.1177/1046878118762534 .
*Korn, O., Funk, M., & Schmidt, A. (2015). Towards a gamification of industrial production. A comparative study in sheltered work environments. In J. Ziegler (Ed.), Proceedings of the 7th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (pp. 84–93). New York: ACM. https://doi.org/10.1145/2774225.2774834 .
*Krause, M., Mogalle, M., Pohl, H., & Williams, J. J. (2015). A playful game changer: fostering student retention in online education with social gamification. In G. Kiczales (Ed.), Proceedings of the 2nd (2015) ACM Conference on Learning @ Scale (pp. 95–102). New York: ACM. https://doi.org/10.1145/2724660.2724665 .
Landers, R. N. (2014). Developing a theory of gamified learning: linking serious games and gamification of learning. Simulation & Gaming, 45 (6), 752–768. https://doi.org/10.1177/1046878114563660 .
**Landers, R. N., Bauer, K. N., & Callan, R. C. (2017). Gamification of task performance with leaderboards: a goal setting experiment. Computers in Human Behavior, 71 , 508–515. https://doi.org/10.1016/j.chb.2015.08.008 .
Landers, R. N., Auer, E. M., Collmus, A. B., & Armstrong, M. B. (2018). Gamification science, its history and future: definitions and a research agenda. Simulation & Gaming, 49 (3), 315–337. https://doi.org/10.1177/1046878118774385 .
*Lombriser, P., Dalpiaz, F., Lucassen, G., & Brinkkemper, S. (2016). Gamified requirements engineering: model and experimentation. In M. Daneva & O. Pastor (Eds.), Requirements engineering: foundation for software quality. REFSQ 2016. Lecture notes in computer science, vol. 9619 (pp. 171–187). Cham: Springer. https://doi.org/10.1007/978-3-319-30282-9_12 .
de-Marcos, L., Domínguez, A., Saenz-de-Navarrete, J., & Pagés, C. (2014). An empirical study comparing gamification and social networking on e-learning. Computers & Education, 75 , 82–91. https://doi.org/10.1016/j.compedu.2014.01.012 .
**de-Marcos, L., Garcia-Lopez, E., & Garcia-Cabot, A. (2016). On the effectiveness of game-like and social approaches in learning: comparing educational gaming, gamification & social networking. Computers & Education, 95 , 99–113. https://doi.org/10.1016/j.compedu.2015.12.008 .
**Mekler, E. D., Brühlmann, F., Tuch, A. N., & Opwis, K. (2017). Towards understanding the effects of individual gamification elements on intrinsic motivation and performance. Computers in Human Behavior, 71 , 525–534. https://doi.org/10.1016/j.chb.2015.08.048 .
**Moradian, A., Nasir, M., Lyons, K., Leung, R., & Sim, S. E. (2014). Gamification of collaborative idea generation and convergence. In M. Jones & P. Palanque (Eds.), CHI ‘14 extended abstracts on human factors in computing systems (pp. 1459–1464). New York, NY: ACM. https://doi.org/10.1145/2559206.2581253 .
**Morschheuser, B., Henzi, C., & Alt, R. (2015). Increasing intranet usage through gamification – insights from an experiment in the banking industry. In T. X. Bui & R. H. Sprague Jr. (Eds.), Proceedings of the 48th Annual Hawaii International Conference on System Sciences. HICSS 2015 (pp. 635–642). IEEE: Los Alamitos, CA. https://doi.org/10.1109/hicss.2015.83 .
**Morschheuser, B., Maedche, A., & Walter, D. (2017). Designing cooperative gamification: conceptualization and prototypical implementation. In C. P. Lee & S. Poltrock (Eds.), Proceedings of the 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing (pp. 2410–2421). New York: ACM. https://doi.org/10.1145/2998181.2998272 .
**Papadopoulos, P. M., Lagkas, T., & Demetriadis, S. N. (2016). How revealing rankings affects student attitude and performance in a peer review learning environment. In S. Zvacek, M. T. Restivo, J. Uhomoibhi, & M. Helfert (Eds.), Computer supported education. CSEDU 2015. Communications in computer and information science, vol. 583 (pp. 225–240). Cham: Springer. https://doi.org/10.1007/978-3-319-29585-5_13 .
*Poondej, C., & Lerdpornkulrat, T. (2016). The development of gamified learning activities to increase student engagement in learning. Australian Educational Computing, 31 (2), 1–16.
Prensky, M. (2001). Digital game-based learning . New York: McGraw-Hill.
Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93 (3), 223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x .
Rigby, S., & Ryan, R. M. (2011). Glued to games: how video games draw us in and hold us spellbound . Santa Barbara: Praeger.
Rosenberg, M. S. (2005). The file-drawer problem revisited: a general weighted method for calculating fail-safe numbers in meta-analysis. Evolution, 59 (2), 464–468. https://doi.org/10.1111/j.0014-3820.2005.tb01004.x .
Rosenthal, R. (1991). Meta-analytic procedures for social research . Newbury Park: Sage. https://doi.org/10.4135/9781412984997 .
*Rouse, K. E. (2013). Gamification in science education: the relationship of educational games to motivation and achievement (Doctoral dissertation) . The University of Southern Mississippi, Hattiesburg, MS.
Ryan, R. M., & Deci, E. L. (2000). Intrinsic and extrinsic motivations: classic definitions and new directions. Contemporary Educational Psychology, 25 (1), 54–67. https://doi.org/10.1006/ceps.1999.1020 .
Ryan, R. M., & Deci, E. L. (2002). Overview of self-determination theory: an organismic dialectical perspective. In R. M. Ryan & E. L. Deci (Eds.), Handbook of self-determination research (pp. 3–33). Rochester, NY: University of Rochester Press.
**Sailer, M., Hense, J. U., Mayr, S. K., & Mandl, H. (2017a). How gamification motivates: an experimental study of the effects of specific game design elements on psychological need satisfaction. Computers in Human Behavior, 69 , 371–380. https://doi.org/10.1016/j.chb.2016.12.033 .
**Sailer, M., Hense, J., Mandl, H., & Klevers, M. (2017b). Fostering development of work competencies. In M. Mulder (Ed.), Competence-based vocational and professional education – bridging the world of work and education (pp. 795–818). Cham: Springer. https://doi.org/10.1007/978-3-319-41713-4_37 .
Sanmugam, M., Abdullah, Z., Mohamed, H., Aris, B., Zaid, N. M., & Suhadi, S. M. (2016). The affiliation between student achievement and elements of gamification in learning science. In Y. Rusmawati & T. A. B. Wirayuda (Eds.), 2016 4th International Conference on Information and Communication Technology (ICoICT) (pp. 1–4). Piscataway: IEEE. https://doi.org/10.1109/icoict.2016.7571962 .
Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: a survey. International Journal of Human-Computer Studies, 74 , 14–31. https://doi.org/10.1016/j.ijhcs.2014.09.006 .
Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64 (2), 489–528. https://doi.org/10.1111/j.1744-6570.2011.01190.x .
Slavin, R. E. (1980). Cooperative learning. Review of Educational Research, 50 (2), 315–342. https://doi.org/10.3102/00346543050002315 .
*Stansbury, J. A., & Earnest, D. R. (2017). Meaningful gamification in an industrial/organizational psychology course. Teaching of Psychology, 44 (1), 38–45. https://doi.org/10.1177/0098628316677645 .
**Su, C.-H., & Cheng, C.-H. (2013). A mobile game-based insect learning system for improving the learning achievements. Procedia - Social and Behavioral Sciences, 103 , 42–50. https://doi.org/10.1016/j.sbspro.2013.10.305 .
Su, C.-H., & Cheng, C.-H. (2014). A mobile gamification learning system for improving the learning motivation and achievements. Journal of Computer Assisted Learning, 31 (3), 268–286. https://doi.org/10.1111/jcal.12088 .
**Tan, M., & Hew, K. F. (2016). Incorporating meaningful gamification in a blended learning research methods class: examining student learning, engagement, and affective outcomes. Australasian Journal of Educational Technology, 32 (5), 19–34. https://doi.org/10.14742/ajet.2232 .
**Van Nuland, S. E., Roach, V. A., Wilson, T. D., & Belliveau, D. J. (2014). Head to head: the role of academic competition in undergraduate anatomical education. Anatomical Sciences Education, 8 , 404–412. https://doi.org/10.1002/ase.1498 .
Walker, E., Hernandez, A. V., & Kattan, M. W. (2008). Meta-analysis: its strengths and limitations. Cleveland Clinic Journal of Medicine, 75 (6), 431–439.
Wasson, R., Mould, D., Biddle, R., & Martinez, C. S. (2013). A sketching game for art history instruction. In S. N. Spencer (Ed.), Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling (pp. 23–31). New York: ACM. https://doi.org/10.1145/2487381.2487384 .
Werbach, K., & Hunter, D. (2012). For the win: how game thinking can revolutionize your business . Philadelphia: Wharton Digital Press.
Wouters, P., Paas, F., & van Merriënboer, J. J. G. (2008). How to optimize learning from animated models: a review of guidelines based on cognitive load. Review of Educational Research, 78 (3), 645–675. https://doi.org/10.3102/0034654308320320 .
Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105 (2), 249–265. https://doi.org/10.1037/a0031311 .
**Yildirim, I. (2017). The effects of gamification-based teaching practices on student achievement and students’ attitudes toward lessons. The Internet and Higher Education, 33 , 86–92. https://doi.org/10.1016/j.iheduc.2017.02.002 .
Young, M. F., Slota, S., Cutter, A. B., Jalette, G., Mullin, G., Lai, B., Simeoni, Z., Tran, M., & Yukhymenko, M. (2012). Our princess is in another castle: a review of trends in serious gaming for education. Review of Educational Research, 82 (1), 61–89. https://doi.org/10.3102/0034654312436980 .
Zyda, M. (2005). From visual simulation to virtual reality to games. Computer, 38 (9), 25–32. https://doi.org/10.1109/MC.2005.297 .
- Review article
- Open access
- Published: 20 February 2017
Gamifying education: what is known, what is believed and what remains uncertain: a critical review
- Christo Dichev 1 &
- Darina Dicheva ORCID: orcid.org/0000-0001-5590-0282 1
International Journal of Educational Technology in Higher Education volume 14, Article number: 9 (2017)
Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review aims to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.
The idea of incentivizing people is not new, but the term “gamification” did not enter the mainstream vocabulary until 2010. Only a year later, it became a viable trend. The growing popularity of gamification stems from the belief in its potential to foster motivation, behavioral changes, friendly competition and collaboration in different contexts, such as customer engagement, employee performance and social loyalty. As with any new and promising technology, it has been applied in a diversity of domains, including marketing, healthcare, human resources, training, environmental protection and wellbeing. Gamification is a multidisciplinary concept spanning a range of theoretical and empirical knowledge, technological domains and platforms, and it is driven by an array of practical motivations (Seaborn & Fels, 2015 ). In an attempt to best capture the essence of the underlying concepts and practices, the term gamification has been defined in several ways, such as “the use of game design elements in non-game contexts” (Deterding, Dixon, Khaled, & Nacke, 2011 ), “the phenomenon of creating gameful experiences” (Hamari, Koivisto, & Sarsa, 2014 ), or “the process of making activities more game-like” (Werbach, 2014 ). Empirical work across disciplines has begun to explore how gamification can be used in certain contexts and what behavioral and experiential effects gamification has on people in the short and long terms.
Ever since its advent, gamification has sparked controversy among game designers, user experience designers, game theorists and researchers in human-computer interaction (Mahnič, 2014 ). This controversy is also reflected in some scientific studies of gamification, which show that its effect on motivation or participation is lower than the expectations created by the hype (Broer, 2014 ). Even so, substantial efforts have sought to take advantage of the alleged motivational benefits of gamification approaches.
One key sector where gamification is being actively explored (mainly for its potential to motivate) is education. Motivation is among the important predictors of student academic achievement, as it influences the effort and time a student spends engaged in learning (Linehan, Kirman, Lawson, & Chan, 2011 ). Given that games, known to engender motivation and engagement, are notably popular, the proposal to incorporate game mechanics and principles to motivate the learner is appealing. Gamification in education refers to the introduction of game design elements and gameful experiences into the design of learning processes. It has been adopted to support learning in a variety of contexts and subject areas and to address related attitudes, activities, and behaviors, such as participatory approaches, collaboration, self-guided study, completion of assignments, making assessments easier and more effective, integration of exploratory approaches to learning, and strengthening student creativity and retention (Caponetto et al. 2014 ). The rationale behind gamifying learning is that adding elements such as those found in games to learning activities will create immersion in a way similar to what happens in games (Codish & Ravid, 2015 ). This leads to the belief that by incorporating game mechanics into the design of a learning process, we can engage learners in a productive learning experience and, more generally, change their behavior in a desirable way (Holman et al. 2013 ). Yet the design of successful gamification applications in education that can sustain the intended behavior changes is still more of a guessing practice than a science. This observation is in line with the Gartner Hype Cycle (Gartner, 2013 ), a research methodology that outlines a technology’s viability for commercial success, which points out that an emerging technology first climbs the ‘peak of inflated expectations’, followed by a subsequent strong fall into the ‘trough of disillusionment’, before reaching the ‘slope of enlightenment’, which marks the stage where its benefits and limitations are understood and demonstrated.
The Gartner model is intended to represent the level of maturity and adoption of certain emerging technologies. We maintain the view that gamification is not just a technology but also a methodology, which some organizations adopt as a way to increase motivation. In this respect, gamification is not a purely marketing trend but a behavioral/affective design trend that can be applied to different areas, including education. As such, gamification is also a growing area of research. However, research efforts and trends should be driven and evaluated based on distinct factors. Thus, Gartner’s model is used here metaphorically and as a comparison model: we borrow it to illustrate observed trends in emerging research areas, which exhibit their own sorts of ‘peaks of inflated expectations’ and ‘enlightenments’.
In 2014 we conducted a systematic mapping study of the empirical research published between January 2010 and June 2014, intended to recognize the emerging trends in the application of gamification to education and to identify patterns, educational contexts, and configurations of the game elements used (Dicheva et al. 2015 ). For classifying the research results, the study used a categorical structure (based on the topics discussed in the reviewed papers) comprising game elements, context of the application of gamification, gamification implementation, and evaluation. Although most of the 34 reviewed papers reported promising results, the review concluded that more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of learners can actually be influenced by gamification. Given the exponential growth of publications on gamification, a year later we conducted a follow-up study covering the period July 2014–December 2015. Our goal was twofold: on the one hand, to complement the previous study and compare it with the findings derived from the papers published within the last year, and on the other, to identify any shifts and new trends in this evolving field. The results of that review were published in (Dicheva and Dichev 2015 ).
In terms of Gartner’s hype cycle, our first review (Dicheva et al. 2015 ) covered works from the rise-in-expectations period of gamification, where the reported outcomes of the early empirical work were often influenced by the hype, prompting a desire to demonstrate that gamification is an effective tool for motivating and engaging learners in educational contexts. We believe that progress in research, including educational research, unlike technological evolution, should differ from Gartner’s hype cycle and evolve independently of media attention, relying instead on scientific indicators to recognize promising trends and thus minimize inflated expectations. More importantly, research efforts should be directed at understanding the phenomenon triggering the new interest and at generating evidence for or against the trend causing that interest. This suggests that research should progress following a pattern different from Gartner’s hype cycle, marked by stages such as early studies, emerging research area, research topic formation, etc. In this sense, our second review was intended to take another snapshot in an attempt to verify this view. Despite the growing body of studies, we found the level of understanding of how to promote engagement and learning by incorporating game design elements to be questionable. In parallel, a significant part of the empirical research was nonetheless reporting success stories, possibly contributing to the ‘inflated expectations’. Because empirical studies (on gamification) explore the unknown, uncertainty is an unavoidable part of the investigation. While the publication of valid and reliable studies reduces uncertainty and adds to the knowledge on gamifying education, thus helping to shape future research in the field, invalid or unreliable findings obscure our understanding of the studied phenomenon. In this context, and unlike the systematic mapping studies, the goal of this critical review is to see how the new studies are shaping the evolving research in educational gamification. In particular, compared to the previous reviews, the focus here shifts to analyzing and critically appraising the evidence collected from the latest empirical research, with the aim of distinguishing facts from hypotheses or opinions. From this perspective, the present review adds to the first two by trying to subject educational gamification research to standards similar to those used in the social or health sciences.
Accordingly, in this article the focus is on analyzing the understanding of the motivational mechanisms provided by gamification in educational settings and its impact on learning. The guiding questions in this context were:
What empirical evidence exists for the impact of gamification on motivational processes and effectiveness of learning?
What is the level of progress towards a systematic understanding of how to use gamification in educational contexts?
With the growing popularity of gamification and yet mixed opinions about its successful application in educational contexts, the current review aims to shed a more realistic light on the research in this field, focusing on empirical evidence rather than on potentialities, beliefs, and preferences.
On the technical side, the article includes several tables that summarize and add to the information provided in the text. The article also includes two appendices that summarize the relevant features of the reviewed studies.
Search strategy and sources
In search of empirical research papers, that is, papers based on actual observations or experiments on educational gamification, we searched the following databases: Google Scholar, ACM Digital Library, IEEE Xplore, and ScienceDirect, using the search terms (gamification OR gamify OR gameful) AND (education OR learning OR training) AND (since 2014). In cases where the OR option was not available in the provided Boolean search functionality, an equivalent search strategy was carried out through multiple searches with alternative terms. This search yielded a total of 4998 results, depicted in Table 1. We used the definition of gamification by (Deterding et al., 2011), “the use of game design elements in non-gaming contexts”, to assess each retrieved publication for relevance. Accordingly, publications discussing full-fledged games were filtered out. Peer-reviewed empirical research papers where no findings were reported were also excluded. For example, purely descriptive papers such as (Morrison & DiSalvo, 2014), which describes the implementation of gamification within Khan Academy, were not included. At the end of this step, all papers that appeared in the review presented in (Dicheva et al., 2015) were also filtered out. The review was restricted to papers appearing in the searched databases between June 30, 2014 and December 31, 2015. The result was a list of 51 empirical research papers. In sum, several hundred articles pertaining to gamification in education were published in the past one and a half years; however, only 51 studies met our criteria and are reviewed in this article.
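To illustrate the strategy of replacing an unavailable OR operator with multiple alternative-term searches, the following minimal Python sketch enumerates the single-term query combinations that together are equivalent to the Boolean query above. The term lists come from the search string in the text, while the function and variable names are ours and purely illustrative.

```python
from itertools import product

# Term groups from the Boolean query:
# (gamification OR gamify OR gameful) AND (education OR learning OR training)
concept_terms = ["gamification", "gamify", "gameful"]
context_terms = ["education", "learning", "training"]

def expand_queries(concepts, contexts, year_filter="since 2014"):
    """Enumerate single-term queries whose union is equivalent to the
    OR-combined Boolean query, for databases lacking OR support."""
    return [f'"{c}" AND "{x}" ({year_filter})' for c, x in product(concepts, contexts)]

if __name__ == "__main__":
    for query in expand_queries(concept_terms, context_terms):
        print(query)  # 3 x 3 = 9 searches covering the original query
```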
For completeness of the review of research in the field, we decided this time to also include theoretical papers dealing with gamification in education. Following (Seaborn & Fels, 2015), the “theoretical papers” category includes papers that propose an explanation of the underlying nature of gamification in education, as well as papers that propose relevant pedagogies or test existing explanatory models from other domains with respect to gamification. We also added the published literature reviews to the group of theoretical papers. The end result was a list of 11 theoretical papers appearing in the searched databases between June 30, 2014 and December 31, 2015. Thus the final number of selected papers (empirical and theoretical) amounted to 63 in total. The last column of Table 1 shows the results after filtering out irrelevant papers and removing duplicates. For comparison, the total number of papers included in the previous review, covering the period January 2010–June 2014, was 34.
Following this division into empirical studies and theoretical papers, the first part of this review covers the published empirical research on the topic, while the second part briefly surveys publications targeting theoretical aspects of educational gamification.
Data extraction
A literature survey typically employs a framework for structuring the evaluation of the works in the targeted area. Such a framework captures the potential properties of interest and enables comparison of the surveyed works and drawing of meaningful conclusions. The use of gamification in learning involves a number of aspects, including game elements, educational context, learning outcomes, learner profile, and the gamified environment. Gamification is receiving attention particularly for its potential to motivate learners. Accordingly, our objective of evaluating the level of understanding of the motivational impacts of gamification in educational contexts shaped our decision about which categories of information to include in the framework for evaluating the surveyed works. More specifically, we looked for information that can facilitate identifying and analyzing the empirical evidence demonstrating the motivational effects of gamification. Motivation, as a psychological process that gives behavior purpose and direction, is contextual. Not only are individuals motivated in multiple ways, but their motivation also varies according to the situation or context of the task. To support analysis of this contextual aspect, the information collected from the studies includes the educational level, academic subject, and type of the gamified learning activity. We also included the game elements, mechanics, and dynamics used, since they are inherently related to the success of a gamification application. A number of motivation measures have been used in attempts to establish the effect of gamification on student motivation. In addition to appropriate measures, verifying the validity of reported results requires the availability of relevant statistical information about the studies. To support our judgment of how conclusive the reported results of a study are, we added the following categories: study sample, study duration, method of data collection, and outcome. Thus the final structure of information to be derived from the reviewed studies included the following categories: game elements, educational level, academic subject, learning activity, study sample, study duration, data collection, and outcome.
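As a concrete, purely illustrative rendering of this extraction framework, the sketch below models the record kept for each reviewed study as a small Python dataclass. The field names mirror the categories listed above; the class name and example values are our own assumptions rather than anything prescribed by the review.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StudyRecord:
    """One row of the data-extraction framework described above."""
    game_elements: List[str]       # e.g. ["points", "badges", "leaderboard"]
    educational_level: str         # e.g. "university", "K-12"
    academic_subject: str          # e.g. "Computer Science"
    learning_activity: str         # e.g. "online exercises"
    study_sample: int              # number of participants
    study_duration: str            # e.g. "one semester"
    data_collection: str           # e.g. "survey", "log data", "experiment"
    outcome: str                   # reported outcome, e.g. "increased engagement"

# A hypothetical entry, loosely inspired by the kinds of studies reviewed:
example = StudyRecord(
    game_elements=["points", "badges", "leaderboard"],
    educational_level="university",
    academic_subject="Computer Science",
    learning_activity="online exercises",
    study_sample=120,
    study_duration="one semester",
    data_collection="quasi-experiment with log data",
    outcome="increased participation",
)
```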
Appendix 1 presents a description of the reviewed papers structured according to this framework. Representing such high-dimensional data in table format is inherently challenging and implies a tradeoff between completeness and clarity.
Review results for empirical studies
For a systematic presentation of the review results, we classify and interpret them in accordance with the framework described above.
What educational level is targeted?
Considering the educational level, the bulk of gamification studies in the survey period were conducted at the university level (44 papers), with less attention to K-12 education (7 papers). At the university level, one study reported results involving graduate students (Nevin et al., 2014), while at the K-12 level, three studies reported results involving elementary school students (Boticki, Baksa, Seow, & Looi, 2015; Simoes, Mateus, Redondo, & Vilas, 2015; Su & Cheng, 2015), two involving middle school students (Attali & Arieli-Attali, 2015; Long & Aleven, 2014), and two involving high school students (Davis & Klein, 2015; Paiva, Barbosa, Batista, Pimentel, & Bittencourt, 2015). A possible explanation of this disproportion is that it may be easier for college instructors to experiment with gamification in their own courses, because they are better supported technically or have the necessary computer-related skills to implement gamification features such as an electronic leaderboard. Studies involving different demographic groups, however, are beneficial, as results obtained with one demographic group cannot necessarily be generalized to another.
What subjects are gamified?
The collection of papers covers a wide range of academic subjects (32) organized in six categories (see Table 2). The category “Others” includes studies with unspecified subjects, where the gamified activities are independent of a subject and the focus is on: the platform supporting gamification (Barrio et al., 2015; Chang & Wei, 2015; Davis & Klein, 2015; Lambruschini & Pizarro, 2015; Mekler et al., 2015), the game elements used (Boticki et al., 2015; Pedro et al., 2015a), a personal learning environment (Morschheuser et al., 2014), measurements (Simoes et al., 2015), or learners’ personalities (Tu et al., 2015).
One emerging area, which is not an academic subject in its own right but rather refers to a set of tools offering new affordances for enhancing students’ understanding of dynamic processes and systems, is interactive simulations (dynamic computer-based models that help students observe or interact with scientific phenomena). Although gamifying the use of such simulations can help overcome problems of insufficient motivation and engagement, there is a lack of studies evaluating the effects of gamified simulation-based learning. In this context, the work of Bonde et al. (2014), who studied the effect of combining gamification elements with simulations to improve the learning effectiveness and motivation of biotech students, addresses a critical gap. Their results show that a gamified laboratory simulation can increase both learning outcomes and motivation levels compared with traditional teaching. Further research is needed to examine whether these results can be extrapolated to a general tendency regarding the effectiveness of gamified simulations.
As shown in Table 2, the vast majority of gamification studies deal with Computer Science (CS) and Information Technology (IT). This provokes the question: are CS and IT more suitable for gamification than other subjects? The present studies, however, do not provide a conclusive answer. In the absence of other evidence, only speculative answers can be given, similar to those for the observed disproportion between gamifying college- and school-level activities, namely that it may simply be easier for CS and IT instructors to experiment in their own courses. In sharp contrast, gamification experiments targeting activities related to disciplines from the humanities and social sciences are extremely limited, with only one example (Holman et al., 2015) touching these subjects. Another interesting observation is the low proportion of studies on gamifying STEM disciplines other than CS/IT and mathematics, where reinforcement of motivation would be particularly beneficial: only two out of thirty-two (Bonde et al., 2014; Su & Cheng, 2015).
What kinds of learning activities are targeted?
Formal learning typically involves a mix of instructional activities and supporting materials, such as lectures, tutorials, assignments, projects, labs, exercises, class discussions, and team work. A sizable part of the papers (16) studied gamification of courses as a whole, which implies gamifying a range of learning activities. Half of these are studies of gamified online courses (Amriani et al., 2014; Bernik et al., 2015; Jang et al., 2015; Krause et al., 2015; Leach et al., 2014; Sillaots, 2014; Utomo & Santoso, 2015), while the remainder are regular courses, typically with web-based learning support. Online learning normally requires stronger motivation, which makes it a somewhat more promising field for applying gamification. Although this would suggest a higher concentration of studies on gamified online learning, our findings indicate the opposite.
As illustrated in Table 3, the majority of works (36) studied the effect of gamification on general class activities (16) or on a particular learning activity, such as exercises (6), collaboration/discussion forums (4), projects/labs (6), or tests (4). Another part of the papers addresses activities with an indirect effect on learning, such as engaging students in more regular interactions with the learning environment (11). The category “Others” includes perception studies (Davis & Klein, 2015), augmented game mechanics studies (Pedro et al., 2015a), a specific activity (Mekler et al., 2015), and platform-dependent studies (Su & Cheng, 2015).
Although six studies address “Exercises”, limited attention is still given to gamifying activities where students can learn through experimenting and retrying without fear of negative consequences. One observation that can be drawn from this distribution is that learning activities involving tasks that are decomposable into simpler subtasks, or tasks whose performance is measurable against an obvious rewarding scheme or skill, are better candidates for gamification.
What combinations of game elements are studied?
According to (Deterding et al., 2011), gamification is the use of game design elements in non-game contexts. In turn, the game design elements used in the creation of gamification scenarios can be divided into three categories: dynamics, mechanics, and components (Werbach & Hunter, 2012) (see Footnote 1). Dynamics represent the highest conceptual level in a gamified system; they include constraints, emotions, narrative, progression, and relationships. Mechanics are the sets of rules that dictate the outcome of interactions within the system, while dynamics are users’ responses to collections of those mechanics. The game mechanics refer to the elements that move the action forward; they include challenges, chance, competition, cooperation, feedback, resource acquisition, and rewards. Components are at the basic level of the gamification process and encompass the specific instances of mechanics and dynamics. They include achievements, avatars, badges, collections, content unlocking, gifting, leaderboards, levels, points, virtual goods, etc. For instance, points (components) provide rewards (mechanics) and create a sense of progression (dynamics). We note, however, that the gamification terminology is still unsettled and variations of the terms introduced above exist. When there is no danger of confusion, we will use the terms mechanics and dynamics to refer also to their specific instances, that is, components. Also, for consistency with our previous studies (Dicheva et al. 2015), we will use the term game elements to refer to game components.
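The three-level structure described above can be pictured as a simple mapping from concrete components up to the mechanics and dynamics they serve. The following minimal Python sketch encodes the points → rewards → progression example given in the text; the remaining entries (badges and leaderboards) are our own illustrative assumptions, not categorizations taken from Werbach and Hunter.

```python
from typing import Dict, Tuple

# component -> (mechanic it instantiates, dynamic it supports)
# The "points" row follows the example in the text; the other rows are
# illustrative assumptions only.
GAME_ELEMENT_LAYERS: Dict[str, Tuple[str, str]] = {
    "points":      ("rewards",     "progression"),
    "badges":      ("rewards",     "progression"),    # assumption
    "leaderboard": ("competition", "relationships"),  # assumption
}

def describe(component: str) -> str:
    """Render the component -> mechanic -> dynamic chain as a sentence."""
    mechanic, dynamic = GAME_ELEMENT_LAYERS[component]
    return (f"{component} (component) instantiates {mechanic} (mechanic) "
            f"and supports {dynamic} (dynamic)")

if __name__ == "__main__":
    for element in GAME_ELEMENT_LAYERS:
        print(describe(element))
```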
Most educational gamification studies and applications are driven by the presumption that gamification in education consists chiefly of incorporating a suitable combination of game elements into learning activities. However, our review shows that the empirical studies aimed at understanding which game elements, and under what circumstances, can drive desired behavior are not very systematic. In the reviewed collection, 11 papers report studies of the effect of a single game element, 8 papers study gamified systems using 2 game elements, 16 papers study gamified systems with 3 game elements, and the remaining 16 papers report results of gamifying systems by incorporating more than three elements (see Table 4).
In all reviewed works except (Tu et al., 2015), which investigates the relation between gamers’ personality and their game dynamics preferences, the gamification studies focus on the use of game elements (i.e., game components in the terms of (Werbach & Hunter, 2012)). Typically, no justification is given for the selection of particular game elements. There is a need for more studies that can improve our understanding of how individual game elements are linked to behavioral and motivational outcomes and how they function in a given educational context. Without understanding the effect of individual game elements, it is difficult to identify their contribution in studies that mix several game elements together.
The majority of gamification studies feature a subset of the following game elements: points, badges, levels, leaderboards, and progress bars. This is in line with the findings of other authors, e.g., (Nicholson, 2015), that the combination of points, badges, and leaderboards (sometimes referred to as PBL) is the most commonly used (see Table 5).
In the absence of other justification for the overuse of points, badges, and leaderboards, one possible explanation is that they somewhat parallel the traditional classroom assessment model and are also the easiest to implement. In its trivial form this combination can be applied to almost any context, even when there is no good reason to do so. Gamification with “deeper game elements” (Enders & Kapp, 2013), incorporating game design principles that involve game mechanics and dynamics such as challenges, choice, low-risk failure, role-play, or narrative, is still scarce. Only one work among the reviewed studies (Tu et al., 2015) addresses game dynamics explicitly. Studies utilizing “deeper game elements” to some extent are demonstrated in (Bonde et al., 2014; Boskic & Hu, 2015; Holman et al., 2015; Krause et al., 2015; Pettit et al., 2015). We believe that, in addition to reward and feedback mechanisms, gamified systems should provide safe places where learners can gain experience without being judged or punished for failure, drawing upon approaches similar to the online learning environments proposed by (Hakulinen et al., 2015) and (Lehtonen et al., 2015), where students can improve their algorithmic skills by practicing with interactive exercises (Dichev et al. 2014).
Three questions related to the use of combinations of game elements remain open: “Do more game elements produce better results than fewer?”, “Is the task of identifying the right combination of game elements for a given context and user group practically feasible?”, and “How can points and rewards be balanced with play and intrinsic engagement?”. Answering these questions and advancing the understanding of how to build successful gamified educational systems requires systems that support examining the effect of game elements and validating it experimentally. In particular, this implies a need for gamification platforms that support easy configuration of gamified learning prototypes with specific characteristics, leveraging different game features and principles.
The available evidence indicates that, in a learning context, gamification is more than mapping game elements onto existing learning content. It should offer stronger ways to motivate students, rather than being simply a stream of extrinsic motivators.
What types of studies?
The reviewed papers expand the scope of the empirical research on educational gamification compared to (Dicheva et al. 2015). Although the majority of empirical works still examine the impact of gamification on students’ engagement, performance, participation, or retention, they are widening and deepening the focus of their studies. A growing body of papers is exploring a range of learning and behavioral outcomes, including:
knowledge acquisition outcomes (Jang et al., 2015; Laskowski & Badurowicz, 2014; Paiva et al., 2015; Su & Cheng, 2015)
perceptual outcomes (Christy & Fox, 2014; Codish & Ravid, 2014; Davis & Klein, 2015; Pedro et al., 2015b; Sillaots, 2014; Sillaots, 2015)
behavioral outcomes (Barata et al., 2014; Codish & Ravid, 2015; Hakulinen et al., 2015; Hew et al., 2016; Pedro et al., 2015b)
engagement outcomes (Boskic & Hu, 2015; Chang & Wei, 2015; Ibanez et al., 2014; Latulipe et al., 2015; Morschheuser et al., 2014; Poole et al., 2014)
motivational outcomes (Hasegawa et al., 2015; Herbert et al., 2014; Mekler et al., 2015; Pedro et al., 2015a; Utomo & Santoso, 2015)
social outcomes (Hanus & Fox, 2015; Christy & Fox, 2014; Shi et al., 2014).
Under the perceptual outcome category, we have also included some works that initiate a new line of studies: the impact of gamification on different demographic groups. For example, (Pedro et al., 2015b) reported that the game mechanics implemented in a virtual learning environment did not have any effect on the motivation and performance of female students. These findings are in line with the conclusions reported by (Koivisto & Hamari, 2014), who showed in a more general context that women experience a greater effect when gamification contains social aspects, and men when it involves some form of competition. (Christy & Fox, 2014), on the other hand, concluded that the use of leaderboards within educational settings may act to create stereotype threat (a belief that one may be evaluated based on a negative stereotype). Their study found that women in the female-dominated leaderboard condition demonstrated stronger academic identification than those in the control and male-dominated leaderboard conditions. These results suggest that the use of leaderboards in academic environments can, in some circumstances, affect the academic performance of different demographics differently.
The motivational outcome category concerns concepts derived from motivational principles of games, such as explicit goals, rules, a feedback system, and voluntary participation (McGonigal, 2011). Motivation is demonstrated by an individual’s choice to engage in an activity and by the intensity of effort or persistence in that activity. Since video games are explicitly designed for entertainment, they can produce states of desirable experience and motivate users to remain engaged in an activity with unparalleled intensity and duration. Game design has therefore been adopted as an approach for making non-game activities more enjoyable and motivating. While gamification strives at its core to increase motivation, motivation is not a unitary phenomenon: different people may have different types and amounts of motivation, which can be shaped by the activity they are undertaking (Gooch et al., 2016). Additionally, success in one educational context does not guarantee that the same mechanism will be motivationally successful in another.
An important distinction in motivation research is that between intrinsic and extrinsic motivation (Ryan & Deci, 2000). While extrinsic motivation relies on incentives or expected consequences of an action, intrinsic motivation stems from fulfilling the action itself. According to Self-Determination Theory (Ryan & Deci, 2000), humans seek out activities to satisfy intrinsic motivational needs such as competence, autonomy, and relatedness. More specifically, (Ryan et al., 2006) argue that the intrinsic appeal of games is due to their ability to satisfy these basic psychological needs. While self-determination theory provides a good theoretical starting point for studying the motivational dynamics of ‘gamified’ educational activities, further research is needed to connect motivation to the more granular level of game elements and learners’ personalities. Although the connection between motivation and gamification design is demonstrated by a number of the reviewed studies, they do not add persuasive evidence confirming the effect of gamification as a motivational tool. Papers claiming to examine the motivational effects of gamification often report effects on learning outcomes instead of on motivation.
The reviewed collection of empirical studies on gamifying education is very diverse with respect to the focus of the studies and the reported outcomes. This makes it difficult to find a categorization that organizes the reviewed works into logical categories, captures their diversity, and at the same time places each work in a single category. We selected a categorization focused on the effects of gamification on learners. It includes four categories: affective (A), behavioral (B), cognitive (C), and others. The intention of this grouping was to use it as an organizational structure for connecting outcomes with game elements and gamified activities. As many outcomes fall into two categories under this categorization, we extended it with behavioral and cognitive (B + C), affective and cognitive (A + C), and affective and behavioral (A + B) groupings. Table 6 presents the studies falling into a single category, organized in three sections (behavioral, affective, and cognitive) and connecting their outcomes with the corresponding game elements and gamified activities. Table 7 presents the studies falling into two categories, organized in the same way.
The two tables provide a more compact view, capturing the links between three key categorizing variables: game elements, gamified activities, and reported outcomes. The more focused information extracted in the tables highlights the data relevant to the questions guiding the study. Although the empirical work on applying gamification in educational contexts continues to grow, there is not sufficient evidence of noticeable progress based on collating and synthesizing previous experience. While the range of gamifying strategies is expanding, they are scattered across many different educational contexts, and the aggregated information cannot yet confirm any emerging systematic approach. As can be seen from the tables, the empirical research on gamified learning is quite fragmented. It covers studies on different configurations of game elements, used to gamify different activities and resulting in different outcomes, without any identifiable pattern of distribution. For example, the points-badges-leaderboard configuration dominates, with 6 works studying its effect. However, the activities gamified with this configuration vary widely: project activities, course participation, online Java exercises, homework in high school LMS assessment, and overall course activities. Within the “Gamified activity” category, “Overall class activities” dominates, but again the configurations of game elements used to gamify it are very different: badges, leveling, autonomy, leaderboard, grade predictor; stamps, tokens, leaderboard; points, scoreboard, goals, avatar, feedback, levels, luck, competition; points, badges, leaderboard; points, leaderboard; badges, leaderboard, virtual coins, pseudonyms. The dearth of studies that build on previous ones, or that parallel their efforts in exploring particular aspects of the effect of gamification on engagement and learning, suggests a piecemeal approach. In the current studies, which mix together points, badges, leaderboards, progress, status, etc. without a discernible systematic experimental approach, it is difficult to identify which game elements or configurations are most effective in promoting engagement and supporting learning for a given activity and group of learners.
What types of goals?
We noticed that, in addition to the heterogeneous nature of the empirical research, the stated goals of the studies were not always in line with the reported outcomes. To provide an additional dimension for organizing and examining the links between the corresponding categorizing variables, we further grouped the studies according to their stated goal (see Appendix 2, which lists the reviewed studies along with their goals). The two top-level categories for grouping the studies based on their goals are learner-centric and platform-centric (see Table 8). The bulk of the works, which expand and differentiate the earlier research on the effect of gamification on learners (e.g., (Dicheva et al., 2015)), fall into the first category (44 papers). This category includes four subcategories, grouping the studies further as follows:
Behavioral and cognitive results (Group A): focusing on behavioral and cognitive effects caused by gamification.
Categories of learners (Group B): focusing on the effects of gamification on different groups of learners.
Learners’ perception (Group C): focusing on learners’ perception of different game mechanics and principles.
Measures (Group D): focusing on the measures used for assessing the outcomes.
These four groups cover a wide variety of goals. Group A includes studies of the effectiveness of gamification in the classroom, examined longitudinally (Hanus & Fox, 2015); the impact of gamification on retention and learning success (Jang et al., 2015; Krause et al., 2015), on participation and the quality of online discussions (Smith et al., 2014), and on reducing undesirable behaviors and increasing performance in virtual learning environments (Pedro et al., 2015b) and in personal learning environments (Lehtonen et al., 2015; Morschheuser et al., 2014); the effect of badges on student behavior (Hakulinen et al., 2015) and how badges predict student exam success (Boticki et al., 2015); the causal effect of gamifying a course project with leaderboards (Landers & Landers, 2015); the learning effectiveness of a gamified simulation (Bonde et al., 2014); and the effect of transforming a traditional course into a role-playing game (Boskic & Hu, 2015).
Progress has been made within the learner-centric category with explorations of the psychological effects of gamification, which can be summarized by the question: how are students with different personalities, dispositions, and learning styles influenced by game elements? While in our first review the question shared by most of the papers was “Is gamification effective?”, it now appears in an extended version, combined with the questions “for what?” and “to whom?”.
Group B includes papers on identifying learner types based on how students experience gamified courses (Barata et al., 2014) and how different learners perceive playfulness (Codish & Ravid, 2014), on the variation in motivation between learners with different gamification typologies (Herbert et al., 2014), on whether points, leaderboards, and levels increase performance, competence need satisfaction, and intrinsic motivation (Mekler et al., 2015), on involving Asian students in gamified course activities (Hew et al., 2016), and on the predictive effect of gaming personality on game dynamic preferences (Tu et al., 2015). Even though the number of papers addressing the question “to whom” is still limited, an emerging shared message, particularly relevant to instructional designers, is that what one learner values, another may not, and what one learner believes is achievable, another may not. Understanding differences in learners’ drivers, in what they value and what they dislike, is important for the design of reward, progress, and feedback systems capable of achieving the desired outcomes for the intended groups of learners.
Group C includes papers on students’ perceptions of simple game elements such as badges (Davis & Klein, 2015) or a combination of points and badges (Paiva et al., 2015). It also includes studies on how students perceive a game-like course (Sillaots, 2015) and on profiling learners based on their gamification preferences (Knutas et al., 2014).
Another emerging topic in this category groups works on measuring the impact of gamification (Group D). This group includes papers on the impact of gamification on students’ engagement and how to measure that impact (Simoes et al., 2015), on the effectiveness of gamification behavior patterns as a measure of playfulness (Codish & Ravid, 2015), and on how predictive measurements can help students plan their pathways in gamified courses (Holman et al., 2015). While gamification is promoted as a motivational instrument, studies measuring its motivational effects are still limited.
In the second category we have placed seven articles, which study the effect of incorporating selected game elements or game principles into specific learning platforms, or which experiment with conventional game elements by assigning them new roles. This category includes studies of employing gamification in audience response systems (Barrio et al., 2015; Pettit et al., 2015), in mobile learning systems (Su & Cheng, 2015), in Learning Management Systems (Lambruschini & Pizarro, 2015), and in MOOCs (Chang & Wei, 2015). Two papers explore creating badges as a tool for measuring students’ interest (Tvarozek & Brza, 2014) and the effect of collaborative badge creation on engagement and motivation (Pedro et al., 2015a). The papers listed in the platform-centric category do not cover all gamified platforms proposed in the reviewed papers: when, in our judgment, the focus of a paper was on behavioral effects, as for example in (Krause et al., 2015), that paper was included in the first category. The availability of successful gamified platforms will help widen the scope of gamified educational activities and create a ground for broadening experimental studies towards developing evidence-based practices.
How conclusive are the reported results?
One of the evolving goals of this review was to take a closer look at the evidence supporting the ‘positive’ or ‘negative’ results of the empirical studies as reported by their authors. This was prompted by the fact that some of the papers studying the effect of gamification on learners reported a mix of positive and negative results, others were inconclusive, and yet others expressed a degree of caution, while the strength of the evidence backing the positive and negative results varied significantly.
A common pattern observed in most studies is to design and develop a particular gamified course, activity, or environment, test it in a pilot, and assess users’ approval and gains in performance. The reported outcome often concludes that the gamification produced the pursued learning gains and that the users appreciated the added gamification features. Irrespective of the goals of the studies, work on gamifying education should be subject to the same level of skepticism and scrutiny that is applied to any other area of empirical research. In order to improve our understanding and to offer a more realistic picture of the nature of the effects of using gamification in education, consistent with the presented evidence, we undertook a more in-depth examination of the reviewed papers, focusing on both the reported outcomes and how they were obtained. The primary aim of this effort was twofold: (i) to provide a critical review questioning the validity of some reported outcomes, and (ii) to offer a picture that avoids the harmful effects of a one-sided viewpoint.
Our decision on the validity of the gamification studies was guided by the following factors: the sample size, the number of study groups, the length of the study, how the data were collected, how the variables were controlled, how and by what statistical procedures the data were analyzed, how well the conclusions are supported by the data, and whether the study gives enough information to convince the reader of the correctness of its evaluation conclusions. The examination of the selected papers indicated that the empirical studies tended to use surveys and quasi-experimental designs, while randomized controlled trials were less common. Based on the nature of the empirical study, the papers were partitioned into two major categories: ABC studies, which target Affective/Behavioral/Cognitive outcomes, and non-ABC studies. The ABC studies were further partitioned into three subcategories, positive, negative, and inconclusive, based on the reliability of the evidence for the reported ABC outcome. The outcomes were marked as “positive” if valid evidence confirms the claim and as “negative” if the evidence confirms its negation. The studies were marked as “inconclusive” if the presented evidence was judged insufficient owing to inadequacies such as small sample sizes, lack of comparison groups, use of purely descriptive statistics, short experiment timeframes, or unreliable statistical evidence. For example, reported positive effects of gamification based on a two-week study could be attributed to the ‘novelty effect’ of the tool or approach used rather than to the added gamification features. In the inconclusive category we also included papers studying gamification in combination with other factors, which makes it uncertain whether the observed effects can be attributed to the gamification or to the other variables, as well as papers where no positive effect was found but a negative effect was not discernible either.
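As a rough illustration of how such validity criteria might be operationalized, the sketch below encodes the positive/negative/inconclusive labeling as a small Python function. The criteria mirror those named above (small samples, missing comparison groups, descriptive-only statistics, short timeframes, confounded designs), but the field names and threshold values are our own illustrative assumptions and are not taken from the review.

```python
from dataclasses import dataclass

@dataclass
class StudyEvidence:
    """Minimal evidence profile for one ABC study (field names are ours)."""
    sample_size: int
    has_comparison_group: bool
    uses_inferential_stats: bool      # beyond purely descriptive statistics
    duration_weeks: int
    confounded_with_other_factors: bool
    reported_effect: str              # "positive" or "negative", as claimed by the authors

def classify(study: StudyEvidence, min_sample: int = 30, min_weeks: int = 4) -> str:
    """Label a study positive, negative, or inconclusive.

    The numeric thresholds (min_sample, min_weeks) are illustrative
    assumptions, not values stated in the review.
    """
    inadequate = (
        study.sample_size < min_sample
        or not study.has_comparison_group
        or not study.uses_inferential_stats
        or study.duration_weeks < min_weeks        # possible novelty effect
        or study.confounded_with_other_factors
    )
    return "inconclusive" if inadequate else study.reported_effect

# Example: a two-week pilot without a control group would be marked inconclusive.
pilot = StudyEvidence(45, False, True, 2, False, "positive")
print(classify(pilot))  # -> "inconclusive"
```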
The classification of the papers according to our judgment of the degree of validity of the reported results is presented in Table 9, and the proportions of the resulting grouping of the ABC papers in Fig. 1.
Fig. 1. Distribution of the behavioral studies by degree of evidence
The paper grouping, based on the strength of the presented evidence, reveals that the high expectations for positive outcomes from gamified learning are not confirmed by the results of the reviewed empirical studies (see Fig. 1).
The examination of the papers shows that of the 41 ABC empirical studies, only 15 present conclusive evidence for the reported outcomes. In those 15 papers, the findings related to the benefits of gamification are mixed: 12 studies present evidence for positive effects of gamification in educational settings, while 3 present evidence for negative effects. A surprising fact is that the vast majority of the empirical works (25 studies) report inconclusive outcomes, which means that there is no basis for confidence in the reported results. Such outcomes obscure the level of progress in the area of educational gamification. Table 10 and Table 11 below are obtained from Table 6 and Table 7, respectively, by eliminating the studies marked as inconclusive. With this relatively small number of (15) papers and a diverse spectrum of game elements and activities, the presented outcomes are insufficient to draw definitive conclusions about the effectiveness of gamification for students’ engagement, learning, or participation. This judgment can be interpreted as an answer to the first guiding question about the existing empirical evidence for the impact of gamification on motivational processes and the effectiveness of learning. Currently, there is a dearth of quality empirical evidence to support general claims about the impact of gamification on students’ learning and motivation. While 12 studies report encouraging outcomes, they cover a range of specific combinations of game elements, specific activities, and outcomes, and thus do not support practical generalization. It would be short-sighted to assume that gamified implementations with the same configurations of game elements would function similarly across different educational contexts. For example, (Hakulinen et al., 2015) present convincing evidence that points, badges, and a leaderboard incorporated in online Java exercises increase the use of an open learning environment. However, with the current understanding of the motivational mechanisms afforded by gamification, we cannot generalize this claim to other activities, game element combinations, or academic subjects. In general, studies reporting positive results from using a specific combination of game elements do not advance the understanding of the causal effect of the combination, as it is unclear whether the combination or a particular element led to the positive outcome (e.g., Bonde et al., 2014; Jang et al., 2015). Negative results, such as those of Hanus and Fox (2015), who reported that badges, a leaderboard, virtual coins, and pseudonyms incorporated in a communication course can have a detrimental effect on students’ motivation, satisfaction, and empowerment, help to understand the limits of gamification. Again, the results obtained from such studies should be interpreted in a restricted manner, for the specific combination of game elements, gamified activity, academic subject, and age group. The piecemeal approach observed in the reviewed studies slows down the advancement of the understanding of the effect of incorporating game elements in learning activities. From the 14 studies listed in the two tables, with 14 different combinations of game elements and 15 different gamified activities, it is difficult to derive useful information on how to gamify a new (different) activity with predictable outcomes.
For example, two papers, (Hakulinen et al., 2015) and (Landers & Landers, 2015), report positive outcomes for using single game elements, but one is for badges and the other for leaderboards. On the other hand, two of the studies reporting negative results deal with Mathematics (Attali & Arieli-Attali, 2015; Long & Aleven, 2014), but in these two cases the game elements, the learning activities, the student level, and the gender composition vary. In addition, the mix of badges, levels, leaderboards, progress, feedback, status, and avatars used in the conclusive studies makes it hard to know which of these elements actually worked. Furthermore, the fundamental differences in the studied educational contexts hamper the transfer of experimented practices from one learning situation to another. All this suggests a need for a more systematic program of experimental studies.
We note that our judgment of inconclusiveness can be viewed as rather subjective. Therefore, Table 12 presents the papers judged as “inconclusive” along with a short explanation for placing them in this group. In several cases our judgment simply conveys the paper’s own conclusion, where the authors themselves acknowledge that the results of the study should be interpreted with caution.
While it seems apparent that gamification has the potential to create enhanced learning environments, there is still insufficient evidence that it (1) produces reliable, valid, and long-lasting educational outcomes, or (2) does so better than traditional educational models. There is still insufficient empirical work that investigates the educational potential of gamification in a rigorous manner. Increasing the number of studies that use randomized controlled trials or quasi-experimental designs will increase the scientific robustness of the field. The continued (and coordinated) collection of evidence, that is, data that substantiate the successes and failures of gamification, remains crucial for building an empirical knowledge base, consolidating best practices, extracting guidelines, and eventually developing predictive theories. It is necessary to strengthen the methodological base of gamified learning and to systematically enlarge the body of evidence that explains what factors and conditions produce desirable outcomes. Empirical research should thereby not be fixated only on the pros of gamified learning, but should also be open to the cons and to the conditions under which gamification for learning should be avoided (Linehan et al., 2011; Westera, 2015).
Indirectly related to the conclusiveness of the reported results are the measurements used. A significant number of the studies (15) use performance as a measure of the effect that gamification has on the studied activities. This is understandable for several reasons. First, the driving criterion for adopting any technology in education is whether and how much it can improve learning. Second, one can argue that high learner performance provides evidence of learner motivation, since performance has been shown to correlate with motivation. However, such an approach is imperfect. Performance is an indirect measure of motivation that is influenced by many non-motivational factors, such as ability, prior knowledge, and quality of instruction, whereas motivation is the actual driving force that makes individuals want to do something and helps them continue doing it. Therefore, it is beneficial to understand the motivational triggers that engage learners. This suggests a need for studies that utilize more reliable measures of motivation and better characterize how gamification influences learner motivation and, consequently, how it improves learner engagement and outcomes. Motivation is associated with a number of learning-related concepts, such as engagement, effort, goals, focus of attention, self-efficacy, confidence, achievement, and interest. Improving our understanding of the motivational aspects of gamification will enable us to predict its effect on these related concepts. In addition, it will help improve gamification design, in particular how to design a gamified experience that strengthens the motivation of a given population of learners and leads to desirable learning outcomes.
Theoretical perspective
Gamification is growing as an area of both practice and research. The majority of the studies reviewed in the previous sections lack a theoretical underpinning that would help to understand the researchers’ motivation and to justify how their gamification approach is supported by a theoretical framework. For completeness of the review, in this section we outline theoretical works underpinning the use of gamification in education published within the review period. Overall, the bulk of the theoretical research on gamification maintains that focusing on points and rewards rather than on play and intrinsic engagement cannot reliably achieve the desired behavior change, because it fails to cater to the intrinsic values of learners (Hansch et al., 2015; Songer & Miyata, 2014; Tomaselli et al., 2015). This suggests a user-centered approach to the design of gamified systems, characterized by a focus on the needs and desires of learners. A new line of research is taking steps towards developing a theory of educational gamification by combining motivational and learning theories aimed at linking gamification to educational practice (Landers, 2015; Landers, Bauer, Callan, & Armstrong, 2015), or by developing a framework for integrating gamification with pedagogy (Tulloch, 2014) or with the psychology of games (Lieberoth, 2015).
Tulloch (2014) maintains that gamification is the product of an overlooked history of pedagogic refinement: a history of training that is effective but largely ignored, namely the process by which games teach players how to play. He challenges the evolving concept of gamification, conceptualizing it not as a simple set of techniques and mechanics, but as a pedagogic heritage and an alternative framework for training and shaping participant behavior that has at its core the concepts of entertainment and engagement. Biro (2014), in turn, considers gamification a new educational theory, alongside behaviorism, cognitivism, constructivism, and connectivism.
Songer and Miyata (2014) propose to deviate from the simple game elements often found in gamification approaches and move to a “gameful” experience that fosters the intrinsic motivation of players. The authors address the issue of gamifying educational contexts with discussions of gamer motivations, the relationship between games and play, and designs for optimal learning within games. Based on the theoretical foundations of behavioral psychology, anthropology, and game studies, they propose a model for the design and evaluation of playful experiences in learning environments inspired by game design.
With related concerns, (Tomaselli et al., 2015) analyze the most engaging factors for gamers in the context of gamification by questioning the relevance of some of the most used gamification strategies, such as attributing points, badges, or simple reputation elements to users. The authors explore how engagement is associated with a variety of types of contemporary digital games. The results show that, although there is some support for the importance of competition against peers, contrary to the currently prevailing understanding it is the challenge of overcoming and mastering the game’s obstacles that matters most to players, regardless of the type of game. The takeaway message is that designers of gamified systems should be less concerned with using rankings and online comparisons to encourage users to compete against each other, and more with their use as a personal reference, creating challenging environments and guidance for users to achieve their mastery interests.
Landers (2015) argues that no single theory is able to explain gamification. Accordingly, he presents a set of theories, organized into motivational and learning theories, that are most likely to explain the effects of gamification when it is implemented as an instructional intervention. Among the learning theories, Landers identifies two major frameworks for describing the learning outcomes of gamification: the theory of gamified instructional design and classic conditioning theories of learning. He also identifies three major types of motivational theories: expectancy-based theories, goal-setting theory, and self-determination theory.
The theory of gamified learning proposed by (Landers et al., 2015) provides two specific causal pathways by which gamification can affect learning, and a framework for testing these pathways. In both pathways, gamification is aimed at affecting a learning-related behavior. In the first, this behavior moderates the relationship between instructional quality and learning; in the second, it mediates the relationship between game elements and learning. Critically, one or both of these processes may be involved in any particular gamification effort. For gamification to be effective, it must successfully alter an intermediary learner behavior or attitude, and that behavior or attitude must then itself cause changes in learning directly or strengthen the effectiveness of existing instruction.
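Read as statistical models, the two pathways correspond to the standard moderation and mediation formulations. The sketch below is our own rendering with generic symbols (B for the learner behavior or attitude, E for game elements, I for instructional quality, L for learning); the notation is not taken from Landers et al.

```latex
% Moderation pathway: B strengthens (or weakens) the effect of instruction on learning
L = \beta_0 + \beta_1 I + \beta_2 B + \beta_3 (I \times B) + \varepsilon

% Mediation pathway: game elements change B, and B in turn changes learning
B = \alpha_0 + \alpha_1 E + u, \qquad
L = \gamma_0 + \gamma_1 B + \gamma_2 E + v
```

In the moderation sketch, a non-zero interaction coefficient (beta_3) means the behavior amplifies or dampens the effect of instructional quality; in the mediation sketch, the indirect effect of game elements on learning runs through the product of alpha_1 and gamma_1.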
In their explorative study, Hansch et al. (2015) examine the motivational potential of gamification in online learning. Through a review of ten platforms and an in-depth analysis, they explore how the motivational potential of gamification mechanics and the social and interactive elements of online learning can be effectively combined to build a community of engaged learners. The authors conclude that the starting point in gamifying online education should be learners’ needs, motivations, and goals, rather than a platform-centric approach that strives to use technical features to hit pre-defined performance metrics.
According to Lieberoth (2015), it might not be the game itself that stimulates individuals, but rather the packaging: the fact that an activity resembles a game. Simply framing an activity as a “game” can potentially alter an individual’s behavior. To demonstrate this insight, Lieberoth designed an experiment focusing on the psychological effects of framing tasks as games versus including game mechanics. The outcomes indicate that engagement and enjoyment increased significantly due to the psychological effect of framing the task as a game. Furthermore, no additional increase in interest or enjoyment was measured when actual game mechanics were added to a task already framed as a game. This study reveals an interesting psychological perspective on gamification in educational environments: merely making an activity seem like a game impacts learners’ engagement.
In addition to the gamification works with a theoretical, conceptual, or methodological orientation, five literature reviews (Borges, Durelli, Reis, & Isotani, 2014; Caponetto et al., 2014; Dicheva & Dichev, 2015; Faiella & Ricciardi, 2015; Gerber, 2014) have been published over the last two years. While these reviews synthesize the empirical research on gamification in education, none of them provides a critical analysis of the strengths and weaknesses of the research findings of the reviewed studies. The present review addresses this gap by analytically evaluating the validity of the reported results.
Research on gamification frameworks, platforms, and toolsets that help make the design and development of gamification applications easier, faster, and cheaper has also been progressing in the last few years. Since the current research on gamification-specific frameworks is not explicitly driven by educational objectives, we refer interested readers to a corresponding literature review on this topic (Mora et al., 2015).
While the reviewed theoretical studies touch on interesting points, the topics covered are insufficient for a complete understanding of the motivational mechanisms of gamification in educational contexts. Without a theoretical framework backing the design of the studies and the interpretation of their results, it is problematic to select an appropriate gamification structure or to differentiate which of the employed game mechanisms and principles were essential for arriving at successful outcomes. Hence, there is a need for theoretical and empirical studies capable of mutually advancing each other. This will allow bridging the identified gaps in order to understand how gamification in education works, when it works best, and what its limits and key strategies are.
Gamification in education is an approach for encouraging learners’ motivation and engagement by incorporating game design principles into the learning environment. Sustaining students’ motivation has been a long-standing challenge for education, which explains the significant attention that gamification has gained in educational contexts: its potential to motivate students. However, the process of integrating game design principles within varying educational experiences appears challenging, and there are currently no practical guidelines for how to do so in a coherent and efficient manner. The discussion in the present review has been structured around the combinations of game elements used, the gamified subjects, the types of learning activities, and the identified goals, ending with a thorough discussion of the reliability and validity of the reported outcomes. The review confirmed that the research on gamification is very diverse with respect to the focus of the studies, the reported outcomes, and the methodological approaches. It also indicates that the research focus at present is mainly on empirical studies, with less attention to theoretical considerations. Moreover, the majority of the studies target college students. A number of gamification approaches, driven by specific objectives, have been applied to support learning and related activities in a variety of educational contexts. Studies on how distinct categories of learners are affected by gamification, on what to measure as an outcome, and on how to add a gamified layer to a core activity are also emerging. Despite the fact that gamification in education is still a growing phenomenon, the reviewed studies indicate that (i) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods, (ii) insufficient high-quality evidence exists to support the long-term benefits of gamification in educational contexts, and (iii) the understanding of how to gamify an activity depending on the specifics of the educational context is still limited.
We have identified a growing number of studies reporting empirical evidence for the effectiveness of gamification in educational contexts. At the same time, it is noticeable that a growing body of reported results is backed by inconclusive or insufficient evidence for making valid claims about the efficacy of gamification in education. Possible reasons for this are, on the one hand, the hype around publishing on gamification and, on the other, the tendency to address overly broad research questions with limited supporting evidence. Questions such as whether gamification motivates students, improves learning, or increases participation are too broad. Instead, the focus should be narrowed to questions of the type: are game design elements G effective for learners of type L participating in an activity of type A? All this indicates a need for a systematic program of experimental studies mapping game elements to the learning and motivational specifics of individual learners or groups of learners. Another grey area that deserves attention is how to avoid gamification scenarios that can harm learning.
Gamification is a psychologically driven approach targeting motivation, the desire and willingness to do something. From a technical perspective, it is a motivational design problem. While the majority of the reviewed studies do analyze specific educational effects of gamification (on learning, attainment, participation), their focus is often not on motivation itself. When motivation is targeted, it is typically examined through observable indicators, such as grades or attendance, that are not always directly linked to it. As a result, the educational benefits of gamification in terms of increasing student motivation, or of linking this motivation to learning outcomes, are still not well understood.
While the effort to understand the effects of gamification on learning is expanding, there is a need to explore the effect of game design elements in the broad sense, including game mechanics and game dynamics, and across learning contexts. The observed emphasis on points, badges, and leaderboards is too narrow to address the relevant motivational factors. It is also crucial to understand the target population of a gamified system in order to gamify a learning activity successfully. Specifically, the unique needs and preferences of each group of learners, along with the particular learning objectives relevant to that group, must inform the choice of game elements.
A comparison of the results of this survey with the previous ones, which marked the climb toward inflated expectations, indicates a trend of declining expectations. The rise and fall of expectations for applying gamification in educational contexts is nothing out of the ordinary. Most emerging technologies, and the accompanying research, go through an initial period of hype, as described by Gartner's Hype Cycle, before evolving into a second period of measured popularity in which they attain maturity and meet expectations (Naik, 2015). There are several assumptions underlying the usefulness of gamification in educational contexts, for example that gamification is motivating, that it is engaging, and that it can improve attendance and participation. However, research remains inconclusive on these assumptions. The educational contexts in which gamification may be particularly useful have not yet been confirmed. This does not mean, though, that gamification cannot be used successfully in a learning context; it simply means that its educational benefits have not yet been scientifically confirmed. Only continued theoretical and rigorous systematic empirical work in varying gamification settings and across contexts will enable us to establish a practical, comprehensive, and methodical understanding of the benefits of applying gamification in educational contexts.
This terminology has been popularized through the book “For the win: How game thinking can revolutionize your business” by Werbach and Hunter and a series of Coursera’s MOOCs.
Amriani, A., Aji, A., Utomo, A. Y., Wahidah, F., & Junus, K. (2014). Gamified E-learning model based on community of inquiry. In 2014 IEEE International Conference on Advanced Computer Science and Information Systems, Jakarta , 474–480, doi: 10.1109/ICACSIS.2014.7065830 .
Anderson, P. E., Nash, T., & McCauley, R. (2015). Facilitating programming success in data science courses through gamified scaffolding and Learn2Mine, ITICSE ‘15, July 04–08, 2015, Vilnius, Lithuania .
Attali, Y., & Arieli-Attali, M. (2015). Gamification in assessment: do points affect test performance? Computers & Education, 83 , 57–63.
Auvinen, T., Hakulinen, L., & Malmi, L. (2015). Increasing Students’ Awareness of their Behavior in Online Learning Environments with Visualizations and Achievement Badges, IEEE Transactions on Learning Technologies , 8 (3), 261–273, doi: 10.1109/TLT.2015.2441718 .
Barata, G., Gama, S., Jorge, J., & Gonçalves, D. (2014). Identifying student types in a gamified learning experience. International Journal of Game-Based Learning (IJGBL), 4 (4), 19–36.
Barrio, C.M., Organero M.M., & Soriano, J. S. (2015). Can Gamification Improve the Benefits of Student Response Systems in Learning? An Experimental Study. IEEE Transactions on Emerging Topics in Computing, PP (99). doi: 10.1109/TETC.2015.2497459 .
Bernik, A., Bubaš, G., & Radošević, D. (2015). A pilot study of the influence of gamification on the effectiveness of an e-Learning Course. In 26th Central European Conference on Information and Intelligent Systems (CECIIS 2015) . (pp. 73–79). Varazdin: Faculty of Organization and Informatics.
Biro, G. I. (2014). Didactics 2.0: a pedagogical analysis of gamification theory from a comparative perspective with special view to the components of learning. Procedia - Social and Behavioral Sciences, 141 , 148–151.
Bonde, M. T., Makransky, G., Wandall, J., Larsen, M. V., Morsing, M., Jarmer, H., & Sommer, M. O. (2014). Improving biotech education through gamified laboratory simulations. Nature Biotechnology, 32 (7), 694–697.
Borges, S. S., Durelli, V. H. S., Reis, H. M., & Isotani, S. (2014). A systematic mapping on gamification applied to education. In ACM SAC’14 Conference, Gyeongju, South Korea (pp. 216–222).
Boskic, N., & Hu, S. (2015). Gamification in higher education: how we changed roles. European Conference on Games Based Learning , (pp. 741–748). Reading, UK: Academic Conferences and Publishing International Limited.
Boticki, I., Baksa, J., Seow, P., & Looi, C. K. (2015). Usage of a mobile social learning platform with virtual badges in a primary school. Computers & Education, 86 , 120–136.
Broer, J. (2014). Gamification and the trough of disillusionment. In A. Butz, M. Koch, & J. Schlichter (Eds.), Mensch & Computer 2014 - Workshopband (pp. 389–395). Berlin: De Gruyter Oldenbourg.
Caponetto, I., Earp, J., & Ott, M. (2014). Gamification and education: a literature review. In 8th European Conference on Games Based Learning (pp. 50–57). Germany: ECGBL. ISBN 978-1-910309-55-1.
Chang, J.W., & Wei, H.Y. (2015). Exploring Engaging Gamification Mechanics in Massive Online Open Courses, Educational Technology & Society , 12/2015 (in print).
Christy, K. R., & Fox, J. (2014). Leaderboards in a virtual classroom: a test of stereotype threat and social comparison explanations for women’s math performance. Computers & Education, 78 , 66–77.
Codish, D., & Ravid, G. (2014). Academic course gamification: the art of perceived playfulness. Interdisciplinary Journal of E-Learning and Learning Objects, 10 , 131–151.
Codish, D., & Ravid, G. (2015). Detecting playfulness in educational gamification through behavior patterns. IBM Journal of Research and Development, 59 (6), 1–14. doi: 10.1147/JRD.2015.2459651
Davis, K., & Klein, E. (2015). Investigating high school students’ perceptions of digital badges in afterschool learning. In ACM Conference on Human Factors in Computing Systems (CHI ‘15) , (pp. 4043–4046). New York, NY: ACM
Deterding, S., Dixon, D., Khaled, R., & Nacke, L. (2011). From game design elements to gamefulness: defining gamification. In 15th International Academic MindTrek Conference: Envisioning Future Media Environments (pp. 9–15). New York, NY: ACM.
Dichev, C., Dicheva, D., Angelova, G., & Agre, G. (2014). From gamification to gameful design and gameful experience in learning. Journal of Cybernetics and Information Technologies, 14 (4), 80–100. doi: 10.1515/cait-2014-0007
Dicheva, D., & Dichev, C. (2015). Gamification in education: where are we in 2015? World Conference on E-Learning (ELEARN 2015), Kona, Hawaii, October 19–22 (pp. 1276–1284). Chesapeake: AACE.
Dicheva, D., Dichev, C., Agre, G., & Angelova, G. (2015). Gamification in education: a systematic mapping study. Educational Technology & Society, 18 (3), 75–88.
Enders, B., & Kapp, K. (2013). Gamification, Games, and Learning: What Managers and Practitioners Need to Know, Hot Topics, The eLearning Guild Research , 1–7.
Faiella, F., & Ricciardi, M. (2015). Gamification and learning: a review of issues and research. Journal of e-Learning and Knowledge Society, 11 (3), 13–21.
Gartner. (2013). Hype cycle for emerging technologies, 2013 . Retrieved from http://www.gartner.com/document/2571624 . Accessed 15 Apr 2016.
Gerber, H. (2014). Problems and Possibilities of Gamifying Learning: A Conceptual Review. Internet Learning Journal, 3 (2), Article 5. Retrieved from http://digitalcommons.apus.edu/internetlearning/vol3/iss2/5 . Accessed 16 Jan 2017.
Gooch, D., Vasalou, A., Benton, L., & Khaled, R. (2016). Using gamification to motivate students with dyslexia . CHI 2016, May 7–12 2016, San Jose.
Hakulinen, L., Auvinen, T., & Korhonen, A. (2015). The effect of achievement badges on students’ behavior: an empirical study in a university-level computer science course. International Journal of Emerging Technologies in Learning (iJET), 10 (1), 18–29.
Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? – A literature review of empirical studies on gamification. In 47th Hawaii International Conference on System Sciences, Hawaii, USA (pp. 3025–3034). doi: 10.1109/HICSS.2014.377 .
Hansch, A., Newman, C., & Schildhauer, T. (2015). Fostering Engagement with Gamification: Review of Current Practices on Online Learning Platforms. (November 23, 2015). HIIG Discussion Paper Series No. 2015–04. Retrieved from http://dx.doi.org/10.2139/ssrn.2694736 .
Hanus, M. D., & Fox, J. (2015). Assessing the effects of gamification in the classroom: a longitudinal study on intrinsic motivation, social comparison, satisfaction, effort, and academic performance. Computers & Education, 80 , 152–161.
Hasegawa, T., Koshino, M., & Ban, H. (2015). An English Vocabulary Learning Support System for the Learner’s Sustainable Motivation. Springer Plus: Innovative Cloud Application in Computer Intelligence, 4 (99). doi: 10.1186/s40064-015-0792-2 .
Herbert, B., Charles, D., Moore, A., & Charles, T. (2014). An investigation of gamification typologies for enhancing learner motivation. In International Conference on Interactive Technologies and Games, UK (pp. 71–78).
Hew, K. F., Huang, B., Chu, K. W. S., & Chiu, D. K. W. (2016). Engaging Asian students through game mechanics: findings from two experiment studies. Computers & Education, 92–93 , 221–236.
Holman, C., Aguilar, S., & Fishman, B. (2013). GradeCraft: what can we learn from a game-inspired learning management system? Third International Conference on Learning Analytics and Knowledge, 2013 , (pp. 260–264). New York, NY: ACM.
Holman, C., Aguilar, S. J., Levick, A., Stern, J., Plummer, B., & Fishman, B. (2015). Planning for success: how students use a grade prediction tool to win their classes. In 2015 Third International Conference on Learning Analytics and Knowledge, (pp. 260–264). New York, NY: ACM.
Ibanez, M., Di Serio, A., & Delgado-Kloos, C. (2014). Gamification for engaging computer science students in learning activities: a case study. IEEE Transactions on Learning Technologies, 7 (3), 291–301.
Jang, J., Park, J., & Yi, M. Y. (2015). Gamification of online learning. In 17th International Conference on Artificial Intelligence in Education (AIED) (pp. 646–649). Switzerland: Springer International Publishing.
Knutas, A., Ikonen, J., Maggiorini, D., Ripomonti, L., & Porras, J. (2014a). Creating software engineering student interaction profiles for discovering gamification approaches to improve collaboration. In 15th International Conference on Computer Systems and Technologies (CompSysTech’14) , (pp. 378–385). New York, NY: ACM.
Knutas, A., Ikonen J., Nikula, U., & Porras, J. (2014). Increasing Collaborative Communications in a Programming Course with Gamification: A Case Study . 15th Int. Conference on Computer Systems and Technologies (CompSysTech’14) , (pp. 370–377). New York, NY: ACM
Koivisto, J., & Hamari, J. (2014). Demographic differences in perceived benefits from gamification. Computers in Human Behavior, 35 , 179–188.
Krause, M., Mogalle, M., Pohl, H., & Williams, J. J. (2015). A playful game changer: fostering student retention in online education with social gamification. In L@S’15 Proc. of Learning@ Scale Conference (pp. 95–102). Vancouver: ACM Press.
Lambruschini, B. B., & Pizarro, W. G. (2015). Tech - Gamification in University Engineering Education. In 10th International Conference on Computer Science & Education (ICCSE 2015) (pp. 295–299) IEEE Conference Publications, doi: 10.1109/ICCSE.2015.7250259 .
Landers, R. N. (2015). Developing a theory of gamified learning: linking serious games and gamification of learning. Simulation & Gaming, 45 , 752–768.
Landers, R. N., Bauer, K. N., Callan, R. C., & Armstrong, M. B. (2015). Psychological theory and the gamification of learning. In T. Reiners & L. Wood (Eds.), Gamification in education and business (pp. 165–186). Cham: Springer.
Landers, R. N., & Landers, A. K. (2015). An empirical test of the theory of gamified learning: the effect of leaderboards on time-on-task and academic performance. Simulation & Gaming, 45 , 769–785.
Laskowski, M., & Badurowicz, M. (2014). Gamification in Higher Education: A Case Study. In MakeLearn 2014: Human Capital without Borders: Knowledge and Learning for Quality of Life. Management, Knowledge and Learning International Conference (pp. 971–975). Portorož: ToKnowPress.
Latulipe, C., Long, N. B., & Seminario, C. E. (2015). Structuring flipped classes with lightweight teams and gamification. SIGCSE, 2015 , Proceedings of the 46th ACM Technical Symposium on Computer Science Education (pp. 392–397). New York, NY: ACM
Leach, D., Laur, B., Code, J., Bebbington, T., & Broome, D. (2014). Gamification for online engagement in higher education: a randomized controlled trial (pp. 151–159). Madison: Games Learning + Society Conference.
Lehtonen, T., Aho, T., Isohanni, E., & Mikkonen, T. (2015). On the role of gamification and localization in an open online learning environment: Javala experiences. In 15th Koli Calling Conference on Computing Education Research (pp. 50–59). New York, NY: ACM.
Lieberoth, A. (2015). Shallow gamification – psychological effects of framing an activity as a game. Games and Culture, 10 (3), 249–268.
Linehan, C., Kirman, B., Lawson, S., & Chan, G. (2011). Practical, appropriate, empirically-validated guidelines for designing educational games. In ACM Annual Conference on Human Factors in Computing Systems, May 7–12 (pp. 1979–1988). Canada: Vancouver.
Long, Y., & Aleven, V. (2014). Gamification of joint student/system control over problem selection in a linear equation tutor. In S. Trausan-Matu, K. E. Boyer, M. Crosby, & K. Panourgia (Eds.), 12th International Conference on Intelligent Tutoring Systems, ITS 2014 (pp. 378–387). Honolulu: Springer International Publishing.
Mahnič, N. (2014). Gamification of politics: start a new game! Teorija in Praksa, 51 (1), 143–161.
McGonigal, J. (2011). Reality is broken: why games make us better and how they can change the world . New York: Penguin.
Mekler, E. D., Brühlmann, F., Tuch, A. N., & Opwis, K. (2015). Towards Understanding the Effects of Individual Gamification Elements on Intrinsic Motivation and Performance. Computers in Human Behavior . doi: 10.1016/j.chb.2015.08.048 .
Mora, A., Riera, D., & Gonzalez, C. (2015). A literature review of gamification design frameworks. In Seventh International conference on Virtual Worlds and Games for Serious Applications: VS-Games, Sweden (pp. 1–8). doi: 10.1109/VS-GAMES.2015.7295760 .
Morrison, B., & DiSalvo, B. (2014). Khan Academy Gamifies Computer Science. In Proceedings of SIGCSE’2014 45th ACM Technical Symposium on Computer Science Education, ACM (pp. 39–44).
Morschheuser, B. S., Rivera-Pelayo, V., Mazarakis, A., & Zacharias, V. (2014). Interaction and reflection with quantified self and gamification: an experimental study. Journal of Literacy and Technology, 15 (2), 136–156.
Naik, L. (2015). Gamification Isn’t Dead, It’s Just Very Misunderstood. Pulse, July 2015. Available at https://www.linkedin.com/pulse/gamification-isnt-dead-its-just-very-misunderstood-lee-naik .
Nevin, C. R., Westfall, A. O., Rodriguez, J. M., Dempsey, D. M., Cherrington, A., Roy, B., Patel, M., & Willig, J. H. (2014). Gamification as a tool for enhancing graduate medical education. Postgrad Med Journal, 90 (1070), 685–693.
Nicholson, S. (2015). A RECIPE for meaningful gamification. In L. Wood & T. Reiners (Eds.), Gamification in education and business (pp. 1–20). New York: Springer.
Paiva, R., Barbosa, A., Batista, E., Pimentel, D., & Bittencourt, I. (2015). Badges and XP: an observational study about learning. In Frontiers in Education Conference (FIE) (pp. 1–8). doi: 10.1109/FIE.2015.7344074 .
Pedro, L.Z., Lopes, A.M.Z., Prates, B.G., Vassileva, J., & Isotani, S. (2015b). Does Gamification Work for Boys and Girls? An Exploratory Study with a Virtual Learning Environment. In Proceedings of the 30th Annual ACM Symposium on Applied Computing (SAC’15) , 214–219, doi: 10.1145/2695664.2695752 .
Pedro, L., Santos, C., Aresta, M., & Almeida, S. (2015a). Peer-Supported Badge Attribution in a Collaborative Learning Platform: The SAPO Campus case. Computers in Human Behavior, 51 , 562–567, doi: 10.1016/j.chb.2015.03.024 .
Perry, B. (2015). Gamifying French language learning: a case study examining a quest-based, augmented reality mobile learning tool. Social and Behavioral Sciences, 174 , 2308–2315.
Pettit, R.K., McCoy, L., Kinney, M., & Schwartz, F.N., (2015). Student Perceptions of Gamified Audience Response System Interactions in Large Group Lectures and via Lecture Capture Technology Approaches to Teaching and Learning. BMC Medical Education, 15 (92). doi: 10.1186/s12909-015-0373-7 .
Poole, S., Kemp, E., Patterson, L., & Williams, K. (2014). Get your head in the game: using gamification in business education to connect with generation Y. Journal for Excellence in Business Education, 3 (2), 1–9.
Ryan, R. M., Rigby, S. C., & Przybylski, A. (2006). The motivational pull of video games: a self-determination theory approach, Motivation and Emotion, 30 , 347–363. Springer Science+Business Media. doi: 10.1007/s11031-006-9051-8
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55 , 68–78.
Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: a survey. International Journal of Human Computer Studies, 74 , 14–31.
Shi, L., Cristea, A. I., Hadzidedic, S., & Dervishalidovic, N. (2014). Contextual gamification of social interaction – towards increasing motivation in social e-learning. In 13th International Conference on Web-based Learning (ICWL2014) (pp. 116–122). Tallinn: Springer. 14–17 August, LNCS 8613.
Sillaots, M. (2014). Gamification of higher education by the example of course of research methods. Advances in Web-Based Learning – ICWL, Springer Lecture Notes in Computer Science, 8613 , 106–115. Tallinn: Springer Lecture Notes in Computer Science.
Sillaots, M. (2015). Gamification of higher education by the example of computer games course. In The Seventh International Conference on Mobile, Hybrid, and On-line Learning (eLmL) (pp. 62–58). Lisbon: IARIA. ISBN 978-1-61208-385-8. Available at http://www.thinkmind.org/index.php?view=article&articleid=elml_2015_4_20_50048 . Accessed 17 Jan 2017.
Simoes, J., Mateus, S., Redondo, R., & Vilas, A. (2015). An Experiment to Assess Students’ Engagement in a Gamified Social Learning Environment, eLearning Papers, 43 , July 2015, DOI: 10.13140/RG.2.1.2384.0488 .
Smith, E., Herbert, J., Kavanagh, L. & Reidsema, C. (2014). The Effects of Gamification on Student Learning through the Use of Reputation and Rewards within Community Moderated Discussion Boards. AAEE: 24th Annual Conference of the Australasian Association for Engineering Education , Australia, ISBN: 9780992409906. Available at https://espace.library.uq.edu.au/view/UQ:326086/UQ326086.pdf . Accessed 16 Jan 2017.
Songer, R. W., & Miyata, K. (2014). A playful affordances model for gameful learning. In TEEM ‘14 2nd International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain (pp. 205–213). doi: 10.1145/2669711.2669901 .
Su, C. H., & Cheng, C. H. (2015). A mobile gamification learning system for improving the learning motivation and achievements. Journal of Computer Assisted Learning, 31 (3), 268–286.
Tomaselli, F., Sanchez, O., & Brown, S. (2015). How to engage users through gamification: the prevalent effects of playing and mastering over competing. In Thirty Sixth International Conference on Information Systems, Fort Worth (pp. 1–16).
Tu, C. H., Yen, C. J., Sujo-Montes, L., & Roberts, G. A. (2015). Gaming personality and game dynamics in online discussion instructions. Educational Media International, 52 (3), 155–172. doi: 10.1080/09523987.2015.1075099
Tulloch, R. (2014). Reconceptualising gamification: play and pedagogy. Digital Culture & Education, 6 (4), 317–333.
Tvarozek, J., & Brza, T. (2014). Engaging students in online courses through interactive badges. In 2014 International Conference on e-Learning, September 2014, Spain (pp. 89–95). Retrieved from http://elearning-conf.eu/docs/cp14/paper-12.pdf .
Utomo, A. Y., & Santoso, H. B. (2015). Development of gamification-enriched pedagogical agent for e-learning system based on community of inquiry. In Proceedings of the International HCI and UX Conference (CHIuXiD’15), Indonesia (pp. 1–9). doi: 10.1145/2742032.2742033 .
Werbach, K. (2014). (Re) Defining gamification: a process approach, persuasive technology. Lecture Notes in Computer Science, 8462 , 266–272.
Werbach, K., & Hunter, D. (2012). For the win: how game thinking can revolutionize your business . Philadelphia: Wharton Digital Press.
Westera, W. (2015). Games are motivating, aren’t they? Disputing the arguments for digital game-based learning. International Journal of Serious Games, 2 (2), 3–15. http://dx.doi.org/10.17083/ijsg.v2i2.58 .
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. HRD 1623236 “Targeted Infusion Project: Increasing Student Motivation and Engagement in STEM Courses through Gamification”.
Authors’ contributions
Both authors contributed equally towards this article. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Author information
Authors and affiliations.
Winston-Salem State University, 601 S. Martin Luther King Jr. Drive, Winston Salem, NC, 27110, USA
Christo Dichev & Darina Dicheva
Corresponding author
Correspondence to Darina Dicheva .
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article.
Dichev, C., Dicheva, D. Gamifying education: what is known, what is believed and what remains uncertain: a critical review. Int J Educ Technol High Educ 14 , 9 (2017). https://doi.org/10.1186/s41239-017-0042-5
Received : 02 November 2016
Accepted : 02 November 2016
Published : 20 February 2017
DOI : https://doi.org/10.1186/s41239-017-0042-5
- Gamification in education
- Gamifying learning
- Critical literature review
- Empirical studies
- Open access
- Published: 31 January 2023
Gamification of e-learning in higher education: a systematic literature review
- Amina Khaldi ORCID: orcid.org/0000-0003-4935-1840 1 ,
- Rokia Bouzidi 1 &
- Fahima Nader 1
Smart Learning Environments volume 10 , Article number: 10 ( 2023 ) Cite this article
37k Accesses
84 Citations
11 Altmetric
In recent years, university teaching methods have evolved and almost all higher education institutions use e-learning platforms to deliver courses and learning activities. However, these digital learning environments present significant dropout and low completion rates. This is primarily due to the lack of student motivation and engagement. Gamification, which can be defined as the application of game design elements in non-game activities, has been used to address the issue of learner distraction and stimulate students’ involvement in the course. However, choosing the right combination of game elements remains a challenge for gamification designers and practitioners due to the lack of proven design approaches, and there is no one-size-fits-all approach that works regardless of the gamification context. Therefore, our study focused on providing a comprehensive overview of the current state of gamification in online learning in higher education that can serve as a resource for gamification practitioners when designing gamified systems. In this paper, we aimed to systematically explore the different game elements and gamification theories that have been used in empirical studies; establish the different ways in which these game elements have been combined; and provide a review of the state of the art of approaches proposed in the literature for gamifying e-learning systems in higher education. A systematic search of databases, namely Scopus and Google Scholar, was conducted to select articles related to gamification in digital higher education for this review. We included studies that adopt the definition of gamification as the application of game design elements in non-game activities, designed for online higher education. We excluded papers that use the term gamification to refer to game-based learning, serious games, games, or video games, and those that consider face-to-face learning environments. We found that PBL elements (points, badges, and leaderboards), levels, and feedback are the most commonly used elements for gamifying e-learning systems in higher education. We also observed the increasing use of deeper elements such as challenges and storytelling. Furthermore, we noticed that of 39 primary studies, only nine were underpinned by motivational theories, and only two other studies used theoretical gamification frameworks proposed in the literature to build their e-learning systems. Finally, our classification of gamification approaches reveals the trend towards customization and personalization in gamification and highlights the lack of studies on content gamification compared to structural gamification.
Introduction
In recent years, most universities have come to use e-learning platforms to deliver courses. Teaching in the form of e-learning is a modern supplement, and sometimes even an alternative, to traditional education (Górska, 2016). Especially in the last few years, with the spread of the Covid-19 crisis, higher education institutions had to shift from traditional teaching to online teaching as an alternative that allowed learners to continue their learning (Sofiadin & Azuddin, 2021). However, over time, these digital environments brought several challenges. On the one hand, student motivation decreases, resulting in a lack of engagement and participation in courses. On the other hand, instructors struggle to maintain learners’ attention, leading to the eventual abandonment of online education systems. To solve this problem and create engaging e-learning platforms, the gamification technique was proposed.
Game technologies create opportunities for higher education institutions to redesign and innovate their e-learning models to support learning experiences among learners (Alhammad & Moreno, 2018 ). The introduction and growing expansion of gamification in education and learning contexts promotes critical reflection on the development of projects that transform students’ learning experiences (Garone & Nesteriuk, 2019 ). However, is it that simple to create effective gamified e-learning systems especially in the context of higher education?
Early applied work on the gamification of educational settings suggested positive learning outcomes, but mixed results have been obtained (Seaborn & Fels, 2015). While gamification in general learning systems is known to have a positive impact on student motivation, evidence on its effectiveness in higher education settings is mixed and still uncertain due to the complicated environment of the higher education context. First, the level of difficulty of study is higher at university than at lower levels of education, and students are more aware of the importance of the education they have chosen (Urh et al., 2015). Moreover, tertiary education is characterized by the variety of students’ profiles, needs, and learning methods; thus, each game element, and even each combination of game elements, affects each student differently. Given this diversity of features in the higher education context and the increasing number of inter- and multidisciplinary programs, the process of applying gamification is becoming more complex.
The purpose of this systematic review was to provide a comprehensive overview of the current state of gamification in e-learning in higher education. We focused on identifying how designers currently deal with gamification in the digital higher education context, what game elements they use, how these elements are combined, and what gamification theories are used. In addition, this study sought to find data on existing gamification approaches in the literature, especially those suggested for application in digital higher education. Our study differs from previous studies in several ways. First, we wanted to compare our results with those of previous research that addressed the same research questions regarding trends in the use of game elements, i.e., whether designers who develop gamified e-learning systems still use classic game elements such as points, badges, and leaderboards, or whether they expand the list of game elements used to include deeper game elements like challenges, storytelling, and so on. We then focused on the gamification theories underpinning empirical work; specifically, we sought to understand whether empirical research is beginning to use the various gamification frameworks available in the literature, or whether it still relies on theories and methods that are highly theoretical and do not provide clear guidance to designers when choosing the right set of game elements (Toda et al., 2020). We also sought to find out how game elements are combined in gamified learning systems in higher education; previous studies have not fully explored this point, with the exception of the study by Dichev and Dicheva (2017). Finally, we proposed a classification of gamification approaches suggested in the context of e-learning in higher education, based on several relevant criteria.
The remainder of this manuscript has the following structure. The "Related works" section briefly reviews some of the most relevant review papers. The "Systematic literature review methodology" section presents the approach we followed in conducting our paper retrieval. The "Results of the search" section presents responses to our defined research questions. The "Discussion and limitations" section is dedicated to the discussion of the results; finally, we conclude.
Related works
Prior reviews.
This section briefly reviews some of the relevant literature reviews on gamification in higher education related to the topic of our systematic review. The objective is to be able to compare our findings later in the results section to prior reviews’ findings and to shed a more realistic light on any advances in gamification in e-learning in the context of higher education.
Dichev and Dicheva ( 2017 ) critically reviewed the advancement of educational gamification. This review paper was the only one to address the issue of combining game elements in gamified learning systems. The authors found that in all reviewed works, no justification is given for the selection of particular game elements. The study concluded that there is a need for further studies to improve our understanding of how individual game elements are associated with behavioral and motivational outcomes and how they function in an educational context.
Ozdamli ( 2018 ) examined 313 studies on gamification in education, using content analysis to determine trends in gamification research. The study sought to determine the distribution of empirical research based on a variety of criteria, namely: distribution of studies based on years, number of authors, type of publication, paradigms, research sample, environments, theory/model/strategy, learning area, and distribution of game components, mechanics, and dynamics. The author found that motivational theories are the most frequently used approach in gamification studies and that the most frequently used game components are goals, rewards and progression sticks.
Khalil et al. ( 2018 ) reviewed the state of the art on gamification in MOOCs (Massive Open Online Course) by answering eight research questions. One of these questions sought to identify elements of gamification that have been implemented or proposed for implementation in MOOCs. The study found that the most commonly used elements in the application of gamification in MOOCs are badges, leaderboards, progress, and challenges. According to the study, progress and challenges are used more frequently in MOOCs than points.
The paper (Alhammad & Moreno, 2018 ) studied gamification in the context of software engineering (SE) education. The study sought to understand how gamification was applied in the SE curriculum and what game elements were used. The study identified four gamification approaches from the primary studies analyzed: papers that implemented gamification by following an existing gamification approach in the literature, papers that adapted psychological and educational theories as gamification approaches, papers that designed and followed their own gamification approach, and finally, papers that did not follow any specific gamification approach. In addition, leaderboards, points and levels were found to be the most frequently used gaming components. Similarly, challenges, feedback, and rewards were the most commonly used mechanics, and progression was the most commonly used dynamic.
Majuri et al. ( 2018 ) reviewed 128 empirical research papers in the literature on gamification in education and learning. It was found that points, challenges, badges, and leaderboards, i.e., affordances that refer to achievement and progression, are the most commonly used gamification affordances in education, while social and immersion-oriented affordances are much less common.
In the paper (Zainuddin et al., 2020 ), the authors addressed a research question related to our research area, namely the underlying theoretical models used in gamification research. It was found that in the studies that implicitly mention their theoretical underpinnings, self-determination theory is the most commonly used, followed by flow theory and goal-setting, while the other studies do not provide any theoretical content.
More recently, van Gaalen et al. ( 2021 ) reviewed 44 research studies in the health professions education literature. The study addressed the question of what game attributes are used in gamified environments, and sought to understand the use of theory throughout the gamification process. The study used Landers ( 2014 )’s framework to categorize the identified game elements into game attributes and revealed that in most reviewed studies the game attributes ‘assessment’ and/or ‘conflict/challenge’ were embedded in the learning environment. Regarding the use of theory in gamification processes, most of the identified studies on gamification in health professions education were not theory-based, or theoretical considerations were not included or not yet developed.
Finally, the authors of the paper (Kalogiannakis et al., 2021 ) performed a systematic literature review on gamification in science education by reviewing 24 empirical research papers. A research question related to our field of study was addressed in this review, namely, what learning theory is used, and what game elements are incorporated into gaming apps. The findings of the study showed that most articles did not provide details about the theoretical content or the theory on which they were based. The few articles that used theoretical frameworks were based on self-determination theory (SDT), flow theory, goal-setting theory, the cognitive theory of multimedia learning, and motivation theory. In addition, the study found that the most common game elements and mechanics used in gamified science education environments were competitive setup, leaderboards, points, and levels.
Systematic literature review methodology
In this systematic review, we followed a methodology to identify how the gamification technique has been used in digital learning environments, specifically in higher education. We sought to identify the game elements that have been used the most, the way they have been combined, and the different frameworks proposed in the literature for the gamification of e-learning systems in higher education. A systematic literature review is a means of identifying, evaluating and interpreting all available research relevant to a particular research question, or topic area, or phenomenon of interest (Kitchenham, 2004 ). Kitchenham ( 2004 ) summarizes the stages of a systematic review in three main phases: Planning the Review, Conducting the Review, and Reporting the Review. The first phase, ‘Planning the Review’, includes the formulation of research questions, identification of key concepts, and construction of the search queries. The second phase, ‘Conducting the Review’, consists of study selection based on inclusion and exclusion criteria. Finally, the third phase, ‘Reporting the Review’, relates to data extraction and answering the research questions. In the following, we detail the main steps of each phase.
Search strategy
We started by identifying the main goal of this systematic literature review by clearly formulating the following research questions:
Which game elements and gamification theories are used in gamified learning systems?
How are these game elements combined?
Which gamification design approaches are available in the literature?
Then, we constructed a list of key concepts: gamification, e-learning, and higher education. After that, we identified alternative terms for each of the key concepts, as some authors may refer to the same concept using a different term. For the concept of gamification, we identified the following list of free-text terms: gamify, game elements, game dynamics, game mechanics, game components, game aesthetics, and gameful. For the two other concepts of e-learning and higher education, we identified these terms: education, educational, learning, teaching, course, syllabus, syllabi, curriculum, and curricula.
We formulated two search queries based on the terms identified previously:
For research questions 1 and 2:
(gamif* OR gameful OR “game elements” OR “game mechanics” OR “game dynamics” OR “game components” OR “game aesthetics”) AND (education OR educational OR learning OR teaching OR course OR syllabus OR syllabi OR curriculum OR curricula).
For research question 3:
(gamif* OR gameful OR “game elements” OR “game mechanics” OR “game dynamics” OR “game components” OR “game aesthetics”) AND (education OR educational OR learning OR teaching OR course OR syllabus OR syllabi OR curriculum OR curricula) AND (framework OR method OR design OR model OR approach OR theory OR strategy).
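As a purely illustrative aid (not part of the original protocol), the two query strings above can be assembled programmatically from the shared term lists, which keeps the concept definitions consistent across queries; the Python sketch below uses hypothetical helper names:

```python
# Assemble the boolean search queries from the key-concept term lists.
# The term lists mirror those given above; the helper names are illustrative.

gamification_terms = [
    'gamif*', 'gameful', '"game elements"', '"game mechanics"',
    '"game dynamics"', '"game components"', '"game aesthetics"',
]
education_terms = [
    'education', 'educational', 'learning', 'teaching',
    'course', 'syllabus', 'syllabi', 'curriculum', 'curricula',
]
approach_terms = [
    'framework', 'method', 'design', 'model',
    'approach', 'theory', 'strategy',
]

def or_group(terms):
    """Join the alternative terms of one concept into a parenthesised OR group."""
    return '(' + ' OR '.join(terms) + ')'

# Query for RQ1 and RQ2: gamification AND education terms.
query_rq1_rq2 = ' AND '.join([or_group(gamification_terms),
                              or_group(education_terms)])

# Query for RQ3 adds the design-approach concept.
query_rq3 = ' AND '.join([or_group(gamification_terms),
                          or_group(education_terms),
                          or_group(approach_terms)])

print(query_rq1_rq2)
print(query_rq3)
```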
We conducted our search of the databases using the search queries formulated previously. We searched the Scopus and Google Scholar databases, as the first is one of the most professional indexing databases and the second is the most popular, which helps to identify further eligible studies. The search was performed in December 2021. Although the Scopus database indexed the publication abstracts, most of the articles were not available through Scopus, and the articles were retrieved from the following publishers:
Semantic Scholar,
SAGE,
Science Direct.
The exception was some articles that could not be accessed. We also performed a backward snowballing search to identify further relevant studies by scanning and searching the references of papers marked as potentially relevant (Dichev & Dicheva, 2017 ; Mora et al., 2017 ; Gari & Radermacher, 2018 ; Khalil et al., 2018 ; Ozdamli, 2018 ; Subhash & Cudney, 2018 ; da Silva et al., 2019 ; Hallifax et al., 2019a , 2019b ; Legaki & Hamari, 2020 ; Zainuddin et al., 2020 ; Saleem et al., 2021 ; Swacha, 2021 ; van Gaalen et al., 2021 ) in search of other relevant studies.
Inclusion and exclusion criteria
In the following table, we summarized the inclusion and exclusion criteria that we considered when we screened full text articles (Table 1 ).
Study selection
To select the relevant studies for this systematic review, a manual screening was performed. First, we reviewed the titles and abstracts of different records that were retrieved. Then, citations were imported to Endnote and duplicate records were removed. After that, we read the full text of all retained articles for inclusion and exclusion based on the eligibility criteria. In case of uncertainty, discussion was organized with the research team to reach consensus about the articles in question.
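For illustration only, the deduplication step (performed here with Endnote) could be approximated with a short script that keys records on DOI or a normalised title; the record fields and helper names below are hypothetical:

```python
# Remove duplicate records retrieved from several databases.
# Records are plain dicts with 'title' and 'doi' keys; field names are illustrative.

def normalise(text):
    """Lower-case and strip punctuation/whitespace so near-identical titles match."""
    return ''.join(ch for ch in text.lower() if ch.isalnum())

def deduplicate(records):
    seen = set()
    unique = []
    for rec in records:
        key = rec.get('doi') or normalise(rec.get('title', ''))
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {'title': 'Gamification of online learning', 'doi': '10.1000/xyz1'},
    {'title': 'Gamification of Online Learning', 'doi': '10.1000/xyz1'},  # duplicate
    {'title': 'Adaptive gamification in MOOCs', 'doi': ''},
]
print(len(deduplicate(records)))  # -> 2
```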
Data extraction
We developed a data extraction form that was refined and discussed until consensus was obtained. The extraction form was then used by the review author to extract data from all included studies. In this part of the paper, we considered two types of papers: papers representing case studies, from which we extracted the game elements used in the developed e-learning systems, the theories underpinning the gamification process, and the way game elements were combined with each other; and papers presenting framework proposals, from which we identified models, approaches, and design processes proposed in the literature for gamifying digital learning environments at the tertiary education level.
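A minimal sketch of how such an extraction form could be represented as a data structure, assuming illustrative field names rather than the authors' actual form:

```python
# A possible structure for the data extraction form described above.
# Field names are illustrative, not the authors' actual form.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    study_id: str                                           # citation key of the paper
    paper_type: str                                         # 'empirical study' or 'framework proposal'
    game_elements: List[str] = field(default_factory=list)  # e.g. points, badges
    gamification_theory: Optional[str] = None               # e.g. 'SDT', or None if not reported
    element_combination: Optional[str] = None               # how elements were combined, if reported

record = ExtractionRecord(
    study_id='Tsay2018',
    paper_type='empirical study',
    game_elements=['points', 'badges', 'leaderboard'],
    gamification_theory='self-determination theory',
)
print(record)
```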
Results of the search
General results.
In this literature review, we reported the most extensive overview of the empirical research literature on the gamification of e-learning in higher education to date. The selection process of relevant studies is shown in Fig. 1. We analyzed a total of 90 papers to respond to the three research questions formulated previously. First, we retrieved 39 papers in the form of empirical studies carried out at university level and analyzed them to identify what game elements are used, what gamification theories are used to guide the gamification process, and how these game elements are combined. We then identified 51 papers of the theoretical-proposal type, intended to guide the gamification process. Since higher education is part of general learning systems, we included in this review papers that propose gamification approaches for general contexts and for general learning systems. In particular, we identified 16 papers on the general application of gamification, 18 papers on gamifying general learning systems, and 17 approaches intended to be applied to e-learning systems in higher education.
Fig. 1. Flow diagram of the articles selection process
Answering research questions
In the following, we answer the three research questions formulated at the beginning of this article:
Education applications of gamification refer to using game elements for scholastic development in formal and informal settings (Seaborn & Fels, 2015 ). In our case, we were interested in collecting relevant experimental studies on gamification of e-learning systems in higher education. In the following table (Table 2 ), we list and examine 39 experimental studies that have implemented a digital learning system at the higher education level to answer RQ1 . For each study, we analyzed the game elements that were incorporated and the gamification approaches that were followed during the gamification process. For ease of reference, the game elements that were used in e-learning systems to improve student engagement and the underpinning theories are summarized in Table 2 . More detailed descriptions of the 39 empirical studies are presented in “Appendix”.
By analyzing the game elements listed in Table 2 , we noticed that PBL elements (points, badges, and leaderboards), levels, and feedback are the most commonly used elements for gamifying e-learning systems in higher education. This is in line with other reviews’ findings, e.g. (Dichev & Dicheva, 2017 ).
Furthermore, in response to the observation of Dichev and Dicheva (2017) that gamification with “deeper game elements” (Enders, 2013), i.e., game design principles involving game mechanics and dynamics such as challenges, choice, low-risk failure, role-play, or narrative, is still scarce, we noted in our systematic literature review that recent studies explore new game elements. Indeed, among the 39 studies analyzed in Table 2, 20 primary studies used “deeper game elements” (Enders, 2013) such as challenges and storytelling (narrative). Among these, challenges are the most popular.
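As a hedged illustration of this kind of tally (the data below are invented, not the contents of Table 2), element frequencies and the share of studies using deeper elements can be computed as follows:

```python
# Tally how often each game element appears across a set of reviewed studies.
# The 'studies' mapping is a made-up example, not the actual Table 2 data.
from collections import Counter

studies = {
    'S01': ['points', 'badges', 'leaderboard', 'levels'],
    'S02': ['points', 'feedback', 'challenges'],
    'S03': ['badges'],
    'S04': ['points', 'badges', 'leaderboard', 'storytelling'],
}

element_counts = Counter(el for elements in studies.values() for el in elements)
deeper_elements = {'challenges', 'storytelling', 'narrative', 'role-play'}

print(element_counts.most_common(3))
print('studies using deeper elements:',
      sum(1 for elements in studies.values() if deeper_elements & set(elements)))
```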
In Seaborn and Fels (2015), the authors noted that, up to 2015, the majority of applied research on gamification was not grounded in theory and did not use gamification frameworks in the design of the system under study. Likewise, in this systematic review, by analyzing the 39 empirical studies listed in Table 2, we noticed that most studies were not underpinned by gamification theories. This is in line with the findings of other recent studies, such as van Gaalen et al. (2021) and Kalogiannakis et al. (2021). Indeed, of the 39 primary studies analyzed in our systematic review, only nine papers (Smith, 2017; Kyewski & Krämer, 2018; Pilkington, 2018; Tsay et al., 2018; van Roy & Zaman, 2019; De-Marcos et al., 2020; Facey-Shaw et al., 2020; Sanchez et al., 2020; Dikcius et al., 2021) adapted theoretical approaches and used them as gamification approaches. These are a set of social and motivational theories spanning six different theories, namely: self-determination theory (SDT), social comparison theory, social exchange theory (SET), cognitive evaluation theory (CET), situated motivational affordance theory, the theory of gamified learning (Landers, 2014), and user-centered design (Nicholson, 2012). Self-determination theory is by far the most popular one. These findings are correlated with other reviews’ findings, such as Zainuddin et al. (2020) and Kalogiannakis et al. (2021). Only two other primary studies, Marín et al. (2019) and Dias (2017), used existing theoretical gamification frameworks to build their gamified e-learning systems. Of the remaining papers, some built their own gamification design based on guidelines from the literature, whereas others did not cite any theory. Hence, we notice that this distribution is in line with the findings of (Alhammad & Moreno, 2018) regarding the use of four different categories of gamification approaches in primary studies, namely, papers that followed existing gamification frameworks, papers that adapted motivational theories to their needs, papers that built their own approach, and finally, those that did not follow any specific approach. We also noticed that motivational theories are the most frequently used approach, as noted in Ozdamli (2018).
For this research question, we sought to identify how game elements are combined in gamified learning systems in higher education. Previous studies have not fully explored this point, except for the paper (Dichev & Dicheva, 2017). By analyzing the different empirical studies involved in this systematic literature review (listed in Table 2), we noticed a lack of detailed information about how instructors and designers combined different game elements. Indeed, in all reviewed papers, the authors listed only the game elements employed to gamify their learning systems. In addition, no study provided any justification for the choice of the set of game elements used, nor for the way they were combined in the gamified learning systems.
In the reviewed collection, five studies employed one single game element (Coleman, 2018 ; Garnett & Button, 2018 ; Kyewski & Krämer, 2018 ; Facey-Shaw et al., 2020 ; Dikcius et al., 2021 ), three other studies gamified systems using two game elements (Fajiculay et al., 2017 ; Smith, 2017 ; Donnermann et al., 2021 ), five other studies used three game elements (Hisham & Sulaiman, 2017 ; Kasinathan et al., 2018 ; Romero-Rodriguez et al., 2019 ; Khaleel et al., 2020 ; Sanchez et al., 2020 ) while the remaining ones used more than three elements.
This happens due to the lack of studies that provide clear guidelines and justifications for the combination of game elements (Toda et al., 2020 ).
In this section, we address RQ3. We first synthesize the current literature on gamification approaches in a general context. Then, we present a set of gamification approaches for general learning systems. Finally, we list a set of approaches proposed specifically for higher education within e-learning environments. We briefly describe each approach in the table below (Table 3).
In the table above, we investigated a total of 51 gamification approaches in three different contexts. The first set of approaches (the first 16 rows of Table 3 ) was designed for general use, i.e., for all contexts such as learning, health, marketing and entrepreneurship. While the second set of approaches (the next 18 rows of Table 3 ) targeted general learning contexts, i.e., without any restriction on educational level. Finally, the third set of approaches (the last 17 rows of Table 3 ) was intended to be applied in a specific context, namely digital higher education.
Given our review’s main interest in e-learning in higher education, we will classify the last 17 approaches of Table 3 , which correspond to those designed for e-learning systems in higher education, into several classes based on different relevant criteria that we will detail below. The paper (Saggah et al., 2020 ) proposes categorizing gamification design frameworks into three categories: scenario-based, high-level approach, and Gamification elements guidance. Inspired by this categorization, we propose our categorization, which will be used to classify the different gamification approaches in e-learning in higher education. A description of each category is given in what follows, and our classification results are shown in Table 4 .
Level of detail
High-level approach This group categorizes papers that provide an overview of the design process that serves as a general high-level guideline containing the global phases without detailing which game elements to use and how to implement them.
Gamification elements guidance This group categorizes papers that provide a conceptualization of the gamification elements that can be used in educational environments. These studies can include implementation guidance.
Scenario based This group categorizes papers that provide a descriptive outline of the design process. In other words, these papers propose gamification approaches by describing their application through real empirical studies experimented in real learning environments.
Type from the student perspective (adaptive gamification/one-size-fits-all gamification) Adaptive gamification considers that users have different motivations, so it consists of personalizing learning experiences according to each learner profile, whereas ‘one size fits all’ gamification uses the same gamified system (gamification elements, rules, etc.) for all learners. For ease of reference, we will use the character ‘A’ for adaptive approaches and ‘x’ for ‘one size fits all’ ones.
Profundity from the pedagogical perspective (structural gamification versus content gamification) Structural gamification refers to the application of game design elements to motivate the learner through instructional content without changing it (Garone & Nesteriuk, 2019). It can be achieved by using clear goals, rewards for achievements, a progression system and status, challenge, and feedback (Garone & Nesteriuk, 2019). Content gamification is the application of elements, mechanics, and game thinking to make the content more game-like (Garone & Nesteriuk, 2019). It is a one-time structure created only for specific content or learning objectives and hence cannot be reused for other content (Sanal, 2019). Garone and Nesteriuk (2019) state that elements that can be used in content gamification are story and narrative; challenge, curiosity, and exploration; characters and avatars; and interactivity, feedback, and freedom to fail (Kapp, 2014). According to Kapp (2014), the combination of both structural and content gamification is the most effective way to build highly engaging and motivating environments. For ease of reference, we will use the character ‘C’ for content approaches and ‘x’ for structural ones.
Validation This group categorizes papers that provided a validation of the proposed approach through empirical evidence showing its application to e-learning systems in higher education.
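One possible way to encode this classification scheme, shown here only as an illustrative sketch with assumed names; the example entry reflects how Kamunya et al. (2020) is classified in Table 4:

```python
# Illustrative encoding of the classification scheme described above; names are assumptions.
from dataclasses import dataclass
from enum import Enum

class DetailLevel(Enum):
    HIGH_LEVEL = 'high-level approach'
    ELEMENTS_GUIDANCE = 'gamification elements guidance'
    SCENARIO_BASED = 'scenario based'

@dataclass
class ApproachClassification:
    reference: str
    detail_level: DetailLevel
    adaptive: bool               # True = adaptive ('A'), False = one-size-fits-all ('x')
    content_gamification: bool   # True = content ('C'), False = structural ('x')
    validated: bool              # empirically validated in higher-education e-learning

example = ApproachClassification(
    reference='Kamunya et al. (2020)',
    detail_level=DetailLevel.ELEMENTS_GUIDANCE,
    adaptive=True,
    content_gamification=False,
    validated=False,
)
print(example.detail_level.value)
```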
Table 4 represents the results of our classification of gamification approaches in the context of e-learning in higher education. Regarding the level of detail, we noticed that most of the analyzed approaches (9 out of a total of 17) are of the gamification elements guidance type (Urh et al., 2015; Huang & Hew, 2018; Alsubhi & Sahari, 2020; Kamunya et al., 2020; Winanti et al., 2020; Alsubhi et al., 2021; Júnior & Farias, 2021; Sofiadin & Azuddin, 2021; Yamani, 2021). This is followed by 5 scenario-based approaches (Mi et al., 2018; Legaki et al., 2020; Al Ghawail et al., 2021; Bencsik et al., 2021; Fajri et al., 2021), and finally, only 2 approaches are categorized as high-level approaches (Carreño, 2018; de la Peña et al., 2021). It is worth noting that scenario-based approaches are, in most cases, the most difficult to reproduce in other educational environments, as they are very specific, and each environment has its own characteristics. In contrast, high-level approaches are more general and need to be tailored to the context. Finally, gamification elements guidance approaches can strongly help in implementing gamified learning environments, as they provide a handy catalog of elements that can easily be injected into learning environments.
Furthermore, Table 4 shows that most of the design approaches suggested in the literature have not been empirically explored (for example, by using a control group and comparing gamified and non-gamified systems). Indeed, of the 17 gamification approaches in the context of e-learning in higher education analyzed, only four have been applied and evaluated with empirical evidence (Huang & Hew, 2018; Alsubhi et al., 2021; de la Peña et al., 2021; Júnior & Farias, 2021). Among those four studies, one work was validated with experts (Alsubhi et al., 2021).
Moreover, Table 4 shows that of the 17 gamification approaches proposed for application to online learning systems in the context of higher education, two approaches (Carreño, 2018; Kamunya et al., 2020) fall into the category of adaptive gamification. This reflects the trend toward personalization in higher education. Finally, Table 4 shows that the 17 approaches proposed for gamifying online learning systems in higher education focus solely on structural gamification, neglecting the content side of online learning systems.
Discussion and limitations
Through this systematic review, we identified several papers on the gamification of e-learning in the higher education context. In recent years, the research on gamification in e-learning has been getting traction, and the number of research articles and systematic reviews of research articles is increasing. As a summary of the existing approaches of gamification in e-learning in higher education, we notice the following points:
Gamification of e-learning in higher education: a trending area of research
The systematic review showed that the gamification of learning systems is nowadays a hot topic, and research in this field is growing rapidly, including for e-learning in the higher education context, as shown in Fig. 2.
Fig. 2. Number of publications per year
Gamification design gaps and tendencies
In general, gamification theory helps in training and shaping participant behavior; however, in our systematic literature review, we observed from RQ1 that the majority of applied research on gamification is not grounded in theory and did not use gamification frameworks in the design of the learning system under study. This highlights the fact that there is a real gap between theoretical and applied work on gamification. One reason may be that existing approaches are very theoretical and cannot strongly assist designers and practitioners when gamifying learning systems, as pointed out by Toda et al. (2020). This also explains our results for the second research question, RQ2, regarding the lack of detail on the combination of game elements used in the experimental studies and the motivation behind choosing specific game elements over others.
To better understand this phenomenon, and to find a rationale for this lack of theoretical grounding and, consequently, for the seemingly arbitrary selection and combination of game elements in gamified learning systems, we addressed research question RQ3. There, we analyzed the gamification approaches available in the literature and classified them into different categories based on a variety of criteria. Our results revealed that gamification elements guidance approaches, which provide taxonomies of game elements that can be incorporated into learning systems, constitute the majority of the approaches proposed for application in online learning in higher education. These approaches, however, did not specify the psychological and behavioral changes that correspond to each game element. Older gamification theory, in turn, was based simply on the behavioral outcomes expected from using gamification and the motivational needs behind it, and did not provide details on how to implement them or on which elements to use.
Using appropriate game elements can lead to higher levels of user motivation, whereas inappropriate game elements can demotivate users (Hallifax et al., 2019a, 2019b). It is therefore essential to choose the combination of game elements that best matches the desired behavior change. To do this, we must first explore the effect of each game element separately (Dichev & Dicheva, 2017): further studies are needed to improve our understanding of how individual game elements relate to behavioral and motivational outcomes, so that their contribution can be identified in studies that mix multiple game elements (Dichev & Dicheva, 2017). An example of such a study was provided in the health domain by Hervas et al. (2017), who proposed a taxonomy of gamification elements by relating them to psychological foundations of behavioral change, such as self-efficacy, social influence, and behavioral momentum. Such work can facilitate the empirical validation of gamification theory by letting researchers build contexts and scenarios from ready-made taxonomies of gamification elements that target a specific behavioral outcome.
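To make this idea more concrete, the minimal Python sketch below shows one possible way to encode such a taxonomy as a simple lookup table and to query it for the game elements targeting a desired psychological mechanism. The element names and mechanism labels are illustrative placeholders only and do not reproduce the taxonomy of Hervas et al. (2017).

```python
# Minimal sketch: encoding a game-element taxonomy as a lookup table.
# The mappings below are illustrative placeholders, not an empirically
# validated taxonomy.

TAXONOMY = {
    "points":      {"self-efficacy", "behavioral momentum"},
    "badges":      {"self-efficacy", "social influence"},
    "leaderboard": {"social influence"},
    "narrative":   {"behavioral momentum"},
    "challenges":  {"self-efficacy"},
}

def elements_for(target_mechanism: str) -> list[str]:
    """Return the game elements associated with a target psychological mechanism."""
    return [element for element, mechanisms in TAXONOMY.items()
            if target_mechanism in mechanisms]

# A designer targeting 'social influence' would retrieve badges and leaderboards.
print(elements_for("social influence"))
```

A machine-readable mapping of this kind could help researchers assemble experimental scenarios in which each condition isolates the elements associated with a single targeted outcome.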
On the other hand, our systematic literature review also revealed (RQ3) the recent emergence of data-driven approaches based on machine learning techniques (Knutas et al., 2019; Duggal et al., 2021). These techniques help create gamification designs suited to the gamified context, in particular by tailoring the game elements incorporated into the final gamified system to students' profiles.
In many learning environments, pedagogy assumes that all learners have homogeneous characteristics (Kamunya et al., 2020). However, Schöbel and Söllner (2016) argue that many gamification projects fail because they are designed for a group of system users without considering the personal needs of each user. Hence the advantage of personalizing training for the learner, since learners differ in preferences, styles, and abilities with regard to learning processes, with or without technology mediation (Naik & Kamat, 2015). In this context, we identified two gamification approaches designed for online learning in higher education that tailor gamification elements to users' individual preferences (Carreño, 2018; Kamunya et al., 2020). A related and still open problem is the lack of adaptation of gamification to the content being gamified.
Content gamification in particular remains largely unexplored. Indeed, the motivational impact of certain game elements varies with the user activity and the domain of the gamified system (Hallifax et al., 2019a, 2019b). Therefore, further exploration and experimentation are needed in this immature area to produce gamified designs that satisfy both users' preferences and the task at hand. In other words, personalization in gamification should extend to the content, as it already does for user profiles, for example by applying machine learning techniques to tailor the choice of game elements to the gamified content, as sketched below.
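As a purely illustrative sketch of this data-driven direction, the following Python example trains a simple decision tree to recommend a game element from hypothetical learner-profile features. The features, labels, and training data are invented for illustration and do not correspond to any specific approach reviewed here; a real system would also need features describing the gamified content, as argued above.

```python
# Minimal sketch of data-driven gamification personalization, assuming
# hypothetical learner-profile features and game-element labels.
# Requires scikit-learn; the training data is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [competitiveness, sociability, prior_game_experience], all in [0, 1].
profiles = [
    [0.9, 0.2, 0.8],
    [0.1, 0.9, 0.3],
    [0.8, 0.7, 0.6],
    [0.2, 0.3, 0.1],
]
# Game element each learner responded to best (hypothetical labels).
preferred_element = ["leaderboard", "team_challenge", "badges", "narrative"]

model = DecisionTreeClassifier(random_state=0)
model.fit(profiles, preferred_element)

# Recommend an element for a new learner profile.
new_learner = [[0.7, 0.8, 0.5]]
print(model.predict(new_learner))  # prints the recommended element for this profile
```

Extending the feature vector with descriptors of the gamified content (e.g., subject area or task type) is one way the content-adaptation gap discussed above could be explored empirically.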
Another common study-design issue highlighted by our review is the lack of validation of the proposed gamification approaches through statistical analyses. In addition, most applied research on the gamification of online learning systems in higher education has not made use of the gamification frameworks suggested in the literature.
Conclusion and future work
In this work, we conducted a review of the literature on gamification elements used in digital higher education, the way they are combined, and the different gamification approaches proposed in the literature to gamify learning systems. We analyzed a total of 90 papers to answer the three research questions formulated for this study.
This review identified points, badges, leaderboards, levels, feedback, and challenges as the most commonly used game elements in digital higher education. However, regarding the use of gamification theory, our review found that the majority of applied gamification research is not theory-based and has not used gamification frameworks in the design of gamified learning systems. Although some experimental studies attempt to adapt psychological and educational theories from the literature as gamification approaches, the resulting designs are often unclear, and the rationale for choosing certain game elements over others is rarely stated. Consequently, these gamification approaches cannot strongly assist designers and practitioners in gamifying their learning systems. In addition, theoretical gamification approaches for e-learning in higher education should focus on understanding the effect of each individual game design element and the behavioral changes that result from its use.
Moreover, based on the results of this review, we observe a trend towards data-driven approaches that use machine learning techniques, especially in adaptive gamification, where gamification elements are adapted to user profiles. On the other hand, although we noticed increasing use of game elements suited to content gamification that make the content more game-like, such as storytelling and challenges, there is still a lack of gamification approaches that address content gamification; this remains an immature research area in gamification design for e-learning in higher education. Future work should pay more attention to the pedagogical side of learning systems and to the task being gamified. Apart from that, further research is required to compare theory-driven and data-driven gamification approaches, to determine which performs better, and to evaluate the effectiveness of combining the two, possibly in a hybrid gamification approach that does not yet exist and that might resolve several gamification design issues.
Regarding future work, efforts should focus on building a holistic approach that considers all aspects of the learning environment, including personalization according to students' profiles, the gamified subject, the educational context, and the learner's culture, preferences, level, playing motivations, and experience with games.
Finally, we have seen that most of the design approaches suggested in the literature have not been empirically explored. Therefore, statistical analyses and comparative studies should be conducted to validate the existing gamification approaches and to draw more robust and generalizable conclusions.
Availability of data and materials
All data generated or analyzed during this study are included in this published article.
Adams, S. P., & Du Preez, R. (2021). Supporting student engagement through the gamification of learning activities: A design-based research approach. Technology, Knowledge and Learning, 27 , 119–138.
Ahmed, H. D., & Asiksoy, G. (2021). The effects of gamified flipped learning method on student’s innovation skills, self-efficacy towards virtual physics lab course and perceptions. Sustainability (switzerland), 13 (18), 10163.
Al Ghawail, E. A., Yahia, S. B., et al. (2021). Gamification model for developing E-learning in Libyan Higher Education. Smart education and e-learning 2021 (pp. 97–110). Springer.
Alcivar, I., & Abad, A. (2016). Design and evaluation of a gamified system for ERP training. Computers in Human Behavior, 58 , 109–118.
Alhammad, M. M., & Moreno, A. M. (2018). Gamification in software engineering education: A systematic mapping. Journal of Systems and Software, 141 , 131–150.
Allen, M. W. M. W. (2007). Designing successful e-learning: Forget what you know about instructional design and do something interesting . Wiley.
Alsubhi, M., Ashaari, N., et al. (2021). Design and evaluation of an engagement framework for e-learning gamification. International Journal of Advanced Computer Science and Applications , 12 .
Alsubhi, M. A., & Sahari, N. (2020). A conceptual engagement framework for gamified E-learning platform activities. International Journal of Emerging Technologies in Learning, 15 (22), 4–23.
Andrade, F. R. H., Mizoguchi, R., et al. (2016). The bright and dark sides of gamification . Springer.
Aparicio, A. F., Vela, F. L. G., et al. (2012). Analysis and application of gamification. In Proceedings of the 13th international conference on Interacción Persona-Ordenador . Elche, Spain: Association for Computing Machinery. Article 17.
Aşıksoy, G. (2018). The effects of the gamified flipped classroom environment (GFCE) on students’ motivation, learning achievements and perception in a physics course. Quality and Quantity, 52 , 129–145.
Asiksoy, G., & Canbolat, S. (2021). The effects of the gamified flipped classroom method on petroleum engineering students’ pre-class online behavioural engagement and achievement. International Journal of Engineering Pedagogy, 11 (5), 19–36.
Bencsik, A., Mezeiova, A., et al. (2021). Gamification in higher education (case study on a management subject). International Journal of Learning, Teaching and Educational Research, 20 (5), 211–231.
Bennani, S., Maalel, A., et al. (2021). Towards an adaptive gamification model based on ontologies. In 2021 IEEE/ACS 18th international conference on computer systems and applications (AICCSA) .
Bernik, A. (2021). Gamification framework for E-learning systems in higher education. Tehnički Glasnik, 15 (2), 184–190.
Bernik, A., Radošević, D., et al. (2017). Research on efficiency of applying gamified design into University’s e-courses: 3D modeling and programming. Journal of Computer Science, 13 (12), 718–727.
Bernik, A., Radošević, D., et al. (2019). Achievements and usage of learning materials in computer science hybrid courses. Journal of Computer Science, 15 (4), 489–498.
Böckle, M., Micheel, I., et al. (2018). A design framework for adaptive gamification applications.
Buckley, P., & Doyle, E. (2017). Individualising gamification: An investigation of the impact of learning styles and personality traits on the efficacy of gamification using a prediction market. Computers and Education, 106 , 43–55.
Carreño, A. M. (2018). A framework for agile design of personalized gamification services.
Castro, T. C., & Gonçalves, L. S. (2018). The use of gamification to teach in the nursing field. Revista Brasileira De Enfermagem, 71 (3), 1038–1045.
Cechetti, N. P., Bellei, E. A., et al. (2019). Developing and implementing a gamification method to improve user engagement: A case study with an m-Health application for hypertension monitoring. Telematics Informatics, 41 , 126–138.
Chou, Y. K. (2015). Actionable gamification: Beyond points, badges, and leaderboards . Createspace Independent Publishing Platform.
Coleman, J. D. (2018). Engaging undergraduate students in a co-curricular digital badging platform. Education and Information Technologies, 23 (1), 211–224.
de la Peña, D., Lizcano, D., et al. (2021). Learning through play: Gamification model in university-level distance learning. Entertainment Computing, 39 , 100430.
De-Marcos, L., Garcia-Cabot, A., et al. (2020). Gamifying massive online courses: Effects on the social networks and course completion rates. Applied Sciences (switzerland), 10 (20), 1–17.
Deterding, S., Dixon, D., et al. (2011b). From game design elements to gamefulness: Defining gamification.
Deterding, S., Dixon, D., et al. (2011a). From game design elements to gamefulness: Defining "gamification". In Proceedings of the 15th international academic mindtrek conference: envisioning future media environments (pp. 9–15). Tampere, Finland: Association for Computing Machinery.
Deterding, S. (2012). Gamification: Designing for motivation. Interactions, 19 (4), 14–17.
Dias, J. (2017). Teaching operations research to undergraduate management students: The role of gamification. The International Journal of Management Education, 15 (1), 98–111.
Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14 (1), 9.
Dicheva, D., Dichev, C., et al. (2015). Gamification in education: A systematic mapping study. Educational Technology & Society, 18 , 75–88.
Dikcius, V., Urbonavicius, S., et al. (2021). Learning marketing online: The role of social interactions and gamification rewards. Journal of Marketing Education, 43 (2), 159–173.
Donath, L., Mircea, G., et al. (2020). E-learning platforms as leverage for education for sustainable development. European Journal of Sustainable Development, 9 (2), 1–19.
Donnermann, M., Lein, M., et al. (2021). Social robots and gamification for technology supported learning: An empirical study on engagement and motivation. Computers in Human Behavior, 121 , 106792.
Duggal, K., Gupta, L. R., et al. (2021). Gamification and machine learning inspired approach for classroom engagement and learning. Mathematical Problems in Engineering, 2021 , 9922775.
Enders, B. (2013). GAMIFICATION, GAMES, AND LEARNING: What managers and practitioners need to know . The E-learning Guild.
Facey-Shaw, L., Specht, M., et al. (2020). Do badges affect intrinsic motivation in introductory programming students? Simulation and Gaming, 51 (1), 33–54.
Fajiculay, J. R., Parikh, B. T., et al. (2017). Student perceptions of digital badges in a drug information and literature evaluation course. Currents in Pharmacy Teaching and Learning, 9 (5), 881–886.
Fajri, F. A., R. K. Haribowo P, et al. (2021). Gamification in e-learning: The mitigation role in technostress. International Journal of Evaluation and Research in Education, 10 (2), 606–614.
García, F., Pedreira, O., et al. (2017). A framework for gamification in software engineering. Journal of Systems and Software, 132 , 21–40.
Gari, M. R. N., & Radermacher, A. D. (2018). Gamification in computer science education: A systematic literature review. In ASEE annual conference and exposition, conference proceedings .
Garnett, T., & Button, D. (2018). The use of digital badges by undergraduate nursing students: A three-year study. Nurse Education in Practice, 32 , 1–8.
Garone, P., & Nesteriuk, S. (2019). Gamification and learning: A comparative study of design frameworks . Springer.
Górska, D. (2016). E-learning in Higher Education. The Person and the Challenges. The Journal of Theology, Education, Canon Law and Social Studies Inspired by Pope John Paul II, 6 (2), 35.
Guérard-Poirier, N., Beniey, M., et al. (2020). An educational network for surgical education supported by gamification elements: Protocol for a randomized controlled trial. JMIR Research Protocols, 9 (12), e21273.
Gunawan, F. E., & Jupiter. (2018). Gamification analysis and implementation in online learning. ICIC Express Letters, 12 (12), 1195–1204.
Gündüz, A. Y., & Akkoyunlu, B. (2020). Effectiveness of gamification in flipped learning. SAGE Open , 10 (4).
Hallifax, S., Serna, A., et al. (2019a). Factors to consider for tailored gamification. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play (pp. 559–572). Barcelona, Spain: Association for Computing Machinery.
Hallifax, S., Serna, A., et al. (2019b). Adaptive gamification in education: A literature review of current trends and developments. In Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) (Vol. 11722 LNCS, pp. 294–307).
Hervas, R., Ruiz-Carrasco, D., et al. (2017). Gamification mechanics for behavioral change: A systematic review and proposed taxonomy. In ACM international conference proceeding series .
Hisham, F. B. M. N., & Sulaiman, S. (2017). Adapting gamification approach in massive open online courses to improve user engagement. UTM Computing Proceedings Innovation in Computing Technology and Applications, 2 , 1–6.
Huang, B., & Hew, K. F. (2018). Implementing a theory-driven gamification model in higher education flipped courses: Effects on out-of-class activity completion and quality of artifacts. Computers and Education, 125 , 254–272.
Hunicke, R., Leblanc, M. G., et al. (2004). MDA: A formal approach to game design and game research.
Jianu, E. M., & Vasilateanu, A. (2017). Designing of an e-learning system using adaptivity and gamification. In 2017 IEEE international systems engineering symposium (ISSE) .
Júnior, E., & Farias, K. (2021). ModelGame: A quality model for gamified software modeling learning. In 15th Brazilian symposium on software components, architectures, and reuse (pp. 100–109). Joinville, Brazil: Association for Computing Machinery.
Kalogiannakis, M., Papadakis, S., et al. (2021). Gamification in science education. A systematic review of the literature. Education Sciences, 11 (1), 22.
Kamunya, S., Mirirti, E., et al. (2020). An adaptive gamification model for e-learning. In 2020 IST-Africa conference (IST-Africa) .
Kapp, K. M. (2012). The gamification of learning and instruction: game-based methods and strategies for training and education.
Kapp, K. M. B. L. M. R. (2014). The gamification of learning and instruction fieldbook: Ideas into practice . Wiley.
Kasinathan, V., Mustapha, A., et al. (2018). Questionify: Gamification in education. International Journal of Integrated Engineering, 10 (6), 139–143.
Khaleel, F. L., Ashaari, N. S., et al. (2019). An empirical study on gamification for learning programming language website. Jurnal Teknologi, 81 (2), 151–162.
Khaleel, F. L., Ashaari, N. S., et al. (2020). The impact of gamification on students learning engagement. International Journal of Electrical and Computer Engineering, 10 (5), 4965–4972.
Khalil, M., Wong, J., et al. (2018). Gamification in MOOCs: A review of the state of the art. In IEEE global engineering education conference, EDUCON .
Kim, J. T., & Lee, W.-H. (2015). Dynamical model for gamification of learning (DMGL). Multimedia Tools and Applications, 74 (19), 8483–8493.
Kitchenham, B. (2004). Procedures for performing systematic reviews . Software Engineering Group Department of Computer Science, Keele University.
Knutas, A., van Roy, R., et al. (2019). A process for designing algorithm-based personalized gamification. Multimedia Tools and Applications, 78 (10), 13593–13612.
Kyewski, E., & Krämer, N. C. (2018). To gamify or not to gamify? An experimental field study of the influence of badges on motivation, activity, and performance in an online learning course. Computers and Education, 118 , 25–37.
Landers, R. N. (2014). Developing a theory of gamified learning: Linking serious games and gamification of learning. Simulation & Gaming, 45 (6), 752–768.
Lavoué, E., Monterrat, B., et al. (2019). Adaptive gamification for learning environments. IEEE Transactions on Learning Technologies, 12 (1), 16–28.
Legaki, N. Z., & Hamari, J. (2020). Gamification in statistics education: A literature review. In CEUR workshop proceedings .
Legaki, N. Z., Xi, N., et al. (2020). The effect of challenge-based gamification on learning: An experiment in the context of statistics education. International Journal of Human Computer Studies, 144 , 102496.
Llorens-Largo, F., Gallego-Durán, F. J., et al. (2016). Gamification of the learning process: Lessons learned. IEEE Revista Iberoamericana De Tecnologias Del Aprendizaje, 11 (4), 227–234.
Majuri, J., Koivisto, J., et al. (2018). Gamification of education and learning: A review of empirical literature. In CEUR workshop proceedings .
Marín, B., Frez, J., et al. (2019). An empirical investigation on the benefits of gamification in programming courses. ACM Transactions on Computing Education, 19 (1), 1–22.
Mi, Q., Keung, J., et al. (2018). A gamification technique for motivating students to learn code readability in software engineering. In Proceedings—2018 international symposium on educational technology, ISET 2018 .
Milenković, I., Šošević, U., et al. (2019). Improving student engagement in a biometric classroom: The contribution of gamification. Universal Access in the Information Society, 18 (3), 523–532.
Mora, A., Riera, D., et al. (2017). Gamification: A systematic review of design frameworks. Journal of Computing in Higher Education, 29 (3), 516–548.
Morschheuser, B., Werder, K., et al. (2017). How to gamify? A method for designing gamification.
Morschheuser, B., Hassan, L., et al. (2018). How to design gamification? A method for engineering gamified software. Information and Software Technology, 95 , 219–237.
Naik, V., & Kamat, V. V. (2015). Adaptive and gamified learning environment (AGLE). In 2015 IEEE seventh international conference on technology for education (T4E) (pp. 7–14).
Nicholson, S. (2012). A user-centered theoretical framework for meaningful gamification.
Nielson, B. (2017). Gamification mechanics vs. gamification dynamics . Retrieved from https://www.yourtrainingedge.com/gamification-mechanics-vs-gamification-dynamics/ .
Ozdamli, S. K. A. F. (2018). A review of research on gamification approach in education.
Pakinee, A., & Puritat, K. (2021). Designing a gamified e-learning environment for teaching undergraduate ERP course based on big five personality traits. Education and Information Technologies, 26 (4), 4049–4067.
Park, J., De, L., et al. (2019). GAMESIT: A gamified system for information technology training. Computers and Education, 142 , 103643.
Pérez-López, I. J., Rivera García, E., et al. (2017). “The prophecy of the chosen ones”: An example of gamification applied to university teaching. Revista Internacional De Medicina y Ciencias De La Actividad Fisica y Del Deporte, 17 (66), 243–260.
Pilkington, C. (2018). A playful approach to fostering motivation in a distance education computer programming course: Behaviour change and student perceptions. International Review of Research in Open and Distance Learning, 19 (3), 282–298.
Rivera, E. S., & Garden, C. L. P. (2021). Gamification for student engagement: A framework. Journal of Further and Higher Education, 45 (7), 999–1012.
Rodríguez, I., Puig, A., et al. (2022). Towards adaptive gamification: A method using dynamic player profile and a case study. Applied Sciences, 12 (1), 486.
Romero-Rodriguez, L. M., Ramirez-Montoya, M. S., et al. (2019). Gamification in MOOCs: Engagement application test in energy sustainability courses. IEEE Access, 7 , 32093–32101.
Ropero-Padilla, C., Rodriguez-Arrastia, M., et al. (2021). A gameful blended-learning experience in nursing: A qualitative focus group study. Nurse Education Today, 106 , 105109.
van Roy, R., & Zaman, B. (2017). Why gamification fails in education and how to make it successful: Introducing nine gamification heuristics based on self-determination theory (pp. 485–509).
Ryan, R., & Deci, E. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. The American Psychologist, 55 , 68–78.
Saggah, A., Atkins, A. S., et al. (2020). A review of gamification design frameworks in education. In 2020 Fourth international conference on intelligent computing in data sciences (ICDS) .
Saleem, A. N., Noori, N. M., et al. (2021). Gamification applications in E-learning: A literature review. Technology, Knowledge and Learning, 27 , 139–159.
Sanal, A. (2019). Content gamification vs structured gamification in E-learning . Retrieved from https://playxlpro.com/content-gamification-vs-structured-gamification-in-e-learning/ .
Sanchez, D. R., Langer, M., et al. (2020). Gamification in the classroom: Examining the impact of gamified quizzes on student learning. Computers and Education, 144 , 103666.
Schöbel, S., & Söllner, M. (2016). How to gamify information systems—Adapting gamification to individual user preferences.
Schonfeld, E. (2010). SCVNGR's secret game mechanics playdeck.
Seaborn, K., & Fels, D. I. (2015). Gamification in theory and action: A survey. International Journal of Human-Computer Studies, 74 , 14–31.
da Silva, R. J. R., Rodrigues, R. G., et al. (2019). Gamification in management education: A systematic literature review. BAR - Brazilian Administration Review , 16 (2).
Simões, J., Redondo, R. P. D., et al. (2013). A social gamification framework for a K-6 learning platform. Computers in Human Behavior, 29 , 345–353.
Smith, T. (2017). Gamified modules for an introductory statistics course and their impact on attitudes and learning. Simulation and Gaming, 48 (6), 832–854.
Sofiadin, A., & Azuddin, M. (2021). An initial sustainable e-learning and gamification framework for higher education. In International conferences on mobile learning 2021 and educational technologies 2021 .
Subhash, S., & Cudney, E. A. (2018). Gamified learning in higher education: A systematic review of the literature. Computers in Human Behavior, 87 , 192–206.
Swacha, J. (2021). State of research on gamification in education: A bibliometric survey. Education Sciences, 11 , 69.
Toda, A. M., Oliveira, W., et al. (2019). A taxonomy of game elements for gamification in educational contexts: Proposal and evaluation. In 2019 IEEE 19th international conference on advanced learning technologies (ICALT) .
Toda, A., Toledo Palomino, P., et al. (2020). How to gamify learning systems? An experience report using the design sprint method and a taxonomy for gamification elements in education. Educational Technology & Society, 22 , 47–60.
Towongpaichayont, W. (2021). A guideline of designing gamification in the classroom and its case study. ICIC Express Letters, 15 (6), 639–647.
Tsay, C. H. H., Kofinas, A., et al. (2018). Enhancing student learning experience with technology-mediated gamification: An empirical study. Computers and Education, 121 , 1–17.
Urh, M., Vukovic, G., et al. (2015). The model for introduction of gamification into E-learning in higher education. Procedia - Social and Behavioral Sciences, 197 , 388–397.
Uz Bilgin, C., & Gul, A. (2020). Investigating the effectiveness of gamification on group cohesion, attitude, and academic achievement in collaborative learning environments. TechTrends, 64 (1), 124–136.
van Gaalen, A. E. J., Brouwer, J., et al. (2021). Gamification of health professions education: A systematic review. Advances in Health Sciences Education, 26 (2), 683–711.
van Roy, R., & Zaman, B. (2019). Unravelling the ambivalent motivational power of gamification: A basic psychological needs perspective. International Journal of Human Computer Studies, 127 , 38–50.
Huang, W. H.-Y., & Soman, D. (2013). Gamification of education . Rotman School of Management, University of Toronto.
Werbach, K., & Hunter, D. (2012). For the win: How game thinking can revolutionize your business.
Winanti, W., Abbas, B. S., et al. (2020). Gamification framework for programming course in higher education. Journal of Games, Game Art, and Gamification, 5 (2), 54–57.
Wongso, O., Rosmansyah, Y., et al. (2014). Gamification framework model, based on social engagement in e-learning 2.0. In 2014 2nd international conference on technology, informatics, management, engineering & environment (pp. 10–14).
Yamani, H. (2021). A conceptual framework for integrating gamification in elearning systems based on instructional design model. International Journal of Emerging Technologies in Learning, 16 , 14–33.
Yildirim, I. (2017). The effects of gamification-based teaching practices on student achievement and students’ attitudes toward lessons. Internet and Higher Education, 33 , 86–92.
Zainuddin, Z., Chu, S. K. W., et al. (2020). The impact of gamification on learning and instruction: A systematic review of empirical evidence. Educational Research Review, 30 , 100326.
Zaric, N., Lukarov, V., et al. (2020). A fundamental study for gamification design: Exploring learning tendencies’ effects. International Journal of Serious Games, 7 (4), 3–25.
Zhao, D., Playfoot, J., et al. (2022). An innovative multi-layer gamification framework for improved STEM learning experience. IEEE Access, 10 , 3879–3889.
Zichermann, G., & Cunningham, C. (2011). Gamification by design: Implementing game mechanics in web and mobile apps . O’Reilly Media, Inc.
Acknowledgements
Not applicable.
Author information
Authors and affiliations.
Ecole Nationale Supérieure d’Informatique ESI, Ex INI, Algiers, Algeria
Amina Khaldi, Rokia Bouzidi & Fahima Nader
Contributions
The authors worked together on the manuscript. All authors have read and approved the final manuscript.
Corresponding author
Correspondence to Amina Khaldi .
Ethics declarations
Competing interests.
The authors declare that they have no competing interests.
Additional information
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Cite this article.
Khaldi, A., Bouzidi, R. & Nader, F. Gamification of e-learning in higher education: a systematic literature review. Smart Learn. Environ. 10, 10 (2023). https://doi.org/10.1186/s40561-023-00227-z
Received: 07 November 2022
Accepted: 10 January 2023
Published: 31 January 2023
DOI: https://doi.org/10.1186/s40561-023-00227-z
Keywords
- Gamification
- Higher education
- Tertiary education
- Digital learning environments
- Systematic review
Finally, the authors of the paper (Kalogiannakis et al., 2021) performed a systematic literature review on gamification in science education by reviewing 24 empirical research papers. A research question related to our field of study was addressed in this review, namely, what learning theory is used, and what game elements are incorporated into ...