Fostering students’ motivation towards learning research skills: the role of autonomy, competence and relatedness support

Louise Maddens

1 Centre for Instructional Psychology and Technology, Faculty of Psychology and Educational Sciences, KU Leuven and KU Leuven Campus Kulak Kortrijk, Etienne Sabbelaan 51 – bus 7800, 8500 Kortrijk, Belgium

2 Itec, imec Research Group at KU Leuven, imec, Leuven, Belgium

3 Vives University of Applied Sciences, Kortrijk, Belgium

Fien Depaepe

Annelies Raes

Abstract

In order to design learning environments that foster students’ research skills, one can draw on instructional design models for complex learning, such as the 4C/ID model (in: van Merriënboer and Kirschner, Ten steps to complex learning, Routledge, London, 2018). However, few attempts have been undertaken to foster students’ motivation towards learning complex skills in environments based on the 4C/ID model. This study explores the effects of providing autonomy, competence and relatedness support (in Deci and Ryan, Psychol Inquiry 11(4): 227–268, https://doi.org/10.1207/S15327965PLI1104_01, 2000) in a 4C/ID based online learning environment on upper secondary school behavioral sciences students’ cognitive and motivational outcomes. Students’ cognitive outcomes are measured by means of a research skills test consisting of short multiple choice and short answer items (in order to assess research skills in a broad way), and a research skills task in which students are asked to integrate their skills in writing a research proposal (in order to assess research skills in an integrative manner). Students’ motivational outcomes are measured by means of students’ autonomous and controlled motivation, and students’ amotivation. A pretest-intervention-posttest design was set up in order to compare 233 upper secondary school behavioral sciences students’ outcomes among (1) a 4C/ID based online learning environment condition, and (2) an identical condition additively providing support for students’ need satisfaction. Both learning environments proved equally effective in improving students’ scores on the research skills test. Students in the need supportive condition scored higher on the research skills task compared to their peers in the baseline condition. Students’ autonomous and controlled motivation were not affected by the intervention. Although, unexpectedly, students’ amotivation increased in both conditions, students’ amotivation was lower in the need supportive condition compared to students in the baseline condition. Theoretical relationships were established between students’ need satisfaction, students’ motivation (autonomous, controlled, and amotivation), and students’ cognitive outcomes. These findings are discussed taking into account the COVID-19 affected setting in which the study took place.

Introduction

Several scholars have argued that the process of learning research skills is often obstructed by motivational problems (Lehti & Lehtinen, 2005; Murtonen, 2005). Some even describe these issues as students having an aversion towards research (Pietersen, 2002). Examples of motivational problems are that students experience research courses as boring, inaccessible, or irrelevant to their daily lives (Braguglia & Jackson, 2012). In a research synthesis on teaching and learning research methods, Earley (2014) argues that students fail to see the relevance of research methods courses, are anxious or nervous about the course, are uninterested and unmotivated to learn the material, and have poor attitudes towards learning research skills. It should be noted that the studies mentioned above focused on higher (university) education. In upper secondary education, to date, students’ motivation towards learning research skills has rarely been studied. As difficulties in learning research seem to relate to students’ previous experiences with learning research skills (Murtonen, 2005), we argue that fostering students’ motivation from secondary education onwards is a promising area of research.

The current study combines insights from instructional design theory and self-determination theory (SDT, Deci & Ryan, 2000) in order to investigate the cognitive and motivational effects of providing psychological need support (support for the needs for autonomy, competence and relatedness) in a 4C/ID based (van Merriënboer & Kirschner, 2018) online learning environment fostering upper secondary school students’ research skills. In the following sections, we elaborate on the definition of research skills in the understudied domain of behavioral sciences; on 4C/ID (van Merriënboer & Kirschner, 2018) as an instructional design model for complex learning; and on self-determination theory and its related need theory (Deci & Ryan, 2000). In addition, the research questions addressed in the current study are outlined.

Conceptual framework

Research skills

Following Fischer et al. (2014, p. 29), we define research skills 1 as a broad set of skills used “to understand how scientific knowledge is generated in different scientific disciplines, to evaluate the validity of science-related claims, to assess the relevance of new scientific concepts, methods, and findings, and to generate new knowledge using these concepts and methods”. Furthermore, eight scientific activities that learners engage in while performing research are distinguished, namely: (1) problem identification, (2) questioning, (3) hypothesis generation, (4) construction and redesign of artefacts, (5) evidence generation, (6) evidence evaluation, (7) drawing conclusions, and (8) communicating and scrutinizing (Fischer et al., 2014). Fischer et al. (2014) argue that both the nature of, and the weights attributed to, each of these activities differ between domains. Intervention studies aiming to foster research skills are almost exclusively situated in natural sciences domains (Engelmann et al., 2016), leaving behavioral sciences domains largely understudied. The current study focuses on research skills in the understudied domain of behavioral sciences. We refer to the domain of behavioral sciences as the study of questions related to how people behave, and why they do so. Human behavior is understood in its broadest sense, and is the object of study in fields such as psychology, educational sciences, and cultural and social sciences.

The design of the learning environments used in this study is based on an existing instructional design model, namely the 4C/ID model (van Merriënboer & Kirschner, 2018). The 4C/ID model has repeatedly proven effective in fostering complex skills (Costa et al., 2021), and thus drew our attention for the case of research skills, as research skills can be considered complex skills (they require learners to integrate knowledge, skills and attitudes while performing complex learning tasks). Since the 4C/ID model focuses on supporting students’ cognitive outcomes, it might not seem relevant from a motivational point of view. However, since we argue that a learning environment that is deliberately designed from a cognitive point of view is an important prerequisite for providing high-quality motivational support, we briefly sketch the 4C/ID model and its characteristics. The 4C/ID model has a comprehensive character, integrating insights from different theories and models (Merrill, 2002), and highlights the relevance of four crucial components: learning tasks, supportive information, part-task practice, and just-in-time information. Central characteristics of these four components are that (a) high variability in authentic learning tasks is needed in order to deal with the complexity of the task; (b) supportive information is provided to the students in order to help them build mental models and strategies for solving the task under study (Cook & McDonald, 2008); (c) part-task practice is provided for recurrent skills that need to be automated; and (d) just-in-time (procedural) information is provided for recurrent skills.

Taking into account students’ cognitive struggles regarding research skills, and the existing research on the role of support in fostering research skills (see for example de Jong & van Joolingen, 1998), the 4C/ID model was found suitable for designing a learning environment for research skills. This is partly because it includes (almost) all of the support measures found effective in the literature on research skills, such as providing direct access to domain information at the appropriate moment, providing learners with assignments, including model progression, involving students in authentic activities, and so on (Chi, 2009; de Jong, 2006; de Jong & van Joolingen, 1998; Engelmann et al., 2016). While mainly implemented in vocationally oriented programs, the 4C/ID model has been proposed as a good model for designing learning environments aiming to foster research skills as well (Bastiaens et al., 2017; Maddens et al., 2020b). Indeed, acquiring research skills requires complex learning processes (such as coordinating different constituent skills). Overall, the 4C/ID model can be considered highly suitable for designing learning environments aiming to foster research skills. Given its holistic design approach, it helps “to deal with complexity without losing sight of the interrelationships between the elements taught” (van Merriënboer & Kirschner, 2018, p. 5).

Although the 4C/ID model has been used widely to construct learning environments enhancing students’ cognitive outcomes (see for example Fischer, 2018 ), research focusing on students’ motivational outcomes related to the 4C/ID model is scarce (van Merriënboer & Kirschner, 2018 ). Van Merriënboer and Kirschner ( 2018 ) suggest self-determination theory (SDT; Deci & Ryan, 2000 ) and its related need theory as a sound theoretical framework to investigate motivation in relation to 4C/ID.

Self-determination theory

Self-determination theory (SDT; Deci & Ryan, 2000) provides a broad framework for the study of motivation and distinguishes three types of motivation: amotivation (a lack of ability to self-regulate with respect to a behaviour), extrinsic motivation (extrinsically motivated behaviours, be they self-determined or controlled), and intrinsic motivation (the ‘highest form’ of self-determined behaviour) (Deci & Ryan, 2000). According to Deci and Ryan (2000, p. 237), intrinsic motivation can be considered “a standard against which the qualities of an extrinsically motivated behavior can be compared to determine its degree of self-determination”. Moreover, the authors (Deci & Ryan, 2000, p. 237) argue that “extrinsic motivation does not typically become intrinsic motivation”. As the current study focuses on research skills in an academic context in which students did not voluntarily choose to learn research skills, and thus learning research skills can be considered instrumental (directed at attaining a goal), the current study focuses on students’ amotivation and students’ extrinsic motivation, realistically striving for the most self-determined types of extrinsic motivation.

Four types of extrinsic motivation are distinguished by SDT (external regulation, introjection, identification, and integration). These types can be categorized into two overarching types of motivation (autonomous and controlled motivation). Autonomous motivation comprises integrated and identified regulation of a task (because the task is considered interesting, or because the task is considered personally relevant, respectively). Controlled motivation refers to external and introjected regulation of the task (as a consequence of external or internal pressure, respectively) (Vansteenkiste et al., 2009). More autonomous types of motivation have been found to be related to more positive cognitive and motivational outcomes (Deci & Ryan, 2000).

SDT further maintains that one should consider three innate psychological needs related to students’ motivation: the need for autonomy, the need for competence, and the need for relatedness. The need for autonomy can be described as the need to experience activities as being “concordant with one’s integrated sense of self” (Deci & Ryan, 2000, p. 231). The need for competence refers to the need to feel effective when dealing with the environment (Deci & Ryan, 2000). The need for relatedness concerns the need to have close relationships with others, including peers and teachers (Deci & Ryan, 2000). The satisfaction of these needs is hypothesized to be related to more internalization, and thus to more autonomous types of motivation (Deci & Ryan, 2000). This relationship has been studied frequently (for a recent overview, see Vansteenkiste et al., 2020). Indeed, research has established positive relationships of perceived autonomy (see for example Deci et al., 1996), perceived competence (see for example Vallerand & Reid, 1984), and perceived relatedness (see for example Ryan & Grolnick, 1986, for a self-report based study) with students’ more positive motivational outcomes. Apart from students’ need satisfaction, several scholars also investigate need frustration as a distinct notion, as “it involves an active threat of the psychological needs (rather than a mere absence of need satisfaction)” (Vansteenkiste et al., 2020, p. 9). In what follows, possible operationalizations are defined for the three needs.

Possible operationalizations of autonomy need support found in the literature are: teachers accepting irritation or negative feelings related to aspects of a task perceived as “uninteresting” (Reeve, 2006; Reeve & Jang, 2006; Reeve et al., 2002); providing a meaningful rationale in order to explain the value/usefulness of a certain task and stressing why engaging in the task is important or why a rule exists (Deci & Ryan, 2000); using autonomy-supportive, inviting language (Deci et al., 1996); and allowing learners to regulate their own learning and to work at their own pace (Martin et al., 2018). Related to competence support, possible operationalizations are: providing a clear task rationale and providing structure (Reeve, 2006; Vansteenkiste et al., 2012); providing informational positive feedback after a learning activity (Deci et al., 1996; Martin et al., 2018; Vansteenkiste et al., 2012); providing an indication of progress and dividing content into manageable blocks (Martin et al., 2018; Schunk, 2003); and evaluating performance by means of previously introduced criteria (Ringeisen & Bürgermeister, 2015). Possible operationalizations concerning relatedness support are: teachers’ relational support (Ringeisen & Bürgermeister, 2015); encouraging interaction between course participants and providing opportunities for learners to connect with each other (Butz & Stupnisky, 2017; van Merriënboer & Kirschner, 2018); using a warm and friendly approach or welcoming learners personally into a course (Martin et al., 2018); and offering a platform for learners to share ideas and to connect (Butz & Stupnisky, 2017; Martin et al., 2018).

In the current research, SDT is selected as the theoretical framework to investigate students’ motivation towards learning research skills, as, in contrast to other, more purely goal-directed theories, it includes the concept of innate psychological needs, or Basic Psychological Need Theory (Deci & Ryan, 2000; Ryan, 1995; Vansteenkiste et al., 2020), and it describes the relation between these perceived needs and students’ autonomous motivation: higher levels of perceived need satisfaction relate to more autonomous forms of motivation. The inclusion of this need theory is considered an advantage in the case of research skills because, as indicated in the introduction, research has revealed problems with respect to both students’ feelings of competence in relation to research skills (Murtonen, 2005) and their feelings of autonomy in relation to research skills (Martin et al., 2018). As such, fostering students’ psychological needs while learning research skills seems a promising way of fostering students’ motivation towards learning research skills.

4C/ID and SDT

One study (Bastiaens et al., 2017) was found that implemented need support in 4C/ID based learning environments, comparing a traditional module, a 4C/ID based module and an autonomy supportive 4C/ID based module in a vocational undergraduate education context. Autonomy support was operationalized by means of providing choice to the learners. No main effect of the conditions was found on students’ motivation. Surprisingly, providing autonomy support did not lead to an increase in students’ autonomy satisfaction either. Similarly, no effects were found on students’ relatedness and competence satisfaction. Remarkably, students did qualitatively report positive experiences towards the need support, but this was not reflected in their quantitatively reported need experiences. In a previous study performed in the current research trajectory, Maddens et al. (under review) investigated the motivational effects of providing autonomy support in a 4C/ID based online learning environment fostering students’ research skills, compared to a learning environment not providing such support. Autonomy support was operationalized as stressing task meaningfulness to the students. Based on insights from self-determination theory, it was hypothesized that students in the autonomy condition would show more positive motivational outcomes compared to students in the baseline condition. However, results showed that students’ motivational outcomes appeared to be unaffected by the autonomy support. One possible explanation for this unexpected finding is that optimal circumstances for positive motivational outcomes are those that allow satisfaction of the needs for autonomy, competence, and relatedness (Deci & Ryan, 2000; Niemiec & Ryan, 2009), and thus that the intervention was insufficiently powerful for effects to occur. Autonomy support has often been manipulated in experimental research (Deci et al., 1994; Reeve et al., 2002; Sheldon & Filak, 2008). However, the three needs are rarely simultaneously manipulated (Sheldon & Filak, 2008).

Integrated need support

Although not making use of 4C/ID based learning environments, some scholars have focused on the impact of integrated (autonomy, competence and relatedness) need support on learners’ motivation. For example, Raes and Schellens (2015) found differential effects of a need supportive inquiry environment on upper secondary school students’ motivation: positive effects on autonomous motivation were only found for students in a general track, and not for students in a science track. This indicates that motivational effects of need-supportive environments might differ between tracks and disciplines. However, Raes and Schellens (2015) did not experimentally manipulate need support, as the learning environment was assumed to be need-supportive and was not compared to a non-need supportive learning environment. Pioneers in manipulating competence, relatedness and autonomy support in one study are Sheldon and Filak (2008), who predicted need satisfaction and motivation based on a game-learning experience with introductory psychology students. Relatedness support (mainly operationalized by emphasizing interest in participants’ experiences in a caring way) had a significant effect on intrinsic motivation. Competence support (mainly operationalized by means of explicating positive expectations) had a marginally significant effect on intrinsic motivation. No main effects on intrinsic motivation were found regarding autonomy support (mainly operationalized by means of emphasizing choice, self-direction and participants’ perspective on the task). However, as is often the case in motivational research based on SDT, the task at hand was quite straightforward (a timed task in which students try to form as many words as possible from a 4 × 4 letter grid), and thus the applicability of the findings to providing need support in 4C/ID based learning environments for complex learning might be limited.

In the preceding section, several operationalizations of need support were discussed. Deci and Ryan (2000) argue that optimal circumstances for positive motivational outcomes are those that allow satisfaction of the needs for autonomy, competence, and relatedness. However, such integrated need support has rarely been empirically studied (Sheldon & Filak, 2008). In addition, research investigating how need support can be implemented in learning environments based on the 4C/ID model is particularly scarce (van Merriënboer & Kirschner, 2018). This study aims to combine insights from instructional design theory for complex learning (van Merriënboer & Kirschner, 2018) and self-determination theory (Deci & Ryan, 2000) in order to investigate the motivational effects of providing need support in a 4C/ID based learning environment for students’ research skills. A pretest-intervention-posttest design is set up in order to compare 233 upper secondary school behavioral sciences students’ cognitive and motivational outcomes among two conditions: (1) a 4C/ID based online learning environment condition, and (2) an identical condition additively providing support for students’ need satisfaction. The following research questions are answered based on a combination of quantitative and qualitative data (see ‘method’): (1) Does a deliberately designed (4C/ID-based) learning environment improve students’ research skills, as measured by a research skills test and a research skills task? (2) What is the effect of providing autonomy, competence and relatedness support in a deliberately designed (4C/ID-based) learning environment fostering students’ research skills, on students’ motivational outcomes (i.e. students’ amotivation, autonomous motivation, controlled motivation, students’ perceived value/usefulness, and students’ perceived needs of competence, relatedness and autonomy)? (3) What are the relationships between students’ need satisfaction, students’ need frustration, students’ autonomous and controlled motivation and students’ cognitive outcomes (research skills test and research skills task)? (4) How do students experience need satisfaction and need frustration in a deliberately designed (4C/ID-based) learning environment?

The first three questions are answered by means of quantitative data. Since the learning environment is constructed in line with existing instructional design principles for complex learning, we hypothesize that both learning environments will succeed in improving students’ research skills (RQ1). Relying on insights from self-determination theory (Deci & Ryan, 2000), we hypothesize that providing need support will enhance students’ autonomous motivation (RQ2). In addition, we hypothesize students’ need satisfaction to be positively related to students’ autonomous motivation (RQ3). These hypotheses on the relationship between students’ needs and students’ motivation rely on Vallerand’s (1997) finding that changes in motivation can be largely explained by students’ perceived competence, autonomy and relatedness (as psychological mediators). More specifically, Vallerand (1997) argues that environmental factors (in this case the characteristics of a learning environment) influence students’ perceptions of competence, autonomy, and relatedness, which, in turn, influence students’ motivation and other affective outcomes. In addition, based on the self-determination literature (Deci & Ryan, 2000), we expect students’ motivation to be positively related to students’ cognitive outcomes. In order to answer the fourth research question, qualitative data (students’ qualitative feedback on the learning environments) is analyzed and categorized based on the need satisfaction and need frustration concepts (RQ4), in order to thoroughly capture the meaning of the quantitative results collected in light of RQ1–3. No hypotheses are formulated in this respect.

Methodology

Participants

The study took place in authentic classroom settings in upper secondary behavioral sciences classes. In total, 233 students from 12 classes from eight schools in Flanders participated in the study. All participants are 11th or 12th grade students in a behavioral sciences track 2 in general upper secondary education in Flanders (Belgium). Classes were randomly assigned to one of two experimental conditions. Of all 233 students, 105 students (with a mean age of 16.32, SD 0.90) worked in the baseline condition (of which 62% 11th grade students, 36% 12th grade students, and 2% not determined; and of which 31% male, 68% female, and 1% ‘other’), and 128 students (with a mean age of 16.02, SD 0.59) worked in the need supportive condition (of which 80% 11th grade students and 20% 12th grade students; and of which 19% male and 81% female). As the current study did not randomly assign students within classes to one of the two conditions, this study should be considered quasi-experimental. Full randomization was considered but was not feasible, as students worked in the learning environments in class and would potentially notice the experimental differences when observing their peers working in the learning environment. As such, we argued that this would potentially cause bias in the study. By taking into account students’ pretest scores on the relevant variables (cognitive and motivational outcomes) as covariates, we aimed to adjust for inter-conditional differences. No such differences were found for students’ autonomous motivation, t(226) = −0.115, p = 0.909, d = 0.015, and students’ amotivation, t(226) = −0.658, p = 0.511, d = −0.088. However, differences were observed for students’ controlled motivation, t(226) = −2.385, p = 0.018, d = −0.318, and students’ scores on the LRST pretest, t(225) = −5.200, p < 0.001, d = −0.695.

Study design and procedure

In a pretest session of maximum two lesson hours, the Leuven Research Skills Test (LRST, Maddens et al., 2020a), the Academic Self-Regulation Scale (ASRS, Vansteenkiste et al., 2009), and four items related to students’ amotivation (Aydin et al., 2014) were administered in class via an online questionnaire, under supervision of the teacher. In the subsequent eight weeks, participants worked in the online learning environment, one hour a week. Of the 233 participating students, 105 students studied in a baseline online learning environment. The baseline online learning environment 3 is systematically designed using existing instructional design principles for complex learning based on the 4C/ID model (van Merriënboer & Kirschner, 2018). All four components of the 4C/ID model were taken into account in the design process. Regarding the first component, the learning tasks included real-life, authentic cases. More specifically, tasks were selected from the domains of psychology, educational sciences and sociology. As such, there was a large variety in the cases used in the learning tasks. This large variety in learning tasks is expected to facilitate transfer of learners’ research skills to a wide range of contexts. Furthermore, the tasks were ill-structured and required learners to make judgments, in order to provoke deep learning processes. Regarding the second component, supportive information was provided for complex tasks in the learning environment, such as formulating a research question, where students can consult general information on what constitutes a good research question, can consult examples or demonstrations of this general information, and can receive cognitive feedback on their answers (for example by means of example answers). Examples of the implementation of the third component (just-in-time, procedural information) are the provision of information on how to recognize a dependent and an independent variable through on-demand pop-ups; information on how to use Boolean operators; and information on how to read a graph. To avoid split attention, this kind of information was integrated with the task environment itself (van Merriënboer & Kirschner, 2018). Finally, the fourth component, part-task practice (by means of short tests), was implemented for routine aspects of research skills that should be automated, for example the formulation of a search query.

The remaining participating students (n = 128) completed an adapted version of the baseline online learning environment, in which autonomy, relatedness and competence support are provided. In total, the need support consisted of 12 implementations (four implementations for each need), based on existing research on need support. An overview of these adaptations can be found in Tables 1 and 2. Although, ideally, students would work in class, under supervision of their teacher, this was not possible for all classes, due to the COVID-19 restrictions. 4 As a consequence, some students completed the learning environment partly at home. All students were supervised by their teachers (be it virtually or in class), and the researcher kept track of students’ overall activities in order to be able to contact students who did not complete the main activities. During the last two sessions of the intervention, participants submitted a two-page research proposal (“two-pager”). One week after the intervention, the LRST (Maddens et al., 2020a), the ASRS (Vansteenkiste et al., 2009), four items related to students’ amotivation (Aydin et al., 2014), the value/usefulness scale (Ryan, 1982) and the Basic Psychological Need Satisfaction and Frustration Scale (BPNSNF, Chen et al., 2015) were administered in a posttest session of maximum two hours. Although most classes succeeded in organizing this posttest session in class, for some classes this posttest was administered at home. However, all classes were supervised by the teacher (be it virtually or in class). These contextual differences at the test moments will be reflected upon in the discussion section.

Table 1. Adaptations to the online learning environment (support type, implementations, and concrete operationalizations in the need supportive learning environment)

Autonomy support

A1. Providing meaningful rationales in order to explain the value/usefulness of a certain task and stressing why engaging in the task is important or why a rule exists (Assor et al.; Deci et al.; Deci & Ryan; Steingut et al.)

–A1a. Video of a peer (student) stressing the value/usefulness of the learning environment before starting the learning environment

–A1b. Teacher stressing the importance of the learning environment before starting the learning environment

–A1c. Avatars stressing importance (see Author et al., under review); for example an avatar mentioning “After having completed this module, I know how to formulate a research question, for example when I am writing a bachelor thesis in my future academic career”

–A1d. Two-pager: adding examples of subjects chosen by peers, in order for the task to feel more familiar

A2. Accepting irritation/acknowledging negative feelings (acknowledgement of aspects of a task perceived as uninteresting) (Reeve & Jang; Reeve et al.)

–A2a. Including statements during tasks: “We understand that this might cost an effort, but previous studies proved that students can learn from performing this activity…”

–A2b. At the end of each module: teacher asks about students’ difficulties

A3. Using autonomy-supportive, inviting language (Deci et al.)

–A3a. Personal task rationale, for example: “I am curious about how you would tackle this problem.”, systematically implemented in the assignments

A4. Allowing learners to regulate their own learning and to work at their own pace; the use of a non-pressured environment (Martin et al.)

–A4a. Adding a statement after each task class: “no need to compare your progress to that of your peers, you can work at your own pace!”

Relatedness support

R1. Teacher’s relational supports (Ringeisen & Bürgermeister)

–R1a. Before starting the learning environment: stressing that students can contact the researcher and the teacher

–R1b. Researcher (scientist-mentor) sends motivational messages to the group (on a weekly basis)

R2. Encouraging interaction between course participants; providing opportunities for learners to connect with each other; introducing learning tasks that require group work or learning networks (Butz & Stupnisky; van Merriënboer & Kirschner)

–R2a. Opening every task class: reminding students they can contact the researcher with questions

–R2b. Every task class: one opportunity to share answers in the forum

R3. Using a warm and friendly approach, welcoming learners personally into a course (Martin et al.)

–R3a. Personal welcoming message in the beginning of the online learning environment

R4. Offering a platform for learners to share ideas and to connect (Butz & Stupnisky; Martin et al.)

–R4a. Asking students to post an introduction post in the forum to sum up their expectations of the course (once, in the beginning of the learning environment)

Competence support

C1. Clear task rationale, providing structure (Reeve; Vansteenkiste et al.)

–Introductory video of the researcher explaining what students will learn in the online learning environment

C2. Informational positive feedback after a learning activity (Deci et al.; Martin et al.; Vansteenkiste et al.)

–Personal short feedback after every task class, formulated in a positive manner

–Adding motivational quotes to example answers: “Thank you for submitting your answer! You will receive feedback at the end of this module, but until then, you can compare your answer to the example answer”

C3. Indication of progress; dividing content into manageable blocks (Martin et al.)

–After every task class: ask students to mark their progress

C4. Evaluating performance by means of previously introduced criteria (Ringeisen & Bürgermeister)

–SAP-chart referring to the instructions of the two-pager task

–Short guide for the two-pager task

Table 2. Overview of the instruments (measured construct; instrument; format; number of items; internal consistency/interrater reliability; when administered)

Psychological need frustration and satisfaction – BPNSNF-training scale (Chen et al.; translated version Aelterman et al.); Likert-type items, 5-point scale; 24 items (4 items per subscale); autonomy satisfaction α = 0.67, ω = 0.67; autonomy frustration α = 0.76, ω = 0.76; relatedness satisfaction α = 0.79, ω = 0.79; relatedness frustration α = 0.60, ω = 0.61; competence satisfaction α = 0.72, ω = 0.73; competence frustration α = 0.68, ω = 0.67; administered: post

Experienced value/usefulness of the learning environment – Intrinsic Motivation Inventory (Ryan); Likert-type items, 7-point scale; 7 items; α = 0.92, ω = 0.92; administered: post

Autonomous and controlled motivation – ASRS (Vansteenkiste et al.); Likert-type items, 5-point scale; 16 items (8 items for autonomous motivation, 8 items for controlled motivation); autonomous motivation α = 0.91; 0.92, ω = 0.90; 0.92; controlled motivation α = 0.83; 0.86, ω = 0.82; 0.85; administered: pre, post

Amotivation – Academic Motivation Scale for Learning Biology, adapted for the context (Aydin et al.); Likert-type items, 5-point scale; 4 items; α = 0.80; 0.75, ω = 0.81; 0.75; administered: pre, post

Research skills test – LRST (Maddens et al.); combination of open-ended and closed-ended conceptual and procedural knowledge items, each scored as 0 or 1; 37 items; α = 0.79; 0.82, ω = 0.78; 0.80; administered: pre, post

Research skills task – Two-pager task (Author et al., under review); open-ended question (performance assessment), assessed by means of a pairwise comparison technique; 1 task; interrater reliability = 0.79; administered: post

Note: when administered at both pretest and posttest level (see ‘procedure’), the internal consistency values are reported respectively (pretest; posttest).

Instruments

In this section, we elaborate on the tests used during the pretest and the posttest. Example items for each scale are presented in Appendix 1.

Motivational outcomes

In the current study, two groups of motivational outcomes are assessed: (1) students’ need satisfaction and frustration, and students’ experiences of value/usefulness; and (2) students’ level of autonomous motivation, controlled motivation, and amotivation. When administered at both pretest and posttest level (see ‘procedure’), the internal consistency values are reported respectively.

The BPNSNF-training scale (the Basic Psychological Need Satisfaction and Frustration Scale, Chen et al., 2015; translated version Aelterman et al., 2016 5) measured students’ need satisfaction and need frustration while working in the learning environment, and consists of 24 items (four items per scale): autonomy satisfaction, α = 0.67, ω = 0.67; autonomy frustration, α = 0.76, ω = 0.76; relatedness satisfaction, α = 0.79, ω = 0.79; relatedness frustration, α = 0.60, ω = 0.61; competence satisfaction, α = 0.72, ω = 0.73; competence frustration, α = 0.68, ω = 0.67. The items are Likert-type items ranging from one (not at all true) to five (entirely true). Although the current study focuses mainly on students’ need satisfaction, the scales regarding students’ need frustration are included in order to be able to detect students’ potential ill-being and potential critical issues regarding students’ needs. In addition to the BPNSNF, the value/usefulness scale of the Intrinsic Motivation Inventory (IMI, Ryan, 1982), translated for the purpose of this research, measured to what extent students valued the activities of the online learning environment (α = 0.92; ω = 0.92), by means of seven Likert-type items ranging from one (not at all true) to seven (entirely true). Since problems have been observed in the research skills literature related to students’ perceived value/usefulness of research skills (Earley, 2014; Murtonen, 2005), and this concept is not sufficiently stressed in the BPNSNF scale, we found it useful to include this value/usefulness scale in the study. The difference in the range of the answer options (one to five vs one to seven) exists because we wanted to keep the range as initially prescribed by the authors of each instrument. All motivational measures are calculated by adding the scores on every item, and dividing this sum score by the number of items on a scale, leading to continuous outcomes. Although the IMI and the BPNSNF targeted students’ experiences while completing the online learning environment, these measures were administered during the posttest. Thus, students had to think retrospectively about their experiences. In order to prevent cognitive overload while completing the online learning environment, these measures were not administered during the intervention itself.
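As an illustration of the scoring rule described above (item scores summed and divided by the number of items) and of how an internal consistency coefficient such as Cronbach’s alpha can be obtained, a minimal Python sketch is given below. The column names and the dummy data are hypothetical; this is not the authors’ code.

```python
# Minimal sketch (not the authors' code): a motivational scale score as the
# item mean, plus Cronbach's alpha, for one hypothetical four-item subscale.
import numpy as np
import pandas as pd

def scale_score(df: pd.DataFrame, items: list) -> pd.Series:
    """Add the item scores and divide by the number of items (row-wise mean)."""
    return df[items].sum(axis=1) / len(items)

def cronbach_alpha(df: pd.DataFrame, items: list) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = len(items)
    item_variances = df[items].var(axis=0, ddof=1).sum()
    total_variance = df[items].sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical item columns for one BPNSNF subscale; dummy 1-5 responses.
items = ["as_1", "as_2", "as_3", "as_4"]
data = pd.DataFrame(np.random.randint(1, 6, size=(100, 4)), columns=items)
data["autonomy_satisfaction"] = scale_score(data, items)
print(round(cronbach_alpha(data, items), 2))
```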

Students’ autonomous and controlled motivation towards learning research skills was measured by means of the Dutch version of the Academic Self-Regulation Scale (ASRS; Vansteenkiste et al., 2009), adapted to ‘research skills’. The ASRS consists of Likert-type items ranging from one (do not agree at all) to five (totally agree), and contains eight items per subscale (autonomous and controlled motivation). In the autonomous motivation scale, four items are related to identified regulation, and four items are related to intrinsic motivation. 6 In the controlled motivation scale, four items are related to external regulation, and four items are related to introjected regulation. Both scales (autonomous motivation and controlled motivation) indicated good internal consistency for the study’s data (autonomous motivation: α = 0.91; 0.92; ω = 0.90; 0.92; controlled motivation: α = 0.83; 0.86; ω = 0.82; 0.85). The items were adapted to the domain under study (motivation to learn about research skills). Based on students’ motivational issues related to research skills, we found it useful to also include a scale assessing students’ amotivation. This was measured with four items related to students’ amotivation regarding learning research skills, translated for the purpose of the current research and adapted from the Academic Motivation Scale for Learning Biology (Aydin et al., 2014) (α = 0.80; 0.75; ω = 0.81; 0.75). This measure also consists of Likert-type items ranging from one (do not agree at all) to five (totally agree).

Cognitive outcomes

Students’ research skills proficiency was measured by means of a research skills test (Maddens et al., 2020a ) and a research skills task.

The research skills test used in this study is the LRST (Maddens et al., 2020a), consisting of a combination of 37 open-ended and closed-ended items (α = 0.79; 0.82; ω = 0.78; 0.80 for this data set), administered via an online questionnaire. Each item of the LRST is related to one of the eight epistemic activities regarding research skills as mentioned in the introduction (Fischer et al., 2014), and is scored as 0 or 1. The total score on the LRST is calculated by adding the mean subscale scores (related to the eight epistemic activities) and dividing this sum by eight (the number of scales). In a previous study (Maddens et al., 2020a), the LRST was checked and found suitable in light of interrater reliability (κ = 0.89). As the same researchers assessed the same test with a similar cohort in the current study, the interrater reliability was not recalculated for this study.
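The LRST total score described above (the mean of the eight subscale means over 0/1-scored items) could be computed along the following lines; the subscale-to-column mapping is hypothetical and not taken from the LRST itself.

```python
# Sketch of the LRST total score: items scored 0/1, grouped into the eight
# epistemic activities; the total is the sum of the subscale means divided by
# eight. The column-to-subscale mapping is an assumption for illustration.
import pandas as pd

def lrst_total(responses: pd.DataFrame, subscales: dict) -> pd.Series:
    subscale_means = pd.DataFrame(
        {name: responses[cols].mean(axis=1) for name, cols in subscales.items()}
    )
    return subscale_means.sum(axis=1) / len(subscales)

# Example mapping (illustrative only): {"problem_identification": ["q1", "q2"], ...}
```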

In the research skills task (“two-pager task”), students were asked to write a research proposal of maximum two pages. The concrete instructions for this research proposal are given in Appendix 1. In this research proposal, students were asked to formulate a research question and its relevance; to explain how they would tackle this research question (method and participants); to explain their hypotheses or expectations; and to explain how they would communicate their results. The two-pager task was analyzed using a pairwise comparison technique, in which four evaluators (i.e. the four authors of this paper) made comparative judgements by comparing two two-pagers at a time and indicating which two-pager they think is best. All four evaluators are researchers in educational sciences and are familiar with the research project and with assessing students’ texts. This shared understanding and expertise is a prerequisite for obtaining reliable results (Lesterhuis et al., 2018). The comparison technique was performed by means of the Comproved tool (https://comproved.com). As described by Lesterhuis et al. (2018, p. 18), “the comparative judgement method involves assessing a text on its overall quality. However, instead of requiring an assessor to assign an absolute score to a single text, comparative judgement simplifies the process to a decision about which of two texts is better”. In total, 1635 comparisons were made (each evaluator made 545 comparisons), and this led to an interrater reliability score of 0.79. In a next step, these comparative judgements were used to rank the 218 products (15 students did not submit a two-pager) on their quality, and the products were graded based on their ranking. This method was used to grade the two-pagers because it facilitates the holistic evaluation of the tasks, based on the judgement of multiple experts (interrater reliability).
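To make the comparative judgement procedure more concrete, the sketch below shows one common way of turning pairwise “which text is better” decisions into a ranking, namely fitting a Bradley-Terry-type model. This is only an illustration under assumed data structures; it does not reproduce the estimation implemented in the Comproved tool.

```python
# Illustrative Bradley-Terry-style ranking from pairwise judgements given as
# (winner_id, loser_id) tuples; a small pseudo-count keeps texts without any
# wins estimable. Not the Comproved estimation procedure.
from collections import defaultdict

def rank_from_comparisons(comparisons, n_iter=200):
    wins = defaultdict(float)
    opponents = defaultdict(list)
    for winner, loser in comparisons:
        wins[winner] += 1.0
        opponents[winner].append(loser)
        opponents[loser].append(winner)
    strength = {item: 1.0 for item in opponents}
    for _ in range(n_iter):
        new = {}
        for item in strength:
            denom = sum(1.0 / (strength[item] + strength[opp]) for opp in opponents[item])
            new[item] = (wins[item] + 0.1) / denom  # MM-style update with pseudo-count
        mean = sum(new.values()) / len(new)
        strength = {item: value / mean for item, value in new.items()}
    return sorted(strength, key=strength.get, reverse=True)

# Usage: ranking = rank_from_comparisons([("text_12", "text_87"), ("text_87", "text_45")])
```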

Qualitative feedback

Students’ experiences with the online learning environment were investigated in the online learning environment itself. After completing the learning environment, students were asked how they experienced the tasks, the theory, and the opportunity to post answers in the forum and to ask questions via the chat, as well as what they liked and what they disliked in the online learning environment (Fig. 1).

Fig. 1 Study overview

Data analysis

The first research question (“Does a deliberately designed (4C/ID-based) learning environment improve students’ research skills, as measured by a research skills test and a research skills task?”) is answered by means of a paired samples t-test, in order to detect general trends in overall improvement, followed by a full factorial MANCOVA, as this allows us to investigate the effectiveness of both conditions taking into account students’ pretest scores. Hence, the condition is included as an experimental factor, and students’ scores on the LRST and the two-pager task are included as continuous outcome variables. Students’ pretest scores on the LRST are included as a covariate. Prior to the analysis, a MANCOVA model is defined taking into account possible interaction effects between the experimental factor and the covariate.
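A minimal sketch of these two analyses in Python (scipy and statsmodels) is given below. The column names (lrst_pre, lrst_post, twopager, condition) are assumptions for illustration; the authors’ own analysis scripts are not reproduced here.

```python
# Sketch of the RQ1 analyses: a paired samples t-test on the LRST, then a
# MANCOVA with condition as factor, both cognitive outcomes as dependent
# variables, and the LRST pretest as covariate. Column names are assumptions.
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

def rq1_analyses(df: pd.DataFrame) -> None:
    # Overall pretest-to-posttest change on the research skills test.
    paired = stats.ttest_rel(df["lrst_post"], df["lrst_pre"])
    print(f"paired t-test: t = {paired.statistic:.3f}, p = {paired.pvalue:.3f}")

    # Full factorial MANCOVA; mv_test() reports Pillai's trace per term.
    model = MANOVA.from_formula(
        "lrst_post + twopager ~ C(condition) + lrst_pre", data=df
    )
    print(model.mv_test())
```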

The second research question (“What is the effect of providing autonomy, competence and relatedness support in a deliberately designed (4C/ID-based) learning environment fostering students’ research skills, on students’ motivational outcomes (i.e. students’ amotivation, autonomous motivation, controlled motivation, students’ perceived value/usefulness, and students’ perceived needs of competence, relatedness and autonomy)?”) is answered by means of a full factorial MANCOVA. The condition (need satisfaction condition versus baseline condition) is included as an experimental factor, and students’ responses on the value/usefulness, autonomous and controlled motivation, amotivation, and need satisfaction scales are included as continuous outcome variables. ASRS pretest scores (autonomous and controlled motivation) are included as covariates in order to test the differences between group means, adjusted for students’ a priori motivation. Prior to the analysis, a MANCOVA model is defined taking into account possible interaction effects between the experimental factor and the covariates, and the assumptions to be met to perform a MANCOVA are checked. 7
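The interaction check mentioned for both MANCOVA models (testing whether the experimental factor interacts with a covariate before including that covariate) could look as follows; the variable names are again assumptions.

```python
# Sketch of the factor-by-covariate interaction check for one outcome,
# run before the MANCOVA. Variable names are assumptions.
import statsmodels.formula.api as smf

def interaction_check(df, outcome, covariate, factor="condition"):
    fit = smf.ols(f"{outcome} ~ C({factor}) * {covariate}", data=df).fit()
    # Inspect the interaction rows of the summary; a non-significant
    # interaction supports retaining the covariate in the MANCOVA.
    return fit.summary()
```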

The third research question (“What are the relationships between students’ need satisfaction, students’ need frustration, students’ autonomous and controlled motivation and students’ cognitive outcomes (research skills test and research skills task)?”) is initially answered by means of five multiple regression analyses. The first three regressions include the need satisfaction and frustration scales, and students’ value/usefulness, as independent variables, and students’ (1) autonomous motivation, (2) controlled motivation, and (3) amotivation as dependent variables. The fourth and fifth regressions include students’ autonomous motivation, controlled motivation, and amotivation as independent variables, and students’ (4) LRST scores, and (5) scores on the two-pager task as dependent variables. As a follow-up analysis (see ‘results’), two additional regression analyses are performed to look into the direct relationships between students’ perceived needs and students’ experienced value/usefulness, with students’ cognitive outcomes (LRST (6) and two-pager (7)). As the goal of this analysis is to investigate the relationships between variables as described in SDT research, this analysis focuses on the full sample, rather than distinguishing between the two conditions. An ‘Enter’ method (Field, 2013) is used in order to enter the independent variables simultaneously (in line with Sheldon & Filak, 2008).
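The sketch below illustrates one of these regressions (autonomous motivation regressed on the need satisfaction and frustration scales and value/usefulness) with all predictors entered simultaneously, mirroring the ‘Enter’ method; the variable names are placeholders, not the authors’ coding.

```python
# One of the five regressions, sketched with simultaneous ('Enter') entry of
# all predictors. Outcome and predictor names are placeholders.
import statsmodels.formula.api as smf

PREDICTORS = [
    "autonomy_satisfaction", "autonomy_frustration",
    "competence_satisfaction", "competence_frustration",
    "relatedness_satisfaction", "relatedness_frustration",
    "value_usefulness",
]

def regress_autonomous_motivation(df):
    formula = "autonomous_motivation ~ " + " + ".join(PREDICTORS)
    fit = smf.ols(formula, data=df).fit()
    return fit.rsquared, fit.params  # R-squared and the regression weights
```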

The fourth research question (“ How do students experience need satisfaction and need frustration in a deliberately designed (4C/ID-based) learning environment?” ) is analyzed by means of the knowledge management tool Citavi. Based on the theoretical framework, students’ experiences are labeled by the codes ‘autonomy satisfaction, autonomy frustration, competence satisfaction, competence frustration, relatedness satisfaction, and relatedness frustration’. For example, students’ quotes referring to the value/usefulness of the learning environment, are labeled as ‘autonomy satisfaction’ or ‘autonomy frustration’. Students’ references towards their feelings of mastery of the learning content are labeled as ‘competence satisfaction’ or ‘competence frustration’. Students’ quotes regarding their relationships with peers and teachers are labeled as ‘relatedness satisfaction’ or ‘relatedness frustration’ (Fig.  2 ).

Fig. 2 Overview variables

Results

Does the deliberately designed (4C/ID-based) learning environment improve students’ research skills, as measured by a research skills test and a research skills task?

Paired samples t -test. A paired samples t -test reveals that, in general, students ( n  = 210) improved on the LRST-posttest ( M  = 0.57, SD  = 0.16) compared to the pretest ( M  = 0.51, SD  = 0.15) (range 0–1). The difference between the posttest and the pretest is significant t (209) =  − 8.215, p  < 0.001, d 8  =  − 0.567. The correlation between the LRST pretest and posttest is 0.70 ( p  < 0.010).
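For readers who want to reproduce this kind of effect size, a commonly used paired-samples d (the mean of the difference scores divided by their standard deviation, sometimes called d_z) can be computed as sketched below; whether this matches the exact formula used in the paper depends on the definition given in its footnote.

```python
# Paired-samples effect size (d_z): mean difference divided by the standard
# deviation of the difference scores. Assumed to approximate the reported d.
import numpy as np

def paired_d(post, pre) -> float:
    diff = np.asarray(post) - np.asarray(pre)
    return diff.mean() / diff.std(ddof=1)
```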

MANCOVA. A MANCOVA model (n = 196) was defined checking for possible interaction effects between the experimental factor and the covariate in order to control for the assumption of ‘independence of the covariate and treatment effect’ (Field, 2013). The covariate LRST pretest did not show significant interaction effects for the two outcome variables LRST post (p = 0.259) and the two-pager task (p = 0.702). The correlation between the outcome variables (LRST post and two-pager) is 0.28 (p < 0.050).

Of all 233 students, 36 students were excluded from the main analysis because of missing data (for example, because they were absent during a pretest or posttest moment). These students were excluded by means of a listwise deletion method because we found it important to use a complete dataset, since, in a lot of cases, students who did not complete the pretest or posttest also did not complete the entire learning environment. Including partial data for these students could bias the results. The baseline condition counted 86 students, and the need satisfaction condition counted 111 students. Using Pillai’s Trace [V = 0.070, F(2,193) = 7.285, p ≤ 0.001], there was a significant effect of the condition on the cognitive outcome variables, taking into account students’ LRST pretest scores. Separate univariate ANOVAs on the outcome variables revealed no significant effect of the condition on the LRST posttest measure, F(1,194) = 2.45, p = 0.120. However, a significant effect of condition was found on the two-pager scores, F(1,194) = 13.69, p < 0.001 (in the baseline group, the mean score was 6.6/20; in the need condition group, the mean score was 7.6/20). It should be mentioned that both scores are rather low.

What is the effect of providing autonomy, competence and relatedness support in a deliberately designed (4C/ID based) learning environment fostering students’ research skills, on students’ motivational outcomes (students’ amotivation, autonomous motivation, controlled motivation, students’ perceived value/usefulness, and students’ perceived needs of competence, relatedness and autonomy)?

Paired samples t-tests. The correlations between students’ pretest and posttest scores for the motivational measures are 0.67 (p < 0.010) for autonomous motivation, 0.44 (p < 0.010) for controlled motivation, and 0.38 (p < 0.010) for amotivation. Regarding the differences in students’ motivation, three unexpected findings were observed. Overall, students’ (n = 215) amotivation was higher on the posttest (M = 2.26, SD = 0.89) compared to the pretest (M = 1.77, SD = 0.79) (based on a score between 1 and 5). The difference between the posttest and the pretest is significant, t(214) = −7.69, p < 0.001, d = −0.524. Further analyses show that the amotivation mean in the baseline group increased by 0.65, and the amotivation mean in the need support group increased by 0.37. In addition, students’ (n = 215) autonomous motivation was higher on the pretest (M = 2.81, SD = 0.81) compared to the posttest (M = 2.64, SD = 0.82). The difference between the posttest and the pretest is significant, t(214) = 3.72, p < 0.001, d = 0.254. Students’ mean scores on autonomous motivation in the baseline condition decreased by 0.19, and students’ autonomous motivation in the need support condition decreased by 0.15. Students’ (n = 215) controlled motivation was higher on the posttest (M = 2.33, SD = 0.75) compared to the pretest (M = 1.93, SD = 0.67). The difference between the posttest and the pretest is significant, t(214) = −7.72, p < 0.001, d = −0.527. Students’ controlled motivation in the baseline group increased by 0.36, and students’ controlled motivation in the need support group increased by 0.43. However, overall, all mean scores remain below the neutral score of 3, indicating consistently low autonomous motivation, controlled motivation and amotivation scores (see Table 3). An independent samples t-test on the mean differences between these measures shows that the increases/decreases in autonomous motivation [t(213) = −0.506, p = 0.613, d = −0.069] and controlled motivation [t(213) = −0.656, p = 0.513, d = −0.090] did not differ between the two groups. However, the increase in amotivation [t(213) = 2.196, p = 0.029, d = 0.301] did differ significantly between the two conditions. More specifically, the increase was lower in the need supportive condition compared to the baseline condition.
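The between-condition comparison of pre-to-post changes reported here can be illustrated with the following sketch, which computes gain scores per student and compares the two conditions with an independent samples t-test; the column and group labels are assumptions.

```python
# Gain-score comparison between conditions: compute each student's pre-to-post
# change and compare the groups with an independent samples t-test.
# Column and group labels ("condition", "baseline", "need_support") are assumptions.
from scipy import stats

def compare_gains(df, pre, post, group="condition"):
    gains = df[post] - df[pre]
    baseline = gains[df[group] == "baseline"]
    need_support = gains[df[group] == "need_support"]
    result = stats.ttest_ind(baseline, need_support)
    return result.statistic, result.pvalue
```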

Table 3. Mean scores and standard deviations of the motivational variables; values are reported as M (SD)

Value/usefulness (range 1–7): baseline 5.12 (0.94); need supportive 5.14 (1.14)
Autonomy satisfaction (range 1–5): baseline 3.14 (0.62); need supportive 3.13 (0.62)
Autonomy frustration (range 1–5): baseline 2.94 (0.79); need supportive 3.00 (0.85)
Competence satisfaction (range 1–5): baseline 3.18 (0.62); need supportive 3.19 (0.58)
Competence frustration (range 1–5): baseline 2.77 (0.74); need supportive 2.74 (0.71)
Relatedness satisfaction (range 1–5): baseline 2.73 (0.80); need supportive 2.43 (0.82)
Relatedness frustration (range 1–5): baseline 1.91 (0.73); need supportive 2.43 (0.65)
Autonomous motivation (range 1–5): baseline pretest 2.83 (0.82), posttest 2.65 (0.87); need supportive pretest 2.81 (0.81), posttest 2.65 (0.77)
Controlled motivation (range 1–5): baseline pretest 1.82 (0.66), posttest 2.19 (0.72); need supportive pretest 2.02 (0.66), posttest 2.45 (0.76)
Amotivation (range 1–5): baseline pretest 1.74 (0.72), posttest 2.38 (0.91); need supportive pretest 1.81 (0.86), posttest 2.18 (0.87)

a Overall, students’ (n = 215) autonomous motivation was significantly higher on the pretest compared to the posttest (t(214) = 3.72, p ≤ 0.001, d = 0.254)

b Students’ (n = 215) controlled motivation was significantly higher on the posttest compared to the pretest (t(214) = −7.72, p ≤ 0.001, d = −0.527)

c Students’ (n = 215) amotivation was significantly higher on the posttest compared to the pretest (t(214) = −7.69, p ≤ 0.001, d = −0.534)

MANCOVA. Of all 233 students, 18 students were excluded from the analysis because of missing data (for example, because they were absent during a pretest or posttest moment). Compared to the cognitive analyses, the amount of missing data is lower for the motivational outcomes, since for the cognitive outcomes some students did not complete the two-pager task. However, we found it important to use all relevant data and chose to report this in a clear way. In total, the baseline condition counted 97 students, and the experimental condition counted 118 students. Similar to the analysis for the cognitive outcomes, a MANCOVA model was defined to check for possible interaction effects between the experimental factor and the covariates in order to control for the assumption of ‘independence of the covariate and treatment effect’ (Field, 2013). The covariates did not show significant interaction effects for the outcome variables. 9

Using Pillai’s Trace [V = 0.113, F(10,201) = 2.558, p = 0.006], there was a significant effect of condition on the motivational variables, taking into account students’ autonomous and controlled pretest scores, and students’ a priori amotivation. Separate univariate ANOVAs on the outcome variables revealed a significant effect of the condition on the outcome variables amotivation, F(1,210) = 3.98, p = 0.047, and relatedness satisfaction, F(1,210) = 6.41, p = 0.012. As was hypothesized, students in the need satisfaction group reported less amotivation (M = 2.18) than students in the baseline group (M = 2.38). In contrast to what was hypothesized, students in the need satisfaction group reported less relatedness satisfaction (M = 2.43) than students in the baseline group (M = 2.73). No significant effects of condition were found on the outcome variables autonomous motivation post, controlled motivation post, value/usefulness, autonomy satisfaction, autonomy frustration, competence satisfaction, competence frustration, and relatedness frustration. Table 4 shows the correlations between the motivational outcome variables.

Table 4. Correlations between the motivational outcome variables

 | AM | CM | AMOT | VU | AS | AF | CS | CF | RS | RF
AM | 1
CM | −0.03 | 1
AMOT | −0.21** | 0.41** | 1
VU | 0.66** | −0.07 | −0.36** | 1
AS | 0.64** | −0.16** | −0.28** | 0.60** | 1
AF | −0.40** | 0.40** | 0.35** | −0.41** | −0.58** | 1
CS | 0.48** | −0.19** | −0.16* | 0.46** | 0.58** | −0.41** | 1
CF | −0.11 | 0.29** | 0.22** | −0.11 | −0.31** | 0.41** | −0.52** | 1
RS | 0.27** | −0.03 | −0.03 | 0.15* | 0.30** | −0.33** | 0.29** | −0.19** | 1
RF | −0.03 | 0.19** | 0.11 | −0.13 | −0.10** | 0.21** | 0.25** | 0.32** | −0.28** | 1

AM autonomous motivation, CM controlled motivation, AMOT amotivation, VU value/usefulness, AS autonomy satisfaction, AF autonomy frustration, CS competence satisfaction, CF competence frustration, RS relatedness satisfaction, RF relatedness frustration

**Correlation is significant at the 0.010 level (2-tailed)

*Correlation is significant at the 0.050 level (2-tailed)

What are the relationships between students’ need satisfaction, students’ need frustration, students’ autonomous and controlled motivation and students’ cognitive outcomes (research skills test and research skills task)?

The third research question (investigating the relationships between students' need satisfaction, students' motivation and students' cognitive outcomes) is answered by means of five multiple regression analyses. The first three regressions include the need satisfaction and frustration scales and students' value/usefulness as independent variables, and students' (1) autonomous motivation, (2) controlled motivation, and (3) amotivation as dependent variables (n = 219). The fourth and fifth regressions include students' autonomous motivation, controlled motivation, and amotivation as independent variables, and students' (4) LRST scores (n = 215) and (5) scores on the two-pager task (n = 206) as dependent variables. Table 4 depicts the correlations for the first three analyses. Table 5 depicts the correlations for the last two analyses.
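As an illustrative sketch of how such regressions could be run (hypothetical data file and column names, not the authors' SPSS syntax), two of the five models are shown below in Python with statsmodels: the need and value/usefulness scales predicting autonomous motivation, and the motivation scales predicting the research skills test (LRST) scores.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("motivation_cognition.csv")  # hypothetical data file

# Regression 1: need satisfaction/frustration and value/usefulness -> autonomous motivation
needs = ("aut_sat + aut_frus + comp_sat + comp_frus + "
         "rel_sat + rel_frus + value_usefulness")
model_am = smf.ols(f"autonomous_motivation ~ {needs}", data=df).fit()
print(model_am.summary())  # R-squared, b, SE, and p values per predictor

# Regression 4: motivation scales -> research skills test (LRST) scores
model_lrst = smf.ols(
    "lrst_score ~ autonomous_motivation + controlled_motivation + amotivation",
    data=df,
).fit()
print(model_lrst.summary())
```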

Table 5. Correlations between the motivational and cognitive outcome variables

 | AM | CM | AMOT | LRST | Two-pager
AM | 1
CM | −0.03 | 1
AMOT | −0.21** | 0.41** | 1
LRST | 0.10 | −0.10 | −0.32** | 1
Two-pager | 0.05 | 0.07 | −0.11 | 0.28** | 1

AM autonomous motivation, CM controlled motivation, AMOT amotivation, LRST score on the research skills test (LRST), Two-pager score on the two-pager research skills task

In Table 3, we can see that students in both conditions experience moderate competence and autonomy satisfaction. However, students' relatedness satisfaction seems low in both conditions. This finding will be further discussed in the discussion section. For autonomous motivation, a significant regression equation was found, F(7,211) = 37.453, p < 0.001. The regression analysis (see Table 6) further reveals that all three satisfaction scores (competence satisfaction, relatedness satisfaction and autonomy satisfaction) contribute positively to students' autonomous motivation, as does students' experienced value/usefulness. For students' controlled motivation, a significant regression equation was also found, F(7,211) = 8.236, p < 0.001, with students' autonomy frustration and students' relatedness satisfaction contributing to students' controlled motivation. The aforementioned relationships are in line with the expectations. However, we noticed that relatedness satisfaction contributed to students' controlled motivation in the opposite direction of what was expected (the higher students' relatedness satisfaction, the higher students' controlled motivation). This finding will be reflected upon in the discussion section. For students' amotivation, a significant regression equation was also found, F(7,211) = 7.913, p < 0.001. Students' autonomy frustration, competence frustration and students' value/usefulness contributed to students' amotivation in the expected way. For the cognitive outcomes related to the research skills test, a significant regression equation was also found, F(3,211) = 8.351, p < 0.001. In line with the expectations, the regression analysis revealed that the higher students' amotivation, the lower students' scores on the research skills test. No significant regression equation was found for the outcome variable related to the research skills task, F(3,202) = 0.954, p = 0.416. For all regression equations, the R² values and the exact regression weights are presented in Table 6.

Table 6. Linear models of predictors of autonomous motivation, controlled motivation, amotivation, LRST scores, and two-pager scores, with b values, standard errors, standardized beta values and significance values

Regression | Dependent variable | Independent variable | b | SE | β | p
1 (R² = 0.55) | AM | AS | 0.39 | 0.09 | 0.30 | 0.000*
 | | AF | −0.02 | 0.06 | −0.02 | 0.691
 | | CS | 0.22 | 0.09 | 0.16 | 0.014*
 | | CF | 0.13 | 0.07 | 0.11 | 0.060
 | | RS | 0.11 | 0.05 | 0.11 | 0.026*
 | | RF | 0.10 | 0.06 | 0.09 | 0.088
 | | VU | 0.31 | 0.05 | 0.40 | 0.000*
2 (R² = 0.46) | CM | AS | 0.07 | 0.11 | 0.06 | 0.521
 | | AF | 0.40 | 0.07 | 0.44 | 0.000*
 | | CS | −0.05 | 0.11 | −0.04 | 0.667
 | | CF | 0.12 | 0.08 | 0.11 | 0.154
 | | RS | 0.13 | 0.06 | 0.14 | 0.035*
 | | RF | 0.12 | 0.07 | 0.11 | 0.097
 | | VU | 0.06 | 0.06 | 0.09 | 0.263
3 (R² = 0.46) | AMOT | AS | −0.04 | 0.14 | −0.03 | 0.794
 | | AF | 0.25 | 0.09 | 0.23 | 0.006*
 | | CS | 0.24 | 0.13 | 0.16 | 0.072
 | | CF | 0.21 | 0.10 | 0.17 | 0.033*
 | | RS | 0.10 | 0.07 | 0.09 | 0.180
 | | RF | 0.03 | 0.09 | 0.03 | 0.699
 | | VU | −0.26 | 0.07 | −0.31 | 0.000*
4 (R² = 0.33) | LRST | AM | 0.00 | 0.01 | 0.02 | 0.740
 | | CM | 0.01 | 0.02 | 0.04 | 0.629
 | | AMOT | −0.06 | 0.01 | −0.33 | 0.000*
5 (R² = 0.12) | 2-pager | AM | 0.06 | 0.14 | 0.03 | 0.687
 | | CM | 0.05 | 0.16 | 0.02 | 0.758
 | | AMOT | −0.20 | 0.14 | −0.12 | 0.137

*Significant at .050 level

As a follow-up analysis, and in order to better understand the outcomes, we decided to also look into the direct relationships between students' perceived needs and students' experienced value/usefulness on the one hand, and students' cognitive outcomes (LRST and two-pager) on the other, by means of two additional regression analyses. The motivation behind this decision relates to possible issues regarding the motivational measures used, which might complicate the investigation of indirect relationships (see discussion). The results are provided in Table 7, and show that a significant regression equation was found for the LRST [F(7,207) = 4.252, p < 0.001] and a marginally significant one for the two-pager [F(7,199) = 2.029, p = 0.053]. More specifically, students' relatedness satisfaction and students' perceived value/usefulness contribute to students' scores on the two-pager and on the research skills test. As one would expect, we see that the higher students' value/usefulness, the higher students' scores on both cognitive outcomes. In contrast to what one would expect, we found that the higher students' relatedness satisfaction, the lower students' scores on the cognitive outcomes. These findings are reflected upon in the discussion section.

Table 7. Linear models of predictors of LRST scores and two-pager scores, with b values, standard errors, standardized beta values and significance values

Regression | Dependent variable | Independent variable | b | SE | β | p
6 (R² = 0.13) | LRST | AS | −0.05 | 0.03 | −0.19 | 0.055
 | | AF | −0.01 | 0.02 | −0.02 | 0.783
 | | CS | 0.03 | 0.02 | 0.11 | 0.239
 | | CF | 0.01 | 0.02 | −0.04 | 0.667
 | | RS | −0.03 | 0.01 | −0.16 | 0.025*
 | | RF | 0.03 | 0.02 | 0.14 | 0.061
 | | VU | 0.05 | 0.01 | 0.33 | 0.000*
7 (R² = 0.07) | 2-pager | AS | −0.22 | 0.27 | −0.09 | 0.413
 | | AF | 0.07 | 0.17 | 0.04 | 0.667
 | | CS | 0.02 | 0.25 | 0.01 | 0.936
 | | CF | −0.30 | 0.19 | −0.14 | 0.116
 | | RS | −0.31 | 0.14 | −0.17 | 0.030*
 | | RF | −0.02 | 0.17 | −0.12 | 0.906
 | | VU | 0.33 | 0.13 | 0.22 | 0.015*

How do students experience need satisfaction and need frustration in a deliberately designed (4C/ID based) learning environment?

As was mentioned in the method section, the fourth research question was analysed by labelling students' qualitative feedback with the codes 'autonomy satisfaction', 'autonomy frustration', 'competence satisfaction', 'competence frustration', 'relatedness satisfaction', and 'relatedness frustration'. By means of this approach, we could analyse students' need experiences in a fine-grained manner. When students' quotes were applicable to more than one code, they were labelled with multiple codes. In what follows, students' quotes are indicated with the codes "BC" (baseline condition) or "NSC" (need satisfaction condition) in order to indicate which learning environment the student completed. Of all 233 students, 124 students provided qualitative feedback (44 in BC and 80 in NSC). In total, 266 quotes were labelled. Autonomy satisfaction was coded 40 times in BC and 41 times in NSC; autonomy frustration was coded 13 times in BC and four times in NSC; competence satisfaction was coded 28 times in BC and 34 times in NSC; competence frustration was coded 31 times in BC and 27 times in NSC; relatedness satisfaction was coded 10 times in BC and 16 times in NSC; and relatedness frustration was coded five times in BC and 17 times in NSC. Several observations could be drawn from the qualitative data.
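As a purely illustrative sketch of this tallying step (the quotes and codes below are placeholders, not the actual data), the code frequencies per condition can be counted as follows:

```python
from collections import Counter

# Each labelled quote is stored as a (condition, code) pair; a quote that received
# several codes appears once per code. The entries below are placeholders.
labelled_quotes = [
    ("BC", "autonomy_satisfaction"),
    ("NSC", "relatedness_frustration"),
    ("NSC", "competence_satisfaction"),
    # ... remaining labelled quotes
]

counts = Counter(labelled_quotes)
for (condition, code), n in sorted(counts.items()):
    print(f"{condition:>3}  {code:<26} {n}")
```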

Related to autonomy satisfaction, in both conditions, several students explicitly mentioned the personal value and usefulness of what they had learned in the learning environment. While these references were often vague in the baseline condition ("Now I know what people expect from me next year"; "I think I might use this information in the future"), some references appeared to be more specific in the need support condition ("I want to study psychology and I think I can use this information!"; "This is a good preparation for higher education and university"; "I can use this information to write an essay"; "I think the theory was interesting, because you are sure you will need it once. I don't always have that feeling during a normal lesson in school"). In addition, students in both conditions mentioned that they found the material interesting, and that they appreciated the online format: "It's different than just listening to a teacher, I stayed interested because of the large variety in exercises and overall, I found it fun" (NSC).

Several comments were coded as ‘ autonomy frustration’ in both conditions. Some students indicated that they found the material “useless” (BC), or that “they did not remember that much” (BC). Others found the material “uninteresting” (BC), “heavy and boring” (NSC) or “not fun” (BC). In addition, some students “did not like to complete the assignments” (NSC), or “prefer a book to learn theory” (NSC).

Related to competence satisfaction, students in both conditions found the material "clear" (BC, NSC). In addition, students appreciated the example answers, the difficulty level ("Some exercises were hard, but that is good. That's a sign you're learning something new" (NSC)), and the fact that the theory was segmented into several parts. Students also recognized that the material required complex skills: "I learned a lot, you had to think deeper or gain insights in order to solve the exercises" (NSC); "you really had to think to complete the exercises" (NSC). In the need satisfaction group, several quotes were labelled related to the specific need support provided. For example, students indicated that they appreciated the forum option: "If something was not clear, you could check your peers' answers" (NSC). Students also valued the fact that they could work at their own pace: "I found it very good that we could solve everything at our own pace" (NSC); "good that you could choose your own pace, and if something was not clear to you, you could reread it at your own pace" (NSC). In addition, students appreciated the immediate feedback provided by the researcher: "I found it very good that we received personal feedback from xxx (name researcher). That way, I knew whether I understood the theory correctly" (NSC); and the fact that they could indicate their progress: "It was good that you could see how far you proceeded in the learning environment" (NSC).

In both the baseline and the need supportive condition, there were also several comments related to competence frustration. For example, students found exercises vague, unclear or too difficult. While students, overall, understood the theory provided, applying the theory in an integrative assignment appeared to be very difficult: "I did understand the several parts of the learning environment, but I did not succeed in writing a research proposal myself" (NSC); "I just found it hard to respond to questions. When I had to write my two-pager research proposal, I really struggled. I really felt like I was doing it entirely wrong" (NSC). In addition, a lot of comments related to the fact that the theory was a lot to process in a short time frame, and therefore, students indicated that it was hard to remember all the theory provided. This also led to pressure in some students: "Sometimes, I experienced pressure. When you see that your peers are finished, you automatically start working faster." (BC).

Concerning relatedness satisfaction, in the baseline condition, students appreciated the chat function: "you could help each other and it was interesting to hear each other's opinions about the topics we were working on" (BC). However, most students indicated that they did not make use of the chat or forum options. In the need satisfaction condition, students appreciated the forum and the chat function: "You knew you could always ask questions. This helped to process the learning material" (NSC); "My peers' answers inspired me" (NSC); "Thanks to the chat function, I felt more connected to my peers" (NSC). In addition, students in the need satisfaction condition appreciated the fact that they could contact the researcher at any time.

Several students made comments related to relatedness frustration . In both groups, students missed the ‘live teaching’: “I tried my best, but sometimes I did not like it, because you do not receive the information in ‘real time’, but through videos” (BC). In addition, students missed their peers: “We had to complete the environment individually” (BC). While some students appreciated the opportunity of a forum, other students found this possibility stressful: “I think the forum is very scary. I posted everything I had to, but I found it very scary that everyone can see what you post” (NSC). Others did not like the fact that they needed to work individually: “Sometimes I lost my attention because no one was watching my screen with me” (NSC); “I found it hard because this was new information and we could not discuss it with each other” (NSC); “I felt lonely” (NSC); “It is hard to complete exercises without the help of a teacher. In the future this will happen more often, so I guess I will have to get used to it” (NSC); “When I see the teacher physically, I feel less reluctant to ask questions” (NSC).

Discussion

The current intervention study aimed at exploring the motivational and cognitive effects of providing need support in an online learning environment fostering upper secondary school students' research skills. More specifically, we investigated the impact of autonomy, competence and relatedness support in an online learning environment on students' scores on a research skills test and a research skills task, and on students' autonomous motivation, controlled motivation, amotivation, need satisfaction, need frustration, and experienced value/usefulness. Adopting a pretest-intervention-posttest design, 233 upper secondary school behavioral sciences students' outcomes were compared between two conditions: (1) a 4C/ID inspired online learning environment condition (baseline condition), and (2) a condition with an identical online learning environment additively providing support for students' autonomy, relatedness and competence need satisfaction (need supportive condition). This study aims to contribute to the literature by exploring the integration of need support for all three needs (the need for competence, relatedness and autonomy) in an ecologically valid setting. In what follows, the findings are discussed taking into account the COVID-19 affected circumstances in which the study took place.

As was hypothesized based on existing research (Costa et al., 2021), results showed significant learning gains on the LRST cognitive measure in both conditions, indicating that the learning environments in general succeeded in improving students' research skills. The current study did not find any significant differences in these learning gains between the two conditions. Controlling for a priori differences between the conditions on the LRST pretest measure, students in the need support condition did outperform students in the baseline condition on the two-pager task. However, overall, the scores on the research skills task were quite low, pointing to the fact that students still seem to struggle with writing a research proposal. This task can be considered more complex (van Merriënboer & Kirschner, 2018) than the research skills test, as students are required to combine their conceptual and procedural knowledge in one assignment. Indeed, in the qualitative feedback, students indicated that they understood the theory and were able to apply it in basic exercises, but that they struggled with integrating their knowledge in a research proposal. Future research could set up more extensive interventions explicitly targeting students' progress while writing a research proposal, for example using development portfolios (van Merriënboer et al., 2006).

The effect of the intervention on the motivational outcome measures was investigated. Since we experimentally manipulated need support, this study hypothesized that students in the need supportive condition would show higher scores for autonomous motivation, value/usefulness and need satisfaction, and lower scores for controlled motivation, amotivation and need frustration, compared to students in the baseline condition (Deci & Ryan, 2000). However, the analyses showed that students in the two conditions did not differ on the value/usefulness, autonomy satisfaction, autonomy frustration, competence satisfaction, competence frustration and relatedness frustration measures. In contrast to what was hypothesized, students in the baseline condition reported higher relatedness satisfaction compared to students in the need supportive condition. No differences were found in students' autonomous motivation and controlled motivation. However, as was expected, students in the need supportive condition did report lower levels of amotivation compared to students in the baseline condition. Still, one could question the role of the need support in this respect, as the current intervention did not succeed in manipulating students' need experiences. In what follows, possible explanations for these findings are outlined in light of the existing literature.

Need experiences

A first observation based on the findings described above is that the intervention did not succeed in manipulating students' need satisfaction, need frustration and value/usefulness in the expected way. One effect of condition was found on relatedness satisfaction, but in the opposite direction of what was expected. We did not find a conclusive explanation for this unanticipated finding, but we do argue that the COVID-19 related measures at play during the intervention could have impacted this result. This will be reflected upon later in this discussion (limitations). In both conditions, students seem to be moderately satisfied regarding autonomy and competence in the 4C/ID based learning environments. This might be explained by the fact that 4C/ID based learning environments inherently foster students' perceived competence because of the attention to structure and guidance, and by the fact that the use of authentic tasks can be considered autonomy supportive (Bastiaens & Martens, 2007). However, we see that students experience low relatedness satisfaction in both conditions. The fact that the learning environment was organized entirely online might have influenced this result. While one might partly attribute this low relatedness satisfaction to the COVID-19 circumstances at play during the study, this hypothetical explanation does not hold entirely, since also in a previous, non-COVID-affected study in this research trajectory (Maddens et al., under review), students' relatedness satisfaction was found to be low. Based on this finding, combined with findings from students' qualitative feedback clearly indicating relatedness frustration, we argue that future research could focus on the question of how to provide relatedness support in 4C/ID based learning environments. On a more general level, this raises the question of how opportunities for discussion and collaboration can be included in 4C/ID based learning environments. For example, organizing 'real classroom interactions' or performing assignments in groups (see also the suggestion of van Merriënboer & Kirschner, 2018) might be important in fostering students' relatedness satisfaction (Salomon, 2002). As argued by Wang et al. (2019), relatedness support is clearly understudied, and for a long time was often even ignored, in the SDT literature. Recently, relatedness has begun to receive more attention, and it has been found to be a strong predictor of autonomous motivation in the classroom (Wang et al., 2019).

Possibly, the need support provided in the learning environment was insufficient or inadequate to foster students’ need experiences. However, as the implementations were based on the existing literature (Deci & Ryan, 2000 ), this finding can be considered surprising. In addition, we derive from the qualitative feedback that students seem to value the need support provided in the learning environment. These contradictory observations are in line with previous research (Bastiaens et al., 2017 ), and call for further investigation.

Autonomous motivation, controlled motivation, amotivation

A second observation is that, in both conditions, students seem to hold low autonomous motivation and low controlled motivation towards learning research. On average, students' amotivation is also low. The fact that students are not amotivated to learn about research can be considered reassuring. However, the fact that students experience low autonomous motivation is cause for concern, as we know this might negatively impact their learning behavior and intentions to learn (Deci & Ryan, 2000; Wang et al., 2019). These results are, however, based on mean scores. Future research might look at these results at the student level, in order to identify individual motivational profiles (Vansteenkiste et al., 2009) and their prevalence in upper secondary behavioral sciences education.

A third observation is that students' autonomous and controlled motivation were not affected by the intervention. Since the intervention did not succeed in manipulating students' need experiences, this finding is not surprising. In addition, this is in line with Bastiaens et al.'s (2017) study, which did not find motivational effects of providing need support in 4C/ID based learning environments. However, the current study did confirm that students in the need supportive condition reported lower amotivation compared to students in the baseline condition (although amotivation was still higher than at pretest level; see below). As no amotivational differences were observed at pretest level, this might indicate that students' self-reported motivation (autonomous and controlled motivation) and/or needs do not align with students' experienced motivation and needs. As was mentioned, this calls for further research.

Theoretical relationships

In line with previous research (Wang et al., 2019), multiple regression analyses revealed that students' need satisfaction (on all three measures) contributed positively to students' autonomous motivation. In addition, students' perceived value/usefulness also contributed positively to students' autonomous motivation. Students' competence frustration and autonomy frustration contributed positively to students' amotivation, and students' value/usefulness contributed negatively to students' amotivation. Students' autonomy frustration contributed positively to students' controlled motivation. While all the aforementioned relationships are in line with the expectations (Deci & Ryan, 2000; Wang et al., 2019), an unexpected finding is that students' relatedness satisfaction contributed positively to students' controlled motivation. This contradicts previous research (Wang et al., 2019), which reported that relatedness contributes negatively to controlled motivation. However, previous research (Wang et al., 2019) did find controlled motivation to be positively related to pressure. Although we did not find a conclusive explanation for this unanticipated finding, one possible reason is thus that students who contacted their peers in the online learning environment (and thus felt more related to their peers) might have experienced pressure because they felt like their peers worked faster or in a different way. Indeed, in the qualitative feedback, we noticed that some students indicated they 'rushed' through the online learning environment because they noticed a peer working faster. This finding calls for further research.

Overall, the results indicate that the observed need variables contributed most strongly to students' autonomous motivation, compared to the (reversed) relationships with students' amotivation and students' controlled motivation. As such, when targeting students' motivation, fostering students' autonomous motivation based on students' need experiences seems most promising. This is in line with previous research (Wang et al., 2019) reporting higher correlations between students' needs and students' autonomous motivation than with students' controlled motivation. We also investigated the relationships between students' motivation and students' cognitive outcomes. In line with a previously conducted study in this research trajectory (Maddens et al., under review), but in contrast to what was hypothesized based on the existing literature (Deci & Ryan, 2000; Grolnick et al., 1991; Reeve, 2006), we found that neither students' autonomous motivation nor students' controlled motivation contributed to students' scores on the research skills test. However, we did find that students' amotivation contributed negatively to students' LRST scores. As such, when targeting students' cognitive outcomes in educational programs, one might pay explicit attention to preventing amotivation. This is in line with previous research conducted in other domains, reporting that amotivation plays an important role in predicting mathematics achievement (Leroy & Bressoux, 2016), while this relationship was not found for other motivation types. Related to research skills, the current research suggests that preventing competence frustration and autonomy frustration, and fostering students' experiences of value/usefulness, might be especially promising to reach this goal.

Initially, we did not plan any analyses investigating the direct relationships between students' needs and students' cognitive outcomes, partly because previous research (Vallerand & Losier, 1999) suggests that the relationships between need satisfaction and (cognitive) outcomes are mediated by the types of motivation. For that reason, we investigated the relationships between students' needs and students' motivation separately from the relationships between students' motivation and students' cognitive outcomes. However, because of potential issues with the motivational measures (see earlier), which possibly hamper the interpretation of the relationships between students' needs, students' motivation, and students' cognitive outcomes, we decided to also directly assess the regression weights of students' needs and students' perceived value/usefulness on students' cognitive outcomes. Results revealed that, in line with the expectations, students' perceived value/usefulness contributed positively to students' LRST scores and two-pager scores, which potentially stresses the importance of value/usefulness not only for motivational purposes, but also for cognitive purposes. This is in line with previous research (Assor et al., 2002) establishing relationships between fostering relevance and students' behavioral and cognitive engagement (which potentially leads to better cognitive outcomes). In contrast to the expectations, students' relatedness satisfaction was found to be negatively related to students' scores on the LRST and the two-pager. However, again, this surprising finding is best interpreted in light of the COVID-19 pandemic (see earlier).

Limitations

This study faced some reliability issues given the time frame in which it took place. Due to the COVID-19 restrictions in force at the time of the study, the study plan needed to be revised several times in collaboration with teachers in order to be able to complete the interventions. In addition, it is very likely that students' motivation (and relatedness satisfaction) was influenced by the COVID-19 restrictions. For example, due to the restrictions, in the last phase of the intervention, students could only be present at school half of the time, and therefore, some students worked from home while others worked in the classroom. In the qualitative feedback, students reported several COVID-19 related frustrations (it was too cold in class because teachers were obliged to open the windows; students needed to frequently disinfect their computers…). The teachers also mentioned that students suffered from low well-being during the COVID-19 time frame (see further), and that this affected their motivation. Although all efforts were undertaken in order for the study to take place in as controlled a way as possible, results should be interpreted in light of this time frame. The impact of the COVID-19 pandemic on students' self-reported motivation has been established in recent research (Daniels et al., 2021). Overall, one could question to what extent an intervention at the micro level (manipulating need support in learning environments) can be expected to work when the study takes place in a time frame in which students' need experiences are seriously threatened by the circumstances.

Decreasing motivation

Students' motivation evolved in an undesirable way in both conditions. This unexpected finding (decreasing motivation) might be explained by four possible reasons. A first explanation is that asking students to fill out the same questionnaire at pretest and posttest level might lead to frustration and lower reported motivation (Kosovich et al., 2017). Indeed, students spent a lot of time working in the online learning environment, so filling out another motivational questionnaire on top of the intervention might have added to the frustration (Kosovich et al., 2017). A second explanation is that students' motivation naturally declines over time (which is a common finding in the motivational literature, Kosovich et al., 2017). A third explanation is that students, indeed, felt less motivated towards research skills after having completed the online learning environment. For example, the qualitative data indicated that a lot of students acknowledged that the learning environment was useful, but that, personally, they were not interested in learning the material. In addition, students indicated that the learning material was a lot to process in a short time frame and was new to them, which might have negatively impacted their motivation. The latter (students indicating that the learning material was extensive) might indicate that students experienced high cognitive load (Paas & van Merriënboer, 1994; Sweller, 1994) while completing the learning environment. A fourth explanation is that, due to the COVID-19 restrictions, students lost motivation during the learning process. A post-intervention survey in which we asked teachers about the impact of the COVID-19 restrictions on students' motivation indicated that some students experienced low well-being during the COVID-19 pandemic, and that this might have hampered their motivation to learn. In addition, a teacher mentioned that COVID-19 in general was very demotivating for the students, and that students had trouble concentrating because they felt isolated. As was mentioned, the impact of COVID-19 on students' motivation has been well described in the literature (Daniels et al., 2021). Although, in the current study, we cannot prove the impact of these measures on students' motivation towards learning research skills specifically, it is important to take this context into account when interpreting the results.

Students’ learning behavior

Based on students' qualitative feedback, we have reasons to believe that students did not always work in the learning environment as intended. In other words, students did not interact with the need support in the intended way ('instructional disobedient behavior': Elen, 2020). For example, several students reported that they did not always read all the material, did not make use of the forum, or did not notice certain messages from the researcher. However, the current research did not specifically look into students' learning behavior in the learning environment. In learning environments organized online, future researchers might want to investigate students' online behavior in order to gain insight into students' interactions with the learning environment.

This study aims to contribute to theory and practice. Firstly, this study identifies the 4C/ID model (van Merriënboer & Kirschner, 2018) as a good theoretical framework for designing learning environments aiming to foster students' research skills. However, this study also points to students' struggles in writing a research proposal, which might lead to more specific intervention studies especially focusing on monitoring students' progress while performing such tasks. Secondly, this study clearly elaborates on the operationalizations of need support used, and as such might inform instructional designers in implementing need support in an integrated manner (including competence, relatedness and autonomy support). Future interventions might want to track and monitor students' learning behavior in order for students to interact with the learning environment as expected (Elen, 2020). Thirdly, this study established theoretical relationships between students' needs, motivation and cognitive outcomes, which might be useful information for researchers aiming to investigate students' motivation towards learning research skills in the future. Based on the findings, future researchers might especially engage in research on fostering students' autonomous motivation by means of providing need support, and on avoiding students' amotivation in order to enhance students' cognitive outcomes. Suggestions are made based on the need satisfaction and frustration measures relating to these motivational and cognitive outcomes. For example, fostering students' value/usefulness seems promising for both cognitive and motivational outcomes. Fourthly, although we did not succeed in manipulating students' need experiences, we did gain insight into students' experiences with the need support by means of the qualitative data. For example, the irreplaceable role of teachers in motivating students has been highlighted. This study can be considered innovative because of its aim to inspect both students' cognitive and motivational outcomes after completing a 4C/ID based educational program (van Merriënboer & Kirschner, 2018). In addition, this study implements integrated need support rather than focusing on a single need (Deci & Ryan, 2000; Sheldon & Filak, 2008).

Acknowledgements

This study was carried out within imec’s Smart Education research programme, with support from the Flemish government.

Appendix: Overview of test instruments

Scale | Example item
External regulation | Because that's what others (e.g., parents, friends) expect from me
Introjected regulation | Because I want others to think I'm smart
Identified regulation | Because it's personally important to me
Intrinsic motivation | Because I think it is interesting
Amotivation | To be honest, I don't see any reason for learning about research skills
Value/usefulness | I believe completing this learning environment could be of some value to me
Autonomy satisfaction | While completing the learning environment, I felt a sense of choice and freedom in the things I thought and did

[Image not shown: 11251_2022_9606_Figa_HTML.jpg]

  • Instructions 2-pager (Maddens, Depaepe, Raes, & Elen, under review)

Write a research proposal for a fictional study.

In a Word-document of maximum two pages…

  • You describe a research question and the importance of this research question
  • You explain how you would answer this research question (manner of data collection and target group)
  • You explain what your expectations are, and how you will report your results.

You have 2 hours to do so.

Post your research proposal here.

Good luck and thank you for your activity in the RISSC-environment!

Declarations

The authors declare that they have no conflict of interest.

All ethical and GDPR-related guidelines were followed as required for conducting human research and were approved by SMEC (Social and Societal Ethics Committee).

1 Fischer et al. ( 2014 ) refer to these research skills as scientific reasoning skills.

2 In Flanders, during the time of study, four different types of education are offered from the second stage of secondary education onwards (EACEA, 2018) (general secondary education, technical secondary education, secondary education in the arts and vocational secondary education). Behavioral sciences is a track in general secondary education.

3 For a complete overview on the design and the evaluation of this learning environment, see Maddens et al ( 2020b ).

4 During the time of study, the COVID-19 restrictions became more strict: students in upper secondary education could only come to school half of the time. Therefore, some students completed the last modules of the learning environment at home.

5 The BPNSNF-training scale was initially constructed to evaluate motivation related to workshops. The phrasing was adjusted slightly to make it suitable for the current study. For example, we changed the wording 'during the past workshop…' to 'while completing the online learning environment…'.

6 In the current study, we would label the items categorized as ‘intrinsic motivation’ in ASRS (finding something interesting, fun, fascinating or a pleasant activity) as ‘integration’. In SDT (Deci & Ryan, 2000 ; Deci et al., 2017 ), integration is described as being “fully volitional”, or “wholeheartedly engaged”, and it is argued that fully internalized extrinsic motivation does not typically become intrinsic motivation, but rather remains extrinsic even though fully volitional (because it is still instrumental). In the context of the current study, in which students learn about research skills because this is instructed (thus, out of instrumental motivations), we think that the term integration is more applicable than pure intrinsic motivation in self-initiated contexts (which can be observed for example in children’s play or in sports).

7 Levene’s test for homogeneity of variances was significant for the outcome “two-pager”. However, we continued with the analyses since the treatment group sizes are roughly equal, and thus, the assumption of homogeneity of variances does not need to be considered (Field, 2013 ). Levene’s test for homogeneity of variances was non-significant for all the other outcome measures.

8 Cohen's d is calculated in SPSS by means of the formula d = (M1 − M2) / Sp, with Sp the pooled standard deviation.

9 Condition × autonomous motivation pretest: value/usefulness: p = 0.251; autonomous motivation: p = 0.269; controlled motivation: p = 0.457; amotivation: p = 0.219; autonomy satisfaction: p = 0.794; autonomy frustration: p = 0.096; competence satisfaction: p = 0.682; competence frustration: p = 0.699; relatedness satisfaction: p = 0.943; relatedness frustration: p = 0.870.

Condition × controlled motivation pretest: value/usefulness: p = 0.882; autonomous motivation: p = 0.270; controlled motivation: p = 0.782; amotivation: p = 0.940; autonomy satisfaction: p = 0.815; autonomy frustration: p = 0.737; competence satisfaction: p = 0.649; competence frustration: p = 0.505; relatedness satisfaction: p = 0.625; relatedness frustration: p = 0.741.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Aelterman N, Vansteenkiste M, Van Keer H, Haerens L. Changing teachers' beliefs regarding autonomy support and structure: The role of experienced psychological need satisfaction in teacher training. Psychology of Sport and Exercise. 2016; 23 :64–72. doi: 10.1016/j.psychsport.2015.10.007. [ CrossRef ] [ Google Scholar ]
  • Assor A, Kaplan H, Roth G. Choice is good, but relevance is excellent: Autonomy-enhancing and suppressing teacher behaviours predicting students' engagement in schoolwork. British Journal of Educational Psychology. 2002; 72 (2):261–278. doi: 10.1348/000709902158883. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Aydın S, Yerdelen S, Yalmancı SG, Göksu V. Academic motivation scale for learning biology: A scale development study. Education & Science/Egitim Ve Bilim. 2014; 39 (176):425–435. doi: 10.15390/EB.2014.3678. [ CrossRef ] [ Google Scholar ]
  • Bastiaens E, van Merriënboer J, van Tilburg J. Research-based learning: Case studies from Maastricht University. Springer; 2017. Three educational models for positioning the Maastricht research-based learning programme; pp. 35–41. [ Google Scholar ]
  • Braguglia KH, Jackson KA. Teaching research methodology using a project-based three course sequence critical reflections on practice. American Journal of Business Education (AJBE) 2012; 5 (3):347–352. doi: 10.19030/ajbe.v5i3.7007. [ CrossRef ] [ Google Scholar ]
  • Butz NT, Stupnisky RH. Improving student relatedness through an online discussion intervention: The application of self-determination theory in synchronous hybrid programs. Computers & Education. 2017; 114 :117–138. doi: 10.1016/j.compedu.2017.06.006. [ CrossRef ] [ Google Scholar ]
  • Chen B, Vansteenkiste M, Beyers W, Boone L, Deci EL, Van der Kaap-Deeder J, Verstuyf J. Basic psychological need satisfaction, need frustration, and need strength across four cultures. Motivation and Emotion. 2015; 39 (2):216–236. doi: 10.1007/s11031-014-9450-1. [ CrossRef ] [ Google Scholar ]
  • Chi MT. Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science. 2009; 1 (1):73–105. doi: 10.1111/j.17568765.2008.01005.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cook DA, McDonald FS. E-learning: Is there anything special about the" e"? Perspectives in Biology and Medicine. 2008; 51 (1):5–21. doi: 10.1353/pbm.2008.0007. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Costa JM, Miranda GL, Melo M. Four-component instructional design (4C/ID) model: A meta-analysis on use and effect. Learning Environments Research. 2021 doi: 10.1007/s10984-021-09373-y. [ CrossRef ] [ Google Scholar ]
  • Daniels LM, Goegan LD, Parker PC. The impact of COVID-19 triggered changes to instruction and assessment on university students’ self-reported motivation, engagement and perceptions. Social Psychology of Education. 2021; 24 (1):299–318. doi: 10.1007/s11218-021-09612-3. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • de Jong T. Scaffolds for scientific discovery learning. In: Elen J, Clark RE, editors. Handling complexity in learning environments: Theory and research. Emerald Group Publishing Limited; 2006. pp. 107–128. [ Google Scholar ]
  • de Jong T, van Joolingen WR. Scientific discovery learning with computer simulations of conceptual domains. Review of Educational Research. 1998; 68 (2):179–201. doi: 10.3102/00346543068002179. [ CrossRef ] [ Google Scholar ]
  • Deci EL, Eghrari H, Patrick BC, Leone DR. Facilitating internalization: The self-determination theory perspective. Journal of Personality. 1994; 62 :119–142. doi: 10.1111/j.1467-6494.1994.tb00797.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deci EL, Olafsen AH, Ryan RM. Self-determination theory in work organizations: The state of a science. Annual Review of Organizational Psychology and Organizational Behavior. 2017; 4 :19–43. doi: 10.1146/annurev-orgpsych-032516-113108. [ CrossRef ] [ Google Scholar ]
  • Deci EL, Ryan RM. The" what" and" why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry. 2000; 11 (4):227–268. doi: 10.1207/S15327965PLI1104_01. [ CrossRef ] [ Google Scholar ]
  • Deci EL, Ryan RM, Williams GC. Need satisfaction and the self-regulation of learning. Learning and Individual Differences. 1996; 8 (3):165–183. doi: 10.1016/S1041-6080(96)90013-8. [ CrossRef ] [ Google Scholar ]
  • Earley MA. A synthesis of the literature on research methods education. Teaching in Higher Education. 2014; 19 (3):242–253. doi: 10.1080/13562517.2013.860105. [ CrossRef ] [ Google Scholar ]
  • Elen J. “Instructional disobedience”: A largely neglected phenomenon deserving more systematic research attention. Educational Technology Research and Development. 2020; 68 (5):2021–2032. doi: 10.1007/s11423-020-09776-3. [ CrossRef ] [ Google Scholar ]
  • Engelmann K, Neuhaus BJ, Fischer F. Fostering scientific reasoning in education: Meta-analytic evidence from intervention studies. Educational Research and Evaluation. 2016; 22 (5–6):333–349. doi: 10.1080/13803611.2016.1240089. [ CrossRef ] [ Google Scholar ]
  • Field A. Discovering statistics using IBM SPSS statistics. SAGE Publications; 2013. [ Google Scholar ]
  • Fischer F, Chinn CA, Engelmann K, Osborne J. Scientific reasoning and argumentation. Routledge; 2018. [ Google Scholar ]
  • Fischer F, Kollar I, Ufer S, Sodian B, Hussmann H, Pekrun R, Neuhaus B, Dorner B, Pankofer S, Fischer M, Strijbos J-W, Heene M, Eberle J. Scientific reasoning and argumentation: Advancing an interdisciplinary research agenda in education. Frontline Learning Research. 2014; 4 :28–45. doi: 10.14786/flr.v2i2.96. [ CrossRef ] [ Google Scholar ]
  • Grolnick WS, Ryan RM, Deci EL. Inner resources for school achievement: Motivational mediators of children's perceptions of their parents. Journal of Educational Psychology. 1991; 83 (4):508–517. doi: 10.1037/0022-0663.83.4.508. [ CrossRef ] [ Google Scholar ]
  • Kosovich JJ, Hulleman CS, Barron KE. Measuring motivation in educational settings: A Case for pragmatic measurement. In: Renninger KA, Hidi SE, editors. The Cambridge handbook on motivation and learning. Cambridge University Press; 2017. pp. 39–60. [ Google Scholar ]
  • Lehti S, Lehtinen E. Computer-supported problem-based learning in the research methodology domain. Scandinavian Journal of Educational Research. 2005; 49 (3):297–324. doi: 10.1080/00313830500109618. [ CrossRef ] [ Google Scholar ]
  • Leroy N, Bressoux P. Does amotivation matter more than motivation in predicting mathematics learning gains? A longitudinal study of sixth-grade students in France. Contemporary Educational Psychology. 2016; 44 :41–53. doi: 10.1016/j.cedpsych.2016.02.001. [ CrossRef ] [ Google Scholar ]
  • Lesterhuis M, van Daal T, van Gasse R, Coertjens L, Donche V, de Maeyer S (2018) When teachers compare argumentative texts: Decisions informed by multiple complex aspects of text quality. L1 Educational Studies in Language and Literature, 18: 1–22. 10.17239/L1ESLL-2018.18.01.02
  • Maddens L, Depaepe F, Janssen R, Raes A, Elen J. Evaluating the Leuven research skills test for 11th and 12th grade. Journal of Psychoeducational Assessment. 2020; 38 (4):445–459. doi: 10.1177/0734282918825040. [ CrossRef ] [ Google Scholar ]
  • Maddens L, Depaepe F, Raes A, Elen J. The instructional design of a 4C/ID-inspired learning environment for upper secondary school students' research skills. International Journal of Designs for Learning. 2020; 11 (3):126–147. doi: 10.14434/ijdl.v11i3.29012. [ CrossRef ] [ Google Scholar ]
  • Maddens, L., Depaepe, F., Raes, A., & Elen, J. (under review). Fostering students’ motivation towards learning research skills in upper secondary school behavioral sciences education: the role of autonomy support.
  • Martin N, Kelly N, Terry P. A framework for self-determination in massive open online courses: Design for autonomy, competence, and relatedness. Australasian Journal of Educational Technology. 2018 doi: 10.14742/ajet.3722. [ CrossRef ] [ Google Scholar ]
  • Merrill MD. First principles of instruction. Educational Technology Research and Development. 2002; 50 (3):43–59. doi: 10.1007/BF02505024. [ CrossRef ] [ Google Scholar ]
  • Murtonen, M. S. S. (2005). Learning of quantitative research methods: University students' views, motivation and difficulties in learning. Doctoral Dissertation.
  • Niemiec CP, Ryan RM. Autonomy, competence, and relatedness in the classroom: Applying self-determination theory to educational practice. Theory and Research in Education. 2009; 7 (2):133–144. doi: 10.1177/2F1477878509104318. [ CrossRef ] [ Google Scholar ]
  • Pietersen C. Research as a learning experience: A phenomenological explication. The Qualitative Report. 2002; 7 (2):1–14. doi: 10.46743/2160-3715/2002.1980. [ CrossRef ] [ Google Scholar ]
  • Raes A, Schellens T. Unraveling the motivational effects and challenges of web-based collaborative inquiry learning across different groups of learners. Educational Technology Research and Development. 2015; 63 (3):405–430. doi: 10.1007/s11423-015-9381-x. [ CrossRef ] [ Google Scholar ]
  • Reeve J. Extrinsic rewards and inner motivation. In: Evertson CM, Weinstein CS, editors. Handbook of classroom management: Research, practice, and contemporary issues. Lawrence Erlbaum Associates Publishers; 2006. pp. 645–664. [ Google Scholar ]
  • Reeve J, Jang H. What teachers say and do to support students' autonomy during a learning activity. Journal of Educational Psychology. 2006; 98 (1):209–218. doi: 10.1037/0022-0663.98.1.209. [ CrossRef ] [ Google Scholar ]
  • Reeve J, Jang H, Hardre P, Omura M. Providing a rationale in an autonomy-supportive way as a strategy to motivate others during an uninteresting activity. Motivation and Emotion. 2002; 26 (3):183–207. doi: 10.1023/A:1021711629417. [ CrossRef ] [ Google Scholar ]
  • Ringeisen, T., & Bürgermeister, A. (2015). Fostering students’ self-efficacy in presentation skills: The effect of autonomy, relatedness and competence support. In Stress and anxiety: Application to schools, well-being, coping and internet use , 77–87.
  • Ryan RM. Control and information in the intrapersonal sphere: An extension of cognitive evaluation theory. Journal of Personality and Social Psychology. 1982; 43 :450–461. doi: 10.1037/0022-3514.43.3.450. [ CrossRef ] [ Google Scholar ]
  • Ryan RM. Psychological needs and the facilitation of integrative processes. Journal of Personality. 1995; 63 :397–427. doi: 10.1111/j.1467-6494.1995.tb00501.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ryan RM, Grolnick WS. Origins and pawns in the classroom: Self-report and projective assessments of individual differences in children’s perceptions. Journal of Personality and Social Psychology. 1986; 50 :550–558. doi: 10.1037/0022-3514.50.3.550. [ CrossRef ] [ Google Scholar ]
  • Salomon G. Technology and pedagogy: Why don't we see the promised revolution? Educational Technology. 2002; 42 (2):71–75. [ Google Scholar ]
  • Schunk DH. Self-efficacy for reading and writing: Influence of modeling, goal setting, and self-evaluation. Reading & Writing Quarterly. 2003; 19 (2):159–172. doi: 10.1080/10573560308219. [ CrossRef ] [ Google Scholar ]
  • Sheldon KM, Filak V. Manipulating autonomy, competence, and relatedness support in a game-learning context: New evidence that all three needs matter. British Journal of Social Psychology. 2008; 47 (2):267–283. doi: 10.1348/014466607X238797. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Steingut RR, Patall EA, Trimble SS. The effect of rationale provision on motivation and performance outcomes: A meta-analysis. Motivation Science. 2017; 3 (1):19–50. doi: 10.1037/mot0000039. [ CrossRef ] [ Google Scholar ]
  • Sweller J. Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction. 1994; 4 (4):295–312. doi: 10.1016/0959-4752(94)90003-5. [ CrossRef ] [ Google Scholar ]
  • Vallerand RJ. Advances in experimental social psychology. Academic Press; 1997. Toward a hierarchical model of intrinsic and extrinsic motivation; pp. 271–360. [ Google Scholar ]
  • Vallerand RJ, Losier GF. An integrative analysis of intrinsic and extrinsic motivation in sport. Journal of Applied Sport Psychology. 1999; 11 (1):142–169. doi: 10.1080/10413209908402956. [ CrossRef ] [ Google Scholar ]
  • Vallerand RJ, Reid G. On the causal effects of perceived competence on intrinsic motivation: A test of cognitive evaluation theory. Journal of Sport Psychology. 1984; 6 :94–102. doi: 10.1123/jsp.6.1.94. [ CrossRef ] [ Google Scholar ]
  • Van Merriënboer JJG, Kirschner PA. Ten steps to complex learning. Routledge; 2018. [ Google Scholar ]
  • van Merriënboer J, Sluijsmans D, Corbalan G, Kalyuga S, Paas F, Tattersall C. Performance assessment and learning task selection in environments for complex learning. In: Elen J, Clark RE, editors. Handling complexity in learning environments: Theory and Research. Elsevier Science Ltd; 2006. [ Google Scholar ]
  • Vansteenkiste M, Ryan RM, Soenens B. Basic psychological need theory: Advancements, critical themes, and future directions. Motivation and Emotion. 2020; 44 :1–31. doi: 10.1007/s11031-019-09818-1. [ CrossRef ] [ Google Scholar ]
  • Vansteenkiste M, Sierens E, Goossens L, Soenens B, Dochy F, Mouratidis A, Beyers W. Identifying configurations of perceived teacher autonomy support and structure: Associations with self-regulated learning, motivation and problem behavior. Learning and Instruction. 2012; 22 (6):431–439. doi: 10.1016/j.learninstruc.2012.04.002. [ CrossRef ] [ Google Scholar ]
  • Vansteenkiste M, Sierens E, Soenens B, Luyckx K, Lens W. Motivational profiles from a self-determination perspective: The quality of motivation matters. Journal of Educational Psychology. 2009; 101 (3):671–688. doi: 10.1037/a0015083. [ CrossRef ] [ Google Scholar ]
  • Wang CJ, Liu WC, Kee YH, Chian LK. Competence, autonomy, and relatedness in the classroom: Understanding students’ motivational processes using the self-determination theory. Heliyon. 2019; 5 (7):e01983. doi: 10.1016/j.heliyon.2019.e01983. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]

research skills learning outcomes

Indiana University Indianapolis Indiana University Indianapolis IU Indianapolis

  • Herron School of Art
  • Ruth Lilly Law
  • Ruth Lilly Medical
  • School of Dentistry

Learning Outcomes for Teaching Research Skills

  • Library Instruction
  • Learning Outcomes for…

The   Framework for Information Literacy for Higher Education  was developed by the Association of College & Research Libraries (ACRL) to help students understand the complex and changing landscape of Information Literacy (IL) skills that will benefit them as lifelong learners. The six "frames" below can be used over a student's entire academic career -- in the first-year, milestone, capstone, and graduate courses -- to teach students to find, use, evaluate, and produce information effectively and ethically.

research skills learning outcomes

Authority is Constructed and Contextual  -  Information is evaluated in part based on the author’s credibility and is applied in context.  

research skills learning outcomes

Information Creation is a Process  -  Information exists in different formats and should be evaluated to determine its’ usefulness.

research skills learning outcomes

Information has Value  -  Legal and socioeconomic interests influence information gathering and distribution.

research skills learning outcomes

Research as Inquiry  -  Questions beget questions in an iterative process.

research skills learning outcomes

Scholarship is a Conversation  -  Research matures over time through sustained discourse between scholars.

research skills learning outcomes

Searching is a Strategic Exploration  -  Finding information requires flexibility and inquisitiveness.

Authority is Constructed and Contextual

research skills learning outcomes

University Library's suggested learning outcomes (on graduation) :

Identify authoritative information sources in any form.

Evaluate the authority of information from various sources (e.g., peer-reviewed journals, magazines, newspapers, website, etc.).*

Acknowledge their own authority in certain contexts.

Recognize that authority or credibility is contextual in relation to time, discipline, methodology, and other factors.*

Information Creation is a Process

research skills learning outcomes

 where information exists in different formats, which has an impact on how it is used and shared. The underlying processes of creation and the final product should be critically evaluated to determine the usefulness of the information.  (Outcomes marked with an asterisk * have been identified as General Education Learning Outcomes, by the time a student has completed approximately 30 credit hours.)

Articulate the capabilities and constraints of various processes of information creation.

Critique the presentation of information within disciplines.

Articulate traditional and emerging research processes (e.g., literature review, statistical analysis).

Distinguish between format and method of access.

Select sources that best meet an information need based on the audience, context, and purpose of various formats.*

Information Has Value


Information has value as a commodity, a means of education, a means of influence, and a means of negotiating and understanding the world. Legal and socioeconomic interests influence information production and dissemination.  

Manage personal and academic information online with a knowledge of the commodification of that information.

Recognize that intellectual property is legally and socially constructed and varies by discipline and culture.

Cite sources through proper attribution.*

Identify publication practices and their related implications for how information is accessed and valued (e.g., open movement, digital divide).

Research as Inquiry


Research is an iterative process that depends upon asking increasingly complex or new questions whose answers prompt additional questions or lines of inquiry in any field. (Outcomes marked with an asterisk * have been identified as General Education Learning Outcomes, by the time a student has completed approximately 30 credit hours.)

University Library's suggested learning outcomes (on graduation):

Formulate questions for research of an appropriate scope, based on information gaps or by reexamining existing information.*

Select research methodology(ies) based on need, circumstance, and type of inquiry.

Organize information systematically (e.g., citation management software).

Synthesize information from multiple sources and a variety of perspectives.*

Scholarship is a Conversation


Scholarship is a conversation consisting of sustained discourse within communities of scholars, researchers, or professionals, with new insights and discoveries occurring over time as a result of a variety of perspectives and interpretations. (Outcomes marked with an asterisk * have been identified as General Education Learning Outcomes, by the time a student has completed approximately 30 credit hours.)

Contribute to the ongoing scholarly conversation at an appropriate level.

Identify the contribution that information sources make within a discipline or conversation.*

Describe the ways that communication systems privilege some perspectives and present barriers to others.*

Summarize the changes in scholarly perspective over time on a particular topic within a specific discipline.

Recognize that a given scholarly work may not represent the only or even the majority perspective on the issue.*

Searching is a Strategic Exploration


Searching is a strategic exploration encompassing inquiry, discovery, and flexibility. Searching means understanding how information is organized, identifying relevant sources, and knowing how to access those sources. (Outcomes marked with an asterisk * have been identified as General Education Learning Outcomes, by the time a student has completed approximately 30 credit hours.)

Identify information need and potential sources of information (e.g., scholars, organizations, governments, industries).*

Design searches strategically, considering and selecting systems to search and evaluate results.*

Refine information need and search strategies based on results.*

Identify how information systems are organized in order to access relevant information.

Apply different searching language types (e.g., controlled vocabulary, keywords).


Creating Learning Outcomes


A learning outcome is a concise description of what students will learn and how that learning will be assessed. Having clearly articulated learning outcomes can make designing a course, assessing student learning progress, and facilitating learning activities easier and more effective. Learning outcomes can also help students regulate their learning and develop effective study strategies.

Defining the terms

Educational research uses a number of terms for this concept, including learning goals, student learning objectives, session outcomes, and more. 

In alignment with other Stanford resources, we will use learning outcomes as a general term for what students will learn and how that learning will be assessed. This includes both goals and objectives. We will use learning goals to describe general outcomes for an entire course or program. We will use learning objectives when discussing more focused outcomes for specific lessons or activities.

For example, a learning goal might be “By the end of the course, students will be able to develop coherent literary arguments.” 

Whereas a learning objective might be, “By the end of Week 5, students will be able to write a coherent thesis statement supported by at least two pieces of evidence.”

Learning outcomes benefit instructors

Learning outcomes can help instructors in a number of ways by:

  • Providing a framework and rationale for making course design decisions about the sequence of topics and instruction, content selection, and so on.
  • Communicating to students what they must do to make progress in learning in your course.
  • Clarifying your intentions to the teaching team, course guests, and other colleagues.
  • Providing a framework for transparent and equitable assessment of student learning. 
  • Making outcomes concerning values and beliefs, such as dedication to discipline-specific values, more concrete and assessable.
  • Making inclusion and belonging explicit and integral to the course design.

Learning outcomes benefit students 

Clearly articulated learning outcomes can also help guide and support students in their own learning by:

  • Clearly communicating the range of learning students will be expected to acquire and demonstrate.
  • Helping learners concentrate on the areas that they need to develop to progress in the course.
  • Helping learners monitor their own progress, reflect on the efficacy of their study strategies, and seek out support or better strategies. (See Promoting Student Metacognition for more on this topic.)

Choosing learning outcomes

When writing learning outcomes to represent the aims and practices of a course or even a discipline, consider:

  • What is the big idea that you hope students will still retain from the course even years later?
  • What are the most important concepts, ideas, methods, theories, approaches, and perspectives of your field that students should learn?
  • What are the most important skills that students should develop and be able to apply in and after your course?
  • What would students need to have mastered earlier in the course or program in order to make progress later or in subsequent courses?
  • What skills and knowledge would students need if they were to pursue a career in this field or contribute to communities impacted by this field?
  • What values, attitudes, and habits of mind and affect would students need if they are to pursue a career in this field or contribute to communities impacted by this field?
  • How can the learning outcomes span a wide range of skills that serve students with differing levels of preparation?
  • How can learning outcomes offer a range of assessment types to serve a diverse student population?

Use learning taxonomies to inform learning outcomes

Learning taxonomies describe how a learner’s understanding develops from simple to complex when learning different subjects or tasks. They are useful here for identifying any foundational skills or knowledge needed for more complex learning, and for matching observable behaviors to different types of learning.

Bloom’s Taxonomy

Bloom’s Taxonomy is a hierarchical model and includes three domains of learning: cognitive, psychomotor, and affective. In this model, learning occurs hierarchically, as each skill builds on previous skills towards increasingly sophisticated learning. For example, in the cognitive domain, learning begins with remembering, then understanding, applying, analyzing, evaluating, and lastly creating. 

Taxonomy of Significant Learning

The Taxonomy of Significant Learning is a non-hierarchical and integral model of learning. It describes learning as a meaningful, holistic, and integral network. This model has six intersecting domains: knowledge, application, integration, human dimension, caring, and learning how to learn. 

See our resource on Learning Taxonomies and Verbs for a summary of these two learning taxonomies.

How to write learning outcomes

Writing learning outcomes can be made easier by using the ABCD approach. This strategy identifies four key elements of an effective learning outcome: Audience, Behavior, Condition, and Degree.

Consider the following example: Students (audience) will be able to label and describe (behavior), given a diagram of the eye at the end of this lesson (condition), all seven extraocular muscles and at least two of their actions (degree).

Audience 

Define who will achieve the outcome. Outcomes commonly include phrases such as “After completing this course, students will be able to...” or “After completing this activity, workshop participants will be able to...”

Keeping your audience in mind as you develop your learning outcomes helps ensure that they are relevant and centered on what learners must achieve. Make sure the learning outcome is focused on the student’s behavior, not the instructor’s. If the outcome describes an instructional activity or topic, then it is too focused on the instructor’s intentions and not the students.

Try to understand your audience so that you can better align your learning goals or objectives to meet their needs. While every group of students is different, certain generalizations about their prior knowledge, goals, motivation, and so on might be made based on course prerequisites, their year-level, or majors. 

Behavior

Use action verbs to describe observable behavior that demonstrates mastery of the goal or objective. Depending on the skill, knowledge, or domain of the behavior, you might select a different action verb. Particularly for learning objectives, which are more specific, avoid verbs that are vague or difficult to assess, such as "understand", "appreciate", or "know".

The behavior usually completes the audience phrase “students will be able to…” with a specific action verb that learners can interpret without ambiguity. We recommend beginning learning goals with a phrase that makes it clear that students are expected to actively contribute to progressing towards a learning goal. For example, “through active engagement and completion of course activities, students will be able to…”

Example action verbs

Consider the following examples of verbs from different learning domains of Bloom’s Taxonomy . Generally speaking, items listed at the top under each domain are more suitable for advanced students, and items listed at the bottom are more suitable for novice or beginning students. Using verbs and associated skills from all three domains, regardless of your discipline area, can benefit students by diversifying the learning experience. 

For the cognitive domain:

  • Create, investigate, design
  • Evaluate, argue, support
  • Analyze, compare, examine
  • Solve, operate, demonstrate
  • Describe, locate, translate
  • Remember, define, duplicate, list

For the psychomotor domain:

  • Invent, create, manage
  • Articulate, construct, solve
  • Complete, calibrate, control
  • Build, perform, execute
  • Copy, repeat, follow

For the affective domain:

  • Internalize, propose, conclude
  • Organize, systematize, integrate
  • Justify, share, persuade
  • Respond, contribute, cooperate
  • Capture, pursue, consume

Often we develop broad goals first, then break them down into specific objectives. For example, if a goal is for learners to be able to compose an essay, break it down into several objectives, such as forming a clear thesis statement, coherently ordering points, following a salient argument, gathering and quoting evidence effectively, and so on.

Condition

State the conditions, if any, under which the behavior is to be performed. Consider the following conditions:

  • Equipment or tools, such as using a laboratory device or a specified software application.
  • Situation or environment, such as in a clinical setting, or during a performance.
  • Materials or format, such as written text, a slide presentation, or using specified materials.

The level of specificity for conditions within an objective may vary and should be appropriate to the broader goals. If the conditions are implicit or understood as part of the classroom or assessment situation, it may not be necessary to state them. 

When articulating the conditions in learning outcomes, ensure that they are sensorily and financially accessible to all students.

Degree 

Degree states the standard or criterion for acceptable performance. The degree should be related to real-world expectations: what standard should the learner meet to be judged proficient? For example:

  • With 90% accuracy
  • Within 10 minutes
  • Suitable for submission to an edited journal
  • Obtain a valid solution
  • In a 100-word paragraph

The specificity of the degree will vary. You might take into consideration professional standards, what a student would need to succeed in subsequent courses in a series, or what is required by you as the instructor to accurately assess learning when determining the degree. Where the degree is easy to measure (such as pass or fail) or accuracy is not required, it may be omitted.

Characteristics of effective learning outcomes

The acronym SMART is useful for remembering the characteristics of an effective learning outcome.

  • Specific : clear and distinct from others.
  • Measurable : identifies observable student action.
  • Attainable : suitably challenging for students in the course.
  • Related : connected to other objectives and student interests.
  • Time-bound : likely to be achieved and keep students on task within the given time frame.

Examples of effective learning outcomes

These examples generally follow the ABCD and SMART guidelines. 

Arts and Humanities

Learning goals.

Upon completion of this course, students will be able to apply critical terms and methodology in completing a written literary analysis of a selected literary work.

At the end of the course, students will be able to demonstrate oral competence with the French language in pronunciation, vocabulary, and language fluency in a 10-minute in-person interview with a member of the teaching team.

Learning objectives

After completing lessons 1 through 5, given images of specific works of art, students will be able to identify the artist, artistic period, and describe their historical, social, and philosophical contexts in a two-page written essay.

By the end of this course, students will be able to describe the steps in planning a research study, including identifying and formulating relevant theories, generating alternative solutions and strategies, and application to a hypothetical case in a written research proposal.

At the end of this lesson, given a diagram of the eye, students will be able to label all of the extraocular muscles and describe at least two of their actions.

Using chemical datasets gathered at the end of the first lab unit, students will be able to create plots and trend lines of that data in Excel and make quantitative predictions about future experiments.

  • How to Write Learning Goals, Evaluation and Research, Student Affairs (2021).
  • SMART Guidelines, Center for Teaching and Learning (2020).
  • Learning Taxonomies and Verbs, Center for Teaching and Learning (2021).

British Educational Research Association


Review of Education

What matters for student learning outcomes? A systematic review of studies exploring system-level factors of educational effectiveness

Ana María Mejía-Rodríguez (corresponding author) and Leonidas Kyriakides

Department of Education, University of Cyprus, Nicosia, Cyprus; IEA, Hamburg, Germany

Correspondence: Ana María Mejía-Rodríguez, Department of Education, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus.

Abstract

Meta-analysis comprises a powerful tool for synthesising prior research and empirically validating theoretical frameworks. Using this tool and two recent multilevel models of educational effectiveness as guiding frameworks, this paper synthesises the results of 195 studies investigating the association between system-level characteristics and student learning outcomes. Results show a broad range of system-level factors studied in the international literature, which could be grouped into the three categories used in the integrated multilevel model of education: antecedents determined by the larger societal context, including factors such as level of development, inequality and societal values; system ecology and structural reform, covering factors such as decentralisation, accountability and stratification; and direct educational policies, including financial resources, time resources and variables related to teacher training and qualifications. Results highlight the importance of the larger context in which educational systems operate, as well as the need for further research looking at actual educational policies. Further analyses provide support for a generic effect of system-level factors, regardless of the educational level or subject domain assessed. However, results show variation in terms of methodological choices, such as the number of levels used in the analysis. Based on these results, implications for theory, research, and policy are drawn.

Context and implications

Rationale for this study.

We present a meta-analysis of the international literature on system-level factors associated with student learning outcomes to contribute to the improvement of quality in education.

Why the new findings matter

A synthesis of the literature is important to identify what kind of factors have been studied so far, their consistency and significance, and to identify future research needs.

Implications for theory, research, and policy

Socioeconomic factors, such as affluence and inequality, and other characteristics of the larger societal context are important system-level conditions. Conducting the meta-analysis allowed us to identify areas where more research is needed, such as the impact of national educational policies and the impact of system factors on non-cognitive learning outcomes. Further studies are also needed with a longitudinal component and a focus on the indirect effects of system-level factors. The results of the meta-analysis give further support to exerting caution when it comes to educational reforms and their implementation across different contexts, particularly because of the role that system inequality plays in educational outcomes.

INTRODUCTION

Educational effectiveness research (EER) searches for the "factors in teaching, curriculum, and learning environment at different levels such as the classroom, the school, and the above-school levels (that) can directly or indirectly explain the differences in the outcomes of students, taking into account background characteristics, such as ability, socioeconomic status, and prior attainment" (Creemers & Kyriakides, 2008, p. 12).

Although there are quantitative systematic reviews of the literature regarding classroom (e.g., Bardach & Klassen, 2020; Bourdeaud'hui et al., 2018; Kyriakides et al., 2013; Scheerens, 2016) and school (e.g., Kyriakides et al., 2010; Scheerens, 2016) factors, to our knowledge no such synthesis has yet been carried out for factors at the system level, which is regarded as a fairly new research area (Scheerens et al., 2015; Scheerens & Blömeke, 2016). Scheerens (2016) provided a literature review of some of the salient system-level factors studied in the last two decades but refrained from making a quantitative synthesis due to the early stage of the area of system effectiveness within EER. Outside of EER, there have been several high-quality and informative reviews conducted within the economics of education literature (e.g., Hanushek & Woessmann, 2011, 2017); however, these are focused on the institutional arrangements and the structure of educational systems. Even in Hattie's (2009) famous synthesis of over 800 meta-analyses, the focus was on student, home, teacher, school and curricula, but there was no synthesis related to system-level factors.

To contribute to the ongoing EER efforts to study the system level, we present a meta-analysis of the international literature on system-level factors associated with student learning outcomes. Meta-analysis is a powerful tool not only for synthesising the knowledge that has been accumulated in a given field or topic but also for identifying associations that have not yet been studied or have not been studied extensively (Cooper et al., 2019), which can, in turn, provide directions for future theoretical and empirical work. Therefore, the aim of the present meta-analysis is to synthesise the existing evidence regarding system-level factors associated with student learning outcomes: which factors have been identified in the literature, what is their relationship with student learning outcomes, to what extent are findings consistent, and what are the current gaps in the literature.

CONCEPTUAL FRAMEWORK

Since the start of EER, which dates to the 1970s, the field has gone through several phases and has addressed different research questions. As a reaction to the studies by Coleman et al. (1966) and Jencks et al. (1972), which stated that schools did not have much impact on student learning, the early phases of EER focused on showing that schooling did matter (Kyriakides, Creemers, et al., 2018; Reynolds et al., 2014). Afterwards, an interest emerged in searching for other factors associated with student learning, leading to different approaches to modelling educational effectiveness. These approaches are based on three disciplinary perspectives. In the economic approach, effectiveness is seen in terms of the production process of schools and educational systems, in which inputs (e.g., financial resources) are transformed into outputs (e.g., graduation rates) (Creemers & Kyriakides, 2008; Scheerens & Bosker, 1997). Research from this approach has shown that the relationship between inputs and outcomes is more complex than initially assumed (Creemers & Kyriakides, 2006; Reynolds et al., 2014), with little evidence that factors such as pupil-teacher ratios, spending per student, and teacher salaries have a consistent impact on achievement (Hanushek & Woessmann, 2015, 2017). A second approach to modelling educational effectiveness comes from the disciplinary perspective of educational psychology. The focus from this approach is on effective instruction and learning conditions (Creemers, 1994), including factors like different student characteristics (e.g., aptitudes, personality, motivation), time on task and opportunity to learn, and factors about what happens inside the classroom (e.g., teaching practices, teacher characteristics). Educational effectiveness has also been modelled from a sociological disciplinary perspective, with a focus on the relationship between student outcomes and student background characteristics, such as their socioeconomic status, gender and immigration background, as well as on the variability that might exist due to these background characteristics (Creemers & Kyriakides, 2008).

Although these three disciplinary perspectives might have existed separately in the early phases of EER, Scheerens and Bosker ( 1997 ) indicate that a blending began to take place from the mid-1980s, leading to an integrated approach to modelling effectiveness. It is from this approach that multilevel models such as the ones by Creemers ( 1994 ) and Scheerens ( 1992 ) emerged, as well as more recent models like the dynamic model of educational effectiveness (DMEE; Creemers & Kyriakides,  2008 ) and the integrated multilevel model of education (Scheerens,  2016 ). The present meta-analysis is guided by these two latter models. Both are multilevel and refer to factors associated with student learning outcomes that operate at four levels: student, classroom, school and system. They also acknowledge direct and indirect influences on learning, within and across the four levels in their models. However, an important distinction at the level of the system is that the integrated multilevel model of education provides only a framework of what kind of system-level characteristics can be examined in relation to educational outcomes, whereas the DMEE is a parsimonious model that focuses on a narrower set of variables that originate, mainly, from previous research at classroom and school levels. In the following sections, we will describe the main characteristics of the system-level in both conceptual models. A full description of all the levels included in the models can be found in Creemers and Kyriakides ( 2008 ), for the DMEE, and Scheerens ( 2016 ), for the integrated multilevel model of education.

System-level factors of the dynamic model of educational effectiveness

With the aim of EER in mind and with the idea that theoretical models of effectiveness ‘should be established in such a way as to help policymakers and practitioners to improve educational practice’ (Creemers & Kyriakides,  2008 , p. 75), the system-level of the DMEE focuses on malleable conditions of educational systems. Specifically, the DMEE mentions three system-level aspects that might affect learning both inside and outside the classroom: (1) national policy for teaching and for creating a school learning environment; (2) evaluation of the national policy; and (3) the wider educational environment. The first two aspects are directly derived from the school-level factors in the model, as by the time the model was developed there were not only few studies that attempted to identify factors operating at the system level but also the evidence in these studies was mixed. Therefore, the authors opted for a parsimonious model, which mainly refers to aspects of national educational policy that affect learning both inside and outside the classroom.

In the first aspect, national policy for teaching is further divided into quantity of teaching, provision of learning opportunities, and quality of teaching. National policy on quantity of teaching is concerned with regulations and guidelines to ensure that the quantity of teaching is kept to a maximum level. This includes, for example, policy or regulations concerned with the school timetable, and policy on student and teacher absenteeism (Creemers & Kyriakides,  2008 ). Provision of learning opportunities refers to policies on national curriculum but also on the provision of—and support or encouragement to participate in—extra-curricular activities that contribute to achieving the aims of the curriculum (e.g., participation in math Olympiads or other competitions). Policies on quality of teaching have to do with guidelines and support given to teachers to use effective teaching practices (e.g., through specified standards on teaching, teacher evaluation), whereas national policy for improving the school learning environment focuses on issues related to student behaviour outside the classroom, collaboration and interaction between teachers, partnerships between different stakeholders (e.g., school, community, parents), and the provision of sufficient learning resources for students and teachers. The second aspect, evaluation of national policy, refers to the existence of evaluation mechanisms to collect data about the appropriateness of the national policy. Examples of information that could be gathered about these evaluation mechanisms are how frequently they occur, when data is collected, and whether data comes from multiple stakeholders.

Although the DMEE focuses on educational policy, the model also includes an aspect labelled as the ‘wider educational environment’. Creemers and Kyriakides ( 2008 ) state that this aspect is partly defined by the political and sociocultural context in which students, teachers and schools operate, and it is expected to influence teaching and learning through its ability to increase opportunities for learning and to develop positive values for learning. Although it is only described in a general way, the DMEE mentions two elements of the wider educational environment: support provided to schools from different stakeholders (e.g., companies, universities, educational researchers), and stakeholders' expectations of learning and learning outcomes.

System-level factors of the integrated multi-level model of education

At the level of the system, Scheerens ( 2016 ) presents an overview of different conditions that can be examined in educational effectiveness research, including a broader set of factors beyond the educational policies and the educational environment considered by the DMEE. This broad set of system-level conditions is further organised into three meaningful categories: (1) antecedents determined by the larger societal context; (2) system ecology and structural reform; and (3) direct policies, in the sense of malleable inputs and processes.

The larger societal context includes a set of cultural, social, and economic and political conditions that might be related to student learning outcomes. Conditions of this larger societal context include the affluence of the system, the level of inequality, the heterogeneity of the population, and cultural norms that ‘could be seen as relevant for the appreciation of education and the motivation of the population to do well in education’ (Scheerens,  2016 , p. 148).

The other two categories refer to educational policies and the key distinction between both is the degree of malleability of said policies. System ecology and structural reform refers to factors that are a result of long-term educational reforms and, therefore, might not be that easy to change in the short term. According to Scheerens, there are three areas of structural reform that have stood out in the last two decades: functional decentralisation, evaluation and accountability arrangements, and the structural integration (or segregation) of secondary education. Functional decentralisation refers to the degree of autonomy of decision-making that exists at different administrative levels (e.g., schools, states, national education system) and on different elements of educational administration (e.g., curriculum, evaluation, human resource management). The second area of structural reform includes evaluation for accountability purposes—usually external and with consequences for students, teachers, or schools—and evaluation for school improvement, which is internal and with formative purposes. The final element within this category focuses on the implementation of early tracking or selection into different pathways in secondary education based on student ability. In general, across these three categories, previous research shows a mixed pattern of results with no clear links of decentralisation, accountability or stratification with student learning outcomes. Nevertheless, there seems to be high expectations for some facets of these factors, such as autonomy regarding instructional matters, accountability in the form of national standard based exams, and comprehensive schools (i.e., that do not separate students into different tracks) (Scheerens & Blömeke,  2016 ).

The final category of direct policies refers to malleable inputs and processes in the matter of investment in education, teacher training and recruitment, equity-oriented policies and quality improvement programmes. Investment in education is usually measured as the percentage of national wealth spent on education but other forms of investment can be in terms of teacher salaries and—based on the framework by the OECD ( 2016 )—material (e.g., pedagogical resources, infrastructure) and human resources (e.g., class size, pupil-teacher ratios, teacher shortages). Scheerens describes teacher training and recruitment mainly in terms of formal teacher qualifications, such as having a master’s degree or being certified to teach in a specific field. Equity-oriented policies concern programmes or policies designed to either achieve a more equal distribution of inputs and outputs among target groups (e.g., economically disadvantaged students or students with special education needs) or to better serve these groups.

RESEARCH QUESTIONS

  • Which system-level factors related to student achievement have been studied across the literature?
  • What is their association (in terms of effect sizes) with student achievement?
  • To what extent is there variation of effect sizes across studies?
  • Which study characteristics can explain variation in effect sizes?

METHODS

Literature search and initial screening

To identify the literature relevant to this meta-analysis, we performed an electronic search of the Scopus, Web of Science, ERIC, PsycINFO, SocINDEX, and EconLit databases, focusing on publications that had been published between 1 January 1970 and 1 November 2019 and using a combination of three sets of search terms 1 : (1) system, country or context; (2) factors, characteristics, determinants or differences; and (3) student, learning or educational outcomes. All searches were limited to titles, abstracts and keywords. An additional targeted search was performed on the reports and publications of international large-scale assessments in education conducted by the International Association for the Evaluation of Educational Achievement (IEA) and by the Organisation for Economic Cooperation and Development (OECD). Finally, the reference lists of all the included studies (i.e., those that passed the initial screening and the eligibility stage) were screened to identify studies not yet covered in our search. Figure 1 describes the stages of our literature search according to the PRISMA guidelines (Moher et al., 2009).
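Purely as an illustration of how these three term sets could be combined into a single Boolean query (the authors' exact search strings are documented in their footnote, so the terms and syntax below are an assumption), the combination might be built as follows:

```python
# Hypothetical illustration only: the authors' actual search strings are
# given in their footnote and may differ from this reconstruction.
term_sets = [
    ["system*", "country", "context"],
    ["factor*", "characteristic*", "determinant*", "difference*"],
    ['"student outcome*"', '"learning outcome*"', '"educational outcome*"'],
]
query = " AND ".join("(" + " OR ".join(terms) + ")" for terms in term_sets)
print(query)  # searches were limited to titles, abstracts and keywords
```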

[Figure 1. Stages of the literature search and study selection, following the PRISMA guidelines.]

Screening and selection criteria

Studies were included in our meta-analysis if they met the following criteria. First, we selected studies that compared at least two educational systems; this includes comparisons of districts, regions or states within or across countries, as well as cross-country comparisons. Second, studies should have included explanatory variables at the system level and provided information on how the variables were measured and from which sources they were gathered. Third, the selected studies had to include explicit and valid measures of cognitive (i.e., achievement) or non-cognitive (e.g., self-concept, motivation, interest) learning outcomes. Studies that used more global criteria for academic outcomes, such as dropout rates, completion rates, and grade retention, were not included. Finally, studies had to be reported in English or Spanish. We note here that, to include more studies in this meta-analysis, we decided to err on the side of being more inclusive rather than exclusive. Therefore, we used minimal quality criteria for study selection, and we decided to conduct the meta-analysis in two steps, outlined in the next section. After the application of these criteria, the final sample consisted of 195 studies. The full list of studies is provided in Appendix S1.

Statistical procedure

A first and merely descriptive approach was to follow the vote counting procedure developed by Light and Smith ( 1971 ), in which the reported results across the studies in our sample are divided into significant positive, significant negative and non-significant. This is a common practice in previous systematic reviews within the field of EER (e.g., Kyriakides et al.,  2010 , 2013 ; Scheerens,  2016 ; Scheerens & Bosker,  1997 ; Witziers et al.,  2003 ). The vote-counting procedure was combined with a quantitative approach to calculate average effect sizes and variation in effect sizes across studies by means of a multilevel meta-analysis. For this second part, we were only able to include studies that reported enough information to calculate standardised effect sizes, which led to a smaller sample size of 140 studies.
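As a minimal sketch of the vote-counting step (the authors conducted their analyses in SPSS; the function and input values below are purely illustrative), each reported effect can be classified by its sign and reported significance and the three categories then tallied:

```python
from collections import Counter

def vote(effect: float, p_value: float, alpha: float = 0.05) -> str:
    """Classify one reported effect as significant positive (+),
    significant negative (-), or not statistically significant (Nss)."""
    if p_value >= alpha:
        return "Nss"
    return "+" if effect > 0 else "-"

# Hypothetical reported effects: (standardised coefficient, p-value)
reported = [(0.29, 0.001), (-0.16, 0.03), (0.05, 0.40)]
counts = Counter(vote(b, p) for b, p in reported)
shares = {k: f"{100 * v / len(reported):.1f}%" for k, v in counts.items()}
print(shares)  # e.g. {'+': '33.3%', '-': '33.3%', 'Nss': '33.3%'}
```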

For all analyses presented, the first step was to extract all system-level variables studied across the literature and identify factors that were commonly measured across these studies. Because of the breadth of factors covered, these were grouped into the three themes from Scheerens' model: (1) antecedents determined by the larger societal context; (2) system ecology and structural reform; and (3) direct educational policies in the sense of malleable inputs and processes. The vote counting and the multilevel meta-analysis approaches were done separately for factors within each theme and, when possible, for specific variables. Data preparation and the vote-counting analyses were performed in IBM SPSS 25 and the multilevel meta-analysis was conducted using Mplus version 8 (Muthén & Muthén,  2017 ).

For the multilevel meta-analysis, we adopted the procedure suggested by Lamberts and Abrams ( 1995 ), which has also been used in recent meta-analyses within EER (e.g., Kyriakides et al.,  2010 , 2013 ; Scheerens,  2016 ; Scheerens & Bosker,  1997 ). First, an unconditional model was established to determine the average effect size of a factor and to partition the variance in effect sizes across and within studies. Next, if the unconditional model showed significant average effect sizes as well as significant between-study variability, we proceeded to add study characteristics as explanatory variables in conditional models. More information on the multilevel model used is presented in the Appendix  A .
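The full specification is given in Appendix A; a generic two-level formulation consistent with the description above, written in notation of our own choosing rather than the authors', would be:

```latex
% Unconditional model: effect size i within study j
d_{ij} = \delta_0 + u_j + e_{ij},
\qquad u_j \sim N(0, \tau^2), \quad e_{ij} \sim N(0, v_{ij}),
% with \delta_0 the average effect size, \tau^2 the between-study variance,
% and v_{ij} the (approximately known) sampling variance of each effect size.

% Conditional model: study characteristics X_{sj} entered as moderators
d_{ij} = \delta_0 + \textstyle\sum_{s} \gamma_s X_{sj} + u_j + e_{ij}.
```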

The computation of effect sizes

Given the nature of our sample, in which most studies report regression coefficients, we follow Bowman (2012) and use standardised regression coefficients as our chosen metric of effect size. Furthermore, given empirical support that standardised beta coefficients can be substituted directly for correlation coefficients in quantitative meta-analysis (Rosenthal & DiMatteo, 2001), we treat them as the same metric of effect size. All effect sizes were transformed into Fisher's Z using the formula presented by Rosenthal (1994).
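Fisher's Z here refers to the standard variance-stabilising transformation of a correlation-metric effect size r (including standardised betas treated as correlations):

```latex
Z_r = \tfrac{1}{2}\,\ln\!\left(\frac{1 + r}{1 - r}\right),
\qquad \operatorname{Var}(Z_r) \approx \frac{1}{n - 3},
```

where n is the sample size on which the coefficient is based.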

Coding of study characteristics

In multilevel meta-analysis, study characteristics are possible moderators that might help explain variation in effect sizes across studies (Creemers & Kyriakides, 2010). For this meta-analysis, we extracted information on the source of student learning outcomes data used, the type of system studied, the sample size, whether the study covered developing and/or developed countries, the educational level studied, the type of outcome measured (cognitive or non-cognitive), the domain, the study design and the statistical techniques employed. A description of the study characteristics, and their coding, is available in Appendix A.
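As an illustration only (the actual coding scheme is documented in Appendix A, and the field names below are hypothetical), a single coded study record covering these characteristics could look like this:

```python
# Hypothetical example of one coded study; field names are illustrative,
# not the authors' actual coding scheme (see their Appendix A).
coded_study = {
    "data_source": "PISA",            # source of learning-outcome data
    "system_type": "countries",       # countries vs. states/districts
    "sample_size": 34,                # number of systems compared
    "development": "developed_only",  # developed and/or developing countries
    "educational_level": "secondary",
    "outcome_type": "cognitive",
    "domain": "mathematics",
    "design": "cross-sectional",
    "statistical_technique": "multilevel regression",
}
```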

RESULTS

Results are presented in the following way. First, descriptive statistics of the studies and of the system-level factors are presented. Next, we present results of the vote counting procedure for each overarching theme of the integrated multilevel model of education. The third part of the results includes the results of the multilevel meta-analysis for selected factors.

A brief description of the studies in our sample is presented in Table 1. Although all publications prior to November 2019 were considered, relevant publications generally fell into the last 20 years, with an increasing number of articles including system-level factors in every decade. Most of the articles in our sample were secondary analyses of international large-scale assessments, but about one quarter of studies used other sources of data, mainly from national examinations of student achievement such as the NAEP (e.g., Grissmer et al., 2000; Grodsky et al., 2009; Marchant et al., 2006), but also from regional assessments, like PASEC (e.g., Michaelowa, 2001), or even by collecting their own data (e.g., Kyriakides, Creemers, et al., 2018; Kyriakides, Georgiou, et al., 2018). The studies in our sample also show that the focus has been on cognitive outcomes of schooling, particularly on mathematics achievement. Regarding characteristics of the design and analysis used in these studies, most had a cross-sectional design and were unilevel (i.e., conducted only at the level of the system).

TABLE 1 Description of the studies included in the meta-analysis (values are numbers of studies and percentages of the 195 studies)

Publication type: Journal article, 152 (78%); Working or discussion paper, 10 (5%); Book chapter, 2 (1%); Report, 20 (10%); Conference paper, 6 (3%); Dissertation, 5 (3%)

Publication period: 1975–2000, 25 (13%); 2001–2010, 65 (33%); 2011–2019, 105 (54%)

Data source: PISA, 109 (56%); TIMSS, 47 (24%); PIRLS, 12 (6%); ICCS/CIVED, 8 (4%); IAEP, 3 (2%); Other, 51 (26%)

Educational level: Primary education, 32 (16%); Secondary education, 182 (94%)

Type of outcome: Cognitive, 193 (99%); Non-cognitive, 8 (4%)

Domain: Mathematics, 116 (59%); Reading, 76 (39%); Science, 62 (32%); Civics/Citizenship, 8 (4%); General, 39 (20%)

Study design: Cross-sectional, 161 (83%); Quasi-experimental, 8 (4%); Panel or longitudinal study, 26 (13%)

Levels of analysis: Unilevel, 109 (55%); Multilevel, 86 (45%)

Statistical technique: Multiple regression, 82 (42%); Correlation, 31 (16%); Multilevel regression, 67 (34%); Structural equation modelling, 9 (5%); Differences-in-differences, 5 (3%); Instrumental variables, 4 (2%)

Type of system compared: Countries, 131 (67%); Other (e.g., states, districts), 70 (36%)

Sample size: Less than 30, 124 (64%); 30 or more, 89 (46%)

Countries covered: Developed only, 87 (45%); Developing and developed, 108 (55%)

Region: Europe, 131 (67%); North America (only USA or CAN), 149 (76%); Latin America, 106 (54%); Asia, 104 (53%); Australia/NZ, 105 (54%); Africa, 45 (23%)
  • Note: Studies might not add up to 195 in certain categories because a study could examine more than one outcome, use more than one data source or method of analysis, or include multiple countries/regions in its sample.
  • Abbreviations: CIVED, Civic Education Study; IAEP, International Assessment of Educational Progress; ICCS, International Civic and Citizenship Education Study; PIRLS, Progress in International Reading Literacy Study; PISA, Programme for International Student Assessment; TIMSS, Trends in International Mathematics and Science Study.

Antecedents determined by the larger societal context

The theme of antecedents determined by the larger societal context includes factors such as the affluence of the system and its level of development and inequality. Table  2 provides an overview of all these factors, specific variables within each factor, the distribution of reported effects in terms of the statistical significance and their sign, and the average effect sizes across studies.

TABLE 2 Antecedents determined by the larger societal context: distribution of reported results and average effect sizes across studies

Columns: factor or variable; m (number of studies); k (number of effect sizes); percentage of reported results that were statistically significant and positive (+), statistically significant and negative (−), or not statistically significant (Nss); average effect size (Z r); and its 95% confidence interval (CI).
General affluence 122 320 50.3% 5.0% 44.7% 0.27*** [0.22, 0.32]
Affluence 103 215 48.8% 6.1% 45.1% 0.26*** [0.20, 0.31]
GDP per capita 64 118 55.1% 6.8% 38.1% 0.29*** [0.20, 0.37]
GNP per capita 8 13 30.8% 7.7% 61.5% 0.29* [0.06, 0.51]
GNI per capita 6 10 20.0% 0.0% 80.0% 0.27 [−0.10, 0.63]
SES 8 22 59.1% 13.6% 27.3% 0.33* [0.01, 0.66]
Poverty 25 82 1.2% 51.2% 47.6% −0.30** [−0.47, −0.12]
Child poverty 6 14 0.0% 57.1% 42.9% −0.16* [−0.24, −0.08]
Disadvantaged students 12 25 4.0% 96.0% 0.0% −0.19 [−0.41, 0.02]
Unemployment 10 23 8.7% 60.9% 30.4% −0.36* [−0.63, −0.09]
Inequality 24 36 0.0% 72.2% 27.8% −0.36*** [−0.52, −0.19]
Gini index 19 28 0.0% 67.9% 32.1% −0.36*** [−0.57, −0.15]
SES inequality 2 3 0.0% 100.0% 0.0% −0.48* [−0.82, −0.15]
Job status inequality 3 3 0.0% 66.7% 33.3% −0.26* [−0.26, −0.25]
Heterogeneity 20 60 23.3% 20.0% 56.7% 0.16 [−0.18, 0.50]
Share of immigrants 4 12 16.7% 16.7% 66.6% 0.04 [−0.09, 0.16]
Share of immigrant students 9 15 6.7% 13.3% 80.0% 0.05 [−0.14, 0.25]
General development 50 144 56.3% 4.2% 39.6% 0.26*** [0.16, 0.35]
HDI 15 44 63.6% 9.1% 27.3% 0.27** [0.08, 0.46]
Adult literacy 29 70 44.3% 1.4% 54.3% 0.27* [0.17, 0.36]
Life expectancy 4 6 33.3% 0.0% 66.7% 0.41* [0.03, 0.80]
Fertility rate 3 3 0.0% 100.0% 0.0% −0.48* [−0.65, −0.31]
Child mortality 3 5 0.0% 100.0% 0.0% −0.33* [−0.56, −0.11]
Crime/Violence rate 3 4 25.0% 75.0% 0.0% −0.16 [−0.40, 0.08]
Educational relevant aspects of national cultures 8 17 17.6% 11.8% 27.8% −0.18 [−0.50, 0.14]
Shadow education 4 12 0.0% 16.7% 83.3% −0.17* [−0.26, −0.08]
Educational aspirations 2 3 100.0% 0.0% 0.0% 0.27 n/a
Student aid/grants 2 2 0.0% 0.0% 100.0% n/a n/a
Culture
Corruption 4 5 −0.25 [−0.82, 0.32]
Democracy 4 6 0.0% 0.0% 100.0% 0.14 [−0.84, 1.22]
Long-term orientation (Hofstede) 6 18 100.0% 0.0% 0.0% 0.53*** [0.28, 0.78]
Uncertainty avoidance (Hofstede) 6 17 0.0% 58.8% 41.2% −0.19** [−0.32, −0.07]
Power distance (Hofstede) 7 18 0.0% 27.8% 72.2% −0.15** [−0.25, −0.05]
Masculinity (Hofstede) 6 17 23.5% 0.0% 76.5% 0.02 [−0.01, 0.04]
Indulgence (Hofstede) 3 8 0.0% 0.0% 100.0% 0.03 [−0.10, 0.16]
Individualism (Hofstede) 7 18 33.3% 0.0% 66.7% 0.11 [−0.03, 0.26]
Embeddedness (Schwartz) 2 10 30.0% 70.0% 0.0% −0.28 [−0.57, 0.00]
Self-expression (Inglehart) 3 14 50.0% 0.0% 50.0% 0.21* [0.07, 0.36]
Secular authority (Inglehart) 3 14 100.0% 0.0% 0.0% 0.69* [0.57, 0.81]
Monumentalism (Minkov) 3 10 0.0% 100.0% 0.0% −0.94* [−1.01, −0.83]
Gender egalitarianism (GLOBE) 2 8 62.5% 0.0% 37.5% 0.25* [0.06, 0.44]
In-group collectivism (GLOBE) 2 8 87.5% 0.0% 12.5% 0.26* [0.07, 0.45]
Power distance (GLOBE) 2 8 25.0% 37.5% 37.5% −0.01 [−0.10, 0.09]
Uncertainty avoidance (GLOBE) 2 8 0.0% 87.5% 12.5% −0.45* [−0.73, −0.16]
Gender equity 5 7 42.9% 42.9% 14.3% 0.01 [−0.08, 0.10]
Well-being 25 4 32.0% 44.0% 24.0% 0.10 [−0.25, 0.45]
  • Note : Average effect sizes are calculated from unconditional multilevel models unless noted otherwise. * p  < 0.05; ** p  < 0.01; *** p  < 0.001.
  • Abbreviation: m , Number of studies; k , Number of effect sizes; +, statistically significant and positive; −, statistically significant and negative; Nss, not statistically significant; Z r , average effect size; GDP, gross domestic product; GNP, gross national product; GNI, gross national income; SES, socioeconomic status (as measured by PISA or IEA studies); HDI, Human Development Index.
  • a Indicator was reversed when combined with the general overarching factor to which it belongs.
  • b Average effect sizes and respective confidence intervals were calculated based on Hedges and Olkin’s ( 1985 ) random-effects method.

General affluence

General affluence was the most popular system-level characteristic included across studies, with most of them treating it as a control variable as it is recognised to be one of the most influential confounders of between-country comparisons of educational performance (Duru-Bellat & Suchaut,  2005 ). Different variables were used to measure the level of affluence or wealth of a system, of which gross domestic product (GDP) per capita was the most frequently used. Alternative measures of general affluence were in terms of low levels, or lack, of affluence, including various poverty measures (e.g., child poverty, proportion of economically disadvantaged students). Not all studies reported significant associations between the different indicators of affluence and student learning outcomes, but when significant effects were reported they usually showed a positive association. As a whole, the average effect size of general affluence was 0.27 ( p  < 0.001). The positive association between affluence and educational outcomes is explained in several ways across the studies included in this synthesis. Some authors explain more direct influences, such as the possibilities that more affluent systems have to allocate more funds to education (e.g., Chiu & Xihua,  2008 ; Grilli et al.,  2016 ; Rodríguez-Santero & Gil-Flores,  2018 ), while others also describe more indirect pathways, like how wealthier systems also tend to have lower levels of inequality (e.g., Chiu,  2010 ; Chiu & Khoo,  2005 ; Condron,  2013 ), better living standards in terms of adequate nutrition and healthcare (e.g., Chiu & Xihua,  2008 ), and more social and cultural resources—such as libraries and museums—available for students to benefit from them (e.g., Chiu,  2010 ; Condron,  2013 ).
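Because the synthesis is carried out in the Fisher's Z metric, the pooled estimate can be read back on the correlation scale; for the overall affluence effect this gives approximately

```latex
r = \tanh(Z_r) = \tanh(0.27) \approx 0.26 .
```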

Inequality

The most frequent variable used to measure inequality was the Gini index, which measures the extent to which the distribution of income deviates from a perfectly equal distribution (World Bank, n.d.). Other indicators used across studies were either the standard deviation of the socioeconomic status of students within a system or the standard deviation in parents' highest job status within a system, both gathered from student background questionnaires of ILSAs. Although there was a large percentage of non-significant findings, all the significant effects consistently indicated a negative association between the level of inequality in a system and student learning outcomes. According to Condron (2013), higher levels of inequality in a system may affect student learning outcomes through greater investment in the rich and blocked opportunities for the poor. Additionally, Chiu (2015) explains that higher levels of inequality mean that resources are allocated inefficiently, and a less equal distribution of resources leads to a lower educational value for all students, on average.
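For reference, one common way of writing the Gini index for incomes $x_1, \ldots, x_n$ with mean $\bar{x}$ (a standard textbook definition, not taken from the reviewed studies) is

```latex
G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}},
```

which equals 0 under perfect income equality and approaches 1 as income becomes concentrated in a single recipient.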

Heterogeneity of population

This subtheme was included in Scheerens' (2016) framework as a factor that could affect both performance levels and learning disparities. In our sample of studies, the general picture is that there is no significant association between the heterogeneity of the population and student learning outcomes. However, this overview comes only from a few studies. The most popular variables were the proportion of immigrants in a country, either as a share of the total population (e.g., Fossati, 2011; He et al., 2017) or the share of immigrant students (e.g., Aloisi & Tymms, 2018; Edwards & Garcia Marin, 2015; Konan et al., 2010; Rodríguez-Santero & Gil-Flores, 2018). Ethnic and religious diversity were also used as measures of the heterogeneity of the population, but they were each included in only one study and therefore are not shown in Table 2.

Level of development

Instead of focusing on the affluence of a system, some researchers use measures of level of development to control for differences across contexts. Nevertheless, they are still sometimes mentioned as measures of SES or prosperity (e.g., Long, 2014; van Hek et al., 2019). These measures are either composite measures of the general level of development or individual indicators. An example of a composite measure, and one of the variables most frequently used within this category, is the Human Development Index (HDI). The HDI includes information on countries' life expectancy, expected and mean years of schooling, and gross national income per capita (United Nations Development Programme, 2020). The relationship between HDI and student achievement was statistically significant and positive in 63% of the effect sizes reported. Another frequently used indicator was the level of adult literacy in a system or country, measured by the proportion of people aged 16–64 who have finished at least secondary education. Examples of individual indicators of development used in the studies in our sample are life expectancy, fertility rate, child mortality and crime rate, but they only come from a few studies (e.g., Bratti et al., 2007; Burhan et al., 2017; DeAngelis, 2019; Gyimah-Brempong & Gyapong, 1991; Lynn et al., 2017).

Education relevant aspects of national cultures

This category refers to the valuing of education and other educationally relevant aspects of societies. Examples of variables included in this category are the prevalence of student aid (e.g., Jacques & Brorsen, 2002), the prevalence of shadow education (i.e., out-of-school lessons) (Baker et al., 2001) and educational aspirations, that is, how far in education students within a system expect to go (Shen, 2001). However, it was only possible to calculate an average effect size for the prevalence of shadow education, which showed a negative association with student learning outcomes.

Societal values

Besides factors related to the economic conditions of the system, there were several indicators related to attributes of societies and cultures which, a priori, might not be ‘educational relevant’—as framed in Scheerens' ( 2016 ) model—and were therefore added as a separate category. Most of the studies about cultural values used specific values frameworks such as the one by Hofstede (e.g., Chiu & Klassen,  2009 ; Fang et al.,  2013 ; He et al.,  2017 ) and the GLOBE study values (e.g., Chiu & Chow,  2015 ; He et al.,  2017 ). Other societal values or attributes that were included in this category were measures of gender equality in the system (e.g., Ayalon & Livneh,  2013 ; Julià,  2016 ; van Hek et al.,  2019 ), the extent of corruption (Berkovich,  2016 ; Mikk,  2015 ), and democracy (e.g., Hong,  2015 ; Rindermann,  2007 ), but these attributes were not as frequently studied as values from cultural values frameworks. The framework developed by Hofstede was the most used, with the value of long-term orientation showing consistent significant associations with student learning outcomes. Systems—in this case countries—with long-term orientation are more likely to achieve better student learning outcomes. Societies with long-term orientation values emphasise virtues related to future rewards, such as determination, perseverance and thrift (Hofstede et al.,  2010 ). Other values with significant average effect sizes were uncertainty avoidance—both from Hofstede's framework and from the GLOBE values study, Hofstede's power distance, GLOBE values of gender egalitarianism and in-group collectivism, and Inglehart's self-expression and secular authority.

System ecology and structural reform

The variables in this group are mostly related to educational policies, but we have also included the conditions and policies of the school learning environment, as well as policies regarding social welfare and the integration of immigrants into the system. Overall, there were significant associations for most of the variables, but the analysis shows small effect sizes and mixed results across studies in terms of finding significant effects and the direction of these effects. Table 3 shows the results of the vote counting along with the average effect sizes for specific factors and variables.

TABLE 3 System ecology and structural reform: distribution of reported results and average effect sizes across studies

Columns: factor or variable; m (number of studies); k (number of effect sizes); percentage of reported results that were statistically significant and positive (+), statistically significant and negative (−), or not statistically significant (Nss); average effect size (Z r); and its 95% confidence interval (CI).
Competition 24 73 49.3% 5.5% 45.2% 0.14** [0.08, 0.21]
Private enrolment 16 36 36.1% 5.6% 58.3% 0.17** [0.03, 0.31]
Private operation 4 9 100.0% 0.0% 0.0% 0.18 [0.00, 0.00]
School choice 3 7 42.9% 0.0% 57.1% 0.09* [0.09, 0.10]
Competition for students (school reports) 3 5 20.0% 0.0% 80.0% 0.19 [−0.02, 0.40]
School density 3 5 40.0% 20.0% 40.0% −0.01 [−0.18, 0.16]
Functional decentralisation 28 124 30.6% 16.1% 53.2% 0.11*** [0.06, 0.16]
General indicators 5 15 46.7% 6.7% 46.7% 0.10* [0.06, 0.13]
Financial resources 15 37 21.6% 21.6% 56.8% 0.04 [0.00, 0.08]
Personnel management 6 15 46.7% 0.0% 53.3% 0.06 [0.00, 0.10]
Pedagogical aspects 19 57 28.1% 17.5% 54.4% 0.14** [0.06, 0.22]
Structural differentiation 29 140 15.7% 27.9% 56.4% −0.05 [−0.13, 0.02]
Tracking (overall) 17 55 7.3% 25.5% 67.3% −0.13** [−0.21, −0.05]
Age of first selection 5 7 0.0% 28.6% 71.4% 0.19* [0.02, 0.36]
Years tracked 5 7 14.3% 14.3% 71.4% 0.11 [−0.13, 0.35]
Number of tracks 8 14 0.0% 35.7% 64.3% −0.15** [−0.26, −0.05]
Early tracking (dummy) 7 27 7.4% 25.9% 66.7% −0.07* [−0.15, 0.00]
General indicators 4 7 0.0% 28.6% 71.4% −0.06* [−0.09, −0.03]
Within-school ability grouping 4 8 0.0% 37.5% 62.5% −0.21* [−0.35, −0.06]
Prevalence of vocational programmes 8 12 8.3% 8.3% 83.3% 0.00 [−0.15, 0.14]
School selectivity 7 11 27.3% 9.1% 63.6% 0.08 [−0.02, 0.18]
Grade repetition 5 10 10.0% 40.0% 50.0% −0.27*** [−0.41, −0.13]
Various comprehensive system indicators 3 16 43.8% 37.5% 18.8% 0.44 [−0.08, 0.97]
Grade variability 3 7 0.0% 57.1% 42.9% −0.20 [−0.39, −0.01]
Evaluation and accountability 37 166 27.1% 11.4% 61.4% 0.08* [0.02, 0.15]
Central examinations 26 60 36.7% 11.7% 51.7% 0.04 [−0.03, 0.12]
External school inspections 4 8 0.0% 37.5% 62.5% −0.09* [−0.14, −0.04]
Public reports of performance 5 8 0.0% 12.5% 87.5% −0.06 [−0.20, 0.09]
School self-evaluation 3 11 0.0% 18.2% 81.8% 0.00 [−0.03, 0.03]
Teacher evaluation 5 8 25.0% 12.5% 62.5% −0.11 [−0.45, 0.24]
Consequential 7 40 12.5% 5.0% 82.5% 0.06* [0.01, 0.11]
School learning environment 21 148 60.1% 15.5% 24.3% 0.19*** [0.11, 0.26]
Enrolment 8 27 40.7% 0.0% 59.3% 0.19** [0.05, 0.34]
Discipline 4 12 41.7% 0.0% 58.3% 0.16* [0.06, 0.27]
Partnerships 3 34 70.6% 11.8% 17.6% 0.15* [0.14, 0.16]
Absenteeism 7 17 70.6% 0.0% 29.4% −0.38*** [−0.52, −0.24]
Pre-school 10 18 16.7% 16.7% 66.7% 0.02 [−0.25, 0.29]
Enrolment 4 7 0.0% 42.9% 57.1% −0.21 [−0.58, 0.15]
Years of attendance 6 11 27.3% 0.0% 72.7% 0.24 [0.0, 0.48]
Migration integration policies 3 20 30.0% 0.0% 70.0% 0.15* [0.02, 0.27]
Social welfare 16 5 43.7% 12.6% 43.7% 0.25* [0.17, 0.34]
  • Abbreviations: m, number of studies; k, number of effect sizes; +, statistically significant and positive; −, statistically significant and negative; Nss, not statistically significant; Zr, average effect size.
  • a Average effect sizes and respective confidence intervals were calculated based on Hedges and Olkin’s ( 1985 ) random-effects method.
  • b Indicator was reversed when combined with the general overarching factor to which it belongs.
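
As a concrete illustration of the pooling behind the Zr column (see note a), the following sketch computes a random-effects average of correlation-type effect sizes in the Fisher z metric. The data are invented and the DerSimonian-Laird estimator is used as a stand-in for the Hedges and Olkin (1985) approach; it is not the code used for the analyses reported here.

```python
# Minimal sketch of random-effects pooling of correlation-type effect sizes
# (Fisher z metric). Illustrative values only; not the meta-analysis data.
import numpy as np

def pool_random_effects(r, n):
    """Pool correlations r (with sample sizes n) via Fisher z and a
    DerSimonian-Laird estimate of between-study variance (tau^2)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                 # Fisher z transform
    v = 1.0 / (n - 3.0)               # within-study variance of z
    w = 1.0 / v                       # fixed-effect weights
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)            # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(z) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    z_bar = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (z_bar - 1.96 * se, z_bar + 1.96 * se)   # 95% CI in the z metric
    return z_bar, ci, tau2

# Hypothetical effect sizes for one factor (e.g., 'competition')
z_bar, ci, tau2 = pool_random_effects(r=[0.10, 0.22, 0.05, 0.18],
                                      n=[35, 40, 60, 28])
print(f"Zr = {z_bar:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.3f}")
```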

Functional decentralisation

Functional decentralisation, or autonomy, was typically measured by the degree to which schools were responsible for decisions regarding curriculum, content, evaluation, budget and resource allocation, and personnel management (e.g., Agasisti & Cordero-Ferrera, 2013; Álvarez et al., 2007; Isac et al., 2011), or by a general indicator that covered two or more of these areas of decision-making (e.g., Falch & Fischer, 2008; Rodríguez-Santero & Gil-Flores, 2018; Woessmann, 2003). The average effect size for the overarching theme was 0.11 (p < 0.001), although the vote-count analysis showed mixed results across studies. The only aspect of functional decentralisation that showed a significant average effect size was decentralisation of pedagogical aspects, which included aspects related to content, curriculum and evaluation. Decentralisation, or local autonomy, has been a highly discussed educational policy (Hanushek et al., 2013). According to Hanushek et al., the main idea behind favouring decentralisation policies is that local decision-makers have a better understanding of their local school systems and could therefore make better decisions to improve student performance. On the other hand, some of the arguments against decentralisation (i.e., in favour of centralisation), also mentioned by Hanushek et al. (2013), are that schools might pursue goals other than maximising student achievement, and that it is difficult to maintain a common standard across the whole educational system.

Evaluation and accountability arrangements

Results indicate a significant and positive association between accountability and student learning outcomes (Zr = 0.08, p < 0.05). However, most of the effect sizes reported by the studies in our sample were not statistically significant. One of the most popular indicators of accountability was the existence of central or external examinations of student achievement, followed by the application of rewards or sanctions to schools, or teachers, based on student achievement. Other indicators used were the extent to which schools in a system provide public reports of performance, whether monitoring by external evaluators takes place, school self-evaluation (e.g., the percentage of students in schools that use achievement data to monitor progress over time or to compare themselves to other schools), and teacher evaluation (e.g., monitoring of teacher practices). The underlying belief behind accountability is that it creates 'incentives' to perform at high levels (e.g., Schuetz et al., 2013; Smith, 2016; Woessmann et al., 2007). Despite these levers through which accountability could potentially lead to improved performance, Smith (2016) also mentions the concern that increased accountability might motivate educators to manipulate the system to obtain better scores (e.g., narrowing the curriculum, teaching to the test, shaping the pool of test takers).

Structural differentiation

Sometimes referred to as stratification or streaming, structural differentiation was, together with accountability, one of the most frequently studied factors across the literature on student learning outcomes. A frequent way to measure the level of differentiation of a system was in terms of tracking (e.g., Chapuis & Causa, 2010; Hanushek & Wößmann, 2006; Horn, 2009; van Hek et al., 2019), which refers to the grouping of students, mainly in secondary education, into different educational tracks or schools that differ in educational programmes or objectives (Bol & van de Werfhorst, 2013; Jakubowski, 2010). Overall, results indicate a negative, non-significant relationship, although there is variation depending on the factors or specific indicators used to measure the structural differentiation of a system. Results for indicators related to the tracking of students into different educational pathways (e.g., an early-tracking dummy variable, number of tracks, years tracked) suggest that this practice has a negative association with student learning outcomes. Consistent with this, the age of first selection showed a positive relationship with student learning outcomes: systems that select students into different tracks at a later age tend to have higher student learning outcomes. The practice of grade repetition had an average effect size of −0.27 (p < 0.001), indicating a negative association with student learning outcomes, although few studies reported effects for this indicator.

School competition and choice

Findings suggest that school competition and choice have a positive association with student learning outcomes. According to economic theory, school competition may improve school performance as it provides ‘incentives’ to increase efficiency and effectiveness (Agasisti,  2011 ). The most frequently used variable within this category was the percentage of students enrolled in private schools, which also shows a positive association with student learning outcomes ( Z r  = 0.21, p  < 0.01). Despite significant average effect sizes, the distribution of results shows a mixed pattern, with several non-significant findings.

Migration integration policies

Migration integration measures are mostly based on data from the Migrant Integration Policy Index (MIPEX) (e.g., Arikan et al., 2017; Fossati, 2011; He et al., 2017). This index captures countries' policies, in several domains, aimed at integrating migrants into society (Solano & Huddleston, 2020). Most of the effect sizes for the MIPEX indicator, either as a general score or as scores for specific policy areas (e.g., education, anti-discrimination), were not statistically significant (70%), but when results were significant, the effect was always positive. The average effect size of this category was 0.04 (p < 0.05).

Pre-school coverage

Pre-school coverage is considered an institutional factor in the economics of education literature (Woessmann, 2010), measured either in terms of enrolment or duration. In some cases, it is considered an important aspect of the time resources invested in education (see OECD, 2013). According to the OECD (2013), participation in pre-primary school can promote school readiness, particularly if the programme lasts longer than a year. The average effect size for the general category was 0.02 (p = 0.10), resulting from opposite average effect sizes for the two variables in this category: the enrolment rate had a negative association with student learning outcomes, whereas the opposite occurred for the average years in pre-school.

School learning environment

This subtheme includes variables related to the performance or functioning of the system in terms of enrolment, completion and dropout rates, as well as variables mentioned in the DMEE (e.g., absenteeism, discipline, partnerships, teacher collaboration). A distinction needs to be made, however: the DMEE refers to the policies that are in place to deal with discipline, partnerships and teacher collaboration, whereas the studies in the sample focused on the prevalence of those aspects. For example, Shen (2006) includes the average percentage of student absenteeism in a school day, as reported in the TIMSS school questionnaire, and He et al. (2017) include country mean scores of classroom discipline, as reported by teachers in TALIS 2013. Results show that education systems with better coverage in terms of enrolment tend to have better student learning outcomes (Zr = 0.21, p < 0.01), whereas higher levels of student absenteeism at the system level (e.g., how often students skip or arrive late to school) have consistent negative associations with student learning outcomes (Zr = −0.38, p < 0.001). Few studies included variables for partnerships (e.g., among teachers, between schools and parents) or for student discipline, but the average effect sizes were significant and positive (see Table 3). Larger effect sizes were reported for different aspects of national policy on the learning environment (i.e., on partnerships, teacher collaboration), although they mainly come from one study by Kyriakides, Georgiou, et al. (2018).

Direct policies in the sense of malleable inputs and processes

The last group of factors includes variables that refer mainly to resources or inputs of educational systems. Other variables in this overarching theme refer to teacher training and qualifications, and to equity conditions of the system. Table 4 provides an overview of the factors and specific variables in this theme, along with the vote-counting results. As with system ecology and structural reform, most of the reported effects were not statistically significant.

Factor | m | k | + | − | Nss | Zr | 95% CI
Investment in education 103 556 35.6% 8.8% 55.6% 0.13*** [0.10, 0.16]
Financial 83 226 28.3% 11.9% 59.7% 0.16*** [0.11, 0.21]
Expenditures 79 180 21.7% 14.4% 63.9% 0.16*** [0.10, 0.21]
Expenditures (% of GDP) 14 23 21.7% 26.1% 52.2% −0.01 [−0.10 0.09]
Expenditure per student 53 109 24.8% 12.8% 62.4% 0.27*** [0.14, 0.39]
Teacher salaries 18 46 54.3% 2.2% 43.5% 0.24*** [0.13, 0.35]
Material resources 6 12 91.7% 8.3% 0.0% 0.16 [−0.12, 0.44]
Time resources 16 64 48.4% 6.3% 45.3% 0.09** [0.03, 0.15]
Instruction time 10 22 18.2% 9.1% 72.7% 0.01 [−0.09, 0.11]
Days in school 3 8 75.0% 12.5% 12.5% 0.04 [−0.01, 0.08]
Opportunity to learn 5 42 52.4% 0.0% 47.6% 0.16* [0.14, 0.17]
Homework time 2 6 16.7% 0.0% 83.3% 0.00 [−0.10, 0.10]
Extracurriculars 2 8 12.5% 0.0% 87.5% 0.15 [0.04, 0.26]
Human resources 39 100 19.0% 10.0% 71.0% 0.05* [0.01, 0.10]
Class size 14 30 3.3% 6.7% 90.0% −0.04 [−0.15, 0.06]
Pupil-teacher ratio 27 58 10.3% 29.3% 60.3% −0.07 [−0.15, 0.02]
Shortage of (qualified) teachers 4 12 25.0% 0.0% 75.0% 0.06 [−0.12, 0.24]
Teacher training and qualifications 26 100 35.0% 9.0% 56.0% 0.19*** [0.13, 0.24]
Training and development 4 14 21.4% 35.7% 42.9% 0.08 [−0.29, 0.44]
Teacher skills and qualifications 23 86 37.2% 4.7% 58.1% 0.22** [0.14, 0.31]
Teacher experience 7 17 17.6% 17.6% 64.7% 0.08 [−0.16, 0.33]
Teacher skills 2 6 83.3% 0.0% 16.7% 0.33* [0.3, 0.37]
Teacher education 17 56 33.9% 0.0% 66.1% 0.15* [0.03, 0.26]
Major or certification 5 21 57.1% 0.0% 42.9% 0.53* [0.33, 0.72]
At least master's degree 9 28 21.4% 0.0% 78.6% 0.10* [0.07, 0.13]
Equity-oriented conditions 7 14 50.0% 28.6% 21.4% 0.02 [−0.04, 0.08]
  • Note: Average effect size calculated from unconditional multilevel models unless noted otherwise. * p < 0.05; ** p < 0.01; *** p < 0.001.

Investment in education or educational resources

Across the articles in our sample, we identified a broad range of educational resources studied. Financial resources, including different operationalisations of educational expenditures (e.g., per student, as a proportion of GDP) and teacher salaries, were the type of resource that appeared most frequently across the studies in our sample. The overall average effect size was 0.16 (p < 0.001), with teacher salaries showing the highest average effect size within this category (Zr = 0.25, p < 0.001). Other types of resources included time, human and material resources, each with different indicators as well. Time resources were described, for example, through the length of the school year (in days), yearly instruction in hours, and minutes per week in regular lessons (e.g., Ainley & Thomson, 2006; Long, 2014; Scheerens et al., 2015). The category of human resources includes variables that described the availability of human resources (e.g., student-teacher ratio), and a separate category was developed to describe different attributes of teachers (see 'Teacher training and qualifications'). Finally, material resources included indicators such as the percentage of schools with a library, the quality of infrastructure (e.g., buildings, lights, classrooms), and the availability of pedagogical resources (e.g., books, video, laboratory material). Time and human resources had significant and positive average effect sizes (0.09 and 0.05, respectively), whereas the average effect size for material resources was not statistically significant, though it emerged from only a few studies.

Teacher training and qualifications

As a general category, results across studies in our sample indicate a significant and positive association with student learning outcomes (Zr = 0.19, p < 0.001), but different types of indicators were used. Furthermore, few studies included variables regarding teacher training and professional development, so the average effect size of this category mainly comes from effect sizes related to teacher qualifications (e.g., teacher educational background, experience, skills). As in previous subthemes, there was a wide variety of specific indicators. For example, experience was sometimes measured as the percentage of students taught by teachers with at least 3 years of experience (e.g., Akiba et al., 2007), the percentage of teachers with at least 3 years of experience (e.g., Caldas & Bankston III, 1999), or the system average years of teaching experience (e.g., Reyna, 2015). For teacher educational background, some studies included the percentage of teachers with a master's degree (e.g., Darling-Hammond, 2000; Moreau & McIntire, 1995) and others included the percentage of teachers with postgraduate diplomas (e.g., Heras Recuero & Olaberría, 2018; Li et al., 2017; Unnever et al., 2000). Out of these teacher qualifications, teacher skills (in numeracy, literacy and instruction) and the system share of teachers who have a major and/or certification to teach their subject had the largest average effect sizes (0.33 and 0.53, respectively). However, the average effect size for teacher skills comes from only two studies, by Hanushek et al. (2019) and Meroni et al. (2015).

Equity-oriented conditions

Although the original category in Scheerens' model refers to policies, none of the studies in our sample included policy variables on this topic. Nevertheless, some studies included variables that reflect the degree to which systems have equity in issues such as the allocation of qualified teachers or educational resources, and equity in the quality of physical infrastructure. The average effect size for this category was 0.05, but it comes from only a few studies (e.g., Anastasiou et al., 2018; Chapuis & Causa, 2010; Chiu, 2015; Condron, 2013).

Searching for features that explain variation in reported effect sizes

For the specific factors and indicators that had statistically significant mean effect sizes and between-study variation, we conducted conditional multilevel models introducing study characteristics as possible moderators of the variation in effect sizes across studies (see Table 5). The clearest pattern concerns the methodological nature of the studies: effect sizes were, on average, smaller when they came from studies that considered more than one level of clustering in their data (i.e., students, classrooms, schools and countries), and larger when the reported estimates were correlation coefficients rather than regression coefficients. Regarding research design, in some cases the results indicate that cross-sectional studies report larger effect sizes. The level of education of interest in the studies in our sample was also significant, but mostly for the theme of general affluence and its respective measures: studies in which the outcome was in primary education, or in a combination of primary and secondary education, showed a smaller association between affluence and student learning outcomes compared to studies only about secondary education. The type of outcome measured in each study was, generally, not statistically significant in explaining differences in effect sizes across studies. Compared to effect sizes from secondary analyses of IEA data or from other sources of data, effect sizes from secondary analyses of OECD data were significantly larger in half of the models.

Study characteristic | GAF | GDP | POV | EMP | DEV | DPE | CMP | ACC | SLE | ABS | TCK | EXP | SAL | HRS
(each cell shows the moderator coefficient, with its p-value in parentheses)
OECD 0.11 (0.03)* 0.19 (0.08) 0.08 (0.65) 0.41 (0.02)* 0.17 (0.05)* 0.20 (0.00)* 0.09 (0.03)* −0.07 (0.23) 0.13 (0.11) n.a. −0.42 (0.00)* 0.33 (0.00)* 0.04 (0.72) −0.13 (0.00)*
IEA 0.06 (0.24) 0.06 (0.61) −0.02 (0.79) n.a. 0.25 (0.01)* n.a. 0.02 (0.51) 0.22 (0.11) 0.25 (0.00) * 0.10 (0.33) 0.08 (0.09) 0.00 (0.99) −0.45 (0.00) *
Country 0.06 (0.25) 0.06 (0.68) −0.20 (0.38) n.a. 0.16 (0.06) 0.09 (0.21) −0.09 (0.17) −0.01 (0.89) 0.23 (0.00) * 0.22 (0.06) n.a. 0.29 (0.01)* 0.09 (0.39) −0.16 (0.01)*
> 30 0.04 (0.47) −0.02 (0.83) −0.57 (0.13) n.a. 0.17 (0.07) 0.18 (0.02)* −0.06 (0.36) −0.18 (0.00)* 0.17 (0.01)* 0.22 (0.06) −0.09 (0.22) −0.12 (0.43) 0.07 (0.26) −0.12 (0.11)
Developing 0.02 (0.69) −0.04 (0.75) −0.19 (0.51) n.a. 0.14 (0.11) 0.11 (0.12) −0.05 (0.42) 0.02 (0.81) 0.19 (0.01)* 0.22 (0.06) −0.23 (0.03)* 0.42 (0.01)* 0.00 (0.99) −0.12 (0.11)
Primary −0.12 (0.01)* −0.17 (0.00)* −0.04 (0.13) −0.41 (0.02)* −0.09 (0.31) n.a. n.a. 0.08 (0.27) −0.13 (0.09) n.a. 0.23 (0.00) * n.a. −0.02 (0.84) 0.08 (0.15)
Math −0.10 (0.19) −0.12 (0.13) 0.12 (0.61) 0.16 (0.14) −0.09 (0.60) 0.19 (0.03)* 0.08 (0.22) 0.19 (0.01)* −0.08 (0.33) 0.26 (0.02)* 0.27 (0.08) 0.30 (0.00)* −0.03 (0.84) 0.02 (0.75)
Language −0.09 (0.26) −0.08 (0.34) 0.07 (0.76) 0.04 (0.13) −0.10 (0.58) 0.25 (0.00)* 0.04 (0.52) 0.00 (0.95) −0.37 (0.00)* n.a. −0.01 (0.92) 0.26 (0.00)* −0.03 (0.82) 0.02 (0.74)
Science −0.09 (0.26) −0.14 (0.06) −0.04 (0.83) n.a. 0.13 (0.53) 0.01 (0.61) 0.00 (0.91) 0.04 (0.53) −0.05 (0.54) 0.14 (0.21) n.a. 0.26 (0.00)* −0.08 (0.51) −0.12 (0.21)
Cross-sectional 0.18 (0.00)* 0.13 (0.24) 0.11 (0.00)* 0.05 (0.85) 0.00 (0.97) n.a. −0.36 (0.27) 0.00 (0.98) n.a. n.a. −0.04 (0.65) n.a. 0.11 (0.33) 0.01 (0.87)
Multilevel −0.24 (0.00)* −0.35 (0.00)* −0.17 (0.11) −0.39 (0.03)* −0.28 (0.01)* −0.22 (0.01)* −0.10 (0.01)* −0.17 (0.00) * 0.09 (0.17) n.a. 0.06 (0.00) * −0.39 (0.00) * −0.14 (0.05) −0.16 (0.01) *
Correlation 0.36 (0.00)* 0.40 (0.00)* 0.31 (0.24) 0.80 (0.00)* 0.62 (0.00)* 0.30 (0.00)* 0.08 (0.17) 0.14 (0.11) 0.34 (0.00)* 0.24 (0.01)* n.a. 0.48 (0.00)* 0.12 (0.29) 0.02 (0.71)
  • Note: Each study characteristic was modelled separately to avoid multicollinearity issues. n.a. = it was not possible to test the effect of the study characteristic since almost all of the studies that assessed this factor belong to only one of the groups compared. * p < 0.05.
  • Abbreviations: GAF, general affluence; GDP, gross domestic product; POV, poverty measures; EMP, unemployment; DEV, general development; DPE, decentralisation (pedagogical aspects); CMP, competition; ACC, accountability; SLE, school learning environment; ABS, absenteeism; TCK, tracking (dummy); EXP, educational expenditure per student; SAL, teacher salaries; HRS, human resources.
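
To illustrate the logic of the conditional (moderator) analyses summarised in Table 5, the sketch below fits a precision-weighted meta-regression of hypothetical Fisher z effect sizes on a single study characteristic. It is a simplified, single-level stand-in for the two-level models actually used (see Appendix A), with invented data and variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical effect sizes (Fisher z), their sampling variances, and a
# study-level moderator indicating whether the study used multilevel models.
df = pd.DataFrame({
    "z":          [0.21, 0.18, 0.05, 0.07, 0.30, 0.02, 0.04, 0.25],
    "v":          [0.010, 0.012, 0.008, 0.009, 0.015, 0.007, 0.011, 0.013],
    "multilevel": [0, 0, 1, 1, 0, 1, 1, 0],
})

# Weighted meta-regression: each effect size weighted by its precision (1/v).
# The published analysis nests effect sizes within studies (two levels); this
# single-level version only illustrates the moderator logic.
fit = smf.wls("z ~ multilevel", data=df, weights=1.0 / df["v"]).fit()
print(fit.params)   # a negative 'multilevel' slope mirrors the Table 5 pattern
```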

The present meta-analysis reveals that the system-level factors studied so far are very broad, ranging from financial inputs (e.g., educational expenditures, country wealth, teacher salaries), to educational indicators (e.g., quantity of teaching, school learning environment), and even to a diverse set of characteristics of societies (e.g., cultural values, level of corruption, gender equity). Table 6 provides an overview of the major categories in which the system-level factors were grouped in this study. The literature review presented by Scheerens (2016) as part of the integrated multilevel model of education helped categorise the broad range of system-level factors identified. Although the factors fit well with Scheerens' proposed framework, our synthesis of effect sizes shows that not all themes, and not all factors within each theme, were significantly associated with learning outcomes. In summary, factors within the overarching theme of direct policies in the sense of malleable inputs and processes had, in general, associations in the expected direction. The associations between factors related to system ecology and structural reform and student outcomes were mainly inconsistent, with small effect sizes for those factors that were significant. More consistent, and stronger, associations were found for factors related to the antecedents determined by the larger societal context, specifically system affluence, level of inequality, and certain cultural values.

Antecedents determined by the larger societal context:

  • General affluence (GDP per capita, country SES from ILSAs scores, child poverty, unemployment rate)
  • Level of development (Human Development Index, child mortality, life expectancy)
  • Societal (in)equality (Gini index, between-school SES variance, global gender gap index)
  • Heterogeneity of the population (percentage of immigrants, percentage of immigrant students, religious diversity)
  • Education relevant aspects of national cultures (prevalence of shadow education, democracy, corruption)
  • Societal values (long-term orientation, Monumentalism-Flexibility)

System ecology and structural reform:

  • Functional decentralisation (general, of financial resources, of pedagogical aspects, of personnel management)
  • Evaluation and accountability arrangements (central examinations, teacher evaluation, school self-evaluation)
  • Structural differentiation of secondary education (age of first selection, number of tracks, years tracked)
  • School competition and choice (private enrolment, schools competing for students)
  • School learning environment (enrolment rates, discipline, absenteeism)
  • Pre-school coverage (enrolment rate, average years in pre-school)
  • Migration integration policies (MIPEX)

Direct educational policies in the sense of malleable inputs and processes:

  • Investment in education
  • Financial resources (educational expenditures as proportion of GDP, per-pupil expenditures)
  • Teacher salaries (average teacher salary, teacher salary relative to average earnings)
  • Time resources (teaching hours per year, average learning time per subject)
  • Opportunity to learn (homework time, extracurriculars)
  • Material resources (country mean school resources, quality of infrastructure)
  • Human resources (pupil-teacher ratio, class size)
  • Teacher qualifications and training (teacher experience, teacher education, teacher certification, teacher skills)
  • Equity-oriented conditions (special education coverage, school resource inequality, equity in teacher shortage)

Theoretical implications

From a theoretical standpoint, our results provide support for developing parsimonious models of educational effectiveness at the system level. In this sense, these results also provide some support for the DMEE (Creemers & Kyriakides, 2008), in which, for example, variables related to structural reform (i.e., accountability, decentralisation, stratification) are not included in the model. However, it is difficult to conclude whether the system-level factors included in the DMEE (i.e., direct educational policies to ensure the quantity and quality of learning, as well as the quality of the school learning environment) are relevant for system effectiveness, as only a few studies searched for relationships between these factors, as they are framed in the DMEE, and educational outcomes.

The results of our synthesis also give some indications on characteristics that could be used to describe the wider educational environment, as named in the DMEE, or the antecedents determined by the larger societal context, as named in the integrated multilevel model of education. Some of these characteristics are already part of the integrated multilevel model of education framework (i.e., general affluence, level of inequality) and others could be added (e.g., cultural values, level of development).

Although Creemers and Kyriakides (2008) state that the wider educational environment is partly defined by the political and sociocultural context in which students, teachers and schools operate, the authors of the DMEE opted to focus on more malleable elements. Specifically, they name two elements of the wider educational environment: the support provided to schools by different stakeholders (e.g., companies, universities, educational researchers), and stakeholders' expectations of learning and learning outcomes. Although variables like affluence, inequality and level of development are not very malleable, and are not within the reach or responsibilities of ministries of education or other educational policy makers, it is still important to acknowledge their relevance for education. Currently, the DMEE does not mention these conditions (e.g., affluence, development, inequality) at the system level, and the model could be improved if it explained how these conditions affect the effectiveness of educational policies. For example, Scheerens (2016) mentions how the effect of accountability sometimes disappears when analyses adjust for socioeconomic background at the country and student levels, and school autonomy seems to be positive in developed countries but negative in developing ones (Hanushek et al., 2013).

Another aspect to consider for the wider educational environment or the larger societal context is that of societal values. Our synthesis showed that not all values are educationally relevant aspects of society, as they are named in the integrated multilevel model of education, but there were promising values such as long-term orientation and Monumentalism-Flexibility. Based on our results, cultural values such as these could perhaps be used to describe the expectations of learning and learning outcomes that are mentioned in the DMEE, or the appreciation of education and the motivation of the population to do well in education mentioned by Scheerens (2016). For example, one could argue that Monumentalism-Flexibility appears to describe important differences in how learning is viewed and approached across cultures. Since cultures with higher flexibility scores tend to emphasise self-improvement over self-enhancement (Minkov et al., 2017), it is possible that such cultures place a higher value on learning as a means of improving the self.

The importance of the socioeconomic characteristics of the systems and their cultural values is in line with popular psychology theories of child development, such as Bronfenbrenner's ecological systems theory (Bronfenbrenner,  1979 ) and the subsequent bioecological model (Bronfenbrenner & Ceci,  1994 ). In fact, these theories were mentioned in some of the studies included in the meta-analysis (e.g., Anastasiou et al.,  2018 ; Chiu & Chow,  2015 ; Giménez et al.,  2018 ; Lee,  2011 ; Schulz et al.,  2011 ). In these theories, child development is viewed as a complex system of relationships situated at multiple levels: the microsystem, which is the child's immediate environment, including parents, teachers, classmates; the mesosystem, formed by interactions between different elements of the microsystem (e.g., between parents and teachers); the exosystem, including formal and informal social structures such as neighbourhoods and the workplaces of parents; the macrosystem, focusing on elements of the society and culture; and the chronosystem, including environmental changes such as major life transitions and historical events. In a way, this is similar to the multilevel structure of EER models, such as the DMEE and the integrated multilevel model of education. To strengthen their theoretical foundations, especially at the system level, these models could draw from such psychological theories.

Implications for research

During the process of conducting this meta-analysis it was possible to identify not only what kind of system-level factors have been studied so far but also areas where more research is needed—both in terms of research topics and methodological approaches.

Although we were able to identify a diverse set of system-level factors, there were also some factors included in the DMEE and in the integrated multilevel model of education that were not examined by any study in our sample. This indicates that more research is needed to determine their relevance for system effectiveness and to validate their inclusion in theoretical models of educational effectiveness. The study characteristics evaluated in the conditional multilevel models (e.g., whether the outcome was cognitive or non-cognitive, the subject domain, the type of system studied, the statistical techniques used) also pointed to current research gaps, both in terms of topics and of methodological issues.

Topics for further research

The factors from the DMEE that did not appear in any study in our sample were the evaluation of national policies (i.e., what sort of evaluation mechanisms are in place to collect data about the appropriateness and implementation of educational policies), the national or regional policy for improving the school learning environment, and the support provided to schools by various stakeholders (e.g., companies, universities, educational researchers). For the integrated multilevel model of education, the factors that did not appear in our synthesis were community involvement in schooling and the status of teachers. Although the overarching theme of direct policies in the sense of malleable inputs was present in the sample of studies, the second part of the analysis, focusing on study characteristics, also showed that further research is needed on other types of systems besides countries (e.g., municipalities, states, districts), on subject domains beyond the core of mathematics, reading and science, and on affective and non-cognitive outcomes of schooling.

Another aspect that needs further research is the impact of actual policies. If we consider educational policies per se, in the sense of documents or guidelines provided by an educational authority, only the studies by Kyriakides, Creemers, et al. (2018) and Kyriakides, Georgiou, et al. (2018) covered this aspect of the DMEE. Other studies usually measured what is happening in the systems, which could reflect different policies in place, but this is not necessarily the case. Such studies sometimes obtain those measures by aggregating variables that are located at the level of classrooms (e.g., teacher characteristics, instruction time per subject as reported by students or teachers) or schools (e.g., the way schools make use of assessments), introducing statistical issues such as the ecological fallacy and the shift of meaning. This situation was also reflected in the theme of equity-oriented policies, included in the integrated multilevel model of education, as the studies in our sample looked at equity-related conditions (e.g., school resource inequalities, equity in teacher shortages) rather than at policies. The lack of studies looking at national policies could be explained by the large proportion of studies that were secondary analyses of ILSAs. Although ILSAs are a great resource that collects comparable information across educational systems, these assessments are not specifically designed to measure formal system-level policies (an exception would be the curriculum, or national context, questionnaires of IEA studies). In fact, researchers within the field of EER have pointed out the lack of data on national educational policies (Kyriakides et al., 2015) and the restricted range of educational process variables at this level (Reynolds et al., 2016). They have also called for sound assessment frameworks that consider the knowledge base of EER (e.g., Creemers & Kyriakides, 2006; Klieme, 2020; Kyriakides et al., 2015; Reynolds et al., 2016).

Methodological issues for further research

The methodological needs can be divided into four themes: the generic effect of system-level factors, the use of multilevel techniques, the testing of indirect effects, and the use of longitudinal studies.

Testing the generic effect of system-level factors

Based on the findings from the multilevel meta-analysis, our results support a generic effect of system-level conditions on student learning outcomes; the associations seem to be the same whether mathematics or science achievement is studied, or whether the focus is on primary or secondary education, for example. However, it is important to highlight that most of the studies in our sample focused on cognitive mathematics achievement, and further research should be conducted to provide stronger support for the generic nature of system-level factors. The interest in this type of outcome could be because mathematics achievement is thought to be more easily comparable, especially across countries with different curricula and educational aims. Even though it involves a greater challenge, further research is needed on student learning outcomes outside of this 'core' domain and on meta-cognitive and non-cognitive outcomes of education.

The use of multilevel techniques

Despite theoretical models of effectiveness being multilevel in nature, more than half of the studies in our sample relied on unilevel analyses, and the results of the multilevel meta-analysis show that effect sizes were, on average, smaller when they came from studies that considered more than one level of clustering in their data (i.e., students, classrooms, schools and countries). From a methodological standpoint, this shows the importance of acknowledging the multilevel structure of educational data, as results from unilevel analyses may overestimate the effect sizes of system-level factors.

Searching for indirect effects of system-level factors

Another important methodological observation that stemmed from this synthesis was the predominance of studies looking at direct effects. Although the meta-analysis only gathered variables for which direct effects were reported, it was possible to observe that many studies described system-level factors as having indirect effects, yet few studies actually searched for this type of effect (e.g., by including interactions with other system-level variables or with variables located at other levels). Moreover, the mechanisms through which these factors are supposed to affect student achievement were sometimes unclear or not mentioned at all. Yet, the theoretical models of EER state that the system level is expected to have a mainly indirect effect on student outcomes, through factors located at lower levels (e.g., school, classroom). Further research is needed on these indirect paths, for example using structural equation modelling, to understand the mechanisms and conditions under which certain system factors can lead to better outcomes. This could also contribute to a more comprehensive approach to studying system effectiveness, in which system factors are not addressed in isolation but rather as working together with other factors at the same or lower levels.
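
As a minimal illustration of what such an indirect-effect analysis could look like, the sketch below estimates an indirect path with the product-of-coefficients approach on simulated data. The variable names and effect sizes are hypothetical, and a full structural equation model would additionally provide fit statistics and appropriate standard errors (e.g., via bootstrapping).

```python
# Sketch of testing an indirect path: a system-level factor (e.g., central
# examinations) affecting achievement through a lower-level mediator (e.g.,
# school assessment practices). Purely hypothetical data and variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
factor = rng.normal(size=n)                       # system-level factor
mediator = 0.5 * factor + rng.normal(size=n)      # school-level mediator
achievement = 0.4 * mediator + 0.1 * factor + rng.normal(size=n)
df = pd.DataFrame({"factor": factor, "mediator": mediator,
                   "achievement": achievement})

a = smf.ols("mediator ~ factor", df).fit().params["factor"]
b = smf.ols("achievement ~ mediator + factor", df).fit().params["mediator"]
print(f"indirect effect (a*b) = {a * b:.2f}")
```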

Contributions of longitudinal studies

A final methodological observation with important implications for research concerns causality and the cross-sectional design of the studies in our sample. Although the results from the multilevel models do not show a clear pattern when comparing cross-sectional studies with other types of studies (e.g., longitudinal, quasi-experimental), more than 80% of the studies in our sample had a cross-sectional design, which made it difficult to assess the impact of this characteristic in explaining variation in effect sizes. As stated by Luyten et al. (2005), to deepen our insights into the causes of school, and system, effectiveness, more experimental and longitudinal studies are needed. In an educational setting it is sometimes difficult to conduct experimental studies, even more so at the system level, which might be reflected in the fact that none of the studies in our sample was experimental. Even so, very few studies tried to work around this issue with a quasi-experimental design, such as differences-in-differences or instrumental variables (e.g., DeAngelis, 2019; Edwards & Garcia Marin, 2015; Hanushek & Wößmann, 2006; Lavrijsen & Nicaise, 2016). Although the need for quasi-experimental designs in EER had already been highlighted by Reynolds and Teddlie (2000), the results of the present project show that it is still a concern within the field. Regarding longitudinal designs, few studies in the sample included a time component, either by comparing changes in the correlation between a system-level factor and student achievement at different time points or by having a panel design. With more and more cycles of ILSAs, such as PISA and TIMSS, the possibilities for longitudinal studies at the system level expand, allowing changes within systems over time to be studied (see Gustafsson, 2007).
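
For readers unfamiliar with the quasi-experimental designs mentioned above, the following toy example shows a system-level differences-in-differences set-up across two ILSA cycles. Systems, scores and the reform indicator are all invented; the interaction term is the differences-in-differences estimate.

```python
# Toy differences-in-differences at the system level: comparing the change in
# mean achievement between systems that introduced a reform and systems that
# did not, across two assessment cycles. Hypothetical numbers only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "system":  ["A", "A", "B", "B", "C", "C", "D", "D"],
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],    # 0 = earlier cycle, 1 = later cycle
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],    # 1 = system adopted the reform
    "score":   [500, 512, 495, 509, 505, 507, 498, 501],
})

# The 'post:treated' interaction is the DiD estimate of the reform effect.
fit = smf.ols("score ~ post * treated", data=df).fit()
print(fit.params["post:treated"])
```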

Implications for policy

Although not a new insight within EER, the results of the meta-analysis give further support to exerting caution when it comes to educational reforms, for example those involving decentralisation and accountability policies, and their implementation across different contexts. Whether a system is centralised or decentralised, or whether it has high or low accountability, is not, by itself, what causes a system to be high performing. For example, Reynolds (2007) argues that while the United Kingdom, Australia and Canada all have favourable PISA rankings, they also have very different policies in terms of autonomy and centralisation. A similar situation occurs when looking at the degree of stratification of the system: the Netherlands has a selective system while Finland has a comprehensive one, yet both are considered high-achieving educational systems. Based on the heterogeneity of results for structural reform variables, and their small average effect sizes, the main implication for policy and practice echoes the warning of Scheerens about 'overoptimistic expectations of educational policy measures' (Scheerens, 2016, p. 33).

Taking from systems theory and systems thinking, policy makers could follow the principle of leverage: 'small, relatively inexpensive, well-focused actions can sometimes produce significant, enduring improvements, if they're in the right place' (Senge et al., 2012, p. 12). The reality is that we still do not know much about system effectiveness. What we do know is that there are effective classroom- and school-level practices, as there is much more evidence about factors operating at these two levels. Furthermore, greater leverage might exist at the classroom and school levels, as they have a more direct influence on student learning outcomes and they are, after all, where learning ultimately takes place. In the absence of sound evidence about system effectiveness, policy makers might be better off placing their focus and efforts on how to support the effectiveness of relevant school and classroom factors, within their own contexts or systems, and on using ILSA scores to monitor their own performance through time, instead of focusing on league tables or country rankings.

An additional implication for policy concerns the large role of system inequality in educational outcomes. Not only was inequality a significant factor by itself, but it could also undermine the positive effects found for affluence: even if more resources are available, an inefficient and less equal allocation of those resources could lead to lower educational value for all students. Socioeconomic inequality could also affect achievement through higher inequalities in school resources within a country, higher socioeconomic segregation across schools and, like affluence, through more distal and indirect channels such as larger health disparities across the population, which could affect the cognitive development of children. The importance of the role of inequality has implications not only for educational policy, as inequality is related to policies beyond the educational system.

Limitations

There are certain limitations to the present study. First, as with any review of the literature, our study could suffer from publication bias, as statistically significant results are more likely to be published than non-significant ones (Cooper et al., 2019). Second, our sample was reduced considerably when considering the number of studies and factors for which we could calculate effect sizes and conduct a multilevel meta-analysis. This is partly due to limitations in the reporting of results and to differences in publication standards across research disciplines. It is therefore important to highlight that, although some of the average effect sizes presented in this study are moderately strong, they are based on a very small number of studies, particularly when talking about specific indicators. Except for the average effect sizes of a few individual indicators (e.g., GDP per capita, Gini index, central examinations, pupil-teacher ratio), most other average effect sizes were calculated based on 15 studies or fewer. This is an important limitation to keep in mind when discussing the magnitude of the effect sizes of system-level factors. Third, some study characteristics could not be tested for all themes and indicators because of low variability or a complete lack of it. Finally, it is important to note that the results presented in this study are associations only; causal claims cannot be established given the nature of our sample, in which most studies had a cross-sectional research design.

Final remarks

All in all, the results of this synthesis show that further research is needed on the topic of system effectiveness. There is still work ahead to build the theoretical framework of EER and to validate the system level of current models like the DMEE and the integrated multilevel model of education. This meta-analysis identified not only the kinds of system-level factors that have been studied so far but also the areas, in both research topics and methodological approaches, where more work is needed. If the relationship between EER and ILSAs keeps developing and improving, with more ILSA cycles relying on EER theories in the design of their frameworks, collecting more relevant information about educational systems, and with more researchers tapping into this valuable information through secondary analyses to generate and test hypotheses, the future of EER looks promising. It could bring us, hopefully, one step closer to understanding what works in education and why.

ACKNOWLEDGEMENTS

This project has received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 765400.

CONFLICT OF INTEREST

There are no conflicts of interest to declare.

ETHICS STATEMENT

This research is based on a systematic review of published studies. Consequently, ethical approval is not applicable to our research.

  • 1 TITLE-ABS (((region* OR nation* OR countr* OR context* OR system OR macro OR international OR education*) W/3 (determinants OR effects OR factors OR influences OR characteristics OR indicators OR policies)) AND ((student OR education* OR academic OR learning) W/1 (achievement OR performance OR quality OR outcome OR effectiveness OR results))) AND PUBYEAR >1969 AND PUBYEAR <2020 AND (LIMIT-TO (LANGUAGE, “English”) OR LIMIT-TO (LANGUAGE, “Spanish”)).

APPENDIX A: THE MULTILEVEL MODEL USED TO CONDUCT THE META-ANALYSIS

A multilevel meta-analytical approach was implemented to analyse the observed effects of each study and the sources of variation arising from different study characteristics. Separate multilevel models were used for the different themes mentioned in our theoretical framework. In some cases, based on the number of studies available, multilevel analyses were also conducted for specific variables (e.g., GDP per capita, expenditure on education, early tracking).

Following Raudenbush and Bryk ( 1985 ), studies that investigated the relationship of student achievement with another factor are considered a sample from the population of studies investigating the relationship between this factor and student achievement. Based on the nature of our sample, in which studies often included multiple system-level independent variables in their analyses, effect sizes are nested within studies. Therefore, a two-level model can be used. The main advantage of the statistical meta-analysis employed in our paper is that the information from each study is weighted by the reliability of the information, in this case the sample size.
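
For concreteness, the unconditional two-level model can be written as follows; the notation is ours and the exact parameterisation used in the analysis may differ slightly:

$$d_{ij} = \gamma_0 + u_j + e_{ij}, \qquad u_j \sim N(0, \tau^2), \qquad e_{ij} \sim N(0, v_{ij}),$$

where $d_{ij}$ is the $i$-th effect size reported in study $j$, $\gamma_0$ is the average effect across studies, $u_j$ is a study-level random effect capturing between-study variation, and $e_{ij}$ is sampling error whose variance $v_{ij}$ is determined by the sample size of study $j$. The conditional models add study characteristics $X_j$ as moderators of the average effect:

$$d_{ij} = \gamma_0 + \gamma_1 X_j + u_j + e_{ij}.$$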

Another advantage of multilevel meta-analysis is that it helps us identify factors associated with the variation in the observed effect sizes for each of the main system-level factors. This is accomplished by modelling the differences in reported effect sizes as a function of study characteristics, such as the differences in the outcome domains used to measure student achievement, the educational level at which the study was conducted, and the research design of the study. Further information about the statistical modelling technique is outlined below.

  • Data source. A grouping variable was used to examine the impact of the data source used in each study. Specifically, it was possible to identify three types of data sources: (1) data from OECD PISA; (2) data from IEA studies (e.g., TIMSS, PIRLS, ICCS); (3) other types of data, either from older international studies like the International Assessment of Educational Progress, from national examinations such as the National Assessment of Educational Progress or the SAT (both in the United States), or from other international research projects such as the European collaborative research project 'Establishing a knowledge-base for quality in education: Testing a dynamic theory of educational effectiveness'. Two dummy variables were entered, with the third type of data source as the reference category.
  • Type of system. The type of system analysed in the studies could be grouped into two categories: countries and within-country systems (e.g., states, municipalities, school districts). Within-country systems were used as the reference category.
  • Sample size . Looking at the number of countries in each study, we classified them into two groups: (1) studies with a sample of less than 30 countries, (2) studies of at least 30 countries. The reference category was studies with a sample of less than 30 countries.
  • Developed and developing countries in sample . Because of variations in the countries included in the analysis, it was only possible to classify studies in terms of whether their sample included only developed countries or a combination of developed and developing countries. Studies with samples of only developed countries were treated as the reference category.
  • Level of education . Studies in our sample focused either on primary education, secondary education or a combination of both. No studies in our sample focused on tertiary level. Since only few studies focused on a combination of both primary and secondary education, we decided to make one distinction only: whether primary education was included or not. In this way, secondary education was our reference category and our dummy variable included studies that focused only on primary education or on a combination of primary and secondary education.
  • Outcomes of schooling . Most of the studies in our sample focused on cognitive outcomes of schooling in the ‘core’ domains of mathematics, language and science. Few studies included non-cognitive outcomes (e.g., science self-concept, liking school) or focused on cognitive outcomes outside the core (mostly in civic education). Some studies also reported general outcomes in the sense of ‘academic achievement’ or average achievement across multiple domains. Therefore, we decided to have the following four categories: (1) cognitive achievement in mathematics; (2) cognitive achievement in language; (3) cognitive achievement in science; (4) all other outcomes. Three dummy variables were entered into the multilevel models, with category four (i.e., all other outcomes) as the reference category.
  • Research design. Most of the studies in our sample had a cross-sectional design. However, it was possible to identify some studies that were either quasi-experimental or longitudinal. Our sample did not include any experimental, case or outlier studies. We grouped all studies that did not use a cross-sectional design into a single category, which was used as the reference category.
  • Statistical approach . We also examined the statistical approach of the study in terms of whether unilevel or multilevel analyses were used to investigate the association between system-level factors and student achievement. Unilevel analysis was used as the reference category.
  • Statistical estimates. The main result estimates reported across our sample of studies were correlation coefficients or regression coefficients from multiple regressions or multilevel regressions. A few studies employed more complex techniques such as instrumental variables, differences-in-differences and structural equation modelling. Because only a few studies used these approaches, they were grouped together with the regression-based estimates, and this group served as the reference category (see the coding sketch after this list).
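
To make the coding of these study characteristics concrete, the sketch below builds dummy variables for a few hypothetical study records and drops the columns that correspond to the reference categories described above. It is illustrative only and does not reproduce the actual coding files.

```python
# Sketch of coding study characteristics into dummy variables with the
# reference categories described above (hypothetical study records).
import pandas as pd

studies = pd.DataFrame({
    "data_source": ["OECD", "IEA", "Other", "OECD"],
    "system_type": ["country", "country", "within-country", "country"],
    "design":      ["cross-sectional", "longitudinal",
                    "cross-sectional", "quasi-experimental"],
})

# Build all dummies, then drop the reference categories explicitly:
# 'Other' data sources, within-country systems, and non-cross-sectional designs.
dummies = pd.get_dummies(studies, columns=["data_source", "system_type", "design"])
moderators = dummies.drop(columns=["data_source_Other",
                                   "system_type_within-country",
                                   "design_longitudinal",
                                   "design_quasi-experimental"])
print(moderators.head())
```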

Open Research

Data availability statement

Data available from the corresponding author upon reasonable request.

Supporting Information

  • Supporting information file: Word 2007 document, 36.2 KB

Please note: The publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.

  • Agasisti, T. (2011). Does competition affect schools' performance? Evidence from Italy through OECD-PISA data. European Journal of Education, 46(4), 549–565. https://doi.org/10.1111/j.1465-3435.2011.01500.x
  • Agasisti, T., & Cordero-Ferrera, J. M. (2013). Educational disparities across regions: A multilevel analysis for Italy and Spain. Journal of Policy Modeling, 35(6), 1079–1102. https://doi.org/10.1016/j.jpolmod.2013.07.002
  • Ainley, J., & Thomson, S. (2006). Differences in science teaching and learning across Australian states. The Second IEA International Research Conference: Proceedings of the IRC-2006, 1, 89–98.
  • Akiba, M., LeTendre, G. K., & Scribner, J. P. (2007). Teacher quality, opportunity gap, and national achievement in 46 countries. Educational Researcher, 36(7), 369–387. https://doi.org/10.3102/0013189X07308739
  • Aloisi, C., & Tymms, P. (2018). PISA trends, social changes, and education reforms. Educational Research and Evaluation, 23(5–6), 180–220. https://doi.org/10.1080/13803611.2017.1455290
  • Álvarez, J., Garcia, V., & Patrinos, H. A. (2007). Institutional effects as determinants of learning outcomes: Exploring state variations in Mexico. World Bank.
  • Anastasiou, D., Sideridis, G. D., & Keller, C. E. (2018). The relationships of socioeconomic factors and special education with reading outcomes across PISA countries. Exceptionality, 28, 279–293. https://doi.org/10.1080/09362835.2018.1531759
  • Arikan, S., van de Vijver, F. J. R., & Yagmur, K. (2017). PISA mathematics and reading performance differences of mainstream European and Turkish immigrant students. Educational Assessment, Evaluation and Accountability, 29(3), 229–246. https://doi.org/10.1007/s11092-017-9260-6
  • Ayalon, H., & Livneh, I. (2013). Educational standardization and gender differences in mathematics achievement: A comparative study. Social Science Research, 42(2), 432–445. https://doi.org/10.1016/j.ssresearch.2012.10.001
  • Baker, D. P., Akiba, M., LeTendre, G. K., & Wiseman, A. W. (2001). Worldwide shadow education: Outside-school learning, institutional quality of schooling, and cross-national mathematics achievement. Educational Evaluation and Policy Analysis, 23(1), 1–17. https://doi.org/10.3102/01623737023001001
  • Bardach, L., & Klassen, R. M. (2020). Smart teachers, successful students? A systematic review of the literature on teachers' cognitive abilities and teacher effectiveness. Educational Research Review, 30, 100312. https://doi.org/10.1016/j.edurev.2020.100312
  • Berkovich, I. (2016). The corrupted industry and the "wagon-wheel effect." Administration & Society, 48(5), 559–579. https://doi.org/10.1177/0095399715607287
  • Bol, T., & van de Werfhorst, H. G. (2013). The measurement of tracking, vocational orientation, and standardization of educational systems: A comparative approach. GINI Discussion Paper No. 81. Growing Inequalities' Impacts (GINI).
  • Bourdeaud'hui, H., Aesaert, K., Van Keer, H., & van Braak, J. (2018). Identifying student and classroom characteristics related to primary school students' listening skills: A systematic review. Educational Research Review, 25, 86–99. https://doi.org/10.1016/j.edurev.2018.09.005
  • Bowman, N. A. (2012). Effect sizes and statistical methods for meta-analysis in higher education. Research in Higher Education, 53(3), 375–382. https://doi.org/10.1007/s11162-011-9232-5
  • Bratti, M., Checchi, D., & Filippin, A. (2007). Territorial differences in Italian students' mathematical competencies: Evidence from PISA 2003. IZA Discussion Paper No. 2603.
  • Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press. https://doi.org/10.4159/9780674028845
  • Bronfenbrenner, U., & Ceci, S. J. (1994). Nature-nurture reconceptualized in developmental perspective: A bioecological model. Psychological Review, 101(4), 568–586. https://doi.org/10.1037/0033-295X.101.4.568
  • Burhan, N. A. S., Yunus, M. M., Tovar, M. E. L., & Burhan, N. M. G. (2017). Why are cognitive abilities of children so different across countries? The link between major socioeconomic factors and PISA test scores. Personality and Individual Differences, 105, 95–106. https://doi.org/10.1016/j.paid.2016.09.043
  • Caldas, S. J., & Bankston, C. L., III. (1999). Multilevel examination of student, school, and district-level effects on academic achievement. The Journal of Educational Research, 93(2), 91–100. https://doi.org/10.1080/00220679909597633
  • Chapuis, C., & Causa, O. (2010). Equity in student achievement across OECD countries. OECD Journal: Economic Studies, 2010(1), 1–50. https://doi.org/10.1787/eco_studies-2010-5km61lb7b39x
  • Chiu, M. M. (2010). Effects of inequality, family and school on mathematics achievement: Country and student differences. Social Forces, 88(4), 1645–1676. https://doi.org/10.1353/sof.2010.0019
  • Chiu, M. M. (2015). Family inequality, school inequalities, and mathematics achievement in 65 countries: Microeconomic mechanisms of rent seeking and diminishing marginal returns. Teachers College Record, 117(1), 1–32. https://doi.org/10.1177/016146811511700105
  • Chiu, M. M., & Chow, B. W.-Y. (2015). Classmate characteristics and student achievement in 33 countries: Classmates' past achievement, family socioeconomic status, educational resources, and attitudes toward reading. Journal of Educational Psychology, 107(1), 152–169. https://doi.org/10.1037/a0036897
  • Chiu, M. M., & Khoo, L. (2005). Effects of resources, inequality, and privilege bias on achievement: Country, school, and student level analyses. American Educational Research Journal, 42(4), 575–603. https://doi.org/10.3102/00028312042004575
  • Chiu, M. M., & Klassen, R. M. (2009). Calibration of reading self-concept and reading achievement among 15-year-olds: Cultural differences in 34 countries. Learning and Individual Differences, 19(3), 372–386. https://doi.org/10.1016/j.lindif.2008.10.004
  • Chiu, M. M., & Xihua, Z. (2008). Family and motivation effects on mathematics achievement: Analyses of students in 41 countries. Learning and Instruction, 18(4), 321–336. https://doi.org/10.1016/j.learninstruc.2007.06.003
  • Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity. US Government Printing Office.
  • Condron, D. J. (2013). Affluence, inequality, and educational achievement: A structural analysis of 97 jurisdictions across the globe. Sociological Spectrum, 33(1), 73–97. https://doi.org/10.1080/02732173.2013.732866
  • Cooper, H., Hedges, L. V., & Valentine, J. C. (Eds.). (2019). The handbook of research synthesis and meta-analysis. Russell Sage Foundation. https://doi.org/10.7758/9781610448864
  • Creemers, B. P. M. (1994). Effective instruction: An empirical basis for a theory of educational effectiveness. In D. Reynolds, B. P. M. Creemers, P. S. Nesselrodt, E. C. Schaffer, S. Stringfield, & C. Teddlie (Eds.), Advances in school effectiveness research and practice (pp. 189–205). Pergamon. https://doi.org/10.1016/B978-0-08-042392-0.50014-0
  • Creemers, B. P. M., & Kyriakides, L. (2006). Critical analysis of the current approaches to modelling educational effectiveness: The importance of establishing a dynamic model. School Effectiveness and School Improvement, 17(3), 347–366. https://doi.org/10.1080/09243450600697242
  • Creemers, B. P. M., & Kyriakides, L. (2008). The dynamics of educational effectiveness: A contribution to policy, practice and theory in contemporary schools. Routledge.
  • Creemers, B. P. M., & Kyriakides, L. (2010). Meta-analyses of effectiveness studies. In B. P. M. Creemers, L. Kyriakides, & P. Sammons (Eds.), Methodological advances in educational effectiveness research (pp. 303–323). Routledge. https://doi.org/10.4324/9780203851005
  • Darling-Hammond, L. ( 2000 ). Teacher quality and student achievement: A review of state policy evidence . Education Policy Analysis Archives , 8 ( 1 ), 1 – 44 . https://doi.org/10.14507/epaa.v8n1.2000 10.14507/epaa.v8n1.2000 Google Scholar
  • DeAngelis, C. A. ( 2019 ). Does private schooling affect international test scores? Evidence from a natural experiment . School Effectiveness and School Improvement , 30 ( 4 ), 380 – 397 . https://doi.org/10.1080/09243453.2019.1614072 10.1080/09243453.2019.1614072 Web of Science® Google Scholar
  • Duru-Bellat, M. , & Suchaut, B. ( 2005 ). Organisation and context, efficiency and equity of educational systems: What PISA tells us . European Educational Research Journal , 4 ( 3 ), 181 – 194 . 10.2304/eerj.2005.4.3.3 Google Scholar
  • Edwards, S. , & Garcia Marin, A. ( 2015 ). Constitutional rights and education: An international comparative study . Journal of Comparative Economics , 43 ( 4 ), 938 – 955 . https://doi.org/10.1016/j.jce.2015.05.002 10.1016/j.jce.2015.05.002 Web of Science® Google Scholar
  • Falch, T. , & Fischer, J. A. V. ( 2008 ). Does a generous welfare state crowd out student achievement? Panel data evidence from international student tests . CESifo Working Paper , 2383 , 1 – 30 . Google Scholar
  • Fang, Z. , Grant, L. W. , Xu, X. , Stronge, J. H. , & Ward, T. J. ( 2013 ). An international comparison investigating the relationship between national culture and student achievement . Educational Assessment, Evaluation and Accountability , 25 ( 3 ), 159 – 177 . https://doi.org/10.1007/s11092-013-9171-0 10.1007/s11092?013?9171?0 Web of Science® Google Scholar
  • Fossati, F. ( 2011 ). The effect of integration and social democratic welfare states on immigrants' educational attainment: A multilevel estimate . Journal of European Social Policy , 21 ( 5 ), 391 – 412 . https://doi.org/10.1177/0958928711418852 10.1177/0958928711418852 Web of Science® Google Scholar
  • Giménez, G. , Martín-Oro, Á. , & Sanaú, J. ( 2018 ). The effect of districts' social development on student performance . Studies in Educational Evaluation , 58 , 80 – 96 . https://doi.org/10.1016/j.stueduc.2018.05.009 10.1016/j.stueduc.2018.05.009 Web of Science® Google Scholar
  • Grilli, L. , Pennoni, F. , Rampichini, C. , & Romeo, I. ( 2016 ). Exploiting TIMSS and PIRLS combined data: Multivariate multilevel modelling of student achievement . The Annals of Applied Statistics , 10 ( 4 ), 2405 – 2426 . https://doi.org/10.1214/16-AOAS988 10.1214/16?AOAS988 Web of Science® Google Scholar
  • Grissmer, D. , Flanagan, A. , Kawata, J. , Williamson, S. , & Rand Corp, S. M. C. A. ( 2000 ). Improving student achievement: What state NAEP test scores tell us. (ERIC document reproduction service No. ED440154) . Google Scholar
  • Grodsky, E. , Warren, J. R. , & Kalogrides, D. ( 2009 ). State high school exit examinations and NAEP Long-term trends in Reading and mathematics, 1971–2004 . Educational Policy , 23 ( 4 ), 589 – 614 . 10.1177/0895904808320678 Web of Science® Google Scholar
  • Gustafsson, J.-E. ( 2007 ). Understanding causal influences on educational achievement through analysis of differences over time within countries . In T. Loveless (Ed.), Lessons learned: What international assessments tell us about math achievement (pp. 37 – 63 ). Brookings Institution Press. Google Scholar
  • Gyimah-Brempong, K. , & Gyapong, A. O. ( 1991 ). Characteristics of education production functions: An application of canonical regression analysis . Economics of Education Review , 10 ( 1 ), 7 – 17 . https://doi.org/10.1016/0272-7757(91)90035-N 10.1016/0272?7757(91)90035?N Google Scholar
  • Hanushek, E. A. , Link, S. , & Woessmann, L. ( 2013 ). Does school autonomy make sense everywhere? Panel estimates from PISA . Journal of Development Economics , 104 , 212 – 232 . https://doi.org/10.1016/j.jdeveco.2012.08.002 10.1016/j.jdeveco.2012.08.002 Web of Science® Google Scholar
  • Hanushek, E. A. , Piopiunik, M. , & Wiederhold, S. ( 2019 ). The value of smarter teachers international evidence on teacher cognitive skills and student performance . Journal of Human Resources , 54 ( 4 ), 0317-8619R1. https://doi.org/10.3368/jhr.54.4.0317.8619R1 10.3368/jhr.54.4.0317.8619R1 Web of Science® Google Scholar
  • Hanushek, E. A. , & Woessmann, L. ( 2011 ). The economics of international differences in educational achievement . In E. A. Hanushek , S. Machin , & L. Woessmann (Eds.), Handbook of the economics of education (Vol. 3 , pp. 89 – 200 ). North-Holland. https://doi.org/10.1016/B978-0-444-53429-3.00002-8 10.1016/B978-0-444-53429-3.00002-8 Web of Science® Google Scholar
  • Hanushek, E. A. , & Woessmann, L. ( 2015 ). The knowledge capital of nations: Education and the economics of growth . MIT press. 10.7551/mitpress/9780262029179.001.0001 Google Scholar
  • Hanushek, E. A. , & Woessmann, L. ( 2017 ). School resources and student achievement: A review of cross-country economic research . In M. Rosén , K. Y. Hansen , & U. Wolff (Eds.), Cognitive abilities and educational outcomes (pp. 149 – 171 ). Springer. https://doi.org/10.1007/978-3-319-43473-5_8 10.1007/978-3-319-43473-5_8 Google Scholar
  • Hanushek, E. A. , & Wößmann, L. ( 2006 ). Does educational tracking affect performance and inequality? Differences- in-differences evidence across countries . The Economic Journal , 116 ( 510 ), C63 – C76 . https://doi.org/10.1111/j.1468-0297.2006.01076.x 10.1111/j.1468?0297.2006.01076.x Web of Science® Google Scholar
  • Hattie, J. ( 2009 ). Visible learning: A synthesis of over 800 meta-analyses relating to achievement . Routledge. Google Scholar
  • He, J. , Van de Vijver, F. J. R. , & Kulikova, A. ( 2017 ). Country-level correlates of educational achievement: Evidence from large-scale surveys . Educational Research and Evaluation , 23 ( 5–6 ), 163 – 179 . https://doi.org/10.1080/13803611.2017.1455288 10.1080/13803611.2017.1455288 Web of Science® Google Scholar
  • Hedges, L. , & Olkin, I. ( 1985 ). Statistical methods for meta-analysis . Academic Press. CAS Google Scholar
  • Heras Recuero, L. , & Olaberría, E. ( 2018 ). Public spending in education and student's performance in Colombia . OECD Economics Department Working Papers. https://doi.org/10.1787/282d9700-en 10.1787/282d9700?en Google Scholar
  • Hofstede, G. H. , Hofstede, G. J. , & Minkov, M. ( 2010 ). Cultures and organizations: Software of the mind: Intercultural cooperation and its importance for survival ( 3rd ed. ). McGraw-Hill. Google Scholar
  • Hong, J. ( 2015 ). Effects of education policies and institutions on student performance . Seoul Journal of Economics , 28 ( 1 ), 85 – 105 . Google Scholar
  • Horn, D. ( 2009 ). Age of selection counts: A cross-country analysis of educational institutions . Educational Research and Evaluation , 15 ( 4 ), 343 – 366 . https://doi.org/10.1080/13803610903087011 10.1080/13803610903087011 Google Scholar
  • Isac, M. M. , Maslowski, R. , & Van der Werf, G. ( 2011 ). Effective civic education: An educational effectiveness model for explaining students' civic knowledge . School Effectiveness and School Improvement , 22 ( 3 ), 313 – 333 . 10.1080/09243453.2011.571542 Web of Science® Google Scholar
  • Jacques, C. , & Brorsen, B. W. ( 2002 ). Relationship between types of school district expenditures and student performance . Applied Economics Letters , 9 ( 15 ), 997 – 1002 . https://doi.org/10.1080/13504850210148161 10.1080/13504850210148161 Web of Science® Google Scholar
  • Jakubowski, M. ( 2010 ). Institutional tracking and achievement growth: Exploring difference-in-differences approach to PIRLS, TIMSS, and PISA data . In J. Dronkers (Ed.), Quality and inequality of education (pp. 41 – 81 ). Springer. https://doi.org/10.1007/978-90-481-3993-4_3 10.1007/978-90-481-3993-4_3 Google Scholar
  • Jencks, C. S. , Smith, M. , Ackland, H. , Bane, M. J. , Cohen, D. , Gintis, H. , & Michelson, S. ( 1972 ). Inequality: A reassessment of the effect of the family and schooling in America . Basic Books. Google Scholar
  • Julià, A. ( 2016 ). Contexto escolar y desigualdad de género en el rendimiento de comprensión lectora/School Context and Gender Inequalities in Reading Achievement . Revista Española de Investigaciones Sociológicas , 156 , 41 – 58 . https://doi.org/10.5477/cis/reis.156.41 10.5477/cis/reis.156.41 Web of Science® Google Scholar
  • Klieme, E. ( 2020 ). Policies and practices of assessment: A showcase for the use (and misuse) of international large scale assessments in educational effectiveness research . International Perspectives in Educational Effectiveness Research , 147 – 181 . https://doi.org/10.1007/978-3-030-44810-3_7 10.1007/978?3?030?44810?3_7 Google Scholar
  • Konan, P. N. , Chatard, A. , Selimbegović, L. , & Mugny, G. ( 2010 ). Cultural diversity in the classroom and its effects on academic performance . Social Psychology , 41 ( 4 ), 230 – 237 . https://doi.org/10.1027/1864-9335/a000031 10.1027/1864?9335/a000031 Web of Science® Google Scholar
  • Kyriakides, L. , Christoforou, C. , & Charalambous, C. Y. ( 2013 ). What matters for student learning outcomes: A meta-analysis of studies exploring factors of effective teaching . Teaching and Teacher Education , 36 , 143 – 152 . https://doi.org/10.1016/j.tate.2013.07.010 10.1016/j.tate.2013.07.010 Web of Science® Google Scholar
  • Kyriakides, L. , Creemers, B. , Antoniou, P. , & Demetriou, D. ( 2010 ). A synthesis of studies searching for school factors: Implications for theory and research . British Educational Research Journal , 36 ( 5 ), 807 – 830 . https://doi.org/10.1080/01411920903165603 10.1080/01411920903165603 Web of Science® Google Scholar
  • Kyriakides, L. , Creemers, B. , & Charalambous, E. ( 2018 ). Investigating the quality and equity dimensions: A critical review of literature on educational effectiveness . In Equity and quality dimensions in educational effectiveness. Policy implications of research in education (Vol. 8 ). Springer. https://doi.org/10.1007/978-3-319-72066-1_3 10.1007/978-3-319-72066-1_3 Google Scholar
  • Kyriakides, L. , Creemers, B. P. M. , Antoniou, P. , Demetriou, D. , & Charalambous, C. Y. ( 2015 ). The impact of school policy and stakeholders' actions on student learning: A longitudinal study . Learning and Instruction , 36 , 113 – 124 . https://doi.org/10.1016/j.learninstruc.2015.01.004 10.1016/j.learninstruc.2015.01.004 Web of Science® Google Scholar
  • Kyriakides, L. , Georgiou, M. P. , Creemers, B. P. , Panayiotou, A. , & Reynolds, D. ( 2018 ). The impact of national educational policies on student achievement: A European study . School Effectiveness and School Improvement , 29 ( 2 ), 171 – 203 . https://doi.org/10.1080/09243453.2017.1398761 10.1080/09243453.2017.1398761 Web of Science® Google Scholar
  • Lamberts, P. C. , & Abrams, K. R. ( 1995 ). Meta-analysis using multilevel models . Multilevel Modeling Newsletter , 7 ( 2 ), 17 – 19 . Google Scholar
  • Lavrijsen, J. , & Nicaise, I. ( 2016 ). Educational tracking, inequality and performance: New evidence from a differences-in-differences technique . Research in Comparative and International Education , 11 ( 3 ), 334 – 349 . https://doi.org/10.1177/1745499916664818 10.1177/1745499916664818 Google Scholar
  • Lee, Y. S. ( 2011 ). Family policy, family resources, and children's educational achievement: A comparative study of 18 rich countries Doctoral Dissertation. Washington University. https://openscholarship.wustl.edu/etd/605 Google Scholar
  • Li, J. , Miranti, R. , & Vidyattama, Y. ( 2017 ). What matters in education: A decomposition of educational outcomes with multiple measures . Educational Research and Evaluation , 23 ( 1–2 ), 3 – 25 . https://doi.org/10.1080/13803611.2017.1311795 10.1080/13803611.2017.1311795 Web of Science® Google Scholar
  • Light, R. , & Smith, P. ( 1971 ). Accumulating evidence: Procedures for resolving contradictions among different research studies . Harvard Educational Review , 41 ( 4 ), 429 – 471 . https://doi.org/10.17763/haer.41.4.437714870334w144 10.17763/haer.41.4.437714870334w144 Web of Science® Google Scholar
  • Long, D. A. ( 2014 ). Cross-National Educational Inequalities and opportunities to learn: Conflicting views of instructional time . Educational Policy , 28 ( 3 ), 351 – 392 . https://doi.org/10.1177/0895904812465108 10.1177/0895904812465108 Web of Science® Google Scholar
  • Luyten, H. , Visscher, A. , & Witziers, B. ( 2005 ). School effectiveness research: From a review of the criticism to recommendations for further development . School Effectiveness and School Improvement , 16 ( 3 ), 249 – 279 . 10.1080/09243450500114884 Web of Science® Google Scholar
  • Lynn, R. , Antonelli-Ponti, M. , Mazzei, R. F. , Da Silva, J. A. , & Meisenberg, G. ( 2017 ). Differences in intelligence and socio-economic outcomes across the twenty-seven states of Brazil . Mankind Quarterly , 57 ( 4 ), 519 – 541 . https://doi.org/10.46469/mq.2017.57.4.3 10.46469/mq.2017.57.4.3 Google Scholar
  • Marchant, G. J. , Paulson, S. E. , & Shunk, A. ( 2006 ). Relationships between high-stakes testing policies and student achievement after controlling for demographic factors in aggregated data . Education Policy Analysis Archives , 14 ( 30 ). 1 – 34 . https://doi.org/10.14507/epaa.v14n30.2006 10.14507/epaa.v14n30.2006 Google Scholar
  • Meroni, E. C. , Vera-Toscano, E. , & Costa, P. ( 2015 ). Can low skill teachers make good students? Empirical evidence from PIAAC and PISA . Journal of Policy Modeling , 37 ( 2 ), 308 – 323 . https://doi.org/10.1016/j.jpolmod.2015.02.006 10.1016/j.jpolmod.2015.02.006 Web of Science® Google Scholar
  • Michaelowa, K. ( 2001 ). Primary education quality in francophone sub-Saharan Africa: Determinants of learning achievement and efficiency considerations . World Development , 29 ( 10 ), 1699 – 1716 . https://doi.org/10.1016/S0305-750X(01)00061-4 10.1016/S0305?750X(01)00061?4 Web of Science® Google Scholar
  • Mikk, J. ( 2015 ). Explaining the difference between PISA 2009 reading scores in Finland and Estonia . Educational Research and Evaluation , 21 ( 4 ), 324 – 342 . https://doi.org/10.1080/13803611.2015.1062400 10.1080/13803611.2015.1062400 Google Scholar
  • Minkov, M. , Bond, M. H. , Dutt, P. , Schachner, M. , Morales, O. , Sanchez, C. , Jandosova, J. , Khassenbekov, Y. , & Mudd, B. ( 2017 ). A reconsideration of Hofstede's fifth dimension: New flexibility versus monumentalism data from 54 countries . Cross-Cultural Research , 52 ( 3 ), 309 – 333 . https://doi.org/10.1177/1069397117727488 10.1177/1069397117727488 Web of Science® Google Scholar
  • Moher, D. , Liberati, A. , Tetzlaff, J. , Altman, D. G. , & The PRISMA Group . ( 2009 ). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement . PLoS Medicine , 6 ( 7 ), 1 – 6 . https://doi.org/10.1371/journal.pmed.1000097 10.1371/journal.pmed.1000097 Web of Science® Google Scholar
  • Moreau, R. A. , & McIntire, W. G. ( 1995 ). Selected School District factors and grade eight pupil achievement in Maine. Paper presented at the annual meeting of the National Rural Education Association (Salt Lake City, UT) . (ERIC Document Reproduction Service No. ED389500). Google Scholar
  • Muthén, L. K. , & Muthén, B. O. ( 2017 ). Mplus User's Guide ( 8th ed. ). Muthén & Muthén. Google Scholar
  • OECD . ( 2013 ). PISA 2012 results: What makes schools successful (volume IV): Resources, Policies and Practices, PISA . OECD Publishing. https://doi.org/10.1787/9789264201156-en Google Scholar
  • OECD . ( 2016 ). PISA 2015 results (volume II): Policies and practices for successful schools, PISA . OECD Publishing. https://doi.org/10.1787/9789264267510-en 10.1787/9789264267510-en Google Scholar
  • Raudenbush, S. W. , & Bryk, A. S. ( 1985 ). Empirical Bayes meta-analysis . Journal of Educational Statistics , 10 ( 2 ), 75 – 98 . 10.2307/1164836 Google Scholar
  • Reyna, J. R. ( 2015 ). Influence of property school wealth on fifth grade student achievement in Reading and mathematics Doctoral Dissertation. University of Texas Rio Grande Valley. https://www.proquest.com/docview/1771508798 Google Scholar
  • Reynolds, D. ( 2007 ). School effectiveness and school improvement (SESI): Links with the international standards/accountability agenda . In International handbook of school effectiveness and improvement (pp. 471 – 484 ). Springer. 10.1007/978-1-4020-5747-2_26 Google Scholar
  • Reynolds, D. , Caldwell, B. , & Cruz, R. M. ( 2016 ). Comparative educational research . In The Routledge international handbook of educational effectiveness and improvement (pp. 278 – 314 ). Routledge. Google Scholar
  • Reynolds, D. , Sammons, P. , De Fraine, B. , Van Damme, J. , Townsend, T. , Teddlie, C. , & Stringfield, S. ( 2014 ). Educational effectiveness research (EER): A state-of-the-art review . School Effectiveness and School Improvement. , 25 , 197 – 230 . https://doi.org/10.1080/09243453.2014.885450 10.1080/09243453.2014.885450 Web of Science® Google Scholar
  • Reynolds, D. , & Teddlie, C. ( 2000 ). The future agenda for school effectiveness research . In The international handbook of school effectiveness research (pp. 336 – 357 ). Routledge. Google Scholar
  • Rindermann, H. ( 2007 ). The g-factor of international cognitive ability comparisons: The homogeneity of results in PISA, TIMSS, PIRLS and IQ-tests across nations . European Journal of Personality , 21 ( 5 ), 667 – 706 . https://doi.org/10.1002/per.634 10.1002/per.634 Web of Science® Google Scholar
  • Rodríguez-Santero, J. , & Gil-Flores, J. ( 2018 ). Variables contextuales asociadas a las diferencias de rendimiento educativo entre los países de la unión europea . Cultura y Educación , 30 ( 4 ), 605 – 632 . https://doi.org/10.1080/11356405.2018.1522024 10.1080/11356405.2018.1522024 Web of Science® Google Scholar
  • Rosenthal, R. ( 1994 ). Parametric measures of effect size . In H. Cooper , L. V. Hedges , & J. C. Valentine (Eds.), The handbook of research synthesis (pp. 231 – 244 ). Russell Sage Foundation. Google Scholar
  • Rosenthal, R. , & DiMatteo, M. R. ( 2001 ). Meta-analysis: Recent developments in quantitative methods for literature reviews . Annual Review of Psychology , 52 , 59 – 82 . https://doi.org/10.1146/annurev.psych.52.1.59 10.1146/annurev.psych.52.1.59 CAS PubMed Web of Science® Google Scholar
  • Scheerens, J. ( 1992 ). Effective schooling: Research, theory and practice . Cassell. Google Scholar
  • Scheerens, J. ( 2016 ). Educational effectiveness and ineffectiveness: A critical review of the knowledge base . Springer. 10.1007/978-94-017-7459-8 Google Scholar
  • Scheerens, J. , & Blömeke, S. ( 2016 ). Integrating teacher education effectiveness research into educational effectiveness models . Educational Research Review , 18 , 70 – 87 . https://doi.org/10.1016/j.edurev.2016.03.002 10.1016/j.edurev.2016.03.002 Web of Science® Google Scholar
  • Scheerens, J. , & Bosker, R. ( 1997 ). The foundations of educational effectiveness . Pergamon. Google Scholar
  • Scheerens, J. , Luyten, H. , van den Berg, S. M. , & Glas, C. A. ( 2015 ). Exploration of direct and indirect associations of system-level policy-amenable variables with reading literacy performance . Educational Research and Evaluation , 21 ( 1 ), 15 – 39 . https://doi.org/10.1080/13803611.2015.1008520 10.1080/13803611.2015.1008520 Google Scholar
  • Schuetz, G. , Luedemann, E. , West, M. R. , & Woessmann, L. ( 2013 ). School accountability, autonomy, choice, and the equality of educational opportunities . In M. Windzio (Ed.), Integration and inequality in educational institutions (pp. 123 – 152 ). Springer. https://doi.org/10.1007/978-94-007-6119-3_6 10.1007/978-94-007-6119-3_6 Google Scholar
  • Schulz, W. , Fraillon, J. , Ainley, J. , & van de Gaer, E. ( 2011 ). Multi-level analysis of factors explaining differences in civic knowledge . Paper presented at the annual meeting of the American Educational Research Association. Google Scholar
  • Senge, P. M. , Cambron-McCabe, N. , Lucas, T. , Smith, B. , & Dutton, J. ( 2012 ). Schools that learn (updated and revised): A fifth discipline fieldbook for educators, parents, and everyone who cares about education . Currency. Google Scholar
  • Shen, C. ( 2001 ). Social values associated with cross-national differences in mathematics and science achievement: A cross-national analysis . Assessment in Education: Principles, Policy & Practice , 8 ( 2 ), 193 – 223 . https://doi.org/10.1080/09695940125423 10.1080/09695940125423 Google Scholar
  • Shen, C. ( 2006 ). Factors associated with cross-national variation in mathematics and science achievement based on TIMSS 1999 data . In S. J. Howie & T. Plomp (Eds.), Contexts of learning mathematics and science: Lessons learned from TIMSS (pp. 387 – 405 ). Routledge. https://doi.org/10.4324/9780203012536 Google Scholar
  • Smith, W. C. ( 2016 ). National testing policies and educator-based testing for accountability: The role of selection in student achievement . OECD Journal: Economic Studies , 2016 ( 1 ), 131 – 148 . https://doi.org/10.1787/eco_studies-2016-5jg1jxftj4r3 10.1787/eco_studies?2016?5jg1jxftj4r3 Google Scholar
  • Solano, G. , & Huddleston, T. ( 2020 ). Migration integration policy index . Barcelona Center for International Affairs (CIDOB) and Migration Policy Group (MPG). ISBN: 978-84-92511-83-9. Google Scholar
  • Stringfield, S. , & Mackay, A. ( 2016 ). Educational effectiveness research and system reconstruction and change . In The Routledge international handbook of educational effectiveness and improvement (pp. 342 – 357 ). Routledge. Google Scholar
  • United Nations Development Programme . ( 2020 ). Human Development Reports . http://hdr.undp.org/en/humandev Google Scholar
  • Unnever, J. D. , Kerckhoff, A. C. , & Robinson, T. J. ( 2000 ). District variations in educational resources and student outcomes . Economics of Education Review , 19 ( 3 ), 245 – 259 . https://doi.org/10.1016/S0272-7757(99)00043-6 10.1016/S0272?7757(99)00043?6 Web of Science® Google Scholar
  • van Hek, M. , Buchmann, C. , & Kraaykamp, G. ( 2019 ). Educational systems and gender differences in Reading: A comparative multilevel analysis . European Sociological Review , 35 ( 2 ), 169 – 186 . https://doi.org/10.1093/esr/jcy054 10.1093/esr/jcy054 Web of Science® Google Scholar
  • Witziers, B. , Bosker, R. J. , & Krüger, M. L. ( 2003 ). Educational leadership and student achievement: The elusive search for an association . Educational Administration Quarterly , 39 ( 3 ), 398 – 425 . 10.1177/0013161X03253411 Web of Science® Google Scholar
  • Woessmann, L. ( 2003 ). Schooling resources, educational institutions and student performance: The international evidence . Oxford Bulletin of Economics and Statistics , 65 ( 2 ), 117 – 170 . https://doi.org/10.1111/1468-0084.00045 10.1111/1468?0084.00045 Web of Science® Google Scholar
  • Woessmann, L. ( 2010 ). Institutional determinants of school efficiency and equity: German states as a microcosm for OECD countries . Jahrbücher Für Nationalökonomie Und Statistik , 230 ( 2 ), 234 – 270 . https://doi.org/10.1515/jbnst-2010-0206 10.1515/jbnst?2010?0206 Web of Science® Google Scholar
  • Woessmann, L. , Ludemann, E. , Schutz, G. , West, M. R. , & Organisation for Economic Co-operation and Development . ( 2007 ). School accountability, autonomy, choice, and the level of student achievement: International evidence from PISA 2003 . OECD Education Working Papers, No. 13. OECD Publishing. Google Scholar
  • World Bank . (n.d.). Gini index (World Bank Estimate) . https://data.worldbank.org/indicator/SI.POV.GINI Google Scholar

Review Article | Open access | Published: 24 November 2023

Exploring learning outcomes, communication, anxiety, and motivation in learning communities: a systematic review

Wenwen Cao & Zhonggen Yu (ORCID: orcid.org/0000-0002-3873-980X)

Humanities and Social Sciences Communications, volume 10, Article number: 866 (2023)


Learning communities have become a focal point of research due to their potential impact on learning outcomes, motivation, and communication. These factors are recognized as crucial determinants of the effectiveness of learning communities. To guide this study, a thorough review of 35 relevant studies was conducted, employing rigorous inclusion and exclusion criteria based on the PRISMA framework to ensure a systematic and robust approach. The findings of this study indicated that learning communities possess the capacity to enhance communication, motivation, and learning outcomes, while simultaneously alleviating learner anxiety. Specifically, it was observed that well-designed online learning communities can significantly improve learning outcomes. Furthermore, the utilization of online technologies within these communities can facilitate enhanced communication, leading to improved learning outcomes. Moreover, this study offers a range of recommendations for optimizing learning outcomes through the implementation of learning communities. These recommendations serve as valuable guidance for harnessing the full potential of learning communities to achieve educational goals. In conclusion, this study underscores the importance of learning communities in enhancing learning outcomes, motivation, and communication. It highlights the efficacy of appropriately designed online communities and the integration of technology in fostering effective communication and improving learning outcomes. The study contributes important insights into ways of maximizing the benefits of learning communities in promoting educational success.


Introduction

In recent years, there has been a growing interest in both offline and online learning communities, which consist of professionals, shared goals, facilitators, and mechanisms, as well as the interconnectedness among these elements. These learning communities have shown potential in enhancing leadership, organization, and the ability to tackle various challenges (Wen and Zhang, 2020 ). Consequently, scholars have increasingly focused on investigating the impacts of learning communities on learning outcomes, motivation, and communication (Magana et al., 2021 ). These factors are considered important indicators of the effectiveness of learning communities. Notably, motivation and learning outcomes can be positively influenced through communication within learning communities. This is because strong motivation, coupled with frequent communication, facilitates intensive engagement with new knowledge and innovative information, consequently enhancing knowledge acquisition.

Anxiety plays a crucial role in learning communities and can impede effective communication and learning outcomes. Within a learning community, learners often face challenges related to imbalances in communication abilities and anxiety levels between experts and novices (Young et al., 2018 ). Novice learners may experience apprehension and reluctance to ask questions, leading to their withdrawal from active participation in learning activities within the community. Additionally, the dominance of experts within the learning community may exert pressure on other community members, hindering effective communication between teachers and learners. Particularly, anxiety, primarily experienced by novice learners, can have a detrimental impact on learning outcomes. The presence of anxiety can significantly influence learning outcomes, communication, and motivation within learning communities. Accordingly, this study aims to examine the role of anxiety and propose strategies to alleviate anxiety levels within learning communities.

This study addresses missing links in the scientific literature on learning communities. Several academic studies have examined the efficacy of learning communities in physical education (Parker et al., 2022), analyzed the impact of learning communities on online learning outcomes, and investigated the integration of learning communities with social networks in educational settings. Blayone et al. (2017) focused specifically on the influence of online learning communities on learning outcomes, while Schechter (2010) explored the role of social networks in educational contexts. However, few review studies have synthesized the effects of learning communities on learning outcomes, communication, anxiety, and motivation. The present study therefore aims to understand how communication, anxiety levels, and motivation shape students' learning outcomes in community-based educational settings, gathering data on these variables and analyzing the findings to provide insights into how learning experiences and outcomes can be enhanced within such communities. This systematic review is meaningful and necessary because it fills that gap by examining community-based learning outcomes, communication, anxiety, and motivation. The specific research questions are: (1) Can learning communities improve learners' communication? (2) Can learning communities improve learners' motivation? (3) Can learning communities mitigate learners' anxiety? and (4) How to improve learning outcomes through learning communities?

Theoretical framework

Activity Theory is a theoretical framework that originated in the field of psychology and has gained prominence in various disciplines such as education, sociology, and human–computer interaction (Sukirman and Kabilan, 2023 ). It provides a lens to analyze and understand human actions within a social context. According to Activity Theory, human activities are not isolated events but are influenced by, and also influence, the social, cultural, and historical factors in which they occur. This theoretical perspective emphasizes that humans are active agents who engage in purposeful activities to achieve specific goals. Activities are seen as complex systems comprising multiple interconnected elements, including the subject (the individual or group engaged in the activity), the object (the goal or purpose of the activity), the tools or artifacts used, the rules and norms governing the activity, the community or social setting in which the activity takes place, and the division of labor among participants.

According to activity theory, learning communities are conducive to language learning outcomes. Activity theory approaches human and human–computer interactions from the premise that a specific activity can influence thinking, learning goals, reasons for acting, ways of acting, and learning methods, and it thereby provides a foundation for learning communities (Engeström, 2001). In a learning community, an individual's activity can influence other members: a positive or negative learning activity exerts a correspondingly positive or negative influence on others' learning behaviors. It is therefore important for community members to model positive activities so as to influence other language learners in the community favorably.

Leading activities, community guidelines, and organized divisions of work could improve language learning effectiveness and inspire language learners and teachers (Isbell, 2018 ). In a learning community, teachers and designers could select learners who were actively engaged in learning activities and set them up as examples to be followed by other learners. Teachers and designers could also specify community guidelines to direct community members to appropriate learning directions and guide them to achieve success in language learning. Teachers could also organize learning activities and divide members into different teams where individuals assumed different responsibilities. In this way, teachers could improve members’ language learning effectiveness and stimulate other members’ learning enthusiasm.

The interplay between anxiety, communication, motivation, and learning outcomes within learning communities is a complex and dynamic process that can significantly impact the effectiveness of the educational experience. Anxiety can hinder effective communication and dampen motivation, ultimately impacting learning outcomes. On the other hand, positive communication can enhance motivation and learning outcomes, and intrinsic motivation supports effective communication and improved learning outcomes. Understanding these intricate dynamics can inform educators and policymakers in creating supportive learning environments that foster effective communication, reduce anxiety, and enhance motivation, leading to improved learning outcomes in learning communities.

Literature review

Definition of learning community

To define learning community, it is valuable to refer to the works of Wenger-Trayner and Wenger-Trayner ( 2015 ) and Wenger ( 1998 ). These studies provide insights into the concept of learning communities. According to Wenger-Trayner and Wenger-Trayner ( 2015 ) and Wenger ( 1998 ), a learning community can be understood as a collective of individuals who share a common interest, engage in joint activities, and collaborate in a meaningful manner to enhance their learning and knowledge. It is characterized by mutual engagement, shared values, and a sense of belonging.

In a learning community, individuals come together to pursue their common goals, exchange ideas, and challenge one another intellectually. They often engage in regular interactions, such as discussions, collaborative projects, and sharing resources. Through these interactions, members of the learning community develop relationships, build trust, and establish a supportive environment that fosters continuous learning and development. The learning community is not restricted to a formal educational setting but can be found in various contexts, including workplaces, online platforms, or other social spaces. It transcends traditional hierarchical structures and encourages participation from individuals at different levels of expertise. Within a learning community, newcomers are welcomed and supported in their learning journey, while experienced members serve as mentors or facilitators.

Central to the concept of a learning community is the notion of a “community of practice” as described by Wenger ( 1998 ). A community of practice refers to a group of individuals who share a domain of knowledge or field of practice and jointly learn through their interactions. Members of a community of practice engage in collective learning, negotiation of meaning, and the development of shared resources and practices. Drawing from Wenger-Trayner and Wenger-Trayner ( 2015 ) and Wenger ( 1998 ), a learning community can be defined as a social group of individuals who come together to pursue a common interest, engage in joint activities, and collaborate in a meaningful manner to enhance their learning and knowledge. It is characterized by mutual engagement, shared values, and a supportive environment that fosters continuous learning and development.

Communication

Learning communities have the potential to enhance learning outcomes through improved communication. Online learning communities offer teachers the chance to engage in activities related to English language teaching, enabling students to acquire language knowledge and engage in meaningful communication with their instructors (Pagan et al., 2020 ). Moreover, these communities provide teachers with a wealth of resources to adequately prepare for their teaching responsibilities. By connecting students and teachers from diverse social, cultural, and educational backgrounds, learning communities facilitate the exchange of suggestions, feedback, and mutual learning (Pagan et al., 2020 ). Consequently, online learning communities offer students living in isolated areas the opportunity to communicate with their teachers and interact with their peers, while teachers can employ flexible instructional approaches through online communicative technologies (Salazar, 2011 ).

Drawing upon the constructive attributes of learning communities, they serve as significant platforms for effective communication between school management, English language learners, and other stakeholders involved. These collaborative communities foster communication to redress inequities encountered by English language learners and also shed light on the dynamic interplay among schools, teachers, and students (Brooks et al., 2010 ). Therefore, by leveraging the benefits of learning communities, such as enhanced communication channels and the exchange of ideas, feedback, and resources, there is potential for improved learning outcomes for both teachers and students. Researchers thus propose the following research question:

RQ1. Can learning communities improve learners’ communication?

Motivation

Learning communities play a crucial role in improving learning outcomes by enhancing motivation. By creating a supportive and engaging environment, learning communities can motivate and activate students, while also fostering teachers' professional development and shaping students' perceptions within meaningful contexts (Pagan et al., 2020). In the context of learning Chinese as a foreign language, online learning communities have been found to effectively motivate students to engage in language learning (Cai and Zhu, 2012). Vietnamese students, who have limited opportunities to practice English oral skills, tend to have low motivation and interest in oral practice. However, the use of social media within learning communities can bridge the gap between text-based learning and oral skills practice. With the assistance of learning communities facilitated by social media platforms, students can engage in socio-cultural interactions and actively practice their oral English skills, thanks to the easy accessibility, flexible schedules, shared resources, and collaborative attributes of learning communities (Duong and Pham, 2022). In general, learning communities hold the potential to foster desire and motivation within online or distance learning contexts. By providing a supportive and interactive environment, these communities play a vital role in enhancing engagement and motivation among learners.

Although learning communities for English teachers have shown potential for enhancing language learners' communication and motivation, there are still discrepancies and contradictions between researchers and teachers, between theoretical frameworks and practical implementation, and between innovative designs and pedagogical practices. Within learning communities, teachers have the opportunity to raise pertinent questions, observe learners' behaviors, analyze academic issues, propose inquiries, implement teaching strategies, reflect on their instructional practices, and address challenging problems (Yan and Yang, 2019). However, the extent to which teachers can effectively improve students' communication skills and motivation within a learning community remains uncertain. Consequently, researchers put forth the following research question to explore this matter:

RQ2. Can learning communities improve learners’ motivation?

Anxiety

Learning communities have the potential to enhance learners' interactions and alleviate their feelings of anxiety. Specifically, it has been observed that learning communities can improve interactions among learners in a relaxed and informal setting, resulting in reduced learning anxiety among international students married to individuals residing in the United States (Grimm et al., 2019). By engaging in cooperative and interactive activities within a learning community, members are able to effectively address misunderstandings and misconceptions commonly encountered in foreign language education (Zhang, 2016). These close interactions redirect learners' focus toward learning activities, thereby reducing psychological stress and increasing overall satisfaction. Furthermore, participation in a learning community allows learners and teachers to share a wealth of learning resources, which facilitates easy access to materials and diminishes anxiety arising from concerns about making trivial mistakes or lacking proficiency. Interactions with peers and instructors also help rectify misconceptions regarding key concepts. However, the effects of learning communities on interactions and anxiety have not been thoroughly explored through systematic review studies. Consequently, the researchers undertaking this study propose the following research question:

RQ3. Can learning communities mitigate learners’ anxiety?

Learning outcomes

The establishment of virtual learning communities through the development of online learning platforms presents a promising approach to enhancing learning outcomes. One such example is the virtual intercultural avenues (VIA) program, which leverages social media, serious games, and other educational technologies to facilitate both online and physical learning and teaching within a learning community (Ren et al., 2016 ). Furthermore, the Discussion Forum of the GRE Analytical Writing Section, an online learning platform, has proven to be an effective tool for guiding students in practicing their writing skills within a learning community. Leveraging the theory of Community of Inquiry, the platform fosters social presence, teacher presence, and cognitive presence, ultimately leading to an improvement in students’ analytical writing skills (Sun et al., 2017 ).

The Theory of Community of Inquiry is a theoretical framework that focuses on the process of creating meaningful and transformative learning experiences in online and blended learning environments. According to Yu and Li ( 2022 ), this theory emphasizes the importance of social presence, cognitive presence, and teaching presence in fostering a deep and engaged learning community. According to this theory, social presence refers to the extent to which participants in an online community perceive each other as real and as connected individuals. It involves establishing trust, building relationships, and engaging in open communication to create a sense of belonging and connectedness. Cognitive presence refers to the depth of critical thinking, reflection, and inquiry that occurs within the learning community. It involves the exploration of complex problems, the application of higher-order thinking skills, and the construction of new knowledge and understanding. Teaching presence encompasses the design, facilitation, and direction of the learning experience by the instructor or facilitator. It includes instructional design, facilitating discourse, and providing direct instruction as necessary to guide and support learners’ engagement and achievement of learning goals.

The Theory of Community of Inquiry suggests that all three elements (social presence, cognitive presence, and teaching presence) are interconnected and essential for the creation of a rich and meaningful learning experience. By fostering a sense of community, promoting active and reflective learning, and providing effective teaching, this theory aims to optimize the online and blended learning environment to support deep and transformative learning outcomes. Online learning platforms such as Gather.Town can enhance students' engagement and interactions in foreign language learning by establishing a learning community (Zhao and McClure). To explore how to improve learning outcomes through learning communities, researchers proposed the following research question:

RQ4. How to improve learning outcomes through learning communities?

Research methods

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework is a widely recognized and utilized tool for conducting and reporting systematic reviews and meta-analyses in academic research. The PRISMA framework offers a comprehensive set of guidelines to ensure transparency and rigor in the review process, enhancing the credibility and reproducibility of the study findings. The PRISMA framework comprises a 27-item checklist and a four-phase flow diagram, which serve as valuable resources to guide researchers through each stage of the review process. These stages include the identification and selection of relevant studies, the extraction and synthesis of data, the assessment of study quality and bias, and the reporting of the results. The checklist addresses key components such as the design and objectives of the review, the search strategy and inclusion criteria, the data extraction process, and the assessment of the risk of bias in included studies.

By adhering to these guidelines, researchers can ensure a thorough and systematic approach to their review, minimizing the likelihood of bias and enhancing the reliability of the study findings. Furthermore, the PRISMA flow diagram visually depicts the flow of information throughout the review process, from the initial identification of studies to the final inclusion or exclusion of articles. This diagram allows readers to understand the selection process and identify any potential biases or gaps in the review. The PRISMA framework serves as a valuable tool for researchers undertaking systematic reviews and meta-analyses. Its comprehensive checklist and flow diagram promote transparency, rigor, and consistency in the review process, ultimately enhancing the validity and reliability of the study findings.

This systematic review study was implemented based on the protocol of PRISMA (Page et al., 2021 ). The review study was not registered since it did not involve any human or animal participants and was approved by the Academic Board of the University. Researchers recruited three raters to include and exclude the studies obtained from various online databases. Two raters independently included and excluded the studies based on both inclusion and exclusion criteria. The inter-rater reliability was measured to ensure both raters reached a satisfactory degree of agreement on their decisions.

Raters independently extracted data from the included studies using the finalized data extraction form. Three reviewers performed the extraction to minimize errors and biases. Any discrepancies between the reviewers were resolved through discussion or consultation with another reviewer. They assessed the quality and risk of bias of each included study using established tools, such as the Cochrane Collaboration’s risk of bias tool or the Newcastle-Ottawa Scale. This step helps inform the interpretation of the results and enhances the robustness of the systematic review.

To assess the quality and risk of bias of each included study, raters understood the specific criteria and domains assessed and then obtained all relevant information from the included studies, such as study protocols, methods, data, and results. They identified the key domains or criteria used in the assessment tool to evaluate the quality and risk of bias in the studies and evaluated each included study individually based on the identified domains and criteria. After carefully reviewing the information provided in the publication(s) of the study, including methods sections, tables, figures, and supplementary materials, they used the assessment tool to assign ratings or scores for each domain or criterion being evaluated. They justified the ratings for each domain or criterion, summarized the overall risk of bias for each included study, highlighted specific areas where bias might be present, and considered the implications of the assessed risk of bias on the study findings and the strength of evidence.

Raters included studies based on the following inclusion criteria. First, studies had to fall within the scope of learning outcomes, communication, anxiety, and motivation in learning communities. Second, they had to be of sufficient quality, assessed following Step 6 ("Assess Quality of Included Studies") of the systematic review guidelines of the University of North Carolina at Chapel Hill (https://guides.lib.unc.edu/systematic-reviews/assess-quality). Two raters scored each included study on a 5-point scale, answering questions designed to evaluate its relevance, reliability, validity, and applicability (Appendix A); the final score of each study was calculated as the mean of the two raters' scores. Third, included studies had to provide enough data for a systematic review, for instance convincing results and evidence to support their findings.
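The quality-scoring step described above reduces to simple arithmetic over the two raters' 5-point scores. The minimal Python sketch below illustrates that step; the study names, the individual scores, and the 3.0 cut-off are hypothetical assumptions for illustration, since the article reports only the scale and the averaging rule, not the raw scores or an explicit numeric threshold.

```python
# Sketch of the two-rater quality scoring described above.
# Scores and the 3.0 cut-off are illustrative assumptions, not the authors' data.

def mean_quality_score(score_rater1: float, score_rater2: float) -> float:
    """Final quality score = mean of the two raters' 5-point scores."""
    return (score_rater1 + score_rater2) / 2

# Hypothetical scores for three candidate studies.
candidates = {
    "Study A": (4, 5),
    "Study B": (2, 3),
    "Study C": (5, 4),
}

CUTOFF = 3.0  # assumed minimum mean score for inclusion

for study, (s1, s2) in candidates.items():
    final = mean_quality_score(s1, s2)
    decision = "include" if final >= CUTOFF else "refer to third rater / exclude"
    print(f"{study}: mean quality = {final:.1f} -> {decision}")
```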

Researchers also established exclusion criteria. Studies were excluded if they were poorly scored or poorly designed. Editorials, notes, short surveys, reference work entries, news items, datasets, duplicated documents, withdrawn works, corrections, and documents outside the scope of learning communities were excluded, as were studies without abstracts, without a rigorous design, without a proper sample size or adequate data, or failing to provide sufficiently convincing results and evidence. Two raters applied these criteria, with inter-rater reliability measured; a third rater decided the outcome whenever the two raters could not reach agreement.

Researchers obtained scientific literature from multiple online databases according to their specific syntactic rules. Specifically, they retrieved 2065 results on August 16, 2022 by keying "learn* outcome*" OR communicat* OR anxiety OR motivat* (topic) and "learn* communit*" (topic) in the search column of Web of Science, including article (n = 1433), conference paper (n = 656), others (n = 63), online first (n = 37), reviews (n = 34), abstracts (n = 16), books (n = 3), etc. This online database includes the Core Collection of Web of Science, China Sciences Citation Index, Derwent Innovations Index, KCI-Korean Journal Database, MEDLINE®, and SciELO Citation Index.

They obtained 2236 results by keying (TITLE-ABS-KEY ("learn* outcome*" OR communicat* OR anxiety OR motivat*) AND TITLE-ABS-KEY ("learn* communit*")) in the search column of Scopus, including article (n = 1269), conference paper (n = 644), book chapter (n = 177), review (n = 87), book (n = 24), conference review (n = 24), note (n = 5), editorial (n = 3), and short survey (n = 1). The disciplines included social sciences (n = 1485), computer science (n = 892), engineering (n = 336), arts and humanities (n = 141), mathematics (n = 125), psychology (n = 107), business, management and accounting (n = 100), medicine (n = 90), decision sciences (n = 51), and physics and astronomy (n = 38). The literature search was carried out on August 16, 2022.
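As an illustration of the syntactic rules mentioned above, the Scopus query can be read as two OR-blocks of truncated terms joined by AND across the title, abstract, and keyword fields. The short sketch below reconstructs that string programmatically; the helper function is purely illustrative and was not part of the authors' workflow.

```python
# Illustrative reconstruction of the Scopus search string reported above.
outcome_terms = ['"learn* outcome*"', "communicat*", "anxiety", "motivat*"]
community_terms = ['"learn* communit*"']

def scopus_query(concept_a, concept_b):
    """Join each concept block with OR, then combine blocks with AND in TITLE-ABS-KEY syntax."""
    block_a = " OR ".join(concept_a)
    block_b = " OR ".join(concept_b)
    return f"(TITLE-ABS-KEY ({block_a}) AND TITLE-ABS-KEY ({block_b}))"

print(scopus_query(outcome_terms, community_terms))
# (TITLE-ABS-KEY ("learn* outcome*" OR communicat* OR anxiety OR motivat*)
#  AND TITLE-ABS-KEY ("learn* communit*"))
```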

They obtained 46 result(s) for ‘(communication OR anxiety OR motivation OR learning OR community)’ in Springer by entering terms, i.e. “with at least one of the words: communication anxiety motivation learning community” and “where the title contains: learning outcome”. The content type included article ( n  = 26), chapter ( n  = 16), conference paper ( n  = 13), and reference work entry ( n  = 4). The discipline included education ( n  = 20), computer science ( n  = 18), engineering ( n  = 2), psychology ( n  = 2), and biomedicine ( n  = 1). The obtained results were all written in English and the search was implemented on August 16, 2022.

They obtained 227 results for [Keywords: communication or anxiety or motivation or learning outcome] and [Title: learning community] in Sage. The article types included article-commentaries (n = 1), research-articles (n = 191), review-articles (n = 8), case-reports (n = 3), and others (n = 24), published between 1981 and 2022. The disciplines included geography (n = 2), public health (n = 23), engineering & computing (n = 3), marketing & hospitality (n = 1), and economics & development (n = 5). Researchers carried out the search on August 16, 2022.

They obtained 18 results in Elsevier ScienceDirect by entering “communication or anxiety or motivation or learning outcome” in the “Find articles with these terms” field and “learning community” in the “Title” field. The article types included review articles (n = 1), research articles (n = 15), encyclopedia entries (n = 1), and book chapters (n = 1). The publication titles included Computers & Education (n = 2), The Internet and Higher Education (n = 2), and Nurse Education Today (n = 2). Subject areas included social sciences (n = 13), nursing and health professions (n = 4), psychology (n = 4), business, management and accounting (n = 2), arts and humanities (n = 1), earth and planetary sciences (n = 1), and medicine and dentistry (n = 1). After applying the inclusion and exclusion criteria, a total of 35 studies were included in this systematic review (Fig. 1).

Figure 1: Flow diagram of the selection and filtering of relevant scientific literature.
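
As a rough sanity check on the retrieval figures reported above, the short sketch below totals the records retrieved from the five databases and contrasts that total with the 35 studies ultimately included. The intermediate deduplication and screening counts appear only in Fig. 1 and are not reproduced here; the variable names are assumptions for this sketch.

```python
# Records retrieved per database on August 16, 2022 (counts as reported above).
retrieved = {
    "Web of Science": 2065,
    "Scopus": 2236,
    "Springer": 46,
    "Sage": 227,
    "ScienceDirect": 18,
}

total_retrieved = sum(retrieved.values())   # 4592 records before screening
included = 35                               # studies retained after inclusion/exclusion

print(f"Total records retrieved: {total_retrieved}")
print(f"Studies included: {included}")
print(f"Overall inclusion rate: {included / total_retrieved:.2%}")   # about 0.76%
```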

The 35 included studies formed the basis of this review; they were selected through the inter-rater inclusion and exclusion process described above. Two raters extracted the necessary information and data from the included studies using content analysis methods (Hsu et al., 2013). Cohen’s kappa statistic was adopted to evaluate inter-rater reliability (Cohen, 1968), which reached a satisfactory level (k = 0.92). Raters extracted data such as authors, publication years, names of sources, and major findings relevant to this systematic review (Table 1).
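
Cohen's kappa corrects raw rater agreement for the agreement expected by chance. A hedged sketch of how such a coefficient can be computed from two raters' include/exclude decisions is shown below, using scikit-learn's cohen_kappa_score; the decision vectors are hypothetical examples, not the study's screening data, which yielded k = 0.92.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical include (1) / exclude (0) decisions for ten candidate studies;
# the review itself reports k = 0.92 over its full screening set.
rater_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
rater_2 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")   # 0.80 for these example vectors
```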

Studies for this systematic review were selected following the PRISMA framework, which ensures a comprehensive and transparent selection process. To ensure completeness, a thorough search of relevant databases was conducted, capturing a wide range of studies related to learning communities and their effects on communication, motivation, and learning outcomes. The inclusion criteria encompassed studies from various contexts, such as different educational levels, institutions, and countries.

To address the representativeness of the selected studies, efforts were made to include studies with diverse socio-demographics. Studies conducted in a range of socio-economic settings, geographical regions, and cultural backgrounds were included, as were studies involving learners of different age groups, ethnicities, and educational backgrounds, ensuring a comprehensive representation of socio-demographic diversity. By incorporating studies from these diverse backgrounds, this systematic review aims to provide a more comprehensive and holistic understanding of the effects of learning communities on communication, motivation, and learning outcomes.

Results and discussion

Most studies reported that learning communities could improve learners’ communication. Communication is a fundamental ability that can reflect learners’ academic achievement in online learning communities. Virtual communities could provide private and social media-based platforms on which students communicate with peers or teachers to share opinions, pose questions, and obtain timely feedback from teachers (Corbo et al., 2016). The varied roles of students may greatly facilitate communication in learning communities: the different roles of students and teachers could exert a strong influence on the communicative pedagogical approach and on learning experiences (Puigdellivol et al., 2017), and teachers could integrate these roles and tailor learning tasks to different individuals. Various kinds of learning communities, assisted by mobile technologies, could enhance communicative skills, improve self-directed learning management, and reduce social media addiction and cyber-bullying behaviors (Furdu et al., 2015). In this way, learners could improve communication through digital technologies (de Witt, 2011).

Various factors in learning communities could improve learners’ communicative ability. Virtual and physical learning communities could both improve communicative skills via organized learning activities (Young, 2002). School leadership could activate teachers’ learning communities and establish organized interactions to improve cultural knowledge acquisition, teaching skills, and communicative ability (Shin and Choi, 2018). With video annotation tools, both students and teachers could improve their communicative skills and reflective thinking by reducing communicative hindrance, avoiding the exposure of students’ weaknesses, and contextualizing written notes in videos (Shek et al., 2021). Based on computer-assisted communication, teachers could lead learning activities and promote cooperation and interaction between students and teachers in learning communities (Zhao et al., 2019).

Communication in learning communities is conducive to learning outcomes. Communicative ability, an important factor influencing learning communities, could in turn influence students’ self-regulation, collaborative learning ability, problem-solving skills, and learning outcomes (Park and Hee, 2022). The activity level and communicative skills in learning communities were positively related to learning outcomes (Seo and Eun-Young, 2018). Frequent communication, a strong sense of presence, and favorable relationships could greatly improve learning outcomes in learning communities (Seckman, 2018). Social communication occurred frequently in virtual learning communities, where forums provided opportunities for members to post opinions and answer questions conveniently and concisely (Reyes and Tchounikine, 2004). Frequent communication could increase contact with knowledge and thus improve learning outcomes.

Learning communities have the potential to enhance learners’ communication skills. Studies have shown that by participating in learning communities, students are provided with opportunities for collaborative learning, active engagement, and communication with peers and instructors. These interactions facilitate the exchange of ideas, discussions, feedback, and constructive criticism, contributing to the development of effective communication skills. Additionally, the integration of social networks within learning communities can further promote communication by providing an online platform for interaction and collaboration. Therefore, it can be argued that learning communities have a positive impact on learners’ communication abilities.

The majority of studies revealed that virtual learning communities could enhance learners’ motivation. Virtual learning communities could have a positive impact on learners’ motivation in Chinese language education (Cai and Zhu, 2012). Living-learning communities could improve learners’ motivation and enhance their skills in adopting motivational strategies; an honors community motivated students to learn more strongly than science and engineering communities did (Faber et al., 2014). The features of teachers in learning communities, e.g. a shared vision and contextual sustainability, could exert a great influence on students’ motivation in learning activities (Kim and Jung, 2018). Interpersonal connections and a sense of belonging could motivate students to engage in learning activities in virtual learning communities (Lopez de la Serna et al., 2021). Learning communities could enhance the sense of community, improve learning quality, enhance learning engagement, increase course satisfaction, and foster learning motivation (Lee, 2021).

Learning communities could foster learners’ motivation via improvements in collaboration, interaction, satisfaction, and self-efficacy. Collaborative and social interactive models based on self-determination theory could cultivate a learning climate that motivates students to engage in listening practice in learning communities (Ng and Latife, 2022). Students who joined learning communities tended to possess higher levels of satisfaction, self-efficacy, and motivation than those who did not (Park et al., 2019). Learners’ self-efficacy, learning strategies, and intrinsic motivation played important roles in the persistence of learning behaviors in online learning communities (Park and Bong, 2022). Teachers’ self-efficacy could exert a great influence on their motivational regulation, perceived teaching values, and engagement in online professional learning communities (Zhang and Liu, 2019).

Learning communities can have a positive effect on learners’ motivation. Engaging in a learning community provides a sense of belongingness and support, which can increase learners’ motivation to actively participate in their learning process. Being part of a community creates a social connection that fosters intrinsic motivation and a desire to achieve goals. Learning communities often emphasize collaboration and peer support, which can enhance motivation through the encouragement and inspiration provided by peers. In a community setting, learners can share their successes, challenges, and progress, creating a positive and motivating environment. Furthermore, learning communities can offer additional resources, such as access to mentors or experts, which can increase learners’ motivation by providing them with guidance and support. The availability of these resources and the opportunity for meaningful interactions within a learning community can inspire learners to persist in their learning journey and achieve their goals. Generally, learning communities create a supportive and collaborative environment that promotes motivation and engagement, leading to improved learning outcomes.

Numerous studies demonstrated that both offline and online learning communities could reduce learners’ anxiety. A year-long learning community could facilitate collaboration and reduce the anxiety of university lecturers in the UK (MacKenzie et al., 2010), which in turn could lead to lower levels of anxiety among learners. Virtual learning communities could improve learning environments for dental school students and enhance their engagement in dental education by reducing anxiety and stress (Karpenko et al., 2021). In addition, professional learning communities could reduce teachers’ anxiety via training and online courses (Intasingh, 2019). Teachers with less anxiety could transfer this relaxed atmosphere to learners, which might result in reduced learner anxiety in learning communities.

Collaboration in learning communities could both trigger and mitigate learner anxiety. Collaborative learning could cause anxiety, especially when learners are aware that their learning achievements will be evaluated and compared with those of their peers; this anxiety could, in turn, negatively influence their participation and motivation in learning through learning communities. Computer anxiety could also negatively influence learning outcomes in computer-supported learning communities (Celik and Yesilyurt, 2013). However, frequent interaction could acquaint learners with their environments: as learning through learning communities proceeds, learners become increasingly familiar with their peers and with competitive settings, so their anxiety might be reduced and their learning outcomes and coping strategies enhanced (Hilliard et al., 2020).

Learning communities can help mitigate learners’ anxiety. Learning can often be challenging and overwhelming, leading to feelings of anxiety and stress. However, being part of a learning community can alleviate these negative emotions by providing a supportive and collaborative environment. By interacting with peers who share similar learning experiences and challenges, learners realize they are not alone in their struggles. This sense of shared experience and commonality can help reduce anxiety by providing reassurance and support. Learning communities can also offer opportunities for collaboration and peer learning, which can help alleviate anxiety by distributing the workload and fostering a sense of shared responsibility. When learners work together and support one another, the burden of learning may seem less daunting, reducing anxiety levels. Additionally, learning communities often promote a growth mindset, emphasizing the idea that intelligence and skills can be developed over time with effort and practice. This mindset can help alleviate anxiety by reducing the fear of failure and fostering a more positive perception of learning. Consequently, learning communities can provide a nurturing and supportive environment that helps mitigate learners’ anxiety by promoting shared experiences, collaboration, and a growth mindset.

Properly designed online learning communities could improve learning outcomes in various aspects. Online learning platforms could establish learning communities through advanced communicative technologies. Online learning platforms, e.g. IRC Francais, could improve foreign language learning effectiveness through learning communities, improve digital literacy, enhance self-efficacy, facilitate knowledge acquisition, and foster learning motivation (Insaard and Netwong, 2015 ). Online learning platforms such as the Hellenic American Union in Greece could improve second language learning skills through learning communities (Halkias and Mills, 2008 ). Online learning platforms such as UNIV-RCT could provide plentiful learning resources through learning communities to improve problem-solving skills, enhance collaborative learning ability, and maintain French language proficiency (Stoytcheva, 2017 ). Communication in learning communities could enhance individual awareness, team collaborative skills, and learning outcomes via online interactions (Chou et al., 2014 ).

Communication, enhanced through online technologies, could increase learning outcomes. An online platform could improve communication through cloud learning communities, and online teaching was effective when delivered through professional learning communities (Karo and Petsangsri, 2021). Bilateral communication through learning communities could improve learning outcomes, and communication through learning communities could improve cross-cultural communication and learning experiences (Kamihira et al., 2011). Computer-assisted communication through learning communities could increase virtual engagement as well as social and cognitive presence. Communication is thus an indispensable factor that may facilitate learning community-assisted learning and teaching, and teachers, developers, and course designers could pay special attention to ways of enhancing communication through online communicative technologies.

Improving learning outcomes through learning communities involves creating a supportive and collaborative environment that fosters engagement, participation, and active learning. Educators can encourage learners to interact with each other through group discussions, collaborative projects, or online forums; this interaction allows for the exchange of ideas, diverse perspectives, and constructive feedback, which can deepen understanding and enhance learning. They can encourage learners to actively engage with course materials and concepts through problem-solving activities, case studies, hands-on experiments, or simulations; this active learning approach promotes critical thinking, application of knowledge, and a deeper understanding of the subject matter. They can create opportunities for learners to connect with each other, such as icebreaker activities, regular check-ins, or social events, and foster a culture of inclusivity, respect, and support that gives learners a safe space to express their ideas, ask questions, and seek help when needed. They can also offer clear learning objectives, guidelines, and resources to support learners’ progress, and promote self-reflection and self-assessment practices to help learners monitor their progress, identify areas for improvement, and set goals for growth.

In addition, a well-designed online learning community can significantly improve learning outcomes through collaboration and interaction, peer-to-peer learning, timely feedback and support, a sense of belonging and motivation, personalized learning opportunities, access to diverse perspectives and resources, and flexibility and convenience.

Online learning communities facilitate collaboration and interaction among learners. By incorporating discussion forums, group projects, and virtual classrooms, learners can engage with and learn from one another in a collaborative manner. This active involvement promotes a deeper understanding of the subject matter.

Online learning communities facilitate peer-to-peer learning where learners can share their knowledge, experiences, and perspectives with their peers. Engaging in meaningful discussions and exchanging insights can enhance understanding and promote critical thinking among learners.

A well-designed online learning community provides timely feedback and support mechanisms, such as instructor feedback, peer assessment, and virtual office hours. These elements enhance comprehension, allow for clarification of doubts, and improve overall engagement with the learning materials.

By creating a supportive and inclusive environment, a well-designed online learning community fosters a sense of belonging among learners. This feeling of community helps motivate learners to actively participate, persist in their studies, and strive for better learning outcomes.

Online learning communities can offer personalized learning opportunities through adaptive learning technologies, individualized assignments, and tailored resources. Such customization allows learners to focus on their specific learning needs and preferences, leading to better comprehension and retention of the material.

Online learning communities often bring together learners from different regions, cultures, and backgrounds. This diversity provides learners with exposure to different perspectives and ideas, broadening their understanding and enriching their learning experience.

Online learning communities offer the flexibility to access learning materials and engage with fellow learners at any time and from anywhere. This convenience allows learners to adapt their learning to their individual schedules and preferences, resulting in enhanced engagement and better learning outcomes.

To sum up, a well-designed online learning community enables collaboration, promotes peer-to-peer learning, provides timely feedback and support, fosters a sense of belonging, offers personalization, exposes learners to diverse perspectives, and provides flexibility. These elements collectively contribute to significant improvements in learning outcomes.

Deeper insights into learning communities and related factors

Learning communities are social environments where individuals come together to learn, share knowledge, and support each other’s learning journeys. Online learning communities specifically refer to these communities facilitated through digital platforms, enabling learners from different locations to connect and collaborate virtually. Deep insights into learning communities and related factors can be further explored.

Social constructivism is an important element to be included in learning communities. Learning communities are based on the principle of social constructivism, which suggests that knowledge is actively constructed through social interactions and collaboration. In a learning community, learners engage in discussions, share ideas, and collectively build knowledge through their interactions.

Sense of community plays an important role in community-based learning. A crucial aspect of learning communities is the development of a sense of community among members. The feeling of belonging, shared goals, and support within the community fosters a positive learning environment. The sense of community encourages active participation, cooperation, and a sense of accountability among learners.

Active learning is facilitated in communities. Learning communities promote active learning rather than passive consumption of information. Learners are encouraged to contribute, ask questions, and critically engage with the learning content. This active participation enhances comprehension, retention, and application of knowledge.

Roles of facilitators are essential in learning communities. Facilitators play a significant role in online learning communities by guiding and supporting learners. They create a structured framework, facilitate discussions, provide feedback, and encourage participation. Skilled facilitators can effectively nurture a collaborative learning environment and address individual learning needs.

Peer learning and support are essential components of learning communities. Learners can benefit from the diverse knowledge, experiences, and perspectives of their peers. Peer feedback, collaboration on projects, and collective problem-solving contribute to deeper learning and skill development.

Reflection and metacognition are considered important elements in learning communities. Learning communities encourage learners to reflect on their learning experiences and engage in metacognition, which involves thinking about their thinking. Reflection helps learners consolidate their understanding, identify areas for improvement, and set goals for further learning.

Technology and digital tools can be used in learning communities. Online learning communities heavily rely on technology and digital tools to facilitate communication, collaboration, and access to resources. Learning management systems, communication platforms, multimedia resources, and online forums support and enhance the learning experience within the community.

Lifelong learning and professional development are strongly supported by learning communities. Learning communities provide opportunities for lifelong learning and continuous professional development: learners can stay updated with the latest knowledge and trends in their field, acquire new skills, and build professional networks within the community.

Motivation and engagement are important factors influencing the effect of learning communities. Engaging and motivating learners is crucial for the success of learning communities. Incorporating gamification elements, interactive activities, and recognition of achievements can enhance learner motivation and sustain engagement over time.

Assessment and evaluation are important mechanisms for sustaining the development of learning communities. Learning communities employ various methods of assessment and evaluation to measure learning outcomes, including quizzes, assignments, peer evaluations, and self-assessments. The feedback received through assessments helps learners identify areas for improvement and guides future learning efforts.

In conclusion, learning communities foster active learning, collaboration, peer support, and reflection. Skilled facilitators, technology, and an effective sense of community contribute to creating an engaging and supportive learning environment. By focusing on these factors, learning communities can significantly enhance the learning outcomes and overall learning experience for individuals.

Recommendations for optimizing community-based learning outcomes

Building a strong sense of community, fostering active collaboration, and facilitating meaningful connections are key to optimizing community-based learning outcomes. A sense of belonging can facilitate community-based learning: educators should create an inclusive and welcoming learning community that values and respects the contributions of all members, and encourage learners to actively participate and engage in discussions, promoting a sense of belonging and ownership within the community. This can enhance motivation and commitment to learning.

Active collaboration is an important factor in the success of community-based learning. Learning activities should be designed to promote active collaboration among community members: group projects, discussion forums, and peer-to-peer mentoring programs encourage learners to work together, share knowledge, and learn from each other. This collaborative approach enhances critical thinking and problem-solving skills and fosters a deeper understanding of the subject matter.

It is important to cultivate meaningful connections between community members. To promote mentorship, professional development, and access to a broader range of knowledge and resources, learners should be encouraged to establish meaningful connections with their peers, facilitators, mentors, and industry professionals within the community. This can be facilitated through various opportunities such as networking events, guest lectures, and virtual meet-ups.

Major findings

This study presents a systematic review based on the PRISMA framework, finding that the utilization of learning communities can yield enhancements in communication, motivation, and learning outcomes, along with a reduction in learners’ anxiety. It is suggested that well-designed online learning communities have the potential to improve learning outcomes, while the integration of online technologies can further augment communication and subsequently enhance learning outcomes within learning communities. Additionally, the researchers put forward several suggestions aimed at enhancing learning outcomes through the implementation of learning communities.

Limitations

Although this study was rigorously designed, it is limited in several respects. First, it could not leverage all relevant publications due to the limitation of library resources. Second, it undertakes a systematic review without sufficient quantitative data support; the number of included studies (35) is insufficient to underpin the conclusions in the absence of quantitative data. Lastly, other relevant factors may have been excluded from this study and may need further investigation in the future.

Implications for future research

Future research could integrate entertainment elements into learning communities. Serious games could stimulate learners’ interest and promote their learning motivation by integrating entertainment into learning communities (Tam, 2022). Learners could play serious games with team members and subconsciously acquire the knowledge embedded in the games and their plots. Teachers could guide students to focus on how to achieve the goals set in the games, while students engage with the fun of gameplay. The difficulty of entertainment-based learning communities may lie in the development and design of serious games for adult learners; future researchers could devote themselves to creating serious games through interdisciplinary efforts.

Educational administrators could consider including learning community-based learning and teaching in future policy. Some countries and regions have already implemented such policies; for instance, learning communities were included as an important educational policy in Scotland (Hancock and Hancock, 2021). Organizing a learning community may require administrators to coordinate between different individuals and institutions. Individuals may possess different personality traits and preferences, so coordinators need to meet different demands and establish a harmonious learning community. Teachers could explain the goals and process of learning community-based learning and encourage learners to participate in the learning activities. The formation of learning communities may nevertheless be confronted with unexpected challenges.

Future teachers could combine traditional pedagogy with learning communities, especially in language education. For instance, the combination of a popular teaching model with learning communities could leverage educational technologies and improve community-assisted Spanish language learning outcomes. The learning community could cultivate a Spanish language learning space for students and teachers to interact with each other and solve difficult problems (Overfield, 2003 ). In a learning community, students could enhance their interactions and communication with peers and teachers to improve their language practice skills. They could also foster their critical thinking ability through the learning community.

Advanced technologies and digital literacy could improve learning community-based learning outcomes in the future. Learning technologies and innovative pedagogies could improve language learning by establishing a cyber-learning community via a flipped pedagogical approach. Online technologies could make learning communities easier to establish and communication smoother. Learning communities could promote meaningful and collaborative learning, increase opportunities for oral skill practice, and enhance language learning engagement through activities such as role play, storytelling, discussion, and presentation (Wu et al., 2017). Future development of learning community-based learning may depend largely on the development of information technologies and on the digital literacy of learners and teachers.

Future research could highlight how to improve task distribution and collaborative teaching in learning communities. For instance, in learning communities of English teachers, teachers played different roles and presented different identities, and they engaged in a higher proportion of reasoning teaching with lower distributed participation and less collaborative teaching (Cheng and Pan, 2019). An important task for teachers is to allot appropriate assignments to different team members in the learning community. Teachers could also encourage members’ collaboration in learning, improve the learning process, and provide timely feedback on students’ complaints or suggestions. Students, for their part, should hold a positive attitude towards collaborative learning in a community.

Future research could focus on how to improve students’ metacognition in a learning community. Community metacognition could improve communication among tertiary learners at Chungbuk University in learning communities. Students with higher levels of metacognition could adopt a cooperative strategy to learn in a community because they might be aware of the importance and benefits of community-based learning; they would collaborate with peers and teachers by raising questions and solving problems. On the contrary, those with lower metacognition might not perceive the benefits of learning communities and could thus refuse to collaborate with members or teachers. They would likely prefer individual learning, which might not benefit their learning outcomes.

Future research could also focus on gender differences in the sense of learning and attitudes toward privacy in learning communities. Females held a significantly stronger sense of learning and felt more comfortable with personal information revelation than their counterparts (Ozturk and Deryakulu, 2011 ). Learning attitudes could exert a great influence on learning outcomes in the context of learning communities (Eftimie, 2013 ). Positive attitudes could increase learning outcomes in the context of learning communities. Future researchers could make every effort to cater to different preferences in learning communities and improve community-based learning outcomes by adopting appropriate teaching strategies. This might pose great challenges to teachers and designers in the future.

Data availability

The datasets generated during and/or analyzed during the current study are openly available in the [OSF] repository, [ https://osf.io/m697t/?view_only=a3c261407d83424f93ddcfefcee607a3 ].

Blayone TJB, Vanoostveen R, Barber W, DiGiuseppe M, Childs E (2017) Democratizing digital learning: theorizing the fully online learning community model. Int J Educ Technol High Educ 14. https://doi.org/10.1186/s41239-017-0051-4

Brooks K, Adams SR, Morita-Mullaney T (2010) Creating inclusive learning communities for ELL students: transforming school principals’ perspectives. Theory Into Pract 49(2):145–151. https://doi.org/10.1080/00405841003641501

Cai SR, Zhu W (2012) The impact of an online learning community project on University Chinese as a foreign language students’ motivation. Foreign Lang Ann 45(3):307–329. https://doi.org/10.1111/j.1944-9720.2012.01204.x

Celik V, Yesilyurt E (2013) Attitudes to technology, perceived computer self-efficacy and computer anxiety as predictors of computer supported education. Comput Educ 60(1):148–158. https://doi.org/10.1016/j.compedu.2012.06.008

Cheng X, Pan XY (2019) English language teacher learning in professional learning communities: a case study of a Chinese secondary school. Prof Dev Educ 45(4):698–712. https://doi.org/10.1080/19415257.2019.1579109

Chou CL, Hirschmann K, Fortin AH, Lichstein PR (2014) The impact of a faculty learning community on professional and personal development: the facilitator training program of the American Academy on Communication in Healthcare. Acad Med 89(7):1051–1056. https://doi.org/10.1097/ACM.0000000000000268

Cohen J (1968) Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. Psychol Bull 70(4):213

Corbo JC, Rundquist A, Henderson C, Dancy MH (2016) Using asynchronous communication to support virtual faculty learning communities. In: Jones DL, Ding L, Traxler A (eds) Physics Education Research Conference (PERC), Sacramento, CA, July 20–21, 2016

De Witt C (2011) Communication in online learning communities: digital teaching in higher education as reflected by pragmatism. Z Padagogik 57(3):312–325

Duong QPT, Pham TN (2022) Moving beyond four walls and forming a learning community for speaking practice under the auspices of Facebook. E-learn Digit Media 19(1):1–18. https://doi.org/10.1177/20427530211028067

Eftimie R (2013) The role of pupil–teacher communication within E-learning communities. Is E-learning a good tool in improving educational performance? In: Giannakopoulos G, Sakas DP, Vlachos DS, KyriakiManessi D (eds) Proceedings of the 2nd International Conference on Integrated Information (IC-ININFO 2012), procedia social and behavioral sciences, vol. 73, Budapest, Hungary, August 30–September 3, 2012, pp. 196–204

Engeström Y (2001) Expansive learning at work: toward an activity theoretical reconceptualization. J Educ Work 14(1):133–156. https://doi.org/10.1080/13639080020028747

Faber CJ, Grigg S, Kirn A, Chasmar J, Benson LC (2014) Engineering student motivation and perceived metacognition in learning communities. In: ASEE (ed) 2014 ASEE annual conference & exposition, Indianapolis, IN, June 15–18, 2014

Furdu I, Patrut B, Varlan S (2015) Mobile notifier tool to enhance communication within learning communities. In: Patrut B, Andone D, Holotescu C, Grosseck G (eds) SMART 2014—International conference on Social Media in Academia—Research and Teaching (SMART), Timisoara, Romania, September 18–21, 2014

Grimm AT, Kanhai D, Landgraf J (2019) International student spouses and the English language: co-creating a low-stakes language learning community. J Int Stud 9(4):1172–1190. https://doi.org/10.32674/jis.v9i4.583

Halkias D, Mills GT (2008) Distance education in support of lifelong learning: the case of the Hellas Alive Web Platform in building Greek language learning communities. In: Mastorakis NE, Mladenov V, Bojkovic Z, Simian D, Kartalopoulos S, Varonides A (eds) Proceedings of the 12th WSEAS international conference on computers, PTS 1–3: new aspects of computers, Heraklion, Greece, July 23–25, 2008

Hancock A, Hancock J (2021) On the outside, looking in: learning community languages and Scotland’s 1. Curr Issues Language Plan 22(3):328–347. https://doi.org/10.1080/14664208.2020.1867415

Hilliard J, Kear K, Donelan H, Heaney C (2020) Students’ experiences of anxiety in an assessed, online, collaborative project. Comput Educ 143:103675. https://doi.org/10.1016/j.compedu.2019.103675

Hsu YC, Hung JL, Ching YH (2013) Trends of educational technology research: more than a decade of international research in six SSCI-indexed refereed journals. Educ Technol Res Dev 61(4):685–705. https://doi.org/10.1007/s11423-013-9290-9

Insaard S, Netwong T (2015) Development of professional learning community of practice to enhance practical community using ICT for instruction of basic education teachers. In: Chova LG, Martinez AL, Torres IC (eds) INTED2015: 9th International Technology, Education and Development Conference. INTED Proceedings, Madrid, Spain, March 2–4, 2015, pp. 1968–1976

Intasingh S (2019) Non-science teacher’s teaching science: problems, adaptation, and anxiety. In: Yuenyong C, Sangpradit T (eds) Proceedings of the 6th International Conference of Science Educators and Teachers (ISET) 2018, AIP Conference Proceedings, 2081, 030014

Isbell DR (2018) Online informal language learning: insights from a Korean learning community. Language Learn Technol 22(3):82–102

Kamihira T, Aoki M, Nakano T (2011) Building a shared cross-cultural learning community for visual communication design education. In: Kurosu M (ed) 2nd International conference on Human Centered Design (HCD)/14th International Conference on Human–Computer Interaction (HCI), Orlando, FL, July 9–14, 2011

Karo D, Petsangsri S (2021) The effect of online mentoring system through professional learning community with information and communication technology via cloud computing for pre-service teachers in Thailand. Educ Inf Technol 26(1):1133–1142. https://doi.org/10.1007/s10639-020-10304-2

Karpenko AE, Sterlitz SJ, Garcia-Hammaker SS (2021) Virtual online learning communities reducing dental student stress and anxiety. J Dent Educ 85:1195–1196. https://doi.org/10.1002/jdd.12430

Kim J, Jung PS (2018) Teachers’ motivation for voluntary participation and the characteristics of teacher learning community impact on transfer of learning. J Learn-Cent Curric Instr 18(10):953–975. https://doi.org/10.22251/jlcci.2018.18.10.953

Lee YS (2021) Successful learning communities during times of disruption: developing a community of inquiry in business communication. Bus Commun Res Pract 4(1):57–64. https://doi.org/10.22682/bcrp.2021.4.1.57

Lopez de la Serna A, Bilbao Quintana N, Romero Andonegui A (2021) Motivation and sense of belonging in the Virtual Learning Communities at the university. Comparative study. EDMETIC 10(2):227–249. https://doi.org/10.21071/edmetic.v10i2.12988

MacKenzie J, Bell S, Bohan J, Brown A, Burke J, Cogdell B, Jamieson S, McAdam J, McKerlie R, Morrow L, Paschke B, Rea P, Tierney A (2010) From anxiety to empowerment: a Learning Community of University Teachers. Teach High Educ 15(3):273–284. https://doi.org/10.1080/13562511003740825

Magana AJ, Jaiswal A, Madamanchi A, Parker LC, Gundlach E, Ward MD (2021) Characterizing the psychosocial effects of participating in a year-long residential research-oriented learning community. Current Psychol. https://doi.org/10.1007/s12144-021-01612-y

Ng B, Latife A (2022) Exploring students’ learning and motivation in a lesson study for learning community (LSLC) environment: a new perspective. Int J Lesson Learn Stud. https://doi.org/10.1108/IJLLS-01-2022-0007

Overfield DM (2003) Creating a language learning community within and beyond the classroom. In: Cherry CM (ed) Models for excellence in second language education: dimension 2003, joint conference of the Southern Conference on Language Teaching (SCOLT)/Foreign-Language-Association-of-Georgia (FLAG), Atlanta, GA, February 27–March 1, 2003

Ozturk E, Deryakulu D (2011) The effect of type of computer mediated communication tools on social and cognitive presence in online learning community. Hacet Univ Egitim Fak Derg-Hacet Univ J Educ 41:349–359

Pagan FJB, Ventura RC, Gomez-Garre LC (2020) The participation of Primary Education teachers in the virtual learning community English Teachers Exchange Network (ETEN). Rev Fuentes 22(2):178–189. https://doi.org/10.12795/revistafuentes.2020.v22.i2.10

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD et al. (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 372:71. https://doi.org/10.1136/bmj.n71

Park, Dong S, Mee-Soon C, Hye-Young J (2019) The structural relationship among learning satisfaction, learning motivation, and academic self-efficacy in physical education students: multi-group analysis according to participating in learning community activity of center for teaching and learning. J Learn-Centered Curric Instr 19(11):551–571. https://doi.org/10.22251/jlcci.2019.19.11.551

Park H, Bong CHAS (2022) The effect of self-efficacy for group work, learning strategy, intrinsic motivation on task persistence in the online learning community. Res Humanit Soc Sci 30(2):186–209

Park MJ, Hee PS (2022) The mediating effect of communication skills on the relationship between collaborative self-regulation, collaborative self-efficacy, team efficacy, and problem-solving skills in university students learning community activities. J Learn-Centered Curric Instr 22(11):57–75. https://doi.org/10.22251/jlcci.2022.22.11.57

Parker M, Patton K, Goncalves L, Luguetti C, Lee O (2022) Learning communities and physical education professional development: a scoping review. Eur Phys Educ Rev 28(2):500–518. https://doi.org/10.1177/1356336X211055584

Puigdellivol I, Molina S, Sabando D, Gomez G, Petrenas C (2017) When community becomes an agent of educational support: communicative research on Learning Communities in Catalonia. Disabil Soc 32(7):1065–1084. https://doi.org/10.1080/09687599.2017.1331835

Ren HH, Ma C, Atlantis P (2016) Study on improvement and development of college English autonomous learning field within ubiquitous learning community. In: Atlantis Press (ed) Proceedings of the 2016 International conference on Computer Engineering and Information System (CEIS), ACSR—Advances in Computer Science Research, vol. 52, Shanghai, China, November 12–13, 2016, pp. 174–177

Reyes P, Tchounikine P (2004) Redefining the turn-taking notion in mediated communication of virtual learning communities. In: Lester JC, Vicari RM, Paraguacu F (eds) 7th International conference on intelligent tutoring systems, proceedings. Lecture notes in computer science, vol. 3220, Maceio, Brazil, August 30–September 3, pp. 295–304

Salazar D (2011) E-learning communities: meeting the instructional challenges of a diverse educational environment. In: Callaos N, Carrasquero JV, Oropeza A, Tremante A, Welsch F (eds) 5th International Multi-Conference on Society, Cybernetics and Information (IMSCI’11), Orlando, FL, July 19–22, 2011

Schechter C (2010) Learning from success as leverage for a professional learning community: exploring an alternative perspective of school improvement process. Teachers College Record 112(1):182–224

Seckman C (2018) Impact of interactive video communication versus text-based feedback on teaching, social, and cognitive presence in online learning communities. Nurse Educ 43(1):18–22. https://doi.org/10.1097/NNE.0000000000000448

Seo E, Eun-Young K (2018) The effects of participation and activity levels in learning community of college students: focusing on academic achievement and communication skills. J Learn-Centered Curric Instr 18(21):929–948. https://doi.org/10.22251/jlcci.2018.18.21.929

Shek MM-P, Leung K-C, To PY-L (2021) Using a video annotation tool to enhance student–teachers’ reflective practices and communication competence in consultation practices through a collaborative learning community. Educ Inf Technol 26(4):4329–4352. https://doi.org/10.1007/s10639-021-10480-9

Shin SI, Choi E-S (2018) Structural relationships among secondary school principals’ compassionate rationalism leadership, communication in organization, learning organization culture, teaching competency and active participation degree in teacher learning communities. J Korean Teacher Educ 35(4):41–70. https://doi.org/10.24211/tjkte.2018.35.4.41

Stoytcheva M (2017) Collaborative distance learning: developing an online learning community. In: Pasheva V, Popivanov N, Venkov G (eds) Proceedings of the 43rd international conference Applications of Mathematics in Engineering and Economics (AMEE’17), AIP conference proceedings, 1910, 060009, Sozopol, Bulgaria, June 8–13, 2017

Sukirman MK, Kabilan MK (2023) Indonesian researchers’ scholarly publishing: an activity theory perspective. High Educ Res Dev. https://doi.org/10.1080/07294360.2023.2209522

Sun YY, Franklin T, Gao F (2017) Learning outside of classroom: exploring the active part of an informal online English learning community in China. Br J Educ Technol 48(1):57–70. https://doi.org/10.1111/bjet.12340

Tam ACF (2022) Transforming preschool language teachers’ beliefs of implementing play-based learning in a professional learning community. Int J Early Years Educ. https://doi.org/10.1080/09669760.2022.2065247

Wen QF, Zhang H (2020) Building professional learning communities of foreign language teachers in higher education. Circulo De Linguistica Aplicada A La Comunicacion 84:1–12. https://doi.org/10.5209/clac.72815

Wenger E (1998) Communities of practice: learning, meaning, and identity. Cambridge University Press

Wenger-Trayner E, Wenger-Trayner B (2015) An introduction to communities of practice: a brief overview of the concept and its uses. https://www.wenger-trayner.com/introduction-to-communities-of-practice . Accessed 3 Jul 2023

Wu WCV, Hsieh JSC, Yang JC (2017) Creating an online learning community in a flipped classroom to enhance EFL learners’ oral proficiency. Educ Technol Soc 20(2):142–157

Yan Y, Yang LX (2019) Exploring contradictions in an EFL teacher professional learning community. J Teacher Educ 70(5):498–511. https://doi.org/10.1177/0022487118801343

Young A, Cavanagh M, Moloney R (2018) Building a whole school approach to professional experience: collaboration and community. Asia-Pac J Teacher Educ 46(3):279–291. https://doi.org/10.1080/1359866X.2018.1436689

Young C (2002) Task design principles of a campus-based communication skills course in a natural-virtual learning community. In: Kinshuk X, Lewis R, Akahori K, Kemp R, Okamoto T, Henderson L, Lee CH (eds) International conference on computers in education, Auckland, New Zealand, December 3–6, 2002, pp. 1305–1306

Yu ZG, Li M (2022) A bibliometric analysis of Community of Inquiry in online learning contexts over twenty-five years. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11081-w

Zhang S, Liu Q (2019) Investigating the relationships among teachers’ motivational beliefs, motivational regulation, and their learning engagement in online professional learning communities. Comput Educ 134:145–155. https://doi.org/10.1016/j.compedu.2019.02.013

Zhang YH (2016) Learning community in foreign language learning. In: Chang L, Guiran C, Zhen L (eds) Proceedings of the 6th international conference on Electronics, Mechanics, Culture and Medicine (EMCM), ACSR—advances in computer science research, vol. 45, Shenyang, China, December 29–31, 2015, pp. 394–397

Zhao J, Hua X, Li Z (2019) Empowering learning community of teachers in computer mediated communication. In: Wang S, Lin L, Hartsell T, Zhan H, Beedle J (eds) 2019 Proceedings of the eighth International Conference on Educational Innovation through Technology (EITT). University of Southern Mississippi, Biloxi, MS, pp. 27–31

Zhao X, McClure CD (2022) Gather.Town: a gamification tool to promote engagement and establish online learning communities for language learners. RELC J https://doi.org/10.1177/00336882221097216

Acknowledgements

The author would like to extend sincere gratitude to the anonymous reviewers and editors. This work is supported by the Key Research and Application Project of the Key Laboratory of Key Technologies for Localization Language Services of the State Administration of Press and Publication, “Research on Localization and Intelligent Language Education Technology for the ‘Belt and Road Initiative’” (Project Number: CSLS 20230012), and the Special fund of Beijing Co-construction Project-Research and reform of the “Undergraduate Teaching Reform and Innovation Project” of Beijing higher education in 2020-innovative “multilingual+” excellent talent training system (202010032003); Research on the Development of Leadership Skills for Foreign Language Instructors in Selected Chinese Universities (a sub-project by NSF, Grant No. 2106571).

Author information

Authors and affiliations

Foreign Languages Teaching Department, Qufu Normal University (Rizhao District), 276826, Rizhao, Shandong, China

Faculty of Foreign Studies, Beijing Language and Culture University, Beijing, China

  • Zhonggen Yu

Contributions

WC collected data, analyzed data, revised and confirmed the paper. ZY acquired the funding and conceptualized the study.

Corresponding author

Correspondence to Zhonggen Yu .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

The research was approved by the academic committee of the Faculty of Foreign Studies of Beijing Language and Culture University (Grant No. 20231037). The research did not involve human participants.

Informed consent

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Cao, W., Yu, Z. Exploring learning outcomes, communication, anxiety, and motivation in learning communities: a systematic review. Humanit Soc Sci Commun 10, 866 (2023). https://doi.org/10.1057/s41599-023-02325-2

Received: 20 February 2023

Accepted: 27 October 2023

Published: 24 November 2023

DOI: https://doi.org/10.1057/s41599-023-02325-2

CENTRE FOR TEACHING SUPPORT & INNOVATION


Developing Learning Outcomes

What are learning outcomes?

Learning outcomes are statements that describe the knowledge or skills students should acquire by the end of a particular assignment, class, course, or program. They help students:

  • understand why that knowledge and those skills will be useful to them
  • focus on the context and potential applications of knowledge and skills
  • connect learning in various contexts

Learning outcomes also help guide assessment and evaluation.

Good learning outcomes emphasize the application and integration of knowledge. Instead of focusing on coverage of material, learning outcomes articulate how students will be able to employ the material, both in the context of the class and more broadly.

Consider using approximately five to ten learning outcomes per course; this number allows the learning outcomes to cover a variety of knowledge and skills while retaining a focus on essential elements of the course.

Add learning outcomes to your Quercus course and learn more about developing learning outcomes in module 4 of the Course Design Foundations (CDF) self-paced asynchronous program ( enrol in the CDF ).

Examples of Learning Outcomes

For reference, see Bloom’s Taxonomy for a list of relevant active verbs.

  • identify and describe the political, religious, economic, and social uses of art in Italy during the Renaissance
  • identify a range of works of art and artists
  • analyze the role of art and of the artist in Italy at this time
  • analyze the art of the period according to objective methods
  • link different materials and types of art to the attitudes and values of the period
  • evaluate and defend their response to a range of art historical issues
  • provide accurate diagrams of cells and be able to classify cells from microscopic images
  • identify and develop data collection instruments and measures for planning and conducting sociological research
  • identify and classify their spending habits and prepare a personal budget
  • predict the appearance and motion of visible celestial objects
  • formulate scientific questions about the motion of visible celestial objects
  • plan ways to model and/or simulate an answer to the questions chosen
  • select and integrate information from various sources, including electronic and print resources, community resources, and personally collected data, to answer the questions chosen
  • communicate scientific ideas, procedures, results, and conclusions using appropriate SI units, language, and formats
  • describe, evaluate, and communicate the impact of research and other accomplishments in space technology on our understanding of scientific theories and principles and on other fields of endeavour
  • By the end of this course, students will be able to categorize macroeconomic policies according to the economic theories from which they emerge.
  • By the end of this unit, students will be able to describe the characteristics of the three main types of geologic faults (dip-slip, transform, and oblique) and explain the different types of motion associated with each.
  • By the end of this course, students will be able to ask questions concerning language usage with confidence and seek effective help from reference sources.
  • By the end of this course, students will be able to analyze qualitative and quantitative data, and explain how evidence gathered supports or refutes an initial hypothesis.
  • By the end of this course, students will be able to work cooperatively in a small group environment.
  • By the end of this course, students will be able to identify their own position on the political spectrum.

Specific Language

Learning outcomes should use specific language, and should clearly indicate expectations for student performance.

Vague Outcome : By the end of this course, students will have added to their understanding of the complete research process.

More Precise Outcome : By the end of this course, students will be able to:

  • describe the research process in social interventions
  • evaluate critically the quality of research by others
  • formulate research questions designed to test, refine, and build theories
  • identify and demonstrate facility in research designs and data collection strategies that are most appropriate to a particular research project
  • formulate a complete and logical plan for data analysis that will adequately answer the research questions and probe alternative explanations
  • interpret research findings and draw appropriate conclusions

Vague Outcome : By the end of this course, students will have a deeper appreciation of literature and literary movements in general.

More Precise Outcome : By the end of this course, students will be able to:

  • identify and describe the major literary movements of the 20th century
  • perform close readings of literary texts
  • evaluate a literary work based on selected and articulated standards

For All Levels

Learning outcomes are useful for all levels of instruction, and in a variety of contexts.

By the end of this course students will be able to:

  • identify the most frequently encountered endings for nouns, adjectives and verbs, as well as some of the more complicated points of grammar, such as aspect of the verb
  • translate short unseen texts from Czech
  • read basic material relating to current affairs using appropriate reference works, where necessary
  • make themselves understood in basic everyday communicative situations

By the end of this course, students will be able to:

  • identify key measurement problems involved in the design and evaluation of social interventions and suggest appropriate solutions
  • assess the strengths and weaknesses of alternative strategies for collecting, analyzing and interpreting data from needs analyses and evaluations in direct practice, program and policy interventions
  • identify specific strategies for collaborating with practitioners in developmental projects, formulation of research questions, and selection of designs and measurement tools so as to produce findings usable by practitioners at all levels
  • analyze qualitative data systematically by selecting appropriate interpretive or quantified content analysis strategies
  • evaluate critically current research in social work
  • articulate implications of research findings for explanatory and practice theory development and for practice/program implementation
  • instruct classmates and others in an advanced statistical or qualitative data analysis procedure

By the end of the course you will be able to:

  • identify several learning style models and know how to use these models in your teaching
  • construct and use learning objectives
  • design a course and a syllabus
  • implement the principles of Universal Instructional Design in the design of a course
  • use strategies and instructional methods for effective teaching of small classes and large classes
  • identify the advantages and disadvantages of different assessment methods
  • construct a teaching portfolio

Why Develop Learning Outcomes?

For students:

  • By focusing on the application of knowledge and skills learned in a course and on the integration of knowledge and skills with other areas of their lives, students are more connected to their learning and to the material of the course.
  • The emphasis on integration and generalizable skills helps students draw connections between courses and other kinds of knowledge, enhancing student engagement.
  • Students understand the conditions and goals of their assessment.

For instructors:

  • Developing learning outcomes allows for reflection on the course content and its potential applications, focusing on the knowledge and skills that will be most valuable to the student now and in the future.
  • Learning outcomes point to useful methods of assessment.
  • Learning outcomes allow instructors to set the standards by which the success of the course will be evaluated.

For institutions and administrators:

  • When an instructor considers the particular course or unit in the context of future coursework and the curriculum as a whole, it contributes to the development of a coherent curriculum within a decentralized institution and helps to ensure that students are prepared for future work and learning.
  • The application and integration of learning emphasized by learning outcomes reflect and support the contemporary nature and priorities of the university, enhancing student engagement, uncovering opportunities for interdisciplinary study, and providing guidance and support for students with many different kinds of previous academic preparation.
  • Learning outcomes provide structures from which courses and programs can be evaluated and can assist in program and curricular design, identify gaps or overlap in program offerings, and clarify instructional, programmatic, and institutional priorities.

Context of Learning

In developing learning outcomes, first consider the context of the learning taking place in the course. Relevant considerations include:

  • If the course is part of the major or specialization, what knowledge or skills should students have coming into the course? What knowledge or skills must they have by its conclusion in order to proceed through their program?
  • How can this course contribute to the student’s broad learning and the student’s understanding of other subjects or disciplines?
  • What are the priorities of the department or Faculty? How does the particular focus of the course contribute to those broader goals?
  • Does the course play a particular role within the student’s program (introductory, elective, summative)? How is the course shaped by this role?
  • What knowledge or skills gained in this course will serve students throughout their lives? How will the class shape the student’s general understanding of the world?
  • Which careers commonly stem from education in this field? What are the skills or knowledge essential to these careers?
  • What kinds of work are produced in those careers?
  • How can this course enrich a student’s personal or professional life?
  • Where will the student encounter the subject matter of the course elsewhere in his or her life? In what situations might the knowledge or skills gained in the course be useful to the student?

Tools for Developing Learning Outcomes

The process of developing learning outcomes offers an opportunity for reflection on what is most necessary to help learners gain this knowledge and these skills. Consider the following elements as you prepare your learning outcomes.

To begin the process of developing learning outcomes, it may be useful to brainstorm some key words central to the disciplinary content and skills taught in the course. You may wish to consider the following questions as you develop this list of key words:

  • What are the essential things students must know to be able to succeed in the course?
  • What are the essential things students must be able to do to succeed in the course?
  • What knowledge or skills do students bring to the course that the course will build on?
  • What knowledge or skills will be new to students in the course?
  • What other areas of knowledge are connected to the work of the course?

Scholars working in pedagogy and epistemology offer us taxonomies of learning that can help make learning outcomes more precise. These levels of learning can also help develop assessment and evaluation methods appropriate to the learning outcomes for the course.

Bloom’s Taxonomy and Structure of Observed Learning Outcomes (SOLO) Taxonomy

Three areas – content, skills, and values – can be used to identify and describe different aspects of learning that might take place in a course.

Content can be used to describe the disciplinary information covered in the course. This content might be vital to future work or learning in the area. A learning outcome focused on content might read:

By the end of this course, students will be able to recall the 5 major events leading up to the Riel Rebellion and describe their role in initiating the Rebellion.

Skills can refer to the disciplinary or generalizable skills that students should be able to employ by the conclusion of the class. A learning outcome focused on skills might read:

By the end of this course, students will be able to define the characteristics and limitations of historical research.

Values can describe some desired learning outcomes, the attitudes or beliefs imparted or investigated in a particular field or discipline. In particular, value-oriented learning outcomes might focus on ways that knowledge or skills gained in the course will enrich students’ experiences throughout their lives. A learning outcome focused on values might read:

By the end of this course, students will be able to articulate their personal responses to a literary work they have selected independently.

Characteristics of Good Learning Outcomes

Good learning outcomes are very specific , and use active language – and verbs in particular – that make expectations clear and ensure that student and instructor goals in the course are aligned.

Where possible, avoid terms, like understand or demonstrate, that can be interpreted in many ways.

See the Bloom’s Taxonomy resource for a list of useful verbs.

Vague Outcome : By the end of the course, I expect students to increase their organization, writing, and presentation skills.

More precise outcome : By the end of the course, students will be able to:

  • produce professional quality writing
  • effectively communicate the results of their research findings and analyses to fellow classmates in an oral presentation

Vague Outcome : By the end of this course, students will be able to use secondary critical material effectively and to think independently.

More precise outcome : By the end of this course, students will be able to evaluate the theoretical and methodological foundations of secondary critical material and employ this evaluation to defend their position on the topic.

Keep in mind, learning outcomes:

  • should be flexible: while individual outcomes should be specific, instructors should feel comfortable adding, removing, or adjusting learning outcomes over the length of a course if initial outcomes prove to be inadequate
  • are focused on the learner: rather than explaining what the instructor will do in the course, good learning outcomes describe knowledge or skills that the student will employ, and help the learner understand why that knowledge and those skills are useful and valuable to their personal, professional, and academic future
  • are realistic, not aspirational: all passing students should be able to demonstrate the knowledge or skill described by the learning outcome at the conclusion of the course. In this way, learning outcomes establish standards for the course
  • focus on the application and integration of acquired knowledge and skills: good learning outcomes reflect and indicate the ways in which the described knowledge and skills may be used by the learner now and in the future
  • indicate useful modes of assessment and the specific elements that will be assessed: good learning outcomes prepare students for assessment and help them feel engaged in and empowered by the assessment and evaluation process
  • offer a timeline for completion of the desired learning

Each assignment, activity, or course might usefully employ between approximately five and ten learning outcomes; this number allows the learning outcomes to cover a variety of knowledge and skills while retaining a focus on essential elements of the course.

  • Speak to the learner: learning outcomes should address what the learner will know or be able to do at the completion of the course
  • Measurable: learning outcomes must indicate how learning will be assessed
  • Applicable: learning outcomes should emphasize ways in which the learner is likely to use the knowledge or skills gained
  • Realistic: all learners who complete the activity or course satisfactorily should be able to demonstrate the knowledge or skills addressed in the outcome
  • Time-bound: the learning outcome should set a deadline by which the knowledge or skills should be acquired
  • Transparent: learning outcomes should be easily understood by the learner
  • Transferable: learning outcomes should address knowledge and skills that will be used by the learner in a wide variety of contexts

The SMART(TT) method of goal setting is adapted from Blanchard, K., & Johnson, S. (1981). The one minute manager. New York: Harper Collins.
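Purely as an illustration, and not part of the original guide, the SMART(TT) checklist can also be applied programmatically when auditing a list of draft outcomes. The Python sketch below is a minimal, hypothetical example: the LearningOutcome fields, the heuristic checks (which cover only a subset of the criteria), and the sample outcome are all invented for demonstration.

```python
from dataclasses import dataclass, field

@dataclass
class LearningOutcome:
    """One course-level learning outcome with a few SMART(TT) attributes."""
    statement: str                      # learner-focused statement
    assessment: str = ""                # how the outcome will be measured
    application: str = ""               # where the learner is likely to use it
    deadline: str = ""                  # when it should be achieved (time-bound)
    transferable_contexts: list = field(default_factory=list)

    def smart_tt_gaps(self):
        """Return the SMART(TT) criteria this outcome does not yet satisfy (heuristic only)."""
        gaps = []
        if "students will be able to" not in self.statement.lower():
            gaps.append("Speak to the learner")
        if not self.assessment:
            gaps.append("Measurable")
        if not self.application:
            gaps.append("Applicable")
        if not self.deadline:
            gaps.append("Time-bound")
        if not self.transferable_contexts:
            gaps.append("Transferable")
        return gaps

# Hypothetical usage
outcome = LearningOutcome(
    statement="By the end of the course, students will be able to produce professional quality writing.",
    assessment="graded written report with a rubric",
    application="workplace reports and graduate coursework",
    deadline="end of term",
    transferable_contexts=["other courses", "professional writing"],
)
print(outcome.smart_tt_gaps())  # [] -> all checked criteria are addressed
```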

Assessment: Following Through on Learning Outcomes

Through assessment, learning outcomes can become fully integrated in course design and delivery. Assignments and exams should match the knowledge and skills described in the course’s learning outcomes. A good learning outcome can readily be translated into an assignment or exam question; if it cannot, the learning outcome may need to be refined.

One way to match outcomes with appropriate modes of assessment is to return to Bloom’s Taxonomy . The verbs associated with each level of learning indicate the complexity of the knowledge or skills that students should be asked to demonstrate in an assignment or exam question.

For example, an outcome that asks students to recall key moments leading up to an historical event might be assessed through multiple choice or short answer questions. By contrast, an outcome that asks students to evaluate several different policy models might be assessed through a debate or written essay.
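To make this verb-to-assessment matching concrete, here is a small, hypothetical lookup sketch in Python. The verb groupings and suggested formats are illustrative simplifications, not an official mapping of Bloom's levels.

```python
# Illustrative, non-authoritative pairing of Bloom-style verbs with assessment formats.
VERB_TO_ASSESSMENT = {
    "recall": "multiple choice or short answer questions",
    "identify": "multiple choice or short answer questions",
    "describe": "short written responses",
    "analyze": "written essay or case analysis",
    "evaluate": "debate, critique, or written essay",
    "design": "project, portfolio, or prototype",
}

def suggest_assessment(outcome: str) -> str:
    """Return a plausible assessment format for the first recognized verb in an outcome."""
    for verb, assessment in VERB_TO_ASSESSMENT.items():
        if verb in outcome.lower():
            return assessment
    return "no verb recognized; consider rewording the outcome with a more specific active verb"

print(suggest_assessment(
    "By the end of this course, students will be able to evaluate several policy models."
))  # -> debate, critique, or written essay
```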

Learning outcomes may also point to more unconventional modes of assessment. Because learning outcomes can connect student learning with its application both within and outside of an academic context, learning outcomes may point to modes of assessment that parallel the type of work that students may produce with the learned knowledge and skills in their career or later in life.

Unit of Instruction (e.g. lecture, activity, exam, course, workshop) and Assessment Examples

Objective : What content or skills will be covered in this instruction?

  • Identification and evaluation of severe weather patterns, use of weather maps

Outcome : What should students know or be able to do as a result of this unit of instruction?

  • By completing this assignment, students will be able to accurately predict severe weather using a standard weather map.

How do you know? : How will you be able to tell that students have achieved this outcome?

  • Student predictions will be compared with historical weather records.

Assessment : What kind of work can students produce to demonstrate this?

  • Based on this standard weather map, please indicate where you would expect to see severe weather in the next 24-hour period. Your results will be compared with historical weather records.

A second example:

Objective : Stylistic characteristics and common themes of Modernist literature

Outcome : By the end of this unit, students will be able to identify the stylistic and thematic elements of Modernism.

How do you know? : Students will be able to identify a passage from a Modernist novel they have not read.

Assessment : Read this passage. Identify which literary movement it represents and which qualities drew you to that conclusion.

Course, Program, Institution: Connecting Learning Outcomes

Learning outcomes can also be implemented at the program or institutional level to assess student learning over multiple courses, and to monitor whether students have acquired the necessary knowledge and skills at one stage to be able to move onto the next.

Courses that require prerequisites may benefit from identifying a list of outcomes necessary for advancement from one level to another. When this knowledge and these skills are identified as outcomes as opposed to topics, assessment in the first level can directly measure preparation for the next level.

Many major and specialist programs identify a list of discipline-specific and multi-purpose skills, values, and areas of knowledge graduating students in the program will have. By articulating these as things that students will know or be able to do, the benefits of a program of study can be clearly communicated to prospective students, to employers, and to others in the institution.

Athabasca University developed learning outcomes for all its undergraduate major programs. Please see their Anthropology BA learning outcomes as an example.

Academic plans increasingly include a list of learning outcomes that apply across programs of study and even across degree levels. These outcomes provide an academic vision for the institution, serve as guidelines for new programs and programs undergoing review, and communicate to members of the university and the public at large the academic values and goals of the university. As previously discussed, the best learning outcomes address course-specific learning within the context of a student’s broader educational experience. One way to contribute to a coherent learning experience is to align course outcomes, when appropriate, with institutional priorities.

The University of Toronto’s academic plan, Stepping Up: A framework for academic planning at the University of Toronto, 2004-2010, outlines institutional goals in relation to the learning experience of our undergraduate and graduate students. These priorities are further articulated in “Companion Paper 1: Enabling Teaching and Learning and the Student Experience”. The skills outcomes meant to apply to all undergraduate programs follow.

  • knowing what one doesn’t know and how to seek information
  • able to think: that is, to reason inductively and deductively, to analyze and to synthesize, to think through moral and ethical issues, to construct a logical argument with appropriate evidence
  • able to communicate clearly, substantively, and persuasively both orally and in writing
  • able not only to answer questions through research and analysis but to exercise judgment about which questions are worth asking
  • knowledgeable about and committed to standards of intellectual honesty and use of information
  • knowing how to authenticate information, whether it comes from print sources or through new technologies
  • able to collaborate with others from different disciplines in the recognition that multidisciplinary approaches are necessary to address the major issues facing society
  • understanding the methods of scientific inquiry; that is, scientifically literate

Curriculum Mapping: Translating between local and global learning outcomes

At the global program or institutional level, learning outcomes are often necessarily vague to allow for flexibility in their implementation and assessment. Consequently, in order to be effectively applied at the local level of a course or class, they must be reformulated for the particular setting. Similarly, learning outcomes from individual courses may be extrapolated and generalized in order to create program or institution-wide learning outcomes.

Both of these processes are most frequently accomplished through a technique called “curriculum mapping” . When moving from programmatic or institutional to course or class outcomes, curriculum mapping involves identifying which courses, portions of courses, or series of courses fulfill each programmatic or institutional learning outcome.

The global learning outcomes can then be matched with course-specific outcomes that directly address the content and skills required for that particular subject material. Identifying and locating all the learning outcomes encountered by a student over the course of their program can help present learning as a coherent whole to students and others, and can help students make the connection between their learning in one course and that in another. Maki (2004) notes that understanding where particular pieces of learning take place can help students take charge of their own education:

A map reveals the multiple opportunities that students have to make progress on collectively agreed-on learning goals, beginning with their first day on campus. Accompanied by a list of learning outcomes, maps can encourage students to take responsibility for their education as a process of integration and application, not as a checklist of courses and educational opportunities. Maps can also position students to make choices about courses and educational experiences that will contribute to their learning and improve areas of weakness.

For more information about and examples of curriculum mapping, please see Maki, P. (2004). Maps and inventories: Anchoring efforts to track student learning. About Campus 9(4), 2-9.
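As a purely illustrative aside, not drawn from Maki (2004) or the guide above, a curriculum map can be kept as a simple data structure that records which courses address each program-level outcome, which makes gaps and overlaps easy to spot. The outcomes and course codes below are invented.

```python
# Hypothetical curriculum map: program-level outcomes -> courses said to address them.
curriculum_map = {
    "communicate clearly in writing": ["ENG101", "HIS210", "SOC330"],
    "analyze qualitative and quantitative data": ["STA120", "SOC330"],
    "evaluate scholarly sources critically": ["ENG101", "HIS210"],
    "collaborate across disciplines": [],
}

def gaps_and_overlaps(mapping):
    """Flag outcomes no course addresses (gaps) and outcomes many courses address (overlaps)."""
    gaps = [outcome for outcome, courses in mapping.items() if not courses]
    overlaps = [outcome for outcome, courses in mapping.items() if len(courses) > 2]
    return gaps, overlaps

gaps, overlaps = gaps_and_overlaps(curriculum_map)
print("Gaps:", gaps)          # e.g. ['collaborate across disciplines']
print("Overlaps:", overlaps)  # e.g. ['communicate clearly in writing']
```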

This work is licensed under a Creative Commons BY-NC-SA 4.0 International License


Learning Outcomes 101: A Comprehensive Guide


For those trying to figure out what learning outcomes are, along with their types, the steps for writing them, and how to assess them, this article is for you. Read on to learn how this guide can inform your teaching strategies.


Introduction

In today’s education landscape, learning outcomes play a pivotal role in shaping educators’ teaching strategies and guiding the academic progress of students. Defining the road map of a learning session, learning outcomes focus on the knowledge, skills, and attitudes that learners should grasp upon the completion of a course or program.

The relevance and applicability of learning outcomes extend to both the educators and the learners, providing the former with a clear teaching structure and the latter with expectations for their learning.

Defining Learning Outcomes

What are learning outcomes.

Learning outcomes are statements that describe the knowledge, skills, and attitudes that students should have after completing a learning activity or program. These outcomes articulate what students should know or be able to do as a result of the learning experience. This includes knowledge gained, new skills acquired, a deepened understanding of the subject matter, attitudes and values influenced by learning, as well as changes in behavior that can be applied in specific contexts.


Learning outcomes are critical in the educational setting because they guide the design of curriculum, instruction, and assessment methods. They are the foundation of a course outline or syllabus, providing clear direction for what will be taught, how it will be taught, and how learning will be assessed. They hold teachers accountable for delivering effective instruction that leads to desired learning outcomes and help students understand what is expected of them, enhancing their learning experience.

Learning outcomes also equip students with transferable skills and knowledge. They provide a clear description of what the learner can apply in real-world contexts or in their further studies. This makes learning outcomes not only crucial in the academic setting but also in preparing learners for the workforce.

Differentiating Learning Outcomes from Learning Objectives

Learning Objectives

Learning objectives are more teacher-centered and describe what the teacher intends to teach or what the instruction aims to achieve in the scope of a lesson or unit. These may involve specific steps or methodologies used to impart knowledge or skills to the students.

Learning Outcomes

On the other hand, learning outcomes are student-centered and focus on what the student is expected to learn and demonstrate at the end of a learning period. These are usually measurable and observable, making them useful tools for assessing a student’s learning progress and the effectiveness of a lesson or course.

Examples of Learning Outcomes

Various academic disciplines utilize explicit learning outcomes to provide students with a clear understanding of what they are expected to achieve by the end of a course, unit, or lesson.

These specific learning outcomes are instrumental in steering the progress of students throughout their educational journey. They provide key alignment within the education system, ensuring that instructions, learning activities, assessments, and feedback are all constructed around accomplishing these predefined objectives.

3 Types of Learning Outcomes

1. Knowledge Outcomes

Knowledge outcomes represent a student’s capacity to remember and comprehend the information and concepts imparted during lessons. These outcomes are usually assessed through examinations or tests, which gauge how well the student has retained the information.

This strand of learning outcomes is generally divided into two categories: declarative knowledge and procedural knowledge. Declarative knowledge outcomes evaluate the student’s aptitude to recollect and identify factual information, such as the capital city of a country. In contrast, procedural knowledge outcomes measure the student’s ability to utilize rules and processes fruitfully to solve problems, such as mathematical calculations. Thus, both these subsets form the bedrock of a student’s academic accomplishments.

2. Skill Outcomes

Skill outcomes assess a student’s ability to apply learned theories or concepts within real-world contexts. These are practical skills often developed through hands-on experience and active participation, such as fieldwork or lab experiments. They may also stem from the application of theoretical knowledge to solve practical problems.

Skill outcomes are commonly split into two categories: generic skills and specific skills. Generic skills are transferable skills that can be used across various fields, such as communication or teamwork skills. Specific skills pertain to specific fields or jobs, such as the ability to use laboratory equipment correctly or the ability to compile code in a specific programming language.

3. Attitudinal Outcomes

Attitudinal outcomes play a significant role in shaping a student’s viewpoint and actions, both in the classroom and beyond. They can reflect changes in attitudes, enhanced appreciation of alternate perspectives, or an inclination to engage with different individuals or groups.

Steps in Formulating Learning Outcomes

1. Determine the Knowledge, Essential Skills, and Attitudes Expected

For example, in a mathematics course, core skills that a student may need to develop could include solving linear equations, while key knowledge to be absorbed might involve grasping the principles of calculus.

By identifying these skills and knowledge, the foundation is laid for designing effective learning outcomes. These insights then guide subsequent steps in the process of formulating concrete and measurable learning outcomes for specific courses or programs.

2. Draft the Learning Objectives

At this juncture, educators start to formulate the objectives that guide the learning process. These objectives should be specific, measurable, attainable, relevant, and time-bound (SMART) to ensure they can effectively guide students’ learning.

Hence, a broad objective about the First World War can be rephrased as: “By the end of the semester, students will be able to 1) identify the primary causes of the First World War, and 2) analyze the effects of the First World War.”

3. Develop the Learning Outcomes

The third step in the process involves evolving these learning objectives into learning outcomes. Unlike objectives, which refer to goals that educators set for their students, learning outcomes refer to demonstrable skills or competencies that learners should exhibit upon the completion of a course or program. They are typically written from a learner’s perspective and are often accompanied by associated assessment criteria.

4. Write Clear and Achievable Outcomes

An effective learning outcome should be worded clearly enough that it becomes obvious to both learners and educators whether or not it has been achieved. Each outcome must also be achievable within the constraints of the learning program.

For example, an achievable outcome of an English course might be: “At the end of the course, students will be able to write a well-structured and clearly argued essay”.

5. Understand and Refine Learning Outcomes

Based on the assessment of learning outcomes, regularly refine and revise the learning outcomes.

For instance, an English course’s prior outcome might be modified following a review to something like: “Upon completing the course, students will demonstrate their capability to write a well-structured persuasive essay with hardly any grammatical errors.”

Methods to Assess Learning Outcomes

Beyond traditional tests and quizzes, other methods include project-based assessments or portfolios, which are ideal for evaluating more complex learning outcomes, such as problem-solving skills, creativity, and the ability to apply knowledge in real-world situations. These types of assessments allow students to demonstrate their skills and knowledge in a more meaningful context, and they provide evidence of learning that is more authentic and comprehensive than a single test score.

Importance of Consistent and Fair Assessments

Consistency and fairness in assessments are not only important for accuracy, but also for promoting a positive learning environment. Assessments should be built around clear and measurable outcomes, and students should understand these outcomes ahead of time. This ensures that every student knows what they are expected to learn and how their learning will be measured.

Different Ways to Evaluate Learning Outcomes

The most common way to evaluate learning outcomes is through formative and summative assessments. Formative assessments take place during instruction and provide ongoing feedback that informs both teaching and learning.

Summative assessments take place after instruction and are often used to evaluate students’ mastery of content and skills. These assessments might include final exams, term papers, or presentations.

However, regardless of the type, all assessments must be developed with clear and direct alignment to learning outcomes.

Feedback as a Key Component in Evaluating Learning Outcomes

Through consistent, purposeful and tailored feedback, educators have the power to steer their students’ learning trajectory towards achieving desired outcomes. It’s a navigational tool that informs students’ journey in gaining new knowledge and honing skills that are in sync with envisioned learning outcomes.

Impacts and Challenges of Learning Outcomes

Learning Outcomes: Proven Catalysts in Students’ Upward Progression

Learning outcomes also fuel learners’ confidence and desire to learn. They provide incremental milestones towards the ultimate goal, enabling learners to revel in frequent success and thus perpetuate a positive feedback loop. This heightened morale becomes a natural motivator that drives persistent learning endeavors.

Nevertheless, learning outcomes pose potential drawbacks as well. These come to the fore if the outcomes are overly specific and rigid, thereby stifling critical thinking and creativity. On the other hand, unduly lofty outcomes could leave students grappling to meet them, causing frustration and eventual disinterest. Accordingly, there lies a crucial need for balanced, flexible, and attainable learning outcomes.

Challenges in Implementing Learning Outcomes

The formulation of learning outcomes itself is a complex process that demands a deep understanding of the domain of learning. It is crucial to balance the need for specificity of outcomes, with the breadth and richness of the learning experience. Getting this balance right can be a painstaking process.

Overcoming Challenges

Overcoming these challenges requires a holistic approach. Organizational culture plays a crucial role in this regard. Encouraging a culture of change and innovation can mitigate resistance from faculty.

Professional development programs, workshops, and training can be helpful in honing faculty’s skills for creating and implementing effective learning outcomes. These programs can also be used to foster a shared understanding of the purpose and role of learning outcomes.

However, it’s essential to consider the challenges that might be encountered in this process. Notwithstanding these challenges, the potential benefits of learning outcomes to students’ educational progress present them as a crucial factor in the quest for enhanced education quality.


Learning outcomes and competencies

Declan Kennedy and Áine Hyland, University College Cork, January 2009

[Figure B 2.3-3-1: The concept of being competent]



  • NWTC Library

Research Skills Tutorial

  • Learning Outcomes
  • Tutorial Menu
  • How do I use this guide?
  • 2. Sources of Information
  • 3. Searching for Information
  • 4. Evaluating Information
  • 5. Presenting Research
  • 6. Citations & Academic Integrity
  • Research Skills Post-Test

Getting Started with Research - Learning Outcomes

Students will be able to:

  • Understand what information literacy is and why it is important in school, the workplace, and society.
  • Know resources available to build information literacy skills.
  • Identify how information is needed and used in everyday life.
  • Set goals to build information literacy skills.
  • Identify the steps in the research process.
  • Develop a plan for a research paper.
  • Choose a research topic.
  • Identify keywords, synonyms, and related terms for a project.
  • Recognize the iterative nature of the research process.
  • URL: https://nwtc.libguides.com/research_skills
  • Open access
  • Published: 26 August 2024

Evaluating panel discussions in ESP classes: an exploration of international medical students’ and ESP instructors’ perspectives through qualitative research

  • Elham Nasiri   ORCID: orcid.org/0000-0002-0644-1646 1 &
  • Laleh Khojasteh   ORCID: orcid.org/0000-0002-6393-2759 1  

BMC Medical Education, volume 24, Article number: 925 (2024)


This study investigates the effectiveness of panel discussions, a specific interactive teaching technique where a group of students leads a pre-planned, topic-focused discussion with audience participation, in English for Specific Purposes (ESP) courses for international medical students. This approach aims to simulate professional conference discussions, preparing students for future academic and clinical environments where such skills are crucial. While traditional group presentations foster critical thinking and communication, a gap exists in understanding how medical students perceive the complexities of preparing for and participating in panel discussions within an ESP setting. This qualitative study investigates the perceived advantages and disadvantages of these discussions from the perspectives of both panelists (medical students) and the audience (peers). Additionally, the study explores potential improvements based on insights from ESP instructors. Utilizing a two-phase design involving reflection papers and focus group discussions, data were collected from 46 medical students and three ESP instructors. Thematic analysis revealed that panel discussions offer unique benefits compared to traditional presentations, including enhanced engagement and more dynamic skill development for both panelists and the audience. Panelists reported gains in personal and professional development, including honing critical thinking, communication, and presentation skills. The audience perceived these discussions as engaging learning experiences that fostered critical analysis and information synthesis. However, challenges such as academic workload and concerns about discussion quality were also identified. The study concludes that panel discussions, when implemented effectively, can be a valuable tool for enhancing critical thinking, communication skills, and subject matter knowledge in ESP courses for medical students. These skills are transferable and can benefit students in various academic and professional settings, including future participation in medical conferences. This research provides valuable insights for ESP instructors seeking to integrate panel discussions into their curriculum, ultimately improving student learning outcomes and preparing them for future success in professional communication.


Introduction

In the field of medical education, the acquisition and application of effective communication skills are crucial for medical students in today’s global healthcare environment [ 1 ]. This necessitates not only strong English language proficiency but also the ability to present complex medical information clearly and concisely to diverse audiences.

Language courses, especially English for Specific Purposes (ESP) courses for medical students, are highly relevant in today’s globalized healthcare environment [ 2 ]. In non-English speaking countries like Iran, these courses are particularly important as they go beyond mere language instruction to include the development of critical thinking, cultural competence, and professional communication skills [ 3 ]. Proficiency in English is crucial for accessing up-to-date research, participating in international conferences, and communicating with patients and colleagues from diverse backgrounds [ 4 ]. Additionally, ESP courses help medical students understand and use medical terminologies accurately, which is essential for reading technical articles, listening to audio presentations, and giving spoken presentations [ 5 ]. In countries where English is not the primary language, ESP courses ensure that medical professionals can stay current with global advancements and collaborate effectively on an international scale [ 6 ]. Furthermore, these courses support students who may seek to practice medicine abroad, enhancing their career opportunities and professional growth [ 7 ].

Moreover, ESP courses enable medical professionals to communicate effectively with international patients, which is crucial in multicultural societies and for medical tourism, ensuring that patient care is not compromised due to language barriers [ 8 ]. Many medical textbooks, journals, and online resources are available primarily in English, and ESP courses equip medical students with the necessary language skills to access and comprehend these resources, ensuring they are well-informed about the latest medical research and practices [ 9 ].

Additionally, many medical professionals from non-English speaking countries aim to take international certification exams, such as the USMLE or PLAB, which are conducted in English, and ESP courses prepare students for these exams by familiarizing them with the medical terminology and language used in these assessments [ 10 ]. ESP courses also contribute to the professional development of medical students by improving their ability to write research papers, case reports, and other academic documents in English, which is essential for publishing in international journals and contributing to global medical knowledge [ 11 ]. In the increasingly interdisciplinary field of healthcare, collaboration with professionals from other countries is common, and ESP courses facilitate effective communication and collaboration with international colleagues, fostering innovation and the exchange of ideas [ 12 ].

With the rise of telemedicine and online medical consultations, proficiency in English is essential for non-English speaking medical professionals to provide remote healthcare services to international patients, and ESP courses prepare students for these modern medical practices [ 13 ].

Finally, ESP courses often include training on cultural competence, which is crucial for understanding and respecting the cultural backgrounds of patients and colleagues, leading to more empathetic and effective patient care and professional interactions [ 14 ]. Many ESP programs for medical students incorporate group presentations as a vital component of their curriculum, recognizing the positive impact on developing these essential skills [ 15 ].

Group projects in language courses, particularly in ESP for medical students, are highly relevant for several reasons. They provide a collaborative environment that mimics real-world professional settings, where healthcare professionals often work in multidisciplinary teams [ 16 ]. These group activities foster not only language skills but also crucial soft skills such as teamwork, leadership, and interpersonal communication, which are essential in medical practice [ 17 ].

The benefits of group projects over individual projects in language learning are significant. Hartono, Mujiyanto [ 18 ] found that group presentation tasks in ESP courses led to higher self-efficacy development compared to individual tasks. Group projects encourage peer learning, where students can learn from each other’s strengths and compensate for individual weaknesses [ 19 ]. They also provide a supportive environment that can reduce anxiety and increase willingness to communicate in the target language [ 20 ]. However, it is important to note that group projects also come with challenges, such as social loafing and unequal contribution, which need to be managed effectively [ 21 ].

Traditional lecture-based teaching methods, while valuable for knowledge acquisition, may not effectively prepare medical students for the interactive and collaborative nature of real-world healthcare settings [ 22 ]. Panel discussions (hereafter PDs), an interactive teaching technique where a group of students leads a pre-planned, topic-focused discussion with audience participation, are particularly relevant in this context. They simulate professional conference discussions and interdisciplinary team meetings, preparing students for future academic and clinical environments where such skills are crucial [ 23 ].

PDs, also known as moderated discussions or moderated panels, are a specific type of interactive format where a group of experts or stakeholders engage in a facilitated conversation on a particular topic or issue [ 22 ]. In this format, a moderator guides the discussion, encourages active participation from all panelists, and fosters a collaborative environment that promotes constructive dialogue and critical thinking [ 24 ]. The goal is to encourage audience engagement and participation, which can be achieved through various strategies such as asking open-ended questions, encouraging counterpoints and counterarguments, and providing opportunities for audience members to pose questions or share their own experiences [ 25 ]. These discussions can take place in-person or online, and can be designed to accommodate diverse audiences and settings [ 26 ].

In this study, PD is considered a speaking activity where medical students are assigned specific roles to play during the simulation, such as a physician, quality improvement specialist, policymaker, or patient advocate. By taking on these roles, students can gain a better understanding of the diverse perspectives and considerations that come into play in real-world healthcare discussions [ 23 ]. Simulating PDs within ESP courses can be a powerful tool for enhancing medical students’ learning outcomes in multiple areas. This approach improves language proficiency, academic skills, and critical thinking abilities, while also enabling students to communicate effectively with diverse stakeholders in the medical field [ 27 , 28 ].

Theoretical framework

The panel discussions in our study are grounded in the concept of authentic assessment (outlined by Villarroel, Bloxham [ 29 ]), which involves designing tasks that mirror real-life situations and problems. In the context of medical education, this approach is particularly relevant as it prepares students for the complex, multidisciplinary nature of healthcare communication. Realism can be achieved through two means: providing a realistic context that describes and delivers a frame for the problem to be solved and creating tasks that are similar to those faced in real and/or professional life [ 30 ]. In our study, the PDs provide a realistic context by simulating scenarios where medical students are required to discuss and present complex medical topics in a professional setting, mirroring the types of interactions they will encounter in their future careers.

The task of participating in PDs also involves cognitive challenge, as students are required to think critically about complex medical topics, analyze information, and communicate their findings effectively. This type of task aims to generate processes of problem-solving, application of knowledge, and decision-making that correspond to the development of cognitive and metacognitive skills [ 23 ]. For medical students, these skills are crucial in developing clinical reasoning and effective patient communication. The PDs encourage students to go beyond the textual reproduction of fragmented and low-order content and move towards understanding, establishing relationships between new ideas and previous knowledge, linking theoretical concepts with everyday experience, deriving conclusions from the analysis of data, and examining both the logic of the arguments present in the theory and its practical scope [ 24 , 25 , 27 ].

Furthermore, the evaluative judgment aspect of our study is critical in helping students develop criteria and standards about what a good performance means in medical communication. This involves students judging their own performance and regulating their own learning [ 31 ]. In the context of panel discussions, students reflect on their own work, compare it with desired standards, and seek feedback from peers and instructors. By doing so, students can develop a sense of what constitutes good performance in medical communication and what areas need improvement [ 32 ]. Boud, Lawson and Thompson [ 33 ] argue that students need to build a precise judgment about the quality of their work and calibrate these judgments in the light of evidence. This skill is particularly important for future medical professionals who will need to continually assess and improve their communication skills throughout their careers.

The theoretical framework presented above highlights the importance of authentic learning experiences in medical education. By drawing on the benefits of group work and panel discussions, university instructor-researchers aimed to provide medical students with a unique opportunity to engage with complex cases and develop their communication and collaboration skills. As noted by Suryanarayana [ 34 ], authentic learning experiences can lead to deeper learning and improved retention. Considering the advantages of group work in promoting collaborative problem-solving and language development, the instructor-researchers designed a panel discussion task that simulates real-world scenarios, where students can work together to analyze complex cases, share knowledge, and present their findings to a simulated audience.

While previous studies have highlighted the benefits of interactive learning experiences and critical thinking skills in medical education, a research gap remains in understanding how medical students perceive the relevance of PDs in ESP courses. This study aims to address this gap by investigating medical students’ perceptions of PD tasks in ESP courses and how these perceptions relate to their language proficiency, critical thinking skills, and ability to communicate effectively with diverse stakeholders in the medical field. This understanding can inform best practices in medical education, contributing to the development of more effective communication skills for future healthcare professionals worldwide [ 23 ]. The research questions guiding this study are:

What are the perceived advantages of PDs from the perspectives of panelists and the audience?

What are the perceived disadvantages of PDs from the perspectives of panelists and the audience?

How can PDs be improved for panelists and the audience based on the insights of ESP instructors?

Methodology

Aim and design

For this study, a two-phase qualitative design was employed to gain an understanding of the advantages and disadvantages of PDs from the perspectives of both student panelists and the audience (Phase 1) and to acquire an in-depth understanding of the suggested strategies provided by experts to enhance PDs for future students (Phase 2).

Participants and context of the study

This study was conducted in two phases (Fig.  1 ) at Shiraz University of Medical Sciences (SUMS), Shiraz, Iran.

[Figure 1: Participants of the study in two phases]

In the first phase, the student participants were 46 non-native speakers of English and international students who studied medicine at SUMS. Their demographic characteristics can be seen in Table  1 .

These students were purposefully selected because they were the only SUMS international students who had taken the ESP (English for Specific Purposes) course. The number of international students attending SUMS is indeed limited. Each year, a different batch of international students joins the university. They progress through a sequence of English courses, starting with General English 1 and 2, followed by the ESP course, and concluding with academic writing. At the time of data collection, the students included in the study were the only international students enrolled in the ESP course. This mandatory 3-unit course is designed to enhance their language and communication skills specifically tailored to their profession. As a part of the Medicine major curriculum, this course aims to improve their English language proficiency in areas relevant to medicine, such as understanding medical terminology, comprehending original medicine texts, discussing clinical cases, and communicating with patients, colleagues, and other healthcare professionals.

Throughout the course, students engage in various interactive activities, such as group discussions, role-playing exercises, and case studies, to develop their practical communication skills. In this course, medical students receive four marks out of 20 for their oral presentations, while the remaining marks are allocated to their written midterm and final exams. From the beginning of the course, they are briefed about PDs, and they are shown two YouTube-downloaded videos about PDs at medical conferences, a popular format for discussing and sharing knowledge, research findings, and expert opinions on various medical topics.

For the second phase of the study, a specific group of participants was purposefully selected. This group consisted of three faculty members from SUMS English department who had extensive experience attending numerous conferences at national and international levels, particularly in the medical field, as well as working as translators and interpreters in medical congresses. Over the course of ten years, they also gained considerable experience in PDs. They were invited to discuss strategies helpful for medical students with PDs.

Panel discussion activity design and implementation

When preparing for a PD session, medical students received comprehensive guidance on understanding the roles and responsibilities of each panel member. This guidance was aimed at ensuring that each participant was well-prepared and understood their specific role in the discussion.

Moderators should play a crucial role in steering the conversation. They are responsible for ensuring that all panelists have an opportunity to contribute and that the audience is engaged effectively. Specific tasks include preparing opening remarks, introducing panelists, and crafting transition questions to facilitate smooth topic transitions. The moderators should also manage the time to ensure balanced participation and encourage active audience involvement.

Panelists are expected to be subject matter experts who bring valuable insights and opinions to the discussion. They are advised to conduct thorough research on the topic and prepare concise talking points. Panelists are encouraged to draw from their medical knowledge and relevant experiences, share evidence-based information, and engage with other panelists’ points through active listening and thoughtful responses.

The audience plays an active role in the PDs. They are encouraged to participate by asking questions, sharing relevant experiences, and contributing to the dialogue. To facilitate this, students are advised to take notes during the discussion and think of questions or comments they can contribute during the Q&A segment.

For this special course, medical students were advised to choose topics either from their ESP textbook or consider current medical trends, emerging research, and pressing issues in their field. Examples included breast cancer, COVID-19, and controversies in gene therapy. The selection process involved brainstorming sessions and consultation with the course instructor to ensure relevance and appropriateness.

To accommodate the PD sessions within the course structure, students were allowed to start their PD sessions voluntarily from the second week. However, to maintain a balance between peer-led discussions and regular course content, only one PD was held weekly. This approach enabled the ESP lecturer to deliver comprehensive content while also allowing students to engage in these interactive sessions.

A basic time structure was suggested for each PD (Fig.  2 ):

[Figure 2: Time allocation for panel discussion stages in minutes]

To ensure the smooth running of the course and maintain momentum, students were informed that they could cancel their PD session only once. In such cases, they were required to notify the lecturer and other students via the class Telegram channel to facilitate rescheduling and minimize disruptions. This provision was essential in promoting a sense of community among students and maintaining the course’s continuity.

Research tools and data collection

The study utilized various tools to gather and analyze data from participants and experts, ensuring a comprehensive understanding of the research topic.

Reflection papers

In Phase 1 of the study, 46 medical students detailed their perceptions of the advantages and disadvantages of panel discussions from dual perspectives: as panelists (presenters) and as audience members (peers).

Participants were given clear instructions and a 45-minute time frame to complete the reflection task. With approximately 80% of the international language students being native English speakers and the rest fluent in English, the researchers deemed this time allocation reasonable. The questions and instructions were straightforward, facilitating quick comprehension. It was estimated that native English speakers would need about 30 min to complete the task, while non-native speakers might require an extra 15 min for clarity and expression. This time frame aimed to allow students to respond thoughtfully without feeling rushed. Additionally, students could request more time if needed.

Focus group discussion

In Phase 2 of the study, a focus group discussion was conducted with three expert participants. The purpose of the focus group was to gather insights from these experts, specifically ESP (English for Specific Purposes) instructors, on how PDs can be improved for both panelists and the audience.

According to Colton and Covert [ 35 ], focus groups are useful for obtaining detailed input from experts. The appropriate size of a focus group is determined by the study’s scope and available resources [ 36 ]. Morgan [ 37 ] suggests that small focus groups are suitable for complex topics where specialist participants might feel frustrated if not allowed to express themselves fully.

The choice of a focus group over individual interviews was based on several factors. First, the exploratory nature of the study made focus groups ideal for interactive discussions, generating new ideas and in-depth insights [ 36 ]. Second, while focus groups usually involve larger groups, they can effectively accommodate a limited number of experts with extensive knowledge [ 37 ]. Third, the focus group format fostered a more open environment for idea exchange, allowing participants to engage dynamically [ 36 ]. Lastly, conducting a focus group was more time- and resource-efficient than scheduling three separate interviews [ 36 ].

Data analysis

The first phase of the study involved a thorough examination of the data related to the research inquiries using thematic analysis. This method was chosen for its effectiveness in uncovering latent patterns from a bottom-up perspective, facilitating a comprehensive understanding of complex educational phenomena [ 38 ]. The researchers first familiarized themselves with the data by repeatedly reviewing the reflection papers written by the medical students. Next, an initial round of coding was independently conducted to identify significant data segments and generate preliminary codes that reflected the students’ perceptions of the advantages and disadvantages of PDs from both the presenter and audience viewpoints [ 38 ].

The analysis of the reflection papers began with the two researchers coding a subset of five papers independently, adhering to a structured qualitative coding protocol [ 39 ]. They convened afterward to compare their initial codes and address any discrepancies. Through discussion, they reached an agreement on the codes, which were then analyzed, organized into categories and themes, and the frequency of each code was recorded [ 38 ].

After coding the initial five papers, the researchers continued to code the remaining 41 reflection paper transcripts in batches of ten, meeting after each batch to review their coding, resolve any inconsistencies, and refine the coding framework as needed. This iterative process, characterized by independent coding, joint reviews, and consensus-building, helped the researchers establish a robust and reliable coding approach consistently applied to the complete dataset [ 40 ]. Once all 46 reflection paper transcripts were coded, the researchers conducted a final review and discussion to ensure accurate analysis. They extracted relevant excerpts corresponding to the identified themes and sub-themes from the transcripts to provide detailed explanations and support for their findings [ 38 ]. This multi-step approach of separate initial coding, collaborative review, and frequency analysis enhanced the credibility and transparency of the qualitative data analysis.
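To make the frequency analysis concrete, the sketch below shows one way the coded reflection papers could be tallied into the counts and percentages reported in the tables that follow. It is a minimal illustration, not the authors' actual scripts; the variable names, data structure, and example codes are assumptions introduced purely for demonstration.

```python
from collections import Counter

# Hypothetical input: one set of thematic codes per reflection paper.
# In the study, 46 papers were coded; the two entries below are invented examples.
coded_papers = [
    {"knowledge_sharing", "increased_confidence", "collaboration"},
    {"knowledge_sharing", "real_world_application"},
    # ... one entry per reflection paper
]

def code_frequencies(papers):
    """Count how many papers mention each code and express it as a percentage."""
    n_papers = len(papers)
    counts = Counter(code for paper in papers for code in paper)
    return {
        code: (count, round(100 * count / n_papers, 1))
        for code, count in counts.most_common()
    }

for code, (count, pct) in code_frequencies(coded_papers).items():
    print(f"{code}: {count}/{len(coded_papers)} papers ({pct}%)")
```

With 46 papers, a code mentioned by 43 participants would appear as 93.5%, which matches the granularity of the percentages reported in the tables below.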

To ensure the trustworthiness of the data collected in this study, the researchers adhered to the Guba and Lincoln standards of scientific accuracy in qualitative research, which encompass credibility, confirmability, dependability, and transferability [ 41 ] (Table  2 ).

The analysis of the focus group data obtained from experts followed the same rigorous procedure applied to the student participants’ data. Thematic analysis was employed to examine the experts’ perspectives, maintaining consistency in the analytical approach across both phases of the study. The researchers familiarized themselves with the focus group transcript, conducted independent preliminary coding, and then collaboratively refined the codes. These codes were subsequently organized into categories and themes, with the frequency of each code recorded. The researchers engaged in thorough discussions to ensure agreement on the final themes and sub-themes. Relevant excerpts from the focus group transcript were extracted to provide rich, detailed explanations of each theme, thereby ensuring a comprehensive and accurate analysis of the experts’ insights.

Results

1. What are the advantages of PDs from the perspective of panelists and the audience?

The analysis of the advantages of PDs from the perspectives of both panelists and audience members revealed several key themes and categories. Tables  3 and 4 present the frequency and percentage of responses for each code within these categories.

From the panelists’ perspective (Table  3 ), the overarching theme was “Personal and Professional Development.” The most frequently reported advantage was knowledge sharing (93.5%), followed closely by increased confidence (91.3%) and the importance of interaction in presentations (91.3%).

Notably, all categories within this theme had at least one code mentioned by over 80% of participants, indicating a broad range of perceived benefits. The category of “Effective teamwork and communication” was particularly prominent, with collaboration (89.1%) and knowledge sharing (93.5%) being among the most frequently cited advantages. This suggests that PDs are perceived as valuable tools for fostering interpersonal skills and collective learning. In the “Language mastery” category, increased confidence (91.3%) and better retention of key concepts (87.0%) were highlighted, indicating that PDs are seen as effective for both language and content learning.

The audience perspective (Table  4 ), encapsulated under the theme “Enriching Learning Experience,” showed similarly high frequencies across all categories.

The most frequently mentioned advantage was exposure to diverse speakers (93.5%), closely followed by the range of topics covered (91.3%) and increased audience interest (91.3%). The “Broadening perspectives” category was particularly rich, with all codes mentioned by over 70% of participants. This suggests that audience members perceive PDs as valuable opportunities for expanding their knowledge and viewpoints. In the “Language practice” category, the opportunity to practice language skills (89.1%) was the most frequently cited advantage, indicating that even as audience members, students perceive significant language learning benefits.

Comparing the two perspectives reveals several interesting patterns:

High overall engagement: Both panelists and audience members reported high frequencies across all categories, suggesting that PDs are perceived as beneficial regardless of the role played.

Language benefits: While panelists emphasized increased confidence (91.3%) and better retention of concepts (87.0%), audience members highlighted opportunities for language practice (89.1%). This indicates that PDs offer complementary language learning benefits for both roles.

Interactive learning: The importance of interaction was highly rated by panelists (91.3%), while increased audience interest was similarly valued by the audience (91.3%). This suggests that PDs are perceived as an engaging, interactive learning method from both perspectives.

Professional development: Panelists uniquely emphasized professional growth aspects such as experiential learning (84.8%) and real-world application (80.4%). These were not directly mirrored in the audience perspective, suggesting that active participation in PDs may offer additional professional development benefits.

Broadening horizons: Both groups highly valued the diversity aspect of PDs. Panelists appreciated diversity and open-mindedness (80.4%), while audience members valued diverse speakers (93.5%) and a range of topics (91.3%).

2. What are the disadvantages of PDs from the perspective of panelists and the audience?

The analysis of the disadvantages of panel discussions (PDs) from the perspectives of both panelists and audience members revealed several key themes and categories. Tables  5 and 6 present the frequency and percentage of responses for each code within these categories.

From the panelists’ perspective (Table  5 ), the theme “Drawbacks of PDs” was divided into two main categories: “Academic Workload Challenges” and “Coordination Challenges.” The most frequently reported disadvantage was long preparation (87.0%), followed by significant practice needed (82.6%) and the time-consuming nature of PDs (80.4%). These findings suggest that the primary concern for panelists is the additional workload that PDs impose on their already demanding academic schedules. The “Coordination Challenges” category, while less prominent than workload issues, still presented significant concerns. Diverse panel skills (78.3%) and finding suitable panelists (73.9%) were the most frequently cited issues in this category, indicating that team dynamics and composition are notable challenges for panelists.

The audience perspective (Table  6 ), encapsulated under the theme “Drawbacks of PDs,” was divided into two main categories: “Time-related Issues” and “Interaction and Engagement Issues.” In the “Time-related Issues” category, the most frequently mentioned disadvantage was the inefficient use of time (65.2%), followed by the perception of PDs as too long and boring (60.9%). Notably, 56.5% of respondents found PDs stressful due to overwhelming workload from other studies, and 52.2% considered them not very useful during exam time. The “Interaction and Engagement Issues” category revealed more diverse concerns. The most frequently mentioned disadvantage was the repetitive format (82.6%), followed by limited engagement with the audience (78.3%) and the perception of PDs as boring (73.9%). The audience also noted issues related to the panelists’ preparation and coordination, such as “Not practiced and natural” (67.4%) and “Coordination and Interaction Issues” (71.7%), suggesting that the challenges faced by panelists directly impact the audience’s experience.

Comparing the two perspectives again reveals several notable patterns:

Workload concerns: Both panelists and audience members highlighted time-related issues. For panelists, this manifested as long preparation times (87.0%) and difficulty balancing with other studies (76.1%). For the audience, it appeared as perceptions of inefficient use of time (65.2%) and stress due to overwhelming workload from other studies (56.5%).

Engagement issues: While panelists focused on preparation and coordination challenges, the audience emphasized the quality of the discussion and engagement. This suggests a potential mismatch between the efforts of panelists and the expectations of the audience.

Boredom and repetition: The audience frequently mentioned boredom (73.9%) and repetitive format (82.6%) as issues, which weren’t directly mirrored in the panelists’ responses. This indicates that while panelists may be focused on content preparation, the audience is more concerned with the delivery and variety of the presentation format.

Coordination challenges: Both groups noted coordination issues, but from different perspectives. Panelists struggled with team dynamics and finding suitable co-presenters, while the audience observed these challenges manifesting as unnatural or unpracticed presentations.

Academic pressure: Both groups acknowledged the strain PDs put on their academic lives, with panelists viewing it as a burden (65.2%) and the audience finding it less useful during exam times (52.2%).

3. How can PDs be improved for panelists and the audience from the experts’ point of view?

The presentation of data for this research question differs from the previous two due to the unique nature of the information gathered. Unlike the quantifiable student responses in earlier questions, this data stems from expert opinions and a reflection discussion session, focusing on qualitative recommendations for improvement rather than frequency of responses (Braun & Clarke, 2006). The complexity and interconnectedness of expert suggestions, coupled with the integration of supporting literature, necessitate a more narrative approach (Creswell & Poth, 2018). This format allows for a richer exploration of the context behind each recommendation and its potential implications (Patton, 2015). Furthermore, the exploratory nature of this question, aimed at generating ideas for improvement rather than measuring prevalence of opinions, is better served by a detailed, descriptive presentation (Merriam & Tisdell, 2016). This approach enables a more nuanced understanding of how PDs can be enhanced, aligning closely with the “how” nature of the research question and providing valuable insights for potential implementation (Yin, 2018).

The experts provided several suggestions to address the challenges faced by students in panel discussions (PDs) and improve the experience for both panelists and the audience. Their recommendations focused on six key areas: time management and workload, preparation and skill development, engagement and interactivity, technological integration, collaboration and communication, and institutional support.

To address the issue of time management and heavy workload, one expert suggested teaching students to “break down the task to tackle the time-consuming nature of panel discussions and balance it with other studies.” This approach aims to help students manage the extensive preparation time required for PDs without compromising their other academic responsibilities. Another expert emphasized “enhancing medical students’ abilities to prioritize tasks, allocate resources efficiently, and optimize their workflow to achieve their goals effectively.” These skills were seen as crucial not only for PD preparation but also for overall academic success and future professional practice.

Recognizing the challenges of long preparation times and the perception of PDs being burdensome, an expert proposed “the implementation of interactive training sessions for panelists.” These sessions were suggested to enhance coordination skills and improve the ability of group presenters to engage with the audience effectively. The expert emphasized that such training could help students view PDs as valuable learning experiences rather than additional burdens, potentially increasing their motivation and engagement in the process.

To combat issues of limited engagement and perceived boredom, experts recommended increasing engagement opportunities for the audience through interactive elements like audience participation and group discussions. They suggested that this could transform PDs from passive listening experiences to active learning opportunities. One expert suggested “optimizing time management and restructuring the format of panel discussions” to address inefficiency during sessions. This restructuring could involve shorter presentation segments interspersed with interactive elements to maintain audience attention and engagement.

An innovative solution proposed by one expert was “using ChatGPT to prepare for PDs by streamlining scenario presentation preparation and role allocation.” The experts collectively discussed the potential of AI to assist medical students in reducing their workload and saving time in preparing scenario presentations and allocating roles in panel discussions. They noted that AI could help generate initial content drafts, suggest role distributions based on individual strengths, and even provide practice questions for panelists, significantly reducing preparation time while maintaining quality.
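As an illustration of what such AI-assisted preparation might look like in practice, the sketch below assembles a prompt that a panel team could paste into ChatGPT (or any comparable assistant) to draft a scenario outline and a provisional role allocation. This is a hypothetical sketch, not a tool used or endorsed in the study; the topic, roles, and prompt wording are invented for illustration, and no specific API is assumed.

```python
def build_pd_prep_prompt(topic, panelists, session_minutes=45):
    """Assemble a preparation prompt for a panel discussion (PD) on a medical topic.

    The returned text can be pasted into ChatGPT or a similar assistant;
    this sketch deliberately stops short of calling any particular API.
    """
    roles = ", ".join(panelists)
    return (
        f"We are {len(panelists)} medical students preparing a {session_minutes}-minute "
        f"panel discussion on '{topic}' for an English for Specific Purposes class.\n"
        f"Panelists: {roles}.\n"
        "Please: (1) draft a short clinical scenario to open the discussion, "
        "(2) suggest how to split sub-topics among the panelists, "
        "(3) propose three audience questions for the Q&A segment, and "
        "(4) list key medical terms we should rehearse in English."
    )

if __name__ == "__main__":
    print(build_pd_prep_prompt("controversies in gene therapy",
                               ["moderator", "panelist A", "panelist B", "panelist C"]))
```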

Two experts emphasized the importance of enhancing collaboration and communication among panelists to address issues related to diverse panel skills and coordination challenges. They suggested establishing clear communication channels and guidelines to improve coordination and ensure a cohesive presentation. This could involve creating structured team roles, setting clear expectations for each panelist, and implementing regular check-ins during the preparation process to ensure all team members are aligned and progressing.

All experts were in agreement that improving PDs would not be possible “if nothing is done by the university administration to reduce the ESP class size for international students.” They believed that large class sizes in ESP or EFL classes could negatively influence group oral presentations, hindering language development and leading to uneven participation. The experts suggested that smaller class sizes would allow for more individualized attention, increased speaking opportunities for each student, and more effective feedback mechanisms, all of which are crucial for developing strong presentation skills in a second language.

Discussion

Research question 1: what are the advantages of PDs from the perspective of panelists and the audience?

The results of this study reveal significant advantages of PDs for both panelists and audience members in the context of medical education. These findings align with and expand upon previous research in the field of educational presentations and language learning.

Personal and professional development for panelists

The high frequency of reported benefits in the “Personal and Professional Development” theme for panelists aligns with several previous studies. The emphasis on language mastery, particularly increased confidence (91.3%) and better retention of key concepts (87.0%), supports the findings of Hartono et al. [ 42 ], Gedamu and Gezahegn [ 15 ], and Li [ 43 ], who all highlighted the importance of language practice in English oral presentations. However, our results show a more comprehensive range of benefits, including professional growth aspects like experiential learning (84.8%) and real-world application (80.4%), which were not as prominently featured in these earlier studies.

Interestingly, our findings partially contrast with Chou’s [ 44 ] study, which found that while group oral presentations had the greatest influence on improving students’ speaking ability, individual presentations led to more frequent use of metacognitive, retrieval, and rehearsal strategies. Our results suggest that PDs, despite being group activities, still provide significant benefits in these areas, possibly due to the collaborative nature of preparation and the individual responsibility each panelist bears. The high frequency of knowledge sharing (93.5%) and collaboration (89.1%) in our study supports Harris, Jones and Huffman’s [ 45 ] emphasis on the importance of group dynamics and varied perspectives in educational settings. However, our study provides more quantitative evidence for these benefits in the specific context of PDs.

Enriching learning experience for the audience

The audience perspective in our study reveals a rich learning experience, with high frequencies across all categories. This aligns with Agustina’s [ 46 ] findings in business English classes, where presentations led to improvements in all four language skills. However, our study extends these findings by demonstrating that even passive participation as an audience member can lead to significant perceived benefits in language practice (89.1%) and broadening perspectives (93.5% for diverse speakers). The high value placed on diverse speakers (93.5%) and range of topics (91.3%) by the audience supports the notion of PDs as a tool for expanding knowledge and viewpoints. This aligns with the concept of situated learning experiences leading to deeper understanding in EFL classes, as suggested by Li [ 43 ] and others [ 18 , 31 ]. However, our study provides more specific evidence for how this occurs in the context of PDs.

Interactive learning and engagement

Both panelists and audience members in our study highly valued the interactive aspects of PDs, with the importance of interaction rated at 91.3% by panelists and increased audience interest at 91.3% by the audience. This strong emphasis on interactivity aligns with Azizi and Farid Khafaga’s [ 19 ] study on the benefits of dynamic assessment and dialogic learning contexts. However, our study provides more detailed insights into how this interactivity is perceived and valued by both presenters and audience members in PDs.

Professional growth and real-world application

The emphasis on professional growth through PDs, particularly for panelists, supports Li’s [ 43 ] assertion about the power of oral presentations as situated learning experiences. Our findings provide more specific evidence for how PDs contribute to professional development, with high frequencies reported for experiential learning (84.8%) and real-world application (80.4%). This suggests that PDs may be particularly effective in bridging the gap between academic learning and professional practice in medical education.

Research question 2: what are the disadvantages of PDs from the perspective of panelists and the audience?

Academic workload challenges for panelists

The high frequency of reported challenges in the “Academic Workload Challenges” category for panelists aligns with several previous studies in medical education [ 47 , 48 , 49 ]. The emphasis on long preparation (87.0%), significant practice needed (82.6%), and the time-consuming nature of PDs (80.4%) supports the findings of Johnson et al. [ 24 ], who noted that while learners appreciate debate-style journal clubs in health professional education, they require additional time commitment. This is further corroborated by Nowak, Speed and Vuk [ 50 ], who found that intensive learning activities in medical education, while beneficial, can be time-consuming for students.

Perceived value of PDs relative to time investment

While a significant portion of the audience (65.2%) perceived PDs as an inefficient use of time, the high frequency of engagement-related concerns (82.6% for repetitive format, 78.3% for limited engagement) suggests that the perceived lack of value may be more closely tied to the quality of the experience rather than just the time investment. This aligns with Dyhrberg O’Neill’s [ 27 ] findings on debate-based oral exams, where students perceived value despite the time-intensive nature of the activity. However, our results indicate a more pronounced concern about the return on time investment in PDs. This discrepancy might be addressed through innovative approaches to PD design and implementation, such as those proposed by Almazyad et al. [ 22 ], who suggested using AI tools to enhance expert panel discussions and potentially improve efficiency.

Coordination challenges for panelists

The challenges related to coordination in medical education, such as diverse panel skills (78.3%) and finding suitable panelists (73.9%), align with previous research on teamwork in higher education [ 21 ]. Our findings support the concept of the free-rider effect discussed by Hall and Buzwell [ 21 ], who explored reasons for non-contribution in group projects beyond social loafing. This is further elaborated by Mehmood, Memon and Ali [ 51 ], who proposed that individuals may not contribute their fair share due to various factors including poor communication skills or language barriers, which is particularly relevant in medical education where clear communication is crucial [ 52 ]. Comparing our results to other collaborative learning contexts in medical education, Rodríguez-Sedano, Conde and Fernández-Llamas [ 53 ] measured teamwork competence development in a multidisciplinary project-based learning environment. They found that while teamwork skills improved over time, initial coordination challenges were significant. This aligns with our findings on the difficulties of coordinating diverse panel skills and opinions in medical education settings.

Our results also resonate with Chou’s [ 44 ] study comparing group and individual oral presentations, which found that group presenters often had a limited understanding of the overall content. This is supported by Wilson, Ho and Brookes [ 54 ], who examined student perceptions of teamwork in undergraduate science degrees, highlighting the challenges and benefits of collaborative work, which are equally applicable in medical education [ 52 ].

Quality of discussions and perception for the audience

The audience perspective in our study reveals significant concerns about the quality and engagement of PDs in medical education. The high frequency of issues such as repetitive format (82.6%) and limited engagement with the audience (78.3%) aligns with Parmar and Bickmore’s [ 55 ] findings on the importance of addressing individual audience members and gathering feedback. This is further supported by Nurakhir et al. [ 25 ], who explored students’ views on classroom debates as a strategy to enhance critical thinking and oral communication skills in nursing education, which shares similarities with medical education. Comparing our results to other interactive learning methods in medical education, Jones et al. [ 26 ] reviewed the use of journal clubs and book clubs in pharmacy education. They found that while these methods enhanced engagement, they also faced challenges in maintaining student interest over time, similar to the boredom issues reported in our study of PDs in medical education. The perception of PDs as boring (73.9%) and not very useful during exam time (52.2%) supports previous research on the stress and pressure experienced by medical students [ 48 , 49 ]. Grieve et al. [ 20 ] specifically examined student fears of oral presentations and public speaking in higher education, which provides context for the anxiety and disengagement observed in our study of medical education. Interestingly, Bhuvaneshwari et al. [ 23 ] found positive impacts of panel discussions in educating medical students on specific modules. This contrasts with our findings and suggests that the effectiveness of PDs in medical education may vary depending on the specific context and implementation.

Comparative analysis and future directions

Our study provides a unique comparative analysis of the challenges faced by both panelists and audience members in medical education. The alignment of concerns around workload and time management between the two groups suggests that these are overarching issues in the implementation of PDs in medical curricula. This is consistent with the findings of Pasandín et al. [ 56 ], who examined cooperative oral presentations in higher education and their impact on both technical and soft skills, which are crucial in medical education [ 52 ]. The mismatch between panelist efforts and audience expectations revealed in our study is a novel finding that warrants further investigation in medical education. This disparity could be related to the self-efficacy beliefs of presenters, as explored by Gedamu and Gezahegn [ 15 ] in their study of TEFL trainees’ attitudes towards academic oral presentations, which may have parallels in medical education. Looking forward, innovative approaches could address some of the challenges identified in medical education. Almazyad et al. [ 22 ] proposed using AI tools like ChatGPT to enhance expert panel discussions in pediatric palliative care, which could potentially address some of the preparation and engagement issues identified in our study of medical education. Additionally, Ragupathi and Lee [ 57 ] discussed the role of rubrics in higher education, which could provide clearer expectations and feedback for both panelists and audience members in PDs within medical education.

Research question 3: how can PDs be improved for panelists and the audience from the experts’ point of view?

The expert suggestions for improving PDs address several key challenges identified in previous research on academic presentations and student workload management. These recommendations align with current trends in educational technology and pedagogical approaches, while also considering the unique needs of medical students.

The emphasis on time management and workload reduction strategies echoes findings from previous studies on medical student stress and academic performance. Nowak, Speed and Vuk [ 50 ] found that medical students often struggle with the fast-paced nature of their courses, which can lead to reduced motivation and superficial learning approaches. The experts’ suggestions for task breakdown and prioritization align with Rabbi and Islam’s [ 58 ] recommendations for reducing workload stress through effective assignment prioritization. Additionally, Popa et al. [ 59 ] highlight the importance of acceptance and planning in stress management for medical students, supporting the experts’ focus on these areas.

The proposed implementation of interactive training sessions for panelists addresses the need for enhanced presentation skills in professional contexts, a concern highlighted by several researchers [ 17 , 60 ]. This aligns with Grieve et al.’s [ 20 ] findings on student fears of oral presentations and public speaking in higher education, emphasizing the need for targeted training. The focus on interactive elements and audience engagement also reflects current trends in active learning pedagogies, as demonstrated by Pasandín et al. [ 56 ] in their study on cooperative oral presentations in engineering education.

The innovative suggestion to use AI tools like ChatGPT for PD preparation represents a novel approach to leveraging technology in education. This aligns with recent research on the potential of AI in scientific research, such as the study by Almazyad et al. [ 22 ], which highlighted the benefits of AI in supporting various educational tasks. However, it is important to consider potential ethical implications and ensure that AI use complements rather than replaces critical thinking and creativity.

The experts’ emphasis on enhancing collaboration and communication among panelists addresses issues identified in previous research on teamwork in higher education. Rodríguez-Sedano, Conde and Fernández-Llamas [ 53 ] noted the importance of measuring teamwork competence development in project-based learning environments. The suggested strategies for improving coordination align with best practices in collaborative learning, as demonstrated by Romero-Yesa et al. [ 61 ] in their qualitative assessment of challenge-based learning and teamwork in electronics programs.

The unanimous agreement on the need to reduce ESP class sizes for international students reflects ongoing concerns about the impact of large classes on language learning and student engagement. This aligns with research by Li [ 3 ] on issues in developing EFL learners’ oral English communication skills. Bosco et al. [ 62 ] further highlight the challenges of teaching and learning ESP in mixed classes, supporting the experts’ recommendation for smaller class sizes. Qiao, Xu and bin Ahmad [ 63 ] also emphasize the implementation challenges for ESP formative assessment in large classes, further justifying the need for reduced class sizes.

These expert recommendations provide a comprehensive approach to improving PDs, addressing not only the immediate challenges of preparation and delivery but also broader issues of student engagement, workload management, and institutional support. By implementing these suggestions, universities could potentially transform PDs from perceived burdens into valuable learning experiences that enhance both academic and professional skills. This aligns with Kho and Ting’s [ 64 ] systematic review on overcoming oral presentation anxiety among tertiary ESL/EFL students, which emphasizes the importance of addressing both challenges and strategies in improving presentation skills.

Conclusion

This study has shed light on the complex challenges associated with PDs in medical education, revealing a nuanced interplay between the experiences of panelists and audience members. The findings underscore the need for a holistic approach to implementing PDs that addresses both the academic workload concerns and the quality of engagement.

Our findings both support and extend previous research on the challenges of oral presentations and group work in medical education settings. The high frequencies of perceived challenges across multiple categories for both panelists and audience members suggest that while PDs may offer benefits, they also present significant obstacles that need to be addressed in medical education. These results highlight the need for careful consideration in the implementation of PDs in medical education, with particular attention to workload management, coordination strategies, and audience engagement techniques. Future research could focus on developing and testing interventions to mitigate these challenges while preserving the potential benefits of PDs in medical education.

Moving forward, medical educators should consider innovative approaches to mitigate these challenges. This may include:

Integrating time management and stress coping strategies into the PD preparation process [ 59 ].

Exploring the use of AI tools to streamline preparation and enhance engagement [ 22 ].

Developing clear rubrics and expectations for both panelists and audience members [ 57 ].

Incorporating interactive elements to maintain audience interest and participation [ 25 ].

Limitations and future research

One limitation of this study is that it focused on a specific population of medical students, which may limit the generalizability of the findings to other student populations. Additionally, the study relied on self-report data from panelists and audience members, which may introduce bias and affect the validity of the results. Future research could explore the effectiveness of PDs in different educational contexts and student populations to provide a more comprehensive understanding of the benefits and challenges of panel discussions.

Future research should focus on evaluating the effectiveness of these interventions and exploring how PDs can be tailored to the unique demands of medical education. By addressing the identified challenges, PDs have the potential to become a more valuable and engaging component of medical curricula, fostering both academic and professional development. Ultimately, the goal should be to transform PDs from perceived burdens into opportunities for meaningful learning and skill development, aligning with the evolving needs of medical education in the 21st century.

Future research could also examine the long-term impact of PDs on panelists’ language skills, teamwork, and communication abilities. Additionally, exploring the effectiveness of different training methods and tools, such as AI technology, in improving coordination skills and reducing workload stress for panelists could provide valuable insights for educators and administrators. Further research could also investigate the role of class size and audience engagement in enhancing the overall effectiveness of PDs in higher education settings. By addressing these gaps in the literature, future research can contribute to the ongoing development and improvement of PDs as a valuable learning tool for students in higher education.

However, it is important to note that implementing these changes may require significant institutional resources and a shift in pedagogical approaches. Future research could focus on piloting these recommendations and evaluating their effectiveness in improving student outcomes and experiences with PDs.

Data availability

We confirm that the data supporting the findings are available within this article. Raw data supporting this study’s findings are available from the corresponding author, upon request.

Abbreviations

AI: Artificial Intelligence

EFL: English as a Foreign Language

ESP: English for Specific Purposes

PD: Panel Discussion

SUMS: Shiraz University of Medical Sciences

References

Harden RM, Laidlaw JM. Essential skills for a medical teacher: an introduction to teaching and learning in medicine. Elsevier Health Sciences; 2020.

Ibrahim Mohamed O, Al Jadaan DO. English for specific purposes (ESP) needs analysis for health sciences students: a cross-sectional study at a university in the UAE.

Li Y, Heron M. English for general academic purposes or English for specific purposes? Language learning needs of medical students at a Chinese university. Theory Pract Lang Stud. 2021;11(6):621–31.


Chan SMH, Mamat NH, Nadarajah VD. Mind your language: the importance of English language skills in an International Medical Programme (IMP). BMC Med Educ. 2022;22(1):405.

Cortez Faustino BS, Ticas de Córdova CK, de la Hernández DI. Teaching English for specific purposes: contents and methodologies that could be implemented in the English for Medical purposes (EMP) course for the doctor of Medicine Major at the University of El Salvador. Universidad de El Salvador; 2022.

Benyamina E-Z Boukahlah. Enhancing specialty language learning through content-based instruction: students of Paramedical Institute of Tiaret as a case study. Université Ibn Khaldoun-Tiaret; 2023.

Prikazchikov M. Medical English course for russian-speaking dentists: a needs analysis study. Iowa State University; 2024.

Kim C, Lee SY, Park S-H. Is Korea Ready to be a key player in the Medical Tourism Industry? An English Education Perspective. Iran J Public Health. 2020;49(2):267–73.


Syakur A, Zainuddin H, Hasan MA. Needs analysis English for specific purposes (esp) for vocational pharmacy students. Budapest International Research and Critics in Linguistics and Education (BirLE). Journal. 2020;3(2):724–33.

Chan S, Taylor L. Comparing writing proficiency assessments used in professional medical registration: a methodology to inform policy and practice. Assess Writ. 2020;46:100493.

Hyland K, Jiang FK. Delivering relevance: the emergence of ESP as a discipline. Engl Specif Purp. 2021;64:13–25.

Maftuna B. The role of English in ESP. Am J Adv Sci Res. 2024;1(2):1–5.

Leon LI. Humanizing the foreign language course: new teaching methods for medical students. Language, Culture and Change. 2022:243.

Dahm MR, Yates L. Rapport, empathy and professional identity: Some challenges for international medical graduates speaking English as a second or foreign language. Multilingual Healthcare: A Global View on Communicative Challenges. 2020:209 – 34.

Gedamu AD, Gezahegn TH. TEFL trainees’ attitude to and self-efficacy beliefs of academic oral presentation. Cogent Educ. 2023;10(1):2163087.

Saliu B, Hajrullai H. Best practices in the English for specific purpose classes at the language center. Procedia-Social Behav Sci. 2016;232:745–9.

Clokie TL, Fourie E. Graduate employability and communication competence: are undergraduates taught relevant skills? Bus Prof Communication Q. 2016;79(4):442–63.

Hartono H, Mujiyanto J, Fitriati SW, Sakhiyya Z, Lotfie MM, Maharani MM. English Presentation Self-Efficacy Development of Indonesian ESP students: the effects of Individual versus Group Presentation tasks. Int J Lang Educ. 2023;7(3):361–76.

Azizi Z, Farid Khafaga A. Scaffolding via Group-dynamic Assessment to positively affect motivation, learning anxiety, and willingness to Communicate: a Case Study of High School Students. J Psycholinguist Res. 2023;52(3):831–51.

Grieve R, Woodley J, Hunt SE, McKay A. Student fears of oral presentations and public speaking in higher education: a qualitative survey. J Furth High Educ. 2021;45(9):1281–93.

Hall D, Buzwell S. The problem of free-riding in group projects: looking beyond social loafing as reason for non-contribution. Act Learn High Educ. 2013;14(1):37–49.

Almazyad M, Aljofan F, Abouammoh NA, Muaygil R, Malki KH, Aljamaan F, et al. Enhancing Expert Panel discussions in Pediatric Palliative Care: innovative scenario development and summarization with ChatGPT-4. Cureus. 2023;15(4):e38249.

Bhuvaneshwari S, Rashmi R, Deepika K, Anirudh VM, Vijayamathy A, Rekha S, Kathiravan R. Impact of panel discussion in educating AETCOM First Module among Undergraduate Medical Students. Latin Am J Pharmacy: Life Sci J. 2023;42(6):407–12.

Johnson BR, Logan LD, Darley A, Stone RH, Smith SE, Osae SP, et al. A scoping review for Debate-Style Journal Clubs in Health Professional Education. Am J Pharm Educ. 2023;87(6):100064.

Nurakhir A, Palupi FN, Langeveld C, Nurmalia D. Students’ views of classroom debates as a strategy to enhance critical thinking and oral communication skills. 2020.

Jones EP, Nelson NR, Thorpe CT, Rodgers PT, Carlson RB. Use of journal clubs and book clubs in pharmacy education: a scoping review. Currents Pharm Teach Learn. 2022;14(1):110–9.

Dyhrberg O’Neill L. Assessment of student debates in support of active learning? Students’ perceptions of a debate-based oral final exam. Act Learn High Educ. 2024.

Dyment JE, O’Connell TS. Assessing the quality of reflection in student journals: a review of the research. Teach High Educ. 2011;16(1):81–97.

Villarroel V, Bloxham S, Bruna D, Bruna C, Herrera-Seda C. Authentic assessment: creating a blueprint for course design. Assess Evaluation High Educ. 2018;43(5):840–54.

Schultz M, Young K, Gunning K, Harvey T. Defining and measuring authentic assessment: a case study in the context of tertiary science. Assess Evaluation High Educ. 2022;47(1):77–94.

Sundrarajun C, Kiely R. The oral presentation as a context for learning and assessment. Innov Lang Learn Teach. 2010;4(2):101–17.

Wyatt-Smith C, Adie L. The development of students’ evaluative expertise: enabling conditions for integrating criteria into pedagogic practice. J Curriculum Stud. 2021;53(4):399–419.

Boud D, Lawson R, Thompson DG. The calibration of student judgement through self-assessment: disruptive effects of assessment patterns. High Educ Res Dev. 2015;34(1):45–59.

A. S. Enhancing Meaningful Learning experiences through Comprehension and Retention by students. Twentyfirst Century Publications Patiala. 2023;49.

Colton D, Covert RW. Designing and constructing instruments for social research and evaluation. Wiley; 2007.

Krueger RA, Casey MA. Focus group interviewing. Handbook of practical program evaluation. 2015:506 – 34.

Morgan DL. Handbook of interview research: Context and method. Oaks, CA, USA: Sage Publications Thousand; 2002.

Braun V, Clarke V. Conceptual and design thinking for thematic analysis. Qualitative Psychol. 2022;9(1):3.

Elliott V. Thinking about the coding process in qualitative data analysis. Qualitative Rep. 2018;23(11).

Syed M, Nelson SC. Guidelines for establishing reliability when coding narrative data. Emerg Adulthood. 2015;3(6):375–87.

Lincoln Y. Naturalistic inquiry: Sage; 1985.

Hartono H, Mujiyanto J, Fitriati SW, Sakhiyya Z, Lotfie MM, Maharani MM. English presentation self-efficacy development of Indonesian ESP students: the effects of Individual versus Group Presentation tasks. Int J Lang Educ. 2023;7(3).

Li X. Teaching English oral presentations as a situated task in an EFL classroom: a quasi-experimental study of the effect of video-assisted self-reflection. Revista Signos. 2018;51(98):359–81.

Chou M-h. The influence of learner strategies on oral presentations: a comparison between group and individual performance. Engl Specif Purp. 2011;30(4):272–85.

Harris A, Jones M, Huffman J, editors. Teachers leading educational reform: the power of professional learning communities. Routledge; 2017.

Agustina L. Stimulating students to speak up through presentation in business English class. J Appl Stud Lang. 2019;3(1):21–8.

Babal JC, Abraham O, Webber S, Watterson T, Moua P, Chen J. Student pharmacist perspectives on factors that influence wellbeing during pharmacy school. Am J Pharm Educ. 2020;84(9):ajpe7831.

Moir F, Yielder J, Sanson J, Chen Y. Depression in medical students: current insights. Adv Med Educ Pract. 2018;323:33.

Pavlinac Dodig I, Lusic Kalcina L, Demirovic S, Pecotic R, Valic M, Dogas Z. Sleep and lifestyle habits of medical and non-medical students during the COVID-19 lockdown. Behav Sci. 2023;13(5):407.

Nowak G, Speed O, Vuk J. Microlearning activities improve student comprehension of difficult concepts and performance in a biochemistry course. Currents Pharm Teach Learn. 2023;15(1):69–78.

Mehmood K, Memon S, Ali F. Language barriers to Effective Communication in speaking English: a phenomenological study of Pakistan International cricketers. Pakistan Lang Humanit Rev. 2024;8(1):107–14.

Buelow JR, Downs D, Jorgensen K, Karges JR, Nelson D. Building interdisciplinary teamwork among allied health students through live clinical case simulations. J Allied Health. 2008;37(2):e109–23.

Rodríguez-Sedano FJ, Conde M, Fernández-Llamas C, editors. Measuring teamwork competence development in a multidisciplinary project based learning environment. Learning and Collaboration Technologies Design, Development and Technological Innovation: 5th International Conference, LCT 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15–20, 2018, Proceedings, Part I 5; 2018: Springer.

Wilson L, Ho S, Brookes RH. Student perceptions of teamwork within assessment tasks in undergraduate science degrees. Assess Evaluation High Educ. 2018;43(5):786–99.

Parmar D, Bickmore T. Making it personal: addressing individual audience members in oral presentations using augmented reality. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2020;4(2):1–22.

Pasandín AMR, Pérez IP, Iglesias PO, Díaz JJG. Cooperative oral presentations in higher education to enhance technical and soft skills in engineering students. Int J Continuing Eng Educ Life Long Learn. 2023;33(6):592–607.

Ragupathi K, Lee A. Beyond fairness and consistency in grading: The role of rubrics in higher education. Diversity and inclusion in global higher education: Lessons from across Asia. 2020:73–95.

Rabbi MF, Islam MS. The effect of academic stress and Mental anxiety among the students of Khulna University. Edukasiana: Jurnal Inovasi Pendidikan. 2024;3(3):280–99.

Popa CO, Schenk A, Rus A, Szasz S, Suciu N, Szabo DA, Cojocaru C. The role of acceptance and planning in stress management for medical students. Acta Marisiensis-Seria Med. 2020;66(3):101–5.

Christianson M, Payne S. Helping students develop skills for better presentations: Using the 20x20 format for presentation training. 語学研究. 2012;26:1–15.

Romero-Yesa S, Fonseca D, Aláez M, Amo-Filva D. Qualitative assessment of a challenge-based learning and teamwork applied in electronics program. Heliyon. 2023;9(12).

Bosco TJ, Gabriel B, Florence M, Gilbert N. Towards effective teaching and learning ESP in mixed classes: students’ interest, challenges and remedies. Int J Engl Literature Social Sci. 2020;5(2):506–16.

Qiao L, Xu Y, bin Ahmad N. An analysis of implementation challenges for English for specific purposes (ESP) formative assessment via blended learning mode at Chinese vocational polytechnics. Journal of Digital Education, Communication, and Arts (DECA). 2023;6(02):64–76.

Kho MG-W, Ting S-H. Overcoming oral presentation anxiety: a systematic review of Tertiary ESL/EFL Students’ challenges and strategies. Qeios. 2023.


Funding

We confirm that no funding was received for this work.

Author information

Authors and affiliations

Department of English Language, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran

Elham Nasiri & Laleh Khojasteh


Contributions

L.KH was involved in writing the proposal, reviewing the text, analyzing the data, and writing the manuscript. E.N was involved in designing the research and collecting and analyzing the data. Both authors have reviewed and approved the final version of the manuscript.

Corresponding author

Correspondence to Laleh Khojasteh.

Ethics declarations

Ethics approval and consent to participate

Our study, entitled “Evaluating Panel Discussions in ESP Classes: An Exploration of International Medical Students’ and ESP Instructors’ Perspectives through Qualitative Research,” was reviewed by the Institutional Review Board (IRB) of the School of Paramedical Sciences, Shiraz University of Medical Sciences (SUMS). The IRB reviewed the study on August 14th, 2024, and determined that formal ethics approval or a reference number was not required. This decision was based on the fact that the research posed minimal risk to participants and focused solely on their educational experiences without involving any intervention or the collection of sensitive personal data.

Consent for publication

Not Applicable.

Competing interests

We confirm that there are no known conflicts of interest associated with this publication and that this work did not receive any financial support.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Nasiri, E., Khojasteh, L. Evaluating panel discussions in ESP classes: an exploration of international medical students’ and ESP instructors’ perspectives through qualitative research. BMC Med Educ 24 , 925 (2024). https://doi.org/10.1186/s12909-024-05911-3


Received : 08 May 2024

Accepted : 14 August 2024

Published : 26 August 2024

DOI : https://doi.org/10.1186/s12909-024-05911-3


Keywords

  • Group oral presentations
  • Medical students
  • Panel discussions
  • ESP courses



MA Research (HIST70005)

Masters time-based research On Campus (Parkville)

About this subject

Coordinator: Heather Benbow

This subject is for students admitted in the Master of Arts (Thesis only) at the School of Historical and Philosophical Studies.

It is designed for students to develop advanced skills in carrying out independent and sustained research. The thesis is expected to showcase a critical application of specialised knowledge and contribute independently to the existing scholarship in the chosen area of research.

The normal length of an MA thesis is 30,000 to 50,000 words, exclusive of words in tables, maps, illustration, bibliographies and appendices. Footnotes are included as part of the word limit.

Intended learning outcomes

On completion of this subject, students should have:

  • Advanced knowledge and understanding in a chosen specialised area in arts, humanities or social sciences.
  • Demonstrated ability to conduct research with a strong emphasis on ethical considerations and rigorous methodologies.
  • Advanced technical skills to independently initiate and formulate original research.
  • Advanced communication and written skills to present a theoretically sound and methodologically defensible research investigation.
  • Capacity to disseminate research findings to a variety of audiences.

Generic skills

On completion of this subject, it is expected that candidates will possess the following generic skills: 

  • Critical reasoning and thinking
  • Problem solving
  • Communication 
  • Creativity and innovation 

Last updated: 26 August 2024

Grit, Love for Learning: It's Not Just Smarts That Boost Child Academics

Key takeaways

Sheer intelligence isn't the key to excellent grades: Perseverance and genuinely loving learning may also be critical

The genetics of certain personality traits appear to play a role in academic success

The importance of these traits may only increase as a child ages

TUESDAY, Aug. 27, 2024 (HealthDay News) -- A child's intelligence is not the sole key to academic success, a new British study concludes.

Instead, intelligence plus "non-cognitive" factors, such as a determination to excel despite obstacles and an innate love of learning, can push a child to the top of the class, new genetic data shows.

"Our research challenges the long-held assumption that intelligence is the primary driver of academic achievement," said study co-lead author Dr. Margherita Malanchini .

"We’ve found compelling evidence that non-cognitive skills -- such as grit, perseverance, academic interest and value attributed to learning -- are not only significant predictors of success, but that their influence grows stronger over time," added Malanchini, a senior lecturer in psychology at Queen Mary University of London.

She and co-lead study author Dr. Andrea Allegrini, of University College London, published their findings Aug. 26 in the journal Nature Human Behavior.

The new study involved over 10,000 British children whose academic success was tracked from ages 7 to 16. At the same time, the London researchers took a look at each child's DNA, seeking genes that are known to be linked to certain non-cognitive skills.

The team also compared outcomes in pairs of identical and fraternal twins, again looking at how shared genes might influence academic outcomes.

They put the data together to create a "polygenic" score predicting how well each child might do in school.
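For readers unfamiliar with the term, a polygenic score is essentially a weighted sum: each genetic variant a child carries is multiplied by an effect size estimated in earlier genome-wide studies, and the products are added up. The sketch below is a generic, purely illustrative calculation with made-up numbers; it is not the scoring pipeline used by the researchers.

```python
import numpy as np

# Hypothetical data: genotype dosages (0, 1 or 2 copies of the effect allele)
# for 4 children across 5 variants, plus per-variant effect sizes (weights)
# that would normally come from an earlier genome-wide association study.
genotypes = np.array([
    [0, 1, 2, 0, 1],
    [1, 1, 0, 2, 0],
    [2, 0, 1, 1, 1],
    [0, 2, 2, 0, 0],
])
effect_sizes = np.array([0.02, -0.01, 0.03, 0.01, 0.015])

# Polygenic score: weighted sum of allele counts for each child.
raw_scores = genotypes @ effect_sizes

# Scores are usually standardised before being compared or used as predictors.
polygenic_scores = (raw_scores - raw_scores.mean()) / raw_scores.std()
print(polygenic_scores)
```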

"We discovered that genetic effects associated with non-cognitive skills become increasingly predictive of academic achievement over the school years, in fact their effect nearly doubles between the ages of 7 and 16," Allegrini, a research fellow at University College London, said in a Queen Mary news release. "By the end of compulsory education, genetic dispositions towards non-cognitive skills were equally as important as those related to cognitive abilities in predicting academic success." 

In other words, intelligence alone isn't always enough to excel: Drive, curiosity and other traits also play a big role in academic success.

Some of that goes beyond genetics and relies on the home or school environment as well, the researchers said. The twins study helped support that notion.

"We found that while family-wide processes play a significant role, the increasing influence of non-cognitive genetics on academic achievement remained evident even within families," Allegrini explained. "This suggests that children may actively shape their own learning experiences based on their personality, dispositions and abilities, creating a feedback loop that reinforces their strengths." 

The new findings suggest that schools should focus on more than just kids' smarts to boost their grades.

"Our education system has traditionally focused on cognitive development," Malanchini said. "It's time to rebalance that focus and give equal importance to nurturing non-cognitive skills. By doing so, we can create a more inclusive and effective learning environment for all students." 

More information

Find out more about motivating your child to learn at Harvard Health.

SOURCE: Queen Mary University of London, news release, Aug. 26, 2024

What This Means For You

Encouraging your child's love of learning and determination can be just as important as their intelligence for academic success.



