• Open access
  • Published: 29 August 2024

Effects of adaptive scaffolding on performance, cognitive load and engagement in game-based learning: a randomized controlled trial

  • Tjitske J. E. Faber 1 , 2 ,
  • Mary E. W. Dankbaar 2 ,
  • Walter W. van den Broek 2 ,
  • Laura J. Bruinink 3 ,
  • Marije Hogeveen 3 &
  • Jeroen J. G. van Merriënboer 4   na1  

BMC Medical Education volume  24 , Article number:  943 ( 2024 ) Cite this article

Metrics details

While game-based learning has demonstrated positive outcomes for some learners, its efficacy remains variable. Adaptive scaffolding may improve performance and self-regulation during training by optimizing cognitive load. Informed by cognitive load theory, this study investigates whether adaptive scaffolding based on interaction trace data influences learning performance, self-regulation, cognitive load, test performance, and engagement in a medical emergency game.

Sixty-two medical students from three Dutch universities played six game scenarios. They received either adaptive or nonadaptive scaffolding in a randomized double-blinded matched pairs yoked control design. During gameplay, we measured learning performance (accuracy, speed, systematicity), self-regulation (self-monitoring, help-seeking), and cognitive load. Test performance was assessed in a live scenario assessment at 2- and 6–12-week intervals. Engagement was measured after completing all game scenarios.

Surprisingly, the results unveiled no discernible differences between the groups experiencing adaptive and nonadaptive scaffolding. This finding is attributed to the unexpected alignment between the nonadaptive scaffolding and the needs of the participants in 64.9% of the scenarios, resulting in coincidentally tailored scaffolding. Exploratory analyses suggest that, compared to nontailored scaffolding, tailored scaffolding improved speed, reduced self-regulation, and lowered cognitive load. No differences in test performance or engagement were found.

Our results suggest adaptive scaffolding may enhance learning by optimizing cognitive load. These findings underscore the potential of adaptive scaffolding within GBL environments, cultivating a more tailored and effective learning experience. To leverage this potential effectively, researchers, educators, and developers are recommended to collaborate from the outset of designing adaptive GBL or computer-based simulation experiences. This collaborative approach facilitates the establishment of reliable performance indicators and enables the design of suitable, preferably real-time, scaffolding interventions. Future research should confirm the effects of adaptive scaffolding on self-regulation and learning, taking care to avoid unintended tailored scaffolding in the research design.

Trial registration

This study was preregistered with the Center for Open Science prior to data collection. The registry may be found at https://osf.io/7ztws/ .

Peer Review reports

Introduction

Game-based learning (GBL) is a promising tool to support learning [ 1 , 2 , 3 ], but differences in effectiveness between learners and learner groups have been observed [ 4 , 5 , 6 ]. Adaptive scaffolding, meaning the automatic modulation of support measures based on players’ characteristics or behaviors, has been shown to improve learning outcomes [ 7 , 8 ], possibly through the optimization of cognitive load [ 3 , 9 , 10 ]. However, the number of studies into the effects of adaptive scaffolding on cognitive load and learning outcomes in GBL is low [ 9 , 10 , 11 ]. This study aims to investigate the effects of adaptive scaffolding in a medical emergency simulation game.

Theoretical background

  • Cognitive load theory

To understand how the same instruction may have different effects on different learner groups, we turn to cognitive load theory (CLT [ 12 ]). This theory assumes a limited working memory and unlimited long-term memory holding cognitive schemas. Expertise comes from knowledge stored as schemas, and learning is described as the construction and automation of such schemas. To create schemas, new information must be ‘mindfully combined’ with other information or existing schemas. When working memory is overloaded, learning is impaired [ 13 ]. It follows that learners who have already developed relevant schemas will have more working memory resources to spare to deal with the task. These experienced learners may perform worse at a task when detailed instructions are provided (the “expertise reversal effect” [ 14 ]) because working memory becomes bogged down with attempts to cross-reference the instruction with existing schemas in long-term memory. Novice performers will benefit from instruction as the instruction may act as a central executive to organize the relevant information in working memory [ 3 ], freeing up cognitive load. Accordingly, instructional design should aim to 1) deliver learning activities, which present new information to be combined into more complex schemas (construction) or the opportunity to repeatedly apply existing schemas to new problems (automation), and 2) optimize cognitive load, to allow the learner to mindfully combine the new information.

In understanding how instruction influences cognitive load it is helpful to consider different types of cognitive load. Intrinsic cognitive load refers to the demands on working memory caused by the learning task itself. The more complex the learning task, or the lower the learner’s expertise, the higher the intrinsic cognitive load. Thus, the same learning task may cause a high cognitive load for a low-expertise learner but a low cognitive load for a high-expertise learner. Extraneous cognitive load is the load caused by demands on working memory caused by the instruction and the environment, rather than the information to be learned. Finally, germane cognitive load is the load required to deal with intrinsic cognitive load. It redistributes working memory resources to activities relevant to learning so that it promotes schema construction and automation. Techniques to measure cognitive load include direct measures such as subjective rating scales, including the popular 1-item Paas scale for mental effort [ 15 , 16 , 17 ], and dual-task methods (e.g. [ 18 ], Rojas, Haji [ 19 ], as well as indirect measures such as learning outcomes [ 20 ], physiological measures [ 21 ], and behavioral measures [ 22 ].

To optimize cognitive load in learning environments several principles have been described (e.g. [ 3 , 23 , 24 ]), including tailoring the instructional design to varying levels of learner expertise [ 9 ]. This may be accomplished through scaffolding, “the process whereby the support given to students is gradually reduced to counteract the adverse effects of excessive task complexity” [ 25 ]. Scaffolding is closely related to Vygotsky’s Zone of Proximal Development [ 26 ]. The additional support may take the form of supportive information (the provision of domain-general strategies to perform a task) or procedural information (specific information on how to complete routine aspects of a task) [ 27 ]. With scaffolding, the learner can perform more complex tasks or perform tasks more independently [ 27 , 28 , 29 ]. Scaffolding in general has been shown to improve learning outcomes in GBL [ 30 ]. However, superfluous scaffolding will increase extraneous cognitive load, for example by causing the learner to cross-reference provided instructions with information already present in their long-term memory, while insufficient or unnecessary scaffolding fails to lower the burden placed on the learner’s working memory, impeding the learning process in both situations [ 7 , 31 ]. Consequently, it is critical to provide contingent scaffolding: the right type and level of support at the appropriate time and rate.

Adaptive scaffolding

To ensure contingent scaffolding in computer-based learning environments such as digital GBL, adaptivity may be used: the automatic adjustment of a system to input from the player’s characteristics and choices [ 32 ]. While nonadaptive systems exacerbate differences between individuals, adaptations that are responsive to individual differences have been proposed to improve the equality and diversity of educational opportunities [ 33 ]. Adaptivity improves learning in hypermedia environments [ 34 ]. In GBL, several studies have investigated adaptivity, demonstrating promising effects on skill acquisition [ 35 , 36 , 37 ]. However, not all studies demonstrate favourable results [ 38 ].

Appropriate adaptive scaffolding should be triggered by indicators that identify the learner’s need for support. These indicators may be obtained before, during, or after a learning task. Examples include the learner's current knowledge level, cognitive load, stress measurements, performance assessments, or interaction traces documenting in-game events, choices, and behaviors, either separately or in combination [ 9 , 10 , 32 , 39 , 40 ]. Of these options, interaction traces in particular offer the advantage of unobtrusive and real-time collection, allowing for adaptations on a small timescale with short feedback loops. Examples of traces that can be used as indicators of performance in GBL include accuracy, speed, systematicity, and self-monitoring actions [ 41 , 42 , 43 , 44 ].

From the analysis presented above, we assume that adaptive scaffolding based on interaction traces is likely to positively influence cognitive load and improve learning task performance by freeing up working memory resources. In addition, this mechanism may improve the learner’s ability to self-regulate their learning, increase the transfer of learning, and influence learner engagement. We will discuss each of these below.

First, self-regulation of learning (SRL) refers to the modulation of affective, cognitive, and behavioral processes throughout a learning experience to reach the desired level of achievement [ 45 ]. Improved SRL can facilitate the learning of complex skills [ 46 , 47 , 48 , 49 , 50 , 51 ]. For example, students with higher developed SRL skills are better able to monitor their learning process during a task, recognize points of improvement, and use cognitive resources to support their learning, including help-seeking. Accordingly, SRL skills have been associated with improved confidence in learning, academic achievement, and success in clinical skills [ 47 , 49 , 52 , 53 ]. SRL is especially important in GBL, as the inherent openness of the learning environment requires students to take control of their learning [ 54 ]. Several authors have presented suggestions on how to integrate CLT and SRL theory, arguing that metacognitive and self-regulatory demands should be conceptualized as a form of working memory load that can add to the cognitive load related to task performance [ 55 , 56 , 57 ]. In this light, optimizing cognitive load through adaptive scaffolding allows more resources for SRL activities. Indeed, adaptive scaffolding has been shown to improve self-regulated learning in non-game environments [ 8 , 34 , 38 ] and it has been suggested that adaptive scaffolding can prompt students to consciously regulate their learning [ 7 ].

Second, we expect adaptive scaffolding to influence the transfer of learning: applying one’s prior knowledge or skill to novel tasks, contexts, or related materials [ 58 ]. In GBL transfer may not arise naturally, as learning takes place in an environment that can be notably different from real-life practice. However, well-designed simulations and games are favorable for situated learning, which is known to improve learning and transfer [ 59 ]. Transfer can be promoted by effortful learning conditions that trigger active and deep processing. Instructional strategies aiming to create these conditions include variability in practice and encouraging elaboration. From the CLT perspective, these strategies aim to increase germane cognitive load. Adaptive scaffolding can enhance this process by decreasing extraneous load when the learner is overloaded and increasing germane load in the case of cognitive underload. Research demonstrating these effects is scarce, with a notable paper by Basu, Biswas [ 60 ] reporting improved transfer of computational thinking skills in students who received adaptive scaffolding during training.

Third, scaffolding is likely to influence game engagement, meaning the experience of being fully involved in an activity. The ease of starting, playing, and progressing in the game are important factors that influence engagement [ 61 ]. Engagement improves learning and increases information retention [ 62 ]. Different effects of scaffolding on engagement in GBL have been reported. For example, Barzilai and Blau [ 63 ] found no effect on engagement, while others have demonstrated decreases in engagement (e.g. [ 63 , 64 , 65 ]. It should be noted that these findings relate to nonadaptive scaffolding. If this scaffolding fails to optimize cognitive load, it is likely that learners will lose motivation to continue working on a task [ 66 ] and be less engaged. On the contrary, adaptive scaffolding designed to optimize cognitive load may positively influence engagement, as observed in one study by Chen, Law and Huang [ 7 ].

Evaluating adaptive interventions

To specifically evaluate the effects of adaptive scaffolding, a yoked control research design may be applied [ 9 , 35 , 40 ]. In this design, matched participants are yoked (joined together) by receiving exactly the same treatment or interventions. From each pair, at random one participant is assigned to the adaptive condition and receives scaffolding tailored to their needs while their counterpart, assigned to the nonadaptive condition, is exposed to exactly the same scaffolding. Consequently, for the participant in the nonadaptive condition, the scaffolding is not intentionally adapted to their needs. The advantage of the yoked control design is that it allows the evaluation of the adaptation specifically. A difference in outcome may be attributed to the adaptation rather than the received support. However, depending on the heterogeneity in input used for the adaptive scaffolding, the nonadaptive scaffolding may coincidentally match the needs of the participant if their needs are the same as their counterpart adaptive in the adaptive condition. We will refer to the situation where participants in the nonadaptive condition coincidentally receive needed scaffolding as tailored scaffolding and the situation where they do not receive needed support as nontailored scaffolding.

Purpose of the study

In the present study, we will investigate the effects of adaptive scaffolding in a medical emergency simulation game. We hypothesize that adaptive scaffolding will result in lower cognitive load through a decrease in extraneous cognitive load (hypothesis 1). This decrease in cognitive load will free up working memory capacity, allowing the learner to better process the information in the learning task. This will result in improved learning task performance (hypothesis 2) during gameplay, measured as accuracy (hypothesis 2a), speed (hypothesis 2b), and systematicity (hypothesis 2c). Working memory capacity may also be used for self-regulatory activities, including (more) self-monitoring (hypothesis 3a) and (more) help-seeking (hypothesis 3b). We hypothesize that improved task performance and self-regulation will lead to more effective learning, measured as improved transfer test performance (hypothesis 4). Regarding engagement, we hypothesize that adaptive scaffolding will improve learner engagement (hypothesis 5). In the current study, we will compare the adaptive and nonadaptive scaffolding groups for each hypothesis, as well as discuss post hoc exploratory analyses regarding the influence of tailored scaffolding in the non-adaptive group.

To specifically evaluate the effects of adaptive scaffolding, we used a yoked control design as described above. Participants from the same university and either the same or immediately adjacent emergency care experience (0 cases, 1–2 cases, 3–5 cases) were matched in pairs. From each pair, one participant was randomly assigned to the adaptive scaffolding condition and the other to the nonadaptive condition. Ethical approval was provided by the Ethical Review Board of the Netherlands Association for Medical Education (dossier number 2021.3.5). Participants signed informed consent.

Participants

Demographics questionnaire.

A questionnaire was available regarding age, gender, study year, university of enrollment, and experience in emergency care. The questionnaire can be found in Appendix 1 .

E-learning and knowledge test

In emergency care, healthcare professionals are trained to adhere to the ABCDE approach. This is an internationally used method in which the acronym “ABCDE” guides healthcare providers to examine and treat patients in the following phases: Airway, Breathing, Circulation, Disability, and Exposure. Following the ABCDE structure ensures that the most life-threatening conditions are treated first. For example, in the ‘B’ phase, the healthcare provider focuses on the breathing by listening to the lungs, checking for blue discoloration of the skin (cyanosis), ordering a chest X-ray if necessary, and providing inhalation medication if needed.

To provide students with knowledge of the ABCDE approach, an e-learning module consisting of ± 90 screens of information, illustrations, interactive questions, and videos on emergency medicine and the ABCDE method was available online. To confirm sufficient knowledge, we used a validated knowledge test on the ABCDE approach developed using the Delphi method [ 67 ]. The test contained 29 multiple-choice items. We applied a pass rate of 60% to ensure an adequate knowledge level. The test could be re-taken an unlimited number of times.

The abcdeSIM simulation game

In the abcdeSIM simulation game, players must assess and treat a virtual patient in a simulated virtual emergency department [ 5 ]. For familiarization, a walk-through tutorial and a practice scenario are available. In the practice scenario, the patient is healthy and their condition does not deteriorate. The game contains different scenarios in which a patient presenting with a medical condition must be examined, diagnosed, and treated within 15 min. After completing a scenario, a score and feedback on interventions are displayed. The game score is generated by adding points for correct interventions and subtracting points for harmful interventions or overlooked necessary interventions. If all vital interventions are performed, a time bonus of one extra point per second remaining is awarded. The patient’s underlying condition determines the required interventions, which were established by a panel of content experts.

We used the practice scenario and six emergency scenarios in a fixed order as follows: practice, deep venous thrombosis, chronic obstructive pulmonary disease, gastrointestinal bleeding, acute myocardial infarction, sepsis caused by pneumonia, and anaphylactic shock. Complexity increases with subsequent scenarios, meaning the patient’s condition is more severe and requires more or more urgent interventions.

Scaffolding in the abcdeSIM game

To enable scaffolding in the abcdeSIM game, we implemented additional supportive information and procedural information as described by Faber, Dankbaar and van Merriënboer [ 68 ]. Both types of information can be toggled on and off separately, resulting in four possible scaffolding combinations: both supportive and procedural information provided, neither provided, only supportive information provided and only procedural information provided.

Supportive information explains to the learners how a learning domain is organized and how to approach problems in that domain. It supports the learner in developing general schemas and problem-solving approaches [ 27 ]. In the abcdeSIM game, supportive information consisted of an extended checklist designed to facilitate the construction of a cognitive schema representing the ABCDE approach. The original abcdeSIM game includes a basic checklist intended to help the learner structure their approach (Fig.  1 ), consisting of simple checkboxes for the general approach in each ABCDE phase. However, it does not specify which actions or measurements should be performed. The extended checklist prompts the player to evaluate specific items in each phase, such as looking at skin color, listening to the heart, and measuring blood pressure in the ‘C’ phase (Fig.  2 ).

figure 1

The basic checklist in abcdeSIM

figure 2

The extended checklist in abcdeSIM. A tab for general information (e.g. patient characteristics, presenting complaints) and one for each ABCDE phase prompt the player to examine specific features

Additional procedural information, meaning information provided in a just-in-time manner to complete routine aspects of tasks in the correct way [ 27 ], was implemented by showing a dialogue box upon tool selection. This dialogue box displays information on how and when to use the tool and appears every time the tool is selected until the player indicates to have read the information (Fig.  3 ).

figure 3

Tool information is provided in a dialogue box when a tool is selected. A checkbox in the bottom left corner enables the player to indicate they have read the information and do not want it to be shown again

Adaptive scaffolding algorithm

Adaptive scaffolding was provided based on different measures of task performance in the previously played scenario. The algorithm for adaptive scaffolding is summarized in Fig.  4 . First, supportive information was provided when cognitive strategy use was deemed inadequate. We used systematicity in approach as a measure for adequate cognitive strategy use. Systematicity in approach, quantified using a Hidden Markov Model as described by Lee et al. [ 44 ], describes the level to which a player takes actions in the correct order. The model yields a score ranging from 0 to 1. A high systematicity indicates efficient knowledge-based cognitive strategies. To establish cutoff points for systematicity, we used data from a previous study with medical students playing the abcdeSIM game [ 41 ] ( M  = 0.71 and SD  = 0.11). If the systematicity in the first scenario was below 0.70, additional supportive information was activated in the form of the extended checklist described above. For each subsequent scenario, the extended checklist was deactivated when systematicity increased at least 0.05 or was above 0.95, and activated if systematicity decreased by 0.05 or more.

figure 4

Algorithm for adaptive support

Secondly, procedural information about tool use was provided based on the frequency of inappropriate tool use, quantified by counting the number of times the in-game nurse issued a warning to the player during a scenario. We consider this an indicator of insufficient procedural knowledge regarding the correct application of the instruments available in the game. The presence of any warnings led to additional procedural scaffolding by activating tool information for the subsequent scenario. If no warnings occurred, tool information was deactivated in the subsequent scenario.

Outcome measures

Learning performance.

To operationalize learning performance, meaning the performance in the game, we measured the accuracy of clinical decision-making, speed, and systematicity. Accuracy represents applied domain knowledge and was measured as the game score minus the time bonus. Speed represents the strength of cognitive strategies used and was shown to distinguish between experts and novices by Lee et al. [ 44 ]. We measured speed both as the total time to scenario completion and as the relative time to complete three critical interventions: introducing oneself, attaching the vital functions monitor, and providing oxygen. To allow comparison between different scenarios, z -scores were calculated per scenario after checking the normality of distribution. Finally, systematicity represents the quality of cognitive strategies, or how to approach unfamiliar problems in this context. We operationalized systematicity as a measure of how well the player adhered to the ABCDE approach, calculated as described under ‘Adaptive scaffolding algorithm’ above. An overview of all included outcome measures is provided in Table  1 .

Cognitive load

Using an online questionnaire, we measured cognitive load for each game scenario using the Paas subjective rating scale [ 69 ] asking how much mental effort they invested in the task on a 1–9 scale, labeled from 1 = ‘very, very low mental effort’ to 9 = ‘very, very high mental effort’. According to Paas, Tuovinen [ 15 ], mental effort measured using this scale refers to “the aspect of cognitive load that is allocated to accommodate the demands imposed by the task” and as such may be considered to reflect the actual cognitive load.

Self-regulated learning

Interaction traces can offer insight into the use of specific SRL strategies in the game, such as monitoring, problem-solving, and decision-making processes [ 39 , 70 ]. To quantify the use of specific SRL strategies, we recorded the number of times participants accessed the checklist as a measure of monitoring and the number of telephone calls to a medical specialist or consultant as a measure of help-seeking .

Transfer test performance

To quantify transfer test performance, we used a live scenario-based skill assessment of the ABCDE approach at two time points (immediate assessment and delayed assessment). Four different scenarios were designed by content experts to be distinct from the game scenarios and checked for similar complexity. The scenarios concerned patients presenting with hypoglycemia, urosepsis, pneumothorax, and ruptured aneurysm of the abdominal aorta. In the immediate assessment, participants were presented with first the hypoglycemia and then the urosepsis scenario. In the delayed assessment, they were presented with first the pneumothorax and then the ruptured aneurysm of the abdominal aorta scenario. Expert clinicians experienced in simulation-based training and assessment facilitated the scenarios, playing the role of nurse, and assessed the participants’ performance. A basic manikin and practice crash cart were used. Vital functions, patient responses, and additional information were provided by the scenario assessor. The participants did not have to perform psychomotor skills, such as placing an iv or attaching the monitor, but did have to indicate when to apply these skills. The assessor rated performance using an assessment instrument adapted from Dankbaar et al. [ 71 ]. The rating consisted of a Competency Scale (6 items on the ABCDE method and diagnostics, rated on a 7-point scale from 1 = “very weak” to 7 = “excellent”) and a Global Performance Scale using a single 10-point scale to rate ‘independent functioning in caring for acutely ill patients in the Emergency Department’ (10 = “perfect”) as if the participant were a recently graduated physician. The assessment instruments are shown in Appendix 2 . To improve inter-rater reliability, the first author briefed all raters on the content of the scenarios, how to run the scenarios, how much support and guidance to provide during the assessment, and how to use the assessment instruments. Raters were blinded to the scaffolding conditions and the participant’s year of study. Feedback to the participant was provided only after the delayed assessment.

Game engagement

To measure game engagement, we used a questionnaire on participants’ experience adapted from Dankbaar, Stegers-Jager, Baarveld, Merrienboer, Norman, Rutten, et al. [ 5 ]. The questionnaire consists of 9 statements, including items such as: “I felt actively involved with the patient cases”, to be scored on a 5-point Likert scale (5 = fully agree). The questionnaire can be found in Appendix 3.

The overall study design is visualized in Fig.  5 . After enrollment, all participants were given access to the e-learning module and completed the demographics survey. Next, they were randomly divided into matched pairs. After passing the knowledge test, participants gained online access to the six game scenarios.

figure 5

Study design

In the scenarios, scaffolding was provided as follows:

Adaptive scaffolding condition : in the first patient scenario, no scaffolding was provided. In subsequent scenarios, adaptive scaffolding was provided as described above.

Non-adaptive condition : the yoked participant received the same scaffolding as the participant they were matched to. Each training sequence was allocated only once to one participant in the non-adaptive condition.

During the game scenarios, learning performance outcome measures were collected automatically. After each game scenario, participants were requested to indicate the cognitive load for the scenario in the separate online cognitive load questionnaire. After the sixth and final game scenario, they completed the engagement questionnaire. Within two weeks of completing the final game scenario, participants performed the first live scenario-based skill assessment. Six to twelve weeks later, participants returned for a delayed live scenario-based skill assessment to measure long-term retention. They could not access the abcdeSIM game between the two assessments.

Confirmatory analysis

For each game session, we used a specialized JavaScript parser to extract accuracy, scenario completion time, systematicity in approach, self-monitoring, and help-seeking as described by Faber, Dankbaar, Kickert, van den Broek and van Merriënboer [ 41 ]. The analysis was performed in R [ 72 ] using the Rstudio software version 1.2.1335 [ 73 ]. Data were visually inspected for normality. Differences between the groups in participant characteristics were tested for significance using paired t -tests for continuous variables and Stuart-Maxwell tests for categorical variables. We calculated Cronbach’s alpha for the questionnaires and assessment instruments to evaluate reliability. Multilevel correlations between the learning performance outcome measures were calculated using the correlation package [ 74 ].

For hypotheses 1, 2 and 3, we used multilevel regression (also known as linear mixed) models, taking into account the number of scenarios already played by the student. This type of model has been widely used in longitudinal data where repeated measurements of the same participants are taken over the study period [ 75 ]. We fitted a partially crossed linear mixed model, using the lme4 package [ 76 ]. We fit separate models for the following outcome measures: cognitive load (H1), accuracy, time spent on the scenario, time to vital interventions, and systematicity (H2), and frequency of self-monitoring and help-seeking (H3). We used the outcome measures as criterion measures and random intercepts for pair and participant as random effects, to account for the dependent data structure. As fixed effects, we included the number of scenarios played and the scaffolding condition (adaptive vs. non-adaptive). To calculate p values, we performed likelihood ratio tests comparing the full model with the effects in question against the model without the effects in question. Model comparisons can be found in Supplementary Table A. To test hypotheses 4 and 5, we performed a paired t -test for transfer test performance and engagement outcomes per condition.

Exploratory analysis

Because tailored scaffolding occurred, meaning participants in the nonadaptive group received the same support as they would have in the adaptive group, we performed separate exploratory subgroup analyses within the nonadaptive group. For learning performance, SRL, and cognitive load, we included these outcome measures as criterion measures and random intercepts for participants as random effects in multilevel regression models. As fixed effects, we included the number of scenarios played and whether supportive and procedural information was tailored. Model comparisons for the tailored scaffolding models can be found in Supplementary Table B. For test performance and engagement, we calculated Pearson’s r to test for correlations between the number of scenarios played with tailored scaffolding and the outcome measure.

Baseline characteristics

Eighty-three medical students (age M  = 22.8 years , SD  = 1.8) participated in the study. One participant was excluded because they did not adhere to the study protocol. Sixty-nine participants completed all six game scenarios, resulting in 32 complete pairs. The other 19 participants either could not be matched or failed to complete the game scenarios.

Participants in the adaptive and nonadaptive groups were similar in age, gender, experience with emergency care, study year, and score on the knowledge test. Detailed characteristics are shown in Table  2 . Tailored scaffolding was observed in 64.9% of the game scenarios played in the nonadaptive group, with an average of 3.9 tailored scenarios per participant (range 2–6). One participant in the nonadaptive group received tailored scaffolding on all six scenarios.

Sixty-four students matched in 32 pairs played a total of 384 game scenarios. The cognitive load questionnaire was completed for 244 game sessions played by 49 participants in 30 pairs (64.7% of game sessions). For seven game scenarios data were not available for analysis due to technical problems, resulting in data available for analysis for 377 game sessions played by 63 participants in 32 pairs for learning performance (accuracy, scenario completion time, and systematicity) and self-regulated learning (help-seeking and monitoring). Time to vital interventions could not be calculated in 160 sessions because one or more vital actions had been omitted, resulting in 221 sessions available for this analysis. Thirty student pairs completed the initial transfer test and twenty-three the delayed transfer test.

Reliability of instruments

In contrast to previous research validating the knowledge test with acceptable internal consistency (Cronbach’s α = . 77, [ 67 ]) our data show poor consistency (α = 0.55, 95% CI [0.38—0.69]). Internal consistency for the assessment scores was excellent (α = 0.95, 95% CI [0.93—0.97]). There was a strong correlation between the score for the competency scale and the global performance scale, for both the immediate (r p  = 0.89, p  < 0.001) and the delayed assessment (r p  = 0.90, p  < 0.001).

A weak positive correlation was found between accuracy and total scenario time (r = 0.27, p  = 0.015). For cognitive load, a significant correlation was present with systematicity ( r  = -0.28, p  = 0.008) and total scenario time ( r  = 0.27, p  = 0.015) but not accuracy, self-monitoring or help-seeking. Self-monitoring significantly correlated with accuracy ( r  = 0.32, p  = 0.001) and total scenario time ( r  = 0.33, p  < 0.001) but not with systematicity or help-seeking. For help-seeking we found a positive correlation with both accuracy ( r  = 0.35, p  < 0.001) and total scenario time ( r  = 0.42, p  < 0.001).

Adaptive scaffolding condition did not significantly predict accuracy, time to vital interventions, and systematicity (Supplementary Table A). A trend toward longer scenario completion time was found for the adaptive scaffolding condition (β = 52.60 s, SE  = 27.71, 95% CI = [-1.89 – 107.09], Supplementary Table B).

The model including scaffolding condition could not significantly predict cognitive load compared with the model without scaffolding condition (χ 2  = 1.71, df  = 1, p  = 0.191, Supplementary Table A).

Adaptive scaffolding condition predicted a non-significant increase in the frequency of self-monitoring (β = 0.65, SE  = 0.35, 95% CI [-0.03 – 1.34], Supplementary Table B). Help-seeking was not predicted by scaffolding condition.

We did not find differences in initial test performance between the conditions on both competency and global performance (respectively t = 0.71, df  = 29, p  = 0.480 and t = 0.93, df  = 29, p  = 0.357). Similarly, there were no differences in test performance on the delayed test (respectively t = -0.97, df  = 22, p  = 0.341 and t = -0.96, df  = 21, p  = 0.350). Results are shown in Table 3 .

Engagement was not significantly different between the adaptive and nonadaptive groups ( t  = 0.75662, df  = 29, p  = 0.455).

Thirty-two students in the non-adaptive group played a total number of 192 game scenarios. One scenario was not available for analysis due to technical issues, resulting in data for 191 game scenarios available for accuracy, scenario completion time, systematicity, help-seeking and self-monitoring. For 111 scenarios the time to vital interventions could be calculated. For 110 sessions, cognitive load data were measured. In 168 scenarios (87.9%) tailored supportive information was provided, while tailored procedural information was provided in 142 scenarios (74%). Descriptive statistics by tailored supportive and procedural scaffolding is available in Supplementary Table G and Supplementary Table H.

Full model estimates can be found in Supplementary Table F. Tailored scaffolding significantly predicted scenario completion time (χ 2  = 8.12, df  = 2, p  = 0.017) and time to vital interventions (χ 2  = 8.54, df  = 2, p  = 0.014), but not accuracy and systematicity. As can be seen in Fig.  6 , scenario completion time decreased both with tailored supportive and procedural information (respectively β = -90.57, SE  = 35.35, 95% CI [-160.13 – -21.02] and β = -36.76, SE  = 27.10, 95% CI [-90.23 – 16.72]). Tailored supportive information strongly decreased time to vital interventions (β = -0.82, SE  = 0.32, 95% CI [-1.45 – -0.19]) while tailored procedural information had a weaker opposite effect, slowing the participants down (β = 0.32, SE  = 0.25, 95% CI [-0.18 – 0.83]).

figure 6

Scenario completion time and tailored supportive information. Participants receiving tailored supportive information (blue) are faster, compared to participants receiving nontailored supportive information (red). Left: participants who do not need supportive information are faster to complete the scenario when information is not provided (blue) compared to those who are provided with supportive information (red). Right: when supportive information is indicated, providing the information results in a faster completion (blue) compared to not providing supportive information (red)

Including tailored scaffolding significantly improved the model to predict cognitive load (χ 2  = 14,85, df  = 6, p  = 0.021, Supplementary Table B). As shown in Fig.  5 , tailored supportive information significantly lowered cognitive load (respectively β = -0.88, SE  = 0.34, 95% CI [-1.56 – -0.20] Fig.  5 ) and a similar trend was observed for tailored procedural information (β = -0.51, SE  = 0.30, 95% CI [-1.10 – 0.09]). Full results of the model can be found in Supplementary Table C.

In the nonadaptive group, tailored scaffolding significantly predicted both self-monitoring and help-seeking (respectively χ 2  = 8.39, df  = 2, p  = 0.015 and χ 2  = 6.99, df  = 2, p  = 0.030). Tailored supportive information decreased the frequency of self-monitoring in the scenario in which it was provided (β = -0.85, SE  = 0.30, 95% CI [-1.44 – -0.26]) but had no large influence on help-seeking. In contrast, tailored procedural information did not influence self-monitoring significantly, but decreased help-seeking (β = -0.81, SE  = 0.31, 95% CI [-1.41 – -0.21]), as can be seen in Fig.  7 . Visual inspection (Figs.  8 and 9 ) suggests that the presence of the extended checklist increased monitoring behavior, regardless of the student’s needs. A post hoc multilevel model was constructed using self-monitoring as a criterion measure, random intercepts for participants, and as fixed effects the number of scenarios played, whether or not supportive and procedural information was available, and whether supportive and procedural information was tailored. This model was significantly different from the original model without the availability of supportive and procedural information (χ 2  = 45.49, df  = 2, p  < 0.001) and showed that the presence of the extended checklist significantly increased self-monitoring (β = 1.52, SE  = 0.21, 95% CI [1.11– 1.94]).

figure 7

Cognitive load and tailored supportive information. Tailored supportive information (blue) results in a lower cognitive load compared with nontailored supportive information (red). Left: participants who do not need supportive information experience higher cognitive load when information is provided compared to those who are not provided with supportive information. Right: when supportive information is indicated, providing the information results in a lower cognitive load compared to not providing supportive information

figure 8

Help-seeking actions. Participants for whom procedural information is tailored (blue) seek help less often compared to participants for whom procedural information is not tailored (red)

figure 9

Self-monitoring behavior increases when supportive information is available, regardless of whether the information was tailored to the player’s behavior

Looking at the influence of tailored scaffolding in the nonadaptive group, competency and global performance were not significantly correlated with the number of scenarios with tailored scaffolding on the first assessment (respectively r p  = 0.07, p 0.694 and r p  = -0.01, p  = 0.944), and on the delayed assessment (r p  = -0.13, p  = 0.537 and r p  = -0.10, p  = 0.641).

The number of scenarios with tailored scaffolding did not correlate with engagement in the non-adaptive group (r p  = 0.04, p  = 0.838).

This study investigated the effects of adaptive scaffolding in a medical emergency simulation game on cognitive load, self-regulation, learning performance, transfer test performance, and engagement in a yoked control design. Apart from a trend towards more frequent self-monitoring and a longer time to scenario completion, we found no significant differences between the adaptive and nonadaptive groups. Unfortunately, the study’s power to detect differences between the groups was reduced because participants in the nonadaptive group also received scaffolding tailored to their needs in 64.9% of the game scenarios. This likely occurred because participants in both groups displayed comparable in-game behaviors. A similar limitation was mentioned by Salden, Paas and van Merriënboer [ 40 ], proposing that homogeneity in prior knowledge and expertise level explain this phenomenon, although they do not describe to what extent it occurred. Consequently, we performed exploratory analyses in the nonadaptive subgroup investigating the effects of tailored versus non-tailored scaffolding.

Regarding hypothesis 1, the results of the exploratory analyses suggest that tailored scaffolding lowered cognitive load. This effect can be explained by a reduction in extraneous load: students who do not require support do not need to cross-reference the information provided by the scaffolding with existing schemas, while students who lack knowledge on how to proceed are given scaffolding that can organize their learning [ 3 ].

Regarding learning performance (hypothesis 2), accuracy and systematicity could not be predicted and results regarding speed were mixed. While the adaptive group as a whole took longer to complete the scenarios compared with the nonadaptive group, in the nonadaptive group tailored scaffolding shortened the time to scenario completion. Time to vital interventions decreased with tailored supportive information but increased with tailored procedural information. In the literature, different effects from different types of scaffolds have been described (e.g., Wu and Looi [ 77 ]), with general prompts (similar to the supportive information used in this study) stimulating metacognitive activities, like self-monitoring, and specific prompts stimulating reflection on domain-related tasks and task-specific skills. Two explanations for our findings come to mind: first and foremost, reading the procedural information during task execution takes time by itself that immediately adds to the time to vital interventions. Secondly, the supportive information may stimulate learners to go back to the standard approach they have learned, helping them back on track.

Regarding self-monitoring (hypothesis 3a), in contrast to our findings comparing the adaptive and nonadaptive group, we found significantly reduced self-monitoring with tailored supportive information. This contrasts with previous research in non-game environments, where increases in self-regulation have been observed with adaptive scaffolding, either provided by human tutors [ 8 , 78 ] or through rule-based artificial intelligence [ 38 ]. Visual inspection of our data and further exploratory post hoc analysis suggested that the presence of supportive information in itself increased the frequency of self-monitoring, while tailored scaffolding had no significant effects on self-monitoring frequency. This finding should be confirmed in an appropriately powered study, possibly combining interaction trace measures of SRL with other measures such as systematic observations [ 79 ], think-aloud protocols [ 80 ], micro-analytic questions [ 81 ], or eye-tracking data [ 82 ].

Help-seeking (hypothesis 3b) decreased with tailored procedural support. Participants who did not require procedural support and did not receive it, as well as those who did require procedural support and did receive it, sought help less often. Possibly, the tailored procedural information accurately provided the information the participants needed; hence the provision of help did not add much. We found no improvements in test performance (hypothesis 4) and learner engagement (hypothesis 5) with tailored scaffolding, likely because the analyses in the nonadaptive group had insufficient power for these single-timepoint outcomes.

Our study had several strengths. We included students from three different universities in a double-blinded randomized study design. The study intervention provided multiple scenarios and we measured performance on several dimensions, including transfer test performance and retention. To our knowledge, this study is the first one to investigate the effects of adaptive scaffolding on learning performance as well as transfer performance in the context of game-based learning. However, our findings must be interpreted in light of the following limitations.

The first limitation regards the occurrence of coincidental tailored scaffolding in the nonadaptive group. As described above, this reduced the study’s power in comparing adaptive and non-adaptive support. To avoid this, future research should attempt to increase the differences between the adaptive and nonadaptive groups. For example, a different sampling strategy aiming to increase heterogeneity would decrease the incidence of adaptive scaffolding. This could involve recruiting more expert learners (e.g. residents) as well as novices, and not matching the pairs by experience. Other options include implementing a larger number of unique input variables for the adaptive algorithm or applying a different research design. This design could incorporate an adaptive group, a control group that does not receive any scaffolding, and another group receiving random scaffolding. The second limitation concerns the application of the adaptive scaffolding in the next scenario, instead of providing the scaffolding in the scenario where the need for scaffolding was identified. The timing of scaffolding influences its effects. For example, study material provided before play has proven more effective than the other way around [ 63 ]. This may have attenuated the effects of the scaffolding provided in our study.

A final limitation in our study was the use of a single-item measure for cognitive load. We chose the Paas single item mental effort scale because it is sensitive to small changes [ 83 , 84 ], easy to use and barely interrupts gameplay. However, we failed to/did not find significant correlations between cognitive load and self-regulatory activities although we expected increases in germane load. A differentiated cognitive load measure could provide more insight into how adaptive scaffolding increases germane load, meaning the active resources invested by the learner, compared with the load produced by the task itself, consisting of intrinsic and extraneous load. Apart from the previously mentioned 10-item scale by Leppink et al. [ 16 ], the 8-item questionnaire by Klepsch and Seufert [ 85 ] and the 15-item scale developed by Krieglstein et al. [ 86 ] appear promising instruments that distinguish between active and passive mental load. Challenges in using these questionnaires involve the larger number of items, interrupting game flow, as well as the limited reliability for measuring germane cognitive load and sensitivity to changes in item formulation that may be necessary for translation. As germane cognitive load is dependent on intrinsic cognitive load [ 87 , 88 ], adding physiological measures (see Ayres et al. [ 21 ]) to non-intrusively provide insight into intrinsic cognitive load may help clarify the role of scaffolding in relation to task complexity.

Conclusions

We could not find evidence to support our hypothesis of improved performance and lower cognitive load in adaptive scaffolding in game-based learning. Exploratory analyses do suggest a possible effect of tailored scaffolding. To further build on these findings, we offer three recommendations for research in adaptive scaffolding in game-based learning/GBL?. First, researchers should choose their research design and adaptive algorithm carefully to prevent coincidental adaptive scaffolding in the control group, as described above. Secondly, we recommend a more granular approach to measuring cognitive load, combining multi-item subjective measurements with physiological measurements. Finally, the specific effects of adaptive scaffolding should be investigated, including different effects for various types of adaptive scaffolding. Options include incorporating eye tracking, think-aloud protocols, or cued recall interviews to elucidate the mechanisms through which adaptive scaffolding influenced self-regulation in the game.

Tailored scaffolding shows promise as a technique to optimize cognitive load in GBL. When designing an adaptive GBL or computer-based simulation environment, we recommend that educators and developers work towards adaptive scaffolding as a team from the start. This will facilitate the establishment of reliable indicators of performance, self-regulation, and learning, as well as the design of appropriate, preferably real-time, scaffolding. For educators or developers who are unable to implement adaptive scaffolding, supportive information may be provided as a static scaffold to improve self-monitoring.

To conclude, this study into the effects of scaffolding in a medical emergency simulation game suggests that implementing tailored scaffolding in GBL may optimize cognitive load. Tailored supportive and procedural information have different effects on self-regulation and learning performance, necessitating further research into the effects of adaptive support as well as the design of well-calibrated algorithms. Considering the pivotal role of cognitive load in learning, these findings should inform instructional design both in game-based learning as well as other educational formats.

Availability of data and materials

The datasets used during the current study are available from the corresponding author on reasonable request.

Abbreviations

Airway, breathing, circulation, disability, exposure: a mnemonic used in emergency medicine

  • Game-based learning

Self-regulated learning or self-regulation of learning

Abdulmajed H, Park YS, Tekian A. Assessment of educational games for health professions: a systematic review of trends and outcomes. 2015.

Google Scholar  

de Freitas S. Are games effective learning tools? A review of educational games. J Educ Technol Soc. 2018;21(2):74–84.

Kalyuga S, Plass JL. Evaluating and managing cognitive load in games. In: Ferdig RE, editor. Handbook of Research on Effective Electronic Gaming in Education. Hershey, PA: IGI Global; 2009. p. 719–37.

Chapter   Google Scholar  

Dankbaar MEW, Alsma J, Jansen EEH, van Merrienboer JJG, van Saase JLCM, Schuit SCE. An experimental study on the effects of a simulation game on students’ clinical cognitive skills and motivation. Adv Health Sci Educ Theory Pract. 2016;21(3):505–21.

Article   Google Scholar  

Dankbaar MEW, Roozeboom MB, Oprins EAPB, Rutten F, van Merrienboer JJG, van Saase JLCM, et al. Preparing residents effectively in emergency skills training with a serious game. Simul Healthc. 2017;12(1):9–16.

Munshi A, Biswas G, Baker R, Ocumpaugh J, Hutt S, Paquette L. Analysing adaptive scaffolds that help students develop self-regulated learning behaviours. J Comput Assist Learn. 2023;39(2):351–68. https://doi.org/10.1111/jcal.12761 .

Chen C-H, Law V, Huang K. Adaptive scaffolding and engagement in digital game-based learning. Educ Technol Res Dev. 2023;71:1785–98.

Azevedo R, Cromley JG, Seibert D. Does adaptive scaffolding facilitate students’ ability to regulate their learning with hypermedia? Contemp Educ Psychol. 2004;29(3):344–70.

Kalyuga S, Sweller J. Rapid dynamic assessment of expertise to improve the efficiency of adaptive e-learning. Educ Tech Res Dev. 2005;53(3):83–93.

Hennings C, Ahmad M, Lohan K. Real-Time Adaptive Game to Reduce Cognitive Load. In: Proceedings of the 9th International Conference on Human-Agent Interaction (HAI '21). New York: Association for Computing Machinery; 2021. p. 342–7.  https://doi-org.ru.idm.oclc.org/10.1145/3472307.3484674 .

Ke F. Designing and integrating purposeful learning in game play: a systematic review. Educ Tech Res Dev. 2016;64(2):219–44.

Sweller J, van Merriënboer JJG, Paas F. Cognitive architecture and instructional design: 20 years later. Educ Psychol Rev. 2019;31(2):261–92.

Sweller J, et al. Cognitive Load Theory. New York: Springer; 2011. https://doi.org/10.1007/978-1-4419-8126-4 .

Kalyuga S, Ayres P, Chandler P, Sweller J. The expertise reversal effect. Educ Psychol. 2003;38(1):23–31.

Paas F, Tuovinen, JE, Tabbers, H, Van Gerven, PWM. Cognitive Load Measurement as a Means to Advance Cognitive Load Theory. Educ Psychol. 2003;38(1):63–71. https://doi.org/10.1207/S15326985EP3801_8 .

Leppink J, Paas F, van der Vleuten CPM, van Gog T, van Merriënboer JJG. Development of an instrument for measuring different types of cognitive load. Behav Res Methods. 2013;45(4):1058–72.

Hart SG, Staveland, LE. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Adv Psychol. 1988;52:139-83. P. A. Hancock and N. Meshkati, Amsterdam

Brünken R, Steinbacher S, Plass JL, Leutner D. Assessment of cognitive load in multimedia learning using dual-task methodology. Exp Psychol. 2002;49(2):109.

Rojas D, Haji F, Shewaga R, Kapralos B, Dubrowski A. The impact of secondary-task type on the sensitivity of reaction-time based measurement of cognitive load for novices learning surgical skills using simulation. Stud Health Technol Inform. 2014;196:353–9.

Leppink J. Cognitive load measures mainly have meaning when they are combined with learning outcome measures. Med Educ. 2016;50(9):979-.

Ayres P, Lee JY, Paas F, van Merriënboer JJG. The Validity of Physiological Measures to Identify Differences in Intrinsic Cognitive Load. Front Psychol. 2021;12:702538.

Skulmowski A, Rey GD. Measuring cognitive load in embodied learning settings. Front Psychol. 2017;8:1191.

van Merriënboer JJG, Kester L. The Four-Component Instructional Design Model: Multimedia Principles in Environments for Complex Learning. The Cambridge handbook of multimedia learning. New York: Cambridge University Press; 2005. p. 71–93.

van Merriënboer JJ, Sweller J. Cognitive load theory in health professional education: design principles and strategies. Med Educ. 2010;44(1):85–93.

Könings KD, van Zundert M, van Merriënboer JJG. Scaffolding peer-assessment skills: Risk of interference with learning domain-specific skills? Learn Instr. 2019;60:85–94.

Vygotsky LS. Mind in Society: Development of Higher Psychological Processes. Edited by Michael Cole, Vera Jolm-Steiner, Sylvia Scribner, and Ellen Souberman. Cambridge: Harvard University Press; 1978. https://doi.org/10.2307/j.ctvjf9vz4 . Original manuscripts [ca. 1930–1934].

van Merriënboer JJG, Kirschner PA. Ten steps to complex learning: a systematic approach to four-component instructional design. 3rd ed. London: Routledge; 2017. p. 399.

Merrill MD. A task-centered instructional strategy. J Res Technol Educ. 2007;40(1):5–22.

Puntambekar S, Hubscher R. Tools for scaffolding students in a complex learning environment: what have we gained and what have we missed? Educ Psychol. 2005;40(1):1–12.

Cai Z, Mao P, Wang D, He J, Chen X, Fan X. Effects of scaffolding in digital game-based learning on student’s achievement: a three-level meta-analysis. Educ Psychol Rev. 2022;34(2):537–74.

van de Pol J, Volman M, Oort F, Beishuizen J. The effects of scaffolding in the classroom: support contingency and student independent working time in relation to student achievement, task effort and appreciation of support. Instr Sci. 2015;43(5):615–41.

Streicher A, Smeddinck JD. Personalized and Adaptive Serious Games. In: Dörner R, Göbel S, Kickmeier-Rust M, Masuch M, Zweig K, editors. Entertainment Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, July 5–10, 2015, Revised Selected Papers. Cham: Springer International Publishing; 2016. p. 332–77.

Snow RE. Individual differences and the design of educational programs. Am Psychol. 1986;41(10):1029–39.

Azevedo R, Cromley J, Moos D, Greene J, Winters F. Adaptive Content and Process Scaffolding: a key to facilitating students’ self-regulated learning with hypermedia. Psychol Test Assess Model. 2011;53:106.

Corbalan G, Kester L, van Merriënboer JJG. Selecting learning tasks: Effects of adaptation and shared control on learning efficiency and task involvement. Contemp Educ Psychol. 2008;33:733–56.

Leutner D. Guided discovery learning with computer-based simulation games: Effects of adaptive and non-adaptive instructional support. Learn Instr. 1993;3(2):113–32.

Serge SR, Priest HA, Durlach PJ, Johnson CI. The effects of static and adaptive performance feedback in game-based training. Comput Hum Behav. 2013;29(3):1150–8.

Lim L, Bannert M, van der Graaf J, Singh S, Fan Y, Surendrannair S, et al. Effects of real-time analytics-based personalized scaffolds on students’ self-regulated learning. Comput Hum Behav. 2023;139: 107547.

Serrano-Laguna Á, Manero B, Freire M, Fernández-Manjón B. A methodology for assessing the effectiveness of serious games and for inferring player learning outcomes. Multimed Tools Appl. 2018;77(2):2849–71.

Salden RJCM, Paas F, van Merriënboer JJG. Personalised adaptive task selection in air traffic control: Effects on training efficiency and transfer. Learn Instr. 2006;16(4):350–62.

Faber TJE, Dankbaar MEW, Kickert R, van den Broek WW, van Merriënboer JJG. Identifying indicators to guide adaptive scaffolding in games. Learn Instr. 2022;83:101666.

Kang J, Liu M, Qu W. Using gameplay data to examine learning behavior patterns in a serious game. Comput Hum Behav. 2017;72:757–70.

Riemer V, Schrader C. Impacts of behavioral engagement and self-monitoring on the development of mental models through serious games: Inferences from in-game measures. Comput Hum Behav. 2016;64:264–73.

Lee JY, Donkers J, Jarodzka H, van Merriënboer JJG. How prior knowledge affects problem-solving performance in a medical simulation game: Using game-logs and eye-tracking. Comput Hum Behav. 2019;99:268–77.

Karoly P. Mechanisms of self-regulation: a systems view. Annu Rev Psychol. 1993;44(1):23–52.

van Houten-Schat MA, Berkhout JJ, van Dijk N, Endedijk MD, Jaarsma ADC, Diemers AD. Self-regulated learning in the clinical context: a systematic review. Med Educ. 2018;52(10):1008–15.

Cho KK, Marjadi B, Langendyk V, Hu W. The self-regulated learning of medical students in the clinical environment - a scoping review. BMC Med Educ. 2017;17(1):112.

Brydges R, Manzone J, Shanks D, Hatala R, Hamstra SJ, Zendejas B, et al. Self-regulated learning in simulation-based training: A systematic review and meta-analysis. Med Educ. 2015;49(4):368–78.

Sabourin JL, Shores LR, Mott BW, Lester JC. Understanding and predicting student self-regulated learning strategies in game-based learning environments. Int J Artif Intell Educ. 2013;23(1–4):94–114.

Boekaerts M, Minnaert A. Self-regulation with respect to informal learning. Int J Educ Res. 1999;31:533–44.

de Bruin ABH, van Merriënboer JJG. Bridging Cognitive Load and Self-Regulated Learning Research: a complementary approach to contemporary issues in educational research. Learn Instr. 2017;51:1–9.

Cleary TJ, Durning SJ, Artino AR. Microanalytic assessment of self-regulated learning during clinical reasoning tasks. Acad Med. 2016;91(11):1516–21.

Nietfeld JL, Shores LR, Hoffmann KF. Self-regulation and gender within a game-based learning environment. J Educ Psychol. 2014;106(4):961–73.

Wouters P, van Oostendorp H. A meta-analytic review of the role of instructional support in game-based learning. Comput Educ. 2013;60(1):412–25.

Schwonke R. Metacognitive load – Useful, or extraneous concept? Metacognitive and self-regulatory demands in computer-based learning. J Educ Technol Soc. 2015;18(4):172–84.

Seufert T. The interplay between self-regulation in learning and cognitive load. Educ Res Rev. 2018;24:116–29.

Valcke M. Cognitive load: updating the theory? Learn Instr. 2002;12(1):147–54.

Perkins DN, Salomon G. Transfer of learning. Int Encyclopedia Educ. 1992;2:6452–7.

Hajian S. Transfer of Learning and Teaching: A Review of Transfer Theories and Effective Instructional Practices. IAFOR J Educ. 2019;7:93–111.

Basu S, Biswas G, Kinnebrew J. Learner modeling for adaptive scaffolding in a Computational Thinking-based science learning environment. User Model User-Adap Interact. 2017;27(1):5–53.

Whitton N. Game Engagement Theory and Adult Learning. Simul Gaming. 2011;42(5):596–609.

Garris R, Ahlers R, Driskell JE. Games, motivation, and learning: a research and practice model. Simul Gaming. 2002;33(4):441–67.

Barzilai S, Blau I. Scaffolding game-based learning: Impact on learning achievements, perceived learning, and game experiences. Comput Educ. 2014;70:65–79.

Charsky D, Ressler W. “Games are made for fun”: Lessons on the effects of concept maps in the classroom use of computer games. Comput Educ. 2011;56(3):604–15.

Broza O, Barzilai S. When the mathematics of life meets school mathematics: Playing and learning on the “My Money” website. In: Learning in the Technological Era: Proceedings of the Sixth Chais Conference on Instructional Technologies Research; 2011.

van Merriënboer JJG, Clark RE, de Croock MBM. Blueprints for complex learning: The 4C/ID-model. Education Tech Research Dev. 2002;50(2):39–61.

Schoeber NHC, Linders M, Binkhorst M, De Boode W-P, Draaisma JMT, Morsink M, et al. Healthcare professionals’ knowledge of the systematic ABCDE approach: a cross-sectional study. BMC Emerg Med. 2022;22(1):202.

Faber TJE, Dankbaar MEW, van Merriënboer JJG. Four-Component Instructional Design Applied to a Game for Emergency Medicine. In: Brooks AL, Brahman S, Kapralos B, Nakajima A, Tyerman J, Jain LC, editors. Recent Advances in Technologies for Inclusive Well-Being: Virtual Patients, Gamification and Simulation. Cham: Springer International Publishing; 2021. p. 65–82.

Paas FG. Training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach. J Educ Psychol. 1992;84(4):429.

Rovers SFE, Clarebout G, Savelberg HHCM, de Bruin ABH, van Merriënboer JJG. Granularity matters: comparing different ways of measuring self-regulated learning. Metacogn Learn. 2019;14(1):1–19.

Dankbaar MEW, Stegers-Jager KM, Baarveld F, van Merriënboer JJG, Norman GR, Rutten FL, et al. Assessing the assessment in emergency care training. PLoS One. 2014;9(12):e114663.

R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing; 2021. https://www.R-project.org/ .

RStudio Team. RStudio: Integrated Development Environment for R. Boston: RStudio, Inc.; 2018.

Makowski D, Ben-Shachar MS, Patil I, Lüdecke D. Methods and algorithms for correlation analysis in R. J Open Source Softw. 2020;5(51):2306.

Magezi DA. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui). Front Psychol. 2015;6:2.

Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Softw. 2015;67(1):1–48.

Wu L, Looi C-K. Agent prompts: Scaffolding for productive reflection in an intelligent learning environment. J Educ Technol Soc. 2012;15(1):339–53.

Azevedo R, Cromley JG, Winters FI, Moos DC, Greene JA. Adaptive human scaffolding facilitates adolescents' self-regulated learning with hypermedia. Instr Sci. 2005;33:381–412. https://doi.org/10.1007/s11251-005-1273-8 .

Perry NE. Young children’s self-regulated learning and contexts that support it. J Educ Psychol. 1998;90:715–29.

Ericsson KA. Protocol Analysis and Expert Thought: Concurrent Verbalizations of Thinking during Experts' Performance on Representative Tasks. In: Ericsson KA, Charness N, Feltovich PJ, Hoffman RR, editors. The Cambridge handbook of expertise and expert performance. Cambridge: Cambridge University Press; 2006. p. 223–41. https://doi.org/10.1017/CBO9780511816796.013 .

Cleary TJ. Emergence of Self-Regulated Learning Microanalysis. Handbook of Self-Regulation of Learning and Performance. 2017. p. 105–13.

Kok EM, Jarodzka H. Before your very eyes: the value and limitations of eye tracking in medical education. Med Educ. 2017;51(1):114–22.

Paas FGWC, van Merriënboer JJG, Adam JJ. Measurement of Cognitive Load in Instructional Research. Percept Mot Skills. 1994;79(1):419–30.

Haji FA, Rojas D, Childs R, de Ribaupierre S, Dubrowski A. Measuring cognitive load: performance, mental effort and simulation task complexity. Med Educ. 2015;49(8):815–27.

Klepsch M, Seufert T. Making an Effort Versus Experiencing Load. Front Educ. 2021;6:645284.

Krieglstein F, Beege M, Rey GD, Sanchez-Stockhammer C, Schneider S. Development and validation of a theory-based questionnaire to measure different types of cognitive load. Educ Psychol Rev. 2023;35(1):9.

Klepsch M, Schmitz F, Seufert T. Development and Validation of Two Instruments Measuring Intrinsic, Extraneous, and Germane Cognitive Load. Front Psychol. 2017;8:1997.

Sweller J. Element interactivity and intrinsic, extraneous, and germane cognitive load. Educ Psychol Rev. 2010;22:123–38.

Acknowledgements

The authors would like to acknowledge all students who participated in the study. We are grateful to Femke Jongen for enabling data collection, and to Laurens Bisschops, Hella Borggreve, Sven Crama, Ineke Dekker, Els Jansen, and Dewa Westerman for performing assessments. We extend our thanks to Kim van den Bosch, Josepha Kuhn, Joost Jan Pannebakker, and Robin de Vries for assisting with the data collection. The authors thank Tin de Zeeuw, P.D.Eng., for creating software to process the game log data. Finally, we wish to acknowledge IJsfontein for creating adaptive support and VirtualMedSchool for implementing the support algorithm and providing access to the study version of the game.

Funding

This work was supported by the Netherlands Organization for Scientific Research (NWO) [project number 055.16.117].

Author information

Jeroen J. G. van Merriënboer passed away on November 15, 2023.

Authors and Affiliations

Department of Anesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Huispostnummer 717, P.O. Box 9101, Nijmegen, 6500 HB, The Netherlands

Tjitske J. E. Faber

Erasmus MC, University Medical Center Rotterdam, Institute for Medical Education Research Rotterdam, P.O. Box 2040, 3000 CA, Rotterdam, The Netherlands

Tjitske J. E. Faber, Mary E. W. Dankbaar & Walter W. van den Broek

Department of Neonatology, Radboud University Medical Center, Radboud Institute for Health Sciences, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands

Laura J. Bruinink & Marije Hogeveen

School of Health Professions Education, Faculty of Health, Medicine and Life Sciences, Maastricht University, P.O. Box 616, 6200 MD, Maastricht, The Netherlands

Jeroen J. G. van Merriënboer

Contributions

TF, MD, JvM and WvdB conceptualized the study and developed the study protocol. TF oversaw the investigations, conducted analyses, and wrote the main manuscript text. MD, WvdB and MH provided resources for data collection. TF, LB and MH performed data collection. JvM reviewed the first version of the manuscript before he passed away on November 15, 2023. All remaining authors reviewed the final manuscript.

Corresponding author

Correspondence to Tjitske J. E. Faber.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was provided by the Ethical Review Board of the Netherlands Association for Medical Education (reference number 2021.3.5). All participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

VirtualMedSchool, the owner of the abcdeSIM game, provided access to the game for this study and technical support during the data collection. They were not involved in the collection, analysis, and interpretation of the data, the preparation of the manuscript, or the decision to publish. The authors have no other interests to declare.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Faber, T.J.E., Dankbaar, M.E.W., van den Broek, W.W. et al. Effects of adaptive scaffolding on performance, cognitive load and engagement in game-based learning: a randomized controlled trial. BMC Med Educ 24 , 943 (2024). https://doi.org/10.1186/s12909-024-05698-3

Received : 28 August 2023

Accepted : 23 June 2024

Published : 29 August 2024

DOI : https://doi.org/10.1186/s12909-024-05698-3

Keywords

  • Serious games
  • Instructional design

  • Review Article
  • Open access
  • Published: 11 January 2023

The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature

  • Enwei Xu   ORCID: orcid.org/0000-0001-6424-8169 1 ,
  • Wei Wang 1 &
  • Qingxia Wang 1  

Humanities and Social Sciences Communications volume  10 , Article number:  16 ( 2023 ) Cite this article

Metrics details

  • Science, technology and society

Collaborative problem-solving has been widely embraced in the classroom instruction of critical thinking, which is regarded as the core of curriculum reform based on key competencies in the field of education as well as a key competence for learners in the 21st century. However, the effectiveness of collaborative problem-solving in promoting students’ critical thinking remains uncertain. This research presents the major findings of a meta-analysis of 36 empirical studies published in worldwide educational periodicals during the 21st century, conducted to identify the effectiveness of collaborative problem-solving in promoting students’ critical thinking and to determine, based on evidence, whether and to what extent collaborative problem solving can result in a rise or decrease in critical thinking. The findings show that (1) collaborative problem solving is an effective teaching approach to foster students’ critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]); (2) with respect to the dimensions of critical thinking, collaborative problem solving can significantly and successfully enhance students’ attitudinal tendencies (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas it falls short in terms of improving students’ cognitive skills, having only an upper-middle impact (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]); and (3) the teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all have an impact on critical thinking and can be viewed as important moderating factors that affect how critical thinking develops. On the basis of these results, recommendations are made for further study and instruction to better support students’ critical thinking in the context of collaborative problem-solving.

Introduction

Although critical thinking has a long history in research, the concept of critical thinking, which is regarded as an essential competence for learners in the 21st century, has recently attracted more attention from researchers and teaching practitioners (National Research Council, 2012 ). Critical thinking should be the core of curriculum reform based on key competencies in the field of education (Peng and Deng, 2017 ) because students with critical thinking can not only understand the meaning of knowledge but also effectively solve practical problems in real life even after knowledge is forgotten (Kek and Huijser, 2011 ). The definition of critical thinking is not universal (Ennis, 1989 ; Castle, 2009 ; Niu et al., 2013 ). In general, the definition of critical thinking is a self-aware and self-regulated thought process (Facione, 1990 ; Niu et al., 2013 ). It refers to the cognitive skills needed to interpret, analyze, synthesize, reason, and evaluate information as well as the attitudinal tendency to apply these abilities (Halpern, 2001 ). The view that critical thinking can be taught and learned through curriculum teaching has been widely supported by many researchers (e.g., Kuncel, 2011 ; Leng and Lu, 2020 ), leading to educators’ efforts to foster it among students. In the field of teaching practice, there are three types of courses for teaching critical thinking (Ennis, 1989 ). The first is an independent curriculum in which critical thinking is taught and cultivated without involving the knowledge of specific disciplines; the second is an integrated curriculum in which critical thinking is integrated into the teaching of other disciplines as a clear teaching goal; and the third is a mixed curriculum in which critical thinking is taught in parallel to the teaching of other disciplines for mixed teaching training. Furthermore, numerous measuring tools have been developed by researchers and educators to measure critical thinking in the context of teaching practice. These include standardized measurement tools, such as WGCTA, CCTST, CCTT, and CCTDI, which have been verified by repeated experiments and are considered effective and reliable by international scholars (Facione and Facione, 1992 ). In short, descriptions of critical thinking, including its two dimensions of attitudinal tendency and cognitive skills, different types of teaching courses, and standardized measurement tools provide a complex normative framework for understanding, teaching, and evaluating critical thinking.

Cultivating critical thinking in curriculum teaching can start with a problem, and one of the most popular critical thinking instructional approaches is problem-based learning (Liu et al., 2020 ). Duch et al. ( 2001 ) noted that problem-based learning in group collaboration is progressive active learning, which can improve students’ critical thinking and problem-solving skills. Collaborative problem-solving is the organic integration of collaborative learning and problem-based learning: it places learners at the center of the learning process and uses ill-structured problems in real-world situations as the starting point for learning (Liang et al., 2017 ). Students learn the knowledge needed to solve problems in a collaborative group, reach a consensus on problems in the field, and form solutions through social cooperation methods, such as dialogue, interpretation, questioning, debate, negotiation, and reflection, thus promoting the development of learners’ domain knowledge and critical thinking (Cindy, 2004 ; Liang et al., 2017 ).

Collaborative problem-solving has been widely used in the teaching practice of critical thinking, and several studies have attempted to conduct a systematic review and meta-analysis of the empirical literature on critical thinking from various perspectives. However, little attention has been paid to the impact of collaborative problem-solving on critical thinking. Therefore, the best approach for developing and enhancing critical thinking throughout collaborative problem-solving is to examine how to implement critical thinking instruction; however, this issue is still unexplored, which means that many teachers are incapable of better instructing critical thinking (Leng and Lu, 2020 ; Niu et al., 2013 ). For example, Huber ( 2016 ) provided the meta-analysis findings of 71 publications on gaining critical thinking over various time frames in college with the aim of determining whether critical thinking was truly teachable. These authors found that learners significantly improve their critical thinking while in college and that critical thinking differs with factors such as teaching strategies, intervention duration, subject area, and teaching type. The usefulness of collaborative problem-solving in fostering students’ critical thinking, however, was not determined by this study, nor did it reveal whether there existed significant variations among the different elements. A meta-analysis of 31 pieces of educational literature was conducted by Liu et al. ( 2020 ) to assess the impact of problem-solving on college students’ critical thinking. These authors found that problem-solving could promote the development of critical thinking among college students and proposed establishing a reasonable group structure for problem-solving in a follow-up study to improve students’ critical thinking. Additionally, previous empirical studies have reached inconclusive and even contradictory conclusions about whether and to what extent collaborative problem-solving increases or decreases critical thinking levels. As an illustration, Yang et al. ( 2008 ) carried out an experiment on the integrated curriculum teaching of college students based on a web bulletin board with the goal of fostering participants’ critical thinking in the context of collaborative problem-solving. These authors’ research revealed that through sharing, debating, examining, and reflecting on various experiences and ideas, collaborative problem-solving can considerably enhance students’ critical thinking in real-life problem situations. In contrast, collaborative problem-solving had a positive impact on learners’ interaction and could improve learning interest and motivation but could not significantly improve students’ critical thinking when compared to traditional classroom teaching, according to research by Naber and Wyatt ( 2014 ) and Sendag and Odabasi ( 2009 ) on undergraduate and high school students, respectively.

The above studies show that there is inconsistency regarding the effectiveness of collaborative problem-solving in promoting students’ critical thinking. Therefore, it is essential to conduct a thorough and trustworthy review to detect and decide whether and to what degree collaborative problem-solving can result in a rise or decrease in critical thinking. Meta-analysis is a quantitative analysis approach that is utilized to examine quantitative data from various separate studies that are all focused on the same research topic. This approach characterizes the overall impact by averaging the effect sizes of numerous individual studies in an effort to reduce the uncertainty inherent in any single study and produce more conclusive findings (Lipsey and Wilson, 2001 ).

This paper carried out a meta-analysis to examine the effectiveness of collaborative problem-solving in promoting students’ critical thinking, in order to contribute to both research and practice. The following research questions were addressed:

What is the overall effect size of collaborative problem-solving in promoting students’ critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills)?

How are the disparities between the study conclusions impacted by various moderating variables if the impacts of various experimental designs in the included studies are heterogeneous?

This research followed the strict procedures (e.g., database searching, identification, screening, eligibility, merging, duplicate removal, and analysis of included studies) of the meta-analysis approach proposed by Cooper ( 2010 ) for examining quantitative data from separate studies focused on the same research topic. The relevant empirical research that appeared in worldwide educational periodicals within the 21st century was subjected to this meta-analysis using Rev-Man 5.4. The consistency of the data extracted separately by two researchers was tested using Cohen’s kappa coefficient, and a publication bias test and a heterogeneity test were run on the sample data to ascertain the quality of this meta-analysis.
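
As an illustration, Cohen’s kappa corrects the raw percentage agreement between two coders for agreement expected by chance. The following is a minimal Python sketch using made-up screening decisions, not this study’s coding data:

```python
from collections import Counter

def cohen_kappa(coder1, coder2):
    """Cohen's kappa for two raters assigning categorical codes to the same items."""
    n = len(coder1)
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n  # observed agreement
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum(c1[k] * c2[k] for k in c1) / n ** 2          # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# hypothetical include/exclude decisions from two reviewers
r1 = ["include", "exclude", "exclude", "include", "exclude"]
r2 = ["include", "exclude", "include", "include", "exclude"]
print(round(cohen_kappa(r1, r2), 2))  # 0.62
```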

Data sources and search strategies

There were three stages to the data collection process for this meta-analysis, as shown in Fig. 1 , which reports the number of articles included and eliminated during the selection process based on the study eligibility criteria.

Figure 1. Flowchart of the records identified, included, and excluded during study selection.

First, the databases used to systematically search for relevant articles were the journal papers of the Web of Science Core Collection and the Chinese Core source journal, as well as the Chinese Social Science Citation Index (CSSCI) source journal papers included in CNKI. These databases were selected because they are credible platforms that are sources of scholarly and peer-reviewed information with advanced search tools and contain literature relevant to the subject of our topic from reliable researchers and experts. The search string with the Boolean operator used in the Web of Science was “TS = (((“critical thinking” or “ct” and “pretest” or “posttest”) or (“critical thinking” or “ct” and “control group” or “quasi experiment” or “experiment”)) and (“collaboration” or “collaborative learning” or “CSCL”) and (“problem solving” or “problem-based learning” or “PBL”))”. The research area was “Education Educational Research”, and the search period was “January 1, 2000, to December 30, 2021”. A total of 412 papers were obtained. The search string with the Boolean operator used in the CNKI was “SU = (‘critical thinking’*‘collaboration’ + ‘critical thinking’*‘collaborative learning’ + ‘critical thinking’*‘CSCL’ + ‘critical thinking’*‘problem solving’ + ‘critical thinking’*‘problem-based learning’ + ‘critical thinking’*‘PBL’ + ‘critical thinking’*‘problem oriented’) AND FT = (‘experiment’ + ‘quasi experiment’ + ‘pretest’ + ‘posttest’ + ‘empirical study’)” (translated into Chinese when searching). A total of 56 studies were found throughout the search period of “January 2000 to December 2021”. From the databases, all duplicates and retractions were eliminated before exporting the references into Endnote, a program for managing bibliographic references. In all, 466 studies were found.

Second, the studies that matched the inclusion and exclusion criteria for the meta-analysis were chosen by two researchers after they had reviewed the abstracts and titles of the gathered articles, yielding a total of 126 studies.

Third, two researchers thoroughly reviewed each included article’s whole text in accordance with the inclusion and exclusion criteria. Meanwhile, a snowball search was performed using the references and citations of the included articles to ensure complete coverage of the articles. Ultimately, 36 articles were kept.

Two researchers worked together to carry out this entire process, and a consensus rate of 94.7% was reached after discussion and negotiation to clarify any emerging differences.

Eligibility criteria

Since not all the retrieved studies matched the criteria for this meta-analysis, eligibility criteria for both inclusion and exclusion were developed as follows:

The publication language of the included studies was limited to English and Chinese, and the full text had to be obtainable. Articles that did not meet the language requirement, and articles not published between 2000 and 2021, were excluded.

The research design of the included studies must be empirical and quantitative studies that can assess the effect of collaborative problem-solving on the development of critical thinking. Articles that could not identify the causal mechanisms by which collaborative problem-solving affects critical thinking, such as review articles and theoretical articles, were excluded.

The research method of the included studies must feature a randomized controlled experiment, a quasi-experiment, or a natural experiment, all of which have a higher degree of internal validity with strong experimental designs and can plausibly provide evidence that critical thinking and collaborative problem-solving are causally related. Articles with non-experimental research methods, such as purely correlational or observational studies, were excluded.

The participants of the included studies were only students in school, including K-12 students and college students. Articles in which the participants were non-school students, such as social workers or adult learners, were excluded.

The research results of the included studies must mention definite signs that may be utilized to gauge critical thinking’s impact (e.g., sample size, mean value, or standard deviation). Articles that lacked specific measurement indicators for critical thinking and could not calculate the effect size were excluded.

Data coding design

In order to perform a meta-analysis, it is necessary to collect the most important information from the articles, codify that information’s properties, and convert descriptive data into quantitative data. Therefore, this study designed a data coding template (see Table 1 ). Ultimately, 16 coding fields were retained.

The designed data-coding template consisted of three pieces of information. Basic information about the papers was included in the descriptive information: the publishing year, author, serial number, and title of the paper.

The variable information for the experimental design had three variables: the independent variable (instruction method), the dependent variable (critical thinking), and the moderating variable (learning stage, teaching type, intervention duration, learning scaffold, group size, measuring tool, and subject area). Depending on the topic of this study, the intervention strategy, as the independent variable, was coded into collaborative and non-collaborative problem-solving. The dependent variable, critical thinking, was coded as a cognitive skill and an attitudinal tendency. And seven moderating variables were created by grouping and combining the experimental design variables discovered within the 36 studies (see Table 1 ), where learning stages were encoded as higher education, high school, middle school, and primary school or lower; teaching types were encoded as mixed courses, integrated courses, and independent courses; intervention durations were encoded as 0–1 weeks, 1–4 weeks, 4–12 weeks, and more than 12 weeks; group sizes were encoded as 2–3 persons, 4–6 persons, 7–10 persons, and more than 10 persons; learning scaffolds were encoded as teacher-supported learning scaffold, technique-supported learning scaffold, and resource-supported learning scaffold; measuring tools were encoded as standardized measurement tools (e.g., WGCTA, CCTT, CCTST, and CCTDI) and self-adapting measurement tools (e.g., modified or made by researchers); and subject areas were encoded according to the specific subjects used in the 36 included studies.

The data information contained three metrics for measuring critical thinking: sample size, average value, and standard deviation. It is vital to remember that studies with different experimental designs frequently adopt different formulas to determine the effect size; this paper used the standardized mean difference (SMD) calculation formula proposed by Morris ( 2008 , p. 369; see Supplementary Table S3 ).
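
The supplementary table is not reproduced here; as a reference point, the pretest–posttest–control SMD from Morris (2008) is commonly written as follows (a sketch of the standard formulation, not a transcription of the paper’s supplement):

```latex
d_{ppc} = c_P \cdot
  \frac{(M_{\mathrm{post},T} - M_{\mathrm{pre},T}) - (M_{\mathrm{post},C} - M_{\mathrm{pre},C})}
       {SD_{\mathrm{pre}}},
\qquad
c_P = 1 - \frac{3}{4(n_T + n_C - 2) - 1}
```

where SD_pre is the pretest standard deviation pooled over the treatment group T and the control group C, and c_P is a small-sample bias correction.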

Procedure for extracting and coding data

According to the data coding template (see Table 1 ), the 36 papers’ information was retrieved by two researchers, who then entered them into Excel (see Supplementary Table S1 ). The results of each study were extracted separately in the data extraction procedure if an article contained numerous studies on critical thinking, or if a study assessed different critical thinking dimensions. For instance, Tiwari et al. ( 2010 ) used four time points, which were viewed as numerous different studies, to examine the outcomes of critical thinking, and Chen ( 2013 ) included the two outcome variables of attitudinal tendency and cognitive skills, which were regarded as two studies. After discussion and negotiation during data extraction, the two researchers’ consistency test coefficients were roughly 93.27%. Supplementary Table S2 details the key characteristics of the 36 included articles with 79 effect quantities, including descriptive information (e.g., the publishing year, author, serial number, and title of the paper), variable information (e.g., independent variables, dependent variables, and moderating variables), and data information (e.g., mean values, standard deviations, and sample size). Following that, testing for publication bias and heterogeneity was done on the sample data using the Rev-Man 5.4 software, and then the test results were used to conduct a meta-analysis.

Publication bias test

Publication bias is said to be present when the sample of studies included in a meta-analysis does not accurately reflect the general status of research on the relevant subject. Because publication bias can undermine the reliability and accuracy of a meta-analysis, the sample data need to be checked for it (Stewart et al., 2006 ). A popular check is the funnel plot: publication bias is unlikely when the data points are dispersed symmetrically on either side of the average effect size and concentrated toward the top of the funnel. In the funnel plot for this analysis (see Fig. 2 ), the data are dispersed evenly within the upper portion of the funnel, indicating that publication bias is unlikely in this situation.

Figure 2. Funnel plot of the publication bias test for the 79 effect sizes across the 36 included studies.
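
A funnel plot of this kind is straightforward to reproduce. The following Python/matplotlib sketch uses made-up effect sizes and standard errors (not the study’s 79 effect quantities) to draw the scatter and a pseudo 95% confidence funnel around the pooled effect:

```python
import numpy as np
import matplotlib.pyplot as plt

# hypothetical per-study effect sizes (SMD) and standard errors
es = np.array([0.6, 0.9, 0.75, 1.1, 0.5, 0.85])
se = np.array([0.30, 0.22, 0.15, 0.35, 0.12, 0.25])

mean_es = np.average(es, weights=1 / se**2)      # inverse-variance weighted mean
se_grid = np.linspace(0.001, se.max() * 1.1, 100)

plt.scatter(es, se)
plt.axvline(mean_es, linestyle="--", color="k")          # centre line at the pooled effect
plt.plot(mean_es - 1.96 * se_grid, se_grid, "k:")        # pseudo 95% confidence funnel
plt.plot(mean_es + 1.96 * se_grid, se_grid, "k:")
plt.gca().invert_yaxis()                                 # precise studies sit at the top
plt.xlabel("Effect size (SMD)")
plt.ylabel("Standard error")
plt.show()
```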

Heterogeneity test

The results of a heterogeneity test on the effect sizes guide the selection of the appropriate effect model for the meta-analysis. In a meta-analysis, it is common practice to gauge the degree of data heterogeneity using the I² value; I² ≥ 50% is typically understood to denote medium-to-high heterogeneity, which calls for the adoption of a random effect model; otherwise, a fixed effect model ought to be applied (Lipsey and Wilson, 2001 ). The heterogeneity test in this paper (see Table 2 ) revealed that I² was 86%, displaying significant heterogeneity (P < 0.01). To ensure accuracy and reliability, the overall effect size was therefore calculated utilizing the random effect model.
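
The quantities behind this decision are simple to compute. Below is a minimal sketch with hypothetical effect sizes and variances: Cochran’s Q and I² = (Q − df)/Q, floored at zero:

```python
import numpy as np

def heterogeneity(es, var):
    """Cochran's Q and Higgins' I^2 from per-study effect sizes and variances."""
    w = 1 / var                                # inverse-variance weights
    fixed = np.sum(w * es) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (es - fixed) ** 2)          # Cochran's Q
    df = len(es) - 1
    i2 = 0.0 if q == 0 else max(0.0, (q - df) / q) * 100  # I^2 as a percentage
    return q, i2

# hypothetical data, not the study's 79 effect quantities
es = np.array([0.6, 0.9, 0.75, 1.1, 0.5])
var = np.array([0.09, 0.05, 0.02, 0.12, 0.015])
q, i2 = heterogeneity(es, var)
print(f"Q = {q:.2f}, I2 = {i2:.0f}% -> {'random' if i2 >= 50 else 'fixed'} effect model")
```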

The analysis of the overall effect size

This meta-analysis utilized a random effect model to examine the 79 effect quantities from 36 studies, accounting for their heterogeneity. In accordance with Cohen’s criterion (Cohen, 1992 ), the analysis results, shown in the forest plot of the overall effect (see Fig. 3 ), make clear that the cumulative effect size of collaborative problem-solving is 0.82, which is statistically significant (z = 12.78, P < 0.01, 95% CI [0.69, 0.95]) and indicates that it can encourage learners to practice critical thinking.

Figure 3. Forest plot of the overall effect size across the 36 included studies.
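
For readers who want to reproduce this kind of pooled estimate outside Rev-Man, the sketch below implements DerSimonian–Laird random-effects pooling, a common estimator for inverse-variance random-effects meta-analysis; the inputs are illustrative, not the study’s data:

```python
import numpy as np

def dersimonian_laird(es, var):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    w = 1 / var
    fixed = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed) ** 2)              # Cochran's Q
    df = len(es) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1 / (var + tau2)                        # random-effects weights
    pooled = np.sum(w_re * es) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, pooled / se, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical per-study SMDs and variances
es = np.array([0.6, 0.9, 0.75, 1.1, 0.5, 0.85])
var = np.array([0.09, 0.05, 0.02, 0.12, 0.015, 0.06])
pooled, z, ci = dersimonian_laird(es, var)
print(f"pooled ES = {pooled:.2f}, z = {z:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```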

In addition, this study examined two distinct dimensions of critical thinking to better understand the precise contributions that collaborative problem-solving makes to the growth of critical thinking. The findings (see Table 3 ) indicate that collaborative problem-solving improves both cognitive skills (ES = 0.70) and attitudinal tendency (ES = 1.17), with significant intergroup differences (χ² = 7.95, P < 0.01). Although collaborative problem-solving improves both dimensions of critical thinking, it is essential to point out that the improvements in students’ attitudinal tendency are much more pronounced, with a significant comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]), whereas the gains in learners’ cognitive skills are more modest, at an upper-middle level (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).

The analysis of moderator effect size

The whole forest plot’s 79 effect quantities underwent a two-tailed test, which revealed significant heterogeneity (I² = 86%, z = 12.78, P < 0.01), indicating differences between effect sizes that may have been influenced by moderating factors other than sampling error. Therefore, subgroup analysis was used to explore the moderating factors that might produce this considerable heterogeneity, such as the learning stage, learning scaffold, teaching type, group size, duration of the intervention, measuring tool, and subject area included in the 36 experimental designs, in order to further explore the key factors that influence critical thinking. The findings (see Table 4 ) indicate that various moderating factors have advantageous effects on critical thinking. In this situation, the subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), learning scaffold (χ² = 9.03, P < 0.01), and teaching type (χ² = 7.20, P < 0.05) are all significant moderators that can be applied to support the cultivation of critical thinking. However, since the learning stage and the measuring tool did not show significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05, and χ² = 0.08, P = 0.78 > 0.05), we are unable to explain why these two factors would be crucial in supporting the cultivation of critical thinking in the context of collaborative problem-solving. These are the precise outcomes, as follows:

Various learning stages influenced critical thinking positively, without significant intergroup differences (χ² = 3.15, P = 0.21 > 0.05). High school ranked first in effect size (ES = 1.36, P < 0.01), followed by higher education (ES = 0.78, P < 0.01) and middle school (ES = 0.73, P < 0.01). These results show that, despite the learning stage’s beneficial influence on cultivating learners’ critical thinking, we are unable to explain why it would be essential for cultivating critical thinking in the context of collaborative problem-solving.

Different teaching types had varying degrees of positive impact on critical thinking, with significant intergroup differences (χ² = 7.20, P < 0.05). The effect size was ranked as follows: mixed courses (ES = 1.34, P < 0.01), integrated courses (ES = 0.81, P < 0.01), and independent courses (ES = 0.27, P < 0.01). These results indicate that the most effective approach to cultivate critical thinking utilizing collaborative problem solving is through the teaching type of mixed courses.

Various intervention durations significantly improved critical thinking, and there were significant intergroup differences (χ² = 12.18, P < 0.01). The effect sizes related to this variable showed a tendency to increase with longer intervention durations. The improvement in critical thinking reached a significant level (ES = 0.85, P < 0.01) after more than 12 weeks of training. These findings indicate that the intervention duration and critical thinking’s impact are positively correlated, with a longer intervention duration having a greater effect.

Different learning scaffolds influenced critical thinking positively, with significant intergroup differences (χ² = 9.03, P < 0.01). The teacher-supported learning scaffold displayed a high level of significant impact (ES = 0.92, P < 0.01), while the resource-supported learning scaffold (ES = 0.69, P < 0.01) and the technique-supported learning scaffold (ES = 0.63, P < 0.01) each attained a medium-to-higher level of impact. These results show that the learning scaffold with teacher support has the greatest impact on cultivating critical thinking.

Various group sizes influenced critical thinking positively, and the intergroup differences were statistically significant (χ² = 8.77, P < 0.05). Critical thinking showed a general declining trend with increasing group size. The overall effect size for groups of 2–3 people was the biggest (ES = 0.99, P < 0.01), and when the group size was greater than 7 people, the improvement in critical thinking was at the lower-middle level (ES < 0.5, P < 0.01). These results show that the impact on critical thinking is negatively related to group size: as group size grows, the overall impact declines.

Various measuring tools influenced critical thinking positively, without significant intergroup differences (χ² = 0.08, P = 0.78 > 0.05). In this situation, the self-adapting measurement tools obtained an upper-medium level of effect (ES = 0.78), whereas the overall effect size of the standardized measurement tools was the largest, achieving a significant level of effect (ES = 0.84, P < 0.01). These results show that, despite the beneficial influence of the measuring tool on cultivating critical thinking, we are unable to explain why it would be crucial in fostering the growth of critical thinking when utilizing the approach of collaborative problem-solving.

Different subject areas had varying degrees of impact on critical thinking, and the intergroup differences were statistically significant (χ² = 13.36, P < 0.05). Mathematics had the greatest overall impact, achieving a significant level of effect (ES = 1.68, P < 0.01), followed by science (ES = 1.25, P < 0.01) and medical science (ES = 0.87, P < 0.01), both of which also achieved a significant level of effect. Programming technology was the least effective (ES = 0.39, P < 0.01), having only a medium-low degree of effect compared to education (ES = 0.72, P < 0.01) and other fields (such as language, art, and social sciences) (ES = 0.58, P < 0.01). These results suggest that scientific fields (e.g., mathematics, science) may be the most effective subject areas for cultivating critical thinking utilizing the approach of collaborative problem-solving.
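
Each of the intergroup differences reported above is a chi-square test on the between-group heterogeneity. The following is a minimal fixed-effect sketch of that partition (Q_between = Q_total − ΣQ_within), with hypothetical subgroups rather than the study’s coded data:

```python
import numpy as np

def q_stat(es, var):
    """Cochran's Q for one set of effect sizes."""
    w = 1 / var
    pooled = np.sum(w * es) / np.sum(w)
    return np.sum(w * (es - pooled) ** 2)

def q_between(groups):
    """Subgroup moderator test: Q_between = Q_total - sum of within-group Q,
    compared against a chi-square distribution with (#groups - 1) df."""
    all_es = np.concatenate([es for es, _ in groups.values()])
    all_var = np.concatenate([v for _, v in groups.values()])
    q_total = q_stat(all_es, all_var)
    q_within = sum(q_stat(es, v) for es, v in groups.values())
    return q_total - q_within, len(groups) - 1

# hypothetical subgroups for a moderator such as group size
groups = {
    "2-3 persons": (np.array([1.0, 0.9]), np.array([0.05, 0.04])),
    "4-6 persons": (np.array([0.7, 0.6]), np.array([0.05, 0.06])),
}
chi2, df = q_between(groups)
print(f"Q_between = {chi2:.2f} on {df} df")
```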

The effectiveness of collaborative problem solving with regard to teaching critical thinking

According to this meta-analysis, using collaborative problem-solving as an intervention strategy in critical thinking teaching has a considerable impact on cultivating learners’ critical thinking as a whole and has a favorable promotional effect on both dimensions of critical thinking. Several studies have reported that collaborative problem solving, the most frequently used critical thinking teaching strategy in curriculum instruction, can considerably enhance students’ critical thinking (e.g., Liang et al., 2017 ; Liu et al., 2020 ; Cindy, 2004 ). This meta-analysis provides convergent data support for these research views. Thus, the findings of this meta-analysis not only effectively address the first research question regarding the overall effect of cultivating critical thinking and its impact on the two dimensions of critical thinking (i.e., attitudinal tendency and cognitive skills) utilizing the approach of collaborative problem-solving, but also enhance our confidence in cultivating critical thinking by using the collaborative problem-solving intervention approach in the context of classroom teaching.

Furthermore, the associated improvements in attitudinal tendency are much stronger, but the corresponding improvements in cognitive skill are only marginally better. According to certain studies, cognitive skill differs from the attitudinal tendency in classroom instruction; the cultivation and development of the former as a key ability is a process of gradual accumulation, while the latter as an attitude is affected by the context of the teaching situation (e.g., a novel and exciting teaching approach, challenging and rewarding tasks) (Halpern, 2001 ; Wei and Hong, 2022 ). Collaborative problem-solving as a teaching approach is exciting and interesting, as well as rewarding and challenging; because it takes the learners as the focus and examines problems with poor structure in real situations, and it can inspire students to fully realize their potential for problem-solving, which will significantly improve their attitudinal tendency toward solving problems (Liu et al., 2020 ). Similar to how collaborative problem-solving influences attitudinal tendency, attitudinal tendency impacts cognitive skill when attempting to solve a problem (Liu et al., 2020 ; Zhang et al., 2022 ), and stronger attitudinal tendencies are associated with improved learning achievement and cognitive ability in students (Sison, 2008 ; Zhang et al., 2022 ). It can be seen that the two specific dimensions of critical thinking as well as critical thinking as a whole are affected by collaborative problem-solving, and this study illuminates the nuanced links between cognitive skills and attitudinal tendencies with regard to these two dimensions of critical thinking. To fully develop students’ capacity for critical thinking, future empirical research should pay closer attention to cognitive skills.

The moderating effects of collaborative problem solving with regard to teaching critical thinking

In order to further explore the key factors that influence critical thinking, subgroup analysis was used to explore the possible moderating effects that might produce the considerable heterogeneity. The findings show that moderating factors such as the teaching type, learning stage, group size, learning scaffold, duration of the intervention, measuring tool, and subject area included in the 36 experimental designs could all support the cultivation of critical thinking through collaborative problem-solving. Among them, the intergroup effect size differences for the learning stage and the measuring tool are not significant, so we cannot explain why these two factors would be crucial in supporting the cultivation of critical thinking utilizing the approach of collaborative problem-solving.

In terms of the learning stage, various learning stages influenced critical thinking positively without significant intergroup differences, indicating that we are unable to explain why it is crucial in fostering the growth of critical thinking.

Although higher education accounts for 70.89% of all the included empirical studies, high school may be the most appropriate learning stage for fostering students’ critical thinking utilizing the approach of collaborative problem-solving, since it has the largest overall effect size. This phenomenon may be related to students’ cognitive development, which needs to be further studied in follow-up research.

With regard to teaching type, mixed course teaching may be the best teaching method to cultivate students’ critical thinking. Relevant studies have shown that in the actual teaching process if students are trained in thinking methods alone, the methods they learn are isolated and divorced from subject knowledge, which is not conducive to their transfer of thinking methods; therefore, if students’ thinking is trained only in subject teaching without systematic method training, it is challenging to apply to real-world circumstances (Ruggiero, 2012 ; Hu and Liu, 2015 ). Teaching critical thinking as mixed course teaching in parallel to other subject teachings can achieve the best effect on learners’ critical thinking, and explicit critical thinking instruction is more effective than less explicit critical thinking instruction (Bensley and Spero, 2014 ).

In terms of the intervention duration, with longer intervention times, the overall effect size shows an upward tendency. Thus, the intervention duration and critical thinking’s impact are positively correlated. Critical thinking, as a key competency for students in the 21st century, is difficult to get a meaningful improvement in a brief intervention duration. Instead, it could be developed over a lengthy period of time through consistent teaching and the progressive accumulation of knowledge (Halpern, 2001 ; Hu and Liu, 2015 ). Therefore, future empirical studies ought to take these restrictions into account throughout a longer period of critical thinking instruction.

With regard to group size, a group size of 2–3 persons has the highest effect size, and the comprehensive effect size decreases with increasing group size in general. This outcome is in line with some research findings; as an example, a group composed of two to four members is most appropriate for collaborative learning (Schellens and Valcke, 2006 ). However, the meta-analysis results also indicate that even once the group size exceeds 7 people, collaborative problem-solving retains a positive, if smaller, effect. This may be because the learning scaffolds of technique support, resource support, and teacher support improve the frequency and effectiveness of interaction among group members, and a collaborative group with more members may increase the diversity of views, which is helpful for cultivating critical thinking utilizing the approach of collaborative problem-solving.

With regard to the learning scaffold, the three different kinds of learning scaffolds can all enhance critical thinking. Among them, the teacher-supported learning scaffold has the largest overall effect size, demonstrating the interdependence of effective learning scaffolds and collaborative problem-solving. This outcome is in line with some research findings; as an example, a successful strategy is to encourage learners to collaborate, come up with solutions, and develop critical thinking skills by using learning scaffolds (Reiser, 2004 ; Xu et al., 2022 ); learning scaffolds can lower task complexity and unpleasant feelings while also enticing students to engage in learning activities (Wood et al., 2006 ); learning scaffolds are designed to assist students in using learning approaches more successfully to adapt the collaborative problem-solving process, and the teacher-supported learning scaffolds have the greatest influence on critical thinking in this process because they are more targeted, informative, and timely (Xu et al., 2022 ).

With respect to the measuring tool, despite the fact that standardized measurement tools (such as the WGCTA, CCTT, and CCTST) have been acknowledged as trustworthy and effective by worldwide experts, only 54.43% of the research included in this meta-analysis adopted them for assessment, and the results indicated no intergroup differences. These results suggest that not all teaching circumstances are appropriate for measuring critical thinking using standardized measurement tools. “The measuring tools for measuring thinking ability have limits in assessing learners in educational situations and should be adapted appropriately to accurately assess the changes in learners’ critical thinking.”, according to Simpson and Courtney ( 2002 , p. 91). As a result, in order to more fully and precisely gauge how learners’ critical thinking has evolved, we must properly modify standardized measuring tools based on collaborative problem-solving learning contexts.

With regard to the subject area, the comprehensive effect size for science subjects (e.g., mathematics, science, and medical science) is larger than that for language arts and the social sciences. Recent international education reforms have noted that critical thinking is a basic component of scientific literacy. Students with scientific literacy can justify their judgments with accurate evidence and reasonable standards when facing challenging or ill-structured problems (Kyndt et al., 2013), which makes critical thinking crucial for developing scientific understanding and applying it to practical problems involving science, technology, and society (Yore et al., 2007).

Suggestions for critical thinking teaching

Beyond the points raised in the discussion above, the following suggestions are offered for teaching critical thinking through collaborative problem-solving.

First, teachers should emphasize the two core elements, collaboration and problem-solving, and design authentic problems grounded in collaborative situations. This meta-analysis provides evidence that collaborative problem-solving has a strong synergistic effect on promoting students' critical thinking. Posing questions about real situations and letting learners engage in critical discussion of real problems during class is a key way to teach critical thinking, rather than simply having students read speculative articles without practice (Mulnix, 2012). Moreover, students' critical thinking improves through cognitive conflict with other learners in the problem situation (Yang et al., 2008). Teachers should therefore design real problems and encourage students to discuss, negotiate, and argue within collaborative problem-solving situations.

Second, teachers should design and implement mixed courses to cultivate learners' critical thinking through collaborative problem-solving. Critical thinking can be taught through curriculum instruction (Kuncel, 2011; Leng and Lu, 2020), with the goal of enabling learners to transfer and apply it flexibly in real problem-solving situations. This meta-analysis shows that mixed-course teaching has a substantial impact on cultivating and promoting learners' critical thinking. Teachers should therefore combine real collaborative problem-solving situations with the knowledge content of specific disciplines, teach critical thinking methods and strategies based on ill-structured problems, and provide practical activities in which students interact with one another to build knowledge and develop critical thinking.

Third, teachers, particularly preservice teachers, should receive more training in critical thinking and become aware of how teacher-supported learning scaffolds can promote it. The teacher-supported scaffold had the greatest impact on learners' critical thinking, being the most directive, targeted, and timely (Wood et al., 2006). Critical thinking can only be taught effectively when teachers recognize its significance for students' growth and use appropriate approaches when designing instructional activities (Forawi, 2016). It is therefore essential to strengthen critical thinking instruction for teachers, especially preservice teachers, so that they can create learning scaffolds that cultivate learners' critical thinking through collaborative problem-solving.

Implications and limitations

This meta-analysis has certain limitations that future research can address. First, the search languages were restricted to English and Chinese, so pertinent studies written in other languages may have been overlooked, limiting the number of articles reviewed. Second, some data were missing from the included studies, such as whether teachers were trained in the theory and practice of critical thinking, the average age and gender of learners, and differences in critical thinking across ages and genders. Third, as is typical for review articles, further studies were published while this meta-analysis was being conducted, so it is bounded in time. Future studies focusing on these issues are highly relevant and needed.

Conclusions

This study addressed the magnitude of collaborative problem-solving's impact on fostering students' critical thinking, a question that had received scant attention in earlier research. The following conclusions can be drawn:

Regarding the overall results, collaborative problem-solving is an effective teaching approach for fostering learners' critical thinking, with a significant overall effect size (ES = 0.82, z = 12.78, P < 0.01, 95% CI [0.69, 0.95]). With respect to the dimensions of critical thinking, collaborative problem-solving significantly improves students' attitudinal tendency, with a large comprehensive effect (ES = 1.17, z = 7.62, P < 0.01, 95% CI [0.87, 1.47]); its effect on students' cognitive skills is smaller, though still upper-middle in magnitude (ES = 0.70, z = 11.55, P < 0.01, 95% CI [0.58, 0.82]).
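For readers who want to sanity-check these figures, the confidence intervals can be recovered from the effect sizes and z values alone, assuming the standard meta-analytic relation z = ES / SE. The short Python sketch below is not from the original study; it simply reproduces the three reported intervals under that assumption.

```python
from scipy.stats import norm

def ci_from_es_and_z(es, z, level=0.95):
    """Recover the standard error and CI from a reported ES, assuming z = ES / SE."""
    se = es / z
    crit = norm.ppf(0.5 + level / 2)  # 1.96 for a 95% interval
    return se, (es - crit * se, es + crit * se)

for label, es, z in [("overall effect", 0.82, 12.78),
                     ("attitudinal tendency", 1.17, 7.62),
                     ("cognitive skills", 0.70, 11.55)]:
    se, (lo, hi) = ci_from_es_and_z(es, z)
    print(f"{label}: SE = {se:.3f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

Running this yields [0.69, 0.95], [0.87, 1.47], and [0.58, 0.82], matching the reported intervals.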

As shown in both the results and the discussion, all seven moderating factors were examined across the 36 included studies. Teaching type (χ² = 7.20, P < 0.05), intervention duration (χ² = 12.18, P < 0.01), subject area (χ² = 13.36, P < 0.05), group size (χ² = 8.77, P < 0.05), and learning scaffold (χ² = 9.03, P < 0.01) all significantly moderated the effect on critical thinking and can be viewed as important factors in its development. Because the learning stage (χ² = 3.15, P = 0.21) and measuring tool (χ² = 0.08, P = 0.78) showed no significant intergroup differences, no conclusions can be drawn about the role of these two factors in supporting the cultivation of critical thinking in the context of collaborative problem-solving.
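The subgroup p-values can likewise be checked from the reported Q (chi-square) statistics, given the degrees of freedom, which equal the number of subgroups minus one. The degrees of freedom in the sketch below are assumptions chosen for illustration, not figures from the paper; with df = 2 for learning stage and df = 1 for measuring tool, the reported P = 0.21 and P = 0.78 are recovered.

```python
from scipy.stats import chi2

# Q_between is compared against a chi-square distribution with
# df = (number of subgroups - 1). The df values below are assumptions
# for illustration; they reproduce the reported P = 0.21 and P = 0.78.
for moderator, q, df in [("learning stage", 3.15, 2),
                         ("measuring tool", 0.08, 1)]:
    print(f"{moderator}: chi2 = {q}, df = {df}, p = {chi2.sf(q, df):.2f}")
```

The significant moderators can be verified the same way once the number of subgroups per moderator is known.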

Data availability

All data generated or analyzed during this study are included within the article and its supplementary information files, and the supplementary information files are available in the Dataverse repository: https://doi.org/10.7910/DVN/IPFJO6 .

Bensley DA, Spero RA (2014) Improving critical thinking skills and meta-cognitive monitoring through direct infusion. Think Skills Creat 12:55–68. https://doi.org/10.1016/j.tsc.2014.02.001


Castle A (2009) Defining and assessing critical thinking skills for student radiographers. Radiography 15(1):70–76. https://doi.org/10.1016/j.radi.2007.10.007

Chen XD (2013) An empirical study on the influence of PBL teaching model on critical thinking ability of non-English majors. J PLA Foreign Lang College 36 (04):68–72


Cohen A (1992) Antecedents of organizational commitment across occupational groups: a meta-analysis. J Organ Behav. https://doi.org/10.1002/job.4030130602

Cooper H (2010) Research synthesis and meta-analysis: a step-by-step approach, 4th edn. Sage, London, England

Hmelo-Silver CE (2004) Problem-based learning: what and how do students learn? Educ Psychol Rev 16(3):235–266

Duch BJ, Gron SD, Allen DE (2001) The power of problem-based learning: a practical “how to” for teaching undergraduate courses in any discipline. Stylus Educ Sci 2:190–198

Ennis RH (1989) Critical thinking and subject specificity: clarification and needed research. Educ Res 18(3):4–10. https://doi.org/10.3102/0013189x018003004

Facione PA (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction. Research findings and recommendations. Eric document reproduction service. https://eric.ed.gov/?id=ed315423

Facione PA, Facione NC (1992) The California Critical Thinking Dispositions Inventory (CCTDI) and the CCTDI test manual. California Academic Press, Millbrae, CA

Forawi SA (2016) Standard-based science education and critical thinking. Think Skills Creat 20:52–62. https://doi.org/10.1016/j.tsc.2016.02.005

Halpern DF (2001) Assessing the effectiveness of critical thinking instruction. J Gen Educ 50(4):270–286. https://doi.org/10.2307/27797889

Hu WP, Liu J (2015) Cultivation of pupils’ thinking ability: a five-year follow-up study. Psychol Behav Res 13(05):648–654. https://doi.org/10.3969/j.issn.1672-0628.2015.05.010

Huber K (2016) Does college teach critical thinking? A meta-analysis. Rev Educ Res 86(2):431–468. https://doi.org/10.3102/0034654315605917

Kek MYCA, Huijser H (2011) The power of problem-based learning in developing critical thinking skills: preparing students for tomorrow’s digital futures in today’s classrooms. High Educ Res Dev 30(3):329–341. https://doi.org/10.1080/07294360.2010.501074

Kuncel NR (2011) Measurement and meaning of critical thinking (Research report for the NRC 21st Century Skills Workshop). National Research Council, Washington, DC

Kyndt E, Raes E, Lismont B, Timmers F, Cascallar E, Dochy F (2013) A meta-analysis of the effects of face-to-face cooperative learning. Do recent studies falsify or verify earlier findings? Educ Res Rev 10(2):133–149. https://doi.org/10.1016/j.edurev.2013.02.002

Leng J, Lu XX (2020) Is critical thinking really teachable?—A meta-analysis based on 79 experimental or quasi experimental studies. Open Educ Res 26(06):110–118. https://doi.org/10.13966/j.cnki.kfjyyj.2020.06.011

Liang YZ, Zhu K, Zhao CL (2017) An empirical study on the depth of interaction promoted by collaborative problem solving learning activities. J E-educ Res 38(10):87–92. https://doi.org/10.13811/j.cnki.eer.2017.10.014

Lipsey M, Wilson D (2001) Practical meta-analysis. International Educational and Professional, London, pp. 92–160

Liu Z, Wu W, Jiang Q (2020) A study on the influence of problem based learning on college students’ critical thinking-based on a meta-analysis of 31 studies. Explor High Educ 03:43–49

Morris SB (2008) Estimating effect sizes from pretest-posttest-control group designs. Organ Res Methods 11(2):364–386. https://doi.org/10.1177/1094428106291059


Mulnix JW (2012) Thinking critically about critical thinking. Educ Philos Theory 44(5):464–479. https://doi.org/10.1111/j.1469-5812.2010.00673.x

Naber J, Wyatt TH (2014) The effect of reflective writing interventions on the critical thinking skills and dispositions of baccalaureate nursing students. Nurse Educ Today 34(1):67–72. https://doi.org/10.1016/j.nedt.2013.04.002

National Research Council (2012) Education for life and work: developing transferable knowledge and skills in the 21st century. The National Academies Press, Washington, DC

Niu L, Behar HLS, Garvan CW (2013) Do instructional interventions influence college students’ critical thinking skills? A meta-analysis. Educ Res Rev 9(12):114–128. https://doi.org/10.1016/j.edurev.2012.12.002

Peng ZM, Deng L (2017) Towards the core of education reform: cultivating critical thinking skills as the core of skills in the 21st century. Res Educ Dev 24:57–63. https://doi.org/10.14121/j.cnki.1008-3855.2017.24.011

Reiser BJ (2004) Scaffolding complex learning: the mechanisms of structuring and problematizing student work. J Learn Sci 13(3):273–304. https://doi.org/10.1207/s15327809jls1303_2

Ruggiero VR (2012) The art of thinking: a guide to critical and creative thought, 4th edn. Harper Collins College Publishers, New York

Schellens T, Valcke M (2006) Fostering knowledge construction in university students through asynchronous discussion groups. Comput Educ 46(4):349–370. https://doi.org/10.1016/j.compedu.2004.07.010

Sendag S, Odabasi HF (2009) Effects of an online problem based learning course on content knowledge acquisition and critical thinking skills. Comput Educ 53(1):132–141. https://doi.org/10.1016/j.compedu.2009.01.008

Sison R (2008) Investigating Pair Programming in a Software Engineering Course in an Asian Setting. 2008 15th Asia-Pacific Software Engineering Conference, pp. 325–331. https://doi.org/10.1109/APSEC.2008.61

Simpson E, Courtney M (2002) Critical thinking in nursing education: literature review. Int J Nurs Pract 8(2):89–98

Stewart L, Tierney J, Burdett S (2006) Do systematic reviews based on individual patient data offer a means of circumventing biases associated with trial publications? Publication bias in meta-analysis. John Wiley and Sons Inc, New York, pp. 261–286

Tiwari A, Lai P, So M, Yuen K (2006) A comparison of the effects of problem-based learning and lecturing on the development of students’ critical thinking. Med Educ 40(6):547–554. https://doi.org/10.1111/j.1365-2929.2006.02481.x

Wood D, Bruner JS, Ross G (2006) The role of tutoring in problem solving. J Child Psychol Psychiatry 17(2):89–100. https://doi.org/10.1111/j.1469-7610.1976.tb00381.x

Wei T, Hong S (2022) The meaning and realization of teachable critical thinking. Educ Theory Practice 10:51–57

Xu EW, Wang W, Wang QX (2022) A meta-analysis of the effectiveness of programming teaching in promoting K-12 students’ computational thinking. Educ Inf Technol. https://doi.org/10.1007/s10639-022-11445-2

Yang YC, Newby T, Bill R (2008) Facilitating interactions through structured web-based bulletin boards: a quasi-experimental study on promoting learners’ critical thinking skills. Comput Educ 50(4):1572–1585. https://doi.org/10.1016/j.compedu.2007.04.006

Yore LD, Pimm D, Tuan HL (2007) The literacy component of mathematical and scientific literacy. Int J Sci Math Educ 5(4):559–589. https://doi.org/10.1007/s10763-007-9089-4

Zhang T, Zhang S, Gao QQ, Wang JH (2022) Research on the development of learners’ critical thinking in online peer review. Audio Visual Educ Res 6:53–60. https://doi.org/10.13811/j.cnki.eer.2022.06.08


Acknowledgements

This research was supported by the Graduate Scientific Research and Innovation Project of the Xinjiang Uygur Autonomous Region, “Research on in-depth learning of high school information technology courses for the cultivation of computational thinking” (No. XJ2022G190), and by the Independent Innovation Fund Project for Doctoral Students of the College of Educational Science of Xinjiang Normal University, “Research on project-based teaching of high school information technology courses from the perspective of discipline core literacy” (No. XJNUJKYA2003).

Author information

Authors and affiliations

College of Educational Science, Xinjiang Normal University, 830017, Urumqi, Xinjiang, China

Enwei Xu, Wei Wang & Qingxia Wang


Corresponding authors

Correspondence to Enwei Xu or Wei Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary tables

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Xu, E., Wang, W. & Wang, Q. The effectiveness of collaborative problem solving in promoting students’ critical thinking: A meta-analysis based on empirical literature. Humanit Soc Sci Commun 10 , 16 (2023). https://doi.org/10.1057/s41599-023-01508-1


Received: 07 August 2022

Accepted: 04 January 2023

Published: 11 January 2023

DOI: https://doi.org/10.1057/s41599-023-01508-1


Group Cognition and Collaborative AI

Janin Koch & Antti Oulasvirta

First Online: 08 June 2018

Part of the book series: Human–Computer Interaction Series (HCIS)
Significant advances in artificial intelligence suggest that we will be using intelligent agents on a regular basis in the near future. This chapter discusses group cognition as a principle for designing collaborative AI. Group cognition is the ability to relate to other group members’ decisions, abilities, and beliefs. It thereby allows participants to adapt their understanding and actions to reach common objectives. Hence, it underpins collaboration. We review two concepts in the context of group cognition that could inform the development of AI and automation in pursuit of natural collaboration with humans: conversational grounding and theory of mind. These concepts are somewhat different from those already discussed in AI research. We outline some new implications for collaborative AI, aimed at extending skills and solution spaces and at improving joint cognitive and creative capacity.


Acknowledgements

The project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 637991).

Author information

Authors and affiliations

Department of Communications and Networking, School of Electrical Engineering, Aalto University, Espoo, Finland

Janin Koch & Antti Oulasvirta


Corresponding author

Correspondence to Janin Koch.

Editor information

Editors and affiliations

DATA61, CSIRO, Eveleigh, New South Wales, Australia

Jianlong Zhou


Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this chapter

Koch, J., Oulasvirta, A. (2018). Group Cognition and Collaborative AI. In: Zhou, J., Chen, F. (eds) Human and Machine Learning. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-90403-0_15


DOI: https://doi.org/10.1007/978-3-319-90403-0_15

Published: 08 June 2018

Publisher Name: Springer, Cham

Print ISBN: 978-3-319-90402-3

Online ISBN: 978-3-319-90403-0


Cognition-oriented Facilitation and Guidelines for Collaborative Problem-solving Online and Face-to-face: An in-depth examination of format and facilitation influence on problem-solving performance

1 Introduction

2 Related work

2.1 Comparison between remote and face-to-face collaborative tasks

2.2 Facilitation in CSCW

2.3 Complex scenario simulation for problem-solving research

3 Facilitation guidelines proposed in this study

# | Cognitive process | Social interaction | Digital platform | Facilitation statement | When to use
F1 | Information | Sharing | Main information source | Please share your observations with each other | When starting the discussion; when new information is provided
F2 | Information | Sharing | — | If you do not have the information, can you recall and share some of your experience? | When problem solvers identify that the information is incomplete
F3 | Information | Perception | — | Can you apply any theoretical knowledge in this situation? | When problem solvers identify that the information is incomplete and do not have relevant experience for reference
F4 | Information | Perception | Documentation tool | Can you write down this information so that we can revisit it when required? | When problem solvers share abundant information, which cannot possibly be held in short-term memory
F5 | Problem representation | Perception | — | What is your goal? | When all information has been quickly made available at the beginning of the discussion
F6 | Problem representation | Perception | — | What are your assumptions? | When some of the information and relationships among problem elements are not provided
F7 | Problem representation | Perception | Documentation tool | Were you able to prove your assumptions? | When new information is generated after a change in problem status
F8 | Problem representation | Perception | Documentation tool | Maybe you can write your assumption and prove it after obtaining more information | When the information is partially provided, and decisions must be made for further problem-solving activities
F9 | Problem representation | Sharing | — | How about participant X? Do you agree with this opinion? | When one participant has finished sharing their problem representation or solution
F10 | Solution | Sharing | — | Please share your rationale for this decision with your discussion partner. | When problem solvers propose a solution that is not directly related to the ongoing discussion
F11 | Solution | Perception | Documentation tool | According to the information you have obtained thus far, is it a rational decision? | When problem solvers propose a solution that is extreme compared to the current strategies
F12 | Solution | Perception | Analytic tool | With this understanding, what is your solution? | When problem solvers have finished analyzing the problem but have not started creating solutions
F13 | Goal evaluation | Perception | Main information source | Based on the latest information, is your goal still valid? | When problem solvers start creating a new solution after obtaining new information but without a detailed analysis
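To make the structure of these guidelines concrete, the sketch below shows one hypothetical way a CSCW system could encode them as data so that a facilitator (human or automated) can look up candidate statements by trigger condition. The class and function names are illustrative, not from the paper, and only two rules are transcribed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FacilitationRule:
    rule_id: str
    cognitive_process: str    # Information / Problem representation / Solution / Goal evaluation
    social_interaction: str   # Sharing or Perception
    platform: Optional[str]   # digital tool the statement points to, if any
    statement: str
    trigger: str              # the "when to use" condition

# Two rules transcribed from the table above; the rest follow the same pattern.
RULES = [
    FacilitationRule("F1", "Information", "Sharing", "Main information source",
                     "Please share your observations with each other",
                     "starting the discussion or new information provided"),
    FacilitationRule("F5", "Problem representation", "Perception", None,
                     "What is your goal?",
                     "all information available at the start of the discussion"),
]

def suggest(trigger_keyword: str) -> list[str]:
    """Return candidate statements whose trigger condition mentions the keyword."""
    return [r.statement for r in RULES if trigger_keyword in r.trigger]

print(suggest("discussion"))  # both example rules match this keyword
```

Keeping the rules as plain data like this would let the trigger-matching logic evolve independently of the guideline content.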

4.1 Participant demographics

4.2 Problem-scenario simulator


4.3 Performance indicators

Performance parameter | Value | Optimization | k | x | logit⁻¹(x)
Balance | 24000 | Maximize | 4 | 2.4 | 0.9168
Cost | 20000 | Minimize | 4 | 2 | 0.8808
Quality of beans | 0.7 | Maximize | 0 | 0.7 | 0.6682
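A plausible reading of this table is that each raw value is rescaled to x = value / 10^k and then passed through the inverse logit (standard logistic) function to obtain a score in (0, 1). That reading is an assumption on our part, but the following sketch reproduces all three reported scores under it.

```python
import math

def inv_logit(x: float) -> float:
    """Standard logistic function: logit^-1(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

# Assumed reading of the table: each raw value is rescaled as x = value / 10**k
# before being mapped to (0, 1). This reproduces all three reported scores.
for name, value, k in [("Balance", 24000, 4),
                       ("Cost", 20000, 4),
                       ("Quality of beans", 0.7, 0)]:
    x = value / 10**k
    print(f"{name}: x = {x}, score = {inv_logit(x):.4f}")
```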

4.4 Experimental design and data collection


4.5 Post-discussion questionnaire

1. Problem-solving effectiveness
  1.1. I understood the meaning of all given elements in the simulation scenario.
  1.2. I became aware of some elements that were not provided in the simulation scenario.
  1.3. I could explain the reasons for the performance changes.
  1.4. I achieved my goal in the simulation scenario.
  1.5. I determined the appropriate actions required to achieve my goal in the simulation scenario.
  1.6. I executed the actions that I determined to achieve my goal in the simulation scenario.
  1.7. I kept checking the effectiveness of my actions.
  1.8. I revised my planned actions by reviewing the overall effectiveness of my approach.

2. Satisfaction
  2.1. Overall, I was satisfied with the discussion interaction.
  2.2. Overall, I was satisfied with the outcome.
  2.3. I would be happy to have another meeting with the same partner.

3. Facilitation quality
  3.1. Overall, I found the facilitator to be helpful during the discussion.
  3.2. The facilitator assisted me in recalling relevant knowledge.
  3.3. The facilitator assisted me in recalling relevant practical experience.
  3.4. The facilitator assisted me in developing new perspectives for problem understanding.
  3.5. The facilitator assisted me in identifying causality among the elements in the simulation scenario.
  3.6. The facilitator encouraged us to ask each other a variety of questions.
  3.7. The facilitator encouraged us to reassess our thoughts.
  3.8. The facilitator intervened in the discussion at appropriate times.

4. Qualitative feedback
  4.1. Which part of the facilitation was most helpful?
  4.2. Which part of the facilitation was least helpful?

4.6 Facilitation evaluation

4.6.1 Text-based short answers for subjective feedback analysis

Categories (triple-space framework) | Inclusion criteria
Cognitive process: Information collection | Facilitation context; CPS scenario; Theoretical knowledge; Personal experience
Cognitive process: Problem representation | Goal setting; Assumption identification; Perspective suggestion
Cognitive process: Solution | Rationale clarification; Solution creation
Cognitive process: Goal evaluation | Goal validation; Performance evaluation
Social interaction | Encouraging collaboration; Time management; Intervention; Flow guidance
Digital platform | Simulator UI; Digital whiteboard; Google Sheet

4.6.2 Facilitator's utterances for process analysis

5.1 Effects of format and facilitation on discussion performance


Facilitation | Balance (n = 20) | Overall performance score (n = 20) | Satisfaction (n = 40)
Non-facilitated | 0.391 | 0.038** | 0.418
Facilitated | 0.161 | 0.105 | 0.439

Format | Balance (n = 20) | Overall performance score (n = 20) | Satisfaction (n = 40)
Online | 0.075* | 0.051* | 0.319
Face-to-face | 0.375 | 0.153 | 0.342

5.2 Factors contributing to the changes in performance indicators

Response variable (unit of interest: participant) | Significant explanatory variables | Coefficient, p-value
Satisfaction for non-facilitated discussions (n = 40) | I understood the meaning of all elements given in the simulation scenario. | 0.344, p = 0.019
  | I can explain the reasons for the performance changes. | -0.379, p = 0.042
  | I kept checking the effectiveness of my actions. | 0.471, p = 0.028
Satisfaction for facilitated discussions (n = 40) | The facilitator encouraged us to ask each other a variety of questions. | 0.604, p = 0.019
  | I determined the appropriate actions to take for achieving my goal in the simulation scenario. | 0.559, p = 0.012
  | I revised my planned actions by reviewing the overall effectiveness of my approach. | 0.556, p = 0.016
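The excerpt does not state the exact model behind these coefficients; one standard way to obtain coefficients and p-values of this kind is an ordinary least squares regression of satisfaction on the questionnaire items. The sketch below illustrates the shape of such an analysis with made-up Likert data; variable names and values are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-participant data: Likert responses (1-5) to two
# questionnaire items plus overall satisfaction. Values are made up.
df = pd.DataFrame({
    "understood_elements":   [4, 5, 3, 4, 2, 5, 3, 4],
    "checked_effectiveness": [3, 4, 4, 5, 2, 4, 3, 5],
    "satisfaction":          [4, 5, 3, 5, 2, 4, 3, 5],
})

X = sm.add_constant(df[["understood_elements", "checked_effectiveness"]])
model = sm.OLS(df["satisfaction"], X).fit()
print(model.params)   # coefficients comparable in kind to the table above
print(model.pvalues)
```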

5.3 Facilitation evaluation

5.3.1 Facilitation style

# | Count (online) | Count (face-to-face) | Mean timing (online) | Mean timing (face-to-face) | p (proportional test) | p (two-sample t-test)
F1 | 22 | 17 | 1:04:33 | 1:12:34 | 0.522 | 0.286
F2 | 9 | 13 | 0:32:54 | 0:58:52 | 0.522 | 0.044**
F3 | 4 | 8 | 0:26:17 | 0:53:14 | 0.387 | 0.154
F4 | 26 | 12 | 0:45:20 | 0:16:57 | 0.035** | 0.012**
F5 | 27 | 26 | 0:32:16 | 0:36:00 | 1.000 | 0.743
F6 | 56 | 58 | 1:01:43 | 0:57:15 | 0.925 | 0.414
F7 | 39 | 39 | 1:16:31 | 1:14:49 | 1.000 | 0.759
F8 | 48 | 34 | 0:50:58 | 0:56:46 | 0.151 | 0.413
F9 | 80 | 28 | 0:55:43 | 1:10:33 | 9.226e-07** | 0.161
F10 | 109 | 125 | 0:50:31 | 0:49:31 | 0.327 | 0.823
F11 | 7 | 9 | 0:56:29 | 1:21:14 | 0.803 | 0.278
F12 | 56 | 46 | 1:04:54 | 1:08:54 | 0.373 | 0.612
F13 | 27 | 28 | 1:16:36 | 1:10:43 | 0.500 | 0.336

5.3.2 Subjective views regarding facilitation helpfulness

  | Information collection | Problem representation | Solution | Goal evaluation | Social interaction | Digital platform
Positive (n) | 9 | 13 | 14 | 4 | 6 | 3
Negative (n) | 8 | 1 | 1 | 0 | 4 | 2
Kappa | 0.779 | 0.648 | 1 | 0.729 | 0.648 | 0.821
IRR | 90% | 85% | 100% | 90% | 85% | 95%

(The first four columns are cognitive-process categories; the last two are the social interaction and digital platform categories.)
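Cohen's kappa in this table corrects raw inter-rater agreement (IRR) for chance agreement. As a reminder of the computation, the self-contained sketch below implements kappa = (p_o - p_e) / (1 - p_e) on a toy example; the labels and data are hypothetical, not the study's codes.

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # expected by chance
    return (po - pe) / (1 - pe)

# Toy example with two coders labeling five feedback comments
rater1 = ["pos", "pos", "neg", "pos", "neg"]
rater2 = ["pos", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.615 for this toy data
```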

6 Discussion

6.1 How does online problem-solving performance differ from that conducted face-to-face?

6.2 How does facilitation affect problem-solving performance when different formats are used?

6.3 Exploring cognition-oriented facilitation in CSCW system designs

6.3.1 Format matters

6.3.2 Cognitive level defines functions

6.3.3 Dedicate flow and concise message

6.4 Limitations and future directions

7 Conclusion

Acknowledgments

Supplementary material


    With similar motivation, problem scenario simulators have been extensively used in problem-solving-related studies. The design of such simulators is critical for investigating problem-solving cognitive processes because the complexity and difficulty of a problem scenario may directly influence the quality of data collection.

  22. Effectiveness of online collaborative problem‐solving method on

    The moderator analysis indicated that the online CPS method was more effective for (a) college preparatory learners, (b) the discipline of Economics, (c) grouping method of assigned, (d) teacher-led instruction, (e) study duration of 2 to 4 weeks, (f) group size of 3-5 members, (g) synchronous online environment and (h) cognitive performance.

  23. Exploring the relationship between learning sentiments and cognitive

    In online collaborative learning activities, Wang, Hou, and Wu (2017) discovered that learners showed more cognitive processes of "create" in problem-solving and role-playing strategies. In other words, researchers have shown an increased interest in cognitive processing during collaborative learning to promote learners' higher-order ...

  24. PDF Group Cognition in Online Teams

    Group cognition can then be seen as what transforms groups into factories for the creation of new knowledge. The types of problems that have been the focus of exploration within the group cognition