Impact of virtual simulation on nursing students’ learning outcomes: a systematic review

Volume 25, Issue 3

  • Terri Kean (http://orcid.org/0000-0002-0211-3055)
  • Retired Faculty of Nursing, University of Prince Edward Island, Harrington, PE C1A 4P3, Canada
  • Correspondence to Terri Kean, Retired Faculty of Nursing, University of Prince Edward Island, Charlottetown, PE C1A 4P3, Canada; tkean1965{at}gmail.com

https://doi.org/10.1136/ebnurs-2022-103567


Background and purpose

This is a summary of Foronda C, Fernandez-Burgos M, Nadeau C, et al. Virtual simulation in nursing education: a systematic review spanning 1996 to 2018. Simulation in Healthcare. 2020;15(1):46.

Despite its growing use, there is limited synthesised knowledge on the effectiveness of virtual simulation (VS) as a pedagogical approach in nursing education.

Measuring the effectiveness of VS as a nursing pedagogy may contribute to the development of best practices for its use and enhance student learning.

This systematic …

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.


  • Open access
  • Published: 01 May 2024

The effectiveness of virtual reality training on knowledge, skills and attitudes of health care professionals and students in assessing and treating mental health disorders: a systematic review

  • Cathrine W. Steen 1,2,
  • Kerstin Söderström 1,2,
  • Bjørn Stensrud 3,
  • Inger Beate Nylund 2 &
  • Johan Siqveland 4,5

BMC Medical Education, volume 24, Article number: 480 (2024)


Background

Virtual reality (VR) training can enhance health professionals’ learning. However, there are ambiguous findings on the effectiveness of VR as an educational tool in mental health. We therefore reviewed the existing literature on the effectiveness of VR training on health professionals’ knowledge, skills, and attitudes in assessing and treating patients with mental health disorders.

Methods

We searched MEDLINE, PsycINFO (via Ovid), the Cochrane Library, ERIC, CINAHL (on EBSCOhost), Web of Science Core Collection, and the Scopus database for studies published from January 1985 to July 2023. We included all studies evaluating the effect of VR training interventions on attitudes, knowledge, and skills pertinent to the assessment and treatment of mental health disorders and published in English or Scandinavian languages. The quality of the evidence in randomized controlled trials was assessed with the Cochrane Risk of Bias Tool 2.0. For non-randomized studies, we assessed the quality of the studies with the ROBINS-I tool.

Results

Of 4170 unique records identified, eight studies were eligible. The four randomized controlled trials were assessed as having some concern or a high risk of overall bias. The four non-randomized studies were assessed as having a moderate to serious overall risk of bias. Of the eight included studies, four used a virtual standardized patient design to simulate training situations, two studies used interactive patient scenario training designs, while two studies used a virtual patient game design. The results suggest that VR training interventions can promote knowledge and skills acquisition.

Conclusions

The findings indicate that VR interventions can effectively train health care personnel to acquire knowledge and skills in the assessment and treatment of mental health disorders. However, study heterogeneity, prevalence of small sample sizes, and many studies with a high or serious risk of bias suggest an uncertain evidence base. Future research on the effectiveness of VR training should include assessment of immersive VR training designs and a focus on more robust studies with larger sample sizes.

Trial registration

This review was pre-registered in the Open Science Framework register with the ID-number Z8EDK.


Background

A robustly trained health care workforce is pivotal to forging a resilient health care system [ 1 ], and there is an urgent need to develop innovative methods and emerging technologies for health care workforce education [ 2 ]. Virtual reality technology designs for clinical training have emerged as a promising avenue for increasing the competence of health care professionals, reflecting their potential to provide effective training [ 3 ].

Virtual reality (VR) is a dynamic and diverse field, and can be described as a computer-generated environment that simulates sensory experiences, where user interactions play a role in shaping the course of events within that environment [ 4 ]. When optimally designed, VR gives users the feeling that they are physically within this simulated space, unlocking its potential as a dynamic and immersive learning tool [ 5 ]. The cornerstone of the allure of VR is its capacity for creating artificial settings via sensory deceptions, encapsulated by the term ‘immersion’. Immersion conveys the sensation of being deeply engrossed or enveloped in an alternate world, akin to absorption in a video game. Some VR systems will be more immersive than others, based on the technology used to influence the senses. However, the degree of immersion does not necessarily determine the user’s level of engagement with the application [ 6 ].

A common approach to categorizing VR systems is based on the design of the technology used, allowing them to be classified into: 1) non-immersive desktop systems, where users experience virtual environments through a computer screen, 2) immersive CAVE systems with large projected images and motion trackers to adjust the image to the user, and 3) fully immersive head-mounted display systems that involve users wearing a headset that fully covers their eyes and ears, thus entirely immersing them in the virtual environment [ 7 ]. Advances in VR technology have enabled a wide range of VR experiences. The possibility for health care professionals to repeatedly practice clinical skills with virtual patients in a risk-free environment offers an invaluable learning platform for health care education.

The impact of VR training on health care professionals’ learning has predominantly been researched in terms of the enhancement of technical surgical abilities. This includes refining procedural planning, familiarizing oneself with medical instruments, and practicing psychomotor skills such as dexterity, accuracy, and speed [ 8 , 9 ]. In contrast, the exploration of VR training in fostering non-technical or ‘soft’ skills, such as communication and teamwork, appears to be less prevalent [ 10 ]. A recent systematic review evaluates the outcomes of VR training in non-technical skills across various medical specialties [ 11 ], focusing on vital cognitive abilities (e.g., situation awareness, decision-making) and interprofessional social competencies (e.g., teamwork, conflict resolution, leadership). These skills are pivotal in promoting collaboration among colleagues and ensuring a safe health care environment. At the same time, they are not sufficiently comprehensive for encounters with patients with mental health disorders.

For health care professionals providing care to patients with mental health disorders, acquiring specific skills, knowledge, and empathic attitudes is of utmost importance. Many individuals experiencing mental health challenges may find it difficult to communicate their thoughts and feelings, and it is therefore essential for health care providers to cultivate an environment where patients feel safe and encouraged to share feelings and thoughts. Beyond fostering trust, health care professionals must also possess in-depth knowledge about the nature and treatment of various mental health disorders. Moreover, they must actively practice and internalize the skills necessary to translate their knowledge into clinical practice. While the conventional approach to training mental health clinical skills has been through simulation or role-playing with peers under expert supervision and practicing with real patients, the emergence of VR applications presents a compelling alternative. This technology promises a potentially transformative way to train mental health professionals. Our review identifies specific outcomes in knowledge, skills, and attitudes, covering areas from theoretical understanding to practical application and patient interaction. By focusing on these measurable concepts, which are in line with current healthcare education guidelines [ 12 ], we aim to contribute to the knowledge base and provide a detailed analysis of the complexities in mental health care training. This approach is designed to highlight the VR training’s practical relevance alongside its contribution to academic discourse.

A recent systematic review evaluated the effects of virtual patient (VP) interventions on knowledge, skills, and attitudes in undergraduate psychiatry education [ 13 ]. This review’s scope is limited to assessing VP interventions and does not cover other types of VR training interventions. Furthermore, it adopts a classification of VP different from our review, rendering their findings and conclusions not directly comparable to ours.

To the best of our knowledge, no systematic review has assessed and summarized the effectiveness of VR training interventions for health professionals in the assessment and treatment of mental health disorders. This systematic review addresses the gap by exploring the effectiveness of virtual reality in the training of knowledge, skills, and attitudes health professionals need to master in the assessment and treatment of mental health disorders.

Methods

This systematic review follows the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 14 ]. The protocol of the systematic review was registered in the Open Science Framework register with the registration ID Z8EDK.

We included randomized controlled trials, cohort studies, and pretest–posttest studies, which met the following criteria: a) a population of health care professionals or health care professional students, b) assessed the effectiveness of a VR application in assessing and treating mental health disorders, and c) reported changes in knowledge, skills, or attitudes. We excluded studies evaluating VR interventions not designed for training in assessing and treating mental health disorders (e.g., training of surgical skills), studies evaluating VR training from the first-person perspective, studies that used VR interventions for non-educational purposes and studies where VR interventions trained patients with mental health problems (e.g., social skills training). We also excluded studies not published in English or Scandinavian languages.

Search strategy

The literature search reporting was guided by relevant items in PRISMA-S [ 15 ]. In collaboration with a senior academic librarian (IBN), we developed the search strategy for the systematic review. Inspired by the ‘pearl harvesting’ information retrieval approach [ 16 ], we anticipated a broad spectrum of terms related to our interdisciplinary query. Recognizing that various terminologies could encapsulate our central ideas, we harvested an array of terms for each of the four elements ‘health care professionals and health care students’, ‘VR’, ‘training’, and ‘mental health’. The pearl harvesting framework [ 16 ] consists of four steps, which we followed with some minor adaptations. Step 1: We searched for and sampled a set of relevant research articles, a book chapter, and literature reviews. Step 2: The librarian scrutinized titles, abstracts, and author keywords, as well as subject headings used in databases, and collected relevant terms. Step 3: The librarian refined the lists of terms. Step 4: The review group, in collaboration with a VR consultant from KildeGruppen AS (a Norwegian media company), validated the refined lists of terms to ensure they included all relevant VR search terms. This process for the element VR resulted in the inclusion of search terms such as ‘3D simulated environment’, ‘second life simulation’, ‘virtual patient’, and ‘virtual world’. The search strategy was peer reviewed by an academic librarian at Inland Norway University of Applied Sciences.
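The structure produced by this term harvesting can be pictured as OR-blocks of synonyms joined by AND across the four elements. The Python sketch below is illustrative only: the terms shown are examples rather than the authors' actual strategy, and real database syntax (Ovid, EBSCOhost, etc.) differs; the full strategies are archived at DataverseNO [ 17 ].

```python
# Hedged illustration: OR-join the harvested terms within each element,
# then AND-join the four elements into one boolean query string.
elements = {
    "population": ["health care professional*", "health personnel", "nursing student*"],
    "vr": ["virtual reality", "virtual patient", "3D simulated environment", "virtual world"],
    "training": ["training", "education", "simulation"],
    "mental_health": ["mental health", "mental disorder*", "psychiatr*"],
}

def build_query(element_terms):
    """Combine term lists into a single boolean query (generic syntax)."""
    blocks = [" OR ".join(f'"{term}"' for term in terms) for terms in element_terms.values()]
    return " AND ".join(f"({block})" for block in blocks)

print(build_query(elements))
```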

In June and July 2021, we performed comprehensive searches for publications dating from January 1985 to the present. This period for the inclusion of studies was chosen since VR systems designed for training in health care first emerged in the early 1990s. The searches were carried out in seven databases: MEDLINE and PsycInfo (on Ovid), ERIC and CINAHL (on EBSCOhost), the Cochrane Library, Web of Science Core Collection, and Scopus. Detailed search strategies from each database are available for public access at DataverseNO [ 17 ]. On July 2, 2021, a search in CINAHL yielded 993 hits. However, when attempting to transfer these records to EndNote using the ‘Folder View’—a feature designed for organizing and managing selected records before export—only 982 records were successfully transferred. This discrepancy indicates that 11 records could not be transferred through Folder View, for reasons not specified. The process was repeated twice, consistently yielding the same discrepancy. The missing 11 records pose a risk of failing to capture relevant studies in the initial search. In July 2023, to make sure that we included the latest publications, we updated our initial searches, focusing on entries since January 1, 2021. This ensured that we did not miss any new references recently added to these databases. Due to a lack of access to the Cochrane Library in July 2023, we used EBMR (Evidence Based Medicine Reviews) on the Ovid platform instead, including the databases Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Cochrane Clinical Answers. All references were exported to Endnote and duplicates were removed. The number of records from each database can be observed in the PRISMA diagram [ 14 ], Fig.  1 .
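The deduplication itself was done in EndNote; purely as a generic illustration of that step (not the EndNote workflow, and with hypothetical record fields), records exported from several databases could be collapsed on a normalized title/year key roughly as follows.

```python
import re

def normalize(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Keep the first record seen for each (normalized title, year) key."""
    seen, unique = set(), []
    for record in records:
        key = (normalize(record["title"]), record.get("year"))
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Hypothetical exports from two databases containing the same study.
records = [
    {"title": "Virtual reality skills training for health care professionals", "year": 2009},
    {"title": "Virtual Reality Skills Training for Health Care Professionals.", "year": 2009},
]
print(len(deduplicate(records)))  # 1
```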

Fig. 1 PRISMA flow chart of the records and study selection process

Study selection and data collection

Two reviewers (JS, CWS) independently assessed the titles and abstracts of studies retrieved from the literature search based on the eligibility criteria. We employed the Rayyan website for the screening process [ 18 ]. The same reviewers (JS, CWS) assessed the full-text articles selected after the initial screening. Articles meeting the eligibility criteria were incorporated into the review. Any disagreements were resolved through discussion.

Data extracted from the studies by the first author (CWS) and cross-checked by another reviewer (JS) included: authors of the study, publication year, country, study design, participant details (education, setting), interventions (VR system, class label), comparison types, outcomes, and main findings. This data is summarized in Table 1 and Additional file 1. In the process of reviewing the VR interventions utilized within the included studies, we sought expertise from advisers associated with VRINN, a Norwegian immersive learning cluster, and SIMInnlandet, a center dedicated to simulation in mental health care at Innlandet Hospital Trust. This collaboration ensured a thorough examination and accurate categorization of the VR technologies applied. Furthermore, the classification of the learning designs employed in the VP interventions was conducted under the guidance of an experienced VP scholar at Paracelsus Medical University in Salzburg.

Data analysis

We initially intended to perform a meta-analysis with knowledge, skills, and attitudes as primary outcomes, planning separate analyses for each. However, due to significant heterogeneity observed among the included studies, it was not feasible to carry out a meta-analysis. Consequently, we opted for a narrative synthesis based on these pre-determined outcomes of knowledge, skills, and attitudes. This approach allowed for an analysis of the relationships both within and between the studies. The effect sizes were calculated using a web-based effect size calculator [ 27 ]. We have interpreted effect sizes based on commonly used descriptions for Cohen’s d: small = 0.2, moderate = 0.5, and large = 0.8, and for Cramer’s V: small = 0.10, medium = 0.30, and large = 0.50.
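For readers unfamiliar with these conventions, the sketch below shows how Cohen's d is typically computed from group means and a pooled standard deviation and then mapped onto the small/moderate/large labels used in this review. It is a generic illustration with made-up numbers, not the web-based calculator cited above [ 27 ].

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d from two group means, SDs, and sample sizes (pooled SD)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def describe_d(d):
    """Map |d| onto the small/moderate/large thresholds used in this review."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical post-test scores for an intervention and a control group.
d = cohens_d(mean1=78.0, sd1=10.0, n1=40, mean2=72.0, sd2=11.0, n2=40)
print(f"d = {d:.2f} ({describe_d(d)})")  # d ≈ 0.57 (moderate)
```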

Risk of bias assessment

JS and CWS independently evaluated the risk of bias for all studies using two distinct assessment tools. We used the Cochrane risk of bias tool RoB 2 [ 28 ] to assess the risk of bias in the RCTs. With the RoB 2 tool, the bias was assessed as high, some concerns or low for five domains: randomization process, deviations from the intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result [ 28 ].

We used the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) tool [ 29 ] to assess the risk of bias in the cohort and single-group studies. By using ROBINS-I for the non-randomized trials, the risk of bias was assessed using the categories low, moderate, serious, critical or no information for seven domains: confounding, selection of participants, classification of interventions, deviations from intended interventions, missing data, measurement of outcomes, and selection of the reported result [ 29 ].
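In simplified terms, the overall judgment for a study is driven by its worst domain-level judgment, and RoB 2 additionally allows several 'some concerns' domains to add up to an overall high risk. The sketch below encodes only this simplification; the threshold of three 'some concerns' domains is an illustrative assumption, not part of the official tools, and ROBINS-I uses a different (low/moderate/serious/critical) scale.

```python
# Simplified sketch of how domain judgments roll up into an overall RoB 2
# rating. Real assessments also involve reviewer judgment beyond this rule.
def overall_rob2(domains, many_concerns_threshold=3):
    """Derive a simplified overall RoB 2 judgment from domain judgments."""
    judgments = list(domains.values())
    if "high" in judgments:
        return "high"
    concerns = judgments.count("some concerns")
    if concerns >= many_concerns_threshold:
        return "high"  # several 'some concerns' can undermine confidence overall
    return "some concerns" if concerns else "low"

example = {
    "randomization process": "low",
    "deviations from the intended interventions": "some concerns",
    "missing outcome data": "low",
    "measurement of the outcome": "high",
    "selection of the reported result": "low",
}
print(overall_rob2(example))  # -> high
```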

Results

We included eight studies in the review (Fig. 1). An overview of the included studies is presented in detail in Table 1.

Four studies were RCTs [ 19 , 20 , 21 , 22 ], two were single group pretest–posttest studies [ 23 , 26 ], one was a controlled before and after study [ 25 ], and one was a cohort study [ 24 ]. The studies included health professionals from diverse educational backgrounds, including some from mental health and medical services, as well as students in medicine, social work, and nursing. All studies, published from 2009 to 2021, utilized non-immersive VR desktop system interventions featuring various forms of VP designs. Based on an updated classification of VP interventions by Kononowicz et al. [ 30 ] developed from a model proposed by Talbot et al. [ 31 ], we have described the characteristics of the interventions in Table  1 . Four of the studies utilized a virtual standardized patient (VSP) intervention [ 20 , 21 , 22 , 23 ], a conversational agent that simulates clinical presentations for training purposes. Two studies employed an interactive patient scenario (IPS) design [ 25 , 26 ], an approach that primarily uses text-based multimedia, enhanced with images and case histories through text or voice narratives, to simulate clinical scenarios. Lastly, two studies used a virtual patient game (VP game) intervention [ 19 , 24 ]. These interventions feature training scenarios using 3D avatars, specifically designed to improve clinical reasoning and team training skills. It should be noted that the interventions classified as VSPs in this review, being a few years old, do not encompass artificial intelligence (AI) as we interpret it today. However, since the interventions include some kind of algorithm that provides answers to questions, we consider them as conversational agents, and therefore as VSPs. As the eight included studies varied significantly in terms of design, interventions, and outcome measures, we could not incorporate them into a meta-analysis.

The overall risk of bias for the four RCTs was high [ 19 , 20 , 22 ] or of some concern [ 21 ] (Fig.  2 ). They were all assessed as low or of some concern in the domains of randomization. Three studies were assessed with a high risk of bias in one [ 19 , 20 ] or two domains [ 22 ]; one study had a high risk of bias in the domain of selection of the reported result [ 19 ], one in the domain of measurement of outcome [ 20 ], and one in the domains of deviation from the intended interventions and missing outcome data [ 22 ]. One study was not assessed as having a high risk of bias in any domain [ 21 ].

Fig. 2 Risk of bias summary: review authors’ assessments of each risk of bias item in the included RCT studies

For the four non-randomized studies, the overall risk of bias was judged to be moderate [ 26 ] or serious [ 23 , 24 , 25 ] (Fig.  3 ). One study had a serious risk of bias in two domains: confounding and measurement of outcomes [ 23 ]. Two studies had a serious risk of bias in one domain, namely confounding [ 24 , 25 ], while one study was judged not to have a serious risk of bias in any domain [ 26 ].

Fig. 3 Risk of bias summary: review authors’ assessments of each risk of bias item in the included non-randomized studies

Knowledge

Three studies investigated the impact of virtual reality training on mental health knowledge [ 24 , 25 , 26 ]. One study with 32 resident psychiatrists in a single group pretest–posttest design assessed the effect of a VR training intervention on knowledge of posttraumatic stress disorder (PTSD) symptomatology, clinical management, and communication skills [ 26 ]. The intervention consisted of an IPS. The outcome was assessed using a knowledge test with 11 multiple-choice questions, administered before and after the intervention. This study reported a significant improvement on the knowledge test after the VR training intervention.

The second study examined the effect of a VR training intervention on knowledge of dementia [ 25 ], employing a controlled before and after design. Seventy-nine medical students in clinical training, all following a traditional learning program, were divided into two groups, and the experimental group additionally received an IPS intervention. The outcome was evaluated with a knowledge test administered before and after the intervention; posttest scores were significantly higher in the experimental group than in the control group, with a moderate effect size observed between the groups.

A third study evaluated the effect of a VR training intervention on 299 undergraduate nursing students’ diagnostic recognition of depression and schizophrenia (classified as knowledge) [ 24 ]. In a prospective cohort design, the VR intervention was the only difference in the mental health related educational content provided to the two cohorts, and consisted of a VP game design, developed to simulate training situations with virtual patient case scenarios, including depression and schizophrenia. The outcome was assessed by determining the accuracy of diagnoses made after reviewing case vignettes of depression and schizophrenia. The study found no statistically significant effect of VR training on diagnostic accuracy between the simulation and the non-simulation cohort.

Summary: All three studies assessing the effect of a VR intervention on knowledge were non-randomized studies with different study designs using different outcome measures. Two studies used an IPS design, while one study used a VP game design. Two of the studies found a significant effect of VR training on knowledge. Of these, one study had a moderate overall risk of bias [ 26 ], while the other was assessed as having a serious overall risk of bias [ 25 ]. The third study, which did not find any effect of the virtual reality intervention on knowledge, was assessed to have a serious risk of bias [ 24 ].

Skills

Three RCTs assessed the effectiveness of VR training on skills [ 20 , 21 , 22 ]. One of them evaluated the effect of VR training on clinical skills in alcohol screening and intervention [ 20 ]. In this study, 102 health care professionals were randomly allocated to either a group receiving no training or a group receiving a VSP intervention. To evaluate the outcome, three standardized patients rated each participant using a checklist based on clinical criteria. The VSP intervention group demonstrated significantly improved posttest skills in alcohol screening and brief intervention compared to the control group, with moderate and small effect sizes, respectively.

Another RCT, including 67 medical college students, evaluated the effect of VR training on clinical skills by comparing the frequency of questions asked about suicide in a VSP intervention group and a video module group [ 21 ]. The assessment of the outcome was a psychiatric interview with a standardized patient. The primary outcome was the frequency with which the students asked the standardized patient five questions about suicide risk. Minimal to small effect sizes were noted in favor of the VSP intervention, though they did not achieve statistical significance for any outcomes.

One posttest only RCT evaluated the effect of three training programs on skills in detecting and diagnosing major depressive disorder and posttraumatic stress disorder (PTSD) [ 22 ]. The study included 30 family physicians, and featured interventions that consisted of two different VSPs designed to simulate training situations, and one text-based program. A diagnostic form filled in by the participants after the intervention was used to assess the outcome. The results revealed a significant effect on diagnostic accuracy for major depressive disorder for both groups receiving VR training, compared to the text-based program, with large effect sizes observed. For PTSD, the intervention using a fixed avatar significantly improved diagnostic accuracy with a large effect size, whereas the intervention with a choice avatar demonstrated a moderate to large effect size compared to the text-based program.

Summary: Three RCTs assessed the effectiveness of VR training on clinical skills [ 20 , 21 , 22 ], all of which used a VSP design. To evaluate the effect of training, two of the studies utilized standardized patients with checklists. The third study measured the effect on skills using a diagnostic form completed by the participants. Two of the studies found a significant effect on skills [ 20 , 22 ], both of which were assessed as having a high risk of bias. The third study, which did not find any effect of VR training on skills, was assessed as having some concerns for risk of bias [ 21 ].

Knowledge and skills

One RCT with 227 health care professionals assessed knowledge and skills as a combined outcome, compared to a waitlist control group, using a self-report survey before and after the VR training [ 19 ]. The training intervention was a VP game designed to practice knowledge and skills related to mental health and substance abuse disorders. To assess the effect of the training, participants completed a self-report scale measuring perceived knowledge and skills. Changes between presimulation and postsimulation scores were reported only for the treatment group (n = 117), where the composite postsimulation score was significantly higher than the presimulation score, with a large effect size observed. The study was judged to have a high risk of bias in the domain of selection of the reported result.

Attitudes

One single group pretest–posttest study with 100 social work and nursing students assessed the effect of VSP training on attitudes towards individuals with substance abuse disorders [ 23 ]. To assess the effect of the training, participants completed an online pretest and posttest survey including questions from a substance abuse attitudes survey. This study found no significant effect of VR training on attitudes and was assessed as having a serious risk of bias.

Perceived competence

The same single group pretest–posttest study also assessed the effect of a VSP training intervention on perceived competence in screening, brief intervention, and referral to treatment in encounters with patients with substance abuse disorders [ 23 ]. A commonly accepted definition of competence is that it comprises integrated components of knowledge, skills, and attitudes that enable the successful execution of a professional task [ 32 ]. To assess the effect of the training, participants completed an online pretest and posttest survey including questions on perceived competence. The study findings demonstrated a significant increase in perceived competence following the VSP intervention. The risk of bias in this study was judged as serious.

Discussion

This systematic review aimed to investigate the effectiveness of VR training on knowledge, skills, and attitudes that health professionals need to master in the assessment and treatment of mental health disorders. A narrative synthesis of eight included studies identified VR training interventions that varied in design and educational content. Although mixed results emerged, most studies reported improvements in knowledge and skills after VR training.

We found that all interventions utilized some type of VP design, predominantly VSP interventions. Although our review includes a limited number of studies, it is noteworthy that the distribution of interventions contrasts with a literature review on the use of ‘virtual patient’ in health care education from 2015 [ 30 ], which identified IPS as the most frequent intervention. This variation may stem from our review’s focus on the mental health field, suggesting a different intervention need and distribution than that observed in general medical education. A fundamental aspect of mental health education involves training skills needed for interpersonal communication, clinical interviews, and symptom assessment, which makes VSPs particularly appropriate. While VP games may be suitable for clinical reasoning in medical fields, offering the opportunity to perform technical medical procedures in a virtual environment, these designs may present some limitations for skills training in mental health education. Notably, avatars in a VP game do not comprehend natural language and are incapable of engaging in conversations. Therefore, the continued advancement of conversational agents like VSPs is particularly compelling and considered by scholars to hold the greatest potential for clinical skills training in mental health education [ 3 ]. VSPs, equipped with AI dialogue capabilities, are particularly valuable for repetitive practice in key skills such as interviewing and counseling [ 31 ], which are crucial in the assessment and treatment of mental health disorders. VSPs could also be a valuable tool for the implementation of training methods in mental health education, such as deliberate practice, a method that has gained attention in psychotherapy training in recent years [ 33 ] for its effectiveness in refining specific performance areas through consistent repetition [ 34 ]. Within this evolving landscape, AI system-based large language models (LLMs) like ChatGPT stand out as a promising innovation. Developed from extensive datasets that include billions of words from a variety of sources, these models possess the ability to generate and understand text in a manner akin to human interaction [ 35 ]. The integration of LLMs into educational contexts shows promise, yet careful consideration and thorough evaluation of their limitations are essential [ 36 ]. One concern regarding LLMs is the possibility of generating inaccurate information, which represents a challenge in healthcare education where precision is crucial [ 37 ]. Furthermore, the use of generative AI raises ethical questions, notably because of potential biases in the training datasets, including content from books and the internet that may not have been verified, thereby risking the perpetuation of these biases [ 38 ]. Developing strategies to mitigate these challenges is imperative, ensuring LLMs are utilized safely in healthcare education.

All interventions in our review were based on non-immersive desktop VR systems, which is somewhat surprising considering the growing body of literature highlighting the impact of immersive VR technology in education, as exemplified by reviews such as that of Radianti et al. [ 39 ], and given the recent accessibility of affordable, high-quality head-mounted displays. Research has indicated that immersive learning based on head-mounted displays generally yields better learning outcomes than non-immersive approaches [ 40 ], making it an interesting research area in mental health care training and education. Studies using immersive interventions were excluded from the present review because of methodological concerns, paralleling findings described in a systematic review on immersive VR in education [ 41 ], suggesting the potential early stage of research within this field. Moreover, the integration of immersive VR technology into mental health care education may encounter challenges associated with complex ethical and regulatory frameworks, including data privacy concerns exemplified by the Oculus VR headset-Facebook integration, which could restrict the implementation of this technology in healthcare settings. Prioritizing specific training methodologies for enhancing skills may also affect the utilization of immersive VR in mental health education. For example, integrating interactive VSPs into a fully immersive VR environment remains a costly endeavor, potentially limiting the widespread adoption of immersive VR in mental health care. Meanwhile, the use of 360-degree videos in immersive VR environments for training purposes [ 42 ] can be realized with a significantly lower budget. Immersive VR offers promising opportunities for innovative training, but realizing its full potential in mental health care education requires broader research validation and the resolution of existing obstacles.

This review bears some resemblance to the systematic review by Jensen et al. on virtual patients in undergraduate psychiatry education [ 13 ] from 2024, which found that virtual patients improved learning outcomes compared to traditional methods. However, these authors’ expansion of the commonly used definition of virtual patient makes their results difficult to compare with the findings in the present review. A recognized challenge in understanding VR application in health care training arises from the literature on VR training for health care personnel, where ‘virtual patient’ is a term broadly used to describe a diverse range of VR interventions, which vary significantly in technology and educational design [ 3 , 30 ]. For instance, reviews might group different interventions using various VR systems and designs under a single label (virtual patient), or primary studies may use misleading or inadequately defined classifications for the virtual patient interventions evaluated. Clarifying the similarities and differences among these interventions is vital to inform development and enhance communication and understanding in educational contexts [ 43 ].

Strengths and limitations

To the best of our knowledge, this is the first systematic review to evaluate the effectiveness of VR training on knowledge, skills, and attitudes in health care professionals and students in assessing and treating mental health disorders. This review therefore provides valuable insights into the use of VR technology in training and education for mental health care. Another strength of this review is the comprehensive search strategy developed by a senior academic librarian at Inland Norway University of Applied Sciences (HINN) and the authors in collaboration with an adviser from KildeGruppen AS (a Norwegian media company). The search strategy was peer-reviewed by an academic librarian at HINN. Advisers from VRINN (an immersive learning cluster in Norway) and SIMInnlandet (a center for simulation in mental health care at Innlandet Hospital Trust) provided assistance in reviewing the VR systems of the studies, while the classification of the learning designs was conducted under the guidance of a VP scholar. This systematic review relies on an established and recognized classification of VR interventions for training health care personnel and may enhance understanding of the effectiveness of VR interventions designed for the training of mental health care personnel.

This review has some limitations. As we aimed to measure the effect of the VR intervention alone and not the effect of a blended training design, the selection of included studies was limited, and studies not covered in this review might have offered different insights. Given that blended learning designs, in which technology is combined with other forms of learning, have significant positive effects on learning outcomes [ 44 ], this exclusion means we were unable to evaluate interventions that may be more effective in clinical settings. Further, by limiting the outcomes to knowledge, skills, and attitudes, we might have missed insights into other outcomes that are pivotal to competence acquisition.

Limitations in many of the included studies necessitate cautious interpretation of the review’s findings. Small sample sizes and weak designs in several studies, coupled with the use of non-validated outcome measures in some studies, diminish the robustness of the findings. Furthermore, the risk of bias assessment in this review indicates a predominantly high or serious risk of bias across most of the studies, regardless of their design. In addition, the heterogeneity of the studies in terms of study design, interventions, and outcome measures prevented us from conducting a meta-analysis.

Further research

Future research on the effectiveness of VR training for specific learning outcomes in assessing and treating mental health disorders should encompass more rigorous experimental studies with larger sample sizes. These studies should include verifiable descriptions of the VR interventions and employ validated tools to measure outcomes. Moreover, considering that much professional learning involves interactive and reflective practice, research on VR training would probably be enhanced by developing more in-depth study designs that evaluate not only the immediate learning outcomes of VR training but also the broader learning processes associated with it. Future research should also concentrate on utilizing immersive VR training applications, while additionally exploring the integration of large language models to augment interactive learning in mental health care. Finally, this review underscores the necessity in health education research involving VR to communicate research findings using agreed terms and classifications, with the aim of providing a clearer and more comprehensive understanding of the research.

Conclusions

This systematic review investigated the effect of VR training interventions on knowledge, skills, and attitudes in the assessment and treatment of mental health disorders. The results suggest that VR training interventions can promote knowledge and skills acquisition. Further studies are needed to evaluate VR training interventions as a learning tool for mental health care providers. This review emphasizes the necessity to improve future study designs. Additionally, intervention studies of immersive VR applications are lacking in current research and should be a future area of focus.

Availability of data and materials

Detailed search strategies from each database are available in the DataverseNO repository, https://doi.org/10.18710/TI1E0O.

Abbreviations

VR: Virtual Reality
CAVE: Cave Automatic Virtual Environment
RCT: Randomized Controlled Trial
NRS: Non-Randomized Study
VSP: Virtual Standardized Patient
IPS: Interactive Patient Scenario
VP: Virtual Patient
PTSD: Post Traumatic Stress Disorder
SP: Standardized Patient
AI: Artificial Intelligence
HINN: Inland Norway University of Applied Sciences
PhD: Doctor of Philosophy

References

1. Frenk J, Chen L, Bhutta ZA, Cohen J, Crisp N, Evans T, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–58.
2. World Health Organization. eLearning for undergraduate health professional education: a systematic review informing a radical transformation of health workforce development. Geneva: World Health Organization; 2015.
3. Talbot T, Rizzo AS. Virtual human standardized patients for clinical training. In: Rizzo AS, Bouchard S, editors. Virtual reality for psychological and neurocognitive interventions. New York: Springer; 2019. p. 387–405.
4. Merriam-Webster dictionary. Virtual reality. Springfield: Merriam-Webster Incorporated; c2024. Available from: https://www.merriam-webster.com/dictionary/virtual%20reality. [cited 2024 Mar 24].
5. Winn W. A conceptual basis for educational applications of virtual reality. Technical Publication R-93–9. Seattle: Human Interface Technology Laboratory, University of Washington; 1993.
6. Bouchard S, Rizzo AS. Applications of virtual reality in clinical psychology and clinical cognitive neuroscience–an introduction. In: Rizzo AS, Bouchard S, editors. Virtual reality for psychological and neurocognitive interventions. New York: Springer; 2019. p. 1–13.
7. Waller D, Hodgson E. Sensory contributions to spatial knowledge of real and virtual environments. In: Steinicke F, Visell Y, Campos J, Lécuyer A, editors. Human walking in virtual environments: perception, technology, and applications. New York: Springer New York; 2013. p. 3–26. https://doi.org/10.1007/978-1-4419-8432-6_1.
8. Choudhury N, Gélinas-Phaneuf N, Delorme S, Del Maestro R. Fundamentals of neurosurgery: virtual reality tasks for training and evaluation of technical skills. World Neurosurg. 2013;80(5):e9–19. https://doi.org/10.1016/j.wneu.2012.08.022.
9. Gallagher AG, Ritter EM, Champion H, Higgins G, Fried MP, Moses G, et al. Virtual reality simulation for the operating room: proficiency-based training as a paradigm shift in surgical skills training. Ann Surg. 2005;241(2):364–72. https://doi.org/10.1097/01.sla.0000151982.85062.80.
10. Kyaw BM, Saxena N, Posadzki P, Vseteckova J, Nikolaou CK, George PP, et al. Virtual reality for health professions education: systematic review and meta-analysis by the Digital Health Education Collaboration. J Med Internet Res. 2019;21(1):e12959. https://doi.org/10.2196/12959.
11. Bracq M-S, Michinov E, Jannin P. Virtual reality simulation in nontechnical skills training for healthcare professionals: a systematic review. Simul Healthc. 2019;14(3):188–94. https://doi.org/10.1097/sih.0000000000000347.
12. World Health Organization. Transforming and scaling up health professionals’ education and training: World Health Organization guidelines 2013. Geneva: World Health Organization; 2013. Available from: https://www.who.int/publications/i/item/transforming-and-scaling-up-health-professionals%E2%80%99-education-and-training. Accessed 15 Jan 2024.
13. Jensen RAA, Musaeus P, Pedersen K. Virtual patients in undergraduate psychiatry education: a systematic review and synthesis. Adv Health Sci Educ. 2024;29(1):329–47. https://doi.org/10.1007/s10459-023-10247-6.
14. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):89. https://doi.org/10.1186/s13643-021-01626-4.
15. Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, et al. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10(1):39. https://doi.org/10.1186/s13643-020-01542-z.
16. Sandieson RW, Kirkpatrick LC, Sandieson RM, Zimmerman W. Harnessing the power of education research databases with the pearl-harvesting methodological framework for information retrieval. J Spec Educ. 2010;44(3):161–75. https://doi.org/10.1177/0022466909349144.
17. Steen CW, Söderström K, Stensrud B, Nylund IB, Siqveland J. Replication data for: the effectiveness of virtual reality training on knowledge, skills and attitudes of health care professionals and students in assessing and treating mental health disorders: a systematic review. In: Inland Norway University of Applied Sciences, editor. V1 ed. DataverseNO; 2024. https://doi.org/10.18710/TI1E0O.
18. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. https://doi.org/10.1186/s13643-016-0384-4.
19. Albright G, Bryan C, Adam C, McMillan J, Shockley K. Using virtual patient simulations to prepare primary health care professionals to conduct substance use and mental health screening and brief intervention. J Am Psych Nurses Assoc. 2018;24(3):247–59. https://doi.org/10.1177/1078390317719321.
20. Fleming M, Olsen D, Stathes H, Boteler L, Grossberg P, Pfeifer J, et al. Virtual reality skills training for health care professionals in alcohol screening and brief intervention. J Am Board Fam Med. 2009;22(4):387–98. https://doi.org/10.3122/jabfm.2009.04.080208.
21. Foster A, Chaudhary N, Murphy J, Lok B, Waller J, Buckley PF. The use of simulation to teach suicide risk assessment to health profession trainees—rationale, methodology, and a proof of concept demonstration with a virtual patient. Acad Psych. 2015;39:620–9. https://doi.org/10.1007/s40596-014-0185-9.
22. Satter R. Diagnosing mental health disorders in primary care: evaluation of a new training tool [dissertation]. Tempe (AZ): Arizona State University; 2012.
23. Hitchcock LI, King DM, Johnson K, Cohen H, McPherson TL. Learning outcomes for adolescent SBIRT simulation training in social work and nursing education. J Soc Work Pract Addict. 2019;19(1/2):47–56. https://doi.org/10.1080/1533256X.2019.1591781.
24. Liu W. Virtual simulation in undergraduate nursing education: effects on students’ correct recognition of and causative beliefs about mental disorders. Comput Inform Nurs. 2021;39(11):616–26. https://doi.org/10.1097/CIN.0000000000000745.
25. Matsumura Y, Shinno H, Mori T, Nakamura Y. Simulating clinical psychiatry for medical students: a comprehensive clinic simulator with virtual patients and an electronic medical record system. Acad Psych. 2018;42(5):613–21. https://doi.org/10.1007/s40596-017-0860-8.
26. Pantziaras I, Fors U, Ekblad S. Training with virtual patients in transcultural psychiatry: do the learners actually learn? J Med Internet Res. 2015;17(2):e46. https://doi.org/10.2196/jmir.3497.
27. Wilson DB. Practical meta-analysis effect size calculator [online calculator]. n.d. https://campbellcollaboration.org/research-resources/effect-size-calculator.html. Accessed 08 March 2024.
28. Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. Br Med J. 2019;366:l4898.
29. Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. Br Med J. 2016;355:i4919. https://doi.org/10.1136/bmj.i4919.
30. Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. Virtual patients – what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ. 2015;15(1):11. https://doi.org/10.1186/s12909-015-0296-3.
31. Talbot TB, Sagae K, John B, Rizzo AA. Sorting out the virtual patient: how to exploit artificial intelligence, game technology and sound educational practices to create engaging role-playing simulations. Int J Gaming Comput-Mediat Simul. 2012;4(3):1–19.
32. Baartman LKJ, de Bruijn E. Integrating knowledge, skills and attitudes: conceptualising learning processes towards vocational competence. Educ Res Rev. 2011;6(2):125–34. https://doi.org/10.1016/j.edurev.2011.03.001.
33. Mahon D. A scoping review of deliberate practice in the acquisition of therapeutic skills and practices. Couns Psychother Res. 2023;23(4):965–81. https://doi.org/10.1002/capr.12601.
34. Ericsson KA, Lehmann AC. Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annu Rev Psychol. 1996;47(1):273–305.
35. Roumeliotis KI, Tselikas ND. ChatGPT and Open-AI models: a preliminary review. Future Internet. 2023;15(6):192. https://doi.org/10.3390/fi15060192.
36. Kasneci E, Sessler K, Küchemann S, Bannert M, Dementieva D, Fischer F, et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ. 2023;103:102274. https://doi.org/10.1016/j.lindif.2023.102274.
37. Thirunavukarasu AJ, Ting DSJ, Elangovan K, Gutierrez L, Tan TF, Ting DSW. Large language models in medicine. Nat Med. 2023;29(8):1930–40. https://doi.org/10.1038/s41591-023-02448-8.
38. Touvron H, Lavril T, Gautier I, Martinet X, Marie-Anne L, Lacroix T, et al. LLaMA: open and efficient foundation language models. arXiv. 2023;2302.13971. https://doi.org/10.48550/arxiv.2302.13971.
39. Radianti J, Majchrzak TA, Fromm J, Wohlgenannt I. A systematic review of immersive virtual reality applications for higher education: design elements, lessons learned, and research agenda. Comput Educ. 2020;147:103778. https://doi.org/10.1016/j.compedu.2019.103778.
40. Wu B, Yu X, Gu X. Effectiveness of immersive virtual reality using head-mounted displays on learning performance: a meta-analysis. Br J Educ Technol. 2020;51(6):1991–2005. https://doi.org/10.1111/bjet.13023.
41. Di Natale AF, Repetto C, Riva G, Villani D. Immersive virtual reality in K-12 and higher education: a 10-year systematic review of empirical research. Br J Educ Technol. 2020;51(6):2006–33. https://doi.org/10.1111/bjet.13030.
42. Haugan S, Kværnø E, Sandaker J, Hustad JL, Thordarson GO. Playful learning with VR-SIMI model: the use of 360-video as a learning tool for nursing students in a psychiatric simulation setting. In: Akselbo I, Aune I, editors. How can we use simulation to improve competencies in nursing? Cham: Springer International Publishing; 2023. p. 103–16. https://doi.org/10.1007/978-3-031-10399-5_9.
43. Huwendiek S, De Leng BA, Zary N, Fischer MR, Ruiz JG, Ellaway R. Towards a typology of virtual patients. Med Teach. 2009;31(8):743–8. https://doi.org/10.1080/01421590903124708.
44. Ødegaard NB, Myrhaug HT, Dahl-Michelsen T, Røe Y. Digital learning designs in physiotherapy education: a systematic review and meta-analysis. BMC Med Educ. 2021;21(1):48. https://doi.org/10.1186/s12909-020-02483-w.


Acknowledgements

The authors thank Mole Meyer, adviser at SIMInnlandet, Innlandet Hospital Trust, and Keith Mellingen, manager at VRINN, for their assistance with the categorization and classification of VR interventions, and Associate Professor Inga Hege at Paracelsus Medical University in Salzburg for valuable contributions to the final classification of the interventions. The authors would also like to thank Håvard Røste from the media company KildeGruppen AS, for assistance with the search strategy; Academic Librarian Elin Opheim at the Inland Norway University of Applied Sciences for valuable peer review of the search strategy; and the Library at the Inland Norway University of Applied Sciences for their support. Additionally, we acknowledge the assistance provided by OpenAI’s ChatGPT for support with translations and language refinement.

Funding

Open access funding provided by Inland Norway University of Applied Sciences. The study forms a part of a collaborative PhD project funded by the South-Eastern Norway Regional Health Authority through Innlandet Hospital Trust and the Inland Norway University of Applied Sciences.

Author information

Authors and Affiliations

Mental Health Department, Innlandet Hospital Trust, P.B 104, Brumunddal, 2381, Norway

Cathrine W. Steen & Kerstin Söderström

Inland Norway University of Applied Sciences, P.B. 400, Elverum, 2418, Norway

Cathrine W. Steen, Kerstin Söderström & Inger Beate Nylund

Norwegian National Advisory Unit On Concurrent Substance Abuse and Mental Health Disorders, Innlandet Hospital Trust, P.B 104, Brumunddal, 2381, Norway

Bjørn Stensrud

Akershus University Hospital, P.B 1000, Lørenskog, 1478, Norway

Johan Siqveland

National Centre for Suicide Research and Prevention, Oslo, 0372, Norway


Contributions

CWS, KS, BS, and JS collaboratively designed the study. CWS and JS collected and analysed the data and were primarily responsible for writing the manuscript text. All authors contributed to the development of the search strategy. IBN conducted the literature searches and authored the chapter on the search strategy in the manuscript. All authors reviewed, gave feedback, and granted their final approval of the manuscript.

Corresponding author

Correspondence to Cathrine W. Steen .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table 2.

Effects of VR training in the included studies: Randomized controlled trials (RCTs) and non-randomized studies (NRSs).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Steen, C.W., Söderström, K., Stensrud, B. et al. The effectiveness of virtual reality training on knowledge, skills and attitudes of health care professionals and students in assessing and treating mental health disorders: a systematic review. BMC Med Educ 24, 480 (2024). https://doi.org/10.1186/s12909-024-05423-0


Received: 19 January 2024

Accepted: 12 April 2024

Published: 01 May 2024

DOI: https://doi.org/10.1186/s12909-024-05423-0


Keywords

  • Health care professionals
  • Health care students
  • Virtual reality
  • Mental health
  • Clinical skills
  • Systematic review


  • Open access
  • Published: 17 June 2024

The effects of simulation-based education on undergraduate nursing students' competences: a multicenter randomized controlled trial

  • Lai Kun Tong 1,
  • Yue Yi Li 1,
  • Mio Leng Au 1,
  • Wai I. Ng 1,
  • Si Chen Wang 1,
  • Yongbing Liu 2,
  • Yi Shen 3,
  • Liqiang Zhong 4 &
  • Xichenhui Qiu 5

BMC Nursing, volume 23, Article number: 400 (2024)


Background

Nursing education has seen positive effects from simulation-based education. Many studies on the effects of simulation-based education are available, but most involve a single institution, nonrandomized controlled designs, small sample sizes, and subjective evaluations of the effects. The purpose of this multicenter randomized controlled trial was to evaluate the effects of high-fidelity simulation, computer-based simulation, high-fidelity simulation combined with computer-based simulation, and case study on undergraduate nursing students.

Methods

A total of 270 nursing students were recruited from five universities in China. Participants were randomly divided into four groups at each institution: the high-fidelity simulation group, the computer-based simulation group, the high-fidelity simulation combined with computer-based simulation group, and the case study group. In total, 239 participants completed the intervention and evaluation, with 58, 67, 57, and 57 participants in the four groups, respectively. The data were collected at three stages: before the intervention, immediately after the intervention, and three months after the intervention.

The demographic data and baseline evaluation indices did not significantly differ among the four groups. A statistically significant difference was not observed between the four methods for improving knowledge, interprofessional collaboration, critical thinking, caring, or interest in learning. While skill improvement differed significantly among the different groups after the intervention ( p  = 0.020), after three months, no difference was observed ( p  = 0.139). The improvement in skill in the computer-based simulation group was significantly lower at the end of the intervention than that in the high-fidelity simulation group ( p  = 0.048) or the high-fidelity simulation combined with computer-based simulation group ( p  = 0.020).

Conclusions

Nursing students benefit equally from all four methods in cultivating their knowledge, interprofessional collaboration, critical thinking, caring, and interest in learning, both immediately and over time. High-fidelity simulation and high-fidelity simulation combined with computer-based simulation improve skill more effectively than computer-based simulation in the short term. Nursing educators can select the most suitable teaching method to achieve the intended learning outcomes depending on the specific circumstances.

Trial registration

This clinical trial was registered at the Chinese Clinical Trial Registry (clinical trial number: ChiCTR2400084880, date of the registration: 27/05/2024).

Introduction

Nursing students face many challenges in the clinical setting because of the gap between theory and practice, limited resources, and unfamiliarity with the medical environment [ 1 ]. Nursing education therefore needs innovative teaching methods that are more closely tied to the clinical environment. Simulation-based education is an effective teaching method for nursing students [ 2 ]. It provides students with an immersive clinical environment for practicing skills and gaining experience in a safe, controlled setting [ 3 ]. This educational approach not only supports the development of various competencies [ 2 , 4 ], including knowledge, skill, interprofessional collaboration, critical thinking, caring, and interest in learning, but also enables students to apply learned concepts to complex and challenging situations [ 5 ].

Manikin-based and computer-based simulations are commonly employed simulators in nursing education. Manikin-based simulation involves the use of a manikin to mimic a patient’s characteristics, such as heart and lung sounds [ 6 ]. Computer-based simulation involves the modeling of real-life processes solely using computers, usually with a keyboard and monitor as inputs and outputs [ 6 ]. According to a recent meta-analysis, manikin-based simulation improves nursing students' knowledge acquisition more than computer-based simulation does, but there are no significant differences in confidence or satisfaction with learning [ 4 ].

Based on the level of fidelity, manikin-based simulation can be categorized as low, medium, or high fidelity [ 7 ]. High-fidelity simulation has become increasingly popular because it can replace part of clinical placement without compromising the quality of nursing education [ 8 ]. Compared with other teaching methods, however, high-fidelity simulation involves elevated equipment and labor costs [ 9 ]. To enhance cost-effectiveness, it is imperative to maximize the impact of high-fidelity simulation. To improve learning outcomes, blended learning has gained popularity across higher education in recent years [ 10 ]. The most widely used blended approach to simulation education in nursing is high-fidelity simulation combined with computer-based simulation. Only a few studies have examined the effect of high-fidelity simulation combined with computer-based simulation on nursing students, and these are either pre-post comparison studies without control groups [ 11 ] or quasi-experimental studies without randomization [ 12 ]. To obtain a better grasp of the effects of combining high-fidelity simulation and computer-based simulation, a randomized controlled trial is needed.

In addition to enhancing effectiveness, cost-effectiveness can be optimized by reducing costs. The case study, which eliminates the need for additional equipment, offers a relatively low-cost alternative. A traditional case study provides all pertinent information, whereas an unfolding case study purposefully leaves out information [ 13 ]. Unfolding case studies have been shown to foster critical thinking in students more effectively than traditional case studies [ 14 ]. Although the unfolding case study is regarded as an innovative and inexpensive teaching method, little research has compared it with other simulation-based teaching methods. To address this knowledge gap, further study is necessary.

An umbrella review highlights that the existing literature on the learning outcomes of simulation-based education predominantly emphasizes knowledge and skills, while paying limited attention to other core competencies, such as interprofessional collaboration and caring [ 15 ]. Therefore, future research should evaluate a broader range of learning outcome indicators.

This multicenter randomized controlled trial aimed to assess the effectiveness of high-fidelity simulation, computer-based simulation, high-fidelity simulation combined with computer-based simulation, and case study on nursing students’ knowledge, skill, interprofessional collaboration, critical thinking, caring, and interest in learning.

Study design

A multicenter randomized controlled trial was conducted between March 2022 and May 2023 in China. The study conforms to the CONSORT guidelines. This clinical trial was registered at the Chinese Clinical Trial Registry (clinical trial number: ChiCTR2400084880, date of the registration: 27/05/2024).

Participants and setting

Participants were recruited from five universities in China, two of which were private and three of which were public. Among the five universities, four were equipped with two high-fidelity simulation laboratories. Specifically, three universities had laboratories simulating intensive care unit wards and delivery rooms, while the remaining university had two laboratories simulating general wards. Additionally, one university possessed a high-fidelity simulation laboratory specifically designed to simulate a general ward setting. Three universities utilized Laerdal patient simulators in their laboratories, while the other two universities employed Gaumard patient simulators.

A recruitment poster with the time and location of the project promotion was posted on the school bulletin board. The research team provided a briefing to students at the designated time and location indicated on the poster, affording them the opportunity to inquire about and enhance their understanding of the project.

The study mandated that participants fulfill the following criteria: 1) be enrolled in an undergraduate nursing program; 2) have full-time student status; 3) have completed courses in Anatomy and Physiology, Pathophysiology, Pharmacology, Health Assessment, Basic Nursing, and Medical and Surgical Nursing (Respiratory System); 4) be proficient in reading and writing Chinese; and 5) participate voluntarily. Those who met either of the following criteria were excluded: 1) already held a degree or diploma, or 2) were retaking the course.

The sample size was calculated through the use of G*Power 3.1, which was based on F tests (ANOVA: Repeated measures, between factors). Several assumptions were taken into consideration, including a 5% level of significance, 80% power, four groups, three measurements, and a 0.50 correlation between pre- and postintervention time points. Compared to other teaching methods, high-fidelity simulation exhibited a medium effect size (d = 0.49 for knowledge, d = 0.50 for performance) [ 16 ]. The calculation employed a conservative approach, accommodating a small yet clinically significant effect size (0.25), thereby bolstering the reliability and validity of the findings. Based on these assumptions, the total sample size required was determined to be 124, with each group requiring 31 participants.
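For readers who want to check the power analysis, the short sketch below roughly replicates the G*Power setting described above (ANOVA: repeated measures, between factors) under the standard noncentrality formula, with f = 0.25, α = 0.05, power = 0.80, four groups, three measurements, and a 0.50 correlation among repeated measures. It is illustrative only and not taken from the authors' calculation, so the total it returns may differ slightly from the reported 124.

```python
# Illustrative sketch of the sample-size calculation described above
# (not the authors' G*Power output); assumes the standard noncentrality
# formula lambda = f^2 * N * m / (1 + (m - 1) * rho).
from scipy.stats import f as f_dist, ncf

def power_between(n_total, k=4, m=3, f_eff=0.25, rho=0.50, alpha=0.05):
    """Power of the between-groups effect for a total sample of n_total."""
    lam = f_eff ** 2 * n_total * m / (1 + (m - 1) * rho)  # noncentrality
    df1, df2 = k - 1, n_total - k
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return ncf.sf(f_crit, df1, df2, lam)  # P(F > critical value | noncentral F)

n = 8                        # smallest N with positive error df and equal groups
while power_between(n) < 0.80:
    n += 4                   # keep the four group sizes equal
print(n, round(power_between(n), 3))   # lands in the vicinity of the reported N = 124
```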

Randomization and blinding

Due to inconsistent teaching schedules at the five universities involved in the study, the participants were divided into four groups at each institution: the high-fidelity simulation group, the computer-based simulation group, the high-fidelity simulation combined with computer-based simulation group, and the case study group. Participant grouping was carried out by study team members who were not involved in the intervention or evaluation. Each participant was assigned a random, nonduplicate number between zero and 100 using Microsoft Excel, and participants were then allocated by quartile of their random number: the lowest quartile to the high-fidelity simulation group, the second quartile to the computer-based simulation group, the third quartile to the high-fidelity simulation combined with computer-based simulation group, and the highest quartile to the case study group. It was not possible to blind participants because the four teaching methods differed substantially, but effect evaluation and data analysis were conducted in a blinded manner. Each participant was assigned a unique identifier to maintain anonymity throughout the study.
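As a concrete illustration of this quartile-based allocation (the study itself used a Microsoft Excel workbook, not this script), the following sketch assigns each participant a non-duplicate random number and splits the ranked list into the four study groups; the participant identifiers are hypothetical.

```python
# Sketch of the quartile-based random allocation described above
# (illustrative only; not the authors' actual Excel procedure).
import random

GROUPS = ["high-fidelity simulation", "computer-based simulation",
          "high-fidelity + computer-based simulation", "case study"]

def allocate(participant_ids, seed=None):
    rng = random.Random(seed)
    # one non-duplicate random number between 0 and 100 per participant
    numbers = rng.sample(range(0, 101), k=len(participant_ids))
    ranked = sorted(zip(numbers, participant_ids))        # order by random number
    quartile = len(ranked) / 4
    return {pid: GROUPS[min(int(rank // quartile), 3)]
            for rank, (_, pid) in enumerate(ranked)}

# Example: allocate 20 hypothetical participants at one institution
print(allocate([f"P{i:02d}" for i in range(1, 21)], seed=2024))
```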

Baseline test

Baseline testing started after participant recruitment had ended, so the timing of the study varied between universities. The baseline test items were the same for all participants and included general characteristics, knowledge, skills, interprofessional collaboration, critical thinking, caring, and interest in learning. Skills were evaluated by trained assessors, whereas the other measures were collected through a non-face-to-face online survey.

Intervention

The four groups were taught with three scenarios covering three different cases, in the following order: worsening asthma, drug allergy, and ventricular fibrillation. These cases represent commonly encountered situations requiring emergency treatment, and the training was expected to enhance students' ability to handle emergency situations effectively in clinical settings. It is vital that the cases used in simulation-based education are valid so that its effectiveness can be enhanced [ 17 ]. The cases used in this study were from vSim® for Nursing | Lippincott Nursing Education, which was developed by Wolters Kluwer Health (Lippincott), Laerdal Medical, and the National League for Nursing; hence, the validity of the cases can be assured. Participants received all the materials, including learning outcomes, theoretical learning materials, and case materials (medical history and nursing documents), at least one day before teaching. The lesson plans were written by three members of the research team and revised according to feedback, and all the teachers involved in teaching met to discuss the lesson plans and reach a consensus. Table 1 shows the teaching experience of each case in the different intervention groups. The instructors involved had at least five years of teaching experience and a master's degree or higher.

Posttest and follow-up test

The posttest was conducted within one week of the intervention, using the same items as those used in the baseline test. The follow-up test was administered three months after the intervention.

General characteristics

The general characteristics of the participants included gender, age, and previous semester grade.

Knowledge was measured by five multiple-choice items developed for this study. The items were derived from the National Nurse Licensing Examination [ 18 ]. The maximum score was five, with one point awarded for each correct answer. The questionnaire exhibited high content validity (CVI = 1.00) and good reliability (Kuder-Richardson 20 = 0.746).

The Creighton Competency Evaluation Instrument (CCEI) is designed to assess clinical skills in a simulated environment by measuring 23 general nursing behaviors. This tool was originally developed by Todd et al. [ 19 ] and subsequently modified by Hayden et al. [ 20 ]. The Chinese version of the CCEI has good reliability (Cronbach’s α = 0.94) and validity (CVI = 0.98) [ 21 ]. The CCEI was scored by nurses with master’s degrees who were trained by the research team and blinded to the intervention information. A dedicated person was assigned to handle the rating for each university, and the raters did not rotate among the participants. The Kendall's W coefficient for the raters' measures was calculated to be 0.832, indicating a high level of interrater agreement and reliability. All participants were tested using a high-fidelity simulator, with each test lasting ten minutes. The skills test without debriefing employed a single-person format, and the nursing procedures did not rely on laboratory results, so the items "Delegates Appropriately," "Reflects on Clinical Experience," "Interprets Lab Results," and "Reflects on Potential Hazards and Errors" were excluded from the assessment. The total score ranged from 0–19 and a higher score indicated a higher level of skill.

  • Interprofessional collaboration

The Assessment of the Interprofessional Team Collaboration Scale for Students (AITCS-II Student) was used to assess interprofessional collaboration. It consists of 17 items rated on a 5-point Likert scale (1 = never, 5 = always), for a total score ranging from 17 to 85 [ 22 ]. The Chinese version of the AITCS-II has good reliability (Cronbach’s α = 0.961) and validity [ 23 ].

  • Critical thinking

Critical thinking was measured by Yoon's Critical Thinking Disposition Scale (YCTD). It is a five-point Likert scale with values ranging from 1 to 5, resulting in a total score ranging from 27 to 135 [ 24 ]. Higher scores on this scale indicate greater critical thinking ability. The YCTD has good reliability (Cronbach’s α = 0.948) and validity when applied to Chinese nursing students [ 25 ].

Caring was assessed using the Caring Dimensions Inventory (CDI), whose 25 items are rated on a five-point Likert scale, giving a total score ranging from 25 to 125 [ 26 ]. Higher scores on the CDI indicate a greater level of caring. The Chinese version of the CDI exhibited good reliability (Cronbach’s α = 0.97) and validity [ 27 ].

  • Interest in learning

The Study Interest Questionnaire (SIQ) was used to assess interest in learning. The SIQ consists of 18 items rated on a four-point Likert scale, giving a total score ranging from 18 to 72, where a higher score indicates a greater degree of interest in the field of study [ 28 ]. The SIQ has good reliability (Cronbach’s α = 0.90) and validity when applied to Chinese nursing students [ 29 ].

Ethical considerations

The institution of the first author granted ethical approval (ethical approval number: REC-2021.801). Written informed consent was obtained from all participants. Participants were permitted to withdraw for any reason at any time without penalty. Guidelines emphasizing safety measures and precautions during the intervention were provided to participants, and study coordinators closely monitored laboratory and simulation sessions to address concerns or potential harm promptly.

Data analysis

Descriptive statistics were used to describe the participant characteristics and baseline characteristics. Continuous variables are presented as the mean and standard deviation, while categorical variables are presented as frequencies and percentages. According to the Quantile–Quantile Plot, the data exhibited an approximately normal distribution. Furthermore, Levene's test indicated equal variances for the variables of knowledge, skill, interprofessional collaboration, critical thinking, caring, and interest in learning, with p-values of 0.171, 0.249, 0.986, 0.634, 0.992, and 0.407, respectively. The baseline characteristics of the four groups were compared using one-way analysis of variance. The indicators of knowledge, skill, interprofessional collaboration, critical thinking, caring, and interest in learning were assessed at baseline, immediately after the intervention, and three months postintervention. Changes in these indicators from baseline were calculated for both the postintervention and three-month follow-up periods. The changes among the four groups were compared using one-way analysis of variance. Cohen's d effect sizes were computed for the between-group comparisons (small effect size = 0.2; medium effect size = 0.5; large effect size = 0.8). Missing data were treated as missing without imputation. The data analysis was conducted using jamovi 2.3.28 ( https://www.jamovi.org/ ). Jamovi was developed on the foundation of the R programming language, and is recognized for its user-friendly interface. The threshold for statistical significance was established at a two-sided p  < 0.05.
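To make the analysis pipeline concrete, the snippet below sketches the core steps named above — change-from-baseline scores, Levene's test, one-way ANOVA across the four groups, and Cohen's d with a pooled standard deviation — on simulated data. It is not the authors' jamovi analysis; the group labels and numbers are placeholders.

```python
# Illustrative re-creation of the analysis described above, on simulated data
# (the study used jamovi; these values are placeholders, not study results).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2022)
change_scores = {}
for group in ["HFS", "CBS", "HFS+CBS", "Case study"]:
    baseline = rng.normal(12.0, 2.0, size=60)
    post = baseline + rng.normal(1.0, 2.0, size=60)
    change_scores[group] = post - baseline          # change from baseline

print(stats.levene(*change_scores.values()))        # equality of variances
print(stats.f_oneway(*change_scores.values()))      # one-way ANOVA on change scores

def cohens_d(a, b):
    """Cohen's d for two independent groups, using a pooled standard deviation."""
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(cohens_d(change_scores["HFS"], change_scores["CBS"]))
```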

Participants

A total of 270 participants were initially recruited from five universities for this study. However, an attrition rate of 11.5% was observed, resulting in 31 participants discontinuing their involvement. Consequently, the final analysis included data from 239 participants who successfully completed the intervention and remained in the study. Specifically, there were 58 participants in the high-fidelity simulation group, 67 in the computer-based simulation group, 57 in the high-fidelity simulation combined with computer-based simulation group, and 57 in the case study group (Fig.  1 ). The participant demographics and baseline characteristics are displayed in Table  2 , and no significant differences were observed in these variables.

Figure 1. Study subject disposition flow chart

Efficacy outcomes

All the intervention groups showed improvements in knowledge after the intervention, with the high-fidelity simulation group showing the greatest improvement (Fig.  2 ). However, there were no significant differences in knowledge improvement among the groups (p = 0.856). The computer-based simulation group and case study group experienced a decrease in knowledge compared to baseline three months after the intervention, while the other groups showed an increase in knowledge. The high-fidelity simulation combined with computer-based simulation group performed best (Fig.  3 ), but no significant differences were observed (p = 0.872). The effect sizes between groups were found to be small, both immediately after the intervention and at the three-month follow-up (Table  3 ).

Figure 2. Changes in all effectiveness outcomes at postintervention. Note: A, high-fidelity simulation group; B, computer-based simulation group; C, high-fidelity simulation combined with computer-based simulation group; D, case study group

Figure 3. Changes in all effectiveness outcomes three months after the intervention. Note: A, high-fidelity simulation group; B, computer-based simulation group; C, high-fidelity simulation combined with computer-based simulation group; D, case study group

The different intervention groups showed improvements in skills after the intervention and three months after the intervention. The high-fidelity simulation combined with computer-based simulation group showed the greatest improvement after the intervention (Fig.  2 ), while the greatest improvement three months after the intervention was observed in the high-fidelity simulation group (Fig.  3 ). There was a significant difference in the improvement in skills among the groups after the intervention ( p  = 0.020). Specifically, the improvement observed in the computer-based simulation group was significantly lower than that in both the high-fidelity simulation group ( p  = 0.048) and the high-fidelity simulation combined with computer-based simulation group ( p  = 0.020). However, three months after the intervention, there was no statistically significant difference in skill improvement among the groups ( p  = 0.139). Immediately after the intervention, the between-group effect sizes were medium for the high-fidelity simulation group versus the computer-based simulation group (Cohen's d = 0.51) and for the computer-based simulation group versus the high-fidelity simulation combined with computer-based simulation group (Cohen's d = 0.56); all other between-group effect sizes were small, both after the intervention and three months after the intervention (Table  3 ).

In all intervention groups except for the high-fidelity simulation group, interprofessional collaboration improved after the intervention and three months after the intervention, with the case study group (Figs. 2 and 3 ) demonstrating the greatest improvement. No significant difference was found between the intervention groups after or three months after the intervention in terms of changes in interprofessional collaboration. Both immediately following the intervention and three months later, the effect sizes between groups were small (Table  3 ).

After the intervention and three months after the intervention, the critical thinking of all the intervention groups improved. Among them, the high-fidelity simulation group improved the most after the intervention (Fig.  2 ), while the computer-based simulation group improved the most three months after the intervention (Fig.  3 ). However, no statistically significant differences were observed in the improvement of critical thinking across the different groups. The between-group effect sizes of each group were small both after the intervention and three months after the intervention (Table  3 ).

Caring improved following the intervention in all intervention groups, with the exception of the high-fidelity simulation group and case study group (Fig.  2 ). However, no significant difference was observed between the intervention groups in terms of changes ( p  = 0.865). A decrease in caring was observed three months after the intervention in all intervention groups, except for the case study group (Fig.  3 ). Nevertheless, no statistically significant difference was detected between the intervention groups in terms of changes (p = 0.607). Both immediately following the intervention and three months later, the effect sizes between groups were small (Table  3 ).

In terms of interest in learning, both the high-fidelity simulation group and the high-fidelity simulation combined with computer-based simulation group improved after the intervention or three months later. Among the groups, the high-fidelity simulation combined with computer-based simulation group improved the most after both the intervention and three months after the intervention (Figs. 2 and 3 ). However, no statistically significant difference was detected between the intervention groups in terms of changes either after the intervention (p = 0.144) or three months after the intervention (p = 0.875). Both immediately following the intervention and three months later, the effect sizes between groups were small (Table  3 ).

Discussion

To our knowledge, this study is the first multicenter randomized controlled trial to explore the effects of different simulation teaching methods on nursing students' competence and the first to evaluate multiple indicators simultaneously. The indicators included both objectively assessed outcomes (knowledge and skills) and subjectively assessed outcomes (interprofessional collaboration, critical thinking, caring, and interest in learning). This study assessed both the immediate and longer-term effects of the intervention by examining its impact immediately after the intervention and three months later.

The results obtained from this study indicate that high-fidelity simulation, computer-based simulation, high-fidelity simulation combined with computer-based simulation, and case study could improve nursing students’ knowledge immediately after intervention. Furthermore, these four teaching methods exhibited comparable effectiveness in improving knowledge. The findings of this study contradict previous meta-analyses that showed that high-fidelity simulation improved nursing students' knowledge over other teaching techniques [ 2 ]. This discrepancy may be attributed to the inclusion of simulation teaching in the previous study alongside theoretical teaching [ 12 ], whereas the current study solely employed simulation teaching without incorporating theoretical instruction. Notably, three months following the intervention, computer-based simulation and case study did not result in knowledge retention. Conversely, high-fidelity simulation, particularly when combined with computer-based simulation, demonstrated knowledge retention, with the latter exhibiting superior performance in this regard. The realistic nature of the simulation provided students with a context in which to apply their knowledge, enhancing their understanding of key concepts [ 30 ]. High-fidelity simulation surpasses computer-based simulation and case study in terms of realism. When combined with computer-based simulation, this approach affords students the opportunity to practice their knowledge in a safe environment while also providing them with access to additional resources and learning opportunities [ 31 ]. Therefore, in this study, high-fidelity simulation combined with computer-based simulation proved to be the most effective at retaining knowledge.

In this study, all four simulation-based education strategies helped students acquire and retain skills. High-fidelity simulation combined with computer-based simulation was more effective for skill acquisition than either method alone. This approach combines the benefits of both teaching methods, providing students with a comprehensive learning experience that unites physical realism and virtual interactivity [ 32 ]. Hybrid simulation creates a seamless learning experience in which individuals can practice their skills in a simulated environment, receive immediate feedback, and then transfer those skills to real-world situations. This integration provides a smooth transition from theoretical knowledge to practical skills, making it easier for individuals to apply what they have learned and enhance their overall performance [ 33 ]. Hybrid simulation may therefore seem an attractive option [ 34 ]; however, this study found that it had no advantage in terms of skill retention; rather, high-fidelity simulation performed best. Because previous studies have not compared hybrid simulation with high-fidelity simulation on skill retention, more research is needed to confirm these results and explore the underlying reasons.

The findings of this study reveal a noteworthy observation: interprofessional collaboration improved across all interventions, except for high-fidelity simulation. This finding diverges from prior studies that indicated high-fidelity simulation as a more effective method for enhancing students' interprofessional collaboration compared to traditional case study [ 35 ]. This discrepancy may be attributed to the use of an unfolding case study in the current study, wherein patient scenarios evolve unpredictably, thereby prompting students and team members to engage in heightened collaborative efforts to address evolving patient care challenges [ 36 ]. Interprofessional collaboration plays a crucial role in improving healthcare outcomes. Studies have shown that when healthcare professionals collaborate effectively, patients experience better outcomes, fewer errors, and shorter hospital stays [ 37 ]. While high-fidelity simulation has gained popularity as a training tool, according to the results of this study, its impact on interprofessional collaboration remains limited. There may be two reasons for this. First, high-fidelity simulation scenarios are often time constrained [ 38 ], which can hinder effective interprofessional collaboration. Each team member may prioritize their individual goals or tasks, making it difficult to achieve optimal teamwork and coordination. Second, interprofessional team members may not have worked together extensively, which can hinder their ability to collaborate effectively in a high-fidelity simulation setting. It takes time to build trust and rapport, which may not be readily available in a simulated environment [ 39 ]. Despite being assigned the roles of senior nurse or junior nurse, participants in the high-fidelity simulation group were provided with the opportunity to engage with peers at various levels and individuals from different professions, such as instructors assuming the role of doctors. However, the duration of the simulation section for this group was limited to only 10 min. In contrast, participants in the computer-based simulation group and case study group were allocated 30 min and 35 min, respectively. It is crucial for healthcare institutions and educators to critically evaluate their simulation-based training programs and incorporate key components that promote interprofessional collaboration [ 40 ].

This study revealed that four interventions effectively promoted students' critical thinking, and these effects lasted for three months after the interventions. Furthermore, high-fidelity simulation was most effective at improving critical thinking in the short term, whereas computer-based simulation was most effective at fostering long-term improvements. High-fidelity simulation involves creating a realistic and immersive environment that closely resembles a real-world scenario [ 41 ]. This approach affords individuals the opportunity to actively participate and immerse themselves in the simulated scenario, thereby enhancing their experiential understanding [ 3 ]. Computer-based simulation does not provide the same immediate and tangible experience as high-fidelity simulation. High-fidelity simulation commonly incorporates the utilization of medical devices and mannequins that closely resemble clinical scenarios, thereby affording students a more authentic and immersive learning encounter. Only 5% of students perceive computer-based simulation as a viable substitute for mannequin-based simulation within the curriculum [ 42 ]. As a result, high-fidelity simulation is highly effective in the short term, and a previous meta-analysis reported similar results [ 43 ]. However, computer-based simulation provides advantages for data collection and analysis that contribute to the long-term development of critical thinking skills. In the simulation, participants can record their actions, decisions, and results [ 3 ]. These data can be used to compare different strategies and approaches, allowing participants to reflect on their own critical thinking skills and identify areas for improvement. Furthermore, it is noteworthy that the four simulation teaching methods demonstrated the ability to enhance students' critical thinking. However, it is important to consider the substantial disparity in costs among these methods. Therefore, educators should carefully evaluate their available resources and opt for the most cost-effective approach to foster students' critical thinking.

This study found limited evidence that any of the four simulation teaching methods improved caring among students. High-fidelity simulation often focuses on technical skills rather than patient interaction or emotional sensitivity [ 44 , 45 ]. Moreover, research has demonstrated that using mannequins in high-fidelity simulation leads some students to perceive them as separate from real-life patients [ 45 ]. This perception reduces students' concern for the consequences of their actions during the simulation [ 45 ], hindering empathy development and limiting the cultivation of their caring abilities [ 46 ]. Unlike high-fidelity simulation, which provides tactile experiences and simulates real-life interactions, computer-based simulation is characterized by the absence of human connection. This lack of physical proximity can hinder the development of caring behaviors such as nonverbal communication, empathy, and sympathy [ 47 , 48 ]. Similarly, the absence of direct patient interaction is a notable limitation of the case study: although case studies simulate complex patient care scenarios, they do not allow students to practice hands-on care or experience caregiving emotions, and this lack of personal connection and guided practice may hinder the development of caring behaviors. By recognizing these limitations and seeking complementary instructional methods, educational institutions can strive to enhance students' caring skills and equip them with the qualities and behaviors necessary for providing compassionate and patient-centered care.

The findings of this study revealed that neither computer-based simulation nor case study improved students' interest in learning, whereas high-fidelity simulation combined with computer-based simulation was most effective. One possible explanation for the ineffectiveness of computer-based simulation and case study in promoting students' interest is that they may lack the authenticity and immersive nature of real-world experiences [ 47 , 48 ]. High-fidelity simulation, on the other hand, provides a more lifelike and interactive learning environment, which may enhance students' engagement, interest, and retention [ 49 ]. High-fidelity simulation combined with computer-based simulation allows students to interact with the simulation in a hands-on manner while also having access to additional resources and information through computer-based simulation [ 50 ]. This combination provides a well-rounded learning experience that can captivate students' attention and keep them engaged. Notably, these findings are exploratory and should be further explored and validated in future studies. Further research should aim to identify the reasons behind the lack of improvement in students' interest in learning when using computer-based simulation and case study alone. Additionally, the impact of different combinations of simulation techniques on students' interest in learning should be investigated to further refine instructional practices.

Limitations

This study provides valuable insights into the effectiveness of simulation-based education in improving nursing students' competences. However, it is essential to acknowledge and address the study's limitations. One of the limitations is the possible selection bias introduced by the recruiting process. It is possible that students who were more motivated or had a greater interest in simulation-based education may have been more likely to participate in the study. This bias may have influenced the outcomes and interpretation of the results. Additionally, the participants were primarily from one cultural background, which may limit the generalizability of the findings. Future studies should include participants from diverse backgrounds to enhance generalizability. Third, participants assigned to different intervention groups may engage in communication and information sharing, potentially leading to contamination effects. To mitigate this issue, future studies could employ cluster randomized controlled trials, which can effectively minimize the risk of contamination among participants. Finally, the follow-up period was relatively short, which limits the understanding of the long-term impact of simulation-based education on competence. Long-term follow-up studies are needed to evaluate the sustained effect of simulation-based education on competence. Future research should aim to address these limitations to further our understanding of the effects of simulation-based education on undergraduate nursing students' competences.

Conclusions

The four methods are effective at improving skills and critical thinking both immediately and over time. Apart from high-fidelity simulation, the other three methods also promoted interprofessional collaboration both immediately and in the long term. High-fidelity simulation combined with computer-based simulation was the most effective approach for enhancing interest in learning both immediately and in the long term. Undergraduate nursing students benefit equally from the four methods in cultivating their knowledge, interprofessional collaboration, critical thinking, caring, and interest in learning, both immediately and over time. High-fidelity simulation and high-fidelity simulation combined with computer-based simulation improve skill more effectively than computer-based simulation in the short term. Nursing educators can select the most suitable teaching method to achieve the intended learning outcomes depending on the specific circumstances.

Availability of data and materials

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Panda S, Dash M, John J, Rath K, Debata A, Swain D, et al. Challenges faced by student nurses and midwives in clinical learning environment – A systematic review and meta-synthesis. Nurse Educ Today. 2021;101: 104875. https://doi.org/10.1016/j.nedt.2021.104875 .

Li YY, Au ML, Tong LK, Ng WI, Wang SC. High-fidelity simulation in undergraduate nursing education: A meta-analysis. Nurse Educ Today. 2022;111: 105291. https://doi.org/10.1016/j.nedt.2022.105291 .

Tamilselvan C, Chua SM, Chew HSJ, Devi MK. Experiences of simulation-based learning among undergraduate nursing students: A systematic review and meta-synthesis. Nurse Educ Today. 2023;121: 105711. https://doi.org/10.1016/j.nedt.2023.105711 .

Mulyadi M, Tonapa SI, Rompas SSJ, Wang R-H, Lee B-O. Effects of simulation technology-based learning on nursing students’ learning outcomes: A systematic review and meta-analysis of experimental studies. Nurse Educ Today. 2021;107: 105127. https://doi.org/10.1016/j.nedt.2021.105127 .

Chernikova O, Heitzmann N, Stadler M, Holzberger D, Seidel T, Fischer F. Simulation-Based Learning in Higher Education: A Meta-Analysis. Rev Educ Res. 2020;90(4):499–541. https://doi.org/10.3102/0034654320933544 .

Lioce L. Healthcare Simulation Dictionary. 2nd ed. Rockville: Agency for Healthcare Research and Quality; 2020.

Kim J, Park J-H, Shin S. Effectiveness of simulation-based nursing education depending on fidelity: a meta-analysis. BMC Med Educ. 2016;16(1):152. https://doi.org/10.1186/s12909-016-0672-7 .

Roberts E, Kaak V, Rolley J. Simulation to Replace Clinical Hours in Nursing: A Meta-narrative Review. Clin Simul Nurs. 2019;37:5–13. https://doi.org/10.1016/j.ecns.2019.07.003 .

Lapkin S, Levett-Jones T. A cost–utility analysis of medium vs. high-fidelity human patient simulation manikins in nursing education. J Clin Nurs. 2011;20(23–24):3543–52. https://doi.org/10.1111/j.1365-2702.2011.03843.x .

Dziuban C, Graham CR, Moskal PD, Norberg A, Sicilia N. Blended learning: the new normal and emerging technologies. Int J Educ Technol High Educ. 2018;15(1):3. https://doi.org/10.1186/s41239-017-0087-5 .

Goldsworthy S, Ferreira C, Shajani Z, Snell D, Perez G. Combining Virtual and High-fidelity Simulation to Foster Confidence and Competency in Postpartum Assessment Complications among Undergraduate Nursing Students. Clin Simul Nurs. 2022;66:18–24. https://doi.org/10.1016/j.ecns.2022.02.001 .

Kang KA, Kim SJ, Lee MN, Kim M, Kim S. Comparison of Learning Effects of Virtual Reality Simulation on Nursing Students Caring for Children with Asthma. Int J Enviro Res Public Health. 2020;17(22):8417. https://doi.org/10.3390/ijerph17228417 .

Ellis M, Hampton D, Makowski A, Falls C, Tovar E, Scott L, et al. Using unfolding case scenarios to promote clinical reasoning for nurse practitioner students. J Am Assoc Nurse Pract. 2023;35(1):55–62. https://doi.org/10.1097/jxx.0000000000000806 .

Englund H. Using unfolding case studies to develop critical thinking skills in baccalaureate nursing students: A pilot study. Nurse Educ Today. 2020;93: 104542. https://doi.org/10.1016/j.nedt.2020.104542 .

Wang X, Yang L, Hu S. Teaching nursing students: As an umbrella review of the effectiveness of using high-fidelity simulation. Nurse Educ Pract. 2024;77: 103969. https://doi.org/10.1016/j.nepr.2024.103969 .

La Carmen C, Angelo D, Valeria C, Ilaria F, Elona G, Cristina P, et al. Effects of high-fidelity simulation based on life-threatening clinical condition scenarios on learning outcomes of undergraduate and postgraduate nursing students: a systematic review and meta-analysis. BMJ Open. 2019;9(2): e025306. https://doi.org/10.1136/bmjopen-2018-025306 .

Au ML, Tong LK, Li YY, Ng WI, Wang SC. Impact of scenario validity and group size on learning outcomes in high-fidelity simulation: A systematics review and meta-analysis. Nurse Educ Today. 2023;121: 105705. https://doi.org/10.1016/j.nedt.2022.105705 .

Book ECfAtNNLE. 2022 National Nurse Licensing Examination Guided Simultaneous Practice Question Set. Beijing: People's Medical Publishing House Co. LTD; 2022.

Todd M, Manz JA, Hawkins KS, Parsons ME, Hercinger M. The Development of a Quantitative Evaluation Tool for Simulations in Nursing Education. Int J Nurs Educ Scholarsh. 2008;5(1). https://doi.org/10.2202/1548-923X.1705

Hayden J, Keegan M, Kardong-Edgren S, Smiley RA. Reliability and Validity Testing of the Creighton Competency Evaluation Instrument for Use in the NCSBN National Simulation Study. Nurs Educ Perspect. 2014;35(4):244–52. https://doi.org/10.5480/13-1130.1 .

Song X, Jin R. Chinese revised CCEI cross-cultural debugging and measurement features evaluation. Int J Nurs. 2018;37(19):2622–7. https://doi.org/10.3760/cma.j.issn.1637-4351.2019.19.009 .

Orchard C, Mahler C, Khalili H. Assessment of the Interprofessional Team Collaboration Scale for Students-AITCS-II (Student): Development and Testing. J Allied Health. 2021;50(1):E1–7.

Shi Y, Zhu Z, Hu Y. The reliability and validity of the Chinese version of the Assessment of Interprofessional Team Collaboration in Student Learning Scale. Chinese J Nurs Educ. 2020;17(5):435–8. https://doi.org/10.3761/j.issn.1672-9234.2020.05.011 .

Shin H, Park CG, Kim H. Validation of Yoon’s Critical Thinking Disposition Instrument. Asian Nurs Res. 2015;9(4):342–8. https://doi.org/10.1016/j.anr.2015.10.004 .

Au ML, Li YY, Tong LK, Wang SC, Ng WI. Chinese version of Yoon Critical Thinking Disposition Instrument: validation using classical test theory and Rasch analysis. BMC Nurs. 2023;22(1):362. https://doi.org/10.1186/s12912-023-01519-y .

Watson R, Lea A. The caring dimensions inventory (CDI): content validity, reliability and scaling. J Adv Nurs. 1997;25(1):87–94. https://doi.org/10.1046/j.1365-2648.1997.1997025087.x .

Tong LK, Zhu MX, Wang SC, Cheong PL, Van IK. A Chinese Version of the Caring Dimensions Inventory: Reliability and Validity Assessment. Int J Environ Res Public Health. 2021;18(13):6834. https://doi.org/10.3390/ijerph18136834 .

Schiefele U, Krapp A, Wild KP, Winteler A. Der Fragebogen zum Studieninteresse (FSI). [The Study Interest Questionnaire (SIQ)]. Diagnostica. 1993;39(4):335–51.

Tong LK, Au ML, Li YY, Ng WI, Wang SC. The mediating effect of critical thinking between interest in learning and caring among nursing students: a cross-sectional study. BMC Nurs. 2023;22(1):30. https://doi.org/10.1186/s12912-023-01181-4 .

Graham AC, McAleer S. An overview of realist evaluation for simulation-based education. Adv Simul. 2018;3(1):13. https://doi.org/10.1186/s41077-018-0073-6 .

Sharoff L. Faculty’s Perception on Student Performance using vSim for Nursing® as a Teaching Strategy. Clin Simul Nurs. 2022;65:1–6. https://doi.org/10.1016/j.ecns.2021.12.007 .

Cole R, Flenady T, Heaton L. High Fidelity Simulation Modalities in Preregistration Nurse Education Programs: A Scoping Review. Clin Simul Nurs. 2023;80:64–86. https://doi.org/10.1016/j.ecns.2023.04.007 .

Park S, Hur HK, Chung C. Learning effects of virtual versus high-fidelity simulations in nursing students: a crossover comparison. BMC Nurs. 2022;21(1):100. https://doi.org/10.1186/s12912-022-00878-2 .

Goldsworthy S, Patterson JD, Dobbs M, Afzal A, Deboer S. How Does Simulation Impact Building Competency and Confidence in Recognition and Response to the Adult and Paediatric Deteriorating Patient Among Undergraduate Nursing Students? Clin Simul Nurs. 2019;28:25–32. https://doi.org/10.1016/j.ecns.2018.12.001 .

Tosterud R, Hedelin B, Hall-Lord ML. Nursing students’ perceptions of high- and low-fidelity simulation used as learning methods. Nurse Educ Pract. 2013;13(4):262–70. https://doi.org/10.1016/j.nepr.2013.02.002 .

Cheng C-Y, Hung C-C, Chen Y-J, Liou S-R, Chu T-P. Effects of an unfolding case study on clinical reasoning, self-directed learning, and team collaboration of undergraduate nursing students: A mixed methods study. Nurse Educ Today. 2024;137: 106168. https://doi.org/10.1016/j.nedt.2024.106168 .

Kaiser L, Conrad S, Neugebauer EAM, Pietsch B, Pieper D. Interprofessional collaboration and patient-reported outcomes in inpatient care: a systematic review. Syst Rev. 2022;11(1):169. https://doi.org/10.1186/s13643-022-02027-x .

Tong LK, Li YY, Au ML, Wang SC, Ng WI. High-fidelity simulation duration and learning outcomes among undergraduate nursing students: A systematic review and meta-analysis. Nurse Educ Today. 2022;116: 105435. https://doi.org/10.1016/j.nedt.2022.105435 .

Livne N. High-fidelity simulations offer a paradigm to develop personal and interprofessional competencies of health students: A review article. Int J Allied Health Sci Pract. 2019;17(2). https://doi.org/10.46743/1540-580X/2019.1835

Marion-Martins AD, Pinho DLM. Interprofessional simulation effects for healthcare students: A systematic review and meta-analysis. Nurse Educ Today. 2020;94: 104568. https://doi.org/10.1016/j.nedt.2020.104568 .

Macnamara AF, Bird K, Rigby A, Sathyapalan T, Hepburn D. High-fidelity simulation and virtual reality: an evaluation of medical students’ experiences. BMJ simulation & technology enhanced learning. 2021;7(6):528–35. https://doi.org/10.1136/bmjstel-2020-000625 .

Foronda CL, Swoboda SM, Henry MN, Kamau E, Sullivan N, Hudson KW. Student preferences and perceptions of learning from vSIM for Nursing™. Nurse Educ Pract. 2018;33:27–32. https://doi.org/10.1016/j.nepr.2018.08.003 .

Lei Y-Y, Zhu L, Sa YTR, Cui X-S. Effects of high-fidelity simulation teaching on nursing students’ knowledge, professional skills and clinical ability: A meta-analysis and systematic review. Nurse Educ Pract. 2022;60: 103306. https://doi.org/10.1016/j.nepr.2022.103306 .

Najjar RH, Lyman B, Miehl N. Nursing Students’ Experiences with High-Fidelity Simulation. Int J Nurs Educ Scholarsh. 2015;12(1):27–35. https://doi.org/10.1515/ijnes-2015-0010 .

Au ML, Lo MS, Cheong W, Wang SC, Van IK. Nursing students’ perception of high-fidelity simulation activity instead of clinical placement: A qualitative study. Nurse Educ Today. 2016;39:16–21. https://doi.org/10.1016/j.nedt.2016.01.015 .

Dean S, Williams C, Balnaves M. Practising on plastic people: Can I really care? Contemp Nurse. 2015;51(2–3):257–71. https://doi.org/10.1080/10376178.2016.1163231 .

Chang YM, Lai CL. Exploring the experiences of nursing students in using immersive virtual reality to learn nursing skills. Nurse Educ Today. 2021;97: 104670. https://doi.org/10.1016/j.nedt.2020.104670 .

Jeon J, Kim JH, Choi EH. Needs Assessment for a VR-Based Adult Nursing Simulation Training Program for Korean Nursing Students: A Qualitative Study Using Focus Group Interviews. Int J Environ Res Public Health. 2020;17(23):8880. https://doi.org/10.3390/ijerph17238880 .

Davis R. Nursing Student Experiences with High-Fidelity Simulation Education [Ed.D.]. Arizona: Grand Canyon University; 2021.

Saab MM, Landers M, Murphy D, O’Mahony B, Cooke E, O’Driscoll M, et al. Nursing students’ views of using virtual reality in healthcare: A qualitative study. J Clin Nurs. 2022;31(9–10):1228–42. https://doi.org/10.1111/jocn.15978 .

Acknowledgements

Not applicable.

Funding

This work was supported by a research grant from the Higher Education Fund of the Macao SAR Government (project number: HSS-KWNC-2021–01). The funding source had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results.

Author information

Authors and affiliations.

Kiang Wu Nursing College of Macau, Edifício do Instituto de Enfermagem Kiang Wu de Macau, Avenida do Hospital das Ilhas no.447, Coloane, RAEM, Macau SAR, China

Lai Kun Tong, Yue Yi Li, Mio Leng Au, Wai I. Ng & Si Chen Wang

School of Nursing, Yangzhou University, No.136, Jiangyang Middle Road, Hanjiang District, Yangzhou, Jiangsu Province, China

Yongbing Liu

School of Nursing, Guangzhou Xinhua University, 19 Huamei Road, Tianhe District, Guangzhou, Guangdong Province, China

Yi Shen

School of Nursing, Guangzhou Medical University, Dongfeng West Road, Yuexiu District, Guangzhou, Guangdong Province, China

Liqiang Zhong

School of Nursing, Shenzhen University, No. 3688, Nanhai Road, Nanshan District, Shenzhen, Guangdong Province, China

Xichenhui Qiu

Contributions

Study conceptualization and planning were organized and performed by LKT, YYL, MLA, WIN, SCW, YBL, YS, LQZ, and XCHQ. Data collection, data analysis and data interpretation were performed by LKT, YYL, MLA, WIN, SCW, YBL, YS, LQZ, and XCHQ. LKT drafted the initial version of the manuscript. YYL, MLA, WIN, SCW, YBL, YS, LQZ, and XCHQ revised the manuscript for important intellectual content. All authors had full access to the data and have reviewed and approved the submitted version of the manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Mio Leng Au.

Ethics declarations

Ethics approval and consent to participate.

This research was approved by the Research Management and Development Department of Kiang Wu Nursing College of Macau (No. REC-2021.801) and conducted according to the Declaration of Helsinki. It was a completely voluntary, anonymous, and unrewarded study. Written consent was obtained from all participants.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article.

Tong, L.K., Li, Y.Y., Au, M.L. et al. The effects of simulation-based education on undergraduate nursing students' competences: a multicenter randomized controlled trial. BMC Nurs 23 , 400 (2024). https://doi.org/10.1186/s12912-024-02069-7

Received : 21 March 2024

Accepted : 05 June 2024

Published : 17 June 2024

DOI : https://doi.org/10.1186/s12912-024-02069-7

  • High-fidelity simulation
  • Computer-based simulation
  • High-fidelity simulation combined with computer-based simulation

BMC Nursing

ISSN: 1472-6955

Healthcare (Dec 2023)

Teaching Strategies for Developing Clinical Reasoning Skills in Nursing Students: A Systematic Review of Randomised Controlled Trials

  • Ana Pérez-Perdomo,
  • Adelaida Zabalegui

Background: Clinical reasoning (CR) is a holistic and recursive cognitive process. It allows nursing students to accurately perceive patients’ situations and choose the best course of action among the available alternatives. This study aimed to identify randomised controlled trials in the literature concerning clinical reasoning in nursing students. Methods: A comprehensive search of PubMed, Scopus, Embase, and the Cochrane Controlled Register of Trials (CENTRAL) was performed to identify relevant studies published up to October 2023. The following inclusion criteria were applied: (a) clinical reasoning, clinical judgment, and critical thinking in nursing students as a primary study aim; (b) articles published in the last eleven years; (c) research conducted between January 2012 and September 2023; (d) articles published only in English and Spanish; and (e) randomised clinical trials. The Critical Appraisal Skills Programme tool was utilised to appraise all included studies. Results: Fifteen papers were analysed. Based on the teaching strategies used in the articles, two groups were identified: simulation methods and learning programs. The studies focus on comparing different teaching methodologies. Conclusions: This systematic review detected different approaches that help nursing students improve their reasoning and decision-making skills. The use of mobile apps, digital simulations, and learning games has a positive impact on the clinical reasoning abilities of nursing students and on their motivation. Incorporating new technologies into problem-solving-based learning and decision-making can also enhance nursing students’ reasoning skills. Nursing schools should evaluate their current methods and consider integrating or modifying new technologies and methodologies that can help enhance students’ learning and improve their clinical reasoning and cognitive skills.

  • nursing student
  • clinical reasoning
  • clinical decision making
  • thinking skills
  • randomised controlled trials

Virtual Simulation to Enhance Clinical Reasoning in Nursing: A Systematic Review and Meta-analysis

Jia Jia Marcia Sim

a Nursing Department, Ng Teng Fong General Hospital, National University Health System, Singapore

Khairul Dzakirin Bin Rusli

b Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore

Tracy Levett-Jones

c School of Nursing & Midwifery, Faculty of Health, University of Technology Sydney, Australia

Sok Ying Liaw

The COVID-19 pandemic has given rise to more virtual simulation training. This study aimed to review the effectiveness of virtual simulations and their design features in developing clinical reasoning skills among nurses and nursing students.

A systematic search of CINAHL, PubMed, the Cochrane Library, Embase, ProQuest, PsycINFO, and Scopus was conducted. The PRISMA guidelines, the Cochrane risk-of-bias tool, and GRADE were used to assess the articles. Meta-analyses and random-effects meta-regression were performed.

The search retrieved 11,105 articles, and 12 randomized controlled trials (RCTs) were included. Meta-analysis demonstrated a significant improvement in clinical reasoning based on applied knowledge and clinical performance among learners in the virtual simulation group compared with the control group. Meta-regression did not identify any significant covariates. Subgroup analyses revealed that virtual simulations with patient management contents, using multiple scenarios with nonimmersive experiences, conducted more than 30-minutes and postscenario feedback were more effective.

Conclusions

Virtual simulations can improve clinical reasoning skills. This study may inform nurse educators on how virtual simulation should be designed to optimize the development of clinical reasoning.

Introduction

The coronavirus disease 2019 (COVID-19) pandemic has led to increased opportunities for the development of virtual technologies in nursing education. With the unpredictable nature of the pandemic, continued public health mitigations such as safe-distancing measures and avoidance of large group classes have necessitated the transition to more integrated, blended learning approaches in nursing education ( Haslam, 2021 ). Some educators have swiftly incorporated virtual simulation into nursing curricula to complement face-to-face teaching, while others have turned to virtual simulation to supplement access to limited clinical placements, particularly in specialist areas such as mental health, pediatrics, and maternal health ( Verkuyl et al., 2021 ).

Despite the increased adoption, understanding of the definition of virtual simulation among educators and researchers has remained unclear ( Foronda et al., 2020 ). This paper adopts the definition of virtual simulation from the Healthcare Simulation Dictionary published by the Agency for Healthcare Research and Quality (AHRQ), which refers to the recreation of reality portrayed on computer screens, involving real people operating simulated systems and playing key roles in performing skills, engaging in decision-making or communicating ( Lioce, 2020 ). Virtual simulation can take the form of serious games, virtual reality, or partially or completely immersive screen-based experiences, with or without the use of headsets. It can be an effective pedagogy in nursing education to improve acquisition of knowledge, skills, critical thinking, self-confidence, and learner satisfaction ( Foronda et al., 2020 ).

A unique function of virtual simulation is the development of clinical reasoning. Levett-Jones et al. (2010) conceptualized clinical reasoning as a process by which one gathers cues, processes the information, identifies the problem, plans and performs actions, evaluates outcomes, and reflects on and learns from the process. This cognitive and meta-cognitive process of synthesizing knowledge and patient data in relation to specific clinical situations is vital for nurses to respond to clinical changes and make decisions on care management ( Clemett & Raleigh, 2021 ; Victor-Chmil, 2013 ). It is also important to enhance nursing students’ and nurses’ clinical reasoning skills to enable them to provide quality and safe patient care by making accurate inferences and evidence-based decisions ( Mohammadi-Shahboulaghi et al., 2021 ).

Virtual simulation uses clinical scenarios for deliberate practice, a highly customizable, repetitive and structured activity with explicit learning objectives, to enhance learners’ performance in clinical decision-making skills in identifying patient problems and in care management that emphasizes decision-making with consequences ( Ericsson et al., 1993 ; LaManna et al., 2019 ; Levett-Jones et al., 2019 ). The scenarios can be adjusted in complexity to allow for repetition and deliberate practice in a safe and controlled environment ( Borg Sapiano et al., 2018 ). These scenarios provide feedback with associated expert practice and rationales that can support the development of clinical reasoning ( Posel et al., 2015 ). The features of virtual simulation correspond with best practice guidelines for healthcare simulation ( Motola et al., 2013 ) and the theoretical framework of experiential learning ( Shin et al., 2019 ). As in other simulation methods, virtual simulation allows learners to be actively engaged in an experience and reflect on those experiences through feedback and assessment methods. These experiences are conceptualized and stored in learners’ existing cognitive frameworks to be utilized in real-world clinical practice ( Kolb, 1984 ).

Prior reviews have supported the use of virtual simulation in nursing education but are limited mainly to narrative synthesis and a focus on general learning outcomes ( Coyne et al., 2021 ; Foronda et al., 2020 ; Shin et al., 2019 ). Only one review examined the effect of virtual simulation for teaching diagnostic reasoning to healthcare providers ( Duff et al., 2016 ). However, that review employed a scoping review methodology and had the acknowledged limitation of including only 12 studies published between 2008 and 2015 ( Duff et al., 2016 ). Although the use of virtual simulation has increased in nursing education, evaluation of its impact on the development of clinical reasoning skills has not yet been carried out. This could be due to the variety of approaches, including multiple-choice questions, script concordance, and clinical performance assessment, that have been used as proxies to evaluate the outcome measure of clinical reasoning ( Clemett & Raleigh, 2021 ). Thampy et al. (2019) recommended targeting the higher levels of Miller's pyramid of clinical competence to assess clinical reasoning. This pyramid has four levels in its competency hierarchy: knowledge (tested by written assessment), applied knowledge (tested by problem-solving exercises such as case scenarios and written assignments), skills demonstration (through simulation and clinical exams), and practice (through observations in real clinical settings) ( Witheridge et al., 2019 ). Currently, there is limited understanding of how virtual simulations can be designed to optimize the development of clinical reasoning. In view of the abovementioned research gaps, this review aimed to evaluate the effectiveness of virtual simulations and their associated design features for developing clinical reasoning among nurses and nursing students. The review was guided by the following research questions:

  • 1. What is the effectiveness of virtual simulation on clinical reasoning in nursing education?
  • 2. What are the essential features in designing virtual simulations that improve clinical reasoning in nursing?

This systematic review and meta-analysis were reported according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) checklist (see Appendix 1) ( Liberati et al., 2009 ) and the Cochrane Handbook for Systematic Reviews ( Higgins, Thomas, et al., 2020 ).

Eligibility Criteria

Inclusion criteria were as follows: (a) pre- or post-registration nursing education, (b) randomized controlled trial (RCT) with a comparison group, (c) study intervention using virtual simulation that incorporated experiential learning approaches, and (d) at least one outcome assessing clinical reasoning at Miller's pyramid level two or above. Knowledge assessed at level one of Miller's pyramid ("knows" the facts) was excluded. A detailed description of the inclusion and exclusion criteria is presented in Appendix 2.

Search Strategy

Two systematic review databases, PubMed Clinical Queries and the Cochrane Database of Systematic Reviews, were searched to prevent duplication. This was followed by a three-step search strategy that was developed with a librarian and based on the Cochrane Handbook for Systematic Reviews ( Lefebvre et al., 2020 ). First, a search was conducted from inception to 10 January 2021 using keywords and index terms in seven bibliographic databases: PubMed, Scopus, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO and ProQuest (see Appendix 3). These databases were selected as they are major scientific databases for healthcare-related papers. Second, clinical trial registries, including ClinicalTrials.gov and CenterWatch, were searched for ongoing and unpublished trials. Lastly, grey literature, targeted journals, and reference lists were searched to optimize the retrieval of relevant articles.

Study Selection

The PRISMA framework, involving four stages (identification, screening, eligibility, and inclusion), was adopted for study selection. The reference management software EndNote X9 ( Clarivate Analytics, 2020 ) was used to import the records and remove duplicates. Next, two independent reviewers (JJMS and KDBR) screened titles and abstracts to select full-text articles for eligibility assessment based on the inclusion and exclusion criteria. Subsequently, both reviewers compared findings, and any disagreements were resolved through discussion or consultation with a third reviewer (SYL). Reasons for trial exclusion are indicated in the PRISMA flow diagram (see Figure 1 ).

Figure 1

Preferred reporting items for systematic review and meta-analyses (PRISMA) flow diagram.

Data Extraction

A modified Cochrane data extraction form ( Li, Higgins, & Deeks, 2020 ) was used by the two independent reviewers (JJMS and KDBR) for data extraction. Items extracted from eligible studies included author(s), year, country, design, participants, sample size, intervention, comparator, outcomes, intention to treat (ITT), attrition rates, protocol, and trial registration. The specific components of virtual simulation included intervention regime (number/length of sessions, duration), learning content, feedback (postscenario/scenario-embedded), and immersion (nonimmersive/immersive). Authors were contacted for missing relevant data. A third reviewer (SYL) reviewed and confirmed the extracted data.

Quality Assessment

The Cochrane Collaboration's tool (version 1) was utilized by two independent reviewers (JJMS and KDBR) to assess risk of bias of individual studies ( Minozzi et al., 2020 ). The presence of five biases, including selection, performance, detection, attrition, and reporting biases, was examined through (a) random sequence generation, (b) allocation concealment, (c) blinding of participants and personnel, (d) blinding of outcome assessment, (e) incomplete outcome data and (f) selective reporting. These six domains were each appraised as low, unclear, or high risk depending on the information provided. In addition, attrition rate, missing data management, ITT, trial and protocol registration, and funding were examined to ensure robustness of trials.

Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) was used to assess the overall strength of evidence ( GRADEpro, 2020 ). An overall grade of "high," "moderate," "low," or "very low" was given depending on five domains (methodological limitations, inconsistency, indirectness, imprecision, and publication bias) ( Schünemann, Brożek, Guyatt, & Oxman, 2013 ).

Data Synthesis and Statistical Analyses

Review Manager (RevMan) (version 5.4.1) (The Cochrane Collaboration, 2020) was utilized to analyze the outcomes of the meta-analyses. A random-effects model was used because it accounts for the statistical assumption of variation in the estimation of mean scores across all selected trials ( Deeks, Higgins, & Altman, 2020 ). The Z statistic at the significance level of p < .05 was adopted to evaluate the overall effect. The standardized mean difference (SMD), or Cohen's d, was used to express the effect size of continuous outcomes and its magnitude, where d = 0.01 is very small, d = 0.2 small, d = 0.5 medium, d = 0.8 large, d = 1.2 very large, and d = 2.0 huge ( Sawilowsky, 2009 ). Cochran's Q (χ² test) was utilized to evaluate statistical heterogeneity, with statistical significance of χ² set at p < .10. The I² statistic was adopted to quantify the degree of heterogeneity, where I² was categorized as unimportant (0%-40%), moderate (30%-60%), substantial (50%-90%), or considerable (75%-100%) ( Chaimani et al., 2020 ). Sensitivity analyses were utilized to remove heterogeneous trials to ensure homogeneity ( Deeks, Higgins, & Altman, 2020 ). Subgroup analyses were conducted to determine sources of heterogeneity and compare the intervention effects among intervention features ( Higgins, Savović, Page, Elbers, & Sterne, 2020 ). Predefined subgroups included learning content, duration, feedback, immersion, and number of scenarios ( Deeks, Higgins, & Altman, 2020 ).
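To make the pooling procedure concrete, the sketch below implements a DerSimonian-Laird random-effects model for two-arm summary data, computing the SMD, Cochran's Q, I², and the overall Z test as described above. It is a minimal illustration rather than the RevMan analysis used in this review, and the group means, SDs, and sample sizes in the example are placeholder values, not data from the included trials.

```python
# Minimal sketch of random-effects pooling of standardized mean differences
# (DerSimonian-Laird). Illustrative only; the trial summaries are placeholders.
import numpy as np

def smd_and_variance(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (SMD) for two groups and its approximate sampling variance."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooled SMD with its Z test, Cochran's Q and I^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances                                  # fixed-effect weights
    q = np.sum(w * (effects - np.sum(w * effects) / np.sum(w))**2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = 1 / (variances + tau2)                      # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, pooled / se, q, i2

# Placeholder (mean, SD, n) summaries for intervention vs comparator arms
trials = [(78.0, 9.0, 43, 71.0, 10.0, 43),
          (25.4, 4.1, 73, 24.6, 4.3, 73),
          (80.2, 7.5, 31, 74.9, 8.8, 26)]
effects, variances = zip(*(smd_and_variance(*t) for t in trials))
pooled, se, z, q, i2 = random_effects_pool(effects, variances)
print(f"Pooled SMD = {pooled:.2f} (SE {se:.2f}), Z = {z:.2f}, Q = {q:.2f}, I^2 = {i2:.0f}%")
```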

Meta-regression was performed using Jamovi (version 1.6) ( The Jamovi Project, 2021 ) to examine whether heterogeneity among trials was attributable to covariates ( Deeks, Higgins, & Altman, 2020 ). A random-effects meta-regression model was utilized to determine whether the year of publication, age of participants, sample size, learning content, number of scenarios, type of feedback, and immersive experience influenced the effect size of applied knowledge in virtual simulation. Random-effects meta-regression analysis at the significance level of p < .05 was adopted.
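The sketch below shows the mechanics of a random-effects meta-regression on a single moderator (learning duration is used as the example). For brevity it takes the between-study variance tau² as given instead of estimating it iteratively as dedicated meta-analysis software does, and the effect sizes, variances, and moderator values are illustrative placeholders rather than data from this review.

```python
# Simplified random-effects meta-regression: weighted least squares with
# weights 1/(v_i + tau^2). Inputs are illustrative placeholders.
import numpy as np

def meta_regression(effects, variances, moderator, tau2):
    """Return [intercept, slope], their SEs and z values for one moderator."""
    y = np.asarray(effects, dtype=float)
    X = np.column_stack([np.ones_like(y), np.asarray(moderator, dtype=float)])
    W = np.diag(1.0 / (np.asarray(variances, dtype=float) + tau2))
    cov = np.linalg.inv(X.T @ W @ X)        # covariance of the estimates
    beta = cov @ X.T @ W @ y
    se = np.sqrt(np.diag(cov))
    return beta, se, beta / se

effects   = [0.48, 0.84, 0.91, 1.33, 0.57]   # illustrative SMDs
variances = [0.06, 0.05, 0.07, 0.09, 0.04]
duration  = [10, 45, 120, 150, 180]          # minutes of virtual simulation
beta, se, z = meta_regression(effects, variances, duration, tau2=0.10)
print(f"slope = {beta[1]:.3f} (SE {se[1]:.3f}), z = {z[1]:.2f}")
```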

Results

As shown in Figure 1 , the study selection process, conducted following the PRISMA guidelines ( Liberati et al., 2009 ), identified a total of 11,105 records retrieved from the seven databases and additional sources; 4,957 duplicate records were removed using EndNote X9 software. After 6,109 records were excluded based on titles and abstracts, the remaining 39 full-text articles were assessed for eligibility. Twelve RCTs met the criteria for this review.

Study Characteristics

Table 1 summarizes the characteristics of the 12 RCTs. In total, there were 856 participants in the included studies, with mean ages ranging from 18.97 ( Bayram & Caliskan, 2019 ) to 27.61 years ( Liaw et al., 2017 ). They represented seven countries, namely Canada (n = 1) ( Cobbett & Clarke, 2016 ), China (n = 1) ( Gu et al., 2017 ), France (n = 1) ( Blanié et al., 2020 ), Portugal (n = 1) ( Padilha et al., 2019 ), Singapore (n = 5) ( Liaw et al., 2014 ; Liaw, Wong, Ang, et al., 2015; Liaw, Wong, Chan, et al., 2015; Liaw et al., 2017 ; Tan et al., 2017 ), Turkey (n = 1) ( Bayram & Caliskan, 2019 ) and the United States (n = 2) (Li, 2016 ; LeFlore et al., 2012 ). The target population was mostly undergraduate nursing students (n = 10), most of them final-year students (n = 6), while three trials ( Liaw, Wong, Ang, et al., 2015a ; Liaw, Wong, Chan, et al., 2015b ; Liaw et al., 2017 ) included nurses.

Table 1. Characteristics of the 12 Included Randomized Controlled Trials (RCTs)

Each entry lists: study design and country; participants and age (mean ± SD); sample size (I = intervention, C = comparator, T = total); intervention with learning duration (LD); comparator; outcomes (measures) with p values; attrition rate; ITT/MDM; and protocol/registration/funding.

  • 2-arm RCT, Turkey. Undergraduate nursing students (1st year); age I: NM, C: NM, overall 18.97 ± 1.00. Sample: I = 43, C = 43, T = 86. Intervention: virtual game (single scenario in an immersive virtual environment on tracheostomy clinical procedural skills, followed by postscenario feedback); LD 10 minutes. Comparator: nondigital education (classroom-based teaching). Outcomes: applied knowledge (self-developed MCQ), p = .568; skills demonstration (self-developed performance tool), p = .017. Attrition: I 13.5%, C 13.5%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, France. Undergraduate nursing students (final year); age I: 24.0 ± 6.40, C: 25.0 ± 6.50, overall 24.5. Sample: I = 73, C = 73, T = 146. Intervention: virtual game (multiple scenarios in a nonimmersive virtual environment on postoperative complications management, followed by postscenario feedback); LD 120 minutes. Comparator: nondigital education (classroom-based teaching). Outcomes: applied knowledge (SCTs), p = .43. Attrition: I 0%, C 0%. ITT/MDM: NA/NA. Protocol/registration/funding: No/Yes/No.
  • 2-arm RCT, Canada. Undergraduate nursing students (final year); age I: NM, C: NM, overall 25.0. Sample: I = 27, C = 28, T = 55. Intervention: vSim (single scenario in a nonimmersive virtual environment on maternal-newborn complications management, followed by postscenario feedback); LD 45 minutes. Comparator: nondigital education (PS). Outcomes: applied knowledge (self-developed MCQ), p = .31. Attrition: I 1.8%, C 0%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, China. Undergraduate nursing students (2nd year); age I: 19.0 ± 0.58, C: 19.29 ± 0.73, overall 19.15. Sample: I = 13, C = 14, T = 27. Intervention: vSim (multiple scenarios in a nonimmersive virtual environment on fundamentals of nursing clinical procedural skills, followed by postscenario feedback); LD 29 minutes. Comparator: nondigital education (lecture). Outcomes: applied knowledge (self-developed MCQ), p = .032. Attrition: I 3.6%, C 0%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/No.
  • 2-arm RCT, United States. Undergraduate nursing students (final year); age I: 25.68 ± 6.80, C: 25.63 ± 4.13, overall 26.65. Sample: I = 22, C = 27, T = 49. Intervention: vSim (multiple scenarios in a nonimmersive virtual environment on management of acute and chronic diseases, followed by postscenario feedback); LD 150 minutes. Comparator: nondigital education (PS). Outcomes: applied knowledge (HSRT), p = .418. Attrition: I 5.8%, C 0%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/No.
  • 2-arm RCT, United States. Undergraduate nursing students (final year); age I: NM, C: NM, overall 25.7. Sample: I = 46, C = 47, T = 93. Intervention: Virtual Patient Trainer (Unreal Engine 3) (multiple scenarios in an immersive virtual environment on paediatric respiratory issues management, followed by scenario-embedded feedback); LD 180 minutes. Comparator: nondigital education (lecture). Outcomes: applied knowledge (self-developed MCQ), p = .004. Attrition: I 0%, C 0%. ITT/MDM: NA/NA. Protocol/registration/funding: No/No/No.
  • 2-arm RCT, Singapore. Undergraduate nursing students (final year); age overall 21.86 ± 1.13. Sample: I = 31, C = 26, T = 57. Intervention: e-RAPIDS (multiple scenarios in a nonimmersive virtual environment on clinical deterioration management, followed by postscenario feedback); LD 120 minutes. Comparator: nondigital education (PS). Outcomes: skills demonstration (RAPIDS-Tool), p = .94. Attrition: I 0%, C 6.6%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, Singapore. Registered nurses; age overall 25.58 ± 3.19. Sample: I = 35, C = 32, T = 67. Intervention: e-RAPIDS (multiple scenarios in a nonimmersive virtual environment on clinical deterioration management, followed by postscenario feedback); LD 180 minutes. Comparator: no-intervention control group. Outcomes: skills demonstration (RAPIDS-Tool), p < .001. Attrition: I 0%, C 4.3%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, Singapore. Registered nurses; age I: 26.17 ± 3.17, C: 24.94 ± 3.14, overall 25.6. Sample: I = 35, C = 32, T = 67. Intervention: e-RAPIDS (multiple scenarios in a nonimmersive virtual environment on clinical deterioration management, followed by postscenario feedback); LD 180 minutes. Comparator: no-intervention control group. Outcomes: applied knowledge (self-developed MCQ), p < .001; skills demonstration (modified RAPIDS-Tool), p < .001. Attrition: I 0%, C 4.3%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, Singapore. Enrolled nurses; age I: 28.16 ± 4.00, C: 27.06 ± 4.30, overall 27.61 ± 4.16. Sample: I = 32, C = 32, T = 64. Intervention: e-RAPIDS (multiple scenarios in a nonimmersive virtual environment on clinical deterioration management, followed by postscenario feedback); LD 150 to 180 minutes. Comparator: no-intervention control group. Outcomes: applied knowledge (self-developed MCQ), p = .01; skills demonstration (modified RAPIDS-Tool), p = .001. Attrition: I 3.0%, C 1.5%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, Porto, Portugal. Postgraduate nursing students (final year); age I: 19.29 ± 0.46, C: 20.29 ± 2.19, overall 19.99 ± 1.99. Sample: I = 21, C = 21, T = 42. Intervention: Clinical Virtual Simulator (Body Interact) (single scenario in a nonimmersive virtual environment on management of respiratory issues, followed by postscenario feedback); LD 45 minutes. Comparator: nondigital education (PS). Outcomes: applied knowledge (self-developed true/false questions and MCQ), p = .001. Attrition: I 12.5%, C 12.5%. ITT/MDM: No/Yes. Protocol/registration/funding: No/No/Yes.
  • 2-arm RCT, Singapore. Undergraduate nursing students (second year); age I: 21.14 ± 2.08, C: 20.72 ± 0.96, overall 20.59. Sample: I = 57, C = 46, T = 103. Intervention: virtual game (3DHive) (single scenario in an immersive virtual environment on blood transfusion and post-transfusion reaction management, followed by scenario-embedded feedback); LD 30 minutes. Comparator: wait-list control group. Outcomes: applied knowledge (self-developed MCQ), p < .001; skills demonstration (self-developed performance tool), p = .11. Attrition: I 4.5%, C 2.7%. ITT/MDM: No/No. Protocol/registration/funding: No/No/No.

C = comparator; HSRT = health science reasoning test; I = intervention group; ITT = intention to treat; LD = learning duration; MCQ = multiple-choice questions; MDM = missing data management; NA = not applicable; NM = not mentioned; PS = physical simulation; RAPIDS = Rescuing A Patient In Deteriorating Situations; SCTs = script concordance tests; T = total number of participants included in meta-analysis.

Details of Virtual Simulation

The types of virtual simulation used in the papers included vSim TM (n = 3) ( Cobbett & Clarke, 2016 ; Gu et al., 2017 ; Li, 2016 ), e-RAPIDS (n = 4) ( Liaw et al., 2014 ; Liaw, Wong, Ang, et al., 2015; Liaw, Wong, Chan, et al., 2015; Liaw et al., 2017 ) and the Clinical Virtual Simulator (Body Interact) (n = 1) ( Padilha et al., 2019 ). Four of the RCTs had no specific name for the simulation. All of the included trials involved learning topics related to patient care management focused on acute care, except for Bayram and Caliskan (2019) and Gu et al. (2017) , which focused on clinical procedural skills including tracheostomy care, medication administration and urinary catheterization. The virtual environments were either immersive using three-dimensional (3D; n = 3) or nonimmersive using two-dimensional (2D; n = 9) modalities. All involved one scenario, except for Cobbett and Clarke (2016) , which involved two; Li (2016) , which involved five; and Bayram and Caliskan (2019) and Gu et al. (2017) , in which the number was user-determined. Only two studies used scenario-embedded feedback ( LeFlore et al., 2012 ; Tan et al., 2017 ), with the others (n = 10) utilizing postscenario feedback. Learning duration ranged from 10 ( Bayram & Caliskan, 2019 ) to 180 minutes (Liaw, Wong, Ang, et al., 2015).

Risk of Bias Assessment

All studies were appraised as having a high risk of bias, except Tan et al. (2017) and Liaw et al. (2017) , for which the risk of bias was rated as unclear (see Appendix 4). The predominant risks of bias were: unclear risk of reporting bias (100%) owing to a lack of trial registration and ITT analysis for transparency, high risk of performance bias (58.3%) owing to the nature of virtual simulation, and high or unclear risk of selection bias for allocation concealment owing to a lack of apparent evidence (58.3%). Attrition bias was generally low; only two studies ( Bayram & Caliskan, 2019 ; Padilha et al., 2019 ) had attrition rates exceeding 20%, which raises threats to validity in those trials.

Applied Knowledge Outcomes

Figure 2 presents the pooled meta-analysis results from the ten RCTs that measured knowledge scores in the virtual simulation and comparator groups. A total of 732 participants were included in the analysis, which yielded a significant increase in knowledge scores with virtual simulation ( Z  = 3.39, p < .001), with a large effect size ( d  = 0.84). Given that substantial heterogeneity ( I²  = 89%, p < .001) was detected, sensitivity and subgroup analyses were performed; the sensitivity analysis did not improve heterogeneity.

Figure 2

Forest plot of standardized mean difference (95% CI) on applied knowledge scores (post intervention) in VS.
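As a quick arithmetic illustration (not a value read from the forest plot), the standard error and 95% CI implied by the reported pooled estimate can be back-calculated from d = 0.84 and Z = 3.39:

```python
# Back-calculate the SE and 95% CI implied by a pooled SMD and its Z value.
d, z = 0.84, 3.39                    # pooled applied-knowledge SMD and Z
se = d / z                           # Z = d / SE  =>  SE = d / Z
ci_low, ci_high = d - 1.96 * se, d + 1.96 * se
print(f"SE = {se:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")   # ~0.25, (0.35, 1.33)
```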

Skills Demonstration Outcomes

Figure 3 illustrates the pooled meta-analysis results from the six RCTs, with a total of 444 participants, in which skills demonstration scores were used as an outcome. The analysis indicated a significant improvement in skills with virtual simulation ( Z  = 3.34, p < .001), with a very large to huge effect size ( d  = 1.79). Because of considerable heterogeneity ( I²  = 96%, p < .001), sensitivity and subgroup analyses were conducted; the sensitivity analysis did not improve heterogeneity.

Figure 3

Forest plot of standardized mean difference (95% CI) on skills demonstration scores (post intervention) in VS.

Subgroup Analyses

Subgroup analyses were conducted to examine the key features of virtual simulation that result in the acquisition of clinical reasoning skills through applied knowledge and skills demonstration (see Table 2 ). Virtual simulation had a greater effect in increasing knowledge scores when the learning content included patient care management ( d  = 0.91), when conducted with multiple scenarios ( d  = 0.84), and when using postscenario feedback ( d  = 0.73). As shown in Table 2 , there were no significant subgroup differences for knowledge scores when comparing learning duration (I² = 0%, p = .41) and immersive experience (I² = 0%, p = .59).

Table 2. Subgroup Analyses of Virtual Simulation for Applied Knowledge and Skills Demonstration Scores

Knowledge scores
  • Learning content. Patient care management: 8 studies, n = 619, d = 0.91 (95% CI 0.33 to 1.48), Z = 3.08, p = .002. Clinical procedural skills: 2 studies, n = 113, d = 0.48 (0.02 to 0.93), Z = 2.04, p = .04. Subgroup difference: p = .25, I² = 23.8%.
  • Number of scenarios. Multiple: 6 studies, n = 446, d = 0.84 (0.35 to 1.33), Z = 3.36, p = .0008. Single: 4 studies, n = 286, d = 0.86 (-0.25 to 1.96), Z = 1.52, p = .13. Subgroup difference: p = .97, I² = 0%.
  • Feedback. Postscenario: 8 studies, n = 543, d = 0.73 (0.25 to 1.20), Z = 3.00, p = .003. Non-postscenario: 2 studies, n = 189, d = 1.33 (-0.61 to 3.21), Z = 1.33, p = .18. Subgroup difference: p = .36, I² = 0%.
  • Learning duration. ≤30 minutes: 3 studies, n = 216, d = 1.16 (-0.16 to 2.47), Z = 1.73, p = .08. >30 minutes: 5 studies, n = 321, d = 0.57 (0.15 to 1.00), Z = 2.63, p = .009. Subgroup difference: p = .41, I² = 0%.
  • Immersive experience. Immersive environment: 3 studies, n = 282, d = 1.08 (-0.03 to 2.19), Z = 1.90, p = .06. Nonimmersive environment: 7 studies, n = 450, d = 0.74 (0.17 to 1.30), Z = 2.56, p = .01. Subgroup difference: p = .59, I² = 0%.

Skills demonstration scores
  • Learning duration. ≤30 minutes: 2 studies, n = 189, d = 0.28 (-0.01 to 0.57), Z = 1.92, p = .06. >30 minutes: 4 studies, n = 255, d = 2.82 (1.05 to 4.59), Z = 3.13, p = .002. Subgroup difference: p = .005, I² = 87.0%.
  • Immersive experience. Immersive environment: 2 studies, n = 189, d = 0.28 (-0.01 to 0.57), Z = 1.92, p = .06. Nonimmersive environment: 4 studies, n = 255, d = 2.82 (1.05 to 4.59), Z = 3.13, p = .002. Subgroup difference: p = .005, I² = 87.0%.

Note: CI = confidence interval; d = Cohen's d (effect size); I² = heterogeneity; Z = Z statistic.

Included trials: Bayram & Caliskan (2019); Blanié et al. (2020); Cobbett & Clarke (2016); Gu et al. (2017); Li (2016); LeFlore et al. (2012); Liaw et al. (2014); Liaw et al. (2015a); Liaw et al. (2015b); Liaw et al. (2017); Padilha et al. (2019); Tan et al. (2017).

As shown in Table 2 , virtual simulation had a greater effect in increasing skills performance scores, with significant subgroup differences and considerable heterogeneity ( I²  = 87.0%, p = .005), when the duration was more than 30 minutes ( d  = 2.82) and when nonimmersive virtual simulation was used ( d  = 2.82).
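The reported subgroup difference can be checked approximately from the values in Table 2 using the standard Q test for subgroup differences; the sketch below does this for the skills demonstration duration subgroups, with standard errors back-calculated from the reported 95% confidence intervals. This is an illustrative check, not the authors' RevMan output.

```python
# Q test for subgroup differences between pooled subgroup SMDs (Table 2,
# skills demonstration by learning duration). SEs are back-calculated from
# the reported 95% CIs, so the result is approximate.
import numpy as np
from scipy.stats import chi2

def subgroup_difference(estimates, ses):
    """Return Q, df, p and I^2 for differences between subgroup estimates."""
    est, se = np.asarray(estimates), np.asarray(ses)
    w = 1 / se**2
    overall = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - overall)**2)
    df = len(est) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, chi2.sf(q, df), i2

# >30 min: d = 2.82 (1.05, 4.59) -> SE ~ 0.90; <=30 min: d = 0.28 (-0.01, 0.57) -> SE ~ 0.15
q, df, p, i2 = subgroup_difference([2.82, 0.28], [0.90, 0.15])
print(f"Q = {q:.2f}, df = {df}, p = {p:.3f}, I^2 = {i2:.0f}%")   # roughly p = .005, I^2 = 87%
```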

Meta-regression

The random-effects meta-regression was performed to assess the effects of the following covariates on the effect size of applied knowledge scores: year of publication, age of participants, sample size, learning content, number of scenarios, type of feedback and immersive experience (see Table 3 ). Covariates that had no effect on applied knowledge scores included year of publication (β = 0.10, p = . 36), age of participants (β = -0.17, p = . 37), sample size (β = 0.01, p = . 76), patient care management (β = 0.34, p = . 60), multiple scenarios (β = -0.03, p = . 96), postscenario feedback (β = 0.30, p = . 45) and nonimmersive environment (β = -0.34, p = . 55).

Table 3. Random-Effects Meta-regression Models of Virtual Simulation by Various Covariates

  • Year of publication: β = 0.10, SE = 0.11, 95% CI -0.12 to 0.33, Z = -0.91, p = .36
  • Age of participants: β = -0.17, SE = 0.19, 95% CI -0.55 to 0.21, Z = -0.88, p = .37
  • Sample size: β = 0.01, SE = 0.02, 95% CI -0.38 to 0.53, Z = 0.31, p = .76
  • Patient care management: β = 0.34, SE = 0.65, 95% CI -0.93 to 1.61, Z = 0.52, p = .60
  • Multiple scenarios: β = -0.03, SE = 0.54, 95% CI -1.08 to 1.03, Z = -0.05, p = .96
  • Postscenario feedback: β = -0.57, SE = 0.65, 95% CI -1.83 to 0.70, Z = -0.88, p = .38
  • Nonimmersive environment: β = -0.34, SE = 0.57, 95% CI -1.47 to 0.78, Z = -0.59, p = .55

Note: β = regression coefficient; SE = standard error; Z = Z statistic.

Overall Quality Appraisal (GRADE)

Using the GRADE certainty assessment, the overall quality of evidence for knowledge and clinical performance outcomes was graded very low (see Appendix 5). The certainty-assessment domains, including risk of bias, inconsistency, indirectness and imprecision, were downgraded as a result of methodological limitations; variability in populations, interventions and comparator groups; and small sample sizes. Publication bias was not detected for trials that reported applied knowledge scores, as a symmetrical distribution of the included trials on the funnel plot was observed (Egger's test, p = .73) (see Appendix 6).
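Egger's test, as used above for funnel-plot asymmetry, regresses the standardized effect (SMD divided by its SE) on precision (1/SE) and tests the intercept. The sketch below shows this procedure with placeholder effect sizes and standard errors, not the trial data behind the reported p = .73.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry.
# Effects and SEs below are illustrative placeholders.
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Return Egger intercept, its SE and two-sided p value."""
    y = np.asarray(effects) / np.asarray(ses)     # standardized effects
    x = 1.0 / np.asarray(ses)                     # precision
    slope, intercept, _, _, _ = stats.linregress(x, y)
    n = len(y)
    resid = y - (intercept + slope * x)
    s2 = np.sum(resid**2) / (n - 2)               # residual variance
    se_b0 = np.sqrt(s2 * (1 / n + x.mean()**2 / np.sum((x - x.mean())**2)))
    t = intercept / se_b0
    return intercept, se_b0, 2 * stats.t.sf(abs(t), n - 2)

effects = [0.48, 0.84, 0.91, 1.33, 0.57, 0.74, 1.08, 0.86]
ses     = [0.24, 0.22, 0.26, 0.30, 0.20, 0.25, 0.28, 0.23]
b0, se0, p = eggers_test(effects, ses)
print(f"Egger intercept = {b0:.2f} (SE {se0:.2f}), p = {p:.2f}")
```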

Discussion

The meta-analysis demonstrated a significant improvement in clinical reasoning, based on applied knowledge ("know how") and clinical performance ("show how"), among nursing students and nurses in the virtual simulation groups compared with control groups. Subgroup analyses revealed that virtual simulation was more effective for the acquisition of clinical reasoning skills when the learning content focused on patient management and when it was conducted for more than 30 minutes, used multiple scenarios with nonimmersive experiences, and provided postscenario feedback. Meta-regression did not identify any significant covariates.

By employing a quantitative synthesis of outcomes from selectively included studies, the findings from this review add further evidence to support earlier narrative reviews that identified the effectiveness of virtual simulation in improving the knowledge and clinical performance of healthcare learners ( Coyne et al., 2021 ; Foronda et al., 2020 ). Similar to our review findings, a meta-analysis of virtual patient simulations in health professions education found improved skills performance outcomes compared with traditional education ( Kononowicz et al., 2019 ).

The effectiveness of virtual simulation in improving clinical reasoning can be explained by the application of Kolb's (1984) experiential learning, which requires learners to engage in clinical decision-making processes through problem-solving of clinical scenarios and gives them access to feedback on performance to facilitate reflection. According to Fowler (2008) , the effectiveness of experiential learning depends on the quality of the experience and reflection on that experience. As reported by Edelbring (2013) , key design strategies for virtual simulation are essential to optimize experiential learning approaches. Although the features identified for effective learning in high-fidelity simulation include varied clinical scenarios and training levels, deliberate practice and feedback were found to be commonly used in the design of virtual simulation ( Liaw et al., 2014 ). However, the application of these features for the optimal design of virtual simulation may vary according to educational context ( Cook & Triola, 2009 ).

Our subgroup analysis provided evidence on the specific features of virtual simulation that optimize the facilitation of clinical reasoning. The findings revealed a greater effect for virtual simulation programs that focused on developing critical thinking skills related to patient care management, such as management of clinical deterioration, than for those developing knowledge application related to clinical procedures (e.g., tracheostomy care). Virtual simulation offers real-life clinical scenarios that enable learners to conduct a nursing assessment based on the given scenario and apply the assessment findings to make clinical decisions in developing a patient management plan ( LaManna et al., 2019 ). Thus, it is considered best suited for promoting clinical reasoning skills related to patient management, preparing students for a range of clinical situations ( Borg Sapiano et al., 2018 ). Conversely, the use of virtual simulation for the development of knowledge related to procedural skills has been criticized, as other more cost-effective methods can be used (Cook & Triola, 2009).

In this subgroup analysis, virtual simulation was found to have a greater effect when multiple clinical scenarios with a longer duration (>30 minutes) were used. Expertise in clinical reasoning is believed to develop through exposure to a range of clinical cases that facilitate appropriate pattern recognition, the process of recognizing similarity on the basis of prior experience ( Norman et al., 2007 ). Apart from promoting pattern recognition, multiple and varied clinical scenarios can facilitate deliberate practice of the reasoning process by reinforcing knowledge structures (Cook & Triola, 2009). The ease of access and flexibility in terms of time and place were shown to promote deliberate practice in virtual simulation, which made it as effective as one-off manikin-based simulations ( Liaw et al., 2014 ). Besides varied clinical scenarios, the deliberate practice of a mental model (e.g., ABCDE) in these scenarios was considered critical to arriving at the appropriate clinical decision for the specific patient ( Liaw et al., 2015a ). However, logistical challenges, such as the scheduling of sessions with students and the development of appropriate case scenarios, should be taken into consideration during the virtual simulation development phase ( Liaw et al., 2020 ).

In the studies included in this review, feedback using quizzes or checklists was incorporated throughout the clinical cases. According to Norman and Eva (2010) , feedback that provides individual responses with rationales and evidence can support the development of clinical reasoning. Our study provides evidence that the incorporation of feedback at the end of each scenario is effective in supporting the development of clinical reasoning. As reported by Posel et al. (2015) , postscenario feedback that enables learners to review the case for errors made, their associated rationales and experts’ responses can provide an opportunity for students to undertake postcase reflection—a critical element in the development of clinical reasoning skills. More research is needed to inform how to effectively deliver postscenario feedback to optimize the development of clinical reasoning in virtual simulation.

Interestingly, our findings revealed that nonimmersive 2D (e.g., screen-based simulation) is more effective than immersive 3D virtual environments (e.g., virtual reality simulation). The application of emotional engagement theory and cognitive load theory may help to clarify this finding (La Rochelle et al., 2011 ; Van der Land et al., 2013 ). While the 3D virtual environment has the capability to enhance students’ motivation and engagement to learn through realistic, immersive, and interactive learning environments, it can increase learners’ cognitive load as they have to pay attention to irrelevant immersive stimuli that distract them from the learning tasks ( Van der Land et al., 2013 ). Thus, the 3D virtual environment should be used with caution as this approach, aiming to increase authenticity of learning, does not appear to improve clinical reasoning skills ( La Rochelle et al., 2011 ).

Strengths and Limitations

This is one of the first systematic reviews and meta-analyses to present contemporary and robust evidence of the effectiveness and essential features of virtual simulation for developing clinical reasoning in nursing education. Although a robust search was undertaken to decrease publication bias, the inclusion of English-only articles might have limited the study selection and may affect generalization of the findings. Only RCT designs were included in this review to ensure scientific credibility. However, the presence of small sample groups in selected trials might have resulted in small study effects. Larger trials are needed for future studies to strengthen existing evidence. As a result of high risk of selection, performance, and reporting biases, the overall quality of the evidence was low; thus, the results should be interpreted with caution. Miller's pyramid of clinical competence was applied to assess clinical reasoning at the higher levels to ensure clinical reasoning was appropriately evaluated. The reviewed studies only included “know how” (applied knowledge) and “show how” (skills demonstration) levels. However, proficiency at these levels may not automatically transfer to real-life clinical settings ( Thampy et al., 2019 ). Future studies should target the top of Miller's pyramid (“does” level) by examining whether the clinical reasoning skills gained in virtual simulations influence learners’ actual performance in clinical settings.

Conclusions

The development of clinical reasoning as a core competency is critical for nursing education to ensure the provision of safe and quality patient care. Our review demonstrated that the experiential learning approach in virtual simulation can improve this nursing competency. Future designs of virtual simulation should consider the use of nonimmersive virtual environments and multiple scenarios with postscenario feedback in delivering learning content related to patient care management. Future studies using robust RCTs and examining the impact on actual clinical performance are needed to strengthen the existing evidence.

Acknowledgement

We would like to thank Elite Editing for providing editing services for this manuscript.

Conflict of interest

There are no conflicts of interest declared.

Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.ecns.2022.05.006 .

Appendix. Supplementary materials

  • Bayram S., Caliskan N. Effect of a game-based virtual reality phone application on tracheostomy care education for nursing students: a randomized controlled trial. Nurse Education Today. 2019; 79 :25–31. doi: 10.1016/j.nedt.2019.05.010. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Blanié A., Amorim M.A., Benhamou D. Comparative value of a simulation by gaming and a traditional teaching method to improve clinical reasoning skills necessary to detect patient deterioration: a randomized study in nursing students. BMC Medical Education. 2020;(53):20. doi: 10.1186/s12909-020-1939-6. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Borg Sapiano A., Sammut R., Trapani J. The effectiveness of virtual simulation in improving student nurses' knowledge and performance during patient deterioration: a pre and post test design. Nurse Education Today. 2018; 62 :128–133. doi: 10.1016/j.nedt.2017.12.025. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chaimani, A., Caldwell, D., Li, T., Higgins, J., & Salanti, G. (2020). Chapter 11: Undertaking network meta-analyses. In cochrane handbook for systematic reviews of interventions. Accessed from: https://training.cochrane.org/handbook/current/chapter-11
  • Clarivate Analytics. (2020). EndNote X9 [Computer software]. Philadelphia, PA: the endnote team. Accessed from: https://endnote.com
  • Clemett V.J., Raleigh M. The validity and reliability of clinical judgement and decision-making skills assessment in nursing: a systematic literature review. Nurse Education Today. 2021; 102 doi: 10.1016/j.nedt.2021.104885. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cobbett S., Clarke E. Virtual versus face-to-face clinical simulation in relation to student knowledge, anxiety, and self-confidence in maternal-newborn nursing: a randomized controlled trial. Nurse Education Today. 2016; 45 :179–184. doi: 10.1016/j.nedt.2016.08.004. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Cook D.A., Triola M.M. Virtual patients: a critical literature review andproposed next steps. Medical Education. 2009; 43 (4):303–311. doi: 10.1111/j.1365-2923.2008.03286.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Coyne E., Calleja P., Forster E., Lin F. A review of virtual-simulation for assessing healthcare students' clinical competency. Nurse Education Today. 2021; 96 doi: 10.1016/j.nedt.2020.104623. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deeks, J., Higgins, J., & Altman, D. (2020). Chapter 10: Analysing data and undertaking meta-analyses. In cochrane handbook for systematic reviews of interventions. Accessed at: January 7, 2021 Accessed from: https://training.cochrane.org/handbook/current/chapter-10
  • Duff E., Miller L., Bruce J. Online virtual simulation and diagnostic reasoning: a scoping review. Clinical Simulation in Nursing. 2016; 12 (9):377–384. doi: 10.1016/j.ecns.2016.04.001. [ CrossRef ] [ Google Scholar ]
  • Edelbring S. Research into the use of virtual patients is moving forward by zooming out. Medical Education. 2013; 47 (6):544–546. doi: 10.1111/medu.12206. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ericsson K., Krampe R., Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychological Review. 1993; 100 (3):363–406. doi: 10.1037/0033-295x.100.3.363. [ CrossRef ] [ Google Scholar ]
  • Foronda C.L., Fernandez-Burgos M., Nadeau C., Kelley C.N., Henry M.N. Virtual simulation in nursing education: a systematic review spanning 1996 to 2018. Simulation in Healthcare. 2020; 15 (1):46–54. doi: 10.1097/SIH.0000000000000411. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Fowler J. Experiential learning and its facilitation. Nurse Education Today. 2008; 28 (4):427–433. doi: 10.1016/j.nedt.2007.07.007. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • GRADEpro. (2020). GRADEpro GDT: GRADEpro guideline development tool [software]. Accessed at: February 18, 2021. Accessed from: https://gradepro.org
  • Gu Y., Zou Z., Chen X. The effects of vSIM for nursing™ as a teaching strategy on fundamentals of nursing education in undergraduates. Clinical Simulation in Nursing. 2017; 13 (4):194–197. doi: 10.1016/j.ecns.2017.01.005. [ CrossRef ] [ Google Scholar ]
  • Haslam M.B. What might COVID-19 have taught us about the delivery of Nurse Education, in a post-COVID-19 world? Nurse Education Today. 2021; 97 doi: 10.1016/j.nedt.2020.104707. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M.J., & Welch, V. A. (2020). Cochrane handbook for systematic reviews of interventions. Accessed at: January 7, 2021. Accessed from: www.training.cochrane.org/handbook [ PMC free article ] [ PubMed ]
  • Higgins, J., Savović, J., Page, M., Elbers, R., & Sterne, J. (2020). Chapter 8: assessing risk of bias in a randomized trial. In Cochrane handbook for systematic reviews of interventions . Accessed at: January 7, 2021. Accessed from: www.training.cochrane.org/handbook
  • Kolb D.A. Prentice-Hall; Englewood Cliffs, N.J: 1984. Experiential learning: experience as the sources of learning and development. [ Google Scholar ]
  • Kononowicz A., Woodham L., Edelbring S., Stathakarou N., Davies D., Saxena N., Zary N. Virtual patient simulations in health professions education: systematic review and meta-analysis by the digital health education collaboration. Journal of Medical Internet Research. 2019; 21 (7):14676. doi: 10.2196/14676. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • LaManna J., Guido-Sanz F., Anderson M., Chase S., Weiss J., Blackwell C. Teaching diagnostic reasoning to advanced practice nurses: positives and negatives. Clinical Simulation in Nursing. 2019; 26 :24–31. doi: 10.1016/j.ecns.2018.10.006. [ CrossRef ] [ Google Scholar ]
  • Lefebvre, C., Glanville, J., Briscoe, S., Littlewood, A., Marshall, C., Metzendorf, M, … Wieland, S. (2020). Chapter 4: searching for and selecting studies. Cochrane Handbook for Systematic Reviews of Interventions . Accessed at: January 7, 2021. Accessed from: https://training.cochrane.org/handbook/archive/v6.1/chapter-04
  • LeFlore J.L., Anderson M., Zielke M.A., Nelson K.A., Thomas P.E., Hardee G., John L. Can a virtual patient trainer teach student nurses how to save lives—Teaching nursing students about pediatric respiratory diseases. Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare. 2012; 7 (1):10–17. doi: 10.1097/sih.0b013e31823652de. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Levett-Jones T., Hoffman K., Dempsey J., Jeong S.Y., Noble D., Norton C.A., Hickey N. The ‘five rights’ of clinical reasoning: an educational model to enhance nursing students' ability to identify and manage clinically 'at risk' patients. Nurse Education Today. 2010; 30 (6):515–520. doi: 10.1016/j.nedt.2009.10.020. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Levett-Jones T., Pich J., Blakey N. In: Clinical reasoning in the health professions. 4th Ed. Higgs J, editor. Elsevier; Marrickville: 2019. Teaching clinical reasoning in nursing education. [ Google Scholar ]
  • Li, C. (2016). A comparison of traditional face-to-face simulation versus virtual simulation in the development of critical thinking skills, satisfaction, and self-confidence in undergraduate nursing students. ProQuest Dissertations & Theses Global. 10800814.
  • Li, T., Higgins, J., & Deeks, J. (2020). Chapter 5: Collecting Data. In Cochrane Handbook for Systematic Reviews of Intervention . Accessed at: January 7, 2021. Accessed from: https://training.cochrane.org/handbook/current/chapter-05
  • Liaw S., Chan S., Chen F., Hooi S., Siau C. Comparison of virtual patient simulation with mannequin-based simulation for improving clinical performances in assessing and managing clinical deterioration: randomized controlled trial. Journal of Medical Internet Research. 2014; 16 (9):e214. doi: 10.2196/jmir.3322. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liaw S., Wong L., Ang S., Ho J., Siau C., Ang E. Strengthening the afferent limb of rapid response systems: an educational intervention using web-based learning for early recognition and responding to deteriorating patients. BMJ Quality & Safety. 2015; 25 (6):448–456. doi: 10.1136/bmjqs-2015-004073. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liaw S., Wong L., Chan S., Ho J., Mordiffi S., Ang S., Ang E. Designing and evaluating an interactive multimedia web-based simulation for developing nurses’ competencies in acute nursing care: randomized controlled trial. Journal of Medical Internet Research. 2015; 17 (1):e5. doi: 10.2196/jmir.3853. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liaw S., Chng D., Wong L., Ho J., Mordiffi S., Cooper S., Chua W., Ang E. The impact of a Web-based educational program on the recognition and management of deteriorating patients. Journal of Clinical Nursing. 2017; 26 (23-24):4848–4856. doi: 10.1111/jocn.13955. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liaw S.Y., Ooi S.W., Rusli K.D.B., Lau T.C., Tam W.W.S., Chua W.L. Nurse-physician communication team training in virtual reality versus live simulation: randomized controlled trial on team communication and teamwork attitudes. Journal of Medical Internet Research. 2020; 22 (4):e17279. doi: 10.2196/17279. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liberati A., Altman D., Tetzlaff J., Mulrow C., Gotzsche P., Ioannidis J., Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009; 339 doi: 10.1136/bmj.b2700. b2700-b2700. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lioce L. Agency for Healthcare Research and Quality; Rockville, MD: 2020. Healthcare Simulation Dictionary –Second Edition. September 2020. AHRQ Publication No. 20-0019. [ CrossRef ] [ Google Scholar ]
  • Minozzi S., Cinquini M., Gianola S., Gonzalez-Lorenzo M., Banzi R. The revised cochrane risk of bias tool for randomized trials (RoB 2) showed low interrater reliability and challenges in its application. Journal of Clinical Epidemiology. 2020; 126 :37–44. doi: 10.1016/j.jclinepi.2020.06.015. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mohammadi-Shahboulaghi F., Khankeh H., HosseinZadeh T. Clinical reasoning in nursing students: a concept analysis. Nursing Forum. 2021; 56 (4):1008–1014. doi: 10.1111/nuf.12628. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Motola I., Devine L., Chung H., Sullivan J., Issenberg S. Simulation in healthcare education: a best evidence practical guide. AMEE guide no. 82. Medical Teacher. 2013; 35 (10):e1511–e1530. doi: 10.3109/0142159x.2013.818632. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Norman G., Eva K. Diagnostic error and clinical reasoning. Medical Education. 2010; 44 (1):94–100. doi: 10.1111/j.1365-2923.2009.03507.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Norman G., Yong M., Brooks L. Non-analytical models of clinical reasoning: the role of experience. Medical Education. 2007; 41 (12):1140–1145. doi: 10.1111/j.1365-2923.2007.02914.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Padilha J., Machado P., Ribeiro A., Ramos J., Costa P. Clinical virtual simulation in nursing education: randomized controlled trial. Journal of Medical Internet Research. 2019; 21 (3):e11529. doi: 10.2196/11529. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Posel N., Mcgee J., Fleiszer D.M. Twelve tips to support the development of clinical reasoning skills using virtual patient cases. Medical Teacher. 2015; 37 (9):813–818. doi: 10.3109/0142159x.2014.993951. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • La Rochelle J., Durning S., Pangaro L., Artino A., van der Vleuten C., Schuwirth L. Authenticity of instruction and student performance: a prospective randomised trial. Medical Education. 2011; 45 (8):807–817. doi: 10.1111/j.1365-2923.2011.03994.x. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sawilowsky S. New effect size rules of thumb. Journal of Modern Applied Statistical Methods. 2009; 8 (2):597–599. doi: 10.22237/jmasm/1257035100. [ CrossRef ] [ Google Scholar ]
  • Schünemann, H., Brożek, J., Guyatt, G., & Oxman, A. (2013). Introduction to grading of recommendations, assessment, development and evaluation (GRADE) handbook. Accessed at: February 18, 2021. Accessed from: https://gdt.gradepro.org/app/handbook/handbook.html
  • Shin H., Rim D., Kim H., Park S., Shon S. Educational characteristics of virtual simulation in nursing: an integrative review. Clinical Simulation in Nursing. 2019; 37 :18–28. doi: 10.1016/j.ecns.2019.08.002. [ CrossRef ] [ Google Scholar ]
  • Tan A.J.Q., Lee C.C.S., Lin P.Y., Cooper S., Lau S.T.L., Chua W.L., Liaw S.Y. Designing and evaluating the effectiveness of a serious game for safe administration of blood transfusion: a randomized controlled trial. Nurse Education Today. 2017; 55 :38–44. doi: 10.1016/j.nedt.2017.04.027. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Thampy H., Willert E., Ramani S. Correction to: assessing clinical reasoning: targeting the higher levels of the pyramid. Journal of General Internal Medicine. 2019 doi: 10.1007/s11606-019-05593-4. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • The Cochrane Collaboration (2020). Review Manager ( RevMan) [Computer Software]. (Version 5.4.1). Copenhagen: The Nordic Cochrane Centre. Accessed at: January 7, 2021. Accessed from: https://training.cochrane.org/online-learning/core-software-cochrane-reviews/revman
  • The Jamovi Project (2021). Jamovi (Version 1.6) [Computer Software]. Sydney, Australia. Accessed from: https://www.jamovi.org
  • Van Der Land S., Schouten A., Feldberg F., van den Hooff B., Huysman M. Lost in space? Cognitive fit and cognitive load in 3D virtual environments. Computers in Human Behavior. 2013; 29 (3):1054–1064. doi: 10.1016/j.chb.2012.09.006. [ CrossRef ] [ Google Scholar ]
  • Verkuyl M., Lapum J.L., St-Amant O., Hughes M., Romaniuk D. Curricular uptake of virtual gaming simulation in nursing education. Nurse Education in Practice. 2021; 50 doi: 10.1016/j.nepr.2021.102967. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Victor-Chmil J. Critical thinking versus clinical reasoning versus clinical judgment: differential diagnosis. Nurse Educator. 2013; 38 (1):34–36. doi: 10.1097/NNE.0b013e318276dfbe. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Witheridge A., Ferns G., Scott-Smith W. Revisiting Miller's pyramid in medical education: the gap between traditional assessment and diagnostic reasoning. International Journal of Medical Education. 2019; 10 :191–192. doi: 10.5116/ijme.5d9b.0c37. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]


A Systematic Review of the Use of Standardized Patients as a Simulation Modality in Nursing Education

Rutherford-Hemming, Tonya; Alfes, Celeste M.; Breymier, Tonya L.

About the Authors Tonya Rutherford-Hemming, EdD, RN, CHSE, is an associate professor, University of North Carolina at Charlotte School of Nursing, Charlotte, North Carolina. Celeste M. Alfes, DNP, CNE, CHSE, is an associate professor and director, Center for Nursing Education, Simulation, and Innovation, Frances Payne Bolton School of Nursing, Case Western Reserve University, Cleveland, Ohio. Tonya L. Breymier, PhD, RN, CNE, is an assistant professor and associate dean, Nursing Graduate Programs, Indiana University East, Richmond, Indiana. For more information contact Dr. Rutherford-Hemming at [email protected] ; [email protected] .

The authors have declared no conflict of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website ( www.neponline.net ).

The objective of the study was to search, extract, appraise, and synthesize studies using standardized patients (SPs) in nursing academia to determine how this modality of simulation is being used.

BACKGROUND 

SPs are a common simulation modality used in nursing education.

METHOD 

This review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Five databases were searched, along with keyword searches to retrieve nonindexed citations, for the period January 2011 to September 2016. The inclusion criteria were nurses, a simulated experience with SPs, and original research published in English.

RESULTS 

Sixty-five studies were identified and analyzed.

CONCLUSION 

More randomized controlled trials and studies with power analyses and validated measurement instruments are needed. Studies that compare SPs to high-fidelity simulators are also desired to determine optimal student learning outcomes and standardize best practices in simulation.



Improving the Use of Simulation in Nursing Education: Protocol for a Realist Review

Affiliation.

  • 1 Department of Health and Nursing Science, University of Agder, Grimstad, Norway.
  • PMID: 32347808
  • PMCID: PMC7221641
  • DOI: 10.2196/16363

Background: Nursing education has evolved in line with societal needs, and simulation-based learning (SBL) is increasingly being used to bridge the gap between practice and education. Previous literature reviews have demonstrated the effectiveness of using SBL in nursing education. However, there is a need to explore how and why it works to expand the theoretical foundation of SBL. Realist reviews are a theory-based approach to synthesizing existing evidence on how complex programs work in particular contexts or settings.

Objective: This review aims to understand how, why, and in what circumstances the use of simulation affects learning as part of the bachelor's program in nursing.

Methods: A realist review will be conducted in accordance with the realist template for a systematic review. In particular, we will identify and explore the underlying assumptions of how SBL is supposed to work, that is, the program theories of SBL. The review will be carried out as an iterative process of searching, appraising, and synthesizing the evidence to uncover theoretical concepts that explain the causal effects of SBL. In the final section of the review, we will involve stakeholders in the Norwegian community in a web-based Delphi survey to ensure that the emerging theoretical framework derived from the published literature aligns with stakeholders' experience in practice.
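As a purely illustrative aside, the snippet below sketches in Python how agreement from one round of such a web-based Delphi survey might be summarized. The 1-5 rating scale, the 75% consensus threshold, the example statements, and the helper names are assumptions made for the example and are not taken from the protocol.

```python
# Illustrative sketch only: one way to summarize a Delphi round.
# The rating scale, 75% agreement threshold, and statements are assumptions
# for this example, not values reported in the protocol.
from collections import Counter


def agreement(ratings: list[int], agree_levels=(4, 5)) -> float:
    """Proportion of panellists rating a statement 4 or 5 on a 1-5 scale."""
    counts = Counter(ratings)
    return sum(counts[level] for level in agree_levels) / len(ratings)


def summarize_round(responses: dict[str, list[int]], threshold: float = 0.75) -> dict[str, dict]:
    """Flag which candidate program-theory statements reach consensus this round."""
    return {
        statement: {
            "agreement": round(agreement(ratings), 2),
            "consensus": agreement(ratings) >= threshold,
        }
        for statement, ratings in responses.items()
    }


if __name__ == "__main__":
    # Hypothetical statements derived from the literature, rated by 8 panellists.
    responses = {
        "Debriefing drives reflection that links simulation to practice": [5, 4, 4, 5, 4, 3, 5, 4],
        "Psychological safety increases willingness to act in scenarios": [3, 2, 4, 3, 4, 2, 3, 4],
    }
    for statement, result in summarize_round(responses).items():
        print(statement, result)
```

Statements that do not reach the threshold would typically be revised and re-rated in a subsequent round; the sketch shows only the per-round tally.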

Results: The Norwegian Centre for Research Data (project number 60415) has approved the study. An initial literature search has been performed; quality appraisal and data extraction are ongoing.

Conclusions: The final outcome of the review is anticipated to extend the theoretical foundation for using simulation as an integrated component of the bachelor's program in nursing. Furthermore, the findings will be used to produce a briefing document containing guidance for national stakeholders in the community of simulation-based nursing education. Finally, the review findings will be disseminated in a peer-reviewed journal as well as national and international conferences.

International Registered Report Identifier (IRRID): DERR1-10.2196/16363.

Keywords: education; learning; nursing; realist review; simulation training.

©Torbjørg Træland Meum, Åshild Slettebø, Mariann Fossum. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 29.04.2020.


Conflict of interest statement

Conflicts of Interest: None declared.

Figure: Review design.



  • DOI: 10.2196/54987
  • Corpus ID: 269977987

Evolution of Chatbots in Nursing Education: Narrative Review.

  • Fang Zhang, Xiaoliu Liu, et al. (including Shiben Zhu)
  • Published in JMIR Medical Education 29 November 2023
  • Education, Medicine



Diagnostic and therapeutic insights into spinal glomangioma of a unique intradural, extramedullary presentation—systematic review.


Share and Cite

Czyżewski, W.; Litak, J.; Pasierb, B.; Piątek, P.; Turek, M.; Banach, L.; Turek, G.; Torres, K.; Staśkiewicz, G. Diagnostic and Therapeutic Insights into Spinal Glomangioma of a Unique Intradural, Extramedullary Presentation—Systematic Review. Diseases 2024, 12, 132. https://doi.org/10.3390/diseases12060132


IMAGES

  1. (PDF) Simulation-based learning in nurse education: systematic review
  2. (PDF) Standardized Patient Simulation for More Effective Undergraduate
  3. Virtual Simulation in Nursing Education: A Systematic Review Spanning
  4. Effectiveness of Patient Simulation Manikins in Teaching Clinical
  5. (PDF) Virtual Simulation in Nursing Education: A Systematic Review
  6. Framework for Simulation Learning in Nursing Education

VIDEO

  1. Careers in Nursing Education: Clinical Teaching in Simulation Centers

  2. Elsevier Shares Latest Nursing Simulation Technologies at IMSH 2024

  3. Fanshawe Simulation: Nursing Lab 1

  4. See Inside the UbiSim VR Platform for Nursing Simulation

  5. Realistic Mass Casualty Simulation for Nursing Students @WSUTech

  6. Sonography and Nursing Student Simulation

COMMENTS

  1. Virtual Simulation in Nursing Education: A Systematic Review ...

    As virtual simulation is burgeoning, faculty and administrators are asking for evidence of its effectiveness. The objective of this systematic review was to identify how virtual simulation impacts nursing student learning outcomes. Applying the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines, 80 studies were reviewed.

  2. Systematic review of the literature on simulation in nursing education

    The available literature on simulation and nursing education provides evidence that simulation is useful in creating a learning environment which contributes to knowledge, skills, safety, and confidence. This systematic review of the literature revealed a gap in the literature pertaining to the transfer of these outcomes to the clinical ...

  3. Simulation in Clinical Nursing Education

    The use of simulation as an educational strategy represents a great challenge for nursing education. Simulation may improve health care and patient safety. No patient who is alive is put at risk at the expense of the trainee. ... Kim JH, Park I, Shin S. Systematic review of Korean studies on simulation within nursing education. J Kor Acad Soc ...

  4. Virtual Simulation in Nursing Education: A Systematic Review Spanning

    The objective of this systematic review was to identify how virtual simulation impacts nursing student learning outcomes. Applying the Preferred Reporting Items for Systematic Reviews and Meta-analyses guidelines, 80 studies were reviewed. Results indicate that most research (n = 69, 86%) supported virtual simulation as an effective pedagogy to ...

  5. Contemporary Integrative Review in Simulation-Based Learning in Nursing

    Background: In general, simulation-based learning (SBL) has been a part of nursing education in the past two decades, though nursing educators are facing difficulties in evaluating its effectiveness in theory and practice. The aim of this review was to synthesize the research findings regarding the effects of SBL among nursing students from published scientific articles.

  6. The Use of Hospital-Based Simulation in Nursing Education—A Systematic

    Systematic reviews are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably (Liberati et al., 2009). In a systematic review of the literature, Hallenbeck (2012) analyzed 16 research articles prior to 2012 pertaining to the use of high-fidelity simulation for staff development and education of nurses.

  7. Evaluation of the Effectiveness of Simulation-Based Teaching on Nursing

    Aims: This study aims to conduct a systematic review of the available literature to evaluate the effectiveness of simulation-based teaching in nursing education. Methods: A systematic review of ...

  8. Use of simulation-based learning in undergraduate nurse education: An

    This umbrella review took a global view of 25 reviews of simulation research in nursing education, comprising over 700 primary studies. To discern overall outcomes across reviews, statistical comparison of quantitative results (effect size) must be the key comparator. Simulation-based education cont …

  9. Frameworks for the design, implementation, and evaluation of simulation

    Corroborating the findings of this scoping review is a previous systematic review by Cant and Cooper on the use of simulation‐based learning in undergraduate nursing education, which also noted the scarcity of research and simulation implementation in low‐ and middle‐income countries. The lack of a context‐specific framework or theory ...

  10. Impact Of Simulation Design Elements on Undergraduate Nursing Education

    Keywords: nursing; simulation; skills; undergraduate. Abstract: The primary aim of this review was to determine the effect of simulation-based education when compared to traditional teaching methods in undergraduate nursing programs. The secondary aims were to describe variability in design elements. A systematic review and narrative synthesis of quanti-

  11. Impact of virtual simulation on nursing students' learning outcomes: a

    Virtual simulation in nursing education: a systematic review spanning 1996 to 2018. Simulation in Healthcare. 2020;15(1):46. Despite its growing use, there is limited synthesised knowledge on the effectiveness of virtual simulation (VS) as a pedagogical approach in nursing education. Measuring the effectiveness of VS as a nursing pedagogy may ...

  12. Systematic review of the literature on simulation in nursing education

    Norman (2012) completed a systematic literature review of nursing simulation education from 2000 to 2010, and found, in 32 articles, that situational simulation teaching can assist students in ...

  13. Simulation‐based learning in nurse education: systematic review

    Cant R.P. & Cooper S.J. (2010) Simulation-based learning in nurse education: systematic review. Journal of Advanced Nursing 66(1), 3-15. Title. Simulation-based learning in nurse education: systematic review. Aim. This paper is a report of a review of the quantitative evidence for medium to high fidelity simulation using manikins in nursing, in comparison to other educational strategies.

  14. Educational Outcomes in Undergraduate Nursing: A Systematic Review

    Of the 20 articles included in this systematic review, simulation, face-to-face, asynchronous, problem-based learning, gaming, flipped classrooms, reflective writing, tweets, and podcasts were represented. ... An innovative pedagogy using simulation in nursing education. [National League for Nursing]. Nursing Education Perspectives, 36(6), 401 ...

  15. Effectiveness of simulation in undergraduate nursing programs

    This systematic review utilised the Joanna Briggs Institute (2017) review method. The review included only randomised controlled trials that assessed the effectiveness of simulation with participants enrolled in a pre-registration undergraduate nursing education or training program at any level of study from any country.

  16. The Evidence in Simulation-Based Learning Experiences in Nursing

    … pedagogy primarily in nursing education among a variety of learning outcomes. The two previously published umbrella reviews were narrower in scope. The focused review of Doolen et al. (2016) only examined high-fidelity simulation reviews in undergraduate nursing education, excluding those reviews that included practicing nurses and other health care ...

  17. The effectiveness of virtual reality training on knowledge, skills and

    A robustly trained health care workforce is pivotal to forging a resilient health care system, and there is an urgent need to develop innovative methods and emerging technologies for health care workforce education. Virtual reality technology designs for clinical training have emerged as a promising avenue for increasing the competence of health care professionals, reflecting their ...

  18. Effectiveness of simulation-based nursing education depending on

    Background. Simulation-based nursing education is an increasingly popular pedagogical approach. It provides students with opportunities to practice their clinical and decision-making skills through various real-life situational experiences. However, simulation approaches fall along a continuum ranging from low-fidelity to high-fidelity simulation.

  19. Virtual Patient Simulations in Nursing Education: A Descriptive

    The objective of this systematic review was to identify how virtual simulation impacts nursing student learning outcomes. Applying the Preferred Reporting Items for Systematic Reviews and Meta ...

  20. The effects of simulation-based education on undergraduate nursing

    Nursing education has seen positive effects from simulation-based education. There are many studies available on the effects of simulation-based education, but most of those involve a single institution, nonrandomized controlled trials, small sample sizes and subjective evaluations of the effects. The purpose of this multicenter randomized controlled trial was to evaluate the effects of ...

  21. Simulation-based learning in nurse education: systematic review

    Background: Human simulation is an educational process that can replicate clinical practices in a safe environment. Although endorsed in nursing curricula, its effectiveness is largely unknown. Review methods: A systematic review of quantitative studies published between 1999 and January 2009 was undertaken using the following databases: CINAHL ...

  22. Teaching Strategies for Developing Clinical Reasoning Skills in Nursing

    Based on the teaching strategies used in the articles, two groups have been identified: simulation methods and learning programs. The studies focus on comparing different teaching methodologies. Conclusions: This systematic review has detected different approaches to help nursing students improve their reasoning and decision-making skills.

  23. The Use of Standardized Patients to Teach ...

    Studies show a weak indication that SP-based education is superior to other simulation methodologies in most contexts; however, more rigorous studies with larger sample sizes, validated instruments, and effects on patient outcomes are needed to definitively determine the optimal method/modality for teaching communication to health care professionals. Objectives: The aim of this systematic review ...

  24. Virtual Simulation to Enhance Clinical Reasoning in Nursing: A

    Eligibility Criteria. Inclusion criteria were as follows: (a) pre or postregistration nursing education, (b) randomized controlled trial (RCT) with a comparison group, (c) study intervention using virtual simulation that incorporated experiential learning approaches, (d) at least one outcome assessing clinical reasoning at Miller's pyramid level two and above.

  25. PDF Simulation in Nursing Education: Review of Research

    A systematic review of research on technology enhanced simulation in health professions revealed that in comparison with no intervention, technology-enhanced simulation

  26. Simulation-based education for teaching aggression ...

    Mitchell, M.J., Newall, F., Bernie, C., Brignell, A., & Williams, K. (2024). Simulation-based education for teaching aggression management skills to health care providers in acute health care settings: A systematic review. International Journal of Nursing ...

  27. A Systematic Review of the Use of Standardized Patients as... : Nursing

    SPs are a common simulation modality used in nursing education. METHOD . This review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-analyses. Five databases were searched as well as keywords to retrieve nonindexed citations for the period January 2011 to September 2016.

  28. Improving the Use of Simulation in Nursing Education: Protocol for a

    Objective: This review aims to understand how, why, and in what circumstances the use of simulation affects learning as part of the bachelor's program in nursing. Methods: A realist review will be conducted in accordance with the realist template for a systematic review. In particular, we will identify and explore the underlying assumption of ...

  29. Evolution of Chatbots in Nursing Education: Narrative Review

    Integrating chatbots into nursing education presents a promising yet relatively unexplored avenue, and this review highlights the urgent need for original research, emphasizing the importance of ethical considerations. Background The integration of chatbots in nursing education is a rapidly evolving area with potential transformative impacts. This narrative review aims to synthesize and ...

  30. Diseases

    Contemporary literature lacks examples of intradural, extramedullary spinal glomangiomas. Moreover, glomus tumors in general are exceedingly rare among benign spinal tumors and are mostly located within epidural space or within intervertebral foramen, and only a few cases have been documented to date. This report provides a detailed analysis of the clinical presentation, imaging ...