The Research Alliance for New York City Schools

Exploring the Evidence on Virtual and Blended Learning

Chelsea Farley (2020).

The Research Alliance has developed an overview of research and practical guidance on strategies to implement remote teaching and learning, as well as strategies that combine virtual and in-class instruction. While not a complete summary of the relevant literature, our overview provides links to a variety of useful articles, resources, and reports. We hope this material can inform school and district leaders’ planning and support their ongoing assessment of what has and has not been effective, for whom, and under what conditions.

Key Takeaways from the Research Alliance’s Review

  • Eight months into the COVID-19 pandemic, there is still an enormous need for data and evidence to understand how the school closures that took place in NYC and around the country—and how the various approaches to reopening—have affected students’ academic, social/emotional, and health outcomes. New research is needed to inform critical policy and practice decisions. (Below we highlight specific kinds of data that would help answer the most pressing questions.)
  • Past research about online learning is limited and mostly focused on post-secondary and adult education. The studies that do exist in K-12 education find that students participating in online learning generally perform similarly to or worse than peers who have access to traditional face-to-face instruction (with programs that are 100% online faring worse than blended learning approaches). It is important to note that this research typically compares online learning with regular classroom instruction—rather than comparing it to no instruction at all—and that these studies took place under dramatically different conditions than those resulting from COVID-19.
  • Studies of blended learning, personalized learning, and specific technology-based tools and programs provide hints about successful approaches, but also underscore substantial “fuzziness” around the definition of these terms; major challenges to high-quality implementation; and a lack of rigorous impact research.
  • Existing reviews of remote learning point to several practical lessons: teaching quality is more important than how lessons are delivered (e.g., “clear explanations, scaffolding and feedback”);
  • Ensuring access to technology is key, particularly for disadvantaged students and families;
  • Peer interactions can provide motivation and improve learning outcomes (e.g., “peer marking and feedback, sharing models of good work,” and opportunities for collaboration and live discussions of content);
  • Supporting students to work independently can improve learning outcomes (e.g., “prompting pupils to reflect on their work or to consider the strategies they will use if they get stuck,” checklists or daily plans); and
  • Different approaches to remote learning suit different tasks and types of content.

Our overview highlights these and other lessons from dozens of relevant studies. It also underscores the need for more rigorous evidence about the implementation and impact of different approaches to remote and blended learning, particularly in the context of the current pandemic. To begin to fill these knowledge gaps, the Research Alliance strongly encourages schools and districts—including the NYC Department of Education—to collect, analyze, and share data about:

  • COVID-19 testing results,
  • Professional development aimed at helping teachers implement remote and blended learning,
  • Students’ attendance and engagement (online and in person),
  • Students’ social and emotional wellbeing,
  • Students’ and families’ experiences with remote and blended instruction,
  • Teachers’ experiences with remote and blended instruction, and—critically—
  • What students are learning, over time.

All of this should be done with an eye toward pre-existing inequalities—especially differences related to race/ethnicity, poverty, home language, and disability. These data are crucial for understanding how COVID-19 has affected the educational trajectories of different groups of students and for developing strong policy and practice responses. 

Read our full overview here. This document was initially released in May and updated in November of 2020.


Original Research Article: Insights Into Students’ Experiences and Perceptions of Remote Learning Methods: From the COVID-19 Pandemic to Best Practice for the Future

  • 1 Minerva Schools at Keck Graduate Institute, San Francisco, CA, United States
  • 2 Ronin Institute for Independent Scholarship, Montclair, NJ, United States
  • 3 Department of Physics, University of Toronto, Toronto, ON, Canada

This spring, students across the globe transitioned from in-person classes to remote learning as a result of the COVID-19 pandemic. This unprecedented change to undergraduate education saw institutions adopting multiple online teaching modalities and instructional platforms. We sought to understand students’ experiences with and perspectives on those methods of remote instruction in order to inform pedagogical decisions during the current pandemic and in future development of online courses and virtual learning experiences. Our survey gathered quantitative and qualitative data regarding students’ experiences with synchronous and asynchronous methods of remote learning and specific pedagogical techniques associated with each. A total of 4,789 undergraduate participants representing institutions across 95 countries were recruited via Instagram. We find that most students prefer synchronous online classes, and students whose primary mode of remote instruction has been synchronous report being more engaged and motivated. Our qualitative data show that students miss the social aspects of learning on campus, and it is possible that synchronous learning helps to mitigate some feelings of isolation. Students whose synchronous classes include active-learning techniques (which are inherently more social) report significantly higher levels of engagement, motivation, enjoyment, and satisfaction with instruction. Respondents’ recommendations for changes emphasize increased engagement, interaction, and student participation. We conclude that active-learning methods, which are known to increase motivation, engagement, and learning in traditional classrooms, also have a positive impact in the remote-learning environment. Integrating these elements into online courses will improve the student experience.

Introduction

The COVID-19 pandemic has dramatically changed the demographics of online students. Previously, almost all students engaged in online learning elected the online format, starting with individual online courses in the mid-1990s through today’s robust online degree and certificate programs. These students prioritize convenience, flexibility, and the ability to work while studying, and they are older than traditional college-age students ( Harris and Martin, 2012 ; Levitz, 2016 ). These students also find asynchronous elements of a course more useful than synchronous elements ( Gillingham and Molinari, 2012 ). In contrast, students who choose to take courses in person prioritize face-to-face instruction and connection with others and skew considerably younger ( Harris and Martin, 2012 ). This leaves open the question of whether students who prefer to learn in person but are forced to learn remotely will prefer synchronous or asynchronous methods. One study of student preferences following a switch to remote learning during the COVID-19 pandemic indicates that students enjoy synchronous over asynchronous course elements and find them more effective ( Gillis and Krull, 2020 ). Now that millions of traditional in-person courses have transitioned online, our survey expands the data on student preferences and explores whether those preferences align with pedagogical best practices.

An extensive body of research has explored what instructional methods improve student learning outcomes ( Fink, 2013 ). Considerable evidence indicates that active-learning or student-centered approaches result in better learning outcomes than passive-learning or instructor-centered approaches, both in-person and online ( Freeman et al., 2014 ; Chen et al., 2018 ; Davis et al., 2018 ). Active-learning approaches include student activities or discussion in class, whereas passive-learning approaches emphasize extensive exposition by the instructor ( Freeman et al., 2014 ). Constructivist learning theories argue that students must be active participants in creating their own learning, and that listening to expert explanations is seldom sufficient to trigger the neurological changes necessary for learning ( Bostock, 1998 ; Zull, 2002 ). Some studies conclude that, while students learn more via active learning, they may report greater perceptions of their learning and greater enjoyment when passive approaches are used ( Deslauriers et al., 2019 ). We examine student perceptions of remote learning experiences in light of these previous findings.

In this study, we administered a survey focused on student perceptions of remote learning in late May 2020 through the social media account of @unjadedjade to a global population of English-speaking undergraduate students representing institutions across 95 countries. We aim to explore how students were being taught, the relationship between pedagogical methods and student perceptions of their experience, and the reasons behind those perceptions. Here we present an initial analysis of the results and share our data set for further inquiry. We find that positive student perceptions correlate with synchronous courses that employ a variety of interactive pedagogical techniques, and that students overwhelmingly suggest behavioral and pedagogical changes that increase social engagement and interaction. We argue that these results support the importance of active learning in an online environment.

Materials and Methods

Participant Pool

Students were recruited through the Instagram account @unjadedjade. This social media platform, run by influencer Jade Bowler, focuses on education, effective study tips, and ethical lifestyle, and promotes a positive mindset. For this reason, the audience is presumably academically inclined and interested in self-improvement. The survey was posted to her account and received 10,563 responses within the first 36 h. Here we analyze the 4,789 of those responses that came from undergraduates. While we did not collect demographic or identifying information, we suspect that women are overrepresented in these data, as followers of @unjadedjade are 80% women. A large minority of respondents were from the United Kingdom, as Jade Bowler is a British influencer. Specifically, 43.3% of participants attend United Kingdom institutions, followed by 6.7% attending university in the Netherlands, 6.1% in Germany, 5.8% in the United States, and 4.2% in Australia. Ninety additional countries are represented in these data (see Supplementary Figure 1 ).

Survey Design

The purpose of this survey is to learn about students’ instructional experiences following the transition to remote learning in the spring of 2020.

This survey was initially created for a student assignment for the undergraduate course Empirical Analysis at Minerva Schools at KGI. That version served as a robust pre-test and allowed for identification of the primary online platforms used, and the four primary modes of learning: synchronous (live) classes, recorded lectures and videos, uploaded or emailed materials, and chat-based communication. We did not adapt any open-ended questions based on the pre-test survey to avoid biasing the results and only corrected language in questions for clarity. We used these data along with an analysis of common practices in online learning to revise the survey. Our revised survey asked students to identify the synchronous and asynchronous pedagogical methods and platforms that they were using for remote learning. Pedagogical methods were drawn from literature assessing active and passive teaching strategies in North American institutions ( Fink, 2013 ; Chen et al., 2018 ; Davis et al., 2018 ). Open-ended questions asked students to describe why they preferred certain modes of learning and how they could improve their learning experience. Students also reported on their affective response to learning and participation using a Likert scale.

The revised survey also asked whether students had responded to the earlier survey. No significant differences were found between the responses of those answering for the first and second times (data not shown). See Supplementary Appendix 1 for survey questions. Survey data were collected from May 21 to May 23, 2020.

Qualitative Coding

We applied a qualitative coding framework adapted from Gale et al. (2013) to analyze student responses to open-ended questions. Four researchers read several hundred responses and noted themes that surfaced. We then developed a list of themes inductively from the survey data and deductively from the literature on pedagogical practice ( Garrison et al., 1999 ; Zull, 2002 ; Fink, 2013 ; Freeman et al., 2014 ). The initial codebook was revised collaboratively based on feedback from researchers after coding 20–80 qualitative comments each. Before coding their assigned questions, alignment was examined through coding of 20 additional responses. Researchers aligned in identifying the same major themes. Discrepancies in terms identified were resolved through discussion. Researchers continued to meet weekly to discuss progress and alignment. The majority of responses were coded by a single researcher using the final codebook ( Supplementary Table 1 ). All responses to questions 3 (4,318 responses) and 8 (4,704 responses), and 2,512 of 4,776 responses to question 12 were analyzed. Valence was also indicated where necessary (i.e., positive or negative discussion of terms). This paper focuses on the most prevalent themes from our initial analysis of the qualitative responses. The corresponding author reviewed codes to ensure consistency and accuracy of reported data.

Statistical Analysis

The survey included two sets of Likert-scale questions, one consisting of a set of six statements about students’ perceptions of their experiences following the transition to remote learning ( Table 1 ). For each statement, students indicated their level of agreement with the statement on a five-point scale ranging from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). The second set asked the students to respond to the same set of statements, but about their retroactive perceptions of their experiences with in-person instruction before the transition to remote learning. This set was not the subject of our analysis but is present in the published survey results. To explore correlations among student responses, we used CrossCat analysis to calculate the probability of dependence between Likert-scale responses ( Mansinghka et al., 2016 ).
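The CrossCat model referenced above is a Bayesian nonparametric method; as a rough classical stand-in (not the authors’ analysis), pairwise Spearman rank correlations give a quick sense of how strongly the Likert items co-vary. The response matrix below is a hypothetical placeholder.

```python
# Rough stand-in for examining dependence among Likert items (NOT CrossCat,
# which is a Bayesian nonparametric model): pairwise Spearman rank correlations.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical matrix: rows = respondents, columns = the six Likert items
# (enjoyment, motivation, satisfaction, engagement, participation, distraction).
responses = np.array([
    [4, 4, 3, 4, 3, 2],
    [2, 3, 2, 2, 2, 4],
    [5, 5, 4, 5, 4, 3],
    [3, 2, 3, 3, 3, 3],
    [4, 5, 4, 4, 5, 2],
])

rho, p = spearmanr(responses, axis=0)  # returns 6x6 correlation and p-value matrices
print(np.round(rho, 2))
```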

Table 1. Likert-scale questions.

Mean values are calculated based on the numerical scores associated with each response. Measures of statistical significance for comparisons between different subgroups of respondents were calculated using a two-sided Mann-Whitney U-test, and p-values reported here are based on this test statistic. We report effect sizes in pairwise comparisons using the common-language effect size, f, which is the probability that the response from a random sample from subgroup 1 is greater than the response from a random sample from subgroup 2. We also examined the effects of different modes of remote learning and technological platforms using ordinal logistic regression. With the exception of the mean values, all of these analyses treat Likert-scale responses as ordinal-scale, rather than interval-scale, data.
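As an illustration of these comparisons, the sketch below (hypothetical Likert responses for two subgroups, not the authors’ analysis code) computes a two-sided Mann-Whitney U test with SciPy and the common-language effect size f as the probability that a random response from subgroup 1 exceeds one from subgroup 2, counting ties as one half.

```python
# Minimal sketch: Mann-Whitney U test plus common-language effect size f
# for two subgroups of 1-5 Likert responses (hypothetical data).
import numpy as np
from scipy.stats import mannwhitneyu

group1 = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3])  # e.g., synchronous primary mode
group2 = np.array([3, 2, 4, 3, 2, 3, 4, 2, 3, 3])  # e.g., asynchronous primary mode

u_stat, p_value = mannwhitneyu(group1, group2, alternative="two-sided")

# f = P(random draw from group1 > random draw from group2); ties count as 0.5
diffs = group1[:, None] - group2[None, :]
f = (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size

print(f"U = {u_stat:.1f}, p = {p_value:.4f}, f = {f:.3f}")
```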

Students Prefer Synchronous Class Sessions

Students were asked to identify their primary mode of learning given four categories of remote course design that emerged from the pilot survey and across literature on online teaching: live (synchronous) classes, recorded lectures and videos, emailed or uploaded materials, and chats and discussion forums. While 42.7% (n = 2,045) of students identified live classes as their primary mode of learning, 54.6% (n = 2,613) preferred this mode ( Figure 1 ). Both recorded lectures and live classes were preferred over uploaded materials (6.22%, n = 298) and chat (3.36%, n = 161).

Figure 1. Actual (A) and preferred (B) primary modes of learning.

In addition to a preference for live classes, students whose primary mode was synchronous were more likely to enjoy the class, feel motivated and engaged, be satisfied with instruction and report higher levels of participation ( Table 2 and Supplementary Figure 2 ). Regardless of primary mode, over two-thirds of students reported they are often distracted during remote courses.

Table 2. The effect of synchronous vs. asynchronous primary modes of learning on student perceptions.

Variation in Pedagogical Techniques for Synchronous Classes Results in More Positive Perceptions of the Student Learning Experience

To survey the use of passive vs. active instructional methods, students reported the pedagogical techniques used in their live classes. Among the synchronous methods, we identify three different categories ( National Research Council, 2000 ; Freeman et al., 2014 ). Passive methods (P) include lectures, presentations, and explanation using diagrams, white boards, and/or other media. These methods all rely on instructor delivery rather than student participation. Our next category represents active learning through primarily one-on-one interactions (A). The methods in this group are in-class assessment, question-and-answer (Q&A), and classroom chat. Group interactions (F) included classroom discussions and small-group activities. Given these categories, Mann-Whitney U pairwise comparisons between the 7 possible combinations and Likert-scale responses about student experience showed that the use of a variety of methods resulted in higher ratings of experience than the use of a single method, regardless of whether that single method was active or passive ( Table 3 ). Indeed, students whose classes used methods from each category (PAF) had higher ratings of enjoyment, motivation, and satisfaction with instruction than those whose classes used any single method ( p < 0.0001), and also reported higher levels of participation and engagement compared to students whose only method was passive (P) or active through one-on-one interactions (A) ( p < 0.00001). Student ratings of distraction were not significantly different for any comparison. Given that sets of Likert responses often appeared significant together in these comparisons, we ran a CrossCat analysis to look at the probability of dependence across Likert responses. Responses have a high probability of dependence on each other, limiting what we can claim about any discrete response ( Supplementary Figure 3 ).

Table 3. Comparison of combinations of synchronous methods on student perceptions. Effect size (f).

Mann-Whitney U pairwise comparisons were also used to check whether improvement in student experience was associated with the number of methods used vs. the variety of types of methods. For every comparison, we found that more methods resulted in higher scores on all Likert measures except distraction ( Table 4 ). Even the comparison between four or fewer methods and more than four methods resulted in a 59% chance that the latter group enjoyed the courses more ( p < 0.00001) and a 60% chance that they felt more motivated to learn ( p < 0.00001). Students who selected more than four methods ( n = 417) were also 65.1% ( p < 0.00001), 62.9% ( p < 0.00001), and 64.3% ( p < 0.00001) more satisfied with instruction, engaged, and actively participating, respectively. Therefore, there was an overlap between how the number and variety of methods influenced students’ experiences. Since the number of techniques per category is 2–3, we cannot fully disentangle the effect of number vs. variety. Pairwise comparisons looking at subsets of data with 2–3 methods from a single group vs. 2–3 methods across groups controlled for this but had low sample numbers in most groups and resulted in no significant findings (data not shown). Therefore, from the data we have in our survey, there seems to be an interdependence between the number and variety of methods on students’ learning experiences.

Table 4. Comparison of the number of synchronous methods on student perceptions. Effect size (f).

Variation in Asynchronous Pedagogical Techniques Results in More Positive Perceptions of the Student Learning Experience

Along with synchronous pedagogical methods, students reported the asynchronous methods that were used for their classes. We divided these methods into three main categories and conducted pairwise comparisons. Learning methods include video lectures, video content, and posted study materials. Interacting methods include discussion/chat forums, live office hours, and email Q&A with professors. Testing methods include assignments and exams. Our results again show the importance of variety in students’ perceptions ( Table 5 ). For example, compared to providing learning materials only, providing learning materials, interaction, and testing improved enjoyment ( f = 0.546, p < 0.001), motivation ( f = 0.553, p < 0.0001), satisfaction with instruction ( f = 0.596, p < 0.00001), engagement ( f = 0.572, p < 0.00001) and active participation ( f = 0.563, p < 0.00001) (row 6). Similarly, compared to just being interactive with conversations, the combination of all three methods improved five out of six indicators, except for distraction in class (row 11).

Table 5. Comparison of combinations of asynchronous methods on student perceptions. Effect size (f).

Ordinal logistic regression was used to assess the likelihood that the platforms students used predicted student perceptions ( Supplementary Table 2 ). Platform choices were based on the answers to open-ended questions in the pre-test survey. The synchronous and asynchronous methods used were consistently more predictive of Likert responses than the specific platforms. Likewise, distraction continued to be our outlier with no differences across methods or platforms.
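For readers who wish to run a comparable analysis, the sketch below (hypothetical column names and data, not the study’s code) fits a proportional-odds ordinal logistic regression with statsmodels, predicting a 1–5 Likert outcome from indicator variables for pedagogical methods and platform. In practice the predictors would be the full set of method and platform indicators gathered by the survey.

```python
# Minimal sketch: ordinal logistic (proportional-odds) regression on a Likert
# outcome, using hypothetical indicator predictors for methods and platform.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "engagement":      [4, 2, 5, 3, 4, 1, 5, 3, 2, 4],  # Likert response (1-5)
    "uses_discussion": [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],  # active-method indicator
    "uses_lecture":    [1, 1, 1, 1, 0, 1, 0, 1, 1, 0],  # passive-method indicator
    "platform_zoom":   [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],  # platform indicator
})

model = OrderedModel(
    df["engagement"],
    df[["uses_discussion", "uses_lecture", "platform_zoom"]],
    distr="logit",  # logit link gives the proportional-odds model
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # coefficients show how each predictor shifts the cumulative odds
```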

Students Prefer In-Person and Synchronous Online Learning Largely Due to Social-Emotional Reasoning

As expected, 86.1% (4,123) of survey participants report a preference for in-person courses, while 13.9% (666) prefer online courses. When asked to explain the reasons for their preference, students who prefer in-person courses most often mention the importance of social interaction (693 mentions), engagement (639 mentions), and motivation (440 mentions). These students are also more likely to mention a preference for a fixed schedule (185 mentions) vs. a flexible schedule (2 mentions).

In addition to identifying social reasons for their preference for in-person learning, students’ suggestions for improvements in online learning focus primarily on increasing interaction and engagement, with 845 mentions of live classes, 685 mentions of interaction, and 126 calls for increased participation, as well as related suggestions such as, “Smaller teaching groups for live sessions so that everyone is encouraged to talk as some people don’t say anything and don’t participate in group work,” and “Make it less of the professor reading the pdf that was given to us and more interaction.”

Students who prefer online learning primarily cite independence and flexibility (214 mentions) and reasons related to anxiety and discomfort in in-person settings (41 mentions). Anxiety was mentioned only 12 times in the much larger group that prefers in-person learning.

The preference for synchronous vs. asynchronous modes of learning follows similar trends ( Table 6 ). Students who prefer live classes mention engagement and interaction most often while those who prefer recorded lectures mention flexibility.

Table 6. Most prevalent themes for students based on their preferred mode of remote learning.

Student Perceptions Align With Research on Active Learning

The first, and most robust, conclusion is that incorporation of active-learning methods correlates with more positive student perceptions of affect and engagement. We can see this clearly in the substantial differences on a number of measures, where students whose classes used only passive-learning techniques reported lower levels of engagement, satisfaction, participation, and motivation when compared with students whose classes incorporated at least some active-learning elements. This result is consistent with prior research on the value of active learning ( Freeman et al., 2014 ).

Though research shows that student learning improves in active-learning classes on campus, student perceptions of their learning, enjoyment, and satisfaction with instruction are often lower in active-learning courses ( Deslauriers et al., 2019 ). Our finding that students rate enjoyment and satisfaction with instruction higher for active learning online suggests that the preference for passive lectures on campus relies on elements outside of the lecture itself. These might include the lecture hall environment, the social physical presence of peers, or the normalization of passive lectures as the expected mode for on-campus classes. This implies that there may be more buy-in for active learning online vs. in-person.

A second result from our survey is that student perceptions of affect and engagement are associated with students experiencing a greater diversity of learning modalities. We see this in two different results. First, in addition to the fact that classes that include active learning outperform classes that rely solely on passive methods, we find that on all measures besides distraction, the highest student ratings are associated with a combination of active and passive methods. Second, we find that these higher scores are associated with classes that make use of a larger number of different methods.

This second result suggests that students benefit from classes that make use of multiple different techniques, possibly invoking a combination of passive and active methods. However, it is unclear from our data whether this effect is associated specifically with combining active and passive methods, or if it is associated simply with the use of multiple different methods, irrespective of whether those methods are active, passive, or some combination. The problem is that the number of methods used is confounded with the diversity of methods (e.g., it is impossible for a classroom using only one method to use both active and passive methods). In an attempt to address this question, we looked separately at the effect of number and diversity of methods while holding the other constant. Across a large number of such comparisons, we found few statistically significant differences, which may be a consequence of the fact that each comparison focused on a small subset of the data.

Thus, our data suggest that using a greater diversity of learning methods in the classroom may lead to better student outcomes. This is supported by research on student attention span, which suggests varying delivery after 10–15 min to retain students’ attention ( Bradbury, 2016 ). This is likely even more relevant for online learning, where students report high levels of distraction across methods, modalities, and platforms. Given that number and variety are key, and there are few passive-learning methods, we can assume that some combination of methods that includes active learning improves student experience. However, it is not clear whether we should predict that this benefit would come simply from increasing the number of different methods used, or if there are benefits specific to combining particular methods. Disentangling these effects would be an interesting avenue for future research.

Students Value Social Presence in Remote Learning

Student responses across our open-ended survey questions show a striking difference in the reasons for their preferences compared with traditional online learners, who prefer flexibility ( Harris and Martin, 2012 ; Levitz, 2016 ). Students’ reasons for preferring in-person classes and synchronous remote classes emphasize the desire for social interaction and echo the research on the importance of social presence for learning in online courses.

Short et al. (1976) outlined Social Presence Theory to describe the degree to which students perceive one another as real across different telecommunication media. These ideas translate directly to questions surrounding online education and pedagogy, particularly educational design for networked learning, where connection across learners and instructors improves learning outcomes, especially with “Human-Human interaction” ( Goodyear, 2002 , 2005 ; Tu, 2002 ). They also bear on the choice between asynchronous and synchronous learning: Tu reports that students respond positively to synchronous “real-time discussion in pleasantness, responsiveness and comfort with familiar topics,” and that real-time discussions edge out asynchronous computer-mediated communications in immediacy of replies and responsiveness. Tu’s research indicates that students perceive more interaction with synchronous media such as discussions because of this immediacy, which enhances social presence and supports the use of active-learning techniques ( Gunawardena, 1995 ; Tu, 2002 ). Thus, verbal immediacy and communities with face-to-face interactions, such as those in synchronous learning classrooms, lessen the psychological distance between communicators online and can simultaneously improve instructional satisfaction and reported learning ( Gunawardena and Zittle, 1997 ; Richardson and Swan, 2019 ; Shea et al., 2019 ). While synchronous learning may not be ideal for traditional online students and a subset of our participants, this research suggests that non-traditional online learners are more likely to appreciate the value of social presence.

Social presence also connects to the importance of social connections in learning. Too often, current systems of education emphasize course content in narrow ways that fail to embrace the full humanity of students and instructors ( Gay, 2000 ). With the COVID-19 pandemic leading to further social isolation for many students, the importance of social presence in courses, including live interactions that build social connections with classmates and with instructors, may be increased.

Limitations of These Data

Our undergraduate data consisted of 4,789 responses from 95 different countries, an unprecedented global scale for research on online learning. However, since respondents were followers of @unjadedjade, who focuses on learning and wellness, they may not represent the average student. Survey samples are shaped by their recruitment techniques; our recruitment bias likely resulted in more robust and thoughtful responses to free-response questions and may have influenced the preference for synchronous classes. It is unlikely to have changed students’ reporting of remote-learning pedagogical methods, since those are out of students’ control.

Though we surveyed a global population, our design was rooted in literature assessing pedagogy in North American institutions. Therefore, our survey may not represent a global array of teaching practices.

This survey was sent out during the initial phase of emergency remote learning for most countries. This has two important implications. First, perceptions of remote learning may be clouded by complications of the pandemic, which has increased social, mental, and financial stresses globally. Future research could disaggregate the impact of the pandemic from students’ learning experiences with a more detailed and holistic analysis of the impact of the pandemic on students.

Second, instructors, students and institutions were not able to fully prepare for effective remote education in terms of infrastructure, mentality, curriculum building, and pedagogy. Therefore, student experiences reflect this emergency transition. Single-modality courses may correlate with instructors who lacked the resources or time to learn or integrate more than one modality. Regardless, the main insights of this research align well with the science of teaching and learning and can be used to inform both education during future emergencies and course development for online programs that wish to attract traditional college students.

Global Student Voices Improve Our Understanding of the Experience of Emergency Remote Learning

Our survey shows that global student perspectives on remote learning agree with pedagogical best practices, breaking with the often-reported negative reactions of students to these practices in traditional classrooms ( Shekhar et al., 2020 ). Our analysis of open-ended questions and preferences shows that a majority of students prefer pedagogical approaches that promote both active learning and social interaction. These results can serve as a guide to instructors as they design online classes, especially for students whose first choice may be in-person learning. Indeed, with the near ubiquitous adoption of remote learning during the COVID-19 pandemic, remote learning may become the default for colleges during temporary emergencies. This has already happened at the K-12 level as snow days become virtual learning days ( Aspergren, 2020 ).

In addition to informing pedagogical decisions, the results of this survey can be used to inform future research. Although we survey a global population, our recruitment method selected for students who are English speakers, likely majority female, and have an interest in self-improvement. Repeating this study with a more diverse and representative sample of university students could improve the generalizability of our findings. While the use of a variety of pedagogical methods is better than a single method, more research is needed to determine what the optimal combinations and implementations are for courses in different disciplines. Though we identified social presence as the major trend in student responses, the over 12,000 open-ended responses from students could be analyzed in greater detail to gain a more nuanced understanding of student preferences and suggestions for improvement. Likewise, outliers could shed light on the diversity of student perspectives that we may encounter in our own classrooms. Beyond this, our findings can inform research that collects demographic data and/or measures learning outcomes to understand the impact of remote learning on different populations.

Importantly, this paper focuses on a subset of responses from the full data set, which includes 10,563 students from secondary school, undergraduate, graduate, or professional school and additional questions about in-person learning. Our full data set is available for anyone to download for continued exploration: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/2TGOPH.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

GS: project lead, survey design, qualitative coding, writing, review, and editing. TN: data analysis, writing, review, and editing. CN and PB: qualitative coding. JW: data analysis, writing, and editing. CS: writing, review, and editing. EV and KL: original survey design and qualitative coding. PP: data analysis. JB: original survey design and survey distribution. HH: data analysis. MP: writing. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We want to thank Minerva Schools at KGI for providing funding for summer undergraduate research internships. We also want to thank Josh Fost and Christopher V. H.-H. Chen for discussion that helped shape this project.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2021.647986/full#supplementary-material

Aspergren, E. (2020). Snow Days Canceled Because of COVID-19 Online School? Not in These School Districts. USA Today, Education section. Available online at: https://www.usatoday.com/story/news/education/2020/12/15/covid-school-canceled-snow-day-online-learning/3905780001/ (accessed December 15, 2020).

Bostock, S. J. (1998). Constructivism in mass higher education: a case study. Br. J. Educ. Technol. 29, 225–240. doi: 10.1111/1467-8535.00066

Bradbury, N. A. (2016). Attention span during lectures: 8 seconds, 10 minutes, or more? Adv. Physiol. Educ. 40, 509–513. doi: 10.1152/advan.00109.2016

Chen, B., Bastedo, K., and Howard, W. (2018). Exploring best practices for online STEM courses: active learning, interaction & assessment design. Online Learn. 22, 59–75. doi: 10.24059/olj.v22i2.1369

Davis, D., Chen, G., Hauff, C., and Houben, G.-J. (2018). Activating learning at scale: a review of innovations in online learning strategies. Comput. Educ. 125, 327–344. doi: 10.1016/j.compedu.2018.05.019

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., and Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proc. Natl. Acad. Sci. 116, 19251–19257. doi: 10.1073/pnas.1821936116

Fink, L. D. (2013). Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. Somerset, NJ: John Wiley & Sons, Incorporated.

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., et al. (2014). Active learning increases student performance in science, engineering, and mathematics. Proc. Natl. Acad. Sci. 111, 8410–8415. doi: 10.1073/pnas.1319030111

Gale, N. K., Heath, G., Cameron, E., Rashid, S., and Redwood, S. (2013). Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med. Res. Methodol. 13:117. doi: 10.1186/1471-2288-13-117

Garrison, D. R., Anderson, T., and Archer, W. (1999). Critical inquiry in a text-based environment: computer conferencing in higher education. Internet High. Educ. 2, 87–105. doi: 10.1016/S1096-7516(00)00016-6

Gay, G. (2000). Culturally Responsive Teaching: Theory, Research, and Practice. Multicultural Education Series. New York, NY: Teachers College Press.

Gillingham and Molinari, C. (2012). Online courses: student preferences survey. Internet Learn. 1, 36–45. doi: 10.18278/il.1.1.4

Gillis, A., and Krull, L. M. (2020). COVID-19 remote learning transition in spring 2020: class structures, student perceptions, and inequality in college courses. Teach. Sociol. 48, 283–299. doi: 10.1177/0092055X20954263

Goodyear, P. (2002). “Psychological foundations for networked learning,” in Networked Learning: Perspectives and Issues. Computer Supported Cooperative Work , eds C. Steeples and C. Jones (London: Springer), 49–75. doi: 10.1007/978-1-4471-0181-9_4

Goodyear, P. (2005). Educational design and networked learning: patterns, pattern languages and design practice. Australas. J. Educ. Technol. 21, 82–101. doi: 10.14742/ajet.1344

Gunawardena, C. N. (1995). Social presence theory and implications for interaction and collaborative learning in computer conferences. Int. J. Educ. Telecommun. 1, 147–166.

Gunawardena, C. N., and Zittle, F. J. (1997). Social presence as a predictor of satisfaction within a computer mediated conferencing environment. Am. J. Distance Educ. 11, 8–26. doi: 10.1080/08923649709526970

Harris, H. S., and Martin, E. (2012). Student motivations for choosing online classes. Int. J. Scholarsh. Teach. Learn. 6, 1–8. doi: 10.20429/ijsotl.2012.060211

Levitz, R. N. (2016). 2015-16 National Online Learners Satisfaction and Priorities Report. Cedar Rapids: Ruffalo Noel Levitz, 12.

Mansinghka, V., Shafto, P., Jonas, E., Petschulat, C., Gasner, M., and Tenenbaum, J. B. (2016). CrossCat: a fully Bayesian nonparametric method for analyzing heterogeneous, high dimensional data. J. Mach. Learn. Res. 17, 1–49. doi: 10.1007/978-0-387-69765-9_7

National Research Council (2000). How People Learn: Brain, Mind, Experience, and School: Expanded Edition. Washington, DC: National Academies Press, doi: 10.17226/9853

Richardson, J. C., and Swan, K. (2019). Examining social presence in online courses in relation to students’ perceived learning and satisfaction. Online Learn. 7, 68–88. doi: 10.24059/olj.v7i1.1864

Shea, P., Pickett, A. M., and Pelz, W. E. (2019). A Follow-up investigation of ‘teaching presence’ in the suny learning network. Online Learn. 7, 73–75. doi: 10.24059/olj.v7i2.1856

Shekhar, P., Borrego, M., DeMonbrun, M., Finelli, C., Crockett, C., and Nguyen, K. (2020). Negative student response to active learning in STEM classrooms: a systematic review of underlying reasons. J. Coll. Sci. Teach. 49, 45–54.

Short, J., Williams, E., and Christie, B. (1976). The Social Psychology of Telecommunications. London: John Wiley & Sons.

Tu, C.-H. (2002). The measurement of social presence in an online learning environment. Int. J. E Learn. 1, 34–45. doi: 10.17471/2499-4324/421

Zull, J. E. (2002). The Art of Changing the Brain: Enriching Teaching by Exploring the Biology of Learning , 1st Edn. Sterling, VA: Stylus Publishing.

Keywords : online learning, COVID-19, active learning, higher education, pedagogy, survey, international

Citation: Nguyen T, Netto CLM, Wilkins JF, Bröker P, Vargas EE, Sealfon CD, Puthipiroj P, Li KS, Bowler JE, Hinson HR, Pujar M and Stein GM (2021) Insights Into Students’ Experiences and Perceptions of Remote Learning Methods: From the COVID-19 Pandemic to Best Practice for the Future. Front. Educ. 6:647986. doi: 10.3389/feduc.2021.647986

Received: 30 December 2020; Accepted: 09 March 2021; Published: 09 April 2021.

Copyright © 2021 Nguyen, Netto, Wilkins, Bröker, Vargas, Sealfon, Puthipiroj, Li, Bowler, Hinson, Pujar and Stein. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Geneva M. Stein, [email protected]

This article is part of the Research Topic “Covid-19 and Beyond: From (Forced) Remote Teaching and Learning to ‘The New Normal’ in Higher Education.”


Open access | Published: 08 December 2022

Enhancing learning and retention with distinctive virtual reality environments and mental context reinstatement

  • Joey Ka-Yee Essoe (ORCID: 0000-0002-7802-4200) 1,2,
  • Nicco Reggente (ORCID: 0000-0002-0511-9962) 2,3,
  • Ai Aileen Ohno (ORCID: 0000-0002-5577-480X) 2,4,
  • Younji Hera Baek 2,5,
  • John Dell’Italia 2,6 &
  • Jesse Rissman (ORCID: 0000-0001-8889-5539) 2,7,8,9

npj Science of Learning volume 7, Article number: 31 (2022)


Subjects: Learning and memory

Memory is inherently context-dependent: internal and environmental cues become bound to learnt information, and the later absence of these cues can impair recall. Here, we developed an approach to leverage context-dependence to optimise learning of challenging, interference-prone material. While navigating through desktop virtual reality (VR) contexts, participants learnt 80 foreign words in two phonetically similar languages. Those participants who learnt each language in its own unique context showed reduced interference and improved one-week retention (92%), relative to those who learnt the languages in the same context (76%)—however, this advantage was only apparent if participants subjectively experienced VR-based contexts as “real” environments. A follow-up fMRI experiment confirmed that reinstatement of brain activity patterns associated with the original encoding context during word retrieval was associated with improved recall performance. These findings establish that context-dependence can be harnessed with VR to optimise learning and showcase the important role of mental context reinstatement.


Introduction

Considerable research has documented that human memory is inherently context-dependent 1 , 2 . During learning, contextual cues—whether environmental (e.g., a specific room) or internal (e.g., an emotional state)—become bound to the information being encoded. Although some of these cues may be relevant to the to-be-learnt materials, many will be seemingly irrelevant. Regardless of their relevance, the later presence of these same contextual cues can facilitate memory recall, whereas their absence can hinder recall 3 . Perhaps the most iconic example of this effect is Godden & Baddeley’s 4 demonstration that scuba divers were better able to recall words that they had studied underwater when tested underwater, and better able to recall words studied on land when tested on land, but were impaired when these study and test contexts were mismatched. Context effects can be observed with far less dramatic environmental changes (e.g., being tested in a different room 5 , or in a more quiet/noisy environment 6 ), and are most robust when memory is probed with recall rather than recognition tests 1 , 7 .

One situation where context effects can be particularly impactful for learning is when multiple sets of information are studied in close temporal proximity. When the to-be-learnt content is similar across these sets, the build-up of interference can make it difficult to maintain clear mental representations of each set and cause confusion between the sets. For instance, reading two conceptually similar scientific papers within the same hour may lead one to mentally misattribute a finding of one paper to another. Likewise, while traveling to a place where two phonetically similar languages are spoken, it might be challenging to keep vocabulary items in these two languages appropriately compartmentalised in one’s memory if they are studied on the same plane flight. Some research has shown that learning each information set in its own distinctive context can improve recall by reducing this type of interference 8 , 9 . Specifically, a distinctive context provides unique cues that will become bound to items from a given information set. This supports learners’ abilities to maintain separate mental representations, reducing interference between the sets. This context-induced benefit increases in magnitude when the contexts are more distinctive, and when fewer items are affiliated with each context 1 , 8 , 9 , 10 .

Although distinctive learning contexts have the potential to reduce interference, they run the risk of creating context-dependent associations that could hinder later recall under circumstances where those contextual cues are no longer present. Whenever individuals have the luxury of studying information and repeatedly taking practice tests on that information in a single context, they may acquire the information quickly and perform quite well without realising the extent to which they are using the contextual cues as a “crutch” to facilitate learning and retrieval 9 , 11 . Only when later struggling to recall the information in a new context—such as a foreign traveller trying to use vocabulary that had only ever been practiced in a classroom setting—does their reliance on this contextual crutch become apparent. In most real-world settings, it is impossible or impractical for learners to physically return to the original encoding context as a means to gain access to helpful retrieval cues. Fortunately, mental reinstatement—the act of vividly imagining oneself in the original encoding environment—presents one solution to promote information transfer across contexts. Indeed, mental reinstatement can be nearly as effective as physically returning to the learning context 2 , 12 . Thus context change-induced forgetting may be mitigated by mentally “returning” to the learning context during recall.

Learning protocols that harness the beneficial aspects of context-dependence while ameliorating the deleterious effects are likely to yield the best outcomes. How best to achieve this balance thus remains an active and important area of research. Designing and controlling distinct contexts in practice is challenging, for experimenters and learners alike. Manipulating one’s physical context can influence learning and recall, but doing so can be costly, time-consuming, and difficult to control. Background images 13 and videos 14 have been used as contexts in an effort to increase experimental control. While these can serve as proximate contextual cues in experiments, they do not allow navigation or immersion like real-world contexts, and thus ecological validity suffers 15 .

Virtual reality (VR) offers a powerful means to create immersive learning environments that are highly distinctive and well-controlled, in order to examine and exploit context-based memory modulation 15 , 16 . Indeed, one recent study used two distinctive VR-based contexts—one underwater and one on the surface of Mars 17 —to conceptually replicate Godden & Baddeley’s classic finding of context-dependent recall. When using VR environments as contexts, it is valuable to measure participants’ sense of presence 18 , 19 , 20 , 21 , which refers to their sense of experiencing a VR-based environment as a place that one has actually inhabited, rather than something that one was merely watching passively (e.g., “I feel like I am in this space station, walking around,” vs “I am watching this space station on a screen while sitting in a lab.”). If an individual does not perceive VR-based contexts as actual environments, then these contexts may have little or no effect on memory outcomes because the “contexts” themselves would not be subjectively valid.

Here, we aimed to leverage the benefits of context-dependence to enhance learning and retention. We chose to focus on foreign vocabulary learning as it is a domain of practical value to many people, while also being a paradigmatic paired associate learning task. To rigorously test this approach, we selected learning material to maximise potential interference and used a challenging recall test. English-speaking participants learnt the meanings and pronunciations of 80 foreign words from two phonetically similar Bantu languages: Swahili and Chinyanja. During testing, participants were prompted to verbally pronounce foreign words when cued with their English translations (note that this is far more difficult than being cued with the foreign word and recalling the English translation 22 ).

Two custom, first-person desktop VR environments served as contexts, which enabled maximal experimental control over the learning contexts and subsequent guided mental reinstatement. First, we investigated whether contextual support could improve learning outcomes by reducing interference and promoting transfer. To this end, participants were randomly assigned to one of two groups: a single-context ( n  = 24) group that learnt both languages in a single VR context, and a dual-context ( n  = 24) group that learnt each language in its own unique VR context. We hypothesised that dual-context participants would be better able to keep track of which translations went with which language and thus would show fewer intrusions (i.e., producing the Chinyanja translation of a word when cued to recall the Swahili translation), and greater long-term retention (as measured on a surprise recall test conducted one week later). Moreover, we predicted that the magnitude of these context effects might be contingent on whether participants subjectively experienced the VR-based contexts as actual environments they had inhabited (i.e., did they have a strong sense of presence?). Thus, a 10-item presence scale (range 1-5) from a prior study was used to measure the degree to which participants felt “as one” with their first-person avatar and experienced the VR as real environments 19 . To assess the role of mental context reinstatement, our paradigm explicitly cued participants to imagine themselves in a specified place prior to each vocabulary recall trial. This allowed us to measure the impact of context reinstatement congruency (i.e., whether they reinstated the same or different context in which they had learnt a given language) on recall performance. Finally, to further explicate a potential mechanism for contextually supported recall, we examined a separate group of dual-context participants ( n  = 22) during recall, using functional magnetic resonance imaging (fMRI) to provide a neural index of context-specific reinstatement on each retrieval trial 23 , 24 . We hypothesised that elevated reinstatement of brain activity patterns linked to the original encoding context would enhance the likelihood that participants would be able to successfully recall the cued foreign vocabulary item. Given the universal desire to develop protocols for memory enhancement across disciplines, this investigation holds considerable promise for fields such as cognitive research, pedagogy, and psychotherapies that involve therapeutic skill learning.

Initial learning: contextual crutch and desirable difficulties

Across two consecutive days, participants encoded a total of 80 foreign vocabulary items in two languages, in one learn-only round (Round 1), followed by three test-learn cycles (Rounds 2–4; retrieval attempts during these tests were scored as recall data for Times 1–3, T1–T3 ; see Fig. 1 , Fig. 2e , Methods, and Supplementary Video 1 ). They learnt 10 words in Swahili only, 10 in Chinyanja only, and 30 words in both languages. To induce contextual crutch effects, test-learn cycles occurred within the learning context(s) as participants navigated along a predetermined path (Fig. 2a–d ). To further bolster initial learning, we integrated a “desirable difficulties” technique 25 called expanding retrieval practice, in which the time interval between successive learning and testing opportunities progressively increased 26 . Differences between the single-context and dual-context groups were not expected to emerge during the initial learning stage, as the magnitude of context effects has been shown to increase with the length of the retention interval 1 .

Figure 1.

a Encoding tasks in VR-based contexts across Days 1 and 2. a1, In an underwater practice context, participants learnt VR navigation and received task instructions from “the teacher.” a2, Task Practice (under experimenter supervision). a3, Context A Encoding. In each of Context A’s nine named “rooms”, participants stood on a location marker and performed two clockwise rotations (720°), while imagining themselves as tourists who forgot their camera, trying to remember what it felt like to be there. a4, Language 1 Encoding. Participants remained in Context A to encode Language 1 (Rounds 1–3, 40 words per round). a5, Context B Encoding. a6, Language 2 Encoding (Rounds 1–3). All participants experienced the same procedures except for the context in which Language 2 was encoded. Single-context participants returned to Context A to encode Language 2, while dual-context participants remained in Context B to encode Language 2. On Day 2, participants performed Round 4 of Language 1 and Language 2 Encoding. b Day 2: short-delay recall (T4). After a short delay, participants were tested outside of the VR contexts, in the laboratory or MRI scanner. In each of 80 trials, participants first mentally reinstated an auditorily cued room from one context before recalling the foreign translation of a cued word. In congruent reinstatement trials, the mentally reinstated room was in the learning context of the cued word. In incongruent reinstatement trials, the mentally reinstated room was in the opposite context. c Day 8: one-week-delayed recall (T5). Participants were telephoned, ostensibly for an interview; experimenters then cued recall for all 80 foreign words. Image attribution: The VR environments and content depicted here were created by J.K.-Y.E or by Forde Davidson as commissioned by the research team, or were from the OpenSim community shared under the Creative Commons 0 License. The images of the telephone and computer monitor were modified from public domain images, and the image of the MRI scanner was provided by the UCLA Brain Mapping Center.

Figure 2

Two custom-built VR-based contexts were used in this study. a "Fairyland Garden" was a fantasy-fiction inspired context that was bright, verdant, and visually open, with lakes and wooden rooms open to the outdoors. b Fairyland Garden's predetermined path used in language encoding. This path's hints were bright green footsteps; its pedestals were tree stumps. c "Moon Base" was a science-fiction inspired context that was dark, rocky, and closed-in, with narrow hallways and artificially coloured metallic rooms; participants were confined indoors at all times. d Moon Base's predetermined path used in language encoding. This path's hints were bright yellow arrows; its pedestals were yellow stands, as shown in 2e. e Language encoding task. In each round of language encoding, participants interacted with 40 concrete objects representing each of the foreign words (e.g., a rooster), organised along a predetermined path. The VR environments were experienced through a first-person perspective (a visible avatar is only present in this figure for illustrative purposes). e1, Participants followed visual hints (e.g., arrows) to an object; these hints were transient and disappeared after use. After arriving at the object, participants first said its English name aloud (e.g., "rooster"), which was printed in floating text above the object. During Round 1 of each language, participants then 'clicked' the object. e2, During Rounds 2–4, participants first attempted to verbally recall the foreign words (T1–T3) before clicking the object. e3, When the object was clicked, participants would hear the foreign translation (e.g., the Swahili word "jogoo," meaning rooster) three times. They were to repeat it aloud each time. Then they clicked the object's pedestal to reveal transient path hints to the next object. Image attribution: The VR environments and content depicted here were created by J.K.-Y.E. or by Forde Davidson as commissioned by the research team, or were from the OpenSim community shared under the Creative Commons 0 License.

Across groups, participants recalled 42% (±17%) of the 80 foreign words after two exposures (T2); note that each "exposure" refers to encountering an object and hearing and repeating back its translation three times in rapid succession (Fig. 3). This learning rate was considerably higher than the expected range (22–26%) based on a previous study that used similar learning material (42 Swahili-English word pairs; no secondary foreign language was learnt in that study) but did not employ distinctive learning contexts (see Supplementary Discussion: D2 for additional discussion) 27 . After the third exposure to the foreign words, our participants were not tested until the following day (T3), and yet their recall performance remained robust at 42% (±17%). As expected, no group differences emerged during the initial learning stage (p > 0.05).

Figure 3

a Overall recall performance, split by context group and presence. b Main effect of mental reinstatement on T4 recall. c Main effect of context group on intrusions. d Interaction of context group and presence on one-week retention. * denotes statistical significance; error bars denote standard error of the mean.

Transfer and mental reinstatement

Transfer was measured by recall during a non-VR test (T4), which was the first test that occurred outside of the learning context. Across conditions, participants recalled 48% (±18%) in T4. A controlled mental reinstatement protocol was employed to maximise consistency across participants and across experiments (Fig. 4; see Methods). On each trial, participants were first cued to mentally reinstate a specific area within a given learning context (e.g., "Moon Base: Airlock"). Then, they were prompted by audio cues (e.g., "Swahili: dog") to attempt to covertly retrieve the appropriate foreign translation, and finally a beep cued them to verbally pronounce the word. Two mental reinstatement conditions were employed: congruent reinstatement (when the original learning context of the to-be-recalled word was mentally reinstated) and incongruent reinstatement (when a different context was mentally reinstated). During T4, congruent mental reinstatement trials exhibited significantly greater recall (52% ± 18%) than incongruent reinstatement trials (47% ± 19%; RM-ANOVA, p = 0.009, ηp² = 0.31; Fig. 3b; see Supplementary Note 1: A2, A3). This demonstrated that, when recalling in a new context, transfer is enhanced when the learning context is mentally reinstated. This effect did not interact with context-group membership, suggesting that even those participants who learnt both languages in a single context still benefitted when prompted to mentally reinstate that context, relative to when they reinstated a context in which neither language had been learnt.

Figure 4

An example trial of the short-delay non-VR test. Each trial consisted of the following periods: mental reinstatement, language recall, imagery vividness rating, and two arithmetic questions (which served as an active baseline period between trials). The words "Get Ready" appeared to indicate the start of each trial. Mental Reinstatement: Participants heard via headphones the name of a room they had visited (e.g., "Moon Base: Airlock"). The screen then turned black, cuing participants to close their eyes and mentally "place" themselves back in that room. They pressed Button 1 to indicate that they had successfully "arrived" and oriented themselves. They then mentally performed the same rotations they had done in the context encoding task (Figs. 1a.3, 1a.5), pushing Buttons 2 and 3 to indicate their mental reinstatement progress until they heard a beep. In the fMRI experiment, brain activity patterns related to mental imagery were extracted for the period between the Button 1 press and the beep. Language Recall: Participants heard the language recall cue (e.g., "Swahili: Dog"). Participants began to covertly retrieve the foreign word and made a button press to indicate success or failure of retrieval; they then continued thinking about that word until they heard a beep. Upon the beep, they verbally pronounced the foreign word, or the portion of it they could recall. In the fMRI experiment, brain activity patterns related to language recall were extracted from the 6 s after the audio cue offset. Imagery Rating: Participants rated how vivid the previous mental reinstatement had been. These ratings were later used for trial exclusion in analyses involving mental reinstatement. Arithmetic Questions: At the end of each trial, participants answered two simple arithmetic questions. Each involved a display of two single-digit integers, and participants were to press Button 1 if the product of these numbers was odd, and Button 2 if even.

Interference reduction

Interference was measured by intrusions from the opposite language (i.e., producing the Chinyanja translation of a word when cued to recall the Swahili translation, or vice versa), as these indicate a failure to maintain clear and distinctive representations of the two languages. While the intrusion count was generally low (fewer than 10 items out of 80), dual-context participants exhibited 38% fewer intrusions (4.09 ± 4.82) than single-context participants (6.57 ± 4.69) (Fig. 3c; RM-ANOVA, p = 0.014, ηp² = 0.13; see Supplementary Note 1: A3). This suggests that learning each language in its own distinctive context helped participants to maintain better separated mental representations and reduced interference.

One-week retention

A surprise memory test (T5; Fig. 1c) was conducted via telephone one week after T4. In a pre-scheduled "follow-up interview," experimenters asked participants several interview questions and then began to conduct T5 (e.g., "How do you say 'cherry' in Chinyanja?"). The retention score was the percentage of information that survived the one-week delay interval, given that it had been recalled in T4 (i.e., words that were not successfully recalled in T4 were excluded; see Methods). Furthermore, as the context manipulation was conducted via VR, presence (one's sense of inhabiting a VR-based context as a real location) was entered into the analyses as a factor: if participants did not experience the VR environments as real contexts, then the context manipulation should have little to no effect.

Results showed that amongst participants who reported high presence (based on a mean split of presence scores; see Supplementary Table 2), the dual-context group exhibited a striking 92% (±7%) one-week retention rate, which was significantly higher than the 76% (±12%) retention rate exhibited by the single-context group (Fig. 3d; RM-ANOVA interaction, p = 0.03, ηp² = 0.11; simple main effect, p = 0.002; see Supplementary Note 1: A4). Single- and dual-context participants who reported low presence did not perform differently on one-week retention (simple main effect for low-presence participants, p = 0.47), nor did they differ from single-context participants reporting high presence (all contrasts p > 0.05). Collectively, these results demonstrate that contextual support from unique contexts dramatically enhanced one-week retention, but only when participants subjectively perceived the contexts as actual environments they had inhabited.

Neural correlates of contextually supported recall

To further investigate the mechanisms by which distinctive learning contexts can later be brought back to mind to support the recall of foreign vocabulary items, we conducted a follow-up fMRI experiment. We recruited a separate group of participants ( n  = 23; analyses included n  = 22; see Methods) and assigned them all to the dual-context learning condition, since our goal was to measure context-specific reactivation on individual recall trials so as to characterise the behavioural advantage afforded by such reactivation. Given resource constraints, it was not possible for us to scan a separate group of single-context participants, nor would fMRI data from such participants be especially useful for our primary research question.

The use of verbal material separated the sensory modalities between contexts (visuospatial) and memoranda (verbal/auditory), allowing us to disentangle the neural correlates of contextual support from the memory retrieval itself. First, a whole-brain Searchlight Multi-Voxel Pattern Analysis (Supplementary Fig. 1 ; SL-MVPA) identified brain regions whose local fMRI activity patterns could most accurately discriminate between the two contexts during the mental reinstatement period. Each participant’s resulting searchlight map was thresholded to create an individualised binary mask, indicating which 2000 voxels would be used for the subsequent steps. Because the particular voxels selected for each participant will differ, we are unable to make claims about how individual brain regions contributed to our analyses. However, in an effort to provide a coarse portrait of which regions’ local activity patterns tended to be most able to facilitate context decoding, the group mean of the searchlight map is visualised in Supplementary Fig. 2 and shows that peak decoding was observed in bilateral visual association regions (superior lateral occipital cortex, ventral occipito-temporal cortex, fusiform gyrus), medial parietal regions (precuneus, posterior cingulate cortex), lateral parietal regions (intraparietal sulcus and superior parietal lobule), and the left inferior frontal sulcus. Second, a brain-response pattern was derived within this mask for each of the two learning contexts (Fig. 5a ; context template). Third, a Representational Similarity Analysis (Fig. 5a ; RSA) produced a similarity score between (1) the brain patterns during covert retrieval of each word and (2) the context template of the learning context of that word. This RSA score provided an objective, quantitative measure for mental contextual reinstatement during verbal recall for each individual trial, which we will refer to as its “representational fidelity.” Fourth, the verbal recall scores of words with high vs low representational fidelity (mean-split within-subject) were compared—which allowed us to examine whether trials with greater evidence for contextually supported retrieval enjoyed a behavioural performance advantage relative to those with less evidence for contextually supported retrieval.
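As a rough illustration of steps two through four, the sketch below averages imagery-period patterns into a context template, correlates each covert-recall pattern with the template of that word's learning context to obtain a trial-wise fidelity score, and then mean-splits trials into high- and low-fidelity groups. The array names, shapes, and random data are hypothetical stand-ins for the masked 2000-voxel patterns; this is a schematic of the analysis logic, not the authors' code.

```python
import numpy as np

# Hypothetical masked data: one row per trial, one column per selected voxel.
rng = np.random.default_rng(0)
imagery_patterns = rng.standard_normal((80, 2000))  # mental reinstatement period
recall_patterns  = rng.standard_normal((80, 2000))  # covert word-recall period
imagery_context  = rng.integers(0, 2, size=80)      # context cued during imagery (0 or 1)
word_context     = rng.integers(0, 2, size=80)      # learning context of each cued word
recall_score     = rng.random(80)                   # phoneme-level recall score per trial

# Step 2: context template = mean imagery-period pattern for each context.
templates = np.stack([imagery_patterns[imagery_context == c].mean(axis=0) for c in (0, 1)])

# Step 3: representational fidelity = correlation between each recall-period pattern
# and the template of that word's original learning context.
fidelity = np.array([np.corrcoef(recall_patterns[i], templates[word_context[i]])[0, 1]
                     for i in range(len(recall_patterns))])

# Step 4: within-subject mean split, then compare recall for high- vs low-fidelity trials.
high = fidelity >= fidelity.mean()
print("high-fidelity recall:", recall_score[high].mean().round(3),
      "low-fidelity recall:", recall_score[~high].mean().round(3))
```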

Figure 5

After feature selection, fMRI activity patterns from each participant’s top 2000 voxels were used in a within-subject representational similarity analysis (RSA); RSA output was used to analyse verbal recall data. a RSA computed the correlations between activity patterns for each word during covert word recall (right) and the context template (left) of the word’s original learning context. The context template was an average of all the imagery patterns for a given context. The resulting correlation values were then used to divide recall trials into high fidelity vs low fidelity reinstatement trials, and verbal recall results were examined for each trial type. The effects of reinstatement prompt (congruent vs. incongruent) and/or reinstatement fidelity (high vs. low) on recall are plotted respectively for: ( b ), all non-VR tests (T4 and T5; collapsed across reinstatement prompt conditions), ( c ), short-delay non-VR test (T4), and ( d ), one-week-delayed non-VR test (T5). * denotes statistical significance for pairwise tests; see main text for description of interaction effects. Image attribution: The VR environments depicted here were created by J.K.-Y.E. or by Forde Davidson as commissioned by the research team, or were from the OpenSim community shared under the Creative Commons 0 License. The icons used were either created by J.K.-Y.E. or were modified from stock icons in MS PowerPoint or public domain.

A main effect of representational fidelity was observed (RM-ANOVA, F(1, 21) = 13.712, p = 0.001, ηp² = 0.395; see Supplementary Note 2), where high representational fidelity trials (0.50 ± 0.17) were associated with 5% higher recall than low representational fidelity trials (0.45 ± 0.18), collapsing across the short-delay test (T4) and the one-week-delayed test (T5) (Fig. 5b). When broken down by Times, the effect of representational fidelity was significant at both T4 (RM-ANOVA, F(1, 21) = 8.60, p = 0.008, ηp² = 0.29; High = 0.56 ± 0.19; Low = 0.51 ± 0.20) and T5 (RM-ANOVA, F(1, 21) = 8.53, p = 0.008, ηp² = 0.29; High = 0.44 ± 0.19; Low = 0.39 ± 0.20) in follow-up analyses. Furthermore, a significant interaction between reinstatement prompt and representational fidelity was observed across T4 and T5 (RM-ANOVA, F(1, 21) = 6.59, p = 0.02, ηp² = 0.24; not shown). This interaction reflects how recall performance was affected by the relationship between representational fidelity and the reinstatement prompt at the beginning of each trial (i.e., whether participants were cued to reinstate a room in a context congruent or incongruent with the language that was about to be probed). Follow-up analyses revealed that this interaction was driven by T5 one-week-delayed recall (simple interaction: p = 0.006; Fig. 5d), and not by T4 short-delay recall (p > 0.05; Fig. 5c). After incongruent mental reinstatement, if representational fidelity had been high during T4 recall, participants enjoyed a 10.1% advantage one week later (0.45 ± 0.19) compared with trials in which representational fidelity had been low (0.35 ± 0.20). This effect was absent in trials preceded by congruent mental reinstatement, for which recall remained high in both fidelity conditions (both 0.43 ± 0.20).

These findings indicated that we were able to quantify contextual support via mental reinstatement—by identifying neural representations of the two learning contexts and measuring their expression during each covert word retrieval attempt. Overall, we found a striking relationship between trial-specific evidence of context reinstatement fidelity and the likelihood of successfully recalling the cued word in the specified language on that trial. The behavioural advantage of high-fidelity reinstatement was not only present in the immediate term (T4 recall) but also persisted after a one-week delay (T5 recall). That this advantage was most apparent during incongruent reinstatement trials indicates that as long as participants were able to reinstate the original learning context during the word recall phase (despite having been prompted to imagine a different context several seconds earlier) they could minimise the potential disadvantage of this contextual incongruency.

By using distinctive virtual reality environments to provide rich contextual support, our behavioural protocol facilitated robust learning of highly challenging material (foreign vocabulary in two phonetically similar languages) while ameliorating the negative effects of context-dependence via "desirable difficulties" and mental reinstatement. These memorable contexts could later serve as retrieval cues when mentally reinstated during recall. After only four learning sessions, participants were able to recall nearly half of the 80 foreign words they had studied, and they showed relatively little forgetting after one week (up to 92% retention). Importantly, the knowledge acquired within the VR-based contexts transferred well to support recall in non-VR settings (i.e., a laboratory testing room, an MRI scanner, and a surprise telephone test), despite the fact that the learning contexts shared relatively few cues with real-world environments. In so doing, we leveraged the benefits of the "contextual crutch" phenomenon, whereby rapid acquisition is facilitated by repeatedly learning and testing in the same context, while mitigating the transfer and retention deficits that typically accompany this approach (see Supplementary Discussion: D3) 1 , 11 , 28 .

Our results provide evidence that contextual support optimises language learning in a manner that leads to high retention, but only when three critical conditions are met. First, participants must subjectively experience the VR-based contexts as actual environments that they feel they are physically inhabiting during learning (i.e., they must report a high sense of presence). Second, a unique context must support the learning of each language. A high degree of presence, on its own, was insufficient to enhance retention for those participants in the single-context group who learnt the two languages in the same VR-based context. Only those participants assigned to the dual-context group who also exhibited high presence during learning showed superior retention of the material at the long-delayed test conducted one week later. These high-presence dual-context participants were subjectively learning the two languages while actively navigating through two very different places, whereas low-presence participants presumably felt like they were learning both languages while sitting in a laboratory testing room. Third, benefits to memory recall must be evaluated after a long delay. Although dual-context participants did show fewer intrusions of the incorrect language translations (e.g., producing the Swahili translation when cued to recall the Chinyanja translation) at the immediate non-VR test (i.e., T4 on Day 2), they did not show an overall improvement in recall performance on this test. The dual-context participants' advantage only emerged after the passage of one week (i.e., T5 on Day 8). This finding illustrates that learning the two languages in two distinctive contexts can protect against forgetting, but only when participants felt highly present within the contexts. That the benefit was only observed after a long delay is consistent with previous reports that context-dependent effects tend to increase with longer retention intervals 1 , 29 . This may be because, at shorter retention intervals, a greater number of internal contextual cues (e.g., moods, levels of hunger or fatigue, private thoughts, etc.) may match those present during learning, thus outshining the effects of environmental context. Because we only assessed memory immediately after learning and at a one-week delay, we are unable to draw precise conclusions about the time course of the dual-context advantage. It is possible that the advantage could have emerged sooner (e.g., on Day 3, after one additional night of sleep), and it is also possible that the magnitude of the effect could have grown even larger over time (e.g., if we had waited two weeks before conducting the surprise memory test).

One critical attribute of our task design was the experimentally cued mental reinstatement of a specific environmental context prior to each vocabulary recall trial. This manipulation gave us precise experimental control over participants’ mental content immediately preceding each retrieval attempt. The cued context could either be congruent with the information the participant was about to be tested on (i.e., imagining themselves in the exact same ‘room’ where they had learnt that vocabulary item) or it could be incongruent (i.e., imagining themselves in a different ‘room’ from a completely different environment). Consistent with prior evidence for the benefits of mental reinstatement 2 , 12 , we found that imagery-based reinstatement of the congruent learning context enabled better recall in the short-delay non-VR test (i.e., T4).

In order to gain further insight into the impact of context reinstatement, we devised a follow-up experiment that used fMRI to measure neural correlates of context representations. This provided an objective index of the degree to which learning contexts were mentally reinstated during the language recall period of each trial. Unlike the behavioural experiment, the fMRI experiment enabled us to quantify mental reinstatement without inferring it from task instructions and participants' subjective reports, and without relying on the assumption that the reinstatement state would linger from the mental reinstatement period into the language recall period. Our fMRI experiment revealed evidence for contextually supported retrieval of verbal materials. The results demonstrated that increased brain pattern similarity to the original learning context during covert verbal retrieval was associated with more successful recall performance. Trials with high reinstatement fidelity scores yielded short-delay recall performance (i.e., recall that took place seconds later) that was 5% higher than trials with low reinstatement fidelity scores. These high-fidelity reinstatement trials continued to enjoy the 5% recall advantage when memory was again tested one week later. This result expands upon a recent demonstration that context-specific fMRI activity patterns, induced through a closed-loop neurofeedback procedure, could facilitate verbal recall when the reinstated context was congruent with the learning context 30 .

When we examined the joint effects of mental reinstatement prompts and representational fidelity, we noted an interesting pattern. While high-fidelity mental reinstatement during recall improved short-delay recall regardless of pre-recall reinstatement prompts, after a one-week delay (T5) this advantage only appeared for words that had been paired with an incongruent pre-recall reinstatement prompt during T4. Thus, instructions to imagine oneself in a context that, just moments later, turns out to be incongruent with the learning context of the prompted language will serve to diminish the one-week retention of that word unless the participant manages to counteract this initial miscue and engage in high-fidelity reinstatement of the original learning context during word recall. In this sense, the act of overcoming incongruently cued context reinstatement by rapidly bringing the correct context back to mind may be considered a “desirable difficulty,” 25 given its ability to promote one-week retention.

Although our study did not systematically compare the influence of spatial contexts with other aspects of event representation, our findings are consistent with the notion that spatial context is crucial in event representations. There is growing evidence that spatial context may be a dominant attribute, over and above other episodic details (e.g., objects and persons) 31 , 32 . Intracranial electroencephalographic recordings from the human hippocampus show that spatial context information is often reactivated earliest in the retrieval process and guides recall of items learnt in that context 33 . When recalling short stories, spatial cues lead to quicker and more detailed memories about events 34 . In a VR learning paradigm based on the Method of Loci mnemonic technique, we previously demonstrated that memory for the spatial layout of VR environments is correlated with participants' ability to recall words learnt in those environments 35 . Even though the contexts used in the present study's foreign vocabulary learning task bore no direct relevance to the verbal content being learnt, these richly detailed virtual environments provided consequential scaffolding that helped mitigate potential interference 36 and supplied memorable spatial cues that learners could later think back to when attempting word recall. While we did not directly test for this, the ability of our participants to actively navigate through the contexts during learning was likely an important determinant of the contextual effects we observed. One prior study investigating context-dependency used VR environments as passively presented backgrounds during word learning and found no impact of context reinstatement on behaviour 37 , 38 . Although there were other critical differences between our respective paradigms, this suggests that investigations of context effects will benefit when contexts are experienced in a more ecologically valid manner, such as the navigable, interactive desktop VR used here. When such contexts are experienced in VR, our results expand upon prior work emphasising the importance of high presence in mediating the mnemonic benefits 37 . More broadly, our results showcase the critical importance of context in learning and bolster recent calls for cognitive neuroscientists to move beyond the study of isolated decontextualised stimuli 39 .

Presence, in addition to enabling virtual environments to serve as contexts for context-dependent memory effects, may enhance learning in its own right. The recent Cognitive Affective Model of Immersive Learning (CAMIL) 40 would predict that VR experiences that induce a sense of presence can increase learner interest and intrinsic motivation, which in turn generate greater learner effort and willingness to attend to the task, thereby facilitating learning and recall. Indeed, engaging learning environments using head-mounted display (HMD)-based VR, and generative learning activities therein, have been found to lead to better transfer 41 , 42 . Although we did not quantitatively examine our participants' interest, intrinsic motivation, or engagement, these advantageous internal contexts during our desktop VR-based learning tasks may have contributed to our participants recalling 42% of the 80 foreign words after only two exposures, considerably higher than in a previous non-VR study (22–26%) that used arguably easier to-be-learnt material without distinctive learning contexts 27 . Furthermore, CAMIL would posit that if the VR contexts were more meaningful and relevant to the to-be-learnt items, the learning enhancement effects would be greater still, due to an increased sense of presence and agency.

Our study has several limitations that should be addressed in future work. In an effort to gain greater experimental control, we elected to cue mental reinstatement of a specific context immediately prior to each foreign word recall prompt. While this manipulation allowed us to examine the effects of reinstatement congruency and facilitated our effort to create context-specific brain activity templates, it prevented us from knowing how our participants would have performed (and to what degree neural reinstatement would have predicted their performance) had we not invoked any explicit reinstatement instructions. Also, our use of fMRI was focused on using neural measures to index putative mental states, which we could then relate to behaviour. Although our whole-brain multivariate pattern analysis approach afforded us enhanced power in our ability to measure context reactivation effects (which could incorporate perceptual, semantic, and emotional attributes of the respective contexts, represented across a wide array of brain regions), it limited our ability to draw conclusions about the role of specific brain structures in supporting context reinstatement and vocabulary recall. Furthermore, as the context-dependent learning enhancement effect was contingent on participants' subjective sense of presence, future research using newer, more immersive HMD-based VR systems (especially those using omnidirectional treadmills for navigation) may find even stronger context-dependent effects due to the likely increased sense of presence. Additional studies with larger sample sizes will be necessary to characterise more fully how individual differences in presence levels impact the degree of context-dependence in VR learning tasks. Finally, along with CAMIL 40 , recent work has shown that the relevance of an environmental context to the information being learnt in that context is consequential for that information's memorability 17 and transfer 41 , 42 . In our task, the relationship of the contexts to the languages and vocabulary being learnt was completely arbitrary. Future studies may reveal additional memory advantages when language learning occurs in VR-based replicas of familiar real-world environments where that language would actually be useful (e.g., learning fruit vocabulary while navigating through the produce section of a grocery store or an outdoor farmer's market). Moreover, investigators should systematically quantify potentially relevant factors such as engagement, intrinsic motivation, interest, and agency, in addition to measuring presence.

In summary, this study successfully harnessed context-dependence to enhance the learning of highly challenging and interference-prone material, while remedying the negative effects of context-dependence. After leveraging the "contextual crutch" and "desirable difficulties" to enable a rapid learning rate, contextual support and mental reinstatement enabled transfer and overcame context change-induced forgetting, facilitating the real-world retrieval of information learnt in VR. This approach led to strikingly high one-week retention (92%) in participants who received unique contextual support for each language they had learnt, as long as they subjectively perceived the VR-based contexts as actual environments they had inhabited. Moreover, using neuroimaging to quantify mental context reinstatement during vocabulary recall, we found that trials with higher-fidelity reinstatement of the learning context were associated with better recall of the foreign words learnt in that context. As learning and memory are involved in nearly every aspect of life, and must always occur in some form of context, harnessing context-dependence to enhance memory has far-ranging practical implications for education, skill training, and health care, as well as the potential to enhance therapeutic learning in evidence-based psychotherapy.

Participants

Data from forty-eight adult participants (26 females, age range 18–27 years; Supplementary Table 1 ) were included in the analyses for the behavioural experiment; participants were randomly assigned to one of two context conditions (single- and dual-context, each n  = 24). Data from twenty-two different adult participants (12 females, age range 19–25 years) were included in the analyses for the fMRI experiment; all were assigned to the dual-context condition.

Participants were recruited through flyers posted around the campus of the University of California, Los Angeles (UCLA) and through social media advertisements targeting the same geographical area. Participants were tested individually, and they received course credit or were compensated monetarily ($20 per hour for fMRI procedures, $10 per hour for non-fMRI procedures). All participants provided written informed consent, and all study procedures were approved by the Institutional Review Board at UCLA.

Eligibility screening was conducted using the Research Electronic Data Capture (REDCap) online survey system 43 . Inclusion criteria were as follows: (1) being monolingual English speakers (with no more than high school language courses for any other language) for the behavioural experiment, and being bilingual English speakers (having more than high school language courses for exactly one other language) for the fMRI experiment; this criterion was established for the fMRI experiment to increase baseline recall levels, based on pilot results showing that bilingual participants learnt novel foreign vocabulary more quickly; (2) having limited (<5 h) prior exposure to the VR platform used in the experiment; (3) having normal or corrected-to-normal vision and audition; (4) having no diagnosis of learning disabilities; (5) reporting no substance dependence; and (6) not taking any psychotropic medications. Behavioural experiment data from an additional 13 people were acquired but excluded from analyses: five did not complete the procedure due to technical difficulties, three withdrew due to motion sickness during their desktop-VR experience, three did not return for Day 2 procedures, and two were excluded for not following instructions. fMRI experiment data from one additional person were acquired but excluded from analyses because this individual reported falling asleep during the procedure.

In the behavioural experiment, participants were randomly assigned to one of the two conditions (single- or dual-context); all participants in the fMRI experiment were assigned to the dual-context condition. All participants underwent the same procedural sequence (Fig. 1 ): Context A encoding, Language 1 encoding in Context A, Context B encoding, Language 2 encoding in Context A (single-context condition) or Context B (dual-context condition), non-VR test (in laboratory or in MRI scanner), and surprise telephone test.

This experiment measured recall at five time-points (Times 1–5, hence T1–T5). Each language was encoded four times in the VR-based learning contexts: one initial study session followed by three test-study cycles (T1–T3) across two lab visits on consecutive days. At the end of the Day 2 visit, participants were tested outside of the VR learning contexts (T4), either in the lab or in the MRI scanner, and tested again over the telephone one week later (T5).

Virtual reality

Two distinctive VR-based contexts were used for the learning task (Fig. 2a–d ). Participants navigated the world using a computer mouse and keyboard, where the mouse aimed the avatar and the arrow-key press translated to movement in the direction of the given key. They were instructed that the up-arrow (forward motion) was the least likely to lead to simulator sickness. Participants interacted with 3D objects via mouse clicks, and used headphones with a built-in microphone to hear the stimuli and communicate with experimenters. All graphics were displayed on a 27” LED monitor.

"Fairyland Garden" was a fantasy-fiction type context that was bright, verdant, visually open, and expansive. This context's landscape was rich with water and trees, the buildings were wooden, and every room was open to the outdoors, with birdsong, crickets, and other nature-based ambient sounds (Fig. 2a). "Moon Base," on the other hand, was a science-fiction type context in which participants were always confined indoors within the base, whose structure featured metallic walls, narrow hallways, electronic control panels, artificial colours, and mechanical ambient sounds (Fig. 2c). Each context contained nine named areas (hence, "rooms"); the name of each room was displayed in English on signs at the boundaries.

The VR-based contexts displayed different experimental objects during the context encoding phase and language encoding phase. During context encoding, location markers were placed in each room to demarcate the location for participants to “stand” as they encoded the context. During language encoding, interactive 3-D objects representative of the to-be-learnt words were placed on “pedestals” in each room, organised along a hinted floor path that displayed transient markers between pedestals (Fig. 2b , d ).

An additional VR environment (Fig. 1a.1, 1a.2) was used for participants to learn to control their avatars, receive task instructions, and practice the Context Encoding Task and the Language Encoding Task. This training environment was underwater in honour of one of the pioneering demonstrations of context-dependent memory 4 . It was designed to be visually attractive and highly fantastical (e.g., swimming fishes, shifting lights), giving participants time to adjust to the other-worldly nature of the VR experience so that they could later focus on the learning tasks without being distracted by the novelty of VR itself.

These desktop-VR-based contexts were created for this study using the open source OpenSimulator platform (v0.8.2.1, Diva Distribution). Firestorm Viewer v4.4.2-v5.0.7 (2014–2017) rendered content, presented on a computer running Windows 7 Professional. A high-resolution (2560 x 1440) flatscreen display, which participants viewed in close proximity in a darkened room, was used instead of a head-mounted display (HMD). Our initial piloting with an HMD (Oculus RIFT DK1) found that many participants experienced eventual motion sickness that interfered with their ability to concentrate on the task. Switching to an LED monitor (often referred to as “desktop VR”) largely ameliorated this issue, although this may have led to some of our participants reporting a limited sense of “presence” in the VR worlds.

During the VR tasks, an experimenter was present to monitor the behaviour of the participant and to communicate with the participant over headphones. While the experimenter and participant were in the same room, they were separated by a cubicle wall such that they were out of sight of one another.

Word list, cues, and testing

The to-be-learnt word lists were designed to be as similar, and thus as confusable, as possible. A total of 60 English words, and their translations in two phonetically similar Bantu languages (Swahili and Chinyanja), were used in the experiment. Each participant learnt to pronounce a total of 80 foreign words: 10 learnt in Swahili only, 10 in Chinyanja only, and 30 in both languages. The Swahili word list was drawn from Carpenter & Olson (2012) 27 , and the Chinyanja versions of these words were translated using Google Translate™ and modified (see Appendix I for the word lists and details regarding the modifications).

Audio stimuli for language learning and testing

During language encoding, audio recordings of the foreign words accompanied their written form. These recordings were pronounced by a single speaker who had no formal training with Bantu languages (J.K.-Y.E.). This was an intentional decision to ensure the foreign words were readily pronounceable by English speakers, as this experiment prioritised the memory aspect of the task over the degree of linguistic authenticity.

Because Smith, Glenberg, and Bjork (1978) 5 found that experimenters constitute part of the learning context, we took precautions to prevent uncontrolled context reinstatement arising from subject-experimenter interactions. First, a single speaker recorded the audio for both languages during the learning task, to ensure that speaker identity or voice could not serve as a context cue distinguishing the languages. This speaker made every attempt not to speak to participants during experimental procedures: during the behavioural experiment this person only supervised the study team from a separate office, and in the fMRI experiment they greeted participants with gestures and then managed equipment in the MRI control room (when asked, participant-facing researchers explained that this person was not to speak to them for scientific reasons, and that the team could answer questions on this matter at the end of participation). Second, tests conducted outside of the learning contexts were cued by other speakers. The English audio cues used in T4 were recorded by A.O., and T5 was conducted by a team of research assistants.

Testing software

The short-delay non-VR test (T4; Fig. 4 ) was presented using PsychoPy2 44 , 45 . The long-delay surprise memory test was administered over telephone calls using Google’s Hangouts™ communication platform (audio-only), digitally recorded with participant permission, with foreign vocabulary recall cued conversationally by experimenters.

fMRI protocol and in-scanner verbal response recording

fMRI protocol.

fMRI data were collected with a Siemens 3.0 Tesla Magnetom Prisma scanner at the UCLA Ahmanson-Lovelace Brain Mapping Center, using a 64-channel head coil. Functional data were acquired using T2*-weighted simultaneous multislice echoplanar imaging (EPI) sequences (TR = 1.0 s; TE = 30 ms; flip angle = 52°; FoV = 20.8 cm; multiband acceleration factor = 5; 65 oblique axial slices; voxel resolution 2 × 2 × 2 mm). Each of the 10 runs consisted of 330 volumes and included eight trials of the task (we did not discard initial volumes as the version of Syngo software did not begin recording until T1 stabilised). Additionally, a T1-weighted structural MRI [axial magnetisation-prepared rapid gradient-echo (MPRAGE), 0.8 mm³] was obtained for spatial registration of the functional data.

Auditory stimuli were presented via OptoActive™ noise-cancelling headphones, which were equipped with the FOMRI III™+ microphone (Fig. 1c) to record participants' verbal responses during fMRI scans. This system provided online noise cancellation, which enabled high-quality recordings of participants' vocalisations and allowed participants to clearly hear the audio stimuli despite the scanner noise. No post-experimental denoising of the verbal responses was required. Button responses were recorded via CurrentDesign Fibre Optic Response Pads, an MR-compatible button box device. MR-compatible goggles were used for visual presentation.

Procedure: day 1 and day 2, context and language encoding (T1–T3)

Familiarisation, instructions, and practice.

After informed consent and general instructions, participants “entered” the introductory VR environment. Therein, participants first familiarised themselves with the navigational controls. They then received instructions for the context- and language encoding tasks by watching a video on a screen within the world (Fig. 1a.1 , Supplementary Video 2 ), and practiced the two tasks (Fig. 1a.2 ) under the supervision of an experimenter, who provided corrective feedback to ensure that participants had proper understanding of the tasks. Participants practiced the context encoding task (see below) by performing it in the practice context. Then they practiced the language encoding task by learning the translations of a set of practice items in the pseudo-language ‘Pig Latin’.

Context A encoding (Fig. 1a.3 )

Participants were then "teleported" to Context A (Moon Base or Fairyland Garden, counterbalanced across participants), where they performed a guided encoding task of the VR-based context itself. Each context contained 9 "rooms," each equipped with a location marker. In each room, participants were instructed to walk to the marker and perform two full clock-wise rotations (720°) within 30 s while looking around the room. Participants were instructed to pretend that they were a tourist who had forgotten their camera and that they should try to remember what it felt like to be in that particular place. As participants entered and exited each room, the experimenter informed them of the names of the rooms (e.g., "You are now leaving Sickbay and entering Airlock.").

Language 1 encoding (T1–T2; Fig. 1a.4 , Supplementary Video 1 )

There were four rounds of language encoding for each language (three rounds on Day 1, and one on Day 2). Before each round, participants were told which language they would be learning. After Context A encoding and a mandatory 2-min break, participants re-entered Context A for Round 1 of Language 1 encoding (Swahili or Chinyanja, counterbalanced across participants).

In each round, participants navigated along the hinted walking path (Fig. 2b , d ) and encountered a series of 40 pedestals (with 3–5 pedestals in each room). Upon each pedestal hovered a slowly rotating, 3-D object representation of the to-be-learnt word (e.g., a rooster), with its English name floating above to ensure that participants could have certainty about what that object was (i.e., so they knew it was not a hen or turkey). As Fig. 2e denotes, participants were instructed to walk up to each object, read its English name aloud, and then to click on it. The click changed the floating English text to reveal the foreign transliteration, and participants would hear the foreign pronunciation three times via headphones, evenly spaced across 10 s. Participants were instructed to repeat after the audio each time by pronouncing the foreign word aloud. Upon completion, they would then click the pedestal to reveal a visible path marking the way to the next pedestal with the next object. The path hints were transient and disappeared after use. Object sequences were controlled so that they were consistent within each language. That is, for a given participant, the same object always appeared in the same location for one language, but always in a different location for the other language. The pedestal locations and navigational route remained consistent across all rounds. A 5-min break was inserted between Rounds 2 and 3.
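One way to produce the object-location counterbalancing described above (each object keeps a fixed pedestal location within a language but never occupies the same location in the other language) is to re-shuffle the second assignment until no object repeats its location. The sketch below is a hypothetical illustration of that constraint, with made-up object and pedestal names; it is not the study's actual randomisation code.

```python
import random

def assign_locations(objects, locations, seed=None):
    """Give each object a fixed location for Language 1 and a different, also fixed,
    location for Language 2 (a derangement of the Language 1 assignment)."""
    rng = random.Random(seed)
    lang1 = dict(zip(objects, rng.sample(locations, len(objects))))
    while True:
        lang2 = dict(zip(objects, rng.sample(locations, len(objects))))
        # Re-shuffle until no object lands in the same location in both languages.
        if all(lang1[o] != lang2[o] for o in objects):
            return lang1, lang2

objects = ["rooster", "boat", "cherry", "dog"]
pedestals = ["pedestal_01", "pedestal_02", "pedestal_03", "pedestal_04"]
lang1_map, lang2_map = assign_locations(objects, pedestals, seed=1)
print(lang1_map)
print(lang2_map)
```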

Retrieval practice (Fig. 2e.2 )

Retrieval practice was incorporated into Rounds 2–4. During Rounds 2–4, after participants walked up to each object and spoke its English name aloud, they were to first attempt to verbally recall its foreign translation before clicking the object. If the participant did not recall the translation and did not wish to attempt a guess, they had the option to say "pass." They then clicked the object, which caused the transliteration of the foreign word to appear and the audio of its pronunciation to be played. Thus, regardless of whether they were correct, incorrect, or passed, the participant received feedback as to the correct answer. Then, as with Round 1, participants heard and repeated after the audio three times within a 10 s period. Participants' verbal responses were digitally recorded and used to index their memory recall ability during each round, with performance summarised as: T1 (recall during Round 2, before the 2nd encoding), T2 (recall during Round 3, before the 3rd encoding), and T3 (recall after an overnight delay, before the 4th and final encoding). In the rare cases when participants neglected to attempt recall or say "pass" before clicking an object, the associated vocabulary words were dropped from analysis after that time point. For example, consider a participant who clicked the 3-D boat object during Round 3 before attempting to recall the Swahili word for "boat." Even though the participant would continue to encounter the boat in Round 4 to maintain consistency across participants, that word would be excluded from analyses of that participant's T3, T4, and T5 data.

Context B encoding (Fig. 1a.5 )

After Round 3 of Language 1 encoding, participants encoded Context B. The procedure was identical to Context A encoding, except it occurred in the other VR-based context. This was followed by a 5-min break.

Language 2 encoding (T1–T2; Fig. 1a.6 )

After the break, participants began Language 2 encoding. This is the only portion of the procedures in which the experiences of the two context groups diverged. Dual-context participants remained in Context B to encode Language 2, while single-context participants were teleported back to Context A to encode Language 2 (note that single-context participants never learnt any language in Context B). The encoding procedure was identical to Language 1 encoding.

Post-VR questionnaires

Thereafter, participants completed on REDCap 43 a presence scale used in a prior study 19 , an immersion survey (this survey was not used in the analysis) 18 , 46 , the Simulator Sickness Questionnaire 47 , and the Pittsburgh Sleep Quality Index 48 . They were then reminded of their appointment the next day, and sent home.

Participants returned the next day around the same time of day to perform Language 1 Encoding Round 4 (T3). Then, following a 2-min break, participants performed Language 2 Encoding Round 4 (T3). Round 4 was participants’ last exposure to the foreign words and VR contexts.

Procedure: day 2, short-delay, non-VR testing (T4)

Language encoding was followed by a 10-min break (behavioural experiment) or 30-min break (fMRI experiment), after which participants were tested for the first time outside of the VR-based learning contexts (T4), either in the lab (behavioural experiment) or in the MRI scanner (fMRI experiment). During the break, participants in the behavioural experiment were unoccupied for 10 min under supervision, seated in a waiting room without using internet-capable devices. A 30-min interval was scheduled for participants in the fMRI experiment. During this time, each participant was escorted by their experimenter to the Ahmanson-Lovelace Brain Mapping Center (an 8-min walk from the laboratory), underwent final MRI safety screening, and was set up in the MRI scanner.

T4 consisted of 80 trials (one for each foreign word learnt) evenly divided into 10 runs. Each trial (Fig. 4 ) consisted of the following periods: “Ready” screen, mental reinstatement, language recall, imagery vividness rating, and two trials of an arithmetic task that served as active baseline for fMRI data analysis. T4 procedures were identical in the behavioural and fMRI experiments.

Ready (1 s)

A grey screen with the words “Get Ready” printed was presented to mark the beginning of each trial.

Mental reinstatement (10 s)

The mental reinstatement period began with an audio cue for each trial, which stated the name of a VR-based context, followed by that of a room therein (e.g., "Moon Base: Airlock"). Following the audio cue, the screen turned black, and based on instructions provided beforehand, participants knew that this meant they should close their eyes, imagine themselves back in that specific room, and mentally perform the full rotations (as they had practiced the prior day in the VR-based context encoding task) until they heard a beep. Participants used a series of button presses to indicate the progress of their imagined rotation: when they had mentally "placed" themselves on the marker, and when they had rotated 180°, 360°, 540°, and so on. If participants completed a full rotation before the allotted time, they were instructed to continue mentally rotating and button-pushing until the beep. Upon hearing the beep, which sounded 10 s after audio cue offset, participants were to cease performing the mental rotation task and open their eyes to prepare for the next phase of the trial.

In the congruent reinstatement condition, participants were cued to reinstate the specific room in which they had learnt the word to be recalled later in this trial. In the incongruent condition, they were cued to reinstate a room from the other context (for dual-context participants, this was the context where they had learnt the other language; for single-context participants, this was the context where they had not encoded any language). These conditions were pseudo-randomly intermixed.

Language recall (8 s)

The language recall period began 2 s after the onset of the previous beep. Participants first heard an audio cue, which stated a language, then an English word whose translation they had learnt in the stated language (e.g., “Chinyanja: rooster”). After hearing the cue, participants were to covertly retrieve the English word’s translation in the cued language (i.e., to mentally recall the foreign word without saying it aloud). If they felt they were successful, they were to push Button 1 and to continue thinking about the word until they heard a beep. If they failed to retrieve the foreign word, they were to push Button 2 and continue to attempt retrieval until the beep—should they succeed at any point after indicating failure, they were to push Button 1 at the moment of successful retrieval. The beep sounded 8 s after the cue offset, at which point participants were to verbally pronounce the foreign word, or as much of it as they could remember. These responses were recorded and scored as T4 data. The length of the verbal response recording period varied between 6.5–7.0 s depending on the length of the cue (3.0–3.5 s), so that the combined duration of the two always summed to 10 s.

Imagery vividness rating (2 s)

After verbal recall, participants were then asked to rate how vivid the previous mental reinstatement had been (1 for very vivid, 2 for vivid, 3 for not vivid, and 4 for unsuccessful). These ratings were later used for trial exclusion during the analyses involving mental reinstatement.

Arithmetic task (5 s)

At the end of each trial, participants performed an arithmetic task. Participants saw a display (2.5 s) with two single-digit integers, and they were to push Button 1 if the product of these numbers was odd, and Button 2 if even. Then a new pair of digits appeared (2.5 s) and participants performed the same task.
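Putting the periods together, the nominal structure of a T4 trial can be summarised as a simple event schedule, as sketched below. The durations are the period lengths named in the headings above; the actual protocol also included audio cues, the verbal response recording window, and brief gaps, and the names here are illustrative placeholders rather than the study's presentation software.

```python
# Schematic timeline of one T4 trial (nominal durations in seconds, from the text above).
TRIAL_PHASES = [
    ("ready_screen",          1.0),  # "Get Ready"
    ("mental_reinstatement", 10.0),  # audio room cue, eyes closed, imagined rotation until the beep
    ("language_recall",       8.0),  # audio word cue, covert retrieval, spoken response at the beep
    ("imagery_rating",        2.0),  # vividness rating, 1 (very vivid) to 4 (unsuccessful)
    ("arithmetic_baseline",   5.0),  # two odd/even product judgements, 2.5 s each
]

def trial_duration(phases=TRIAL_PHASES):
    """Sum the nominal phase durations of a single trial."""
    return sum(duration for _, duration in phases)

# 80 trials were divided evenly into 10 runs, i.e., 8 trials per run.
print(f"Nominal trial length: {trial_duration():.1f} s; 10 runs x 8 trials = 80 trials.")
```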

Procedure: day 2, post-experimental survey

After T4, participants completed a short survey asking what strategies (if any) they had used to learn and recall the words, and whether there was anything else they would like to communicate to the experimenters.

Procedure: day 8, one-week delay, surprise testing (T5)

On Day 8, participants were telephoned for a scheduled “follow-up interview” with the understanding that an experimenter would “ask them about things they had experienced in the VR.” The only instruction they received about the phone call was that they were to be at home, seated in a quiet place. Participants were not informed that they would be tested again.

During the call, the experimenter requested permission to record the participant's responses. After permission was granted, the experimenter asked the following questions: (1) Had they looked up or studied any of the Swahili or Chinyanja words during the preceding week? (2) Had they expected to be tested again? (3) What percentage of the words did they expect to recall? (see Supplementary Note 3).

The experimenter then conducted a cued recall test of participants' memory for all 80 of the foreign words they had learnt. On each trial, the experimenter cued the participant with an English word and the language it was to be translated into (e.g., "How do you say 'cherry' in Swahili?"). The order in which the words were tested was fully randomised, such that testing hopped back and forth between the two foreign languages. Participants' vocal responses were recorded and scored as T5 data.

Language test scoring

Digital recordings of the verbal responses from T1–T5 were scored offline by two scorers. The score for each word was the number of correct phonemes divided by the total number of phonemes. Scorers were trained to use a detailed decision tree, and when the two scorers disagreed, the average of the two scores was used as the final recall score for that word. The partial word score was used to provide more fine-grained results than binary (correct vs incorrect) word recall. In this scoring scheme, phonemes in shorter words were weighted more heavily than phonemes in longer words. This weighting mirrors the consequences of phonemic errors in real-world communication: mistakenly producing, for instance, a "P" instead of a "V" in the word "van" tends to be more consequential than in a longer word like "supervisor," and makes it much more difficult for listeners to guess the intended meaning.
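To illustrate the partial-credit scheme, the sketch below scores a response as the proportion of correctly produced phonemes and averages the two scorers' values when they disagree. Representing phonemes as simple string lists is a simplifying assumption; the actual scoring followed a detailed decision tree.

```python
def phoneme_score(response_phonemes, target_phonemes):
    """Proportion of target phonemes produced correctly (position-wise match),
    a simplified stand-in for the scorers' decision-tree coding."""
    correct = sum(r == t for r, t in zip(response_phonemes, target_phonemes))
    return correct / len(target_phonemes)

def final_score(score_a, score_b):
    """Two independent scorers; when they disagree, the average is the final recall score."""
    return score_a if score_a == score_b else (score_a + score_b) / 2

# Hypothetical example: the target "jogoo" coded as four phonemes.
target = ["j", "o", "g", "oo"]
response = ["j", "o", "g", "a"]              # last phoneme produced incorrectly
scorer_1 = phoneme_score(response, target)   # 0.75
scorer_2 = 1.0                               # a more lenient second scorer
print(final_score(scorer_1, scorer_2))       # 0.875
```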

Retention measures

Retention was measured inversely via a forgetting score between two tests. Overnight retention (reported in Supplementary Note 1: A4 ) was computed based on the difference between T3 and T2. One-week retention was computed based on the difference between T5 and T4.

Forgetting score

The forgetting score was computed as follows. First, an item-wise forgetting index was computed for each word with a non-zero score on the earlier test (i.e., if no phonemes were recalled at T4, the word was excluded from this computation for one-week forgetting). These forgetting indices measured loss between the two tests: a negative forgetting index means the word was recalled worse after one week, and a forgetting index of zero means no forgetting, thus perfect one-week retention. For example, a word with a recall score of 1 (full, correct recall) on T4 but only 0.5 (half of the phonemes missing or incorrect) on T5 would receive a forgetting index of −0.5, indicating that half of the word had been forgotten; a word with a score of 1 on both T4 and T5 would receive a forgetting index of 0, indicating perfect retention. These forgetting indices were then averaged within each participant (across all eligible words) to produce a forgetting score. The forgetting score is thus a metric of forgetting, or the inverse of retention: the more negative the score, the more forgetting and the poorer the retention.

Retention score

For ease of interpretation, a positive retention score was computed as 1 minus the magnitude of the averaged forgetting score: 1 indicates perfect retention across all eligible words, 0.5 indicates that half of the information was retained, and 0 means that no information was retained.
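A minimal Python sketch of the forgetting-score and retention-score computations, assuming per-word recall scores (proportion of phonemes correct) at the earlier and later tests; it treats the averaged forgetting score as non-positive, consistent with the definitions above.

```python
import numpy as np

def forgetting_score(earlier, later):
    """Mean item-wise forgetting index over words with a non-zero earlier score.
    0 = no forgetting; -0.5 = half of the word's phonemes lost, etc."""
    earlier, later = np.asarray(earlier, float), np.asarray(later, float)
    eligible = earlier > 0                  # words never recalled earlier are excluded
    indices = later[eligible] - earlier[eligible]
    return indices.mean()

def retention_score(earlier, later):
    """1 = perfect retention, 0.5 = half retained, 0 = nothing retained.
    Assumes net forgetting (score <= 0) between the earlier and later test."""
    return 1.0 - abs(forgetting_score(earlier, later))

t4 = [1.0, 1.0, 0.5, 0.0]        # last word excluded (zero score at T4)
t5 = [1.0, 0.5, 0.5, 0.0]
print(forgetting_score(t4, t5))  # -0.1666...
print(retention_score(t4, t5))   #  0.8333...
```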

Intrusion measure

When scoring T4 and T5, scorers were instructed to compare the transliteration of each word to its counterpart in the other language, and to determine from experience whether the word in question was similar to any other word in either language (see Appendix II for intrusion coding). The scorers were experimenters who had become highly familiar with the words in both languages. In addition to formal training, scorers spent 2–6 h each week monitoring participants during language encoding, testing participants during T5, or scoring verbal responses offline. Even so, judgements of "similarity" between words remain somewhat subjective and experience-based. Therefore, two safeguards were introduced: a newer scorer was always paired with a very experienced one in the scoring assignments, and the maximum code was used when the scorers disagreed, because higher ratings denote more severe intrusions and preliminary examination revealed that novice scorers tended to underrate intrusions rather than overrate them.
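The disagreement rule can be expressed as a one-line function; the integer codes below are illustrative, not the actual coding scheme from Appendix II.

```python
def resolve_intrusion_codes(code_novice: int, code_expert: int) -> int:
    """Return the final intrusion code for a word: the higher (more severe)
    of the two scorers' codes wins when they disagree."""
    return max(code_novice, code_expert)

print(resolve_intrusion_codes(0, 2))  # -> 2
```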

Behavioural data analysis

Multiple statistical tests were conducted using SPSS 26.0 49 . The between-subject factors were Context Group (single- vs. dual-context) and Presence (high- vs. low-presence, a mean-split grouping using the presence scale 19 ). The within-subject factors were Times (T1–T5), Language Order (Language 1 vs. Language 2; not reported, see Supplementary Note 1: A1 ), and Reinstatement (congruent vs. incongruent reinstatement). The dependent variables were intrusions (number of items coded as intrusions from the opposite language, out of a total of 80 items), recall (mean of the item-wise percentage of phonemes correct for a given test), and retention (see Retention score above).

fMRI data analysis

fMRI pre-processing

Functional data were pre-processed without spatial smoothing, pre-whitening, or B0 unwarping, using the FMRIB Software Library (FSL) 5.0.4 and Advanced Normalisation Tools (ANTS 2.0) 50 . The FSL Brain Extraction Tool (BET2) 51 was used to perform brain extraction. FSL 52 FEAT 53 was used to apply a high-pass temporal filter (cutoff = 128 s). Timeseries alignment, motion correction, and registration to the standard Montreal Neurological Institute (MNI) template were performed using FMRIB's Linear Image Registration Tool (FLIRT) 54 , 55 , 56 , Motion Correction FLIRT (MCFLIRT) 54 , and ANTS.
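For illustration, the FSL steps named above could be scripted from Python roughly as follows. This is a sketch, not the authors' pipeline: the file names are hypothetical, and the high-pass filtering and ANTs normalisation steps are omitted.

```python
# Calling FSL command-line tools from Python (requires a local FSL installation).
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

run(["bet", "T1.nii.gz", "T1_brain.nii.gz"])                       # brain extraction (BET/BET2)
run(["mcflirt", "-in", "func.nii.gz", "-out", "func_mc.nii.gz"])   # motion correction (MCFLIRT)
run(["flirt", "-in", "func_mc.nii.gz", "-ref", "MNI152_T1_2mm_brain.nii.gz",
     "-out", "func_mni.nii.gz", "-omat", "func2mni.mat"])          # registration to MNI (FLIRT)
```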

fMRI task timing and trial categorisation

The mental reinstatement (Fig. 4 “Imagery”) and language retrieval (Fig. 4 “Language”) periods from each trial were extracted from the dataset. The BOLD timeseries for these periods were extracted using the adjusted onset and offset times (5 s, i.e., 5 TRs, were added to onsets and offsets to account for the lagging hemodynamic response, or HDR). The resulting truncated timeseries was then temporally averaged at each voxel, yielding one averaged imagery pattern and one averaged language pattern for each trial.

Each "Imagery" period began when participants indicated, via a button push, that they had mentally "placed" themselves in the to-be-reinstated context (Fig. 4 "Orient"), and ended at the onset of the beep that instructed participants to open their eyes and end mental reinstatement. Because the onset of each trial was based on participants' responses, the imagery period varied in length. Imagery period data were labelled as Moon Base or Fairyland Garden, based on the world that participants were cued to reinstate. Trials were excluded if participants reported that they were "unsuccessful" on the imagery vividness rating, or did not push buttons to report the progress of their mental reinstatement rotation.

Each “Language” period began with the onset of the audio cue, and ended 6 s afterwards. The duration of this period was task-based, and fixed in length. Language period data were labelled by the foreign word to be recalled (e.g., Chinyanja: Dress).
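A minimal Python/NumPy sketch of the pattern extraction described above: trial windows are shifted by 5 TRs to account for the haemodynamic lag and then temporally averaged at every voxel. The array shapes and onsets are simulated stand-ins, not the study's data.

```python
import numpy as np

HDR_SHIFT_TRS = 5  # 5 s = 5 TRs at TR = 1 s

def trial_pattern(bold, onset_tr, offset_tr):
    """bold: (n_TRs, n_voxels) array; returns one averaged pattern of shape (n_voxels,)."""
    start = onset_tr + HDR_SHIFT_TRS     # shift window forward by the HDR lag
    stop = offset_tr + HDR_SHIFT_TRS
    return bold[start:stop].mean(axis=0)  # temporal average at each voxel

bold = np.random.randn(300, 2000)                                     # toy run: 300 TRs x 2000 voxels
imagery_pattern = trial_pattern(bold, onset_tr=40, offset_tr=52)      # duration set by button presses
language_pattern = trial_pattern(bold, onset_tr=60, offset_tr=66)     # fixed 6 s language window
print(imagery_pattern.shape, language_pattern.shape)
```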

Searchlight multi-voxel pattern analysis (SL-MVPA)

An SL-MVPA was conducted using the Imagery patterns to identify brain regions expressing multivariate activity patterns capable of discriminating between a participant's mental reinstatement of Moon Base vs. Fairyland Garden (Supplementary Fig. 1 ). To this end, we employed a support vector machine (SVM) classifier with a linear kernel using libSVM (c-SVC, c = 1) 57 and a whole-brain searchlight mapping approach (radius = 4 voxels). Classification was cross-validated using a leave-one-run-out method: the classifier was trained on the valid trials from 9 runs (9 × 8 trials) and tested on the valid trials from the left-out run (8 trials). Trial labels were balanced prior to classification by randomly subsampling the overrepresented trial type to match the underrepresented one. The entire cross-validation procedure was repeated over 10 iterations (one for each left-out run) and the classification results were averaged. This produced a brain map whose voxel values reflected the classifier's cross-validation accuracy when the searchlight sphere was centred on that voxel (Supplementary Fig. 1.4 ). The top 2000 voxels with the highest classification accuracies were identified for each participant and used, as a within-subject feature selection, to create a distributed region of interest for the subsequent representational similarity analysis (Supplementary Fig. 1.5 ).
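A minimal sketch of the leave-one-run-out classification at a single searchlight sphere, using scikit-learn's SVC (which wraps libSVM) on simulated data. The searchlight loop over sphere centres, the label balancing, and the top-2000-voxel selection are omitted for brevity.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_runs, trials_per_run, n_voxels = 10, 8, 257           # ~257 voxels in a radius-4 sphere
X = rng.standard_normal((n_runs * trials_per_run, n_voxels))
y = rng.integers(0, 2, size=n_runs * trials_per_run)    # 0 = Moon Base, 1 = Fairyland Garden
runs = np.repeat(np.arange(n_runs), trials_per_run)     # run label for each trial

clf = SVC(kernel="linear", C=1)                         # linear c-SVC, c = 1
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print("mean leave-one-run-out accuracy:", scores.mean())
```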

Representational similarity analysis (RSA)

For each word that each participant had learnt, the RSA produced a value of similarity between (1) the brain response pattern when the participant was recalling this word, and (2) the averaged brain response pattern when the participant was mentally reinstating that word’s learning context (Fig. 5a ).

This within-subject RSA was conducted using custom MATLAB code. First, trial-specific imagery and language patterns (produced by the aforementioned temporal average of HDR-adjusted timeseries within trial period) for each participant were masked using the participant’s top 2000 voxels identified in the SL-MVPA. Second, the imagery patterns for each learning context were averaged within-subject to produce a participant-specific mental reinstatement template for Moon Base and Fairyland Garden. Third, the language pattern for each word was then correlated (Pearson’s r ) with the reinstatement template of its learning context. For instance, consider a participant who had learnt “banana” in Chinyanja in Fairyland Garden. The language period during the covert retrieval of the word “banana” in Chinyanja would be correlated with the Fairyland Garden template—an average of all imagery patterns during the mental reinstatement of Fairyland Garden. Fourth, the resultant r -values were Fisher transformed to normally distributed z -values to allow for comparison across trial-types. Lastly, a mean split was performed on the z -values to categorise each trial as either a high-fidelity reinstatement trial or a low-fidelity reinstatement trial to analyse the verbal response data.
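A minimal Python/NumPy sketch of the trial-wise RSA described above, using simulated arrays in place of the masked voxel patterns.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 2000
language_patterns = rng.standard_normal((n_trials, n_voxels))   # one pattern per recalled word
imagery_patterns = rng.standard_normal((40, n_voxels))          # e.g., Fairyland Garden imagery trials
template = imagery_patterns.mean(axis=0)                        # context-specific reinstatement template

def fisher_z(r):
    """Fisher transform: r -> z."""
    return np.arctanh(r)

z_values = np.array([
    fisher_z(np.corrcoef(pattern, template)[0, 1]) for pattern in language_patterns
])
high_fidelity = z_values > z_values.mean()                      # mean split into high vs low fidelity
print("high-fidelity trials:", high_fidelity.sum(), "of", n_trials)
```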

Repeated measure analysis of variance (RM-ANOVA)

A 2 × 2 × 2 × 2 RM-ANOVA was performed on recall with the factors Times (T4, T5) × Reinstatement instruction (congruent vs incongruent) × RSA (high- vs low-RSA) × Presence (high- vs low-presence), using SPSS 26.0 49 . The dependent variables were the proportion of syllables recalled during T4 (short-delay recall in the MRI scanner) and T5 (one-week-delayed recall over the telephone).

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

De-identified data are available from the corresponding author upon request.

Code availability

The MATLAB scripts used for fMRI data preprocessing and statistical analysis are available from the corresponding author, upon request.

Smith, S. M. & Vela, E. Environmental context-dependent memory: a review and meta-analysis. Psychon. Bull. Rev. 8 , 203–220 (2001).


Smith, S. M. Remembering in and out of context. J. Exp. Psychol.: Hum. Learn. Mem. 5 , 460 (1979).


Tulving, E. & Thomson, D. M. Encoding specificity and retrieval processes in episodic memory. Psychological Rev. 80 , 352–373 (1973).


Godden, D. R. & Baddeley, A. D. Context‐dependent memory in two natural environments: on land and underwater. Br. J. Psychol. 66 , 325–331 (1975).

Smith, S. M., Glenberg, A. & Bjork, R. A. Environmental context and human memory. Mem. Cognition 6 , 342–353 (1978).

Grant, H. M. et al. Context-dependent memory for meaningful material: information for students. Appl. Cogn. Psychol. 12 , 617–623 (1998).

Godden, D. & Baddeley, A. When does context influence recognition memory? Br. J. Psychol. 71 , 99–104 (1980).

Smith, S. M. Effects of environmental context on human memory. The SAGE Handbook of Applied Memory 162 (2013).

Smith, S. M. & Handy, J. D. Effects of varied and constant environmental contexts on acquisition and retention. J. Exp. Psychol.: Learn., Mem., Cognition 40 , 1582–1593 (2014).

Bjork, R. A. & Richardson-Klavehn, A. On the puzzling relationship between environmental context and human memory. In Current Issues in Cognitive Processes: The Tulane Flowerree Symposium on Cognition (ed. Izawa, C.) 313–344 (Erlbaum, 1989).

Smith, S. M. & Handy, J. D. The crutch of context-dependency: effects of contextual support and constancy on acquisition and retention. Memory 1–8 https://doi.org/10.1080/09658211.2015.1071852 (2015).

Bramão, I., Karlsson, A. & Johansson, M. Mental reinstatement of encoding context improves episodic remembering. Cortex 94 , 15–26 (2017).

Wang, W.-C., Yonelinas, A. P. & Ranganath, C. Dissociable neural correlates of item and context retrieval in the medial temporal lobes. Behavioural Brain Res. 254 , 102–107 (2013).

Smith, S. M., Handy, J. D., Angello, G. & Manzano, I. Effects of similarity on environmental context cueing. Memory 22 , 493–508 (2014).

Reggente, N. et al. Enhancing the ecological validity of fMRI memory research using virtual reality. Front. Neurosci . 12 , 408 (2018).

Smith, S. A. Virtual reality in episodic memory research: a review. Psychon. Bull. Rev. 26 , 1213–1237 (2019).

Shin, Y. S., Masís-Obando, R., Keshavarzian, N., Dáve, R. & Norman, K. A. Context-dependent memory effects in two immersive virtual reality environments: on Mars and underwater. Psychon. Bull. Rev. 28 , 574–582 (2021).

Slater, M., Usoh, M. & Steed, A. Depth of presence in virtual environments. Presence.: Teleoperators Virtual Environ. 3 , 130–144 (1994).

Fox, J., Bailenson, J. & Binney, J. Virtual experiences, physical behaviors: The effect of presence on imitation of an eating avatar. Presence.: Teleoperators Virtual Environ. 18 , 294–303 (2009).

Bowman, D. A. & McMahan, R. P. Virtual reality: how much immersion is enough? Computer 40 , 36–43 (2007).

Sanchez-Vives, M. V. & Slater, M. From presence to consciousness through virtual reality. Nat. Rev. Neurosci. 6 , 332–339 (2005).

Kroll, J. F. & Stewart, E. Category interference in translation and picture naming: evidence for asymmetric connections between bilingual memory representations. J. Mem. Lang. 33 , 149–174 (1994).

Rissman, J. & Wagner, A. D. Distributed representations in memory: insights from functional brain imaging. Annu Rev. Psychol. 63 , 101–128 (2012).

Levy, B. J. & Wagner, A. D. Measuring memory reactivation with functional MRI: implications for psychological theory. Perspect. Psychol. Sci. 8 , 72–78 (2013).

Bjork, R. A. & Bjork, E. L. Desirable difficulties in theory and practice. J. Appl. Res. Mem. Cognition 9 , 475–479 (2020).

Storm, B. C., Bjork, R. A. & Storm, J. C. Optimizing retrieval as a learning event: When and why expanding retrieval practice enhances long-term retention. Mem. Cognition 38 , 244–253 (2010).

Carpenter, S. K. & Olson, K. M. Are pictures good for learning new vocabulary in a foreign language? Only if you think they are not. J. Exp. Psychol.: Learn., Mem., Cognition 38 , 92–101 (2012).

Lamers, M. H. & Lanen, M. Changing between virtual reality and real-world adversely affects memory recall accuracy. Front. Virtual Real. 2 , 602087 (2021).

Niki, K. et al. Immersive virtual reality reminiscence reduces anxiety in the oldest-old without causing serious side effects: a single-center, pilot, and randomized crossover study. Front Hum. Neurosci. 14 , 598161 (2021).

deBettencourt, M. T., Turk-Browne, N. B. & Norman, K. A. Neurofeedback helps to reveal a relationship between context reinstatement and memory retrieval. NeuroImage 200 , 292–301 (2019).

Robin, J. Spatial scaffold effects in event memory and imagination. WIREs Cogn. Sci. 9 , e1462 (2018).

Robin, J., Buchsbaum, B. R. & Moscovitch, M. The primacy of spatial context in the neural representation of events. J. Neurosci. 38 , 2755–2765 (2018).

Herweg, N. A. et al. Reactivated spatial context guides episodic recall. J. Neurosci. 40 , 2119–2128 (2020).

Robin, J., Wynn, J. & Moscovitch, M. The spatial scaffold: The effects of spatial context on memory for events. J. Exp. Psychol.: Learn., Mem., Cognition 42 , 308–315 (2016).

Reggente, N., Essoe, J. K. Y., Baek, H. Y. & Rissman, J. The method of loci in virtual reality: explicit binding of objects to spatial contexts enhances subsequent memory recall. J. Cogn. Enhanc. 4 , 12–30 (2020).

Kyle, C. T., Stokes, J. D., Lieberman, J. S., Hassan, A. S. & Ekstrom, A. D. Successful retrieval of competing spatial environments in humans involves hippocampal pattern separation mechanisms. Elife 4 , e10499 (2015).

Schomaker, J., van Bronkhorst, M. L. V. & Meeter, M. Exploring a novel environment improves motivation and promotes recall of words. Front. Psychol. 5 , 918 (2014).

Wälti, M. J., Woolley, D. G. & Wenderoth, N. Reinstating verbal memories with virtual contexts: myth or reality? PLOS ONE 14 , e0214540 (2019).

Willems, R. M. & Peelen, M. V. How context changes the neural basis of perception and language. iScience 24 , 102392 (2021).

Makransky, G. & Petersen, G. B. The cognitive affective model of immersive learning (CAMIL): a theoretical research-based model of learning in immersive virtual reality. Educ. Psychol. Rev. 33 , 937–958 (2021).

Makransky, G. et al. Investigating the feasibility of using assessment and explanatory feedback in desktop virtual reality simulations. Educ. Technol. Res. Dev. 68 , 293–317 (2020).

Parong, J. & Mayer, R. E. Learning science in immersive virtual reality. J. Educ. Psychol. 110 , 785–797 (2018).

Harris, P. A. et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J. Biomed. Inform. 42 , 377–381 (2009).

Peirce, J. W. PsychoPy—psychophysics software in Python. J. Neurosci. Methods 162 , 8–13 (2007).

Peirce, J. W. Generating stimuli for neuroscience using PsychoPy. Front. Neuroinformatics 2 , 10 (2009).

Slater, M., Usoh, M. & Chrysanthou, Y. The influence of dynamic shadows on presence in immersive virtual environments. in Virtual environments’ 95 , 8–21 (Springer, 1995).

Kennedy, R. S., Lane, N. E., Berbaum, K. S. & Lilienthal, M. G. Simulator sickness questionnaire: An enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3 , 203–220 (1993).

Buysse, D. J. et al. Quantification of subjective sleep quality in healthy elderly men and women using the Pittsburgh Sleep Quality Index (PSQI). Sleep 14 , 331–338 (1991).


SPSS, I. IBM SPSS Statistics for Windows, Version 20.0. (IBM Corp Armonk, NY, 2011).

Avants, B. B. et al. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54 , 2033–2044 (2011).

Smith, S. M. Fast robust automated brain extraction. Hum. Brain Mapp. 17 , 143–155 (2002).

Jenkinson, M., Beckmann, C. F., Behrens, T. E., Woolrich, M. W. & Smith, S. M. FSL. Neuroimage 62 , 782–790 (2012).

Woolrich, M. W., Ripley, B. D., Brady, M. & Smith, S. M. Temporal autocorrelation in univariate linear modeling of FMRI data. NeuroImage 14 , 1370–1386 (2001).

Jenkinson, M., Bannister, P., Brady, M. & Smith, S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17 , 825–841 (2002).

Jenkinson, M. & Smith, S. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5 , 143–156 (2001).

Greve, D. N. & Fischl, B. Accurate and robust brain image alignment using boundary-based registration. NeuroImage 48 , 63–72 (2009).

Chang, C.-C. & Lin, C.-J. LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2 , 1–27 (2011).


Acknowledgements

The authors gratefully acknowledge the funding agencies and the following individuals for their contribution to this manuscript: the research assistant team (Priyanka Mehta, Alvin T. Vuong, Jacob Yu Villa, Gabriel Hughes, Alana Sanchez-Prak, Ruwanthi Ekanayake, Hugo Shiboski, and Daniel Lin); J.K.-Y.E.'s dissertation committee (Drs. Elizabeth L. Bjork, Robert A. Bjork, and Kimberley Gomez) for valuable theoretical input; Forde "JubJub" Davidson for help with VR content development and custom functionality; the OpenSim community for VR content published under CC licensing; Andrew E. Silva, Ph.D. for data analysis advice; and Joseph F. McGuire, Ph.D. and Joshua M. Essoe for manuscript editing. This work was supported by a Defense Advanced Research Projects Agency (DARPA) Research Grant awarded to J.R. (D13AP00057) and National Science Foundation (NSF) Graduate Research Fellowships awarded to J.K.-Y.E. (DGE-1144087), N.R. (DGE-1650604), and J.D. (DGE-1144087).

Author information

Authors and affiliations

Center for OCD, Anxiety, and Related Disorders for Children, Division of Child and Adolescent Psychiatry, Department of Psychiatry and Behavioral Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD, 21205, USA

Joey Ka-Yee Essoe

Department of Psychology, University of California, Los Angeles, CA, 90095, USA

Joey Ka-Yee Essoe, Nicco Reggente, Ai Aileen Ohno, Younji Hera Baek, John Dell’Italia & Jesse Rissman

Institute for Advanced Consciousness Studies, Santa Monica, CA, 90403, USA

Nicco Reggente

School of Medicine, California University of Science and Medicine, Colton, CA, 92324, USA

Ai Aileen Ohno

Division of Psychology, Communication, and Human Neuroscience, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, M13 9PL, UK

Younji Hera Baek

Birmingham Veterans Affairs, Birmingham, AL, 35233, USA

John Dell’Italia

Department of Psychiatry & Biobehavioral Sciences, University of California, Los Angeles, CA, 90095, USA

Jesse Rissman

Brain Research Institute, University of California, Los Angeles, CA, 90095, USA

Integrative Center for Learning and Memory, University of California, Los Angeles, CA, 90095, USA


Contributions

J.K.-Y.E. and J.R. conceived the study idea. J.K.-Y.E., J.R., N.R. designed the study. J.K.-Y.E. created and programmed the VR-based contexts, scripted and managed data collection. A.A.O. coordinated the experiment and contributed to RA team management. A.A.O., Y.H.B., and RAs collected and scored the behavioural data. J.K.-Y.E., A.A.O., Y.H.B., and N.R. collected the fMRI data. J.K.-Y.E. analysed the behavioural data. J.K.-Y.E., N.R., and J.D. pre-processed the fMRI data. N.R. and J.K.-Y.E. analysed the fMRI data. J.R. and J.D. advised on fMRI data analyses. J.K.-Y.E., J.R., and N.R. wrote the manuscript. All authors read and revised the manuscript and provided critical intellectual contributions.

Corresponding author

Correspondence to Jesse Rissman .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary information includes: a demonstration of the language encoding task; the Day 1 instructional video for participants; revised supplemental material; the reporting summary checklist; and supplementary video legends.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Essoe, J.K.-Y., Reggente, N., Ohno, A.A. et al. Enhancing learning and retention with distinctive virtual reality environments and mental context reinstatement. npj Sci. Learn. 7 , 31 (2022). https://doi.org/10.1038/s41539-022-00147-6


Received : 14 May 2022

Accepted : 21 October 2022

Published : 08 December 2022

DOI : https://doi.org/10.1038/s41539-022-00147-6


Harnessing the Power of Virtual Learning


The coronavirus pandemic has raised the profile of virtual learning, but those of us in the leadership development community have been witnessing its growing acceptance for a number of years now. The key to harnessing the power of virtual learning? Its design. Virtual learning gives us the ability to space learning out, expanding the opportunities to leverage methods that enhance the learning process. This white paper outlines our four-step framework for designing virtual learning to support the continuous learning and agility required in today’s rapidly changing world.



Comparative effects of dynamic geometry system and physical manipulatives on Inquiry-based Math Learning for students in Junior High School

  • Published: 02 May 2024


  • Hao Guan 1 , 2 ,
  • Jing Li 1 ,
  • Yongsheng Rao   ORCID: orcid.org/0000-0001-9615-3658 1 ,
  • Ruxian Chen 1 &
  • Zhangtao Xu 3  

Mathematical inquiry involving hands-on activities has received increasing attention in mathematics education. Besides various customized physical teaching aids, subject-specific information technology, such as the Dynamic Geometry System (DGS), is used extensively in mathematical inquiry activities. However, the relative effects of DGS and physical manipulatives on inquiry-based math learning remain an open question. Adopting a quasi-experimental research design, this paper therefore empirically compares the immediate learning outcomes, knowledge retention, and learning interest of seventh-grade students who explored with virtual manipulatives (i.e., the DGS) and those who explored with physical manipulatives. Specifically, 131 students participated in learning activities centered on exploring pyramids and prisms. During the inquiry process, Group A ( n  = 33) constructed pyramids and prisms in the DGS, Group B ( n  = 34) observed pre-made virtual models in the DGS, Group C ( n  = 32) observed physical models, and Group D ( n  = 32) made physical pyramids and prisms with polymer clay and small sticks. A pretest, post-test, and delayed post-test designed according to the Van Hiele model, together with an adapted interest questionnaire, were used to evaluate students' performance, and the collected data were analyzed by means of ANCOVA and t-tests. The findings revealed that, in the construction condition, students using the DGS exhibited superior immediate learning outcomes and greater knowledge retention than their peers using physical manipulatives; in the observation condition, the virtual and physical manipulatives yielded similar immediate learning outcomes, but students using the DGS demonstrated higher knowledge retention. Furthermore, within the DGS environment, students who engaged in constructive manipulation outperformed peers who engaged in observing manipulation. In terms of interest, the DGS proved more effective than the physical manipulatives at both stimulating and maintaining interest, and constructive manipulation was more effective than observing manipulation. In summary, compared to physical manipulatives, the DGS, particularly when employed with a constructive strategy, encouraged students to engage more actively in task-related cognitive behaviors, thereby supporting and enhancing inquiry-based math learning for students in junior high school.
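For readers unfamiliar with the analysis, here is a minimal sketch of an ANCOVA of the kind described in the abstract (post-test scores compared across the four groups with the pretest as a covariate), using statsmodels on simulated data; the column names, group labels, and scores are illustrative, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = np.repeat(
    ["A_DGS_construct", "B_DGS_observe", "C_physical_observe", "D_physical_construct"],
    [33, 34, 32, 32],                              # group sizes from the abstract
)
pre = rng.normal(50, 10, size=len(groups))         # toy pretest scores
post = pre + rng.normal(10, 5, size=len(groups))   # toy post-test scores

df = pd.DataFrame({"group": groups, "pre": pre, "post": post})
model = smf.ols("post ~ C(group) + pre", data=df).fit()   # group effect adjusted for pretest
print(sm.stats.anova_lm(model, typ=2))                    # ANCOVA table
```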


Data availability

The datasets used or analyzed during the current study are available from the author on reasonable request.


Acknowledgements

The author thanks in particular the teachers and students who participated in the study. In addition, thanks to Aunt He, who is a dedicated math teacher as well as an enlightened mom.

This work was supported by the National Natural Science Foundation of China (No. 62172116) and the Innovation Research for the Full-time Postgraduates of Guangzhou University (No. 2022GDJC-M34).

Author information

Authors and affiliations

Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China

Hao Guan, Jing Li, Yongsheng Rao & Ruxian Chen

School of Computer Science of Information Technology, Qiannan Normal University for Nationalities, Duyun, China

School of Mathematics and Statistics, Central China Normal University, Wuhan, China

Zhangtao Xu


Corresponding author

Correspondence to Yongsheng Rao .

Ethics declarations

Conflict of interest.

The authors have no conflicts of interest to declare.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Guan, H., Li, J., Rao, Y. et al. Comparative effects of dynamic geometry system and physical manipulatives on Inquiry-based Math Learning for students in Junior High School. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12663-6


Received : 05 December 2023

Accepted : 21 March 2024

Published : 02 May 2024

DOI : https://doi.org/10.1007/s10639-024-12663-6


  • Mathematical learning
  • Constructive manipulation
  • Observation
  • Dynamic geometry system
  • Physical model

Purdue University Graduate School

DESIGN, IMPLEMENTATION, AND ASSESSMENT OF A SOFTWARE TOOL KIT TO FACILITATE EXPERIMENTAL RESEARCH IN SOCIAL PSYCHOLOGY

This paper introduces a comprehensive software toolkit designed to facilitate the design, implementation, and assessment of experimental research within the field of social psychology. The toolkit includes a Python tool integrated with Unreal Engine to run simulations using virtual reality. This tool allows students and psychologists to conduct experiments for learning purposes by helping them create realistic simulations in different environments. Following a series of experiments, we have determined that the tool performs effectively. With the potential for further updates and enhancements, we believe that the tool holds promise for use in social psychology experiments.
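The abstract does not describe the toolkit's interface, so the following is a purely hypothetical Python sketch of how such a tool might send scenario commands to a running Unreal Engine simulation over a local socket; every name here (host, port, message format, command fields) is an assumption for illustration only.

```python
# Hypothetical command sender: serialises a scenario command as JSON and sends
# it to a (hypothetical) simulation server listening on a local TCP port.
import json
import socket

def send_command(command: dict, host: str = "127.0.0.1", port: int = 9000) -> None:
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(json.dumps(command).encode("utf-8") + b"\n")

if __name__ == "__main__":
    try:
        send_command({"action": "spawn_avatar", "environment": "classroom", "participants": 4})
    except OSError as err:
        print("No simulation server is listening (this is only a sketch):", err)
```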

Degree Type

  • Master of Science
  • Computer Graphics Technology

Campus location

  • West Lafayette


  • Computer gaming and animation
  • Computer graphics

CC BY 4.0

COMMENTS

  1. Measuring the effectiveness of virtual training: A systematic review

    The amount of research on virtual reality learning tools increases with time. Despite the diverse environments and theoretical foundations, enough data have been accumulated in recent years to provide a systematic review of the methods used. ... The purpose of this paper was to summarize the research on the effectiveness of VT. Our main goal ...

  2. Online and face‐to‐face learning: Evidence from students' performance

    1.1. Related literature. Online learning is a form of distance education which mainly involves internet‐based education where courses are offered synchronously (i.e. live sessions online) and/or asynchronously (i.e. students access course materials online in their own time, which is associated with the more traditional distance education).

  3. Effects of virtual learning environments: A scoping review of

    Abstract. The purpose of this scoping review is to isolate and investigate the existing data and research that identifies if the synchronous face-to-face visual presence of a teacher in a virtual learning environment (VLE) is a significant factor in a student's ability to maintain good mental health. While the present research on this ...

  4. Online education in the post-COVID era

    The COVID-19 pandemic has forced the world to engage in the ubiquitous use of virtual learning. ... McIsaac, M. S. & Gunawardena, C. N. in Handbook of Research for Educational Communications ...

  5. Exploring the Evidence on Virtual and Blended Learning

    The authors provide a summary of policy and practice implications from more than 60 studies of remote and blended learning, computer-supported collaborative learning, computer-assisted instruction, and educational games. These include: Different approaches to remote learning suit different tasks and types of content.

  6. Full article: Virtual Learning During the COVID-19 Pandemic: A

    This study aims to assess the key research trends in virtual learning during the COVID-19 pandemic through a bibliometric analysis of 1595 studies from 589 journals during 2020-21. Our study highlights the influential aspects, such as the most contributing countries, journals, authors, and keywords in this research field.

  7. Making virtual learning engaging and interactive

1. INTRODUCTION. In a recent analysis of successful life science research exemplars, all of the most important characteristics identified by these exemplars related to the human dimension of research: relationships, passion, and resilience were the top three characteristics. A key part of the learning experience of these exemplars began in the same place it does for all of us: in the classroom.

  8. Insights Into Students' Experiences and Perceptions of Remote Learning

    Beyond this, our findings can inform research that collects demographic data and/or measures learning outcomes to understand the impact of remote learning on different populations. Importantly, this paper focuses on a subset of responses from the full data set which includes 10,563 students from secondary school, undergraduate, graduate, or ...

  9. Exploring The Impact of Virtual Learning Environments on Student

    Purpose: This review research paper aims to explore the impact of virtual learning environments on student engagement and academic achievement.

  10. (PDF) The Influence of Virtual Learning Environments in Students

    This paper focuses mainly on the relation between the use of a virtual learning environment (VLE) and students' performance. Therefore, virtual learning environments are characterised and a study ...

  11. Combining the Best of Online and Face-to-Face Learning: Hybrid and

Blended learning, as defined by Dziuban et al. (2004), is an instructional method that combines the efficiency and socialization opportunities of the traditional face-to-face classroom with the digitally enhanced learning possibilities of the online mode of delivery. Characteristics of this approach include (a) student-centered teaching where each and every student has to be actively involved in ...

  12. Full article: Framework of virtual platforms for learning and

    It extracts data from the Scopus database, encompassing more than 78 million records of scientific papers published since 1996 from more than 24,000 journals and more than 5,000 publishers worldwide. ... we defined a new area of research using the keywords virtual learning, competencies, and higher education for works published from 2017 to 2023.

  13. Traditional Learning Compared to Online Learning During the COVID-19

This paper focuses on the impact of the pandemic on the education sector. ... online platforms. An LMS, also known as a course management system or a virtual learning environment, is a system that many faculty members must adopt to improve student learning and ... The SAGE Handbook of E-learning Research. 2007. SAGE Knowledge. Book chapter. ...

  14. PDF The Influence of Virtual Learning Environments in Students ...

2.1. Virtual Learning Environments. Virtual learning environments have been associated with formal learning and with relationships between teachers, students and school. There is an increasing interest in the virtual learning environments supported by the internet, namely among education institutions, students and teachers. The concept of ...

  15. Effects of virtual learning environments: A scoping review of

    The purpose of this scoping review is to isolate and investigate the existing data and research that identifies if the synchronous face-to-face visual presence of a teacher in a virtual learning environment (VLE) is a significant factor in a student's ability to maintain good mental health. While the present research on this explicit interaction among VLE implementation and student mental ...

  16. Online Learning: A Panacea in the Time of COVID-19 Crisis

Rapid developments in technology have made distance education easy (McBrien et al., 2009). "Most of the terms (online learning, open learning, web-based learning, computer-mediated learning, blended learning, m-learning, for ex.) have in common the ability to use a computer connected to a network, that offers the possibility to learn from anywhere, anytime, in any rhythm, with any means ...

  17. Enhancing learning and retention with distinctive virtual reality

Considerable research has documented that human memory is inherently context-dependent [1,2]. During learning, contextual cues, whether environmental (e.g., a specific room) or internal (e.g., an ...

  18. (PDF) Students' Virtual Learning Challenges and Learning ...

    The goal of this paper is to provide a conceptual framework to research the relationship between virtual learning challenges and students' learning satisfaction.

  19. PDF The Effectiveness of E-Learning: An Explorative and Integrative Review

    works' in learning. Figure 1a shows the 761 papers relevant to this research, and Figure 1b shows 111 intensively coded abstracts of the 761 papers (which are described in further detail in the methodology section below). There are fewer papers published in 2013 than in any other year because the structured search took place in October 2013.

  20. The impact of virtual learning on students' educational behavior and

Virtual learning and its effect on students' understanding of the subjects/materials: 122 respondents (77.7%) reported needing to put extra self-effort into understanding lectures. ... Our research found that 75% of university students in Saudi Arabia suffer from some degree of depression. Half of these students showed moderate to extreme levels of ...

  21. Harnessing the Power of Virtual Learning

    Virtual learning gives us the ability to space learning out, expanding the opportunities to leverage methods that enhance the learning process. This white paper outlines our four-step framework for designing virtual learning to support the continuous learning and agility required in today's rapidly changing world.

  22. (PDF) Virtual reality in education: a tool for learning in the

difficult to teach (Smith and Hu, 2013). Virtual reality, an immersive, hands-on tool for learning, can play a unique role in addressing these educational challenges. In this paper, we present ...

  23. Comparative effects of dynamic geometry system and physical ...

    Hence, by adopting a quasi-experimental research design, this paper aims to empirically compare the immediate learning outcomes, knowledge retention, and learning interest of seventh-grade students who explore with virtual manipulatives (i.e., the DGS) and who explore with physical manipulatives. ... International perspectives on Teaching and ...

  24. Design, Implementation, and Assessment of A Software Tool Kit to

This paper introduces a comprehensive software toolkit designed to facilitate the design, implementation, and assessment of experimental research within the field of social psychology. The toolkit includes a Python tool integrated with Unreal Engine to run simulations using virtual reality. This tool allows students and psychologists to conduct experiments for learning purposes by helping them ...

  25. (PDF) Impact of virtual learning on Student's Academic Achievements

A learning system based on dignified teaching, with the help of digital resources, is known as virtual learning. This study aims to find out the impact of virtual learning on students' academic ...
