
Reduction of Forgetting by Contextual Variation During Encoding Using 360-Degree Video-Based Immersive Virtual Environments

Note: We do not have the ability to review papers.

PubDate: Apr 2024

Teams: The University of Tokyo

Writers: Takato Mizuho, Takuji Narumi, Hideaki Kuzuoka

PDF: Reduction of Forgetting by Contextual Variation During Encoding Using 360-Degree Video-Based Immersive Virtual Environments

Abstract

Recall impairment in an environmental context different from that of learning is called context-dependent forgetting. Two learning methods have been proposed to prevent context-dependent forgetting: reinstatement and decontextualization. Reinstatement matches the environmental context between learning and retrieval, whereas decontextualization involves repeated learning in various environmental contexts and eliminates the context dependency of memory. Conventionally, these methods have been validated by switching between physical rooms. In this study, however, we use immersive virtual environments (IVEs) presented through virtual reality (VR) as the environmental context, an approach known for its low cost and high reproducibility compared to traditional room manipulation. Whereas most existing studies using VR have failed to reveal the reinstatement effect, we test its occurrence using a 360-degree video-based IVE, which offers improved familiarity and realism over a computer graphics-based IVE. Furthermore, we are the first to address decontextualization using VR. Our experiment showed that repeated learning in the same constant IVE as retrieval did not significantly reduce forgetting compared to repeated learning in a different constant IVE. Conversely, repeated learning in varied IVEs reduced forgetting significantly more than repeated learning in constant IVEs. These findings contribute to the design of IVEs for VR-based applications, particularly in educational settings.
