Comparisons of Target Localization Abilities during Physical and Virtual Rotating Scenes by Cognitively Intact and Cognitively Impaired Older Adults
2018
Background: Previous studies have reported that coordinate information (i.e., the distance between any two objects in a specific direction) is encoded differently in Virtual Reality (VR) scenes than in physical scenes. However, the accuracy of encoding categorical information (i.e., the relative positions of objects) from VR scenes has not been adequately investigated. In this study, we used a novel rotating visual scene to examine the effects of aging, prior VR experience, and dementia on the accuracy of encoding categorical information in physical and virtual environments.
Methods: We recruited a cohort of 60 cognitively healthy older adults with and without previous VR experience (Experiment 1) as well as 18 older adults with mild to moderate Alzheimer disease (AD) (Experiment 2). In both experiments, participants were asked to attend to a target window in a virtual or real building (depending on group assignment) while the building rotated around its vertical axis in the depth of the scene. After the rotation had stopped, they verbally judged the final position of the target relative to the building's entrance in terms of direction (e.g., left, right, back, or front). The experimenters calculated a score for each participant based on his or her accuracy in locating the target window.
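To illustrate how such a per-participant accuracy score could be tallied, the following is a minimal sketch assuming a simple proportion-correct rule over trials; the function name, trial structure, and scoring rule are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch: proportion-correct localization score.
# Assumes each trial records the participant's verbal judgment and the
# target window's true final direction ("left", "right", "front", "back").

from typing import List, Tuple

def localization_score(trials: List[Tuple[str, str]]) -> float:
    """Return the fraction of trials in which the judged direction
    matches the target's true final direction (0.0 to 1.0)."""
    if not trials:
        return 0.0
    correct = sum(1 for judged, actual in trials if judged == actual)
    return correct / len(trials)

# Example: a participant judges 3 of 4 rotations correctly -> score 0.75.
example_trials = [("left", "left"), ("back", "back"),
                  ("front", "right"), ("right", "right")]
print(localization_score(example_trials))  # 0.75
```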
Results: Healthy older adults accurately localized the target's position in both environments, whereas individuals with AD were able to encode the target's position only from the physical environment.
Conclusions: Our results suggest that an inability to encode a target's position from a rotating VR scene may be a symptom of dementia.
Keywords: