Objects learned within single enclosed spaces (e.g., rooms) can be represented within a single reference frame. In contrast, the representation of navigable spaces (multiple interconnected enclosed spaces) is less well understood. In this study we examined different levels of integration within memory (local, regional, global) when learning object locations in navigable space. Participants consecutively learned two distinctive regions of a virtual environment that eventually converged at a common transition point, and subsequently solved a pointing task. In Experiment 1, pointing latency increased with increasing corridor distance to the target, and additionally when pointing into the other region. Further, pointing within a region was accelerated by alignment with local and regional reference frames, whereas pointing across regional boundaries was accelerated by alignment with a global reference frame. Thus, participants memorized local corridors, clustered corridors into regions, and integrated globally across the entire environment. Introducing the transition point at the beginning of learning each region in Experiment 2 caused the previous region effects to vanish. Our findings emphasize the importance of locally confined spaces for structuring spatial memory and suggest that the opportunity to integrate novel spatial information into existing spatial knowledge early during learning may influence unit formation at the regional level. Further, global representations seem to be consulted only when accessing spatial information beyond regional borders. Our results are inconsistent with conceptions of spatial memory for large-scale environments based either exclusively on local reference frames or on a single reference frame encompassing the whole environment, but rather support a hierarchical representation of space. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) than in the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars or characters. However, virtual characters and avatars are increasingly being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants saw either a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation task, an object interaction task, and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced performance on the tasks that involved interaction with elements in the environment: the object interaction and stepping stone tasks. Overall, participants performed the tasks faster and more accurately when they either had a self-avatar or had seen a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within VEs. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.
Establishing verbal memory traces for non-verbal stimuli has been reported to either facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not previously examined: wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression alone. At the remaining intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and asked to indicate the subsequent direction of the learned route. In Experiment 1, intersection names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, intersection names were descriptive, and participants' route memory was more accurate at named intersections. These results have implications for naming places in a city and for the design of wayfinding aids.