We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. In that work, participants showed no distance underestimation, contrary to the underestimation widely reported elsewhere, and we were motivated in this experiment to see whether stereoscopic effects in head-mounted displays (HMDs) accounted for this difference.
In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the top papers from the 30th IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR 2023), held March 25–29, 2023, in Shanghai, China, in hybrid format.
Fine-grained grasping and interaction can be frustrating and discouraging in a conventional virtual reality system with standard handheld controllers. Experience Orchestra is an immersive application in which users can experience playing orchestral instruments, either individually or in an ensemble, in realistic ways. We describe how conventional grasping methods were modified to make the experience more realistic.
Although immersive virtual reality is attractive to users, we know relatively little about whether higher levels of immersion improve or impair spatial learning outcomes. In addition, questions remain about how different approaches to travel within a virtual environment affect spatial learning. In this paper, we investigated the roles of immersion (desktop computer versus HTC Vive) and teleportation in spatial learning. Results showed few differences between conditions, favoring, if anything, the desktop environment. There seems to be no advantage to using continuous travel over teleportation, or to using the Vive with teleportation compared to a desktop computer. In discussing the results, we look critically at the experimental design, identify potentially confounding variables, and suggest avenues for future research.
The privacy and security of personal data have been at the forefront of public concern for some time now, and are typically understood in the context of data collected from online interaction (social media, transactions, search engine queries, etc.). The advent of immersive technologies expands data collection beyond what can typically be extracted via online interaction, particularly in terms of the availability of biometric data (eye tracking and gait analysis). However, little attention has yet been paid to interactions and interaction data as a privacy concern in their own right. We mediate interactions in everyday life through the maintenance of personal space, allowing certain individuals and objects into it; we do the same in virtual reality. Our personal space allows us to preserve our feeling of safety, and the way we mediate it reveals our biases and preferences. In this work, we examine the implications of interaction data and the host of ethical and privacy concerns its availability will bring.
Augmented Reality (AR) can enhance safety in navigation by providing feedback about threatening areas to avoid. We developed a virtual city in which participants were tasked with avoiding a pre-defined threat while navigating to a beacon. We implemented two simulated AR cues in the virtual city that indicated threat areas: (1) a world-locked cue that color-coded the ground area (GA) to delineate the boundaries of the threat, or (2) a screen-locked cue that provided dynamic text indicating the numeric distance to the threat (DT). Participants were instructed to complete each trial by freely navigating to a beacon in an efficient but safe manner. They navigated to 6 target beacons twice (in random order), once with each cue type in place. The GA cue resulted in the least time spent in danger areas (safer navigation), while the DT cue resulted in more efficient navigation. We argue that the GA cue's safety benefit was worth the trade-off, since the loss of efficiency was minimal.
This paper explores a method for re-sequencing an existing set of animations, specifically motion capture data, to generate new motion. Re-using animation is helpful in designing virtual environments and creating video games for reasons of cost and efficiency. This paper demonstrates that, through nonlinear dimensionality reduction and frame re-sequencing, visually compelling motion can be produced from a set of motion capture data. The technique presented uses Isomap and ST-Isomap to reduce the dimensionality of the data set. Two distance metrics for nonlinear dimensionality reduction are compared, as well as the effect of global degrees of freedom on the visual appeal of the newly generated motion.
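The pipeline this abstract describes can be sketched in miniature: embed each motion-capture frame in a low-dimensional space with Isomap, then build a new frame ordering by hopping between nearby frames in that space. This is a hedged illustration only, not the paper's implementation: it uses scikit-learn's plain Isomap (ST-Isomap is not available there), synthetic stand-in data rather than real motion capture, and a simple greedy nearest-neighbor re-sequencer of our own devising.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in for motion-capture data: 200 frames x 30 joint angles,
# generated as phase-shifted sinusoids so the frames lie on a smooth manifold.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
frames = np.column_stack([np.sin(t + p) for p in np.linspace(0, 3, 30)])
frames += rng.normal(scale=0.01, size=frames.shape)

# Nonlinear dimensionality reduction: Isomap preserves geodesic distances
# along the motion manifold, so nearby embedded points are similar poses.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(frames)

def resequence(emb, start=0):
    """Greedy re-sequencing (illustrative): from a start frame, repeatedly
    move to the nearest not-yet-visited frame in the embedded space."""
    order, visited = [start], {start}
    while len(order) < len(emb):
        dists = np.linalg.norm(emb - emb[order[-1]], axis=1)
        dists[list(visited)] = np.inf          # exclude frames already used
        nxt = int(np.argmin(dists))
        order.append(nxt)
        visited.add(nxt)
    return order

# A new ordering of the original frames: every frame appears exactly once,
# and consecutive frames are close in pose space, giving smooth motion.
new_order = resequence(embedding)
```

Playing back `frames[new_order]` yields a motion that visits every captured pose in a locally smooth order, which is the essence of re-sequencing; a production system would instead follow transition edges weighted by the learned distance metrics the paper compares.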