We conducted a follow-up experiment to the work of Lin et al. [2011]. The experimental protocol was the same as that of Experiment Four in Lin et al. [2011], except that the viewing condition was binocular instead of monocular. That work found no distance underestimation, although underestimation has been widely reported elsewhere, and this experiment was motivated by the question of whether stereoscopic effects in head-mounted displays (HMDs) account for that finding.
We conducted four experiments on egocentric depth perception using blind walking with a restricted scanning method in both a real and a virtual environment. The viewing condition in all experiments was monocular. We varied the field of view (real), scan direction (real), blind walking method (real and virtual), and self-representation (virtual) over distances of 4 to 7 meters. The field of view varied between 13.6° and 21.1°. The scan direction varied between near-to-far and far-to-near scanning. The blind walking method varied between direct blind walking and an indirect method of blind walking that matched the geometry of our laboratory. Self-representation varied among a self-avatar (a fully tracked, animated, first-person representation of the user), a static avatar (a mannequin avatar that did not move), and no avatar (a disembodied camera view of the virtual environment). In the real environment, we find an effect of field of view: participants performed more accurately with the larger field of view. In both the real and virtual environments, we find an effect of blind walking method: participants performed more accurately with direct blind walking. We do not find distance underestimation in either environment, nor do we find an effect of self-representation.
This experiment investigates spatial memory and attention when a human acts as a supervisor of one or two distributed robot teams in a large virtual environment (VE). The problem is similar to learning a new environment and interpreting its spatial structure, e.g., [Mou and McNamara 2002], but less is known about what happens when attention is divided or locomotion is involved. Our motivation arises in the context of humans and robots acting cooperatively together as a team. Such teaming is becoming increasingly important in many scenarios, such as disaster relief and wilderness search and rescue [Humphrey and Adams 2009].
This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in a local physical space and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected walking. An evaluation assessed the conditions under which subjects found the system most immersive.
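The abstract does not specify which form of redirected walking the system uses; the sketch below is only a generic illustration of the rotation-gain idea, in which real head turns are scaled so that the user's physical path is steered back into the tracked space while the virtual path continues uninterrupted. The names ROTATION_GAIN and redirect_heading are assumptions for illustration, not part of the described system.

import math

ROTATION_GAIN = 1.3  # assumed gain: the virtual heading turns 30% faster than the head

def redirect_heading(virtual_heading, physical_yaw_delta, gain=ROTATION_GAIN):
    # Apply a scaled physical head rotation (radians) to the virtual heading.
    # Scaling real turns up or down nudges the user's physical path back into
    # the tracked space while the virtual path continues uninterrupted.
    return (virtual_heading + gain * physical_yaw_delta) % (2.0 * math.pi)

# Example: a 90-degree physical turn becomes a 117-degree virtual turn.
print(math.degrees(redirect_heading(0.0, math.radians(90.0))))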
This paper evaluates the combination of two methods for adapting bipedal locomotion to explore virtual environments displayed on head-mounted displays (HMDs) within the confines of limited tracking spaces. We combine a method of changing the optic flow of locomotion, effectively scaling the translational gain, with a method of intervening and manipulating a user's location in physical space while preserving their spatial awareness of the virtual space; this latter technique is called resetting. In two experiments, we evaluate both scaling the translational gain and resetting while a subject locomotes along a path and then turns to face a remembered object. We find that the two techniques can be effectively combined, although there is a cognitive cost to resetting.
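As an illustration of how the two manipulations might interact in a tracking loop, the following is a minimal sketch, assuming a 2D top-down model of the tracked space and illustrative names such as LocomotionMapper, TRANSLATION_GAIN, and TRACKED_HALF_EXTENT; it is not the authors' implementation. Physical steps are scaled by the translational gain before being applied to the virtual viewpoint, and a reset is flagged when the user nears the physical boundary, after which the physical reference is rebased without moving the virtual viewpoint.

TRANSLATION_GAIN = 2.0      # assumed gain: 1 m walked maps to 2 m of virtual travel
TRACKED_HALF_EXTENT = 2.0   # assumed 4 m x 4 m physical tracking area, centered at the origin


class LocomotionMapper:
    """Maps tracked physical positions to virtual positions with a fixed
    translational gain, and flags a reset when the physical boundary nears."""

    def __init__(self):
        self.prev_physical = (0.0, 0.0)   # last tracked physical position (m)
        self.virtual = (0.0, 0.0)         # accumulated virtual position (m)
        self.awaiting_reset = False       # True while the user is being repositioned
        self.reset_count = 0

    def update(self, physical):
        if self.awaiting_reset:
            # During a reset the user turns or relocates physically; rebase the
            # physical reference without moving the virtual viewpoint.
            self.prev_physical = physical
            self.awaiting_reset = False
            return self.virtual

        dx = physical[0] - self.prev_physical[0]
        dy = physical[1] - self.prev_physical[1]
        # Translational gain: scale the physical step before applying it virtually.
        self.virtual = (self.virtual[0] + TRANSLATION_GAIN * dx,
                        self.virtual[1] + TRANSLATION_GAIN * dy)
        self.prev_physical = physical

        # Trigger a reset when the user nears the edge of the tracked space.
        if max(abs(physical[0]), abs(physical[1])) > TRACKED_HALF_EXTENT:
            self.reset_count += 1
            self.awaiting_reset = True
        return self.virtual


# Example: each 1 m physical step forward yields 2 m of virtual travel; a reset
# is flagged once the user passes the assumed 2 m boundary of the tracked space.
mapper = LocomotionMapper()
for step in range(1, 4):
    print(step, mapper.update((0.0, float(step))))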