Abstract The brain mechanisms of embodiment in a virtual body have attracted growing scientific interest recently, with a particular focus on providing optimal virtual reality (VR) experiences. Disruptions from an embodied state to a less- or non-embodied state, termed Breaks in Embodiment (BiE), are however rarely studied despite their importance for designing interactions in VR. Here we use electroencephalography (EEG) to monitor the brain's reaction to a BiE, and investigate how this reaction depends on previous embodiment conditions. The experimental protocol consisted of two sequential steps: an induction step, in which participants were either embodied or non-embodied in an avatar, and a monitoring step, in which, in some cases, participants saw the avatar's hand move while their own hand remained still. Our results show the occurrence of error-related potentials linked to the observation of the BiE event in the monitoring step. Importantly, this EEG signature shows amplified potentials following the non-embodied condition, which is indicative of an accumulation of errors across steps. These results provide neurophysiological indications of how progressive disruptions affect the expectation of embodiment for a virtual body.
Here, we present a low-cost solution for the online and realistic representation of users using an array of depth cameras. The system is composed of a cluster of 10 Microsoft Kinect 2 cameras, each one associated with a compact NUC PC that streams live depth and color images to a master PC, which reconstructs the point cloud of the scene in real time and can in particular show the body of users standing in the capture area. A custom geometric calibration procedure allows accurate reconstruction of the different 3D data streams. Despite the inherent limitations of depth cameras, in particular sensor noise, the system provides a convincing representation of the user's body, is not limited by changes in clothing (even during immersion), can capture complex poses, and can even capture interactions between two persons or with physical objects. The advantage of using depth cameras over conventional cameras is that little processing is required for the dynamic reconstruction of unknown shapes, thus allowing truly interactive applications. The resulting live 3D model can be inserted into any virtual environment (e.g., via the Unity 3D software integration plugin) and can be subjected to all the usual 3D manipulations and transformations.
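The core of such a pipeline can be illustrated with a minimal sketch (assumptions: pinhole intrinsics per camera and 4x4 camera-to-world extrinsics from the calibration step; all function names are hypothetical). Each depth frame is back-projected to a local point cloud and rigidly transformed into a common world frame before being merged:

    import numpy as np

    def backproject(depth, fx, fy, cx, cy):
        # Back-project a depth image (in meters) to a local 3D point
        # cloud using the camera's pinhole intrinsics.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

    def merge_clouds(depth_frames, intrinsics, extrinsics):
        # Fuse per-camera clouds into one cloud in the world frame;
        # extrinsics[i] is the 4x4 camera-to-world transform estimated
        # by the geometric calibration procedure.
        merged = []
        for depth, K, T in zip(depth_frames, intrinsics, extrinsics):
            pts = backproject(depth, *K)
            homog = np.c_[pts, np.ones(len(pts))]  # N x 4 homogeneous
            merged.append((homog @ T.T)[:, :3])    # rigid transform
        return np.vstack(merged)

In the actual system, each NUC would stream its depth and color frames over the network and the master PC would run this merge at frame rate; color lookup and sensor-noise filtering are omitted from the sketch.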
The ability to build and control exposure through a seamless synergy of interaction and narration is a key requirement for a new type of immersive-VR training and therapy system. This paper presents a practical approach to immersive-VR training and therapy applications based on interactive storytelling. It provides a detailed description of a working implementation of the Interactive Narration Space (INS); this approach combines and satisfies both interaction and narration requirements through the use of high-level social interaction. By introducing the Social Channel, we aim to minimize the contradictions between the control over the story required by the trainer/therapist and the interaction required by the trainee/patient. These concepts and their practical realization have been investigated in the context of emergency-situation training and psychotherapeutic exposure, and the results support the usability of mediated interaction with a virtual assistant.
Performing motor tasks in virtual environments is best achieved with motion capture and real-time animation of a 3D character that participants control and perceive as their avatar in the virtual environment. A strong Sense of Embodiment (SoE) for the virtual body relies not only on the feeling that the virtual body is one's own (body ownership), but also on the feeling that it moves in the world according to one's will and replicates one's body movements precisely (sense of agency). Within this framework, our specific aim is to demonstrate that the avatar can even be programmed to be better than the user at executing a given task, or to perform a movement that is normally difficult or impossible for the user to execute precisely. More specifically, our experimental task asks subjects to follow with the hand a target that is animated with non-biological motion; the unnatural nature of the movement leads to systematic errors by the subjects. The challenge is to introduce a subtle distortion between the position of the real hand and the position of the virtual hand, so that the virtual hand succeeds in the task while subjects still believe they are fully in control. Results of two experiments (N=16) show that our implementation of a distortion function, which we name the attraction well, successfully led participants to report being in control of the movement (agency) and embodied in the avatar (body ownership), even when the distortion exceeded the threshold at which they could detect it. Furthermore, a progressive introduction of the distortion (starting without help and introducing the distortion on the go) further increased its acceptance.
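The abstract does not specify the form of the attraction well; the sketch below shows one plausible implementation (the Gaussian profile, parameter names, and ramp are assumptions), in which the virtual hand is pulled toward the target with a strength that peaks near the target and vanishes far from it:

    import numpy as np

    def attraction_well(real_pos, target_pos, gain=0.5, radius=0.1):
        # Distort the virtual hand position toward the target. The pull
        # decays with distance, so the mapping stays close to identity
        # away from the target, preserving the sense of agency.
        #   gain   -- maximum fraction of the error corrected (0..1)
        #   radius -- spatial extent of the well, in meters
        error = target_pos - real_pos
        dist = np.linalg.norm(error)
        weight = gain * np.exp(-(dist / radius) ** 2)  # Gaussian well
        return real_pos + weight * error

    def ramped_gain(t, ramp_time=30.0, max_gain=0.5):
        # Progressive introduction: the distortion gain grows from 0 to
        # its nominal value over the first ramp_time seconds.
        return max_gain * min(t / ramp_time, 1.0)

Calling attraction_well with the output of ramped_gain each frame would reproduce the progressive condition, in which assistance is absent at first and grows during the task.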
In immersive Virtual Reality (VR), users can experience the subjective feeling of embodiment for the avatar representing them in a virtual world. This feeling is known to be strongly supported by a high Sense of Agency (SoA) for the movements of the avatar that follow the user. In general, users do not self-attribute actions of their avatar that differ from the ones they actually performed. The situation is less clear when the actions of the avatar satisfy the intention of the user despite distortions and noticeable differences between user and avatar movements. Here, a within-subject experiment was conducted to determine whether a finger swap that helps users achieve a task would be better tolerated than one penalizing them. In particular, in a context of fast-paced finger movements with clearly correct or incorrect responses, we swapped the finger animation of the avatar (e.g., the user moves the index finger, the avatar moves the middle one) to either automatically correct spontaneous mistakes or to introduce incorrect responses. Subjects playing a VR game were asked to report when they noticed the introduction of a finger swap. Results based on 3256 trials (∼24% of swaps noticed) show that swaps helping users have significantly lower odds of being noticed (and with higher confidence) than those penalizing users. This demonstrates how the context and intention of a motor action are important factors for the SoA and for embodiment, opening new perspectives on how to design and study interactions in immersive VR.
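A minimal sketch of the swap logic described above (identifiers and the finger encoding are hypothetical): the avatar's animated finger is substituted only when doing so changes the correctness of the response.

    FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

    def animated_finger(user_finger, correct_finger, mode):
        # Decide which finger the avatar animates for the current trial.
        #   mode == "help":     turn an incorrect response into the correct one
        #   mode == "penalize": turn a correct response into an incorrect one
        #   otherwise:          faithfully mirror the user's movement
        if mode == "help" and user_finger != correct_finger:
            return correct_finger              # silently fix the mistake
        if mode == "penalize" and user_finger == correct_finger:
            i = FINGERS.index(user_finger)     # pick a neighboring finger
            return FINGERS[(i + 1) % len(FINGERS)]
        return user_finger                     # no swap on this trial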
Abstract Previous studies investigated bodily self-consciousness (BSC) by experimentally exposing subjects to multisensory conflicts (i.e., visuo-tactile, audio-tactile, visuo-cardiac) in virtual reality (VR) that involve the participant's torso, in a paradigm known as the full-body illusion (FBI). Using a modified FBI paradigm, we found that synchronous visuo-respiratory stimulation (i.e., a flashing outline surrounding an avatar in VR, with the flash intensity depending on breathing) can also modulate BSC by increasing self-location and breathing agency toward the virtual body. Our aim was to investigate such visuo-respiratory effects and determine whether respiratory motor commands contribute to BSC, using non-invasive mechanical ventilation (i.e., machine-delivered breathing). Seventeen healthy participants took part in a visuo-respiratory FBI paradigm and performed the FBI under two breathing conditions: (a) "active breathing" (i.e., participants actively initiated machine-delivered breaths) and (b) "passive breathing" (i.e., the timing of breaths was determined by the machine). Respiration rate, tidal volume, and their variability were recorded. In line with previous results, participants experienced subjective changes in self-location, breathing agency, and self-identification toward the avatar's body when presented with synchronous visuo-respiratory stimulation. Moreover, drift in self-location was reduced and tidal volume variability was increased by asynchronous visuo-respiratory stimulation. These effects were not modulated by the breathing-control manipulation. Our results extend previous FBI findings by showing that visuo-respiratory stimulation affects BSC independently of breathing motor command initiation. Also, the variability of respiratory parameters was influenced by visuo-respiratory feedback and might reduce breathing discomfort. Further exploration of these findings might inform the development of VR-based respiratory therapeutic tools for patients.
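The visuo-respiratory coupling can be sketched as a simple mapping from a normalized respiration trace to the brightness of the avatar's outline (the sampling rate, names, and the use of a temporal delay for the asynchronous condition are assumptions):

    FS = 100.0  # sampling rate of the respiration signal, in Hz (assumed)

    def flash_intensity(resp_trace, t, delay_s=0.0):
        # Brightness (0..1) of the outline around the avatar at time t
        # (in seconds). resp_trace is a normalized (0..1) respiration
        # signal sampled at FS. delay_s = 0 gives the synchronous
        # condition; a large delay (or a prerecorded trace from another
        # session) yields the asynchronous control condition.
        idx = int((t - delay_s) * FS)
        idx = max(0, min(idx, len(resp_trace) - 1))
        return resp_trace[idx]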