A Generic Model for Embedding Users' Physical Workspaces into Multi-Scale Collaborative Virtual Environments

2010 
Most Virtual Reality (VR) systems must take the users’ physical environment into account in order to immerse those users in a virtual world and to make them aware of their interaction capabilities. However, existing VR systems have reached no consensus on how to embed the real environment into the virtual one: each system meets its own requirements according to the devices and interaction techniques it uses. This paper proposes a generic model that enables VR developers to embed the users’ physical environment into the Virtual Environment (VE) when designing new applications, especially collaborative ones. The real environment we consider is a multi-sensory space that we propose to represent as a structured hierarchy of 3D workspaces describing the features of the users’ physical environment (visual, sound, interaction or motion workspaces). A set of operators enables developers to control these workspaces in order to provide interactive functionalities to end-users. Our model makes it possible to maintain co-location between the physical workspaces and their representation in the VE. Because the virtual world is often larger than these physical workspaces, workspace integration must be maintained even when users navigate or change their scale in the virtual world. Our model also provides a way to carry these workspaces along in the virtual world if required. It is implemented as a set of reusable modules and used to design and implement multi-scale Collaborative Virtual Environments (msCVE). We also discuss how three “state of the art” VR techniques could be designed using our model.
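To make the abstract's central idea concrete, the following is a minimal sketch (in Python, with hypothetical class and method names that are not taken from the paper) of how a structured hierarchy of physical workspaces with scale-aware operators might be organized; it is an illustration under assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Workspace:
    """One physical workspace (e.g. visual, sound, interaction or motion),
    modelled as a 3D volume kept co-located with its virtual counterpart."""
    name: str                                   # e.g. "visual", "interaction"
    extent: Tuple[float, float, float]          # (width, height, depth) in metres
    scale: float = 1.0                          # current scale in the virtual world
    children: List["Workspace"] = field(default_factory=list)

    def rescale(self, factor: float) -> None:
        """Operator: propagate a scale change down the hierarchy so the
        physical/virtual co-location is preserved at every level."""
        self.scale *= factor
        for child in self.children:
            child.rescale(factor)

    def carry(self, offset: Tuple[float, float, float]) -> None:
        """Operator: carry the workspace along as the user navigates.
        (Placeholder: a real implementation would update a transform.)"""
        ...


# Hypothetical usage: one user's physical environment as a hierarchy.
user_env = Workspace("user", extent=(3.0, 2.5, 3.0), children=[
    Workspace("visual", extent=(2.0, 1.5, 0.1)),
    Workspace("interaction", extent=(1.2, 1.2, 1.2)),
    Workspace("motion", extent=(3.0, 0.0, 3.0)),
])
user_env.rescale(0.5)  # the user changes scale in the multi-scale CVE
```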