The AXLNet Mobile project builds upon the web-based leadership classroom training application, AXLNet, originally created by ICT and ARI in 2006. We ported both the functionality and pedagogical approach of AXLNet to the Apple iPhone in 2008-2009, and tested the resulting application for usability with a population of 46 Captains and 1 Lieutenant at Ft. Leonard Wood. Initial findings indicate that supplemental training on the mobile device is an acceptable, and possibly more engaging, delivery method for today's troops.
Over the last 15 years, a virtual revolution has taken place in the use of Virtual Reality simulation technology for clinical purposes. Shifts in the social and scientific landscape have now set the stage for the next major movement in Clinical Virtual Reality with the "birth" of intelligent virtual humans. Seminal research and development has appeared in the creation of highly interactive, artificially intelligent, natural-language-capable virtual human agents that can engage real human users in a credible fashion. No longer mere props that add context or minimal faux interaction to a virtual world, virtual humans can be designed to perceive and act in a 3D virtual world, engage in spoken dialogs with real users, and exhibit human-like emotional reactions. This paper presents an overview of the SimCoach project, which aims to develop virtual human support agents to serve as online guides for promoting access to psychological healthcare information and for assisting military personnel and family members in breaking down barriers to initiating care. The SimCoach experience is being designed to attract and engage military Service Members, Veterans, and their significant others who might not otherwise seek help from a live healthcare provider. It is expected that this experience will motivate users to take the first step, empowering themselves to seek advice and information regarding their healthcare and general personal welfare, and encourage them to take the next step toward other, more formal resources if needed.
To address part of the challenge of testing and comparing various 3D user interface devices and methods, we are currently developing and testing a VR 3D user interface benchmarking scenario. The approach outlined in this paper focuses on capturing human interaction performance on object selection and manipulation tasks using standardized, scalable block configurations that allow speed and efficiency to be measured with any interaction device or method. The block configurations that we use as benchmarking stimuli are accompanied by a pure mental-rotation visuospatial assessment test. This feature allows researchers to measure users' existing spatial abilities and statistically partial out the variability due to innate ability from the actual hands-on performance metrics. This statistical approach could yield a cleaner analysis of the ergonomic features of interaction devices and methods, separate from existing user abilities. An initial test was conducted at two sites using this benchmarking system to compare 3D/gesture-based and 2D/mouse-based interactions for 3D selection and manipulation. Our preliminary results demonstrated, as expected, that the 3D/gesture-based method generally outperformed the 2D/mouse interface. There were also statistically significant performance differences between user groups when categorized by sex, visuospatial ability, and educational background.
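The variance-partitioning step described above is essentially a covariate adjustment: the mental rotation score is entered alongside the interface factor, so interface differences are estimated after innate spatial ability has been accounted for. The sketch below illustrates one way such an analysis could be run; it assumes Python with pandas and statsmodels, and all data, effect sizes, and column names (interface, mr_score, completion_time) are hypothetical illustrations, not values from the study.

```python
# Sketch of an ANCOVA-style covariate adjustment (hypothetical data and names).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60  # hypothetical participants per interface condition

# Synthetic stand-in data: task completion time for each interface,
# plus each participant's mental rotation test score (the covariate).
df = pd.DataFrame({
    "interface": ["gesture_3d"] * n + ["mouse_2d"] * n,
    "mr_score": rng.normal(20, 5, 2 * n),          # mental rotation score
})
df["completion_time"] = (
    45
    - 0.8 * df["mr_score"]                          # spatial ability helps
    - 5.0 * (df["interface"] == "gesture_3d")       # 3D interface advantage
    + rng.normal(0, 4, 2 * n)
)

# ANCOVA: the mental rotation score enters as a covariate, so the interface
# effect is estimated after variance due to innate spatial ability is removed.
model = smf.ols("completion_time ~ C(interface) + mr_score", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
print(model.params)
```

In this kind of model, the coefficient on the interface factor reflects the device/method effect with spatial ability held constant, which is the "cleaner analysis of ergonomic features" the benchmarking design is intended to support.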