Humans use different spatial reference frames that impact how they interact with displays and perform everyday spatial tasks. Switching visual attention between a distant or extrapersonal reference frame and a near or peripersonal frame is more effortful than switching within a given frame. However, much less is known about auditory spatial attention. In this study, 177 listeners identified auditory locations in rapid succession within and across peripersonal and extrapersonal regions of space (ROS). Participants responded faster when stimuli were moving towards them as long as the stimuli remained within the same ROS, but not when the stimuli crossed ROS. Further, individuals with a poor sense of direction were more sensitive to direction of travel and responded disproportionately more slowly to stimuli that seemed to be moving away from them rather than towards them. Those with a good sense of direction responded equally fast in both directions. Implications of these findings for performance with complex auditory displays are discussed.
Previous studies have begun exploring the possibility that “adaptable” automation, in which tasks are delegated to intelligent automation by the user, can preserve the benefits of automation while minimizing its costs. One approach to adaptable automation is the Playbook® interface, which has been used in previous research and has shown performance enhancements compared to other automation approaches. However, additional investigations are warranted to evaluate both the benefits and the potential costs of adaptable automation. The present study incorporated a delegation interface into a new display and simulation system, the multiple unmanned aerial vehicle simulator (MUSIM), to allow flexible control over three unmanned aerial vehicles (UAVs) at three levels of delegation abstraction. Task load was manipulated by increasing the frequency of primary and secondary task events. Additionally, participants experienced an unanticipated event that was not a good fit for the higher levels of delegation abstraction. Handling this poor “automation fit” event, termed a “Non-Optimal Play Environment” (NOPE) event, required the use of manual control. Results showed advantages when access to the highest levels of delegation abstraction was provided, as long as operators also had the flexibility to revert to manual control: performance was better across the two task load conditions, and reaction time to the NOPE event was fastest in this condition. The results extend previous findings showing benefits of flexible delegation of tasks to automation using the Playbook interface and suggest that Playbook remains robust even in the face of poor “automation fit” events.
The research project aimed at validating the interactive fixed-base driving simulator of the Interuniversity Research Center for Road Safety (CRISS) to enable its use in the design of deceleration lanes as a function of lane length. The research was developed in two phases. In the first, a field study was carried out on a section of a real highway to study drivers’ behavior in deceleration lanes of three different lengths. The second was an experiment using the CRISS driving simulator, in which forty-two drivers drove on three configurations of the deceleration lane. Trajectories and speeds in the field and in the simulator were analyzed, as was drivers’ behavior in terms of deceleration rate. The analysis revealed that the average trajectory develops through the same phases in the field and in simulation. The taper is also used correctly in reality as well as in the driving simulation. Before arriving at the deceleration lane, speeds in virtual reality were higher than those measured in the field. This was probably because no inertial force is transferred to the driver in the driving simulator; drivers’ inability to discern the roadway scenario at long distances ahead may have also contributed. Within the deceleration lane, the perception of the scenario is better, and consequently speeds were similar to the field data. No relation between deceleration rate and lane length was found in reality or in the driving simulator.
We examined performance and preference for tactile route guidance formats. Participants drove a simulated vehicle through counterbalanced pairings of four distinct cities using one of four navigation systems (three tactile and one auditory control). One tactile system used only pulse rate, the second used only tactor location, and the third used both pulse rate and location to convey guidance instructions. All navigation systems provided both a preliminary and an immediate cue indicating the next upcoming turn. The pulse-rate route guidance system was the most commonly preferred system. Results also indicate that participants’ ability to accurately retrace their route and identify landmarks did not differ across navigation systems. All four systems resulted in equivalent wayfinding performance, supporting previous literature indicating that tactile guidance systems can effectively support navigation in unfamiliar environments.
An experiment utilizing an auditory-spatial Stroop paradigm was created to assess whether participants are better able to attend to spatial or semantic information across near and far regions of space. Depending on the condition, participants were instructed either to attend to the semantic content of a stimulus or to identify the location from which the stimulus came. The sounds came from speakers physically located in either near space (the peripersonal region) or far space (the extrapersonal region), and the words were either “near” or “far.” Results indicate that participants were generally faster at responding in the semantic condition than in the location condition. Furthermore, consistent with findings of many other Stroop-like experiments, there was a significant difference between congruent and incongruent trials in both task conditions. The results of this investigation provide additional insight into how people process different types of information across near and far regions of space.