Several researchers have focused on studying driver cognitive behavior and mental load for in-vehicle interaction while driving. Adaptive interfaces that vary with mental and perceptual load levels could help reduce accidents and enhance the driver experience. In this paper, we analyze the effects of mental workload and perceptual load on psychophysiological dimensions and provide a machine learning-based framework for mental and perceptual load estimation in a dual-task scenario for in-vehicle interaction (https://github.com/amrgomaaelhady/MWL-PL-estimator). We use off-the-shelf, non-intrusive sensors that can be easily integrated into the vehicle's system. Our statistical analysis shows that while mental workload influences some psychophysiological dimensions, perceptual load shows little effect. Furthermore, we classify mental and perceptual load levels through the fusion of these measurements, moving towards a real-time, adaptive in-vehicle interface that is personalized to user behavior and driving conditions. We report up to 89% mental workload classification accuracy and provide a real-time, minimally intrusive solution.
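To make the fusion-and-classification step concrete, the following is a minimal sketch of such a pipeline. The feature names, window setup, and the choice of a random-forest classifier are illustrative assumptions; the actual implementation lives in the linked repository.

```python
# Minimal sketch of a fusion-based mental-workload classifier.
# Feature names and the random-forest choice are illustrative
# assumptions, not the pipeline from the linked repository.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per time window of fused psychophysiological features, e.g.
# [mean heart rate, pupil diameter, skin conductance, gaze dispersion].
X = rng.normal(size=(200, 4))          # placeholder sensor windows
y = rng.integers(0, 2, size=200)       # 0 = low load, 1 = high load

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```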
In the last decade, the empirical sciences have undergone a tremendous change in how research is conducted. As a broad interdisciplinary field, Affective Computing often relies on empirical user studies. This paper analyzes research practices in Affective Computing and derives recommendations for improving the quality of methods and reporting. We extracted a total of k = 65 empirical studies from the two most recent International Conferences on Affective Computing & Intelligent Interaction (ACII '17 and ACII '19). Three raters summarized study characteristics (e.g., number of experimental studies) and how much methodological (e.g., participant characteristics) and statistical information (e.g., degrees of freedom) was missing. We also conducted a p-curve analysis to test the overall evidential value of the findings. Results showed that (1) in at least half of the studies, at least one important piece of information about statistical results was missing, and (2) the k = 31 studies that reported all information necessary for inclusion in the p-curve showed evidential value. No single study met all criteria. We provide concrete recommendations on how to implement open research practices for empirical studies in Affective Computing.
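For readers unfamiliar with p-curve analysis, the following is a minimal sketch of the full-curve test (Stouffer method, following Simonsohn et al., 2014): every significant p-value is converted to a pp-value, i.e., its conditional probability given significance, and right skew of the resulting distribution indicates evidential value. The input p-values here are made-up placeholders, not the extracted studies.

```python
# Minimal p-curve sketch (full-curve Stouffer test, Simonsohn et al., 2014).
# The p-values below are made-up placeholders, not the k = 31 studies.
import numpy as np
from scipy.stats import norm

p_values = np.array([0.001, 0.004, 0.012, 0.021, 0.034, 0.008])
sig = p_values[p_values < 0.05]     # p-curve only uses significant results

pp = sig / 0.05                     # pp-value: P(p <= observed | p < .05) under H0
z = norm.ppf(pp)                    # probit transform; N(0, 1) under the null
Z = z.sum() / np.sqrt(len(z))       # Stouffer's combined Z
p_right_skew = norm.cdf(Z)          # small p => right skew => evidential value
print(f"Z = {Z:.2f}, right-skew p = {p_right_skew:.4f}")
```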
This work proposes a new method for guiding a user's attention towards objects of interest in a cyber-physical environment (CPE). CPEs are environments that contain several computing systems that interact with each other and with the physical world. These environments contain several sensors (cameras, eye trackers, etc.) and output devices (lamps, screens, speakers, etc.). These devices can be used first to track the user's position, orientation, and focus of attention, and then to find the most suitable output device to guide the user's attention towards a target object. We argue that the most suitable device in this context is the one that attracts attention closest to the target and is salient enough to capture the user's attention. The method is implemented as a function that estimates the "closeness" and "salience" of each visual and auditory output device in the environment. Some parameters of this method are then evaluated in a user study set in a virtual reality supermarket. The results show that multimodal guidance can lead to better guiding performance; however, this depends on the chosen parameters.
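A minimal sketch of how such a suitability function could look is given below. The device list, the attention-attraction positions, the salience values, and the closeness/salience weight alpha are all illustrative assumptions rather than the paper's actual parameters.

```python
# Sketch of selecting the most suitable output device for guidance.
# Positions, salience values, and alpha are illustrative assumptions.
import math

devices = [
    # (name, position it attracts attention to, salience in [0, 1])
    ("ceiling lamp", (2.0, 3.0, 2.5), 0.4),
    ("shelf screen", (1.0, 2.0, 1.5), 0.9),
    ("speaker",      (4.0, 1.0, 2.0), 0.6),
]
target = (1.2, 2.1, 1.4)   # object of interest
alpha = 0.7                # hypothetical closeness/salience trade-off

def suitability(attract_pos, salience):
    """Score a device: high when it draws attention close to the target
    and is salient enough to be noticed."""
    closeness = 1.0 / (1.0 + math.dist(attract_pos, target))
    return alpha * closeness + (1.0 - alpha) * salience

best = max(devices, key=lambda d: suitability(d[1], d[2]))
print("guide with:", best[0])
```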
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing. Despite significant advances, existing outside-the-vehicle referencing methods consider these modalities separately. Moreover, existing multimodal referencing methods focus on static situations, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints. In this paper, we investigate the specific characteristics of each modality and the interaction between them when used to reference outside objects (e.g., buildings) from the vehicle. We furthermore explore person-specific differences in this interaction by analyzing individuals' pointing performance and gaze patterns, along with their effect on the driving task. Our statistical analysis shows significant differences in individual behavior based on the object's location (i.e., the driver's right vs. left side), the object's surroundings, the driving mode (i.e., autonomous vs. normal driving), and pointing and gaze duration, laying the foundation for a user-adaptive approach.
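As an illustration only, one way pointing and gaze could be combined for such referencing is to cast a ray per modality, score each candidate object by its angular distance to both rays, and weight the modalities per user. The geometry, the weight, and the scoring below are assumptions for the sketch, not the analysis reported in the paper.

```python
# Illustrative sketch of fusing pointing and gaze rays for outside-the-
# vehicle object referencing. Geometry and weighting are made up.
import numpy as np

def angle_to(origin, direction, obj_pos):
    """Angle (radians) between a ray and the direction to an object."""
    v = obj_pos - origin
    cos = np.dot(direction, v) / (np.linalg.norm(direction) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

objects = {"building A": np.array([30.0, 10.0, 5.0]),
           "building B": np.array([25.0, -8.0, 4.0])}
eye = np.array([0.0, 0.0, 1.2])         # assumed eye position in the car
gaze_dir = np.array([0.95, 0.30, 0.12])
hand = np.array([0.3, -0.2, 0.9])       # assumed fingertip position
point_dir = np.array([0.92, 0.35, 0.15])
w_gaze = 0.6                            # hypothetical per-user modality weight

# Pick the object with the smallest weighted angular cost across modalities.
cost = {name: w_gaze * angle_to(eye, gaze_dir, pos)
              + (1.0 - w_gaze) * angle_to(hand, point_dir, pos)
        for name, pos in objects.items()}
print("referenced object:", min(cost, key=cost.get))
```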
Many car accidents are caused by human distraction, including cognitive distraction. In-vehicle human-machine interfaces (HMIs) have evolved over the years, offering more and more functions. Interacting with these HMIs can, however, also lead to further distraction and, as a consequence, accidents. To tackle this problem, we propose using adaptive HMIs that change according to the driver's mental workload. In this work, we present the current status and preliminary results of a user study that uses naturalistic secondary tasks while driving (i.e., the primary task) to understand the effects of one such interface.
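As a toy illustration of the adaptation idea, the sketch below hides secondary HMI functions once an estimated workload score crosses a threshold. The threshold, the 0-to-1 workload score, and the function tiers are made-up assumptions, not the interface evaluated in the study.

```python
# Toy sketch of a workload-adaptive HMI: secondary functions are hidden
# when estimated workload is high. Tiers and threshold are assumptions.
ALL_FUNCTIONS = {
    "navigation": "primary",
    "climate":    "primary",
    "music":      "secondary",
    "messaging":  "secondary",
}
HIGH_LOAD_THRESHOLD = 0.7         # assumed cutoff on a 0..1 workload estimate

def visible_functions(workload: float) -> list:
    """Return the HMI functions shown at the current workload level."""
    if workload >= HIGH_LOAD_THRESHOLD:
        # High load: strip the interface down to primary functions only.
        return [f for f, tier in ALL_FUNCTIONS.items() if tier == "primary"]
    return list(ALL_FUNCTIONS)    # low load: show the full interface

print(visible_functions(0.4))    # -> all four functions
print(visible_functions(0.9))    # -> ['navigation', 'climate']
```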