A Robust Driver’s Gaze Zone Classification using a Single Camera for Self-occlusions and Non-aligned Head and Eyes Direction Driving Situations

2020 
Distracted driving is one of the most common causes of traffic accidents around the world. Recognizing the driver's gaze direction during a maneuver could be an essential step toward avoiding such accidents. We therefore propose a gaze zone classification system that serves as a basis for systems supporting driver situation awareness. The challenge is to estimate the driver's gaze in non-ideal scenarios; in this work, specifically, scenarios with self-occlusion or with non-aligned head and eye directions. First, to resolve misclassifications in self-occlusion scenarios, we design a novel protocol in which the driver's full 3D facial geometry is reconstructed from a single 2D image using the state-of-the-art method PRNet. To resolve misclassifications when the driver's head and eye directions are not aligned, both head and eye information are extracted. Then, combining data pre-processing and deep learning methods, we obtain a classifier that remains robust under both self-occlusion and non-aligned head and eye directions. Our experimental results explicitly measure and show that the proposed method classifies accurately under both of the aforementioned problems. Moreover, we demonstrate that our model generalizes to new drivers while remaining a portable and extensible system, making it easily adaptable to various automobiles.
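The abstract describes the pipeline only at a high level (single 2D frame, PRNet-based 3D face reconstruction, head and eye feature extraction, deep-learning classification into a gaze zone). The Python sketch below illustrates that flow under stated assumptions; it is not the authors' implementation. The zone labels, the helpers reconstruct_face_3d, head_pose, and eye_features, and all array sizes are hypothetical placeholders standing in for PRNet and the trained networks.

import numpy as np

# Hypothetical gaze zones; the paper's actual zone set is not given here.
GAZE_ZONES = ["windshield", "rear_mirror", "left_mirror",
              "right_mirror", "dashboard", "center_console"]


def reconstruct_face_3d(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a PRNet-style reconstruction: one RGB frame in,
    a dense set of 3D face vertices out. PRNet regresses a UV position
    map; here we return dummy vertices so the sketch runs."""
    return np.zeros((45000, 3), dtype=np.float32)  # dummy dense mesh


def head_pose(vertices: np.ndarray) -> np.ndarray:
    """Assumed step: derive yaw/pitch/roll from the reconstructed mesh,
    e.g. by fitting a rigid transform to a canonical face template."""
    return np.zeros(3, dtype=np.float32)  # (yaw, pitch, roll) stub


def eye_features(frame: np.ndarray, vertices: np.ndarray) -> np.ndarray:
    """Assumed step: crop the eye regions using mesh landmarks and
    encode them (e.g. with a small CNN) into a fixed-length vector."""
    return np.zeros(64, dtype=np.float32)  # stub eye embedding


def classify_gaze_zone(frame: np.ndarray) -> str:
    """Fuse head and eye cues, so non-aligned head and eye directions
    are handled, then score each zone with a (stubbed) classifier."""
    vertices = reconstruct_face_3d(frame)      # robust to self-occlusion
    features = np.concatenate([head_pose(vertices),
                               eye_features(frame, vertices)])
    logits = np.random.randn(len(GAZE_ZONES))  # stand-in for the network
    return GAZE_ZONES[int(np.argmax(logits))]


if __name__ == "__main__":
    dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(classify_gaze_zone(dummy_frame))

Concatenating the head-pose and eye vectors before classification, rather than using either cue alone, is one plausible reading of how the method stays accurate when head and eye directions diverge.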