Deep Unsupervised Multi-Modal Fusion Network for Detecting Driver Distraction

2020 
Abstract The risk of being involved in a road traffic crash has increased year after year. Studies show that inattention while driving is one of the major causes of traffic accidents. In this work, to detect driver distraction (e.g., phone conversation, eating, texting), we introduce a deep unsupervised multi-modal fusion network, termed UMMFN. It is an end-to-end model composed of three main modules: multi-modal representation learning, multi-scale feature fusion, and unsupervised driver distraction detection. The first module learns low-dimensional representations of multiple heterogeneous sensor streams using embedding subnetworks. The multi-scale feature fusion module learns both the temporal dependencies within each modality and the spatial dependencies across modalities. The last module uses a ConvLSTM encoder-decoder to perform an unsupervised classification task that is not affected by new types of driver behavior. During the detection phase, a fine-grained detection decision is made by computing the reconstruction error of UMMFN as a score for each captured test sample. We empirically compare the proposed approach with several state-of-the-art methods on our own multi-modal dataset of distracted driving behavior. Experimental results show that UMMFN outperforms the existing approaches.
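The reconstruction-error scoring described above lets the model be trained on normal driving alone and flag behaviors it cannot reconstruct well. Below is a minimal sketch of that idea in PyTorch; it is not the paper's implementation: the layer sizes and names (ModalityEmbedding, FusionAutoencoder, distraction_score) are hypothetical, fusion is simplified to feature concatenation, and a plain nn.LSTM stands in for the ConvLSTM.

```python
# Sketch only: hypothetical names and sizes; a plain LSTM encoder-decoder
# stands in for the paper's ConvLSTM, and fusion is simple concatenation.
import torch
import torch.nn as nn


class ModalityEmbedding(nn.Module):
    """Embeds one sensor stream (batch, time, channels) into a shared space."""
    def __init__(self, in_channels: int, embed_dim: int):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, embed_dim, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, T, C) -> (B, C, T) for Conv1d, then back to (B, T, E).
        return self.conv(x.transpose(1, 2)).transpose(1, 2)


class FusionAutoencoder(nn.Module):
    """Fuses per-modality embeddings and reconstructs them with an
    LSTM encoder-decoder; the reconstruction error is the score."""
    def __init__(self, modality_channels, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [ModalityEmbedding(c, embed_dim) for c in modality_channels]
        )
        fused_dim = embed_dim * len(modality_channels)
        self.encoder = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, fused_dim)

    def forward(self, streams):
        # Embed each modality, then fuse by concatenating along features.
        fused = torch.cat([e(x) for e, x in zip(self.embeddings, streams)],
                          dim=-1)
        _, (h, _) = self.encoder(fused)
        # Feed the encoder's final hidden state at every decoder step.
        dec_in = h[-1].unsqueeze(1).repeat(1, fused.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), fused


def distraction_score(model, streams):
    """Per-sample mean squared reconstruction error: samples unlike the
    (normal-driving) training data reconstruct poorly and score high."""
    recon, fused = model(streams)
    return ((recon - fused) ** 2).mean(dim=(1, 2))


if __name__ == "__main__":
    model = FusionAutoencoder(modality_channels=[3, 6])  # e.g. GPS + IMU
    streams = [torch.randn(4, 50, 3), torch.randn(4, 50, 6)]
    print(distraction_score(model, streams))  # one score per sample
```

In this setup, a threshold on the score separates normal from distracted driving: samples whose reconstruction error exceeds the threshold are flagged, which is why the detector is not affected by driver behaviors unseen during training.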