Image transformer for explainable autonomous driving system

2021 
In the last decade, deep learning (DL) approaches have been applied successfully in computer vision (CV) applications. However, DL-based CV models are generally considered black boxes due to their lack of interpretability. This black-box behavior has exacerbated user distrust and has therefore prevented the widespread deployment of DL-based CV models in autonomous driving tasks, even though some of these models outperform humans. For this reason, it is essential to develop explainable DL models for autonomous driving tasks. Explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic tool for identifying defects and weaknesses in the model during the system development phase. In this paper, we propose such an explainable end-to-end autonomous driving system using "Transformer," a state-of-the-art (SOTA) self-attention-based model, to map visual features from images collected by onboard cameras to potential driving actions with corresponding explanations. The results demonstrate the efficacy of our proposed model, which outperforms the benchmark model by a significant margin in action and explanation prediction at lower computational cost.
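The core idea the abstract describes, self-attention over image features followed by joint action and explanation heads, can be sketched as follows. This is a minimal, hypothetical illustration in NumPy, not the paper's architecture: the patch count, feature dimension, class counts, and mean-pooling readout are all assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence of patch features.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # (patches, patches) attention weights
    return A @ V, A

rng = np.random.default_rng(0)
n_patches, d_model = 16, 32          # hypothetical sizes, not from the paper
n_actions, n_explanations = 4, 6     # hypothetical action / explanation class counts

# Stand-in for visual features extracted from an onboard-camera image.
X = rng.standard_normal((n_patches, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
H, attn = self_attention(X, Wq, Wk, Wv)

# Pool attended features, then predict action and explanation jointly.
pooled = H.mean(axis=0)
W_act = rng.standard_normal((d_model, n_actions))
W_exp = rng.standard_normal((d_model, n_explanations))
action_probs = softmax(pooled @ W_act)
explanation_probs = softmax(pooled @ W_exp)
```

The attention matrix `attn` is what makes such a model amenable to explanation: it indicates which image regions contributed to each prediction, alongside the explicit explanation head.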