Perceiving Driving Hazards in a Data-fusion Way Using Multi-Modal Net and Semantic Driving Trajectory
2020
This paper focuses on hazardous driving event detection. Because the traffic environment is complicated and fatigued or distracted drivers have limited ability to perceive unexpected hazardous situations, a long-term prediction model that alerts drivers to potential collision risk could improve driving safety. The proposed model, called the multi-modal net, is a deep convolutional neural network that processes both video data and kinematics data in a data-fusion manner. Semantic trajectory images, extracted from driving videos, illustrate the traffic environment and serve as one input to the multi-modal net. After training on the SHRP2 Safety Dataset, the proposed model reaches 91.6% accuracy. Experiments show that the proposed data-fusion model outperforms single-source data models, demonstrating the advantages of data fusion for complex-environment perception and hazard prediction.
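The abstract does not give architectural details, but the core data-fusion idea can be sketched as two modality-specific branches whose features are concatenated before a shared classification head. The following is a minimal NumPy illustration, not the paper's actual network: all layer sizes, the 64-dim image embedding, the 6-dim kinematics vector, and the two-class output are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical inputs: a 64-dim embedding standing in for CNN features of
# the semantic trajectory image, and a 6-dim kinematics vector (e.g. speed,
# acceleration, yaw rate). Both dimensions are assumptions, not from the paper.
img_feat = rng.standard_normal((1, 64))
kin_feat = rng.standard_normal((1, 6))

# Separate branch weights (assumed sizes), fused by concatenation.
W_img = rng.standard_normal((64, 32)) * 0.1
W_kin = rng.standard_normal((6, 32)) * 0.1
W_out = rng.standard_normal((64, 2)) * 0.1  # 2 classes: hazardous / safe

h_img = relu(img_feat @ W_img)              # image branch
h_kin = relu(kin_feat @ W_kin)              # kinematics branch
fused = np.concatenate([h_img, h_kin], axis=-1)  # the data-fusion step
probs = softmax(fused @ W_out)              # class probabilities, shape (1, 2)
```

The key design point mirrored here is that each modality is embedded independently before fusion, so the kinematics signal is not drowned out by the much higher-dimensional image features.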