Driver Drowsiness Detection using Knowledge Distillation Technique for Real Time Scenarios

2020 
Deep learning algorithms have proven highly accurate for feature-detection tasks in images. However, the large size of deep learning models makes them difficult to deploy in real-time applications because of high memory consumption and delayed response. Detecting driver drowsiness demands high accuracy together with high speed and low memory requirements. To exploit highly accurate deep learning models for this problem, we compress the network using the knowledge distillation technique: a small student network is trained under the guidance of a complex teacher network to raise the student's accuracy. A modified VGG-19 with 85 million parameters serves as the teacher network, while the student is a reduced VGG-16 with only 2 million parameters; the distilled student achieves 95% accuracy on the ZJU dataset.
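The distillation objective described above combines the teacher's temperature-softened predictions with the usual hard-label loss. A minimal sketch of such a loss, in the style of Hinton et al.'s formulation, is shown below; the temperature `T`, mixing weight `alpha`, and function names are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """Weighted sum of a soft-target term and a hard-label term.

    T and alpha are hypothetical hyperparameters for illustration only.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    kd = float(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                   - np.log(p_student + 1e-12)))) * T * T
    # Standard cross-entropy of the student against the ground-truth label.
    ce = -float(np.log(softmax(student_logits)[true_label] + 1e-12))
    return alpha * kd + (1 - alpha) * ce
```

In training, the teacher's logits are precomputed (or produced in a forward pass with frozen weights) and the student is updated to minimize this combined loss, which is how the 2-million-parameter student can approach the accuracy of the 85-million-parameter teacher.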