An Efficient Model Compression Method of Pruning for Object Detection

2020 
In this paper, we propose an efficient model compression method for object detection networks. The key idea is that we combine pruning and training into a single process. This design has two benefits. First, we retain full control over the pruning of convolution kernels, which preserves the model's accuracy to the maximum extent. Second, unlike previous works, we overlap pruning with the training process instead of waiting for the model to be fully trained before pruning. In this way, a compressed model that is ready to use is obtained as soon as training finishes. We conducted experiments on SSD (Single Shot MultiBox Detector) for verification. First, when compressing the SSD300 model on the Pascal VOC dataset, we achieved 7.7X compression while the accuracy dropped by only 1.8%. Then, on the COCO dataset, we compressed the model by 2.8X with no change in accuracy.
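The abstract describes interleaving kernel pruning with the training loop rather than pruning a fully trained model. Below is a minimal PyTorch-style sketch of that general idea: every few epochs, the convolution kernels (output channels) with the smallest L1 norm are zeroed out, and training continues so the network adapts to the pruned structure. The pruning criterion, schedule, and the names `prune_conv_kernels`, `prune_every`, and `prune_ratio` are assumptions for illustration; the paper's actual method is not specified in the abstract.

```python
# Sketch of overlapping pruning with training (assumed PyTorch workflow).
import torch
import torch.nn as nn


def prune_conv_kernels(model: nn.Module, prune_ratio: float) -> None:
    """Zero out the output-channel kernels with the smallest L1 norm.

    A magnitude-based criterion is assumed here; the paper's criterion
    is not given in the abstract.
    """
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            weight = module.weight.data                 # (out_c, in_c, kH, kW)
            norms = weight.abs().sum(dim=(1, 2, 3))     # L1 norm per kernel
            num_prune = int(prune_ratio * norms.numel())
            if num_prune == 0:
                continue
            idx = torch.argsort(norms)[:num_prune]      # weakest kernels
            weight[idx] = 0.0                           # masked, not yet removed


def train_and_prune(model, loader, optimizer, criterion,
                    epochs=10, prune_every=2, prune_ratio=0.1):
    """Overlap pruning with training: prune periodically, keep fine-tuning."""
    for epoch in range(epochs):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
        if (epoch + 1) % prune_every == 0:
            prune_conv_kernels(model, prune_ratio)
    return model  # compressed model is ready as soon as training finishes
```

In this sketch the pruned kernels are only masked to zero; an actual compressed deployment would additionally remove the corresponding channels to realize the reported size reduction.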