Handling Difficult Labels for Multi-label Image Classification via Uncertainty Distillation
2021
Multi-label image classification aims to predict multiple labels for a single image. However, the difficulty of predicting different labels can vary dramatically due to semantic variations of the labels as well as the image context. Directly learning multi-label classification models risks biasing them toward, and overfitting on, such difficult labels: deep network based classifiers become over-trained on the difficult labels and consequently produce false-positive errors on them during testing. To handle difficult labels in multi-label image classification, we propose to calibrate the model so that it not only predicts the labels but also estimates the uncertainty of each prediction. With the new calibration branch of the network, the classification model is trained with the pick-all-labels normalized loss and optimized with respect to the number of positive labels. Moreover, to improve performance on difficult labels without additional annotation, we leverage the calibrated model as a teacher network and teach a student network to handle difficult labels via uncertainty distillation. The proposed uncertainty distillation teaches the student network which labels are highly uncertain through prediction distribution distillation, and locates the image regions that cause such uncertain predictions through uncertainty attention distillation. Extensive evaluations on benchmark datasets demonstrate that the proposed uncertainty distillation is effective for handling difficult labels in multi-label image classification.
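The abstract names two concrete ingredients that admit a compact sketch: the pick-all-labels normalized loss and the two distillation terms. Below is a minimal PyTorch sketch, not the authors' implementation. The PAL-N loss follows the formulation common in the multilabel-reduction literature (softmax cross-entropy over each positive label, normalized by the number of positives); the distillation losses, the function names, and the `temperature`/`attn_weight` parameters are illustrative assumptions about how the prediction distribution and uncertainty attention maps might be matched.

```python
import torch
import torch.nn.functional as F


def pick_all_labels_normalized_loss(logits, targets):
    """Pick-all-labels normalized (PAL-N) loss.

    logits:  (B, C) raw class scores.
    targets: (B, C) multi-hot labels in {0, 1}.
    """
    log_probs = F.log_softmax(logits, dim=1)        # (B, C)
    num_pos = targets.sum(dim=1).clamp(min=1.0)     # avoid divide-by-zero
    # Cross-entropy over every positive label, normalized per image
    # by the number of positive labels.
    per_image = -(targets * log_probs).sum(dim=1) / num_pos
    return per_image.mean()


def uncertainty_distillation_loss(student_logits, teacher_logits,
                                  student_attn, teacher_attn,
                                  temperature=2.0, attn_weight=1.0):
    """Hypothetical combination of the two distillation terms:
    prediction distribution distillation (which labels are uncertain)
    and uncertainty attention distillation (where the uncertainty arises)."""
    t = temperature
    # KL divergence between temperature-softened teacher and student
    # prediction distributions.
    kd = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                  F.softmax(teacher_logits / t, dim=1),
                  reduction="batchmean") * (t * t)
    # L2 match of normalized attention maps of shape (B, H, W).
    attn = F.mse_loss(F.normalize(student_attn.flatten(1), dim=1),
                      F.normalize(teacher_attn.flatten(1), dim=1))
    return kd + attn_weight * attn


if __name__ == "__main__":
    # Toy sanity check with random tensors: 4 images, 20 classes, 7x7 maps.
    logits = torch.randn(4, 20)
    targets = (torch.rand(4, 20) > 0.8).float()
    print(pick_all_labels_normalized_loss(logits, targets))
    print(uncertainty_distillation_loss(torch.randn(4, 20), torch.randn(4, 20),
                                        torch.randn(4, 7, 7), torch.randn(4, 7, 7)))
```

In this reading, the teacher is the calibrated network trained with PAL-N, and the student minimizes the classification loss plus the distillation terms; the relative weighting of the two terms is an assumption, not taken from the paper.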