Marginal samples for knowledge distillation

2022 
Previous work such as Category Structure Knowledge Distillation constructs category-wise relations for knowledge distillation by introducing intra-category and inter-category relations based on category centers. However, category centers may be unreliable when the feature representations of wrongly classified samples are included in their computation, and inter-category relations derived from category centers are coarse-grained. In this paper, we propose Marginal Sample Knowledge Distillation (MSKD), which constructs reliable category centers and fine-grained inter-category relations by introducing label filtering and marginal samples. Label filtering excludes the feature representations of wrongly classified samples from the computation of category centers, yielding unbiased and reliable centers. Marginal samples are defined as correctly classified samples that lie close to category boundaries; they carry information about other categories and form fine-grained category boundaries for knowledge distillation. Extensive experiments across datasets and teacher-student architecture settings show that our method performs excellently compared with closely related methods.
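To make the two mechanisms concrete, the following is a minimal sketch, not the authors' implementation: label filtering keeps only correctly classified samples when averaging per-class features into category centers, and marginal samples are selected as correctly classified samples near a decision boundary. The function names, the use of the top-2 softmax gap as a boundary criterion, and the margin threshold are illustrative assumptions.

import torch
import torch.nn.functional as F

def filtered_category_centers(features, labels, logits, num_classes):
    """Mean feature vector per class, computed only over correctly
    classified samples (label filtering)."""
    preds = logits.argmax(dim=1)
    correct = preds == labels                       # label-filtering mask
    centers = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = correct & (labels == c)
        if mask.any():
            centers[c] = features[mask].mean(dim=0)
    return centers

def select_marginal_samples(labels, logits, margin=0.1):
    """Boolean mask of correctly classified samples whose top-2 softmax
    probability gap is small, i.e. samples close to a category boundary.
    The gap criterion and threshold are assumptions for illustration."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values
    gap = top2[:, 0] - top2[:, 1]                   # confidence margin
    correct = logits.argmax(dim=1) == labels
    return correct & (gap < margin)

# Toy usage on a random batch of hypothetical student features/logits.
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
logits = torch.randn(32, 10)
centers = filtered_category_centers(feats, labels, logits, num_classes=10)
marginal_mask = select_marginal_samples(labels, logits)

In such a setup, the filtered centers would supply the intra-/inter-category relation terms, while the marginal-sample mask would pick out the boundary samples whose relations to other categories are distilled at a finer granularity.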