PuzzleShuffle: Undesirable Feature Learning for Semantic Shift Detection

2021 
When operating a machine learning system, it is difficult to guarantee performance if the data distribution differs between training and production. Deep neural networks attain remarkable performance on various tasks when the data distribution is consistent between the training and operation phases, but their performance drops significantly when it is not. Detecting Out-of-Distribution (OoD) data with a model trained only on In-Distribution (ID) data is therefore important for ensuring the robustness of the system and the model. In this paper, we experimentally show that conventional perturbation-based OoD detection methods can accurately detect non-semantic shift, where the domain differs from the ID data, but have difficulty detecting semantic shift, where objects outside the ID classes appear. Based on this observation, we propose a simple and effective augmentation method for detecting semantic shift. The proposed method consists of two components: (1) PuzzleShuffle, which deliberately corrupts semantic information by dividing an image into multiple patches and randomly rearranging them, and trains the model to treat the resulting image as OoD data; and (2) Adaptive Label Smoothing, which adjusts the labels adaptively according to the patch size used in PuzzleShuffle. We show that the proposed method outperforms conventional augmentation methods in both ID classification performance and OoD detection performance under semantic shift.
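To make the two components concrete, the sketch below illustrates one plausible NumPy implementation of patch shuffling and patch-size-dependent label smoothing. The function names, the linear smoothing schedule, and the assumption that the patch size evenly divides the image are ours for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def puzzle_shuffle(image, patch_size):
    """Sketch of PuzzleShuffle: split an (H, W, C) image into non-overlapping
    square patches and randomly permute their positions, destroying the
    semantic layout. Assumes patch_size divides both H and W."""
    h, w, _ = image.shape
    rows, cols = h // patch_size, w // patch_size
    # Cut the image into a row-major list of patches.
    patches = [
        image[r * patch_size:(r + 1) * patch_size,
              c * patch_size:(c + 1) * patch_size]
        for r in range(rows) for c in range(cols)
    ]
    # Randomly rearrange the patch positions.
    order = np.random.permutation(len(patches))
    shuffled = np.zeros_like(image)
    for dst, src in enumerate(order):
        r, c = divmod(dst, cols)
        shuffled[r * patch_size:(r + 1) * patch_size,
                 c * patch_size:(c + 1) * patch_size] = patches[src]
    return shuffled

def adaptive_smooth_label(one_hot, patch_size, image_size, alpha_max=1.0):
    """Hypothetical Adaptive Label Smoothing: smaller patches corrupt more
    semantics, so the target is pushed further toward a uniform distribution.
    The linear schedule below is an assumption, not the paper's formula."""
    num_classes = one_hot.shape[-1]
    alpha = alpha_max * (1.0 - patch_size / image_size)
    uniform = np.full(num_classes, 1.0 / num_classes)
    return (1.0 - alpha) * one_hot + alpha * uniform
```

As a usage note, a shuffled image paired with its smoothed label could simply be mixed into the training batches alongside unmodified ID samples, so the model learns low-confidence predictions on semantically corrupted inputs.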