Explanation Consistency Training: Facilitating Consistency-Based Semi-Supervised Learning with Interpretability

2021 
Unlabeled data exploitation and interpretability are usually both required in practice; however, they are typically pursued independently, and very few works try to connect the two. For unlabeled data exploitation, state-of-the-art semi-supervised learning (SSL) results have been achieved by encouraging the consistency of model output under data perturbation, i.e., the consistency assumption. However, it remains hard for users to understand how particular decisions are made by state-of-the-art SSL models. To this end, in this paper we first show that the consistency assumption is closely related to causal invariance, and that causal invariance is the main reason why the consistency assumption is valid. We then propose ECT (Explanation Consistency Training), which encourages the model to make decisions for consistent reasons under data perturbation. ECT employs model explanation as a surrogate for the causality of model output, which bridges state-of-the-art interpretability methods to SSL models and alleviates the high complexity of causal modeling. We realize ECT-SM for vision tasks and ECT-ATT for NLP tasks. Experimental results on real-world data sets validate the highly competitive performance and better explanations of the proposed algorithms.