An Empirical Study of Uncertainty Gap for Disentangling Factors

2021 
Disentangling factors has proven crucial for building interpretable AI systems: a disentangled generative model exposes explanatory input variables that increase trustworthiness and robustness. Previous works apply a progressive disentanglement learning regime in which the ground-truth factors are disentangled in a fixed order, but they do not explain why such an order matters. In this work, we propose a novel metric, the Uncertainty Gap, to evaluate how the uncertainty of a generative model changes given its input variables. We generalize the Uncertainty Gap to image reconstruction tasks using BCE and MSE losses. Extensive experiments on three commonly used benchmarks demonstrate the effectiveness of the Uncertainty Gap in evaluating both the informativeness and the redundancy of given variables. We empirically find that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, which indicates that a suitable order of disentangling factors facilitates performance.
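
A minimal sketch of one plausible reading of the Uncertainty Gap is given below: the change in reconstruction uncertainty (measured by BCE or MSE, as the abstract mentions) when a single latent variable is ablated. The decoder interface, the zero-masking strategy, and the function name `uncertainty_gap` are assumptions for illustration only; the paper's exact definition may differ.

```python
import torch
import torch.nn.functional as F


def uncertainty_gap(decoder, z, x, dim, loss="mse"):
    """Estimate how much reconstruction uncertainty grows when latent
    dimension `dim` is removed (zeroed out) from the code `z`.

    decoder: maps a latent code to a reconstruction of x (assumed callable)
    z:       latent codes, shape (batch, latent_dim)
    x:       target images, same shape as decoder(z)
    dim:     index of the latent variable to ablate
    loss:    "mse" or "bce", matching the reconstruction likelihood
    """
    def recon_loss(code):
        x_hat = decoder(code)
        if loss == "bce":
            return F.binary_cross_entropy(x_hat, x, reduction="mean")
        return F.mse_loss(x_hat, x, reduction="mean")

    # Ablate one factor; other masking schemes (e.g. resampling from the
    # prior) would also be reasonable.
    z_masked = z.clone()
    z_masked[:, dim] = 0.0

    with torch.no_grad():
        full = recon_loss(z)            # uncertainty with all variables
        ablated = recon_loss(z_masked)  # uncertainty without variable `dim`

    # A large positive gap suggests the variable is informative; a gap near
    # zero suggests it is redundant for reconstruction.
    return (ablated - full).item()
```

Under this reading, one could rank latent dimensions by their gap and disentangle the dimension with the largest gap first, consistent with the abstract's finding that significant factors should be disentangled before insignificant ones.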