Gradient Importance Learning for Incomplete Observations

2021 
Though recent works have developed methods that can generate estimates (or imputations) of the missing entries in a dataset to facilitate downstream analysis, most depend on assumptions that may not align with real-world applications and could suffer from poor performance in subsequent tasks such as classification. This is particularly true if the data have large missingness rates or a small sample size. More importantly, the imputation error could be propagated into the prediction step that follows, which may constrain the capabilities of the prediction model. In this work, we introduce the gradient importance learning (GIL) method to train multilayer perceptrons (MLPs) and long short-term memories (LSTMs) to directly perform inference from inputs containing missing values without imputation. Specifically, we employ reinforcement learning (RL) to adjust the gradients used to train these models via back-propagation. This allows the model to exploit the underlying information behind missingness patterns. We test the approach on real-world time-series (i.e., MIMIC-III), tabular data obtained from an eye clinic, and a standard dataset (i.e., MNIST), where our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
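The abstract's key mechanism, re-weighting back-propagation gradients so a predictor can learn directly from incomplete inputs, can be sketched as follows. This is a minimal illustration, not the authors' GIL implementation: the ImportanceNet, the zero-fill convention for missing entries, and the function names are all simplifying assumptions, and the RL procedure that would actually train the importance weights (the abstract only states that RL adjusts the gradients) is omitted.

```python
import torch
import torch.nn as nn

class ImportanceNet(nn.Module):
    """Hypothetical: maps a missingness mask to per-feature gradient-importance weights."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_features), nn.Sigmoid(),  # weights in (0, 1)
        )

    def forward(self, mask):
        return self.net(mask)

def importance_weighted_step(predictor, importance_net, opt, x, mask, y):
    """One update: scale the predictor's input-layer gradient by learned importance."""
    x_filled = x * mask                        # zero out missing entries; no imputation
    loss = nn.functional.cross_entropy(predictor(x_filled), y)
    opt.zero_grad()
    loss.backward()
    # Re-weight the first-layer gradient column-wise by the batch-averaged
    # importance of each input feature, conditioned on the missingness pattern.
    w = importance_net(mask).mean(dim=0).detach()   # shape: (n_features,)
    predictor[0].weight.grad *= w.unsqueeze(0)      # assumes predictor is nn.Sequential
    opt.step()
    return loss.item()

# Hypothetical usage on MNIST-sized inputs with random missingness:
predictor = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
importance_net = ImportanceNet(784)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
x = torch.randn(32, 784)
mask = (torch.rand(32, 784) > 0.5).float()          # 1 = observed, 0 = missing
y = torch.randint(0, 10, (32,))
print(importance_weighted_step(predictor, importance_net, opt, x, mask, y))
```

A full version would train ImportanceNet itself with an RL objective, for example rewarding gradient weightings that improve downstream prediction performance, which is the part of the method this sketch leaves out.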