IB-M: A Flexible Framework to Align an Interpretable Model and a Black-box Model
2020
Both interpretability and accuracy are important for a predictive model in real applications, but most previous works, whether interpretable models or black-box models, cannot achieve both simultaneously, resulting in a trade-off between model interpretability and model accuracy. To break this trade-off, in this paper we propose a flexible framework, named IB-M, to align an $\underline{I}$nterpretable model and a $\underline{B}$lack-box $\underline{M}$odel, optimizing model interpretability and model accuracy simultaneously. Generally, we observe that most samples that are well-clustered or far from the true decision boundary can be easily interpreted by an interpretable model. Removing those samples helps to learn a more accurate black-box model that focuses on the remaining samples around the true decision boundary. Inspired by this, we propose a data re-weighting based framework to align an interpretable model and a black-box model, letting each focus on the samples it is good at and thereby achieving both interpretability and accuracy. We implement our IB-M framework for the real medical problem of ultrasound thyroid nodule diagnosis. Extensive experiments demonstrate that the proposed framework and algorithm achieve a more interpretable and more accurate diagnosis than either a single interpretable model or a single black-box model.
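The core idea in the abstract — route easy, well-separated samples to an interpretable model and leave the samples near the true decision boundary to a black-box model — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the toy 1-D data, the threshold rule standing in for the interpretable model, and the confidence cutoff of 1.5 are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two well-separated clusters plus an overlapping
# band near the true decision boundary at x = 0.
x_easy = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])
y_easy = np.concatenate([np.zeros(100), np.ones(100)])
x_hard = rng.normal(0, 1.0, 100)
y_hard = (x_hard + rng.normal(0, 0.3, 100) > 0).astype(float)

x = np.concatenate([x_easy, x_hard])
y = np.concatenate([y_easy, y_hard])

def rule_predict(x, thresh=0.0):
    """Stand-in 'interpretable model': a single threshold rule."""
    return (x > thresh).astype(float)

# Re-weighting in its simplest (hard) form: samples far from the
# boundary are kept by the interpretable rule, the rest are routed
# to the black-box model.
confidence = np.abs(x)        # distance from the rule's threshold
easy_mask = confidence > 1.5  # assumed cutoff for "easy" samples

# The interpretable rule handles the easy samples accurately...
acc_easy = (rule_predict(x[easy_mask]) == y[easy_mask]).mean()

# ...while a black-box model trained only on the remaining samples
# would concentrate its capacity near the true boundary.
hard_frac = (~easy_mask).mean()
print(f"rule accuracy on easy samples: {acc_easy:.2f}")
print(f"fraction routed to the black-box model: {hard_frac:.2f}")
```

In this sketch the routing is a hard 0/1 split; the paper's framework uses data re-weighting, which can be read as a soft version of the same assignment.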