Explainable Predictions of Renal Cell Carcinoma with Interpretable Tree Ensembles from Contrast-enhanced CT Images

2021 
Diagnosis of renal cell carcinoma (RCC) is a critical task for automated clinical decision-support systems. Existing state-of-the-art methods focus on designing complex machine learning models for high identification accuracy; in particular, deep neural networks improve prediction accuracy. Such designs ignore model explainability, and their "black box" nature is a barrier to trust. In addition, little attention has been paid to evaluating clinical utility. To explain model predictions and weigh risks against benefits, this paper introduces explainable machine learning predictions that incorporate the balancing of treatment risks and benefits into RCC prediction models. The proposed explainable network is based on tree ensembles with four improvements: (1) a multiscale feature extraction module that obtains comprehensive radiomic features; (2) an attribute optimization module based on the Chi-square test, which guides the network to focus on informative variables; (3) a SHapley Additive exPlanations (SHAP) module appended to the framework to automatically and efficiently interpret model predictions; and (4) a decision curve analysis (DCA) module for evaluating clinical utility. By integrating these improvements in series, the models' performance is progressively enhanced. Comparing different tree-ensemble algorithms, our study finds that the random forest (RF) and extra trees (ET) classifiers can be valuable diagnostic tools for explainable RCC prediction. To demonstrate generalizability, our tree-ensemble models achieve higher accuracies than state-of-the-art pretrained deep models with fine-tuned parameters.
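The modular pipeline described above can be illustrated with a minimal sketch: Chi-square attribute selection feeding a random forest, followed by a decision-curve net-benefit calculation. This is not the paper's implementation; the data here are synthetic stand-ins for the radiomic features, and all parameter values (`k=8`, `n_estimators=200`, the 0.5 threshold) are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split

# Synthetic stand-in for CT radiomic features; shifted to be
# non-negative because the chi2 test requires non-negative inputs.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X = X - X.min(axis=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Attribute optimization via the Chi-square test (k is an assumption).
selector = SelectKBest(chi2, k=8).fit(X_tr, y_tr)

# Tree-ensemble classifier: random forest here; ExtraTreesClassifier
# is a drop-in replacement for the ET variant.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
prob = clf.predict_proba(selector.transform(X_te))[:, 1]

def net_benefit(y_true, y_prob, pt):
    """Decision curve analysis net benefit at probability threshold pt:
    NB = TP/n - FP/n * pt / (1 - pt)  (Vickers & Elkin formulation)."""
    pred = y_prob >= pt
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * pt / (1 - pt)

nb = net_benefit(y_te, prob, 0.5)
```

For the interpretation step, the SHAP library's `TreeExplainer` would typically be applied to the fitted ensemble to attribute each prediction to the selected features; it is omitted here to keep the sketch dependent only on scikit-learn.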