Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection
Catherine S. Jarnevich, Marian Talbert, Jeffery Morisette, Cameron L. Aldridge, Cynthia S. Brown, Sunil Kumar, Daniel J. Manier, Colin Talbert, Tracy R. Holcombe
Keywords: Interpretability, Robustness, Environmental niche modelling, Kernel density estimation
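The keywords point to kernel density estimation as one tool for background selection in species distribution modelling. As a rough illustration only, and not the authors' implementation, the sketch below fits scipy.stats.gaussian_kde to hypothetical presence coordinates and uses the estimated density to weight the sampling of background points; all data, sizes, and variable names are invented for the example.

```python
# Illustrative sketch (not the paper's method): KDE-weighted background
# selection for a species distribution model. Background candidates are
# sampled with probability proportional to a kernel density estimate fitted
# to the presence records.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical presence coordinates (lon, lat) and candidate background
# locations; in practice these would come from occurrence records and the
# study-area raster.
presences = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
candidates = rng.uniform(low=-2.0, high=2.0, size=(5000, 2))

# Fit a 2-D Gaussian KDE to the presences; gaussian_kde expects shape (d, N).
kde = gaussian_kde(presences.T)

# Evaluate the density at each candidate and convert to sampling weights.
weights = kde(candidates.T)
weights = weights / weights.sum()

# Draw 1,000 background points with probability proportional to the KDE.
idx = rng.choice(len(candidates), size=1000, replace=False, p=weights)
background = candidates[idx]
print(background.shape)  # (1000, 2)
```

A uniform, unweighted draw from the candidate grid would be the usual random-background baseline that a density-weighted scheme like this is typically compared against.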
The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focussing on the other end of the "interpretability spectrum": it examines why some models (linear models and decision trees) are highly interpretable, and how more general models (MARS and GAMs) retain some degree of interpretability. It finds that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.
Keywords: Interpretability, Black box
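To make the abstract's contrast concrete, the sketch below uses scikit-learn on synthetic data (everything here is an illustrative assumption, not taken from the paper) to print the two artefacts that make linear models and shallow decision trees directly readable: a coefficient per feature and an explicit if/else rule list.

```python
# Minimal sketch of "direct" interpretability for the two model classes the
# abstract calls highly interpretable. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)
feature_names = ["temp", "precip", "elevation"]

# Linear model: each coefficient is read directly as a marginal effect.
lin = LinearRegression().fit(X, y)
for name, coef in zip(feature_names, lin.coef_):
    print(f"{name}: {coef:+.2f}")

# Shallow decision tree: the fitted model *is* a small set of if/else rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```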
Over the years, ensemble methods have become a staple of machine learning. Similarly, generalized linear models (GLMs) have become very popular for a wide variety of statistical inference tasks. The former have been shown to enhance out-of-sample predictive power, and the latter possess easy interpretability. Recently, ensembles of GLMs have been proposed; on the downside, this approach loses the interpretability that GLMs possess. We show that minimum description length (MDL)-motivated compression of the inferred ensembles can be used to recover interpretability with little, if any, loss of performance, and we illustrate this on a number of standard classification data sets.
Keywords: Interpretability, Predictive power, Minimum description length
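A minimal sketch of the trade-off this abstract describes, under assumptions: a bagged ensemble of logistic-regression GLMs gains predictive stability but has no single coefficient vector to read, and collapsing it back to one GLM restores that readability. Plain coefficient averaging is used here only as a crude stand-in for the paper's MDL-motivated compression; the data set and ensemble size are invented for illustration.

```python
# Sketch (not the paper's method): bag logistic-regression GLMs, then
# "compress" the ensemble back to a single GLM by averaging coefficients,
# recovering one readable coefficient per feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=1000, n_features=5, random_state=2)

# Fit an ensemble of GLMs on bootstrap resamples.
models = []
for _ in range(25):
    idx = rng.integers(0, len(X), size=len(X))
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

# Ensemble prediction: average member probabilities (accurate, but there is
# no single coefficient vector to interpret).
p_ens = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

# "Compressed" model: one GLM whose coefficients are the ensemble means.
coef = np.mean([m.coef_[0] for m in models], axis=0)
intercept = np.mean([m.intercept_[0] for m in models])
p_single = 1.0 / (1.0 + np.exp(-(X @ coef + intercept)))
print(np.corrcoef(p_ens, p_single)[0, 1])
```

The final correlation only shows that the collapsed model tracks the ensemble on the training data; it says nothing about the MDL guarantees discussed in the paper.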
We consider model selection in generalized linear models (GLMs) for high-dimensional data and propose a wide class of model selection criteria based on penalized maximum likelihood with a complexity penalty on the model size. We derive a general nonasymptotic upper bound for the expected Kullback-Leibler divergence between the true distribution of the data and that generated by a selected model, and establish the corresponding minimax lower bounds for sparse GLMs. For a properly chosen (nonlinear) penalty, the resulting penalized maximum likelihood estimator is shown to be asymptotically minimax and adaptive to the unknown sparsity. We also discuss possible extensions of the proposed approach to model selection in GLMs under additional structural constraints, and to aggregation.
Keywords: Kullback-Leibler divergence
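The selection mechanics this abstract describes can be sketched as follows, with the caveat that the paper derives a specific nonlinear complexity penalty; the familiar BIC-style penalty |M| log n below is only a stand-in to show how candidate supports are scored by penalized maximum likelihood and the minimiser is selected. Data and model choices are assumptions for illustration.

```python
# Sketch of complexity-penalized maximum-likelihood model selection for a
# logistic GLM: score every candidate support set by
#   2 * (negative log-likelihood) + |support| * log(n)
# and keep the minimiser. (The paper's penalty is different; this shows the
# mechanics only.)
import itertools
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=400, n_features=6, n_informative=2,
                           random_state=3)
n = len(y)

best_score, best_support = np.inf, None
for k in range(1, X.shape[1] + 1):
    for support in itertools.combinations(range(X.shape[1]), k):
        cols = list(support)
        # Large C approximates unpenalized maximum likelihood within sklearn.
        model = LogisticRegression(C=1e6, max_iter=2000).fit(X[:, cols], y)
        nll = n * log_loss(y, model.predict_proba(X[:, cols]))  # total -logL
        score = 2 * nll + k * np.log(n)  # penalized ML criterion (BIC-style)
        if score < best_score:
            best_score, best_support = score, support

print("selected features:", best_support)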
Due to the "black-box" nature of artificial intelligence (AI) recommendations, interpretability is critical to the consumer experience of human-AI interaction. Unfortunately, improving the interpretability of AI recommendations is technically challenging and costly. There is therefore an urgent need for the industry to identify when the interpretability of AI recommendations is most likely to be needed. This study defines the construct of Need for Interpretability (NFI) of AI recommendations and empirically tests consumers' need for interpretability of AI recommendations in different decision-making domains. Across two experimental studies, we demonstrate that consumers do have a need for interpretability toward AI recommendations, and that this need is higher in utilitarian domains than in hedonic domains. These findings can help companies identify the varying need for interpretability of AI recommendations across application scenarios.
Keywords: Interpretability, Black box