Interpretability of Machine Learning Solutions in Industrial Decision Engineering
2019
The broad application of machine learning (ML) methods and algorithms across a diverse range of organisational settings has led to the adoption of legislation, such as the European Union's General Data Protection Regulation, which requires firms to be able to explain algorithmic decisions. At present, there does not appear to be a consensus in the ML literature on the definition of interpretability of an ML solution. Moreover, there is no agreement on the necessary level of interpretability of such a solution, or on how that level can be determined, measured and achieved. In this article, we provide such definitions, grounded in the research literature as well as our extensive experience of building ML solutions for organisations across industries. We present CRISP-ML, a detailed step-by-step methodology that provides guidance on creating the necessary level of interpretability at each stage of the solution-building process and is consistent with best practices of project management in ML settings. We illustrate the versatility and straightforward applicability of CRISP-ML with examples spanning a variety of industries and types of ML projects.