Improved Deep Fuzzy Clustering for Accurate and Interpretable Classifiers

2019 
While deep learning has demonstrated excellent performance in many challenging learning tasks, it has not yet found broad acceptance amongst users. The key deficiency seems to be the uninterpretable nature of a deep neural network; no general, practical method for explaining the predictions or decisions of such a network has been devised. Lacking such a method, users seem unwilling to entrust deep learning with critical decisions. One approach to generating explanations is to design algorithms that are inherently more interpretable. Neuro-fuzzy systems are an example, which we are extending to deep networks, in particular by designing deep fuzzy clustering algorithms. Deep fuzzy clustering employs a deep learner as an automated feature extractor. A fuzzy clustering is performed in the extracted feature space, and a classifier is built from it. The resulting model appears more interpretable, but at the cost of lower accuracy. This paper explores improvements to deep fuzzy clustering leading to a more accurate deep fuzzy classifier that still seems highly interpretable. We evaluate the accuracy and interpretability of the model on the MNIST dataset.
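The abstract describes a pipeline of deep feature extraction followed by fuzzy clustering and a classifier built on the cluster memberships. The sketch below illustrates only the clustering stage, assuming standard fuzzy c-means (the paper's specific algorithm and improvements are not given here); the "deep features" are replaced by a toy 2-D blob dataset, and all names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: returns soft memberships U (n x c)
    and cluster centroids V (c x d). m > 1 controls fuzziness."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships, rows normalized to sum to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # Centroid update: weighted mean of points, weights u_ik^m.
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every point to every centroid (n x c).
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, V

# Toy stand-in for deep features: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
               rng.normal(3.0, 0.3, (50, 2))])
U, V = fuzzy_c_means(X, c=2)
# A simple classifier on top: assign each point to its highest-membership
# cluster; the soft memberships U are what lend the model interpretability.
labels = U.argmax(axis=1)
```

In a deep fuzzy clustering system, `X` would instead hold features produced by a trained deep network (e.g., an autoencoder's latent codes for MNIST images), and the membership degrees in `U` can be inspected to explain each prediction.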