Explainable Activity Recognition over Interpretable Models

2021 
The majority of approaches to sensor-based activity recognition are based on supervised machine learning. While these methods reach high recognition rates, a major challenge is understanding the rationale behind the classifier's predictions. Indeed, those predictions may have a significant impact on the follow-up actions taken in a smart living environment. We propose a novel approach for eXplainable Activity Recognition (XAR) based on interpretable machine learning models. We generate explanations by combining the feature values with the feature importance obtained from the underlying trained classifier. A quantitative evaluation on a real dataset of Activities of Daily Living (ADLs) shows that our method is effective in providing explanations consistent with common knowledge. By comparing two popular machine learning models, our results also show that one-versus-one classifiers can provide better explanations in our framework.
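The core idea of combining feature values with the feature importance of a trained classifier can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sensor feature names, toy data, and scoring rule (value weighted by global importance) are all assumptions made for the example.

```python
# Hypothetical sketch: rank the features of one instance for an explanation
# by weighting each feature value with the importance learned by the model.
# Feature names and labels are illustrative, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["kitchen_motion", "fridge_open", "stove_on", "tv_on"]
X = rng.random((200, 4))
y = (X[:, 2] > 0.5).astype(int)  # toy label: "cooking" when the stove feature is high

clf = RandomForestClassifier(random_state=0).fit(X, y)

def explain(instance):
    # Combine each feature value with its global importance, then rank
    # features by the combined score (highest contribution first).
    scores = instance * clf.feature_importances_
    order = np.argsort(scores)[::-1]
    return [(feature_names[i], float(scores[i])) for i in order]

# For an instance with a high stove reading, the stove feature should
# dominate the explanation.
print(explain(np.array([0.1, 0.2, 0.9, 0.05])))
```

In this toy setup the label depends only on the stove feature, so the classifier concentrates its importance there and the explanation ranks `stove_on` first; a full system would map such ranked contributions to a human-readable justification of the predicted activity.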