Mutual Information Matrix for Interpretable Fault Detection

2020 
This paper presents a novel mutual information (MI) matrix based method for fault detection. Given an m-dimensional fault process, the MI matrix is an $m \times m$ matrix whose (i,j)-th entry measures the MI between the i-th and j-th variables. We demonstrate that the transformed components extracted from the MI matrix can precisely unveil the dynamics of the underlying (possibly nonlinear) process, thus offering a reliable indicator of the occurrence of different types of faults. We also suggest that the recently proposed matrix-based Rényi's $\alpha$-entropy is a good surrogate for classical Shannon entropy in MI estimation. Experiments on both synthetic data and the benchmark Tennessee Eastman process demonstrate the interpretability of our methodology in identifying the root variables that cause the faults, as well as its superiority in terms of an improved fault detection rate (FDR) and a lower false alarm rate (FAR).
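
Below is a minimal sketch (not the authors' code) of how such a pairwise MI matrix could be built, using the matrix-based Rényi's $\alpha$-entropy estimator of Sánchez Giraldo et al. (2015) mentioned in the abstract. The Gaussian kernel, the bandwidth `sigma`, and the value of `alpha` are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def gram_matrix(x, sigma=1.0):
    """Trace-normalized Gaussian Gram matrix for one variable (x: shape (n,))."""
    x = x.reshape(len(x), -1)
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    return K / np.trace(K)  # diagonal is 1, so this divides by n

def renyi_entropy(A, alpha=1.01):
    """Matrix-based Renyi entropy: S_alpha(A) = log2(tr(A^alpha)) / (1 - alpha)."""
    eigvals = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard tiny negatives
    return np.log2(np.sum(eigvals ** alpha)) / (1.0 - alpha)

def renyi_mutual_info(x, y, sigma=1.0, alpha=1.01):
    """I_alpha(X;Y) = S_alpha(A) + S_alpha(B) - S_alpha(A∘B / tr(A∘B))."""
    A, B = gram_matrix(x, sigma), gram_matrix(y, sigma)
    AB = A * B                      # Hadamard product encodes the joint
    AB = AB / np.trace(AB)
    return renyi_entropy(A, alpha) + renyi_entropy(B, alpha) - renyi_entropy(AB, alpha)

def mi_matrix(X, sigma=1.0, alpha=1.01):
    """m x m matrix whose (i, j) entry estimates the MI between variables i and j."""
    n, m = X.shape
    M = np.zeros((m, m))
    for i in range(m):
        for j in range(i, m):
            M[i, j] = M[j, i] = renyi_mutual_info(X[:, i], X[:, j], sigma, alpha)
    return M

# Example: 200 samples of a 5-dimensional process with two dependent variables
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]
print(mi_matrix(X).round(3))
```

Dependent variable pairs show up as large off-diagonal entries of the resulting matrix; a fault that changes the dependence structure of the process therefore changes the MI matrix, which is the property the proposed detector exploits.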