A multi-modal classifier for heart sound recordings

2016 
Information extracted from heart sound signals is associated with valvular heart disease and other cardiovascular disorders. This study aims to develop a computational framework for classifying a given heart sound recording. Different techniques excel at classifying heart sound recordings with different patterns, and no single technique outperforms all the others. We therefore propose a multi-modal classifier that fuses the classification results of several techniques built on different features. Using data from the 2016 PhysioNet/CinC Challenge, we generate two feature sets: one calculated from segmentation results obtained with a peak-finding method, and the other extracted by audio signal analysis. We then assess the performance of two classification techniques, support vector machines (SVMs) and extreme learning machines (ELMs), by feeding each the best subset of features selected from these two feature sets. The final heart sound classification result (normal / abnormal) is determined by ensembling the two classifiers with voting. The best of our five online entries achieved an overall score of 0.83, with sensitivity = 0.70 and specificity = 0.96.
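The fusion step described above, combining the per-recording labels of the SVM and ELM classifiers by voting, can be sketched as follows. This is a minimal illustration, not the authors' code: the `vote` function and its tie-break toward "abnormal" (a plausible choice when only two classifiers vote and sensitivity matters for screening) are assumptions for the example.

```python
# Minimal sketch (not the authors' implementation): majority voting over
# binary heart-sound labels from several classifiers. With exactly two
# classifiers (SVM and ELM), ties are possible; here we assume a
# hypothetical tie-break toward "abnormal" to favour sensitivity.
from collections import Counter

def vote(predictions, tie_break="abnormal"):
    """Fuse "normal"/"abnormal" labels from several classifiers by voting."""
    top = Counter(predictions).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return tie_break  # classifiers disagree with equal weight
    return top[0][0]

# SVM and ELM disagree -> the assumed tie-break applies
print(vote(["abnormal", "normal"]))            # abnormal
# A clear majority wins regardless of the tie-break
print(vote(["normal", "normal", "abnormal"]))  # normal
```

With more than two base classifiers, or with confidence-weighted votes, the tie-break would rarely trigger; the two-classifier case shown here is the one the abstract describes.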