A decision fusion classification architecture for mapping of tongue movements based on aural flow monitoring

2006 
A complete signal processing strategy is presented to detect and precisely recognize tongue movement by monitoring changes in airflow that occur in the ear canal. Tongue movements within the human oral cavity create unique, subtle pressure signals in the ear that can be processed to produce command signals in response to that movement. The strategy developed for the human-machine interface architecture includes energy-based signal detection and segmentation to extract ear pressure signals due to tongue movements, signal normalization to decrease the trial-to-trial variations in the signals, and pairwise cross-correlation signal averaging to obtain accurate estimates from ensembles of pressure signals. A new decision fusion classification algorithm is formulated to assign the pressure signals to their respective tongue-movement classes. The complete strategy of signal detection and segmentation, estimation, and classification is tested on 4 tongue movements performed by 4 subjects. Through extensive experiments, it is demonstrated that the ear pressure signals due to the tongue movements are distinct and that the 4 pressure signals can be classified with over 96% classification accuracy across the 4 subjects using the decision fusion classification algorithm.
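The abstract outlines a four-stage pipeline. The sketch below is not the authors' implementation; it is a minimal Python illustration of the described stages: energy-based detection and segmentation, normalization, cross-correlation-aligned ensemble averaging to form class templates, and a simple correlation-score classifier standing in for the paper's decision fusion rule. All window sizes, thresholds, and function names are assumptions, and trials are assumed to be equal-length NumPy arrays.

```python
import numpy as np

def segment_by_energy(x, win=256, hop=128, thresh=0.02):
    """Energy-based detection: return (start, end) sample indices of the
    region whose short-time energy exceeds `thresh` (assumed threshold)."""
    energies = np.array([np.mean(x[i:i + win] ** 2)
                         for i in range(0, len(x) - win, hop)])
    active = np.where(energies > thresh)[0]
    if active.size == 0:
        return None
    return active[0] * hop, active[-1] * hop + win

def normalize(seg):
    """Remove the mean and scale to unit energy to reduce
    trial-to-trial amplitude variation."""
    seg = seg - np.mean(seg)
    return seg / (np.linalg.norm(seg) + 1e-12)

def align_and_average(trials):
    """Align each trial to the first via the peak of the cross-correlation,
    then average the ensemble to estimate one class template."""
    ref = trials[0]
    aligned = [ref]
    for t in trials[1:]:
        xc = np.correlate(t, ref, mode="full")
        lag = int(np.argmax(xc)) - (len(ref) - 1)
        aligned.append(np.roll(t, -lag))
    return np.mean(np.stack(aligned), axis=0)

def classify(seg, templates):
    """Assign `seg` to the tongue-movement class whose template yields the
    highest peak cross-correlation (a stand-in for the decision fusion rule)."""
    scores = {label: np.max(np.correlate(seg, tmpl, mode="full"))
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)
```

In use, one would build `templates` by applying `segment_by_energy`, `normalize`, and `align_and_average` to training ensembles for each of the four movements, then call `classify` on each new normalized segment; the paper's actual decision fusion algorithm combines multiple classifier decisions rather than a single correlation score.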