Depth-based features in audio-visual speech recognition

2016 
We study the impact of depth-based visual features in systems for visual and audio-visual speech recognition. Instead of being reconstructed from multiple views, the depth maps are obtained directly from the Kinect sensor, which is better suited to real-world applications. We extract several types of visual features from the video and depth channels and evaluate their performance both individually and in cross-channel combination. To demonstrate the complementarity of information between the video-based and depth-based features, we examine the relative importance of each channel when the two are combined via weighted multi-stream Hidden Markov Models. We also introduce novel parametrizations based on the discrete cosine transform and the histogram of oriented gradients. The contribution of all presented visual speech features is demonstrated on the task of audio-visual speech recognition under noisy conditions.
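As a rough illustration of the two parametrizations named above, the sketch below extracts DCT and HOG features from a single mouth-region crop; the same extractors apply unchanged to a video-channel crop and a Kinect depth-map crop, each yielding its own feature stream. The ROI handling, number of retained coefficients, and HOG geometry here are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of DCT- and HOG-based visual speech features.
# Assumes a grayscale mouth-region crop; parameter values are illustrative.
import numpy as np
from scipy.fftpack import dct
from skimage.feature import hog

def dct_features(roi: np.ndarray, n_coeffs: int = 32) -> np.ndarray:
    """2-D DCT of the crop; keep the low-frequency coefficients
    in diagonal-scan order as the feature vector."""
    coeffs = dct(dct(roi.astype(np.float64), axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    h, w = coeffs.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([coeffs[i, j] for i, j in order[:n_coeffs]])

def hog_features(roi: np.ndarray) -> np.ndarray:
    """Histogram of oriented gradients over the same crop."""
    return hog(roi, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```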
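The weighted multi-stream HMM combination mentioned above amounts to raising each stream's emission density to a stream weight, i.e. log b_j(o) = Σ_s λ_s log b_js(o_s) with Σ_s λ_s = 1. A minimal sketch for the two visual streams follows, assuming per-state log-likelihoods are already computed; an audio stream combines analogously, and the weight λ_v would be tuned on held-out data (the function name and weighting scheme are illustrative, not the paper's exact setup).

```python
import numpy as np

def combined_log_likelihood(logp_video: np.ndarray,
                            logp_depth: np.ndarray,
                            lambda_v: float) -> np.ndarray:
    """Per-state emission log-likelihood of a two-stream HMM:
    log b_j(o) = lambda_v * log b_j(o_video)
               + (1 - lambda_v) * log b_j(o_depth).
    Sweeping lambda_v over [0, 1] exposes the relative importance
    of the video and depth channels."""
    return lambda_v * logp_video + (1.0 - lambda_v) * logp_depth
```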