Video analysis using spatiotemporal descriptor and kernel extreme learning machine for lip reading
2015
Lip-reading techniques have shown bright prospects for speech recognition in noisy environments and for hearing-impaired listeners. We aim to solve two important issues in lip reading: (1) how to extract discriminative lip motion features and (2) how to establish a classifier that provides high recognition accuracy. For the first issue, a projection local spatiotemporal descriptor, which captures lip appearance and motion information simultaneously, is utilized to provide an efficient representation of a video sequence. For the second issue, a kernel extreme learning machine (KELM) based on the single-hidden-layer feedforward neural network is presented to distinguish among the utterance classes. This method offers fast learning and strong robustness to nonlinear data. Furthermore, quantum-behaved particle swarm optimization with binary encoding is introduced to select an appropriate feature subset and the parameters for KELM training. Experiments conducted on the AVLetters and OuluVS databases show that the proposed lip-reading method achieves superior recognition accuracy compared with two previous methods. © 2015 SPIE and IS&T (DOI: 10.1117/1.JEI.24.5.053023)
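As background on the classifier, a KELM replaces the random hidden layer of a standard extreme learning machine with a kernel matrix and solves for the output weights in closed form, beta = (I/C + Omega)^-1 T. The following is a minimal sketch under assumed choices (an RBF kernel, one-hot targets); the class and parameter names are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances mapped through a Gaussian kernel.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: closed-form ridge solution in kernel space."""
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma   # regularization strength and kernel width

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot target matrix
        omega = rbf_kernel(X, X, self.gamma)     # N x N kernel (Gram) matrix
        # Output weights: beta = (I/C + Omega)^-1 T  -- a single linear solve,
        # which is what gives KELM its fast training compared with iterative methods.
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + omega, T)
        return self

    def predict(self, Xnew):
        # Decision function: K(Xnew, Xtrain) @ beta; class = argmax over outputs.
        return (rbf_kernel(Xnew, self.X, self.gamma) @ self.beta).argmax(1)
```

In the paper's pipeline, X would hold the spatiotemporal descriptors of the video sequences (after feature selection), and C and gamma are the kind of hyperparameters the binary quantum-behaved particle swarm optimization is used to tune.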
Keywords:
- Feedforward neural network
- Feature selection
- Computer vision
- Kernel (linear algebra)
- Pattern recognition
- Discriminative model
- Binary encoding
- Robustness (computer science)
- Machine learning
- Computer science
- Extreme learning machine
- Particle swarm optimization
- Artificial intelligence
- Classifier (machine learning)
- Computer programming
- Speech recognition