Radon transform of auditory neurograms: a robust feature set for phoneme classification
2017
Classification of speech phonemes is challenging, especially in noisy environments, and traditional speech recognition systems therefore perform poorly in the presence of noise. Unlike traditional methods, in which features are extracted mostly from properties of the acoustic signal, this study proposes a new feature for phoneme classification based on neural responses from a physiologically based computational model of the auditory periphery. A two-dimensional neurogram was constructed from the simulated responses of auditory-nerve fibres to speech phonemes. Features of the neurogram images were extracted using the Discrete Radon Transform, and their dimensionality was reduced using an efficient feature selection technique. A standard classifier, the Support Vector Machine, was employed to model and test the phoneme classes. Classification performance was evaluated in quiet and under noisy conditions in which the test data were corrupted with various environmental distortions such as additive noise, room reverberation, and telephone-channel noise. Performance was also compared with results from existing methods based on Mel-frequency cepstral coefficients, Gammatone frequency cepstral coefficients, and frequency-domain linear prediction. In general, the proposed neural feature exhibited better classification accuracy in quiet and under noisy conditions than most existing acoustic-signal-based methods.
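As a rough illustration of the pipeline summarised above (neurogram, Radon-transform feature extraction, feature selection, SVM classification), the following Python sketch strings together generic scikit-image and scikit-learn components. It is not the paper's implementation: the neurograms here are stand-in random arrays rather than outputs of an auditory-nerve model, the projection angles and ANOVA F-score selection are placeholder choices, and the classifier settings are illustrative assumptions.

```python
import numpy as np
from skimage.transform import radon
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def radon_features(neurogram, angles=np.arange(0.0, 180.0, 6.0)):
    """Project a 2D neurogram (frequency x time) onto a set of angles with
    the discrete Radon transform and flatten the sinogram into a fixed-length
    feature vector. The angle step of 6 degrees is an arbitrary choice."""
    sinogram = radon(neurogram, theta=angles, circle=False)
    return sinogram.ravel()

# Stand-in data: one neurogram per phoneme token plus integer class labels.
# In the paper the neurograms come from a physiological auditory-periphery
# model; here they are random arrays of a fixed size for illustration only.
rng = np.random.default_rng(0)
neurograms = [rng.random((64, 128)) for _ in range(200)]
labels = rng.integers(0, 10, size=200)

X = np.vstack([radon_features(n) for n in neurograms])

# Feature selection (ANOVA F-score as a placeholder for the paper's
# selection technique) followed by a standard SVM classifier.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=200),
                    SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In practice the stand-in neurograms would be replaced by the simulated auditory-nerve responses, and the feature selector and SVM hyperparameters would be tuned on held-out data, with noisy test conditions evaluated separately from the clean training set.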