Dynamic cell assemblies and vowel sound categorization

2002 
By simulating a neural network model, we investigated the role of background spectral components of vowel sounds in their neuronal representation. The model consists of two networks that process vowel sounds hierarchically. The first network, which is tonotopically organized, detects the spectral peaks called the first and second formant frequencies (F1 and F2). The second network has a tonotopic two-dimensional structure and receives convergent input from the first network; it detects the combined information of the first (F1) and second (F2) formant frequencies of vowel sounds. We trained the model with five Japanese vowels spoken by different speakers, modifying the synaptic connection strengths of the second network according to the Hebbian learning rule, which organized dynamic cell assemblies expressing vowel categories. We show that the background spectral components around the two formant peaks (F1, F2) are not necessary for creating the dynamic cell assemblies, but are advantageous for their formation.
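The core mechanism described above can be sketched in simplified form: formant-peak input drives a map of units whose connections are strengthened by Hebbian learning, so that distinct (F1, F2) combinations come to activate distinct groups of units. The sketch below is not the paper's model; all dimensions, the learning rate, the Gaussian input bumps, and the competitive (winner-update) variant of the Hebb rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 20 tonotopic input channels
# from the first network feeding 25 units of the second network.
n_in, n_out = 20, 25
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weight rows unit-norm
eta = 0.05  # learning rate (assumed)

def formant_input(f1_ch, f2_ch, width=1.5):
    """Gaussian activity bumps at the F1 and F2 channels, a stand-in
    for the spectral-peak detection done by the first network."""
    ch = np.arange(n_in)
    x = np.exp(-((ch - f1_ch) ** 2) / (2 * width ** 2))
    x += np.exp(-((ch - f2_ch) ** 2) / (2 * width ** 2))
    return x

def hebbian_step(W, x):
    """One competitive Hebbian update: the most active unit has its
    weights strengthened toward the current input (dW ~ y * x), then
    renormalized so the weights stay bounded."""
    y = W @ x
    win = int(np.argmax(y))
    W[win] += eta * y[win] * x
    W[win] /= np.linalg.norm(W[win])
    return W, win

# Train on two illustrative "vowel" patterns with distinct (F1, F2) pairs.
patterns = [(3, 12), (6, 17)]
for _ in range(100):
    for f1, f2 in patterns:
        W, _ = hebbian_step(W, formant_input(f1, f2))

# After training, each pattern strongly drives its own units -- a crude
# analogue of a cell assembly encoding one vowel category.
y_a = W @ formant_input(3, 12)
y_b = W @ formant_input(6, 17)
```

A full competitive map would add neighborhood cooperation between nearby units on the two-dimensional sheet; this sketch keeps only the Hebbian strengthening of co-active pre- and postsynaptic units that the abstract names.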