The induction of phonotactics for speech segmentation: converging evidence from computational and human learners

2011 
During the first year of life, infants start to learn various properties of their native language. Among these properties are phonotactic constraints, which specify the permissible sound sequences within the words of the language (e.g., Dutch words typically do not contain the sequence 'pf'). Such constraints guide infants' search for words in continuous speech, thereby facilitating the development of the mental lexicon. An intriguing problem is how infants are able to acquire knowledge of phonotactics.

This dissertation proposes a computational model of phonotactic learning that is grounded in psycholinguistic findings. The model connects two learning mechanisms: statistical learning and feature-based generalization. Using these mechanisms, phonotactic constraints of varying levels of generality are induced from transcribed utterances of continuous speech and are subsequently used to detect word boundaries. The model, StaGe (Statistical learning and Generalization), induces biphone constraints through a statistical analysis of segment co-occurrences in continuous speech (Frequency-Driven Constraint Induction) and generalizes over phonologically similar biphone constraints to create more general, natural-class-based constraints (Single-Feature Abstraction). The induced constraints are of two types: markedness constraints (*xy), which exert pressure towards the insertion of boundaries into the speech stream, and contiguity constraints (Contig-IO(xy)), which militate against the insertion of boundaries. Conflicts between constraints are resolved using principles of Optimality Theory.

The model is tested in a series of empirical studies. Computer simulations demonstrate that a crucial property of the model, feature-based generalization, improves the segmentation performance of the learner. In addition, StaGe provides a learnability account of a constraint from theoretical phonology, OCP-Place, and of its effect on human segmentation: a detailed analysis of the induced constraint set reveals that the model learns constraints resembling OCP-Place through abstraction over statistical patterns found in continuous speech. Moreover, the simulations show that both specific and abstract phonotactic constraints are needed to account for human segmentation behavior.

The psychological plausibility of the phonotactic learning approach is addressed in a series of artificial language learning experiments with adult participants. These experiments show that human learners can induce novel phonotactics from continuous speech. The results were, however, limited to constraints on specific consonants: no evidence was found for feature-based generalization to novel consonants (i.e., consonants not presented during training), a null result that was possibly due to perceptual confusion.

This dissertation demonstrates that phonotactic constraints can be learned from continuous speech by combining mechanisms that are available to language learners. The computational model provides a better account of speech segmentation than models that rely solely on statistical learning. With respect to human learning capacities, the dissertation shows that adult learners can induce novel phonotactic constraints from the continuous speech stream of an artificial language.
By combining computational modeling with psycholinguistic experiments, this dissertation contributes to our understanding of the mechanisms involved in early language acquisition.
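
To make the model's two mechanisms concrete, the sketch below implements a minimal StaGe-style learner in Python. It assumes an observed/expected (O/E) co-occurrence statistic, illustrative induction thresholds, a toy feature table, and a simplified boundary decision in place of full Optimality-Theoretic evaluation; none of these specifics are taken from the dissertation itself, which should be consulted for the actual settings.

    # A minimal StaGe-style sketch; names, thresholds, and features are illustrative.
    from collections import Counter
    from itertools import combinations

    # Toy feature table: each segment maps to a set of feature values.
    # A real model would use a full phonological feature system.
    FEATURES = {
        "p": frozenset({"labial", "stop", "voiceless"}),
        "b": frozenset({"labial", "stop", "voiced"}),
        "t": frozenset({"coronal", "stop", "voiceless"}),
        "d": frozenset({"coronal", "stop", "voiced"}),
        "a": frozenset({"vowel", "low"}),
        "i": frozenset({"vowel", "high"}),
    }

    def induce_constraints(utterances, low=0.5, high=2.0):
        """Frequency-Driven Constraint Induction (illustrative thresholds).

        Under-attested biphones (O/E < low) become markedness constraints
        (*xy), which favor a word boundary; over-attested biphones
        (O/E > high) become contiguity constraints (Contig-IO(xy)),
        which resist a boundary."""
        unigrams, bigrams = Counter(), Counter()
        for utt in utterances:
            unigrams.update(utt)
            bigrams.update(zip(utt, utt[1:]))
        n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
        markedness, contiguity = set(), set()
        for (x, y), observed in bigrams.items():
            expected = (unigrams[x] / n_uni) * (unigrams[y] / n_uni) * n_bi
            ratio = observed / expected
            if ratio < low:
                markedness.add((x, y))
            elif ratio > high:
                contiguity.add((x, y))
        return markedness, contiguity

    def one_feature_apart(a, b):
        # A symmetric difference of size 2 means exactly one feature differs.
        return len(FEATURES[a] ^ FEATURES[b]) == 2

    def single_feature_abstraction(constraints):
        """Single-Feature Abstraction: two biphone constraints whose segments
        differ in one position by a single feature generalize to a constraint
        over the natural class of segments sharing the remaining features."""
        generalized = set()
        for (x1, y1), (x2, y2) in combinations(constraints, 2):
            if y1 == y2 and one_feature_apart(x1, x2):
                shared = FEATURES[x1] & FEATURES[x2]
                cls = frozenset(s for s, f in FEATURES.items() if shared <= f)
                generalized.add((cls, frozenset({y1})))
            elif x1 == x2 and one_feature_apart(y1, y2):
                shared = FEATURES[y1] & FEATURES[y2]
                cls = frozenset(s for s, f in FEATURES.items() if shared <= f)
                generalized.add((frozenset({x1}), cls))
        return generalized

    def segment(utt, markedness, contiguity, generalized):
        """Insert a boundary where a (specific or generalized) markedness
        constraint applies and no contiguity constraint protects the biphone;
        a crude stand-in for Optimality-Theoretic conflict resolution."""
        out = [utt[0]]
        for x, y in zip(utt, utt[1:]):
            marked = (x, y) in markedness or any(
                x in cx and y in cy for cx, cy in generalized)
            if marked and (x, y) not in contiguity:
                out.append("|")
            out.append(y)
        return "".join(out)

    # Toy usage on transcribed, unsegmented utterances.
    corpus = ["abatipa", "tibadapa", "pitibada", "badatipa"]
    mk, ct = induce_constraints(corpus)
    gen = single_feature_abstraction(mk)
    print(segment("tibapida", mk, ct, gen))

On realistic corpora, StaGe's actual statistic, thresholds, feature system, and constraint ranking differ from the above; the sketch only shows how statistical induction feeds feature-based generalization, and how the resulting constraints compete when boundaries are inserted.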