Identifying an augmentative and alternative communication (AAC) method for children with autism spectrum disorder (ASD) might be informed by comparing their performance with, and preference for, a range of communication modalities. Towards this end, the present study involved two children with ASD who were taught to request the continuation of toy play by: (a) signing MORE, (b) exchanging a picture card representing MORE, and (c) touching a MORE symbol on the screen of a speech-generating device. The children were also given opportunities to choose among the three modalities to identify their preferred method of communication. Both children performed better with picture exchange and the speech-generating device than with manual signing, but showed variable performance during follow-up. Both children more often chose the speech-generating device, suggesting a preference for that modality. We conclude that concurrent intervention across several communication methods can generate data to inform the selection of an AAC modality.
Perceptual learning paradigms involving written feedback appear to be a viable clinical tool to reduce the intelligibility burden of dysarthria. The underlying theoretical assumption is that pairing the degraded acoustics with the intended lexical targets facilitates a remapping of existing mental representations in the lexicon. This study investigated whether ties to mental representations can be strengthened by way of a somatosensory motor trace. Following an intelligibility pretest, 100 participants were assigned to 1 of 5 experimental groups. The control group received no training, but the other 4 groups received training with dysarthric speech under conditions involving a unique combination of auditory targets, written feedback, and/or a vocal imitation task. All participants then completed an intelligibility posttest. Training improved intelligibility of dysarthric speech, with the largest improvements observed when the auditory targets were accompanied by both written feedback and an imitation task. Further, a significant relationship between intelligibility improvement and imitation accuracy was identified. This study suggests that somatosensory information can strengthen the activation of speech sound maps of dysarthric speech. The findings, therefore, implicate a bidirectional relationship between speech perception and speech production as well as advance our understanding of the mechanisms that underlie perceptual learning of degraded speech.
Objective: To compare how quickly children with autism spectrum disorder (ASD) acquired manual signs, picture exchange, and an iPad®/iPod®-based speech-generating device (SGD), and to determine whether children showed a preference for one of these options. Method: Nine children with ASD and limited communication skills received intervention to teach requesting preferred stimuli using manual signs, picture exchange, and an SGD. Intervention was evaluated in a non-concurrent multiple-baseline across participants and alternating treatments design. Results: Five children learned all three systems to criterion. Four children required fewer sessions to learn the SGD compared to manual signs and picture exchange. Eight children demonstrated a preference for the SGD. Conclusion: The results support previous studies demonstrating that children with ASD can learn manual signs, picture exchange, and an iPad®/iPod®-based SGD to request preferred stimuli. Most children showed a preference for the SGD. For some children, acquisition may be quicker when learning a preferred option.
Intelligibility improvements immediately following perceptual training with dysarthric speech using lexical feedback are comparable to those observed when training uses somatosensory feedback (Borrie & Schäfer, 2015). In this study, we investigated whether these lexical and somatosensory guided improvements in listener intelligibility of dysarthric speech remain comparable and stable over the course of 1 month. Following an intelligibility pretest, 60 participants were trained with dysarthric speech stimuli under one of three conditions: lexical feedback, somatosensory feedback, or no training (control). Participants then completed a series of intelligibility posttests, which took place immediately after training (immediate posttest), 1 week after training (1-week posttest), and 1 month after training (1-month posttest). As in our previous study, intelligibility improvements at immediate posttest were equivalent between lexical and somatosensory feedback conditions. Condition differences, however, emerged over time. Improvements guided by lexical feedback deteriorated over the month, whereas those guided by somatosensory feedback remained robust. Somatosensory feedback, internally generated by vocal imitation, may be required to effect long-term perceptual gain in processing dysarthric speech. Findings are discussed in relation to underlying learning mechanisms and offer insight into how externally and internally generated feedback may differentially affect perceptual learning of disordered speech.
Enterprises in the organic food sector contribute in various ways to sustainable development, wealth, and quality of life in their region. We present a preliminary description and evaluation of these multi-dimensional effects, based on telephone interviews with directors of 58 enterprises, and of the institutional framework conditions of the organic food sector in the region of Berlin and Brandenburg in north-east Germany.
The social validity of different communication modalities is a potentially important variable to consider when designing augmentative and alternative communication (AAC) interventions. To assess the social validity of three AAC modes (i.e., manual signing, picture exchange, and an iPad®-based speech-generating device), we asked 59 undergraduate students (pre-service teachers) and 43 teachers to watch a video explaining each mode. They were then asked to nominate the mode they perceived to be easiest to learn, as well as the most intelligible, effective, and preferred. Participants were also asked to list the main reasons for their nominations and to report on their experience with each modality. Most participants (68–86%) nominated the iPad-based speech-generating device (SGD) as easiest to learn, as well as the most intelligible, effective, and preferred. This device was perceived to be easy to understand and use and to involve familiar and socially acceptable technology. Results suggest that iPad-based SGDs were perceived as more socially valid among this sample of teachers and undergraduate students. Information of this type may be relevant when designing AAC supports for people who use AAC and for their current and potential future communication partners.
The purpose of this study was to examine stuttering behavior in German-English bilingual people who stutter (PWS), with particular reference to the frequency of stuttering on content and function words. Fifteen bilingual PWS who spoke German as their first language (L1) and English as a second language (L2) were sampled. Conversational speech was sampled in each language and analyzed for the percentage of overall stuttering-like disfluencies and the distribution of stuttering on content and function words. Significantly more stuttering was found to occur in L2 compared to L1. Stuttering occurred significantly more often on content words compared to function words in L1. No significant difference between stuttering on function and content words was observed in L2. Comparison across L1 and L2 revealed a significantly greater percentage of stuttering on function words in L2 compared to L1, and a significantly lower percentage of stuttering on content words in L2 compared to L1. The characteristics of stuttering in L2 could not be differentiated on the basis of an L2 proficiency measure. The differences observed in the amount of stuttering between L1 and L2 suggest that stuttering in bilingual speakers is closely related to language dominance, with features of stuttering in L2 indicative of a less developed language system.