Much of our daily communication occurs in the presence of background noise, compromising our ability to hear. While understanding speech in noise is a challenge for everyone, it becomes increasingly difficult as we age. Although aging is generally accompanied by hearing loss, this perceptual decline cannot fully account for the difficulties older adults experience when hearing in noise. Decreased cognitive skills concurrent with reduced perceptual acuity are thought to contribute to the difficulty older adults experience understanding speech in noise. Given that musical experience positively impacts speech perception in noise in young adults (ages 18-30), we asked whether musical experience benefits an older cohort of musicians (ages 45-65), potentially offsetting the age-related decline in speech-in-noise perceptual abilities and associated cognitive function (i.e., working memory). Consistent with performance in young adults, older musicians demonstrated enhanced speech-in-noise perception relative to nonmusicians, along with greater auditory, but not visual, working memory capacity. By demonstrating that speech-in-noise perception and related cognitive function are enhanced in older musicians, our results imply that musical training may reduce the impact of age-related auditory decline.
Figure: Developmental trajectory of the auditory brainstem response (ABR) from infancy to senescence. For each age group, the average ABR wave V latency is plotted (error bars: ±1 standard error). Following two years of rapid maturation, wave V latencies match those of adults before continuing to get faster for a few more years. (Adapted from Cereb Cortex 2013; doi: 10.1093/cercor/bht311.)

Auditory development is a complex and protracted process. While the cochlea is mature at birth, and frequency resolution is mature by 6 months of age, temporal processing likely remains in flux until adolescence. Understanding this time course can aid in diagnosing listening difficulties that are often characterized as maturational delays and in tracking the development of children deprived of auditory experience.

Figure: Nina Kraus, PhD

One lesson from the textbooks is that the auditory brainstem is mature by about age 2, when auditory brainstem responses (ABRs) appear adultlike with respect to latency, amplitude, and morphology.

Figure: Travis White-Schwoch

Given what we know about the complex development of auditory behaviors, however, we wondered if there was more to the story.

ABRs ACROSS THE LIFESPAN

Erika Skoe and colleagues addressed this question in a study of 586 listeners ages 3 months to 72 years (Cereb Cortex 2013; doi: 10.1093/cercor/bht311; http://cercor.oxfordjournals.org/content/early/2013/12/21/cercor.bht311.full). The researchers collected ABRs to a suprathreshold click stimulus and compared wave V latencies across the lifespan. What emerged was indeed complex and protracted. From birth to age 2, latencies got faster by nearly a millisecond. While the 2-year-old latencies certainly matched those of the adults, a remarkable pattern appeared that suggested an overshoot: from ages 3 to 5, the latencies got even quicker, a trend that tapered off around age 8. Consistent with previous evidence, there was a gradual slowing of wave V latency after age 40 that continued into senescence. These results demonstrated that auditory brainstem development is far more enduring and involved than previously thought.

In a follow-up study, Emily Spitzer and colleagues took a fine-grained lens to the preschool ABR (J Am Acad Audiol 2015;26[1]:30-35; http://aaa.publisher.ingentaconnect.com/content/aaa/jaaa/2015/00000026/00000001/art00004). Using identical methodology, the researchers evaluated developmental changes between ages 3 and 5 in 71 typically developing preschoolers. Consistent with the lifespan comparison, they found a systematic pattern whereby wave V latencies got faster and faster as children got older.

ROLE OF AUDITORY EXPERIENCE

What mechanisms underlie this prolonged development? One potential cause is that myelination increases in the auditory pathway, speeding up neural conduction time. The faster latencies could also be due to a proliferation of synapses that are slowly pruned away as listeners enter puberty. An alternate view considers the role of experience and top-down modulation of auditory processing. The auditory brainstem is subject to a massive series of corticofugal projections that refine its anatomy and physiology. Experience shapes auditory brain circuits through these top-down pathways, and ABRs reflect the fine-tuning (Nat Rev Neurosci 2010;11[8]:599-605; http://www.nature.com/nrn/journal/v11/n8/abs/nrn2882.html).
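For readers who want a concrete sense of the measurement being compared across ages, here is a minimal Python sketch that reads a wave V latency off an averaged click ABR. The sampling rate, the 5-9 ms search window, and the toy waveform are illustrative assumptions, not parameters from the studies described above.

```python
# Minimal sketch: pick a wave V latency from an averaged click ABR.
# The sampling rate, search window, and waveform are illustrative only.
import numpy as np
from scipy.signal import find_peaks

fs = 20_000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.012, 1 / fs)               # 12-ms post-stimulus epoch
abr = np.exp(-((t - 0.0062) / 0.0005) ** 2)   # toy average with a peak near 6.2 ms

# Search a plausible wave V window (5-9 ms) and take the largest positive peak.
window = (t >= 0.005) & (t <= 0.009)
peaks, props = find_peaks(abr[window], height=0)
best = peaks[np.argmax(props["peak_heights"])]
print(f"Wave V latency: {t[window][best] * 1000:.2f} ms")
```

In a lifespan study like the one described above, a latency of this kind would be picked from each listener's averaged response and then compared across age groups.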
Most of the research concerning auditory experience focuses on special populations, such as musicians or speakers of two languages, but nothing is as powerful as our daily experiences in and with sound. Think about how important it is to provide children access to sound to bootstrap language development.

One line of evidence that ABR maturation could, in part, be experience dependent comes from studies of children with cochlear implants. Karen Gordon and colleagues investigated the electrically evoked ABR with respect to age at implantation. Latencies were shown to decrease as a function of auditory experience (Ear Hear 2003;24[6]:485-500; http://journals.lww.com/ear-hearing/pages/articleviewer.aspx?year=2003&issue=12000&article=00003&type=abstract). Remarkably, interaural latency differences are observed in children who received bilateral cochlear implants at different times (J Neurosci 2012;32[12]:4212-4223; http://www.jneurosci.org/content/32/12/4212.full). That is, the side that was implanted earlier in life has faster latencies, even a year or two after getting a bilateral implant. This discrepancy is not observed in children who receive bilateral implants simultaneously.

Taken together, these studies demonstrate that auditory brainstem development is a nuanced process. Even as robust and reliable a measure as the ABR can undergo a neurodevelopmental push and pull. During childhood, the ABR may only be adultlike transiently before undergoing a second period of developmental flux. These studies also show that auditory experience can be a major factor affecting these tried-and-true metrics. Thus, when interpreting ABRs, we need to keep in mind the influences of both ongoing maturation and the listener's auditory experiences.

Of course, the click ABR is just the tip of the maturational iceberg. Neurophysiological responses to speech sounds provide far greater insight into auditory processing, its development, and the role of experience, especially because there are so many more aspects of the responses than a few peak latencies, and each has its own distinct course of maturation. Stay tuned!
Human hearing depends on a combination of cognitive and sensory processes that function by means of an interactive circuitry of bottom-up and top-down neural pathways, extending from the cochlea to the cortex and back again. Given that similar neural pathways are recruited to process sounds related to both music and language, it is not surprising that the auditory expertise gained over years of consistent music practice fine-tunes the human auditory system in a comprehensive fashion, strengthening neurobiological and cognitive underpinnings of both music and speech processing. In this review, we argue not only that common neural mechanisms for speech and music exist, but also that experience in music leads to enhancements in sensory and cognitive contributors to speech processing. Of specific interest is the potential for music training to bolster neural mechanisms that undergird language-related skills, such as reading and hearing speech in background noise, which are critical to academic progress, emotional health, and vocational success.
Diagnosis, assessment, and management of sports-related concussion require a multimodal approach. Yet, currently, an objective assessment of auditory processing is not included. The auditory system is uniquely complex, relying on exquisite temporal precision to integrate signals across many synapses, connected by long axons. Given this complexity and precision, together with the fact that axons are highly susceptible to damage from mechanical force, we hypothesize that auditory processing is susceptible to concussive injury. We measured the frequency-following response (FFR), a scalp-recorded evoked potential that assesses processing of complex sound features, including pitch and phonetic identity. FFRs were obtained from male Division I collegiate football players prior to contact practice to determine a preseason baseline of auditory processing abilities, and again after sustaining a sports-related concussion. We predicted that concussion would decrease pitch and phonetic processing relative to the student-athlete's preseason baseline. We found that pitch and phonetic encoding was smaller post-concussion. Student-athletes who sustained a second concussion showed similar declines after each injury. Auditory processing should be included in the multimodal assessment of sports-related concussion. Future studies that extend this work to other sports, other injuries (e.g., blast exposure), and to female athletes are needed.
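As a rough illustration of how pitch encoding in an FFR is often quantified, the sketch below measures spectral amplitude around the fundamental frequency of a simulated response. The sampling rate, the ~100-Hz fundamental, and the analysis band are assumptions for illustration, not the parameters used in this study.

```python
# Minimal sketch: quantify F0 (pitch) encoding in a simulated FFR.
# Sampling rate, fundamental frequency, and analysis band are assumed.
import numpy as np

fs = 16_000
t = np.arange(0, 0.2, 1 / fs)                           # 200-ms response epoch
rng = np.random.default_rng(0)
ffr = 0.5 * np.sin(2 * np.pi * 100 * t) + 0.05 * rng.standard_normal(t.size)

# Amplitude spectrum of the windowed response.
spectrum = np.abs(np.fft.rfft(ffr * np.hanning(t.size))) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Pitch encoding: mean spectral amplitude in a band around the fundamental.
f0_band = (freqs >= 90) & (freqs <= 110)
print(f"F0 (90-110 Hz) amplitude: {spectrum[f0_band].mean():.4f}")
```

A pre/post comparison in the spirit of the study would contrast such an amplitude between the baseline and post-concussion recordings.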
Rhythmic expertise is a multidimensional skill set with clusters of distinct rhythmic abilities. For example, the ability to clap in time with feedback relates extensively to distinct beat- and pattern-based rhythmic skills in school-age children. In this study, we aimed to determine whether clapping in time would relate to both beat- and pattern-based rhythmic tasks among adolescents and young adults. We assessed our participants on seven tasks: two beat-based tasks (Metronome and Tempo adaptation), two pattern-based tasks (Reproducing rhythmic patterns and Remembering rhythmic patterns), a self-paced drumming task, a task of drumming to a music beat, and a clapping-in-time task. We found that clapping in time correlated with all other rhythmic tasks, even though some were not mutually related to one another. These results provide insight into the taxonomy of rhythmic skills and support the practice of clapping in time with feedback as a means of developing broad-spectrum rhythmic abilities.
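To make the analysis concrete, here is a minimal sketch of correlating clapping-in-time performance with the other rhythmic tasks. The variable names, sample size, and simulated scores are placeholders, not the study's data.

```python
# Minimal sketch: correlate clapping-in-time scores with other rhythmic tasks.
# All scores below are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 60                                         # number of participants (made up)
clap = rng.standard_normal(n)

scores = pd.DataFrame({
    "clapping_in_time": clap,
    "metronome": 0.6 * clap + rng.standard_normal(n),
    "tempo_adaptation": 0.5 * clap + rng.standard_normal(n),
    "reproduce_patterns": 0.4 * clap + rng.standard_normal(n),
    "remember_patterns": 0.4 * clap + rng.standard_normal(n),
    "self_paced_drumming": 0.3 * clap + rng.standard_normal(n),
    "drum_to_music": 0.5 * clap + rng.standard_normal(n),
})

# Pearson correlations of clapping in time with every other task.
print(scores.corr()["clapping_in_time"].drop("clapping_in_time").round(2))
```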
In September, the Kennedy Center hosted the second Sound Health concert and workshop series (http://bit.ly/2Iaf8qn). This initiative was organized in partnership with the National Institutes of Health (NIH) to connect music and health. It is the brainchild of legendary opera singer Renée Fleming and NIH Director Francis Collins, MD, PhD, and included workshops, public lectures, and concerts. I have had the privilege of participating in this initiative since its inaugural workshop last year.

It began with a simple question: Are there connections between music and health? The answer is a resounding yes: the connection encompasses mental health, education, brain development, pain, and more. The initiative has sparked similarly diverse interest among musicians, drawing perspectives from leading performers in the classical, jazz, world, folk, rock, and rap genres. In fact, the NIH just announced that it will fund music research for the first time.

At the workshop, my focus was to emphasize the benefits of making music for brain health (http://bit.ly/2I9pCWM). Making music is arguably one of the healthiest things you can do for your brain (http://bit.ly/2IbAGCP). It tunes your listening skills, sharpens your mental acuity, and boosts language skills. In children, making music speeds up brain development. In older adults, making music mitigates age-related declines in sound processing.

A common argument against daily music education is that it takes time away from teaching fundamentals such as reading and math. But evidence shows that music training actually improves children's reading and math skills, suggesting that it can pay dividends in more traditional academic domains. I recognize that schools have limited resources and competing priorities. It's important to remember that their core mission is to promote child development, and music education does just that.

My lab's research has recently grown to study sound processing in athletes, with an emphasis on understanding concussions (http://bit.ly/2I9bYD7). I see a strong parallel between physical education and music education. But as a society, we emphasize neither for the typical child. My uncle, Hans Kraus, was a physician on the President's Council on Physical Fitness under President John F. Kennedy, which promoted fitness standards for American schoolchildren. Today, I fear there is a tendency for only the kids who love sports or make the varsity teams to get the best coaching, even though all children would benefit from being strong and flexible. I see music education in the same way: All children should get high-quality daily training. You don't have to become Mozart or Mickey Hart to have fun, make friends, and boost your brain function.

Speaking as a scientist, some of the best things you can do for your brain are to make music and be physically active, and these would benefit every child.
Event-related potentials (ERPs) were obtained to synthesized speech stimuli in 16 school-aged children (7-11 years) and compared to responses in 10 adults. P1, N1, and P2 event-related potentials were elicited by the phoneme /ga/. The mismatch negativity (MMN) was elicited by variants of /da/ and /ga/, which differ in the onset frequency of the second and third formant transitions. In general, the well-defined N1/P2 complex characteristic of the adult response was not found in children. Waves P1 and N1 had longer peak latencies in children than in adults. Wave P2 amplitude was smaller in children than in adults. In contrast to the often poorly delineated earlier cortical potentials, the MMN was well defined in children. Significant MMNs were obtained in all subjects tested. MMN magnitude (peak amplitude and area) was significantly larger in the children. No significant differences were found in peak latency and duration of the MMN in children compared to the adult response. Another negative wave occurring at 400 msec was also observed in response to the deviant stimuli. This negative wave occurred at a similar latency in adults and children and was significantly larger and more robust in children. Results support the view that development of ERPs does not involve a hierarchical process with respect to latency. That is, earlier occurring waves do not necessarily mature before later occurring waves. The latencies of P1, N1, and P2 and the overall morphology of these waves may provide a measure of maturation of central pathways. The early development of the MMN, its apparent robustness in school-aged children, and its reflection of the processing of acoustic differences in speech stimuli suggest its possible use in the assessment of central auditory function.
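As a rough illustration of how an MMN and the magnitude metrics mentioned above can be derived, the sketch below forms a deviant-minus-standard difference wave and summarizes its peak amplitude and area in a fixed latency window. The epoch length, latency window, and simulated averaged responses are assumptions for illustration only.

```python
# Minimal sketch: derive an MMN difference wave and summarize its magnitude.
# The epoch, latency window, and simulated averages are illustrative only.
import numpy as np

fs = 500                                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)                  # 500-ms post-stimulus epoch

standard = 2.0 * np.sin(2 * np.pi * 4 * t)                    # toy averaged ERP to the standard
deviant = standard - 1.5 * np.exp(-((t - 0.2) / 0.03) ** 2)   # extra negativity near 200 ms

difference = deviant - standard                # MMN difference wave

# Magnitude in a 150-300 ms window: most negative peak and negative-going area.
win = (t >= 0.15) & (t <= 0.30)
peak_amplitude = difference[win].min()
area = difference[win].clip(max=0).sum() / fs  # rectangular approximation of area
print(f"MMN peak: {peak_amplitude:.2f} uV, area: {area:.4f} uV*s")
```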