Auditory Figure-Ground Segregation Using a Complex Stochastic Stimulus

2012 
In contrast to the complexity of natural acoustic scenes, most studies of auditory segregation have used relatively simple signals. We developed a new stimulus, the "stochastic figure-ground" (SFG) stimulus (Teki et al., 2011), which incorporates stochastic variation in frequency-time space that is not a feature of the predictable sequences used previously. Stimuli consist of a sequence of 50-ms chords, each containing a random number of pure-tone components. Occasionally, a subset of the tonal components repeats in frequency over several consecutive chords, producing a spontaneous percept of a "figure" popping out of a background of varying chords. Our behavioral results demonstrate that human listeners are remarkably sensitive to the emergence of such figures (Experiment 1).

To characterize the brain mechanisms that underlie segregation of such a stochastic stimulus, we investigated the degree to which behavior is affected by systematic stimulus manipulations. In Experiment 2, we demonstrate that figure detection is unaffected when the duration of each chord is reduced to 25 ms, suggesting that detection depends on the number of repeating chords and not on the absolute duration of the figure. In Experiment 3, performance was unchanged when 50-ms bursts of white noise were inserted between successive chords. In Experiment 4, figures were "ramped": successive figure components did not repeat but increased in frequency in steps of 2I or 5I, where I = 1/24 octave is the resolution of our frequency bank. Sensitivity decreased, although, remarkably, listeners could still perform the task. Experiment 5 tested figure detection with the "background-only" chords that preceded and followed the figure removed; this had no significant effect on performance.

Overall, the notable sensitivity exhibited by listeners cannot be explained by prevailing adaptation-based models of segregation. Using computational modeling, we show that the behavioral data are consistent with the temporal coherence model of auditory scene analysis (Shamma et al., 2011).
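To make the stimulus construction concrete, below is a minimal sketch of an SFG generator. Only the 50-ms chord duration and the 1/24-octave frequency resolution come from the text above; the sampling rate, tone-bank frequency range, number of chords, number of background components per chord, and figure size/position are illustrative assumptions, not parameters reported in the abstract.

```python
"""Minimal sketch of a stochastic figure-ground (SFG) stimulus generator."""
import numpy as np

FS = 44100                 # sampling rate (Hz); assumed
CHORD_DUR = 0.050          # 50-ms chords, as in the abstract
I = 2 ** (1 / 24)          # frequency-bank resolution: 1/24-octave steps
FREQ_BANK = 179 * I ** np.arange(120)  # ~179 Hz to ~5.7 kHz grid; assumed range


def chord(freqs, dur=CHORD_DUR, fs=FS):
    """Sum of pure tones at the given frequencies, with 5-ms sin^2 ramps."""
    t = np.arange(int(dur * fs)) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    ramp = int(0.005 * fs)
    env = np.ones_like(t)
    env[:ramp] = np.sin(np.linspace(0, np.pi / 2, ramp)) ** 2
    env[-ramp:] = env[:ramp][::-1]
    # Divide by the component count for rough level normalization.
    return x * env / max(len(freqs), 1)


def sfg_stimulus(n_chords=40, fig_size=4, fig_len=7, rng=None):
    """Random chords; midway, `fig_size` components repeat for `fig_len` chords."""
    rng = np.random.default_rng(rng)
    fig_onset = (n_chords - fig_len) // 2
    fig_freqs = rng.choice(FREQ_BANK, size=fig_size, replace=False)
    chords = []
    for k in range(n_chords):
        n_bg = rng.integers(5, 16)  # random number of background components
        freqs = list(rng.choice(FREQ_BANK, size=n_bg, replace=False))
        if fig_onset <= k < fig_onset + fig_len:
            freqs += list(fig_freqs)  # repeating subset -> the "figure"
        chords.append(chord(freqs))
    return np.concatenate(chords)


signal = sfg_stimulus(rng=0)
```

Setting fig_len=0 yields a background-only stimulus, the natural baseline against which figure detection is measured.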
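The temporal coherence account holds that frequency channels whose activity co-varies over time bind into a single perceptual stream, so the repeating figure components form an unusually coherent channel subset against the incoherent background. The sketch below illustrates that idea with a short-time spectrogram and a pairwise channel-correlation score; the decomposition, band limits, and summary statistic are simplifying assumptions of this sketch and not the actual model of Shamma et al.

```python
"""Sketch of a temporal-coherence score, in the spirit of Shamma et al. (2011)."""
import numpy as np
from scipy.signal import stft


def coherence_score(signal, fs=44100, chord_dur=0.050):
    # Time-frequency decomposition: one analysis frame per chord (assumption).
    nper = int(chord_dur * fs)
    f, t, Z = stft(signal, fs=fs, nperseg=nper, noverlap=0)
    env = np.abs(Z)                    # channel envelopes across chord frames
    env = env[(f > 100) & (f < 6000)]  # keep the audible band (assumption)
    # Correlate every pair of channel envelopes over time: repeating figure
    # components yield strongly correlated (temporally coherent) channels.
    env = env - env.mean(axis=1, keepdims=True)
    norm = np.linalg.norm(env, axis=1, keepdims=True) + 1e-12
    C = (env / norm) @ (env / norm).T  # channel-by-channel coherence matrix
    # Summary statistic: mean of the largest off-diagonal correlations; a
    # figure raises this score relative to a background-only stimulus.
    np.fill_diagonal(C, -np.inf)
    return np.mean(np.sort(C, axis=None)[-20:])
```

Comparing this score for matched stimuli with and without a figure (e.g., sfg_stimulus above versus a fig_len=0 version) approximates the detection contrast probed behaviorally.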
References:
Teki, S., Chait, M., Kumar, S., von Kriegstein, K., and Griffiths, T. D. (2011). Brain bases for auditory stimulus-driven figure-ground segregation. J. Neurosci. 31, 164-171.
Shamma, S. A., Elhilali, M., and Micheyl, C. (2011). Temporal coherence and attention in auditory scene analysis. Trends Neurosci. 34, 114-123.