The relation between binaural and monaural loudness was measured by magnitude estimation and magnitude production for a 1000-Hz tone and for a white noise. Four types of stimuli—monaural and binaural tone, monaural and binaural noise—were presented together, at eight levels, in mixed, randomly selected sequences. Subjects were instructed to rate or adjust the four stimuli according to a single loudness scale. The loudness of the monaural and binaural tones was a power function of sound pressure with an exponent near 0.5. The loudness of the noise increased more rapidly at low levels than that of the tone; at high levels, it increased more slowly. The bow shape of the noise function would be predicted from loudness matches between wide-band and narrow-band stimuli. A binaural sound was 1.3 to 1.7 times louder than a monaural sound at the same SPL. The results of these direct loudness estimations agreed almost perfectly with earlier results from another group of subjects who made loudness matches between binaural and monaural stimuli. [Research supported by a grant from the National Institute of Neurological Diseases and Blindness.]
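As a rough illustration of the quantities above, the following Python sketch evaluates a Stevens-type power law with a pressure exponent near 0.5 and applies a binaural-to-monaural loudness ratio in the reported 1.3 to 1.7 range. The constant of proportionality and the midpoint ratio of 1.5 are illustrative assumptions, not values taken from the study.

```python
def tone_loudness(spl_db, exponent=0.5, k=1.0):
    """Loudness of a monaural 1000-Hz tone as a power function of sound pressure."""
    pressure = 10 ** (spl_db / 20.0)  # pressure re: an arbitrary reference
    return k * pressure ** exponent

def binaural_loudness(spl_db, ratio=1.5):
    """Binaural tone taken as 1.3-1.7 times louder; 1.5 is an assumed midpoint."""
    return ratio * tone_loudness(spl_db)

for spl in (40, 60, 80):
    print(spl, round(tone_loudness(spl), 2), round(binaural_loudness(spl), 2))
```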
Simple loudness adaptation is the decrease in loudness that takes place when a continuous sound is presented alone for a period of time. Simple adaptation normally occurs only when a sound is soft to begin with, no more than 30 dB above threshold; except for some persons with a retrocochlear lesion, sounds above 30 dB SL do not diminish in loudness over time. However, adaptation can be induced in at least two ways: (1) A steady sound to one ear, presented together with an intermittent sound to the contralateral ear, decreases in loudness by 50-60% within 3 min. (2) An otherwise steady sound that is intermittently increased in level by at least 5 dB becomes softer during its weaker periods. When, for example, a 40-dB tone is increased every 20 s to 60 dB for 15 s, its loudness decreases by about 50% within 3 min. We report measurements of both simple and induced adaptation on 10 persons listening to a 1000-Hz tone via earphones or from a loudspeaker. The results provide an overview of both types of adaptation. They also permit a correlational analysis that reveals some of the similarities and differences between the two kinds of adaptation.
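For concreteness, here is a small Python sketch of the level schedule in example (2). The 20-s cycle, 15-s boost, and 40- and 60-dB levels come from the abstract; placing the boost at the start of each cycle is an assumption made only for illustration.

```python
def inducer_level(t_s, base_db=40, boost_db=60, period_s=20, boost_s=15):
    """Level (dB) of the otherwise steady tone at time t_s (seconds).

    The tone sits at base_db and is raised to boost_db for boost_s seconds
    out of every period_s-second cycle (boost assumed at the cycle start).
    """
    return boost_db if (t_s % period_s) < boost_s else base_db

# First minute of the schedule, sampled every 5 s:
print([inducer_level(t) for t in range(0, 60, 5)])
```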
Two opposite sequential loudness effects concern the effect of a stronger Tone 1 on the loudness of a subsequent weaker Tone 2, as assessed by loudness matches with Tone 3. Loudness enhancement is reported when Tone 1 precedes Tone 2 by 50 to 100 ms. Loudness recalibration (or induced loudness reduction) is obtained for delays of about 1 s. This letter argues that what appears as an enhancement of Tone 2’s loudness is, in fact, an induced reduction of Tone 3’s loudness, which occurs because Tones 1 and 3 are at the same frequency. Preliminary experiments support this analysis.
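A toy calculation may make the argument concrete. Suppose loudness grows as a power of sound pressure (the exponent of 0.6 below is an illustrative value); if Tone 1 induces a reduction factor r on Tone 3's loudness, then the SPL at which Tone 3 matches Tone 2 must rise by a fixed number of decibels, which is exactly what would be scored as an enhancement of Tone 2.

```python
import math

def apparent_enhancement_db(reduction=0.7, exponent=0.6):
    """Rise in Tone 3's matching SPL when its loudness is scaled by `reduction`.

    With loudness L ~ p**exponent and p ~ 10**(spl/20), solving
    reduction * L(spl3) = L(spl2) for spl3 gives the difference below.
    """
    return -20 * math.log10(reduction) / exponent

print(apparent_enhancement_db())  # ~5.2 dB of apparent "enhancement"
```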
An intermittent sound to one ear often causes a large decrease in the loudness of a steady sound to the other ear, a decrease that does not disappear upon termination of the intermittent sound. To study this phenomenon, we measured the loudness of a 1000-Hz tone to the right ear before, while, and after it was accompanied by an intermittent (500 ms on, 500 ms off) 1000-Hz tone to the left ear. Both tones were 60 dB SPL. Ten listeners assigned a number every 20 s to represent the loudness of the steady tone, which lasted 5 min. During the 160 s that the intermittent tone was on, the loudness of the steady tone declined by 60% (equivalent to 13 dB). After cessation of the intermittent tone, loudness increased slowly over 100 s but only to about 35% of its preadaptation value. To return loudness to its full value, the steady tone had to be interrupted for at least 20 s after cessation of the intermittent tone. Apparently, an intermittent tone unleashes a long-lasting suppression of the loudness of a steady tone in the contralateral ear. [Work supported by NIH.]
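The stated equivalence between a 60% loudness decline and 13 dB can be checked with a one-line conversion, assuming the usual Stevens power law for a 1000-Hz tone; the pressure exponent of 0.6 is an assumption, not a value given in the abstract.

```python
import math

def loudness_ratio_to_db(ratio, exponent=0.6):
    """SPL change (dB) equivalent to a given loudness ratio under L ~ p**exponent."""
    # L ~ 10**(exponent * spl / 20), so the dB change is 20*log10(ratio)/exponent.
    return 20 * math.log10(ratio) / exponent

print(loudness_ratio_to_db(0.40))  # a 60% decline -> about -13.3 dB
```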
The question was asked whether briefly flashed line segments are easier to detect when presented at an expected, rather than an unexpected, orientation. Detection rates were measured in a two-interval forced choice (2IFC) paradigm that did not require the subject to identify the orientation of the line segment, only to detect its presence. The 2IFC paradigm was used to rule out bias or criterion effects. Subjects were led to expect lines in a particular or primary orientation by being presented with lines of that orientation as cues before every trial, and by being tested with only that orientation during practice. Lines of the orthogonal, probe orientation replaced the primary orientation on 25% of experimental trials. When the stimulus location was known in advance, lines of the primary orientation were detected more accurately than were probe lines, but when stimulus location was not known, detection rates were equal. Detection rates were also equal when subjects were informed of the probe at the end of the practice period, so that both orientations were expected; hence the subjects' expectations, not the probability of stimulus occurrence, are necessary for the effect to occur. Thus expecting a line of a particular orientation at a particular location facilitates its detection.
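The reason 2IFC rules out criterion effects can be seen in a short simulation: the observer simply chooses the interval with the larger internal response, so no response criterion enters the decision. The equal-variance Gaussian model and the d' value below are illustrative assumptions.

```python
import random

def twoifc_percent_correct(d_prime=1.0, n_trials=100_000, rng=random):
    """Proportion correct when the observer picks the larger of two samples."""
    correct = 0
    for _ in range(n_trials):
        signal = rng.gauss(d_prime, 1.0)  # interval containing the line
        blank = rng.gauss(0.0, 1.0)       # empty interval
        correct += signal > blank
    return correct / n_trials

print(twoifc_percent_correct())  # ~0.76 for d' = 1, regardless of any criterion
```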
A new and powerful procedure for determining frequency analysis in the auditory system, as evidenced by the critical band, is described. The onset-time difference, ΔT, needed to lateralize 30-msec tone bursts toward the leading ear was measured as a function of the frequency difference, ΔF, between the burst in one ear and the burst in the other ear. When ΔF was less than the critical band, threshold ΔT was constant at 100 μsec or less, depending on center frequency; beyond the critical band, ΔT increased with ΔF. These dichotically measured critical bandwidths increased from 110 Hz at a center frequency of 500 Hz to 1100 Hz at a center frequency of 6000 Hz. They were unaffected by varying signal level from 25 to 80 dB or signal duration from 10 to 300 msec. The same critical-band values have been measured with monaural stimuli in loudness summation, masking, detection, phase perception, consonance, and so forth.
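The reported pattern can be summarized by a simple piecewise rule, sketched below. The flat 100-μsec ceiling and the critical bandwidths come from the abstract; the linear growth beyond the critical band is only an assumed shape for illustration.

```python
def lateralization_threshold_usec(delta_f_hz, critical_band_hz,
                                  flat_usec=100.0, slope_usec_per_hz=0.5):
    """Threshold onset-time difference vs. interaural frequency difference.

    Constant within the critical band; assumed to grow linearly beyond it.
    """
    if delta_f_hz <= critical_band_hz:
        return flat_usec
    return flat_usec + slope_usec_per_hz * (delta_f_hz - critical_band_hz)

# Reported dichotic bandwidths: 110 Hz at 500 Hz CF, up to 1100 Hz at 6000 Hz CF.
print(lateralization_threshold_usec(50, 110))   # inside the band: flat
print(lateralization_threshold_usec(400, 110))  # beyond the band: elevated
```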
The loudness of four-tone complexes centered at 250, 2000, and 4000 cps was measured as a function of the over-all spacing, ΔF, of the components, both in the quiet and against various levels of a uniform masking noise. When the masking noise was held at a constant level, the loudness of the complex increased more with ΔF at moderate sensation levels (between about 30 and 60 dB) than at either higher or lower levels. Near the masked as well as the absolute threshold, the loudness decreased as ΔF was increased beyond the critical bandwidth. Only when ΔF was less than a critical band was loudness independent of ΔF and the amount of loudness summation invariant with level. These results support the hypothesis that the amount of loudness summation depends upon the slope of the loudness functions for the individual critical bands that form the complex.
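The hypothesis in the last sentence can be illustrated with a schematic model in which each critical band contributes a compressive power-law loudness and the contributions add; the exponent and the unit intensities below are illustrative assumptions, not fitted values.

```python
def complex_loudness(band_intensities, exponent=0.3):
    """Total loudness as the sum of compressive per-critical-band contributions."""
    return sum(i ** exponent for i in band_intensities)

# Four equal tones packed into one critical band vs. spread over four bands:
packed = complex_loudness([4.0])                 # all energy in one band
spread = complex_loudness([1.0, 1.0, 1.0, 1.0])  # one tone per band
print(packed, spread)  # spread > packed when the exponent is compressive
```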