Can reverberation override binaural cues and reintroduce frequency grouping in a crossing stimulus?
by Emily Huang
In "Crossing of Auditory Streams," Tougas and Bregman studied the principles of frequency proximity and trajectory. The frequency-proximity principle refers to the tendency for tones that are similar in frequency to group together, while the trajectory principle refers to the tendency for tones that follow a continuous trajectory to group together. The participants of this study were presented with different stimulus patterns to test how these principles work together to help us perceive auditory streams. Tougas and Bregman found that, when presented with ascending and descending tone sequences that crossed in an X pattern, participants tended to group the tones by frequency. Two separate streams were heard: one containing the low frequencies of the crossing stimulus and one containing the high frequencies. This suggests that frequency-proximity grouping is preferred over trajectory grouping in auditory scene analysis.
Last year, a student was able to overcome the frequency-proximity grouping by splitting the stimulus into binaural streams, which indicated that binaural cues take precedence over frequency proximity in auditory scene analysis. Since binaural cues are used in sound localization, and reverberation can impair sound localization, I aimed to test whether adding reverberation to the binaural streams would override the binaural grouping and reintroduce frequency grouping. First, I created a stimulus with an ascending stream of seven tones in the left ear and a descending stream of the same tones in the right ear. Then, using Audacity, I applied the reverb effect to the stimulus with reverberance at 25%, a dry gain of -16 dB, and three levels of wet gain: -6, -3, and 0 dB. The ratio of dry gain to wet gain determines the amount of reverberation, so in this case a lower wet gain added less reverberation.
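For anyone who wants to experiment with a similar stimulus, here is a minimal Python sketch of one way the binaural crossing stimulus and the dry/wet reverb mix could be constructed. The original clips were made in Audacity, so this is only an approximation: the tone frequencies, tone duration, sample rate, and the simple noise-convolution reverb are assumptions for illustration, and only the overall structure (seven ascending tones in the left ear, the same tones descending in the right ear, a -16 dB dry gain, and wet gains of 0, -3, and -6 dB) comes from the description above.

# Minimal sketch of a binaural crossing stimulus with a dry/wet reverb mix.
# Assumed for illustration: tone frequencies, tone duration, sample rate, and
# the toy noise-convolution reverb (Audacity's reverb is a different algorithm).
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

SR = 44100                                       # sample rate in Hz (assumed)
TONE_DUR = 0.25                                  # seconds per tone (assumed)
FREQS = [400, 504, 635, 800, 1008, 1270, 1600]   # seven tones, log-spaced (assumed)

def tone(freq, dur=TONE_DUR, sr=SR):
    """Pure sine tone with 10 ms raised-cosine ramps to avoid onset clicks."""
    t = np.arange(int(dur * sr)) / sr
    y = np.sin(2 * np.pi * freq * t)
    ramp = int(0.01 * sr)
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def crude_reverb(x, sr=SR, decay_s=1.0):
    """Stand-in reverb: convolve with exponentially decaying noise."""
    rng = np.random.default_rng(0)
    n = int(decay_s * sr)
    ir = rng.standard_normal(n) * np.exp(-6.0 * np.arange(n) / n)
    wet = fftconvolve(x, ir)[: len(x)]
    return wet / (np.max(np.abs(wet)) + 1e-12)

def mix(dry, wet, dry_gain_db=-16.0, wet_gain_db=0.0):
    """Dry/wet mix: the dry-to-wet gain ratio sets how reverberant the result is."""
    return dry * 10 ** (dry_gain_db / 20) + wet * 10 ** (wet_gain_db / 20)

# Ascending sequence in the left ear, the same tones descending in the right ear.
left = np.concatenate([tone(f) for f in FREQS])
right = np.concatenate([tone(f) for f in reversed(FREQS)])

def write_wav(name, l, r):
    stereo = np.stack([l, r], axis=1)
    stereo = stereo / np.max(np.abs(stereo))
    wavfile.write(name, SR, (0.5 * stereo * 32767).astype(np.int16))

write_wav("binaural_crossing.wav", left, right)

# Three reverberant versions matching the wet gains described above (0, -3, -6 dB).
for wet_db in (0.0, -3.0, -6.0):
    write_wav(f"crossing_reverb_wet{int(wet_db)}dB.wav",
              mix(left, crude_reverb(left), wet_gain_db=wet_db),
              mix(right, crude_reverb(right), wet_gain_db=wet_db))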
Original:
With Binaural Split:
With Reverb (Wet Gain of 0 dB):
With Reverb (Wet Gain of -3 dB):
With Reverb (Wet Gain of -6 dB):
a. Compared with the binaural split, it seemed to me that grouping became progressively easier as the wet gain got smaller, so the reverb does help reintroduce frequency grouping.
b. I believe that the demo answers the question posed fairly well. However, I was able to group the tones by trajectory the whole way through, so I wonder if there is a limit to how much of an effect reverberation can have on grouping. I also wonder whether it would work in the same manner if the tones were more naturalistic sounds or instruments.
Anonymous
@Michael Anoke, Thanks for listening!! This seems to answer the question I posed: the reverberation is clearly able to reintroduce frequency grouping, though perhaps not by completely overriding binaural cues. I also found that, if I concentrated, I could hear the trajectory in all three clips as well. Perhaps the next step would be to see if more natural reverberation would be more effective, since I'm sure the reverberation created in Audacity isn't the best way to mimic real-world hearing.
Ben Radovitzky
Great experiment! My experience was the following: the original set was very chaotic and poorly grouped, whereas I definitely noticed the grouping for the binaural split example. Then for the ones with reverb, I noticed that in the examples with less wet gain (and therefore lower reverb), the grouping was more obvious, implying that the reverb does have an effect on the grouping.
From the perspective of a researcher, I would say that although the question you asked was indeed answered, it is possible that this isn't so much a property of the "reverberation" itself but instead some other "lower level" feature of the reverb that messed with the grouping, whether that was a time cue, an intensity cue, or something else that is escaping me. Still, the question posed was interesting, and this does indeed show that something about reverb changes the way that we group sounds.
Anonymous
@Ben Radovitzky, Thanks for the comment! I agree; I do think that using the reverb setting in Audacity introduces a lot of confounding variables into the experiment. While I tried to keep all other settings constant, a good follow-up experiment would be to vary the other settings one at a time and see whether they also have an effect, or to do what Michael proposed and use natural reverberation to mimic real-world environments. I would say that this answers the question I posed in that we can conclude that at least some property of the reverberation has an effect on binaural grouping, allowing frequency grouping to be reintroduced.