Music soothes the savage beast, but MIT researchers reported in the March 7 issue of Nature that they have invented mythical beasts of sound that may pave the way to better musical experiences for the hearing impaired.
Imagine an orchestra that pounded out only a rhythm and a few snatches of melody, or created an annoying jumble of conflicting sounds. This is what music is like for many of the more than 45,000 hearing-impaired adults and children worldwide who use the bionic ear known as the cochlear implant.
MIT researchers separated and then recombined two components of speech and music to better understand how the brain processes sound. They named the resulting weird sounds "auditory chimeras" after the mythological beast with the head of a lion, the body of a goat and the tail of a serpent.
With the knowledge gleaned from this study, cochlear implant users may someday be able to upgrade the software controlling their surgically implanted devices to enjoy a much richer musical experience. With an implant in each ear, they may be able to pick out a single voice in a noisy environment or a single instrument within an orchestra.
"Speech is such a rich signal, we can still understand it with severe degradation," said Zachary M. Smith, co-author of the Nature paper and a graduate student in the Harvard-MIT Division of Health Sciences and Technology's Speech and Hearing Bioscience and Technology Program.
"Cochlear implants take advantage of that, and only provide a few channels of information that successfully allow many patients to understand speech," said Smith, who also is affiliated with Massachusetts Eye and Ear Infirmary (MEEI). "We have found that delivering the sound component called fine structure may improve patients' pitch perception and ability to localize sounds."
Better pitch perception would help cochlear implant users appreciate music and catch intonations in speech, write Smith and co-authors Bertrand Delgutte, principal research scientist in MIT's Research Laboratory of Electronics (RLE) Auditory Physiology group, and Andrew J. Oxenham, research scientist in RLE's Sensory Communication group. This is especially critical in languages such as Chinese in which the same words have different meanings when delivered with different inflections.
Better sound localization also may help the increasing number of patients with cochlear implants in both ears take advantage of the cues that normal-hearing listeners use to pick out speech among competing sound sources, said Delgutte, who, with co-authors Smith and Oxenham, is affiliated with the Harvard Medical School Department of Otology and Laryngology.
Smith and Delgutte are collaborating with Donald K. Eddington, director of the Cochlear Implant Research Laboratory at MEEI and a principal research scientist at MIT's RLE, to find new ways to effectively deliver fine structure information with cochlear implants.
CREATING MYTHICAL BEASTS
While preparing a dish at home one Thanksgiving, Delgutte was struck by the idea that it would be fun to put together the slowly varying part of one sound waveform with the rapidly varying part of another. When you do this, "you get really weird things," said Delgutte, who also is a research associate at MEEI.
For instance, by combining music and speech you get a progression of sounds that turns from a spoken sentence into a short piece of instrumental music. The researchers called these novel stimuli auditory chimeras.
It started out as a game, but Delgutte began to think that maybe his creations would be useful in perceptual experiments that might shed light on how the brain processes sound. This led to the current work, which seeks to identify whether the envelope or the fine structure is more important in interpreting different sounds.
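The recombination Delgutte describes can be sketched in code: split two sounds into matching frequency bands, then pair the envelope of one with the fine structure of the other in each band. This is an illustrative reconstruction, not the researchers' published procedure; the filter order, band edges, and function names are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def chimera(sound_a, sound_b, fs, band_edges):
    """Illustrative chimera synthesis: in each frequency band, impose
    the envelope of sound_a on the fine structure of sound_b."""
    out = np.zeros(len(sound_a))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        a_band = sosfiltfilt(sos, sound_a)
        b_band = sosfiltfilt(sos, sound_b)
        env = np.abs(hilbert(a_band))              # slowly varying envelope of A
        fine = np.cos(np.angle(hilbert(b_band)))   # rapid fine structure of B
        out += env * fine                          # recombine and sum across bands
    return out
```

Swapping the roles of the two inputs yields the complementary chimera, which is how an experiment can test which component the brain relies on for a given task.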
In the study, normal-hearing subjects listened to auditory chimeras of various sentences and musical melodies and wrote down what they heard.
BREAKING DOWN SOUNDS
The cochlea is a snail-shaped structure in the inner ear that converts the mechanical vibrations of sounds entering the ear canal into electrical signals. The electrical signals are sent to the brain via the auditory nerve.
In those with sensorineural deafness, a cochlear implant can be surgically placed in the inner ear. The implant is programmed with speech-coding strategies and stimulates the cochlea through an electrode array.
The inner ear performs a sort of mathematical computation when it maps sound frequencies along the length of the cochlea's membrane. Another mathematical way to break down a sound signal is to factor it into the product of two components: a slowly varying envelope and a rapidly varying fine time structure. These two parts make up every sound, and the MIT researchers' goal was to find out which part is most crucial for auditory perception.
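The envelope/fine-structure factorization described above is conventionally computed with the Hilbert transform: the magnitude of the analytic signal gives the envelope, and the cosine of its phase gives the fine structure, and their product reconstructs the original signal exactly. A minimal sketch, assuming scipy is available:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_fine_structure(x):
    """Factor a signal into a slowly varying envelope and a rapidly
    varying fine time structure via the analytic signal."""
    analytic = hilbert(x)                         # x + j * Hilbert(x)
    envelope = np.abs(analytic)                   # slowly varying magnitude
    fine_structure = np.cos(np.angle(analytic))   # rapid phase oscillation
    return envelope, fine_structure
```

Multiplying the two parts back together recovers the original waveform, which is what lets chimeras mix the envelope of one sound with the fine structure of another.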
Many processing strategies for cochlear implants discard the fine time structure and present only about six to eight bands of envelope information. This makes sense, because the envelope carries most of the information needed to understand speech, but the researchers point out that adding fine time structure would help with pitch perception in music and identifying where sounds come from.
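Envelope-only processing of this kind is often simulated for normal-hearing listeners with a noise vocoder: the signal's band envelopes are kept and re-imposed on bandpass noise, discarding the fine structure. The sketch below is a generic simulation of that idea, not any manufacturer's actual coding strategy; the six log-spaced bands and filter settings are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def noise_vocoder(x, fs, n_bands=6, f_lo=100.0, f_hi=3500.0):
    """Keep ~6 band envelopes and discard fine structure, re-imposing
    each envelope on bandpass-filtered noise (a generic simulation of
    envelope-based implant processing)."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))   # band envelope only
        out += env * sosfiltfilt(sos, noise)         # noise carrier replaces fine structure
    return out
```

Speech processed this way remains largely intelligible, which is consistent with the article's point that the envelope dominates speech understanding while melody and pitch suffer.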
This work is funded by the National Institutes of Health.
A version of this article appeared in MIT Tech Talk on March 13, 2002.