The following story is adapted from one that appeared in the spring 2002 issue of Two If By Sea, a joint newsletter of the MIT and WHOI Sea Grant programs.
Since World War II, the best way to measure the maximum wind speed of a hurricane, and thus its potential risk to humans, has been to fly a weather reconnaissance aircraft directly into the eye of the storm. Now an MIT professor is developing what could be a less harrowing (and less expensive) method, based on listening to the storm from the ocean floor.
It is well known that sound waves can carry useful information for hundreds of miles beneath the ocean's surface. "This capacity of the ocean as a sonic information channel has been exploited by scientists for decades and by fish and marine mammals for millennia," said Nicholas Makris, an associate professor in the Department of Ocean Engineering.
So it's not surprising that when Makris discussed ocean ambient noise with hurricane expert Kerry Emanuel, a professor of meteorology in the Department of Earth, Atmospheric and Planetary Sciences, the conversation turned to the possibility of detecting hurricanes by measuring sound. That conversation led to their current project: exploring how hydrophones deployed in the ocean might gather acoustic data and provide critical information about hurricanes. The project is supported by MIT Sea Grant and the Office of Naval Research.
Satellite imaging technology is a reliable means of detecting and locating hurricanes. However, even with satellite images, said Makris, assessments of a hurricane's destructive power are off by at least an order of magnitude, with scientists crudely estimating wind speeds. Those estimates are used to make critical decisions about evacuating coastal communities. An unnecessary evacuation can mean millions of dollars in lost revenue, but not evacuating when a deadly storm strikes means loss of life as well as destruction of property.
However, single hydrophones or acoustic arrays placed strategically could record the sound associated with high winds and provide accurate information about a hurricane's power. Makris likened acoustic arrays to an acoustic eye or radar dish. "It's like a lens or a telescope that allows you to see in one direction and to discriminate other directions," he said.
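The "acoustic lens" Makris describes is, in signal-processing terms, a beamformer: by delaying and summing the outputs of many hydrophones, an array reinforces sound from one bearing and cancels sound from others. The article gives no implementation details, so the sketch below uses a generic delay-and-sum beamformer on a hypothetical 16-element line array; every parameter (element count, spacing, frequency, sound speed) is an illustrative assumption, not a value from the project.

```python
import numpy as np

# Illustrative delay-and-sum beamformer for a uniform line array.
# All constants are hypothetical, chosen only to demonstrate the principle.
C = 1500.0              # nominal sound speed in seawater, m/s
FREQ = 200.0            # tone frequency, Hz
N_ELEM = 16             # number of hydrophones in the line array
SPACING = C / FREQ / 2  # half-wavelength spacing avoids grating lobes

def array_response(steer_deg, source_deg):
    """Normalized output power for a plane wave arriving from source_deg
    when the array is steered to look toward steer_deg."""
    k = 2 * np.pi * FREQ / C         # acoustic wavenumber
    x = np.arange(N_ELEM) * SPACING  # hydrophone positions along the line
    # Phase of the incoming plane wave at each hydrophone
    incoming = np.exp(1j * k * x * np.sin(np.radians(source_deg)))
    # Steering weights undo the propagation delay for the look direction
    weights = np.exp(-1j * k * x * np.sin(np.radians(steer_deg)))
    return np.abs(np.sum(weights * incoming) / N_ELEM) ** 2

# Steering at the true bearing gives full power; an off-bearing look is rejected,
# which is how the array "sees in one direction and discriminates others".
print(round(array_response(30.0, 30.0), 3))   # on-target look
print(round(array_response(-30.0, 30.0), 3))  # off-target look, suppressed
```

Sweeping `steer_deg` over all bearings and picking the peak is the basic way such an array would localize a noise source like a distant storm.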
To prepare for recording actual hurricanes, Makris and graduate student Joshua Wilson have developed a model they've applied to areas of the world where hurricanes are a problem, such as the Bay of Bengal. The researchers have also begun collecting existing underwater acoustic data from the U.S. Navy's SOSUS underwater listening stations. These bottom-mounted acoustic arrays, originally used during the Cold War to find Russian submarines, provide a record of the natural and man-made sounds in the ocean.
The researchers also plan to make measurements of their own by deploying an autonomous underwater vehicle (AUV) during a hurricane in, say, waters off the west coast of Florida or the Yucatan. The AUV would be sent to the ocean floor, out of the hurricane's way, to make single-point measurements at various frequencies as the storm passes overhead. By separating the hurricane's noise from the ocean's general bubbly background, and by discriminating the direction of the hurricane, Makris hopes to discern the storm's power.
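The inversion idea behind such a measurement can be sketched simply: wind-driven surface noise grows with wind speed, so if one assumes a calibrated power-law relation between wind speed and noise power in some frequency band, a band-averaged noise measurement can be inverted for wind speed. The article does not give the group's actual model or calibration; the constants and the cubic power law below are purely hypothetical placeholders.

```python
import numpy as np

# Hypothetical power law: noise power I = A_CAL * V**N_EXP for wind speed V.
# Both constants are illustrative assumptions, not values from the study.
A_CAL = 1e-4   # hypothetical calibration constant
N_EXP = 3.0    # hypothetical power-law exponent

def band_power(signal, fs, f_lo, f_hi):
    """Mean noise power in the band [f_lo, f_hi] Hz via a periodogram."""
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 2 * np.sum(spec[band])  # factor 2 for the one-sided spectrum

def wind_speed_from_power(power):
    """Invert the assumed power law to recover wind speed in m/s."""
    return (power / A_CAL) ** (1.0 / N_EXP)

# Synthetic check: white noise whose variance encodes a 40 m/s wind.
rng = np.random.default_rng(0)
true_v = 40.0
sigma = np.sqrt(A_CAL * true_v ** N_EXP)
noise = rng.normal(0.0, sigma, 100_000)
est_v = wind_speed_from_power(band_power(noise, fs=1000.0, f_lo=0.0, f_hi=500.0))
print(round(est_v, 1))  # close to 40.0
```

In practice the measured noise would first have to be attributed to the storm rather than to shipping or marine life, which is why the directional discrimination described above matters.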
Before heading too far afield, the researchers are looking forward to testing an ambient noise acquisition system in local waters. Tests of the system in the Charles River at the MIT sailing pavilion have begun, and they suggest other benefits to the technology.
"Getting all these ambient noise measures as a function of location in the Charles River and in Boston Harbor will be very valuable," Makris said. The tests, for instance, could help establish definite correlations between marine mammal populations and noise pollution in coastal waters - a controversial topic in recent years.
A version of this article appeared in MIT Tech Talk on October 30, 2002.