CAMBRIDGE, Mass. -- When it was founded two years ago, MIT's Project Oxygen Alliance set forth to create a new form of computing and communication: human-centered, ubiquitous, and transparent. At the second annual meeting of the MIT Oxygen Alliance, held June 12-13, researchers demonstrated the technological advances this ambitious goal has inspired.
Begun as a partnership between the Laboratory for Computer Science (LCS), the Artificial Intelligence Laboratory (AI Lab), and six major corporations, with support from the Defense Advanced Research Projects Agency, the Project Oxygen Alliance seeks to make computation and communication as abundant and natural to use as oxygen in the air. The goal is to free people from computer jargon, keyboards, mice and other specialized devices, allowing them to meet their computation and communication needs any time, anywhere.
MIT researchers have been busy creating speech and vision technologies that enable humans to communicate naturally with computers, just as they would with other people. They are developing decentralized networks and robust software/hardware architectures that adapt to mobile users, currently available resources, and varying operating conditions. Researchers are also at work devising security and privacy mechanisms that safeguard personal information and resources.
"The theme of the second year of the Oxygen alliance is integration," said Professor Victor Zue, director of LCS. "For example, speech and vision techniques are used jointly to recognize a person and to provide speech and gesture understanding. Similarly, wireless location support, ad hoc networks, and novel security protocols are utilized to provide mobile and secure information delivery." New technologies demonstrated included:
- Multilingual conversational systems that can recognize, understand, and respond to naturally spoken requests. The system can be configured rapidly to handle complex dialogs that allow users to obtain information such as the weather in Tokyo or traffic conditions in Boston.
- An integrated vision and speech system that uses cameras and microphone arrays to track a speaker's location and arm position, extract the speaker's voice from background noise, and respond to a combination of pointing gestures and spoken commands such as "Move that one over here" or "Show me the video on that screen."
- Systems that integrate software services to accomplish user-defined tasks. For example, a smart room equipped with embedded speech, video, and motion detectors automatically records and recalls key meeting events, monitoring and responding to visual and auditory cues that flow naturally from normal interactions among group members.
- A computer-aided design tool that understands simple mechanical devices as they are sketched on whiteboards or tablets. Liberated from mice, menus, and icons, users can draw, simulate, modify, and test design elements in the same way they would with an expert designer.
- Location and resource discovery systems that enable users to access computers, printers, and remote services by describing what they want to do rather than by remembering computer-coded addresses. Low-cost ceiling-mounted beacons enable mobile users to determine where they are indoors, without having to reveal their location. These integrated systems respond to user commands such as "Print this picture on the nearest color printer."
- A secure, self-configuring, decentralized wireless network that enables mobile users to communicate spontaneously using handheld devices and to share information with one another, utilizing multiple network protocols without requiring additional access points or intervention from service providers.
- Hardware and software architectures that determine and implement the best allocation of resources for streaming multimedia applications. These architectures optimize the use of computer circuitry and power, thereby boosting the performance and lowering the cost of wireless handheld devices that link mobile users to Oxygen networks.
ABOUT THE MIT OXYGEN ALLIANCE
Now in its second year, Project Oxygen is a five-year alliance between LCS and AI Lab at MIT and industry leaders including the Acer Group, Delta Electronics, Hewlett-Packard, Nippon Telegraph and Telephone, Nokia and Philips. Since its inception, the MIT Oxygen Alliance has combined the expertise of researchers at MIT's LCS and AI Lab with that of its industry partners. Visiting researchers from the Alliance's six companies have routinely exchanged knowledge and tools with their counterparts at MIT through joint projects, workshops, and conferences such as this annual meeting. MIT has worked with HP's Cambridge Research Labs, Philips, and Delta to develop a new H21 handheld device for Oxygen, and with Acer to port Oxygen applications to a tablet. In addition, MIT has passed on devices and software to its industry partners for their own research purposes. Such collaborations have driven innovation both on and off campus. Oxygen, funded by its industry partners, received seed funding from the Defense Advanced Research Projects Agency.