Artificial-intelligence research revives its old ambitions

A new interdisciplinary research center at MIT, funded by the National Science Foundation, aims at nothing less than unraveling the mystery of intelligence.

The birth of artificial-intelligence research as an autonomous discipline is generally thought to have been the monthlong Dartmouth Summer Research Project on Artificial Intelligence in 1956, which convened 10 leading electrical engineers — including MIT’s Marvin Minsky and Claude Shannon — to discuss “how to make machines use language” and “form abstractions and concepts.” A decade later, impressed by rapid advances in the design of digital computers, Minsky was emboldened to declare that “within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved.”

The problem, of course, turned out to be much more difficult than AI’s pioneers had imagined. In recent years, by exploiting machine learning — in which computers learn to perform tasks from sets of training examples — artificial-intelligence researchers have built special-purpose systems that can do things like interpret spoken language or play Jeopardy! with great success. But according to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT, “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”

Poggio thinks that AI research needs to revive its early ambitions. “It’s time to try again,” he says. “We know much more than we did before about biological brains and how they produce intelligent behavior. We’re now at the point where we can start applying that understanding from neuroscience, cognitive science and computer science to the design of intelligent machines.”

The National Science Foundation (NSF) appears to agree: Today, it announced that one of three new research centers funded through its Science and Technology Centers Integrative Partnerships program will be the Center for Brains, Minds and Machines (CBMM), based at MIT and headed by Poggio. Like all the centers funded through the program, CBMM will initially receive $25 million over five years.

Homegrown initiative

CBMM grew out of the MIT Intelligence Initiative, an interdisciplinary program aimed at understanding how intelligence arises in the human brain and how it could be replicated in machines.

“[MIT President] Rafael Reif, when he was provost, came to speak to the faculty and challenged us to come up with new visions, new ideas,” Poggio says. He and MIT’s Joshua Tenenbaum, also a professor in the Department of Brain and Cognitive Sciences (BCS) and a principal investigator in the Computer Science and Artificial Intelligence Laboratory, responded by proposing a program that would integrate research at BCS and the Department of Electrical Engineering and Computer Science. “With a system as complicated as the brain, there is a point where you need to get people to work together across different disciplines and techniques,” Poggio says. Funded by MIT’s School of Science, the initiative was formally launched in 2011 at a symposium during MIT’s 150th anniversary.

Headquartered at MIT, CBMM will be, like all the NSF centers, a multi-institution collaboration. Of the 20 faculty members currently affiliated with the center, 10 are from MIT, five are from Harvard University, and the rest are from Cornell University, Rockefeller University, the University of California at Los Angeles, Stanford University and the Allen Institute for Brain Science. The center’s international partners are the Italian Institute of Technology; the Max Planck Institute in Germany; City University of Hong Kong; the National Centre for Biological Sciences in India; and Israel’s Weizmann Institute and Hebrew University. Its industrial partners are Google, Microsoft, IBM, Mobileye, Orcam, Boston Dynamics, Willow Garage, DeepMind and Rethink Robotics. Also affiliated with the center are Howard University; Hunter College; Universidad Central del Caribe, Puerto Rico; the University of Puerto Rico, Río Piedras; and Wellesley College.

CBMM aims to foster collaboration not just between institutions but also across disciplinary boundaries. Graduate students and postdocs funded through the center will have joint advisors, preferably drawn from different research areas.

Research themes

The center’s four main research themes are also intrinsically interdisciplinary. They are the integration of intelligence, including vision, language and motor skills; circuits for intelligence, which will span research in neurobiology and electrical engineering; the development of intelligence in children; and social intelligence. Poggio will also lead the development of a theoretical platform intended to undergird the work in all four areas.

“Those four thrusts really do fit together, in the sense that they cover what we think are the biggest challenges facing us when we try to develop a computational understanding of what intelligence is all about,” says Patrick Winston, the Ford Foundation Professor of Engineering at MIT and research coordinator for CBMM.

For instance, he explains, in human cognition, vision, language and motor skills are inextricably linked, even though they’ve been treated as separate problems in most recent AI research. One of Winston’s favorite examples is that of image labeling: A human subject will identify an image of a man holding a glass to his lips as that of a man drinking. If the man is holding the glass a few inches further forward, it’s an instance of a different activity — toasting. But a human will also identify an image of a cat turning its head up to catch a few drops of water from a faucet as an instance of drinking. “You have to be thinking about what you see there as a story,” Winston says. “They get the same label because it’s the same story, not because it looks the same.”

Similarly, Winston explains, development is its own research thrust because intelligence is fundamentally shaped through interaction with the environment. There’s evidence, Winston says, that mammals that receive inadequate visual stimulation in the first few weeks of life never develop functional eyesight, even though their eyes are otherwise unimpaired. “You need to stimulate the neural mechanisms in order for them to assemble themselves into a functioning system,” Winston says. “We think that that’s true generally, of our entire spectrum of capabilities. You need to have language, you need to see things, you need to have language and vision work together from the beginning to ensure that the parts develop properly to form a working whole.”
