Artificial Intelligence Laboratory

The research of the Artificial Intelligence Laboratory (AI Lab) is aimed both at building intelligent software and hardware artifacts and at understanding the computational basis of human intelligence. The laboratory began in 1959 as the Artificial Intelligence Project, part of the Research Laboratory of Electronics; it became part of Project MAC in 1963 and split off as a separate laboratory in 1970. It has pursued these twin goals throughout its four decades of research.

The laboratory is an interdepartmental entity in the School of Engineering and includes faculty and students from the Department of Electrical Engineering and Computer Science (the largest contingent) as well as from the Department of Aeronautics and Astronautics and the Department of Ocean Engineering. It also has significant membership from outside the School: from the Department of Brain and Cognitive Sciences, the Program in Media Arts and Sciences, and the Whitaker College of Health Sciences and Technology.

Our research is sponsored by the US government, primarily through the Defense Advanced Research Projects Agency but also through the National Science Foundation, the National Aeronautics and Space Administration, the Central Intelligence Agency, the National Institutes of Health, the Office of Naval Research, and the Air Force Office of Scientific Research and Rome Labs. The largest industrial sponsors are the Nippon Telegraph and Telephone Corporation (NTT) and Project Oxygen (see below), but the laboratory also receives support from Ford, Hewlett-Packard, Microsoft, Honda, the Singapore-MIT Alliance, the Cambridge-MIT Institute, and the Deshpande Center at MIT.

The Artificial Intelligence Laboratory has strong ties to the Laboratory for Computer Science (LCS) and works jointly with LCS on Project Oxygen and on the NTT collaboration. Project Oxygen, funded by six companies (Nokia, Philips, Hewlett-Packard, Acer, Delta Electronics, and NTT), explores pervasive, human-centered computing. Beyond Project Oxygen, NTT has sponsored projects across the two laboratories in almost all areas of research over the course of a five-year collaboration.

The laboratory's current research includes work in the fundamentals of computer vision, computer vision applied to medicine, computer vision applied to user interfaces, machine learning, knowledge representation, natural language understanding, computational politics, foundations of computer languages, understanding sketches, pervasive computer interfaces, synthetic biology, amorphous computing, modeling biology at the molecular level, simultaneous localization and mapping for mobile robots, self-aware and self-diagnostic robots, humanoid robots, and artificial life.

Highlights

Fundamentals of Computer Vision

Jekwan Ryu has obtained the first images using a "synthetic aperture microscope." This instrument overcomes the coupling of resolution, working distance, field of view, and depth of field found in classical optical microscopy. The method, which combines many exposures taken with different finely textured interference patterns, promises to allow microscopes to scale to ultraviolet and even X-ray wavelengths, since it uses reflective rather than refractive optical elements. Critical to its operation is the computational derivation of the image. This work is being done with Professor Dennis Freeman in the Research Laboratory of Electronics and Professor Berthold Horn in the AI Lab.
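At its core the computational step is an inverse problem: many pattern-modulated exposures are combined into an image that no single exposure contains. The sketch below is a minimal illustration of that idea only, assuming a toy linear model in which each exposure records the object modulated by a known pattern and the object is recovered by least squares; the instrument's actual optics and reconstruction are considerably more involved.

    # Toy linear model, for illustration only: each exposure is assumed to
    # yield one measurement, the object weighted by a known finely textured
    # pattern.  The actual instrument's optics and reconstruction differ.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_exposures = 64, 200            # more exposures than unknowns

    obj = rng.random(n_pixels)                 # "true" object (simulation only)
    patterns = rng.random((n_exposures, n_pixels))   # known illumination patterns

    # Each exposure integrates light from the object weighted by its pattern.
    measurements = patterns @ obj + 0.01 * rng.standard_normal(n_exposures)

    # The image exists only computationally: solve patterns @ x = measurements.
    estimate, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
    print("relative reconstruction error:",
          np.linalg.norm(estimate - obj) / np.linalg.norm(obj))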

Roberto Accorsi has developed a coded aperture gamma ray imaging method that removes "near field" artifacts and can, for example, be used to greatly increase the resolution of existing gamma ray cameras. Critical to its operation is the computational derivation of the image. One practical application is in biomedical testing with tracer chemicals, where the higher resolution allows one to work with mice rather than more expensive rats. Applications to imaging contraband at a distance are also being explored. This work is being done with Professor Richard Lanza in the Laboratory for Nuclear Science and Professor Berthold Horn in the AI Lab.
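In its idealized far-field form, coded aperture imaging records the source distribution convolved with the mask pattern and then decodes that record computationally. The sketch below illustrates the principle with a toy one-dimensional model decoded by regularized Fourier deconvolution; the mask and parameters are invented for illustration, and the group's actual decoding, including its near-field artifact correction, is more sophisticated.

    # Toy 1-D far-field model, for illustration only: the detector records
    # the source circularly convolved with a binary mask, and the image is
    # derived computationally by regularized Fourier deconvolution.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 128
    source = np.zeros(n)
    source[[30, 31, 90]] = [1.0, 0.5, 0.8]     # a few point emitters

    mask = (rng.random(n) < 0.5).astype(float) # open/closed aperture cells
    detector = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))

    # Decode: divide in the Fourier domain, with a small regularizer.
    M = np.fft.fft(mask)
    eps = 1e-3
    decoded = np.real(np.fft.ifft(
        np.fft.fft(detector) * np.conj(M) / (np.abs(M) ** 2 + eps)))

    print("brightest decoded positions:", np.sort(np.argsort(decoded)[-3:]))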

Marshall Tappen has developed a new algorithm for recovering shading and reflectance from a single image. The algorithm (a) gathers local evidence from color and spatial patterns, then (b) pools that local evidence using Bayesian belief propagation in a Markov network. Given an estimate of the light direction, the resulting program can break an image into two separate images: one containing the changes caused by paint (reflectance) and one containing the changes caused by shape (shading). The results are compelling: the shading images look like photographs of objects spray-painted with a diffuse white paint, while the paint images look like cartoon versions of the original photographs, with no shading effects. This problem had not been solved before.

These shading and paint component images are useful for a number of tasks. They can be used for image editing, modifying the paint components of an image without changing the apparent shapes. In the past, the many shape-from-shading algorithms developed in the computer vision community worked only on artificial images of uniformly painted objects, severely limiting their usefulness; Marshall's algorithm, by separating the effects of shading from those of paint, effectively extends the scope of shape-from-shading algorithms to real-world images. Finally, this is a problem that people solve, and a computer algorithm helps us understand what computations people may be doing. This work is being done with Professor William Freeman in the AI Lab.
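A toy example conveys the flavor of the decomposition. The sketch below uses a classic simplifying rule, not Tappen's algorithm: along a single scan line, intensity changes accompanied by a color change are attributed to paint, and intensity changes at constant color are attributed to shading. The belief propagation that pools ambiguous local evidence in the real system is omitted, and the data are invented.

    # Toy 1-D decomposition under a simplifying rule (not Tappen's method):
    # a color change marks a paint edge; constant color marks shading.
    import numpy as np

    # An invented scan line: log-intensity plus a hue channel.
    log_I = np.array([0.0, 0.1, 0.2, 0.9, 1.0, 1.1, 0.6, 0.5, 0.4])
    hue   = np.array([0.2, 0.2, 0.2, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7])

    d_log_I = np.diff(log_I)
    is_paint = np.abs(np.diff(hue)) > 1e-6     # local evidence: color changed
    paint_grad = np.where(is_paint, d_log_I, 0.0)
    shade_grad = np.where(is_paint, 0.0, d_log_I)

    # Re-integrate each labeled gradient field into its own component image;
    # the two components sum back to the original scan line.
    paint = np.concatenate([[log_I[0]], log_I[0] + np.cumsum(paint_grad)])
    shading = np.concatenate([[0.0], np.cumsum(shade_grad)])

    print("paint component:  ", np.round(paint, 2))
    print("shading component:", np.round(shading, 2))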

Synthetic Biology

Dr. Tom Knight, Professor Gerry Sussman, and Randy Rettberg of the AI Lab, along with Dr. Drew Endy of the Division of Biological Engineering and the Department of Biology, have embarked on a new effort in synthetic biology. They are building and cataloging standard parts, expressed as genetic sequences, that can be used as "off-the-shelf" components for engineering new functions into living cells. The effort is inspired by the way most engineering disciplines have developed standard parts that can be combined through standard interfaces to build larger, more complex systems; for electrical engineers, the most accessible analogy is the data book of 7400-series integrated circuits. In the biological domain, the parts switch other parts in the genome on and off and produce different proteins with predictable interactions in the cell. During the January 2003 Independent Activities Period, a class was held for undergraduate and graduate students, some with no previous molecular biology experience and others already proficient in the lab. Working in teams, the students built oscillators that switched a luminescent gene on and off, making the host cells flash (slowly).
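The dynamics of such an oscillator can be conveyed with a standard model. The sketch below simulates a three-gene ring oscillator of the well-known "repressilator" form (Elowitz and Leibler, 2000) as a stand-in for the students' constructs; the simplified protein-only equations and all parameter values are illustrative assumptions, not descriptions of the circuits actually built in the class.

    # Illustrative repressilator-style ring oscillator: three repressor
    # proteins, each switching off the next gene in the ring.  Simplified
    # protein-only dynamics; parameters and units are invented.
    import numpy as np

    alpha, n, steps, dt = 50.0, 3.0, 20000, 0.005
    p = np.array([1.0, 1.5, 2.0])              # initial protein concentrations

    history = []
    for _ in range(steps):
        repressor = np.roll(p, 1)              # gene i is repressed by protein i-1
        dp = alpha / (1.0 + repressor ** n) - p    # synthesis minus decay
        p = p + dt * dp                        # forward-Euler integration step
        history.append(p.copy())
    history = np.array(history)

    # If protein 0 drove a luminescent gene, the cell would flash with
    # roughly this period (estimated from local maxima of the trace).
    trace = history[:, 0]
    peaks = np.where((trace[1:-1] > trace[:-2]) & (trace[1:-1] > trace[2:]))[0]
    print("peaks found:", len(peaks))
    print("approximate period (time units):", np.diff(peaks).mean() * dt)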

Organizational Changes

During the last year, part of the laboratory—the computer vision group—moved from 200 Technology Square to 400 Technology Square to provide more room in Building 200 for both the Artificial Intelligence Laboratory and the Laboratory for Computer Science.

On July 1, 2003, the 40th anniversary of the founding of Project MAC, the AI Lab will merge with the Laboratory for Computer Science to become the Computer Science and Artificial Intelligence Laboratory. In January 2004 the new laboratory will move from Technology Square to the new Stata Center and occupy all of the Gates Building and much of the Dreyfoos Building.

Rodney A. Brooks
Director
Fujitsu Professor of Computer Science and Engineering

More information on the Artificial Intelligence Laboratory can be found on the web at http://www.ai.mit.edu.
