MIT Reports to the President 1997-98


The Laboratory for Information and Decision Systems (LIDS) is an interdepartmental research laboratory of the Massachusetts Institute of Technology. Its staff includes faculty members, full-time research scientists, postdoctoral fellows, graduate research assistants, and support personnel. Undergraduate students participate in the research program of the Laboratory through the Undergraduate Research Opportunities Program (UROP). Every year several research scientists from various parts of the world visit the Laboratory to participate in its research programs.

The fundamental research goal of the Laboratory is to advance the field of systems, communication, control, and signal processing. In doing this, it explicitly recognizes the interdependence of these fields and the fundamental role that computers and computation play in this research. The Laboratory is conducting basic theoretical studies in communication, control, and signal processing, and is committed to advancing the state of knowledge in technologically important areas.

As an interdepartmental laboratory, LIDS reports to the Dean of the School of Engineering, Professor Robert A. Brown. The Co-Directors of the laboratory are Professors Robert G. Gallager, Sanjoy K. Mitter, and John N. Tsitsiklis (Acting Co-Director).

The Center for Intelligent Control Systems, an inter-university, interdisciplinary research center operated by a consortium of Brown University, Harvard University, and MIT, resides administratively within LIDS.

Twelve faculty members, several research staff members, and approximately 60 graduate students are presently associated with the Laboratory and the Center. Currently, the Laboratory and the Center provide some 50 research assistantships to graduate students. Undergraduate students also participate in research and thesis activities. A number of postdoctoral and visiting appointments are made.

Financial support is provided by the Air Force Office of Scientific Research (AFOSR), the Army Research Office (ARO), the Advanced Research Projects Agency (ARPA), C.S. Draper Laboratory, Motorola University Partnerships in Research, the National Science Foundation (NSF), the Office of Naval Research (ONR), Siemens AG, Tellabs, Inc., and the University Research Initiative Program (ARO).


The current research activities of the laboratory cover a wide range of theoretical and applied areas in systems, communications, control, and signal processing. These areas include the following:



Researchers from LIDS, RLE, Lincoln Laboratory, AT&T Bell Laboratories, and Digital Equipment Corporation have collaborated as part of a Consortium on All-Optical Networks. Funding for the Consortium was provided by DARPA. The goal of the Consortium was to pursue research and development on the optical technologies, the architectures, and the application interfaces required for a scalable, universal, wide-area, wideband, all-optical network. A key element of the Consortium's work was the construction of an extensive test bed linking the Consortium members and demonstrating the feasibility of the design concepts. The Consortium was viewed as a long-term effort paving the way to the national and international information infrastructures of the future. Professor Robert G. Gallager, Dr. Steven G. Finn, and a number of graduate students have been involved in the Consortium, which continues to strive towards this end.

In continuation of this research, Dr. Steven G. Finn, Professor Robert G. Gallager, and a number of graduate students from LIDS are collaborating with Lincoln Laboratory on All-Optical Networks. They are working on wavelength division multiplexing (WDM) wide area networks, particularly with respect to their use in backbones, and on very high speed local area time division multiplexing (TDM) networks. The work at LIDS is focused on the system and architectural aspects of these networks. Also, a node of a WDM network testbed is located at LIDS for experimental work.


In many applications, e.g., mobile wireless communication and military communication in the presence of jamming, the channel characteristic and the nature of the noise are unknown in the design stage of the communication link. For such applications it is imperative to design robust receivers and codes that allow reliable communication over each of a wide family of channels. To this end Professor Amos Lapidoth is studying universal receivers that do not require precise knowledge of the channel law, and yet perform asymptotically as well as the best receivers that could have been designed had the channel been known in advance. Professor Lapidoth is also studying the ultimate bounds on the rates at which reliable communication can be guaranteed over a channel that is only known to belong to some given family of channels.


The major objective of this work is to develop the scientific base needed to design data communication networks that are efficient, robust, and architecturally clean. Both wide-area and local-area networks, both high-speed and low-speed networks, and both point-to-point and broadcast communication channels are of concern. Some specific topics of current interest are multi-access interference, routing in wireless networks, quality of service control, diverse traffic mixes, the communication complexity and delay of distributed algorithms, failure recovery, and topological design. Professors Dimitri P. Bertsekas and Robert G. Gallager, Dr. Steven G. Finn, and their students are conducting this research.


Professors Robert G. Gallager and Mitchell D. Trott, together with several students, have ongoing projects in mobile communication aimed at developing a cohesive theory and set of insights for wireless multiple access. Specific research includes multiple-antenna transmission diversity, the capacity of fading channels, joint source/channel coding for broadcast channels, the transmission of bursty sources over a shared time-varying channel, transmitter power allocation across many cells, and capacity improvements through joint decoding.


Professor G. David Forney, Jr. and several students and colleagues have been studying codes defined on graphs and iterative decoding algorithms, such as turbo codes, low-density parity-check codes, and tail-biting codes. Some of these codes have been shown to be able to approach the Shannon limit remarkably closely, using implementable decoding algorithms. However, understanding of the construction of such codes and of the behavior of these decoding algorithms is still in a fairly primitive state.
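The flavor of iterative decoding on a code defined by parity checks can be sketched with Gallager's bit-flipping rule. The example below uses the small (7,4) Hamming parity-check matrix purely for illustration; practical low-density parity-check codes use far larger, sparser matrices, and the received word shown carries a single error that this rule happens to correct.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative only;
# LDPC codes use much larger, sparser matrices).
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

def bit_flip_decode(H, y, max_iter=20):
    """Iterative bit-flipping: repeatedly flip the bit that appears
    in the largest number of unsatisfied parity checks."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(y) % 2
        if not syndrome.any():
            return y                       # all checks satisfied
        # for each bit, count the failing checks it participates in
        counts = H[syndrome == 1].sum(axis=0)
        y[np.argmax(counts)] ^= 1          # flip the worst offender
    return y

codeword = np.zeros(7, dtype=int)          # the all-zero word is a codeword
received = codeword.copy()
received[0] ^= 1                           # a single channel error
print(bit_flip_decode(H, received))        # recovers the all-zero codeword
```

Each iteration is purely local to the graph structure of the code, which is what makes such algorithms implementable at high speed.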


The Laboratory for Information and Decision Systems and Tellabs Operations, Inc., a telecommunications equipment manufacturer, are developing a novel approach to collaborative research. In this approach, LIDS and Tellabs integrate industrial research interests with MIT's research and educational environment. The key difference between this new model of collaboration and traditional approaches is the focus on human resources as the primary enabler. Toward this end, LIDS provides Tellabs with access to faculty, students, visitors, facilities, and infrastructure support, while Tellabs dedicates resident corporate research positions to the effort, assuming responsibility for technology transfer as an internal corporate process. LIDS benefits from the persistent presence of industrial researchers, and Tellabs benefits from the leveraging of LIDS's staff. LIDS and Tellabs have been jointly working on this new research model for three years and look forward to its growth and refinement.


For some time now there has been considerable interest in algorithms for processing signals or images at multiple resolutions. A theory involving the so-called wavelet transform has recently been developed for the deterministic representation of signals at multiple resolutions, and this has sparked a considerable response from the research community in exploring potential applications in a variety of areas ranging from computer vision to the fusion of multispectral measurements. An essential element in the development of a systematic methodology for the design of multiscale algorithms is a statistical theory for multi-resolution signals. Continuing efforts to develop and apply such a theory are underway by Professor Alan S. Willsky and Dr. Hamid Krim, together with their students and a group of researchers in Rennes, France. Through other collaborations with Professor Stéphane Mallat, attempts are being made at the fusion of statistical and wavelet-based approaches to signal processing. The initial results that have been obtained, together with the considerable attention these topics are receiving from the research community, give reason to believe that this will be an extremely fruitful area for some years to come. In addition, considerable progress has been made towards a unifying framework connecting so-called scale-space analysis with wavelet and wavelet-packet-based denoising. The robustness of a newly introduced class of edge enhancement algorithms has been demonstrated both theoretically and experimentally, and the results are extremely promising.
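The basic mechanism of wavelet-based denoising can be sketched with a single-level Haar transform and hard thresholding. The signal, noise level, and threshold below are illustrative and are not drawn from the work described above.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform: averages and details."""
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)
    det = (x[0::2] - x[1::2]) / np.sqrt(2)
    return avg, det

def inverse_haar_step(avg, det):
    """Exact inverse of haar_step."""
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2)
    x[1::2] = (avg - det) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Hard-threshold the detail coefficients: small details are
    treated as noise and set to zero before reconstruction."""
    avg, det = haar_step(x)
    det[np.abs(det) < threshold] = 0.0
    return inverse_haar_step(avg, det)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 32)            # piecewise-constant signal
noisy = clean + 0.1 * rng.normal(size=clean.size)
print(np.linalg.norm(denoise(noisy, 0.3) - clean)
      < np.linalg.norm(noisy - clean))
```

Because a piecewise-constant signal concentrates its energy in few coefficients while noise spreads evenly across all of them, thresholding the details removes noise while largely preserving the signal.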


Part of the research of Professor Alan S. Willsky and his students on multidimensional statistical signal processing and estimation focuses on the development of efficient and highly parallelizable algorithms for multidimensional estimation. Essentially all multidimensional estimation problems involve potentially explosive computational demands, caused both by working in multiple spatial dimensions and by the lack of an obvious notion of causality, and hence of recursion (in contrast to traditional time series analysis). Results obtained so far deal with new notions of recursive computation in multiple dimensions; parallelization achieved by data partitioning and a generalization of the technique of nested dissection for solving partial differential equations; and effective methods for handling processes that evolve in time as well as in space. In particular, in the realm of multidimensional spatial processing a new approach using radial recursion of boundary conditions has been developed and demonstrated to yield efficient and accurate solutions. This approach may then be coupled with spatial data partitioning to produce a new class of multidimensional parallel processing algorithms.

In the area of space-time processes, a key contribution of this work has been the recognition that the major computational problem in space-time filtering, namely that of calculating error covariance functions for predicted spatial fields and using these covariances in the incorporation of new measurements, can be viewed as a problem of statistical modeling of random fields. This insight has led to very effective methods for reduced-order modeling allowing the solution of space-time estimation problems of very high dimension. Novel approaches have been developed to the dynamic computation of spatial Markov random field models as they evolve in time, and for the use of these models for the efficient assimilation of spatial data over time. Areas of application for these methodologies are currently being explored, including such areas as global ocean circulation estimation, remote sensing, real-time image sequence analysis, multidimensional filter design and analysis, and fast partial differential equation solution and simulation.


Professor Alan S. Willsky and his students are engaged in research on the estimation and reconstruction of geometric features in multidimensional data. Interest in this area is motivated by the need to develop radically different methods for problems in which the focus is on geometric rather than pixel-by-pixel features. Major contributions of this work include the development of a methodology for the reconstruction of 3-D objects from their 2-D silhouettes, and the tracking of objects with time-varying shape. This methodology has been tested in practice, resulting in a new approach to temporal imaging of the heart given very low-dose (and thus low SNR) imagery. New approaches have also been developed to the characterization and parameterization of geometrically-described random fields and to the direct extraction of geometric information from tomographic data -- providing new algorithms in computational geometry that directly accommodate, and hence are robust to, uncertainties and errors in the observed data. These methods are applied to such problems as the estimation and tracking of geophysical and oceanic features from sparse data, the nondestructive evaluation of materials, the detection and quantification of atherosclerotic plaques, and the evaluation of cardiac structure and function. Extension and fusion of these methods with multiresolution approaches promise to allow multiscale descriptions of object geometry.


Multivariable and Robust Control

The systematic design of multiple-input, multiple-output systems, using a unified time-domain and frequency-domain framework to achieve accurate performance in the presence of plant and input uncertainty, is an extremely active research area in the Laboratory. Various theoretical and applied studies are being carried out by Professors Michael Athans, Munther A. Dahleh, Gunter Stein, and their students. Theoretical research deals with issues of robustness, aggregation, and adaptive control. The aim of the research is to develop a computer-aided design environment for the design of control systems which can address general performance objectives for various classes of uncertainty. Furthermore, new results on the robustness of nonlinear feedback systems, using feedback linearization, have been obtained for unstructured uncertainty model errors. Recent application-oriented studies include the control of large space structures, helicopters, submarine control systems, issues of integrated flight control, control of chemical processes and distillation columns, and automotive control systems.

In related research, the quick maturation of robustness concepts when applied to linear systems has led Professor Eric Feron to redirect his efforts in this area towards other classes of systems. In particular, Professor Feron is now interested in systematic analysis and design methods for the robust nonlinear control of systems subject to "hard" nonlinearities such as actuators with position and rate saturations, as well as other nonlinear systems. The main tools for robust stability and performance analysis are Lyapunov's stability theory as well as the theory of approximation of difficult, nonconvex problems via positive semidefinite programming. The current applications of this research include Unmanned Aerial Vehicle (UAV) control as well as vehicle anticollision problems arising in Air Traffic Control.

Feedback Control Using Approximate Dynamic Programming

Feedback controllers for nonlinear systems are often driven by potential (Lyapunov) functions, whereby the controller at each step steers the system in a direction of decrease of the potential function. The optimal cost-to-go function that results from dynamic programming formulations of control problems is a suitable such Lyapunov function, except that it may be difficult to compute. This research investigates whether recent approximate dynamic programming methods, which rely extensively on simulation and neural network training, can lead to a viable methodology for designing Lyapunov functions and controllers for nonlinear feedback systems. This research is carried out by Professors Munther A. Dahleh and John N. Tsitsiklis, and their students.

Identification And Adaptive Control

Determining the fundamental limitations and capabilities of identification and adaptive control is an active area of research, carried out by Professors Munther A. Dahleh, John N. Tsitsiklis, and their students. This research program draws upon areas such as information-based complexity theory and computational learning theory, as well as upon the theory of robust control. One aim of this research is to develop a theory that combines both system identification and robust control within the same framework, in which a controller that meets given performance specifications can be designed based on finite noisy data. Issues studied include the representation of uncertainty in both noise and model, design of experiments, sample and computational complexity, as well as implementation of optimal algorithms.


Professor Alexandre Megretski and his students are working on the development of new methods of nonlinear system analysis and the application of these techniques to various control systems (flight control, firm control, animation control, hybrid systems, etc.). The work involves a broad spectrum of system-theoretic topics including modelling, identification, stability analysis, and optimization. One important objective is to learn how simplifications necessarily made in nonlinear system modelling affect the validity of nonlinear control design.


Hybrid systems are compositions of continuous systems (described by ordinary differential equations) and discrete systems that are event-driven. A theory of optimal control of such systems, based on the theory of impulse control and piecewise-deterministic processes, has been developed by Professor Sanjoy K. Mitter in collaboration with Professor Michael Branicky, currently at Case Western Reserve University, Professor Vivek Borkar, visiting from the Indian Institute of Science, and Dr. Nicola Elia, a post-doctoral scientist. Numerical methods for the dynamic programming inequalities arising out of the optimality conditions for these systems have also been developed. Incorporation of the model in the simulation package OMULA/OMSIM has been undertaken in joint work with Professor Astrom and his group at Lund, Sweden. Professor Mitter has been working with Professors Borkar and Chandru of the Indian Institute of Science, Bangalore, on solving problems in logic using mathematical programming. It is planned to unify this work with the previously mentioned work on hybrid systems.


This project focuses on analytical and computational methods for solving broad classes of optimization problems arising in engineering and operations research, as well as for applications in communication networks, control theory, power systems, computer-aided manufacturing, and other areas. Currently, in addition to traditional subjects in nonlinear and dynamic programming, there is an emphasis on the solution of large-scale problems involving network flows, as well as on the application of decomposition methods. Professors Dimitri P. Bertsekas and John N. Tsitsiklis and their students perform this work.


Problems of sequential decision making under uncertainty are all-pervasive; for example, they arise in the contexts of communication networks, manufacturing systems, logistics, and in the control of nonlinear dynamical systems. In theory, such problems can be addressed using dynamic programming techniques; in practice, however, only problems with a moderately-sized state space can be handled. This research effort deals with the application of neural networks and other approximation and interpolation methodologies to overcome the curse of dimensionality of real-world stochastic control problems. The objectives driving this research are twofold. First, to develop the theoretical foundations and improve the understanding of such methods, using a combination of tools from approximation theory, dynamic programming, and stochastic algorithms. Second, to use these methods for solving some large-scale problems of practical interest. Application areas being currently investigated include problems in logistics (resource scheduling and assignment), finance (pricing of high-dimensional derivative instruments), and communications (dynamic channel allocation). Professors Dimitri P. Bertsekas and John N. Tsitsiklis and their students perform this work.
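The motivation can be made concrete with a small illustration (the problem and features below are hypothetical): an exact dynamic programming table over a d-dimensional grid grows exponentially with d, while a feature-based approximation of the cost-to-go keeps a fixed number of parameters fitted from sampled states.

```python
import numpy as np

# The curse of dimensionality: an exact DP table over a d-dimensional
# grid with 100 points per dimension grows as 100**d.
grid_points_per_dim = 100
for d in (2, 4, 8):
    print(d, grid_points_per_dim ** d)

# A feature-based architecture J(x) ~ w . phi(x) instead keeps a fixed
# number of parameters.  Here we fit quadratic features by least
# squares to a stand-in cost-to-go (chosen for illustration so that
# the fit is exact).
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 8))    # 500 sampled 8-D states
target = (X ** 2).sum(axis=1)            # stand-in cost-to-go values
phi = X ** 2                             # quadratic features
w, *_ = np.linalg.lstsq(phi, target, rcond=None)
print(np.allclose(w, np.ones(8)))        # the 8 weights are recovered
```

In the actual research the stand-in target is replaced by simulation-based estimates of the cost-to-go, and the linear architecture by a neural network.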


The problem of inferring information about complex objects in a natural scene from visual information requires appropriate models to represent the available knowledge, and efficient inference algorithms capable of achieving acceptable guaranteed performance in the face of the variability and unpredictability of the environment.

To incorporate the available knowledge about the relationship between objects and data without introducing arbitrary assumptions, one can adopt a coarse probabilistic approach where the class of measurable events is somehow restricted and uncertainty in the probability measure itself is explicitly represented. This leads to a formulation of the inference problem which contains ingredients both from classical Bayesian estimation and from robust set-based approaches based on the sup norm. Coarse probabilistic models can represent model uncertainty in a natural way and, at the same time, capture the information about the relative probabilities of events.

In developing efficient and robust algorithms it is important to exploit the hierarchical and distributed nature of the real world. Complex objects are composed of parts which can be further decomposed into smaller parts (hierarchical nature). Moreover, a typical scene contains elements which influence the data independently from each other (distributed nature). This suggests that the description of the world generated by the inference algorithm (and the nature of computation itself) should also be hierarchical and distributed, that is, consisting of a pyramidal network of descriptors.

Hierarchical computation requires that uncertainty be represented explicitly, for the simpler and more local knowledge modules used at the lower levels can only generate uncertain information, to be further elaborated by the higher levels. To this end, the classical single-state optimal estimators are inadequate whereas multi-state covering estimators seem more appropriate.

Recently, we have been developing a general mathematical framework to describe robust compositional inference systems and to design efficient algorithms achieving guaranteed performance. The major challenge is to obtain complete coverings of the hypothesis space without incurring combinatorial complexity during the aggregation and compositional stages. To secure a certain performance level, the coarse probabilistic model of the problem is transformed into a set-based system whose set of behaviors has high probability. Then, covering estimates are derived in a set-based setup. In general, the choice of the high-probability "safe" set of behaviors and the design of the covering algorithm are done in parallel.

The major application area for this framework is the problem of visual recognition of occluded targets in highly variable and cluttered environments. We claim that some problems usually considered belonging to the realm of signal-processing, and thus just a pre-processing stage of recognition (e.g. edge detection), should instead be treated as recognition problems themselves. In the last few years we have been developing a hierarchical and compositional edge detector based on a hierarchy of representations (from the bottom level up): raw image; tangent vectors; regular visible "shapeless" curves; general curves with shape and singularities; partially invisible curves; layered regions (representing occlusion). Descriptors at one level are composed "bottom-up" from descriptors at the previous levels, while "top-down" feedback loops ensure that global knowledge and constraints are incorporated. This work is supervised by Professor Sanjoy Mitter and Dr. Stefano Casadei.

Problems of speech recognition (speaker-independent), handwritten character recognition (on- and off-line), and robust vision system design have turned out to be much more difficult than originally thought, owing to the richness and variability of the data and the resulting complexity of the problem of representation. Moreover, the precise characterization of similarity between objects poses major challenges. Professor Sanjoy K. Mitter, Dr. Mohamad Akra and their students have recently proposed a new paradigm for pattern recognition, where primary emphasis is placed on representation. These ideas are being tested in the domains of character recognition, speech recognition, and image analysis. Current efforts consist of developing highly efficient algorithms using ideas of computational geometry and a computing environment where feasibility of multi-font character recognition using this methodology can be demonstrated. A patent incorporating these ideas has been obtained.


In prior work (Dabby, Chaos, AIP 1996), a chaos-based technique was designed for generating musical variations of an original work. The variations can be close to the original, can mutate almost beyond recognition, or can achieve degrees of variability between these two extremes. A virtually infinite set of variations is possible. The goal is to make music that changes from one hearing to the next --- not in random ways --- but rather by musical choice of the composer. Accordingly, the musical score becomes dynamic, not fixed. The technique employs two chaotic trajectories, each corresponding to a different set of initial conditions for the Lorenz system. These trajectories map the pitch sequence of a musical score into a variation based on the pitch events of the original piece. The mapping tempers the sensitive dependence of chaotic trajectories on initial conditions via two mechanisms --- linking and tracking --- to help the variations maintain a tie with the original. At present, the chaotic mapping has been extended to generate rhythmic, as well as pitch, variations. The chaotic mapping can also be used to infuse a given work with the attributes of another, e.g., Bach can metamorphose into Gershwin. The design reflects dynamic system concepts, especially those found in nonlinear and chaotic dynamics, coupled with the rich tradition of Western music theory. That the technique produces variations capable of being analyzed and used for musical means --- despite the highly context-dependent nature of music --- suggests the chaotic mapping might be applicable to other context-dependent sequences of symbols, e.g., symbol sequences from scanned art work. Algorithmic development for extension of the chaotic mapping to image and other applications is underway. This research was conducted by Diana Dabby, PhD EECS, MIT, and Visiting Assistant Professor of Music at Middlebury College.
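A schematic of the chaotic mapping, with the linking and tracking mechanisms of the published work simplified away: two Lorenz trajectories are generated from different initial conditions, the original pitches are paired with one trajectory, and the second trajectory selects pitches from that pairing. The tune, initial conditions, and crude integrator below are illustrative.

```python
import numpy as np

def lorenz_x(state, n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """x-coordinate of a forward-Euler Lorenz trajectory (a crude
    integrator, sufficient for a sketch)."""
    x, y, z = state
    out = []
    for _ in range(n):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        out.append(x)
    return np.array(out)

# An arbitrary illustrative pitch sequence (MIDI note numbers).
pitches = np.array([60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72])
n = len(pitches)

ref = lorenz_x((1.0, 1.0, 1.0), n)     # reference trajectory
var = lorenz_x((-1.0, 3.0, 4.0), n)    # a different set of initial conditions

# Pair the i-th pitch with the i-th reference x-value; each point of the
# new trajectory then selects the pitch paired with the nearest reference
# x-value, so the variation reuses and reorders the original's material.
variation = pitches[np.argmin(np.abs(ref[None, :] - var[:, None]), axis=1)]
print(variation)
```

Because every output pitch comes from the original sequence, the variation stays tied to the source work; how far it strays is governed by the choice of the second trajectory's initial conditions.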


The Center for Intelligent Control Systems (CICS) combines distinguished faculty from MIT, Harvard University, and Brown University in interdisciplinary research on the foundations of intelligent machines and intelligent control systems. Established in October 1986, CICS is headed by Professor Sanjoy Mitter, Director; Professor Roger Brockett, Harvard University, Associate Director; and Professor Donald McClure, Brown University, Associate Director. The research activities of the Center are loosely grouped in five areas: Signal Processing, Image Analysis, and Vision; Automatic Control; Mathematical Foundations of Machine Intelligence; Distributed Information and Control Systems; and Algorithms and Architectures. A number of outstanding graduate students are appointed Graduate Fellows. The Center also hosts several senior visitors for varying lengths of time each year.

Speakers in the Colloquium and Seminar Series included: Dr. Mohamad Akra, LIDS, MIT; Dr. David Forney, Motorola Inc. and LIDS, MIT; Dr. Richard A. Barry, MIT Lincoln Laboratory; Dr. Gerald S. Goodman, University of Florence; Professor Alan Willsky, LIDS, MIT; Dr. Albert Benveniste, INRIA-IRISA, Rennes, France; Peter Carr, Morgan Stanley; Dr. George D. Stamoulis, University of Crete; Dr. Demosthenis Teneketzis, University of Michigan; Professor Murad S. Taqqu, Boston University; Professor Balaji Prabhakar, LIDS, MIT; Professor W. Wonham, University of Toronto; Professor Ernst Dickmanns, Munich, Germany; Dr. Tom Richardson, Bell Labs; Dr. Frederic Guichard, Paris-Dauphine University; Professor Jitendra Malik, University of California, Berkeley; Professor David Tse, University of California, Berkeley; Professor Andy Lo, MIT; Professor Pierre A. Humblet, Eurecom Institute; Dr. David Clark, MIT; Professor John Baras, University of Maryland; Professor Eduardo D. Sontag, Rutgers University; Professor P. R. Kumar, University of Illinois; and Professor Nick McKeown, Stanford University.


Visitors to the Laboratory for Information and Decision Systems included: Professor John S. Baras, University of Maryland; Dr. Agnes H. Chan, Northeastern University; Dr. Eugene Feinberg, Northeastern University; Dr. Gunter Stein, Honeywell; Dr. Jun Zhang, University of Wisconsin.

In July 1997, Professor Athans delivered an invited plenary talk on Issues on Robust Control at the Robust Control Conference in Budapest, Hungary.

In July 1997, Professor Athans delivered an invited plenary talk on How to Teach Multivariable Control Systems at the IFAC Control Education Conference in Istanbul, Turkey, and another on Progress in Multivariable Control Synthesis at the Conference on Intelligent Systems, also in Istanbul, Turkey.

In August 1997, Professor Athans delivered an invited paper on Multivariable Robust Control at the IEEE Mediterranean Conference, Paphos, Cyprus.

From September 1997 to January 1998, Professor Athans was a Visiting Scientist at the Institute for Systems and Robotics, Instituto Superior Tecnico, Lisbon, Portugal, where he presented a series of lectures on Robust Multivariable Control Synthesis to interested faculty and students. He also presented an invited seminar, Control Theory for Dummies, at the Department of Computer Science of the New University of Lisbon.

In May 1998, Professor Athans presented an invited keynote address, Distributed Decision Problems, at the World Automation Congress in Anchorage, Alaska.

Robert G. Gallager, Sanjoy K. Mitter
