MIT Department of Aeronautics and Astronautics

Aero-Astro Magazine Highlight

The following article appears in the 2007–2008 issue of Aero-Astro, the annual report/magazine of the MIT Aeronautics and Astronautics Department. © 2008 Massachusetts Institute of Technology.

Supervising automation: humans on the loop

By Mary "Missy" Cummings

The human link to the control mechanism becomes critical as systems grow larger, with increasing numbers of components and additional operators, such as in an air traffic control environment.

Missy Cummings and doctoral student Sylvain Bruni with the Humans and Automation Lab’s new Mobile Advanced Command and Control Station, a portable testbed for human supervisory control research. (William Litant photograph)


My primary research focus is the field of human supervisory control (HSC): intermittent human operator interaction with a remote, automated system in order to manage a controlled process or task environment. Human supervisory control represents humans “on-the-loop,” as opposed to “in-the-loop.” Example human supervisory control settings include air traffic control, process control, military and space command and control, crisis response management, and unmanned vehicle operations. With the rapid expansion of automated technology in everyday settings, human supervisory control is extending to medicine, driving, and business and commercial applications. I became interested in this field as the pilot of a single-seat Navy F/A-18, a highly automated and often difficult-to-understand aircraft with extremely narrow margins for mistakes. It was clear to me that future pilots will really be automation managers, and that principled research is needed to determine more effective forms of interaction.

Human supervisory control is an interdisciplinary field that includes: 1) the psychology of human decision making, which is critical in high-risk, time-pressured systems, and often the limiting factor in system success; 2) computer science, specifically the design of algorithms (and the resulting automation), as well as the aural, visual, and haptic interfaces that communicate with the operator; and 3) the engineering of the system that executes the task (e.g., it is important that a designer of a UAV cockpit understand how latencies in communication systems can degrade the human's understanding and execution of a control mechanism). In addition, it is imperative to understand the systems engineering implications, as the human is linked, via various subsystems, to the physical control mechanism. This aspect becomes even more important as systems grow larger, with increasing numbers of components and additional operators, such as in an air traffic control environment.

To facilitate my research, I formed the Humans and Automation Lab — HAL — under the auspices of the MIT Aeronautics and Astronautics Department.

Critical supervisory control areas

The research I conduct within the human supervisory control domain primarily falls within two areas: decision support design and human-system performance evaluation. Both areas are critical for human supervisory control system development, since decision support tools allow operators not only to understand complex system states, but also to understand how to interact with automated agents. Equally important is the development of metrics and tools for HSC assessment, to ensure that design interventions improve not only human performance, but system performance as well. A significant part of this aspect of my research is the development of stochastic simulation-based models that integrate both human and system behavior. These models are particularly useful in determining the upper limits of human performance that will significantly impact overall system performance.
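To make the idea of a stochastic human-system model concrete, here is a minimal sketch of the kind of simulation involved: a single operator supervising several vehicles, with Poisson task arrivals and exponentially distributed attention times. The function name and all parameter values are illustrative assumptions, not HAL's actual models, which are considerably richer.

```python
import random

def simulate(num_vehicles, arrival_rate, mean_service, deadline, horizon, seed=0):
    """Monte Carlo sketch of one operator supervising several vehicles.

    Each vehicle generates tasks as a Poisson process; the operator serves
    tasks one at a time, first-come first-served. Returns the fraction of
    tasks the operator begins attending to before their deadline -- a
    crude stand-in for overall system performance.
    """
    rng = random.Random(seed)
    # Generate each vehicle's task arrival times over the mission horizon.
    arrivals = []
    for _ in range(num_vehicles):
        t = rng.expovariate(arrival_rate)
        while t < horizon:
            arrivals.append(t)
            t += rng.expovariate(arrival_rate)
    arrivals.sort()

    operator_free_at = 0.0
    met = 0
    for t in arrivals:
        start = max(t, operator_free_at)        # task waits if operator is busy
        operator_free_at = start + rng.expovariate(1.0 / mean_service)
        if start - t <= deadline:               # attended to in time?
            met += 1
    return met / len(arrivals) if arrivals else 1.0

# Performance degrades as one operator supervises more vehicles:
for n in (1, 2, 4, 8):
    print(n, round(simulate(n, arrival_rate=0.1, mean_service=3.0,
                            deadline=5.0, horizon=1000.0), 2))
```

Sweeping `num_vehicles` in a model like this is one way to estimate the point at which operator workload, rather than vehicle capability, becomes the binding constraint on system performance.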

In terms of decision support design, I am developing scheduling decision support tools for operators managing multiple complex tasks; specifically, multiple unmanned aerial vehicles. This effort examines the impact of increasing levels of automation on an operator’s ability to manage multiple complex tasks. Adding more automation to a system is not necessarily better, since it can cause operator complacency as well as confusion among different modes of operation. I am examining how to visually represent time-critical, uncertain data with low cognitive overhead, such that operators can more easily understand the effects of their actions on the current system, as well as the probable consequences of their actions on future system states.

I am also investigating innovative decision support tool designs that span both visual and aural displays. This includes NASA-sponsored research to determine how to provide effective path planning decision support for astronauts conducting traversals on the moon and Mars, as well as another NASA-sponsored effort to design the new lunar lander cockpit. The lunar lander display effort led to the development of a heads-up display component for vertically landing aircraft called the Vertical Altitude and Velocity Indicator. This display integrates information from multiple data sources in an easily understood format that promotes improved pilot performance while reducing training time. Displays involve more than visual components; my laboratory recently completed two studies focused on the HSC performance impact of sonifications (aural cues combined to represent numerical data for streamlined cognitive processing) and haptic cues (pressure vests and vibration sleeves).

Assessing human and system performance

Several projects are also underway to develop better metrics for human-system performance. In a research effort with Lincoln Laboratory, we have developed a set of three metric classes — attention allocation efficiency, interaction efficiency, and neglect efficiency — that we propose, when taken together, can comprehensively assess human and system performance. This distinction is important because human factors studies typically focus on human performance alone, not on the overall impact of human control processes on the system state. In general, linking human and system performance beyond reaction time, subjective effort, and performance-based metrics has been difficult for researchers in the past, but our research shows promise and could potentially revolutionize the way supervisory control systems are evaluated in the future. My goal is to develop metrics that can be used to assess human-system performance in real time, enabling more robust, fault-tolerant systems that allow operator flexibility in decision making without compromising system safety.
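Metric classes like these are computed from logged operator activity. As a toy illustration of the simpler interaction/neglect quantities such metrics build on, the sketch below derives a few statistics from a hypothetical log of hands-on intervals; the function name, log format, and returned ratios are assumptions for illustration, not the Lincoln Laboratory definitions.

```python
def supervisory_metrics(interactions, mission_length):
    """Derive simple interaction/neglect statistics from an operator log.

    `interactions` is a list of (start, end) times when the operator was
    actively controlling a vehicle; everything outside those intervals is
    neglect time, when the vehicle operates unattended.
    """
    intervals = sorted(interactions)
    busy = sum(end - start for start, end in intervals)

    # Gaps between interactions: periods the vehicle runs unattended.
    gaps = []
    prev_end = 0.0
    for start, end in intervals:
        if start > prev_end:
            gaps.append(start - prev_end)
        prev_end = max(prev_end, end)
    if prev_end < mission_length:
        gaps.append(mission_length - prev_end)

    return {
        "interaction_ratio": busy / mission_length,  # share of time hands-on
        "mean_neglect_time": sum(gaps) / len(gaps) if gaps else mission_length,
        "num_interactions": len(intervals),
    }

m = supervisory_metrics([(0, 10), (30, 40), (70, 75)], mission_length=100)
print(m)  # interaction_ratio 0.25, mean_neglect_time 25.0, num_interactions 3
```

Computing such quantities live, rather than after the fact, is what a real-time human-system assessment capability would require.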

Another important effort in terms of evaluation and metric development is the design of the Tracking Resource Allocation Cognitive Strategies (TRACS) tool, originally designed for an Office of Naval Research project. TRACS allows the correlation of decision strategies with objective and subjective performance measures in resource allocation tasks, such as deploying a network of vehicles to deliver time-critical payloads, to determine the bounds of robust decision-making. It can also show where and how particular designs may or may not adequately support decision-making processes. While TRACS currently depicts a post-hoc visual representation of a user’s decision-making processes while interacting with a multivariate optimization-based planning decision-support system, it is being extended to predict, with some degree of uncertainty, when the performance of a decision support system user might degrade, and what the overall impact on the system would be. Such a tool would be helpful to supervisors of groups of operators, such as the supervisor of air traffic controllers.

HAL research extends to many other domains, such as submarine and warehousing command and control center design, system architecture decision support tools, and the development of integrated displays for automobiles. The volume and growth rate of this research are indicative of an overall systemic problem: even the most elegantly designed systems will perform below expectations, or fail, if human interactions are not taken into account. With the ever-growing demand for human-systems modeling-based approaches that enable the design and evaluation of human supervisory control systems, the outlook for my research and HAL is bright.


Missy Cummings and student Sylvain Bruni work with innovative command and control mobile computer and communications equipment within the Humans and Automation Lab’s Mobile Advanced Command and Control Station. (William Litant photograph)

Mary (Missy) Cummings is an Associate Professor in Aeronautics and Astronautics at MIT and the director of the Humans and Automation Laboratory. She received her B.S. in Mathematics from the U.S. Naval Academy (1988), her M.S. in Space Systems Engineering from the Naval Postgraduate School (1994), and her Ph.D. in Systems Engineering from the University of Virginia (2003). A naval officer and military pilot from 1988–1999, she was one of the Navy's first female fighter pilots. Her teaching experience includes instructing for the Navy at Pennsylvania State University and serving as an assistant professor in the Virginia Tech Engineering Fundamentals Division. Her research interests include human interaction with autonomous vehicle systems, human supervisory control, direct-perception interaction decision support design, human-computer interaction, and the ethical and social impact of technology. Missy Cummings may be reached at

Massachusetts Institute of Technology, 77 Massachusetts Avenue, 33-207, Cambridge, MA 02139
