MIT Department of Aeronautics and Astronautics

AeroAstro Highlight

The following article appears in the 2010–2011 issue of AeroAstro, the annual report/magazine of the MIT Aeronautics and Astronautics Department. © 2011 Massachusetts Institute of Technology.

HUMAN-AUTOMATION COLLABORATION PRESENTS POSSIBILITIES UNATTAINABLE BY EITHER ALONE

By Mary “Missy” Cummings, Jonathan P. How, and Brian Williams

While we humans are capable of complex — even astounding — tasks and feats, we have known since the earliest days of mechanization that we can employ machines to extend human abilities, making it possible to do things faster and better.

In the AeroAstro Humans and Automation Lab, Professor Missy Cummings and research assistant Jason Ryan work with the Deck operations Course of Action Planner, a virtual representation of aircraft carrier deck activity that, by partnering human and computer abilities, could greatly enhance planning tasks in a chaotic environment. (William Litant/MIT photo)

Most people today are familiar with automated vehicles, such as aircraft drones, that require one or more people to control a single machine. In the future, however, we will see more and more systems in which a small team, or even a single individual, oversees a network of automated “agents.” In these cases, which involve multiple vehicles traversing random, dynamic, time-pressured environments, the team or individual overseer is not humanly capable of the rapid and complex path planning and resource allocation required: they need automated planning assistance. However, such planning systems can be brittle and unable to respond to emergent events. Enter a human/machine planner partnership, known as “humans-in-the-loop,” in which operators contribute their knowledge-based reasoning and experience to enhance the automated planners’ abilities.

While numerous studies have examined the ability of underlying automation (in the form of planning and control algorithms) to control a network of heterogeneous unmanned vehicles (UxVs), a significant limitation of this work is a lack of investigation of critical human-automation collaboration issues. Researchers in AeroAstro’s Humans and Automation Laboratory (HAL), the Aerospace Controls Laboratory (ACL), and Model-based Embedded and Robotic Systems (MERS) are investigating these issues in several domains.

EXPEDITIONARY MISSIONS

ACL, HAL, and Aurora Flight Sciences have developed the Onboard Planning System for UxVs Supporting Expeditionary Reconnaissance and Surveillance (OPS-USERS), which provides a planning framework for a team of autonomous agents, under human supervisory control, participating in expeditionary missions that rely heavily on intelligence, surveillance, and reconnaissance. The mission environment contains an unknown number of mobile targets, each of which may be friendly, hostile, or unknown. The mission scenario is multi-objective, and includes finding as many targets as possible, keeping accurate position estimates of unknown and hostile targets, and neutralizing the latter. It is assumed that static features in the environment, such as terrain type, are known, but dynamic features, such as target locations, are not.

Given a decentralized task planner and a goal-based operator interface for a network of unmanned vehicles in a search, track, and neutralize mission, this research demonstrated that humans guiding these decentralized planners improved system performance by up to 50 percent. However, those tasks that required precise and rapid calculations were not significantly improved with human aid. Thus, there is a shared space in such complex missions for human-automation collaboration.
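This division of labor, in which the automation handles rapid numeric optimization while the operator injects goals and priorities, can be illustrated with a toy allocation loop. The Python sketch below is not the OPS-USERS implementation; the Vehicle and Target structures, the scoring rule, and the greedy assignment are hypothetical stand-ins meant only to show how an operator-supplied priority weight might bias which targets the vehicles pursue.

```python
# Illustrative sketch only: a toy greedy task-allocation loop in which an
# operator-supplied priority weight biases which targets a team of unmanned
# vehicles pursues. All names here are hypothetical, not OPS-USERS code.
from dataclasses import dataclass
import math


@dataclass
class Vehicle:
    name: str
    x: float
    y: float


@dataclass
class Target:
    name: str
    x: float
    y: float
    operator_priority: float  # 1.0 = default; higher means the human flagged it as important


def score(vehicle: Vehicle, target: Target) -> float:
    """Higher is better: prefer nearby targets, scaled by the operator's priority."""
    distance = math.hypot(vehicle.x - target.x, vehicle.y - target.y)
    return target.operator_priority / (1.0 + distance)


def allocate(vehicles: list[Vehicle], targets: list[Target]) -> dict[str, str]:
    """Greedy one-target-per-vehicle assignment (a stand-in for the real
    decentralized planner, which negotiates assignments among the vehicles)."""
    assignment: dict[str, str] = {}
    remaining = list(targets)
    for vehicle in vehicles:
        if not remaining:
            break
        best = max(remaining, key=lambda t: score(vehicle, t))
        assignment[vehicle.name] = best.name
        remaining.remove(best)
    return assignment


if __name__ == "__main__":
    uxvs = [Vehicle("UAV-1", 0.0, 0.0), Vehicle("UGV-1", 5.0, 5.0)]
    tgts = [Target("unknown-A", 1.0, 1.0, 1.0), Target("hostile-B", 6.0, 4.0, 2.0)]
    print(allocate(uxvs, tgts))  # e.g. {'UAV-1': 'unknown-A', 'UGV-1': 'hostile-B'}
```

In a sketch like this, raising a target's priority is the operator's lever: the automation still does the arithmetic, but the human steers which objectives dominate.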

AIRCRAFT CARRIER DECK OPERATIONS

A second application domain for humans-in-the-loop collaboration involves the complex world of aircraft carrier deck operations. Into this already chaotic environment of human-piloted airplanes, helicopters, support vehicles, and crew members, the military is now introducing unmanned aerial vehicles (UAVs), further complicating the choreography in a field of restricted real estate. Currently, deck operation planning tasks are performed by human operators using relatively primitive support tools. In fact, a primary tool, colloquially known as the “Ouija Board,” involves pushing tiny model planes around a table on which a scaled deck is outlined. Thanks to the expertise of key human decision makers, this approach works, but it is sometimes inefficient. Given the desire to improve and streamline operations that will involve UAVs, decision makers need real-time decision support to manage the vast and dynamic set of variables in this complex resource allocation problem.

More than a decade into the 21st century, aircraft carrier handlers’ primary tool, the “Ouija Board,” consists of tiny model planes that they push around a table. It works, but performance is limited. (US Navy photo)

The Deck operations Course of Action Planner (DCAP), a collaboration among AeroAstro professors Cummings, How, Roy, and Frazzoli, their students, and Randy Davis of EECS/CSAIL, is a decision support system for aircraft carrier contingency planning. DCAP is a collaborative system, using both a human operator and automated planning algorithms to create new operating schedules for manned and unmanned vehicles on the carrier deck and in the air approaching the carrier. To facilitate operator situational awareness and communication between the operator and the automation, a visual decision support system has been created, consisting of a virtual deck, people, and vehicles projected on a table-top display. DCAP allows human decision makers to guide the automated planners in developing schedules. The system supports a range of operator decision heuristics, which work well when carrier operations are straightforward, with few contingencies to manage. However, when multiple failures occur and the overall system, both in the air and on the deck, is stressed by unexpected problems such as catapult failures, overall performance is enhanced by allowing the automation to aid the operator in monitoring for safety violations and making critical decisions.
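As a concrete, if much-simplified, illustration of the monitoring role the automation plays, the Python sketch below checks a proposed launch schedule for overlapping use of a catapult and reports the overall mission duration. The Task structure and the function names are hypothetical; they are not DCAP's actual data model or algorithms.

```python
# Illustrative sketch only: a minimal feasibility check on a proposed deck
# schedule (one aircraft per catapult at a time), the kind of safety monitoring
# a DCAP-style planner could run on top of an operator's hand-built plan.
# Task fields and names are hypothetical.
from dataclasses import dataclass


@dataclass
class Task:
    aircraft: str
    catapult: int     # which of the four launch catapults
    start: float      # minutes from now
    duration: float   # minutes


def find_conflicts(schedule: list[Task]) -> list[tuple[str, str, int]]:
    """Return pairs of aircraft whose launch windows overlap on the same catapult."""
    conflicts = []
    for i, a in enumerate(schedule):
        for b in schedule[i + 1:]:
            same_resource = a.catapult == b.catapult
            overlaps = a.start < b.start + b.duration and b.start < a.start + a.duration
            if same_resource and overlaps:
                conflicts.append((a.aircraft, b.aircraft, a.catapult))
    return conflicts


def makespan(schedule: list[Task]) -> float:
    """Overall mission duration: completion time of the last task."""
    return max(t.start + t.duration for t in schedule)


if __name__ == "__main__":
    proposed = [
        Task("F/A-18 #101", catapult=1, start=0, duration=4),
        Task("F/A-18 #102", catapult=1, start=3, duration=4),  # overlaps on catapult 1
        Task("UAV #201", catapult=2, start=0, duration=5),
    ]
    print("conflicts:", find_conflicts(proposed))
    print("mission duration (min):", makespan(proposed))
```

The real system must reason over far more resources and contingencies, but the basic pattern is the same: the human proposes, and the automation continuously checks feasibility and flags violations the operator might miss under stress.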


A detailed look at the Deck operations Course of Action Planner as the aircraft carrier “handler” would see it. Here, the operator has submitted to the planning algorithm priority rankings for four personnel groups, and desired schedules and priorities for individual aircraft. Along the bottom edge of the image is a pair of Deck Resource Timelines. Each of these contains five timelines showing the allocation of tasks to the four launch catapults and the landing strip. The upper half shows the current resource allocations; the bottom half shows the proposed allocation, allowing the user to quickly identify what changes have been made in the schedules. The same convention is used on the right-hand side of the screen in the Aircraft Schedule Panel. The ASP shows individual timelines of operation for all aircraft in the system. For each aircraft, two timelines are again stacked on top of one another, with the current schedule on top and the proposal on the bottom. This lets users make a quick visual pass over the data and review how the planner is suggesting the schedules be changed, or how changes in the dynamics of the system, due to failures or delays, have affected schedules. Another important item is the multicolored diamond in the upper right area of the screen. This is the Disruption Visualization Tool, which displays relative changes in personnel group workload between the current and proposed schedules. Smaller green triangles indicate that a group will take less time to perform its task assignments in the proposed schedule; larger red triangles indicate that more time is required. At this time, the DVT shows workload because it is the main criterion that the scheduling algorithm optimizes: the planner attempts to minimize both overall mission duration (time to complete all tasks) and the workload of individual personnel groups.
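The comparison the Disruption Visualization Tool displays boils down to a before-and-after workload calculation per personnel group. The snippet below, with made-up crew names, task times, and color rules (none of them taken from DCAP), shows the idea.

```python
# Illustrative sketch only: the before/after workload comparison a Disruption
# Visualization Tool could display. Crew names, times, and the color rule are
# invented for this example, not drawn from DCAP.
current_minutes = {"catapult crew": 40, "fuel crew": 25, "ordnance crew": 30}
proposed_minutes = {"catapult crew": 35, "fuel crew": 25, "ordnance crew": 45}

for group, now in current_minutes.items():
    later = proposed_minutes[group]
    delta = later - now
    if delta < 0:
        color = "green (less work in the proposed schedule)"
    elif delta > 0:
        color = "red (more work in the proposed schedule)"
    else:
        color = "unchanged"
    print(f"{group}: {now} -> {later} min ({delta:+d}) -> {color}")
```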

 

PERSONAL AIR VEHICLES

Personal air vehicles are a vision of aviation’s future popularized in the early 1960s when George Jetson packed his family and dog into his famous clear-domed car and, at the push of a button, took to the skies over Orbit City. After decades of less-than-successful plans and prototypes, companies like the MIT spin-off Terrafugia are making this vision a reality by offering vehicles that can both fly through the air and drive down the road. To fly these vehicles, one must be a certified pilot, thus limiting the population that can benefit from this innovative concept.


Transition, an airplane that doubles as a car, is an example of a personal air vehicle that could benefit from advanced human/computer interaction. Now in preproduction, the vehicle was conceived by its designers while they were AeroAstro grad students. (Terrafugia photo)

The MERS group has demonstrated in simulation the concept of an autonomous personal air vehicle, called PT, in which passengers interact with the vehicle in the same manner that they interact today with a taxi driver. To interact with PT, passengers speak their goals and constraints; for example, “PT, I would like to go to Hanscom Field now, and we need to arrive by 4:30. Oh, and we’d like to fly over Yarmouth, if that’s possible. The Constitution is sailing today.” PT checks the weather, plans a safe route, and identifies alternative landing sites, in case an emergency landing is required. In the event that the passenger’s goals can no longer be achieved, PT presents alternatives. PT might say, “A thunderstorm has appeared along the route to Hanscom. I would like to re-route to avoid the thunderstorm. This will not provide enough time to fly over Yarmouth and still arrive at Hanscom by 4:30. Would you like to arrive later, at 5, or skip flying over Yarmouth?” In the future, PT will be able to reason about user preference, and will be able to ask the user probing questions that will help her identify the best options.
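The exchange above amounts to checking whether the passenger's goals remain jointly satisfiable under temporal constraints and, when they do not, offering principled relaxations. The short Python sketch below illustrates that logic with invented times and a single optional waypoint; it is a toy, not the MERS planner, which reasons over full route networks, weather, and contingencies.

```python
# Illustrative sketch only: checking whether an optional flyover still fits
# within an arrival deadline after a reroute, and proposing relaxations when
# it does not. All times and the relaxation logic are invented for this example.

def propose_options(deadline_min: int, direct_min: int, flyover_detour_min: int) -> list[str]:
    scenic_min = direct_min + flyover_detour_min
    if scenic_min <= deadline_min:
        return ["Fly over the waypoint and still arrive by the deadline."]
    # The goals conflict: relax one constraint at a time and offer the choices.
    options = []
    if direct_min <= deadline_min:
        options.append("Skip the flyover and arrive by the deadline.")
    options.append(
        f"Keep the flyover and arrive about {scenic_min - deadline_min} minutes after the deadline."
    )
    return options


if __name__ == "__main__":
    # 90 minutes until the requested arrival; the storm reroute takes 70 minutes
    # direct, or 105 minutes with the scenic detour.
    for option in propose_options(deadline_min=90, direct_min=70, flyover_detour_min=35):
        print("-", option)
```

With these numbers the sketch produces exactly the two choices PT offers in the dialogue: arrive later with the flyover, or skip it and arrive on time.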


Mary “Missy” Cummings is an associate professor in the MIT Aeronautics and Astronautics Department. She is the director of the Humans and Automation Lab. Her research interests are human supervisory control, human-unmanned vehicle interaction, bounded collaborative human-computer decision-making, decision support, information complexity in displays, and the ethical and social impact of technology. Missy Cummings may be reached at missyc@mit.edu.

Jonathan P. How is the MIT Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. Prior to joining the MIT faculty in 2000, he was an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. His research interests include the design and implementation of distributed, robust planning algorithms to coordinate multiple autonomous air/ground/space vehicles in dynamic, uncertain environments. Jon How may be reached at jhow@mit.edu.

Brian Williams is a professor and the undergraduate officer in the MIT Aeronautics and Astronautics Department, where he leads the Model-based Embedded and Robotic Systems group. His research interests include space and aerial robotics, cognitive robotics, automated reasoning and artificial intelligence, automation for operations and design, hybrid control systems, robot coordination, and energy management. Brian Williams may be reached at williams@mit.edu.
