AeroAstro Highlight

The following article appears in the 2010–2011 issue of AeroAstro, the annual report/magazine of the MIT Aeronautics and Astronautics Department. © 2011 Massachusetts Institute of Technology.

HUMAN-AUTOMATION COLLABORATION PRESENTS POSSIBILITIES UNATTAINABLE BY EITHER ALONE

By Mary “Missy” Cummings, Jonathan P. How, and Brian Williams

While we humans are capable of complex — even astounding — tasks and feats, we have known since the earliest days of mechanization that we can employ machines to extend human abilities, making it possible to do things faster and better.
Most people today are familiar with automated vehicles, such as aircraft drones, that require one or more people to control a single machine. But, in the future, we will see more and more systems in which a small team, or even a single individual, oversees a network of automated “agents.” In these cases, involving multiple vehicles traversing random, dynamic, time-pressured environments, the overseeing team or individual is not humanly capable of the rapid and complex path planning and resource allocation required: they need automated planning assistance. However, such planning systems can be brittle and unable to respond to emergent events. Enter a human/machine planner partnership, known as “humans-in-the-loop,” in which operators contribute their knowledge-based reasoning and experience to enhance the automated planners’ abilities. While numerous studies have examined the ability of the underlying automation (in the form of planning and control algorithms) to control a network of heterogeneous unmanned vehicles (UxVs), a significant limitation of this work is its lack of investigation of critical human-automation collaboration issues. Researchers in AeroAstro’s Humans and Automation Laboratory (HAL), the Aerospace Controls Laboratory (ACL), and the Model-based Embedded and Robotic Systems (MERS) group are investigating these issues in several domains.

EXPEDITIONARY MISSIONS

ACL, HAL, and Aurora Flight Sciences have developed the Onboard Planning System for UxVs Supporting Expeditionary Reconnaissance and Surveillance (OPS-USERS), which provides a planning framework for a team of autonomous agents, under human supervisory control, participating in expeditionary missions that rely heavily on intelligence, surveillance, and reconnaissance. The mission environment contains an unknown number of mobile targets, each of which may be friendly, hostile, or unknown.
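A core job of such a planning framework is deciding, again and again, which vehicle should pursue which target. The toy sketch below illustrates the flavor of that allocation problem; it is not the OPS-USERS planner. It uses a simple centralized greedy loop (a stand-in for the distributed negotiation a real decentralized planner performs), straight-line distance as the cost, and invented vehicle and target names.

```python
import math

def greedy_allocate(vehicles, targets):
    """Assign each target to the vehicle that can service it at lowest cost.

    vehicles: dict name -> (x, y) position
    targets:  dict name -> (x, y) position
    Returns dict target -> vehicle. Repeatedly picks the cheapest
    (vehicle, target) pair, then moves that vehicle to the target so later
    assignments account for its new position.
    """
    assignment = {}
    free_targets = dict(targets)
    while free_targets:
        best = None
        for t, tpos in free_targets.items():
            for v, vpos in vehicles.items():
                cost = math.dist(vpos, tpos)
                if best is None or cost < best[0]:
                    best = (cost, v, t)
        cost, v, t = best
        assignment[t] = v
        vehicles = {**vehicles, v: targets[t]}  # vehicle relocates to target
        del free_targets[t]
    return assignment

# Hypothetical two-vehicle, two-target scenario
fleet = {"uav1": (0.0, 0.0), "ugv1": (10.0, 0.0)}
tasks = {"track_A": (1.0, 1.0), "search_B": (9.0, 2.0)}
print(greedy_allocate(fleet, tasks))  # {'track_A': 'uav1', 'search_B': 'ugv1'}
```

Greedy allocation like this is fast but myopic; it is exactly the kind of precise, rapid calculation the research found automation handles well, while humans add value by reshaping goals when the situation changes.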
The mission scenario is multi-objective: it includes finding as many targets as possible, maintaining accurate position estimates of unknown and hostile targets, and neutralizing the hostile ones. It is assumed that static features of the environment, such as terrain type, are known, but dynamic features, such as target locations, are not. Given a decentralized task planner and a goal-based operator interface for a network of unmanned vehicles in a search, track, and neutralize mission, this research demonstrated that humans guiding these decentralized planners improved system performance by up to 50 percent. However, tasks that required precise and rapid calculations were not significantly improved with human aid. Thus, there is a shared space in such complex missions for human-automation collaboration.

AIRCRAFT CARRIER DECK OPERATIONS

A second application domain for humans-in-the-loop collaboration involves the complex world of aircraft carrier deck operations. Into this already chaotic environment of human-piloted airplanes, helicopters, support vehicles, and crew members, the military is now introducing unmanned aerial vehicles, further complicating the choreography in a field of restricted real estate. Currently, deck operation planning tasks are performed by human operators using relatively primitive support tools. In fact, a primary tool, colloquially known as the “Ouija Board,” involves pushing tiny model planes around a table on which a scaled deck is outlined. Thanks to the expertise of key human decision makers, this approach works, but it is sometimes inefficient. Given the desire to improve and streamline operations that will involve UAVs, decision makers need real-time decision support to manage the vast and dynamic set of variables in this complex resource allocation problem.
The Deck operations Course of Action Planner (DCAP) project, a collaboration among AeroAstro professors Cummings, How, Roy, and Frazzoli, their students, and Randy Davis of EECS/CSAIL, is a decision support system for aircraft carrier contingency planning. DCAP is a collaborative system that uses both a human operator and automated planning algorithms to create new operating schedules for manned and unmanned vehicles on the carrier deck and in the air approaching the carrier. To facilitate situational awareness and communication between the operator and the automation, a visual decision support system has been created, consisting of a virtual deck, people, and vehicles projected on a table-top display. DCAP allows human decision makers to guide the automated planners in developing schedules. The system supports a range of operator decision heuristics, which work well when carrier operations are straightforward, with few contingencies to manage. However, when multiple failures occur and the overall system, both in the air and on the deck, is stressed by unexpected problems such as catapult failures, overall performance is enhanced by allowing the automation to aid the operator in monitoring for safety violations and making critical decisions.
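To make the contingency-planning idea concrete, here is a deliberately simplified sketch of automated schedule repair after a catapult failure. It is not DCAP's planner: the function, the minimum-spacing rule, and the catapult/aircraft names are all invented for illustration. It reassigns displaced launches to surviving catapults and reports which launches were delayed, the kind of result an automated aid could flag for the human operator.

```python
def repair_launch_schedule(schedule, failed, min_gap=60):
    """Reassign launches from a failed catapult to the surviving ones.

    schedule: list of (time_s, catapult, aircraft) tuples.
    failed:   name of the catapult that went down.
    min_gap:  assumed minimum seconds between launches on one catapult.
    Returns (repaired_schedule, delayed) where delayed lists
    (aircraft, delay_s) entries the operator may want to review.
    """
    survivors = sorted({c for _, c, _ in schedule if c != failed})
    last = {c: float("-inf") for c in survivors}  # last launch per catapult
    repaired, delayed = [], []
    for t, cat, ac in sorted(schedule):
        if cat == failed:
            # Move to the survivor that frees up soonest
            cat = min(survivors, key=lambda c: last[c])
        new_t = max(t, last[cat] + min_gap)  # enforce launch spacing
        if new_t > t:
            delayed.append((ac, new_t - t))
        last[cat] = new_t
        repaired.append((new_t, cat, ac))
    return repaired, delayed

# Hypothetical example: cat2 fails with two launches still queued on it
plan = [(0, "cat1", "hornet1"), (0, "cat2", "hornet2"), (30, "cat2", "hornet3")]
new_plan, delays = repair_launch_schedule(plan, "cat2")
print(new_plan)  # all launches shifted onto cat1, 60 s apart
print(delays)    # [('hornet2', 60), ('hornet3', 90)]
```

A greedy repair like this is easy for automation to compute instantly; the human contribution DCAP supports is judging which of the resulting delays are operationally acceptable.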
PERSONAL AIR VEHICLES

Personal air vehicles are a vision of aviation’s future popularized in the early 1960s, when George Jetson packed his family and dog into his famous clear-domed car and, at the push of a button, took to the skies over Orbit City. After decades of less-than-successful plans and prototypes, companies like the MIT spin-off Terrafugia are making this vision a reality by offering vehicles that can both fly through the air and drive down the road. To fly these vehicles, one must be a certified pilot, thus limiting the population that can benefit from this innovative concept.
The MERS group has demonstrated in simulation the concept of an autonomous personal air vehicle, called PT, in which passengers interact with the vehicle in the same manner that they interact today with a taxi driver. To interact with PT, passengers speak their goals and constraints; for example, “PT, I would like to go to Hanscom Field now, and we need to arrive by 4:30. Oh, and we’d like to fly over Yarmouth, if that’s possible. The Constitution is sailing today.” PT checks the weather, plans a safe route, and identifies alternative landing sites in case an emergency landing is required. In the event that the passenger’s goals can no longer be achieved, PT presents alternatives. PT might say, “A thunderstorm has appeared along the route to Hanscom. I would like to re-route to avoid the thunderstorm. This will not provide enough time to fly over Yarmouth and still arrive at Hanscom by 4:30. Would you like to arrive later, at 5, or skip flying over Yarmouth?” In the future, PT will be able to reason about user preferences and to ask the user probing questions that will help her identify the best options.

Mary “Missy” Cummings is an associate professor in the MIT Aeronautics and Astronautics Department.

Jonathan P. How is the MIT Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics. Prior to joining the MIT faculty in 2000, he was an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. His research interests include the design and implementation of distributed robust planning algorithms to coordinate multiple autonomous air/ground/space vehicles in dynamic, uncertain environments. Jon How may be reached at jhow@mit.edu.

Brian Williams is a professor and the undergraduate officer in the MIT Aeronautics and Astronautics Department.