Jennifer McGovern Narkevicius
Alexandria, VA, USA
Emphasis in systems performance has historically focused on the development and implementation of technology. But success for these systems hinges on the successful performance of the humans interacting with them to meet the required capabilities. Proper systems engineering practice demands meticulous requirements definition detailing the relationship between people and the systems they use. To define these requirements, it is essential to understand the inherent “capacity” of user populations, the operational environment in which they work, and the work and context in which they operate when using the new system. These more diverse data must be included in systems engineering and trade space analyses to ensure that the system will perform as envisioned throughout the projected operational environment. All of the human-centered domains contribute to the definition, specification, and utilization of the system, but it is the context and predictability measures that can contribute most to the HSI process. While trade-offs must be made within the human domains, the integration of the domains allows for more balanced trade-offs with other specialty engineering disciplines. This tutorial will explore methods, tools, techniques, processes, and architectures that are currently being used to achieve successful human systems integration.
Research has shown that teams are particularly vulnerable at the “seams,” where a seam may exist due to differences in technology, location, organization, goals, incentives, knowledge, resources, responsibility, and more. As increasingly diverse technologies, resources, and organizations/agencies are interconnected through Network-Centric Operations, additional seams are inevitably created. Success will rely heavily on good Human Systems Integration (HSI) to reinforce these seams and facilitate effective interactions between people, and between people and technology, in order to perform new tasks and exploit the increased volume/rate of information. In addition to the myriad challenges of enabling communication between entities across an enterprise, challenges can arise from an unwillingness to share information, difficulty in “wrapping” information with its associated context and pedigree, appropriate visualization of information, differences in perspective and motivation, and misaligned priorities or lines of authority. And when so many sources of information and lines of communication become available, critical tradeoffs must be made to achieve a balance between overloading decision makers and providing sufficient awareness for collaboration, and to engender appropriate levels of trust among people, and between humans and supporting automation. This talk will discuss some of the resulting challenges for truly integrating HSI into the systems engineering process.
Jack Carroll, Chris Hass
American Institutes for Research (AIR)
Concord, MA, USA
In this session, senior members of AIR's Human Factors Research and Design practice will discuss how developers and their products benefit from flexible user-centered design models that can be adapted to serve diverse user populations across a range of products and interfaces. Examples will be drawn from proven research designs illustrating a variety of human factors approaches, commercial and government applications, and user populations, including the introduction of innovative technologies to a general consumer population, health information delivery to low-literacy populations, and assessment of the usability and accessibility of products and user interfaces among aging and disabled populations.
University of Massachusetts Lowell
Lowell, MA, USA
Past attempts at using multiple sensor modalities in urban search and rescue (USAR) robots have resulted in missed detection of victims and operator disorientation. We have performed studies of over a dozen USAR robot systems at the AAAI and RoboCup Robot Rescue competitions and have conducted usability testing with domain experts at the National Institute of Standards and Technology. These studies have shown that the interfaces do not provide adequate situation awareness to the human operator. Based upon our findings, we have developed design guidelines for interacting with a remote robot which include how information should be presented and how people interact with autonomy. We have validated these guidelines by creating and testing a new interface for operating a remote robot. This talk will present design guidelines, developed systems and evaluation methods for human-robot interaction in the USAR domain as well as other applications, including assistive technology. We will also discuss the interaction roles that people can have with robots and present our taxonomy of human-robot interaction.
Mitsubishi Electric Research Laboratories (MERL)
Cambridge, MA, USA
Ordinary people already have great difficulty using the advanced features of digitally-operated household devices, and the problem is getting worse as more customization and programming features are continually added. I will discuss research at MERL which addresses this problem by adding a collaborative task guidance capability to complex devices, such as digital home appliances, medical instrumentation, or military equipment. Concretely, we have developed a reusable Java framework, called DiamondHelp, for building collaborative task guidance systems. DiamondHelp combines a generic conversational interface, adapted from online chat programs, with an application-specific direct manipulation interface. DiamondHelp provides a “things to say” mechanism for use without spoken language understanding; it also supports extensions to take advantage of speech technology. DiamondHelp's software architecture factors all application-specific content into two modular plug-ins, one of which includes Collagen (http://www.merl.com/projects/collagen) and a task model.
Joan Morris DiMicco
Sun Microsystems, Inc.
Burlington, MA, USA
Our individual behavior is strongly influenced by the social cues around us. Technology has the ability to provide us with additional information about our behavior, and thus has the power to also influence our interactions. This talk will discuss how we can design technology to specifically influence behavior and will present an application that reveals on-going turn-taking patterns to a group in a face-to-face discussion. Results from behavior experiments indicate that visualizations of turn-taking influence group behavior and decision-making.
Marietta, GA, USA
Situation awareness is a fundamental construct driving human decision making in complex and dynamic environments. Several studies indicate that achieving adequate situation awareness is one of the most central and difficult portions of decision makers' jobs in many domains. An examination of the underlying factors associated with situation awareness provides an integrated perspective for addressing system design issues. This perspective focuses on the means by which the human operator maintains an on-going representation of the state of the environment. To be effective, system designs, including decision support tools, must keep the operator at a high level of situation awareness, allowing for effective oversight and/or interaction with the tools to achieve operational objectives. Many efforts, however, have found significant difficulties in effectively combining a human decision maker with decision support tools and higher levels of automation. The human and computer agents apparently are not simply additive in combining to form solutions, but may interact in ways that can seriously compromise the benefits of many forms of decision support. The effects of automated systems on situation awareness will be discussed. A taxonomy of levels of automation will be presented, along with research on the taxonomy that seeks to identify key characteristics of automated tools that aid or hinder effective human/automation collaboration. In addition, an understanding of situation awareness provides insights into the development of decision support tools.
University of Cambridge
The increasing power and falling cost of computers, combined with improvements in digital projectors and cameras, are making the use of video interaction in human-computer interfaces more popular. This talk will present two recent video interface projects at the University of Cambridge. People manage large amounts of information on a physical desk, using the space to arrange different documents to facilitate their work. The 'desk top' on a computer screen only offers a poor approximation. The Escritoire is a desk-based interface for a personal workstation that uses two overlapping projectors to create a foveal display: a large display surface with a central, high resolution region to allow detailed work. Multiple pen input devices are calibrated to the display to allow input with both hands. A server holds the documents and programs while multiple clients connect to collaborate on them. [Work by Mark Ashdown] Facial displays are an important channel for the expression of emotions, and are often thought of as projections of a person's mental state. Computer systems generally ignore this information. Mind-reading interfaces infer users' mental states from facial expressions, giving them a degree of emotional intelligence. Video processing is used to track two dozen features on the user's face. These are then interpreted as basic action units, which statistical techniques classify as one of six basic emotions or 18 more complex cognitive emotions. [Work by Rana el Kaliouby]
Systems Technology Inc.
Hawthorne CA, USA
This paper describes a novel mixed reality technique for robust, real-time chroma-key processing for training applications using software and off-the-shelf video hardware. This technique has been coined and patented by the author as “Fused Reality.” Until now chroma-keying has been conducted using dedicated hardware, imposing substantial restrictions on the visual environments that will support chroma-key. Variations on the traditional chroma-key setup, such as using retro-reflective screens and light-emitting cameras, can overcome some of the technique’s original drawbacks (such as lighting difficulties), but they can introduce new problems as well (e.g., a user’s hand can obstruct the light projected from the head-mounted camera, destroying the effect). The novel chroma-key method introduced in this paper is applied to training helicopter aircrew personnel using a prototype simulator, the Aircrew Virtual Environment Training (AVET) System. The ultimate goal of the AVET will be to provide training to Navy aircrewmen in all operational aspects of the MH-60S, including aerial gunnery, search and rescue, and vertical replenishment. A key requirement for these types of tasks is for trainees to see and manipulate physical objects (e.g., a jammed gun) at close range while viewing an interactive flight and shipboard environment. In order to satisfy space and cost constraints, some physical objects that the aircrewmen physically interact with (such as a rescue litter) must be capable of being sent out into the virtual environment and later retrieved. Fused Reality accomplishes this critical feature through contraction and expansion of specific real-world objects as they move away from and toward the trainee. Fused Reality’s adaptive color recognition allows for realistic set lighting, colors, and user movement and positioning.
It also enables multiple keying colors to be used (vice just blue or green), which in turn allows “reverse chroma-keying”: preserving only keyed colors and rendering all others transparent. Thus a gunner trainee wearing a green flight suit and firing a mock gun painted blue would see only those items, while the surrounding visual environment is replaced with the virtual scene. This extremely compact training device could be operated even while the user is being transported to a future mission. Examples of Fused Reality such as these are demonstrated and discussed.
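The basic compositing step underlying chroma-keying (and its reverse variant) can be sketched in a few lines. This is a minimal NumPy illustration, not the patented Fused Reality algorithm; the function names, the per-channel distance metric, and the tolerance parameter are all assumptions for the sketch.

```python
import numpy as np

def chroma_key(frame, background, key_color, tol=30, reverse=False):
    """Composite a camera frame over a virtual background.

    Normal mode: pixels within `tol` of `key_color` are keyed out and
    replaced by the background. Reverse mode (as described for Fused
    Reality): only pixels matching the key color are kept, and everything
    else is replaced. Illustrative sketch only.
    """
    frame = np.asarray(frame, dtype=np.int16)
    # Per-pixel color distance to the key color (sum of channel differences).
    dist = np.abs(frame - np.asarray(key_color)).sum(axis=-1)
    keyed = dist <= tol                    # True where a pixel matches the key
    replace = keyed if not reverse else ~keyed
    out = np.where(replace[..., None], background, frame)
    return out.astype(np.uint8)

# A 1x2 "frame": one green (key-colored) pixel, one red pixel.
frame = np.array([[[0, 255, 0], [255, 0, 0]]], dtype=np.uint8)
bg = np.zeros_like(frame)  # stand-in for the rendered virtual environment

normal = chroma_key(frame, bg, key_color=(0, 255, 0))                 # green removed
rev = chroma_key(frame, bg, key_color=(0, 255, 0), reverse=True)      # only green kept
```

A multi-key variant would take several key colors and OR the resulting masks, which is what allows the green flight suit and the blue mock gun to be preserved simultaneously.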
Klein Associates Division, ARA
Fairborn, OH, USA
If we take the notion of complexity seriously we must consider new ways of managing the development and evaluation of the systems we are working with. The technical systems we develop and evaluate are increasingly complicated. As we draw new boundaries around what is to be designed and evaluated, to include the people and organizations doing the development and those using and relying on the technical system, there can be no doubt that we are engaging with complex systems. Traditional analytic experimental techniques alone are not effective in guiding the development and evaluation of these complex systems. We are formulating a life-cycle approach we call the Cognitive Performance Case. We have gathered together proven evaluation techniques and focused them on the problem of evaluating the cognitive impact of a technical system throughout the system development life-cycle. We have defined four evaluation techniques for assessment: Expert Review, Macrocognitive Walkthrough, Prototype Inquiry, and Observation of Use. These techniques are extensions or refinements of existing techniques, and can be used independently. We argue that their full value is realized only when they are combined as a part of life-cycle evaluation activity.
The control of multiple Uninhabited Air Vehicles (UAVs) is operator intensive and can involve high levels of workload. Feedback from operators indicates that improvements in operator interfaces would reap significant gains in system performance and effectiveness. Incorporating automation agents in UAV control stations has been proposed as a solution to reduce workload and improve overall human-machine performance. This research investigated the efficacy of agent-aided operator interfaces in a scenario that involved multiple UAVs with the interfaces modeled as part of the UAV tactical workstations of a maritime patrol aircraft. A performance model was developed and a simulation was conducted to compare the difference between mission activities with and without agent aids. An experimental environment was also designed and implemented for a crew of three UAV operators. The human-in-the-loop evaluation findings will be compared with simulation results for the verification of automation agent concepts applied in this research. The comparison results will provide guidelines for generic operator interface design.
Laurence (Nuke) Newcome
SRA International, Inc.
Alexandria, VA, USA
Unmanned aviation originated in the same era as did manned aviation, and a number of the early pioneers of flight, including Orville Wright and Glenn Curtiss, contributed to the development of both. The development of unmanned aviation has been the driving or contributing motivation behind many of the key innovations in aviation: the autopilot, the inertial navigation system, and data links, to name a few. Despite being hobbled by technology insufficiencies through most of the 20th century, focused efforts in small, discrete military projects overcame the problems of automatic stabilization, remote control, and autonomous navigation. The last several decades have been spent improving the technologies supporting these capabilities largely through the integration of increasingly capable microprocessors in the flight and mission management computers. Optimizing the human-unmanned aircraft interface remains a key challenge for ensuring flight safety and mission effectiveness. The early part of the 21st century will see even more enhancements in unmanned aircraft and their expansion into commercial roles as they continue their growth. The ongoing revolution in the biological sciences will soon impact aviation and, together with future microprocessors, will enable intelligent, vice robotic, unmanned aircraft to fly over the Earth and eventually other planets.
Semi-Automated Cueing of Predator UAV Operators from RADAR Moving Target Data
Laurence (Larry) Bush
Lexington, MA, USA
The Predator Unmanned Air Vehicle (UAV) is ideal for a wide variety of surveillance tasks, such as identifying activity of interest, because its video output is readily interpretable by human operators. However, due to its narrow field of view it can only monitor a small area at a time. Consequently, this platform alone is not appropriate for wide area search and monitoring. In contrast, Moving Target Indication RADAR (MTI) can detect moving objects over a wide area. The data it provides is exploitable by automated decision support algorithms; however, its information content is limited. Therefore, an MTI surveillance system can monitor activity over a large geographic area, but it cannot identify this activity with certainty. In this operator-in-the-loop experiment, we explore the complementary strengths of these sensing modalities. The capability being explored is the use of MTI decision support algorithms to cue a Predator UAV video camera operator. The algorithms under consideration are convoy detection and behavioral change detection / situational inference. The automated decision support algorithms provide a prioritized set of cues to the operator. A cue provides the location of the suspected activity of interest. The cues are used by the operator to find and identify the activity of interest. However, algorithms alone are not the answer. Semi-automated systems do not necessarily provide the desired improvement to human performance. To do so, algorithms must provide usable information to the UAV video camera operator. To address this issue, we have created an Integrated Sensing and Decision Support (ISDS) test-bed comprised of real-time interactive virtual world simulations, operator-in-the-loop experimentation, and a distributed data collection environment. This talk will address this experimental framework as well as the tested concepts, algorithms and preliminary results.
Sara Louise Howitt & Dale Richards
Considering the implications of controlling and supervising a team of Unmanned Aerial Vehicles (UAVs), it is apparent that efficient design and implementation of the Human-Machine Interface (HMI) is required. The HMI represents the fundamental point of interaction and the means of communicating knowledge between the system and the operator. The quality of the HMI affects operator performance and the maintenance of good situation and intent awareness. Operator information requirements during individual phases of a mission must be determined before considering how to present such information. Additionally, the HMI enables operator control of the level of autonomy, which dictates the extent to which an individual or group of UAVs can make decisions. This lecture will outline the approach QinetiQ has adopted in designing a suite of modular and flexible interfaces for UAV control that are not restricted to a single UAV type and can be operated from any location. It will discuss the representation of information to the operator in terms of designing a system that displays information in a way that is cognitively compatible with the user’s information processing capabilities. Additionally, it will seek to highlight methods for maintaining optimum operator situation and intent awareness through the visual display of information.
The talk will describe a key risk area threatening the widespread deployment of unmanned vehicles (UVs) and in particular unmanned air vehicles (UAVs): that of attaining high levels of autonomy. Autonomy will be loosely defined in the context of UAVs and the meaning of "level of autonomy" discussed. The talk will go on to argue that the achievement of high levels of autonomy is not merely a function of increasing machine intelligence but also of maintaining the human operator's engagement with the decision making process and retaining human authority. Certainly a human being in the loop will be a requirement for safety, flight clearance and legal reasons on early systems. Therefore, developers of highly autonomous systems are presented with a paradox. It will be argued that the human must be placed at the centre of the design process, and consequently the human machine interface and the system architecture become critical to achieving high levels of autonomy. This requirement impacts the entire knowledge acquisition and design cycle, broadening what is meant by that term and placing it firmly as a discipline within the systems design community. Ongoing research will be described to illustrate the arguments raised during the talk.
Laurentian University / Penguin Automated Systems Inc.
Sudbury, ON, Canada
The international mining industry is under constant pressure to provide product more competitively and to find new sources of material. The key to accomplishing this is a constant drive to lower the cost of mining in more and more difficult environments. Mines today are typically of two types: open pits and underground. Open pit deposits are running out or not proceeding due to regulations, while underground mines are beginning to push the technical limits of the rock in terms of mining depth. Ultimately, this situation needs to be addressed. One of the leading approaches in the world today is teleautonomous mining or telemining. This talk discusses the current state-of-the-art in mining technology for the open pits and underground mines of the world. Further, it discusses what has been accomplished today and provides a view for the future of mining in the short, medium and long term. Robotics and automation will play a pivotal role supporting mining companies into the future of mining deep underground, under the seas and oceans and in space. This talk will provide insight as to how this will be accomplished.
Yellow Springs, OH, USA
The Air Force, Navy and Army initiated programs to embed cognitive engineering within the acquisition processes. These programs’ purpose is to arrive at a feasible, fieldable process and toolset to support development of systems that enable operators and sustainers to better employ their cognitive abilities. A study of the DoD’s Joint Capabilities Integration and Development System and its Defense Acquisition System was conducted to determine how cognitive engineering could be practiced within those frameworks to yield the desired system qualities. Cognitive engineering activities were identified and associated with systems engineering and programmatic activities. An attempt was made to associate existing cognitive engineering techniques with the identified activities. Activities not supported by existing methods are targeted for future work. There are significant cultural barriers to attaining regular, routine cognitive engineering practice. These were identified and approaches to overcoming those barriers were devised. This presentation discusses the process, activities, methodology shortfalls and issues involved with customary cognitive engineering participation. A successful process will have broad cross-discipline support. Thus, the community will be encouraged to comment on the aptness of the activities and methods and to suggest modifications and additions that would result in a more feasible, fieldable and effective implementation.
École Nationale Supérieure des Télécommunications (ENST) de Bretagne
We propose to discuss the role of Human Factors in Decision Support Systems and related assisting tools that can be used in the Operational Research field. The aim of the presentation is to review some tools, such as utility theory and decision theory, that are used to tackle new problems in the context of human-centered processes, especially considering the recent evolution of Information Systems towards distributed ones. Emphasis shall be put on the cognitive aspects of decision making and on the related computational solutions that can be proposed. Two applications shall be presented in detail: the first devoted to assisting an expert decision maker in charge of controlling an industrial process, the second related to decision support dedicated to the crew of a maritime surveillance aircraft.
Woburn, MA, USA
Technologies are often created to support humans who are working in complex, dynamic environments (e.g. information systems to support military decision-makers, robotic systems to support search and rescue, sensors and artificial intelligence technologies to supplement infantry soldier perception). The complexity of these environments, and variability associated with human operator decisions and interactions with new technology, can make it difficult to measure a new technology's impact on overall mission performance. And as with most technologies, the cost of changing requirements increases significantly as a project gets further along the system development life-cycle. An inability to measure the impact of the technology in the complex environment makes it difficult to validate requirements. These are primary factors driving a need for reliable and valid methods to evaluate the impact of new technologies on mission performance in complex environments. Aptima is a Human Centered Engineering company that has been developing and applying a variety of methods to measure mission performance and the impact of technology elements on that overall mission performance. This presentation provides an overview of these methods through a review of several ongoing efforts to guide the development of new technologies to support infantry soldier operations in urban environments.
Roth Cognitive Engineering
Brookline, MA, USA
This talk will describe a visualization and decision-support system that exemplifies a particular approach to human-centered computing called Work-Centered Support Systems (WCSS). The system was developed for an airlift service organization. It is designed to support situation awareness of mission plans and repercussions of changes in mission plans during mission execution. The talk will describe the system that was built, the design philosophy and methodology it embodies, as well as the results of a work-centered evaluation study that was conducted to evaluate the usability, usefulness, and impact of the system on the work of the organization.
Middletown, RI, USA
There is more and more pressure on the Navy to adopt an HSI approach to ship design. Although HSI has been called out as a key factor in several recent surface ship designs, a lack of integrated design tools and formal design processes has blunted its impact and effectiveness. In the submarine world, HSI has traditionally focused mainly on the combat system, although recent activity has expanded to other aspects of the ship. At the same time, technological advancements are driving a revolution in submarine design. As an example, the recent DARPA Tango Bravo BAA called out specific technologies which are expected to radically change the look and layout of future submarines. At first glance, the current focus on HSI might seem a minor change whose scope is limited to manning optimization. Looking forward, however, the focus on HSI, coupled with the introduction of radical new submarine technologies, could lead to a fundamental change in the ship design process and could ultimately result in a shift in the role of the prime contractor on future contracts. This paper will discuss these changes and explore the potential impacts of the adoption of an HSI perspective on future ship designs.
Idaho National Laboratory
Idaho Falls, ID, USA
This presentation will overview recent advances in human reliability analysis (HRA). HRA originated as a way to identify and quantify the risks in human interaction with designed systems. As the human factors counterpart of probabilistic risk assessment, HRA has maintained a crucial role in safety critical industries such as the nuclear power industry, where HRA is used to determine the safe operating bounds of the system and establish regulatory guidance for acceptable levels of human performance. The determination of human contribution to risk is a vital component of ensuring the overall safety and reliability of systems. Much of HRA research focuses on reactive risk determination, the estimation of the human contribution to risk in incidents or events with an unfavorable outcome (such as accidents). An equally important direction for HRA is in prescriptive risk determination. In prescriptive HRA, the research focus is on determining the human performance shaping factors that might contribute to an unsafe or undesirable outcome in system usage. The emphasis in prescriptive HRA is to use human performance information to design a system that minimizes negative outcomes in human-system interaction. In identifying those human performance shaping factors that have a deleterious effect on the system usage as well as those factors that enhance system usage, it is possible to incorporate these considerations in the design of a novel system. In quantifying these factors, it is possible to arrive at a risk-informed design and develop design success criteria based on human reliability. This presentation will introduce HRA-based methods for system design with examples in the areas of software usability, certification of human-rated aerospace systems, and semi-autonomous instrumentation and control system design at nuclear power plants.
User Interaction Research and Design, Inc.
Point Roberts, WA, USA
Automation researchers have long debated the appropriate roles of humans and automation in complex systems. Human centered automation philosophies recommend that users be granted full control over all system functions, but others hold that minimizing the operator’s role in system operation minimizes the potential for human error. Automation can be seen as a means of tipping the balance of control from user to designer. To the extent automation makes a system less vulnerable to operator misuse or error, it becomes more vulnerable to designer error. For the past two decades or so, Boeing and Airbus have been conducting a sort of natural experiment in this area, as their divergent automation philosophies have exposed a range of automation-related problems. However, advancing technology is pushing these issues from the rarified air of academia and aviation into the society at large. Human and automation issues are now cropping up in cars, politics, and digital rights management, to name a few. How well society grapples with these problems will depend to some degree on how well informed it is about human centered automation issues. In this lecture, I will outline some of the history of human centered automation issues from aviation, nuclear power, railroads, computer interfaces, elections, and other areas of technology, and demonstrate how these issues are migrating from limited industrial concerns to society at large. I will also describe an analysis procedure that may help designers better predict human errors and system failure conditions that may arise due to automation.
Charles River Analytics
Cambridge, MA, USA
Charles River Analytics, Inc. has been a leader in applying cutting-edge computational intelligence technologies to real-world problems since 1983. The REASON (Rapid Evidence Aggregation Supporting Optimal Negotiation) platform is the product of a recent DARPA-funded effort to develop a collaborative decision-making platform. REASON represents a decision as an “argument network,” which has as its root a set of decision options and as its branches chained arguments for and against the options. The interface presents geographically distributed users with a graph-based representation of an argument network. Any user contribution forms a new node in the network, and users may vote on existing arguments to express their belief. REASON dynamically aggregates beliefs using Dempster-Shafer belief aggregation to identify the “best” decision option at any point in time. The system also evaluates the degree of consensus associated with each node in the network, and displays this graphically. This feedback allows collaborators to stay aware of the contentious areas within the discussion, and helps guide them to consensus.
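The Dempster-Shafer aggregation mentioned above can be sketched with the standard Dempster rule of combination. This is a minimal illustration under textbook assumptions; how REASON actually weights user votes and structures its argument network is not described here, and all names below are hypothetical.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements (sets of decision
    options) to masses that each sum to 1. Conflicting mass (pairs of
    focal elements with empty intersection) is discarded and the rest
    is renormalized.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two users' beliefs over decision options A and B.
vote1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}  # leans toward A
vote2 = {frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.7}  # weakly for B
belief = combine(vote1, vote2)
```

With these inputs the combined mass favors option A, since vote2 commits most of its mass to the uncommitted set {A, B}; chaining `combine` over successive contributions is one way such a network could aggregate beliefs incrementally.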
US Naval Research Laboratory
Washington, DC, USA
While some interruptions can be helpful, many interruptions are very disruptive. We have developed a theory called memory for goals that we are applying to our study of interruptions. Our focus has been on understanding how people resume a task after being interrupted. I will describe the theory (in both its computational and qualitative forms) and a series of experiments in multiple domains that validate the core components of the theory. Based on the theory and experiments, I will make several practical suggestions for reducing the disruptiveness of interruptions. I will also demonstrate a tool we are building that uses several of our theoretically derived principles to help people resume a task after being interrupted.
CHI Systems, Inc.
Lower Gwynedd, PA, USA
Cognitive Engineering researchers have strongly advocated a data-driven approach to front-end analysis and design of human-computer systems. In this approach, cognitive task analysis (CTA) methods are used to explore the details of the specific design case and form the basis for the design process. While this theoretically maximizes the focus on human needs and constraints, in practice it often yields an unacceptably inefficient design process. The now widely accepted CTA process gathers and analyzes data from first principles but lacks a clear process for reasoning from those data to the features of a design solution, particularly features that are consistent with a priori pragmatic engineering constraints. An alternate approach presented here is to begin with a well-defined design space consisting of categories of design solutions -- in essence, design patterns of human-computer synergies -- and a software architectural framework for integrating them. A joint cognitive/systems engineering process based on this design space involves three broad steps: (1) mapping the functional requirements of the work to be supported/automated into the design space, and selecting the appropriate design patterns; (2) identifying the features of the specific design case that are needed to fully apply and tailor the design patterns for this specific case; and (3) selecting and tailoring the human-computer interface components. This approach yields a design process that is more efficient, focused, and cognizant of engineering design constraints. [Joint research with Robert Eggleston, US Air Force Research Laboratory]
West Lafayette, IN, USA
The focus of this talk is on the challenges of effective communication and information technology systems design in space mission operations. Prof. Caldwell's research program originates from the question: Can we use feedback control engineering tools and concepts to improve our understanding of how teams get, share, and use information in complex, time-critical environments? This talk will begin with a brief overview of Thomas Sheridan's Human Supervisory Control framework, with a special emphasis on the Sheridan-Tulga work on event- and deadline-driven task performance. Prof. Caldwell's work expands into team-based coordination and performance, developing the concept of Distributed Supervisory Coordination (DSC). Information flow and coordinated expert performance in the space flight environment pose specific challenges, including limited communications bandwidth; significant delays and lags in information updating, transmission, and sensemaking; and critical constraints in time, resources, and context-relevant reference knowledge. The DSC framework examines coordination and synchronization across multiple time scales and event rates, and incorporates mathematical aspects of adaptive feedback response to dynamic systems behavior. Ongoing research is described using both Space Shuttle and Space Station operations, with potential impacts for mission operations architectures for the Crew Exploration Vehicle and beyond.
University of Michigan
Ann Arbor, MI, USA
In this talk I will describe queueing network (QN) theory and methods for modeling human performance in general, and three lines of our QN modeling work in particular. First, a QN theory of reaction time (RT) was developed that integrates the influential architectural RT models as special cases, including the serial discrete-stages, the serial continuous-flow, and the discrete network models (such as the critical path network model). Further, the QN models cover a broader range of mental architectures and can be subjected to well-defined empirical tests. Second, the architectural RT models and the sequential sampling RT/accuracy models are unified through QN-RMD (Reflected Multidimensional Diffusions). Specifically, the "state" of a K-server QN of mental architecture is represented as a reflected diffusion space of K dimensions, in which "reflecting barriers" represent architectural constraints, while "absorbing barriers" represent accuracy-related response criteria. QN-RMD moves beyond the current one-dimensional random walk/diffusion/accumulator models, which have successfully accounted for single-stage fast responses but are limited to them. Third, QN-MHP (Queueing Network-Model Human Processor) was developed to bridge the mathematical and the symbolic models of mental architecture and to support mathematical modeling and real-time generation of task performance and mental workload. QN-MHP has been applied to generate and model a variety of tasks including the psychological refractory period, visual search, transcription typing, and driving a vehicle simulator.
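The serial discrete-stages architecture named above is the simplest special case of a QN: a tandem line of single-channel servers. As an illustrative sketch (the stage count, exponential service times, and rates below are assumptions for demonstration, not parameters from the talk), a small simulation shows how reaction time grows when successive stimuli queue behind one another at busy stages:

```python
import random

def tandem_queue_rts(arrivals, service_rates, seed=0):
    """Simulate a tandem queueing network: each stimulus passes
    through the stages in order, each stage serves one item at a
    time (FIFO), so later stimuli may wait at busy stages.
    Service times are exponential with the given rates.
    Returns the response time (finish - arrival) per stimulus."""
    rng = random.Random(seed)
    stage_free = [0.0] * len(service_rates)  # when each server frees up
    rts = []
    for t in sorted(arrivals):
        clock = t
        for k, mu in enumerate(service_rates):
            start = max(clock, stage_free[k])       # wait if stage busy
            finish = start + rng.expovariate(mu)    # exponential service
            stage_free[k] = finish
            clock = finish
        rts.append(clock - t)
    return rts

# Hypothetical example: 50 stimuli, 0.2 s apart, through three stages
# (e.g., perceptual, cognitive, motor) with different service rates.
rts = tandem_queue_rts([0.2 * i for i in range(50)], [6.0, 4.0, 5.0])
```

With a single isolated stimulus the response time reduces to the sum of stage service times, recovering the classic serial discrete-stages prediction; queueing effects appear only under rapid stimulus streams, which is the regime phenomena like the psychological refractory period occupy.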
Cognitive Engineering Research Institute / Arizona State University
Mesa, AZ, USA
As team tasks proliferate, so do team and collaborative technologies. Parallel to individual tasks, team tasks are increasingly inundated with new technology, often "thrown at the problem" as the ultimate solution. From emergency operations to military command and control, examples abound of well-intended technologies that go unused or, worse, get in the way. It is clear that the nature of the collaboration needs to be understood and used to guide design. Unfortunately, there is a dearth of methods, measures, and metrics with which to understand collaboration, and those that exist tend to require intense resources and are largely qualitative. Some new ways of conceptualizing team or collaborative cognition suggest other possibilities. New measures have been developed to assess collaborative performance, coordination, and situation awareness. These methods provide quantitative as well as qualitative output and rely on ongoing behavior and communication streams collected during a collaborative session. In this talk I will present examples of poorly placed technologies and will offer a solution to designing for collaboration through communication-based metrics.
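The talk does not specify its communication-based metrics, but a flavor of how a quantitative measure can be computed from an ongoing communication stream can be sketched. As one hypothetical example (the metric and the role names below are illustrative, not the speaker's), the Shannon entropy of speaker-to-speaker transitions in a turn-taking log distinguishes rigid, predictable communication patterns from more varied flow:

```python
import math
from collections import Counter

def transition_entropy(speakers):
    """Shannon entropy (bits) of the speaker-to-speaker transition
    distribution in a turn-taking log. Low entropy indicates rigid,
    predictable turn-taking; high entropy indicates varied flow."""
    pairs = Counter(zip(speakers, speakers[1:]))  # adjacent speaker pairs
    total = sum(pairs.values())
    return -sum((n / total) * math.log2(n / total) for n in pairs.values())

# Hypothetical communication log from a collaborative session.
log = ["pilot", "navigator", "pilot", "navigator", "pilot", "engineer"]
h = transition_entropy(log)
```

Because such measures are computed directly from the communication stream, they can be tracked continuously during a session rather than reconstructed afterward, which is the practical appeal of communication-based metrics.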
University of Calgary
Calgary, AB, Canada
The research community’s awareness of the complexity of designing enjoyable collaborative interfaces is growing. In response, variations in metrics and methodologies are being developed within well-known approaches such as task-centered design, participatory design, and user studies. As part of this movement to develop more methods of bringing the human into the design loop, we are currently focusing on what we observe and how we use those observations. By closely observing the interaction between the low-level sub-task details of each team member’s behavior and the functioning of the team as a whole, we have been able to identify meta-communication practices that can be supported in software. This approach will be illustrated through two examples: the use of item orientation for communication and the use of tabletop territoriality for coordination.
Halifax, NS, Canada
GPS and other location-based technologies are becoming commonplace on mobile devices. These technologies are opening up a world of opportunities for small, mobile devices. In particular, the ability to provide location-awareness information has the potential to significantly impact our social interactions. However, if these technologies are not designed appropriately, the benefits may be lost, or worse, these technologies could end up being a detriment to our social relationships. There are many human-centered challenges for mobile location-awareness devices. This presentation will highlight a number of key issues we investigated related to the use of location-awareness information for social coordination. First, we examined the benefits that location information can provide and how people utilized this information when rendezvousing. Second, we explored what location information needs to be displayed (for the task at hand) and ways to effectively display this information on a small device. Finally, we carefully examined the impact 'mobility' has on interaction. Overall, the results of this work clearly demonstrate that location-awareness information can provide significant benefits, if the associated strengths and limitations of the technology are known and carefully balanced.