[Photo: Andy in parabolic flight]
Research Overview

Home | Research | Publications | Family Photos | Recreation

Here is a brief description of some projects that I am currently involved with in the MVL:

Efficient Individualized Teleoperation Training via Spatial Ability Assessment
Current training procedures for both the Shuttle and ISS manipulators are intensive, requiring both introductory classroom sessions and practical experience with various training simulators. The initial Generic Robotics Training (GRT) for prospective prime operators requires a minimum of nearly 30 hours of training. Shuttle Payload Deployment Retrieval System (PDRS) training requires an additional 70 hours or more. From our discussions with trainers and astronauts, we have learned that trainees show significant variability in their initial level of ability, rate of learning, and level of mastery. Initial level of ability is not a reliable predictor of the final level of mastery. In some cases, a high level of proficiency can be obtained, but only after a protracted training and practice period. The goal of this project is to collaborate with the NASA Mechanical and Robotic Systems Group and Astronaut Office to improve the efficiency of NASA robotics training by designing individualized programs based on an assessment of astronaut spatial abilities. We are using multiple regression techniques to correlate robotics training performance with scores from a suite of commonly used spatial ability tests and the NASA Robotics Aptitude Assessment to determine if specific aspects of training performance can be predicted. We will work with the robotics instructors to identify how current lessons can be modified to enhance an astronaut's learning and reduce the training time required.
(Sponsored by the National Space Biomedical Research Institute)
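The prediction step described above can be sketched as an ordinary multiple regression. This is a minimal, illustrative example only (the data, weights, and variable names are invented, not the actual NASA assessment battery): fit a linear model relating spatial-ability test scores to a training performance measure, then predict performance for a new trainee.

```python
import numpy as np

# Hypothetical illustration: predict a robotics training score from three
# spatial-ability measures (e.g., mental rotation, perspective taking,
# aptitude assessment) via ordinary least squares.
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(12, 3))        # test scores for 12 trainees
true_beta = np.array([5.0, 0.4, 0.3, 0.2])   # intercept + three weights
y = true_beta[0] + X @ true_beta[1:]         # noiseless data for the sketch

# Solve y ~ [1, X] @ beta by least squares
X_aug = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Predicted training performance for a new trainee's test scores
new_scores = np.array([1.0, 70.0, 55.0, 80.0])   # 1.0 is the intercept term
predicted = new_scores @ beta
```

In practice one would also examine residuals and cross-validated prediction error to decide whether specific aspects of training performance are actually predictable from the test battery.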

Development of Alternative Locomotive In-Cab Alerter Technology
Fatigue, drowsiness and medical incapacitation have been recognized as major problems in railroad operations for more than a hundred years. In addition to work hour limitations, most locomotive cabs now have some form of vigilance device that provides an automatic train stop if the engineer should become incapacitated. Originally, these were some form of "deadman" control, but in the last 50 years vigilance control devices, known as "alerters" in the US, have been used. Typically, these alerters check for some form of train control activity (e.g., brake or throttle usage) or whether the engineer has pre-emptively reset the system within a designated period of time. Unfortunately, this design has led to automatic or reflexive resetting of the system by the engineer, even when drowsy. We have been working with the Department of Transportation and Federal Railroad Administration to investigate current industry stakeholder views on the conceptual and performance problems of current alerters, demonstrate the safety and business case for improving them, and synthesize federal regulations, industry guidelines and established human factors engineering principles to propose an approach to improving or replacing the current technology.
(Sponsored by the Department of Transportation/Volpe National Transportation Systems Center)
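The conventional alerter cycle described above (activity or a manual reset cancels the timer; inactivity triggers a warning, then a penalty brake application) can be sketched as a simple state machine. This is a simplified illustrative model, not any specific manufacturer's logic; the timeout values are invented.

```python
class Alerter:
    """Minimal sketch of conventional alerter logic (illustrative only)."""

    def __init__(self, timeout_s=25.0, warning_s=10.0):
        self.timeout_s = timeout_s   # inactivity window before warning
        self.warning_s = warning_s   # time allowed to respond before penalty
        self.idle = 0.0              # seconds since last control activity
        self.warning = False

    def control_activity(self):
        """Throttle/brake use or a manual reset cancels the current cycle."""
        self.idle = 0.0
        self.warning = False

    def tick(self, dt):
        """Advance time by dt seconds; return True if a penalty stop triggers."""
        self.idle += dt
        if not self.warning and self.idle >= self.timeout_s:
            self.warning = True      # audible/visual warning begins
        if self.warning and self.idle >= self.timeout_s + self.warning_s:
            return True              # no response -> automatic train stop
        return False
```

The reflexive-reset problem is visible even in this sketch: `control_activity()` cancels the cycle regardless of whether the engineer is actually alert, which is exactly the design weakness the project addresses.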

Sensorimotor interaction with vehicle displays and controls to enhance human-machine cooperation during precision lunar landing
Lunar landing depends on the selection and identification of an appropriate location that is level and free of hazards, along with a stable, controlled descent to the surface. During crewed landings, astronauts are expected to interact with automated systems, based upon improved terrain maps and sensor updates, to perform tasks such as manual redesignation of the landing point, adjustment of the descent trajectory, or direct manual control. However, sensorimotor limitations, both vestibular and visual, are likely to interfere with performance and safety. This integrated project examines the nature of the anticipated spatial disorientation and terrain perception limits as they affect the transition from automatic to manual control, and develops advanced display countermeasures to overcome these limitations. I am participating in two of the specific aims: (1) examine the nature of anticipated sensorimotor difficulties (e.g., spatial disorientation, limits on terrain perception) as they affect the transition from automatic to manual control, and (2) develop and evaluate advanced display countermeasures for enhancing situation and terrain awareness and for overcoming performance limitations caused by the reduced visibility associated with lunar lighting, terrain reflectivity, and the absence of an atmosphere, using Draper Laboratory's fixed-base lunar lander cockpit simulator.
(Sponsored by the National Space Biomedical Research Institute)

Here are some interesting past projects which are finished or languishing because of a lack of time...

Visuomotor and Orientation Investigations in Long-Duration Astronauts (VOILA)
This International Space Station Human Research Facility experiment investigated our general hypothesis that mental processes involved in self-orientation, object perception and motor control are fundamentally altered in microgravity environments. These alterations often result in the visual reorientation, inversion, and proprioceptive illusions frequently reported in orbit by astronauts. We prepared virtual reality-based experiments on self-orientation, linear vection, and object perception to help characterize the contribution of gravity to the mechanisms underlying these activities, namely the vestibular and visual systems. In 2005, NASA ended the VR experiment hardware development at MIT, and we subsequently worked as Co-Investigators on a CNES experiment led by Joe McIntyre. This effort finished in May 2008 and was not renewed by NASA.

(Sponsored by NASA)

Visual Orientation, Navigation and Spatial Memory Countermeasures
Astronauts in space find it difficult to recognize their orientation while facing any of the viewing directions in 6-ported space station node modules. This was particularly evident in the NASA-Mir missions, where crew members reported that they had difficulty envisioning their orientation with respect to the different modules of Mir. Interestingly, they were still able to gain route knowledge to move through the station. However, this lack of 3D "survey" knowledge may have contributed to the accidental collision between Mir and a Progress supply ship. I am working on a project sponsored by the National Space Biomedical Research Institute to understand how people cope with this type of spatial learning and navigation problem and to develop countermeasures that could be used before or during spaceflight to mitigate these issues. We have completed three studies investigating cognitive strategies that astronauts use to learn the spatial layout of a station module or their attitude relative to some other part of the station. We are now working on a possible VR training tool ("Virtual Porthole") which could be used during pre-flight training in the Johnson Space Center ISS mock-ups to help the astronauts develop an orientation-independent mental map of ISS.

Advanced interfaces for teleoperation and space flight
The goal was to develop improved displays and controls for orientation and navigation in virtual environments, particularly for use by astronauts in simulations of weightlessness. We focused our work in three areas:
1) Path Integration in Virtual Environments: We conducted experiments to define the role of the rotatory component of virtual viewpoint motion on 2D and 3D path integration error. Analysis shows that subjects overestimated the rotatory component of virtual viewpoint motion, presumably due to the absence of vestibular and haptic cues. Our data support the view that VR training systems should be designed so that real head movements accompany visual viewpoint rotations.
2) "Spacecraft in Miniature" (SIM): We extended the "World in Miniature" concept (Pausch et al., 1995) and applied it to the difficult problem of six degree-of-freedom navigation within large spacecraft. This VR-based tool provides a 3D "you-are-here" map within a larger VR space station simulation. Based on the results from the virtual path integration experiment described above, we eliminate the rotatory component of visual motion whenever the user's viewpoint flies into the model. We evaluated SIM against simple exploration of the virtual space station: SIM trainees developed better survey knowledge of the station and equivalent route knowledge compared with the control group.
3) "Virtual Video": We defined a hybrid VR display rendering technique intended for use in immersive teleoperation and windowless cockpit vehicles (e.g., the X-38/CRV for Space Station). Our approach extended the concept of the desktop panoramic static image viewer (e.g., QuickTime VR; IPIX) by presenting a real-time, dynamic, stereo video image to an HMD user. Captured video images of a real scene were used as a dynamic texture on the interior of a virtual surface surrounding the observer. A graphics accelerator renders the view of the virtual surface in head coordinates, minimizing the perceptual lag associated with head tracking.
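The panoramic-surface idea behind "Virtual Video" can be illustrated with the core mapping it relies on: projecting a view direction in head coordinates onto texture coordinates of the surrounding surface. The sketch below assumes a spherical surface with an equirectangular texture layout and a right-handed frame with -z forward and +y up; the actual system's surface shape and conventions may differ.

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a unit view direction in head coordinates to (u, v) texture
    coordinates on a surrounding sphere (equirectangular layout).
    Assumed frame: -z forward, +y up, +x right."""
    lon = math.atan2(x, -z)                     # yaw angle from forward
    lat = math.asin(max(-1.0, min(1.0, y)))     # pitch angle, clamped
    u = 0.5 + lon / (2.0 * math.pi)             # 0..1 around the horizon
    v = 0.5 - lat / math.pi                     # 0 at zenith, 1 at nadir
    return u, v

# Looking straight ahead samples the center of the panorama
u, v = direction_to_equirect_uv(0.0, 0.0, -1.0)
```

Because this lookup is evaluated per pixel in head coordinates at render time, the displayed view can track head rotation at the graphics frame rate even when the underlying video texture updates more slowly, which is what minimizes the perceived head-tracking lag.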

Driver Models for Intelligent Vehicles
The long-term goal of this project is to make cars that are able to function as a "co-pilot" that monitors the driving environment along with the human driver and provides assistance or takes control when these actions would facilitate (rather than go against) the current goals of the driver. We are currently exploring algorithms that infer driver intentions and facilitate the design and implementation of intelligent vehicle warning systems. Our driver models are currently based on probabilistic hidden Markov dynamic models and also on cognitive architectures, such as ACT-R. This research was done in collaboration with Dario Salvucci at Drexel University.
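A hidden Markov model of driver intent, of the kind mentioned above, can be sketched with the standard forward algorithm: given a sequence of discretized observations of vehicle behavior, maintain a filtered posterior over the driver's hidden intent. All states, observations, and probabilities below are invented for illustration; they are not the project's actual model parameters.

```python
import numpy as np

# Hypothetical 2-state driver-intent HMM: state 0 = lane-keep, 1 = lane-change
A = np.array([[0.95, 0.05],
              [0.20, 0.80]])            # intent transition probabilities
B = np.array([[0.80, 0.15, 0.05],      # P(observation | lane-keep)
              [0.10, 0.30, 0.60]])      # P(observation | lane-change)
pi = np.array([0.9, 0.1])               # initial intent distribution

def infer_intent(obs):
    """Forward algorithm: filtered posterior over intent after the last
    observation. Observations are discrete symbols indexing columns of B."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by likelihood
        alpha /= alpha.sum()            # normalize to a posterior
    return alpha

# Observation symbols: 0 = centered in lane, 1 = drifting, 2 = moving to edge
posterior = infer_intent([0, 1, 2, 2])  # posterior shifts toward lane-change
```

A warning system built on such a model would compare the posterior against the driver's apparent intent before alerting, so that assistance supports rather than contradicts what the driver is deliberately doing.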

Last updated: Aug 2008



© Andrew Liu