Interview with Tom Sheridan
MIT, Cambridge, Mass., April 9, 2003
Thomas B. Sheridan
received his B.S. in Mechanical Engineering in 1951 from Purdue University, M.S.
in Engineering in 1954 from the University of California at Los Angeles,
and Sc.D. in 1959 from MIT. He joined the MIT faculty in 1959. Since 1984,
he has been Professor of Engineering and Applied Psychology; since 1993, Professor of
Aeronautics and Astronautics. Dr. Sheridan is a former president of the IEEE Systems, Man, and Cybernetics Society; former president of the Human Factors Society; former chair of the Human Factors
Committee, National Research Council; former chair of the IVHS Safety and Human Factors Committee; and senior editor of
Presence (MIT Press journal). He has served on a variety of government and non-government advisory committees.
He is the author of over 200 published papers and several books,
including Man-Machine Systems: Information, Control, and Decision Models of Human Performance,
MIT Press, 1974 (with William R. Ferrell); Monitoring Behavior and Supervisory
Control, Plenum Press, 1976 (edited, with Gunnar Johannsen); Telerobotics, Automation, and Human Supervisory Control
(MIT Press, 1992); Perspectives on the Human Controller, Erlbaum, 1997
(edited, with Anton VanLunteren); and Humans and Automation: System Design and Research Issues
(J. Wiley, 2002). His research interests include human factors in automation with respect to aviation, highway and rail systems, telerobotics, process control, health care, humanitarian demining, human-computer interaction, and virtual environments.
This interview was conducted by Slava Gerovitch.
Gerovitch: What is your background and education? How did you get into human factors engineering?
Sheridan: If we go far enough back, I was a very confused engineering student at Cornell University. In high school, I had been interested in the design of systems. I thought I wanted to be an industrial designer, one of these people who make devices pretty, and I interviewed some industrial design firms. It seemed to me to be very artificial, and I got turned off at that point on industrial design. I started electrical engineering at Cornell. My roommate was an architect; he got me interested in architecture. I was totally mixed up about what I wanted to do. I lost time on my courses; my parents weren't very happy with me, and I ended up going back to Purdue University near my home. Cornell at that time had a five-year engineering program; Purdue was a regular four-year university, so I finished in mechanical engineering there.
But I was always hankering for something a little different from straight engineering. That was the time of the Korean War. You either got drafted, or you got officer training, and I decided to do the latter. I was very lucky to be assigned to what was then called the Aeromedical Laboratory at Wright-Patterson Air Force Base, where human engineering began. Actually it had begun there a few years earlier, but I was there at a fairly early stage. They were developing systems for fighter aircraft, escape and survival systems, ejection seats, parachutes, g-suits—this kind of thing.
I was very lucky again, for I was assigned to be the helper of an officer who was a record holder for high-altitude ejections and some other aviation feats. I was not married at the time; I was young and eager, so I volunteered to be an experimental subject. So I found myself doing parachute jumping, riding ejection seats on a track that they had (this was not in the air actually), being a subject in a centrifuge, where they were testing how many g's one can take. (I was a point on the curve, I think, for 10 g's for 2 minutes. I don't think I'd ever do this again.) Floating around in ice water, testing out survival suits, and so on… It gave me an appreciation of the human body and the relationship between people and machines, in this case more on the physiological level.
Then I went to graduate school at the University of California and worked for a man who was trained as a psychologist, but was in the engineering school. That's when I got interested in the psychological side of human factors. Then I came back and did a summer job for a design-oriented professor here at MIT and decided to apply and to continue my graduate study after my master's degree. I came back to MIT and worked out an interdepartmental program between mechanical engineering and psychology.
Part of that program was at Harvard, because at that time MIT did not have a regular psychology department. Luckily some of the great names in experimental psychology were there: B.F. Skinner; S.S. Stevens, who is known for psychophysics; George Miller, the language expert; and Georg von Békésy, a Nobel laureate in sensing. I had some small exposure to these people and became fascinated with the psychological aspect of the human-machine relation.
At that point, I became a fence-sitter between engineering and psychology. I continued my graduate study in control engineering, because, as you portrayed so well in your talk the other day [on April 4, 2003, Slava Gerovitch presented a paper on human-machine issues in the Soviet space program in the seminar
series of the Mechanical Engineering Department at MIT], the influence of Norbert Wiener was very strong. As a graduate student, I read, I think, every book he wrote, except maybe one or two technical ones, which I looked up but whose mathematics I could not quite understand; but I certainly read all the personal biographies. The one I remember best, other than
Cybernetics, is one called God and Golem, Inc. Wiener used the Jewish mythology of the Golem monster as a metaphor for the computer. I was very taken by that, and I still believe that he was on to something important, in terms of the concerns we have for computers doing harmful things to us, expectations that go beyond what computers are really good for, and so on.
I was lucky enough to be invited to stay at MIT as a faculty member, and I've been here for 45 years altogether, mostly in mechanical engineering. A few years back the Aeronautics Department decided to move a little bit further into human-machine interaction, and I was invited to join that department as well - all this while running a small laboratory with graduate students working for NASA on space robotics, for the Department of Transportation on railroad and automobile problems, for other government agencies - looking at display problems, and for some industrial companies, mostly in automobile area, and human-robot interaction.
The last thing we did on robots was medical robots, for so-called telesurgery. That got me working on medical problems, and I still, in my retirement, am doing some of that. I am also now working for the Department of Transportation in Cambridge on aviation and automobile safety, and for Harvard Medical on some problems of hospital safety, patient safety. This whole field of human-machine interaction and safety is where my thrust has been, using engineering ideas where they are useful, and psychological ideas where they are useful.
Gerovitch: Did you have any personal interactions with Wiener when he was teaching at MIT?
Sheridan: When I was a graduate student, too terrified to go introduce myself to the man (he was quite an old man at the time), one of the senior professors, Robert Mann, invited me to have lunch with him and Wiener. Mann - I think I can say this as a friend, a little bit in jest - liked to talk. Usually when he had a meeting with me, he would dominate the conversation. When we had lunch with Wiener, it was amazing to see that my friend Professor Mann could not get a word in edgewise; Professor Wiener did all the talking! I kept my mouth completely shut, of course, because I was in some awe of this person. I think he was retired at that time.
Gerovitch: He retired around 1960.
Sheridan: Yes, this would have been about then. I was fascinated with the ideas of cybernetics. I thought Wiener was right on target in terms of the experiences I'd had in the military service and in my graduate work. I felt that the cybernetic model, as a way of looking at things, was entirely appropriate. It could, of course, be overdone. I often said to people that in terms of theory in this human-machine area, we live on borrowed engineering theories to a large extent (information theory, control theory), but a unique theory for human-machine interaction has not been invented yet. I sometimes say: "Newton's Law has not been invented yet; we cannot write differential equations for humans." Maybe in some very narrow respects, yes (for blood flow, for example), but that is straight fluid mechanics. I've always been a modeler; I like models, but those are mostly borrowed engineering models, or decision models from economics. Human factors engineering has to be backed up by a lot of experimentation. It's like medicine; it's an experimental science.
Gerovitch: What was the general attitude toward cybernetics here at MIT,
among the engineers, and at Harvard, among the psychologists?
Sheridan: Psychologists heard about cybernetics, but relatively few of them had a sense of continuous dynamics, that is, forces and motions in time. Discrete theories, like information theory, which is mostly attributed to Claude Shannon, who at the time was moving from Bell Laboratories to MIT, were more appreciated by psychologists. Psychologists were better equipped in terms of their knowledge of statistics. There was a period when psychologists were doing many experiments to find out how many bits per second people can process, playing the piano, memorizing, and so on. There were some very famous and still useful works that came out of that, for example, George Miller's "Magical Number
Seven" paper, which is about the most misquoted paper I think I ever found in science.
In terms of Wiener's ideas, I would say that not so many psychologists in the US paid much attention to them. There were, of course, a few people, computer or engineering types, who got excited by them. I am sure a few psychologists, life science, and medical people were also interested. Cybernetics always seemed to be a personal perspective. That is, people thought of cybernetics in association with Norbert Wiener. There were a few people Wiener worked with - a mathematician named Lee, and some others around - who wanted to identify with it, but it was mostly a personal theory identified with Wiener, not seen as an overarching communication and control science.
I went to the Soviet Union in 1960 for the first meeting of the International Federation of Automatic Control, held in Moscow. There were about a dozen of us; I was the junior person in the group, and Wiener was certainly the senior person. I saw Wiener being regarded as not only a hero, but almost a religious figure. People there felt that strongly in terms of the cybernetics perspective. That was in part, I suppose, because all the attendees were control theory types.
Only weeks prior to that meeting, which was in June 1960, the U-2 aircraft had been shot down, and we Americans had been told in the newspapers that it was not true. Well, our whole delegation was led personally by our hosts to the front of the line, which was a kilometer or two long, waiting to get to the U-2 exhibit. I think it was in Gorky Park. And there it was; it was surely American; we saw the labels. The whole thing was true after all. This made clear to me that we cannot always believe our own press. That was very educational.
I had a good time. I spent another several weeks, roaming around by myself, leaving my wife and a couple of kids. I went down to Kiev and to the Black Sea and a couple of other places just to see the country. It was an exciting time.
Gerovitch: What were your impressions from the 1960 trip to the Soviet Union? Did you see what you had expected to see, or was there something that surprised you?
Sheridan: I was a bit skeptical of the press already. Things were, on the surface at least, much better than we had been led to believe. People were not cowering in doorways. It seemed normal in most respects. I had befriended a man named Aaron Kobrinskii, who was a well-known cybernetician in the medical area. He was developing prosthetic arms; he developed one of the first myographic, or nerve-muscle activated, artificial arms. I met him in his laboratory; we had had correspondence. Our book
[Man-Machine Systems: Information, Control, and Decision Models of Human Performance (MIT Press, 1974), by Thomas B. Sheridan and William R.
Ferrell, came out in the Russian translation in 1980] was actually translated by his son. I later made contact with the head of the Mechanical Engineering Laboratory, Konstantin Frolov. He was an Academician, and I think was later one of Gorbachev's inner circle. I made several trips to Moscow to his lab.
I think I saw both sides of the USSR. Kobrinskii was a Jew, and unfortunately he was put down by the administration: his research was stopped, he was retired, and he finally came to this country and died too early. I have had contact with a few other Russians as well. I took on a graduate student named Leon Charny, who is still in this country. His brother, Ben Charny, was a refusenik, a very famous one, working in the mathematical physics area. So during a couple of my trips to the Soviet Union I would sneak out into the town and meet with his family. Ben at the time was in the hospital; he was not well. We finally arranged for him to come to this country. In fact, he was flown in on a private jet as part of a deal made through Senator Kennedy and some people well above my head. Ben came to this country, survived a few more years, and then died.
I haven't been to Russia in the last few years. My last trip was at the invitation of a Russian colleague living in this country. He was doing some work with the Russians (at that time it was already Russia, not the Soviet Union). We were some of the first Americans to go into the Russian Cosmonaut Training Center outside of Moscow, Star City. That was very interesting. I got to go through the mock-up of Mir, the Russian space station.
Gerovitch: Let's go back to the time when you first started working on human-machine systems. Was your doctoral dissertation on control engineering, you said?
Sheridan: Yes, it was. All the time I was a graduate student there had been an effort by the Americans, funded both by NASA and by the Air Force, to model the human operator of a control system, mostly for airplanes. The whole point was that you couldn't design a high-performance aircraft, like a fighter plane, which itself is marginally stable, without knowing the dynamic characteristics of the human,
from the visual input to the handle/joystick output, because the human is part of the closed loop. So you had the dynamical equation of the airplane and the dynamical equation of the human, and you had to solve those equations together. We were struggling to come up with some way of modeling the human that was a robust, useful tool.
In 1957-60, a man named Duane McRuer got the bright idea that the human really was so adaptable that he or she could adapt to different types of dynamic systems, and that the right way to model the human was to model the whole closed loop - the human and the airplane, or whatever system was being controlled - together as a single entity. This was a great insight. My doctoral thesis looked at the time-varying characteristics of learning and fatigue - how that relationship changed over time - and how to cope with that mathematically. That was the work I reported in Moscow at that early meeting.
Gerovitch: Did it apply to airplanes?
Sheridan: Mostly to airplanes, because that's what people were thinking about. But it could conceivably apply to driving cars or balancing a broomstick or balancing oneself while walking - anything one is controlling that is dynamic. If you stop controlling, any system can go unstable or break or crash.
Gerovitch: Did it involve a study of experimental subjects?
Sheridan: Yes, a lot of experimentation and modeling of experimental subjects. That's the work that got my own lab started. I went from that work into robotics. I became a faculty member [at MIT] in 1959, and the US was just beginning to think about going to the Moon, and about developing lunar rover vehicles. The question was: How could you control a roving buggy on the Moon from Earth? There is an unavoidable time delay of about three seconds for the radio signal round trip to the Moon and back. So we became one of the first groups to cope with this problem of time delay in a control system in space. That has continued to be a problem. I think our research kind of scared NASA off, instilling a feeling that it was impossible to control anything from the ground unless it was very, very slow. That gave some thrust to the need for sending astronauts. There were lots of reasons, of
course; people naturally wanted to be there to explore ...
So-called teleoperation - where a human is controlling a vehicle or robot arm, but the vehicle or robot arm is somewhere else,
like on the Moon - was an area we worked in for many years. We looked at applications not only in space, but also in the deep ocean. Dana Yoerger, who designed the vehicle that swam into the sunken Titanic when it was first discovered, was my Ph.D. student. Your colleague David Mindell then became a student of Dana's. Dana is still at Woods Hole. Other applications of robotics include nuclear power plants, the handling of chemical, toxic, and radioactive materials, and so on.
The teleoperation (or telerobotics) interest later led to an interest in virtual reality (VR). If you look at VR systems, there is a very close relationship to teleoperation. There is an MIT Press journal, called
Presence, which I helped start a few years ago. It's now 9 years old. The theme of the journal is looking at the relationship between people and machines where one is doing remote control, or between people and computers where what you are controlling
remotely is virtual, that is to say, it is completely artificial, generated by a computer. From the human's point of view, whether one is controlling something on the Moon or playing a computer game, what appears in front of one is potentially the same kind of thing: you are taking some actions, you are seeing the remote (or virtual) device move, and there are some dynamic interactions. You'd like to have some feeling that you are "there." But "there" can be in the computer or on a distant planet. Teleoperation-VR is an active area right now.
Gerovitch: In that study of the feasibility of remote control of a lunar rover, did you find a fundamental limit on the human capability to control something remotely at such a great distance?
Sheridan: It is not a human limit, it is a physical limit, because the time it takes for a radio signal to go to the Moon and back is about three seconds. When you go to Mars, it's half an hour. So the problem is that when you take some action, the feedback is out of sync, so to speak, with what you're doing. Your motions then go unstable. At fairs, they sometimes have demonstrations where you talk into a machine and hear your own voice delayed by a fraction of a second. With such delayed feedback you can't even count to ten; you simply break down. The brain needs nearly instantaneous feedback, and if the feedback is delayed longer than about a tenth of a second, the nervous system has terrible difficulty.
Gerovitch: So this is actually a human limitation.
Sheridan: It's human, but it's caused by the physics of the problem, which is the time delay. There is no comfortable way for the human to overcome that. People have been struggling with that problem in various ways. There is the notion of supervisory control: if you can't control something continuously, then you give high-level instructions to the machine "open loop," with no [initial] feedback: do this, and then do this, and be careful about that, and make sure you don't do this, and here are the criteria - okay, go ahead and do it! You wait for some period of time, and then you get your feedback from the machine, which has engaged the task part way. Then you give more instructions, or maybe correct something. It's an intermittent operation, rather than a continuous one. This supervisory telerobotic approach started in the early 1960s. Lots of people used different words, but we all meant pretty much the same thing. This did not show up in my first book
[Man-Machine Systems], but it did in Telerobotics, Automation, and Human Supervisory Control (Cambridge, Mass.: MIT Press, 1992) and in
Humans and Automation: System Design and Research Issues (New York: J. Wiley, 2002).
Gerovitch: Now let's talk about the Apollo project. How did you get involved in it?
Sheridan: Jim Nevins, who was the first author on the paper you have
[Man-Machine Allocation in the Apollo Navigation, Guidance and Control
System, by James L. Nevins, Ivan S. Johnson, and Thomas B. Sheridan (Cambridge,
Mass.: M.I.T. Instrumentation Laboratory, 1968)], was running a small group at the Instrumentation Laboratory, which was looking at the human interaction needed for the lunar mission. Stark Draper was originally a regular professor at MIT, in the Aeronautics Department. Then his laboratory with graduate students got so big, and they were taking on such large contracts, that it broke off from MIT proper. His graduate students became the team leaders of this operation. They were developing gyroscopes initially. Those early gyroscopes were used in military systems, to navigate aircraft, and in missile systems. This seemed an obvious way to guide a space rocket to the
Moon. Richard Battin, who was one of the senior people in developing space navigation, is still active over in the Aeronautics Department. Battin and Wallace Vander Velde were two early Draper Lab people still active at MIT. You might want to interview them.
Gerovitch: David Mindell has interviewed Richard Battin for this project; you can look it up.
Sheridan: Hal Laning was senior to Battin. The two developed some of the early software for the Apollo system. They were limited to a computer with a very small memory. It was incredible what they were doing with that tiny (by comparison to today) computer. Also, it became clear that for astronauts to interact with that computer, even to do simple things, they had to key in programs, which was a very different experience for astronauts than flying an airplane. The original astronauts were pilots, and they were used to controlling vehicles by joysticks and pedals. Programming a computer, punching buttons on a console, was something very strange to them; they knew nothing about it.
I was then an assistant professor, and I was invited to this group as a consultant. They were looking for someone who knew about human-machine interaction. Draper Lab got the contract to build the guidance and navigation system for Apollo. That system went both into the mother ship, the command module, and into the lunar excursion module, the ship that dropped from orbit down to the Moon. That was a big contract. Draper Lab expanded rapidly at that time; they occupied a new building. I can well remember when the very first group of astronauts - mostly new Apollo astronauts, but some of the Gemini astronauts were present too - had a meeting with us to consider the astronaut tasks. Eventually they had some very fancy simulators to use for design and training, but before we had a simulator we took the drawings of the design for the console and pasted them on the wall in a small room, and this became our simulator. We said: "Here are the programming procedures you're going to follow." We had the astronauts punching pieces of paper on the wall as a dummy computer simulator.
Gerovitch: Did you register whether they punched correctly or incorrectly at this point?
Sheridan: Early on, no. Eventually we mechanized it. There came to be some instruments they had to use to navigate - optical instruments, for example, a telescope and a sextant. Those became more than just paper; they weren't quite the real thing, but they were working devices that we attached to the wall. We went from paper to plywood with actual instruments, and eventually worked our way up to the fully developed technology.
Later we had several other crews of astronauts come in and use these devices to navigate.
At that point, we were taking measurements. One of the things we found was very interesting. First, before we invited the astronauts in, we had the engineers and the secretaries trying it. It was regarded as a fun thing, like a computer game. Everybody wanted to try and see what they could do. So we got data on the in-house engineers and the secretaries. We thought the astronauts would probably do a little bit better, but on the whole be pretty much the same. But I was astounded that they really were very much better at these tasks. I didn't expect it at all.
One night we were in the lab with several astronauts, and I had my then 11-year-old son along, and introduced him to the astronauts. Because his father was into this, he was all excited about space. About two or three weeks later, there was a fire in the Apollo capsule during training that killed the whole crew. I recall that my son could not go to school for a whole week; he was so upset by this.
Gerovitch: Were the astronauts who died among those who had worked on the simulator?
Sheridan: Yes, several of those astronauts my son had met. It was quite a traumatic event for him, and of course for others too.
Gerovitch: Was the purpose of those simulations to train people to use the computer or to improve the design of the computer?
Sheridan: Both. Initially, it was primarily to verify design, to make sure that all the procedures were proper and that everything worked, and to familiarize the astronauts with what it would be like, so the astronauts could make comments: I don't like this placed here, I think this procedure looks strange, and so on. We were working with the astronauts in the design phase. Later on, there was some training, but final training really moved down to Houston, where eventually there were full training simulators. So primarily our early simulators were for design and verification purposes.
Gerovitch: Do you recall any specific comments the astronauts made about things they liked or didn't like?
Sheridan: I don't really. One of my jobs that seems very trivial now was the keypad layout for the Apollo computer. We are now accustomed to a telephone keypad, which looks like this:
1 2 3
4 5 6
7 8 9
At the time, that telephone keypad didn't exist. There was a rotary dial. The telephone company was just then experimenting with the pushbutton kind, but it was not in production. Some older calculators started from the bottom:
7 8 9
4 5 6
1 2 3
We had this question: which way should we arrange the keyboard? We did experiments and had the astronauts try it. It turned out it really didn't make much difference. They could handle it either way. In the end, we knew the telephone company was going to put out [the former arrangement]; they were closing in on this design, so we said we'd just go with that. That wasn't rocket science, but it was something that we had to do.
And there were some really fun aspects. There were questions whether the astronauts could properly line up in front of the sight while in the space suit and helmet and not move too much. If you were bouncing around or moving, you'd lose track of the stars or the Moon. You had to look at the edge of the Moon and then get the star and see how far above the horizon the star was. This looked like a wonderful excuse to take a ride in the zero-gravity airplane called the "vomit comet." We arranged for that, and again, while the test wasn't very profound - everything worked okay, there was no big problem - we had a wonderful time experiencing zero gravity. I took several rides. You
can push off and float through space, or do back flips. It gives you about 30-40 seconds of zero gravity. The aircraft dives to the ground, and then you're pinned to the floor of the airplane, and then it does a large parabola. You have 2 g's, then zero, then again 2 g's at the end. I had a good time. Like so much research: you do it partly because it's important, partly because it's fun. Maybe in some cases more because it's fun than because it's important.
Gerovitch: Did you work mostly on the keyboard and sighting devices, or did you touch on larger
issues, like allocation of function between human and machine on board?
Sheridan: The allocation of function at that time was not done by any great scientific method. It was done in committee discussions: Do you think we can have the astronaut do this or that? What would the difficulties be? We were starting from scratch; nobody had done these things before. We thought about airplanes, and we consulted people who knew about airplanes—and we extrapolated to space. The astronauts themselves were experienced pilots. They had a fairly large say in how things were designed, at least from the human factors point of view. That seemed realistic, because we had no experience with space. At least one could blame them if it didn't work! In large measure it was a matter of consulting with them on decisions concerning allocation questions and then setting up simulations to see if it would work.
Gerovitch: Do you recall any specific debates over these issues?
Sheridan: Off hand, not at the moment. I can recall some misunderstandings on the part of the public. It was portrayed in the press that the first pilot of the lunar lander was in total control of the vehicle as it sat down on the Moon. In fact, there was a very large amount of automatic control; many functions were automated. He was controlling some things, and the lunar lander was automatically controlling some other things.
So there were some efforts to clarify what was automatic and what was manual, and how these two were cooperating.
I was a consultant, and not in a management position, and I was not senior enough to be involved in big arguments, but there were surely arguments about how best to approach the Moon and how best to do certain kinds of navigation functions.
There were various programming bugs that were fixed at the last minute by clever programmers, I can remember that. Even after the flights were on their way, there were glitches, and certain of the staff would go over and run simulations all night long to come up with answers on how they could fool the computer by punching in certain buttons and get out of a particular problem. They were debugging the computer in real time, while the system was up there. That was not just the astronauts' human capability solving the problems; it was rather the astronauts and the engineers on the ground talking back and forth, cooperatively debugging. That was very important; it saved several missions, I am sure. It was a combination of a human in space and a human on the ground working together in real time.
Gerovitch: When you were trying to simulate these things, did you take into account the possibility that people on the ground would play such a major role, or were you thinking mostly in terms of the astronaut and the onboard equipment?
Sheridan: I don't think they anticipated people on the ground playing such an important role in real time.
Gerovitch: Did the discussions on the allocation of function that you were involved in take place mostly within Nevins' group, or did they include NASA people?
Sheridan: For glitches in the Apollo guidance system, it was Nevins' group and other computer people. Since Apollo, I've been involved with NASA in many other research projects, but as far as Apollo was concerned, my role did not go outside of being a consultant.
Gerovitch: Did you have a sense that the astronauts, for example, were pushing to have more functions assigned to them, and the engineers wanted more functions to be automated, or it was not so clear-cut?
Sheridan: I don't think it was clear-cut on Apollo itself. Several astronauts are around MIT, and you can talk to them with regard to the later flights. I have had three graduate students, one master's and two Ph.D. students, who became astronauts. I think this may actually be a record, which I am very proud of. The first one was Dan Tani. After he left MIT, he was active in a space company, then he became an astronaut, a mission specialist doing some experiments. The second was Mike Massimino. He did the EVA repairs on the recent Hubble Space Telescope servicing mission. The third is Nick Patrick, who has not yet been assigned a flight. When Nick was a graduate student, he was a very enthusiastic flyer, so I said to him: "I'll take you on as a graduate student if you teach me how to fly, because I've been pontificating about human factors in aviation and really have never flown an airplane." He said fine. So at age 66 or 67 I took flying lessons and got my license. That was great fun.
As for allocation of function in spaceflight, I think it is more or less clear. When you go into deep space, you have to go by robotics. In near space, there should be many automatic functions, on which the astronaut has override privileges. As I said, some of those overrides paid off in the Apollo project.
Even on an airplane, I don't think the passengers realize it, but the pilot is in constant communication by radio not only with traffic controllers but also with his own company. If something is wrong with the system on a commercial airplane, the pilot can talk to the engineers in the company, and they can try to figure out what to do about a problem. The same thing happens in a spacecraft, only more so. A lot of those people who sit in Houston in mission control are those kinds of people. They, in turn, can get in touch with engineers on particular systems. It is not just the astronaut and the space vehicle.
Whether a function is actually automated, or whether the plan is so nominal that you have an explicit list of procedures to follow, you can say it's automated either way. It is not a simple matter of being automated or not being automated. There is a gradual continuum between "I can't touch it" and "I'm doing everything."
In between, there is some procedure that one normally follows exactly, but if the operator is getting something back that he does not like, then he may have to break into a different system, and under some circumstances do something brand new.
These in-between stages, I think, people don't appreciate. Real interaction, you might say.
That's where the frontier is. Where do you automate so that the pilot has no control? Where do you give full manual control capability, with no help at all? To what extent do you use expert systems, which are just advisory? To what extent is the pilot just monitoring the automation, so that when the automation fails, he takes over? You have all these intermediate stages. And it's not even a single continuum. It is really a multidimensional continuum.
Gerovitch: Were you overall satisfied with the allocation of function on Apollo? Was there anything that you wish they would have done, but they did not?
Sheridan: To be honest, I was not senior enough to even worry about that. I just worked on what I was told to work on at the time, because I was in awe of these older people who were planning the mission. I think, on more recent flights a lot more could have been automated and has been.
In terms of recent history of manned spaceflight, I am critical. We are spending a lot of money putting astronauts up there doing things that could be automated. There is a kind of make-work aspect to sending astronauts.
No question, Apollo was breaking ground. For political reasons, we continue with a lot of astronaut functions that could be automated.
On those early flights, I don't feel I know enough and I did not know enough at the time about what could or could not be automated. I knew it was an extremely primitive computer, much more primitive than today's pocket calculator. The memory was very small. It is really amazing, looking back, what they did with that funny little computer.
Gerovitch: Did the work on Apollo influence your further research?
Sheridan: Yes, indirectly, I would say. I was not at the outset doing much robotics; it was straight human engineering, human factors work. But we started in telerobotics about then.
Gerovitch: In this report [Man-Machine Allocation in the Apollo Navigation, Guidance and Control System, by J.L. Nevins, I.S. Johnson, and T.B. Sheridan (Cambridge, Mass.: M.I.T. Instrumentation Laboratory, 1968)], the role of the astronaut on board is characterized as "supervisor of automatic systems" (p. 1). Did it later find a way into your concept of supervisory control?
Sheridan: As far as I remember, that was a pretty general report. I had been pushing the idea of supervisory control since 1962, and was probably putting a few words in people's mouths. It was an early vision of supervisory control. But you can look back even earlier and say that an elevator is supervisory control in some sense. You indicate what you want to happen, say the third floor, and then it goes there, and then it stops, and then you can say that you don't want to go to the third floor, but want to go to the basement. Supervisory control was characterized quite generally in that report.
But if you look now, for example at the so-called flight management system in a commercial aircraft, the supervisory control is far more sophisticated than it was in Apollo. In the modern aircraft the flight management computer can help you with navigation, help you get through weather, diagnose all kinds of systems. You can program the autopilot at 3 or 4 different levels. In terms of supervisory control, we've moved well beyond those early days of Apollo. And the newer spacecraft are the same: much more flexibility.
Gerovitch: This report concludes with the following statement: "By placing the crew in a more administrative or supervisory role and limiting their use to unique, carefully prescribed decision processes, the burden on the crew can be reduced significantly. At the very least, the crew can be released to perform other functions (scientific experiments, onboard data analysis) divorced from aircraft or spacecraft control" (p. 21). Was it your idea or someone else's?
Sheridan: The general idea was obvious at the time. Yes, I may have been one to push this term and popularize it, but I think it was in the air, or, to use the German word, Zeitgeist. It was an obvious direction if you just looked at what was happening with computers and control.
Some people have cited the failures you can get with this kind of automation. There is a British woman named Lisanne Bainbridge, who has written some wonderful papers on what she calls the "ironies of automation." It's a classic paper. I cite some of this material in my new book, Humans and Automation.
In nuclear power plants, people have been very anxious about how much to automate, and what to have the human do. If a system fails, and the human is not in the loop, is not actively participating, it can be very difficult to get involved sufficiently to really understand what is going on. So in aviation and in nuclear power they have been reluctant to go too far into supervisory control.
I was on a committee a few years ago looking at the air traffic control system of the United States, and we were asking ourselves the same question: How far do you go with automation? There is no simple answer to it. It is what we are going to be living with for a long time. But my feeling is that we are going to be edging in the automation direction continuously. As technology makes progress, we are going to automate more and more things, but at the same time our systems are becoming more complex. It's like Sisyphus' work: the task changes as you make progress.
Gerovitch: Did your personal experience as a pilot later on change some of your earlier perceptions of human-machine interaction?
Sheridan: First of all, I've thought a lot about piloting, but I am a very inexperienced pilot. I don't have very many hours. One of the terms in human factors is mental workload, and another is situation awareness. These are buzzwords. For a pilot, they become real. As a neophyte pilot I found myself making the same stupid mistakes that I have been talking about and writing about for many years. When you become overloaded, you think about certain things, and you forget about other things. When approaching an airport it is easy to suddenly find oneself at the wrong altitude for landing. It takes training, it takes conditioning. So, yes, I think the reality of the experience makes you appreciate certain things that you can't appreciate from theory.
Gerovitch: Russian specialists in human engineering often talk about the pilots' sense of being "one with the machine," when they feel movement with their own body, feel being part of the aircraft. That helps them control it better. Do you think this aspect is really important? Is it taken into account in the design of human-machine systems?
Sheridan: There have been words written about it. There is a visitor in the Aero Department named Kim Vicente, from Toronto, a young and very bright professor. He has written a book called Cognitive Work Analysis. He and other people talk a lot about ecological design. The word ecological refers to the environment. There was a psychologist named Gibson back in the 1950s, who analyzed how we perceive and take actions that relate to our genetic ecological needs. Other things in the environment we don't even notice if they are not relevant to our lives or to what we are doing at the time. People learn what's important. When you learn to drive a car, you learn what's important, and eventually you become one with the car, and even one with the environment. You just move through the environment naturally, Vicente might say ecologically. After enough training one does not think about pushing pedals or steering, but rather thinks at a higher level about intentions.
In some sense, I think, the brain is doing supervisory control; the muscles are doing local control. You are still controlling the car, but you relegate those detailed control tasks to a lower level of activity. There is a Danish professor named Jens Rasmussen, who talks about levels of control; his diagram is in my book Humans and Automation. On one level of control, one operates like a servomechanism; it's continuous. At another level, one operates as a supervisory controller, executing stored procedures. At a higher third level, one operates as an artist, or an inventor, or a creative thinker. Rasmussen explicates three levels of control: skill-based, rule-based, and knowledge-based. On the bottom
level, say, driving a car, you don't think in terms of steering, it just happens. At a next higher level, data comes in, and you access stored rules to stop at a stop sign or turn at the proper intersection. At a still higher level, you do things you never encountered before, analyze them afresh and be creative, like deciding where to eat or get gas. In most any kind of task we are operating on all three of these levels simultaneously.
Gerovitch: Let's go back to your recent trip to Moscow, to the Cosmonaut Training Center in Star City. Did you notice any difference in the way the Russians train their cosmonauts as compared to the U.S.?
Sheridan: They had a mock-up of the MIR space station; we could walk through it. They used it, I am sure, for training. One impression was its size. Everything was bigger in scale than American technology. The American technology all seemed to be miniaturized.
I think American space people have come to have a lot of respect for the Russians. They've done very nice work. The level of Russian technology in terms of miniaturization and sophistication has always probably tracked a little bit behind the Americans, but they have done impressively well with what they've had. Maybe, in some sense, the Russians had fewer accidents. At that meeting we were mostly talking about the prospect of doing some joint research, which never quite materialized.
One of the things I remember well was one cold winter day when they picked us up in a hotel in Moscow, and the car had a leak in the radiator and ran out of coolant. So the driver stopped at a local bar and bought a big bottle of vodka, unscrewed the cap, and dumped it in the radiator. I thought it was very funny. I am sure it worked just fine. It turned out that vodka was cheaper than radiator fluid. It is funny how one has these recollections.
Star City was quiet at the time. The economics were grim. They had just opened it to visitors. It had been under military control, and it had just come under civilian control. The uniforms were half-military, half-civilian at the time, and they were making jokes about their uniforms. This was in the early 1990s, maybe 1992-93.
Gerovitch: Back in the 1960s, did you have interest in the human engineering work done in the Soviet Union?
Sheridan: Yes, what I could find. I later went over with Jim Nevins to talk about collaboration on robotics. We met with their robotics people. I don't think anything came out of it. I knew a bit about the Institute of Psychology. There was a professor named Valerii Venda who came over here, and I helped him find a faculty job in Canada. On one of my trips to Moscow, he had been our host. I also visited Professor Frolov at the Mechanical Engineering Institute a couple of times. I think I've made 4 trips to Moscow all told.
I definitely got the sense that psychology was secondary to engineering and physical science. The people doing experimental work were at second-tier institutes.
Another time when I was over there, Gorbachev had a "peace conference," and he invited all sorts of people. It was shortly before he fell from power. The meeting was held in the Kremlin, in the great halls. I had to go to Canada to get a flight, because there were no American flights to Moscow. The Russians paid for our flights over, which was amazing. It was a wonderful party! I think I was invited through Frolov. I suppose different political figures were able to invite a certain number of guests. Jerry Wiesner, the former MIT president, went over then; there must have been six or eight people from MIT. Sakharov was there, and everybody was watching him. He was very well known, of course, very well respected, and he had been allowed to come back into the open science community. It was touching to see how well he was being treated at the time.
The great Russian theoreticians have always been well respected, but the human factors folks less so. Venda was one of the latter, and there was a professor named Lomov, who ran the Psychology Institute. I have had some correspondence with other Russian human factors people recently. Munipov, an ergonomics professor, just sent me his book.
Gerovitch: Did you get a sense that they were largely following the U.S. trends, or were they trying to do something different?
Sheridan: I got the sense that in pure theoretical areas the Russians were always ahead of the Americans. But when it came to engineering practice, the Americans were ahead, evident from just looking at our technology, laboratory instruments and things like that.
Part of it is just economics: we could afford things that Russia simply could not. In 1960, we went to a computer institute in Kiev, and the computers there were very primitive. During my trip to Star City, I saw some fairly modern IBM computers, small ones, that they were using, while big Soviet computers were gathering dust in the corner. It was clear that they were deferring to American technology when they could get their hands on it. By then they were getting basic computers from us easily, maybe through third parties. We had export restrictions on certain items and tried to keep them from being sold to Russia, but that was totally impossible.
Gerovitch: Thank you very much for the interview.