Operational Testing in the Department of Defense

Dr. Phil Coyle, Center for Defense Information

April 26, 2001

Dr. Phil Coyle spoke to students, faculty, and associates of the MIT Security Studies Program on April 26 about the process of operational testing in the Department of Defense. Dr. Coyle currently works at the Center for Defense Information (CDI) in Washington, DC. Prior to joining CDI, Dr. Coyle worked at the Department of Energy and, from 1994 until 2000, served as the Pentagon's Director of Operational Test and Evaluation (DOT&E). In that capacity, Dr. Coyle was responsible for the operational testing of all major new weapons systems slated for procurement by America's military forces.

DOT&E was established in 1983 as a result of Congressional pressure on the Pentagon; it was designed to remedy a perceived lack of testing, and in particular an underemphasis on testing under realistic combat conditions. Without independent oversight, it was feared, Presidents or arms contractors could push through systems that were unnecessary, unreliable, or unsafe. DOT&E's Director is appointed by the President, confirmed by the Senate, and reports to both the Executive and Legislative branches, ensuring some degree of independence.

Defense Department testing is, according to Dr. Coyle, different from other types of experimental processes. Whereas in most research unexpected results are desirable and often lead to discoveries more important than those originally sought, in operational testing surprises are unwelcome. The development phase, for which DOT&E is not responsible, is a process of scientific discovery; the operational testing phase is intended only to confirm that development succeeded. Weapons systems are run through a series of tests jointly designed by DOT&E and the services themselves, and approved by DOT&E. These tests are "totally open book," which raises the question of why so many systems flunk them. In recent years, eighty percent of Army systems have failed to achieve fifty percent of their reliability goals, and nearly seventy percent of Air Force programs have had to be halted midway through operational testing; the Navy, by contrast, has largely solved the similar problems that once plagued the testing of its systems.

Dr. Coyle then discussed problems in a number of programs that are currently in, or have recently completed, operational testing. The F/A-18E "Super Hornet" was not as fast or as maneuverable as the Navy had hoped, but it proved fairly effective as a "truck for weapons." The C-17 airlifter performed well as a carrier of materiel but had a serious problem: it creates a giant swirling backdraft that could endanger paratroopers jumping from it. Given that delivering such paratroopers is one of the C-17's primary missions, this was no small flaw. The B-2 bomber provided an example of what can happen when operational testing is done under less than realistic conditions: after being tested in the dry, pleasant weather of California, the B-2 was deployed to Europe, where pieces of its stealth coating began falling off in the wind and rain. Dr. Coyle expects similar problems to befall the F-22 Raptor, which was pushed rapidly through the R&D phase by anxious members of Congress and the Pentagon.

Dr. Coyle concluded by explaining some of the reasons he thought these problems were so widespread. First, time and cost pressures consistently hang over developers and push them to be less thorough than they should be. Second, unrealistic requirements contribute to the high rates of failure in operational testing. Finally, Dr. Coyle pointed out that the Pentagon's use of fixed-price contracts for highly experimental projects encourages developers to cut corners and underinvest in development, since cost overruns are not reimbursed.

Rapporteur: Todd Stiefler
