Tom Goblick interview - 11/17/00
Summary: ATCAC formed in 1969 to look at ATC systems. Suggested an
upgrade to ATCRBS. In 1972, DABS/Super Beacon was developed.
Compatibility was an important issue in its development. Hard to find
a new frequency for DABS. 1030/1090 MHz band was overloaded. The
challenge for DABS was to overlay a new system onto an overloaded
channel. ATCRBS was ripped off from IFF. Before ATCRBS only radar
was used for civilian systems. Planes were just blips on a screen
with no identity. This caused an accident in which a plane crashed
into a mountain because an air traffic controller started giving
directions to one plane and then unknowingly continued giving the
directions to a different plane. Sidelobe suppression was built into
ATCRBS. Lincoln Lab solves a problem. But it solves it within cost
limitations and keeping in mind the requirements of the general
aviation community and compatibility issues. General aviation
community is cost sensitive.
[History of ATC]
1969 ATCAC (Air Traffic Control Advisory Committee) looked at all kinds
of options for collision avoidance, including satellites. Finally they
settled on an upgrade to the existing ATCRBS system.
1972 DABS/Super Beacon, called ADSEL (address selective) by the British, was created. In some ways it was inspired by SAGE, which used radar and beacons.
The compatibility issue was especially important because ground stations
were all over the place, and each needed its own access road,
electricity, phone, and other utilities. If location was determined by
triangulation, coverage by multiple sites was required.
If a new frequency was chosen, a new, clear frequency would have to be
found, and there were none. Instead, they chose to use the DME
frequency band.
Frequencies:
VHF 108-138 MHz
military 225-400 MHz
DME stations 960-1215 MHz
1030 and 1090 were chosen as the interrogation and reply frequencies.
It was a "de facto standard" that there be at least 60 MHz between them,
because of modulation done on the waveforms.
DABS/Mode S was developed because the 1030/1090 bands were getting overloaded. The reason for this was that a given transponder would be interrogated several times within the sweep of a beacon, about 50 times per transponder, generating many replies. The issue was scalability.
Also there was the "synchronous garble" problem. If the communication
between ground station and transponder could be reduced to ONE
interrogation and reply, traffic in the channel would be greatly
reduced. This requires a tracking system, because specific aircraft are
interrogated (addressed). "Monopulse" allows the ground station to hit
the aircraft with only one interrogation.
Especially with aircraft in holding patterns, it was impossible to
distinguish one aircraft from another because the pulses from their
replies would overlap.
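To put rough numbers on the scalability point, here is a minimal back-of-the-envelope sketch; the aircraft count is a made-up figure for illustration, while the ~50 replies per sweep is the figure quoted above.

```python
# Rough channel-load comparison between ATCRBS and a DABS-style
# addressed system. The aircraft count is an assumption for the
# sketch; the 50 replies/sweep figure comes from the interview.

AIRCRAFT_IN_VIEW = 200          # hypothetical aircraft in coverage
ATCRBS_REPLIES_PER_SWEEP = 50   # "about 50 times per transponder"
DABS_REPLIES_PER_SWEEP = 1      # one addressed interrogation/reply

atcrbs_load = AIRCRAFT_IN_VIEW * ATCRBS_REPLIES_PER_SWEEP
dabs_load = AIRCRAFT_IN_VIEW * DABS_REPLIES_PER_SWEEP

print(f"ATCRBS replies per sweep: {atcrbs_load}")              # 10000
print(f"DABS replies per sweep:   {dabs_load}")                # 200
print(f"Channel traffic reduced {atcrbs_load // dabs_load}x")  # 50x
```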
The design challenge for DABS was to overlay a new system onto an
overloaded channel, so that the system would work together with the old
system. In addition, as people moved to using the new system, the
traffic would be reduced. DABS also had to be compatible with IFF, the military ATC system, which is still in use.
Essentially ATCRBS was ripped off from IFF. In the 1950s IFF was using
the 1030-1090 band. In civil ATC only radar was used, there was no way
to identify which blip on the screen corresponded with which aircraft.
This caused a collision when a student pilot got on the channel and started following instructions that were being given to another aircraft; the air traffic controller unknowingly switched to directing the student instead of the other plane, and the other plane crashed into a mountain. After this the FAA decided they needed to be able to identify planes, and adopted IFF, which became the first generation of ATCRBS. The military had Modes 1, 2, 3, and 4. Mode 3 transmits identification info, and in ATCRBS this became Mode A. Mode 4 is encrypted; the military still uses it today. Mode C, which deals with altitude, came later. Now the FAA owns the 1030/1090 channel, but IFF still uses it, so sometimes there are problems.
Mode S was "really thoroughly wrung out here [at LL]." One time LL had
to prove to a bunch of people from Britain that it really worked, so
they put a camera on the tracking system and used optics and a mirror so
that the mirror would actually reflect the plane that they were
tracking. They then tried out the system on a 727 that happened to be
taking off from Logan. The system was so accurate that the mirror was
pinpointed to within 3 feet of the antenna on the 727, not just the
plane itself. This kind of visual presentation was really important
because it was convincing; it was a very powerful demonstration.
"It's all about the application of technology to engineering." LL has
to appreciate where the technology is, and where it's heading, and use
that to solve problems. In designing Mode S, LL proposed that the
transponder act like a modem, with lots of modes of data transmission.
Possibilities for data were the next waypoints of a flight path,
location data (like GPS), and other stuff. Mode S was designed for
"future compatibility" so it was very flexible. In fact, LL proposed an
air to air mode for DABS, but an FAA official said "absolutely not"
because they were focused on low cost at the time, "not 25 cents more
than necessary for the ATC job" not air to air.
Ironically, during this development there was a collision in
Indianapolis between a DC-9 (using IFR) and a Cherokee (using VFR) on a
cloudy day. One of the planes ducked into a cloud and came out and the
other couldn't avoid it, and the DC-9 lost its tail and crashed. After
this the FAA became concerned with air to air data transmission, and one
FAA official tried to design a system of his own that was "truly
garbage." In fact, there's a version of the DABS spec that has this
system in it, it's called "synchro DABS" and doesn't use squitter but
relies on precisely synchronized clocks, etc. and would have been really
expensive. This system was completely removed from the DABS spec, but DABS message formats were designed flexibly to accommodate new data links.
TCAS developed instead, using Mode S to do collision avoidance.
At this time, Dr. Goblick was an Assistant Group Leader in the ATC
group. There were two groups, one that dealt with sensors and one that
dealt with "systems" and knitting together the data and making it
useful.
Testing that was done with DABS was very complete, to the point of
pouring concrete to make a field to test on. BCAS was the predecessor
to TCAS. [Something about ACAS then BCAS?]
NATO sponsored an effort to replace IFF at one point, in fact there
existed a NATO standard, but when the USSR collapsed the funding dried
up.
When LL was trying to make DABS compatible with ATCRBS transponders,
they ran into a lot of problems with electromagnetic compatibility.
Ideally, the transponders should not respond to DABS signals. LL
thought they had designed DABS appropriately but they went out and
bought some $500 transponders that were popular with GA aircraft owners
and discovered a fatal flaw in the FAA's specs for ATCRBS transponders;
they said "lots of stuff about what they *should* do and nothing about
what they shouldn't do" and so some of the transponders were responding
to stuff like simple sine waves. At this point LL was very concerned
about making DABS so that it was usable by everyone so "it went out and
bought a whole bunch of GA transponders as well as some commercial ones"
and tested DABS with them all and concluded that there was no way to be
transparent to them all except to use sidelobe suppression.
Sidelobe suppression was built into the original ATCRBS system. What happened was that the sidelobes, as well as the main beam, could trigger transponder responses, so in order to suppress responses to the sidelobes [this part really needs the diagram he drew], ATCRBS transponders were designed so that they would only respond if the first pulse were at least 3 dB greater than the second pulse. If not, the transponder would ignore 25-35 microseconds of the transmission, starting with the first pulse. DABS used this by making the second pulse *always* greater in magnitude than the first pulse, and using the remaining 20 microseconds to transmit information. This was what dictated the 56/112
bits. The fastest flipping frequency was 4 MHz because really fast
flipping back and forth generates out of band spectra, and with the 4
MHz set, 112 bits was as many bits as they dared to push through. The
question was how many bits could be transmitted in the 20 microseconds?
Originally the devices were analog circuits and so 20 microseconds was
an upper limit. Now that they are digital more data could probably be
transmitted because the curve is cleaner; in retrospect 64/128 bits
would have made more sense for the short/long replies for DABS, but at
the time there was no way to know that 128 bits would be safe in the
future. It's all based on "what information people have when they're
making design decisions." After finishing up with this design they
tested DABS with automated tests in the normal transponder environment.
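As a minimal sketch of the suppression trick described above (the 3 dB threshold and the 25-35 microsecond window are the figures from the interview, not necessarily the spec values):

```python
# Toy model of how an ATCRBS transponder treats a two-pulse pair,
# per the interview's description. Threshold and window values are
# interview figures, treated here as illustrative assumptions.

SUPPRESSION_THRESHOLD_DB = 3.0    # first pulse must exceed second by this
SUPPRESSION_WINDOW_US = (25, 35)  # transponder stays quiet this long

def atcrbs_reaction(p1_db: float, p2_db: float) -> str:
    """Reply to a main-beam pair; suppress on anything else."""
    if p1_db - p2_db >= SUPPRESSION_THRESHOLD_DB:
        return "reply"      # main-beam ATCRBS interrogation
    return "suppress"       # sidelobe pair -- or a DABS interrogation

# Main-beam ATCRBS interrogation: P1 well above P2 -> reply.
print(atcrbs_reaction(p1_db=0.0, p2_db=-9.0))   # reply

# DABS deliberately makes the second pulse >= the first, so every
# ATCRBS transponder suppresses, leaving the quiet window free for
# the DABS data block.
print(atcrbs_reaction(p1_db=0.0, p2_db=0.0))    # suppress
```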
FAA wanted to use amplitude modulation instead of phase
modulation for both interrogation and reply, because they were
convinced it was difficult to demodulate phase modulation. Once LL
proved to them it could be done they were okay with it. In choosing
the modulation scheme, LL was basically limited to phase, amplitude,
and frequency modulation, because of cost limitations they couldn't
get too complicated. LL wanted to use phase modulation because it was
less susceptible to multipath effects; pulses that were phase shifted
would not be amplified. In the end, different modulation schemes were
used for uplink and downlink.
The transponder was "cleverly designed for its time." Basically what was needed was a high-powered transmitter. On the uplink the transponder demodulated DPSK; on the downlink it modulated using PAM, specifically binary PPM, which puts energy into every bit. They could not anticipate solid-state transmitters that had enough power, so they gave in to the FAA.
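A toy waveform sketch may make the distinction between the two schemes concrete; the chip counts and pulse conventions here are arbitrary illustration values, not the actual DABS waveform parameters.

```python
# Conceptual sketch: DPSK (uplink) vs. binary PPM (downlink).
# Sample counts and pulse conventions are assumptions for the demo.

SAMPLES_PER_BIT = 8

def ppm_encode(bits):
    """Binary PPM: every bit slot contains exactly one pulse (a 1 in
    the first half-slot, a 0 in the second), so every bit carries
    energy -- the 'energy for every bit' property noted above."""
    half = SAMPLES_PER_BIT // 2
    out = []
    for b in bits:
        pulse, gap = [1.0] * half, [0.0] * half
        out += (pulse + gap) if b else (gap + pulse)
    return out

def dpsk_encode(bits):
    """DPSK: information rides in phase *changes*, so the amplitude
    envelope is constant. A phase-shifted multipath echo does not
    masquerade as a stronger pulse, which is why LL favored it."""
    phase, out = 1.0, []
    for b in bits:
        if b:
            phase = -phase      # a 1 flips the carrier phase
        out += [phase] * SAMPLES_PER_BIT
    return out

msg = [1, 0, 1, 1, 0]
print(ppm_encode(msg))
print(dpsk_encode(msg))
```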
[About how the 1030/1090 band was secured.] DME, which originally owned the band, cleared out some frequencies around both for ATC; the military was more stubborn, but eventually FAA pressure forced them to reassign.
When designing the format of the data, they had to anticipate how many
modes there would be, because they marked a certain number of bits for
control - to specify which mode/kind of message was being sent. All of these kinds of considerations led to a very thick and detailed spec for DABS.
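A tiny illustration of why that anticipation mattered (the field and message widths below are assumptions for the sketch, not the DABS values): once n bits are reserved for control, no more than 2^n message formats can ever be defined.

```python
# Reserving control bits fixes the format space for the life of the
# system. The 5-bit field and 56-bit message are illustrative only.

CONTROL_BITS = 5
MESSAGE_BITS = 56

def control_field(message: int) -> int:
    """Read the control/format field from the top bits of a message."""
    return message >> (MESSAGE_BITS - CONTROL_BITS)

print(f"{CONTROL_BITS} control bits -> {2 ** CONTROL_BITS} formats")
print(control_field(0x8D4840D6202CC3))  # top 5 bits of a 56-bit word -> 17
```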
LL solves a problem. They design and evolve a set of requirements, looking at advanced technology and trying to take advantage of it within the cost limitations. They have to keep in mind the cost, the requirements
of the GA pilots, EM compatibility issues, etc. "What's put together
here is not what's going to go out in the field and be used" so LL has
to always keep in mind the audience. Dr. Goblick flew in order to
determine the pilot workload during a flight, as part of the official
research on DABS.
It's a good deal for the gov't: LL doesn't have any patents; they all belong to the gov't, which owns everything, gets a very detailed spec, and gets unbiased information. Commercial companies always have their own interests in mind. Because LL is non-profit, there's no incentive to use proprietary technology; they can *only* work for the government, unless there's a special exception, anyway.
Now there are lots of Mode S transponders and sensors deployed.
Industry always argues that it has R&D resources, but it has bias.
"To some people, a satellite is the answer, before they even hear the
question." Mode S was well designed and went on to become the
international standard; countries could hire contractors w/o worrying
about proprietary technology. "I give the FAA credit for coming to LL"
but they had a very narrow goal; "if we wanted to go and look at an air
to air mode we'd get beaten down." The FAA is technologically
conservative, and didn't understand error control coding at all, so they
accepted it as "magic" and didn't ask questions. [Some good stuff about
the error code here but I didn't get it.]
It wasn't an LL idea to overlap the parity and address bits to save 24 bits, but it was a good idea. An "erasure" channel was used in downlink coding. There were three options: 0, 1, X. The goal was to make the reply channel into an erasure channel. If there was an error or uncertainty, an X would be transmitted. If there were not too many X's, the correction code could fix the signal. The code was designed to correct bursts of 24 microseconds or shorter, so it could deal with overlapping ATCRBS replies, which were 22 microseconds including the framing pulses. These were turned into X's with an occasional 1 or 0. If an ATCRBS reply overlaps a pulse, even at a higher magnitude, the system can correct it, but if there are two overlapping ATCRBS replies, the code is saturated and the signal is not handled. However, the chance of there being two overlapping ATCRBS replies is much lower than the chance of there being one.
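A hedged sketch of the parity/address overlay idea follows. The 25-bit generator polynomial 0x1FFF409 is the one commonly cited for Mode S; the payload width and the data/address values are made up for the demonstration.

```python
# Sketch: overlay the 24-bit CRC parity on the aircraft address so
# one 24-bit field does double duty. Polynomial per common Mode S
# references; payload width and values are illustrative assumptions.

GENERATOR = 0x1FFF409   # 25-bit CRC generator polynomial
DATA_BITS = 32          # hypothetical payload width for this sketch

def crc24(data: int, nbits: int) -> int:
    """Plain long-division CRC; the remainder is 24 bits."""
    reg = data << 24                      # append 24 zero bits
    for i in range(nbits + 23, 23, -1):
        if reg & (1 << i):
            reg ^= GENERATOR << (i - 24)  # cancel the leading bit
    return reg & 0xFFFFFF

def make_ap(data: int, address: int) -> int:
    """Transmit parity XOR address instead of both fields."""
    return crc24(data, DATA_BITS) ^ address

def recover_address(data: int, ap: int) -> int:
    """Receiver recomputes parity; XOR against AP yields the address.
    Channel errors then show up as a corrupted (unknown) address."""
    return crc24(data, DATA_BITS) ^ ap

data, address = 0x8D4840D6, 0xABC123      # made-up demo values
ap = make_ap(data, address)
assert recover_address(data, ap) == address
print(f"AP field: {ap:06X}")
```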
Beacon compared to radar: radar is accurate to a fraction of a mile,
but beacon uses active transmitters, so the signal levels are much
higher. Reflections for radar lose energy along the path, and they lose
more on reflection because not all the energy reflects. Beacon is a
cooperative system.
The lesson from SAGE (1950s) is that doing computation on the data is difficult, but a slight increase in data quality makes everything easier; it's a non-linear relationship between data quality and the computation job (metaphorically speaking).
GA requirements on Mode S were mostly cost related - private owners
wanted the transponder to be affordable, like the $500 ATCRBS
transponders they were using before. LL did cost studies (Collins,
Bendix?) with 3-4 contractors, and in the end Mode S cost no more than
50% more than ATCRBS - it was 1.5x the cost of the King transponder
which was popular among GA pilots. However, even though LL thought Mode S could be done so that GA would accept it (~$5000), GA has not accepted it. GPS Squitter is now free with the transponder, so if GA owners installed Mode S they would get the benefit of GPS as well.
TCAS, etc. were all designed within Mode S capabilities, Mode S was kind
of like an infrastructure element that other things could be put into.
Airlines need two transponders (IPC) so there is that cost issue, but
there aren't a lot of engineering requirements from the commercial
airliners. If a new frequency were used the system requirements would double: more transponders, more everything. This would be very traumatic
for the system, so in doing design LL would have had to prove that the
problem couldn't have been solved using the old frequency. Instead,
they used the old frequency. Bill Harman worked on modeling the traffic
that would occur during the transition period. They had to prove it was
possible to transition to the new system, Mode S.
Currently hundreds of ground stations have transitioned, as well as commercial aircraft. The military has not moved to Mode S but is
coming to realize that Mode S is useful and good. They're starting to
install TCAS, they like GPS squitter, etc. The next generation of IFF
will incorporate Mode S. FAA is now trying to make Mode S appealing
rather than mandating it, but this incentive-based approach has been
largely a failure. TIS/TIS-B (broadcast) and weather information are
being offered to AOPA/GA as enticement to install Mode S, but it's not
really working. European organizations are much more successful in mandating things; they've mandated 8.33 kHz channel spacing for voice (instead of the old 25 kHz system) because they're running out of room, and it's the same with TCAS; they have the strongest Mode S mandate in the world.
Commercial airlines install things because if there is a collision and
the technology exists to avoid it, the company can be held liable. All
aircraft with over 30 seats have TCAS. New TCAS transponders all have
Mode S, and in the future may have GPS squitter.