6.033--Computer System Engineering


Suggestions for classroom discussion

Ross J. Anderson. Why cryptosystems fail. Communications of the ACM 37, 11 (November, 1994) pages 32-40.

by J. H. Saltzer, April 4, 1996, updated March 20, 1997



Starting observation (from Corbato):  Many of the problems Anderson
describes arise because having secure *modules* doesn't mean that you
have a secure *system*.  The interactions, the procedures, the
management, and the overall implementation can all defeat the security
that may have been designed well into an individual module.

1.  The curtain of silence:  The cryptographic agencies follow the
principle of open design:  in designing a cryptosystem, they assume
that the enemy knows all the details of the cryptosystem except for the
current key.  But then, just to make sure, they try to keep the design
secret anyway.  Why?  (The real goal has nothing to do with "just to
make sure"; it is to avoid having your own weapons used against you.
Government cryptographic agencies are responsible both for creating
cryptosystems and also for attacking cryptosystems used by enemies.  If
a system created by this agency is as good as hoped, one hazard is that
the enemy will use it, or ideas from it, in their own cryptosystems, and
thereby make life difficult for the cryptanalysis part of the agency.)

2.  What is the practical consequence of the curtain of silence?  (1. If
successful, the enemy has to reinvent everything and may make some
exploitable mistakes in doing so.  2.  If the system is later
discovered to have an unexpected flaw, it may be possible to gracefully
withdraw it from the field and replace it with a better system, rather
than having to repair it on the fly.  3.  Whether or not successful, it
drastically reduces the number of people who review and analyze the
system; it is possible that problems will therefore be overlooked.  4.
It suppresses a feedback loop that would be expected to produce
improvements not just in the cryptosystem but in the way it is operated.)

3.  What is the practical effect of the difference between U.S. and
British law on presumption of correctness in ATM disputes?  (It
suppresses another useful error-correction feedback loop.  Since, in
the British system, unrecognized withdrawals are legally presumed to be
errors of the customer, there is no motivation to trace them down to
find their real cause.  In the U.S. system, where the bank is presumed
to be at fault, there is an incentive for the bank to investigate reports
and get to the bottom of them; those that have causes that can be fixed
on a cost-effective basis will gradually be eliminated.)

4.  On Athena you are told to choose an 8-character password that
contains upper- and lower-case letters and that isn't a word from the
dictionary.  For an ATM a four-digit Personal Identification Number (PIN)
is used.  Yet there seem to be more reports of hackers breaking into
computer accounts by guessing passwords than of hackers breaking into
ATM's by guessing PIN's.  Why?  (The computer password is often
vulnerable to an off-line dictionary attack.  But each guess of a PIN
requires keying that PIN in to the ATM, and the central system has a
chance to notice that the guess is wrong.  Three wrong guesses, and the
ATM confiscates the card.  So the four-digit PIN is probably more secure
than the 8-character UNIX password.)
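
A back-of-the-envelope comparison, sketched in Python, makes the point.
The dictionary size and the off-line trial rate below are assumed round
numbers, not measurements, chosen only to be roughly consistent with the
"few hours" figure mentioned in the next question.

    # On-line PIN guessing: three tries against a space of 10,000,
    # then the ATM keeps the card.
    pin_space = 10 ** 4
    guesses_before_confiscation = 3
    p_pin = guesses_before_confiscation / pin_space
    print(f"Chance of hitting the PIN before the card is kept: {p_pin:.2%}")   # 0.03%

    # Off-line dictionary attack: every candidate can be tried without
    # the system ever noticing.  Both figures are assumptions.
    dictionary_size = 1_000_000
    trials_per_second = 100
    hours = dictionary_size / trials_per_second / 3600
    print(f"Time to try the whole dictionary off-line: about {hours:.1f} hours")  # ~2.8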

5.  How can I launch a dictionary attack on someone's Athena password?
(Send Kerberos a request for a ticket-granting ticket claiming that you
are that person. Kerberos will happily send you back a packet sealed
under that person's secret key, and containing the timestamp that you
have specified.  Now, for each entry in the dictionary of
frequently-used passwords, run the entry through the string-to-key
one-way transformation, and try to decipher the packet.  If, upon
decipherment, you find that the packet contains the timestamp, you have
apparently used the correct key, which means that you have guessed the
password. The timestamp, because it can be recognized, is called
"verifiable plaintext". If the first attempt fails, try another
dictionary entry. It takes only a few hours to try all the entries in a
good-sized dictionary.)
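
A minimal sketch of that attack loop, in Python, under stated
assumptions: the string-to-key transformation and the cipher below are
toy stand-ins (SHA-256 and an XOR keystream), not the real Kerberos
routines; only the shape of the loop follows the description above.

    import hashlib

    def string_to_key(password):
        # Stand-in for Kerberos's one-way password-to-key transformation.
        return hashlib.sha256(password.encode()).digest()

    def toy_cipher(data, key):
        # Toy XOR keystream cipher, used for both sealing and deciphering;
        # a stand-in for the real block cipher, for illustration only.
        stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
        return bytes(a ^ b for a, b in zip(data, stream))

    def guess_password(sealed_packet, known_timestamp, dictionary):
        for candidate in dictionary:                    # frequently-used passwords
            key = string_to_key(candidate)
            plaintext = toy_cipher(sealed_packet, key)  # attempted decipherment
            # The timestamp we supplied is the "verifiable plaintext":
            # if it appears, the guessed password is almost certainly right.
            if known_timestamp in plaintext:
                return candidate
        return None                                     # dictionary exhausted

    # Usage: the reply packet is sealed under the key derived from the
    # user's (weak) password; the attacker recovers it entirely off-line.
    timestamp = b"19970320120000"
    reply = toy_cipher(b"ticket-granting ticket " + timestamp,
                       string_to_key("monkey"))
    print(guess_password(reply, timestamp, ["letmein", "monkey", "qwerty"]))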

6.  Can this problem be fixed?  (Yes.  The trick is to redesign the
Kerberos protocol to ensure that there is no verifiable plaintext in
that returned packet.  Eliminating verifiable plaintext while
maintaining freshness is a bit challenging; read the following paper
for details of one such design.  Li Gong, Mark A. Lomas, Roger M.
Needham, and Jerome H. Saltzer.  Protecting poorly chosen secrets from
guessing attacks.  IEEE Journal on Selected Areas in Communications 11,
5 (June, 1993) pages 648-656.)

7.  Back to our starting observation.  People sometimes suggest that you
can solve this problem by certifying things.  For example, one could have
a team certify that the ATM behaves according to its specs, with
a rate of failure that is tolerable.  (This approach loses because

a.  Ross establishes that the problems are usually not in the components
but in the integration with the environment.

b.  There is no reason to believe that the certified components can be
used by ordinary mortals.  Their interface properties may be too complex,
or have too many subtle properties.)

8.  So is there no role for component certification?  (There is, but one
must recognize that it is only one part of the system solution; you
can't stop there.)

9.  Anderson suggests that one needs an organizing principle for security
robustness.  He further suggests explicitness as that organizing
principle.  What does explicitness mean?  (List your assumptions about
the environment, list your specifications about what the system is
supposed to do for all possible input combinations.)

10.  What is the practical problem with this approach?  (1.  For
security, there is no way to know whether or not the list is complete.
2.  As the list grows, the complexity of verifying that
the implementation does what the list says grows even faster.)

11.  So is the organizing principle of any value?  (Yes.

  a.   explicitness flushes out fuzzy thinking.
  b.   explicitness exposes complexity.

Fuzzy thinking and complexity both undermine secure design.)

12.  What is the relation between explicitness and feedback?  (For
explicitness to work it is essential to also have a good feedback
mechanism that adds things to the list when problems in the field
indicate that something was left out.  It is hard to use this organizing
principle when the legal system or custom is set up in such a way as to
frustrate the feedback loop.  So here we have one of the important--but
implicit--messages of Anderson's paper.)

13.  What is the relation between this paper and the Therac paper?
(Paranoid design is applicable to two quite different areas:  secure
systems *and* safety-critical systems.  Perhaps it is applicable to all
systems.)

14.  Buzzword patrol:  "It also shows that the TCSEC program has a long way
to go." (p. 39, near bottom lef left column).  TCSEC?  (Acronym for
"Trusted Computer System Evaluation Criteria," referring to a defense
research program that tries to identify systems that are certifiably safe
places in which to store classified data.)


Comments and suggestions: Saltzer@mit.edu