6.033--Computer System Engineering

Suggestions for classroom discussion of:

6.033 Authentication Logic applied to the web of trust.

by J. H. Saltzer, April 1996, revised April 13, 1999 to use sign/verify


Let us consider how to apply our simplified authentication logic to the
web of trust.

Alice has received a signed message from Jim, whom she doesn't
know. Representing Jim's public key as KPjim, his secret key as
KSjim, and the original message as M3, we will describe the
message that Alice received as {M3}KSjim, since it was signed
with Jim's secret key.
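To make the sign/verify notation concrete, here is a toy sketch in Python using deliberately tiny RSA numbers (p=61, q=53); the message text and parameter values are made up for illustration, and real systems use ~2048-bit moduli and properly padded hashes.

```python
import hashlib

# Toy RSA parameters, purely illustrative.
N = 61 * 53          # public modulus
E = 17               # public exponent: KPjim is the pair (N, E)
D = 413              # secret exponent: KSjim; 17 * 413 = 1 mod lcm(60, 52)

def sign(message: bytes, d: int) -> int:
    """{M}KS: exponentiate a short hash of M with the secret exponent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(h, d, N)

def verify(message: bytes, signature: int, e: int) -> bool:
    """Undo the exponentiation with the public exponent and compare."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, e, N) == h

m3 = b"Lunch on Tuesday? -- Jim"
sig = sign(m3, D)                        # Jim computes {M3}KSjim
assert verify(m3, sig, E)                # Alice checks with KPjim: True
assert not verify(m3, (sig + 1) % N, E)  # any other signature fails
```

Anyone holding KPjim (that is, N and E) can run verify, but producing a signature that verifies requires D.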

Alice looks on what she believes is Jim's home page and finds
what is claimed to be three copies of KPjim, each in a
certificate signed by Mary, Bob, and Frans, respectively.  The
only thing she knows about those three people is that, by
reputation, Frans is very careful and can be trusted.  She looks
at what appears to be Mary's home page and finds a public key
signed by Ann and by Bill.  On Bob's home page is a public key
signed by Mary and Bill.  On Frans' home page is a public key
signed by Bob, Jim, and Chris. Chris and Ann are good friends of
Alice; she knows their public keys and she trusts their
judgement.  [draw picture of web of knowledge]

Can Alice establish the authenticity of Jim's public key?  She
goes through the following analysis:  "I know Ann's key, so I can
verify the certificate signed by Ann on Mary's home page.  That
will give me Mary's key, which I can use to verify the
certificate that Mary signed on Jim's  home page.  In that
certificate is Jim's key, which I can use to verify that the
signed message from Jim is authentic.  So there is no risk that
someone has sent me a bogus home page and a bogus message that
consistently make the message appear to be signed by Jim."

Authentication logic will tell us just what hidden assumptions are
implicit in Alice's reasoning, and why Alice has made a mistake.  It
will also show us that the mistake is harmless. But to do that we will
have to use a slightly more general rule of which Reasoning rule #1 is
actually only a specific example.

Let's start with a simpler case:  Alice receives a message allegedly
signed by Ann, {M1}KSann.  Our immediate goal is to decide whether or not
Alice should believe the statement (Ann SAYS M1).

If Ann hands Alice a piece of paper on which Ann's public key is written,
we can describe that transaction in authentication logic as follows:

    Ann SAYS (KPann SPEAKS FOR Ann)
           (assuming KSann has not been compromised)

and a message M1 signed by KSann can be described as

    KSann SAYS (Ann SAYS M1)

When Alice receives the signed message she tries to verify it by performing

    verify( {M1}KSann, KPann)

If verify replies "True" AND the public-key cryptosystem they are
using is secure, then Alice can conclude [with a modest amount of
hand-waving; we really should use BAN more carefully to get this
result]:

   KPann SAYS (Ann SAYS M1)
            (assuming the public-key cryptosystem is secure)

so Alice believes that

1.  Ann SAYS (KPann SPEAKS FOR Ann)

2.  KPann SAYS (Ann SAYS M1)

Our reasoning rules aren't quite powerful enough to deal with this
situation yet; we must introduce a third one:  If someone says
someone else speaks for him, we believe it.  That is,

Reasoning Rule #3:

    x SAYS (y SPEAKS FOR x)
    implies (y SPEAKS FOR x)

We can apply reasoning rule #3 to Alice's first belief to conclude that

    KPann SPEAKS FOR Ann

and now we can apply reasoning rule #1 to this belief and Alice's second
belief to conclude that

   Ann SAYS M1

Alice is happy, because this belief is exactly what she hoped to verify.
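The two-step derivation above can be sketched as a small program. This is only a sketch: it models statements as nested tuples, and it models Reasoning Rule #1 in the form it is actually used here, namely "(Y SPEAKS FOR X) and Y SAYS (X SAYS S) together yield X SAYS S".

```python
# Statements as nested tuples (an improvised encoding, not official
# 6.033 notation):
#   ("says", P, S)        -- P SAYS S
#   ("speaksfor", Y, X)   -- Y SPEAKS FOR X

def rule1(speaks_for, statement):
    """Rule #1 as used here: from (Y SPEAKS FOR X) and
    Y SAYS (X SAYS S), conclude X SAYS S."""
    _, y, x = speaks_for
    kind, speaker, inner = statement
    assert kind == "says" and speaker == y
    assert inner[0] == "says" and inner[1] == x
    return inner                        # X SAYS S

def rule3(statement):
    """Rule #3: from X SAYS (Y SPEAKS FOR X), conclude Y SPEAKS FOR X."""
    kind, x, inner = statement
    assert kind == "says" and inner[0] == "speaksfor" and inner[2] == x
    return inner                        # Y SPEAKS FOR X

# Alice's two beliefs from the text:
belief1 = ("says", "Ann", ("speaksfor", "KPann", "Ann"))  # paper handoff
belief2 = ("says", "KPann", ("says", "Ann", "M1"))        # verified signature

kp_speaks_for_ann = rule3(belief1)           # KPann SPEAKS FOR Ann
conclusion = rule1(kp_speaks_for_ann, belief2)
assert conclusion == ("says", "Ann", "M1")   # Ann SAYS M1
```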

Now, suppose Alice receives a message allegedly signed by Mary (whom she
has never met): {M2}KSmary.  But remember that Mary's home page contains a
certificate signed by Ann.  What can Alice safely believe now?

The certificate {Mary, KPmary}KSann is just another signed message, this
one from Ann; it can therefore be described in authentication logic as follows:

    KSann SAYS (Ann SAYS (Mary SAYS (KPmary SPEAKS FOR Mary)))

and since KPann SPEAKS FOR Ann

    Ann SAYS (Mary SAYS (KPmary SPEAKS FOR Mary))
           (assuming that the cryptosystem is secure)

again using reasoning rule #1.  But now we run into a concern.  To
apply reasoning rule #1 a second time we need to believe that

    Ann SPEAKS FOR Mary

Alice must trust that Ann does not sign certificates recklessly.  So we
must add something to Alice's list of assumptions.  Since Alice has
told us that she trusts Ann, we can introduce this logic statement if
we explicitly attach the assumption:

    Ann SPEAKS FOR Mary
        (Assuming Ann is trustworthy)

Alice can now believe that (KPmary SPEAKS FOR Mary) and thus that
(Mary SAYS M2) using the same reasoning as in the case of the
message from Ann; she also has a list of assumptions that she is
making in coming to that belief.
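The bookkeeping in that last sentence can itself be sketched: each belief carries the set of assumptions it rests on, and a derived belief inherits all of them. The belief strings and assumption labels below are made up for illustration.

```python
# A minimal sketch of assumption tracking for the Ann-to-Mary chain.

def combine(*beliefs):
    """Union of the assumption sets behind several beliefs."""
    return set().union(*(assumptions for _, assumptions in beliefs))

# (belief, assumptions-it-rests-on) pairs:
kp_ann = ("KPann SPEAKS FOR Ann",  {"KSann not compromised"})
c4     = ("Ann SAYS (Mary SAYS (KPmary SPEAKS FOR Mary))",
          {"cryptosystem secure"})        # from verifying the certificate
ann_ok = ("Ann SPEAKS FOR Mary",   {"Ann is trustworthy"})

# Rules 1 and 3 let Alice derive Mary's key, dragging every
# assumption along with the conclusion:
kp_mary = ("KPmary SPEAKS FOR Mary", combine(kp_ann, c4, ann_ok))

assert kp_mary[1] == {"KSann not compromised",
                      "cryptosystem secure",
                      "Ann is trustworthy"}
```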

Now we are ready to tackle the original problem.  Alice received
{M3}KSjim.  One of the certificates on Jim's home page is signed
by Mary.  The same line of reasoning just used will take us to

    Mary SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))

but now the roadblock is more serious.  Alice doesn't know Mary, so
Alice has no idea how careful Mary is about signing random
certificates.  The assumption that Mary is careful requires a leap of
faith that Alice should not be willing to make.  In particular, if Mary
is under Lucifer's influence, it is possible that the certificate
actually says

   Mary SAYS (Jim SAYS (KPlucifer SPEAKS FOR Jim))

Jim may not know about this problem; Lucifer may have intercepted Jim's
home page and replaced it with one containing this unreliable certificate.

Alice's original analysis overlooked this point.

So Alice examines her collection of certificates:

C1:    Mary SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))
C2:    Bob SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))
C3:    Frans SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))
C4:    Ann SAYS (Mary SAYS (KPmary SPEAKS FOR Mary))
C5:    Bill SAYS (Mary SAYS (KPmary SPEAKS FOR Mary))
C6:    Mary SAYS (Bob SAYS (KPbob SPEAKS FOR Bob))
C7:    Bill SAYS (Bob SAYS (KPbob SPEAKS FOR Bob))
C8:    Bob SAYS (Frans SAYS (KPfrans SPEAKS FOR Frans))
C9:    Jim SAYS (Frans SAYS (KPfrans SPEAKS FOR Frans))
C10:   Chris SAYS (Frans SAYS (KPfrans SPEAKS FOR Frans))
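The certificate list can be viewed as a graph in which each signer vouches for a subject's key. A sketch of the search Alice is about to perform (principal names as plain strings; the sets of known keys and trusted people follow the story above, where Alice knows Chris's and Ann's keys and considers Frans trustworthy by reputation):

```python
from collections import deque

# (signer, subject) edges taken from certificates C1..C10.
certs = [("Mary", "Jim"), ("Bob", "Jim"), ("Frans", "Jim"),
         ("Ann", "Mary"), ("Bill", "Mary"),
         ("Mary", "Bob"), ("Bill", "Bob"),
         ("Bob", "Frans"), ("Jim", "Frans"), ("Chris", "Frans")]

known_keys = {"Chris", "Ann"}           # Alice already holds these keys
trusted    = {"Chris", "Ann", "Frans"}  # judged trustworthy by Alice

def key_chains(target):
    """Breadth-first search for certificate chains that start at a key
    Alice knows and in which every signer along the way is trusted."""
    queue = deque([s] for s in known_keys)
    chains = []
    while queue:
        chain = queue.popleft()
        if chain[-1] == target:
            chains.append(chain)
            continue
        if chain[-1] not in trusted:
            continue   # an untrusted signer's certificates prove nothing
        for signer, subject in certs:
            if signer == chain[-1] and subject not in chain:
                queue.append(chain + [subject])
    return chains

print(key_chains("Jim"))   # prints [['Chris', 'Frans', 'Jim']]
```

Note that the path through Ann dead-ends: Ann's certificate yields Mary's key, but Mary is not trusted, so her certificate for Jim proves nothing. The only acceptable chain runs Chris to Frans to Jim.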

Since Alice knows Chris and Ann, following the same line of reasoning
used before she can conclude

    Frans SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))
    Mary SAYS (Jim SAYS (KPjim SPEAKS FOR Jim))

And now we see a way for Alice to proceed.  Although Alice does not know
Frans, and has never seen his public key before, she believes his
reputation as being a careful, trustworthy person.  So she decides to
make the following explicit assumption:

    Frans SPEAKS FOR Jim
              (assuming Frans is trustworthy)

She can now conclude, using reasoning rules 1 and 3 as before:

     KPjim SPEAKS FOR Jim

and therefore

     Jim SAYS M3

The complete list of assumptions Alice had to make to establish the
authenticity of Jim's message is

           (assuming Frans is trustworthy)
           (assuming Chris is trustworthy)
           (assuming KSchris hasn't been compromised)
           (assuming KSfrans hasn't been compromised)
           (assuming KSjim hasn't been compromised)
           (assuming the public-key cryptosystem is secure)


Comments and suggestions: Saltzer@mit.edu