A Logic of Authentication by Burrows, Abadi and Needham

Timo Kyntaja
Department of Computer Science
Helsinki University of Technology
timo.kyntaja@vtt.fi

Abstract

A formal method for describing and analyzing authentication protocols
was first suggested in the late 1980s. Since then, development in the field
has moved on, extending and changing the semantics of the basic BAN logic.
This document gives an introduction to the BAN logic and discusses some of
the additions suggested to it.

In computer networks the communicating parties share not only
the medium, but also the set of rules on how to communicate.
These rules, or protocols, have become more and more important
in communication networks and distributed computing. However,
the growing reliance on communication protocols has
also raised the question of how to secure communication
against intruders. To solve this, a large number of cryptographic
protocols have been produced.

As in any protocol design, there are specific problems
in specifying cryptographic protocols. First, this document gives a
description of the general problem, concerning mostly authentication
protocols. In Chapter 3 the first effort to overcome these problems
is presented: the BAN logic. The following chapters discuss the
methods developed to improve the original logic.

In Chapter 6 there is a comparison of the methods
presented, and in Chapter 7 a brief summary of the
differences between the BAN logic and its successors; finally, the
references to the articles discussed in this document are
listed.

Cryptographic protocols were developed to counter various
attacks by intruders in computer networks. Nowadays, the prevailing view
is that the security of data should rely on the underlying cryptographic
technology, and that the protocols themselves should be open and available.

However, many protocols have been found to be vulnerable to attacks
that do not require breaking the encryption, but instead manipulate
the messages in the protocol to gain some advantage. The advantages
range from the compromise of confidentiality to the ability to
impersonate another user.

As there are different protocol design decisions appropriate to
different circumstances, there also exists a variety of authentication
protocols. Protocols often differ in their final states, and sometimes
they even depend on assumptions that one would not care to make.
To understand what is really accomplished with such a protocol,
a formal description method is needed.

The goal of the logic of authentication is to formally describe
the knowledge and the beliefs of the parties involved in authentication,
and the evolution of that knowledge and those beliefs as the
protocol is analyzed step by step. After the analysis, all the
final states of the protocol are set out.

The BAN logic was first published by Burrows, Abadi and Needham
(hence the name) in 1989. It was the first suggestion to formalise
the description and analysis of authentication protocols.
In [1] they gave an introduction to the method and
some clear examples of how it can be used to analyze existing protocols and
to bring out their flaws.

The conventional method for describing the protocols, by listing
the source, the destination and the contents symbolically, is
replaced with logical formulas. This representation aims to formulate
the protocol step in a way that all the essential information gained from
the step is shown.
This is called the idealisation
of the protocol. Furthermore, the idealised protocol is annotated
with assertions, which usually describe the beliefs of the principals
at that point of the protocol. At the same time the protocol is
analysed step by step using a set of inference rules.

The inference rules in the BAN logic are not defined in this document,
but are briefly discussed in Chapter 4 with an example.
The complete set of the rules may be found in [1].

The logic distinguishes several sorts of objects:
principals or communicating parties, encryption keys and logical
formulas. A logical formula is an idealised version of the original
message, and may sometimes be referred to as a logical statement.
Typically, the symbols
A, B and S denote principals; Kab,
Kas and Kbs denote shared keys; Ka, Kb
and Ks denote public keys, and Ka^(-1), Kb^(-1)
and Ks^(-1) denote the corresponding secret keys; Na,
Nb and Nc denote statements. The symbols P,
Q and R range over principals; X and Y
range over statements; K ranges over encryption keys.
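The sorts of objects above can be sketched as nested data structures. The following minimal Python sketch is illustrative only; the class names and the tuple-like encoding are this document's invention, not machinery from [1]:

```python
from dataclasses import dataclass

# Symbolic building blocks for BAN-style formulas.

@dataclass(frozen=True)
class SharedKey:      # "P and Q may communicate with the good key K"
    p: str
    q: str
    k: str

@dataclass(frozen=True)
class Encrypted:      # "{X}K" -- statement X encrypted under key K
    x: object
    k: str

@dataclass(frozen=True)
class Believes:       # "P believes X"
    p: str
    x: object

# Formulas nest: A believes that Kab is a good key shared with B.
f = Believes("A", SharedKey("A", "B", "Kab"))
print(f.p, f.x.k)     # A Kab
```

Frozen dataclasses are used so that formulas are hashable and can later be collected into sets of beliefs.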

In the BAN logic, time is divided in two epochs: the past and
the present. The present begins at the start of the run of the
protocol. All messages sent before this time belong to the past,
and the protocol should prevent such messages from being accepted
as recent.

An extension to this concept is given in [2],
and it is also described in this document in
Chapter 5.2.

A protocol is presented in steps, where each step includes sending
and receiving one message. A protocol step is usually written, e.g.,

    A -> B : {A, Kab}Kbs

This denotes that the principal A sends the message {A,
Kab} encrypted with the key Kbs, and that the principal
B receives it. If B knows the key Kbs,
the message tells that Kab is a key to communicate with
A.

An idealised protocol step would then be:

    A -> B : {A <-Kab-> B}Kbs

where the notation A <-Kab-> B states that Kab is a good key for
communication between A and B.
This means that A sends and B receives a message
encrypted with a shared key Kbs. The message itself includes
a shared key Kab intended to be used between the parties A
and B.
When the message is sent to B, it can be deduced that

    B sees {A <-Kab-> B}Kbs

indicating that the receiving principal is aware of the message
(B sees the message) and can act upon it.

The purpose of idealisation is to omit the parts of the message
that do not contribute to the beliefs of the recipient. The
idea of how much a part contributes may vary between different
logics, as shown in later chapters. An idealised protocol
in the BAN logic omits clear text parts, since they can
be forged and thus contribute nothing useful to the authentication
protocol.

The Needham-Schroeder protocol (with shared keys) is taken as
an example because it is well known and its weakness is
familiar.

The conventional description of the protocol is as follows:

    Message 1:  A -> S : A, B, Na
    Message 2:  S -> A : {Na, B, Kab, {Kab, A}Kbs}Kas
    Message 3:  A -> B : {Kab, A}Kbs
    Message 4:  B -> A : {Nb}Kab
    Message 5:  A -> B : {Nb - 1}Kab

The corresponding idealised protocol is then:

    Message 2:  S -> A : {Na, (A <-Kab-> B), fresh(A <-Kab-> B),
                          {A <-Kab-> B}Kbs}Kas
    Message 3:  A -> B : {A <-Kab-> B}Kbs
    Message 4:  B -> A : {Nb, (A <-Kab-> B)}Kab from B
    Message 5:  A -> B : {Nb, (A <-Kab-> B)}Kab from A

The first message is omitted, since in the BAN logic the contribution
of clear text to the logic is considered insignificant. Therefore,
the result is as if S acted spontaneously.

The second message consists of four formulas describing a nonce,
a key Kab, a statement that the key is fresh and an encrypted
key Kab. All this is encrypted using the key Kas.

Since the contents of the last three messages are quite obvious,
their description is omitted here. The only new notation is
the use of from, which distinguishes the last two messages
once the subtraction formula has been simplified away.

To analyze the protocol, the following initial assumptions are made:

    A believes A <-Kas-> S          S believes A <-Kas-> S
    B believes B <-Kbs-> S          S believes B <-Kbs-> S
    S believes A <-Kab-> B

    A believes (S controls A <-Kab-> B)
    A believes (S controls fresh(A <-Kab-> B))
    B believes (S controls A <-Kab-> B)

    A believes fresh(Na)            B believes fresh(Nb)
    S believes fresh(A <-Kab-> B)   B believes fresh(A <-Kab-> B)

The first five are the keys known to each principal. The next
three indicate what the clients trust the server to do. The last
four are about the beliefs in the freshness of the nonces and the
keys.

The last assumption is unusual: B believes in the freshness
of the key. The authors of the protocol did not even realise they
were making this assumption, and the protocol has been criticised
for it.

When further analyzing the protocol, the BAN logic uses inference
rules to make annotations and to deduce the development of the
beliefs. As an example of the most commonly used rules, consider
the following message-meaning rule.

For shared keys it is postulated:

    P believes Q <-K-> P,  P sees {X}K
    ----------------------------------
    P believes Q said X

That is, if P believes that the key K is shared
with Q and sees the message X encrypted under the key
K, then P believes that Q once said X.

Similarly, for public keys:

    P believes K is Q's public key,  P sees {X}K^(-1)
    -------------------------------------------------
    P believes Q said X

That is, if P believes that the key K is the public
key of Q and P sees the message X signed
with the corresponding secret key K^(-1), then P believes that Q
once said X.
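The shared-key message-meaning rule can be sketched as a small inference function. The tuple encoding of formulas below is hypothetical, invented for this sketch rather than taken from [1]:

```python
# Formula encoding (illustrative):
#   ("shares", P, Q, K)  -- P and Q share the good key K
#   ("enc", X, K)        -- message X encrypted under K
#   ("said", Q, X)       -- Q once said X

def message_meaning(beliefs, sees, p):
    """If P believes Q <-K-> P and P sees {X}K, derive P believes Q said X."""
    derived = set()
    for b in beliefs.get(p, set()):
        if b[0] != "shares":
            continue
        _, p1, q, k = b
        other = q if p1 == p else p1      # the partner sharing key k with p
        for m in sees.get(p, set()):
            if m[0] == "enc" and m[2] == k:
                derived.add(("said", other, m[1]))
    return derived

beliefs = {"P": {("shares", "P", "Q", "K")}}
sees = {"P": {("enc", "X", "K")}}
print(message_meaning(beliefs, sees, "P"))   # {('said', 'Q', 'X')}
```

The function only derives new statements; a full analyzer would add them to P's belief set and iterate the rules to a fixed point.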

Other rules used in the analysis are the nonce-verification rule,
the jurisdiction rule, and the rules stating that what holds for
message components, holds for the entire message. However, they
are not explained here for brevity. For more complete explanations, refer
to [1].

In protocol step one, A sends a clear text message
containing a nonce, which S sees. S replies, and
A sees the whole message:

    A sees {Na, (A <-Kab-> B), fresh(A <-Kab-> B), {A <-Kab-> B}Kbs}Kas

By the message-meaning rule, A believes that S once said the
contents. Since A knows Na to be fresh, the nonce-verification
rule gives:

    A believes S believes A <-Kab-> B
    A believes S believes fresh(A <-Kab-> B)

That is, A believes that S believes in the goodness
and the freshness of the key. After that, the jurisdiction rule gives:

    A believes A <-Kab-> B
    A believes fresh(A <-Kab-> B)

meaning that A believes them also.

Since A has also seen the part of the message encrypted
under the key Kbs that B shares with S, A can forward it to B. Now
B decrypts the message, and it can be obtained:

    B believes S said A <-Kab-> B

meaning that B believes that S once sent the key.
From this point B is unable to proceed unless the dubious
assumption made earlier is used. B finds nothing fresh in the message;
it simply assumes that the message is fresh.

Given the necessary assumption, we can proceed and get:

    B believes S believes A <-Kab-> B
    B believes A <-Kab-> B

The last two messages cause A and B to become convinced
that the other exists and is in possession of the key. The
final beliefs are:

    A believes A <-Kab-> B            B believes A <-Kab-> B
    A believes B believes A <-Kab-> B
    B believes A believes A <-Kab-> B

stating that both parties, A and B, believe
in the key, and furthermore each believes that the other believes
in it as well.

The problem with the protocol is that B has no interaction
with S. This can be corrected by starting the protocol
with B rather than A, a correction made by the original
authors in 1987.

The idealisation of the protocol is not defined unambiguously. For example,
in the protocol of this chapter the idealisation of the third message
is somewhat unclear.

In the protocol, the contents of the message are the principal A and
the session key between A and B. This is idealised to the
form of sending a good session key between the parties involved.

Now consider a case where the conventional description of the protocol step
would also include information about the receiving party, B in
this case. How would the idealisation differ from the one mentioned
earlier? The answer is that it does not differ at all. Therefore one
cannot always derive what was said in the original protocol step
just by examining the idealised version.

This obscurity is removed in the later chapters by adding the concept
of recognisability of the message and omitting the idealisation step.

Paper [2] suggests two enhancements. The first is that
since the original BAN logic notation is restricted to Symmetric Key Crypto
Systems (SKCS), it should be extended to cater for Public Key Crypto
Systems (PKCS). The second is an extension of the time concept.

In PKCS it is assumed that there exists a non-empty collection
of principals, each of which has one public and one private key.
The paper only considers signature properties, not secrecy.

The notations added to the basic BAN logic are as follows:

U possesses the good public key K; thus, there also exists some
unique secret key corresponding to K.

U possesses some good private key which is known only to U.

The formula X is signed with the private key belonging to U.

These notations make it possible to add new inference rules. Using
them one can conclude, e.g., that when believing in possessing
good public and secret keys, parties can communicate and trust
the contents of the received messages.

While the BAN logic divides time into two epochs and is only interested
in the present, [2] suggests otherwise: a formula
generated some time ago in the past might still be fresh.

The authors of [2] defined the concept of
durationstamps.
A timestamp added to a message usually states the time the message
was generated, and therefore it can be assumed that the message
was in some sense good at the time of generation. A
durationstamp may be used to tell the time interval during which
the creator of the message claims it to be good.

The following two notations are suggested:

X holds in the interval (t1, t2).

(t1, t2) denotes a good interval.

The principals are supposed to make explicit time references with respect
to their own local and unique system clocks, and the protocol
runs must be short. Obviously, when using durationstamps, one commits
to believing in the contents of the message during that time.
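A durationstamp check can be sketched as follows; the function names and the tuple encoding of a stamped formula are illustrative, not notation from [2]:

```python
# A formula X tagged with an interval (t1, t2) is accepted as holding
# only while the principal's local clock falls inside a good interval.

def good_interval(t1, t2):
    """An interval is good if it is non-empty and properly ordered."""
    return t1 < t2

def holds_at(stamped_formula, now):
    """stamped_formula = (X, t1, t2): X is claimed to hold in [t1, t2]."""
    x, t1, t2 = stamped_formula
    return good_interval(t1, t2) and t1 <= now <= t2

msg = ("A<-Kab->B", 100, 200)   # key claimed good from t=100 to t=200
print(holds_at(msg, 150))       # True: within the claimed interval
print(holds_at(msg, 250))       # False: the durationstamp has expired
```

Since each principal checks against its own local clock, the paper's requirement that protocol runs be short keeps clock drift from invalidating the check.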

When the assumptions given above hold, it can be deduced that the
message including the durationstamp is good.

The enhancements suggested in [3] aim to generalise some
approaches in the basic BAN logic and to give more accurate
concepts and definitions.

The notion of recognisability captures a receiver's
expectation of the contents of a message. The concept
not-originated-here means that a principal cannot always determine
whether a message was originated by himself or not. A distinction is also
made between possessing and believing. This makes it possible
to process several levels of trust in the analysis. Also, clear
text messages can in some cases be used to derive further conclusions.

In the following, the most important new notions are presented,
along with the differences between [1] and
[3].

P is told formula X.

P possesses, or is capable of processing, formula X.

P believes that formula X is recognisable. That is, P would recognise
X if P had certain expectations about the contents of
X before actually receiving X.

A star is placed in front of a formula when the formula was
not-originated-here. In [3] this is done by a parser, which uses a pattern
scanner to insert the stars at the correct places within protocol steps.
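The extended per-principal state of [3] can be sketched as a few sets plus a star marking; the class, the set names, and the flag-based marking are this document's illustration, not the paper's parser machinery:

```python
# Per-principal state: what P is told, possesses, and recognises,
# plus a "*" prefix for material that was not-originated-here.

class Principal:
    def __init__(self, name, expected):
        self.name = name
        self.told = set()
        self.possesses = set()
        self.recognises = set(expected)   # contents P expects in advance

    def receive(self, formula, originated_here=False):
        # The parser of [3] would insert the star; here it is a flag.
        marked = formula if originated_here else "*" + formula
        self.told.add(marked)
        self.possesses.add(formula)       # being told implies possession

b = Principal("B", expected={"A"})   # B recognises the principal name A
b.receive("Kab")                     # a random key: possessed, not recognised
print("Kab" in b.possesses)          # True
print("Kab" in b.recognises)         # False
print("*Kab" in b.told)              # True: not-originated-here
```

The gap between `possesses` and `recognises` is exactly what blocks the conclusions in messages three and four of the analysis that follows: B holds the key but has no expectation against which to identify it.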

As an example in [3] the Needham-Schroeder protocol is
analysed. The steps of the protocol are presented in this document
in the beginning of section 4. The complete analysis
is omitted; only the significant differences between [1]
and [3] are discussed here.

This method focuses on analyzing the development of the beliefs and
possessions of the principals during a protocol run. It also skips
the exact protocol idealisation and uses extensions to point out
the results of the protocol steps analysed.

The initial assumptions differ slightly from the ones in
[1]. Namely, the new concepts, possessing and
recognising, are used with keys and principals, respectively.

It can be concluded from the first message that S possesses the
information about the parties A and B and the nonce
Na, and can therefore proceed with the protocol correctly.

With message two, the results are the same in both methods, but the
reasoning mechanism is slightly different. Due to recognisability, A
has to have something in the message to identify it before
it can analyze its contents. In this case it is the principal
B, as stated in the initial assumptions. The rest of the
analysis of this message is identical.

When examining the third message, this method can conclude even less than
the basic BAN logic. The only derivation is that B possesses the key;
there is nothing fresh in the message. B cannot even be sure that S
once sent it, because there is nothing in the message that is recognisable
for B. Therefore B cannot even be sure that it did not once send the
message itself.

The same problem occurs with message number 4. The principal A now possesses
the nonce. Since it is merely some random number generated by B, it is not
recognised by A. Although the right key is used, A cannot be sure who
originated the message.

The last message gives nothing useful. Although B does recognise the
message, it cannot derive any conclusions about A's state, since it cannot
be sure who sent it.

This method clearly shows the problems with the Needham-Schroeder protocol,
and the authors give corrections similar to those in
[1]. The differences lie in the new concepts and in
stricter demands when analyzing the contents of the messages.

The other logics presented above apply only to monotonic
protocols. Monotonicity here means that the amount of belief can
only increase and nothing can be forgotten. [4] suggests
completing the logic by adding negation. A distinction
is also made between belief and knowledge.

A method for reasoning nonmonotonically about knowledge in cryptographic
protocols is proposed. The paper suggests the following two changes
to [1]:

idealisation is not required, and

each principal's actions are defined separately.

Whereas in the basic BAN logic protocol definition, idealisation and
analysis were separate tasks, here a method is presented that
combines the definition and the analysis of a protocol: when the
specification is complete, the analysis is complete.

The methods presented earlier were considered belief monotonic: once something
is believed, it is always believed. The same applies to knowledge.
In monotonic protocols it is impossible to nullify information,
although this might be important in some cases. For example, if
a session key is compromised, the belief that it is a good key
must be changed.

The difference between knowledge and belief is subtle. A principal
knows that his key is K. A principal believes
that a nonce is fresh. In general, a principal knows things like
secrets and data. The believing is connected with meta-data or
information about the data, such as freshness.

Protocol definition is handled by maintaining sets for every principal
in the protocol. Some of the sets are global and some are local
to each principal. Only the main purposes of the semantics are presented here.
The complete description can be found in [4].

The protocol specification starts with instantiating the global
assumptions of the protocol. The contents of the sets include
objects (e.g., principals and keys), properties and relations
between the principals.

The run of a protocol begins from some predefined action. It is
followed by checking the conditions related to the action, and
if they are true, the results are applied.

The sets evolve constantly during the run of the protocol.
Because of the nonmonotonicity, any of the sets may change as the
protocol proceeds. Updating the sets is the key operation when using the
method presented in [4].

At any state of the analysis, checking for protocol flaws is possible.
If such a flaw is found, the analysis is aborted. It is also possible
to simulate intruders as participants in the protocol, which can then
be checked for known attacks.
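A nonmonotonic update in the spirit of [4] can be sketched as an action whose results may remove items from a belief set as well as add them; the function and the tuple encoding are illustrative, not the paper's semantics:

```python
# Per-principal state is a mutable set; actions check a condition and,
# if it holds, apply add/remove results -- so beliefs can be retracted.

def apply_action(state, condition, results):
    """Run one protocol action: if its condition holds, apply results."""
    if condition(state):
        for op, item in results:          # op is "add" or "remove"
            if op == "add":
                state.add(item)
            else:
                state.discard(item)

beliefs_a = {("good_key", "Kab")}

# Key compromise nullifies the earlier belief -- impossible in a
# monotonic logic, where beliefs can only accumulate.
apply_action(
    beliefs_a,
    condition=lambda s: ("good_key", "Kab") in s,
    results=[("remove", ("good_key", "Kab"))],
)
print(("good_key", "Kab") in beliefs_a)   # False: the belief was retracted
```

A flaw check or an intruder step would be expressed the same way: as an action whose condition inspects the current sets, which is why analysis can be interleaved with the protocol run.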

The basic BAN logic introduces a formal description method for
idealising and analyzing cryptographic protocols. The analysis
is based on a set of inference rules and by applying them, the
analyst may deduce the real beliefs and the knowledge that a protocol
run produces.

There are some simplifications in [1] that can sometimes be
considered limitations. The successors of the basic BAN logic
make some efforts to improve the method.

The concept of time in [1] is divided into two epochs:
the present and the past. Therefore, fresh formulae are to be found only in
the present. In [2], freshness can be extended into
the past by using durationstamps; a formula can be considered fresh
between some predefined instants of time. The durationstamps can
be verified by using some shared time system. The paper [2]
also introduces notions intended specifically for PKCS authentication
protocols.

The third paper, [3], presents some new and more exact
definitions, such as recognisability, not-originated-here and distinguishing
between possessing and believing. Those concepts add a mechanism
to the basic BAN logic to remember and identify the steps of a
protocol run more precisely.

The last paper considered here, [4], introduces a whole
new semantics to handle the analysis of a protocol. It starts from accepting
the nonmonotonicity of knowledge and belief. The protocol and
the participants are declared using global and local sets. The
sets evolve during a protocol run and are updated using
inference rules. The paper also suggests that this method is suitable
for defining new protocols, as it shows the flaws of a protocol
during the analysis.

Today the situation has developed a bit further since the beginning
of the decade. The logics of authentication have been studied
and improvements have been made. But the main difficulty is still
the formulation of a general authentication protocol;
as the successors of the basic BAN logic have shown, excessive
simplification takes the logic away from real-world solutions.

It should also be noted that the examples used in the papers
discussed here involved quite commonly known protocols and their flaws.
How do the methods work with new protocols which have not yet been tested?
Unfortunately, this document does not include a study on this
matter, but it can be stated that the problem still exists today.

As one of the main goals is to develop a 'complete' logic,
not only for analysis but also for design purposes, a lot of work
is yet to be done. The methods shown here give a hint that, with
further development, the logic of authentication could be worth
using in this field.

The problem in cryptographic protocols has been the difficulty
in analyzing their faults. Many protocols have been found to be
vulnerable to attacks that do not require breaking the encryption,
but merely manipulation of the messages in the protocol to gain
some advantage. To establish a method for finding such flaws and
designing protocols to be more reliable, a new formalism was introduced.

The BAN logic [1] was the first suggestion to formalise
the description and the analysis of authentication protocols in 1989.
It was soon followed by works that expanded and adjusted the method.

K. Gaarder and E. Snekkenes [2] presented more exact ideas
about handling time and PKCS authentication protocols. L. Gong,
R. Needham and R. Yahalom [3] adjusted the reasoning
about belief and possession. The last work discussed in this document,
by A. D. Rubin and P. Honeyman [4], concerns
nonmonotonic protocols.