1997 seminars

Subscription financed pay-TV channels such as BSkyB scramble
their broadcast signal and provide their subscribers with special
decoders to prevent unauthorized free access. Modern
smartcard-based pay-TV access control systems like VideoCrypt are
the first large-scale consumer application of both cryptography and
tamper-resistant processors. Their security aspects have been
scrutinized in the past five years by both professional pirate
decoder manufacturers and amateur hackers. Professional pirates
have developed reverse-engineering skills previously assumed to be
available only to major governments or corporations, while
undergraduate students have found surprisingly simple and cheap
ways to evade cryptographic protection schemes. There are valuable
lessons to be learned for the design of future large-scale
cryptographic applications.

We propose a new notion of fair and blind certification of
knowledge. Blindly certified knowledge has a verifiable structure
which can only be constructed by its exclusive owner with the help
of a certification authority (CA). Verifying knowledge possession
includes a simple structural check that convinces a verifier both
that the knowledge is properly certified and that it belongs
exclusively to the owner (prover). Unlike a blind
signature, in blind knowledge certification no visible signature of
the CA is available to the verifier and thus different sessions in
which the same knowledge is used can easily be made unlinkable. As
a result, a single piece of blindly certified knowledge can be reused
polynomially many times without being linked to the anonymous user. To
prevent anonymity misuse with impunity, we also add fairness to the
unlinkable anonymity by escrowing the knowledge to an off-line
third party.

A system has been developed that allows the rapid, secure and
private transmission of Nuclear Medicine and Positron Emission
Tomography (PET) image data with associated patient files between a
number of hospital sites facilitating clinical collaboration and
accelerated education of Nuclear Medicine physicians.

Participating departments maintain an mSQL database populated
with DICOM objects generated from Interfile or proprietary data.
The database contains patient and study information and references
to image data which may, or may not, be in DICOM format. An
internet navigator is used to query the database, and Multipurpose
Internet Mail Extensions (MIME) typing enables the navigator to download a
specific image, and launch a viewer appropriate to the data format
and viewing platform. The clinicians can then report the images and
submit a text report via either fax or over the internet. The
protocols are independent of the network infrastructure linking the
hospital sites, and data transfer has been secured through the
Secure Sockets Layer (SSL). ATM networking has made the
establishment of a dial-up telemedicine virtual LAN practical
offering enhanced security.
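
As a minimal illustration of the MIME-based viewer dispatch described above, the sketch below maps a MIME type to a viewer command. The type names and viewer commands are invented for illustration and are not the project's actual configuration.

```python
# Sketch of MIME-type-based viewer dispatch: the navigator downloads an
# object and launches a viewer appropriate to its declared type.
# MIME types and viewer names below are illustrative only.

VIEWERS = {
    "application/dicom":     "dicom_viewer",
    "application/interfile": "interfile_viewer",
}

def choose_viewer(mime_type, default="hex_dump"):
    """Return the viewer command registered for a MIME type."""
    return VIEWERS.get(mime_type, default)

print(choose_viewer("application/dicom"))   # dicom_viewer
```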

Nuclear Medicine and PET departments on three hospital sites are
currently involved in this project, and the scalable nature of this
solution makes further expansion practical.

In recent years the use of elliptic curves in cryptography has
become something of a "hot topic". This is because the discrete
logarithm problem on elliptic curves is in general harder than the
associated problem in the more traditional finite fields. In this
talk I shall outline how one should choose one's elliptic curve by
showing how certain attacks work. In particular I shall concentrate
on the recent attack on "trace one curves".
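
One curve-selection criterion the talk alludes to can be checked directly: a curve over F_p whose order equals p has trace one (an "anomalous" curve) and falls to the attack mentioned. The naive point-counting sketch below works for toy field sizes only; real curves use far larger primes and dedicated counting algorithms.

```python
def curve_order(p, a, b):
    """Count points on y^2 = x^3 + a*x + b over F_p (naive; small p only).
    An order equal to p signals a trace-one (anomalous) curve."""
    count = 1  # the point at infinity
    squares = {}
    for y in range(p):
        squares.setdefault(y * y % p, []).append(y)
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += len(squares.get(rhs, []))
    return count

p, a, b = 97, 2, 3                      # toy parameters for illustration
n = curve_order(p, a, b)
print(n, "anomalous -- avoid" if n == p else "not trace one")
```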

The Clark-Wilson security model may be used for systems where
security is enforced across both the operating system and the
application systems. Under this model, a secure system may be
viewed as a certified application running on top of a trusted
computing base (TCB). Certifying an application corresponds to
arguing (to a degree) its correctness; the TCB is expected to have
undergone some sort of security evaluation. A variety of existing
implementation models, for example multilevel security, have been
shown capable of upholding the Clark-Wilson TCB requirements.

We argue that, given an evaluated TCB, an application designer
should try to minimize the amount of security critical code that is
contained within the application and rely on the TCB to enforce
security wherever possible. Under the Clark-Wilson model, the TCB
is expected to support (enforce) static segregation of duties.
However it appears that dynamic segregation of duty must be
implemented within the application itself.

In this talk I will describe a framework in which dynamic
Clark-Wilson style segregation of duty policies can be expressed
and supported by the TCB. I will also describe how these policies
can be enforced under Unix and Multilevel TCBs.
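
As a hedged illustration of what dynamic segregation of duty means in practice (a toy policy, not the framework from the talk), the sketch below forbids any user from performing two different steps of the same transaction:

```python
class Transaction:
    """Toy dynamic segregation of duty: the set of forbidden users for a
    step depends on who has already acted in this transaction instance,
    so the constraint cannot be fixed statically in advance."""

    def __init__(self):
        self.performers = {}   # step name -> user who performed it

    def perform(self, step, user):
        already_acted = user in self.performers.values()
        if already_acted and self.performers.get(step) != user:
            raise PermissionError(f"{user} already performed another step")
        self.performers[step] = user

t = Transaction()
t.perform("submit_invoice", "alice")
t.perform("approve_payment", "bob")      # allowed: a different user
try:
    t.perform("issue_cheque", "alice")   # denied: alice acted earlier
except PermissionError as e:
    print("denied:", e)
```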

Internet browsers use security protocols to protect confidential
messages. An inductive analysis of TLS (a descendant of SSL 3.0)
has been performed using the theorem prover Isabelle. Proofs are
based on higher-order logic and make no assumptions concerning
beliefs or finiteness. All the obvious security goals can be
proved; session resumption appears to be secure even if old session
keys have been compromised. The analysis suggests modest changes to
simplify the protocol.

TLS, even at an abstract level, is much more complicated than
most protocols that researchers have verified. Session keys are
negotiated rather than distributed, some messages are optional, and
others may be sent at various times. The resources needed to verify
TLS are modest: the inductive approach scales up.
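
One way to see why resumption can tolerate old session-key compromise: session keys are derived afresh from the long-lived master secret and per-connection nonces, so leaking one connection's keys reveals nothing about another's. The sketch below is a simplified stand-in (HMAC-SHA256 replaces the actual TLS pseudo-random function):

```python
import hashlib
import hmac
import os

def session_keys(master_secret, client_random, server_random):
    """Simplified stand-in for the TLS PRF: fresh keying material from
    the long-lived master secret plus per-connection nonces."""
    seed = b"key expansion" + client_random + server_random
    return hmac.new(master_secret, seed, hashlib.sha256).digest()

master = os.urandom(48)
k1 = session_keys(master, os.urandom(32), os.urandom(32))  # original session
k2 = session_keys(master, os.urandom(32), os.urandom(32))  # resumed session
# Compromising k1 says nothing about k2 without the master secret.
assert k1 != k2
```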

The purpose of EDI is to process business data in an automated
way. This used to be handled on a bilateral basis between
contracting parties using leased lines etc., but when parties
without an initial contract do business over the Internet and/or
X.400, it is absolutely vital to secure the interchanges by what we
call security services: non-repudiation of origin/receipt and
confidentiality are the obvious choices.

Back in the early 1990s, the UN Security Joint Working Group came
up with a number of proposals for the integration of these services
at the EDIFACT syntax level, thus making them independent of the
transport mechanism in use. This implied that the EDI translators
would handle security, and in an automated fashion. The next step
was to bring the supporting public key infrastructure with
communicating CAs, LRAs and Directories into play, and a new
EDIFACT message, KEYMAN, was designed to handle this.

The EDIFACT certificate thus derived was far more business-minded
in its design than the original X.509 certificate. The PKI
and the underlying business model will be described, and we will
explain how to avoid blacklists. We believe this model is the right
one to take forward in electronic commerce, and this is exactly
what we are doing in large pilots such as SEMPER, BOLERO, DYP, and
ELSME.

Pachyderm is an experimental email system exhibiting the
following properties:

The Platform is the Web: all interaction with Pachyderm is
through a web browser; you can create, read and browse your email
from any web-connected computer.

Location Independence: there is no state locked in particular
client computers; you can move among client computers at will, and
your email state will still be available to you.

Bandwidth Tolerance: Pachyderm is designed to tolerate a wide
range of connectivity bandwidths, from high-speed local area
networks to dial-up modems.

Easy Data Retrieval: all access to your email is based on
queries on a full-text index; you can find the right message from
among tens of thousands without any need for manual classification
schemes such as folders.
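
The full-text retrieval property above can be sketched with a minimal inverted index (illustrative only; Pachyderm's actual indexer is not described here):

```python
from collections import defaultdict

index = defaultdict(set)   # word -> set of message ids

def add_message(msg_id, text):
    """Index every word of a message; no manual folders needed."""
    for word in text.lower().split():
        index[word].add(msg_id)

def query(*words):
    """Return ids of messages containing all query words."""
    sets = [index.get(w.lower(), set()) for w in words]
    return set.intersection(*sets) if sets else set()

add_message(1, "budget meeting moved to Friday")
add_message(2, "Friday lunch plans")
print(sorted(query("friday")))             # [1, 2]
print(sorted(query("friday", "meeting")))  # [1]
```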

The talk will outline the rationale for Pachyderm and describe
the structure of the system.

Emergent behaviours are those that result from interactions
between the behaviours of the components of a composite system. We
show that they play a role in the composite system's security
properties: they may give rise to vulnerabilities directly, or
result in the non-composability of security properties.

Using an emergent properties analysis, we can identify which
aspects of component behaviour lead to undesirable emergent
behaviour. This may enable us to strengthen individual systems so
that desired properties compose. We can also use this approach to
identify, a priori, when non-composable properties will be violated
within a composite system.

We have shown how to apply this approach to several toy examples
and are currently using it to analyse a Network Reference
Monitor.

In 1978, Rivest, Shamir and Adleman introduced the public-key
cryptosystem RSA. Thereafter, it was extended to Lucas sequences
and elliptic curves. In this talk, we will analyse the security of
these cryptosystems in given contexts. In particular, some major
known attacks against RSA-type systems will be reviewed. We will
also see how these attacks can be avoided.
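
As one concrete example of a well-known attack of the kind the talk reviews: if the same message is encrypted under two coprime public exponents sharing a modulus, the plaintext can be recovered without factoring (the common modulus attack). A toy-parameter sketch:

```python
from math import gcd

def common_modulus_attack(n, e1, e2, c1, c2):
    """Recover m from two encryptions of the same message under coprime
    exponents e1, e2 with a shared modulus n (common modulus attack)."""
    assert gcd(e1, e2) == 1

    def egcd(a, b):
        """Extended gcd: returns (g, x, y) with x*a + y*b == g."""
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    _, a, b = egcd(e1, e2)                 # a*e1 + b*e2 == 1
    part1 = pow(c1, a, n) if a >= 0 else pow(pow(c1, -1, n), -a, n)
    part2 = pow(c2, b, n) if b >= 0 else pow(pow(c2, -1, n), -b, n)
    return part1 * part2 % n               # c1^a * c2^b == m^(a*e1 + b*e2) == m

n = 3233                                   # 61 * 53, toy modulus
m = 42
c1, c2 = pow(m, 7, n), pow(m, 11, n)
print(common_modulus_attack(n, 7, 11, c1, c2))   # 42
```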

Visual cryptography was introduced by Naor and Shamir as a way
to allow fast visual decryption of graphic objects. No decryption
device is required; instead decryption is done by fitting slides
together. Several schemes were suggested which allow users to share
secret pictures (and text) in an information theoretically secure
way, so that deciphering is easy if all the shares are given, but
it is impossible if one of them is missing. The drawbacks of all
the existing methods are the exponentially small contrast of the
deciphered picture as the number of shares increases, and the
reduction in quality due to each pixel being represented by many
smaller (black and white) subpixels.
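
The classical 2-out-of-2 construction, and the contrast loss it entails, can be sketched as follows (each pixel expands to two subpixels per share; stacking slides corresponds to a bitwise OR):

```python
import random

def share_pixel(bit):
    """Naor-Shamir 2-of-2 sketch. Each source pixel becomes two subpixels
    per share: identical patterns for a white pixel (bit 0), complementary
    patterns for a black pixel (bit 1). Either share alone is a uniformly
    random pattern and so reveals nothing."""
    s1 = random.choice([(0, 1), (1, 0)])
    s2 = s1 if bit == 0 else tuple(1 - x for x in s1)
    return s1, s2

def stack(s1, s2):
    """Stacking transparencies: a subpixel is black if either share is."""
    return tuple(a | b for a, b in zip(s1, s2))

s1, s2 = share_pixel(1)   # a black pixel
print(stack(s1, s2))      # (1, 1): fully black
s1, s2 = share_pixel(0)   # a white pixel
print(stack(s1, s2))      # one black, one white subpixel: 50% grey, not white
```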

In this talk we suggest new visual cryptographic schemes based
on light polarization which are better than the optimal existing
schemes. Then we present an ultimate scheme which does not
subdivide pixels, and in which the contrast is independent of the
number of shares.

Java is typically compiled into an intermediate language, which
we call JVML. The Java Virtual Machine interprets JVML code.
Because mobile JVML code is not always trusted, a bytecode verifier
enforces static constraints that prevent various dynamic errors.
Given the importance of the bytecode verifier to security, its
current descriptions are inadequate. We consider the use of typing
rules to describe the bytecode verifier because they are more
precise than prose, clearer than code, and easier to reason about
than either. We explore the viability of this approach by
developing a sound type system for a subset of JVML. This subset,
despite its small size, is interesting because it includes JVML
subroutines, a source of substantial difficulty for the bytecode
verifier. (Joint work with Raymie Stata.)
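
The flavour of such typing rules can be conveyed by a toy checker over a JVML-like fragment: it tracks the type of each operand-stack slot and rejects ill-typed code before it runs. The instruction handling is simplified for illustration and omits subroutines, the hard case noted above.

```python
def verify(code):
    """Toy abstract interpreter in the spirit of bytecode verification:
    track stack-slot types and reject programs that could go wrong."""
    stack = []
    for op, *args in code:
        if op == "iconst":
            stack.append("int")
        elif op == "aconst_null":
            stack.append("ref")
        elif op == "iadd":
            if stack[-2:] != ["int", "int"]:
                return False          # type error caught statically
            stack.pop(); stack.pop()
            stack.append("int")
        else:
            return False              # unknown instruction: reject
    return True

ok  = [("iconst", 1), ("iconst", 2), ("iadd",)]
bad = [("iconst", 1), ("aconst_null",), ("iadd",)]   # adds an int to a reference
print(verify(ok), verify(bad))   # True False
```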

The problem of optimizing combinatorial problems or breaking
cryptographic codes led to several novel paradigms for carrying out
such a massively parallel random search, including quantum and DNA
computers. In this talk, the speaker will propose a new paradigm,
which is based on a simple and easy to implement idea.

The speaker will use some props to demonstrate the new paradigm
in real time. It is accessible to everyone, although some
familiarity with the structure of DES-like schemes helps to
motivate the research.

There are two distinct areas of work in mobility: "mobile
computing", concerning computation that is carried out in mobile
devices, and "mobile computation", concerning mobile code that
moves between devices. These distinctions are destined to vanish.
We aim to describe all aspects of mobility within a single
framework that encompasses mobile agents, the ambients where agents
interact and the mobility of the ambients themselves.

The main difficulty with mobile computation is not in mobility
per se, but in the crossing of administrative domains. Mobile
programs must be equipped to navigate a hierarchy of domains, at
every step obtaining authorization to move further. Therefore, at
the most fundamental level we need to capture notions of locations,
of mobility and of authorization to move.

We identify "mobile ambients" as a fundamental abstraction that
generalizes both dynamic agents and the static domains they must
cross. From a formal point of view we develop a simple but
computationally powerful calculus that directly embodies domains
and mobility (and little else). The calculus forms the basis of a
small language and Java library. We demonstrate the expressiveness
of the approach through a series of examples.
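
The calculus's basic movement rule, n[in m.P | Q] | m[R] -> m[n[P | Q] | R], can be modelled as a toy reduction over nested ambients. This is an illustrative fragment only, not the full calculus; the ambient names are invented.

```python
class Ambient:
    """A named location that can contain other ambients and hold
    pending capabilities such as 'in m' (enter sibling m)."""
    def __init__(self, name, children=None, caps=None):
        self.name = name
        self.children = children or []
        self.caps = caps or []

def step(siblings):
    """Apply one 'in' reduction among sibling ambients, if possible:
    an ambient holding 'in m' moves inside a sibling named m."""
    for n in siblings:
        for cap in n.caps:
            kind, target = cap.split()
            if kind != "in":
                continue
            for m in siblings:
                if m.name == target and m is not n:
                    n.caps.remove(cap)       # capability is consumed
                    siblings.remove(n)
                    m.children.append(n)     # n is now inside m
                    return True
    return False

n = Ambient("agent", caps=["in firewall"])   # agent authorized to enter
m = Ambient("firewall")
top = [n, m]
step(top)
print([a.name for a in top], [c.name for c in m.children])  # ['firewall'] ['agent']
```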

For a decade, people have viewed the binding of names to keys as
the only problem. I will argue that names are almost meaningless in
the context of the global Internet, and that we need rather to
transfer authorisation securely from one entity to another. I will
discuss two new attempts to do this, namely SDSI and SPKI.

24 March 14:15 Steganography and copyright marking / Ross
Anderson, University of Cambridge

One of the fastest growing areas of security research is
steganography - hiding information in other information. One
example is hiding encrypted copyright marks in digital images; the
same ideas can be applied to other problems such as annotation and
indexing, and to other kinds of object such as digital audio.

Research in this subject is highly interdisciplinary, and a
number of people with backgrounds in graphics, signals processing
and statistics have expressed interest in getting involved. I will
therefore be giving a brief tutorial on the subject and outlining
where the interesting areas of research appear to be.

We survey some of the major security flaws found in Java-enabled
web browsers from Sun, Netscape, and Microsoft over the last 15
months. While numerous issues have been found throughout the
system, the worst problems come from type safety failures in the
implementations that allow an attacker to run arbitrary machine
code. Several of the type safety failures can be traced to dynamic
linking. We examine a formal model of dynamic linking, and find
some necessary conditions for safety.

This seminar is an illustrated and light-hearted introduction to
vectors of insecurity within modern computer systems, focussing
especially on Unix-like operating systems (i.e. don't expect the
speaker to bang on about viruses for very long) and how it seems
that the same problems come up time and again in new guises.

The presentation should be suitable for programmers, CS
students, and systems administrators of all grades, especially
those with a programming bent, and will try to educate the audience
away from some of the more grotesque mistakes of systems
programming.

The invention of public-key cryptography led to the idea of
non-repudiation protocols, which are intended to enable the
resolution of disputes about previous protocol exchanges. Later
work on non-repudiation has shown that the means by which
cryptographic keys are managed is just as important as the
cryptographic algorithm, and public-key cryptography is neither
necessary nor sufficient.

25 February The composition of
security properties / Aris Zakinthinos,
University of Cambridge

Room TP4, Computer Laboratory

The ability to design and construct complex systems that enforce
a security property presupposes an understanding of security
properties themselves, as well as the security properties of a
system that is composed from secure components. This talk will
present a general theory of possibilistic security properties and
of system composition for such properties.

It has been demonstrated that security properties do not fit
into the safety/liveness framework defined by Alpern and Schneider.
That is, a security property cannot be expressed as a property of
a trace. (A trace is an ordered stream of events that can occur at
the inputs and outputs of a component). However, we demonstrate
that security properties can be expressed as a predicate over the
set of all traces of a component that are consistent with a given
low-level view of a trace.
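
This definition can be made concrete with a toy two-trace system in which a high-level event never affects the low-level view (event names are invented for illustration):

```python
# A trace is a tuple of events; the low-level view keeps only "low"
# events. A security property is then a predicate over the set of
# system traces consistent with a given low-level view.

LOW = {"lo_in", "lo_out"}

def low_view(trace):
    """What a low-level observer sees of a trace."""
    return tuple(e for e in trace if e in LOW)

def consistent_traces(system, trace):
    """All system traces a low observer cannot distinguish from `trace`."""
    return {t for t in system if low_view(t) == low_view(trace)}

# The high event "hi" never changes the low outputs, so both traces
# share one low-level view: the observer learns nothing about "hi".
system = {("hi", "lo_in", "lo_out"), ("lo_in", "lo_out")}
t = ("lo_in", "lo_out")
print(len(consistent_traces(system, t)))   # 2: the views coincide
```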

The issue of composition with feedback has been the focus of
much research; we demonstrate that the problem with feedback
composition is related to the synchronization of the communication
events between the various components. This allows us to provide
necessary and sufficient conditions for determining when feedback
composition will fail for Generalized Noninterference and for all
properties stronger than Generalized Noninterference.

An understanding of what a security property is allows us to
provide a method of constructing a system that satisfies a desired
security property. This analysis yields a condition that can be
used to determine how a property may emerge under composition.

SDSI is a new distributed security infrastructure, joint work by
Butler Lampson and Ron Rivest. It has a simple public-key
infrastructure that emphasizes linked local name spaces rather than
a hierarchical global name space. SDSI makes it easy to define
groups and issue group-membership certificates. Groups provide
simple, clear terminology for defining access control lists and
security policies.
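
Linked local names can be sketched as nested lookups: each principal's namespace binds a name to a key or to a further name in someone else's namespace. The keys, names, and group below are invented for illustration.

```python
# Each principal (identified by its key) has a purely local namespace.
# A compound name like ("bob", "sister") means "bob's sister": resolve
# "bob" locally, then resolve "sister" in bob's namespace.

namespaces = {
    "K_alice": {"bob": "K_bob", "friends": ["bob", ("bob", "sister")]},
    "K_bob":   {"sister": "K_carol"},
}

def resolve(key, name):
    """Resolve a (possibly linked) local name to a key."""
    if isinstance(name, tuple):
        head, rest = name
        return resolve(resolve(key, head), rest)
    return namespaces[key][name]

def members(key, group):
    """Expand a group definition into the set of member keys."""
    return {resolve(key, n) for n in namespaces[key][group]}

print(resolve("K_alice", ("bob", "sister")))   # K_carol
print(sorted(members("K_alice", "friends")))   # ['K_bob', 'K_carol']
```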

18 February Trust / Perri 6,
Demos

Room TP4, Computer Laboratory

Unless, as citizens and consumers, we believe that we can trust
the government and business organisations that keep personal
information about us to treat that information confidentially and
to use it only in ways that we consider to be in our interest, the
``information society'' will be a conflict-ridden, expensive and
litigious affair.

In this presentation, I want to explore how we can understand
better what determines trust in general. I then go on to look more
specifically at how trust operates in connection with privacy, and I will
conclude by setting out some strategies for policy in the fields of
data protection, trusted third parties and independence of subject
access to buttress trust and trustworthiness in the digital
age.

7 February The breaking of the
German Lorenz World War 2 cypher: Max Newman's contribution and
Colossus / Tony Sale, President, The Bletchley Park
Trust

To mark the centenary of the birth of M.H.A. Newman (1897-1984),
St John's College are hosting a talk by Tony Sale, President of the
Bletchley Park Trust.

Tony Sale is widely known for his achievement in constructing a
working replica of the wartime Colossus computer.

The centenary will also be marked by an exhibition about Max
Newman's life and work, to be held in the College Library from
February 7th until Easter.

4 February Security based on error
correcting codes and its application to pay-tv / Jean-Bernard
Fischer, Thomson Consumer Electronics, France

Room TP4, Computer Laboratory

We build an original cryptographic toolbox based on
error-correcting codes. Having studied the difficulty of the
syndrome decoding problem, we define a one-way function and a
general setting for its use. Our results allow us to prove the
security of Stern's authentication protocol SD; we also construct a
provably secure pseudo-random generator and a very efficient and
versatile keyed one-way function. These algorithms are used to
provide end-to-end security for an analog pay-tv system using smart
cards, similar to VideoCrypt.
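
The one-way function at the heart of this toolbox maps a low-weight vector x to its syndrome H·x over GF(2); inverting it is the NP-hard syndrome decoding problem. The sketch below uses toy parameters for illustration; real systems use far larger codes.

```python
import random

def syndrome(H, x):
    """Candidate one-way function from coding theory: f(x) = H*x over
    GF(2). For x of small Hamming weight, recovering x from the
    syndrome is the syndrome decoding problem."""
    n = len(H[0])
    return tuple(sum(row[j] & x[j] for j in range(n)) % 2 for row in H)

random.seed(0)
r, n, w = 8, 16, 3                      # r x n parity-check matrix, weight-w input
H = [[random.randint(0, 1) for _ in range(n)] for _ in range(r)]
x = [0] * n
for i in random.sample(range(n), w):    # secret low-weight vector
    x[i] = 1
print(syndrome(H, x))                   # easy to compute, hard to invert
```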

28 January Mechanised proofs for a
recursive authentication protocol / Larry Paulson, University
of Cambridge

Room TP4, Computer Laboratory

A novel protocol has been formally analyzed using the prover
Isabelle/HOL, following the inductive approach described in the
speaker's earlier work. A single run of the protocol delivers
session keys to any number of agents, allowing neighbours to
perform mutual authentication. The basic security theorem states
that session keys are correctly delivered to adjacent pairs of
honest agents, even if other agents in the chain are compromised.
The complexity of the protocol caused modest difficulties in the
specification and proofs, but symmetries in the protocol reduced
the number of separate theorems to prove.