"TC" writes:
I understand public key encryption, but I do not understand the
terminology surrounding security certificates.

For starters, what exactly is a certificate? On the Microsoft website,
I've read that it is a private key / public key pair. Verisign has told
me, however, that a certificate is a public key signed with a
certification authority's private key. Which is it?

basically there is technology with asymmetric encryption (one key
encrypts, another key decrypts ... they form a key pair).

public key encryption involves a business process for doing things
like digital signature authentication. the business process designates
one of the keys from the key pair to be a private key which will be
kept confidential and never divulged or exposed. The business process
designates the other key as public and allows it to be made widely
available.

a person then can use the private key to encode a hash of some data
... creating a digital signature. the digital signature can be
transmitted and verified using the registered public key. From
3-factor authentication paradigm
http://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are

... the verification of a digital signature with a public key implies
something you have authentication ... i.e. the originator has
exclusive access to the private key producing the digital signature.
The advantage of public key vis-a-vis a shared-secret ... is that the
public key can be used for verifying a digital signature ... but can't
be used for impersonation by creating a digital signature (while
anybody with access to a registered shared-secret can not only use the
shared-secret for verification but also for impersonation).
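as a rough sketch of the above mechanics (a minimal example assuming
the pyca/cryptography python package; the key size, padding and hash
choices are just placeholders):

# minimal sketch: asymmetric key pair, sign with the private key,
# verify with the registered public key (assumes pyca/cryptography)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()     # this half gets registered/published

message = b"some data to be authenticated"

# "encode a hash of the data with the private key" -> digital signature
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# anybody holding the registered public key can verify the signature ...
# but can't use that public key to forge one (unlike a shared-secret)
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("verified ... 'something you have' authentication")
except InvalidSignature:
    print("signature does not verify")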

digital certificates were originally targeted at the offline email
paradigm from the early 80s ... where somebody called the local
(electronic) postoffice, exchanged email and hung up. The scenario was:
if email was received from somebody with no prior contact ... how was
the receiver going to authenticate a complete stranger ... never
having had contact with the person before.

In the PGP secure email model ... you register the public keys of the
parties that you are acquainted with. In the offline stranger scenario
... you have no method of validating whether any specific public key
belongs to a specific person. So the idea was that in your trusted
public key repository ... in addition to directly registering public
keys of entities you directly communicated with ... you would also
register public keys of something called "certification authorities"
(or CAs). CAs would create things called digital certificates where
they bind some information about an entity to their public key ...
and the digital certificates are digitally signed by the certification
authorities.

This is basically the "letters of credit" model from the sailing ship
days. The recipient gets a communication that is digitally signed and
has an appended digital certificate. The recipient verifies the CA's
digital signature (on the digital certificate) using the CA's public
key stored in the recipient's trusted public key repository. Having
validated the CA's digital signature, they can now trust the contents
of the digital certificate ... from which they pull the originator's
public key ... to validate the digital signature on the actual
message. This provides for indirect authentication when a total
stranger is involved and the recipient has no recourse to online,
electronic, and/or other forms of timely communication to validate
communication from an unknown stranger. This is compared to the PGP
trust model ... where individuals load their trusted public key
repository with the actual public keys of individuals they communicate
with ... as opposed to having it loaded with public keys of
certification authorities (since the actual public keys are directly
available ... there is no need for certification authorities and/or
digital certificates, certified by certification authorities).
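a minimal sketch of that indirection (hypothetical helper; the
certificate here is just a tuple of byte strings, and the only thing
assumed from the above is that the CA's public key is already in the
relying party's trusted public key repository; assumes the
pyca/cryptography package):

# sketch of the "letters of credit" indirection
# certificate = (subject_name, subject_public_key_pem, ca_signature), all bytes
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_stranger_message(message, msg_signature, certificate, ca_public_key):
    subject_name, subject_key_pem, ca_signature = certificate

    # 1) validate the CA's digital signature on the digital certificate,
    #    using the CA public key from the trusted public key repository
    ca_public_key.verify(ca_signature, subject_name + subject_key_pem,
                         padding.PKCS1v15(), hashes.SHA256())

    # 2) now trusting the certificate contents, pull out the originator's
    #    public key and validate the digital signature on the actual message
    subject_key = serialization.load_pem_public_key(subject_key_pem)
    subject_key.verify(msg_signature, message,
                       padding.PKCS1v15(), hashes.SHA256())
    return subject_name   # the certified identity that signed the message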

An example is the SSL domain name digital certificate scenario. The
issue was various perceived integrity issues with the domain name
infrastructure and whether a browser client could actually trust that
the URL they had typed into the browser ... corresponded to the
webserver they were talking to.

The browsers have a preloaded trusted public key store of numerous
recognized certification authorities. Servers contact one of these
certification authorities to get an SSL domain name digital certificate
containing their domain name and their public key (digitally signed by
the certification authority). The browser contacts the server in an SSL
session. The server returns a digitally signed message and their SSL
domain name certificate. The browser validates the CA's digital
signature on the returned digital certificate (using the CA's public
key that has been preloaded into their browser) ... and then uses the
public key from the digital certificate to validate the digitally
signed message. If that works ... it can then check whether the
domain name (in the supplied digital certificate) matches the domain
name that was typed in as part of the URL (aka proves that the
server you think you are talking to is actually the server you are
talking to).
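this is essentially what standard TLS client code does today ...
verify the server certificate against the preloaded CA store and match
the certificate's domain name against the host that was asked for. a
minimal sketch with python's standard ssl module (the host name is
just an example):

# minimal sketch of the browser-side check: verify the CA signature on the
# server's certificate (preloaded trust store) and check that the
# certificate's domain name matches the host that was typed in
import socket, ssl

typed_host = "www.example.com"        # example only: domain name from the URL

context = ssl.create_default_context()   # preloaded trusted CA public keys
with socket.create_connection((typed_host, 443)) as sock:
    # wrap_socket validates the certificate chain and raises an error if the
    # certificate's name doesn't match server_hostname
    with context.wrap_socket(sock, server_hostname=typed_host) as tls:
        print("negotiated", tls.version(), "with", tls.getpeercert()["subject"])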

So an issue for the SSL domain name certification authorities is that
an applicant supplies some amount of identification information. The
certification authority then must cross-check that identification
information against what is on file with the authoritative agency
responsible for domain name ownership. This identification matching
process is an error prone, time-consuming, and expensive activity. It
also turns out that the authoritative agency responsible for domain
name ownership is the domain name infrastructure ... which has the
integrity concerns giving rise to the need for SSL domain name
certificates.

So a proposal, somewhat backed by the SSL domain name certification
authority industry, is that applicants for domain names should also
provide a public key as part of domain name registration. Use of this
"on-file" public key could help with a number of the existing domain
name infrastructure integrity issues. Also, the certification authority
industry could require that SSL domain name certificate applications
be digitally signed. Then the certification authority can replace an
error-prone, expensive and time-consuming identification matching
process with a much simpler and more reliable authentication process
(they simply retrieve the on-file public key for that domain from the
domain name infrastructure and validate the digital signature on the
SSL domain name certificate application ... note that this is a
certificate-less operation, being able to do real-time online retrieval
of public keys instead of getting them from a stale, static digital
certificate).
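the certificate-less flow could look something like the following
(purely illustrative; lookup_onfile_public_key stands in for whatever
real-time retrieval interface the domain name infrastructure would
provide ... it is not an existing API; assumes pyca/cryptography):

# illustrative sketch: CA authenticates an SSL certificate application by
# verifying its digital signature with the on-file public key retrieved in
# real time from the domain name infrastructure (no certificate involved).
# lookup_onfile_public_key is a hypothetical stand-in, not an existing API.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def authenticate_application(domain, application_bytes, signature,
                             lookup_onfile_public_key):
    onfile_key = lookup_onfile_public_key(domain)   # real-time, online retrieval
    # raises InvalidSignature unless the applicant holds the private key
    # matching the public key registered for that domain
    onfile_key.verify(signature, application_bytes,
                      padding.PKCS1v15(), hashes.SHA256())
    return True    # authenticated ... certificate-less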

All well and good ... however it represents something of a catch-22
for the certification authority industry. By improving the integrity of
the domain name infrastructure ... they improve the integrity of their
certifying that the entity requesting the domain name certificate is
the actual owner of the domain name. However, the questions about the
integrity of the domain name infrastructure are a major factor in
needing SSL domain name digital certificates. Improving the integrity
of the domain name infrastructure reduces the requirement for SSL
domain name digital certificates.

The other issue is that if there are now public keys on-file for
real-time, online retrieval ... for access by the certification
authorities in certificate-less digital signature authentication ...
then it would also be possible for other entities to do real-time
online retrieval of such public keys. One scenario would be to modify
the SSL protocol for certificate-less operation by retrieving the
appropriate public key directly from the domain name infrastructure.

jacost writes:
A certificate binds your public key with your identity, confirmed by a
Certificate Authority with its signature. The public key itself is not
enough -- you have to be sure that the public key really belongs to
the person whose name/address is in it. To verify the signature, you
need something you can trust. For this purpose browsers have some
certificates of well known root CAs built in. When you start an SSL
session, your browser gets a certificate for that website and the
certificates of all CAs (the trust path) until it reaches a self-signed
certificate of a root CA it knows. If it is not able to do that, it
warns you that it cannot confirm the identity of that website.

The same applies to other uses of PKI -- e-mail and so on. For
example, if you want to encrypt a message, you have to get the public
key of the recipient. You use that key to encrypt a symmetric session
key, which in turn is used to encrypt the message. If you get this key
(in the form of a certificate) from a public source (web page, e-mail,
directory server) then you have to confirm that this key really
belongs to that person -- you (your software) validate the trust path
using a set of root CA certificates you trust.

the basic idea is whether you are really talking to the webserver you
think you are talking to ... aka are you really talking to the
webserver that corresponds to the URL that you typed in.

the problem was that most webservers found that the SSL overhead
decreased their webserver capacity by at least 80% ... so SSL was
quickly limited to just the payment portion of the e-commerce shopping
experience.

so no longer do you typically type in a URL and have an immediate SSL
session that validates that the domain name in the URL (you typed) is
the same as the domain name in the SSL domain name certificate sent
back as part of the SSL handshake (since there is no SSL handshake, there
is no SSL domain name validation).

Instead, you wait until you get to the shopping experience checkout and
select the PAYMENT button ... which takes you to an SSL session for a
URL provided by the PAYMENT button.

The problem is that if you really aren't at the website you think you
should be at ... and there is some possible sleight-of-hand or
fraudulent activity afoot ... then the basic shopping experience could
be occurring at a fraudulent site ... and there is no SSL domain name
certificate validation that the website you are talking to is the same
as the URL you typed in.

Now if you actually happened to be at a fraudulent site ... and they
put up a PAYMENT button ... it is likely that the fraudulent site will
have obtained a valid SSL domain name certificate ... for some
domain/host name that they actually registered ... and the PAYMENT
button will invoke an SSL URL that corresponds to the certificate that
they do have (their SSL certificate is validating the domain that they
provided in their PAYMENT button ... and not validating something that
you provided).

So everything appears to work ... but it actually isn't correct.

This is separate from the issue mentioned in the previous postings
where the certification authority ... in certifying the request for
an SSL domain name certificate ... has to certify that the requesting
entity is entitled to request an SSL domain name certificate for that
domain. They do this certification by checking with the authoritative
agency for domain name ownership ... and require that somebody
requesting an SSL domain name certificate is actually associated
with the ownership of that domain.

The problem is that the authoritative agency that the certification
authorities have to check with (responsible for domain name ownership)
is the domain name infrastructure. The possible integrity issues with
the domain name infrastructure are what gave rise to requiring SSL
domain name certificates in the first place. So if there is an
integrity issue with the domain name infrastructure ... and the
domain name infrastructure is the authoritative agency responsible for
domain name ownership ... then there are possibly integrity issues with
domain name ownership (various stories about domain name hijacking
over the years).

So it turns out that the actual trust "chain" for domain name
ownership ... doesn't stop with a "ROOT" certification authority
... but continues back to the authoritative agency responsible for the
information that is being certified by the certification authorities.

If the information is wrong at the authoritative agency responsible
for the information being certified ... then the certification
authorities can be certifying invalid information ... aka somebody
does a domain name hijacking and then applies for a domain name
certificate (this is different from somebody obtaining any possible
domain, applying for a valid SSL certificate for that domain, and then
using it in conjunction with a fraudulent e-commerce site and a
fraudulent "PAYMENT" button).

As mentioned in the previous posts ... a solution for improving the
integrity of the domain name infrastructure ... and thereby improving
the integrity of the information that is being certified by the
certification authorities ... is to start requiring registration of
public keys at the same time domain names are obtained. Then they
require that SSL domain name certificate applications be digitally
signed. The certification authorities then can validate the digital
signature on the application by doing a real-time, online retrieval of
the "on-file" public key from the domain name infrastructure. Note
that this is a digital signature validation with a public key in a
certificate-less operation ... using online, realtime access instead of
stale, static certificates that were originally designed to address
the offline email authentication problem in the early 80s.
http://www.garlic.com/~lynn/subpubkey.html#certless

The issue (or catch-22) then becomes if the certification authorities
can do certificate-less authentication of digital signatures with
on-file, online, real-time public keys ... then it is possible that
others could also start doing certificate-less authentication of
digital signatures with on-file, online, real-time public keys (rather
than redundant, superfluous, stale, static certificate-based public
keys) ... aka modify SSL protocols to directly do real-time retrieval
of public keys from the domain name infrastructure instead of
relying on public keys in stale, static certificates.

Cross-Realm Authentication

darren.hoch@litemail.org (Darren Hoch) writes:
I am giving a pretty lengthy presentation on Sun Kerberos next week
and I want to make sure I have the correct understanding of how
cross-realm authentication works.

for some topic drift ... not too long ago i sat thru a presentation on
a SAML implementation for cross-domain authentication; i observed that
the message flows looked almost exactly like kerberos cross-domain
message flows. the person giving the presentation looked surprised
that anybody knew anything about kerberos and then finally commented
that, practically speaking, there are just a limited number of ways
for information to flow between domains.

What is a Certificate?

Juergen Nieveler writes:
It creates a self-signed certificate - a certificate that will work
just like any other, but cannot be used to prove anything. The
signature on the certificate was generated by its own key; that's like
saying "I'm me because I say I'm me".

The important part about real certificates is that they are signed by a
Certificate Authority - somebody who is known to check the authenticity
of keys before signing them. Obviously, this cannot work with
self-signatures :-)

note however, to get the CA's public key for validating a signature on
a CA-issued digital certificate ... you get that out of a CA
self-signed digital certificate ... which says I'm the CA because I
say I'm the CA.

so who checks the authenticity of the CA's keys?

basically you have a trusted repository of public keys. In the
case of PGP ... you validate the public keys of the parties you are
communicating with before classifying them as trusted.

CA public keys are also typically in a trusted repository of public
keys (frequently built inside an application like a browser) ... and
somebody hopefully has validated the authenticity of the CA public
keys before (pre)loading them into the client's trusted repository of
public keys.

At some point all operations have to resort to some repository of
trusted public keys ... whether it is a certificate-less environment
(where you typically are doing the validation of the authenticity of
the public keys and related information yourself ... like PGP operation)
... or a certificate-based environment ... where there are one or
more levels of indirection between the validation of the digital
signature on the original communication and the final validation of a
CA's digital signature on a communication (in the format of a digital
certificate) ... using a CA's public key that has been (pre)loaded in
an application's trusted public key repository.
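a rough sketch of that chain of indirection (simplified: each
certificate here is just a record naming its subject and issuer, and
signature_ok is a hypothetical helper; real path validation also
checks validity periods, key usage, revocation, etc.):

# simplified sketch of walking a trust path from a leaf certificate up to a
# CA public key already (pre)loaded in the trusted public key repository
def path_is_trusted(leaf, certs_by_subject, trusted_root_keys, signature_ok):
    cert = leaf
    while True:
        issuer = cert["issuer"]
        if issuer in trusted_root_keys:              # reached a known root CA
            return signature_ok(cert, trusted_root_keys[issuer])
        if issuer == cert["subject"]:                # self-signed but unknown
            return False                             # -> warn the relying party
        parent = certs_by_subject.get(issuer)        # next level of indirection
        if parent is None or not signature_ok(cert, parent["public_key"]):
            return False
        cert = parent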

"D. J. Bernstein" writes:
That's a crazy premise. Network load simply doesn't work that way in the
real world. Packets come in waves and bursts and spikes, forcing almost
all hops to have far more capacity than they usually need.

this is one of the issues raised with the original slow start stuff
... some of the prototyping had been done on a direct ethernet
connection between two hosts.

I believe that the same month that the slow-start presentation was made
at an IETF meeting ... there was a paper published in ACM SIGCOMM
proceedings about slow-start not being stable in real-world
environments (because of things like significant traffic burstiness).

for a little side drift ... one of the issues has been communication
protocols using windowing algorithms to coordinate the number of
outstanding sent packets with the number of buffers at receiving nodes
(originally in direct point to point links involving latency either in
transmission and/or at end-point processing).

Windowing tends to be only indirectly related to the amount of
intermediate node congestion. Intermediate node congestion tends to be
things like the arrival rate and stuff like back-to-back packet
arrivals. One of the issues raised in the early SIGCOMM proceedings
paper is that returning ACK-packets (which open windows) can get
bunched up ... which results in opening multiple windows at the
sending side. The sending side then tends to transmit multiple
back-to-back packets (for the size represented by the multiple bunched
ACKs). The multiple back-to-back transmission by the sender then tends
to aggravate saturation and congestion at intermediate nodes (this
bursty characteristic is one of the things contributing to non-stable
slow-start operation).

If the multiple arriving back-to-back packets contribute to
intermediate node saturation ... a possible solution is to have
time-delays between packet transmissions to address intermediate node
congestion. However, it is possible to create an infrastructure that
dynamically adjusts the inter-packet transmission delay to address
intermediate node saturation ... and totally ignore window-based
strategies.

for HSDT in the mid-80s, we had done rate-based pacing w/o having to
resort to windowing oriented algorithms for long latency network
connections. a problem at the time of the slow-start work was that there
was a whole class of machines and software with primitive timing
facilities that made it difficult to implement rate-based pacing using
a dynamically adaptive inter-packet transmission delay (directly
controlling the rate of packet production in order to influence the
rate of packet arrival at intermediate nodes).
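a toy sketch of rate-based pacing of that sort (dynamically adjusting
an inter-packet transmission delay from congestion feedback and
ignoring windows entirely; the feedback signal and constants are
placeholders):

# toy sketch: directly control the inter-packet transmission delay and adapt
# it from congestion feedback, rather than using a window of outstanding
# packets (feedback source and constants are placeholders)
import time

def paced_send(packets, send, congestion_seen, initial_delay=0.010):
    delay = initial_delay                     # seconds between transmissions
    for pkt in packets:
        send(pkt)
        if congestion_seen():                 # e.g. loss / delay / ECN signal
            delay = min(delay * 2.0, 1.0)     # back off: stretch the spacing
        else:
            delay = max(delay * 0.95, 0.001)  # slowly tighten the spacing
        time.sleep(delay)   # no back-to-back bursts, regardless of ACK bunching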

Where should the type information be?

"Charlie Gibbs" writes:
I thought the keys had labels like "large Coke", etc. I guess
they're accommodating the decline of literacy, too. (Is it time
to resurrect my rant about hieroglyphics, pictograms, "international
road signs", etc.? National Lampoon once did a wonderful spoof of
road signs that were meant to say things like "Bears playing cellos,
next 10 km".)

how about when boarding procedures shifted from telling people seat
numbers XXX and higher .... to section 1, section 2, section 3, etc
(it eliminated the issue of people having to figure out whether their
seat number was higher than the most recent seat called).

you can readily see the 4/1 designation difference in this small
extract from the IETF rfc-index.txt file:
4040 RTP Payload Format for a 64 kbit/s Transparent Call. R. Kreuter.
April 2005. (Format: TXT=15363 bytes) (Status: PROPOSED STANDARD)

Where should the type information be?

"Trevor L. Jackson, III" writes:
This is an excellent example of an effect I only recently
recognized. The success of Really Bad Software(tm) can be mostly
explained by the fact that the buyers can't tell whether the
software is working or not. Since they can't tell good software from
bad software, bad software wins in the marketplace.

this is somewhat reminiscent of discussions in the 70s & 80s about
the automobile industry.

nemu writes:
Hi. Lately i found my wallet full of smartcards, and was wondering
what's inside them. I got an old smartmouse reader, and installed
smartcard and chipcard-tools on my debian, but it seems i can't get
any info out of the reader. I'm new in this field and googled a
bit. I think for these smartmouses any program should be okay, am i
wrong? Then, there's one think i can't really figure out. How do i
recognize different chips and cards? I know there are card
supporting ctapi drivers, others do support PCSC. If i find which
kind of cards i've in my pocket i'd rather build my own reader and
make a software for it. Any information is welcome!

smartcards were originally targeted at the portable computing market
niche currently occupied by PDAs and cellphones. At the time there was
no portable input/output technology ... so there was a big push for
physical interface standards so that your portable computing device
could be used with large numbers of distributed, fixed input/output
stations. Somewhere along the way ... portable input/output capability
was invented and pretty much closed off that market niche for
smartcards.

so the next market niche is the offline authentication and
authorization market segment ... at much lower value than represented
by the PDA/cellphone market segment. Note, however, that various kinds
of online and wireless facilities are opening that market segment to
PDA/cellphone appliances (shrinking the segment that chipcards still
find some applicability).

one of the competing market forces ... is the competing models for
institutional-centric issued authentication/authorization tokens
vis-a-vis individual-centric authentication/authorization tokens.

In the institutional-centric paradigm ... every institution issues
you a unique token (akin to the requirement for a unique authentication
shared-secret for every institution or security
domain). This results in you having one token or smartcard per
institution ... somewhat akin to the DRM/piracy model that evolved in
the early to mid-80s ... harddisks were appearing and you could load
all your applications on harddisks ... but for every application you
had running ... you had to have the corresponding anti-piracy floppy
disk in the floppy disk reader (it made running multiple concurrent
applications all but impossible and would have potentially resulted in
you having to shuffle scores of anti-piracy floppy disks). I've raised
the issue at various booths at various smartcard expositions ... that
if the prevailing smartcard authentication/authorization
institutional-centric model was ever to become successful, you would
have scores of hardware tokens ... somewhat akin to the people that
have a keyring with scores of keys.

The competing model is to have some unique individual-centric
authentication mechanism ... that is registered with every institution
... from the three-factor authentication model
http://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are

the unique characteristic registered with every institution might be a
biometric.

Alternatively, it might be some PDA/cellphone based mechanism with a
unique (and difficult to counterfeit) characteristic (or for 2 or 3-factor authentication ... it might be some combination of operations).

"TC" writes:
Thank you. I believe I understand. Please tell me if I've got this
right: All certificates are signed by a certification authority. If a
certificate is signed by a certification authority which does not
itself have a certificate, then it is a self-signed certificate.

When I use Microsoft's "selfsign.exe", I am using a certification
authority which does not have a certificate. I am using the CA's
private key to sign a newly generated public key, and therefore
creating a new self-signed certificate.

This makes sense to me.

I am confused about one part of your explanation, however: You suggest
that self-signed certificates are signed by the certification authority
which issued them. Aren't all certificates signed by the CA which
issued them, and only by the CA which issued them?

the format of a certificate is a technical standard/specification that
includes things like public keys and a digital signature.

who signs the certificate is a business specification.

the business specification of the verification of digital signatures
typically involves the relying party (entity relying on the digital
signature verification) having some form of trusted repository of
public keys. These trusted repositories of public keys may be used to
implement direct verification ... like in PGP signed email. Other
public keys in the relying party's trusted public key repository may
belong to parties that issue digitally signed digital certificates ...
that attest to the trust involving other entities.

In general, all "root" CA digital certificates are self-signed. In
this situation it is up to the relying party (or somebody that the
relying party trusts) to load the public keys for "root" certification
authorities into some trusted public key repository.

selfsign needs to have a pair of asymmetric cryptography keys. The
key of the pair that is designated "public" is placed in the
certificate and the key (of the pair) that is designated "private" is
used to digitally sign the certificate. This can be used to become a
new "root" certification authority.

The extent to which such a new "root" certification authority has any
business acceptance ... is the extent to which the associated public key
can be loaded into relying parties' trusted public key repositories.

In general, all the common PKI infrastructures ... have at their root
... one (or more) self-signed digital certificate.

The PKI business issue is that if all the digital certificates are
self-signed and require somebody to make a specific decision about
each associated public key for loading into a trusted public key
repository ... then you can do away with the certificates altogether
and just go about directly loading the trusted public keys into the
repositories.

The whole point about PKIs and digital certificates ... is whether, for
some entity that you trust (have their public key loaded into your
trusted public key repository) ... you will "trust" their judgement as
to other public keys ... aka CAs have made trust judgements (and/or
certified other public keys) and placed that trust judgement in
something called a digital certificate and have digitally signed that
digital certificate.

The business purpose of the stale, static digital certificates is as a
substitute in situations where:

the relying party does not have their own trusted judgment about the
associated public key (aka does not already have the specific public
key loaded into their own trusted public key repository)

the relying party is not able to contact the certification
authority in a timely manner to directly verify the information

there was the scenario in the mid-90s about x.509 identity
certificates overloaded with privacy information, which found
institutions retrenching to relying-party-only certificates
(i.e. where the relying party registered the public key and also
issued a certificate). The relying-party-only certificates usually
only contained some identifier (like an account number) which referred
to all the information that the relying-party knew about the
key-owner. Rather than having the information in a digital certificate
where it might be transmitted all over the world and unnecessarily
expose the information to all sorts of prying eyes ... the
information was kept securely at the relying party. However, it was
trivial to show that such stale, static digital certificates were
redundant and superfluous ... since the relying-party already had all
the information on record (including the registered public key) and
therefore there existed no information in the stale, static digital
certificates that the relying party didn't already have. It was further
aggravated in the case of payment transactions for financial relying
parties. Not only were the stale, static digital certificates
redundant and superfluous, but they also tended to be one hundred
times larger than the typical payment transaction size (resulting in
enormous payload bloat).
http://www.garlic.com/~lynn/subpubkey.html#rpo

there is the scenario involving SSL server domain name certificates,
which were motivated in large part by integrity concerns involving the
domain name infrastructure. somebody would apply for an SSL server
domain name certificate and supply a lot of identification
information. The certification authority would then contact the
authoritative agency for domain name ownership to validate whether the
applicant for the digital certificate was the actual owner of the
domain name. This raised two issues:

the authoritative agency for domain name ownership is the domain
name infrastructure ... which has integrity concerns ... which give
rise to the requirement for SSL domain name certificates

trying to match an applicant's identification information with the
domain name identification information on file with the domain name
infrastructure was a time-consuming, costly, and error prone process.

Somewhat to address both these issues (domain name infrastructure
integrity and the vagaries of identity matching), a proposal (somewhat
from the CA industry) is that a domain name owner register a public key
when they obtain a domain name. Future interactions between the domain
name owner and the domain name infrastructure are then digitally
signed and the domain name infrastructure can verify the digital
signature with the on-file public key ... note: no certificate.
http://www.garlic.com/~lynn/subpubkey.html#certless

Also, CAs can require that SSL domain name certificate applications
be digitally signed. The CAs then can do a real-time retrieval of the
on-file public key and verify the digital signature on the application
(turning a time-consuming, costly, and error prone identification
process into a simple and reliable authentication process). note:
no certificate
http://www.garlic.com/~lynn/subpubkey.html#certless

The catch-22 for the CA industry is

improving the integrity of the domain name infrastructure then
lessens some of the original justification for SSL domain name
certificates

if the CA industry can use certificate-less public keys on file from
the domain name infrastructure (for validating digital signatures from
domain name owners) ... then it is also possible that others could
also do real-time retrieval of certificate-less on-file public keys (aka
do a modified version of TLS/SSL that uses certificate-less public keys
retrieved directly from the domain name infrastructure).

Part of the issue is that the whole CA trust chain/hierarchy is a
business definition. As a business definition ... it can't be limited
to just those business processes that involve digital certificates and
public keys ... but has to extend all the way thru the business
processes that CAs use for certification as well as the authoritative
agencies that they rely on for the original information.

A frequent comment is that no (trust) chain is stronger than its weakest
link. If the CA industry is looking at improving their trust
hierarchy/chain by using certificate-less, on-file public keys kept by
the domain name infrastructure ... then why can't others directly
use such certificate-less, on-file public keys as well (and eliminate
all the extraneous business processes).
http://www.garlic.com/~lynn/subpubkey.html#sslcert

What is a Certificate?

"TC" writes:
Thanks. It seems to me that you are describing a system different
from the one described by Jacost. In the system you describe, a
self-signed certificate is a public key signed with its own private
key. In the system described by Jacost, a self-signed certificate is
a public key signed with the private key of a certification
authority which has no certificate.

the definition of a self-signed certificate is a certificate signed by
the private key that corresponds to the public key contained in the
certificate. an indication of a self-signed certificate ... is that
the public key contained in the certificate itself is used for
validating the certificate's digital signature ... as opposed to
continuing to search the trust hierarchy looking for a "higher"
certificate with a public key for validating the current digital
certificate's digital signature.
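that indication can be expressed directly: try to validate the
certificate's digital signature with the public key contained in the
certificate itself (a sketch assuming the pyca/cryptography package and
an RSA-signed certificate):

# sketch: a certificate is self-signed if its own embedded public key
# validates its digital signature (assumes pyca/cryptography, RSA signature)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def is_self_signed(cert):
    try:
        cert.public_key().verify(
            cert.signature,
            cert.tbs_certificate_bytes,
            padding.PKCS1v15(),
            cert.signature_hash_algorithm,
        )
        return True      # its own public key verifies it: self-signed
    except InvalidSignature:
        return False     # keep searching the trust hierarchy for the issuer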

a CA can have a self-signed "root" certificate ... and that CA can use
the corresponding ("root") private key to sign other certificates
where the same business organization is in possession of the
corresponding private keys. this would be a more general business
reference to a CA signing certificates for itself. In such a business
sense ... such CA certificates are self-signed, in the sense that they
are signed by the same business operation. However, this isn't the
common definition of a "self-signed certificate".

The authority key identifier extension provides a means of
identifying the public key corresponding to the private key used to
sign a certificate. This extension is used where an issuer has
multiple signing keys (either due to multiple concurrent key pairs or
due to changeover). The identification MAY be based on either the
key identifier (the subject key identifier in the issuer's
certificate) or on the issuer name and serial number.

The keyIdentifier field of the authorityKeyIdentifier extension MUST
be included in all certificates generated by conforming CAs to
facilitate certification path construction. There is one exception;
where a CA distributes its public key in the form of a self-signed
certificate, the authority key identifier MAY be omitted. The
signature on a self-signed certificate is generated with the private
key associated with the certificate's subject public key. (This
proves that the issuer possesses both the public and private keys.)
In this case, the subject and authority key identifiers would be
identical, but only the subject key identifier is needed for
certification path building.

The certificateTrustTrees identifies a set of self signed
certificates for the trust points used to start (or end) certificate
path processing and the initial conditions for certificate path
validation as defined RFC 2459 [7] section 4. This ASN1 structure is
used to define policy for validating the signing certificate, the
TSA's certificate and attribute certificates.

The largest outstanding defect with the S/MIME mechanism is the lack
of a prevalent public key infrastructure for end users. If self-
signed certificates (or certificates that cannot be verified by one
of the participants in a dialog) are used, the SIP-based key exchange
mechanism described in Section 23.2 is susceptible to a man-in-the-
middle attack with which an attacker can potentially inspect and
modify S/MIME bodies. The attacker needs to intercept the first
exchange of keys between the two parties in a dialog, remove the
existing CMS-detached signatures from the request and response, and
insert a different CMS-detached signature containing a certificate
supplied by the attacker (but which seems to be a certificate for the
proper address-of-record). Each party will think they have exchanged
keys with the other, when in fact each has the public key of the
attacker.

from rfc3379, section 5.
http://www.garlic.com/~lynn/rfcidx11.htm#3379
The DPD server response includes zero, one, or several certification
paths. Each path consists of a sequence of certificates, starting
with the certificate to be validated and ending with a trust anchor.
If the trust anchor is a self-signed certificate, that self-signed
certificate MUST NOT be included. In addition, if requested, the
revocation information associated with each certificate in the path
MUST also be returned.

1) Obtain an identity from a traditional Certification Authority
(CA).

2) Obtain a new identity independently - for example by using the
generated public key and a self-signed certificate.

3) Derive the new identity from an existing identity.

from rfc3830, section 3.
http://www.garlic.com/~lynn/rfcidx12.htm#3830
Public-key cryptography can be used to create a scalable system. A
disadvantage with this approach is that it is more resource consuming
than the pre-shared key approach. Another disadvantage is that in
most cases, a PKI (Public Key Infrastructure) is needed to handle the
distribution of public keys. Of course, it is possible to use public
keys as pre-shared keys (e.g., by using self-signed certificates).
It should also be noted that, as mentioned above, this method may be
used to establish a "cached" symmetric key that later can be used to
establish subsequent TGKs by using the pre-shared key method (hence,
the subsequent request can be executed more efficiently).

Moving assembler programs above the line

hancock4 writes:
I hated APL. Never could understand it, never liked it.

To me, it was only useful for high level mathematicians
who would understand the complex methods each little
character represented. It was too much for the rest of
us mere mortals used to Fortran and Cobol.

APL was frequently used like spreadsheet programs are used today
... sometimes even for ad hoc work ... the upfront learning curve was
higher than with today's spreadsheets ... but the level of learning for
a proficient user wasn't a whole lot different.

Bill Manry writes:
On the mainframe, maybe, but I believe a version for the IBM 1130
predates APL\360 by a few years.

there was work on the ibm/pc predecessor with a chip that emulated 360
and the machine then provided apl\360. an earlier prototype was done
with a chip that emulated the 1130 and provided apl\1130.

i have an apl\360 "users manual" published as a tjw/ykt research
document by iverson and falkoff dated august 1968. It references a
1962 publication by iverson ... and credits a predecessor
implementation on 7090 by breed and abrams at stanford. it also gives
the same 1968 date for both the apl\360 primer and the apl\1130
primer.

The largest outstanding defect with the S/MIME mechanism is the lack
of a prevalent public key infrastructure for end users. If self-
signed certificates (or certificates that cannot be verified by one
of the participants in a dialog) are used, the SIP-based key exchange
mechanism described in Section 23.2 is susceptible to a man-in-the-
middle attack with which an attacker can potentially inspect and
modify S/MIME bodies. The attacker needs to intercept the first
exchange of keys between the two parties in a dialog, remove the
existing CMS-detached signatures from the request and response, and
insert a different CMS-detached signature containing a certificate
supplied by the attacker (but which seems to be a certificate for the
proper address-of-record). Each party will think they have exchanged
keys with the other, when in fact each has the public key of the
attacker.

note that the man-in-the-middle (MITM) attack effectively applies to
keys that have to be loaded in the relying party's trusted public key
repository ... including those keys belonging to certification
authorities.

in effect, some additional &/or out-of-band method is required for
further validating the association between some public key and some
other characteristic (nominally represented by information contained
in a signed digital certificate).

one method used in the PGP case is to have a key-id (basically a short
encoded representation of the full public key) and the key-id is
verbally confirmed by telephone, face-to-face and/or other independent
(trusted) communication.
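a sketch of that kind of short key-id (just a truncated hash of the
public key bytes; the actual PGP fingerprint format differs in its
details):

# sketch of a short, verbally-confirmable key-id: a truncated hash of the
# full public key bytes (the real PGP fingerprint format differs in detail)
import hashlib

def key_id(public_key_bytes: bytes, length: int = 8) -> str:
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return digest[:length].upper()    # short enough to read over the phone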

frequently in many of the PKI/CA related deployments ... the
application vendor prebuilds a trusted public key repository into the
application itself ... and the relying party then places some level of
trust in the application vendor.

one of the issues that has been raised in the case of various vendor
provided trusted public key repositories ... is that some of them involve
CA keys that have become quite stale and/or the CA companies have even
gone out of business. In these situations there may be no apparent
trusted custodian of the corresponding private key. Furthermore, the
PKI/CA deployments that have been made ... make no differentiation
between the trust levels of the CA public keys that have been
preloaded into a vendor provided trusted public key repository ... aka
all certificates are accepted equally regardless of which CA public
key (from the relying party's trusted public key repository) has been
used to validate a digital certificate's digital signature
(i.e. regardless of which CA issued the certificate).

the original design point for PKI, certificates, CAs etc ... was the
offline email scenario from the early 80s ... where a relying party
dialed up their local (electronic) post office, exchanged email, and
then hung up. The issue was, for received email from senders that the
relying party had never before interacted with (and/or never before
communicated with) ... how were they to authenticate the email
origin. Since there had never before been any communication ... the
relying/receiving party likely wasn't going to have any information on
hand about the sender. Also, since the relying/receiving party had hung
up (and was now offline), they had no timely, online recourse to
information from any authoritative agency. Digital certificates
filled the gap as a substitute for the real information (in this
particular scenario the communication was offline and from a
stranger).

However, in an online world, it is frequently trivially possible to
show that stale, static digital certificates are redundant and
superfluous.

DOS/360: Forty years

hancock4 writes:
Further, S/360 was able to handle far more I/O devices and activity
than a PC can. I doubt even today's sophisticated desktop PCs could
handle multiple users banging away on them, along with multiple high
speed printers, readers and punches. But our DOS S/360-40 handled
it smoothly. (Admittedly our 2415 tape drives were incredibly
slow).

to some extent it was transferred bytes per mip ... and/or arm accesses
per mip.

in the late 70s ... i started making some observations that disk
relative system performance had declined by something like a factor of
ten times (by the early 80s, cpus, memories, etc had increased by a
factor of 50 times while disk arm performance had only increased by a
factor of five times or less ... therefore disk arm performance had seen
a relative system performance decline of a factor of ten times).

we had a 360/67 that supported 70-80 cp67/cms users ... and much later
there was a processor with nearly fifty times the processing power
supporting 300 vm370/cms users. It turns out the disk arm performance
had possibly improved by a factor of four times.
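the arithmetic behind the assertion, using the numbers quoted above:

# relative system performance of the disk arm, using the numbers above
cpu_improvement  = 50      # cpus/memories/etc ~50x over the period
disk_improvement = 4       # disk arm ~4-5x over the same period
users_then, users_later = 75, 300    # ~70-80 cp67/cms vs ~300 vm370/cms users

print(disk_improvement / cpu_improvement)  # ~0.08 -> roughly 10x relative decline
print(users_later / users_then)            # ~4x, tracking the disk arm, not the cpu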

One of the things you really started seeing was trying to leverage
various kinds of caching (explosion in the availability of memory
sizes) to compensate for the lack of disk performance improvements.
Other approaches that tended to trade off electronic memory vis-a-vis
disk arm performance were to transfer in significantly larger blocks
(which could make sense for some application environments).

In any case, my assertions upset some number of people in the disk
division ... and their performance modeling group got assigned to
refute it. After a couple months, they came back and effectively said
that I somewhat understated the decline in disk relative system
thruput. They then turned the study into a SHARE presentation ... where
they described disk strategies to help overall system thruput.

Good passwords and security priorities

"sinister" writes:
I have the impression that in many situations, simple but critical
security protection measures are overlooked, even though complicated
but less vital measures are implemented.

Isn't it true that a policy enforcing good passwords is critical,
and a set of security policies that overlooks that is flawed?

one big issue is that when using shared-secrets ... the policy
requires a unique password/pin for every distinct security domain
(i.e. you don't want the password for online banking, connecting to
your neighborhood isp, and your employee shared-secret to all be the
same). the proliferation of unique electronic security domains
sometimes results in a single person being required to have scores of
unique passwords.

many times, a security officer for a specific security domain will
totally ignore the human factors issues involved when a person is
required to memorize scores of complex, hard to guess passwords that
possibly change once a month. a myopic security policy that operates
as if it is the only security domain ... and is specifying the only
password that a person is required to memorize ... is overlooking
real-world reality and human factors. people have a hard enough time
memorizing a complex password that is changing monthly ... but it
becomes impossible when a person is faced with scores of such
situations.

lots of 370/148s were 512k and sometimes even a mbyte (the memory
technology getting used was cheaper) and it wasn't unusual for a
370/168 to have 4mbytes.

i worked on microcode for virgil/tully (aka 138/148) associated with
the vm performance assist. another feature of 148 was that it had
significantly faster floating point than the 145 ... and was targeted as
a competitive machine in world trade markets against clone 370s.

as part of the effort, i got to spend quite a bit of time running
around the world doing 138/148 business forecast analysis with various
organizations. There were some big differences between how the sales and
marketing business was done in the US and how sales and marketing
business was done in the rest of the world.

In the US, yearly business forecasts rolled up from the branch offices
to the regions and then to DP hdqtrs ... and that information was
provided to the manufacturing plants for resource and capacity
planning for the upcoming year. If sales business people were wrong,
the manufacturing plants were responsible for the difference.

In world trade, the country sales forecasts were basis for placing
orders with the manufacturing plants (each country bought machines
from manufacturing). The machines were then built and shipped to the
countries ordering them. If forecasts were too high ... the unsold
machines were on the books of the country that ordered them ... not on
the books of the manufacturing plant.

One of the things that became very apparent was that the people doing
sales forecasts in world trade countries were quite a bit more diligent
than the people doing sales forecasts in the US; aka in world trade their
jobs could ride on how accurate the forecasts were (since world trade
countries bought and carried the inventory corresponding to the sales
forecasts).

In the US, since the sales/marketing organization was not held
accountable for the forecasts (i.e. the manufacturing plants carried the
inventory, not the sales organization) ... there was much less
accountability in the forecasts. US sales forecasts tended to be much
more aligned with corporate strategic statements than with what they
felt customers might actually be expected to buy. The result was that
manufacturing tended to discount the accuracy of US sales forecasts and
tended to duplicate the forecasting effort for US sales
i.e. to try and come up with accurate numbers ... rather than numbers
that possibly tended to support the current corporate hdqtrs strategic
thinking.

Allodoxaphobia <bit-bucket@config.com> writes:
rwong? -- unless _my_ bitrot is as severe as yours. :-) T'wern't
any Sys/370 148's. The 145 had a DAT box from Day 1, and that model
number endured.

(customer) 145s had DAT from day 1 ... but it wasn't enabled until
virtual memory was announced (at which time, the 145s got new microcode
loads to enable virtual memory). the 145s did have a front panel with
lots of lights and "rollers" with cryptic designations about the
meanings of the lights. all 145s shipped to customers had the physical
rollers ... and included an "xlat" designation for one of the lights.
this resulted in some speculation in the press prior to the 370 virtual
memory announcement.

165s had to have (fairly large) hardware retrofit in the field.

138, 148, 158, and 168s were all new technology and new models relative
to the previous 135, 145, 155, and 165.

virgil/tully (138/148) besides being faster & typically more memory
than their predecessor ... had operating system microcode performance
assists (vs1 and vm) ... and the 148 had significantly faster floating
point than the 145 (much faster than the nominal overall speedup of
the 148 over the 145).

had an earlier joint project with endicott. this was to create a
software virtual machine (cp67 running on 360/67) that emulated the 370
architecture (including virtual memory), as opposed to emulating 360/67
architecture. 370 architecture had some number of new instructions not
found in 360 ... and the virtual memory tables had a number of
differences from those defined in 360/67. this was running in regular
use at least a year before the first engineering 370/145 with virtual
memory was operational.

DOS/360: Forty years

Anne & Lynn Wheeler writes:
as part of the effort, i got to spend quite a bit of time running
around the world doing 138/148 business forecast analysis with
various organizations. There were some big differences between how the
sales and marketing business was done in the US and how sales and
marketing business was done in the rest of the world.

there were some interesting strategic marketing meetings at various
locations that were looking at VAMPS vis-a-vis the 148 (before VAMPS got
killed). to some extent they were targeted at the same market segment
and therefore were viewed as somewhat competitive. the strategic
marketing meetings were to look at which should be chosen (if
necessary) ... and so the meetings pitted the 148 in competition with
VAMPS. the problem in these meetings was that i had to fairly
represent both factions ... since i was doing a lot of work on both
products ... and would represent both products at such meetings (in
theory i was supposed to argue the pros & cons of both products with
myself).

Wolfgang Kueter writes:
I see no reason why an application should try to handle routing
(IMHO one should let the routers just do their job) or offer
debugging possibilities for problems on lower layers. Implementing
that would mean much more code and thus a less robust stack (which
no longer would be a stack).

there are some funny things about the layers ... for instance the
hostname->ip-address mapping is effectively a call normally done by
applications ... which then request a connection based on the
ip-address. in the case of multiple A-records (the domain name system
maps the same hostname to multiple ip-addresses) ... there is some
latitude about which ip-address the application may choose to use
and/or retry if it is unable to make a connection.

in the case of "multihomed" hosts with connections into various
strategic backbone locations in the web ... the application layer
may have some knowledge about the best choice of which of the
ip-addresses to try.
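a sketch of that application-level latitude, using only the standard
library: the resolver returns all the A-record addresses, and the
application decides the order and retries on failure (the preference
policy is whatever knowledge the application happens to have):

# sketch: multiple A-records / multihomed host ... the application picks the
# order of ip-addresses and retries the next one if a connection fails
import socket

def connect_any(hostname, port, prefer=None):
    addrs = [sa for *_, sa in socket.getaddrinfo(hostname, port,
                                                 type=socket.SOCK_STREAM)]
    if prefer:                    # e.g. knowledge about backbone attachment
        addrs.sort(key=prefer)
    last_error = None
    for sa in addrs:
        try:
            return socket.create_connection(sa[:2])  # first address that works
        except OSError as err:
            last_error = err      # try the next ip-address
    raise last_error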

part of this has been that the traditional layered architecture
paradigms have followed straight-forward message flows. The things
that allow such architectures to operate have tended to be described
as some variation on "service" applications that operate outside the
straight-forward message flows ... aka some of these implementations
refer to this as out-of-band control functions ... there is an RFC
someplace talking about the differences between internet TCP w/o an
out-of-band control channel and the (much better) TCP
originally implemented on arpanet that provided support for an
out-of-band control channel.

some of the multiple A-record and multihome recovery issues were
aggravated when the internet switched from arbitrary/anarchy routing
to hierarchical routing in the early 90s (when the internet was much
smaller, the infrastructure could look for alternate paths to the same
interface ... however that didn't scale well ... requiring the
switch-over to hierarchical routing). With the hierarchical routing
change-over ... there was much more of a requirement for multihoming
into different parts of the internet backbone (for availability).

Protocol stack - disadvantages (revision)

another example ... also from the early/mid 90s ... about the same
time as the switch-over to hierarchical routing, was ipsec vis-a-vis
SSL. ipsec was supposed to handle all the function ... totally
encapsulated in the lower-level protocol layers.

SSL came along at the application level and subsumed some amount of the
function being projected (at the time) for ipsec. the whole
certificate and public key stuff was supposed to be the lower-level
function in ipsec (using public key stuff to set up a transport layer
encrypted channel). SSL did all that ... but SSL in the
application/browser implementation (w/o requiring anybody to change
the machine's protocol stack and/or operating system) also used the
same public key certificate to check whether the domain name typed
into the browser was the same domain name in the certificate. in the
ipsec scenario it would have been handled all at the lower level ...
which had no idea what a person had typed in for a URL at the
application layer. If the certificate had all been stripped away at the
lower level ... the browser application would have had no way of
comparing the domain name in the certificate to the domain name typed
in as the URL.

Thou shalt have no other gods before the ANSI C standard

Peter Flass writes:
I *believe* that you could send data while the carriage was returning,
and the TTY would try to type them. You'd get a few random characters,
the line would start on about the 4'th character of data, and then
would overtype the stuff it had typed on the return. If I'm not mixing
the TTY up with some other terminal, the drivers would have to pad
with some NULs after a CR to prevent this. How many NULs depended on
the line speed (return speed was constant, so it took more NULs on
faster lines), so lines that autobauded had to determine this on the
fly. Fun stuff in the old days.

same problem with 2741s ... when cp67 wrote a line ... after the CR at
the end of the stream it added something like 1 idle character for
every ten characters transferred.
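a sketch of the padding computation (the numbers are illustrative, not
the actual TTY/2741 figures):

# sketch: pad with NUL/idle characters after a carriage return, enough to
# cover the (fixed-time) mechanical return at the current line speed
import math

def pad_after_cr(line: bytes, chars_per_second: float,
                 carriage_return_seconds: float = 0.2) -> bytes:
    # faster lines send more characters during the same mechanical return time
    pad = math.ceil(carriage_return_seconds * chars_per_second)
    return line + b"\r" + b"\x00" * pad

pad_after_cr(b"hello", 10)     # ~110 baud: only a couple of pad characters
pad_after_cr(b"hello", 960)    # faster line: many more pad characters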

note in the above ... there possibly is some problem with the 145 & 145-3
dates. first it claims that the 145 was "withdrawn" only a couple months
after it started shipping. also the 145-3 date shows it was announced
on the same date as the 148.

it does say virtual memory was announced for 370 8/72 and shipped
8/73. also that the 135-3 & 145-3 were announced 8/72 and
shipped 8/73. There do appear to be some gotchas ... with some of the
boxes and/or software associated with virtual memory available prior
to 8/73 (when virtual memory supposedly became available) ... "virtual
storage" (the name at the time for virtual memory) would be a prereq for
any software that used virtual storage. Also the 158 had virtual memory
... and so couldn't ship before virtual memory was officially
available.

Moving assembler programs above the line

Bill Manry writes:
On the mainframe, maybe, but I believe a version for the IBM 1130
predates APL\360 by a few years.

One other detail...after APL\360 and before VSAPL there was a
product called APL-SV (or APL\SV?) that ran on OS/VS2-SVS and
introduced the shared variable concept. It came with a shared
variable processor that would access MVS PS/PO data sets, albeit
slowly. We developed SV processors that would do ISAM and
(later) VSAM, and our shop began using APL as a business
applications development language for certain types of online
apps (mainly inquiry things). Took a bunch of COBOL programmers
and taught them APL, and most of them loved it.

here is falkoff's "The family of APL systems"
http://www.research.ibm.com/journal/sj/304/ibmsj3004C.pdf

Maximum RAM and ROM for smartcards

Daniel James writes:
Smartcards typically have a few kB each of ROM and non-volatile RAM
(Flash/EE), and a few hundred bytes of (volatile) RAM storage.
Higher-end cards typically have no more than 32kB of NVRAM.

There are a few reasons that smartcard memory sizes are limited:

- The card has limited physical size.
- The card has limited available electrical power. More memory
(= more transistors) => higher power requirement
- The card has limited capacity for heat dispersion, more memory
(= more transistors) => more heat.
- The card has to provide physical protection to prevent the memory
being accessed directly, bypassing security of the card OS. This
makes the chip package bulkier and reduces its heat dispersion
capability.
- Cards are accessed by 1/2-duplex serial link normally running
at 9600 baud (but up to 115k2 on some cards) ... it takes a long
time to send large amounts of data to a card. Filling 2MB would
take a *very* long time; most people wouldn't want to wait.

There are designs for cards with high-speed USB-style interfaces in
place of the serial port, when these cards become available it will be
practical to support much higher memory capacities ... but these cards
will be incompatible with the millions of readers already deployed
throughout the world. A changeover won't happen overnight.

there is somewhat a use issue. smartcards market segment target from
the late 80s was somewhat the portable (and disconnected) computing
niche. one of the reasons for the iso standards was that there was no
portable input & output capability readily available in that period
... so you needed fixed physical stations that provided the
input/output capability that you could take your portable computing
device to.

starting in the early 90s ... you started to see PDAs and cellphones
with portable input and output capability built-in. This started to
fill the niche that some had envisioned for smartcards. Part of the
issue ... which would you prefer: a device with possibly a couple
mbytes that had no input/output capability (so you had to go find some
station) .... or a device with input/output capability that you could
carry along with you.

in some respect it took so long for smartcards to catch on in that
niche ... that other technology came along and made them obsolete.

however, numerous operations have invested significant amount in the
core technology and they have been seriously motivated to find a way
to recoup that investment.

not really highlighted was that most of the early APLs ... up until
cms\apl (and then apl\cms) typically offered 16-32kbyte workspaces
... apl\360 "service" offered by the philly science center had 16kbyte
or 32kbyte workspaces ... and the apl\360 service contained a monitor
that swapped the complete workspace to/from disks when it switched
users.

part of going to cms\apl was that it completely eliminated the monitor
... but it also opened up workspaces to hundreds of k (or megabytes)
using virtual memory. As part of that, the APL storage allocation
mechanism had to be completely reworked ... because it tended to touch
every piece of available storage in the workspace (which
worked in a real-storage swap environment ... but was very painful in
a paged virtual memory environment).

when cambridge started offering cms\apl as an internal corporate
service ... you started to see places like corporate hdqtrs coming on
to do business modeling ... since you could start doing reasonably
sized programs. cms\apl also introduced being able to do standard
system calls (including file i/o). prior to that, APL systems tended
to be restricted to having all the data and program contained in
single (16kbyte) workspace.

cms\apl and then apl\cms work also did a lot to make sure that the
core interpreter could reside in shared segments (i.e. the same image
could be concurrently in multiple different address spaces).

For HONE, a "padded cell" APL application evolved that tried to
totally isolate the sales & market people from having to deal with
standard online computing environment. The consolidation of all the US
HONE centers in Cal. ... and the growing dependency on sales&marketing
on HONE for major aspects of the operation ... saw the US HONE center
growing towards 40,000 defined userids in the 1980 time-frame. The US
HONE complex was also replicated at numerous datacenters around the
world ... for worldwide sales & marketing support.

The "padded cell" APL application was code-named Sequoia and was the
default application loaded into every APL workspace at initialization.
Sequoia grew over time to a couple hundred kbytes. Sequoia provided the
"user interface" that most marketing and sales people dealt with
(underlying APL and/or CMS environment was rarely seen).

Sequoia would load, execute and delete other applications from the
standard workspace. At some point, performance analysis indicated that
having private copies of Sequoia in every workspace (and every virtual
address space) represented a major paging issue. The guys at PASC that
had done apl\cms ... helped HONE with an APL hack where the Sequoia
APL program was moved from normal workspace into part of the HONE APL
interpreter (so it became part of the shared segments ... i.e. only a
single copy that was shared across all the different virtual address
spaces).

Major HONE APL applications were "configurators" ... basically a
somewhat spreadsheet type application where customer machine
configuration requirements were entered ... and the configurator would
determine what all needed to be specified for the machine order.

in the early to mid 70s ... prior to consolidation of all the US HONE
datacenters ... one of the US HONE datacenters occupied the 2nd floor
of a bldg at a prominent location on wilshire blvd.

the computer room was in the center ... and staff had cubicles along
the outside windows. One of the HONE support staff had a fairly large
telescope (w/floor tripod) in his cubicle that he used to admire the
opposite gender out on wilshire during lunch hour.

Certificate Management Tools

"TC" writes:
I have also determined that I do not have the ability to create such a
certificate. I have Microsoft's selfcert.exe and the certification
authority included with Microsoft Windows 2003 Server. With these tools,
I can create certificates, but I have no control over the expiration
date and I cannot export the private key (and therefore can only apply
the certificate from the computer on which it was created).

private keys are stored in some sort of encrypted file ... totally
separate from any certificate.

at least one vendor has a virus demo where they copy an encrypted
private key file off a victim machine and break the encryption in
something like an avg. of 40-50 seconds (brute force guessing on
secret/symmetric key used to encrypt the private key file).

In PGP and SSH it is relatively trivial to identify the encrypted
private key file ... and copy it across multiple machines ... however
these implementations also make do w/o requiring public key
certificates.
http://www.garlic.com/~lynn/subpubkey.html#certless
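
for what it's worth, a minimal sketch of that kind of key file (python,
assuming the pyca/cryptography package; the filename and passphrase are
made up) ... the point being that the private key lives in a
passphrase-encrypted file completely separate from any certificate, and
a copied file with a weak passphrase is open to exactly the sort of
offline brute-force guessing in the virus demo above:

  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  key = Ed25519PrivateKey.generate()
  pem = key.private_bytes(
      encoding=serialization.Encoding.PEM,
      format=serialization.PrivateFormat.PKCS8,
      # the file is protected only by a symmetric key derived from this
      # passphrase -- no certificate is involved anywhere in the process
      encryption_algorithm=serialization.BestAvailableEncryption(b"correct horse"),
  )
  with open("id_demo.pem", "wb") as f:
      f.write(pem)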

quick use of a search engine turns up this ssh for windows:
http://www.jfitz.com/tips/ssh_for_windows.html
http://sshwindows.sourceforge.net/
http://bmrc.berkeley.edu/people/chaffee/winntutil.html

also using search engine ... the first several sites
that come up about generating certificate
http://slacksite.com/apache/certificate.html
http://tirian.magd.ox.ac.uk/~nick/openssl-certs/ca.shtml
http://www.pseudonym.org/ssl/ssl_cook.html
http://www.geotrusteurope.com/support/csr/csr_apache.htm
http://www.ssl.com/support/apacheOpenSSLInstall.jsp

an earlier part of the HONE cms\apl (and then apl\cms) experience was
making the APL interpreter part of shared code (so you
didn't need a unique copy resident in each address space).
http://www.garlic.com/~lynn/subtopic.html#hone

the original cp67 facilities for sharing pages across address spaces
were fairly hokey. Basically there was a table built into the kernel
which defined groups of page areas on disk. A privileged system
command, savesys .... allowed specific pages from the user's current
address space to be written to a specific collection area on disk.
There was a hack to the virtual IPL (boot) command that instead of
simulating the device boot process .... would map pages from a defined
group into the current address space (including establishing any "sharing"
characteristics).

An unshared version of this was used by a number of customers to get
fast OS360 ipl/boot (i.e. pages from a virtual os360 guest would be
saved after os360 had gone thru its laborious startup process).

An early HONE apl\cms hack was to save pages after the APL interpreter
had been loaded. cp67 & vm370 could be set up so that they would
automatically execute an IPL command at login; so instead of
specifying an IPL device ... for most HONE users, it specified a
special group of saved pages that included both the CMS kernel and
the APL interpreter.

one of the activities that I had done was virtual memory management
enhancements to cp67 (cms paged map file system, dynamically loading
additional page image stuff ... after virtual IPL/boot ... including
shared page/segment specifications). there were two cambridge "Z"
publications on the work (I & II, ZZ20-6001 and ZZ20-6002, aka "Z"
publications only available internally). various references to some of
the stuff involved in the virtual memory management enhancements
http://www.garlic.com/~lynn/submain.html#mmap
http://www.garlic.com/~lynn/submain.html#adcon

The IPL APL hack (which most users didn't even realize was happening)
worked for HONE as long as the user never needed to leave the
sequoia/apl environment. However, the HONE/APL/Sequoia environment was
extremely CPU intensive (one of the characteristics of an interpreted APL
environment). Some analysis found that a few critical
configurator components doing sophisticated modeling ... could give
back well over half of all HONE cpu usage by being recoded in
Fortran. The problem with the IPL hack for shared pages is that it
wasn't possible to transparently transition from the IPL/APL
environment to a Fortran application and back to IPL/APL.

Early in the VM370 release 2 product cycle, i ported the cp67 VMM
enhancements to vm370 and also integrated them into HONE operation. It
was now possible to configure the userids for sales&marketing people
to automatically IPL/BOOT a relatively normal CMS ... and then
automatically transition into an APL/Sequoia environment (that was
able to reference a page-image of the APL interpreter on a cms page
mapped filesystem ... including specifications for shared
page/segments, all transparent to the end user). It was then possible for
Sequoia to setup automated scripts for exiting the APL/Sequoia
environment, executing specific Fortran applications and automatically
re-entering the APL/Sequoia environment (all transparent to the
sales&marketing people using HONE).

This dynamic invoking of shared images was deemed to be relatively
useful ... and the vm370 development group picked up a small subset of
the changes (w/o the page mapped filesystem support) for vm370 release
3 as something called discontiguous shared segments (DCSS).

Maximum RAM and ROM for smartcards

Sylvain writes:
actually not, sorry for the mistake, the USB cards I know of only support
the USB protocol and so are not compliant with ISO readers.

there have been two-chip configurations that support ISO7816 and USB
... with USB interface on one end and 7816 on the other end. there
has been some talk of single chip implementations that would
support 7816, 14443, and USB.

Maximum RAM and ROM for smartcards

Sylvain writes:
the key features of a PDA or cellphone are what you do with the input
and output devices (such as being able to transmit voice, display info
and so forth), not the availability, nor the internal details of these
interfaces.

smartcards offer "low-level" communication interfaces only ok, but
being able to use a MB smartcard to, say, watch a movie on a
14443-enabled PDA may have sense; a smartcard is just (cryptographic
capabilities don't rely on sizes) a secured data carrier (and reading
performances are close to these USB storage-key).

the original design point of smartcards was for portable computing ...
but there wasn't the portable technology for providing input/output
capability ... which gave rise to the requirement for ISO standards
that you could have fixed input/output stations that would
interoperate with the portable computing device (also viewed as not
requiring high transfer rate for keyboard input and character display
output ... lots of 9600 baud terminals around in that period).

that niche got filled by PDAs and cellphones before smartcards ever
penetrated the market. however, there was significant investment into
the technology ... and since then it has sort of been a solution in
search of a problem.

there was then some attempt to apply smartcards to high transaction
environments ... however the physical contact characteristics were
shown to be totally inadequate for high transaction environments ...
which prompted the 14443.

going on in parallel with the smartcard technology efforts ... you
found the PC technology market developing interfaces like USB.

So ... there has been some flailing around attempting to adapt some of
the computer-originated technologies to smartcards (to help recover
the significant smartcard investment). However, if you are talking
about plugging some sort of hardware token into a portable computing
device (like a pda or cellphone) ... then any computing processes in
the hardware token are going to be pretty redundant and superfluous to
the computing capability in the "real" computing device. the majority
of these scenarios are possibly looking at high-speed storage/memory
interface ... to augment the existing storage/memory of the "real"
portable computing devices.

This is totally different design point than smartcards ... which
required both their own computing capability as well as memory and
only required a slow-speed interface for character input (typing) and
character output (display). It would seem more sensible, rather than
try and force fit smartcards into a solution that they were never
designed for ... to design a solution that fits the requirement.

Besides (computer) USB usurping the role of 7816 in the portable
"contact" computing market ... you are starting to see bluetooth,
various wireless, and even cellphone technology usurping 14443 in the
contactless market segment.

so back to the original assertion ... the target market niche for the
smartcard design point evaporated before smartcards ever took
off. Since then they have been a solution in search of a problem (in
part because of the original significant investment in the
technology). They've been proposed as solutions for problems that they
were never designed for. In lots of cases, their combination of both
memory and computing represents redundant and superfluous capability
for the problem they get targeted for (i.e. usually because they would
be used in conjunction with something that already has significantly
more memory and/or computing power ... that the smartcard solution
represents extraneous and unneeded capability).

Good passwords and security priorities

"sinister" writes:
Overall, I don't disagree; I have a list of all my passwords for work
and personal business on my home computer, and it's quite long, which
is a real hassle.

However, at my work nearly all the risk is with break-ins from remote
places in cyberspace. So there's nothing wrong with most users just
writing their password down and leaving it in a desk drawer.

several studies have indicated that at least 77% of fraudulent breakins
involve insiders.

recently in the news, there was something about a plan for lifting a
couple hundred million from some bank. there appeared to have been
keyloggers installed on several machines ... possibly by somebody
from a maint. or cleaning crew. the keyloggers were almost
undetectable.

not only was the physical area not safe (for storing recorded
passwords), but static-data, shared-secret authentication mechanisms
were vulnerable (whether they were written down or not).

Sylvain <noSpam@mail.net> writes:
basically they are designed to perform "secure" (as secure as
possible) debit-credit transactions, and yes they do that better
than a mag-stripe, and to secure your GSM connection; now (recently) we
use their internal protection to hold data such as your ID; do you
suggest using a PDA as a passport or ID card?

for most corporate needs, I agree that large storage is useless,
strong cryptographic capabilities should be enough assuming that data
are managed in the back-office.

but still, ID markets require storage (not yet MB) and the computing
power of cards is not ridiculous (I know some crypto-libs on Wintel
that run slower than a smartcard!) so, they offer a good choice for
portable computing and storage ... even if they don't come with a
display and so on (or because they don't), they are supposed to be
used in a CAD, not in a standalone way (that's why I said the comparison
was meaningless).

there was some deployment for payment operations with stored value
paradigm in parts of the world where there were no online connectivity
... or the connectivity was extremely expensive ... aka the cost of
the chip was much less than any available telco costs.

they saw much less uptake in parts of the world where online
connectivity was pervasive and readily available at nominal cost
... in those environments you saw magstripe stored-value cards
... even for stored value ... since in such environments with
pervasive online connectivity, online stored-value w/o chips is much
less expensive than offline stored-value w/chips.

there is some discussion about using smartcards for things like door
badge access. basically from 3-factor authentication paradigm
• something you have
• something you know
• something you are

a hardware token for authentication has been around for many years. in
the early days ... many of the door badge systems and other types of
access systems suffered the same deficiency as the target market for
chip-based stored value ... there was no connectivity. In these
systems the authentication device was viewed as not only providing the
authentication ... but also the access permissions (which doors
could be opened ... somewhat analogous to the stored-value chips that
contained the current balance). However as technology progressed a lot
of the access control systems shifted to online implementations. All
the permissions were maintained online and could be updated in real
time.

It was only the no-value infrastructures that stuck with the offline
paradigm ... because there wasn't enuf value proposition to upgrade to
the higher integrity of an online, real-time system (being able to do
things like log, recognize patterns, reconfigure in real-time, etc).

So again, there tends to be the infrastructures that involve value ...
which involve online, real-time systems regarding permissions and
access control ... and only require authentication mechanisms (not
identification and/or permissions) ... and the no-value
infrastructures that can't afford the higher integrity operations and
will rely on devices that provide both the authentication as well as
the authorization/permission information.

So if you are dealing with infrastructures of any value ... in parts
of the world where online connectivity is becoming pervasive and
cost-effective ... you go to online permissions and authorizations ...
and are purely relying on the remote operation for authentication.

Such a well-designed high-integrity permissions/authorization
infrastructure should be capable of supporting a variety of
authentication mechanisms concurrently ... preferably tailoring the
integrity of the authentication required to the value of the specific
operation needing permission/authorization (for instance a couple $10
transactions per day might require much lower integrity authentication
mechanism than being able to do numerous $1m transactions per day).

So for a reasonable infrastructure involving value of any kind
... authentication proportional to the value involved would be needed.
something you have authentication could be some sort of electronic
hardware device that could demonstrate its uniqueness and have
extremely strong antitampering and anticounterfeiting mechanisms.
It is possible to implement such a characteristic using a
sophisticated smartcard device ... but again it is overkill ... like
having huge amounts of memory in a smartcard is overkill.

Given a requirement for straight-forward something you have
authentication, that can demonstrate uniqueness and have strong
antitampering and anticounterfeiting characteristics ... you need much
less processing than what is typically available in a smartcard.

Many of the old paradigm "id cards" required a significant amount
of storage because the relying institutions lacked any reasonable
online capability ... and so had to provide a substitute for what they
would otherwise be relying on ... in the institution-specific cards that
might be provisioned to each person.

This was one of the early 90s proposals for smartcards for use as
driver's licenses ... all your personal history, driving record and
lots of other information would be carried in your driver's
license. When stopped, law enforcement (not having any kind of radio,
cellular, and/or other kinds of electronic communication) could rely
on processing your driver's license chip to find out everything about
you (outstanding parking tickets, speeding tickets, DUIs, revocations, and
whether the card was lost or stolen). What occurred in reality was that law
enforcement obtained online connectivity and now has timely, real-time
information at their fingertips ... they don't need to ask your
driver's license whether it was lost or stolen ... they query the
online database. The only thing they really need is some unique
characteristic that they can reliably use to associate the person with
their online records. Having all that information in a smartcard
driver's license is redundant and superfluous and doesn't
provide as high quality information as the online database records.

Maximum RAM and ROM for smartcards

... so another from the early 90s was the medical record card ... you
would have your complete medical records on a card ... in case you
were in an accident and the paramedics would review your complete
medical history before starting emergency treatment.

there were a number of problems with this scenario ... 1) there is not
much a paramedic can do in an emergency situation with your complete
medical records, 2) any processing of your complete medical records
imply some nominal level of electronic equipment, 3) in all cases that
i know of where paramedics have some nominal electronic equipment
... it is used to go online with real doctors who are required to
authorize many kinds of treatment.

I would contend that there is a lot more benefit from having online
medical records for emergency access ... than there is trying to
deploy personal medical records in a smartcard. If i was looking at
getting a copy of my complete medical records ... i would probably go
after it as an add-on to my PDA ... rather than having a separate
emergency medical record card.

Daniel James writes:
You could do such a thing with a Palm/Pocket PC device, of course,
but you would have no guarantee of the integrity of the pocket
device -- a virus or worm might compromise that device and cause it
to reveal its keys or to misrepresent the data being
agreed/signed. Using a sealed, tamper-resistant,
non-software-upgradeable device -- such as a hypothetical smartcard
with a display and keypad -- would eliminate this

just add a very reduced set of electronics inside the body of a
PDA. it is a lot cheaper than having separate PDA and smartcard. this
is somewhat the trusted computing scenario.

i gave a talk a couple years ago at the intel developer's conference
in the trusted computing track. in the talk, i commented that over
the previous couple of years their design appeared to have gotten
significantly simpler and was approaching the design i had
done about the time they started out. somebody in the front row quipped
that was possibly because that I didn't have a committee of a couple
hundred people helping me with the design.

almost by definition, almost every "smartcard" on the market today
does allow loading of software ... in part because they are still
somewhat a solution in search of a problem. they regularly position
themselves that if there is a problem ... then of course they could
load the appropriate software to address the problem.

shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
[1] You could take that last as a sign that they didn't trust me, but
I took it as a sign that they were exercising reasonable and
necessary prudence. A good security policy need not get
underfoot, and that policy was very easy to live with.

there have been a number of recent studies that at least 77 percent of
fraud and theft involve insiders ... which confirms an age old adage
that the majority of fraud is by insiders.

in the early 80s there started to be a lot of work on collusion
controls ... i.e. processes that required two or more different people
for their operation and work on how to recognize when the checks and
balances were being subverted by collusion.

in recent years, the internet has tended to take attention away from
insider threats to focus on external vulnerabilities. in some cases
the issue of the internet obfuscates whether the exploit was by an
insider or an outsider (even tho the majority of fraud continues to
involve insiders).

shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
SAF allows for more granularity than you see in the PC and *ix
worlds; take advantage of it. Create userids tailored to roles; if
someone wears multiple hats, issue him multiple ids. If someone
needs to submit batch jobs with specific privileges but doesn't need
them online, create an id that is only allowed in batch and give him
surrogate authority. Don't give UID(0) to anyone unless they can't
do the job with, e.g., su. In particular, don't make your SMP userid
UID(0).

in the early days ... the default was allowed/permitted ... except for
what was denied. the transition to default deny except what was
permitted created a significant complexity problem.

let's say there are a million data processing objects, each with its
own permission and a thousand people. In theory, a security officer
had a million by thousand matrix or a billion decisions to make. This
was obviously not going to be a practical paradigm in any real world
scenario.

one solution was role-based access control. people have job
descriptions and there tends to be significantly fewer different job
descriptions than people. Do a role or job analysis cataloging the
explicit permissions needed for each job. When somebody new shows up,
they have a job description. The security officer just assigns the
"role" that matches that job description (and the infrastructure takes
care of populating the fine-grain permissions associated with that
role for that person). The security officer is left with very few
decisions to make as to granting permissions ... reduces possibly a
billion decision problem to possibly a couple hundred decision
problem.
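
a toy sketch of the idea (python, with made-up permission and role
names) ... the security officer's decision becomes one role assignment
per person instead of a row in the million-by-thousand matrix:

  # fine-grain permissions roll up into a small number of roles
  ROLE_PERMISSIONS = {
      "teller":       {"read:account", "post:deposit"},
      "branch-audit": {"read:account", "read:audit-log"},
  }

  user_roles = {}

  def assign_role(user, role):
      # the only decision the security officer makes for a new hire
      user_roles.setdefault(user, set()).add(role)

  def permitted(user, permission):
      # the infrastructure expands the role back into fine-grain permissions
      return any(permission in ROLE_PERMISSIONS[r] for r in user_roles.get(user, ()))

  assign_role("alice", "teller")
  assert permitted("alice", "post:deposit")
  assert not permitted("alice", "read:audit-log")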

this has been characterized as handling 95% of the problem. Real world
situations always involve people having to do things outside of their
strict role/job definition. That is where the security officer work
shows up ... making permission decisions about the remaining five
percent of real world scenarios.

the upfront role-based analysis would have looked at job descriptions
and permission partitioning to enforce multiple person involvement for
achieving some business operation.

however, frequently a lot of the thot that went into establishing
partitioning as part of addressing collusion issues ... can be
obfuscated in the actual deployment. When it comes time for a
security officer to allow the same person to have multiple roles (to
address practical real world situations) ... some of the business
rules about giving the same person a collective set of permissions
(that need to remain disjoint) may be violated (in the roll-up of
fine-grain permissions to collections of permissions represented by
roles, knowledge about which sets of fine-grain permissions are
required to remain disjoint, may be lost).

in that sense, most of the implementations that do fine-grain
permissions roll-up into role aggregates (RBAC systems) lack the
ability to also specify rules about which sets of fine grain
permissions have to be kept disjoint ... as well as to identify the
minimum set of fine-grain permissions that are required for achieving various
kinds of exploits.
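
continuing the same toy sketch (again made-up names, not any particular
RBAC product), the missing piece might look like a separation rule that
is checked whenever roles are combined for one person:

  ROLE_PERMISSIONS = {
      "payments-entry":    {"post:payment"},
      "payments-approval": {"approve:payment"},
  }
  # fine-grain permissions that must never all end up with one person,
  # however the roles get combined (the collusion / separation-of-duty rule)
  SEPARATION_RULES = [{"post:payment", "approve:payment"}]

  def violates_separation(roles):
      held = set()
      for r in roles:
          held |= ROLE_PERMISSIONS[r]
      return any(rule <= held for rule in SEPARATION_RULES)

  assert violates_separation({"payments-entry", "payments-approval"})
  assert not violates_separation({"payments-entry"})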

I've seen scenarios for 24hr (and even 8hr) certificates ... where the
information certified today couldn't be relied on to still be true
tomorrow.

The certificate model ... again, is the offline scenario evolving from the
letters-of-credit paradigm left over from (at least) the sailing ship days.
The person involved could have a credential and the relying party
relies on the credential in lieu of being able to directly contact the
authoritative agency responsible for the information.

the short-lived certificates are starting to blur the line regarding
whether the relying party would be better off directly contacting the
authoritative agency in real-time ... rather than relying on a stale,
static certificate provided by the party they were trying to validate.

there have also been a number of deployments where the relying party
went thru the motions of performing the digital certificate processing
and then, in real-time, went directly to the authoritative agency
responsible for the information anyway (making the use of a stale,
static certificate, redundant and superfluous).

Perhaps a review of the DOD "orange book" for the B2 security
requirements would be appropriate.

orange book is being deprecated ... supposedly being supplanted by
common criteria ... i've been criticized for continuing to carry
orange book definitions in my merged security taxonomy & glossary
http://www.garlic.com/~lynn/index.html#glosnote

one of the things that orange book tended to assume was a general
purpose dataprocessing system being concurrently used for multiple
different activities that possibly involved multiple different
security criteria. common criteria is much more open to dedicated
operations and compensating procedures for things that would never be
able to handle the security requirements of a general dataprocessing
service with numerous different concurrent (and possibly conflicting)
operations going on.

there, then, is the internal folklore of a gov. operation requesting
the exact source code for a specific deployed
operational MVS system (or for any deployed MVS system). after the
expenditure of truly huge amounts of resources and money investigating
the issue ... it was finally determined to not be practical.

Maximum RAM and ROM for smartcards

Daniel James writes:
That's the point: you could shop online using the browser of an
untrusted PC on an insecure network but then send the final
"shopping basket" to to a trusted secure handheld terminal and
review the items purchsed, the prices, and the delivery address and
sign that "shopping basket" on the secure device. It would not then
matter that the browser, the PC and the network are insecure because
the signature guarantees that the transaction details cannot be
changed. This requires more than just a smartcard, though, as the
device has to be able to display the "shopping basket"'s contents,
and the customer has to be able to trust the device.

2) is any pin-entry actually going directly to the card (and not being
skimmed and/or potentially replayed)

a side issue with POS devices with secure modules is whether there are
overlays (like in various atm machine exploits) which raise issues
similar to those that finread standard attempts to address.

one of the provisions allowed for in the x9.59 payment standard
(the requirement given to the standards working group was to preserve the
integrity of the financial infrastructure)
http://www.garlic.com/~lynn/x959.html#x959

was the possibility of dual-signatures. there is the prospect that
terminals/readers are deployed with security modules, tamper-resistant
features and other security characteristics ... but when a financial
institution is receiving transactions ... and is doing risk assessment
... how does the financial infrastructure know that a terminal
actually meeting security requirements was being used (as opposed to a
counterfeit). to address this issue ... x9.59 allowed for an
embedded security chip to also digitally sign the transaction (after
it had been digitally signed by the customer). The signing by an embedded
security chip in the terminal's security module ... would provide
evidence (to the processing financial institution) that a counterfeit
terminal wasn't being used (however, it wouldn't preclude that such a
terminal environment hadn't been compromised with something like an
overlay).
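
a hedged sketch of the dual-signature idea (python with the
pyca/cryptography package; just the concept, not the actual x9.59
message formats ... the merchant/amount strings are made up):

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  customer_key = Ed25519PrivateKey.generate()   # held by the customer
  terminal_key = Ed25519PrivateKey.generate()   # embedded in the terminal's security module

  transaction = b"pay merchant-42 $10.00 ref-0001"
  customer_sig = customer_key.sign(transaction)
  # the terminal countersigns transaction-plus-customer-signature, giving the
  # processing institution some evidence about the terminal that was used
  terminal_sig = terminal_key.sign(transaction + customer_sig)

  # the processing financial institution verifies both with on-file public keys
  customer_key.public_key().verify(customer_sig, transaction)
  terminal_key.public_key().verify(terminal_sig, transaction + customer_sig)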

one of the issues in using personal devices (like PDAs) for (wireless)
point-of-sale transactions, is that a diligent customer has a lot
more confidence that their personal device hadn't been compromised
with counterfeit, overlays or other skimming technologies (it is
"their" display and "their" keypad). An embedded security chip in the
personal device can provide some evidence as to the integrity
characteristics of the device/environment originating the transactions
(it doesn't preclude all types of exploits ... but it possibly
provides a better bound on what exploits and therefore what risks might
be involved).

Rick Jones writes:
When the application on A called shutdown or close, the TCP endpoint
would send a FIN and transition to FIN_WAIT_1. It is now waiting for
an ACK of the FIN, and will retransmit until it gets one. If it
receives an ACK of the FIN it will transition to FIN_WAIT_2 and sit
there until it receives a FIN from the remote. If it does not receive
an ACK of its FIN, it will rtx timeout and die, perhaps sending a RST.
It will not go into CLOSE_WAIT.

CLOSE_WAIT is the state that B would enter when it received the FIN as
it is now the one waiting for the application to call close.

in the mid-90s the problem would even crop up unintentionally. the
advent of HTTP over TCP created an explosion in short-lived sessions
(and enormous FINWAIT lists). up until then most implementations had
sequential searches of the FINWAIT list (nobody expected the size of
the list to exceed more than tens of elements).

in the '96 time-frame there were starting to be numerous high-use HTTP
servers that found that they had 100% cpu consumption ... with
something like 95% of that spent scanning the FINWAIT list.

netscape was into serious replication of their servers ... in part
because of the enormous amounts of cpu spent spinning in FINWAIT list
scan. at one point, netscape transitioned to a large sequent server.
sequent had previously addressed the FINWAIT list scan issue because
of various commercial dataprocessing customer installations that would
deal with 20,000 telnet sessions.

eventually most vendors got around to rewriting their FINWAIT list
handling code.
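
a toy illustration of why the sequential scan hurt (python, not any
vendor's actual stack code) ... with a handful of entries a linear
search is invisible, with tens of thousands of short-lived HTTP
connections it dominates the cpu, which is what the rewrites fixed by
moving to keyed lookup on the connection 4-tuple:

  import timeit

  # 50,000 connections sitting in FIN_WAIT, keyed by the connection 4-tuple
  entries = [((f"192.0.2.{i % 250}", 1024 + i, "10.0.0.1", 80), "FIN_WAIT_2")
             for i in range(50_000)]
  as_list = entries
  as_dict = dict(entries)
  probe = entries[-1][0]

  linear = timeit.timeit(lambda: next(s for k, s in as_list if k == probe), number=100)
  keyed = timeit.timeit(lambda: as_dict[probe], number=100)
  print(f"linear scan: {linear:.3f}s   keyed lookup: {keyed:.6f}s")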

Those with real problems in probability and statistics
do not need to know how to compute anything, but they
need to be able to formulate their problems in
"mathematical space", and understand what is going on.

I would make the mathematics requirement for college
entrance the ability to formulate word problems,
which may not be complete, of about a half page, in
symbols which communicates the information accurately,
using as many variables as they wish.

however it is reasonable to be able to perform some rough
calculation estimates ... frequently as a real-world sanity check on
large projects & theories. you can't imagine the enormous $$$ i've
seen poured down the drain on really brilliant ideas because somebody
didn't bother to apply a little 8th grade math as a sanity check.

somewhat unrelated ... in some work with a very large state land-grant
university ten years ago ... they had commented that they had dumbed
down their entering freshmen text books three times in the previous 25
years (and there is nothing to indicate that the trend has
significantly changed). of course this was about the time that the
census bureau had a report published claiming that half the 18yr olds
(stats from the 1990 census) were functionally illiterate.

the original purpose of SSL domain name certificates was to handle a
perceived problem with the domain name infrastructure
where you might not be talking to the webserver you believed you were
talking to. so webservers got SSL domain name certificates ... and in
the SSL exchange, the browser would validate the webservers
certificate ... and then compare the domain name in the certificate
with the domain name in the URL. the assumption was that you then
probably were talking to the webserver that you thot you were talking
to.

well, webservers found out that they took an 80-90 percent performance
hit. so the common deployment these days is that ssl isn't used for the
actual shopping experience ... the part where the person potentially
was the one that directed the browser to the website
initially .... so there is no check matching the domain name in the
webserver's certificate with the domain name in the URL initially provided
to the browser.

you eventually get to checkout and select the payment button ... which
initiates the https part. Note, however, if you have been talking to a
fraudulent site (the thing that SSL was suppose to prevent) .. it is
likely that the payment button will specify a URL with a domain name
for which the crooks have a matching certificate (i.e. with the
payment button, the crooks get to specify both the URL for the browser
as well as provide any SSL certificate; it isn't likely they would
provide something that doesn't match).

so the other thing that has been in the news for the past couple weeks
... is people getting to an incorrect site when they mistype
google.com. again, whoever has set up the site that people go to when
they mistype google.com ... is likely also to have a valid SSL
certificate that matches the domain name you are at.

Maximum RAM and ROM for smartcards

Sylvain writes:
ok, it wasn't exactly my point but I agree with your
conclusions. since 99% of web users don't know what PKI, X509, or even
"domain name" mean, having a trusted cert to confirm that such a page
comes from it is certainly useless.

I was more thinking about mutual authentication (SSL 3) made by an
active agent that allows or prevents specific connections (and it could
rely on a smartcard with its list of pre-approved merchant-sites, or
more likely the cert of your bank).

we mandated mutual authentication SSL ... before there was a mutual
authentication SSL. It turns out the certificates were redundant and
superfluous.

the purpose of certificates was to provide for relying
parties (in this case both parties are "relying") who had
no previous contact and/or knowledge about each other.

in the case of financial institutions and their customers, they have
previous knowledge about each other ... and even records about each
other. They don't need to rely on certificates to provide that
information. In the mutual SSL authentication scenario for e-commerce,
certificates were utilized ... in large part because the software
already existed. However, once the initial certificate processing was
completed ... both sides looked up their existing records to
verify that they were communicating with the expected parties. In
effect, other than some minor preliminary processing that leveraged
existing software, the certificates were redundant and superfluous
(since information related to pre-existing relationship was what drove
the business process).

in the early 90s there started to be some amount of effort put
into x.509 identity certificates ... however it was not possible to
predict what sets of identity information ... various future relying
parties might be interested in. as a result, you started to see various
CAs looking at enormous overloading of x.509 identity certificates with
privacy information. in the middle 90s, financial institutions were
beginning to realize that x.509 identity certificates represented
enormous privacy and liability problems.

a financial institution would register all the information in an
account record ... including the customer's public key. they
then created a relying-party-only (r-p-o) certificate that they gave to the
customer. in the future, the customer created a payment transaction,
signed the transaction ... and sent the combination of the
payment transaction, the digital signature, and the r-p-o digital
certificate back to the financial institution.

since the financial institution already had all the information,
having the customer send back the digital certificate represented a
totally redundant and superfluous operation; or almost. It turns
out that even a relying-party-only certificate was on the order
of 100 times larger than typical payment transactions. Including
a redundant and superfluous digital certificate with every
payment transaction had the effect of an enormous increase in
payload bloat ... increasing typical transaction transmission
size by one hundred times.

In the scenario where a customer establishes a relationship with
a financial institution ... they can register their public
key with the financial institution and the financial institution
can verify their digital signature with the on-file public key.
It is a totally certificateless operation
http://www.garlic.com/~lynn/subpubkey.html#certless

and furthermore matches the existing business models where people
provide their "human" signature on a signature card when they
establish business relationships with a financial institution.
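
a minimal sketch of that certificate-less flow (python, pyca/cryptography
package assumed; account numbers and amounts are made up) ... the on-file
public key plays the same role the signature card always has:

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  accounts = {}                                   # the institution's account records

  # registration, done once when the relationship is established
  customer_key = Ed25519PrivateKey.generate()     # stays with the customer
  accounts["acct-1234"] = customer_key.public_key()

  # later: the customer signs a transaction; no certificate is attached
  txn = b"acct-1234 pay 19.95 to merchant-77"
  sig = customer_key.sign(txn)

  def accept(acct, txn, sig):
      try:
          accounts[acct].verify(sig, txn)         # verify with the on-file public key
          return True
      except (KeyError, InvalidSignature):
          return False

  assert accept("acct-1234", txn, sig)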

at the same time, the financial institution can provide their
customers with the financial institution's public key .... and
the customer can save it in their trusted public key repository.
again no digital certificates. Effectively the trusted public
key repository is the model used by implementations like
PGP and SSH ... getting along w/o certificates. Furthermore,
the trusted public key repository is also the foundation
of PKI implementation ... where the public keys of the
certificate authorities are typically loaded into application
trusted public key repositories (this is where the public
keys come from that allow relying parties to validate
any digital certificates that have been sent to them).

In almost all of the SSL mutual authentication scenarios, the actual
digital certificates are totally redundant and superfluous ... since
the connection is normally being established between two parties that
have prior/existing relationship (they do not have to depend on the
certification from totally extraneous certification authorities to
determine information about the other party).

Sylvain writes:
I said that it is (most of the time) useless to perform hash
computation inside of the cards (if you don't trust a hash, why would
you trust the plain text (still no display)?), but of course PKCS1
padding shall always be done internally to the card prior to signature,
as well as OAEP padding prior to encipherment. moreover the smartcard
application shall check the length of transmitted data (the message
digest) or set some low limits.

fundamentally, digital signature is a misnomer ... it carries none of
the properties typically associated with a human signature, like view,
understand, approve, authorize, agree, etc.

for most practical purposes, digital signatures are purely
authentication ... from the three factor authentication paradigm
• something you have
• something you know
• something you are

basically, digital signature is a form of something you have
authentication ... it indicates that some entity has access to and use
of a private key.

in the authentication scenario ... random data is sent (countermeasure
for replay attacks), the remote end digitally signs the random data and
returns the digital signature. The relying party then can validate the
digital signature with the on-file public key. the result is some
implication regarding something you have authentication.
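
the same flow as a small sketch (python, pyca/cryptography assumed)
... the fresh random challenge is the replay countermeasure, since a
captured signature is useless against the next challenge:

  import os
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  device_key = Ed25519PrivateKey.generate()
  on_file_public_key = device_key.public_key()    # registered earlier with the relying party

  challenge = os.urandom(32)                      # relying party -> remote end
  response = device_key.sign(challenge)           # remote end -> relying party
  on_file_public_key.verify(response, challenge)  # raises if the signature doesn't check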

Some relying parties might find it of some interest what is required
to access and use a private key ... is the private key in an
encrypted software file ... or was it generated in some hardware
token with specific integrity characteristics and there are no
provisions for the private key to depart the token. most of the
(PKI) infrastructures maintain little or no information about
integrity characteristics that might be of some real interest
to relying parties.

the previously mentioned EU finread standard does define
a terminal that provides an environment where it attempts
to establish that the transaction being signed was read
and some indication that there was understanding. The purpose
of the 2nd digital signature ... by the signing environment
... was to provide a possible indication that some procedure
was followed that might possibly satisfy "human" signature
requirements (as opposed to just the user's digital signature
which is just there to provide authentication).

However, there is a vulnerability in such systems if
asymmetric keys were used for both authentication operations
and signature operations. If "random data" was ever signed
w/o being viewed as part of a purely authentication event ...
and signatures were also accepted for "agree, authorize,
and/or approval" messages ... then an attack is to transmit
valid data under the guise of being "random" authentication
data.

there are some forms of attacks on private keys involving encryption
of carefully constructed data. one of the purposes for a hardware token
to hash the data ... even if it is hashing an existing hash of the data
... is as a countermeasure against such attacks, aka how does a hardware
token know that the data it is being asked to sign is actually a real
hash ... any more than a person knows that random data (that they
might sign as part of an authentication protocol) is truly random.

in the dual-use attack ... a possible countermeasure is to append
some disclaimer to every piece of data that is digitally signed
(prior to digitally signing it) and transmit back both the
digital signature as well as the actual message signed (as
modified with the disclaimer).
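
a small sketch of that countermeasure (python, pyca/cryptography again;
the disclaimer wording is made up) ... because the disclaimer travels
inside the signed bytes, nothing signed during an authentication
exchange can later be passed off as agreement to a document:

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  AUTH_DISCLAIMER = b"authentication challenge only - signifies no agreement: "

  def sign_challenge(key, challenge):
      # sign disclaimer-plus-challenge and return the exact bytes signed along
      # with the signature, so the verifier sees the disclaimer as well
      message = AUTH_DISCLAIMER + challenge
      return message, key.sign(message)

  # usage: message, sig = sign_challenge(Ed25519PrivateKey.generate(), b"random-bytes")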

Daniel James writes:
There are designs for cards with high-speed USB-style interfaces in
place of the serial port, when these cards become available it will
be practical to support much higher memory capacities ... but these
cards will be incompatible with the millions of readers already
deployed throughout the world. A changeover won't happen overnight.

one of the market segments that smartcards wandered into was
transactions of various kinds ... these tended to have time
constraints and could be high-use. one of the short-comings of iso7816
in high-use was wear and tear on contacts. a common failure mode was
high-use readers developing burrs on the contacts which in turn
would "rip" contacts off smartcards (when inserted or removed).

14443 proximity was somewhat developed to replace 7816 in this market
segment (helping address both time-constraint and mechanical wear).

the problem with 14443 in time-constraint market-segment was that it
severely exacerbated the power-constraint issue that you raised.

for instance, one of the approaches to addressing the enormous
time-problem with RSA operations has been to significantly increase
the circuit count (1100-bit multiplier in lieu of a 16-bit or 32-bit
multiplier) ... for some chips this represented a 30 percent or larger
increase in circuit count and a significant increase in power draw
(exceeding the power-constraints of typical 14443 proximity capability
... aka you can have fast & high-power draw or you can have slow &
low-power draw).

an old data point from the mid-90s using the BSAFE2 library on a 100mhz PC
... was on the order of 20-30secs processing for a defined financial
transaction (involving multiple RSA operations). This was using
standard BSAFE2 16-bit math operations. A friend had done a rewrite of
the BSAFE2 library to use 32-bit math operations (in lieu of the
16-bit math operations) and got a factor of four speed-up.
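
the rough arithmetic behind that factor of four (a sketch that ignores
carries and cleverer algorithms): schoolbook multiplication of an n-bit
number split into w-bit limbs costs on the order of (n/w)**2 limb
multiplies, so doubling the limb width cuts the count by about four:

  def limb_multiplies(modulus_bits, limb_bits):
      limbs = -(-modulus_bits // limb_bits)   # ceiling division
      return limbs * limbs

  print(limb_multiplies(1024, 16))   # 4096 limb multiplies per big multiply
  print(limb_multiplies(1024, 32))   # 1024 ... roughly the observed 4x speed-up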

One of the issues is that USB represents a totally different target
market ... even tho the small form factor may remain similar. The
original issue was having a convenient portable form factor w/o its
own "human" input/output interface ... because the technology didn't
exist at the time ... and requiring ISO standards for interfaces
... so that the device could be used with fixed stations that did have
"human" input/output capability.

Technology did evolve for human input/output, displacing that early
target market. The issue now is finding different target markets for
portable small form factor w/o its own power and w/o its own human
input/output interface.

Given a market place where a human is tending to carry any portable
device (including ones with their own input and display capability)
... then many of the advantages cited for miniature single chip tokens
... can also be achieved by embedding such a chip in a slightly larger
device with more capability.

One of the potential infrastructure issues is a model with lots of
different institutions issuing miniature single chip tokens vis-a-vis
person having a single portable device that can be registered in
different contexts.

In the 3-factor authentication model
• something you have
• something you know
• something you are

... there has been a requirement for something you know
shared-secrets to be unique in every unique security
domain ... so that other individuals that have access to your
shared-secret in one security domain ... can't
impersonate you in a different security-domain, aka your local
neighborhood ISP can't use your ISP login password to impersonate you
at your bank. This has led to the problem of people having to deal
with scores of different and unique shared-secrets.

in the past, institutions have tended to issue unique something you
have tokens ... somewhat worried about integrity of the tokens and
being able to generate counterfeit tokens.

however, in the evolving 3-factor authentication environment
... it is becoming pretty obvious that you aren't going to have a
unique thumbprint (something you are) authentication for all
the different possible security domains that you will
interface with.

the technology is now available for high integrity something you
have tokens that are at least as hard to counterfeit as
thumbprints (something you are). one issue is will any
something you have (token) based environment come down on the
side of the old-fashioned something you know shared-secret
paradigm (potentially having scores of different tokens, one for every
unique security-domain) ... or will it be able to evolve more like the
something you are paradigm ... where the same token can be used
across a number of different operational infrastructures (aka
person-centric infrastructure as opposed to institution-centric
infrastructure).

An operational issue is that if the infrastructure believes a lot of
infrastructure-specific information has to be loaded into a token ...
then it will tend to have to follow the unique shared-secret
paradigm as opposed to the something you are model
(infrastructures not having figured out how to give you a unique thumb
for every institution ... as well as how to encode lots of institution specific
information into the thumb they provide you).

So, there are several possibilities:
- ssh from PC to server, personalized accounts, su
Control who is doing something - but DISPLAY is "lost", isn't it?
- a central host for loggin in and then jump to destination
[...]

ssh has been a lot more focused on one-to-many and/or many-to-one
scenarios ... not tending to have the administrative processes for
handling generalized many-to-many.

one of the things that kerberos attempted to target in the late 80s
was to also include facilities that could be used for managing the
administrative issues of a many-to-many environment (not just the
simple, inline authentication operation ... but also possibly lots of
administrative gorp for managing a large operation).

not too long ago ... i saw a detailed presentation on a new SAML
deployment. My comment to the presenter was that all the SAML message flows
looked exactly like the kerberos message flows that were put together in
the late 80s (except the SAML messages tended to have somewhat larger
information content than the typical kerberos message).

into something that didn't involve static data that would be
susceptible to simple replay and impersonation.

the early pkinit draft for kerberos was straight-forward digital
signature authentication w/o needing PKIs and/or other complex
infrastructure, digital certificates, etc. It simply involved
registering public keys in place of passwords, and performing digital
signature verification (with the registered public keys)
http://www.garlic.com/~lynn/subpubkey.html#certless

avoiding the additional complexity, business processes and potentially
external business operations that are typical of certificate-based PKI
deployments.

There have also been similar enhancements for RADIUS ... i.e. backend
protocol for authentication & authorization information ... doing
dynamic-data authentication using digital signatures with public keys
registered in lieu of passwords (and avoiding the additional
complexity, business processes and potentially external business
operations that are typical of certificate-based PKI deployments)
http://www.garlic.com/~lynn/subpubkey.html#radius

"Best practices" or "Best implementations"?

both the radius and kerberos (and most saml) implementations can
provide an administrative repository for authentication and
authorization information.

typically in the radius scenario ... you go thru standard
authentication at each point ... but the authentication and
authorization information is retrieved from a radius server (that can
be set up for the whole infrastructure).
http://www.garlic.com/~lynn/subpubkey.html#radius

radius was originally done by livingston for their modem concentrators
and later made available to IETF and expanded into a server of
generalized authentication and authorization information (it
probably sees its dominant use by ISPs for customer
authentication&authorization, but has been adapted for lots of other
authentication applications).

kerberos has been somewhat more oriented towards single-sign-on where
you authenticate to the kerberos administrative point and the whole
infrastructure is set up to utilize the kerberos authentication and
authorization based information
http://www.garlic.com/~lynn/subpubkey.html#kerberos

kerberos was originally done at MIT project athena (jointly funded by
dec and ibm) ... and also evolved into an IETF standard. numerous vendors
have kerberos-enabled their products that perform authentication
operations (including most of the unix vendors as well as m'soft and
windows infrastructures).

For the most part, SAML has focused on defining the message types and
message fields ... which is possibly why implementation deployments
have adapted the kerberos message flows. There is still the
administrative issue about infrastructure for maintaining, updating,
and changing a large operation's authentication and authorization
rules (as opposed to the straight inline, barebones operation of a
single authentication event).

Maximum RAM and ROM for smartcards

Anne & Lynn Wheeler writes:
In almost all of the SSL mutual authentication scenarios, the actual
digital certificates are totally redundant and superfluous ... since
the connection is normally being established between two parties that
have prior/existing relationship (they do not have to depend on the
certification from totally extraneous certification authorities to
determine information about the other party).

real live one from today ... went to a (commercial) site where i
actually typed in https://www.domainname.com ... and the browser
came back to me with a message about an unknown certificate ... and did
i want to view the certificate.

so of course, i said yes. the actual certificate was from some "self"
(an unknown generic brand?) certificate authority and it was for
"localhost.localdomain".

so i wonder if i'm a victim of the DNS poisoning that has been
in the press. I do a little snooping to check what the actual
ip-address i'm connected to ... actually might be.

infrastructure claims the ip address is some akamai.net thing with
some aliases, including the domain name that i had typed in.

i then access other places in the internet to do explicit DNS domain
name to ip-address mapping.

i then retype https://"ip-address" ... rather than the domain name.
i don't know which is worse ... typing in the domain name and the
browser coming back with a certificate that is flagged as totally
unacceptable ... or typing in the direct ip-address and having nothing
for the browser to match against.

at least if i check around several places in the internet for the
domainname to ip-address mapping ... and then use the ip-address based
on the consensus ... i have some hope that i'm at least not subject to
some specific site DNS poisoning.
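
a rough sketch of that "ask several places and take the consensus" step
(python with the dnspython package; the resolver addresses are just
examples of independent places to ask, not a recommendation):

    from collections import Counter
    import dns.resolver   # pip install dnspython

    def consensus_lookup(name, resolvers=("8.8.8.8", "1.1.1.1", "9.9.9.9")):
        answers = []
        for server in resolvers:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                answers.extend(rr.address for rr in r.resolve(name, "A"))
            except Exception:
                pass   # one resolver failing shouldn't sink the comparison
        # addresses ranked by how many independent resolvers agreed on them
        return Counter(answers).most_common()

    # if the address your own resolver handed you isn't near the top of this
    # list, you may be looking at site-specific DNS poisoning
    print(consensus_lookup("www.example.com"))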

the something you know authentication has frequently been
shared-secrets ... pins, passwords, account numbers, etc.
There is frequently a recommendation that people are required to have
a unique shared-secret for every domain they operate in. The
vulnerability is that people who are part of one security
domain infrastructure not only can use the information to
authenticate you in that security domain ... they can also use
the same information to impersonate you in another security
domain (i.e. so that the pin/password used with your local neighborhood ISP
isn't the same pin/password you use for online banking or
at your place of employment).

the use of the static (shared-secret) paradigm for authentication has
led to crooks harvesting both information in flight ... as well as
large repositories of information at rest. A lot of this has been
hitting the news recently with regard to identity theft. Nominally,
identity theft is obtaining enuf (static) information about you to
open new accounts in your name. It is also being used to obtain
any information necessary to perform fraudulent
transactions with your existing accounts. an example in discussion
of security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61

other random stuff with respect to shared-secret infrastructures
http://www.garlic.com/~lynn/subintegrity.html#secrets

in any case, it has given rise to many people having scores of
pin/passwords that they have to keep track of ... which frequently
leads to them all being recorded on a piece of paper (or in a file) that
is subject to being stolen (or copied).

In any case, all of this has given rise to other authentication
mechanisms ... including something you have (frequently chips that
have some unique characteristic which is difficult to counterfeit) or
something you are (biometrics). Ideally, in either of these other
paradigms, you no longer need a unique thing per security-domain ... for
instance, it is unlikely that you are going to be issued a unique thumb
in lieu of every existing unique password you may currently have (with
tokens of sufficient integrity characteristics ... you shouldn't also
need to be issued a unique token in lieu of every existing unique
password). The advantage of unique thumbs or tokens ... is that they
are much harder to counterfeit than shared-secret pin-passwords (and
proof of token possession shouldn't be dependent on generation of
static data which can be skimmed and later replayed for
impersonation).

So one of the PC hardware proposals was to put a something you have
hardware token chip that performed authentication using some kind of
dynamic data ... that couldn't simply be harvested or skimmed
(eavesdropping) for later impersonation and/or fraudulent purposes.
Your applications running on your PC could utilize the chip in
internet authentication protocols on your behalf.
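
a toy sketch of why dynamic data defeats skimming ... a recorded
signature is useless against the next (fresh) challenge (python,
pyca/cryptography, purely illustrative ... not any particular token
chip's interface):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    token_key = Ed25519PrivateKey.generate()   # stays inside the chip
    token_public = token_key.public_key()      # registered with the relying party

    # session 1: fresh challenge, token signs it, relying party accepts
    challenge_1 = os.urandom(32)
    sig_1 = token_key.sign(challenge_1)
    token_public.verify(sig_1, challenge_1)

    # an eavesdropper records (challenge_1, sig_1) in flight ...
    # session 2: a new challenge is issued, so the recorded signature is useless
    challenge_2 = os.urandom(32)
    try:
        token_public.verify(sig_1, challenge_2)
    except InvalidSignature:
        print("replayed (skimmed) data rejected")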

Unfortunately several other market forces complicate such deployment.
In the IBM unbundling of june 23rd, 1969 ... IBM announced that it was
going to start charging for software (motivated quite a bit by
litigation from the fed. gov. and other entities). At first, it was
just application software; kernel software continued to ship as
"free". In order to appropriately charge for each copy of software
being used, each copy was installed on a machine referencing a unique
processor serial number (in effect, if you didn't require a customer
to pay for each copy they were using ... you might still be
considered guilty of bundling software and hardware).

eventually the legal pressure (especially from the federal gov) to
enforce separate charging for hardware and software began to permeate
much of the rest of the industry. a cornerstone of the software
pricing was to be able to uniquely associate each copy of software
with its use. One way of doing that was having each processor uniquely
identified.

in the early 80s, there was some of this permeating the PC industry
... sometimes referred to under the heading of DRM (digital rights
management). The PCs of the period didn't have unique,
non-counterfeitable identification. The mechanism used was to ship a
unique and (supposedly) non-counterfeitable (and non-copy'ble) floppy
disk. You could install the application on your hard disk ... but the
application required a specific floppy disk to be in the reader in
order for it to operate.

One such view ... if it is possible to create a unique authentication
mechanism for each PC ... system and application software might also
be able to use it to make sure that it was running on the machine that
it was supposed to be running on (the mainframe model introduced in
the 70s when litigation forced unbundling ... being applied to the PC
market).

This might be considered slightly more appealing than having
every system component and application ship with its own unique USB
(chip) token ... and the associated component/application wouldn't
operate unless the associated USB token was currently plugged in
(i.e. the mid-80s DRM model substituting non-copy'able and
non-counterfeitable chips for non-copy'able and non-counterfeitable
floppy disks) ... if managing scores of different passwords was
difficult ... imagine trying to concurrently manage hundreds of
unique USB tokens for every machine.

"sqrfolkdnc" writes:
What about BPS? I think it was less than BOS. IIRC, I loaded BPS,
then loaded an emulator program (all on cards) and could run TWO 1401
microcode based emulations on a 32K 360/30. The BPS and 360 program
handled the time sharing between the two 1401 emulations. That was
needed after DOS grew 2K and the emulator program we ran all day under
DOS would no longer fit to allow running 1401 programs under DOS batch.

the univ. had a 709 with a 1401 doing ur<->tape front-end with a
program deck called MPIO. the univ. got a 360/30 replacing the 1401
(on its way to getting a 360/67 replacing the 709).

They gave me a summer job writing an MPIO program for the 360/30 in
assembler. I got to design/write my own storage manager, device drivers,
interrupt handler, task manager, etc. I did the implementation with a
conditional assembly test that generated either a version with DCBs that
ran under os/360 ... or a stand-alone version ... that you could load
with the BPS loader.

I would get the machine room for the weekend for development and test
(8am sat. until 8am monday). The program took nearly 30 minutes to
assemble (standalone version ... or around 50 minutes for the os/360
version ... each DCB macro taking approx. 5 minutes elapsed time to
process).

Before I learned that the BPS loader supported "REP" cards ... I
would do quick program patches by repunching the "TXT" cards (along
the way, i had to learn to read punch-codes ... since there was no
character code for the hex printed at the top of the TXT decks).

Winged writes:
On authentication, I am only aware of a few biometric devices that work
reliably. I have used a wide variety of commercial systems and all
have various issues; for example, one iris identification system I
have worked with did not work reliably, same for a retina scan
system used ... depending on the day, and even time of day, it frequently
denied access to what I needed access to.

At the other end of the spectrum, my fingerprint ID system on my
laptop, lets my daughter into the system more reliably than me, go
figure.

for the past several years at ID shows ... my fingerprint tended to
not register reliably ... although it has somewhat improved within
the last couple months. they claimed that my ridges were more
characteristic of "asian" genotype ... lots of fine, closely spaced
ridges (as well as lots of old abrasions; long ago and far away i
would demonstrate macho by handling re-bars w/o gloves, back in the
days of who could rip their shirt sleeve just by flexing their bicep)
... as opposed to european norm ... which tends to have fewer and
larger ridges.

one of the past arguments against using fingerprints on payments
vis-a-vis debit cards with pins ... was how easy it was to counterfeit
fingerprints. the counter argument was that something like 30 percent
of the people write their pins on their debit card. The comparison
then becomes, after stealing a debit card, which is more difficult:

1) lifting fingerprint from the card and counterfeiting fingerprint
entry
2) lifting a pin written on the card and counterfeiting pin entry

biometrics usually involve fuzzy matches ... with false positives and
negatives somewhat under the control of the choice of the scoring
threshold that is set. identification may try for a higher scoring
match (i.e. attempting to search a collection of recorded fingerprints
for a match) than simple authentication (attempting to determine if a
supplied fingerprint matches the authorized fingerprint).

In authentication, there is also the issue of security proportional to
risk ... infrastructures may be willing to accept much lower scoring
values for $5 transactions than they are likely to accept for $1m
transactions. slight topic drift on security proportional to risk:
http://www.garlic.com/~lynn/2001h.html#61 Security Proportional To Risk

for some payment infrastructures involving offline payment
transactions, they've tended to focus on the single optimal fixed
scoring threshold value and the choice of the optimal value
... obfuscating that most online systems are migrating to concepts
related to security proportional to risk .... i.e. require higher
scoring values ... and even possibly multiple readings from multiple
fingers for higher values (and multiple readings from multiple fingers
might be considered an addendum to 3-factor authentication paradigm).
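
a hypothetical sketch of a risk-proportional scoring policy ... the
thresholds and amounts are invented for illustration, not taken from any
payment standard:

    def required_policy(amount_usd):
        # match-score threshold (and number of fingers sampled) rises with value
        if amount_usd < 25:
            return {"min_score": 0.80, "fingers": 1}
        if amount_usd < 10_000:
            return {"min_score": 0.95, "fingers": 1}
        return {"min_score": 0.99, "fingers": 2}

    def authorize(amount_usd, scores):
        policy = required_policy(amount_usd)
        return (len(scores) >= policy["fingers"]
                and all(s >= policy["min_score"] for s in scores))

    print(authorize(5, [0.85]))           # True  ... low value, modest match is enough
    print(authorize(1_000_000, [0.85]))   # False ... high value demands a better match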

further addenda to the evolution of software pricing and licensing of
software to a specific processor (installing licensed software so that
it only ran on a specific processor ... and software being able to
recognize the specific processor that it had been licensed for)

initially just application software was priced (and licensed for a
specific processor) as part of the june 23rd, 1969 unbundling
announcement (note it might have been considered a violation of the
unbundling requirement if there was no per processor licensing
enforced ... aka customers still effectively being able to run
software for free).

However, it took almost another ten years before there was kernel
(operating system) pricing (& processor specific licensing). it
appeared that the company was arguing that kernel software should
continue to be free (required for correct operation of the hardware,
so it remained "bundled").

FS was an extremely large project that was eventually killed before it
was even announced (very few people outside the company were even
aware of it at the time). I didn't make myself very popular with the
FS people. There was a long running "cult" film at a theater down in
central sq ... and I would liken a lot of FS to the inmates being in
charge of the asylum.

along the way, supposedly the radical departure of FS from 360 was a
contributing factor in Amdahl leaving to build 360 mainframe processor
clones. at a presentation he gave at MIT in the early 70s, he was asked
what reasoning he had used with the VC people to fund his undertaking.
He replied that even if IBM were to completely walk away from 360 at
that moment (which can be considered a veiled reference to FS), customers
had already invested over $100B in 360 application software, which would
keep him in business at least thru the end of the century.

in the morphing of the 360 product into 370, a lot of the performance
work i had done as an undergraduate was dropped from the product. In the
mid-70s there was a resolution raised by the SHARE user group to have
my performance work put back in the 370 operating system.

This was at a time when clone mainframes were starting to make market
penetration. In the original unbundling, the excuse was used that
only application software should be licensed and charged for ... that
kernel software should still be "free" (aka bundled as part of the
hardware) since it was necessary for the operation of the computer.

With the advent of clone processors, the issue of not pricing and
licensing kernel (operating system) software was revisited (aka
customers could buy their processors from a clone manufacturer and then
get the operating system for free from IBM ... the clone guys didn't
have to incur the significant expense associated with operating systems).

My "new" resource manager was selected to be the guinea pig for
licensed/priced kernel software. I got to spend time on and off for
six months with the business people formulating the kernel software
pricing policies. The half-way measure taken for this round was that
"kernel" software that was direcxtly involved in hardware support (aka
device drivers, interrupt handlers, multiprocessor support, etc) would
still be free; everything else could be charged for. The "resource
manager" supposedly was better management of workload ...... so it
wasn't directly needed for the basic hardware operation. In theory,
customers buying large Amdahl clone machines might start paying IBM
for some kernel software stuff.

This did result in an unanticipated problem. I had done a lot of work
on multiprocessor support and there was a large part of the "resource
manager" that involved kernel restructure that had been done with
multiprocessor support in mind. When they decided that they would
ship multiprocessor support to customers in the next release
http://www.garlic.com/~lynn/subtopic.html#smp
... they were faced with a dilemma.

Multiprocessing support had to be "free" (under the guidelines that
kernel code directly involved in hardware support was free) ... but it
was dependent on a lot of the kernel reorganization code that was
already in customer shops as part of the resource manager (which was
charged-for kernel code). The solution was the creation of a "new"
resource manager ... all the code (about 80-90 percent) of the resource manager
that was involved in kernel restructuring required by SMP support
... was removed and made part of the "free" kernel. The new, improved
and drastically reduced (in number of lines of code) resource manager
continued to be licensed at its original price.

Along with the continued penetration of clone processors into the
market ... there was eventually a transition to charging for all kernel
software (whether it was required for direct hardware support or
not).

a super-secure online system was put together with all the
documentation in soft copy ... people could only view the
documentation on 3270 terminals (real terminals ... before terminal
emulation, cut&paste, screen-scraping, etc) ... with no ability to
print or copy the information. For various reasons they made some
claim that even if I was in the machine room, even I wouldn't be able
to break the security (even I?, hard not to rise to that bait). So the
counter was that it would take less than a couple minutes. First thing
i had to do was cut off the machine totally from any outside access
... and then i flipped a bit in the memory of the machine and totally
defeated all their security. A typical authentication routine involves
calling a routine that validates the authentication information and
then branching based on the return code. I flipped a bit so that no
matter what condition the validation routine returned ... everything
would be treated as correct validation (it was a mistake to give me
the benefit of being in the same room with the machine).
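
a toy sketch (python standing in for the actual machine-code patch) of
why everything hinged on that one branch over the return code; the names
and values are invented for illustration:

    PASSWORDS = {"lynn": "correct-horse"}

    def validate(user, secret):
        # returns 0 for valid authentication information, nonzero otherwise
        return 0 if PASSWORDS.get(user) == secret else 4

    def login(user, secret, bit_flipped=False):
        rc = validate(user, secret)
        # this branch is the whole ballgame; "bit_flipped" plays the role of
        # the one-bit memory patch that makes the test always succeed
        if bit_flipped or rc == 0:
            return True
        return False

    print(login("lynn", "wrong-guess"))                     # False
    print(login("lynn", "wrong-guess", bit_flipped=True))   # True ... security defeated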

possibly as revenge, i got assigned to help orient the new company
CSO who had come from a high level position at some gov. agency
(at least in that period, CSOs coming to industry from a
fed. gov. career had a physical security background)

Joe Morris wrote:
I wonder how many shops reinvented that wheel. My PPOE had a 1401
running IOUP (Input/Output Utility Program) to process SYSIN/SYSOUT
tapes for our 7040 (IBSYS) system; by the time the 1401 finally left
one of our really great 1401 programmers had pretty much rewritten
it from scratch, producing LOUP (Louie's Own Utility Program) with
lots of bells and whistles.

When the 1401 was replaced with a 360/40 running MFT the sysprog
team wrote a utility that stayed active while batch jobs were
running (the equivalent of a TSR in PC-DOS terminology), complete
with a round-robin internal scheduler to allow simultaneous
tasks and centralized storage allocation. All sorts of hacks were
included, of course (this was in the 1967 time frame), but it
was stable and it worked.

there was a share-contributed program, LLMPS (lincoln labs
multiprogramming system), which was a lot of unit record and tape
applications (I have the share contribution document in a box
someplace).

had trusted ID type support ... but not for actually building a
trusted system ... but for the evolving software pricing & licensing
infrastructure ... similar to current day DRM issues.

in the 60s & 70s we had built secure multi-user timesharing systems
... significantly more secure than many of the systems out there
today. the security of these systems wasn't dependent on hardware
identification ... but came basically from a ground-up, built-in security
design. some of this is referenced in past posts about commercial
multi-user, timesharing systems from the 60s & 70s
http://www.garlic.com/~lynn/submain.html#timeshare

recent post about a specific example where cambridge
http://www.garlic.com/~lynn/subtopic.html#545tech
was already providing some general access to various BU, MIT, Harvard,
and other students in the Cambridge area ... and then with the advent
of cms\apl, cms\apl system file i/o capability and really "large"
workspaces ... corporate hdqts people loaded some of the most valuable
corporate data on the machine for doing business modeling.
http://www.garlic.com/~lynn/2005g.html#27 Moving assembler programs above the line

the other application for trusted system identification ... is not so
much whether a system is built with a high level of integrity ... but
if a system asserts such a characteristic to a remote operation
... how much trust can the remote operation place in the integrity
assertion. This is similar to the EU finread terminal standard.
http://www.garlic.com/~lynn/subintegrity.html#finread

the standard specifies a number of integrity characteristics for
finread ... however the standard doesn't actually specify a mechanism
where a remote, relying party has any assurance that a finread was
actually used vis-a-vis some counterfeit terminal. one of the things
in the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

allowed for the terminal digitally signing a transaction in addition to
the end user. the user's digital signature provides some
authentication about the originating party (aka verification of the
digital signature with a public key implies something you have
authentication, aka the originating entity has access and use of the
corresponding private key) ... the terminal digital signature provides
some indication of the integrity characteristics of the digital
signing environment (was a finread standard terminal in use).
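
a rough sketch of the two-signature idea ... one signature
authenticating the originator, a second one from the terminal attesting
to the signing environment (python, pyca/cryptography; the transaction
layout and names are invented, not the x9.59 format):

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    user_key = Ed25519PrivateKey.generate()       # the end user's something you have
    terminal_key = Ed25519PrivateKey.generate()   # would live inside a finread-class terminal

    transaction = b"pay 100.00 to account 12345 nonce=" + os.urandom(8).hex().encode()

    user_sig = user_key.sign(transaction)                      # authenticates the originator
    terminal_sig = terminal_key.sign(transaction + user_sig)   # attests to the environment

    # the relying party checks both: who signed, and on what kind of terminal
    user_key.public_key().verify(user_sig, transaction)
    terminal_key.public_key().verify(terminal_sig, transaction + user_sig)
    print("originator and signing-environment signatures both verify")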

In that sense, a trusted machine authentication mechanism may not only
provide reference for licensed software running on the local machine
... but possibly also a kind of reference for distributed licensing
infrastructures.

There have been various infrastructure definitions for really personal
computing devices ... where a trusted machine authentication not only
serves as the scaffolding for software (and other kind of ... aka DRM)
licensing ... but also as authenticating the device owner ... in lieu
of a separate personal authentication token.

This does start to trample on the institution-centric token vis-a-vis
person-centric token paradigms. In the institution-centric
paradigm ... each institution provides each individual with a unique
token ... basically a one-for-one replacement for the existing
shared-secret pin/password
http://www.garlic.com/~lynn/subintegrity.html#secrets

In a person-centric token paradigm ... the individual registers
their (something you have) personal token(s) with each
institution (analogous to the way in a something you are
authentication scenario, somebody might register biometrics).

In the 90s, we were looking at the end-to-end business process
associated with a token authentication infrastructure as well as trying
to achieve significant business cost-reductions. One of the big
expense items in the institution-centric model is typically the
personalization that an institution performs for every
token. Elimination of institutional personalization can significantly
reduce costs in a token-based something you have
infrastructure. Sometimes this, in conjunction with other
streamlining, can represent as much as a 10-to-1 cost reduction.

More interesting ... moving away from institutional token
personalization can also enable the transition to a
person-centric token infrastructure; rather than every
institution personalizing a unique token for every person ... a person
registers their token(s) with every institution.

If a transition to an institution-centric token system (with each
person having a unique token in place of every existing pin/password) is
considered ... then a 10:1 reduction in token infrastructure costs is
significant (in part, by streamlining the infrastructure delivery
costs).

However, if you assume that every person eventually requires an
avg. of one hundred tokens (in such an institutional-centric model),
then a transition from an institutional-centric model to a
person-centric model can represent a 100:1 reduction in the
number of tokens ... with a corresponding 100:1 reduction in
infrastructure token costs. A combination of a 10:1 reduction in
per-token cost plus a 100:1 reduction in the number of tokens ... could
represent an overall 1000:1 cost reduction in token infrastructure
related costs.
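
the back-of-the-envelope version of that arithmetic (the ratios are the
illustrative figures from the text above, not measured cost data):

    per_token_reduction = 10       # streamlining / no institutional personalization
    token_count_reduction = 100    # ~100 institution-issued tokens collapse to ~1 personal token
    print(per_token_reduction * token_count_reduction)   # 1000 ... the ~1000:1 combined reduction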

if one were considering the 3-factor authentication model
• something you have
• something you know
• something you are

a transition from a per-institution something you have token
for every person, to a person-centric something you have token
could be considered making a person-centric token paradigm
more closely aligned with the biometric paradigm (aka as long as it is
unique ... a person doesn't need to have a unique thumbprint per institution)