GPG

Doug Laidlaw <doug@dougshost.invalid> writes:
I have never believed in "Don't ask questions; just follow the crowd."
Accepting "the crowd" has given me a disk bloated with drivers that I will
never use, and locales that I will never use, with no better justification
than the famous "Because they are there!"

I am still wondering if I need GPG at all. About the only scenario I can
see where it is worth the trouble is emailing credit card details. If such
an email is signed with GPG, is it protected during transit? It is in no
way protected upon arrival.

"asymmetric cryptography" is technology where there are a pair of keys
... what one key encodes, the other key decodes. this is in contrast to
symmetric key technology where the same key is used to both encode &
decode.
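
a minimal illustration of that technology difference (not from the
original post; a python sketch assuming the third-party "cryptography"
package is installed) ... the symmetric case uses the same key to both
encode & decode, the asymmetric case generates a key pair where what one
key encodes only the other decodes:

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  # symmetric: the same key both encodes and decodes
  sym_key = Fernet.generate_key()
  f = Fernet(sym_key)
  assert f.decrypt(f.encrypt(b"hello")) == b"hello"

  # asymmetric: a key pair ... what one key encodes, the other decodes
  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()
  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)
  ciphertext = public_key.encrypt(b"hello", oaep)           # encode with one key
  assert private_key.decrypt(ciphertext, oaep) == b"hello"  # decode with the other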

"public key" is a business process where one of the key pair is
designated "public" and is freely published. the other key is kept
private & confidential and never divulged.

"public key" business process can be used to address the key
distribution problem in symmetric key infrastructures. it also addresses
problem of repositories of symmetric keys which may become compromised
(it isn't necessary to keep "public keys" confidential in the way that
"symmetric keys" are required to be kept confidential).

knowing somebody's "public key" allows anybody to encode information and
transmit it, knowing that only the entity with the corresponding "private
key" is able to decode it.

only the appropriate entity can decode the information, however the
recipient won't know who the sender was.

"digital signature" is a business process where an entity typically
encodes a representation of information (typically a secure hash of the
information, but it could be the whole message) with their private key.
anybody can use the entity's corresponding public key to do a decode
operation to validate the origin of the information.
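
a minimal sketch of that digital signature flow (again hypothetical,
python with the "cryptography" package, RSA-PSS standing in for whatever
algorithm an actual deployment uses): a secure hash of the message is
encoded with the private key, and any relying party holding the
published public key can validate the origin:

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()
  message = b"example message to be signed"

  # signer: encode a secure hash of the message with the private key
  pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH)
  signature = private_key.sign(message, pss, hashes.SHA256())

  # relying party: use the published public key to validate the origin
  try:
      public_key.verify(signature, message, pss, hashes.SHA256())
      print("signature validates -- origin is the private key holder")
  except InvalidSignature:
      print("signature does not validate -- altered or wrong key")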

SSL uses a form of public/private key technology to encrypt information
transmitted on the internet.

we had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server ... and had
this technology they invented called SSL they wanted to use. Part of
that effort involved deploying something called a payment gateway
... misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#gateway

and the result is now frequently called "electronic commerce". This use
of SSL for electronic commerce (to hide the financial transaction
details) is still the major use on the internet today.

In the case of SSL, there is an additional business process called
"digital certificates" and institutions frequently called "certification
authorities". As part of the "electronic commerce" activity ... we had
to do some end-to-end business process audits of these (at the time) new
entities calling themselves "certification authorities". The design
point of "digital certificates" is a way of publishing information
related to the entity associated with public/private key pair ... for
first time communication between strangers. This is analogous to the
letters of credit/introduction from the sailing ship days where the
relying party had no other recourse about the stranger they were dealing
with.

In the case of GPG/PGP, public keys may be exchanged between parties w/o
requiring 3rd party certification authorities.

In the mid-90s, after having worked on this thing now called "electronic
commerce", we were asked to participate in the x9a10 financial standard
working group which had been given the requirement to preserve the
integrity of the financial infrastructure for *ALL* retail
payments. Part of that effort involved detailed, end-to-end, threat and
vulnerability studies of the various environments (internet,
point-of-sale, debit, credit, unattended, transit, stored-value, etc,
i.e. *ALL*). The outcome of that was the x9.59 financial standard
transaction protocol ... some references
http://www.garlic.com/~lynn/x959.html#x959

X9.59 protocol slightly tweaked the paradigm to require transactions to
be authenticated ... one scenario is very light-weight digital
signature. It turns out that with authenticated transaction, it is no
longer necessary to "hide" the transaction details. This eliminates the
threats from skimming, eavesdropping, and data breaches. It doesn't
eliminate skimming, eavesdropping, and data breaches ... but it
eliminates the threat that crooks can use the information to perform
fraudulent financial transactions. As a side-effect, it also eliminates
the major use of SSL on the internet for hiding electronic commerce
transactions.

As an aside, there were some other financial transactions specification
efforts going on at the same time as x9a10 in the mid-90s ... which also
looked at leveraging public/private key technologies ... but in
association with "digital certificates". Part of this issue was that
with prior relationship between an individual and their financial
institution ... it invalidated design point of "digital certificates"
for first time communication between strangers ... rendering the
"digital certificates" redundant and superfluous. The other issue
was that the redundant and superfluous "digital certificates" enormously
increased the typical payment transaction payload size and processing
overhead by a factor of 100 ... misc past posts mentioning this
enormous bloat resulting from using redundant and superfluous "digital
certificates"
http://www.garlic.com/~lynn/subpubkey.html#bloat

GPG

Nico Kadel-Garcia <nkadel@gmail.com> writes:
I don't see them describing the PGP/GPG public/private key technology as a
business practice. Really, I see a lot of mentions about using them *in*
business practices, but I've never seen the technology itself referred to as
one. Or am I missing something in your references? Seriously, you're the only
one I've seen do so, and even your own references above to your own
conversations don't seem to do this.

GPG

Nico Kadel-Garcia <nkadel@gmail.com> writes:
And key handling is, in many cases, a personal practice due to its use
for personal correspondence. In fact, there are good reasons to use it
for all correspondence as a default, but various factors have
prevented it from becoming widespread in mail clients.

key handling of a private key kept "confidential and never divulged" is
more than "personal practice" ... if other parties are to "rely" on the
convention ... it has to be an accepted business process. if it is
personal preference on how the key pairs are treated/managed ... then it
is still asymmetric cryptography technology. it is when others come to
depend on ("relying parties") how the key pairs are treated/managed,
that it becomes a trust issue and a matter of business processes.

if no other entities are affected by how an individual deals with their
private key ... then it is "personal practice" ... if others are to
depend on how an individual deals with their private key ... then it
becomes a business process.

some number of countries have even passed laws regarding the business
process of "digital signatures" ... which includes a bunch
of related stuff percolating down thru the whole public/private key
business process infrastructure.

we were called in to help word-smith the cal. state electronic signature
legislation (and later federal) ... and as a result had to go thru a
bunch of the handling from the standpoint of relied on business
processes ... numerous past posts mentioning electronic signature
(business process) http://www.garlic.com/~lynn/subpubkey.html#signature

similarly, i've mentioned that we were called in to consult with a small
client/server startup that had invented this technology called SSL and
wanted to use it for payment transactions on their server. as part of
that we had to do a lot of work related to applying a technology to
business processes that the world could trust. for people to "trust"
both "digital signatures" and "SSL", there is a trust chain that starts
with the business process (more than "personal practice") of keeping the
private key (of a public/private key pair) confidential and never
divulged. If it is purely "personal practice" ... then the ability of
others to place any trust in the whole infrastructure unravels.

we have looked at a whole lot of issues regarding market inhibitors to
public/private key. part of it is the cost/convenience vis-a-vis
incremental security/privacy. since a large part of lack of security
involves compromised PC ... just having a software-based public/private
key operation doesn't provide a whole lot (witness that a majority of
spam in the world originates from compromised PCs ... that have been
organized in botnets).

a decade ago there were efforts to introduce hardware tokens (supporting
public/private key business processes) into the personal computer
environment as part of countermeasure to compromised PCs. some of that
was with respect to the EU FINREAD standard ... misc. past posts
http://www.garlic.com/~lynn/subintegrity.html#finread

Nico Kadel-Garcia <nkadel@gmail.com> writes:
But it's still public key encryption. Please don't call the
technology, itself, a business practice. That can confuse people who
read it: you remain the *only one* I've ever seen who calls public key
encryption, itself, a business practice, and your own citations of
your own writing seem to agree with my point that key handling is an
important practice, but do not refer to the technology itself as a
business practice. I've no idea why you did so: please stop.

the technology is asymmetric cryptography with a pair of keys ... what
one key encodes, the other key decodes.

the business process is publishing one key of the key-pair ... and
keeping the other key of the key-pair confidential and never divulging
it. by definition ... the attributes "public key" and "private key"
refer to the business processes of key handling ... aka the very
attributes "public" and "private" refer to the key handling business
process ... not to the asymmetric cryptography technology.

if you use the term "public" ... by defintion, you are not referring to
the cryptography technology ... you are referring to the business
process key handling.

the technology is asymmetric cryptography technology that deals with
encryption/decryption.

using the terms "public" and/or "private" ... moves past talking about
the asymmetric cryptography technology ... and are referring to key
handling business processes.

by definition the terms "public" and/or "private" refers to the business
process of handling the keys .... and has moved past the basic
asymmetric cryptography technology.

asymmetric cryptography refers to the technologies of cryptography,
encryption, etc.

public key, private key, etc ... refers to the key handling business
processes.

as in the reference to cognitive dissonance and/or semantic confusion
with regard to the term "digital signature" .. recent post:
http://www.garlic.com/~lynn/2008p.html#79 PIN entry on digital signature + extra token

... there may be similar cognitive dissonance and/or semantic confusion
with the term "public key cryptography".

"public key" refers to the key handling business processes.

asymmetric cryptography refers to the cryptography technology.

another similar cognitive dissonance and/or semantic confusion occurs
when CA is used for "certificate authority" ... when in fact, CA refers
to "certification authority" ... and a "certification authority" issues
certificates which are representation of some certification business
process.

I've gotten (similar) jabs for continuing to be about the only person
that insists on the semantically correct "certification
authority" ... as opposed to the more popular "certificate authority".
The popular use tends to obscure the fact that certificates are
representations of some certification business process ... possibly
allowing certificates to actually be meaningless and fail to represent
anything.

GPG

Nico Kadel-Garcia <nkadel@gmail.com> writes:
But it's still public key encryption. Please don't call the
technology, itself, a business practice. That can confuse people who
read it: you remain the *only one* I've ever seen who calls public key
encryption, itself, a business practice, and your own citations of
your own writing seem to agree with my point that key handling is an
important practice, but do not refer to the technology itself as a
business practice. I've no idea why you did so: please stop.

from above:
asymmetric cryptography is technology where there are a pair of keys
... what one key encodes, the other key decodes. this is in contrast to
symmetric key technology where the same key is used to both encode &
decode.

public key is a business process where one of the key pair is
designated "public" and is freely published. the other key is kept
private & confidential and never divulged.

... snip ...

I referred to "asymmetric cryptography" as technology ... and i referred
to "public key" as (key handling) business processes.

I didn't use the term "public key encryption" (except when quoting some
other use) ... as means of clearly differentiating the "asymmetric
cryptography" technology and the "public key" (key handling) business
processes.

Part of this is trying to avoid the cognitive dissonance &/or semantic
confusion ... i clearly differentiated "asymmetric cryptography"
technology and "public key" (key handling) business processes.

GPG

Nico Kadel-Garcia <nkadel@gmail.com> writes:
But it's still public key encryption. Please don't call the
technology, itself, a business practice. That can confuse people who
read it: you remain the *only one* I've ever seen who calls public key
encryption, itself, a business practice, and your own citations of
your own writing seem to agree with my point that key handling is an
important practice, but do not refer to the technology itself as a
business practice. I've no idea why you did so: please stop.

I would assume, for somebody using the term "public key encryption"
... which i try to avoid (to minimize the semantic confusion),
... they would semantically be referring to both "asymmetric
key encryption" technology as well as "public(/private) key" handling
business processes

... since the use of the word encryption implies the technology and
the word public implies the key handling business processes.

GPG

Nico Kadel-Garcia <nkadel@gmail.com> writes:
But it's still public key encryption. Please don't call the
technology, itself, a business practice. That can confuse people who
read it: you remain the *only one* I've ever seen who calls public key
encryption, itself, a business practice, and your own citations of
your own writing seem to agree with my point that key handling is an
important practice, but do not refer to the technology itself as a
business practice. I've no idea why you did so: please stop.

the use of the term "asymmetric" was somewhat chosen to differentiate
from "symmetric key encryption".

there are times when "symmetric key encryption" is referred to as
"secret key encryption" ... referring to both the key handling business
process (keeping the key secret) as well as the symmetric key encryption
technology ... somewhat analogous when "public key encryption" is used
to refer to both the key handling business process and the asymmetric
key encryption.

in the case of the public key handling business process ... "public"
refers to the business process handling of one of the key-pair being
made public. the other key (of the key-pair) is kept secret ... but is
called "private" key ... to both differentiate it from the key handling
in symmetric key encryption ... and to have a semantic connotation that
is more the opposite of "public".

in symmetric key encryption ... for "communication", the "secret" key
becomes a shared-secret ... since both ends of the communication have
to share the same "secret" key.

in most asymmetric key encryption implementations involving
"communication" ... like SSL ... a shared-secret symmetric key is
normally used (for efficiency purposes) .... but it is generated at
random. it becomes a random/temporary session key ... that is used to
encode the communication ... and then that session/secret key is encoded
with the recipient's "public key" (and the both the encoded
communication and the encoded secret key are transmitted together).

the recipient then decodes the secret key (using their private key) and
then uses the temporary/session (shared-secret) key to decode the actual
message.
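
a minimal sketch of that hybrid pattern (purely illustrative ... real
SSL/TLS key agreement involves considerably more): a random session key
encodes the message, the recipient's public key encodes the session key,
both are transmitted together, and the recipient reverses the two steps:

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)
  recipient_private = rsa.generate_private_key(public_exponent=65537,
                                               key_size=2048)
  recipient_public = recipient_private.public_key()

  # sender: random/temporary session key encodes the message ...
  # the recipient's public key encodes the session key
  session_key = Fernet.generate_key()
  encoded_message = Fernet(session_key).encrypt(b"order details ...")
  encoded_key = recipient_public.encrypt(session_key, oaep)
  # (encoded_key, encoded_message) are transmitted together

  # recipient: private key decodes the session key,
  # session key decodes the message
  session_key = recipient_private.decrypt(encoded_key, oaep)
  print(Fernet(session_key).decrypt(encoded_message))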

the business process characteristic of not having to "share" a private
key ... is also an enabler for the "digital signature" business process.

the connotation of the "public key" handling business process ... there
is the implication of being shared ... while the connotation of the
"private key" business processes carries the implication of never being
shared ... which further differentiates it from a shared-secret
business process key handling that is found in various uses of symmetric
key encryption (futher differentiating symmetric key encryption and
asymmetric key encryption technologies).

shared-secret handling of symmetric key encryption carries some
of the same vulnerabilities involved with shared-secret ("something you know") authentication. the "never shared"
implication of the "private key" handling business process has also
been leveraged for improved authentication as part of "digital
signature" authentication business process.

GPG

Tim Greer <tim@burlyhost.com> writes:
I don't understand, the poster asked who refers to it this way, besides
you. You then went on to post 100 or so links to your own site where
you yourself referred to it as such?

GPG

Tim Greer <tim@burlyhost.com> writes:
This is what I read the poster say:

"I don't see them describing the PGP/GPG public/private key technology
as a business practice. Really, I see a lot of mentions about using
them *in* business practices, but I've never seen the technology itself
referred to as one. Or am I missing something in your references?
Seriously, you're the only one I've seen do so, and even your own
references above to your own conversations don't seem to do this."

So, perhaps the wording was confusing? I really don't mind anyway, just
thought it was odd.

the claim, apparently, is that I made reference to "public key encryption" as a business process.

but as repeatedly quoted (and can be clearly seen in the original post)
and in the above references ... i clearly differentiated between
"asymmetric cryptography" as a technology and "public key" as a business
process (involving the key handling business process).

in the original post, i never used the term "public key encryption"
... although in later explanations ... i contend that "public key
encryption" would tend to imply combined reference to both the
(asymmetric) encryption technology and the (public) key handling
business process.

the analogy is the use of "symmetric key encryption" (referenced in the
original post) as being technology (and the use of "asymmetric" to
differentiate from "symmetric").

if the term "secret key encryption" were to be used, it would tend to
combine references to both the "symmetric key encryption" technology and
the "secret key" key handling business process.

hawk writes:
And as for GM, Ford, and Chrysler, they should probably be split into
many pieces if some kind of bailout occurs. Standardize engine &
transmission mounts so that the final assemblers can buy engines from
multiple sources. Some will survive, some will die. The UAW will also
be a necessary casualty; it just isn't possible to pay their wages and
compete internationally. UAW wages are a leftover from the US being the
only industrial power to survive WWII. That left the US automotive
industry with a cartel, and unions were able to claim part of the cartel
profits. That just isn't the case any more.

i remember a (wash post?) article from the 80s calling for 100% unearned
profit tax on us automobile industry. supposedly import quotas were
designed to remove (price) pressure on the industry allowing them
significantly increased profits that would be used to remake
themselves. instead the money was squandered on salaries and
benefits. w/o the competition from foreign imports ... this allowed them
to approx. double the price in short period of time (for enormously
increased profits). this had the side-effect that auto price as a
multiple of avg. salary went way up ... requiring a change from 2-3yr
loans to 5-6yr loans. side-effect: loans could outlast the poorer
quality autos.

part of foreign autos dealing with import quotas ... was they learned to
efficiently manufacture in the US

'Dumbest People' Industry Image May Cost Wagoner Job
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=ap8pS2oslvn0&refer=home
a couple quotes from above:
"There's the feeling that next to financial services, automotive execs
are the dumbest people in the world"

"It's pretty clear that management has made some pretty bad decisions
over the last 20 years"

"Toyota generated pretax profit of $922 per vehicle on North American
sales in 2007, while GM lost $729"

large numbers of loans were packaged up to somewhat resemble securities
and sold. the technique was used two decades ago during the S&L crisis
to unload questionable loans (obfuscating the underlying value).

unregulated mortgage originators used them to fund their operation
(regulated financial institutions had funded mortgages out of deposits
and kept them on their books). one of the side-effects was that the
mortgage originators could (also) unload mortgages off their books
... eliminating any incentive to pay attention to mortgage quality.

this was further aggravated by being able to get triple-A rating on the
CDOs (greatly expanding the institutions that would buy the instruments
and further obfuscating underlying value). recent congressional
testimony claimed that both the mortgage originators and the rating
agencies knew the toxic CDOs weren't worth triple-A rating ... but the
rating agencies were being paid to give them triple-A ratings anyway
(the word "fraud" was periodically mentioned). when it all started to
unravel ... this contributed to loss of confidence in ratings and
freezing up some of the markets where investors were dependent on
ratings (earlier this yr, warren buffett stepped in to the muni-bond
market to back muni-bonds as countermeasure to loss of confidence in
ratings).

the huge influx of funds provided by toxic CDOs allowed speculators (in
conjunction with mortgage originators that no longer had to pay
attention to quality) to obtain undocumented, no-down-payment, 1-2%
interest-only ARMs (planning on flipping before the rate adjusted,
possibly 2000% ROI or better) ... basically enabling the home-owner
market to be treated like the unregulated stock market of the 20s. The
huge amount of speculation created an enormous, ugly price pimple/boil
(plot avg. home prices and prices as percent of avg. salary back to the
70s; the current pimple/boil has only partially deflated).

Large number of institutions, retirement funds, etc ... were buying
these triple-A rated toxic CDOs (many that wouldn't have dealt with them
if it hadn't been for the triple-A rating). When it started to unravel,
institutions were getting them off their books for 22 cents on the dollar
(and taking tens of billions in losses).

There were a couple people on CSPAN yesterday explaining how several
gov. operations saw the problem over the past decade and attempted to
take actions to prevent current crisis ... but that large financial
institutions heavily lobbied the current administration to not interfere
in all the activity.

For random other topic drift ... broadcast of a congressional hearing
yesterday on CSPAN ... had treasury undersecretary and a congressman
grilling about money laundering. Apparently the significant relaxing of
regulation enforcement has resulted in enormous amounts of drug money
being laundered through mortgages (large number of mortgages are
obtained and payments are made using drug money ... then property sold).

Anne & Lynn Wheeler <lynn@garlic.com> writes:
For random other topic drift ... broadcast of a congressional hearing
yesterday on CSPAN ... had treasury undersecretary and a congressman
grilling about money laundering. Apparently the significant relaxing of
regulation enforcement has resulted in enormous amounts of drug money
being laundered through mortgages (large number of mortgages are
obtained and payments are made using drug money ... then property sold).

one of the statements made during the hearing was the possibility of
prosecuting the financial institutions involved under RICO
(... including three times actual damages).

RICO has been used to prosecute multiple parties by showing criminal
conspiracy. There have also been suggestions for RICO prosecution of the
mortgage originators and rating agencies involved in giving triple-A
ratings to toxic CDOs (congressional testimony calling it fraud).

pure trivia, this person worked at the science center in the early 70s

one of the major motivations for SSL was perceived weaknesses in the
domain name infrastructure.

the major use of SSL in the world today is part of this thing called
electronic commerce as part of hiding information in the transaction
... x9.59 eliminates the need to hide the information (so eliminates
the major use of SSL).

kkt <kkt@zipcon.net> writes:
I am dubious that they could survive as smaller companies. The trend
in recent decades has been toward larger companies, because only large
companies have the resources to develop new vehicles.

while various regulation enforcement may have been relaxed ... GAO has
been building a database of the increasing number of public company
financial filings that are being restated (in spite of SOX). basically the numbers
are fiddled, the executives take bonuses based on inflated numbers and
then later the numbers may be restated (and the executives don't have to
forfeit the bonuses).

recent post referring to study of (270) US public companies that had
realized that they were being mismanaged because of executive focus on
fiddling quarterly numbers ... and redid the executive bonus plan in an
attempt to remove the motivation (and refocus on corporate health &
vitality):
http://www.garlic.com/~lynn/2008p.html#9 Do you believe a global financial regulation is possible?

... there were some jokes in the 70s about disposable cars ... promoting
getting a new one every 2-3 yrs. after import quotas, in a very short period
of time, price was nearly doubled as percent of avg. salary. that
necessitated doubling period of loans ... which had the downside of
loans lasting longer than some cars.

there have been past discussions mentioning that agility and ability to
rapidly adapt to changing conditions may be more important. large
bureaucracies can encourage large numbers of individuals going through
their processes purely by rote w/o actually needing to understand what
they were doing. actually understanding promotes being able to rapidly
adapt and can have side-effect of improving quality (possibly even
lowering costs).

some quality control is about catching/rejecting defects (which can
increase costs) ... more intelligent quality control is not having
defects in the 1st place (which can reduce costs).

lots of news articles from earlier this year about toyota/gm sales
running neck&neck and that gm might be retaining "top" position ...
those stories tended to avoid the issue that gm was losing money on
every sale:

"Toyota generated pretax profit of $922 per vehicle on North American
sales in 2007, while GM lost $729"

Dave Garland <dave.garland@wizinfo.com> writes:
Much of this management behavior has been exacerbated by Wall Street
(whose competence we are now admiring) and their relentless pressure for
quarterly profits instead of investing for the long haul. I really
don't know what the solution is for this one. Auto management, you can
cut the cords of their golden parachutes and push them out the window.
But what do you do about the financiers?

congressional hearings this morning started out with the chairman of the
committee going into detail about what was specified in the $700b
bailout bill ... and wanting to know why several things, required by the
legislation, weren't being done.

the response seems to net out that the money goes to prop up those
financial institutions and their executives ... which promotes "moral
hazard" (rewarding the worst behavior).

there was also somewhat of a semantic disconnect ... there were lots of
references to the "financial crisis" having been responsible for a variety
of things ... totally obfuscating the fact that the financial
institutions were responsible for the "financial crisis" ... obfuscating
the causes of the "financial crisis".

rather than the executives of the institutions being the cause of the
distress of those institutions as well as the "financial crisis" ...
there is a metamorphosis to the "financial crisis" (abstraction) being the
cause of the distress of those institutions.

Dave Garland <dave.garland@wizinfo.com> writes:
The Industrial Revolution brought the concept of workers as
interchangeable cogs. Owners rented them as cheaply as possible and
worked them until they broke, then threw them away.

There are two solutions to that problem. One is government control.
The other is a "private" solution, unions. I'm pretty sure you don't
like the first. When the day comes that managers have as much concern
for each individual worker as they do for their own personal piece of
the pie, then we won't need unions.

I've periodically repeated the story Boyd used in his briefings about US
entry into WW2. Basically at entry ... there was a need to mobilize
large forces that had little training and/or experience. As a result, the
way to leverage the small pool of experience was to create a very rigid,
top-down, command & control infrastructure. much of the war was
conducted using overwhelming resources to win by attrition ... some
cases with 10:1 resource superiority. One example he used was mass
production of Sherman tanks ... in tank battles with Germans ... there
was almost 10:1 kill ratio (but US could still prevail with enormous
resource advantage ... although there was morale problem with Sherman
crews).

The point of the story in the briefing was his observation that
those young officers (that got their training in how to operate large
organizations) were starting to permeate corporate america management
(with the philosophy that only the people at the very top had any idea
what they were doing). This scenario has also been used to explain the enormous
explosion in the ratio (400:1) of executive compensation to worker
compensation (up significantly from the earlier 20:1 ... and 10:1 in
much of the rest of the world).

as previously pointed out, foreign car companies have managed to
successfully adapt to building in the US:
"Toyota generated pretax profit of $922 per vehicle on North American
sales in 2007, while GM lost $729"

congressional hearings this morning started out with the chairman of the
committee going into detail about what was specified in the $700b
bailout bill ... and wanting to know why several things, required by
the legislation, weren't being done.

the response seems to net out that the money goes to prop up financial
institutions and their executives ... which possibly promotes "moral
hazard" (rewarding bad behavior).

there were also a lot of statements that the financial crisis was
responsible for the distress to the financial institutions. this is in
contrast to a business school article from last spring that made the
statement that 1000 executives are responsible for approx. 80% of the
"current" mess (and it would go a long way to fixing the problems if
the gov. could figure out how they could lose their jobs). so the
situation morphs from
"1000 executives caused most of the distress to financial institutions
resulting in the financial crisis"

to
"financial crisis caused the distress to the financial institutions"

today there are also lots of legislative discussions about bail-outs for
the automobile industry.

one of the issues is that in the wake of import quotas (long ago and
far away), foreign car companies learned how to efficiently
manufacture in the US (which provides some contrast to US
companies). so a couple recent articles:

CDOs were used two decades ago during the S&L crisis to obfuscate the
underlying values (sell off for more than they are worth).

Home owner market used to be semi-regulated with regulated financial
institutions making loans based on deposits. Unregulated mortgage
originators could leverage packaging mortgages as CDOs as source of
funds.

A couple weeks ago in congressional hearings on CDOs, testimony was
that both mortgage originators and rating agencies knew that toxic
CDOs weren't worth triple-A ratings but mortgage originators were
paying the rating agencies to give triple-A ratings to toxic CDOs
anyway ("fraud" was used to describe the activity). Being able to
unload every mortgage (regardless of quality) as triple-A rated toxic
CDO; 1) greatly increased market for toxic CDOs, 2) greatly increased source
of funds for toxic CDOs, and 3) eliminated any motivation to manage loan
quality (source of funds, contributed to greatly increased speculation
in home owner market).

On the institution side buying these (toxic CDO, packaged)
mortgages .... the institutions were 1) playing long/short mismatch
and 2) heavily leveraging. Playing long/short mismatch (alone) has
been known to take down institutions for centuries (in this case, even
if the toxic CDOs had been worth their triple-A ratings).
Comments were that Bear Stearns and Lehman had a marginal chance of
surviving playing long/short mismatch. This was further aggravated
with heavy leverage ... in some cases leveraging capital 40-80 times
in buying triple-A rated toxic CDOs.

Unregulated mortgage originators found a large untapped source of
funds by packaging mortgages as triple-A rated toxic CDOs. Since they
could unload every mortgage they could write w/o regard to quality (as
triple-A rated toxic CDOs) ... the question is what kind of mortgages
had previously seen little activity. In the past, there was a limited source of funds
for writing low-quality mortgages. With triple-A rated toxic CDOs,
funds for this market became almost unlimited. This nearly unlimited
source of funds became very attractive for speculators;
no-documentation, no-downpayment, 1 percent, interest-only ARMs could
be leveraged for 2000% or better ROI (planning on flipping the
property before the rate reset).

Subprime had originally been targeted at 1st time, low-income home
buyers. However, speculators could leverage "sub-prime" all across the
home-owner market. The speculation, in addition to greatly inflating
home prices, made it appear like demand was much larger than it
actually was. As a result, construction companies took out loans to
build large number of additional houses & stripmalls for the apparent
big upswing in demand (anticipating they would sell the houses &
stripmalls and pay off the loans). Companies that supplied material
for building, took out loans to stock the additional supplies. Cities
& towns sold bonds to build all the infrastructure services for all
the new housing projects (anticipating all the additional real estate
taxes when the properties sold ... would fund the bonds).

When the speculation bubble burst, the properties went unsold
... hitting all the construction companies (and their loans), the
building material supply companies (& their loans), and the
municipalities (and their bonds). Bursting of the speculation bubble
then starts to spread throughout much of the economy.

CDOs were used two decades ago during S&L crisis to obfuscate
underlying value and sell for more than they were worth.

Congressional hearings a couple weeks ago looked at toxic CDOs
getting triple-A ratings. Testimony was that both mortgage originators
and rating agencies knew that the toxic CDOs weren't worth
triple-A rating ... but the mortgage originators were paying for the
triple-A ratings. The word "fraud" was periodicly used. This
enormously increased the market for these instruments (and the source
of funds).

On the institution side buying all these triple-A rated toxic CDOs
... there was questionable behavior ... they were playing both 1)
long/short mismatch ... which has been known for centuries to take
down institutions and 2) capital leveraged 40-80 times buying triple-A
rated toxic CDOs.

All of the individual characteristics had been around before the
triple-A ratings ... but the availability of funds was severely
limited. Getting the triple-A ratings on toxic CDOs contributed
to all the isolated hotbeds of greed and corruption turning into a
firestorm.

In the past, there was a market inhibitor to some of these types of
products, with merchants expecting a lower discount rate because fraud
was lower and financial institutions wanting to charge more for
"safer" products.

a couple quotes from above:
"There's the feeling that next to financial services, automotive execs
are the dumbest people in the world"

"It's pretty clear that management has made some pretty bad decisions
over the last 20 years"

"Toyota generated pretax profit of $922 per vehicle on North American
sales in 2007, while GM lost $729"

... snip ...

possibly 30yrs or more?

there was an article (possibly washington post?) 25-30 yrs ago suggesting a
100% unearned profit tax on the US auto industry ... in the wake of
some prior gov. support programs ... where billions were supposed to
have gone to remaking the industry ... and it was never spent that
way.

At least since the import quotas, there have been a number of studies
about how the industry can be more agile and efficient. I attended
some of the C4 meetings, circa 1990 ... which was looking at heavily
leveraging IT technologies to improve agility (radically reduce
elapsed time from inception to rolling off the line) and
efficiency. turns out there is also some relationship between quality,
agility, and efficiency ... since all tend to improve when there is
better understanding of all aspects.

oh ... from a long running thread on the subject in 2000 ... one of my
particularly long-winded posts ... including several gov. & industry
URL references on the subject (several have since gone 404, so I've had
to resort to the wayback machine)
http://www.garlic.com/~lynn/2000f.html#43

for another facet regarding the problems ... there were a number of
articles in the 90s related to the downward spiral of the US education
system.

One was that foreign auto makers (establishing plants in the US) were
requiring junior college degrees in order to get workers with high
school education.

From 1990 census information ... there were articles that half of US
manufacturing workers were "subsidized" (i.e. worker benefits exceeded
the value of their work) and half of 18 yr olds were functionally
illiterate. There were calculations at the time ... assuming trends
continued ... that by 2020 ... only 3 percent of US workers would NOT
be subsidized (i.e. value of work at least equivalent to benefits
received).

Newsgroups dying?

Mensanator <mensanator@aol.com> writes:
So, you were in the oil business? Making windfall profits

one of the suggestions that i recently heard ... was for the oil
industry to use its windfall profits to bail out the US car builders
.... since a big source of those profits were generated by those
automobiles.

from above:
Intel's very recently announced Core i7, the seventh iteration of its
Pentium technology using the Nehalem micro-architecture, has four cores
each running two threads. Stick that in a 4-socket server and you have
Hyper-V heaven: 16 CPUs and 32 threads and say 5 VMs/core giving us 80
VMs in one server.

... snip ...

and ...

Research on hypervisors for massive multicore systems
http://www.hipeac.net/node/2157
Is Virtualization the Future of Supercomputing?
http://itmanagement.earthweb.com/netsys/article.php/3786321/Is+Virtualization+the+Future+of+Supercomputing?.htm

long-winded, decade-old post discussing some of the current
problems. also mentions that citigroup in the S&L crisis was almost
taken down by adjustable rate mortgages and needed a private bailout to
continue functioning:
http://www.garlic.com/~lynn/aepay3.htm#riskm

from above:
At an investor presentation in May, Citigroup Inc. Chief Executive
Officer Vikram Pandit said shrinking the bank's $2.2 trillion balance
sheet....was a cornerstone of his turnaround plan. Nowhere mentioned
in the accompanying 66-page handout were the additional $1.1 trillion
of assets that New York-based Citigroup keeps off its books...

... snip ...

a pbs program noted that citigroup was the major player in the repeal of
Glass-Steagall, which had been passed in the aftermath of the '29 crash to
keep the unregulated, risky investment banking separate from
safety&soundness of regulated financial institutions:

my past comments were that RISC appeared to swing the pendulum to the
opposite extreme from the (failed) future system project (where lots &
lots of stuff was being pushed down into the hardware). misc. past posts
mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

compiler technology (pl.8) and monitor technology were to more than
offset the added complexity moved out of the hardware (compared to
FS). for instance, RISC didn't have any hardware protection domains,
pl.8 compiler would only generate correct code ... and the system loader
would only load (correct) pl.8 programs. This had side-effect of
eliminating overhead of making kernel calls ... since things would be
performed either with inline code and/or direct library calls.

later, some of advanced pl.8 programming technology started to permeate
some of the other language compilers.

my wife was recently reminiscing about badgering/lobbying to get
transferred to FS project (because it was dealing with all the latest,
new, really neat ideas). FS had a number of different "executives"
responsible for specific areas ... and she eventually got assigned to
reporting directly to one of the area executives. She was commenting
about going through the documents line-by-line ... and there would be
architecture references to some other area of the machine ... and
going to that area description and not finding anything. She made
herself something of a pest complaining about whole missing areas of
machine specification.

Blinkenlights

jmfbahciv <jmfbahciv@aol> writes:
Now double-check the people appointed in the treasury department
and the Fed Reserve Board and the regional boards and see who
were from Citi-bank. Note the years they were appointed and
correlate with mess events. I was doing this with Sandy Weill
and a couple other people in the 90s.

a business TV news show was just now discussing some number of the current
problems ... including citibank. the comment was that citibank wasn't worth
anything but it was too big to allow to fail (as opposed to FDIC
liquidating stuff, stock disappears and resources parceled out to other
banks).

TOPS-10

Anne & Lynn Wheeler <lynn@garlic.com> writes:
my past comments were that RISC appeared to swing the pendulum to the
opposite extreme from the (failed) future system project (where lots &
lots of stuff was being pushed down into the hardware). misc. past posts
mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

TOPS-10

Peter Flass <Peter_Flass@Yahoo.com> writes:
Only if you don't program in assembler. Also the VAX "CISC"
instruction set may have been complex, but it was well-thought out and
orthogonal. It's easy to pick the exact instruction to do what you
need, no unnecessary loads and stores, no work-arounds for missing
instructions, no need to insert delay slots in the pipeline.

In a discussion of the relative difficulty of pipelining CISC
instructions, Mashey outlines the many stages in the execution of the
complex VAX instruction "ADDL @(R1)+, @(R1)+, @(R2)+". In summary, his
observations were as follows:

TOPS-10

timcaffrey@aol.com (Tim McCaffrey) writes:
Funny thing is, Burroughs Large Systems (now Unisys Libra series) used a
similar approach (trusted code, no overhead kernel calls, etc), 10 years
before IBM even started the FS project. I guess Honeywell did something
similar with Multics(?).

from above:
The GE-635 had two privilege modes: Master Mode (privileged) and Slave
Mode (unprivileged). Master Mode programs could reference all of main
memory (in "absolute" mode)

... snip ...

and:
In the early 1970s, Multics was ported to the Honeywell 6180, a
Honeywell 6080 with Multics segmentation, paging, and ring hardware
added. The segmentation and paging hardware was very similar to the 645
implementation (we missed a major opportunity to enlarge segments!), but
the ring hardware was all new.

... snip ...

there were also recent discussions here in a.f.c. about ring/domain
protections and equivalence between multics and current (x86) linux.

as an aside, i don't believe early monitors (ibsys, 709, 7090, etc) used
hardware protection ... just directly invoked library calls ... but then
there was also no concurrent operations by different programs.

romp was originally going to be a "traditional" 801 for an OPD
displaywriter "follow-on" (pl.8 programming language, cp.r monitor,
etc). when that effort got killed, there was a search to find an
alternative ... and the unix workstation market was settled on. this
required (at least) retrofitting a hardware protection domain to the
romp chip ... for what eventually became the pc/rt and aix. the vendor
that had done the port of AT&T unix for PC/IX was contracted to do a
similar port for the pc/rt for what was called aix, misc. past 801 posts
http://www.garlic.com/~lynn/subtopic.html#801

independent of the vendor doing AT&T unix port to pc/rt (for what was
called aix), the corporate academic business unit also did a port of BSD
to the pc/rt that was called "AOS".

there was a separate evolution for the corporate MVS batch system
because of the long history of the API pointer-passing paradigm ... from
the days of a single real-storage address space. Initial migration of
os/360 MVT to os/vs2 svs effectively laid an MVT system out in a single
16mbyte virtual address space ... with a little bit of virtual address
management crafted on the side and (in the original prototypes) the i/o
channel program translation routine (CCWTRANS) borrowed from cp67.
channel programs ran with "real" addresses, but there was a long
heritage of the application space building the channel program and
passing a pointer to the program ... with applications now running in
virtual address spaces, all the channel program addresses had to be
translated ... by first creating a copy of the original channel program
and replacing all the virtual addresses with real ones.
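
a greatly simplified sketch (python, purely illustrative ... not the
actual CCWTRANS code) of the shadow-copy idea: walk the application's
channel program and build a copy with each virtual data address replaced
by the corresponding real address. the real routine also had to pin the
pages, split transfers crossing page boundaries, follow TICs and data
chaining, etc. ... all omitted here:

  PAGE_SIZE = 4096

  def translate_channel_program(ccws, page_table):
      """ccws: list of (command, virtual_addr, count) tuples.
      page_table: maps virtual page number -> real page number."""
      shadow = []
      for cmd, vaddr, count in ccws:
          vpage, offset = divmod(vaddr, PAGE_SIZE)
          raddr = page_table[vpage] * PAGE_SIZE + offset  # page assumed resident
          shadow.append((cmd, raddr, count))   # shadow copy with real address
      return shadow

the original channel program is left untouched; the i/o is started
against the shadow copy (with real addresses).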

In the migration from SVS (single virtual address space) to MVS
(multiple virtual address spaces) ... all applications got their own
virtual address space ... or half of one ... since an image of the
kernel appeared in half of every virtual address space.

the problem became a lot of "subsystem" services that previously ran
outside of the kernel ... but were called by applications. subsystems
were now in their own separate virtual address space ... but being
called by pointer-passing APIs ... which required a pass thru the kernel
in each direction to swap address space pointers and a kludge to address
the parameters.

"dual-address" space mode was introduced in the 3033 ... which still
required pass thru the kernel to swap between application virtual
address space and subsystem virtual address space ... but subsystem
could use alternate address space addressing to directly access
application parameters.

this has since been generalized with a hardware table that defines the
rules for swapping virtual address spaces and new "call" & "return"
hardware instructions (eliminating the pathlength overhead of kernel
calls).

possibly the biggest has already occurred, the word was repeatedly
used in congressional hearings a couple weeks ago to describe the
mortgage originators paying the rating companies to give triple-A
ratings to toxic CDOs (when both knew that the toxic CDOs weren't
worth triple-A ratings). about that time there was a representative from
one of the rating companies on one of the business TV news shows to
discuss downgrading of some company ... and the host kept trying to
get the representative to take credit for the whole crisis.

DASDBill2@AOL.COM (, IBM Mainframe Discussion List) writes:
The way z/OS itself attaches a device to a particular channel is with the
Modify Subchannel (MSCH) instruction. This instruction causes a real link to
be built between a device-unique control block in the Hardware Storage Area
(HSA) and a channel path-unique control block in the HSA. However, this
requires that both the device and channel path be real.

Gerhard's answer reminded me of what VM does. It controls real devices and
real channel paths. It intercepts all privileged instructions. When a guest
machine is IPLed under control of VM, VM intercepts all the instructions
that the guest machine is doing to initialize its use of devices and channels.
VM simulates the way the MSCH instruction works. VM also intercepts all
SSCHs that occur after the virtual IPL, and knows which real device corresponds
to the virtual (guest) machine's virtual device.

starting with cp40 circa 1966 (custom modified 360/40 with virtual
memory, which morphed into cp67 when the standard 360/67 with virtual
memory became available ... and then morphed into vm370 when virtual
memory became available on all 370s), the control program takes
interrupts by owning the real page
zero ... and the virtual machines run in a virtual address space
... where they have a virtual machine, virtual page zero (that isn't
real page zero). it runs the virtual machine in problem mode so that it
intercepts all supervisor instructions ... including start i/o (x'9c').

hasp intercepted excp entry in order to take over all requests to
("psuedo") unit record devices.

nsc/hyperchannel did something similar for MVS in the 80s ... for
"channel" connected devices ... that were actually connected to (remote)
hyperchannel device adapter. the devices appeared to be a local channel
attached controller/device. there was a hyperchannel a22x adapter
connected to a real mainframe channel. at the remote end there was an a51x
device adapter that simulated the real mainframe channel ... (that
controllers attached to). the intercept would make a "shadow" of the
real channel program which is downloaded to the memory of the a51x
device adapter (simulating mainframe channel).

the device i/o actually went thru the channel attached a22x ... to the
simulated channel a51x adapter to the controller (to the device). the
interrupt came back at the a22x, which had to be fielded, and then a
simulated interrupt was generated for the pseudo device (actually
connected remotely to the a51x).

in the HASP scenario ... the pseudo unit record device was simulated
with disk spooled operations. in the nsc/hyperchannel scenario, the i/o
was actually executed on an a51x adapter simulating a mainframe channel.

I had originally done support in 1980 for the STL lab ... as source
changes to VM370 ... as part of STL lab filling up and needing to move
300 people from the IMS group to offsite bldg. They had looked at remote
3270s but the performance & human factors were totally unacceptable. The
alternative was to remote 300 "local" 3270s at the remote site via
nsc/hyperchannel over a T1 link. while T1 is only about 150kbytes/sec
and 3274 is 600kbytes/sec ... it was close enough that there wasn't a
noticeable slow-down. In fact, because of certain other issues, overall performance
actually improved.

allowing me to directly release the software as a product was vetoed
... so nsc had to redo it from scratch ... including an mvs
flavor w/o requiring source changes.

there was a glitch, i had chosen to simulate channel check when i got
an unrecoverable error on T1 link. This would push the recovery/retry
up into operating system error recovery. i got a call from 3090
product manager after 3090 had been in customer shops for a year. They
found that an unexpected number of channel checks had been recorded for
3090 product line ... which was traced back to a some customers
running nsc/hyperchannel remote device support (both vm & mvs). I
did some analysis and decided that IFCC (interface control check)
simulation would result in effectively the same retry sequence ... and
then asked nsc if they would change the implementation to simulate
"ifcc" instead of "cc".

the STL lab implementation was duplicated in boulder for the ims field
support group there ... when they were moved to bldg. on the other
side of busy road. for the boulder implementation, T1 infrared modems
mounted on the roofs of the two bldgs was used. there was concern that
fog and/or rain/snow storms might interfere with the infrared
signal. Turns out, there was some signal degradation recorded during
a white-out snow storm when nobody was able to get into work.

it was being invited to an nsc/hyperchannel meeting at stl that sucked
me into rewriting the whole software. at the time, nsc had already done
some software that was purported to work ... but i had to totally redo
it from scratch. then when i wasn't allowed to release it as a product
... nsc basically redid their software from scratch based on what i had
done for stl (and boulder).

Startio Question

Jo.Skip.Robinson@SCE.COM (Skip Robinson) writes:
At some point in the late 90s, NSC got sold to CNT, its former arch rival
whose core technology was entirely different. It looks as if CNT filed this
patent for RDS technology that had been commercially available for 15 years
from a different vendor. Like I said, intriguing...

Startio Question

DASDBill2@AOL.COM (, IBM Mainframe Discussion List) writes:
There are typically many ways to intercept certain events inside z/OS.
Causing a program interrupt with an invalid SCHID and front-ending the program
FLIH is just one of several ways to do it. There is much less overhead in this
method than if you intercept all STARTIO macros and all I/O interrupts,
since only I/O events occurring on one particular device will cause the intercept
to occur.

in the nsc/hyperchannel scenario ... the start i/o was intercepted ...
the channel program translated ... then a hyperchannel channel program
built to download the translated channel program to the (remote) device
adapter (simulating mainframe channel) ... this could be "chained" to a
channel program that activated the downloaded channel program ... i.e.
the download of the channel program and the execution of the downloaded
channel program could be done either as single startio ... or as
separate startios. if done as separate startios ... there would be
separate i/o interrupts ... one for the hyperchannel download of the
device channel program ... and one for the execution of the device
channel program. the interrupt associated with the execution of the
device channel program (at the remote emulated channel) would have to be
processed and then converted to simulate an interrupt for a physically,
locally attached device.

this is sort-of what got me into trouble later (in the mid-80s) with the
3090 product manager ... having chosen to simulate various kinds of
hyperchannel & telco transmission errors as "channel check" ... and
being talked into substituting "interface control check" instead.

the reference to the NCAR nas/san implementation from the mid-80s, had
a request/message coming over hyperchannel from one of the
supercomputers to the ibm mainframe. the ibm mainframe would do
various things and then download a disk channel program into memory of
the (hyperchannel) device adapter (simulating mainframe channel). the
ibm mainframe would then respond (over hyperchannel) to the
supercomputer with pointer/handle of the specific channel program.
The supercomputer then would directly activate the disk channel
program in the device adapter. the ibm mainframe basically acted as
very sophisticated disk controller .... but the data flow went
directly between the disk to the supercomputers (w/o having to pass
through the ibm mainframe, aka the ibm mainframe provided the control
functions w/o being in the actual data flow).
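
a toy sketch (no resemblance to the real hyperchannel protocol details)
of that separation of control from data flow: the "mainframe" side only
stages a channel program in (simulated) adapter memory and hands back a
handle ... the requester then drives the transfer directly, without the
data ever passing through the mainframe:

  staged = {}                     # device-adapter memory (simulated)

  def mainframe_prepare(dataset, blocks):
      """control path: stage a 'channel program', return a handle."""
      handle = len(staged)
      staged[handle] = [(dataset, b) for b in blocks]
      return handle

  def supercomputer_read(handle, disk):
      """data path: activate the staged program directly at the adapter."""
      return [disk[loc] for loc in staged[handle]]

  disk = {("results", 0): b"block0", ("results", 1): b"block1"}
  h = mainframe_prepare("results", [0, 1])
  print(supercomputer_read(h, disk))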

later as part of the HiPPI & IPI3 standardization work (circa 1990)
... there was an effort to have the HiPPI switch supporting "3rd party
transfers" ... as a way of migrating the NCAR nas/san implementation off
hyperchannel (and ibm mainframe) to HiPPI and IPI disks.

Universities eventually developed something similar to TOOLSRUN called
LISTSERV (which was purely mailing list oriented) ... and several
mailing list computer conferences were spawned.

As might be expected, several were IBM technology oriented ... one was
IBM-MAIN which survives today (bitnet was eventually subsumed by the
internet) ... and is also gatewayed to USENET news as
bit.listserv.ibm-main. A current reference is
http://listserv.ua.edu/archives/ibm-main.html

TOPS-10

Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
APL had a *huge* character set, including a large part of the Greek
alphabet. So neither ASCII nor EBCDIC could deal with it at all (when
I took a class in APL as a student, we didn't have the spiffy APL
terminals so we had to type out ALPHA, iota, etc -- probably with some
quoting convention I don't remember now -- which managed to convert an
almost unuseably terse language into an almost unuseably verbose
one....).

TOPS-10

Charles Richmond <frizzle@tx.rr.com> writes:
Betting that a hard disk will *not* be filled... has *always*
been a bad bet. :-) It's like money. Most folks have little
trouble spending more than they make.

Once capacity expands, a computer or a hard disk is used for
*new* things that would *not* have been considered before.
I remember a *large* hard disk drive that was attached to the
IBM 370-155 where I went to college. This was the largest
drive there, both physically and memory-capacity wise. It was
about two feet taller than a refrigerator, about as wide, and
almost twice as deep. It held about as much as a medium-sized
SD memory card for a digital camera today! (Somewhere in the
2 gig range.)

on the right of the above picture is a bank of 3330 drives. you can notice
the difference between the 2314 drives & the 3330 drives. The 2314 had a
manual "handle" for opening the disk drive drawer (blue color to the right
of the drive drawer plexiglass window). The 3330 drive drawers didn't have
a manual drawer-opening handle ... instead there were electric switches in
the area at the top of each pair of drives.

a standard bank of 2314s had 9 physical drive drawers (155 picture)
... with eight "addressing" plugs, i.e. eight drives addressable at any
time, plus a ninth "spare" drawer. If there was a request to mount a disk
pack, it could be loaded in the spare drawer and powered up. when the drive
was up to speed and ready ... the "address" plug could be removed from one
of the other drives and placed in the drive with the newly mounted pack.
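
the drive/plug shuffle can be modeled in a few lines (toy model,
illustrative only):

    # nine drawers, eight address plugs; a new pack spins up in the spare
    # drawer, then an address plug is moved over from the drive being
    # retired (which becomes the new spare).

    drawers = [{"pack": f"PACK{i}", "plug": i} for i in range(8)]
    drawers.append({"pack": None, "plug": None})      # ninth "spare" drawer

    def mount(new_pack, retire_pack):
        spare = next(d for d in drawers if d["plug"] is None)
        spare["pack"] = new_pack                      # load & power up spare
        old = next(d for d in drawers if d["pack"] == retire_pack)
        spare["plug"], old["plug"] = old["plug"], None   # move address plug

    mount("PAYROLL", "PACK3")
    print([(d["pack"], d["plug"]) for d in drawers])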

one of the issues with the 3380 compared to the 3330 was that the capacity
increased significantly, by a much larger factor than the performance
increased. if a customer datacenter replaced a (larger number of) 3330s
with 3380s for the same aggregate capacity ... they would tend to see
overall decreased performance. we had a data usage monitoring and
simulation tool that would create a plan for re-organizing disk allocation,
i.e. where data was physically placed, to "balance" activity across
available drives (which could include a specification limiting the amount
of "active" data per drive ... with the rest of the drive left empty or
filled with rarely accessed data).
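
a minimal sketch of the kind of plan such a tool could produce (greedy
placement by activity; purely illustrative, nothing to do with the actual
monitoring/simulator code):

    # given per-dataset activity & size, place the busiest datasets first,
    # always onto the currently least-busy drive that still has room under
    # the "active data per drive" limit.

    import heapq

    def balance(datasets, drives, max_active=None):
        plan = {d: [] for d in drives}
        load = [(0, 0, d) for d in drives]   # (activity, active_mb, drive)
        heapq.heapify(load)
        by_activity = sorted(datasets.items(), key=lambda kv: -kv[1][0])
        for name, (act, size) in by_activity:
            parked = []
            while True:
                a, used, d = heapq.heappop(load)
                if max_active is None or used + size <= max_active:
                    break
                parked.append((a, used, d))  # drive already full of active data
            plan[d].append(name)
            heapq.heappush(load, (a + act, used + size, d))
            for item in parked:
                heapq.heappush(load, item)
        return plan

    if __name__ == "__main__":
        usage = {"A": (900, 200), "B": (850, 300),
                 "C": (100, 50),  "D": (90, 400)}   # name: (activity, mbytes)
        print(balance(usage, ["3380-1", "3380-2"], max_active=600))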

i had been making the statement that the relative system thruput of disks
had declined by an order of magnitude over a period of yrs. this got the
disk division executives upset and the division's performance group was
tasked to refute my statements. after several weeks, they came back with
the position that i had slightly understated the issue(/problem).
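
the arithmetic behind that kind of statement goes roughly as follows (the
numbers here are illustrative only, not the figures from the disk division
study):

    # if processor & real-storage capacity grow by a large factor over some
    # period while disk accesses/sec grow by a much smaller factor, disk
    # thruput *relative* to the rest of the system declines by roughly the
    # ratio of the two growth factors.

    cpu_growth = 50.0    # hypothetical growth in processor thruput
    disk_growth = 4.0    # hypothetical growth in disk accesses/sec

    relative_decline = cpu_growth / disk_growth
    print(f"relative disk thruput declined roughly {relative_decline:.0f}x")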

TOPS-10

"Michael N. LeVine" <mlevinespmfltr@redshift.com> writes:
I cannot remember the model number(s), but in the 70's, Tektronics had
at least one storage tube terminal with APL character set capability.

there was also "3277GA" ... basically a tektronics tube pluged into the
side of a 3277 display terminal. special escape characters would address
the tektronics display ... as opposed to the 3277.

the 3277ga was sort of considered an inexpensive 2250/3250 ... basically a
computer interface operating at channel speeds (a couple hundred
kbytes/sec rather than kbits/sec).

random reference to vsapl & 3277ga (search engine for 3277ga)
... and for some random trivia ... the following references a '84 YKT
research report ... author for a period was my 1st line manager at the
cambridge science center.

Quality Control in GRAFSTAT
http://www.priorartdatabase.com/IPCOM/000148800/

TOPS-10

Charles Richmond <frizzle@tx.rr.com> writes:
I think the large disk I mentioned above was a "third party" disk,
and *not* made by IBM. I'm *not* sure how I could find out now.
If Lee Courtney is reading this, he may remember.

the IBM east coast visit included (mainframe) POK ... an item mentioned
was being told that the 303x "channel director" had the power of a 370/158
cpu. In fact, the 303x "channel director" was a 370/158 engine w/o the
370 instruction microcode: the 370/158 had an "integrated channel" feature,
where the 158 engine ran the 370 instruction microcode as well as the
"integrated channel" microcode; the 3031 was a 370/158 engine w/o the
"integrated channel" microcode and the 303x "channel director" was a
370/158 engine w/o the 370 instruction microcode.

bitnet was an IBM sponsored network for educational institutions,
leveraging the technology used for the IBM internal network

Universities eventually developed something similar to TOOLSRUN called
LISTSERV (which was purely mailing list oriented) ... and several
mailing list computer conferences were spawned.

As might be expected, several were IBM technology oriented ... one was
IBM-MAIN which survives today (bitnet was eventually subsumed by the
internet) ... and is also gatewayed to USENET news as
bit.listserv.ibm-main. A current reference is
http://listserv.ua.edu/archives/ibm-main.html

toolsrun was deployed on the internal network somewhat after an
investigation into this new "computer conferencing" stuff that I was
doing. it supported both a usenet-like operation as well as a listserv-like
operation (predating listserv on bitnet).

(virtual machine) cp67/cms was developed at the science center in the
mid-60s. later that evolved into vm370/cms. during the late 60s and
early 70s some number of commercial timesharing service bureaus were
spawned ... some starting with cp67/cms and others with vm370/cms. one
was TYMSHARE. TYMSHARE developed computer conferencing on their
vm370/cms commercial timesharing service. They offered a "free"
service to the vendor user group organization SHARE:
http://www.share.org/

The Pattern of Engagement in High Value Sales Campaigns

I've mentioned before, in a similar reference in the xing "Greater IBM"
group, that I sponsored Boyd's briefings at IBM in the early 80s.
http://www.garlic.com/~lynn/2008p.html#81 How to Plan a High Value Sales Campaign Using Military Principles

For a recent, similar post in a Boyd blog ... somebody created:

Has Online Advertising Lost Its Schwerpunkt?
http://www.chetrichards.com/c2w/2008/11/21/has-online-advertising-lost-its-schwerpunkt/#more-188

from above:
Ever since Sun Tzu's Art of War, business schools have borrowed
concepts from great military thinkers. Miyamoto Musashi's Book of
Five Rings has been used extensively to direct short and long term
strategies. Patton's brilliant essay, "Secret of Victory" has
guided my own personal business philosophy.

One of these concepts, which I contend is more timely and appropriate
than ever, is the principle of Schwerpunkt, first introduced nearly
200 years ago by the German philosopher, Carl von Clausewitz, in his
brilliant treatise, On War. US military strategist, John Boyd, and his
acolytes helped introduce the concept of Schwerpunkt to the modern US
military.

from above:
Max Burnet has turned his home in the leafy suburbs of Sydney into
arguably Australia's largest private computer museum. Since retiring as
director of Digital Equipment Corporation a decade ago, Burnet has
converted his interest in the computing industry into an invaluable
snapshot of computer history. Every available space from his basement to
the top floor of his two-storey home is covered with relics from the
past. His collection is vast, from a 1920s Julius Totalisator, the first
UNIX PDP-7, a classic DEC PDP-8, the original IBM PC, Apple's Lisa, MITS
Altair 8800, numerous punch cards and over 6000 computer reference
books. And more. He happily opened his doors for CIO to take a look.

TOPS-10

Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
The end result was that it was pretty easy to write an efficient
interpreter for it, and even if it weren't the ratio of "parsing the
line" to "doing all that work" would have been pretty good.

had done the port of apl\360 to (cp67/)cms for cms\apl (and had to do a
lot of stuff for running in a large virtual memory environment ... as
well as adding stuff to access system services, file i/o, etc).

later, the palo alto science center did some more work for what became
apl\cms (by that time on vm370/cms) ... as well as the apl microcode
assist for the 370/145 (for purely apl functions that didn't fill up 145
real storage ... apl on a 370/145 with the apl microcode assist ran as fast
as on a 370/168 w/o the microcode assist ... in some cases 10 times faster).

HONE was a really big operation (after consolidation of the US datacenters
in the mid-70s, they ran a large number of loosely-coupled 370/168 SMP
multiprocessors). there was lots of attention paid to APL operation and
some tests were done comparing the 370/145 (heavily leveraging palo alto
science center expertise) and the 370/168. The microcode assist gave the
145 nearly the same processor performance as the 168 for "small"
applications ... but the HONE environment also relied heavily on the larger
real storage configurations available on 168s (compared to 145s).

Have not the following principles been practically disproven, once and for all, by the current global financial meltdown?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Have not the following principles been practically disproven, once and for all, by the current global financial meltdown?
Date: November 24, 2008
Blog: Risk Management

Congressional hearings a couple weeks ago into the rating agencies
... had the mortgage originators paying the rating agencies for
triple-A ratings on toxic CDOs (even tho they all knew that the toxic
CDOs weren't worth the triple-A ratings). The result was that a huge
market opened up for toxic CDOs among institutions that would otherwise
have never dealt in such risky instruments.

The two sides of those triple-A ratings were that a) an enormous amount
of money became available for risky speculation in the home owner
market (the root of much of the current real estate crisis) and b)
institutions that only dealt in "safe" investments turned out to have
hundreds of billions or even trillions in toxic CDOs on their books.

About the time of the hearings, a representative from one of the
rating agencies was on a TV business news program to discuss the
downrating of some company or another ... the host spent much of the
time trying to get the representative to admit to being responsible
for the whole crisis.

A lot of risk management is dependent on transparency and accurate
information. If the input data is erroneous ... the result is likely
also to be erroneous ... aka GIGO.

Lars Poulsen <lars@beagle-ears.com> writes:
The credit crises was not so much caused by too many people
applying for subprime mortgages, as by two related factors:
- banks overeager to lend to unqualified borrowers in order
to collect loan origination fees, knowing that they would
not be on the hook if the loans failed
- investment banks using a moderate amount of subprime loans
as an excuse to generate a much larger amount of Credit
Default Swaps that would fail when the loans failed.

Congressional hearings a couple weeks ago into the rating agencies
... had the mortgage originators paying the rating agencies for triple-A
ratings on toxic CDOs (even tho they all knew that the toxic CDOs weren't
worth the triple-A ratings; the word "fraud" was used periodically). The
result was that a huge market opened up for toxic CDOs among institutions
that would otherwise have never dealt in such risky instruments.

The two sides of those triple-A ratings were that a) an enormous amount
of money became available for risky speculation in the home owner
market (the root of much of the current real estate crisis) and b)
institutions that only dealt in "safe" investments turned out to have
hundreds of billions or even trillions in toxic CDOs on their books.

About the time of the hearings, a representative from one of the rating
agencies was on a TV business news program to discuss the downrating of
some company or another ... the host spent much of the time trying to
get the representative to admit to being responsible for the whole
crisis.

A lot of risk management is dependent on transparency and accurate
information. If the input data is erroneous ... the result is likely
also to be erroneous ... aka GIGO.

Cramer (tv business host) really excoriated the SEC and the SEC chairman
on his show Friday night, for their part in helping create the current
crisis (removing regulations and/or failing to enforce regulations).

One example he cited was that the (short sale) "uptick" rule was put in
place by people in the 30s with lots of experience with what went wrong in
the frenzy leading up to the crash of '29 (and nobody presently at the SEC
really understands what works and what doesn't).

He also mentioned that all the illegal, "naked" short sales should
completely bypass any SEC involvement and go directly to US attorney
for prosecution.

SOX also required SEC to do something about the rating agencies
... but other than publishing a report in jan2003, nothing appeared to
have been done.

today on a tv business news show a half dozen people were all talking over
each other at the same time. basically the net was that a lot of executives
had taken enormous risks in order to walk away with billions in bonuses
... even if the activity later took down the institution (need to look
at personal motivation as opposed to institutional objectives). last
week there was a brief reference to an article that the NY state attorney
general had sent letters to various financial institutions asking for
information about executive bonuses. there were comments that if the
objective was going to be an attempt to recover the funds ... they were
going to have to prove fraud.

jmfbahciv <jmfbahciv@aol> writes:
PBS showed an interview last Friday night right after that Washington
Week show. The guy interview was one who put together those CDOs for
his Wall Street firm. He knew they were bad risks. He also appeared
very proud of his work and didn't seem at all chagrined that he
helped make a very big mess. I didn't see the whole show, though
(fell asleep).

aka ... a business school article from last spring estimated that approx
1000 executives are responsible for 80% of the current crisis/mess
... and that it would go a long way toward fixing the situation if the
gov. could figure out how they could lose their jobs ... modulo the
enormous bonuses.

There was a comment on a tv business news show yesterday that, with the
big stock market downturn, goldman sachs' equity valuation was just $17B;
the commentator contrasted that with goldman sachs' 2007 bonus pool of $21B.

GAO has been keeping a database of the increasing number of financial
"restatements" by public companies (again, despite SOX). Basically, the
executives fiddle the numbers in order to boost their bonuses. Later,
the financial statements may or may not be restated ... but the bonuses
are not forfeited (enormous risks can be used to really boost executive
bonuses ... with little accountability, whether or not the risks
later take down the institution).

in the past there was periodic hand wringing that the era of enormous
bonuses may be passing ... but that seems to be becoming less popular as
the extent of the damage has become more apparent.

much of the real estate crisis is because of the triple-A ratings on toxic
CDOs ... which enormously expanded the market for such instruments
(including institutions that had mandates/covenants allowing them to deal
only in "safe" investments). this greatly increased the funds available to
unregulated mortgage originators for funding risky speculation (coupled
with being able to immediately unload the mortgages, eliminating any
motivation for unregulated mortgage originators to pay attention to
mortgage quality). the mortgages for risky speculation (akin to unregulated
'20s stock market speculation) led to the home owner market crash.

TOPS-10

Kim Enkovaara <kim.enkovaara@iki.fi> writes:
Whether VCS is a single point of failure or not is dependent on the
system. Usually they are replicated and also backed up. That is the
most important system for a software company. And normally when
migration to a new VCS is done, also the old data is migrated.
Today it is a big exception if the code is not in version control
system.

In distributed version control systems the data and history is
replicated to all users. For example linux source code is on
thousands of systems with history. The ancient history
is factored out to another repository, but it also
exists.

Also the major version control systems have been shipping
with their C source code for a long time. And also the
commercial software very widely ported to different
platforms. And the systems have extensive import/export
facilities to other systems.

the standard process at the science center was to create a production
(cp67) system on tape ... basically it was a bootable kernel image on
tape ... which (when booted) would write the boot image to disk. this
could be done in a virtual machine ... reducing the elapsed time to
switch kernels. when there turned out to be a system glitch ... it
became relatively easy to regress to an earlier system version.

since the boot image left most of the tape empty, fairly early in my
career, I adopted a process that added (behind the boot image) all
files (source, executables, process, shell scripts, etc) necessary to
recreate the boot image.
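
the practice can be sketched as follows (toy model, illustrative only,
not the actual cp67 build process):

    # record 0 of the "tape" is the bootable kernel image, the rest of the
    # tape carries everything needed to recreate that image, so any
    # production tape can both regress the system and rebuild itself.

    def build_production_tape(boot_image, build_files):
        tape = [("BOOT", boot_image)]
        tape += [("FILE", name, data) for name, data in build_files.items()]
        return tape

    def boot_from_tape(tape, disk):
        # booting writes the boot image to disk (doing this first in a
        # virtual machine cuts the elapsed time to switch kernels)
        disk["kernel"] = tape[0][1]

    def files_from_tape(tape):
        return {name: data for tag, name, data in tape[1:] if tag == "FILE"}

    if __name__ == "__main__":
        v2 = build_production_tape(b"kernel v2",
                                   {"source.asm": b"...",
                                    "build.exec": b"..."})
        v1 = build_production_tape(b"kernel v1", {"source.asm": b"..."})
        disk = {}
        boot_from_tape(v2, disk)     # put the new system into production
        boot_from_tape(v1, disk)     # system glitch? regress to earlier tape
        print(disk["kernel"], sorted(files_from_tape(v2)))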

later, in the mid-80s, Melinda Varian was looking for the multilevel
source control system that had been developed at the science center
during the cp67 period. for various reasons ... I had managed to keep
some number of these early cp67 boot image tapes (copying the material
from 800bpi to 6250bpi tapes and later to 3480 cartridges).

as previously mentioned ... not too long after, almaden had a glitch
in their operations where somebody was randomly selecting cartridges to
be mounted for "SCRATCH" requests (which managed to destroy many of my
carefully preserved tapes).

a TV business news show (in real time) said that the housing market
numbers (just in) have housing prices so far reset to 2004 levels. In
the past, I speculated that the reset point (about when the risky
speculation started) is 2001 ... but a big problem with speculation
crashes is that they can take things down past the reset point.

One factor is that heavy speculation creates the impression that demand is
much higher than actually exists, which can result in overproduction. From
the "law" of supply&demand ... a significant oversupply results in a
"buyers" market, contributing to additional downward pressure on prices.

Obama, ACORN, subprimes (Re: Spiders)

jmfbahciv <jmfbahciv@aol> writes:
PBS showed an interview last Friday night right after that Washington
Week show. The guy interview was one who put together those CDOs for
his Wall Street firm. He knew they were bad risks. He also appeared
very proud of his work and didn't seem at all chagrined that he
helped make a very big mess. I didn't see the whole show, though
(fell asleep).

something of a cheap shot ... but a TV business news show this morning had
an item (with snickering going on in the background) that one of the
rating agencies had just gotten around to downgrading from triple-A to
(just) "C" a bunch of mortgage backed securities that had been packaged
in 2006 & 2007 (there were also some snide references to it still going
on).

I remember being on HA/CMP marketing trips to HK in the early 90s and
reading an article about China being at disadvantage for IT
outsourcing compared to India.

IT outsourcing really picked up steam in the late 90s with Y2K
remediation and the Internet bubble occurring at the same time ... there
weren't enough skills & resources in the US to handle both ... so
there was a really big uptick in outsourcing to cover the significant
skill/resource shortfall. It was only after Y2K came & went and the
Internet bubble burst that IT outsourcing started to be considered an issue.

one of the issues was that in the 80s, foreign car makers moving to
build in the US (as a countermeasure to import quotas) ... found they had
to require a junior college degree in order to get workers with (the
equivalent of) a high school education.

there were some articles based on the 1990 census claiming that half of
18 yr olds were functionally illiterate.

more recently there have been studies that US education system ranks
near the bottom of industrial nations.

One of the studies from the 90s found that half of all advanced technical
degree graduates from institutions of higher learning in cal. were
from foreign countries ... and that it wouldn't take a lot to reach a
tipping point where they returned home instead of staying in the
US. It was also this group of highly skilled graduates that helped
make the internet bubble possible in the late '90s.

Some of the institutions that are doing the most IT outsourcing to
places like India are financial institutions. This can be attributed
to the fact that during the days of Y2K remediation ... they were
frequently forced to go overseas in order to find the resources (since
so much of the country had been sucked into the internet bubble).
After those business relationships were established and Y2K
remediation was over with ... it was only natural that those business
relationships continued.

TOPS-10

Rich Alderson <news@alderson.users.panix.com> writes:
That's because the editor of the RFCs refused to publish it any other way.
Believe me, the author was quite serious in his intent, as he told me at the
time (of first submittal, not 1 April).

current convention for "normal" RFCs have just Month/Year publication
date in the header of the RFC (even RFCs published on April 1st)
... while April 1st RFCs are differentiated with Day/Month/Year
publication date in the header of the RFC.

a business TV news show just now was discussing some number of the current
problems ... including citibank. the comment was that citibank wasn't
worth anything but was too big to be allowed to fail (as opposed to the
FDIC liquidating it, with the stock disappearing and the resources parceled
out to other banks).

misc. past posts referencing news stories that citigroup was going
to win the write-down sweepstakes (presumably because citigroup had
the largest balance of toxic CDOs, SIV, off-balance-sheet, etc)

from above:
As part of a rescue agreement with federal regulators, Citigroup will
effectively halt dividend payments for the next three years and will
agree to restrictions on and review of certain executive compensation,
it was announced on Monday. The bank will also put in place the Federal
Deposit Insurance Corporation's loan modification plan, which is
similar to one it recently announced.

... snip ...

the responsible parties have taken their bonuses and long departed.

on a tv business news show yesterday morning discussing the citigroup
bailout, it was mentioned that citigroup had $2t "on-balance" but still
over $1t being carried off-balance sheet.

previous reference to PBS program detailing citigroup playing the major
role in laying the basis for new century with repealed regulations
and/or lack of regulation enforcement.

long-winded, decade-old post mentioning many of the current problems as
well as mentioning that in the S&L crisis, citigroup was nearly taken
down by dealing in ARMs and needing a (private) bailout to continue
functioning:
http://www.garlic.com/~lynn/aepay3.htm#riskm

congressional testimony didn't exactly say that ... the congressional
testimony was that both the mortgage originators and the rating agencies
knew that the toxic CDOs weren't worth triple-A rating.

my analogy has been more the "emperor's new clothes" parable ...

it was also mentioned that SOX required the SEC to do something about the
rating agencies ... but nothing seems to have happened except for the
jan2003 report (lots of activity eliminating regulations and/or at least
eliminating enforcement of regulations).

a week ago sunday there were a couple of people on a CSPAN program
mentioning that the relaxed regulatory environment has had other unintended
side-effects ... including drug money being laundered through
mortgages. they raised the suggestion that the institutions should be
prosecuted under RICO (including seizing all assets and going for treble
damages).
https://en.wikipedia.org/wiki/Racketeer_Influenced_and_Corrupt_Organizations_Act

APL

Rich Alderson <news@alderson.users.panix.com> writes:
Address space available to workspaces is an implementation detail. I
believe, though I do not know this for certain, that large commercial
implementations on big IBM iron provide a full 24- or 31-bit address
space. APLSF only provided a couple of hundred pages max on TOPS-20,
and APL\360 was limited to something like 32K on the 360/50 at UTexas
in 1969-70.

that was part of the work the science center did adapting apl\360 for
cms\apl (on cp67/cms). it wasn't just the larger workspace size ... the
whole internal storage management infrastructure had to be redone to
eliminate (virtual memory) page thrashing, where apl was rapidly and
repeatedly touching every (possible) virtual page.

Mainframe files under AIX etc

joarmc@SWBELL.NET (John McKown) writes:
This is strictly for z/Linux use. I really doubt that you can connect
"mainframe" DASD to your AIX system. The interface is different. The
"mainframe" uses FICON. The AIX likely uses FCP (or maybe some SCSI
variant). To the best of my knowledge, there is no "host adapter" for
a p Series which will connect it to a FICON DASD unit. And even if
there were, you'd need a device driver.

one of the rs/6000 engineers had taken some fiber optic technology that
had been kicking around POK (for possibly a decade) and tweaked it so
that it was about 10% faster and used significantly cheaper optical
driver technology. This was announced as SLA (serial-link-adapter) and
was incompatible with what POK announced as ESCON (because of the SLA
enhancements).

He then wanted to do a 800mbit version of SLA ... but we had been
working with various national labs and standards organizations and
eventually talked him into joining the FCS standards body where he
became secretary and "owned" the standards specification document for
some period.

There were significant discussions that went on in the FCS standards
mailing list, where mainframe channel engineers were insisting on
layering all sorts of complexity on top of FCS ... mostly to support
various mainframe channel idiosyncrasies (which has been called FICON).

Most FCS use has been for both messaging interconnect as well as for
things like carrying "packetized" SCSI commands.

Blinkenlights

sidd <sidd@situ.com> writes:
CEO of Citi, Mr. Pandit was on Charlie Rose last nite, admitted
concentration of risk in real estate, laid blame on previous
management, and justified the bailout because Citi was too large to
fail.

there were some pundits claiming that after the merger with
travelers, a lot of "bankers" were replaced with "insurance
executives" ... who were used to operating under a different set of
standards & a totally different regulatory climate.

Virtualization: What is it exactly?

During the '90s, in the era of killer micros, there was an approach of
dedicating hardware to each individual operation. It was much
cheaper than the skill & expertise necessary to get a large number of
operations to gracefully co-exist on a single set of hardware. More than
a decade of that paradigm has resulted in large server farms
frequently running at 10% or lower utilization.

virtualization allows for multiple different (virtual) servers to
co-exist on the same hardware ... requiring only modest skills and
expertise. this has resulted in many server farms seeing a factor of
ten times reduction in the number of (physical) servers (and some
cases a corresponding reduction of ten times in the number of
datacenters).

virtualization has also been leveraged for security purposes ... even
on desk tops ... not for improved hardware utilization ... but for
process partitioning and isolation. one scenario has a dedicated
virtual machine being dynamically spawned for an internet browser
session ... and then being dissolved/discarded when the session
completes (along with any downloaded virus, trojans, or other exploits
& compromises).

It has also been used for software simplification ... somewhat focused
around the emerging buzzword virtual appliance (in the late 60s &
early 70s, we referred to it as service virtual machines). There have
been a number of articles about virtual appliances representing the
death knell for traditional (extremely complex) monolithic operating
systems. A virtual appliance is an extremely stripped down,
simplified environment ... tailored for doing a specific operation.

EAL5 Certification for z10 Enterprise Class Server

wfarrell@US.IBM.COM (Walt Farrell) writes:
But personally, I would not call it an operating system (I would call it a
hypervisor) nor would I claim it as EAL6+.

above EAL4 it gets kind of funny. I tried to get EAL5 for the AADS chip
... one of the things I was doing was putting everything in silicon;
all part of chip manufacturing, including EC/DSA (NIST digital
signature standard). Since everything was part of the silicon ... it
all had to be included in the evaluation. The problem was that there
wasn't a specification for EC/DSA that could be used as part of an
EAL5 evaluation (EAL4 didn't require demonstration that the outputs of
EC/DSA met some specification; there had been a draft specification
... but it had been withdrawn).

Other vendors were getting EAL5 evaluations on similar chips ... except
they were bare bones chips ... where all the applications were done in
software and loaded into the chip after manufacturing. Their
evaluation was for the manufactured chip (what came from the foundry)
... not the final product delivered to the end-user.

EAL5 Certification for z10 Enterprise Class Server

wfarrell@US.IBM.COM (Walt Farrell) writes:
I'll agree that things are generally different above EAL4, but in my
experience typically because the mutual recognition agreements apply only at
EAL4 and lower. And because (I think) in the US you may need the NSA
involved in evaluations at EAL5 and higher.

But in my experience you can add functional and assurance claims and still
meet any EAL level you want. So I don't quite understand why you couldn't
have gotten an EAL5 evaluation, but obviously I don't have all the details.

What you can't do is change the basic nature of the assurance claims. Each
assurance level (EAL1, EAL2, etc.) has a prescribed set of assurance claims
that you need to satisfy. The Common Criteria allows some small intended
kinds of modifications (selection from a specified list of actions,
specification of a list of objects or users, etc.). But you're not allowed
to take one of the standard claims and modify the wording to say something
else. And as I understand it that's what the authors of the SKPP protection
profile did. I believe they did so to make the claims better (stronger), as
they see it, for their intended usage. But the changes make the profile no
longer EAL6, or EAL6+ (since they included some EAL7 items) but really
"designed to be like EAL6".

That's not to say it's a bad or improper protection profile. But I don't
think it's correct to call it EAL6+ (which is probably why the protection
profile authors and the security target authors did not call their works EAL6+).

that is a separate issue ... in theory, common criteria & EAL are the
replacement for the "rainbow books" and "orange book" evaluations. I've
characterized the "orange book" as criteria for evaluating a general,
multi-user, multi-purpose system. This was hard &/or impossible to
achieve for many ... and the case was made that a lot of stuff was much
more special purpose and didn't need to meet all the requirements for a
general purpose system. thus was born common criteria and the plethora of
"protection profiles"

four yrs ago there was a report about 64 or so systems that had received
EAL2 evaluation and that 60-some had undisclosed modifications to the
standard protection profile ... which defeated the purpose of using the
evaluations for comparing different products.

for other AADS drift ... i've claimed that possibly 95% of the
"standard" smartcard protection profile (as opposed to the pure chip
protection profile) ... has to do with assurance related to loading
software on a chip. since an AADS-based chipcard has no provisions for
loading software ... most of the smartcard protection profile is
superfluous.

APL

"Dave Wade" <g8mqw@yahoo.com> writes:
I remember using APL on the 360/67 and later the 370 at Newcastle
University UK under MTS. Any idea which APL this was. I also remember
frequently getting slung off for having too much Virtual Memory. Even
on a Saturday morning

I believe much of the MTS application stuff on the 360/67 (and later on
370) was either borrowed from os/360 ... somewhat similar to the way that
(cp67/)CMS borrowed os/360 stuff ... or written from scratch for MTS.

pure conjecture, but it would likely have been apl\360 adapted for the MTS
environment.

standard apl\360 allocated a new storage location for every assignment
... until it reached the end of the workspace ... at which point it did
garbage collection, coalesced/compacted all allocated variables ... and
then started again.

one of the tools at the science center was a trace & virtual memory
modeling tool ... which was later released as a product called VS/Repack
(which would do semi-automated program re-org/optimization for a virtual
memory environment). This was initially used to identify the effect that
apl\360 storage management had in a virtual memory environment ... and
then to validate changes to make it run better when the workspace was in
(paged) virtual memory.
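
a toy model of that allocate-on-every-assignment behavior, and of why
re-assigning even a single variable ends up touching every slot ("page")
in the workspace (illustrative only, obviously not the apl\360 code):

    # every assignment bumps an allocation pointer; only when the workspace
    # is exhausted does garbage collection compact the live values back to
    # the bottom. in a paged virtual-memory workspace this sweep pattern
    # touches every page, which is what caused the thrashing.

    class Workspace:
        def __init__(self, size):
            self.slots = [None] * size     # the apl workspace
            self.next_free = 0             # allocation bumps this pointer
            self.names = {}                # variable name -> slot index
            self.touched = set()           # slots ("pages") referenced

        def assign(self, name, value):
            if self.next_free == len(self.slots):
                self.garbage_collect()     # only when workspace exhausted
            self.names[name] = self.next_free
            self.slots[self.next_free] = value
            self.touched.add(self.next_free)
            self.next_free += 1

        def garbage_collect(self):
            # coalesce/compact all live variables back to the start
            live = [(n, self.slots[i]) for n, i in self.names.items()]
            self.slots = [None] * len(self.slots)
            self.names, self.next_free = {}, 0
            for n, v in live:
                self.assign(n, v)

    if __name__ == "__main__":
        ws = Workspace(size=8)
        for i in range(20):
            ws.assign("X", i)          # re-assigning one variable still
        print(sorted(ws.touched))      # sweeps across the whole workspace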

Dumb And Dumber -- And Dumbest; How decisions by big CEOs are driving
the crisis of confidence.
http://www.forbes.com/opinions/2008/11/25/bailout-ceo-confidence-oped-cx_dg_1126gerstein.html

from above:
We don't just have a mass liquidity problem. We have a mass stupidity
problem.

...

The recent epidemic of stupidity in the marketplace is not happening in
a vacuum. It's coming on top of revelation after revelation of
shortsightedness and recklessness--often fueled by extravagant, at times
unimaginable, avarice--from our top business leaders in the run-up to
the September meltdown.

...

It's the kind of breathtakingly poor management at Citi that moved the
New York Post--hardly a banner-carrier for the Nader Raider types--to
run a rare front-page editorial yesterday branding the destitute bank
"Citi of Fools" and demanding the resignation of its board of directors.

... snip ...

reference to a study of 270 companies that got into crisis because of the
structure of their executive bonus plans (significant reward for taking
excessive business risks, as well as cooking the books, to boost the size
of the executive bonus ... with no personal downside and w/o regard to the
effect on the institution) ... and that then revamped their executive bonus
plans to align with long term corporate viability:
http://www.garlic.com/~lynn/2008p.html#9 Do you believe a global financial regulation is possible?

A couple weeks ago, there was somebody on CSPAN that said in the
congressional session which repealed Glass-Steagall, the financial
industry made $250m in congressional contributions. In the most recent
session approving the wallstreet bailouts, the financial industry made
$2b in congressional contributions.

and as recently mentioned, Sunday before last, a couple of people on CSPAN
were talking about how repealed regulations & relaxed regulation
enforcement have resulted in large amounts of drug money being laundered
through mortgages. They raised the issue of prosecuting the institutions
under RICO ... seizing all their assets and assessing treble damages.
http://www.garlic.com/~lynn/2008q.html#11 Blinkenlights
http://www.garlic.com/~lynn/2008q.html#12 Blinkenlights

Certificates turn 30, X.509 turns 20, no-one notices

On 11/27/08 05:13, Nicholas Bohm wrote:
I've never been quite sure whether "Public" qualifies "Key" or
"Infrastructure" - this may make a difference to what you count as a PKI.

SWIFT (interbank messaging), BOLERO (bills of lading) and CREST
(dealing in dematerialised stocks and shares) all use public key
cryptography, I believe, and have all been reasonably successful; but
they are all closed systems where each of the participants believes
that it and the others can stand the risk of contractually-imposed
non-repudiation rules (or they used to believe it, anyway).

But what these examples illustrate, by the lack of "open" comparables,
is the very limited utility of the technology.

in the past, the capitalized (upper-case) version referred to CAs making
the rounds of wallstreet with a $20B/annum business case (i.e. approx.
$100/annum per adult in the US).

The lower case "public key" met that an entity could make their public
key available ... as countermeasure to the shortcomings of
shared-secret (password/PIN) paradigm ... where a unique shared-secret
was required for every unique security domain (the current scenario
where scores or hundreds of unique shared-secrets have to be managed).

going from lower-case ... where an entity could share the same public
key with a large number of different entities, to upper-case, was the
scenario justifying the $20B/annum business case.
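
a rough illustration of that lower-case point (sketch only; it uses the
third-party python "cryptography" package for the key pair and stdlib hmac
for the shared-secret side):

    # contrast: with shared-secrets, every security domain holds a secret
    # that has to be kept confidential; with public key, the same public
    # key is registered everywhere and the private key never leaves the
    # key owner.

    import hashlib, hmac, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    domains = ["bank", "merchant", "employer"]

    # shared-secret paradigm: a unique secret per security domain, and
    # every domain's repository of secrets has to be kept confidential
    shared_secrets = {d: os.urandom(16) for d in domains}

    def shared_secret_auth(domain, challenge):
        return hmac.new(shared_secrets[domain], challenge,
                        hashlib.sha256).digest()

    # public key paradigm: one private key (never divulged), the same
    # public key registered with every domain
    private_key = Ed25519PrivateKey.generate()
    registered = {d: private_key.public_key() for d in domains}

    def public_key_auth(domain, challenge):
        signature = private_key.sign(challenge)
        registered[domain].verify(signature, challenge)   # raises if invalid
        return True

    if __name__ == "__main__":
        challenge = os.urandom(32)
        shared_secret_auth("bank", challenge)   # "bank" had to hold a secret
        print(all(public_key_auth(d, challenge) for d in domains))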

sometimes the issue isn't whether the public key is open/closed
... the issue is whether the business liability is between the parties
involved ... or should random, unrelated participants also get
involved in the business processes.

there have been some attempts at obfuscation ... attempting to confuse
the boundaries between the authentication technology and the parties
involved in business process liability.

i was at the annual acm sigmod (aka database) conference in '91 (92?) and
during one of the sessions, somebody asked what all this X.5xx stuff
going on was ... and the reply was that it was a bunch of networking
engineers trying to re-invent 1960s database technology.

from above:
Watsa's only sin was in being a little too early with his prediction
that the era of credit expansion would end badly. This is what he said
in Fairfax's 2003 annual report: "It seems to us that securitization
eliminates the incentive for the originator of [a] loan to be credit
sensitive. Prior to securitization, the dealer would be very concerned
about who was given credit to buy an automobile. With securitization,
the dealer (almost) does not care."

if you are an powerful financial regulator , how would you have stopped the credit crunch?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: if you are an powerful financial regulator , how would you have stopped the credit crunch?
Date: November 27, 2008
Blog: Equity Markets

In the recent congressional hearings there were comments that both the
unregulated mortgage originators and the rating agencies knew that the
toxic CDOs weren't worth triple-A ratings ... but the
unregulated mortgage originators were paying the rating agencies to
give them triple-A ratings anyway (the word "fraud" was periodically
used).

About the time of the hearings, a representative from one of the
ratings agencies was on a TV business news show to discuss some
corporate rating downgrade, and the host kept trying to get the
representative to take the blame for the whole credit crisis.

SOX supposedly required SEC to do something about the credit rating
agencies ... but other than a study/report from Jan2003, there doesn't
seem to have been anything done.

from above:
Watsa's only sin was in being a little too early with his prediction
that the era of credit expansion would end badly. This is what he said
in Fairfax's 2003 annual report: "It seems to us that securitization
eliminates the incentive for the originator of [a] loan to be credit
sensitive. Prior to securitization, the dealer would be very concerned
about who was given credit to buy an automobile. With securitization,
the dealer (almost) does not care."

... snip ...

Something similar has been claimed regarding the triple-A rated toxic
CDOs from mortgage originators. With little motivation to manage credit
quality ... they could write no-documentation, no-downpayment, 1%
introductory, interest-only ARMs for all comers ... which became quite
attractive for speculators. A recent news story referenced that 60% of
home mortgage defaults involve people with multiple properties.

Employees sue for non-paid PC boot-up time

Bernd Felsche <berfel@innovative.iinet.net.au> writes:
Wimp. Ignoring two 4-hour breaks for a snooze, I've done 90 hours
straight, spanning a long weekend, which should have entitled me to
4 weeks' extra leave. That I never got.

Although not strictly "work", I gather that far too many who have
worn baggy green skin have spent 10 days or more without a break;
other than brief naps whenever possible.

at the univ, i would regularly get the datacenter at 8am sat. morning
and have it all to myself until 8am monday morning ... and then have to
go to class.

What do you think is holding up the use of cellphone-initiated micro payments in the U.S.?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: What do you think is holding up the use of cellphone-initiated micro payments in the U.S.?
Date: November 27, 2008
Blog: Wireless

there was betting in the mid-to-late 90s that telcos would emerge as
the new payment processors ... because of the efficient transaction
systems they had for handling call records. the ones that tried it
... found that while they could efficiently process payment transactions
... they didn't handle the associated financial risk well.

the risk characteristics of advancing funds to the merchant at evening
settlement and then collecting for the transactions in the user's
monthly bill ... were a totally different risk profile than simply
collecting for usage (i.e. they were actually out of pocket for funds
paid to the merchant). while they could take a 20-30% hit on
remittance problems with usage billing ... the telco operations that
tried payments found that they were actually losing real money.
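
a back-of-the-envelope illustration of that difference (all numbers made
up):

    # usage billing: a remittance problem forfeits margin plus the (low)
    # marginal cost of carrying the traffic. payment transactions: the
    # merchant was already paid out of pocket at evening settlement, so
    # every unremitted dollar is a real dollar lost.

    billed = 1_000_000      # hypothetical monthly billings
    nonpayment = 0.25       # 25% of the billed amount never remitted
    marginal_cost = 0.20    # hypothetical cost of carrying the usage

    unpaid = billed * nonpayment
    usage_cash_loss = unpaid * marginal_cost    # cost of traffic carried
    payment_cash_loss = unpaid                  # funds already advanced

    print(f"usage billing, cash out of pocket:    {usage_cash_loss:,.0f}")
    print(f"payment advances, cash out of pocket: {payment_cash_loss:,.0f}")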

several of the telcos that tried it a decade ago ... got out of it
almost as fast as they got into it. it isn't particularly a technical
issue ... but much more of a cultural issue associated with difference
in financial risk management.

now the telcos were the early adopters of "in-memory" databases
(advertising ten times the thruput of traditional disk-oriented
databases) ... which were used for things like cellphone call
records. in the last year or two ... there were announcements that
some of the payment processing operations were installing these new
"in-memory" database products ... looking at starting to leverage
technology that had evolved for call-record processing, for larger
volumes of lower-value payment transactions.

In the same time-frame we also started to see some of the legacy rdbms
operations buying some of the relatively newer "in-memory" database
companies (possibly reflecting the impression that the technology was
starting to move out of the mostly-telco market into other market
segments).

we had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server and they had this
technology they had invented called SSL that they wanted to use.
recent post in usenet group thread ("https question") discussing some
of that activity (archived here)
http://www.garlic.com/~lynn/2008q.html#72 https question

then in the mid-90s we were asked to participate in the x9a10
financial standard working group which had been given the requirement
to preserve the integrity of the financial infrastructure for all
retail payments. in the latter half of the 90s, there was some telco
participation in x9a10. part of what was discussed was (while still
preserving strong integrity) a protocol that was lightweight enough
that it could be efficiently done with cellphones using very limited
bandwidth. the telco participation then evaporated ... seemingly after
the telcos discovered that managing payment transactions involving
other parties was a different sort of financial risk management than
they were used to.

while they didn't participate, various transit organizations were also
interested in having a super lightweight protocol that could be
performed within the power & elapsed-time constraints for contactless
operation at transit gates ... they wanted x9.59 to be doable within the
constraints already met by the octopus card in HK (which actually involved
a chip from Sony) and later the oyster card in london (while still
preserving x9.59 high integrity). misc. past references to the x9.59
financial standard protocol

not unless the ISP is running the server. HTTPS provides for
authenticating the (remote) server and then establishing an encrypted,
end-to-end "session" between the client and the server (only the
end-points see the unencrypted information, all others just see a lot
of encrypted noise).

we had been called in to consult with a small client/server startup that
wanted to do payment transactions on their server and that had also
invented this technology called SSL (or sometimes HTTPS) that they wanted
to use.
They also wanted to use it between the server and something called the
"payment gateway" ... lots of past posts mentioning deployment of
"payment gateway":
http://www.garlic.com/~lynn/subnetwork.html#gateway

part of the gateway deployment was some number of compensating processes
that further increased the integrity/assurance of the server/gateway
communication.

there was also a detailed end-to-end audit of all the processes related to
SSL/HTTPS, including walk-thrus of several of these new operations
calling themselves certification authorities that were issuing these
things referred to as SSL domain name digital certificates ... some
number of past posts
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

part of the assumption for correct client/server HTTPS operation was
that the end-user understood the relationship between the server that
the client thought they were talking to and the (HTTPS) URL used by
the browser. The end-user types in the URL (for the known server), and
the browser then uses SSL/HTTPS to validate the relationship between
the URL and the server that it is talking to. This creates the trust
chain that the server the end-user believes they are talking to is
actually the server they are talking to (but a critical component is
that the end-user understands the relationship between the server they
think they are talking to and that server's URL).
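
a small (stdlib python) sketch of that trust chain ... the certificate
only ever gets checked against whatever hostname was supplied, so the
chain is only as good as where that hostname/URL came from:

    # the hostname the user supplies is what the server certificate is
    # validated against; if the hostname came from a button on an
    # unvalidated page, the check validates the supplied URL rather than
    # the server the user thinks they are talking to.

    import socket, ssl

    def connect_as_browser_would(user_supplied_host, port=443):
        ctx = ssl.create_default_context()   # CA roots + hostname checking
        with socket.create_connection((user_supplied_host, port)) as sock:
            # wrap_socket verifies the certificate chain and that the cert
            # matches user_supplied_host; it raises if either check fails
            with ctx.wrap_socket(sock,
                                 server_hostname=user_supplied_host) as tls:
                return tls.getpeercert()["subject"]

    if __name__ == "__main__":
        print(connect_as_browser_would("www.example.com"))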

for electronic commerce, almost immediately the merchant servers
discovered that SSL cut their thruput by 90%-95% ... and as a result
they dropped back to just using SSL for check-out/payment.

Now the URL provided by the end-user is no longer validated by the
browser (since SSL is no longer being used). The (unvalidated)
merchant site then provides a button to click ... which provides the
URL. This invalidates the original assumption about SSL integrity
... since the URL provided by the end-user isn't being validated
... and the URL that is being validated is provided by an (unvalidated)
website (not by the end-user).

This "click" convention has created a disconnect for end-users
... between the server they think they are talking to and the
corresponding URL for that server (an original integrity assumption
for SSL, required that the end-user understand/know the relationship
between the server they think they are talking to and the URL for that
server).

an unvalidated source ... provides a field asking the end-user to
click. This field supplies a URL that takes the browser to a server
... that may actually have a valid digital certificate for that
server. The (potentially bogus) server (with a valid digital
certificate) then has a valid SSL session connected to the
client. This server can run a slightly modified version of some "proxy
software" ... which transparently creates another SSL connection with
another server (the server that the user actually believes they are
talking to) ... and transparently forwards everything between the two
sessions (while eavesdropping on all the communication).

Another kind of attack is a client end-point attack ... where the
client downloads some sort of applet, virus, and/or trojan ... that
compromises the PC and views all the unencrypted input/output
... before the browser does the SSL part. This applet/virus/trojan
then uses a different session with a 3rd party, providing a copy of
all the unencrypted information.