EU central banks didn't differentiate between "stored-value" and
"electronic cash" ... they were treated as value present in the cards
... and the EU central banks made an announcement that interest would
have to start being paid on the value in the cards (use of the term
"electronic cash" may have been an obfuscation attempt to bypass the
interest-paying rule).

Several of the operators sponsored an internet standard for value
transfer (of their products) via the internet ... the IOTP internet
standard was RFC 2801 ... the RFC 2801 abstract happens to mention
several such products that it was intended to support (including
GeldKarte).

The EAL4+ evaluation level was because it was done for a fully
deployable operational chip including all the crypto ... as would be
used in the hands of a typical customer. There were lots of "bare"
chip evaluations at higher levels. I was going for higher than EAL4
... but NIST pulled the crypto evaluation criteria (for higher-level
evaluation).

The (trivial to clone/counterfeit chip&pin) YES CARDS apparently
started appearing in 1999 (according to a cartes2002 presentation) and
continued ... the secret service made presentations at ATM Integrity
Task Force meetings in 2003 including YES CARD activity (based on the
presentation, somebody observed that billions had been spent to prove
that chips were less secure than magstripe).

Trusted computing had the trusted platform module (TPM, i.e. a
security chip) ... most of the same vendors doing security chips for
the financial industry were leveraging the technology for TPM ...
trusted computing was an integral part of PCs (for intel) and so there
was lots of attention being paid at the intel developer's forum (even
tho intel wasn't actually producing any such chips). I did do an
exercise taking a semi-custom security chip design and doing a fully
custom design (reducing the number of circuits by possibly a factor of
50, slightly faster, with a nearly corresponding 50-times reduction in
power & 50 times as many chips per wafer) ... which was also small
enough to possibly include in the corner of an existing chip (that
might be used in a PC or cellphone ... trivially supporting the TPM
function while at the same time doing x9.59 financial transactions). A
couple of yrs ago some of the POS terminal vendors looked at adding
such a TPM feature to their terminals.

In the mid-90s, it still looked like intel might get into security
chips, and their people would attend x9a10 financial standard working
group meetings. At one point, intel offered to underwrite/subsidize
the hardware cost for implementing secure internet transactions (this
was before tpm). it seemed to throw the payment industry into decision
paralysis ... there seemed to be a lot of concern that the existing
players would lose control of the industry. when intel changed its
mind about the security chip business, some went to work in redmond
... but continued to participate in x9 standards activity. There were
lots of meetings in Redmond and there are boat loads of detail with
regard to business decisions about what to support and not support.

disclaimer: i was conned into interviewing for chief security
architect in redmond ... it went on for a couple of weeks, but we
never could come to agreement on the terms.

The TPM-in-POS-terminal scenario from a couple of years ago was where
ISOs (independent sales organizations) were placing terminals at
merchants for "free" ... and making it up with an uplift on the
per-transaction charge. The problem was that other ISOs were coming in
and "compromising" the terminal (by reconfiguring it so the
transactions were for the pirate ISO). The "pirate" ISOs didn't have
to uplift the per-transaction charge ... since they didn't have the
expense of providing a "free" POS.

A PIN is nominally "something you know" authentication as a
countermeasure to a lost/stolen (something you have) card/token. The
basic concept is that multi-factor authentication has higher security,
given an assumption that the different factors have independent
compromises. This has come under attack for pin+magstripe in skimming
compromises (both PIN & magstripe are skimmed at the same time,
invalidating the assumption about independent compromises). In the
case of the counterfeit chip&pin YES CARD, the counterfeit card would
always say YES that the correct "PIN" had been entered, regardless of
what was entered (as well as always saying YES to offline transaction
and YES to transaction within limit).

--
virtualization experience starting Jan1968, online at home since Mar1970

however, in retrospect, some of the requested items may have
originated from such organizations. notice the reference in the above
about remembering/noticing the people (the above predates vm370, in
the days of cp67).

the secret service made a presentation at an ATM Integrity Task Force
meeting in 2003 that included some YES CARD stats & details ... which
prompted somebody in the audience to comment that billions were spent
to prove chips are less secure than magstripe.

In any case, evidence of the pilot appeared to evaporate w/o a
trace. My impression is that it could be some time before it is
attempted again ... this time allowing others to thoroughly vet the
technology.

POS terminals would ask the card 3 questions: 1) is the PIN correct,
2) is it an offline transaction, 3) is the transaction within the
credit limit. The countermeasure to counterfeit magstripe is to
deactivate the account so online transactions don't go through. With a
counterfeit YES CARD, crooks don't need to know the correct PIN
(everything entered is accepted) and the transactions are always
offline, so account deactivation has no effect, and all transactions
are accepted regardless of value.

One of the issues with compromised ATMs is that, in some cases, the
compromise has been done during manufacturing (leaving no external
evidence).

Note the US reluctance to deploy chip technology isn't so much the
cost of a single deployment (such arguments are frequently obfuscation
and misdirection); they've already tried it once and had to back off
... the current situation is possibly concern that there might have to
be the cost of a large number of deployments (after already being
burned once).

and the 404 URL lives on at the wayback machine:
http://web.archive.org/web/20061102063039/http://www-03.ibm.com/industries/financialservices/doc/content/solution/1026217103.html

part of the issue in the '90s was a lot of dithering over chips for
SDA versus the power-hungry and expensive beasts for DDA. The
challenge by the transit industry, in that time-frame, was to come up
with a chip that was more secure than the "DDA chips" ... while being
much cheaper than the "SDA chips" and being able to securely do a
(contactless) x9.59 financial standard transaction within the transit
turnstile elapsed time and power limitation requirements.

origin of 'fields'?

"Joe Morris" <j.c.morris@verizon.net> writes:
As a note for readers who weren't around in the high days of the mainframe
VM community...CIA representatives to SHARE were open about their
employment. There was even a bit of humor in CIA's membership in SHARE: its
installation code was CAD - as in "Cloak And Dagger". (And before someone
asks, the membership lists were published with both names and installation
codes, and meeting badges showed both.)

in the past decade, I had a visit to them (something to do with the
financial industry); it required making advance preparations so I was
on the visitor list out at the front guard bldg. I had assumed that vm
use had long been discontinued ... but when checking in at the guard
bldg, the visitor list was on computer fanfold paper with a vm
"separator" page printed on top.

Fun with ATM Skimmers, Part III

... part of a recent post in a linkedin payment system thread ...
part of the issue in the '90s was a lot of dithering over chips for
SDA versus the power-hungry and expensive beasts for DDA. The
challenge by the transit industry, in that time-frame, was to come up
with a chip that was more secure than the "DDA chips" ... while being
significantly cheaper than the "SDA chips" and being able to securely
do a contactless x9.59 financial standard transaction within the
transit turnstile elapsed time and power limitation requirements.

nearly all of the chips associated with the paradigm (that the YES
CARD vulnerability was associated with) were insecure and/or extremely
expensive ... with various other shortcomings.

disclaimer: we had been called in to consult with a small
client/server startup that wanted to do payment transactions on their
server; they had also invented a technology called "SSL" they wanted
to use; the result is now frequently called "electronic commerce".
somewhat as a result, in the mid-90s we were asked to participate in
the x9a10 financial standard working group, which had been given the
requirement to preserve the integrity of the financial infrastructure
for all retail payments (i.e. ALL as in debit, credit, ach,
stored-value, POS, unattended, wireless, contactless, high-value,
low-value, internet, aka ALL).

I was co-author of the resulting x9.59 financial transaction standard
... which also eliminated the threats & vulnerabilities from skimming,
breaches, and harvesting information from previous transactions.

Part of being able to apply x9.59 was then doing a chip that also met
all the same requirements (having extremely high security while at the
same time having as close to zero cost as possible) and being useable
for ALL environments.

Basically the x9.59 financial transaction is sent to the chip ...
which returns a code that is unique to that transaction ... which is
added to the transaction before it is sent off to be processed. The
transaction processing includes verification of the unique transaction
code as a form of authentication.
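
A hedged sketch of the unique-per-transaction code idea (x9.59 itself
specified digital-signature authentication; a keyed MAC is substituted
here purely to keep the sketch short, and all the names are made up):

import hmac, hashlib, os

chip_key = os.urandom(32)        # secret held in the chip (illustrative)

def chip_sign(transaction, nonce):
    # the chip returns a code unique to this exact transaction + nonce
    return hmac.new(chip_key, transaction + nonce, hashlib.sha256).hexdigest()

def processor_verify(transaction, nonce, code):
    expected = hmac.new(chip_key, transaction + nonce,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, code)

txn = b"merchant=1234 amount=19.95 account=..."
nonce = os.urandom(8)            # makes the code non-static
code = chip_sign(txn, nonce)
print(processor_verify(txn, nonce, code))          # True: authenticated
print(processor_verify(txn, os.urandom(8), code))  # False: replay rejected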

Current skimming and breach exploits use harvested (static)
information to perform (new) fraudulent financial transactions
(basically a form of replay attack; in the current paradigm, there is
no transaction-unique information, and static information from
previous transactions is sufficient) ... usually as far away as
possible from the compromised end-point (to maximize compromise ROI).

X9.59 eliminated all such replay attacks (with a non-static, unique
code for every transaction). As an aside, the major use of SSL in the
world today is this earlier work for "electronic commerce" to hide
transaction details. Since X9.59 eliminates transaction detail
information leakage as a vulnerability, it is no longer necessary to
hide transaction details (as a countermeasure to replay-attack
fraudulent financial transactions) ... and it therefore also
eliminates the major use of SSL in the world today (the earlier work
we did for "electronic commerce").

X9.59 didn't eliminate exploits where a compromised end-point actually
performs a fraudulent transaction (as opposed to skimming information
to perform a replay-attack fraudulent transaction someplace
else). However, reducing fraudulent transactions to only the
end-points that have been compromised does make them much easier to
identify and quicker to shut down.

X9.59 did provide for allowing both the account owner's chip as well
as the transaction environment (aka "end-point") to provide unique
transaction codes ... so that both the account owner and the end-point
can be authenticated on every transaction. This minimizes the problem
of counterfeit end-points and also helps speed up identification of
compromised end-points (that may be performing fraudulent
transactions).

The static data paradigm results in millions of places all over the
planet where the information might be harvested. PIN-debit
transactions can be done at counter POS terminals (where costs are
well under a hundred dollars) ... as a point of compromise. Then the
static data is used to produce a counterfeit card (along with the PIN)
that is used at ATMs (and/or other POS terminals) which haven't been
compromised.

A lot of the stories that make it into the public news are related to
things that consumers might actually be able to do something about
(like recognizing overlays). Lots of other exploits rarely make it
into the public news.

X9.59 does nothing for armed robberies ... but it largely eliminates
the other kinds of financial threats ... and the ROI on credit/debit
card armed robberies is drastically lower (enormously more effort per
transaction, with much greater risk to the criminal).

I was tangentially involved in the Cal. state data breach notification
legislation, having been brought in to help wordsmith the
Cal. electronic signature legislation. Several of the participants
were heavily involved in privacy issues and had done detailed,
in-depth consumer privacy surveys. The #1 issue was identity theft,
namely the form of fraudulent financial transactions against existing
accounts because of data breaches (another form of static data
vulnerability, similar to skimming). There seemed to be little or
nothing being done about breaches ... and it was apparently hoped that
the press resulting from the notifications would prompt some
corrective action.

As a side-note, most security is motivated by self-interest ...
protecting one's own assets. In the case of breaches ... it is the
account owners that are at risk, and they are usually unrelated to the
entities that experience the breach.

Part of the x9.59 and chip effort was something called parameterised
risk management ... which could associate an integrity level with
every component involved in a transaction and, if necessary, update it
in real time. A transaction can then be evaluated based on whether the
integrity levels of the components meet the minimum requirement for
performing the transaction.
http://www.garlic.com/~lynn/x959.html#aads
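
A minimal sketch (names, levels, and thresholds are all assumptions)
of the parameterised risk management idea: each component carries an
integrity level that can be updated in real time, and a transaction
only goes through if every involved component meets the minimum for
that transaction:

component_integrity = {      # updatable in real time
    "chip": 9,
    "pos_terminal": 6,
    "network_endpoint": 7,
}

def minimum_required(amount):
    # illustrative policy: higher value demands higher integrity
    if amount < 10:
        return 3
    if amount < 1000:
        return 6
    return 8

def evaluate(amount, components):
    needed = minimum_required(amount)
    return all(component_integrity[c] >= needed for c in components)

print(evaluate(5.00, ["chip", "pos_terminal"]))     # True
print(evaluate(5000.00, ["chip", "pos_terminal"]))  # False: terminal too weak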

--
virtualization experience starting Jan1968, online at home since Mar1970

What banking is. (Essential for predicting the end of finance as we know it.)

a big part of securitization coming to dominate finance was that the
sellers could buy triple-A ratings (when both the sellers and the
rating agencies knew they weren't worth triple-A); securitization had
been used during the S&L crisis to obfuscate the underlying values
... but w/o anywhere near the success that comes with being able to
buy triple-A.

there were enormous fees and commissions related to dealing in the
securitization transactions ... providing sufficient individual greed
motivation to overcome any concern regarding what effects the
transactions might have on the institution, economy, and/or
country. The NY comptroller had a report that wall street bonuses
spiked over 400% during the bubble (with lots of subsequent activity
to prevent bonuses from returning to pre-bubble levels) and there were
other reports that the financial services industry tripled in size
during the bubble (as a percent of GDP, while providing no positive
benefit to society). There was a report of a total of $27T in such
transactions during the bubble, easily accounting for the bonus &
industry-size spike (only part of the total siphoned out of the
infrastructure during the bubble).
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

Sarbanes-Oxley actually included something about having the SEC look
at rating agencies ... but it didn't actually result in anything. In
fact, it appeared like the SEC was doing little or nothing during the
period, as evidenced in the Madoff hearings by the person that tried
for a decade to get the SEC to do something about Madoff. Possibly
because GAO also didn't think that the SEC was doing anything, even
after SOX, GAO started doing reports about audits of public company
financial filings showing an increase in fraudulent filings and/or
audit errors (even after SOX). A question then is whether 1) SOX had
no effect on fraudulent filings, 2) SOX motivated the increase in
fraudulent filings, or 3) if it weren't for SOX, all financial filings
would be fraudulent.

Don't look to institutional motivation for doing or not doing
anything. There was enormous, wide-spread, unbridled personal greed
that totally overwhelmed everything else (even assuming any concern
for institution, economy and/or country).

supposedly the commodities markets were for traders that had a
substantial interest in the commodity ... to keep out speculators that
would result in large irrational price fluctuations ... but after a
series of confidential/secret letters allowing major speculators to
play ... the markets started to have huge fluctuations.

from above:
While the average number of companies listed on NYSE, Nasdaq, and Amex
decreased 20 percent from 9,275 in 1997 to 7,446 in 2002, the number
of listed companies restating their financials increased from 83 in
1997 to a projected 220 in 2002 (a 165 percent increase) (table
1). Based on these projections, the proportion of listed companies
restating on a yearly basis is expected to more than triple from 0.89
percent in 1997 to almost 3 percent by the end of 2002. In total, the
number of restating companies is expected to represent about 10
percent of the average number of listed companies from 1997 to 2002.

from above:
The database consists of two files: (1) a file that lists 1,390
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
July 1, 2002, and September 30, 2005, and (2) a file that lists 396
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
October 1, 2005, and June 30, 2006.

... snip ...

with regard to securitization, a great deal of the loans were by
unregulated loan originators that would have had very little money to
lend w/o being able to pay the rating agencies for triple-A ratings
for everything they packaged. Since they could immediately sell off
everything they wrote, regardless of quality, as triple-A, they no
longer had to care about loan quality or borrower qualification (the
only thing limiting their income was how fast they could make loans,
and how big they could make them).

Speculators found the no-documentation, no-down, 1% interest-only
payments extremely attractive ... possibly 2000% ROI in parts of the
country with 20-30% real-estate inflation (with the speculation
further fueling the inflation). The enormous speculation and inflation
helped create the appearance that demand was significantly larger than
actually existed. This resulted in all sorts of infrastructure
investment for demand that didn't exist. When the bubble burst, the
effects spread thru-out the economy.

There have been a number of reports regarding the events leading up to
the repeal of Glass-Steagall (including the account in Griftopia
mentioned upthread), which eliminated the separation of investment
banking and regulated depository institutions. The investment banking
operations then participated in the securitization frenzy ... heavily
involving those financial institutions in the bubble.

Early last year, I was asked to take a copy of the Pecora hearings
(which had been scanned the previous fall at the Boston public
library), HTML them, heavily cross-index them, and also provide some
number of references between what happened then and what happened this
time (apparently in anticipation that the new congress would have an
appetite to do something about it). After putting quite a bit of
effort into the project, I got a call saying it wouldn't be needed
after all. There is a direct relationship between the 20s Brokers'
Loans that were at the basis of the '29 bubble/crash and the
securitized funded loans that were behind this bubble/crash (some $27T
in triple-A rated toxic CDO transactions)
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

There is an industry publication that gives detailed industry
operations ... a couple hundred pages, possibly 50-60 lines per page
... the averages for the major regional banks compared to the major
(too-big-to-fail) national banks. For whatever reason, just as the
bubble was starting, the regional bank operation avgs. were slightly
better than the national bank avgs (implying that the national banks
should have been allowed to get smaller, rather than getting larger).

Boyd also talked about reviewing yearly exercises where those in
command had spent their time playing golf all year while their staff
practiced ... and when it came time for the rubber to meet the road,
they didn't perform well (either).

lots of command&control hasn't been about people or ideas but about
logistics and stimulus/response; these are areas where technology has
had lots of improvement in the last 30 yrs. 30yrs ago technology
imposed enormous limitations on the amount of realistic inputs,
processing, and output.

One of the limitations in real-time, realistic (war) gaming is the
limitation on processing power. The issue is that computers have been
stalled for a couple of decades in getting faster, and the work-around
has been to package multiple processors (cores; this in turn scales up
to the massive supercomputers ... which for the last 20 years have
effectively been increasing aggregations of cores). In the computer
programming industry in general ... but also in the gaming industry, a
"holy grail" is how to decompose processing into parallel
operations. For the most part, human programmers represent operations
as a sequence of serialized tasks ... unable to take advantage of the
independent, parallel processing available.

Rapid, agile game development is dependent on the very few individuals
that are skilled at translating a real-world environment into
parallel, non-serialized/non-sequential operations (taking advantage
of current parallel computational resources) ... with about the only
alternative being very painful and time-consuming refinement,
attempting to adapt a sequence of serialized operations into
independent parallel operations (the short-cuts have been to utilize
specific well-worn constructs that partition well-understood or
well-practiced operations).
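
A small sketch of the decomposition problem: the same workload written
as the "natural" serialized sequence and as independent tasks mapped
across cores (update_entity is a made-up stand-in for one independent
unit of simulation work):

from concurrent.futures import ProcessPoolExecutor

def update_entity(state):
    # stand-in for one independent unit of game/world simulation work
    return state * state % 7919

if __name__ == "__main__":
    entities = list(range(100_000))

    # the serialized formulation: one task after another
    serial = [update_entity(e) for e in entities]

    # the decomposed formulation: independent entities in parallel
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(update_entity, entities, chunksize=1000))

    assert serial == parallel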

Some of this is possibly related to the references about an effective
maximum command unit size of around 150 ... above that, human
processing appears to saturate and is less effective. In realistic,
real-world gaming ... the technology power is there in the form of a
large number of processing units ... but there is a distinct shortage
of human skill in being able to represent real-world tasks as parallel
operations. There has been 20yrs of floundering around hoping for some
automated solution ... that, or some technology break-thru that
resumes increasing the speed of sequential processors.

I would contend that there is quite a bit of overlap between the
difficulty the majority of computer programmers have in understanding
(and representing) the real world as parallel operations ... and the
difficulty commanders have in making sense of a large number of
concurrent activities.

The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET

From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Nov, 2010
Subject: The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
Blog: IETF - The Internet Engineering Task Force

I contend one reason was that the internal network effectively had a
form of gateway in nearly every node from just about the
beginning. The arpanet/internet didn't really get that gateway
capability until the great cut-over to internetworking on 1/1/83
... which was a significant factor in the internet catching up with
the internal network. The other factor was that the internal network
restricted workstations and PCs to "terminal emulation" function
... and it was workstations and PCs that started to see a big
explosion as internet nodes in the mid-80s.

Disclaimer: with regard to the NSFNET backbone being the precursor to
the modern internet ... we were working with some of the locations
that would become NSFNET backbone locations ... some old email from
the period:
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

then when the NSFNET backbone RFP came out ... we were prevented from
bidding. The director of NSF, with backing from some agency chief
scientists, wrote a letter to the corporation (copying the CEO)
requesting our involvement. This appeared to just further inflame the
internal politics. References to what we already had running being at
least five years ahead of all the RFP responses (to build something
new) didn't help.

At Interop '88, I had some stuff in one of the booths ... but it was
for a totally different organization (than the one that I was
affiliated with at the time). One of the other features of Interop '88
was that many of the booths were heavily oriented towards (the
gov-mandated) GOSIP.

IBM paid for much of BITNET ... which used technology similar to the
internal network ... and then for EARN in Europe. Old email from the
person responsible for setting up EARN in Europe looking for some
help:
http://www.garlic.com/~lynn/2001h.html#email840320

This is a reference to HTML evolving from SGML at CERN (GML had been
invented at the science center in 1969 ... and then standardized as
SGML a decade later):
http://infomesh.net/html/history/early/

A big problem with the original HTTP was that it was nearly stateless
... a transaction per TCP session, where TCP has a minimum
seven-packet exchange ... with a long tail on closings. There was a
period, as webserver loads started to ramp up, when servers were
spending 90% of their CPU running the FINWAIT list (closing TCP
sessions).
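
A back-of-envelope sketch (assuming no losses and the usual
piggybacking) of the packet counts behind that complaint, compared
with the 3-packet reliable transaction mentioned below for xtp:

tcp_minimum = [
    "SYN",            # client opens
    "SYN+ACK",        # server accepts
    "ACK",            # handshake completes
    "request data",
    "response data",
    "FIN",            # close begins; the closer sits in FIN-WAIT
    "FIN+ACK",        # (a final ACK often makes it eight)
]

xtp_style = ["request", "response", "ack/close"]

print(len(tcp_minimum), len(xtp_style))   # 7 3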

We had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server; they had also
invented this technology called "SSL" they wanted to use (the result
is now frequently called "electronic commerce"). As the load on their
servers started to pick up (both HTTP & HTTPS), the FINWAIT-list
problem resulted in them adding more and more servers. They finally
installed a large Sequent server, which had already solved the FINWAIT
problem for large commercial installations (predating the appearance
of the problem with heavy HTTP use of TCP).

I still claim that the large missing piece of OSI was gateways and
internetworking ... it was much more like the 60s & 70s networking
architectures ... which also reflected a single-infrastructure "owned"
network.

One of the commercialization changes was RFC copyrights ... I had been
very careful with regard to using any material from RFCs after the
copyright policy change ... but I still had to hire a copyright
attorney to talk with ISOC.

I had been on the xtp technical advisory board ... xtp had a reliable
transaction with a minimum 3-packet exchange ... which would have been
much more sensible for (reliable) HTTP/HTTPS transactions if you
weren't going to do longer-lived sessions.
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

I did have final authority on the interface between webservers and the
payment gateway (which sat on the internet and handled the interface
between the internet and the payment network) and could mandate
various things ... however, I could only recommend details on the
browser/webserver interface. One of the things I mandated for the
payment gateway was multiple A-record support for establishing
connections ... but even after giving presentations to the browser
group, I had a much harder time getting them to budge (the claim was
that multiple A-records were too advanced, even after I provided them
example client code from 4.3Tahoe; I guess it wasn't in some college
text). I was still fighting for browser multiple A-record support at
the time of the Spring96 MDC at Moscone ... and IE had hired some
oldtimers that knew about such things.
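
A minimal sketch of client-side multiple A-record support (essentially
what the 4.3Tahoe example client code demonstrated): resolve all the
addresses for a name and try each in turn until one connects. The host
name in the usage line is hypothetical:

import socket

def connect_any(host, port):
    last_err = None
    for family, stype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, stype, proto)
        try:
            s.settimeout(5)
            s.connect(addr)      # first reachable address wins
            return s
        except OSError as err:
            last_err = err
            s.close()            # failed; try the next advertised address
    raise last_err or OSError("no addresses for " + host)

# usage: sock = connect_any("gateway.example.com", 443)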

4.3Tahoe was possibly one of the biggest reasons for the proliferation
of TCP/IP. There are stories about DARPA constantly telling CSRG that
they weren't allowed to do networking, CSRG saying "yes, they
wouldn't" ... and then kept right on doing it.

There was a sequence where VERIFONE bought EIT ... (my impression was
VERIFONE was looking for expertise to move upstream into
e-commerce). Then HP bought VERIFONE, and the HP executive that
VERIFONE reported to had us in several times to talk about electronic
commerce.

Part of ASN.1 came in from the x.50x stuff, PKIs, and digital
certificates. There was an annual ACM database conference (SIGMOD) in
san jose in the early 90s, and during a large panel session in the
ballroom, somebody asked what this x.50x stuff was all about. The
response: a bunch of networking engineers attempting to re-invent
1960s database technology.

There were a number of specifications for payment transactions on the
internet in the early-to-mid 90s. One such specification, by the card
associations, included appending digital certificates to every
transaction. A big issue was that the typical appended digital
certificate payload was 100 times larger than the typical payment
transaction payload. Somewhat as a result, in their "payment gateway",
all the PKI gorp was stripped off and just a flag was turned on (in
the actual payment transaction) claiming that all the PKI gorp stuff
had been done correctly. Later, there was a presentation at an ISO
meeting in europe by a card association business person giving
statistics on payment transactions with the PKI flag turned on ... and
they could prove that no PKI was ever involved (the motivation was
rules that charged less for "PKI" transactions and/or possibly actual
fraud). The other downside (besides the 100* PKI payload bloat) was
that it also resulted in something like 100* computational bloat.
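
Rough numbers (assumptions, purely for illustration) behind the 100*
payload figure:

transaction_bytes = 80       # assumed typical payment message payload
certificate_bytes = 8_000    # assumed typical appended certificate gorp

print("bloat factor: %.0fx" % (certificate_bytes / transaction_bytes))  # 100x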

Lots of the Internet is a pretty hostile environment. Windows
networking had come up from closed, safe business networking
environments ... w/o any concern for protection and
countermeasures. It was relatively straight-forward to technically
adapt that platform support to the internet environment ... but it
opened the way for all sorts of vulnerabilities.

The spring '96 MDC at moscone had all sorts of banners about internet
support ... but the constant subtheme was "protect your investment"
(visual-basic-based application stuff) coming from the (safe, closed)
business environment ... including programs automatically executing
scripts when processing application files. Translating that to the
anarchy of the internet opened up all sorts of problems. Up until
then, buffer overflows had dominated as the cause of internet exploits
(primarily because of characteristics of the C programming language;
tcp/ip stacks implemented in various other languages didn't suffer
such a large number of buffer overflow exploits). Automated script
execution exploits grew until they passed buffer overflows as the
major cause of internet exploits.

The payment gateway (for "electronic commerce") was replicated and had
multiple connections into different parts of the internet backbone
... with support for advertising different routes to handle various
internet failure modes, outages, downtime, etc. During the early
payment gateway period, the backbone transitioned to hierarchical
routing (pretty much eliminating route advertising) ... forcing a
fall-back to multiple A-records (with client connection support) as
the means for masking failures. I could mandate their use by
webservers (for connecting to the payment gateway), but as previously
mentioned it was a battle to get the browser group to support them.

One of the big early e-commerce servers was a sporting goods merchant
that advertised heavily during Sunday football games ... and expected
heavy traffic during half-times. However, in this time-frame, many of
the service providers had several-hour outages on sunday for doing
router maintenance (before the days of telco provisioning by the major
service providers). Even though the merchant had multiple connections
into different parts of the internet backbone ... w/o browsers
supporting multiple A-records ... it wasn't possible to mask the
outages from lots of end-users.

--
virtualization experience starting Jan1968, online at home since Mar1970

During his Organic Command & Control briefing, Boyd would touch on
American business heavily suffering from the rigid, top-down, command
& control structure that the army put in place for WW2 ... i.e. it had
to field large numbers of inexperienced resources, and the rigid
structure leveraged the few experienced skills available. Later, as
many of those young officers left the army and started to populate the
American business executive ranks, they tended to replicate that
rigid, top-down C&C (assuming vast numbers of unskilled workers).

Boyd's observation has also been used to explain recent reports that
the ratio of US business executive to worker compensation had exploded
to 400:1, after having been 20:1 for a long time (and 10:1 in much of
the rest of the world).

An exception was possibly the engineering combat groups. My wife's
father had commanded the 8th armored's 53rd engineering combat
battalion. Then in mid-44, he was given command of the 1154th
engineering combat group ... engineering combat groups were relatively
fluid operations, typically with 3-6 battalions that were attached,
detached and moved around as needed. I've done some research on the
1154th's status reports at the national archives ... from one such:

On 28 Apr we were put in D/S of the 13th Armd and 80th Inf Divs and
G/S Corps Opns. The night of the 28-29 April we cross the DANUBE River
and the next day we set-up our OP in SCHLOSS PUCHHOF (vic PUCHOFF); an
extensive structure remarkable for the depth of its carpets, the
height of its rooms, the profusion of its game, the superiority of its
plumbing and the fact that it had been owned by the original financial
backer of the NAZIS, Fritz Thyssen. Herr Thyssen was not at home.

Forward from the DANUBE the enemy had been very active, and an intact
bridge was never seen except by air reconnaissance. Maintenance of
roads and bypasses went on and 29 April we began constructing 835' of
M-2 Tdwy Br, plus a plank road approach over the ISAR River at
PLATTLING. Construction was completed at 1900 on the 30th. For the
month of April we had suffered no casualties of any kind and Die
Gotterdamerung was falling, the last days of the once mighty
WHERMACHT.

... snip ...

The army divisions had strategic objectives with regard to the
enemy. The engineering combat groups were much more fluid operations,
dynamically adapting to deal with environmental conditions.

--
virtualization experience starting Jan1968, online at home since Mar1970

jmfbahciv <See.above@aol.com> writes:
I liked handling cards. I hated handling papertape. I would rather
have my data in cards than on magtape.

Cards were great; DECtapes were the best.

I got a summer job at the univ. doing a port of 1401 MPIO to 360
assembler. The univ. had a 709 with a 1401 front-end for doing
unit-record; cards were read to tape on the 1401 and the tape moved to
a 709 tape drive. The 709 did tape-to-tape processing and the
resulting output tape was moved to a 1401 tape drive for output to
printer/punch (MPIO was the 1401 program that handled card-to-tape and
tape-to-printer/punch).

As part of eventually replacing the 709/1401 with a 360/67, a 360/30
was brought in to replace the 1401. While the 360/30 had 1401 hardware
emulation mode and could run MPIO directly, I got hired to rewrite it
in 360 assembler; except for the requirement to duplicate MPIO
function, I got to design and implement my own program: dispatcher,
interrupt handlers, device drivers, error recovery, storage
management, etc.

The datacenter shut down at 8am sat. and didn't re-open until 8am mon
... so I had the machine room for a 48hr period. I also got other
programming jobs ... and in the fall it was a little difficult going
to monday morning class after not having slept for 48hrs.

The source assembler program eventually grew to approx. 2000 cards (it
could still fit in a card box). The 360 assembler took a minimum of 30
minutes to assemble the source and produce a "TXT" deck (i.e. a deck
of hex cards for execution loading). Since it took so long to assemble
... I got pretty good at duplicating cards & punching patches. The
"TXT" deck just had hex "holes" ... no printing across the top, and
the 026 keypunch was alphanumeric ... to get the correct combination
of "hex" holes, I had to use the "multi-punch" feature ... using the
keyboard to force the combination of holes to be punched. Put the
original card in the duplication slot and then duplicate out to the
columns for the patch ... multi-punch the new hole combinations and
then duplicate the remaining columns.

Got fairly good at being able to interpret the hex holes in a "TXT"
deck ... having to fan the deck to find the card that had the correct
displacement in the program (for applying the patch). Was typically
able to do patches in much less time than it took to re-assemble.
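
A hedged sketch of the "fan the deck" step, assuming the standard
OS/360 object-deck TXT layout (cols 2-4 = "TXT", cols 6-8 = 24-bit
start address, cols 11-12 = byte count, cols 17-72 = data) and card
images already translated out of EBCDIC:

def find_txt_card(cards, displacement):
    # cards: list of 80-byte card images; find the TXT card whose
    # data covers the given program displacement
    for i, card in enumerate(cards):
        if card[1:4] != b"TXT":
            continue                                # skip ESD/RLD/END cards
        start = int.from_bytes(card[5:8], "big")    # cols 6-8
        count = int.from_bytes(card[10:12], "big")  # cols 11-12
        if start <= displacement < start + count:
            return i                                # the card to patch
    raise ValueError("displacement not in any TXT card")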

much, much later I was at SJR in san jose and my brother was a
regional apple marketing rep (supposedly with the largest physical
region in CONUS). He would come to town periodically and I could go to
business dinners with him. Got to argue with some of the mac
developers about design ... before the mac was even announced.

--
virtualization experience starting Jan1968, online at home since Mar1970

Anne & Lynn Wheeler <lynn@garlic.com> writes:
much, much later I was at SJR in san jose and my brother was a
regional apple marketing rep (supposedly with the largest physical
region in CONUS). He would come to town periodically and I could go to
business dinners with him. Got to argue with some of the mac
developers about design ... before the mac was even announced.

other random apple trivia ... my brother figured out how to dial into
the apple hdqtrs business computer (which was an s/38 at the time)
from his apple-II to track manufacturing schedules and deliveries.

--
virtualization experience starting Jan1968, online at home since Mar1970

... virtual machines provided partitioning and containment ... but
still allowed individuals to shoot themselves in the foot. some of the
virtual machine based online timesharing service bureaus (an earlier
"cloud" paradigm) added features to the virtual machine environment to
help prevent their users from (also) "shooting themselves in the
foot."

the current genre of mainframes offers both a virtual machine
operating system and a subset of the virtual machine features
supported directly by the hardware (not requiring additional
software), called LPARs. Standard production environments (even w/o
virtual machine operating systems) are normally configured/partitioned
with environments that are regularly referred to as "sandbox" and/or
"test".

trivia: some number of the CTSS people went to the science center on
the 4th flr and did the cp67 precursor to vm/cms (some number then
left the science center and did some of the early virtual machine
based online timesharing service bureaus); others from CTSS went to
the 5th flr and did Multics.
http://www.garlic.com/~lynn/subtopic.html#545tech

--
virtualization experience starting Jan1968, online at home since Mar1970

Walter Bushell <proto@panix.com> writes:
Payroll systems could be argued to be the hardest part of rocket
science. Shooting rockets off only need deal with physical reality, but
payroll needs deal with physical reality and acts of Congress and state
legislatures not to mention all sorts of government regulations.

i periodically mention the economic conference from a couple of years
ago where one of the news stations broadcast a roundtable of
economists. they said the tax code (which was constantly being
twiddled) was 65,000+ pages. The proposal was that going to a flat tax
would reduce the tax code to 400-500 pages and vastly improve the
productivity of the country.

there were statements that lobbying over the tax code (the constant
twiddling) contributes to congress being the most corrupt institution
on earth, and that changing to a flat tax & 400-500 page tax code
would gain something like 6 percent of GDP (currently lost dealing
with the special provisions). It would also significantly reduce the
enormous, high-level corruption. That 6 percent would be a much larger
benefit, offsetting any loss of possible positive benefits buried in
those 65,000+ pages.

The roundtable ended with the semi-humorous observation that one of
those lobbying against the flat-tax change was Ireland ... supposedly
some number of the companies relocating to Ireland gave as their
reason the problems of dealing with the US tax code.

I've mentioned before that early last year, I was asked to take the
scans of the Pecora hearings (done the previous fall at the boston
public library) ... HTML them, heavily cross-index them, and also
provide links between what went on then and what went on this
time. This was apparently in anticipation that the new congress had an
appetite for doing something. After putting quite a bit of work into
it, I got a call that said it didn't look like congress was interested
in doing anything real after all.

I was using tesseract to try and improve the OCR of the scans ... but
was still doing lots & lots of manual fixups (the documents were
printed in the 30s and the scans were somewhat faded):
http://code.google.com/p/tesseract-ocr/

The municipal bond market had collapsed two years ago when investors
realized that the rating agencies had been selling triple-A ratings
for ($27T?) toxic CDOs and there was huge ambiguity whether any
ratings could be trusted. Buffett stepped in at that time to rescue
the municipal bond market by providing insurance.
Evil Wall Street Exports Boomed With 'Fools' Born to Buy Debt
http://www.bloomberg.com/apps/news?pid=newsarchive&refer=home&sid=a0jln3.CSS6c

The real-estate speculation (because of the enormous amount of funds
available from securitization, which also eliminated any concern
regarding loan quality and/or borrower qualification) made demand
appear much larger than it actually was, resulting in a lot of
additional developments. The increase in developments also resulted in
all kinds of borrowing for new infrastructure: commercial loans for
stripmalls; muni-bonds for new roads, new water&sewer systems,
etc. The municipalities were assuming that the new bonds would be
covered by a big increase in real-estate collections ... which didn't
materialize when the bubble burst.

so besides all the properties that have been abandoned, where the
cities&towns aren't getting taxes (lots of situations where they were
expecting the revenue to cover new bonds floated for new
infrastructure) ... there are large parts of the country where
real-estate appraisals have dropped by 30% or more. A wide-spread drop
in appraisals of 30% results in a corresponding shortfall in
real-estate tax collections (until they get around to increasing the
tax rate), affecting the ability to cover pre-existing obligations
(salaries, other bonds, services, etc).

In the late 90s we were asked to look at all the ways that securitized
instruments could be perverted ... since they were on the rise again,
after having been used to obfuscate underlying loans during the S&L
crisis. However, this century all we could do was watch ... since
nothing was being done about it.

then there is the item about the Man Who Beat The Shorts, who raised
the issue that securitizing loans and selling them off meant that the
loan originators no longer had to care about loan quality and/or
borrower qualifications:
http://www.forbes.com/forbes/2008/1117/114.html

As previously mentioned, the bubble/crash in the 20s was speculation
in the stock market directly attributable to Brokers' Loans ... this
time the bubble/crash was speculation in the real-estate market
directly attributable to loan originators being able to securitize and
sell off all loans (being able to pay for triple-A ratings helped
immensely). In many ways, speculators were able to treat the
real-estate market similarly to the unregulated stock market of the
20s.

Unregulated loan originators had an unlimited amount of money from
being able to get triple-A ratings on all their toxic CDOs (regardless
of underlying value/quality). Repeal of Glass-Steagall meant that the
unregulated investment banking arms of regulated depository
institutions could buy up trillions in toxic CDOs and carry them
off-balance (while putting the regulated depository institutions at
risk of collapse).

The disastrous effects of the individual pieces had been understood
since the 30s; this time, with lots of individuals playing in
self-interest ... apparently believing their individual
graft&corruption wouldn't be that significant ... the combination
resulted in nearly a perfect storm (aka systemic).

If any one of the pieces hadn't been there ... it could have
significantly mitigated the aggregate effects ... for instance, if
Glass-Steagall hadn't been repealed, it would have cut back on the
money for buying the toxic CDOs (limiting the total amount that would
have been sold, and therefore decreasing the number of such loans that
would have been made).

At the end of 2008, it was estimated that the four largest
too-big-to-fail financial institutions were carrying $5.2T (toxic
CDOs) off-balance ... courtesy of their unregulated investment banking
arms. Early on, one of those institutions had unloaded something like
$60B at 22 cents on the dollar. If the four had been forced to deal
with the $5.2T at that price, they would have been forced to liquidate
and dissolve (there are lots of claims about a large amount of
obfuscation going on because they are actually insolvent).
Bank's Hidden Junk Menaces $1 Trillion Purge
http://www.bloomberg.com/apps/news?pid=newsarchive&sid=akv_p6LBNIdw&refer=home

one metaphor is that there were a lot of regulations keeping the
individual pockets of greed and corruption separated and
damped-down. This century the regulations were repealed, ignored
and/or not enforced, allowing the individual pockets of greed and
corruption to combine into a firestorm.

another metaphor is that all the control rods in the economic reactor
were removed and it went critical with a major meltdown. it may take
the (economic) environment years to recover from the resulting toxic
radioactive mess.

from above:
Markets need regulation to stay stable. We have had thirty years of
financial deregulation. Now we are seeing chickens coming home to
roost. This is the key argument of Professor Nick Bingham, a
mathematician at Imperial College London, in an article published
today in Significance, the magazine of the Royal Statistical Society.

... snip ...

Federal Reserve's 'astounding' report: We loaned banks trillions; The
Federal Reserve offers details on the loans it gave to banks and
others at the height of the financial crisis. One program alone doled
out nearly $9 trillion.

Several times in the past two years ... I've pointed out that with the
Fed lending trillions at near zero percent ... it is relatively
straight-forward to earn hundreds of billions by relending with a
spread of four percent or more (.04 of $10T is $400B profit) ... and
if it was used to buy Treasuries ... why didn't the FED just give the
US treasury the free money.

Also, the rhetoric on the floor of congress was that the primary
purpose of GLBA was: if you were already a bank, then you got to
remain a bank, and if you weren't already a bank you didn't get to be
a bank (this in addition to the repeal of Glass-Steagall and "opt-out"
PII sharing). The FED gave out bank charters to some number of
investment banks (so they could get free money) ... which would appear
to be counter to GLBA.

aka a massive shell game: take free fed money ... buy treasuries
... the interest on the US treasuries easily pays back TARP with
interest. The auto quotas were to reduce competition, significantly
increasing profit to be used to totally remake themselves. when that
didn't happen and the money was spent "business as usual" ... there
was a call for a 100% unearned-profit tax.

--
virtualization experience starting Jan1968, online at home since Mar1970

the above mentions that film ribbons were only used once and could
have security implications ... being able to reproduce what was typed
from old ribbons. the above also mentions the 2741 selectric
terminal. I had one at home from spring of 1970 until summer of 1977
... when it was replaced with a cdi "miniterm" ... some old photos
here:
http://www.garlic.com/~lynn/lhwemail.html#oldpicts

... however, as some of the documents were moved to CMS script ...
more and more had the characteristic of originating on a 1403. One of
the earliest such was the principles of operation:
http://www.bitsavers.org/pdf/ibm/370/princOps/

A major reason for moving the principles of operation to cms script
was that the material was actually a subset of the "architecture
manual" (or "redbook", for being distributed in a red 3-ring
binder). As a cms script file ... it was possible to have
"conditional" indicators bracketing the sections that were only in the
full architecture manual ... and then, depending on how cms script was
invoked, produce either the full architecture manual or the PoP/POO
subset.
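
Not actual cms script syntax ... just a sketch of the mechanism: one
master source with bracketed conditional sections, rendered either as
the full architecture manual or as the principles-of-operation subset
(the ".if ARCH"/".endif" markers are hypothetical):

def render(lines, full_manual):
    out, skipping = [], False
    for line in lines:
        if line.startswith(".if ARCH"):
            skipping = not full_manual   # the subset omits these sections
        elif line.startswith(".endif"):
            skipping = False
        elif not skipping:
            out.append(line)
    return out

source = ["ADD instruction.", ".if ARCH",
          "engineering notes & justification.", ".endif",
          "condition code settings."]
print(render(source, full_manual=False))   # PoP/POO subset
print(render(source, full_manual=True))    # full redbook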

The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET

From: lynn@garlic.com (Lynn Wheeler)
Date: 28 Nov, 2010
Subject: The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
Blog: IETF - The Internet Engineering Task Force

i'm accused of checkpointing everything ... including redundant backup
systems for email ... possibly even implicated in some of the
extensive backup processes that were part of email products for
customers (one possible case that was in the news in the late 80s
involved the congressional investigations into Ollie's email). I did
lose a lot of the stuff from the 60s & 70s that had been replicated on
multiple tapes ... but in the same tape library, during a period when
there was an operational problem of random tapes being mounted as
scratch.

i had gotten blamed for computer conferencing on the internal network
in the late 70s and early 80s (as previously mentioned, the internal
network was larger than the arpanet/internet from just about the
beginning until sometime late '85 or early '86). somewhat as a result,
a researcher was paid to sit in the back of my office for 9 months
taking notes on how i communicated; they also got copies of all my
incoming & outgoing email as well as logs of all my instant
messages. Besides being a research report, the material was also used
for a stanford phd (joint between language and computer ai) as well as
some number of papers and books.

for some trivia ... a list of corporate locations that had one or more
new nodes added during 1983 (when the internal network passed 1000
nodes and the internet was getting past 256 nodes):
http://www.garlic.com/~lynn/2006k.html#8

one of the differences between the internet and the internal network
was that all internal network links had to be encrypted; at one point
in the mid-80s, the claim was that the internal network had over half
of all the link encryptors in the world. one of the big headaches was
dealing with various countries ... especially when encrypted links
crossed national boundaries.

For my hsdt project, i got frustrated with the cost of encryptors
(when available) for higher speed links ... and got involved in
designing my own. At the time i was aware of two classes of crypto: 1)
the kind they don't care about and 2) the kind you can't do; however,
when i was told that i could build as many boxes as i wanted ... but i
couldn't keep any, they all had to be sent somewhere ... i found out
that there is a 3rd kind of crypto ... the kind you can only do for
them.

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Yup. We had a large production program that I'd patch until the
whole thing fell apart - only then would I try to wheedle the 40
minutes of machine time that it took to re-assemble it.

I finally wrote my own assembler. Although it had a number of very
nice features that were missing from the stock assembler, its primary
goal (which I achieved) was to run twice as fast. It was a bit easier
to scrounge 20 minutes of machine time than 40.

I had the machine time ... since I got 48hrs straight every weekend
... it was just that it was usually faster to patch than to
re-assemble.

I had conditional assembly ... one version ran stand-alone ... with
its own device drivers, interrupt handlers, etc., and assembled in
approx. 30 mins ... and the one that ran under os/360 using
open/close, read/write and DCB macros assembled in approx. an hour (on
the 360/30) ... the DCB macros taking 5-6 mins elapsed time each
... it was possible to see in the front panel lights when it had hit a
DCB macro.

the folklore was that the person doing the assembler's opcode lookup
routine had been told that it had to be done in 256 bytes (or some
such) ... so the lookup table was reloaded from disk on each
statement. the assembler got much faster when somebody improved the
opcode lookup (using memory to keep the table loaded).

--
virtualization experience starting Jan1968, online at home since Mar1970

Dataspaces are more painful and probably slower than regular storage
... dataspaces were probably meant to relieve storage constraint.

multiple address spaces started with the "811" architecture (370/xa,
which appeared with the 3081). A subset was retrofitted to the 3033 as
"dual-address space" mode.

os/360 was a heavily pointer-passing paradigm. in the initial morph to
virtual memory, it was basically mvt laid out in a single 16mbyte
virtual address space (as SVS). In the transition to multiple virtual
address spaces (MVS), a copy of the kernel image was mapped into
(half/8mbytes of) every (16mbyte) application virtual address
space. The problem was that there were some number of mvt subsystems
that now found themselves in their own virtual address space
(different from the applications'). In order to support the
pointer-passing paradigm (with subsystems accessing application
parameters), the common segment area (CSA) was created ...
applications could stuff their parameters into the common segment and
generate a subsystem call (that required passing thru the kernel to
switch to the subsystem virtual address space).

At some larger installations CSA grew to 4-5 mbytes (leaving only 3-4
mbytes for application execution), and some installations were facing
the prospect of CSA increasing to 6mbytes (leaving only 2mbytes in
every virtual address space for application execution).

dual-address space mode allowed a parameter pointer to be passed to a
subsystem, and the parameter list could then be accessed in the
application virtual address space w/o requiring CSA (starting to cap
the explosion in CSA size growth).

Some of the larger MVS internal shops were chip-design shops with
large seven-mbyte fortran applications that required carefully crafted
MVS systems keeping CSA to a 1mbyte max. There was a period when some
of these premier internal MVS shops were facing being forced to
vm370/cms ... since there they could get nearly the full 16mbyte
virtual address space for their application execution (some of the
hold-outs were rather odd, since in this period the vast majority of
the internal mainframes were vm370).

64 bit mode disabled

Tom.Harper@NEON.COM (Tom Harper) writes:
I'm not so sure. Many CICS shops have pointed out to me that they are
forced to run hundreds of CICS regions for the simple fact that 2G is
not enough address space to contain all of their programs. This
requires them to spend an inordinate amount of time managing regions
for the sole reason of address space exhaustion.

There is no question that there are other consumers of address space
in the CICS regions, but many of these are buffers which have to be
duplicated in every region, and in any case are not the limiting
factor.

I always thought that the hundreds of CICS regions sprouted up from
when CICS lacked multiprocessor support ... the only way to get
execution on more than one processor was to go to multiple
regions. Then there was the problem of inter-application corruption
... lots of regions was a method of partitioning/fencing off problems
... somewhat akin to having test/sandbox LPARs.

CICS multiprocessor support is relatively recent in mainframe terms
... and it would take major motivation for a large installation to
make any significant change in a production environment.

... disclaimer: I was an undergraduate at a univ. that had gotten an
ONR grant to do a univ. library online catalog ... it used part of the
money to purchase a 2321 datacell ... and was also selected to be a
betatest site for the cics product ... and I got tasked to
support/debug CICS. I found/fixed some CICS bugs related to the
univ. library selecting a different set of BDAM features (than had
been in use at the original CICS site).

--
virtualization experience starting Jan1968, online at home since Mar1970

Several times in the past two years ... I've pointed out that with the
Fed lending trillions at near zero percent ... it is relatively
straight-forward to earn hundreds of billions by relending with a
spread of four percent or more (.04 of $10T is $400B profit) ... and
if it was used to buy Treasuries ... why didn't the FED just directly
give the US treasury the free money.

Also, the rhetoric on the floor of congress was that the primary
purpose of GLBA was: if you were already a bank, then you got to
remain a bank, and if you weren't already a bank you didn't get to be
a bank (this in addition to the repeal of Glass-Steagall and "opt-out"
PII sharing). The FED gave out bank charters to some number of
investment banks (so they could get free money) ... which would appear
to be counter to GLBA.

aka massive shell game: take free fed money ... buy treasuries ... the
interest on the US treasuries easily pays back TARP with interest. The
auto quotas were to reduce competition and significantly increase
profits, to be used to totally remake themselves. When that didn't
happen and the money was spent "business as usual" ... there was a
call for a 100% unearned-profit tax.

in the mid-90s the x9a10 financial standard working group had been
given the requirement to preserve the integrity of the financial
infrastructure for ALL retail payments ... and came up with the x9.59
financial transaction standard. There was a need to do detailed
end-to-end threat & vulnerability studies for the various
environments: debit, credit, stored-value, ACH, POS, internet,
wireless, contact, contactless, high-value, low-value, transit
turnstile ... aka ALL. One of the things done in x9.59 was to
eliminate the PAN as sensitive data ... as well as eliminating
skimming and data-breach exploits as threats (i.e. it didn't do
anything to prevent them, it just eliminated crooks being able to use
the harvested information for performing fraudulent financial
transactions).

Part of the standard clearly differentiated data elements (like the
PAN) needed for standard business processes (at millions of locations
around the world) from authentication. Once there was a clear
distinction between other business processes and authentication
... it was much easier to support a large variety of authentication
mechanisms as part of the same standard (enabling security
proportional to risk, parameterised risk management, as well as
person-centric operation). It was then trivial for x9.59 to work with
institution-issued cards as well as things like a person-centric card
or cellphone ... and for the identical card/cellphone to work w/o PIN
at a transit turnstile and with PIN (or other authentication) for
higher-value transactions.
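
A minimal sketch of the core idea (in Python, using off-the-shelf
ECDSA from the "cryptography" package; the field names and message
layout are made up for illustration, not the actual x9.59 formats):
the transaction carries a digital signature verified against a public
key registered with the issuing institution, so merely knowing the
account number no longer authorizes anything.

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes

    # one-time registration: account holder's public key goes on file
    # with the issuing institution (stand-in for the registration step)
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    # transaction detail can travel in the clear ... the PAN is no
    # longer a secret, just a routing/business-process data element
    transaction = b"PAN=4000123412341234;amount=49.95;merchant=M123;nonce=8271"
    signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # the issuer verifies against the registered key; raises
    # InvalidSignature if altered or signed by anyone else
    public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))

Skimming the PAN (or even the whole signed message) is useless for
creating new fraudulent transactions, since the crook can't produce a
valid signature ... which is the sense in which the exploits are
"eliminated" w/o preventing the data leakage itself.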

earlier in the thread, I mentioned that the first time I sponsored
Boyd's briefing, I attempted to do it through employee education. This
is an old post from 2007 ... where I include a few old emails
regarding/announcing the briefings:
http://www.garlic.com/~lynn/2007c.html#25

SJR is the IBM research facility on the west coast (San Jose) and YKT
is the IBM research facility on the east coast (Yorktown).

Oh, and for a slight tie-in between the Oracle history article and
Japan ... where it mentions that Oracle nearly went bankrupt in
1990. This was in the era when Japan was taking over the world ... the
country had an economic policy to move heavily into information
technologies, and told its "hard industries" that they had to invest
something like 5% of their profits (or it would be taken in taxes; as
lower-paid manufacturing jobs moved off-shore, they had to be replaced
with higher-paying technology jobs). In any case, Oracle had reached
an agreement to be bought by Nippon Steel ... and then it signed a
6000-seat enterprise license with Shell Oil that allowed it to back
out of the deal. There was a significant amount of VC money in the
(US) high-tech industries from "hard industries" in Japan through the
'90s.

--
virtualization experience starting Jan1968, online at home since Mar1970

A small amount of Evidence. (In which, the end of banking and the rise of markets is suggested.)

The Mobile Device Is Becoming Humankind's Primary Tool

Note that the (effectively bottom) 1/3rd of the country is considered
"unbanked". One analysis is that the financial services industry has
extremely high markup, and that the bottom 1/3rd of society's
financial activity falls below the profit margin of the financial
industry.

In the 90s, walmart used that argument for getting into the financial
services business. Walmart is famous for doing end-to-end supply-chain
analysis and significant optimization ... drastically reducing the
cost of doing business ... and was looking at doing something similar
for financial services (making financial services for the "unbanked"
profitable).

As a countermeasure, the financial industry got the Bank Modernization
act passed in 1999; the rhetoric on the floor of congress was that the
primary purpose of the bill was: if you were already a bank, then you
got to remain a bank, but if you weren't already a bank, then you
didn't get to become a bank (specifically calling out walmart and
microsoft). The (GLBA) act also repealed Glass-Steagall (which played
a role in the current financial mess) and established "opt-out"
information/privacy sharing (somewhat as federal preemption of the
"opt-in" california legislation that was in progress).

There are several items in some of the linkedin financial-related
groups regarding other parts of the world leveraging wireless devices
to provide financial services where it wouldn't otherwise be possible
(or profitable). As a result, there is something of an angst in the
financial sector, which has been accustomed to financial transactions
being tied to institution-issued cards. Enabling cellphones and other
devices for financial transactions may contribute to them losing
control of the payment business.

--
virtualization experience starting Jan1968, online at home since Mar1970

Philosophy: curiousity question

jcewing@ACM.ORG (Joel C. Ewing) writes:
I've always felt it was a bad idea to have installation mainframe
documentation too far separated from the mainframe platform itself or
dependent on any other server platforms, under the general premise
that in a DR situation if we have recovered the mainframe we want to
be sure we have access to all documentation needed to operate it.

Some documentation was just kept as monocase or dualcase text files on
MVS, with links from ISPF screens. Before DCF/Script became too
expensive, some large documents were maintained as separate chapters
in DCF SGML using DCF to build text and pdf versions of the
document. Afterwards those SGML documents were converted to use
docbook tools with docbook document source on MVS, building
multi-html, single-html, and pdf version with free tools on a
workstation and then porting various forms either back to MVS or to
media that went off site for DR.

when rexx was very young, internal use only, and before it was
released as a product ... I wanted to demonstrate that rexx wasn't
just another pretty scripting language ... and chose as demonstration
to rewrite IPCS ... the idea was, in six weeks of effort, to
re-implement it from scratch ... making it run ten times faster than
the original (assembler) implementation, with ten times the function.
I somewhat naively believed it would eventually be released to
customers, replacing the existing product ... and since it was during
the start of the OCO-wars period ... it had the added advantage of
needing to ship all the source (since there wasn't any object
code). While it became the standard internally, used by nearly all the
PSRs ... for some reason it was never shipped to customers.

However, one of the features was that I included a softcopy of the
messages&codes (GML) manual ... and did formatted online display of
the related information.

I mentioned (in the above) that something similar was said in a speech
given Nov2005 ... which reverberated in the news around the world. I
got an email that night from the speaker asking if I could find public
sources supporting the statement.

--
virtualization experience starting Jan1968, online at home since Mar1970

TCM's Moguls documentary series

hancock4 writes:
In terms of working with representatives from IBM vs. other companies,
my own experience has been (overall) that the IBM mainframe folks were
the most talented and prepared*. They always have their homework done
and come in ready to roll. The reps from other companies weren't as
well trained.

I was told a similar comment by an independent person that was at the
gov. antitrust trial, regarding all those that testified (in this case
corporate executives as opposed to corporate reps).

--
virtualization experience starting Jan1968, online at home since Mar1970

Outsourcing and COTS

Outsourcing and COTS can be viewed as two facets of the same business
decision: basically, the cost of having an inhouse proprietary
operation (along with its possible competitive benefit) vis-a-vis the
reduced costs of an operation that may be a lot more cost/effective in
a specific area, with high skills shared across a large number of
operations. If the area is a critical competitive area of the
business, then it may justify having a proprietary in-house operation;
however, a lot of the COTS & outsourcing movement is about enormously
reducing costs in areas with less competitive contribution (and better
using the saved money in other areas; this also applies to "wasting"
executive attention in areas that aren't directly a competitive
factor).

The downside is it can result in totally losing control over
organizational operations. A trivial case from a few years ago was an
all-day meeting with a dozen people from a large system
integrator. They had a modest contract with several scores of people
working on it. At the end of the day, there was some consensus that
what they were doing ultimately wouldn't work. However, they had just
started this three-year contract and said they didn't want to leave
the money on the table; possibly they would consider doing something
different when the current contract ran out.

another example: a couple years ago we participated in defining and
pushing an industry implementation, sponsored out of an industry
consortium. after putting quite a bit of work into the effort, the
major industry players decided that the area was a critical
competitive area and they would individually do their own proprietary
implementations.

in the early 90s, when the corporation went into the red and there
appeared to be lots of preparation to break up the different divisions
into separate companies, a lot of internal chip-design tools were
transferred to an external company (a form of COTS).

lots of big software houses, starting in the late 70s, are a form of
COTS ... operations going to (usually) higher-quality implementations
(specialized, higher-skilled resources than could possibly be
justified for a purely internal operation).

Part of the motivation for POSIX was that it theoretically freed
applications from being locked into a particular vendor's proprietary
offering; drastically simplifying being able to move applications
between platforms ... to whatever is most cost/effective at the
moment. TPC benchmarks can be viewed similarly (some assumption about
easily moving an application to the platform with the best benchmark
number).

disclaimer: I know the person that originated the term COTS and played
a large role in the gov. migration to COTS.

hancock4 writes:
In those days there seemed to be a deep schism in the I.T. world--
mainframers and anti-mainframers. People working in the business
world who had to get out paychecks and invoices on time week after
week tended to be more mainframe centric.

But high school and college students and their teachers and hobbyists
tended to be anti-IBM and pro-mini or pro-micro computer. I think
they felt that way mostly since it was easier to purchase or get time
on a mini or micro computer than it was on a corporate mainframe, plus
they could freely experiment which wasn't allowed on the mainframe.
Further, the mainframe architecture dated from 1964 and was oriented
toward batch processing and multi-tasking while they had a different
orientation.

a lot of the mainframe "architecture" was actually specific software
implementations ... as opposed to hardware architecture. some
deficiencies of 360 for interactive and time-sharing use were listed
... as referenced in melinda's discussion of the early days of the
science center and virtual machines (bidding on the followon to ctss,
etc) ... which can be found here:
http://www.leeandmelindavarian.com/Melinda/

part of the educational issue was the litigation (by gov. and others)
in the 60s ... which resulted in the 23jun69 unbundling announcement
(starting to charge for software and other services) ... some posts
here:
http://www.garlic.com/~lynn/submain.html#unbundle

there was also a big cutback in educational discounts ... which saw a
lot of institutions moving to other platforms for a whole variety of
reasons.

in the late 70s ... there was a big jump in the price/performance of
mid-range machines ... which saw a big explosion in vax machines as
well as in (midrange mainframe) 43xx machines. 43xx had similar
volumes to vax in the small orders ... the place where 43xx really
differed was the multi-hundred-machine orders by large corporations.

in large corporations, the mid-range machines were finding their way
out into departmental supply rooms and converted conference rooms
... in part because the datacenters were starting to burst at the
seams (as well as because of the improved price/performance and lower
environmental requirements). this was somewhat the leading edge of
distributed computing. misc. old email mentioning 43xx:
http://www.garlic.com/~lynn/lhwemail.html#43xx

internally, the explosion in 43xx (distributed) machines was
contributing to the scarcity of conference rooms ... as well as a big
explosion in the number of internal network nodes; the internal
network was larger than the arpanet/internet from just about the
beginning until sometime late '85/early '86. recent posts in an IETF
(linkedin) discussion:
http://www.garlic.com/~lynn/2010o.html#83 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2010p.html#9 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2010p.html#19 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2010p.html#24 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET

the 4361/4381 were the next generation in the mid-80s, and similar
explosive growth was expected ... however, by that time the mid-range
was starting to be taken over by larger PCs and workstations ... as
can be seen in this old post of a decade of VAX numbers:
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

the 43xx numbers were similar to the VAX numbers, except for the
addition of the large multi-hundred-machine orders by larger
operations ... like this old reference to AFDS:
http://www.garlic.com/~lynn/2001m.html#15 departmental servers

The move of distributed computing to non-mainframes contributed to a
large amount of data leaking out of the datacenters in the late
80s. The communication division had a large install base of terminal
emulation that it was trying to protect. In the early days of personal
computers, terminal emulation contributed significantly to the uptake
of the machines (you could get an IBM/PC for about the same price as
an already-justified 3270 terminal ... and in a single desktop
footprint do both 3270 terminal emulation and local
computing). misc. past posts mentioning terminal emulation:
http://www.garlic.com/~lynn/subnetwork.html#emulation

The disk division had attempted to bring out a number of products that
would have made the mainframe significantly more friendly to the
distributed computing environment ... but was constantly being shot
down by the communication division (who owned "strategic"
responsibility for everything that crossed the datacenter
walls). Finally, one of the senior disk engineers got a talk scheduled
at the annual, worldwide, internal communication conference ... and
opened his talk with the statement that the communication division was
going to be responsible for the demise of the disk division (because
of the distributed-computing-unfriendly strangle-hold that the
communication division had on the datacenter).

In the early 90s, the disk division was doing a lot of VC investment
in distributed-computing-friendly external corporations ... attempting
to side-step the internal corporate politics with the communication
division ... however, it obviously wasn't sufficient.

The mid-range had already introduced distributed computing ... which
by the mid-80s was starting to move to large PCs and workstations
(which resulted in the 4361/4381 not having the same explosive growth
as the prior 43xx generation). The datacenter was already starting to
feel those effects. However, senior corporate executives were claiming
that the mainframe business would continue to explode, doubling
corporate revenues from $60B to $120B. In that time-frame, executives
had a huge buildout of mainframe manufacturing capacity ... in support
of that projected demand increase. I've mentioned before that, at the
time, it wasn't a very career-enhancing move to be showing that the
mainframe business was actually moving in the opposite direction (kill
the messenger).

A trivial example (of the communication division protecting its
terminal emulation product base) involved the communication division's
16mbit T/R microchannel card for the PS2. The workstation division had
done its own 4mbit T/R card for the PC/RT. The RS/6000 had
microchannel, and the workstation division was directed that it
couldn't do its own cards but had to use PS2 cards. The 16mbit T/R
card had a terminal emulation design point, with 300 or more PCs
sharing the common 16mbit bandwidth ... and as a result it had lower
per-card thruput than the PC/RT 4mbit T/R card.

Another trivial example was the mainframe TCP/IP product, which
effectively used a variation of a communication division controller,
getting 44kbytes/sec while using nearly a whole 3090 processor (very
processor intensive with not very high thruput). I did the RFC1044
enhancements to the product, and in some tuning work at Cray research
got 1mbyte/sec thruput between a 4341-clone and a Cray (basically 4341
channel media speed, using only a modest amount of the 4341
... approx. 500 times improvement in bytes per
instruction). misc. past posts mentioning rfc1044:
http://www.garlic.com/~lynn/subnetwork.html#1044
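
A rough sanity check on the "500 times" (a sketch in Python; the MIPS
figures here are my illustrative assumptions, not measurements):

    # stock product: ~44kbytes/sec consuming nearly a whole 3090 processor
    base = 44e3 / 15e6              # assume ~15 MIPS, fully consumed
    # RFC1044: ~1mbyte/sec using only a modest fraction of a ~1.2 MIPS 4341
    enhanced = 1e6 / (0.5 * 1.2e6)  # assume about half the 4341 consumed

    print(enhanced / base)   # ~570 ... the "approx. 500 times" in bytes/instruction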

--
virtualization experience starting Jan1968, online at home since Mar1970

Language first, hardware second

MitchAlsup <MitchAlsup@aol.com> writes:
The RISC generation of machines were primarily designed to execute the
<semi>portable assembly language known as C. To this end, they execute
C well. In this context, every name and every operator are supposed to
be represented in the final assembly code as an instruction. Due to
the way C-pointers have been defined, and then due to the way arrays
are mapped onto pointer arithmetic, C has an unsolvable aliasing
problem, and to a large extent this precludes optimizing the resulting
code (much).

slight nit ... the original generation of (801) RISC (in the 70s) was
designed to execute PL8. There were explicit statements that the lack
of 801/RISC sophistication would be compensated for by sophisticated
optimization in PL8. There were also other hardware/complexity
trade-offs that were supposed to be compensated for by the compiler
and the (closed) operating system (CPr) ... one such was having no
(hardware) protection domain.

the later migration of (801) risc to unix ... involved having to add
things like a hardware protection domain (between kernel/supervisor &
application) as well as moving a lot of the PL8 compiler optimization
to C (and/or doing a C front-end to the PL8 backend).

the company tried to come back into the education market starting in
the 1st part of the 80s with ACIS (academic information systems)
... it started out with something like $300m that it was supposed to
distribute to academic institutions. $25m went to mit project athena
(matching $25m from DEC) and $50m went to cmu project andrew. out of
athena came several efforts, including X and Kerberos (authentication
technology on several platforms, including underlying windows). andrew
produced the andrew filesystem, camelot, and mach. mach was picked up
by a number of platforms ... including NeXT ... which then went on to
be used at Apple.

From UCLA, they picked up Locus (another unix-like) and morphed it into
aix/370 and aix/386.

for the fun of it, random reference to writing it as PL8 ... from long
ago and far away:
Date: 07/05/79 10:22:40
From: wheeler
To: somebody in endicott microcode

attempting to convert to release 6. Ran DELTA between release 5 and
release 6 and came up with an UPDTR6 update that when applied to
release 5 turns it into release 6. We are now going thru everything in
the system to merge UPDTR6 under all our updates (and COMMON and
SEPP).

We seem to have the engineering labs running reasonably well. SJRL
replaced the VM 158 with a 168 about 4 weeks ago and the Engineering
labs are scheduled to replace one of the VM 3031s with a 3033 next
month.

When we get release 6 somewhat stable I will be getting more
serious about decompiling CP source. I would like to take a module
thru a round trip back into a TEXT deck and attempt to merge it into
the production system. The PL8 and PASCAL compilers don't appear like
they will be ready for that soon. I guess I will attempt it with PLS3
(at least for one module) to see how much manual work will be
involved. I still haven't been able to create DO WHILE structures but
I do have IF/THEN/ELSE down.

I still haven't obtained a copy of CMS BSEPP. When I get a copy,
I will have to see about modifying the new file system to support PAM
disks. It is time to start work on shared nucleus subsystem extension.

The IF/THEN/ELSE reference was to a PLI program I had written that
read 360/370 assembler listings and tried to generate a program that
resembled PASCAL. The issue was whether I could take production
software, run it thru decompile, compile the results, and then have it
operate correctly.

This was in the days when the corporation was looking at replacing
internal microprocessors with 801/Iliad (including low & mid range
370s, the as/400 followon to s/38, lots of others). I had previously
worked with the Endicott microcode group moving parts of the vm370
into microcode ... ECPS ... some old reference here:
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

The issue in this case ... was to at least semi-automate some of the
process by converting assembler kernel routines into PL8 and then
having PL8 generate the (801) "microcode" directly. misc. past posts
mentioning 801, risc, iliad, romp, rios, power, etc:
http://www.garlic.com/~lynn/subtopic.html#801

I had originally done paged-mapped filesystem support in cp67/cms,
which included things like being able to share pages/segments based on
the image of records (usually executable) in the filesystem. The
latest level of CMS introduced a mechanism that allowed dynamically
adding to the CMS nucleus/kernel ... the reference above was to
supporting a shared subsystem extension. past posts mentioning the
paged-mapped filesystem:
http://www.garlic.com/~lynn/submain.html#mmap

--
virtualization experience starting Jan1968, online at home since Mar1970

TCM's Moguls documentary series

Stan Barr <plan.b@dsl.pipex.com> writes:
I use an Epson Perfection V350 which works well, but you do have to
sit by it and feed slides manually (it has an automatic negative feed
though). Cheap-ish and cheerful. Not tried it under Linux, but I'm
told there's a working driver. Post-processing is done with the Gimp,
of course...

on the other hand ... I've mentioned before that one of the
major/national financial institutions had outsourced Y2K remediation
of some of its core financial systems (atm transaction backend) to the
lowest bidder ... later they found out it was a front for a criminal
organization (the crooks possibly felt that they could have even paid
the bank for the privilege ... since they were planning on making it
up with fraudulent financial transactions).

--
virtualization experience starting Jan1968, online at home since Mar1970

Here's my bet: the ease of this overall approach and the lack of real
good security alternatives (firewalls & SSL, anyone?) means there will
be a pause, and then the professionals will move in. And they won't be
caught, because they'll move faster than the Feds. Gonzalez was a
static target, he wasn't leaving the country. The new professionals
will know their OODA.

... snip ...

I had made some quip about possibly infecting some of the financial
community with OODA-loops (especially those dealing with information
security and fraud countermeasures). I got a private response back
bemoaning the fact that there hasn't been more uptake of OODA-loops in
the gov. agencies that also deal in such stuff.

the above references that there was a speech in nov2005 in the middle
east that made a similar statement and reverberated around the world;
that night I got an email from the person that made the statement,
asking if I could find public sources supporting it. Nearly all
law-enforcement-related sites around the world had drug crime
information publicly available ... but every one of them had
cybercrime information requiring authorized access. It was an
interesting task to turn up cybercrime data.

Some amount of the OODA-loop references concern the ongoing race
between those putting up defenses and the attackers. Part of the
problem is that many of the countermeasures that have been done are
strictly point solutions; my analogy is defenders in a valley with
little cover while the other side holds all the high ground (the
financial cryptography reference is that the crooks have been
easily/constantly out-maneuvering the defenders).

Lots of the account fraud (involving ACH, credit card, debit card, ATM
transactions, etc) involves something I call the "dual-use paradigm"
... the account number is used for dozens of business processes at
millions of locations around the world, while at the same time just
knowing the account number is sufficient authentication.

Having done "electronic commerce" for a small client/server startup
(that had also invented "SSL") in the mid-90s, I was asked to
participate in the x9a10 financial standard working group, which had
been given the requirement to preserve the integrity of the financial
infrastructure for *ALL* retail payments. The result was the x9.59
financial standard, which eliminates nearly all existing account-fraud
exploits (skimming, data breaches, etc). misc. x9.59 financial
standard refs:
http://www.garlic.com/~lynn/x959.html#x959

for retail POS card transactions ... financial institutions charge the
merchant for fraud ... plus a profit margin. As referenced in the
above, some large US financial institutions have had 40-60% of their
bottom line coming from those fees. Eliminating that form of fraud
potentially reduces those fees by an order of magnitude, with a
corresponding hit to the banks' bottom line (by comparison, comparable
European banks get less than 10% of their bottom line from such
fees). The other downside is that with the elimination of POS/retail
fraud, the crooks are likely to move to fraud involving opening new
accounts, which is strictly a bank issue and can't be charged off on
the retail merchants (at a profit).

note that the original SSL/electronic-commerce work had some
requirements for how SSL was deployed and used at merchant websites
... those requirements were almost immediately violated ... opening
the way for lots of the existing online fraud. in any case, SSL only
really hid the transaction information while it was being transmitted.

the x9.59 standard work eliminated harvesting/skimming of card detail,
transaction detail, and account numbers as vulnerabilities. the major
use of SSL in the world today is the earlier stuff we had done for
electronic commerce, hiding transaction detail ... x9.59 eliminates
the need to hide such detail and so eliminates the major use of SSL in
the world today (in addition to eliminating the financial fraud threat
from the majority of data breaches).

part of the OODA-loop combat between the defenders (security vendors)
and attackers (crooks) has been the various point solutions like the
signature-based anti-virus paradigm ... with the attackers constantly
coming up with new ways around the defenses.

Much of the current desk/home network computing was adapted from
closed, safe, small business/office LANs. Lots of automated scripting
features were added to all sorts of business applications ... which
helped provide visually appealing additions in the safe
business/office LAN environment.

The spring '96 MDC at SF Moscone had all sorts of banners about
supporting Internet ... but the constant theme in every session was
"protecting your investment". The "investment" was large body of VB
programmers and the scripting they had written for the safe/closed
small business LANs. Taking that network paradigm and plugging it into
the internet would be like pushing somebody out the airlock into deep
space w/o spacesuit (a metaphor that I've frequently used in the past)
... no countermeasures for the hostile anarchy of the internet.

Later ... before he disappeared, he con'ed me into interviewing for
the position of chief security architect in redmond ... the interview
went on over a few weeks ... but we could never come to terms. A
reference to his disappearance.
http://www.garlic.com/~lynn/2008p.html#27

--
virtualization experience starting Jan1968, online at home since Mar1970

long ago and far away ... a large premier research organization made
some public statement about their virtual machines predating the
science center's implementations (cp40, cp67, etc). it turns out they
were referring to one of the other flavors of virtualization.

Note that the commercial time-sharing service bureau CP67 spin-offs
(the 60s&70s genre of cloud computing) would go on to severely
restrict the virtual machine capability for their general
customers. In that sense, cp67 became somewhat a microkernel platform
for delivering online services (as opposed to strictly virtual
machines).

Hennessy was the thesis adviser, and there was a big brouhaha in the
academic community over a PhD thesis that looked at global LRU (as
opposed to the true religion). for awhile the exchanges from both
sides were up on the net. a more recent post giving some additional
background:
http://www.garlic.com/~lynn/2010f.html#85

--
virtualization experience starting Jan1968, online at home since Mar1970

Which non-IBM software products (from ISVs) have been most significant to the mainframe's success?

support for hasp/jes2 had moved to gburg (crabtree went to gburg)
... the initial conversion of MVT to virtual memory was SVS. my wife
did a stint in the gburg group ... one of the things was "catcher" and
documentation for ASP turning into JES3. She also worked on JESUS
... the JES Unified System ... which would have merged JES2 & JES3
... but there was too much politics. a couple recent JESUS refs:
http://www.garlic.com/~lynn/2010l.html#61 and
http://www.garlic.com/~lynn/2010n.html#3

Simpson went to Harrison, where he had a group doing RASP ... a "new"
virtual memory operating system done somewhat from scratch. He then
left and went with Amdahl in Dallas ... redoing RASP from scratch.
There was some litigation involved, verifying that it was totally
"clean room" with no copied code. Some recent RASP thread:
http://www.garlic.com/~lynn/2010i.html#44

CPS was done by the boston programming center on the 3rd flr of 545
tech sq (the science center ... which did virtual machines, cp40,
cp67, invented GML, and a bunch of interactive stuff ... was on the
4th flr). Much of the CPS implementation was done under contract by
Allen-Babcock ... recent joint post with a.f.c. & ibm-main (with
various references ... including a ptr to a scan of an Allen-Babcock
CPS document):
http://www.garlic.com/~lynn/2010e.html#14

the above also mentions the trivia that Jean Sammet was a member of
the boston programming center on the 3rd flr.

The CP67 development group split off from the science center and moved
to the 3rd flr, absorbing the boston programming center (some of the
people did a flavor of CPS that ran on cms). The group was growing
rapidly working on the morph of cp67 into vm370 ... and eventually
outgrew the 3rd floor ... that was when they moved out to the old SBC
bldg. in burlington mall.

Lots of univ. were convinced to get 360/67s to run tss/360 ... when
tss/360 ran into all sorts of problems ... some number of univ. just
dropped back to running in 360/65 mode with os/360, while others
installed cp67.

from above:
With Assange's next release apparently targeting Bank of America,
traders fear a subprime lending scandal will be exposed. Charlie
Gasparino talks with someone who has read the leaked files.

Ratio of workers to retirees

Baby boomers are four times the previous generation and nearly twice
the following generation (that is why they are called baby
boomers). As the baby boomer population bubble moves thru the economy,
there are all sorts of side-effects. During their peak earning years
it is relatively easy to siphon off money to pay for the prior
generation's retirement. As baby boomers move into retirement, the
ratio of workers to retirees decreases by a factor of eight (ignoring
all other effects).

It wasn't 8 times as many people in the boomer generation ... it was
that the boomer population bubble was 4 times the previous generation
and nearly twice the following generation. The change is in the ratio
of baby boomers in prime working years to the previous (retired)
generation, compared to the ratio of the following generation in prime
working years to retired baby boomers: baby boomers increase the
retirees by a factor of four (since there are four times as many of
them), while the number of workers in prime working years is nearly
cut in half (since there are only half as many in the following
generation) ... a factor-of-eight change in the ratio of workers in
prime working years to retirees.

note that the side-effects of the baby boomer population bubble aren't
just the general ratio of workers to retirees changing by a factor of
eight ... things like the ratio of geriatric health care workers to
retirees are also likely to change by a factor of eight (four times as
many retirees, half as many workers).

the following generation is actually closer to 2/3rds ... calling the
baby boomer population bubble "nearly twice" is simplified math for
the change in the worker:retiree ratio ... the actual reduction is
more like six times (rather than 8). One conspiracy theory is that the
lobbying to effectively ignore illegal immigrant volume over the last
25yrs is an attempt to backfill as the baby boomer bubble moves thru
the economy.
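
The ratio arithmetic, worked out (a sketch in Python; generation sizes
are the stylized figures from above, relative to the pre-boomer
generation):

    previous, boomers = 1.0, 4.0

    for following in (2.0, 4.0 * 2 / 3):    # "nearly twice" vs the closer 2/3rds
        while_boomers_work = boomers / previous      # workers per retiree: 4.0
        after_boomers_retire = following / boomers   # 0.5 or ~0.67
        print(round(while_boomers_work / after_boomers_retire, 2))   # 8.0, then 6.0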

jmfbahciv <See.above@aol.com> writes:
Sure, this will work until the first politician begins to think
in single-term election years. Debt next term will be someone
else's problem. Your assumption is that people will care about
5 years from now. Once this "short-term" thinking starts,
people will rationalize away all long-term thinking. If you want
an example that's occurring now, look at the US in banking,
insurance, and Congress.

the short-term "personal" gain spike over the past decade was of such
magnitude ... that it appeared to totally swamp any concern about the
downside to their institution, the economy, and/or the country.

some of the examples were recent reverse-IPO buyouts ... with the cost
of borrowing at near zero ... they could borrow for the buyout
... while taking big commissions and fees ... then borrow more on the
company books ... for their own "reward" ... and then re-IPO the
company ... with the new company taking on all the "debt" incurred
during the process. Rather than supporting the claim that the process
creates value ... it showed how the process just sucked enormous
wealth out of the infrastructure ... going to a few individuals.

--
virtualization experience starting Jan1968, online at home since Mar1970

Patrick Scheible <kkt@zipcon.net> writes:
The FDIC didn't originally insure the bank, it insured the
depositors. If the banks' investments went sour, the bank would be
closed and the executives would be looking for work and not get their
bonuses, just the depositors would get their money back.

The Great Cyberheist

small x-over with this reference by Fred Leland
http://redteamjournal.com/2010/11/death-by-a-thousand-cuts/

the scenario is that enormous resources are being devoted to a huge
vulnerability landscape ... a lot of which could actually be
eliminated from the playing field ... with a possible explanation (for
why it isn't) being that there are lots & lots of large stakeholders
with a vested interest in the status quo.

--
virtualization experience starting Jan1968, online at home since Mar1970

Note that DTCC is referenced in the wiki article on (illegal naked)
short selling ... there has been legal action trying to get DTCC to
release transaction details (which apparently could be used to prove
instances of illegal naked shorting) ... this does show up in the DTCC
wiki article:
https://en.wikipedia.org/wiki/Depository_Trust_&_Clearing_Corporation

Before NSCC & DTC merged, I had been asked by NSCC to look at
improving the integrity of transactions ... but after working on it
for some time, the effort was called off ... with some references that
my work on improving transaction integrity would have the side-effect
of increasing transaction transparency.

some background to the above ... somebody from NSCC had been attending
the X9A10 financial standards meetings (which resulted in the X9.59
financial transaction standard); they were interested in "internet
trades" ... including internet financial transactions as part of
settlement. I was then invited into NSCC to look at doing something
analogous for trading ... as had been done in X9.59 for financial
transactions. As mentioned in the above, it was then called off, with
references that a side-effect of the integrity work would have been
visibility and transparency.

I just finished the free kindle sample version of "Knowledge Creating
Company" (x-over from the Compressing the OODA-Loop discussion)
... one of the areas it dwelt on was economic market knowledge
... there is significant effort that goes into maintaining a lot of
smoke (obfuscation) on the playing field (lack of visibility and
transparency).

--
virtualization experience starting Jan1968, online at home since Mar1970

in addition to the other descriptions ... there was a snippet about
the commodities market not allowing play by those w/o a significant
position in the commodity (a rule basically also from the 30s, like
Glass-Steagall, which separated safe, regulated depository
institutions from risky, unregulated investment banking). speculators
(w/o significant position) resulted in wild, irrational swings in
prices ... which then happened again recently, after there were 19
"secret" letters that allowed specific speculator entities to play.

Line printers still in use on mainframe-class systems?

hancock4 writes:
A few years ago I ordered something via the Internet. When it arrived
I was surprised that the packing slip was an old fashioned custom form
(sprocket feed, with carbons) and printed by a line printer. I
figured a firm using the Internet to take orders would have modernized
the print end, too. But the form was pre-printed in colored lines
which made it contrast well with the variable information the computer
generated--much easier to read than the monochrome generated by laser
printers.

some number of internet companies are pure shells ... everything is
outsourced/subcontracted to other (frequently pre-existing) entities
(back room, order fulfillment, transaction processing, possibly even
the hosting and operation of the webserver).

this was a major problem for acquiring credit card operations. the
issuing (customer) financial institution takes liability for the
customer, and the acquiring (merchant) financial institution takes
liability for the merchant (and there are contractual obligations
between issuing and acquiring financial institutions).

many "internet" merchants had/have effectively no assets, being purely
shell operations ... so there is little or nothing that the acquiring
institution can seize in case of the merchant defaulting (to cover a
possible 4-8 weeks of credit card receipts).

in the early days of e-commerce, it was a major problem getting
acquiring banks' risk management departments to approve signing up
"internet" merchants (that lacked any assets).

--
virtualization experience starting Jan1968, online at home since Mar1970

Lars Poulsen <lars@beagle-ears.com> writes:
Railroads are extremely capital intensive, and by law everything must
be depreciated over 30 years (in the US). This does tend to make
(smart) management take a longer view of how things are run. There is
no point in trying to run a purchasing department that squeezes the
suppliers too much: You need your suppliers to stay in business so
they can fix the stuff you bought 20 years from now.

in the 50s & 60s ... out west, I remember seeing track maintenance
crews come thru every summer. on the east coast in the 70s, I remember
seeing tracks that hadn't had any maintenance for possibly 20yrs;
there was some article that, possibly starting in the late 50s, some
of the east coast train operators were deferring track maintenance and
using the money for executive bonuses and stock dividends (one or two
yrs possibly didn't make a lot of difference ... but it builds up
quite a deficit if done for decades). outside boston there was a
stretch of track near acton ... where the tracks were so bad that the
freight train speed limit was 5MPH and it was still known as the
boxcar boneyard ... because of the large number of derailments.

part of the change in the east was possibly the interstate system
... making heavy trucking more competitive with the railroads ...
effectively heavy trucking got its "track" maintenance nearly for
free.

TCM's Moguls documentary series

jmfbahciv <See.above@aol.com> writes:
But they are going to rich (I'm not using the Democrats' definition of
rich) for a moment. When the nation goes bankrupt, its money is worthless.
So what if gold is $1000K/ounce? You won't be able to buy your annual food
for an ounce of gold.

there is a hostile scenario ... an organization with a significant
amount of money could use 90% of it to buy up a large amount of stock
... and then start dumping the stock ... precipitating a stock market
crash. then it could use the remaining 10% to pretty much buy
everything. the modeling question is what's the minimum "significant
amount of money" needed to play the game.

the cyclic bubble conspiracy scenarios are related ... possibly not
quite the extreme highs & lows ... but basically a pump&dump play
... on a much larger scale than the penny-ante players at the low end
(that sometimes show up in the news as criminals). pump&dump is sort
of the reverse of the illegal short sales referenced in a previous
post:
http://www.garlic.com/~lynn/2010p.html#50 TCM's Moguls documentary series

a different scenario is that non-friendly foreign govs., holding so
much of the country's debt, wield significant power.

the "congress is the most corrupt institution on earth" scenario
... would then conclude that most of the political party rhetoric is
just distraction for the populace ... akin to roman games.

enron then happened ... and congress responded with sarbanes-oxley ...
but even what regulation there was appeared to have no enforcement;
witness the Madoff congressional testimony by the person that tried
for a decade to get the SEC to do something about the Madoff ponzi
scheme.

the whole institutional infrastructure is involved ... political party
rhetoric is just there to misdirect the public (a facade of any
substantial difference between the parties).

jmfbahciv <See.above@aol.com> writes:
Of course. The Democrat leadership since the late 80s has been trying
their damnedest to open the gates to the barbarians while we watch
those games. Even in the 80s, some of them were doing things which
I would judge as treason in their attempts to "win" against the
Republicans.

but with no substantive difference between the parties ... the party
politics is just part of the misdirection & distraction for the
populace (a purely staged facade). the jaundiced view is that when the
parties ratchet up the rhetoric in public ... one wonders what is
actually going on in the backrooms (the more the populace gets
involved in the public party political rhetoric ... the less they are
likely to pay attention to what is actually happening).

the above references an earlier post about the effective net "costs"
of illegal aliens to the country; something like $10k/annum/illegal
... which can also be considered a per-head annual subsidy to the
businesses that employ them:
http://www.garlic.com/~lynn/2007i.html#18 John W. Backus, 82, Fortran developer, dies

past posts/threads here in a.f.c.

a side-point is that, with all the periodic rhetoric on the subject
... I couldn't find that congress had bothered to ask GAO to do any
updates on the subject (since 1995) ... allowing lots of hot air and
rhetoric, with nobody really wanting it based on actual facts.

From OODA to AAADA

I always considered that orientation included anticipating (a
variation on situation awareness), i.e. part of integrating the latest
observation into the context of what is happening (and whether the
latest observation corresponds with earlier expectations).

I had done something similar in the 60s & 70s ... with dynamic
adaptive computer resource control algorithms ... including
calculating the difference between the actual observation and the
previous prediction.

Orientation includes anticipating as well as adjusting based on
whether the earlier anticipation corresponds with actual observation
... potentially requiring real-time adjustment of whatever basis is
being used for the anticipating (any assumptions about the environment
may turn out to be completely wrong).

I've periodically repeated the joke I played on the community w/regard
to dynamic adaptive operation and language/context. I had done all
this work on dynamic adaptive algorithms, and prior to one of the
product releases ... some hdqtrs type did a review and concluded that
all the "modern" resource control algorithms included a large number
of human-settable parameters (to adapt the algorithm to specific
environments/workloads). My counter was that I had spent a decade
implementing automated processes that eliminated the need for such
stuff ... which fell on deaf ears (in most cases, environment and
workload would dynamically change in real time ... which statically
set parameters wouldn't deal with).

So I put in a number of human-settable static parameters and published
a detailed document on the formulas and description of operation,
along with all the code. In the area of "Operations Research" there is
the notion of "degrees of freedom" with regard to control parameters.

Most low-level/kernel computing has a (computer) language and an
orientation of "state" at any particular time ... frequently ignoring
how things are changing over time (the 4th dimension). It turns out
the "degrees of freedom" allowed the static settable parameters were
less than the "degrees of freedom" allowed the dynamic adaptive
operation (in effect, the dynamic adaptive operation would always be
able to compensate for any statically set parameter. The dynamic
adaptive operation included calculating the difference between the
most recently observed and the previously predicted, as part of
orientation).
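
A toy illustration of the "joke" (a sketch in Python, not the actual
algorithms): whatever value the human-settable static parameter is
given, the per-cycle adaptive correction has enough freedom to absorb
it, so the setting ends up not mattering.

    static_bias = 0.30          # any human-settable value at all
    target = 0.75               # desired utilization
    correction = utilization = 0.0

    for cycle in range(50):
        control = static_bias + correction   # static knob plus adaptive term
        utilization = 0.9 * control          # toy model of system response
        # fold the miss back in; this term ends up canceling static_bias
        correction += 0.5 * (target - utilization)

    print(round(utilization, 3))   # ~0.75 regardless of static_bias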

Over period of a couple decades ... nobody appeared to have ever
realized the "joke" ... potentially because most of the people that
dealt with the subject were so focused on state at any particular
point in time ... and "change over time" wasn't part of their world
view.

Tactical orientation may be more concerned with the other side trying
to obfuscate what is going on (with observation/orientation attempting
to divine/overcome the fog). Strategic orientation should at least
include testing whether the assumptions are valid (in part, do
previous predictions correspond with what is actually being
observed?). The tactical case can involve problems with both
observation and orientation ... the particular strategic case can be a
deficiency in orientation. Agile adaptation needs to recognize that
the viewpoint can be wrong (possibly unduly influenced by past
experience) and/or that things are actually changing (or both).

--
virtualization experience starting Jan1968, online at home since Mar1970

Peter Flass <Peter_Flass@Yahoo.com> writes:
That's what they said about hedge funds, too, and it seemed to make
sense. The hedge fund basically took the other side of a bet -
betting a price would go down when everyone else assumed it would go
up, and vice-versa. This should have led to price stability. It
didn't.

i.e. supposedly the justification for only allowing those with a
significant position in the commodity to play in the commodity futures
market (as a countermeasure to wild, irrational price swings). The
point was that after the 19 "secret letters" allowed speculators to
play in the market ... it radically affected commodity price
stability, with irrationally wide swings.

the other scenario is something like "portfolio churning" and/or
gambling houses stacking the deck ... not particularly caring about
any particular win/loss ... but because of the way the market is
slanted ... the insiders will always win ... as long as they can keep
everybody else playing.

part of the speculation's wild swings is heavy leveraging at up to
100:1 ... where a 1% adverse change can precipitate a wipe-out of the
original investment (resulting in various cascading effects), while a
relatively minor upswing can create a thousand-percent ROI.
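
The leverage arithmetic (a sketch in Python; the 100:1 figure is from
above, the moves are illustrative):

    equity = 1.0
    position = 100 * equity           # 100:1 leverage

    for move in (-0.01, +0.10):       # 1% adverse vs 10% favorable move
        roi = position * move / equity
        print(f"{move:+.0%} move -> {roi:+.0%} ROI")
    # -1%  -> -100% ROI (original investment wiped out)
    # +10% -> +1000% ROI (the thousand-percent upside)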

--
virtualization experience starting Jan1968, online at home since Mar1970

TCM's Moguls documentary series


Quadibloc <jsavard@ecn.ab.ca> writes:
A few of IBM's manuals were typeset conventionally, like System/360
Principles of Operation. Many were reproduced from printout on a TN
print train.

most of the manuals started out normally typeset ... there was some
transition with cp67 moving documents into (cms) script (the cp67/cms
morph of ctss runoff) ... with output to a 1403 TN print train used as
the master for printing (usually using "film" ribbon rather than
standard fabric ribbon). A number of "smaller" documents were done
using a 2741 as output (instead of a 1403, using a selectric "film"
ribbon). recent post mentioning film ribbon (and the 3800 laser
printer):
http://www.garlic.com/~lynn/2010p.html#18 Rare Apple I computer sells for $216,000 in London

principles of operation was one of the first (outside the cp67 group)
to move to cms script, (as mentioned before) because it was a subset
of the "architecture manual". once moved to cms script ... the full
document was the architecture "red book" (named for being distributed
in a red 3-ring binder). using script command-line options, it was
possible to print only the "principles of operation" subset (w/o all
the internal-only stuff ... like engineering notes and/or discussions
of things like alternatives and trade-offs). recent post mentioning
the "red book":
http://www.garlic.com/~lynn/2010b.html#11 Happy DEC-10 Day

TCM's Moguls documentary series

Peter Flass <Peter_Flass@Yahoo.com> writes:
You ai'nt kiddin. Here in New York we recently had a near bankruptcy
by the New York State Racing Association, that runs Aqueduct, Belmont,
and Saratoga. Just last week New York City Off-Track Betting just
went broke and had to shut down. How the heck could gambling
operations go bankrupt? The house *always* wins. Obviously the pols
were using these organizations as their personal piggy-banks.

jmfbahciv <See.above@aol.com> writes:
FYI, last night, Coast to Coast was advertising that tonight's guest will
explain what happened and "name names". Based on previous shows
with this guest, none of it will be true and all will probably be
"justified" by referring to "scenitists". However, it will give a clue
about which direction these talk show people want to point the next
riot.

the federal reserve had been fighting court orders for the release of
information about how it was aiding the financial institutions
... well beyond anything in TARP. I've mentioned numerous times over
the past couple of years that this was somewhat sparked when they were
going to take the $700B in TARP funds and buy the toxic CDOs being
held off-balance ... but they then found out that just the four
largest too-big-to-fail financial institutions were carrying something
like $5.2T off-balance (the appropriated TARP funds would have barely
been able to make a dent in the real problem).

actually there are quite a few wallstreet stories (not so humorous)
about GIGO ... the risk departments and business-modeling people were
repeatedly told to change the inputs ... until the business people got
the results they wanted (GIGO). in this case, pointing the finger at
modeling has been used as misdirection, to obfuscate what was really
going on (in large part, the bonuses, fees, commissions, etc for the
business people on the transactions swamped any possible consideration
of the side-effects that the transactions might have on the
institution, economy, and/or country).

TCM's Moguls documentary series

jmfbahciv <See.above@aol.com> writes:
"Us" won't have any because there won't be grocery stores. It will
take over a year for people to grow their annual food and preserve it.
And that's assuming those people will know how to do that kind of work;
most don't.

I remember stories in the early 70s about people on social security
being forced to buy canned dog food at the grocery store ... and the
realization that they had no idea about food value (canned dog food
being extremely bad food economics). past post mentioning it being an
ignorance issue:
http://www.garlic.com/~lynn/2006p.html#20 news group maintenance: SPEC CPU2006 announced

--
virtualization experience starting Jan1968, online at home since Mar1970

Bernd Felsche <berfel@innovative.iinet.net.au> writes:
The main difference between small business and a corporation is that
those running the small business put their "sweat" into making their
customers happy. Corporations are run by people obsessed with the
bottom line and maximising income by screwing the customers in more
ways than described in the Karma Sutra.

or executives just fiddling the corporation's financial statements to
spike their compensation:

from above:
The database consists of two files: (1) a file that lists 1,390
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
July 1, 2002, and September 30, 2005, and (2) a file that lists 396
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
October 1, 2005, and June 30, 2006.

from above:
Moody's said Monday that it would consider downgrading its triple-A
rating for US Treasury Bonds if Washington continues to pile up record
deficits. The move would make it significantly harder for the US to
finance its debt by borrowing from other countries.

... snip ...

during the fall2008 congressional hearings into the role that the
rating agencies played in the economic mess (selling triple-A ratings
on toxic CDOs when they knew they weren't worth it), there was
speculation that the rating agencies could blackmail the fed gov. into
not taking any punitive action, with the threat of a rating downgrade.

there have been a number that have gone thru reverse-IPO and are now
in the process of being re-IPO'ed ... after sucking out as much value
as possible ... aided by the flood of cheap money over the last
decade:
http://www.garlic.com/~lynn/2010p.html#45 TCM's Moguls documentary series

origin of 'fields'?

Anne & Lynn Wheeler <lynn@garlic.com> writes:
this doesn't even take into account baby boomers living longer,
requiring more tax collections. it also doesn't take into account that
the following generation has lower education, lower skills, and faces
better-educated foreign competition (so total, inflation-adjusted
taxable income, rather than half ... because of half as many workers,
may only be 1/4th). the combination of factors may require further
uplift of the tax rate to meet required tax collections ... say to
300%.

on a tv business news show just now: china spends as much on
supplemental education as it spends on housing and transportation
... compared to the US, which spends 25 times more on housing and
transportation than it does on supplemental education.

stock markets in the US, europe, and japan used to account for more
than 90% of total world markets ... that is now down to almost 60% and
heading for 1/2.

I've seen it go both ways ... a huge amount of information, exceeding
what a single person can process ... versus the line about a camel
being a race horse designed by committee (possibly attempting to
address too many objectives and viewpoints).

i gave a talk on a chip design i had done at the assurance panel in
the trusted computing track at the intel developer's forum. the
person running the trusted computing chip design was sitting in the
front row ... so i quipped (in part because of the extreme KISS of
the design) that 1) it could (also) meet all the (important)
requirements for TPM and 2) it was nice to see that the TPM chip
design had begun to look more and more like my chip. The guy running
TPM quipped back that I didn't have a committee of 200 people helping
with my chip design.

From OODA to AAADA

I mentioned that I really liked feedback loops ... because I started
doing them as an undergraduate in the 60s as part of dynamic adaptive
computer resource management algorithms ... old Boyd post from 1994
mentioning it:
http://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long post warning

in any case, one of the things I had to do in the 60s was make
changes all over the system to instrument the measurements for the
"observation" input into the feedback controls ... and then make
another pass all over the system eliminating all sorts of implicit
resource control decisions ... making them explicit and integrating
them all into the dynamic adaptive mechanism.

this post mentions that, as part of the whole thing, there were
predictions about what effect any change in control should have
... and then calculations the next cycle to check whether what was
predicted to happen corresponded with what did happen (earlier in
this thread)
http://www.garlic.com/~lynn/2010p.html#56 From OODA to AAADA

lots of feedback implementations are somewhat more free-wheeling
... they don't even bother to check whether the expected effects of
changes in previous cycles actually correspond with the observed
results.
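
for illustration, a minimal sketch (python) of a feedback cycle that
records its prediction each cycle and then checks it against the
observed result on the next cycle ... the observe() placeholder, the
linear adjustment model, and the gain heuristics are all assumptions
for illustration, not the original 60s implementation:

import random

def observe():
    # placeholder for the instrumentation that was spread all over
    # the system; here just a random utilization reading
    return random.uniform(0.0, 1.0)

def control_cycle(cycles=10, target=0.75):
    control = 0.5        # current control setting
    gain = 0.1           # how aggressively to adjust
    predicted = None     # prediction recorded on the previous cycle
    for _ in range(cycles):
        actual = observe()
        # check the previous cycle's prediction against what did
        # happen; if the model was badly wrong, trust it less
        if predicted is not None:
            if abs(predicted - actual) > 0.2:
                gain *= 0.5
            else:
                gain = min(0.1, gain * 1.1)
        # adjust toward the target and record the effect the
        # adjustment is expected to have on the next observation
        adjustment = gain * (target - actual)
        control += adjustment
        predicted = actual + adjustment
        print("actual=%.2f adjustment=%+.3f gain=%.3f"
              % (actual, adjustment, gain))

control_cycle()

the free-wheeling implementations mentioned above would simply skip
the prediction bookkeeping and apply the adjustment blindly.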

There was also the appearance of COTS email on COTS platforms in a
number of gov. agencies (I've mentioned before being blamed for
online computer conferencing on the internal network in that period;
the internal network was larger than the arpanet/internet from just
about the beginning until late '85 or early '86). This post mentions
that an obsession with backup/archive (including for email) may have
contributed to some troubles that a marine had in the early 80s:
http://www.garlic.com/~lynn/2010p.html#19 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET

During Boyd's briefings in the early 80s ... one of his examples was
the various large-scale war games ... where the staff had been
practicing in the war rooms all year (while the generals and admirals
played golf) ... and then when the generals and admirals came in
... they had very little finger-feel for the arriving information
(effectively poor observation as well as poor ability to orient).

I've been involved in some number of high-availability products
... where there might be all sorts of possible failure modes that had
to be anticipated (including the results of various kinds of
purposeful events). One past example is the stuff that is now
frequently called "electronic commerce" ... where traditional
high-integrity software development processes had been used. I then
did a matrix of most of the components that might be involved (from
the user's browser thru the webserver to the backend payment
transaction systems) against a large array of things that might
affect correct processing. The requirement was that the
infrastructure needed to automatically handle any of the problems
and/or the trouble desk had to be able to do 1st-level problem
determination within five minutes.
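
a minimal sketch of that kind of component x failure-mode matrix
... the component names, failure modes, and responses here are
made-up placeholders for illustration, not the actual deployment:

# components, failure modes, and responses are placeholders
COMPONENTS = ["browser", "webserver", "gateway", "payment-backend"]
FAILURES = ["crash", "hang", "network-partition", "corrupt-response"]

matrix = {}
for comp in COMPONENTS:
    for fail in FAILURES:
        # default cell: at minimum, diagnosable by the trouble desk
        matrix[(comp, fail)] = "1st-level determination procedure"

# cells known to be handled automatically
matrix[("webserver", "crash")] = "automatic restart"
matrix[("gateway", "network-partition")] = "automatic failover"

# the review step: walking every cell guarantees no (component,
# failure mode) combination has been overlooked
for (comp, fail), response in sorted(matrix.items()):
    print("%-16s %-18s -> %s" % (comp, fail, response))

the point of enumerating every cell (rather than just listing
anticipated problems) is that each combination must be explicitly
assigned either an automatic recovery or a diagnosis procedure.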

Lots of things are done starting from the standpoint of creating
countermeasures to anticipated problems; another starting point is to
methodically go thru every possible failure, disaster and/or attack
... before starting to consider countermeasures; slightly related to
the difference between (attacking) red & (defending) blue teams.

jmfbahciv <See.above@aol.com> writes:
That's the business model of the 90s. Companies only buy the startups
who have a successful product. These companies don't have to fund
research and development, taking the risks that most of the attempts
won't produce a money making product.

in addition, R&D spending shows up as expense ... buying a company shows
up as value on bottom line ... it is possible to spend 10-100 times more
for the exact same results (by buying a company), but it shows up better
in the financial reports (as an asset rather than as an expense).
another way that current public corporate infrastructure has severely
distorted business operations.

it also shows up in the push for a flat tax ... part of the
inefficiency is that the 65,000-some pages of the current tax code
have all sorts of special provisions that result in unnatural acts by
corporations in order to take advantage of them. recent posts
mentioning the flat-tax push:
http://www.garlic.com/~lynn/2010j.html#88 taking down the machine - z9 series
http://www.garlic.com/~lynn/2010p.html#14 Rare Apple I computer sells for $216,000 in London

with respect to some of the other people on the 2001 IDF panel ... in
the mid-90s, I had semi-facetiously said I would take a $500 mil-spec
part and aggressively cost-reduce it by 2-3 orders of magnitude while
making it more secure. In addition to needling the head of the TPM
committee, I also needled some of the guys on the panel that the chip
was as secure as anything they were producing (in-house). I also had
the opportunity to cross swords with GSA in the 90s regarding the
chips that would be used in the CAC-card
https://en.wikipedia.org/wiki/Common_Access_Card

--
virtualization experience starting Jan1968, online at home since Mar1970

from above:
Prosecutors face considerable obstacles in proving criminal charges if
they have only sketchy evidence of an executive's involvement in
questionable decisions and the applicable legal standards are vague.

... snip ...

another view on prosecuting financial crimes (I've always been a
sucker for lots of Boyd & OODA-loop references)

the above was partially in response to a reference upthread
mentioning IBM and its massive "Future System" effort from the early
70s. One of the accounts of FS mentions that IBM staked so much on
the FS project (absolute dollars as well as killing off all
alternatives) that if it had been any other company, it would have
gone out of business. For some Boyd tie-in: one of Boyd's biographies
mentions his '70 stint at "spook base" ... and "spook base" was a
$2.5B windfall (nearly $20B in today's dollars) for IBM (which would
have partially covered the massive amount sunk into the FS
project). misc. past posts mentioning FS
http://www.garlic.com/~lynn/submain.html#futuresys

I mentioned several times that during the FS effort, I would draw
comparisons with a cult film playing continuously down in (cambridge)
central sq (king of hearts ... an analogy to the inmates being in
charge of the institution). There was also some analogy with Boyd's
story about the air force air-to-air missile used during Vietnam
... highly skilled engineers with no practical experience in
missiles. Several accounts of FS mention that the failure had an
enormous impact on the IBM culture, one that it took decades to
recover from (although many of the people involved remained in
positions of responsibility). misc past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd1

this is an old post with a quote from the Ferguson and Morris 1993
IBM book ... about how, after the FS failure, the culture was
replaced with one of sycophancy and "make no waves":
http://www.garlic.com/~lynn/2001f.html#33

leading up to the FS failure there was a growing sense that the
Emperor's New Clothes parable applied to the whole thing.

the king of hearts analogy was one I drew regarding the FS project
from just about the beginning; at the time, I was at the science
center ... just a couple blks down the street from the theater that
had been playing the film nearly continuously since it was released
in the US (the wiki article even references the central sq theater).

--
virtualization experience starting Jan1968, online at home since Mar1970

jmfbahciv <See.above@aol.com> writes:
Now track the people who create those startups and sell to
the highest bidder. Then they create another startup and
sell to the highest bidder. that's how software is developed
these days.

as mentioned previously, during the internet bubble ... a large
portion involved investment bankers (some of the same ones that had
possibly been involved in the S&L crisis and in the more recent
financial troubles this century) churning the startup/IPO mill ... it
was even beneficial for previously IPO'ed companies to fail ... since
it left the playing field open for the next startup IPO; they made
more money from a succession of hardware & software companies that
would fail.
http://www.garlic.com/~lynn/2010p.html#7 What banking is. (Essential for predicting the end of finance as we know it.)

there was a similar theme about the culture of failure and the big
beltway bandits making more money from gov. projects that fail
... than from those that succeed.

the various entities have always tended to be somewhat parasitic
... sucking extra blood here & there. an important issue is that
parasites keep their appetite under control to avoid killing the host
... however, the past couple decades of managed failures are on the
way to doing in the host.

from above:
Prosecutors face considerable obstacles in proving criminal charges if
they have only sketchy evidence of an executive's involvement in
questionable decisions and the applicable legal standards are vague.

... snip ...

another view on prosecuting financial crimes (I've always been a
sucker for lots of Boyd & OODA-loop references)

a couple of the references are in Boyd-related discussion groups
focusing on OODA-loops and agility ... however Boyd also had
his To Be or To Do choice ... quoted in the "Agile Workforce" post;
for a little topic drift ... To Be or To Do example from today
http://baselinescenario.com/2010/12/18/why-citigroup/