Windowed Interfaces 1981-2009

Larry__Weiss <lfw@airmail.net> writes:
I felt lucky that I was able to use a teletype back in 1969. I also
punched a lot of cards.

i had mostly 2741s to deal with in the 60s ... but did have to add
TTY/ascii support to cp67 in the 60s at the univ; the 2741 had much better
feel (not a whole lot different from modern pc keyboards) than tty.

i've mentioned before ... cp67 had support for dynamic terminal type
recognition for 1052 & 2741 ... and so when I went to add TTY ... I
wanted to make it consistent ... even to allowing a single dial-in number
for a rotary pool of lines. turned out to almost work ... except for the
problem that the 2702 couldn't actually allow any terminal to connect
to any port (it could switch the terminal-type line scanner under
program control, but each port's line speed was fixed in hardware).

from above:
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.

(somebody else's) fergus/morris quote (cited in above)
... that so much energy went into FS that s370 was neglected, hence
Japanese plug-compatibles got a good foothold in the market; after
FS's collapse a tribe of technical folks left IBM or went into
corporate seclusion; and perhaps most damaging, the old culture under
Watson Snr and Jr of free and vigorous debate was replaced with
sycophancy and make no waves under Opel and Akers. It's claimed that
thereafter, IBM lived in the shadow of defeat (by the FS failure),
hence, while still aggressive in business practices, IBM faltered at
being aggressive in technology.

from above:
The bundling of consumer loans and home mortgages into packages of
securities -- a process known as securitization -- was the biggest
U.S. export business of the 21st century. More than $27 trillion of
these securities have been sold since 2001, according to the
Securities Industry Financial Markets Association, an industry trade
group. That's almost twice last year's U.S. gross domestic product of
$13.8 trillion.

also mentions in 1989 that citigroup (largest player in the market at
the time) figured out that its ARM mortgage portfolio could take down the
institution (and nearly did) ... and got out of the business. Much of
the $27 trillion in (triple-A rated) toxic CDOs is a flavor of ARM
portfolio.

roll forward to the current time ... and the institutional knowledge
from 1989 seems to have evaporated ... at least in the ability to
evaluate large ARM portfolios packaged as toxic CDOs.

from above:
But neither competitors nor Congress liked open-bank assistance,
wondering why the institutions getting it shouldn't just be allowed to
fail. So a 1991 banking law called FDICIA, and a subsequent amendment
to a related law, essentially barred the FDIC from granting such
assistance -- except in instances of systemic risk.

from above:
Using household terms such as "QSPEs" and "VIEs," Pandit revealed that
Citi has more than $1.2 trillion dollars in off-balance sheet
assets. These off-balance sheet entities are similar in structure to
Enron's SPVs (special purpose vehicles)

The Economic Crisis and its Implications for The Science of Economics
http://www.perimeterinstitute.ca/en/Events/The_Economic_Crisis_and_Implications_for_Science/The_Economic_Crisis_and_its_Implications_for_The_Science_of_Economics/

from above:
May 1 - 4, 2009
Perimeter Institute

Concerns over the current financial situation are giving rise to a
need to evaluate the very mathematics that underpins economics as a
predictive and descriptive science. A growing desire to examine
economics through the lens of diverse scientific methodologies -
including physics and complex systems - is making way to a meeting of
leading economists and theorists of finance together with physicists,
mathematicians, biologists and computer scientists in an effort to
evaluate current theories of markets and identify key issues that can
motivate new directions for research.

... snip ...

On the other side ... there have been lots of reports of business people
overruling the risk department and/or instructing that the inputs be
fiddled until the risk managers came up with the outputs that the
business people wanted.

as noted in the above, shortly after the above meeting, the effort was
transferred and we were told we couldn't work on anything with more
than four processors (not long after that we chose to leave, taking
one of the early-out offers).

two of the other people (mentioned in the jan92 meeting) later showed
up at a small client/server startup responsible for something called
"commerce server" (and we were brought in to consult about doing
payment transactions on their server; the effort is now frequently called
electronic commerce).

from above (something of an understatement):
Bernanke said the packaging and sale of mortgages into securities
"appears to have been one source of the decline in underwriting
standards" because originators have less stake in the risk of a loan.

from above:
So investors betting for quick solutions to the financial crisis could
be disappointed. The tangled web that banks wove over the years will
take a long time to undo.

At the end of 2008, for example, off-balance-sheet assets at just the
four biggest U.S. banks -- Bank of America Corp., Citigroup Inc.,
JPMorgan Chase & Co. and Wells Fargo & Co. -- were about $5.2
trillion, according to their 2008 annual filings.

from above:
But there's a deeper and more disturbing similarity: elite business
interests -- financiers, in the case of the U.S. -- played a central
role in creating the crisis, making ever-larger gambles, with the
implicit backing of the government, until the inevitable
collapse. More alarming, they are now using their influence to prevent
precisely the sorts of reforms that are needed, and fast, to pull the
economy out of its nosedive. The government seems helpless, or
unwilling, to act against them.

from above:
More blows are coming. Banks worldwide have written down their assets
by $1.1 trillion. The final tally is expected to be double that, or
more. The pain is only now starting to spread through commercial
property and commercial loans. As a result, the first-quarter reprieve
will turn out to be a "head fake", says Chris Whalen of Institutional
Risk Analytics.

tty 33/35 keys were much more like a mechanical typewriter's ... in the
distance that the keys had to be depressed and the force needed to
depress them.

the 2741 was effectively a higher duty-cycle (heavy duty) selectric
(although not as heavy duty as the 1050 or 1052 operator's console) ... but
it was similar to current pc keyboards in that the keys needed to be
depressed very little (and took very little force to depress).

from above:
He played a leading role in writing and pushing through Congress the
1999 repeal of the Depression-era Glass-Steagall Act, which separated
commercial banks from Wall Street. He also inserted a key provision
into the 2000 Commodity Futures Modernization Act that exempted
over-the-counter derivatives like credit-default swaps from regulation
by the Commodity Futures Trading Commission. Credit-default swaps took
down AIG, which has cost the U.S. $150 billion thus far.

I've been doing some amount of work "cleaning" the OCR'ed scan of the
Glass-Steagall (Pecora) hearings ... from the hearings (pg. 7281):
BROKERS' LOANS AND INDUSTRIAL DEPRESSION

For the purpose of making it perfectly clear that the present
industrial depression was due to the inflation of credit on brokers'
loans, as obtained from the Bureau of Research of the Federal Reserve
Board, the figures show that the inflation of credit for speculative
purposes on stock exchanges were responsible directly for a rise in
the average of quotations of the stocks from sixty in 1922 to 225 in
1929 to 35 in 1932 and that the change in the value of such Stocks
listed on the New York Stock Exchange went through the same identical
changes in almost identical percentages.

... snip ...

there is a correspondence between the speculation in the real-estate
market leveraging (ARM) loans from non-depository institutions (which
used securitization as a source of funds) and the speculation in the
'20s stock market using brokers' loans.

from above:
Bill Moyers asked me to join his conversation this week with Michael
Perino - a law professor and expert on securities law - who is working
on a detailed history of the 1932-33 "Pecora Hearings," which
uncovered wrongdoing on Wall Street and laid the foundation for major
legislation that reformed banking and the stock market.

from above:
The bundling of consumer loans and home mortgages into packages of
securities -- a process known as securitization -- was the biggest
U.S. export business of the 21st century. More than $27 trillion of
these securities have been sold since 2001, according to the
Securities Industry Financial Markets Association, an industry trade
group. That's almost twice last year's U.S. gross domestic product of
$13.8 trillion.

It wasn't so much CDS ... it was that they were unregulated and no
provisions were made for payout ... effectively treating all incoming
funds as profits (which made for enormous commissions and bonuses; in a
regulated insurance environment, such activity would have been viewed
quite harshly).

from above:
He played a leading role in writing and pushing through Congress the
1999 repeal of the Depression-era Glass-Steagall Act, which separated
commercial banks from Wall Street. He also inserted a key provision
into the 2000 Commodity Futures Modernization Act that exempted
over-the-counter derivatives like credit-default swaps from regulation
by the Commodity Futures Trading Commission. Credit-default swaps took
down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

In the session that repealed Glass-Steagall, the financial industry
contributed $250M to Congress, and in the recent session that passed
TARP, they contributed $2B. More recent was a comment that the financial
industry contributed a total of $5B during the period.

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and
Mr. Gramm's wife, Wendy, served on the Enron board, which she joined
after stepping down as chairwoman of the Commodity Futures Trading
Commission.

from above:
A few days after she got the ball rolling on the exemption, Wendy
Gramm resigned from the commission. Enron soon appointed her to its
board of directors, where she served on the audit committee, which
oversees the inner financial workings of the corporation. For this,
the company paid her between $915,000 and $1.85 million in stocks and
dividends, as much as $50,000 in annual salary, and $176,000 in
attendance fees, according to a report by Public Citizen

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC
Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of
the Commodity Futures Trading Commission, to study regulating
over-the-counter derivatives. In 2000, Congress passed a law keeping
them unregulated.

... snip ...

one of the articles from the period mentioned that the House passed the
bill ... and even before the copy of the bill was distributed in the
Senate, the Senate passed it unanimously. Note the sequencing: Gramm's
wife had chaired the commission (resigning the position to join Enron)
some years before Born became chairman.

In the wake of ENRON, congress passed Sarbanes-Oxley, but didn't do
much about the underlying problem. SOX put much of the responsibility
on SEC ... but as mentioned in the Madoff hearings, SEC was quite lax
in enforcement.

SOX also supposedly had the SEC doing something about the rating agencies
... but the SEC doesn't seem to have done anything but:

from above:
The database consists of two files: (1) a file that lists 1,390
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
July 1, 2002, and September 30, 2005, and (2) a file that lists 396
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
October 1, 2005, and June 30, 2006.

Wrong. Here's how the Office of the Comptroller of the Currency opened
its latest quarterly derivatives report:

The notional value of derivatives held by U.S. commercial banks
increased $24.5 trillion in the fourth quarter, or 14%, to $200.4
trillion, due to the migration of investment bank derivatives business
into the commercial banking system.

...

To put this in perspective, AIG nearly blew up the universe with
derivatives notionally worth about $2.7 trillion -- a fraction of some
of our largest banks

Architectural Diversity

Walter Bushell <proto@panix.com> writes:
Shirley, if we go to 64 bit dates that will take care of the problem for
the foreseeable future?

With 30 year bonds it should already have hit and aren't there a lot of
40 year mortgages out there?

370 (circa 1971) introduced the 64-bit TOD clock (cycle approx. 143 yrs).
the original architecture definition was that it start at the first of
the (last) century ... which caused some confusion regarding whether a
century started with 1900 or 1901.

bit 12 was defined to be the microsecond bit and bit 32 was approximately
one second (2^20 microseconds, i.e. 1.048576 seconds) ... or if counting
from the other direction, bit 31 was the ~1 second bit and bit 51 was the
microsecond bit.

 ____________________ _ ____ _________________
|                    | |    |                 |
|                    | |    |                 |
|____________________|_|____|_________________|
0                   51 64                   103
The TOD clock nominally is incremented by adding a one in bit position
51 every microsecond. In models having a higher or lower resolution, a
different bit position is incremented at such a frequency that the rate
of advancing the clock is the same as if a one were added in bit
position 51 every microsecond. The resolution of the TOD clock is such
that the incrementing rate is comparable to the instruction-execution
rate of the model.

The TOD clock can be inspected by executing STORE CLOCK, which causes
bits 0-63 of the clock to be stored in an eight-byte operand in storage,
or by executing STORE CLOCK EXTENDED, which causes bits 0-103 of the
clock to be stored in bytes 1-13 of a 16-byte operand in storage. STORE
CLOCK EXTENDED stores zeros in the leftmost byte, byte 0, of its storage
operand, and it obtains the TOD programmable field from bit positions
16-31 of the TOD programmable register and stores it in byte positions
14 and 15 of the storage operand. The operand stored by STORE CLOCK
EXTENDED has the following format:
 _____ _____________________________ __________
|     |                             |Programm- |
|Zeros|          TOD Clock          |able Field|
|_____|_____________________________|__________|
0     8                             112        127
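
as an aside, the arithmetic behind the "approx. 143 yrs" cycle falls
directly out of the above definition. a minimal sketch in C (mine, not
from any IBM source; the constants just restate the architecture text):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* bit 51 (of bits 0-63, bit 0 leftmost) ticks once per microsecond,
       i.e. the 64-bit value advances by 2^12 = 4096 each microsecond */
    uint64_t tick = 1ULL << (63 - 51);

    /* the 64-bit register wraps after 2^64/2^12 = 2^52 microseconds */
    double wrap_yrs = (double)(1ULL << 52) / 1e6 / (365.25 * 24 * 3600);

    /* bit 31 carries 2^(51-31) = 2^20 microseconds */
    double bit31_sec = (double)(1ULL << 20) / 1e6;

    printf("increment/microsecond: %llu\n", (unsigned long long)tick); /* 4096 */
    printf("wrap period: %.1f years\n", wrap_yrs);       /* ~142.8 years */
    printf("bit 31: %.6f seconds\n", bit31_sec);         /* 1.048576 sec */
    return 0;
}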

we've used a number of metaphors to describe the current environment
... discussed in "PCI security rules may require
reinforcements" .... also archived here
http://www.garlic.com/~lynn/2009f.html#36

part of the stuff like TJX (merchant) and the more recent Heartland
(processor) is extreme vulnerability to insiders (since the
vulnerable information is required in numerous business
processes). Also part of the metaphor is that the value of the
information to the merchant is some part of the profit on the transaction
(possibly a dollar or two), and the value of the information to the
processor is possibly a couple cents per transaction. The value of the
information to the attackers/crooks is the credit limit or account
balance ... at least a couple hundred dollars. As a result, the
attackers/crooks can spend a hundred to a thousand times more
attacking the system than the defenders can spend protecting the system.

x9.59 was addressing such vulnerabilities. However, x9.59 didn't do
anything about countermeasures against eavesdropping, skimming,
sniffing, data breaches, etc ... what x9.59 did was slightly
tweak the paradigm and eliminate the usefulness of the information to
the crooks (it didn't prevent the crooks from getting the information,
it just eliminated the usefulness of the information to the crooks).

Also, the early work involving "electronic commerce" used the
technology they had invented called "SSL" to hide the transaction
information. This is likely the largest use of "SSL" in the world
today. x9.59 eliminates the need to hide the information ... so it
eliminates the need for SSL in this earlier work we had done for
"electronic commerce".

As part of the "electronic commerce" activity we looked at the
vulnerabilities of webserver end-points. We drew up a list of things
that should be mandated for all "electronic commerce" webservers,
including things like only equipment with at least EAL4 certification,
all individuals with any sort of access having in-depth FBI
background checks, and processes set up so that all human events
required multi-party operations (and some amount of additional checks &
balances).

There were going to be millions of these ... and every one required
that level of setup. We basically got overruled. Another major
vulnerability we identified was that a lot of these electronic
commerce servers were using RDBMS backends. There was starting to be
an alarming number of RDBMS backend exploits ... basically human
mistakes. The problem was that RDBMS activity was a high-skill and human
intensive operation. Maintenance especially was a vulnerability point
... since it required taking down the business operations ... and the
people were under constant time-pressure to get it back up as fast as
possible ... which in turn contributed to frequent mistakes.

In the dual-use vulnerability metaphor, we point out that the information
needs to be kept confidential and never divulged to anyone (including
NEVER swiping a payment card at a pos terminal) ... while at the same time
the information needs to be readily available for numerous business
processes. Because of the conflicting requirements, we claim that
even if the planet were buried under miles of information-hiding
encryption ... it would still be unable to prevent information
leakage.

Note that RDBMS vulnerabilities from the early 90s in electronic
commerce ... didn't get a lot of press ... but they were there. Now
they are starting to get more press with things like the SQL
x-system problems (a lot of it because of the complexity and the ease
with which mistakes can be made).

While lots of the exploits and vulnerabilities are well documented
... it isn't general knowledge for the tens of millions of people
... which would be required to cover the hundreds of millions of
possible vulnerable points.

there have been some examples even where everything has been done
absolutely perfectly in the current paradigm ... and it still wouldn't
be enough. Given the 100-1000 times difference in the resources
between the defenders (between a couple cents and a couple dollars per
transaction) and the attackers (a couple hundred to a couple thousand per
transaction) ... having done everything perfectly wouldn't be
enough. For instance there have been some number of instances where
backdoors were built into the boxes at the manufacturing plant (at one
time, in some world market segments, it may have been a significant
percentage of boxes sold).

As mentioned, x9.59 didn't do anything to try and increase the
protection of the information by factors of 100-1000 times at no
increase in cost (to try and level the playing field between what the
defenders could afford to spend and the value of the information to
the attackers); what x9.59 did was eliminate the usefulness of the
information to the attackers (instead of a paradigm where the
information was worth 100-1000 times more to the crooks/attackers
than to the defenders ... it eliminated the value of the
information to the crooks/attackers ... and therefore most of the
financial motivation for the crooks to attack the system).

We also were tangentially involved in the cal. state breach
notification law (which has since been copied by some number of states
... however there has been quite a bit of conflict at the federal
level between breach notification legislation that is similar to cal.'s
and breach notification legislation that eliminates the requirement for
notification).

we had been brought in to help word-smith the ca. state electronic
signature legislation and several of the parties were heavily involved
in privacy issues. there had been in-depth, detailed privacy surveys,
and the number one item turned up was identity theft ... and a major
component of that identity theft was various kinds of financial
information breaches that resulted in fraudulent financial
transactions ... which had no publicity, and little or nothing was
being done about it. They appeared to believe that the publicity from
breach notification might provide motivation for correcting the
situation.

The point of the security proportional to risk metaphor about the
current paradigm is that the information is worth 100-1000 times more
to the crooks than it is to the merchants or the processors ... i.e.
the crooks/attackers can afford to spend 2-3 orders of magnitude more
attacking the system than the defenders can spend defending it. The
value of the information to the merchants and the processors is a few
cents to a few dollars per account ... the value of the information to
the crooks is the account balance &/or credit limit.

The analogy is that each mud hut contains something that is worth
relatively little to each inhabitant ... but is worth enough to the
crooks that it would require, for each mud hut, a massive bank vault
with six-foot-thick vault doors to keep them out. What we did in X9.59
was not to try and come up with ways to turn every mud hut into a bank
vault ... but instead to eliminate the value to the crooks.
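
a minimal sketch of that arithmetic in C (the dollar figures are the
illustrative ones from the text, not measured data):

#include <stdio.h>

int main(void)
{
    double merchant  = 2.00;   /* defender stake: a dollar or two of profit/transaction */
    double processor = 0.02;   /* defender stake: a couple cents/transaction */
    double balance   = 200.0;  /* attacker stake: low-end account balance */
    double limit     = 2000.0; /* attacker stake: credit limit */

    /* the 2-3 orders of magnitude the metaphor talks about */
    printf("vs merchant:  %.0fx - %.0fx\n", balance / merchant, limit / merchant);
    printf("vs processor: %.0fx - %.0fx\n", balance / processor, limit / processor);
    return 0;
}

(vs the merchant: 100x-1000x; vs the processor's couple cents, the ratio
is even more lopsided)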

we did a detailed end-to-end vulnerability study of tcp/ip ... not
specifically for security ... but to try and eliminate all points of
failure (regardless of kind ... nature, human mistakes, accidents,
done on purpose, threats, vulnerabilities, risks, etc) ... and created
a list of some number of items. I gave a presentation on it for the
IETF RFC editors (internet standards organization) at ISI, combined
with graduate students in security at USC.

There was an incident in the mid-90s involving the largest online
service provider at the time ... where some of their internet
connected boxes were failing. For two months they had nearly every
internet specialist in to look at it (while it continued to fail). 60
days after it started happening, one of the people flew out to buy me a
hamburger after work. While I ate the hamburger, they explained the
symptoms. When I was done eating, I mentioned it was several of the
things we had identified in the late 80s, and explained a Q&D fix that
they applied later that night. I then tried to interest the major
vendors in the problem ... but apparently since it wasn't in the press
(and the largest online service provider wasn't interested in it
getting into the press) ... they weren't interested in addressing
it. Exactly 12 months later a small service provider started
experiencing almost the same symptoms ... this time it made the press,
and this time all the traditional vendors rushed to do something.
After a month or so all the major vendors were patting themselves on
the back for how quickly they had reacted (to that single symptom).

the internal network was larger than the arpanet/internet from just
about the beginning until possibly late 85 or early 86. One of the
differences with the internal network was that all links (leaving
corporate premises) were required to be encrypted. One of the comments
in the mid-80s was that the internal network had over half of all link
encryptors in the world.

NSF was interested in using (what we were doing) for the NSFNET
backbone (tcp/ip was the technology basis for the modern internet, the
NSFNET backbone was the operational precursor to the modern internet,
and CIX was the business basis for the modern internet). However,
corporate politics got in the way and we were prohibited from
participating. The head of NSF tried to help by sending a letter to the
company (copying the CEO), but that just aggravated the internal
politics (it included things like: what we were already running was at
least five yrs ahead of all bid submissions to build something new for
NSF). ... misc. old NSF related email from the period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

In the naked transaction metaphor ... with regard to the
existing payment paradigm ... we discuss that the transactions are
vulnerable everywhere they exist ... and they exist in billions of
places. as a result there are an enormous number of different
opportunities for exploits of all kinds. The enormous difference in
the value of the information to the defenders vis-a-vis the attackers
(the security proportional to risk metaphor) ... further aggravates
the circumstances.

from above:
Some 75% of the world's businesses data is still processed in Cobol, and
about 90% of all financial transactions are in Cobol, according to Arunn
Ramadoss, head of the academic connections program at Micro Focus
International PLC, which provides software to help modernize Cobol
applications.

One of the problems with HSDT was getting encryption to support "high
speed" links ... which was non-trivial at the time. I like to use the
following example from the mid-80s. I had gotten blamed for online
computer conferencing on the internal network in the late 70s &
early 80s. TOOLSRUN somewhat grew out of the corporate investigation into
what I was doing (started out with IBMVM, then IBMPC, then lots of
other things). One friday before I left for a business trip to the
other side of the pacific (to talk to vendors about getting high-speed
hardware for the HSDT project), somebody in the communication group sent
out an announcement for a new online computer conference to discuss
networking. As part of the announcement, they included definitions of
low-, medium-, and high-speed links ... definitions that topped out
well below the speeds HSDT was already running.

from above:
Some 75% of the world's businesses data is still processed in Cobol,
and about 90% of all financial transactions are in Cobol,

... snip ...

Long ago and far away, my wife had been con'ed into going to POK to be
in charge of loosely-coupled architecture. While there, she created
the "peer-coupled shared-data" architecture ... but except for IMS
hot-standby, saw little uptake (until much more recently with
SYSPLEX), which contributed to her not staying in the position very
long.
http://www.garlic.com/~lynn/submain.html#shareddata

More recently we had some discussions with a major financial
transaction system and they attributed their 100% availability over an
extended number of years to
1) IMS hot-standby
2) automated operator

we also picked up contacts with banking institutions (like BofA)
that were starting to get interested in relational databases
(System/R) ... misc. posts mentioning the original relational/SQL
http://www.garlic.com/~lynn/submain.html#systemr

from the above:
A senior Pentagon official has delivered a stinging attack on the US Air
Force, saying that its philosophy of using fully qualified human pilots
to handle unmanned aircraft at all times has resulted in unnecessary,
expensive crashes. By contrast, US Army drones with auto-landing
equipment and cheaply-trained operators have an enviable record

... snip ...

... and ...
The US Army has a differing philosophy: its "Sky Warrior" variant of
the Predator is intended to land itself automatically, and the
present-day Shadow has such kit already. Army drones are controlled by
noncommissioned tech specialists who, while fully trained and qualified
for their job, have no airborne stick time in regular aircraft. They are
always in theatre with the rest of the troops.

from above ...
Once upon a time--two decades ago this year, actually--a startup called
Quantum Computer Services changed the name of its moderately popular
online service to America Online and added a cheery e-mail notification
recorded by an employee's husband: "You've got mail!"

besides tcp/ip specific threats and vulnerabilities (malicious attacks
were just a small subset of what we considered to be threats and
vulnerabilities) ... we also looked at the general environment. One of
the things we identified was that implementation characteristics of the
C language would result in a larger number of buffer overflows ... not
just in past code ... but ongoing nearly forever, as long as the
existing C language was being used. Lots of past posts on buffer overflow
http://www.garlic.com/~lynn/subintegrity.html#overflow

The original mainframe tcp/ip implementation was done in
vs/pascal. These are past posts mentioning the base implementation only
getting about 44kbytes/sec thruput and using nearly a full 3090 cpu
doing it. I added rfc 1044 support, and in tuning tests at cray research
between a cray and a 4341, I hit channel thruput limitations
(1mbyte/sec) using only a modest amount of the 4341 processor.
http://www.garlic.com/~lynn/subnetwork.html#1044
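
back-of-envelope on the "factor of 1000" bytes-per-instruction claim; the
MIPS figures below are my assumptions for illustration (roughly 15 MIPS
for a 3090 processor, roughly 1.2 MIPS for a 4341 with a quarter of it
busy), not numbers from the benchmarks:

#include <stdio.h>

int main(void)
{
    /* base: ~44kbytes/sec consuming a full 3090 processor */
    double base = 44e3 / 15e6;             /* ~0.003 bytes/instruction */

    /* rfc 1044 path: ~1mbyte/sec on a modest amount of a 4341 */
    double rfc1044 = 1e6 / (0.25 * 1.2e6); /* ~3.3 bytes/instruction */

    printf("base:    %.4f bytes/instruction\n", base);
    printf("rfc1044: %.2f bytes/instruction\n", rfc1044);
    printf("ratio:   ~%.0fx\n", rfc1044 / base); /* on the order of 1000 */
    return 0;
}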

Note that none of the vs/pascal implementations were prone to the
buffer overflow exploits that are found everywhere in C language
implementations.
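
for illustration, the overflow-prone C idiom next to the length-checked
copy that descriptor/length-carrying strings (pascal, pli style)
effectively force; a made-up minimal example, not code from any of the
implementations mentioned:

#include <stdio.h>
#include <string.h>

static void unchecked(const char *input)
{
    char buf[16];
    strcpy(buf, input);     /* no bounds check: anything over 15 chars
                               overruns buf ... the classic C overflow */
    printf("%s\n", buf);
}

static void checked(const char *input)
{
    char buf[16];
    size_t n = strlen(input);
    if (n >= sizeof buf) {  /* the check pascal/pli string handling
                               performs implicitly */
        fprintf(stderr, "too long (%zu bytes)\n", n);
        return;
    }
    memcpy(buf, input, n + 1);
    printf("%s\n", buf);
}

int main(void)
{
    checked("fits");
    checked("this argument is far longer than sixteen bytes"); /* rejected */
    unchecked("fits");  /* a longer argument here would be undefined behavior */
    return 0;
}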

This is a past post mentioning that a couple yrs ago, Jim Gray badgered
me into interviewing for the position of Chief Security Architect in
redmond ... the interview went on for a couple weeks ... but we couldn't
come to agreement regarding moving to redmond (we had been there in the
past on temp. assignment in the seattle area and my wife had a bad case
of SAD).
http://www.garlic.com/~lynn/2009.html#60 The 25 Most Dangerous Programming Errors

vs/pascal had lots of enhancements ... it was originally developed at
the Los Gatos VLSI lab for VLSI tools and was used in many industrial
strength applications ... including the original mainframe TCP/IP
... again a reference to the work I did for rfc 1044
http://www.garlic.com/~lynn/subnetwork.html#1044

as an aside, vs/pascal was also ported to a workstation platform ... in
addition to its original mainframe implementation.

PLI had similar characteristics to Pascal from the standpoint of
handling buffers and storage ... and a major difference from C. Multics
was implemented in PLI, and here are references to a study finding that
it didn't have any overflow problems (either) ... aka it was about as
hard to have overflows in PLI or Pascal ... as it was easy to have them in C.
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation

study was at
http://domino.watson.ibm.com/library/cyberdig.nsf/papers/FDEFBEBC9DD3E35485256C2C004B0F0D/$File/RC22534.pdf

now after I left, as part of IBM moving to COTS VLSI tools ... several
of the internal VLSI tools were being made available to an outside VLSI
tool company. One of these was a >50,000 statement vs/pascal tool. I
got a contract to port it to another vendor platform with their own
Pascal. This other platform appeared to have a pascal implementation
that had never been used for anything other than student & teaching
activities. To make it worse, the vendor had outsourced their pascal
product to an operation that was located 12 time zones away (so every
problem interaction I had with the vendor required a minimum of 24hr
turnaround).

Turns out that the group (12 time zones away) was a spinoff of a
former gov. organization. A relative recently visited there and
brought me back some souvenirs ... reference here:
http://www.garlic.com/~lynn/2006r.html#48

there are all sorts of explanations regarding why people could avoid
making mistakes in C ... but there have constantly continued to be an
enormous number of such mistakes ... which can be contrasted with PASCAL
& PLI, where such mistakes are extremely rare. My frequent analogy is
that when a section of highway has had as many accidents as there have
been C language buffer overflow accidents ... such highway sections
are redesigned to drastically reduce the accidents (and there are lots
of examples of other languages where such accidents are nearly
non-existent).
http://www.garlic.com/~lynn/subintegrity.html#overflow

I wasn't responsible for the 1st generation of tcp/ip on os/390.

However, that implementation had (at least) two issues:

1) as I mentioned, the original implementation had some serious
performance issues. that is why it got only 44kbytes/sec thruput using
a full 3090 processor ... and why I was able to do the RFC 1044 support
that got channel thruput (1mbyte/sec) between a cray and a 4341
... using only a small amount of the 4341 processor (almost a factor of
1000 times improvement in the number of bytes moved per instruction
executed).

The really slow performance and high processor utilization of the
original implementation wasn't an attribute of vs/pascal (an example
being that with relatively little effort I was able to get an
improvement of one thousand times ... still all done in vs/pascal)

2) the original implementation was done on vm/370, using a diagnose
instruction into the vm kernel for some functions. For the original
implementation on os/390, they effectively took the unmodified vm370
code and moved it to os/390 by writing a diagnose simulation for the
vm/370 functions. This suffered from both the enormous pathlength
inefficiency of the original, base vm370 implementation ... plus the
relatively ugly way that it was crafted into os/390.

from above:
An IBM survey of over 2750 banking executives worldwide forecasts a
new world order for the financial services industry, characterised by
a shift to specialisation, more transparency and lower overall
returns.

one aspect is that payment fees have averaged nearly 40% of revenue for
financial institutions (and possibly 60% for some institutions)
... and then there is the whole battle over interchange fees ... which
have tended to be related to the level of fraud. There have been some
issues that technologies that significantly reduced fraud ... might
also then significantly reduce such revenue. a little of that is
slightly touched on here:

the reference discusses some very large financial re-engineering
efforts in the 90s that were major failures (contributing to why there
is still so much cobol legacy application code still running).

This post discusses some of the issues regarding the transition of the
80s dial-up online banking to internet online banking (in the 90s) and
some of those implications (in linkedin payment system network
thread):
http://www.garlic.com/~lynn/2009f.html#7

with regard to the "new world order", it mentions the financial industry
possibly moving to more outsourcing. it also mentions "too big to fail",
related effectively to lots of rogue activity during the last decade
of deregulation and lax regulation enforcement.

for other topic drift ... recent reference to doing some tuning work
on a >450k statement cobol application that runs every night on 40+
max configured mainframe systems
http://www.garlic.com/~lynn/2009e.html#76

if you play the above ... there is something about IMS doing 50
million transactions per day. However, in the blog comments, somebody
comments that it may be more like 50 billion ... since they work for a
company that does over 40+ million IMS transactions/day (so the total
for all IMS should be a lot larger).

on the outsourcing side of the question ... some of the issue is
whether it represents a competitive and institutional differentiation.

a couple yrs ago, I took a proposal to FSTC to do an industry project
for a new payment system implementation.
http://www.fstc.org/

FSTC talked the proposal over with the members and turned it down with
the comment that the major members felt that payment systems were a
point of institutional differentiation and didn't want to have a shared
industry standard implementation. This is somewhat analogous to the
difference between COTS (commercial off the shelf) and RYO (roll your
own).

Note, FSTC was an outgrowth of some legislation change in the early 90s
to promote national competitiveness ... which included promoting
technology transfer programs from gov. agencies (we had done some
consulting with one of the people responsible when it was being set
up).

Possibly related to BI technology ... there is this profile/article
over in greaterIBM ibmconnection.com ... that I've duplicated here
(for those not registered on ibmconnection)
http://www.garlic.com/~lynn/ibmconnect.html

We've used the technology for complex information like "cleaning"
legacy databases that may have evolved over a period of decades
(looking for dirty data as exceptions to required patterns).

A simpler demonstration is that we use it to manage a repository of
internet standards information. as standards change, the information
is loaded and then checked against a whole lot of consistency rules
that are represented as patterns. Then a set of static HTML pages is
generated, which can be found here:
http://www.garlic.com/~lynn/rfcietff.htm

We also use it for managing various "merged" taxonomies and glossaries
in a number of areas (security, payments, financial, privacy) where an
attempt is made to organize how to "think" about the subject. What is
available on the website are generated static HTML pages
http://www.garlic.com/~lynn/index.html#glosnote

Architectural Diversity

Walter Bushell <proto@panix.com> writes:
When you were using punch cards, every character position was vital. It
was the right decision at the time, after all, nearly all cases were
caught before they caused problems and even that gave analysts a chance
to change some other things that desperately needed to be changed. Yes,
some POS systems froze when the credit card dates were in the 21st
century and procurement schedule programs had to be changed to allow the
new century, but to get a system working now with the current resources
was the thing.

a side effect is that the financial networks were provisioned for lots
of transactions (tens of millions per day) that are 60-80 bytes in size.

we had been called in to consult with a small client/server startup
that wanted to do payment transactions on their server, and they had
invented this technology called SSL they wanted to use. part of the
infrastructure was something called a payment gateway ... misc. past
posts
http://www.garlic.com/~lynn/subnetwork.html#gateway

which had the internet on one side and the payment network on the other
... handling transactions from webservers on the internet side and
payment processing on the payment network side (and then returning
responses).

somewhat as a result, in the mid-90s, we were invited to participate in
the x9a10 financial standard working group, which had been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments (ALL: debit, credit, stored-value, giftcard,
internet, POS, attended, unattended, contact, contactless, wireless,
transit turnstile, low-value, high-value, ALL). the result was the
x9.59 standard ... some references
http://www.garlic.com/~lynn/x959.html#x959

about the same time there was some other industry activity to do a
payment specification that was internet specific. when it was initially
published ... we did a crypto operations profile and a business
operations profile. we then got somebody to do a benchmark of the crypto
operations profile on a number of platforms. turns out that they used a
BSAFE crypto library that they had enhanced to run 4* faster. When I
reported the numbers, there were claims that they were 100 times too
slow (when they should have commented that they were 4* too fast).
About six months later ... when there was prototype code available (and
the BSAFE speedups had been given back to RSA), the profile benchmarks
were within a couple percent of measured. It turns out that the actual
processor use, being 100 times larger than expected ... was a major
inhibitor for the uptake of the specification.

The other issue was the way that the specification made use of public
key infrastructure; the specification required not only public key
operations for all transactions but also that things called "public key
certificates" be appended to each transaction. These appended data
typically ranged from 4kbytes to 12kbytes.

"certificates" effectively have design point to address the first-time
communication between strangers ... i.e. the electronic equivalent of
the letters of credit/introduction from sailing ship days (when there
was no other sources of information about who was being dealt with).

the issue in payments was that there is already an established
relationship between a cardholder and the cardholder's financial
institution ... and so any appended certificates were redundant and
superfluous ... and in addition represented a factor of 100 times bloat
in payload size (besides the factor of 100 times in processing) ...
some past posts
http://www.garlic.com/~lynn/subpubkey.html#bloat

so the issue of saving a byte or two in payment transaction payload
... was sort of dwarfed by the effort to add 4k-12k bytes of useless,
redundant, and superfluous information to every transaction.

later there was an effort in the standardization committee to try and
come up with a definition for compressed certificates ... recognizing
that trying to add 4k-12k bytes of useless, redundant and superfluous
information to every (60-80 byte) payment transaction would be a problem
... so the objective was to try and reduce the amount of useless,
redundant and superfluous information to only 300 bytes.
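
the bloat arithmetic, using just the numbers above (60-80 byte
transactions, 4k-12k byte appended certificates, the 300 byte
"compressed" objective); a minimal sketch in C:

#include <stdio.h>

int main(void)
{
    double txn_lo = 60, txn_hi = 80;                /* bytes/transaction */
    double cert_lo = 4 * 1024, cert_hi = 12 * 1024; /* appended certificate */
    double compressed = 300;                        /* "compressed" goal */

    printf("bloat: %.0fx - %.0fx\n", cert_lo / txn_hi, cert_hi / txn_lo);
    printf("compressed: still %.0fx - %.0fx\n",
           compressed / txn_hi, compressed / txn_lo);
    return 0;
}

(i.e. roughly 51x up to 205x ... the "factor of 100" ... and even the
compressed objective is still 4x-5x of redundant payload)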

In the past, we viewed dealing with malicious behavior and/or attacks as
just part of industrial strength dataprocessing ... along with natural
disasters, human mistakes, and software &/or hardware failures.

A lot of the current infrastructure is low-end, consumer products
(that were never designed or built for handling industrial strength
dataprocessing) being pushed into industrial and commercial
environments.

During the ha/cmp product effort ... we had done a detailed threat &
vulnerability analysis of the tcp/ip protocol/implementation as part of
looking at how systems might fail (not limited to malicious behavior
and/or attacks). Some past posts mentioning ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

Also, as part of ha/cmp activity, I was asked to write a section for
the corporate continuous availability strategy document
... however, both Rochester and POK complained that (at the time) they
couldn't meet the specification and that section was pulled. When we
were out marketing HA/CMP, I coined the terms disaster survivability
and geographic survivability to differentiate from disaster/recovery
http://www.garlic.com/~lynn/submain.html#available

Sometime later, a new CSO was hired (he had been head of the
presidential detail in a former life); after he joined the company, I
got assigned to run around with him for some time providing information
about computer security ... and some amount of physical security rubbed
off on me.

from above:
MAG concerns itself with everything from interchange and pricing to
member education, but the group's first high-profile initiative is
data security. MAG is the prime force behind an effort by the
Accredited Standards Committee X9 (ASC X9) to develop a new standard
to protect cardholder data. ASC X9 is accredited by the American
National Standards Institute (ANSI), a body that sets voluntary
standards for members of a broad range of industries. For example, ASC
X9 helped develop standards for credit card magnetic stripes and ATM
systems.

Architectural Diversity

jmfbahciv <jmfbahciv@aol> writes:
Oh, bullshit. It was a problem created because of the limitation of
space on IBM cards and scarcity of available disk space. That was not
a problem created in the 80s, but in the 60s and 70s.

for payment card transactions ... it was also bandwidth and misc. other
constraints; x9.15 (the merchant-to-acquirer standard, since
incorporated into iso8583) ... transaction size could be down to almost
40 bytes.

most of those terminals are 1200 baud (from the 80s). at one point there
was some look at upgrading to 28kbit modems ... but the "sync" time for
the initial connect turned out to be longer than the total elapsed time
using 1200 baud modems (in fact the 1200 baud sync time is usually
longer than the rest of the transaction).
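
rough timing sketch in C; the handshake numbers are my assumptions of
the right order of magnitude (not measurements), the transaction size is
from the discussion above:

#include <stdio.h>

int main(void)
{
    double bytes = 70;          /* 60-80 byte transaction plus response */
    double bits  = bytes * 10;  /* async framing: start + 8 data + stop */

    double sync1200 = 2.0;      /* assumed: ~2 sec carrier/answer handshake */
    double sync28k  = 8.0;      /* assumed: ~8 sec V.34-style training */

    printf("1200 baud: %.1f sync + %.2f data = %.1f sec\n",
           sync1200, bits / 1200, sync1200 + bits / 1200);
    printf("28.8kbit:  %.1f sync + %.2f data = %.1f sec\n",
           sync28k, bits / 28800, sync28k + bits / 28800);
    return 0;
}

(data time is a fraction of a second either way ... so the faster modem
just adds training time)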

things like this that had to be considered by the x9a10 financial
standard working group when doing x9.59 (having been given the
requirement to preserve the integrity of the financial infrastructure
for ALL retail payments).

in the congressional hearings into the rating agencies last fall,
several times it was mentioned that the rating agencies' business
model had become mis-aligned in the early 70s when they changed from
the buyers paying for the ratings to the sellers paying for the
ratings (opening things up for conflict of interest). It was also
mentioned several times that both the sellers/issuers of toxic CDOs
and the rating agencies knew that the toxic CDOs weren't worth
triple-A ratings ... but the toxic CDO issuers/sellers were paying for
the triple-A ratings.

Toxic CDOs (securitized mortgages) date back to at least the S&L crisis
... but getting triple-A ratings significantly increased the
institutions that would deal in the instruments ... as well as the
money available to the issuers. Unregulated, non-depository loan
originators were using toxic CDOs as a source of funds ... so getting
triple-A ratings significantly increased the amount of money they had
for lending.

from above:
Watsa's only sin was in being a little too early with his prediction
that the era of credit expansion would end badly. This is what he said
in Fairfax's 2003 annual report: "It seems to us that securitization
eliminates the incentive for the originator of [a] loan to be credit
sensitive. Prior to securitization, the dealer would be very concerned
about who was given credit to buy an automobile. With securitization,
the dealer (almost) does not care."

from above (something of an understatement):
Bernanke said the packaging and sale of mortgages into securities
"appears to have been one source of the decline in underwriting
standards" because originators have less stake in the risk of a loan.

... snip ...

And per above ... with securitization (and triple-A ratings), the
unregulated, non-depository loan originators no longer had to pay
attention to loan quality/qualification (they only had to worry about
how fast they could write the loans). Speculators found no-down,
no-documentation, 1% interest only payment ARMs extremely attractive,
since the carrying cost was much less than real-estate inflation in
many parts of the country.

With the repeal of Glass-Steagall, regulated depository institutions
were actually providing lots of the funding for such lending, with
their investment banking arms buying up a lot of the toxic CDOs.

from above:
The bundling of consumer loans and home mortgages into packages of
securities -- a process known as securitization -- was the biggest
U.S. export business of the 21st century. More than $27 trillion of
these securities have been sold since 2001, according to the
Securities Industry Financial Markets Association, an industry trade
group. That's almost twice last year's U.S. gross domestic product of
$13.8 trillion.

In January, there were some news items that the gov. was using
Interactive Data to value the toxic CDOs/assets being held by regulated,
depository institutions.

Interactive Data had started as a (virtual machine) CP67 commercial
time-sharing service bureau (disclaimer: I had interviewed with them
in the late 60s, but didn't join). They relatively quickly moved up
the value stream, providing financial information on their
service. Their website mentions that in the early 70s, they bought the
pricing service division from one of the rating agencies (about the
time that, per the congressional testimony, the rating agencies'
business process became misaligned and was opened up to conflicts of
interest).

from above:
A one-day trial that raises questions about the security of cash cards
used in the U.K. and Europe concluded Thursday, with a decision
expected in about a month.

...
The liability rules are different for phantom withdrawal cases in the
U.K. than in the U.S., where banks must directly prove fraud in order
to reject a claim. In the U.K., that responsibility is on the
customer, and banks tend to steadfastly maintain there are no security
issues with their systems.

In the congressional Madoff hearings, the person who tried
unsuccessfully for a decade to get the SEC to do something about
Madoff had a repeated theme that crooks and fraud thrive where there
is a lack of transparency and visibility. He mentioned that there is a
requirement for new regulation, but much more important are
transparency and visibility.

Also related to the congressional hearings into the rating agencies,
there were side comments that regulation is much easier when business
processes are aligned (i.e. regulation is much easier when people are
incented to do the right thing ... but becomes much harder when the
business processes are misaligned and people are incented to do the
wrong thing).

We had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server (and they had
invented this technology called SSL they wanted to use). The result of
that effort is now frequently called "electronic commerce". Somewhat
as a result, in the mid-90s we were asked to participate in the x9a10
financial standard working group, which had been given the requirement
to preserve the integrity of the financial infrastructure for all
retail payments. the result was the x9.59 financial standard
http://www.garlic.com/~lynn/x959.html#x959

Somewhat as a result of the x9.59 standards work, we were invited into
NSCC (since merged with DTC to become DTCC) to look at doing something
similar for trader operations. That effort was suspended after it
appeared that a side-effect of the integrity work would have been
significant increase in transparency and visibility of trader
operations (which appears to not be part of the trader culture).

"Chris Burrows" <cfbsoftware@hotmail.com> writes:
Yes - it does depend on the capabilities of the language in use. Multi-level
nested IF statements aren't always the only alternative to a GOTO. Many
languages also have statements which allow you to take exceptional actions
when in loops or procedures.

in the early 70s, I wrote a pli program that would read a 370 assembler
listing and convert the instructions to an abstraction ... creating flow
control, register store/load references, storage location store/load
references and other operations. it would then generate a pascal-like
pseudo code representation of the program. it would attempt to convert
condition-setting/branch instruction sequences into things like
if/then/else operations (attempting to eliminate "GOTO").

the problems were that 370 instructions could generate 4-state
conditions ... and some highly optimized kernel sequences made use of
that capability ... conversion to if/then/else type sequences sometimes
looked significantly less understandable and more convoluted than the
original highly optimized branch instruction sequences.
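
a sketch of the 4-state problem (my illustration in C, not the pli
program itself): a 370 compare/arithmetic instruction sets condition
code 0-3, and BC (branch on condition) tests any subset of the four
states with a 4-bit mask in a single instruction ... recovering
structured if/then/else from that means compound binary conditions:

#include <stdio.h>

typedef enum { CC0, CC1, CC2, CC3 } cond_code;

/* one BC: branch if the mask bit for the current condition code is set
   (mask bits correspond to CC 0,1,2,3 left to right) */
static int branch_taken(unsigned mask, cond_code cc)
{
    return (mask >> (3 - cc)) & 1;
}

int main(void)
{
    /* after an add: CC0 zero, CC1 negative, CC2 positive, CC3 overflow.
       mask 0101 = branch on negative OR overflow, one instruction ...
       the structured equivalent needs the compound test below */
    for (int cc = CC0; cc <= CC3; cc++) {
        int structured = (cc == CC1) || (cc == CC3);
        printf("cc=%d  BC 5,target: %d  if/then/else: %d\n",
               cc, branch_taken(0x5, (cond_code)cc), structured);
    }
    return 0;
}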

a lot of this seems quite straightforward ... of course, one of the
things that I did as an undergraduate in the 60s was dynamic adaptive
resource management. this required re-engineering operating system
kernels with instrumentation to provide the necessary visibility to
correctly make resource management decisions.

later, I was taken with Boyd & OODA-loops and sponsored his briefings
at IBM in the early 80s. I've pontificated how CDOs obfuscate the
"observe" in Boyd's Observe, Orientate, Decide, Act paradigm.

there have been a few articles trying to lay the blame on inadequate
and/or incorrect risk models. However, that also seems to be
obfuscation. there have been quite a few more articles about business
people overriding &/or ignoring the risk managers in the last
decade. there have also been stories about the business people
instructing the risk department to fiddle the inputs until they got
the desired outputs (i.e. the GIGO scenario; garbage-in, garbage-out),
for example:

mentions that in 1989, citibank did the analysis that its ARM portfolio
(largest in the business at the time) could take down the institution
(and nearly did). This motivated them to get out of the business. Roll
forward to the current period and all that institutional knowledge
appears to have evaporated (and/or the part of the institution dealing
in triple-A rated, toxic CDOs had no experience with the underlying
components).

another way of viewing the whole infrastructure (besides triple-A
rated, toxic CDOs obfuscating what was going on) was that it created a
whole lot of additional transactions ... some analogy to stock
portfolio managers "churning" accounts with transactions to inflate
commissions.

from above:
Here's a staggering figure to contemplate: New York City securities
industry firms paid out a total of $137 billion in employee bonuses
from 2002 to 2007, according to figures compiled by the New York State
Office of the Comptroller. Let's break that down: Wall Street honchos
earned a bonus of $9.8 billion in 2002, $15.8 billion in 2003, $18.6
billion in 2004, $25.7 billion in 2005, $33.9 billion in 2006, and
$33.2 billion in 2007.

... snip ...

the obfuscated transactions appear to have spiked the wall street
bonuses by something like a factor of four during the period
(over and above the additional fees and commissions generated).

from above:
"In 1973, Wm. Mack Terry and his colleagues at the Bank of America in
San Francisco introduced the world's first matched maturity transfer
pricing system," added Dr. Donald R. van Deventer, Kamakura Chairman
and Chief Executive Officer. Over the last 35 years, the concept has
been increasingly refined and modified to incorporate the best
practice calculations embedded in KRM Version 7.0. Best practice
transfer pricing calculations would have made it clear that neither
Bear Stearns nor Lehman Brothers had more than a marginal chance of
survival when funding 30 year sub-prime mortgage loans with thirty day
borrowings. Board members can and should demand clarity of disclosure
on the total risk of an institution and the contribution of each
business unit and transaction to total risk.

from above:
"Two years ago the Wall Street Journal in a page 1 story pointed out
the dangers in relying on the copula approach for CDO valuation, but
investors were slow to realize the magnitude of their model risk"

there were cases of spaghetti code, but there were other cases where the
branch logic was actually quite clear and understandable ... one problem
was trying to take something that actually leveraged 4-state operations
and represent it with 2-state/binary.

from above:
Senator Paul Wellstone, Democrat of Minnesota, said that Congress had
"seemed determined to unlearn the lessons from our past mistakes."

... snip ...

Note that while the Glass-Steagall (Pecora) hearings weren't online a
decade ago, they were available. A copy of the hearings was scanned by
archive.org last fall at the Boston Public Library and put online. I've
mentioned before that I spent some time working with the OCR'ed copies
of the hearings ... creating HTML files.

Also mentioned in the above article:
"The world changes, and we have to change with it," said Senator Phil
Gramm of Texas, who wrote the law that will bear his name along with
the two other main Republican sponsors, Representative Jim Leach of
Iowa and Representative Thomas J. Bliley Jr. of Virginia. "We have a
new century coming, and we have an opportunity to dominate that
century the same way we dominated this century. Glass-Steagall, in the
midst of the Great Depression, came at a time when the thinking was
that the government was the answer. In this era of economic
prosperity, we have decided that freedom is the answer."

from above:
He played a leading role in writing and pushing through Congress the
1999 repeal of the Depression-era Glass-Steagall Act, which separated
commercial banks from Wall Street. He also inserted a key provision
into the 2000 Commodity Futures Modernization Act that exempted
over-the-counter derivatives like credit-default swaps from regulation
by the Commodity Futures Trading Commission. Credit-default swaps took
down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

from above:
Enron was a major contributor to Mr. Gramm's political campaigns, and
Mr. Gramm's wife, Wendy, served on the Enron board, which she joined
after stepping down as chairwoman of the Commodity Futures Trading
Commission.

from above:
A few days after she got the ball rolling on the exemption, Wendy
Gramm resigned from the commission. Enron soon appointed her to its
board of directors, where she served on the audit committee, which
oversees the inner financial workings of the corporation. For this,
the company paid her between $915,000 and $1.85 million in stocks and
dividends, as much as $50,000 in annual salary, and $176,000 in
attendance fees, according to a report by Public Citizen

from above:
That same year Greenspan, Treasury Secretary Robert Rubin and SEC
Chairman Arthur Levitt opposed an attempt by Brooksley Born, head of
the Commodity Futures Trading Commission, to study regulating
over-the-counter derivatives. In 2000, Congress passed a law keeping
them unregulated.

... snip ...

one of the articles from the period mentioned that the House passed
the bill ... and even before a copy of the bill was distributed in
the Senate, the Senate passed it unanimously. Also, Gramm's wife (as
chairman) must have fairly shortly preceded Born ... resigning the
position to join Enron.

In the wake of ENRON, congress passed Sarbanes-Oxley, but it didn't
do much about the underlying problem. SOX put much of the
responsibility on the SEC ... but as mentioned in the Madoff hearings,
the SEC was quite lax in enforcement.

SOX also supposedly had SEC doing something about the rating agencies
... but nothing seems to have been done other than:

from above:
The database consists of two files: (1) a file that lists 1,390
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
July 1, 2002, and September 30, 2005, and (2) a file that lists 396
restatement announcements that we identified as having been made
because of financial reporting fraud and/or accounting errors between
October 1, 2005, and June 30, 2006.

more of "The yin yang of financial disruption" discussion at:
https://www.ibmconnection.com/network/forums/151

other references to the comments "In 1973, Wm. Mack Terry and his
colleagues at the Bank of America in San Francisco introduced the
world's first matched maturity transfer pricing system" and that
neither Bear Stearns nor Lehman had more than a marginal chance of
survival:

from above:
"Similarly, Merrill Lynch and UBS both admitted that their Boards did
not have appropriate visibility on the home price risk of those
institutions, allowing the exposure to grow too large and making
appropriate hedging a shot in the dark. Modern transfer pricing
technology like that embedded in KRM version 7.0 eliminates the fog
around risk positions to give perfect visibility to the total risk of
the institution, both in aggregate and at the transaction level."

... snip ...

One could make the claim that the people in financial institutions
who had the expertise to correctly evaluate the components of
(triple-A rated) toxic CDOs were not in the loop when all the
components were packaged (and bought) as toxic CDOs.

Note that the original draft of Basel II included a new qualitative
section to complement the traditional quantitative sections ... which
basically required top management and the board to have end-to-end
awareness of what went on in the business. During the Basel II review
process, much of the qualitative section was eliminated ... Basel II
reference
http://www.bis.org/publ/bcbsca.htm

there were some disparaging comments (about what was done to the Basel
II qualitative section) to the effect that obviously top management
and boards of financial institutions shouldn't be required to
understand what goes on (just be able to go thru a set of motions).

Old-school programming techniques you probably don't miss

Peter Flass <Peter_Flass@Yahoo.com> writes:
I've mentioned before the SDS "ALTRET" parameter on system
calls. Normal return was to the statement following the call, but if
an error occurred the call returned to the "ALTRET" address where the
programmer could check return codes. etc.

CMS SVC202 call was like that ... if the four bytes after the svc202
(0ACA) instruction looked like a (24-bit) address (1st byte zero), it
was treated as the abnormal return address. a normal return would
resume after the abnormal address field (or, if there wasn't one,
immediately after the instruction). if the application didn't specify
an abnormal return address, control would go to a system-defined
function on an abnormal condition.

Lots of applications would code an abnormal return address of "*+4"
... where return was always to the same place (abnormal or normal) and
then the application would have inline code that tested the return
code condition in a register.
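
a minimal sketch of the supervisor-side dispatch logic (c standing in
for the actual assembler; "storage" simulates real storage):

/* decode the CMS SVC202 return convention described above */
#include <stdint.h>

static uint32_t fetch_word(const uint8_t *storage, uint32_t addr) {
    return ((uint32_t)storage[addr]     << 24) |
           ((uint32_t)storage[addr + 1] << 16) |
           ((uint32_t)storage[addr + 2] <<  8) |
            (uint32_t)storage[addr + 3];
}

/* svc_next = address immediately following the 0ACA instruction.
   returns the normal-return address; *abnormal gets the abnormal
   return address, or 0 if the application didn't supply one. */
uint32_t svc202_returns(const uint8_t *storage, uint32_t svc_next,
                        uint32_t *abnormal) {
    uint32_t word = fetch_word(storage, svc_next);
    if ((word & 0xFF000000u) == 0) {  /* 1st byte zero: 24-bit address */
        *abnormal = word;             /* error return goes here        */
        return svc_next + 4;          /* normal return skips the field */
    }
    *abnormal = 0;                    /* system handles abnormal cases */
    return svc_next;                  /* normal return right after SVC */
}

(coding "*+4" as the field just makes the abnormal address equal the
normal resume point ... matching the inline return-code test above)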

There has been a lot written about how using the "copula" approach
for CDO valuation was incorrect ... but that argument appears to be
more obfuscation & misdirection.

As mentioned, there has been a lot more written about businessmen
overruling risk managers and/or having the risk department fiddle the
inputs until they got the desired outputs (i.e. GIGO, garbage-in,
garbage-out).

Also there have been some references that the lending people in
commercial banks still knew how to value the components of toxic CDOs,
but, courtesy of the Glass-Steagall repeal, the investment arms were
now handling loan portfolios (packaged as toxic CDOs) and they had no
idea whatsoever.

there are also references that the rating agencies knew that the
toxic CDOs weren't worth triple-A ratings but were giving triple-A
ratings anyway because the issuers/sellers of the toxic CDOs were
paying for triple-A ratings.

the post about "Board Visibility Into The Business" references a post
from 2003 about what Basel II was supposed to accomplish (i.e. prevent
crises like this one) ... but also references this post from last
year:
http://www.garlic.com/~lynn/2008e.html#42 Banks failing to manage IT risk - study

where Bernanke was asked in congressional testimony why Basel II
didn't help (prevent the current crisis) and his reply was that the
numbers used in Basel II were from the rating agencies (which nearly
everybody knew were open to question).

How did the monitor work under TOPS?

Anne & Lynn Wheeler <lynn@garlic.com> writes:
Because of the funding ... we got to be corporate reps that did
periodic reviews of Project Athena. One week while we were there for a
review, I sat thru the evaluation of cross-domain support in
Kerberos. For some topic
drift, various past posts mentioning Kerberos and/or Kerberos pk-init
http://www.garlic.com/~lynn/subpubkey.html#kerberos

from above:
Markets need regulation to stay stable. We have had thirty
years of financial deregulation. Now we are seeing chickens coming
home to roost. This is the key argument of Professor Nick Bingham, a
mathematician at Imperial College London, in an article published
today in Significance, the magazine of the Royal Statistical Society.

There is no such thing as laying off risk if no one is able to insure
it. Big new risks were taken in extending mortgages to far more people
than could handle them, in the search for new markets and new
profits. Attempts to insure these by securitisation -- aptly described
in this case as putting good and bad risks into a blender and selling
off the results to whoever would buy them -- gave us toxic debt, in
vast quantities.

... snip ...

in this previously mentioned post ... there is some mention of Basel
II and what role it should have played in preventing the current
crisis:
http://www.garlic.com/~lynn/2009g.html#34 Board Visibility Into The Business

some of LDAP was a reaction to the original x.50x; at a sigmod
conference in the early 90s, somebody asked what it was all about
... and the reply was that it was a bunch of networking engineers
attempting to re-invent 1960s database technology.

from above:
Microsoft is trying to get under IBM's skin with some benchmarks run
in its Redmond labs using Big Blue's own Java-based test, Trade, and a
variant of it ported to C#, which Microsoft calls .NET
StockTrader. But as Microsoft throws down the benchmarking gauntlet,
IBM is ignoring the calls for a WebSphere duel at the Middleware
Corral.

Anne & Lynn Wheeler <lynn@garlic.com> writes:
in the early 70s, i wrote a pli program that would read a 370
assembler listing and convert the instructions to an abstraction
... creating flow control, register store/load references, storage
location store/load references and other operations. it would then
generate a pascal-like pseudo-code representation of the program. it
would attempt to convert condition-setting/branch instruction
sequences into things like if/then/else operations (attempting to
eliminate "GOTO").

problems were that 370 instructions could generate 4-state conditions
... and some highly optimized kernel sequences made use of that
capability ... conversion to if/then/else type sequences sometimes
looked significantly less understandable and more convoluted than the
original highly optimized branch instruction sequences.
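
a sketch of part of the 4-state issue ... a 370 compare sets one of
four condition codes and BC branches on any subset via a 4-bit mask
... a hypothetical decompiler helper mapping masks (following a
compare) onto 2-state conditions:

/* BC mask bits: 8=CC0, 4=CC1, 2=CC2, 1=CC3. after a compare:
   CC0 = equal, CC1 = first operand low, CC2 = first operand high
   (CC3 unused) -- so masks reduce to 2-state conditions on a ? b */
const char *bc_mask_to_cond(unsigned mask) {
    switch (mask & 0xEu) {         /* compare never sets CC3 */
    case 0x8: return "a == b";     /* BE  */
    case 0x4: return "a <  b";     /* BL  */
    case 0x2: return "a >  b";     /* BH  */
    case 0xC: return "a <= b";     /* BNH */
    case 0xA: return "a >= b";     /* BNL */
    case 0x6: return "a != b";     /* BNE */
    case 0xE: return "1";          /* always (after a compare) */
    default:  return "0";          /* never taken */
    }
}

the trouble comes with code that actually uses all four states
(e.g. arithmetic, where CC3 is overflow) ... no single 2-state
if/then/else covers those cleanly.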

from above:
Although TARP companies now must comply with new and significant
restrictions on executive compensation, our focus is on the potential
impact of the legislation on the broader universe of companies.

... snip ...

last year there was an article claiming that the ratio of executive
compensation to worker compensation had exploded to 400:1 after having
been 20:1 for a long time (and 10:1 in much of the rest of the world).

some amount of evidence is that risky behavior includes fiddling SEC
financial filings to boost compensation ... and then later possibly
refiling ... but the increased compensation is not forfeited (it might
represent some downside to the institution, but has little effect on
the individual). last fall there was a study of some 270 institutions
that redid their executive compensation plans to better align with the
interests of the corporation (after having problems with executives'
personal interests not being aligned with institutional interests).

from above:
One thing that the Vaio had, though, was a completely usable
keyboard. The current crop of netbooks seem to have some significant
problems in that regard. Perhaps the most usable keyboard I've seen on
a recent netbook was the 10-inch Eee.

... snip ...

I've had a terminal at home since March 1970 ... starting with an
AJ(?) "portable" 2741 (two large 40lb suitcases), which was replaced
with a "real" 2741 after a month or so. I'm still looking for pictures
of the 2741 ... but have pictures of the miniterm that replaced the
2741 ... which was then replaced with a 3101 ... and finally an ibm/pc
... some old pictures
http://www.garlic.com/~lynn/lhwemail.html#oldpicts

The univ. had 2741s that they got with the 360/67 to go with
tss/360. tss/360 never took off and the 360/67 spent most of its time
running as a 360/65 with os/360. As an undergraduate in the 60s, I did
get to play some with (virtual machine) cp67 on the weekends. The
univ. got some ascii/TTY terminals and I did the design and coding to
add tty terminal support to cp67.

cp67 native support would dynamically figure out whether it was
dealing with a 1052 or 2741. when I went to add tty support, I wanted
to preserve the dynamic identification ... in theory allowing a common
("rotary") base phone number for all terminals. The dynamic
identification worked correctly ... but being able to dynamically use
any terminal on any port ran into a short-coming in the 2702
telecommunication controller.

that was part of the motivation for the univ. to start a clone
controller project: reverse engineer the channel interface and build a
channel interface board that went into (initially) an interdata/3,
programmed to emulate/replace the 2702. four of us got written up as
being responsible for the clone controller business. The interdata/3
was upgraded to an interdata/4 (for handling the channel interface) in
a "cluster" with multiple interdata/3s supporting the
port/line-scanner interfaces.

interdata was eventually bought by perkin/elmer and the product
continued to be sold under the perkin/elmer brand (I ran into somebody
in the 90s who claimed that the channel interface board in the product
may still have been the original design from the 60s).

re: columbia 2741 picture; I've referenced it many times ... what i'm
looking for is a picture of my 2741. I've posted pictures of my other
terminals at home ... some of the terminal pictures show my IBM
tie-line phone ... rotary phone in this picture (along with a compact
microfiche viewer)

this picture of my (personal) pc (used as home terminal) ... the IBM
tie-line phone had been upgraded to push-button

2741 had no flat surface to lay paper on. the science center had
formica-covered plywood pieces that fit on the edge of the surface
surrounding the typewriter housing of the 2741 ... providing a flat
surface for paper (for some reason one sat in the garage a long time
after I no longer had a 2741) ... it also provided a surface in the
back for a tray for fan-fold paper to feed the 2741. I typically had
the tray ... but actually kept the box of fan-fold paper on the floor
behind the 2741 ... it fed thru the space in the bottom part of the
tray ... and printed output collected on the top part of the tray.

one of the analyses in the wake of the S&L crisis was that in a
heavily regulated environment ... there is very little natural
selection for people that understand what they are doing ... just
selection for people that are able to go through the prescribed
processes day after day. The issue is that when regulations are
relaxed ... there can be a large group of people that are at a loss to
know what to do (since they weren't originally selected for knowing
what they are doing). They are also vulnerable to being preyed on by
others.

the other facet is that there have been lots of hotspots of greed and
corruption that were somewhat kept under control with various
regulations and controls ... relaxing those controls allowed the
individual hotspots of greed and corruption to combine into an
economic firestorm.

of course, I had gotten blamed for online computer conferencing on the
internal network in the late 70s and early 80s ... misc. past posts
mentioning the internal network (larger than the arpanet/internet from
just about the beginning until sometime late '85 or early '86)
http://www.garlic.com/~lynn/subnetwork.html#internalnet

from above:
In a recent interview at the RSA Conference, Elgamal explained how SSL
man-in-the-middle attacks and the interception of session cookies are
really related to browser design. The most revealing information from
the interview came from Elgamal's response to the question: Would you
have done things differently, with the knowledge of the security
landscape as it is today?

... snip ...

We had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server ... and they
had invented this technology called SSL they wanted to use. They had
two people in charge of something called "commerce server", whom we
had worked with in a prior life ... referenced in this post about a
meeting from jan92
http://www.garlic.com/~lynn/95.html#13

as part of the effort, we had to look at the "end-to-end" business
processes ... including these new things calling themselves
certification authorities. Part of the effort included something
called a payment gateway ... some past references
http://www.garlic.com/~lynn/subnetwork.html#gateway

which handled SSL payment transactions between webservers and the
gateway (which acted as middleman between the internet and the
financial payment network).

A big part of the justification for SSL was making sure that the
webserver the user thought they were talking to ... was, in fact, the
webserver they were talking to (a countermeasure to things like
man-in-the-middle attacks). Part of the ground-rules was that the user
understood the relationship between the URL they entered into the
browser and the webserver that the browser was talking to (using SSL).

Almost immediately, several of the basic security ground-rules
related to SSL use were violated. Part of this was that merchant
webservers found that use of SSL cut their throughput by 85-95%. As a
result, they dropped back to just using SSL for check-out/paying (with
a URL provided by the "unauthenticated" webserver, typically via a
check-out/pay button). No longer was the initial webserver connection
being validated ... so the user could not really be sure there wasn't
some sort of compromise. The pay/checkout button was typically
providing the SSL URL (not the user) ... so instead of

1) SSL making sure that the webserver that the user thought they were
talking to was, in fact, the webserver they were talking to

it became

2) SSL making sure that the webserver was the webserver that it
claimed to be (per the SSL URL provided by the webserver)

This isn't solely an attribute of the browser design ... but the whole
way that URLs are handled and whether they are provided by the user
... or provided by (possibly unauthenticated) webservers on the
network.

The other issue in question is the whole PKI & digital certificate
model.

Part of the justification for SSL was things like man-in-the-middle
attacks and related questions about the integrity of the domain name
infrastructure.

Web merchants applied to Certification Authorities for a digital
certificate that certified they were the owner of the corresponding
domain name/URL. The certification authorities then had to cross-check
the supplied documentation from the applicant with the authoritative
agency for domain name ownership ... aka the domain name
infrastructure (which has the integrity issues that motivate the
requirement for SSL digital certificates in the first place). There
are proposals backed by the certification authority industry to
improve the integrity of the domain name infrastructure (because
certification authorities have to rely on that integrity when doing
verification as part of certifying information for SSL digital
certificates). That represents a catch-22 for the industry, since
improving the integrity of the domain name infrastructure also lessens
the original motivation for having SSL digital certificates
... misc. past posts mentioning the catch-22 bind for the SSL digital
certificate industry
http://www.garlic.com/~lynn/subpubkey.html#catch22

Basically PKI digital certificates are the electronic analog of
letters of credit/introduction for first time communication between
two strangers. The browser validates that the URL it uses to contact
the webserver corresponds with the URL in the SSL digital certificate
supplied by the webserver.

As mentioned, this can be subverted at many points. It turns out that
in the SSL connection between the webserver and the payment gateway
... we required that the SSL implementation support mutual
authentication (which didn't exist originally). The information about
the payment gateway was registered with each webserver and information
about each (authorized) webserver was registered at the payment
gateway. This made the digital certificates redundant and superfluous
(from a paradigm standpoint, it wasn't first-time communication
between two strangers). The continued use of digital certificates was
then an artificial side-effect of the SSL software library that was
used. However, since the business process was dependent on having
existing registered information for both parties, it wasn't vulnerable
to the PKI/certificate vulnerabilities that exist in many
browser/webserver interactions. Misc. past posts mentioning that
digital certificates are redundant and superfluous in situations that
aren't first-time communication between two strangers (parties with no
other mechanism to obtain information about each other)
http://www.garlic.com/~lynn/subpubkey.html#certless
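
a minimal sketch of why the certificates become redundant and
superfluous (hypothetical structures ... not an SSL library api)
... with both parties pre-registered, the check reduces to comparing
against what's on file:

#include <string.h>
#include <stdbool.h>

struct reg_entry { const char *id; unsigned char pubkey[64]; };

/* payment-gateway side: authenticate a connecting webserver against
   the registered table -- a certificate carrying the key adds
   nothing, since the key on file is authoritative */
bool authenticate_peer(const struct reg_entry *reg, int n,
                       const char *id, const unsigned char *pubkey) {
    for (int i = 0; i < n; i++)
        if (strcmp(reg[i].id, id) == 0)
            return memcmp(reg[i].pubkey, pubkey, 64) == 0;
    return false;   /* not registered: reject */
}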

Larry__Weiss <lfw@airmail.net> writes:
I always wanted to code for a machine whose microcode could be loaded
with custom instructions on the fly by a set of regular instructions
designed to change the microcode. Of course, then there would be a
way to use your customized instruction as if it were a regular
instruction, once it was instantiated. Has there been a machine like
that?

sort of the opposite ... the service processor on the 3081 was a uc.5
and had (I believe) a 3310/piccolo FBA hard disk. there was some amount
of the 370-xa instruction set that was in "pageable" microcode ... that
had to be paged in/out.

this is a URL for a description ... but it says that the documents
are now available online for a "fee"
http://www.research.ibm.com/journal/rd/261/ibmrd2601B.pdf

google has a HTML flavor:
http://74.125.95.132/search?q=cache:YUMSgUFLs1UJ:www.research.ibm.com/journal/rd/261/ibmrd2601B.pdf+pageable+microcode+ibm+3081&cd=4&hl=en&ct=clnk&gl=us&lr=lang_en

we effectively did load custom instructions into microcode starting
with the 138/148 virtual machine microcode assist. entry/mid-range
370s (and many 360s) were microcoded ... tending to avg. about 10
microcode instructions per 370 instruction. the constraint we were
given was that there were 6k bytes of loadable microcode space for our
use. the idea was to design new instructions that moved the highest
used portion of the vm370 kernel into native microcode (getting
approx 10:1 thruput improvement for those pathlengths). old post
giving some of the measurement effort to identify the 6k bytes of
highest-executed kernel pathlengths (there being approx the same
number of bytes in microcode instructions as in 370 kernel
instructions)
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
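
a sketch of the selection problem (routine names & measurements
made-up) ... pick the kernel pathlengths to drop into the 6k bytes of
microcode so as to maximize covered execution time:

/* greedy pick by execution-share per byte, against a 6k budget */
#include <stdio.h>
#include <stdlib.h>

struct path { const char *name; int bytes; double share; };

static int by_density(const void *a, const void *b) {
    const struct path *p = a, *q = b;
    double da = p->share / p->bytes, db = q->share / q->bytes;
    return (da < db) - (da > db);          /* descending density */
}

int main(void) {
    struct path paths[] = {                /* made-up measurements */
        {"dispatch",     900, 0.22}, {"free-storage",  700, 0.15},
        {"page-lookup", 1200, 0.18}, {"vtime-charge",  500, 0.06},
        {"unstack",      800, 0.05}, {"ccw-scan",     2500, 0.09},
    };
    int n = sizeof paths / sizeof *paths, budget = 6144, used = 0;
    double covered = 0;
    qsort(paths, n, sizeof *paths, by_density);
    for (int i = 0; i < n; i++)
        if (used + paths[i].bytes <= budget) {
            used += paths[i].bytes;
            covered += paths[i].share;
            printf("move %-12s (%d bytes)\n", paths[i].name,
                   paths[i].bytes);
        }
    printf("%d bytes used, %.0f%% of kernel time covered\n",
           used, covered * 100);
    return 0;
}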

for other topic drift ... the company had a whole lot of different,
custom-developed microprocessors used in a wide range of different
projects. In the late 70s, there was an effort to converge the
majority of these microprocessors to the 801/risc (Iliad) processor.
misc. past posts mentioning 801/risc, Iliad, ROMP, RIOS, power/pc, etc
http://www.garlic.com/~lynn/subtopic.html#801

Has any public CA ever had their certificate revoked?

On 05/05/09 14:01, wrote:
Before the collapse of the .com market in year 2000, there were
grandiose views of "global PKIs," even with support by digital signature
laws.

Actually, it turned out that CA liability avoidance was the golden rule
at the law and business model abstraction level. Bradford Biddle
published a couple of articles on this topic, e.g. in the San Diego Law
Review, Vol 34, No 3.

The main lesson (validated after the PKI re-birth post-2002) is that no
entity will ever position itself as a commercially viable global CA
unless totally devoid of liability towards relying parties.

Thus no punishment is conceivable beyond the Peter's opinions (they are
protected by Freedom of speech at least). That was predicted by the Brad
Biddle analysis 12 years ago.

we had been brought in to help word-smith the cal. state electronic
signature law. there were some legal types who very clearly
differentiated what was required for something to be considered a
"human signature" (the implication that something has been read,
understood, agreed, approved, &/or authorized) and PKI "digital
signatures" used for authentication. misc. past posts mentioning
signature
http://www.garlic.com/~lynn/subpubkey.html#signature

we've periodically commented that there may be some cognitive
dissonance because both terms contain the word "signature".

from above:
The crisis did not begin when Lehman failed; it began in the summer of
2007 with the markets' sudden realization that the triple-A ratings on
asset-backed securities were not accurate.

... snip ...

with something like $27T of this stuff out there ... being bought up
by people that didn't really know what they were buying/doing ... just
going through the motions and relying on the rating agencies for the
numbers that they were to plug into the process they went through.

All of a sudden, it was an Emperor's New Clothes "moment" ... when the
community had to face the fact that it was possible to pay the rating
agencies for a rating ... and they had no real idea what they were
dealing in.

Some of this is all those playing "long/short" mismatch ... borrowing
with 30day paper to fund buying 30yr ARM mortgages (as mortgage-backed,
triple-A rated, toxic CDOs), and finding their market for 30day paper
had dried up ... a situation that has been known for centuries to take
down institutions.

also from the article:
The resulting loss of confidence in ratings was a powerful external
shock to the market, causing a collapse in trading of all asset-backed
securities. That market is still frozen, and the Fed's efforts to
revive it through TALF have not borne fruit.

... snip ...

Part of the point was that Lehman's failure was a symptom of the
credit market freezing ... not the cause.

One of the items in the article mentions that the current crisis is
not quite as pervasive as the 1930s ... however in some ways it is
actually more pervasive. The 1930s was the result of speculation in
the stock market (relying on brokers' loans as the source of funds).
The current situation is speculation in the home & real estate market
(using things like no-documentation, no-down, 1% interest-only payment
ARMs from unregulated non-depository lenders ... which relied on
selling off the loans as triple-A rated, toxic CDOs, to the tune of
$27T, as the source of funds).

In the current crisis ... there is the collapse of the real-estate
market (because of speculation bubble collapsing), the freeze up in
the credit market (because of the Emperor's New Clothes "moment"), and
all the institutions holding that $27T in securitized, triple-A rated,
toxic CDOs (and not having any idea what it is really worth).

It is possible to argue about what exactly that $27T is worth ... but
it is not possible to argue that it is actually worth $27T ... with
recent stories like 1/5th of mortgages under water and property values
resetting to where they started in the early part of this decade
(before all the speculation frenzy).

The article refers to the need for regulation and the interconnection
of the financial markets. The simpler view is that loans had been
relatively straight-forwardly handled by the loan departments of
regulated financial institutions, which had the incentive and
experience to correctly value loans. The current scenario is the
obfuscation and lack of visibility of lending by unregulated
non-depository institutions with no motivation for paying attention to
loan quality.

The circuitous route of the transactions allowed individuals to
extract huge fees, commissions and bonuses w/o actually having to
understand the underlying characteristics of the instruments they were
dealing in (with a lot of the triple-A rated, toxic CDOs finding their
way onto the books of the regulated depository institutions that were
traditionally the source of loans).

from above:
"Complex interdependent systems and networks" might as easily describe
global financial markets, and DARPA's desired new calculus might -
when assessing the "survivability" of balance sheets based on complex
derivative bits of paper - be quite a handy thing to have.

from above:
For example, the first quarter's unemployment rate of 8.1% is higher
than the regulators' "worst case" scenario of 7.9% for this same
period. At the rate of job losses in the U.S. today, we will surpass a
10.3% unemployment rate this year -- the stress test's worst possible
scenario for 2010.

from above:
Indeed many of these concepts were inherent in the Basel II Advanced
Approaches, resulting in reduced capital requirements. In hindsight,
it is now clear that the international regulatory community
over-estimated the risk mitigation benefits of diversification and
risk management when they set minimum regulatory capital requirements
for large, complex financial institutions.

Notwithstanding expectations and industry projections for gains in
financial efficiency, the academic evidence suggests that benefits
from economies of scale are exhausted at levels far below the size of
today's largest financial institutions. Also, efforts designed to
realize economies of scope have not lived up to their promise. In some
instances, the complex institutional combinations permitted by the
Gramm-Leach-Bliley (GLB) Act were unwound because they failed to
realize anticipated economies of scope. Studies that assess the
benefits produced by increased scale and scope find that most banks
could improve their cost efficiency more by concentrating their
efforts on improving core operational efficiency.

.... a lot of it was the result of a circuitous set of transactions
starting with loans by (mostly) unregulated non-depository
institutions using securitization (triple-A rated, toxic CDOs) as the
source of funds for loans (to the tune of $27T), which also eliminated
their motivation to pay any attention to loan quality ... and finally
ending up with lots of regulated depository institutions
(traditionally the source of loans) holding all those triple-A rated
toxic CDOs on their books. Along the circuitous route, there were
individuals taking enormous fees, commissions and bonuses out of the
infrastructure (the stock portfolio transaction churn can be used as
an analogy for why so much of the industry became involved).

A slightly different view regarding Basel II was Bernanke's response
in congressional testimony in 2008 to a question about why Basel II
didn't prevent the current crisis (i.e. the Basel II calculations were
also using the information provided by the rating agencies ... see the
"Future of Financial Mathematics" reference regarding the Emperor's
New Clothes "moment" with regard to the rating agencies).

Windowed Interfaces 1981-2009

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Many of us have watched users painfully clicking here and there,
taking a couple of minutes to do what a few keystrokes could do
in seconds. They think all this GUI stuff is boosting their
productivity, while they're actually frittering away their
time on what has become a glorified video game. In such cases,
usability is definitely subjective. But this subjective usability
translates into sales, which are quite objective - and if sales
are your overriding goal, objective usability can - and does -
go to hell.

I was recently asked (by "Greater IBM" ... for current, former, &
retired IBMers) to run an online webinar conference on an introduction
to social networking.

One of the things I mentioned to them is being annoyed by
internet/web waiting whenever clicking on URLs ... and having started
using background tabs ... some of this coming from having done a lot
of work on subsecond system response in the 70s.

the referenced topic (website now starting to charge for articles
... but they still are available for free via google) has also been
recently getting some amount of play in postings on the ibm-main mailing
list (also gatewayed to usenet as bit.listserv.ibm-main).

from above:
LexisNexis acknowledged Friday that criminals used its information
retrieval service for more than three years to gather data that was
used to commit credit card fraud.

... snip ...

We had been tangentially involved with the ca. state breach
notification law. We had been brought in to help wordsmith the
ca. electronic signature legislation, and several of the institutions
involved were also heavily involved with privacy issues and had done
in-depth consumer surveys about privacy. they found the number one
(privacy) issue was identity theft, and a major form of identity theft
was breaches where the acquired information was used to perform
fraudulent financial transactions (account fraud), with little or
nothing being done about the breaches. There seemed to be some hope
that the (breach notification) publicity would motivate corrective
action.

Some number of agencies have had efforts to differentiate various
forms of identity theft like account fraud (fraudulent
transactions against existing accounts) from other forms of identity
theft.

There have been some articles written accusing financial institutions
of making a profit off of account fraud (with interchange fees being
much higher for transactions that have higher fraud rates). These
articles conjecture that if there were a serious effort to eliminate
the account fraud type of identity theft ... it could shift the
crooks' efforts to the kind of identity theft involving opening new
accounts. That form of identity theft becomes purely a financial
institution liability (including getting into various gov. "know your
customer" mandates) ... as opposed to something that can be laid off
against merchants using interchange fees.

We had been brought in to consult with a small client/server startup
that wanted to do payment transactions on their server ... they had
also invented this technology called SSL they wanted to use. The
result is frequently now called electronic commerce. Somewhat as a
result, in the mid-90s we were asked to participate in the x9a10
financial standard working group, which had been given the requirement
to preserve the integrity of the financial infrastructure for ALL
retail payments. Part of this involved detailed, end-to-end threat &
vulnerability studies of a wide variety of different kinds of retail
payments. Part of the study identified the extreme (account fraud)
threat that information from existing transactions represented and the
millions of places around the world where such information was
exposed. The result was the x9.59 financial transaction standard
http://www.garlic.com/~lynn/x959.html#x959

X9.59 didn't do anything about preventing the leakage of information
from existing transactions; what X9.59 did was slightly tweak the
paradigm, eliminating the usefulness of the information to crooks for
the purposes of account fraud.

Now, the major use of SSL in the world today is for this earlier
effort (now called electronic commerce) to hide payment transaction
information. A side effect of x9.59 is that it eliminates the need to
hide that information ... and therefore also eliminates the major use
of SSL in the world today.

At a PPOE, we used a CDC FORTRAN compiler that gave numeric error
codes. If there was a run-time misstep, it would print something like:

ERROR 25

and you looked it up in the manual to find out what the error was.

Once we got a run-time error and looked the number up in the manual.
It said:

THIS ERROR CAN NEVER HAPPEN.

i started doing the rudiments of a kernel debugger for cp67
... including, when i did the original version of the "pageable
kernel", appending a copy of the loader symbol table that was saved to
disk as part of the kernel build process. cp67 had a convention of
doing various kinds of integrity tests and, if something appeared to
be fatal, executing an "SVC 0" (supervisor call zero) ... which would
take a "core image" dump (and immediately restart the system, although
immediate restart wasn't in the original implementation).

As part of improving problem determination ... I started adding a
half-word unique failure code following the SVC0 instruction (since
execution never returned to that location). This was institutionalized
in vm370 and the unique codes documented in the vm370 "messages &
codes" manual.
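
a sketch of the convention (c standing in for the assembler) ... the
halfword code assembled inline after the SVC 0 is recovered via the
SVC old PSW instruction address when formatting the dump:

#include <stdint.h>

/* kernel side: an integrity test fails -> SVC 0 with a unique
   halfword abend code assembled immediately after the instruction
   (execution never returns there, so the bytes are safe to use) */

/* dump side: the SVC old PSW holds the address just past the SVC 0
   instruction -- the halfword there is the unique failure code */
uint16_t failure_code(const uint8_t *storage,
                      uint32_t svc_old_psw_addr) {
    return (uint16_t)((storage[svc_old_psw_addr] << 8)
                      | storage[svc_old_psw_addr + 1]);
}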

A couple of years ago, I had an opportunity to examine a periodic
report that compared detailed financial & operational statistics of
regional financial institutions against national institutions
(avg. values for 20 regional institutions compared to avg. values for
10 national institutions). There was no analysis ... just raw data
... 60 items per page ... one column for regional, one column for
national ... a couple hundred pages.

The regional & national profiles were effectively identical for all
items ... except regionals had a higher profit margin than nationals
(which seemed to imply that the larger national institutions were less
efficient in some manner).

After examining the report for 15-30 minutes ... the only significant
item that was different between regional and national was that, for
some (unexplained) reason, regional institutions had a higher
percentage of electronic transactions.

The cost to process an electronic transaction is less than the cost
to process a non-electronic transaction (and the processing costs for
the different kinds of transactions are effectively the same for both
regional and national institutions) ... so the regionals' higher
percentage of electronic transactions lowered their aggregate
processing costs.

Old-school programming techniques you probably don't miss

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
At the top of my list of Famous Last Words is one I often encountered
in design meetings: "Oh, don't worry about that; it'll never happen."
After a while I refused to accept that as an answer; experience taught
me that "never" is usually about six months.

corollary is "oh, don't worry about that; it is water under the bridge"
... i remember one project with annual reviews where the same comment
was made about the same problem ... three years running. it crossed my
mind that the people involved must have been considered useful for
unrelated reasons.

there was some activity to call it ha/6000 (instead of ha/cmp) ... but i
kept referring to it as ha/cmp because of all the work I was doing on
cluster scaleup ... even before medusa ("cluster-in-a-rack", a specific
physical packaging effort to cram more into a smaller footprint) ... old
email referring to medusa
http://www.garlic.com/~lynn/lhwemail.html#medusa

from above:
In effect, by booking assets at a higher price than the market would
offer, the banks reported earnings that never existed, he argued. The
earnings wound up in the bonus pool and were then paid out. "There's
no doubt in my mind that this is fraud," he said. "This was the
bezzle," he added, using the late economist John Kenneth Galbraith's
term for the hidden embezzled inventory that piles up in boom times.

On 05/09/09 07:33, Jerry Leichter wrote:
On May 8, 2009, at 3:39 PM, Ian G wrote:
>The difficulty with client certs is that I need them to also work on my
>laptop. And my other laptop. And my phone.
>
>So, how do I get hold of them when I'm on the road?
>
>Good point. The difficulty with my passwords is that I have so many
>that are so long that I can only manage them on my laptop, and have to
>carry my laptop with me ...
>
>We can imagine all sorts of techie solutions to this, but it does
>appear that we are in a bit of a grey zone with auth at the moment,
>and the full solution might take a while to emerge. Try them all?

This is part of a broader UI issue.

I had a discussion with a guy at a company that was proposing to create
secure credit cards by embedding a chip in the card and replacing some
number of digits with an LCD display. The card would generate a unique
card number for you when needed. They actually had the technology
working - the card was pretty much indistinguishable from any other. (Of
course, how rugged it would be in typical environments is another
question - but they claimed they had a solution.)

I pointed out that my wife knows one of her CC numbers by heart. She
regularly quotes it, both on phone calls and to web forms. The card
itself is buried in a thick wallet, which is buried in her pocketbook,
which is somewhere in the house - likely not near the phone or the
computer.

Hell, one of the nice things about on-line shopping is that I can do it
in my bathrobe - except that I *don't* know my CC by heart, so in fact I
tend to put off buying until later when I have my wallet with me. (This
does save me money....)

When I'm in a store, I'm used to having to have my CC with me, because I
always had to have the wallet with money anyway. At home, it's a whole
different story. In any case, merchants are trying to make the in-store
experience as simple as possible, pushing for things like RFID credit
cards and even fingerprint recognition.

So many people would see these "safer" cards as a big step backwards in
usability. Why would they want such a thing? The card companies are
trying to sell "safety", but in the US, where your liability is at most
$50 if your CC number is stolen (and where in practice it's $0), the
only cost you as an individual bear is the inconvenience of replacing a
card. Because replacements for security problems have gotten so common,
the CC companies have streamlined the process. It's really no big deal.
I've had CC numbers stolen a couple of times (by means unknown);
recently, two of my CC's were replaced by the companies based on some
information known only to them. In every case, the process was very
quick and painless. Hell, these days even on-line continuing charges
often update to the new number automatically (though I've learned to
keep track of those and check).

The person arguing for this claimed that CC companies could offer a
discount for users of the "secure" cards. But if you look at actual loss
rates - how much could you offer? (I'd guess it's about the same as
Discover offers: About a 1.5% rebate on most purchases. Not enough to
let Discover steal customers from Visa and MC. Given all the other
charges - and the absurdly high interest rates - on cards, anything like
this gets lost in the noise.)

Security that depends on people changing their habits in a way that is
inconvenient to them ... won't happen (unless you're in an environment
where you can *force* such changes).
-- Jerry

at least the initial introduction of one-time account-number displays
had a problem: they couldn't meet the flexing specification (like
cards in a wallet getting sat on).

in the early part of this decade/century, related to the attempt to
introduce some "safer" internet payment technologies, there was an
attempt to justify even higher merchant interchange fees ... than the
"unsafe" fees. this resulted in some amount of cognitive dissonance
... since merchants had been accustomed to their interchange fees
being proportional to the amount of fraud ... aka as the amount of
fraud goes up ... so do the interchange fees ... but this change would
have created two domains ... one where the interchange fees go up
proportional to fraud ... and then a point where interchange fees
continue to climb even as fraud is reduced. related post
http://www.garlic.com/~lynn/2009f.html#60

Another part of the AADS chip strawman was enabling a shift from an
institutional-centric hardware token paradigm to a person-centric
hardware token paradigm ... i.e. the same AADS chip could be used for
contact, contactless, proximity, transit turnstile, single-factor
authentication, multi-factor authentication, low value transactions,
high value transactions, payment transactions, point-of-sale
transactions, internet transactions, login authentication, etc. It
wasn't just that the same kind of chip could be used for all these
different purposes ... but providing the individual the option of
being able to register their personal chip(s) for a broad range of
applications. Part of the challenge was documenting all the issues
that were raised justifying an institutional-centric hardware token
paradigm ... and addressing each issue.

A slightly different approach in the X9A10 financial standard working
group for X9.59 was recognition that transaction information can be
harvested by crooks for fraudulent transactions ... that the
transaction information is available at millions of places around the
world ... and the transaction information is frequently required to be
readily available as part of the business processes involved in
execution of the transaction (one reason that it is frequently
referred to as transaction information).

the X9.59 approach was to slightly tweak the paradigm and make the
transaction information useless to the crooks (as opposed to a
constant, ever-increasing cycle of making it harder and harder to
access transaction information ... until at some point it becomes
impossible to actually execute the transaction because the transaction
information can't be made available).
http://www.garlic.com/~lynn/x959.html#x959
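
a minimal sketch of the paradigm tweak (verify_sig and
registered_pubkey are stand-ins, not real apis) ... every transaction
carries a digital signature verified against a public key registered
with the account, so harvested account numbers (or harvested old
transactions) can't originate new fraudulent transactions:

#include <stdbool.h>
#include <stdint.h>

struct txn { uint64_t account; uint64_t amount; uint8_t sig[64]; };

/* stand-in prototypes -- assumptions, not a real crypto api */
bool verify_sig(const void *msg, int len,
                const uint8_t sig[64], const uint8_t pubkey[64]);
const uint8_t *registered_pubkey(uint64_t account);

bool authorize(const struct txn *t) {
    const uint8_t *pk = registered_pubkey(t->account);
    if (!pk)
        return false;
    /* signature covers the transaction body (account, amount, ...) */
    return verify_sig(t, (int)(sizeof *t - sizeof t->sig), t->sig, pk);
}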

On 05/11/09 06:06, Peter wrote:
I was looking for information on this recently to update an old reference to
the DSTU version but it seems to have vanished, there's no information on it
online that I could find after about 2001 or so (apart from a reference to a
2006 version in a conference paper). The ANSI web site claims that it doesn't
exist, stopping the series at X9.58.

the x9 series has passed "100" ... but the list no longer includes
x9.59. to some extent x9.59 went the way of some other payment
technologies in the late 90s and early part of this decade ... when
there was a big retrenching from hardware tokens and other more secure
technologies for one reason or another ... some of it touched on in
this recent post
http://www.garlic.com/~lynn/2009g.html#62

in large part because of the rapidly spreading opinion that hardware
tokens weren't practical in the consumer market. I've discussed this
more recently (although cognitive dissonance with merchants &
interchange fees also played a role)
http://www.garlic.com/~lynn/2009f.html#7

The coming death of all RISC chips

nmm1 writes:
Some of us were a bit suspicious at the time, but it wasn't obviously
impossible - and the later success of the Alpha emulating the VAX and
even x86 showed that it isn't necessarily impossible. I was at a few
talks on it and, while the speakers flanneled in response to questions,
they didn't say anything that was an obvious porkie or obviously
deluded.

The corresponding software project was the OS/2 Personality design,
which WAS a crazy idea. Each of AIX, OS/2, MacOS, POSIX, TCP/IP etc.
would be a bolt-on component, and all would interoperate. Now, it
is possible to fix up endian problems for all calls where data have
a clearly defined type, but that ain't so for all of the interfaces.

It was exactly like the POSIX interface for MVS, where the speakers
(who claimed to be the architects!) clearly didn't know the details
of the systems they were proposing to interface to. All right, I
was fairly rare in knowing that range of systems, but they should
have employed someone (even as a consultant) who did.

the earlier generation of that was things like fort knox and the
(801) iliad chip from the late 70s ... where 801/iliad chips were to
replace the large variety of internal microprocessors. the follow-on
to the (370) 4341, the 4381, started out to be an Iliad 801
microprocessor ... as/400 started out being iliad ... lots of other
internal (cisc) microprocessors were going to migrate to 801 (as a
common architecture across a large number of different
microprocessors).

For various reasons these floundered ... and the 4381 went with a
(traditional) cisc microprocessor (although it was getting closer &
closer to executing 370 instructions natively ... instead of microcode
emulation). as/400 had a crash program for a cisc processor (instead
of 801).

part of the iliad 370 effort included looking at JIT translation of
370 to native 801 as a boost to traditional emulation (something
similar can be found in some of the current generation of 370
emulators that run on i86 platforms).

scottyt.harder@GMAIL.COM (Scott T. Harder) writes:
Very cool. Funny, though... I remember first logging onto TSO on what
I thought was a 3082 (although I didn't know what even DASD was at the
time). Then, when I finally got my hands on a mainframe in MCO, it
was a 3084. This slideshow shows a 3083, which I don't have any
recollection of. Looks like a 3084, from what I can remember, though.

308x were going to be multiprocessor only ... the 3081 was a
two-processor machine, and the 3084 was a pair of 3081s ganged
together for a four-processor machine.

traditional 370 cache machines slowed the processing cycle down by
10% to allow for cross-cache chatter in a two-processor configuration
(and four-processor was even slower) ... that is in addition to the
actual cache processing overhead of handling cross-cache signals
(two-way meant that there were signals from one other cache; four-way
resulted in signals from three other caches).

TPF/ACP was an important market segment at the time ... but didn't
have SMP (tightly-coupled, shared memory, multiprocessor) support. the
3083 was a 3081 with some of the hardware removed for a single
processor, and the single machine ran nearly 15% faster (cross-cache
chatter slowdown disabled). Prior to the 3083, TPF/ACP operation on a
3081 was under vm/370 (handling the multiprocessor hardware) providing
multiple (single processor) virtual machines for the TPF/ACP operation
(TPF/ACP did have loosely-coupled, cluster support ... so the multiple
TPF/ACP virtual machines could be coordinated ... as opposed to, say,
production vis-a-vis test) ... although there were some TPF/ACP 3081
operations where the 2nd processor would sit mostly idle. the 3083 was
primarily introduced to address the TPF/ACP market.
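
back-of-envelope version of the arithmetic (the cache-handling
overhead figure is an assumption for illustration):

#include <stdio.h>

int main(void) {
    double cycle_stretch = 0.10; /* cycle slowed 10% for cross-cache chatter */
    double cache_ovhd    = 0.04; /* assumed cost of processing cache signals */
    /* throughput of one 3081 engine relative to a chatter-free uni */
    double mp_engine = (1.0 / (1.0 + cycle_stretch)) * (1.0 - cache_ovhd);
    printf("3083-style uni vs one 3081 engine: %.0f%% faster\n",
           (1.0 / mp_engine - 1.0) * 100);
    printf("2-way aggregate vs chatter-free uni: %.2fx\n",
           2.0 * mp_engine);
    return 0;
}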

prior to the 308x, a 370 multiprocessor had fully replicated hardware
... and a two-processor system could be split and run as two
independent single processors. for the 3081, the term "dyadic" was
introduced to differentiate that while it had two execution processors
... all the hardware was not fully duplicated, so a 3081 couldn't be
split and operated as two independent uniprocessors (although a
4-processor 3084 could be split into two 3081s).

3082 was the "service processor". One of the issues was that field
engineering required a "boot-strap" diagnostic process ... which
started with scoping failed components and going up from there. TCMs
in the 308x were not "scope'able" ... so things started with a service
processor that was simpler technology and was scope'able ... a
"working" service processor then had all sorts of diagnostic
instrumentation into the TCMs.

There were lots of issues with developing a roll-your-own operating
system and diagnostic applications for the service processor in the
308x ... so for the 3090 ... it was decided to go with a standard
(low-end, "scope'able") 370 for the service processor. The 3090 effort
started out with a 4331 running a customized version of vm370 release
six and all the service screens implemented in cms ios3270. by the
time the 3090 shipped, the "service processor" had been upgraded to a
pair of 4361s (effectively replicated units in lieu of having to scope
the service processor as part of the diagnostic process).

"segment protection" had been part of the original 370 virtual memory
architecture and had been implemented on several machines and was
supported in (pre-release) vm370.

when the engineering retrofit of virtual memory hardware support to
the 370/165 started running into schedule delays, there was a decision
to eliminate several parts of the full 370 virtual memory architecture
(to gain back six months in the schedule). this required that other
machines that had already implemented the full 370 virtual memory
architecture remove the dropped features ... and that vm370 come up
with a real kludge/hack to "protect" cms shared segments ... w/o
actually having hardware segment protection support.

The coming death of all RISC chips

Anne & Lynn Wheeler <lynn@garlic.com> writes:
in the wake of killing off iliad related projects ... some number of
801/risc engineers left the company ... and showed up at emerging risc
efforts at other vendors.

when future system was killed, there was a mad rush to get stuff back
into the 370 product pipeline ... and basically a 308x & 370-xa effort
was kicked off (expected to take 6-8 yrs) ... in parallel with a crash
303x Q&D stop-gap effort until 308x.

the 303x channel director was basically a 158-3 processor engine with
just the integrated channel microcode (the 370 microcode removed)

the 3031 was a 158-3 with the integrated channel microcode removed
(only the 370 microcode) and reconfigured to work with the 303x
channel director (i.e. the 158-3 basically multiplexed the integrated
channel microcode and the 370 microcode on a single engine; the 3031
had two processor engines, one dedicated to the integrated channel
microcode and one dedicated to the 370 microcode)
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH3031.html

the 3033 started out as the 168-3 wiring diagram mapped to faster
chip technology ... originally only going to be 20% faster than the
168-3. the chips were 20% faster ... and also had about ten times the
circuits per chip ... but using the 168-3 wiring diagrams would have
left all the additional circuits unused. during 3033 development,
there was some critical-path redesign that took advantage of the
higher on-chip circuit density, resulting in the 3033 being closer to
50% faster than the 168-3.
http://www-03.ibm.com/ibm/history/exhibits/3033/3033_album.html

the initial 3081 was the 3081D, where each processor was about five
mips ... not a whole lot faster than a two-processor 3033. fairly
quickly after that, the 3081K shipped with each processor about seven
mips (14mips aggregate).

issue was that disk thruputs weren't keeping pace with the rest of the
system infrastructure ... i.e. processing & memory performance was
increasing faster than disk performance.

I had started pontificating in the 70s about the growing performance
mismatch. what was happening was that increasing amounts of electronic
storage (starting with real memory on the processor and then disk
controller cache) was being used to cache disk information to compensate
for the increasing disk thruput bottleneck.

this references comparing the 360/67 to the 3081k (separated by
almost 15 yrs) running a similar (virtual machine) CMS workload
... and claiming that relative system disk thruput had declined by a
factor of ten over the period.
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
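
back-of-envelope version of the claim (illustrative numbers ... not
the study's actual figures):

#include <stdio.h>

int main(void) {
    double mips_36067 = 0.3, mips_3081k = 14.0; /* approx aggregate mips */
    double io_old = 30.0, io_new = 120.0;       /* assumed disk ops/sec  */
    double cpu_growth  = mips_3081k / mips_36067;
    double disk_growth = io_new / io_old;
    printf("relative disk thruput declined ~%.0fx\n",
           cpu_growth / disk_growth);
    return 0;
}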

some disk division executives took some offense at the claims and
assigned the division performance group to refute my statements. after
a few weeks, the group came back and effectively said that I had
slightly understated the problem. That study eventually turned into a
SHARE (63) presentation (B874) recommending how to configure/manage
disks to improve system thruput. old post with references:
http://www.garlic.com/~lynn/2006f.html#3 using 3390 mod-9s
http://www.garlic.com/~lynn/2006o.html#68 DASD Response Time (on antique 3390?)

in any case, it was starting to become a real issue in the 3033
time-frame. it was possible to configure vm clusters of 4341s with
higher aggregate thruput than a 3033 at a lower cost. furthermore,
each 4341 could have 16mbytes (and six i/o channels) compared to a
3033's 16mbytes (and 16 i/o channels).

to somewhat address/compensate ... there was a hardware hack to allow
a 3033 to be configured with 32mbytes of real storage (even though the
processor was restricted to 16mbyte real & virtual addressing).

the hack involved

1) using (31bit) IDALs to be able to do I/O for real addresses above
the 16mbyte "line" (most importantly being able to read/write pages
above the line)

2) the page table entry was defined as 16 bits: a 12-bit page number
(4096 4096-byte pages, or 16mbytes), 2 defined bits, and 2 undefined
bits. the two undefined bits were re-allocated for prepending to the
page number (allowing up to 16384 4096-byte pages, or up to 64mbytes
of real storage, but still a max. of 16mbytes per virtual address
space).

...

lots of things would require virtual pages that were above the
(16mbyte) line to be brought into the first 16mbytes of real storage.
initially there was a definition where the software would write the
(above the line) virtual page out to disk and then read it back into
real storage (below the line). I generated some example code that
involved a special virtual address space and fiddling the real page
numbers in two page table entries ... allowing 4k of real storage
above the line to be "copied/moved" to 4k of real storage below the
line (avoiding having to write to disk and read back in).
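
a sketch of the extended page table entry and the two-PTE "move"
(field layout illustrative ... not the actual hardware definition):

#include <stdint.h>

#define PTE_PFN12  0xFFF0u   /* architected 12-bit page frame number */
#define PTE_EXT2   0x0003u   /* 2 formerly-undefined bits: PFN 12-13 */

static unsigned pte_frame(uint16_t pte) {      /* 14-bit frame number */
    return ((pte & PTE_PFN12) >> 4) | ((pte & PTE_EXT2) << 12);
}

static uint16_t pte_set_frame(uint16_t pte, unsigned f) {
    return (uint16_t)((pte & ~(PTE_PFN12 | PTE_EXT2))
                      | ((f & 0xFFFu) << 4)
                      | ((f >> 12) & 0x3u));
}

/* "move" a page above the 16m line to a frame below it by copying
   the 4k contents and exchanging the frame numbers in the two PTEs
   -- avoiding the write-to-disk/read-back sequence */
void move_below_line(uint16_t *pte_hi, uint16_t *pte_lo) {
    unsigned hi = pte_frame(*pte_hi), lo = pte_frame(*pte_lo);
    /* (the 4k of storage would be copied between the frames here) */
    *pte_hi = pte_set_frame(*pte_hi, lo);
    *pte_lo = pte_set_frame(*pte_lo, hi);
}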

The internal network was larger than the arpanet/internet from just
about the beginning until possibly late '85 or early '86. The internal
network also required all links leaving physical corporate property to
be encrypted. Somebody commented in the '85 time-frame that the
internal network had over half of all the link encryptors in the
world.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

This was not bad for 56kbit links ... but it started to become much
more of a problem when running at (full-duplex) T1 (1.5mbits/sec in
each direction) and higher speeds.

for other drift, one friday (in that time-frame), somebody from the
communication group sent out an announcement for a new "networking"
discussion conference on the internal network ... which included the
following definition:

from a virtual paging standpoint ... all space (below and above the
16mbyte line) was available for paging. there was some additional
overhead that could be involved when a virtual page above the line had
to be brought down below the line.

the big problem was that some of the page replacement algorithm
implementations messed up in how they treated below-the-line and
above-the-line pages (reducing the effectiveness of the above-the-line
space). all other things being equal, a virtual page above the line
and a virtual page below the line should have equal probability of
being replaced (unfortunately, because of some of the implementation
glitches ... this wasn't always the case ... resulting in non-optimal
page replacement and less effective thruput).
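
a minimal clock/reference-bit sketch of the point (not the actual
vm370 code) ... a single scan over all frames gives above-the-line and
below-the-line pages equal replacement probability:

#include <stdbool.h>

#define NFRAMES 16384            /* 64mbytes of 4k frames, below + above */

static struct frame { bool referenced; } frames[NFRAMES];
static unsigned hand;            /* one clock hand over ALL frames */

unsigned select_replacement(void) {
    for (;;) {
        unsigned victim = hand;
        hand = (hand + 1) % NFRAMES;
        if (!frames[victim].referenced)
            return victim;                 /* not recently used */
        frames[victim].referenced = false; /* second chance     */
    }
}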

i had done a lot of work on how virtual page replacement algorithms
should work and on maintaining an optimal selection strategy ... as an
undergraduate in the 60s working on (virtual machine) cp67.

there were sometimes relatively trivial-appearing code changes that
actually resulted in a significant difference in how effectively the
replacement strategy worked. There was some amount of this in the
original VS2/SVS implementation that continued well thru the MVS
releases ... where I got to use the "I told you so" line.

clicking on an RFC number brings up that summary in the lower frame;
selecting the ".txt" field in the summary fetches the actual RFC.

the original RADIUS implementation was authentication for a
particular vendor's modem pool manager. Since then, RADIUS has become
an internet standard and RADIUS servers have been extended to handle
authentication, authorization, and accounting. They are found in lots
of ISP and webhosting operations. Basically some sort of userid is
supplied and the appropriate record for that userid is retrieved. That
userid record then has information regarding at least authentication,
but may also contain authorization/permissions as well as accounting
information.
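
a sketch of the kind of userid record involved (field names
hypothetical):

#include <string.h>
#include <stddef.h>

struct aaa_record {
    const char   *userid;
    const char   *auth;         /* authentication, e.g. password hash */
    unsigned      permissions;  /* authorization bits                 */
    unsigned long octets;       /* accounting                         */
};

/* retrieve the record keyed by the supplied userid (NULL = reject) */
struct aaa_record *lookup(struct aaa_record *db, size_t n,
                          const char *userid) {
    for (size_t i = 0; i < n; i++)
        if (strcmp(db[i].userid, userid) == 0)
            return &db[i];
    return NULL;
}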

from above:
2000 Commodity Futures Modernization Act that exempted
over-the-counter derivatives like credit-default swaps from regulation
by the Commodity Futures Trading Commission. Credit-default swaps took
down AIG, which has cost the U.S. $150 billion thus far.

... snip ...

Undo previous legislation as well as hopefully improve visibility
and transparency.

from above:
Now it seems to be recognised that inflation targeting is not
enough. Given the explicit government guarantee behind the banking
system, central banks need to monitor both financial stability and
asset prices. At the same time, some central banks have adopted (via
quantitative easing) a policy of creating money to boost markets that
also has the convenient side-effect of funding budget deficits. That
is just what opponents of fiat money feared would happen in the long
run.

from above:
The bonanza is intentional. Governments and regulators want the banks
to make profits so that they regain their health faster after roughly
$3 trillion of write-downs. It is part of the monstrous bargain that
bankers have extracted from the state (see our special report this
week).

... snip ...

the article also mentions that the two evils of excessive risk and
excessive reward can poison capitalism and ravage the country .... as
in other articles ... it was the top business executives ... intent on
excessive reward ... who overruled the risk managers (sacrificing the
institution and the economy for personal gain).